I am referring to the documentation at http://sualeh.github.io/SchemaCrawler/how-to.html#api, but I never get what I want: the collection of tables returned is always empty.
final Connection connection = ...; // obtain a MySQL connection
final SchemaCrawlerOptions options = new SchemaCrawlerOptions();
options.setSchemaInfoLevel(SchemaInfoLevelBuilder.maximum());
options.setTableInclusionRule(new IncludeAll());
options.setTableNamePattern("*");
final Catalog catalog = SchemaCrawlerUtility.getCatalog(connection, options);
for (final Schema schema : catalog.getSchemas())
{
  Collection<Table> tables = catalog.getTables(schema);
  // The size of tables is always 0
  System.out.println(tables);
}
You should not set the table name pattern, so please remove the following line:
options.setTableNamePattern("*");
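For reference, here is the same setup with that line removed (a minimal sketch reusing the calls from your own snippet; the rest of your code can stay as it is):

final SchemaCrawlerOptions options = new SchemaCrawlerOptions();
options.setSchemaInfoLevel(SchemaInfoLevelBuilder.maximum());
options.setTableInclusionRule(new IncludeAll());
// no setTableNamePattern(...) call - the inclusion rule already matches all tables

final Catalog catalog = SchemaCrawlerUtility.getCatalog(connection, options);
for (final Schema schema : catalog.getSchemas())
{
  System.out.println(catalog.getTables(schema));
}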
Sualeh Fatehi, SchemaCrawler
I am testing Couchbase, and I am making a very simple query:
public async Task SelectRandomJobs(int nbr)
{
    IBucket bucket = await cluster.BucketAsync("myBucket");
    IScope scope = bucket.Scope("myScope");
    IQueryResult<JObject> result = await scope.QueryAsync<JObject>("SELECT * FROM myCollection WHERE Id = {id}");
    // The Metrics.* values are always the defaults
    Console.WriteLine(result.MetaData.Metrics.ElaspedTime);
}
Here are the values I get: ElaspedTime (note the misspelled property name) and ExecutionTime are null, and the other metric fields hold default values. I was expecting ElaspedTime and ExecutionTime to be non-null. There is also an AnalyticsQueryAsync method, but that did not work for me either (error 24045).
Why are those values null?
-- UPDATE --
I followed the advice in Eric's answer below, but I got the same results.
You will need to enable metrics for this query. I have provided a code sample below with two possible ways of doing this. It is covered in our docs, but it could perhaps be easier to find or have better examples; that is something I will investigate further to see if we can make it clearer in future editions of the docs.
I have used the travel-sample dataset and tried to set the code up similarly to your example so that it will be easy for you to adapt.
As for why the times are null by default and the other fields are zero, that seems to just be a design decision for this class.
About the misspelling, we have filed a ticket to get the spelling corrected. Thank you for pointing that out.
using System;
using System.Threading.Tasks;
using Couchbase;
using Couchbase.Query;

namespace _3x_simple
{
    class Program
    {
        static async Task Main(string[] args)
        {
            var cluster = await Cluster.ConnectAsync("couchbase://localhost", "Administrator", "password");
            var bucket = await cluster.BucketAsync("travel-sample");
            var myScope = bucket.Scope("inventory");

            // Option 1: query via the scope, passing QueryOptions explicitly
            var options = new QueryOptions().Metrics(true);
            var queryResult = await myScope.QueryAsync<dynamic>("SELECT * FROM airline LIMIT 10;", options);

            // Option 2: query via the cluster, configuring the options inline
            //var queryResult = await cluster.QueryAsync<dynamic>("SELECT * FROM `travel-sample`.inventory.airline LIMIT 10;", options => options.Metrics(true));

            Console.WriteLine($"Execution time before read: {queryResult.MetaData.Metrics.ExecutionTime}");

            await foreach (var row in queryResult)
            {
                Console.WriteLine(row);
            }

            Console.WriteLine($"Execution time after read: {queryResult.MetaData.Metrics.ExecutionTime}");
            Console.WriteLine("Press any key to exit...");
            Console.Read();
        }
    }
}
You won't see the execution time until after the results have been read. You are seeing default values for those fields because you are reading that information at the wrong time/place, given that the query result is streamed asynchronously.
I am trying to move data from a SPARQL endpoint into a JSONObject, using RDF4J.
The RDF4J documentation does not address this directly (there is some information about using endpoints, less about converting to JSON, and nothing where these two cases meet).
So far I have:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
Map<String, String> headers = new HashMap<String, String>();
headers.put("Accept", "SPARQL/JSON");
repo.setAdditionalHttpHeaders(headers);

try (RepositoryConnection conn = repo.getConnection())
{
    String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
    GraphQuery query = conn.prepareGraphQuery(queryString);
    debug("Mark 2");
    try (GraphQueryResult result = query.evaluate())
This fails with "Server responded with an unsupported file format: application/sparql-results+json".
I figured a SPARQLGraphQuery should take the place of GraphQuery, but RepositoryConnection does not have a relevant prepare method.
If I exchange
try (RepositoryConnection conn = repo.getConnection())
with
try (SPARQLConnection conn = (SPARQLConnection)repo.getConnection())
I run into the problem that SPARQLConnection does not generate a SPARQLGraphQuery. The closest I can get is:
SPARQLGraphQuery query = (SPARQLGraphQuery)conn.prepareQuery(QueryLanguage.SPARQL, queryString);
which gives a runtime error because these types cannot be cast to each other.
I do not know how to proceed from here. Any help or advice is much appreciated. Thank you.
this fails because "Server responded with an unsupported file format: application/sparql-results+json"
In RDF4J, SPARQL SELECT queries are tuple queries, so named because each result is a set of bindings, which are tuples of the form (name, value). In contrast, CONSTRUCT (and DESCRIBE) queries are graph queries, so called because their result is a graph, that is, a collection of RDF statements.
Furthermore, setting additional headers for the response format, as you have done here, is not necessary (except in rare circumstances); the RDF4J client handles this for you automatically, based on the registered set of parsers.
So, in short, simplify your code as follows:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);

try (RepositoryConnection conn = repo.getConnection()) {
    String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
    TupleQuery query = conn.prepareTupleQuery(queryString);
    debug("Mark 2");
    try (TupleQueryResult result = query.evaluate()) {
        ...
    }
}
If you want to write the result of the query in JSON format, you could use a TupleQueryResultHandler, for example the SPARQLResultsJSONWriter, as follows:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);

try (RepositoryConnection conn = repo.getConnection()) {
    String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
    TupleQuery query = conn.prepareTupleQuery(queryString);
    query.evaluate(new SPARQLResultsJSONWriter(System.out));
}
This will write the result of the query (in this example to standard output) using the SPARQL Query Results JSON format. If you have a non-standard format in mind, you could of course also create your own TupleQueryResultHandler implementation.
For more details on the various ways in which you can process the result (including iterating, streaming, adding to a List, or just directly sending to a result handler), see the documentation on querying a repository. As an aside, the javadoc on the RDF4J APIs is pretty extensive too, so if your Java editing environment has support for displaying that, I'd advise you to make use of it.
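Since the original goal was to end up with a JSONObject, one possible bridge (a sketch, assuming the org.json library is on your classpath; any other JSON library would work the same way) is to let SPARQLResultsJSONWriter serialize the result into an in-memory buffer and then parse that buffer:

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import org.eclipse.rdf4j.query.TupleQuery;
import org.eclipse.rdf4j.query.resultio.sparqljson.SPARQLResultsJSONWriter;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.sparql.SPARQLRepository;
import org.json.JSONObject;

SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);

try (RepositoryConnection conn = repo.getConnection()) {
    String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
    TupleQuery query = conn.prepareTupleQuery(queryString);

    // Serialize the SELECT result as SPARQL/JSON into an in-memory buffer...
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    query.evaluate(new SPARQLResultsJSONWriter(buffer));

    // ...and parse that buffer into a JSONObject (it will have "head" and "results" keys).
    JSONObject json = new JSONObject(new String(buffer.toByteArray(), StandardCharsets.UTF_8));
}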
I am trying to implement a simple search on a table that can live in any kind of database. The following query works on most databases, but I cannot find a solution that also works on MySQL.
The tables in my database are generated by the Active Objects framework, so I cannot change the names or configuration of those instances.
Here is the query that works fine on all databases except MySQL:
select * from "AO_69D057_FILTER" where "SHARED" = true AND "CONTAINS_PROJECT" = true AND UPPER("FILTER_NAME") like UPPER('%pr%')
MySQL is not able to use the table name in double quotes for some reason. If I use the unquoted table name it works on MySQL, but not on Postgres: because the name is unquoted, Postgres folds it to lower case, while AO generates the table names in upper case.
I also tried to use an alias, but that cannot work because of the evaluation order of the statement.
Any suggestions on how to get rid of this table name problem?
By default, MySQL treats double quotes as string delimiters, not as identifier (table/column name) quotes; it uses backticks for identifiers.
You can change that with:
SET SQL_MODE=ANSI_QUOTES;
Here is the documentation about it:
http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html
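For illustration, a minimal JDBC sketch (the connection details are placeholders, and obtaining the connection will look different under Active Objects): enabling ANSI_QUOTES for the session lets the double-quoted query from the question run unchanged on MySQL.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

String url = "jdbc:mysql://localhost:3306/mydb"; // placeholder connection details
try (Connection conn = DriverManager.getConnection(url, "user", "password");
     Statement stmt = conn.createStatement()) {
    // Make MySQL treat double quotes as identifier quotes, as Postgres does
    stmt.execute("SET SESSION sql_mode = 'ANSI_QUOTES'");
    String sql = "select * from \"AO_69D057_FILTER\" "
               + "where \"SHARED\" = true AND \"CONTAINS_PROJECT\" = true "
               + "AND UPPER(\"FILTER_NAME\") like UPPER('%pr%')";
    try (ResultSet rs = stmt.executeQuery(sql)) {
        while (rs.next()) {
            // process each matching row
        }
    }
}

With Connector/J you should also be able to set this per connection through the JDBC URL by appending sessionVariables=sql_mode=ANSI_QUOTES, which avoids the extra statement.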
I had the same problem. I select the query according to the exception I get: on the first database call I try without quotes, and if that fails I retry with quotes. I then set the useQueryWithQuotes flag accordingly, so that later calls do not need to go through the exception handling again. Below is the code snippet I am using.
private Boolean useQueryWithQuotes = null;
private final String queryWithQuotes = "\"OWNER\"=? or \"PRIVATE\"=?";
private final String queryWithoutQuotes = "OWNER=? or PRIVATE=?";

public Response getReports() {
    List<ReportEntity> reports = null;
    if (useQueryWithQuotes == null) {
        synchronized (this) {
            try {
                reports = new ArrayList<ReportEntity>(Arrays.asList(
                        ao.find(ReportEntity.class, Query.select().where(queryWithoutQuotes, getUserKey(), false))));
                useQueryWithQuotes = false;
            } catch (net.java.ao.ActiveObjectsException e) {
                log("exception:" + e);
                log("trying query with quotes");
                reports = new ArrayList<ReportEntity>(Arrays.asList(
                        ao.find(ReportEntity.class, queryWithQuotes, getUserKey(), false)));
                useQueryWithQuotes = true;
            }
        }
    } else {
        String query = useQueryWithQuotes ? queryWithQuotes : queryWithoutQuotes;
        reports = new ArrayList<ReportEntity>(Arrays.asList(ao.find(ReportEntity.class, query, getUserKey(), false)));
    }
    ...
}
I know that we can query or create a MySQL table from Spark SQL with the commands below.
val data = sqlContext.read.jdbc(urlstring, tablename, properties)
data.write.format("com.databricks.spark.csv").save(result_location)
val dataframe = sqlContext.read.json("users.json")
dataframe.write.jdbc(urlstring, table, properties)
Similarly, is there any way to drop a table?
You can try a basic DROP operation with the JDBC driver:
val DB_URL: String = ???
val USER: String = ???
val PASS: String = ???

def dropTable(tableName: String) = {
  import java.sql._

  var conn: Connection = null
  var stmt: Statement = null
  try {
    Class.forName("com.mysql.jdbc.Driver")
    println("Connecting to a selected database...")
    conn = DriverManager.getConnection(DB_URL, USER, PASS)
    println("Connected database successfully...")

    println("Deleting table in given database...")
    stmt = conn.createStatement()
    val sql: String = s"DROP TABLE ${tableName}"
    stmt.executeUpdate(sql)
    println(s"Table ${tableName} deleted in given database...")
  } catch {
    case e: Exception => println("exception caught: " + e)
  } finally {
    ???
  }
}

dropTable("test")
You could also do this with Spark using JDBCUtils, but the plain JDBC approach above is quite straightforward.
You can also have a look at the write mode method:
dataframe.write.mode("overwrite").jdbc(urlstring, table, properties)
Overwrite mode means that when saving a DataFrame to a data source, if data/table already exists, existing data is expected to be overwritten by the contents of the DataFrame.
from: https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html#save-modes
Also, you can set the truncate option in the write properties if you don't want the table definition to be dropped as well (see the sketch after the quote below).
This is a JDBC writer related option. When SaveMode.Overwrite is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient, and prevents the table metadata (e.g., indices) from being removed. However, it will not work in some cases, such as when the new data has a different schema. It defaults to false. This option applies only to writing.
from: https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
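Putting the two together, here is a sketch using Spark's Java API (the Scala equivalent is analogous; urlstring, table and the credentials are assumed to come from your own configuration):

import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

Properties properties = new Properties();
properties.put("user", "...");      // your MySQL user
properties.put("password", "...");  // your MySQL password

Dataset<Row> dataframe = ...;       // whatever DataFrame you want to write

dataframe.write()
    .mode(SaveMode.Overwrite)       // replace the existing contents
    .option("truncate", "true")     // truncate instead of dropping and recreating the table
    .jdbc(urlstring, table, properties);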
I'm trying to insert new data into the DB when a user scans a barcode into a field. When I hit save on the screen, it fails with a "converting circular structure to JSON" error.
var report = myapp.activeDataWorkspace.BlanccoData.BMCReports.addNew();
report.c_Date = Date.now();
report.IsScannedReport = true;
if (contentItem.screen.ScanSSN == true) {
    report.SSN = contentItem.value;
}
var system = myapp.activeDataWorkspace.BlanccoData.BMCSystemInfo.addNew();
// system.Report = report;
system.Barcode = contentItem.screen.Barcode;
I think the commented-out line is what throws the exception, but I need that reference.
Thanks.
Have you considered that you may have a circular relationship in your database that is reflected in your DataSource?