Apache Drill Schema Support for SQL Server

I want to know whether Apache Drill supports only the dbo schema, or whether it supports all schemas.
I am running Windows 8 with the latest version of Drill (1.5) in embedded mode.
I am querying through the same storage plugin in both cases.
My storage plugin (for SQL Server):
{
"type" : "jdbc",
"driver" : "com.microsoft.sqlserver.jdbc.SQLServerDriver",
"url" : "jdbc:sqlserver://<servername>;databaseName=<databasename>",
"username" : "<username>",
"password" : "<****>",
"enabled" : true
}
This plugin exposes both the dbo and core schemas (both are the same type, with no special permissions). Queries work for the dbo schema, but the core schema is not working.
DBO query:
select * from SqlServer.dbo.Attribute; -- this works
Core query:
select * from SqlServer.core.Users; -- this does not work
My question is whether Drill supports only the dbo schema or all schemas.

select * from <StoragePluginName>.<databaseName>.<schemaName>.<tableName>;
Ex:
select * from SqlServer.Test.core.Category;
This query works for all user-created schemas. For the dbo (default) schema, however,
the database name is not required; if you include the database name while querying
through the dbo schema, it will throw an error.
But this is not a good solution, because every time we have to check which
kind of schema it is (default or user-created): if it is dbo (the default
schema), the database name must be omitted from the query, and if it is
core (a user-created schema), the database name has to follow the storage
plugin name.
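The rule described above can be captured in a small helper. This is a hypothetical sketch (the function name and arguments are illustrative, not Drill API): it builds the correctly qualified table path depending on whether the schema is the default dbo one.

```python
def qualify(plugin, database, schema, table):
    """Build a Drill table path following the rule above: the default dbo
    schema must omit the database name, while user-created schemas need
    the database name right after the storage plugin name."""
    if schema.lower() == "dbo":
        return "%s.%s.%s" % (plugin, schema, table)
    return "%s.%s.%s.%s" % (plugin, database, schema, table)

# qualify("SqlServer", "Test", "dbo", "Attribute") -> "SqlServer.dbo.Attribute"
# qualify("SqlServer", "Test", "core", "Category") -> "SqlServer.Test.core.Category"
```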

Related

Is there any way to create a DB table with a singular name?

development environment
Language : Go 1.9.2
DB : MySQL
Framework : not decided (maybe I'll use Revel)
situation
I already have a DB with singular-name tables, like "user" and "page". It can't be changed.
Now I am developing a new application using this DB.
I created a simple application to connect to this DB and tried to auto-migrate using gorm (https://github.com/jinzhu/gorm).
I defined some models, like "user", matching the existing DB table names, and ran auto-migrate just as written in http://jinzhu.me/gorm/database.html#connecting-to-a-database:
db.Set("gorm:table_options", "ENGINE=InnoDB").AutoMigrate(&User{})
Then a new table "users" was created.
Question
Can I create singular-name tables, like "user", with auto-migrate or similar tools?
Using gorm is not required, so I'll use another ORM library if it works.
I hope someone can help me!
Implement the function TableName on the struct User to return a custom name for the table. The ORM uses this function to get the table name for all DB operations.
func (user *User) TableName() string {
return "user"
}
Refer docs here: http://jinzhu.me/gorm/models.html#table-name-is-the-pluralized-version-of-struct-name
Alternatively, you can set the db instance to use singular table names, like this:
db.SingularTable(true)

Apache Drill | Get table list from REST API

Can anyone explain how to get the table list via the Drill REST API?
I have tried show databases -> use mysql.db -> show tables.
I am able to get the DB list, but not the table list from the respective DB.
Thanks in advance.
EDIT FROM COMMENT
My JSON request was like this at first:
{ "queryType" : "SQL", "query" : "USE MYSQL.dbtest" }
I got the default-schema result, then I sent
{ "queryType" : "SQL", "query" : "SHOW TABLES" }
and then I got an exception like this:
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: No default schema selected. Select a schema using 'USE schema' command [Error Id: e4e2d2f4-6f08-4ef9-9be3-ba3bbe2d20d9 on spark-slave:31010]
It is very hard to help with so little information. Since you provided no details about your JSON request strings, I have to make assumptions.
First, https://drill.apache.org/docs/rest-api/#query shows that you can send a query to Drill. This might be what you have used.
Second, there is no command show table list; this wouldn't work even directly in Drill. The correct command, also to be found in the documentation under https://drill.apache.org/docs/show-tables/, is SHOW TABLES;.
If you need any further help, please add more information.
EDIT
It seems that Drill is not "remembering" your USE command; most likely each REST request opens a new session. To avoid needing USE, you could try including the database identifier in your SHOW statement:
{ "queryType" : "SQL", "query" : "SHOW MYSQL.dbtest.TABLES" }
I am not sure if that is possible, though. What should work is using a SELECT statement:
{ "queryType" : "SQL", "query" : "SELECT * FROM MYSQL.dbtest.<a_table_name>" }
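Since every REST call runs in its own session, one way around the USE problem is to send a single query with fully qualified names. The following is a minimal Python sketch, assuming the default embedded-mode REST endpoint http://localhost:8047/query.json (documented under the REST API link above); the exact TABLE_SCHEMA string may differ, so compare it against your SHOW DATABASES output.

```python
import json
import urllib.request

DRILL_URL = "http://localhost:8047/query.json"  # default Drill REST endpoint

def build_payload(sql):
    """Build the JSON body that Drill's /query.json endpoint expects."""
    return {"queryType": "SQL", "query": sql}

def run_query(sql):
    # Each REST call is a separate session, so a prior USE has no effect;
    # fully qualify all names inside the query itself.
    body = json.dumps(build_payload(sql)).encode("utf-8")
    req = urllib.request.Request(
        DRILL_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires a running Drill instance):
# run_query("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.`TABLES` "
#           "WHERE TABLE_SCHEMA = 'mysql.dbtest'")
```

Listing tables through INFORMATION_SCHEMA avoids SHOW TABLES entirely, so no default schema is needed.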

Schema name in Create index statement while generating datanucleus JDO schema

I am trying to generate a schema with the DataNucleus SchemaTool for a MySQL database that will store countries and states. Here is a sample of that code:
@PersistenceCapable
public class State {
    private String shortCode;
    private String fullName;

    @Column(allowsNull = "true", name = "country_id")
    private Country countryId;
}
The following are my schemaGeneration properties:
datanucleus.ConnectionDriverName=com.mysql.jdbc.Driver
datanucleus.ConnectionURL=jdbc:mysql://localhost:3306/geog
datanucleus.ConnectionUserName=geog
datanucleus.ConnectionPassword=geogPass
datanucleus.schema.validateTables=true
datanucleus.mapping.Catalog=geog
datanucleus.mapping.Schema=geog
In my Country class as well, I have a mapping from a Collection, so that the FK reference for States to the Country table is built correctly.
But there is one problem. In the SQL script generated, the Index part has the Schema name as part of the index name itself, which fails the entire script. Here is that piece:
CREATE INDEX `GEOG`.`MST_STATE_N49` ON `GEOG`.`MST_STATE` (`COUNTRY_ID`);
Notice the schema name in the GEOG.MST_STATE_N49 part of the index name.
I tried setting the schema and catalog names to blank, but that yields ''.MST_STATE_N49, which still fails.
I am using MySQL Server 5.7.17 with version 5.1.42 of the JDBC driver (yes, not the latest) on DataNucleus JDO 3.1.
Any hints on how I can get rid of the schema/catalog name in the generated DDL?
Why are you putting "datanucleus.mapping.Schema" when using MySQL? MySQL doesn't use schemas, last I looked. Similarly, the "datanucleus.mapping.Catalog" is effectively defined by your URL: MySQL only actually supports the JDBC catalog, mapping it on to "database", as per this post. Since DataNucleus simply uses the JDBC driver, the catalog is the only useful input.
Consequently, removing both the schema and catalog properties will default to the right place.
After the comment above from Neil Stockton, I commented out both properties and it worked. Effectively, this is what is needed:
datanucleus.ConnectionDriverName=com.mysql.jdbc.Driver
datanucleus.ConnectionURL=jdbc:mysql://localhost:3306/geog
datanucleus.ConnectionUserName=geog
datanucleus.ConnectionPassword=geogPass
datanucleus.schema.validateTables=true
Hopefully, I can get an answer to the other question (pt. 2 in my reply comment above) as well.

'Relation does not exist' error after transferring to PostgreSQL

I have transferred my project from MySQL to PostgreSQL and tried to drop a column as a result of a previous issue (Integer error transferring from MySQL to PostgreSQL), because the error didn't disappear even after I removed the problematic column from models.py and saved.
Tried both with and without quotes.
ALTER TABLE "UserProfile" DROP COLUMN how_many_new_notifications;
Or:
ALTER TABLE UserProfile DROP COLUMN how_many_new_notifications;
Getting the following:
ERROR: relation "UserProfile" does not exist
Here's a model, if helps:
class UserProfile(models.Model):
    user = models.OneToOneField(User)
    how_many_new_notifications = models.IntegerField(null=True, default=0)

User.profile = property(lambda u: UserProfile.objects.get_or_create(user=u)[0])
I suppose it might have something to do with mixed case, but I have found no solution among all the similar questions.
Yes, PostgreSQL is a case-aware database, but Django is smart enough to know that: it converts all field names, and generally the model name, to lower-case identifiers. However, the real problem here is that your table name will be prefixed by the app name. Django table names generally look like:
<appname>_<modelname>
You can find out what exactly it is by:
from myapp.models import UserProfile
print (UserProfile._meta.db_table)
Obviously this needs to be typed into the Django shell, which is invoked by ./manage.py shell. The result of this print statement is what you should use in your query.
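Putting the two steps together, you can derive the correct ALTER TABLE statement from the model's actual table name instead of guessing it. This is a minimal sketch; "myapp" and the helper function are illustrative, not Django API.

```python
def drop_column_sql(db_table, column):
    # Django creates lower-case table names, and PostgreSQL folds unquoted
    # identifiers to lower case, so no quoting is needed here.
    return "ALTER TABLE %s DROP COLUMN %s;" % (db_table, column)

# Inside ./manage.py shell:
# from myapp.models import UserProfile
# print(drop_column_sql(UserProfile._meta.db_table,
#                       "how_many_new_notifications"))
```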
Client: DataGrip
Database engine: PostgreSQL
For me, opening a new console worked, because apparently the IDE cache was not recognizing the table I had created.
Steps to operate with the tables of a database:
Database (left-side panel of the IDE) >
Double-click on PostgreSQL - #localhost >
Double-click on the name of the database >
Right-click on the public schema >
New > Console
GL

Couchbase: delete documents matching text

We have the following documents in couchbase:
Doc1 :
{
  "property1" : "someval",
  "name" : "DOC_OF_TYPE1"
}
Doc2 :
{
  "property1" : "someval2",
  "name" : "DOC_OF_TYPE1"
}
Doc3 :
{
  "property1" : "someval2",
  "name" : "DOC_OF_TYPE2"
}
Is there a way to view only documents of type "DOC_OF_TYPE1"? And is there a way to delete all documents of that type from Couchbase?
From Couchbase Server 4.1 onwards this is made easy through the use of N1QL queries and DML (data manipulation language).
First, create a primary index on your data using N1QL. This can be done via a Couchbase SDK, the Query Workbench (integrated in the upcoming Couchbase 4.5 release), or the cbq tool located in the Couchbase bin directory (/opt/couchbase/bin on Linux, inside the .app bundle on OS X, and in the install directory on Windows).
The following query creates the primary index on a bucket named 'mybucket', which allows you to perform any kind of N1QL query on the bucket:
CREATE PRIMARY INDEX ON `mybucket`;
For performance and production purposes you should also create a secondary index:
CREATE INDEX document_name ON `mybucket`(name);
This creates an index on every document's 'name' field. You can now efficiently select documents by their name field (this also works with just the primary index, but it would be slower):
SELECT *, meta().id FROM `mybucket` WHERE name = 'DOC_OF_TYPE1';
Or delete them based on their name field
DELETE FROM `mybucket` WHERE name = 'DOC_OF_TYPE2';
You can find more information about N1QL in the Couchbase Server documentation.
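The same SELECT and DELETE statements can also be sent over Couchbase's query REST service (port 8093 by default). The following is a minimal Python sketch assuming the bucket and document names from the question; the statement-builder helper is illustrative, not part of any Couchbase SDK.

```python
import json
import urllib.parse
import urllib.request

N1QL_URL = "http://localhost:8093/query/service"  # default query endpoint

def build_statement(bucket, doc_name, delete=False):
    """Build a N1QL statement that selects (or deletes) all documents
    whose 'name' field matches doc_name."""
    verb = "DELETE FROM" if delete else "SELECT *, meta().id FROM"
    return "%s `%s` WHERE name = '%s'" % (verb, bucket, doc_name)

def run_statement(statement):
    # POST the statement as a form field, which the query service accepts.
    body = urllib.parse.urlencode({"statement": statement}).encode("utf-8")
    with urllib.request.urlopen(N1QL_URL, data=body) as resp:
        return json.loads(resp.read())

# Examples (require a running query service):
# run_statement(build_statement("mybucket", "DOC_OF_TYPE1"))               # view
# run_statement(build_statement("mybucket", "DOC_OF_TYPE2", delete=True))  # delete
```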