'Relation does not exist' error after transferring to PostgreSQL

I have transferred my project from MySQL to PostgreSQL and tried to drop a column as a result of a previous issue (Integer error transferring from MySQL to PostgreSQL), because even after I removed the problematic column from models.py and saved, the error didn't disappear.
Tried both with and without quotes.
ALTER TABLE "UserProfile" DROP COLUMN how_many_new_notifications;
Or:
ALTER TABLE UserProfile DROP COLUMN how_many_new_notifications;
Getting the following:
ERROR: relation "UserProfile" does not exist
Here's the model, if it helps:
class UserProfile(models.Model):
    user = models.OneToOneField(User)
    how_many_new_notifications = models.IntegerField(null=True, default=0)

User.profile = property(lambda u: UserProfile.objects.get_or_create(user=u)[0])
I supposed it might have something to do with mixed case, but I have found no solution in any of the similar questions.

Yes, PostgreSQL is a case-aware database, but Django is smart enough to know that: it converts all field names, and generally the model name, to a lower-case table name. However, the real problem here is that your model name is prefixed by the app name. Django table names generally look like:
<appname>_<modelname>
You can find out what exactly it is by:
from myapp.models import UserProfile
print(UserProfile._meta.db_table)
This needs to be typed into the Django shell, which is invoked with ./manage.py shell. The result of this print statement is what you should use in your query.
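For example, if the app were named myapp (a hypothetical name, not taken from the question), the working statement would be:
-- "myapp" is an assumed app prefix; substitute the value printed by UserProfile._meta.db_table
ALTER TABLE myapp_userprofile DROP COLUMN how_many_new_notifications;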

Client: DataGrip
Database engine: PostgreSQL
For me, opening a new console worked; apparently, because of the IDE cache, it was not recognizing the table I had created.
Steps to operate with the tables of a database:
Database (Left side panel of the IDE) >
Double Click on PostgreSQL - #localhost >
Double Click on the name of the database >
Right click on public schema >
New > Console
GL

Related

Schema name in Create index statement while generating datanucleus JDO schema

I am trying to generate the schema with the DataNucleus SchemaTool for a MySQL database that will store countries and states. Here is a sample of that code:
@PersistenceCapable
public class State {
    private String shortCode;
    private String fullName;

    @Column(allowsNull = "true", name = "country_id")
    private Country countryId;
}
The following are my schemaGeneration properties:
datanucleus.ConnectionDriverName=com.mysql.jdbc.Driver
datanucleus.ConnectionURL=jdbc:mysql://localhost:3306/geog
datanucleus.ConnectionUserName=geog
datanucleus.ConnectionPassword=geogPass
datanucleus.schema.validateTables=true
datanucleus.mapping.Catalog=geog
datanucleus.mapping.Schema=geog
In my Country class as well, I have a mapping from a Collection, so that the FK reference for States to the Country table is built correctly.
But there is one problem. In the generated SQL script, the index statement has the schema name as part of the index name itself, which makes the entire script fail. Here is that piece:
CREATE INDEX `GEOG`.`MST_STATE_N49` ON `GEOG`.`MST_STATE` (`COUNTRY_ID`);
Notice the schema name in the GEOG.MST_STATE_N49 part of the index's name.
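For reference, MySQL accepts a qualified table name but not a qualified index name, so a sketch of the form that should parse is:
-- the table may be schema-qualified; the index name may not be
CREATE INDEX `MST_STATE_N49` ON `GEOG`.`MST_STATE` (`COUNTRY_ID`);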
I tried setting the schema and catalog name to blank, but that yields ''.MST_STATE_N49, which still fails.
I am using MySQL Server 5.7.17 with version 5.1.42 of the JDBC driver (yes, not the latest) on DataNucleus JDO 3.1.
Any hints on how I can get rid of the schema/catalog name in the generated DDL?
Why are you putting "datanucleus.mapping.Schema" when using MySQL? MySQL doesn't use schemas, last I looked. Similarly, "datanucleus.mapping.Catalog" is effectively defined by your URL! MySQL only actually supports the JDBC catalog, mapping it onto "database", as per this post. Since DataNucleus simply uses the JDBC driver, the catalog is the only useful input.
Consequently, removing both the schema and catalog properties will default to the right place.
After the comment above from Neil Stockton, I commented out both properties and it worked. Effectively, this is all that is needed:
datanucleus.ConnectionDriverName=com.mysql.jdbc.Driver
datanucleus.ConnectionURL=jdbc:mysql://localhost:3306/geog
datanucleus.ConnectionUserName=geog
datanucleus.ConnectionPassword=geogPass
datanucleus.schema.validateTables=true
Hopefully, I can get the answer to the other question (Pt. 2 in my reply-comment above) as well.

SAS pass through - Extract from MySQL does not work

I'm trying to build a Data Integration job that uses pass-through to extract data from a view in a MySQL database.
We've been using pass-through a lot in the project, mostly extracting data from Redshift;
however, with MySQL I was not able to make it work properly.
It keeps complaining that a table is missing, even though when pass-through is off the view is found and the data is extracted...
I've tried every trick I know, from enabling case-sensitive DBMS object names to manually removing single/double quotes from the statement, just in case MySQL confuses them with something else...
No luck.
ODBC driver is [MySQL][ODBC 5.3(a) Driver][mysqld-5.5.53].
Running in a Windows environment.
Any idea how to solve this?
Thank you in advance.
EDIT
So, first of all, one correction (even though it's not that important): I extract from a view, not a table.
This is the code generated by the SAS Create Table transformation with pass-through enabled; I have only put an asterisk in the outer select instead of the full list of columns:
proc sql;
connect to ODBC
(
READBUFF=10000 DATASRC="cmp.web_api" AUTHDOMAIN="MYSQL_CMP_Auth"
);
create table work."W7ZZZKOC"n as
select
*
from connection to ODBC
(
select
V_BI_ACCOUNT.ACCOUNT_NAME,
V_BI_ACCOUNT.ACQUISITION_SOURCE__C,
V_BI_ACCOUNT.ZUORA__ACTIVE__C,
V_BI_ACCOUNT.ADDRESS_LINE_1__C,
V_BI_ACCOUNT.ADDRESS_LINE_2__C,
V_BI_ACCOUNT.ADDRESS_LINE_3__C,
V_BI_ACCOUNT.AGREEMENT_DATE,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_1__C,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_2__C,
V_BI_ACCOUNT.PERSONBIRTHDATE,
V_BI_ACCOUNT.BLOCKED_REASON__C,
V_BI_ACCOUNT.BRAND__C,
V_BI_ACCOUNT.CPN__C,
V_BI_ACCOUNT.ACCCREATEDBYID,
V_BI_ACCOUNT.ACCCREATEDDATE,
V_BI_ACCOUNT.CURRENCY_PREFERENCE__C,
V_BI_ACCOUNT.CUSTOMER_FULL_NAME__PC,
V_BI_ACCOUNT.ACCOUNTID,
V_BI_ACCOUNT.ZUORA__CUSTOMERPRIORITY__C,
V_BI_ACCOUNT.DELIVERY_SALUTATION__C,
V_BI_ACCOUNT.DISPLAY_NAME,
V_BI_ACCOUNT.PERSONEMAIL,
V_BI_ACCOUNT.EMAILKEY__C,
V_BI_ACCOUNT.FACEBOOKKEY,
V_BI_ACCOUNT.FIRSTNAME,
V_BI_ACCOUNT.GENDER__C,
V_BI_ACCOUNT.PHONE,
V_BI_ACCOUNT.ACCLASTACTIVITYDATE,
V_BI_ACCOUNT.ACCLASTMODIFIEDDATE,
V_BI_ACCOUNT.LASTNAME,
V_BI_ACCOUNT.OTHER_EMAIL__C,
V_BI_ACCOUNT.PI_TYPE__C,
V_BI_ACCOUNT.ACCPARENTID,
V_BI_ACCOUNT.POSTCODE__C,
V_BI_ACCOUNT.PRIMARY_ACCOUNT_OF_THIS_CUSTOMER,
V_BI_ACCOUNT.ACCPRIMARY__C,
V_BI_ACCOUNT.ACCREASON_FOR_STATUS__C,
V_BI_ACCOUNT.ZUORA__SLA__C,
V_BI_ACCOUNT.ZUORA__SLASERIALNUMBER__C,
V_BI_ACCOUNT.SALUTATION,
V_BI_ACCOUNT.ACCSYSTEMMODSTAMP,
V_BI_ACCOUNT.PERSONTITLE,
V_BI_ACCOUNT.ZUORA__UPSELLOPPORTUNITY__C,
V_BI_ACCOUNT.X_CODE__C,
V_BI_ACCOUNT.ZUORA__ACCOUNT_ID__C,
V_BI_ACCOUNT.ZUORA__PAYMENTMETHODID__C,
V_BI_ACCOUNT.CITY,
V_BI_ACCOUNT.ORIGINAL_CREATED_DATE,
V_BI_ACCOUNT.SOURCE_SYSTEM_ID,
V_BI_ACCOUNT.STATUS,
V_BI_ACCOUNT.ZUORA__CONTACT_ID,
V_BI_ACCOUNT.ACCISDELETED,
V_BI_ACCOUNT.BILLING_ACCOUNT_NAME,
V_BI_ACCOUNT.ACZCREATEDDATE,
V_BI_ACCOUNT.ACZSYSTEMMODSTAMP,
V_BI_ACCOUNT.ACZLASTACTIVITYDATE,
V_BI_ACCOUNT.ZUORA__ACCOUNT__C,
V_BI_ACCOUNT.ZUORA__ACCOUNTNUMBER__C,
V_BI_ACCOUNT.ZUORA__AUTOPAY__C,
V_BI_ACCOUNT.ZUORA__BALANCE__C,
V_BI_ACCOUNT.ZUORA__CREDITCARDEXPIRATION__C,
V_BI_ACCOUNT.ZUORA__CURRENCY__C,
V_BI_ACCOUNT.ZUORA__MRR__C,
V_BI_ACCOUNT.ZUORA__PAYMENTTERM__C,
V_BI_ACCOUNT.ZUORA__PURCHASEORDERNUMBER__C,
V_BI_ACCOUNT.ZUORA__LASTINVOICEDATE__C,
V_BI_ACCOUNT.COUNTRY_NAME,
V_BI_ACCOUNT.COUNTRY_CODE,
V_BI_ACCOUNT.FAVOURITE_FOOTBALL_CLUB,
V_BI_ACCOUNT.COUNTY
from
web_api.V_BI_ACCOUNT as V_BI_ACCOUNT
);
%rcSet(&sqlrc);
disconnect from ODBC;
quit;
And again, when I extract the data without pass-through, it works successfully.
I found out the problem was a column name exceeding 32 characters.
As SAS supports column names only up to 32 characters,
the query fails to find PRIMARY_ACCOUNT_OF_THIS_CUSTOMER because the original column name is PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C.
EDIT
One more thing I found out is that MySQL doesn't like a schema name or aliases being specified. Therefore:
the FROM clause should specify only the table name, i.e. 'from v_bi_account' rather than 'from web_api.v_bi_account';
and do not use aliases, i.e. use 'from v_bi_account' rather than 'from v_bi_account as v_bi_account'.
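Putting both findings together, a sketch of the corrected inner pass-through query (only two columns shown; the alias PRIMARY_ACCT_OF_THIS_CUSTOMER is an arbitrary name of fewer than 32 characters, not taken from the original code):
select
    ACCOUNT_NAME,
    /* rename the 35-character column to a name SAS can hold */
    PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C as PRIMARY_ACCT_OF_THIS_CUSTOMER
from V_BI_ACCOUNT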
Thank you guys so much for your help.

Django dumpdata unable to serialize existing column

I'm getting bitten trying to dumpdata from a legacy database that I recently reverse-engineered using Django's inspectdb...
Other than this, every query works fine. In MySQL Workbench the column exists.
But when trying to export the data I get:
CommandError: Unable to serialize database: no such column: af_datper.locnac
The traceback doesn't reveal any of my own lines being involved (pasted at http://dpaste.com/1DASN1V so as not to pollute the question).
The model field already admits null values for that column, and the column does exist in the database (besides seeing it in Workbench, inspectdb wouldn't have picked it up otherwise)...
I honestly don't know what else to do. Any takers?
Digging a bit into your traceback, I see this:
File "venv/lib/python3.5/site-packages/django/db/backends/utils.py",
line 64, in execute
return self.cursor.execute(sql, params) File "venv/lib/python3.5/site-packages/django/db/backends/sqlite3/base.py",
line 323, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such column: af_datper.locnac
From the SQLite documentation:
If a schema-name is specified, it must be either "main", "temp", or
the name of an attached database. In this case the new table is
created in the named database. If the "TEMP" or "TEMPORARY" keyword
occurs between the "CREATE" and "TABLE" then the new table is created
in the temp database. It is an error to specify both a schema-name and
the TEMP or TEMPORARY keyword, unless the schema-name is "temp". If no
schema name is specified and the TEMP keyword is not present then the
table is created in the main database.
In short, sqlite doesn't support foo.bar as a column name, unless foo is the name of the database or one of main or temp.
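A minimal illustration in the sqlite3 shell (the first statement assumes a table af_datper with a column locnac exists; sqlite_master is used only as an arbitrary other table):
-- works: "af_datper" here qualifies a table that appears in FROM
SELECT af_datper.locnac FROM af_datper;
-- fails with "no such column: af_datper.locnac", because nothing
-- in the FROM clause provides that qualified column
SELECT af_datper.locnac FROM sqlite_master;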

EF and Code First Migrations with MySQL - dbo.tablename does not exist

I have set up Entity Framework to use MySQL and created a migration.
When I run update-database I get the error "table 'dbname.dbo.tablename' does not exist". Running in -Verbose mode, I see the statement that causes the error:
alter table `dbo.Comments` drop foreign key `FK_dbo.Comments_dbo.Comments_Comment_ID`
When I run the query directly in MySQL Workbench it throws the same error.
The problem seems to be the dbo. prefix in the migration set. Anything in the form dbo.tablename won't run, the error saying that the table does not exist; e.g. select * from dbo.tablename fails, but select * from tablename works. The database was generated by Entity Framework, and the code first migrations were generated by EF too.
However, the migrations generate everything with the dbo. prefix, which does not work.
Does anyone have a solution to this?
I was having this problem just today as well; I found my answer here:
MySqlMigrationCodeGenerator
You have to set:
CodeGenerator = new MySqlMigrationCodeGenerator();
in your context's configuration class. This will get rid of the schema gibberish for MySQL. My Configuration class looks like this:
internal sealed class Configuration : DbMigrationsConfiguration<YourContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
        SetSqlGenerator("MySql.Data.MySqlClient", new MySqlMigrationSqlGenerator());
        SetHistoryContextFactory("MySql.Data.MySqlClient", (conn, schema) => new MySqlHistoryContext(conn, schema));
        CodeGenerator = new MySqlMigrationCodeGenerator();
    }
}
I had this same issue, and when I applied the solution posted by Sorio it generated a lot of problems.
CodeGenerator = new MySqlMigrationCodeGenerator();
This line can change all the SQL that you generate; for example, in my case all the foreign key constraints disappeared from the SQL applied to the database.
I am still without a solution, because for me this is not acceptable. Before using this solution, I recommend you check the generated SQL with the command:
update-database -script

How to convert H2Database database file to MySQL database .sql file?

I have some data in an H2 database file and I want to convert it to a MySQL .sql database file. What methods can I follow?
As a follow-up to Thomas Mueller's answer: SQuirreL SQL worked fine for me.
Here is the procedure on Windows to convert an H2 database:
Go to "drivers list", where everything is red by default.
Select "H2" driver, and specify the full path to "h2-1.3.173.jar" (for
example) in "Extra Class Path". The H2 driver should display a blue
check in the list.
Select your target driver (PostgreSQL, MySQL), and
do the same, for example for PostgreSQL, specify the full path to
"postgresql-9.4-1201.jdbc41.jar" in Extra Class Path.
Go to "Aliases", then click on "+" for H2 : configure your JDBC chain, for example copy/paste the jdbc chain you obtain when you launch H2, and do the same for your target database: click on "+", configure and "test".
When you double click on your alias, you should see everything inside your database in a new Tab. Go to the tables in source database, do a multi-select on all your tables and do a right-click : "Copy Table".
Go to your target database from Alias, and do a "Paste Table". When all tables are copied altogether, the foreign key references are also generated.
Check your primary keys : from H2 to PostgreSQL, I lost the Primary Key constraints, and the auto-increment capability.
You could also rename columns and tables by a right click : "refactor". I used it to rename reserved words columns after full copy, by disabling name check in options.
This worked well for me.
The SQL script generated by the H2 database is not fully compatible with the SQL supported by MySQL, so you would have to change the script manually. This requires that you know both H2 and MySQL quite well.
To avoid this problem, an alternative and probably simpler way to copy the data from H2 to MySQL is to use a third-party tool such as SQuirreL SQL together with the SQuirreL DB Copy plugin. (First you need to install SQuirreL SQL, and on top of that the DB Copy plugin.)
I created a Groovy script that does the migration from H2 to MySQL. From there you could do a mysqldump. It requires that the tables already exist in the MySQL database. It should work for other DBMSs with minor changes.
@Grapes([
    @Grab(group='mysql', module='mysql-connector-java', version='5.1.26'),
    @Grab(group='com.h2database', module='h2', version='1.3.166'),
    @GrabConfig(systemClassLoader = true)
])
import groovy.sql.Sql

def h2Url = 'jdbc:h2:C:\\Users\\xxx\\Desktop\\h2\\sonardata\\sonar'
def h2User = 'sonar'
def h2Passwd = 'sonar'
def mysqlUrl = 'jdbc:mysql://10.56.xxx.xxx:3306/sonar?useunicode=true&characterencoding=utf8&rewritebatchedstatements=true'
def mysqlUser = 'sonar'
def mysqlPasswd = 'xxxxxx'
def mysqlDatabase = 'sonar'

sql = Sql.newInstance(h2Url, h2User, h2Passwd, 'org.h2.Driver')

// Collect the columns of every table in the H2 PUBLIC schema,
// skipping the "_MY" linked tables left over from earlier runs.
def tables = [:]
sql.eachRow("select * from information_schema.columns where table_schema='PUBLIC'") {
    if (!it.TABLE_NAME.endsWith("_MY")) {
        if (tables[it.TABLE_NAME] == null) {
            tables[it.TABLE_NAME] = []
        }
        tables[it.TABLE_NAME] += it.COLUMN_NAME
    }
}

tables.each { tab, cols ->
    println("processing $tab")
    println("dropping ${tab}_my")
    // Create an H2 linked table that points at the MySQL table,
    // empty it, then copy all rows across through the link.
    sql.execute("DROP TABLE IF EXISTS " + tab + "_my;")
    sql.execute("create linked table " + tab + "_my ('com.mysql.jdbc.Driver', '" + mysqlUrl + "', '" + mysqlUser + "', '" + mysqlPasswd + "', '" + mysqlDatabase + "." + tab.toLowerCase() + "');")
    sql.eachRow("select count(*) as c from " + tab + "_my") { println("deleting $it.c entries from mysql table") }
    sql.execute("delete from " + tab + "_my")
    colString = cols.join(", ")
    sql.eachRow("select count(*) as c from " + tab) { println("starting to copy $it.c entries") }
    sql.execute("insert into " + tab + "_my (" + colString + ") select " + colString + " from " + tab)
}
The H2 database allows you to create a SQL script using the SCRIPT SQL statement or the Script command line tool. Possibly you will need to tweak the script before you can run it against the MySQL database.
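For example, from within an H2 SQL session (the file name here is arbitrary):
-- write the entire database, schema and data, to a SQL script file
SCRIPT TO 'dump.sql';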
You can use Full Convert to convert the database; it's easy to use. Follow the steps shown here:
https://www.fullconvert.com/howto/h2-to-mysql