Error while converting MySQL to SQLite

I have a Perl script that converts a MySQL dump to SQLite using the SQL::Translator module.
The MySQL file contains the following:
CREATE TABLE `table1` (
`id1` char(4) NOT NULL,
`text1` char(2) NOT NULL,
`text2` char(2) NOT NULL,
`text3` enum('N','Y') NOT NULL,
UNIQUE KEY `id1` (`id1`,`text1`,`text2`),
CONSTRAINT `table1_ibfk_1` FOREIGN KEY (`id1`) REFERENCES `table2` (`id1`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
While converting it with SQL::Translator, I get the following line in the final SQL:
CREATE INDEX "table1" ON "table2" ("table1");
When loading this final SQL file into SQLite with the sqlite3 command, I get the following errors:
there is already an index named table1
Error: near line 540: no such table: main.table1
If I remove the line CREATE INDEX "table1" ON "table2" ("table1"); from the final SQL, it works fine.
Please help.

Unlike MySQL, SQLite uses the same namespace for tables and indexes — you cannot have an index with the same name as a table. You'll need to change the name of the index.
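For example, you could rename the index in the generated SQL before feeding it to sqlite3. A minimal sketch, assuming the index is meant to cover the id1 column that the foreign key references (table1_id1_idx is an arbitrary, non-conflicting name):

CREATE INDEX "table1_id1_idx" ON "table2" ("id1");
-- A distinct index name avoids the clash with the table named table1.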

The duplicate key value is (<NULL>, <NULL>)

So I'm trying to migrate a table from MySQL to MSSQL (using SQL Server Migration Assistant for MySQL), but I get this error:
Migrating data...
Analyzing metadata...
Preparing table testreportingdebug.testcase...
Preparing data migration package...
Starting data migration Engine
Starting data migration...
The data migration engine is migrating table '`testreportingdebug`.`testcase`': > [SwMetrics].[testreportingdebug].[testcase], 8855 rows total
Violation of UNIQUE KEY constraint 'testcase$Unique'. Cannot insert duplicate key in object 'testreportingdebug.testcase'. The duplicate key value is (<NULL>, <NULL>).
Errors: Violation of UNIQUE KEY constraint 'testcase$Unique'. Cannot insert duplicate key in object 'testreportingdebug.testcase'. The duplicate key value is (<NULL>, <NULL>).
Completing migration of table `testreportingdebug`.`testcase`...
Migration complete for table '`testreportingdebug`.`testcase`': > [SwMetrics].[testreportingdebug].[testcase], 0 rows migrated (Elapsed Time = 00:00:00:01:352).
Data migration operation has finished.
0 table(s) successfully migrated.
0 table(s) partially migrated.
1 table(s) failed to migrate.
I've just copied three rows from my table, and this is what they look like:
'1', 'Pump# TimeToService', NULL, NULL, 'A general test case comment ...', '0'
'2', 'Config.SlaveMinimumReplyDelay', NULL, NULL, NULL, '0'
'3', 'Config.RESERVED', NULL, NULL, NULL, '0'
If you are wondering how the columns in the MySQL table are set up, see the DDL below.
Is it because Right, Left and Comment can be NULL?
DDL of the table:
CREATE TABLE `testcase` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`TestCaseName` varchar(150) DEFAULT NULL,
`Left` int(11) DEFAULT NULL,
`Right` int(11) DEFAULT NULL,
`Comment` text,
`Hidden` tinyint(4) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `Unique` (`Left`,`Right`)
) ENGINE=InnoDB AUTO_INCREMENT=10580 DEFAULT CHARSET=utf8
I had to remove the unique key, since the columns contain only NULLs:
ALTER TABLE `testreportingdebug`.`testcase`
DROP INDEX `Unique`;
If you want the strict equivalent of your MySQL table in SQL Server, you must create it like this:
CREATE TABLE testcase (
id int NOT NULL IDENTITY PRIMARY KEY,
TestCaseName varchar(150),
[Left] int,
[Right] int,
Comment VARCHAR(max),
[Hidden] tinyint DEFAULT 0
);
CREATE UNIQUE INDEX X_testcase_right_left
ON testcase ([Left], [Right])
WHERE [Left] IS NOT NULL
AND [Right] IS NOT NULL;
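With the filtered index in place, rows where both [Left] and [Right] are NULL no longer collide, which matches how the MySQL data behaves (a quick sanity check against the table above; the values are made up):

INSERT INTO testcase (TestCaseName, [Left], [Right], Comment, [Hidden])
VALUES ('Pump# TimeToService', NULL, NULL, 'A general test case comment', 0);
INSERT INTO testcase (TestCaseName, [Left], [Right], Comment, [Hidden])
VALUES ('Config.RESERVED', NULL, NULL, NULL, 0);
-- Both inserts succeed: the filtered unique index only applies
-- when [Left] and [Right] are both non-NULL.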
By the way, the column names "Right", "Left" and "Hidden" are SQL / MS SQL Server reserved words and should not be used at any time for SQL identifiers (table names, column names, proc names...).
The complete list can be obtained here.

ERROR 1062 import sql file with primary and auto increment

I have a problem: I need to restore an SQL file, but when I try to do this with mysql -u user -p --database test < file.sql I get this error: ERROR 1062 (23000) at line 50: Duplicate entry '0' for key 'PRIMARY'
My first attribute is AUTO_INCREMENT, NOT NULL and PRIMARY.
I have searched, and the problem is that in my SQL file the primary key has no value, just empty quotes. For example, in INSERT INTO log VALUES ('','app1','name','hello') the first value is only empty quotes. How can I import this SQL file when the value is missing? I have a lot of such lines in my file...
Definition of the table
CREATE TABLE `log` (
`id_log` int(11) NOT NULL AUTO_INCREMENT,
`application` varchar(20) NOT NULL,
`module` text NOT NULL,
`action` text NOT NULL,
PRIMARY KEY (`id_log`)
) ENGINE=InnoDB AUTO_INCREMENT=646 DEFAULT CHARSET=latin1;
You just need to rewrite your queries.
The query should look like this:
INSERT INTO log(application, module, action) VALUES ('app1','name','hello');
This inserts the remaining values into the row and lets the id_log column take its auto-increment value.
I would assume MySQL is trying to cast '' to an int since it is an AUTO_INCREMENT field.
It casts it to 0, so the first entry is fine, but on the second one the key 0 already exists and you get the error.
You will have to replace the '' with actual, unique integer values, or remove it altogether and add a column list.
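If rewriting every statement with a column list is impractical, note that MySQL also assigns the next auto-increment value when NULL is inserted into an AUTO_INCREMENT column, so a bulk search-and-replace of the leading '' with NULL works too (a sketch using the row from the question):

INSERT INTO log VALUES (NULL,'app1','name','hello');
-- NULL in an AUTO_INCREMENT column means "generate the next id",
-- unlike '' which is cast to 0 and collides from the second row on.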

how to make mysql database schema to be compatible with h2 database

I am currently using MySQL as my database and use Flyway to manage the database schema. All my unit tests run against MySQL, and they get really slow as more tests are added. Now I want to switch the unit tests from MySQL to an in-memory H2 database. Below are my settings for the H2 connection:
#Datasource
spring.datasource.url=jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;DATABASE_TO_UPPER=true
spring.datasource.username=
spring.datasource.password=
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.default-transaction-isolation-level=1
When I run flywayMigrate, I get some SQL errors. Below is one example; this SQL creates a table on MySQL but fails on H2.
CREATE TABLE `file_storage` (
`id` BIGINT(64) NOT NULL AUTO_INCREMENT,
`file_name` VARCHAR(45) NULL,
PRIMARY KEY (`id`))
DEFAULT CHARACTER SET = utf8;
Below is the error I got from H2. I don't know what's wrong with my SQL. Is there a way for H2 to accept a MySQL database schema?
Execution failed for task ':dbschema:flywayMigrate'.
> Error occurred while executing flywayMigrate
Migration V2016_02_26_12_59__create_file_storage.sql failed
-----------------------------------------------------------
SQL State : 42000
Error Code : 42000
Message : Syntax error in SQL statement "CREATE TABLE ""FILE_STORAGE"" (
""ID"" BIGINT(64) NOT NULL AUTO_INCREMENT,
""FILE_NAME"" VARCHAR(45) NULL,
PRIMARY KEY (""ID""))
DEFAULT CHARACTER[*] SET = UTF8 "; SQL statement:
CREATE TABLE `file_storage` (
`id` BIGINT(64) NOT NULL AUTO_INCREMENT,
`file_name` VARCHAR(45) NULL,
PRIMARY KEY (`id`))
DEFAULT CHARACTER SET = utf8 [42000-190]
Location : db/migration/V2016_02_26_12_59__create_file_storage.sql (/Users/yzzhao/dev/cooltoo/cooltoo_backend/dbschema/build/resources/main/db/migration/V2016_02_26_12_59__create_file_storage.sql)
Line : 1
Statement : CREATE TABLE `file_storage` (
`id` BIGINT(64) NOT NULL AUTO_INCREMENT,
`file_name` VARCHAR(45) NULL,
PRIMARY KEY (`id`))
DEFAULT CHARACTER SET = utf8
EDIT
I have hundreds of SQL scripts that run fine in MySQL, so I don't want to change anything in them. Is there a way to make H2 accept MySQL scripts?
According to this description, you may try to run your H2 database in MySQL Compatibility Mode by setting MODE=MySQL in the connection string. Here is exactly what is said about it:
To use the MySQL mode, use the database URL jdbc:h2:~/test;MODE=MySQL or the SQL statement SET MODE MySQL.
When inserting data, if a column is defined to be NOT NULL and NULL is inserted, then a 0 (or empty string, or the current timestamp for timestamp columns) value is used. Usually, this operation is not allowed and an exception is thrown.
Creating indexes in the CREATE TABLE statement is allowed using INDEX(..) or KEY(..). Example: create table test(id int primary key, name varchar(255), key idx_name(name));
Meta data calls return identifiers in lower case.
When converting a floating point number to an integer, the fractional digits are not truncated, but the value is rounded.
Concatenating NULL with another value results in the other value.
Text comparison in MySQL is case insensitive by default, while in H2 it is case sensitive (as in most other databases). H2 does support case insensitive text comparison, but it needs to be set separately, using SET IGNORECASE TRUE. This affects comparison using =, LIKE, REGEXP.
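Applied to the datasource settings from the question, that would look something like this (a sketch; the other URL options are carried over unchanged):

spring.datasource.url=jdbc:h2:mem:testDb;MODE=MySQL;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;DATABASE_TO_UPPER=true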
Your issue can be seen in your example:
CREATE TABLE `file_storage`
(
`id` BIGINT(64) NOT NULL AUTO_INCREMENT,
`file_name` VARCHAR(45) NULL,
PRIMARY KEY (`id`)
)
DEFAULT CHARACTER SET = utf8;
The last line, "DEFAULT CHARACTER SET = utf8", sets a MySQL table option. H2 has no such option at either the table or the schema level, as it operates in Unicode at all times.
If you have a lot of SQL DDL statements that have been written over the years for MySQL you are likely to see a lot of such issues.
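In practice that means keeping an H2-compatible variant of such statements, with the MySQL-only table option removed. A sketch of the statement from the question:

CREATE TABLE `file_storage` (
`id` BIGINT(64) NOT NULL AUTO_INCREMENT,
`file_name` VARCHAR(45) NULL,
PRIMARY KEY (`id`));
-- The DEFAULT CHARACTER SET = utf8 clause is dropped;
-- H2 stores text as Unicode regardless.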

How do I append more elements to an ENUM-type in MySQL Workbench?

As the title suggests, I'm trying to add more elements to my existing ENUM-type column. I'm using MySQL Workbench 6.3 for my database.
CREATE TABLE `quantum` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`type` enum('a','b','c','d','e') CHARACTER SET latin1 NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=11173 DEFAULT CHARSET=utf8;
Then I try to alter the type column to add another element, f:
ALTER TABLE quantum
MODIFY COLUMN type enum('a','b','c','d','e','f') NOT NULL
Then MySQL Workbench 6.3 gives me some weird error.
Are you sure you have the latest version of MySQL Workbench? I don't see this problem in the current one (6.3.6).
When using the ENUM() datatype in Workbench / MySQL, note that MySQL does not accept an ENUM() with no values/params inside it.
Use ENUM with some comma-separated values:
ENUM('PENDING','SUCCESS','FAIL')
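Note also that the original column was declared with CHARACTER SET latin1, and MODIFY COLUMN redefines the column from scratch, so leaving the charset out would silently switch the column to the table default (utf8 here). A sketch that preserves it:

ALTER TABLE quantum
MODIFY COLUMN type ENUM('a','b','c','d','e','f') CHARACTER SET latin1 NOT NULL;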

PySpark, order of column on write to MySQL with JDBC

I'm struggling a bit with understanding Spark and writing dataframes to a MySQL database. I have the following code:
forecastDict = {'uuid': u'8df34d5a-ce02-4d02-b282-e10363690122', 'created_at': datetime.datetime(2014, 12, 31, 23, 0)}
forecastFrame = sqlContext.createDataFrame([forecastDict])
forecastFrame.write.jdbc(url="jdbc:mysql://example.com/example_db?user=bla&password=blabal123", table="example_table", mode="append")
The last line in the code throws the following error:
Incorrect datetime value: '8df34d5a-ce02-4d02-b282-e10363690122' for column 'created_at' at row 1
I can post the entire stack trace if necessary, but basically what's happening here is that PySpark maps the uuid field to the wrong column in MySQL. Here's the MySQL definition:
mysql> show create table example_table;
...
CREATE TABLE `example_table` (
`uuid` varchar(36) NOT NULL,
`created_at` datetime NOT NULL,
PRIMARY KEY (`uuid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
...
If we change the mysql definition to the following (notice that only the order of the columns is different):
CREATE TABLE `example_table` (
`created_at` datetime NOT NULL,
`uuid` varchar(36) NOT NULL,
PRIMARY KEY (`uuid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The insert works fine. Is there a way to implement this without depending on the order of the columns? Or, what's the preferred way of saving data to an external relational database from Spark?
Thanks!
--chris
I would simply force the expected order on write:
url = ...
table = ...
# Read the target table's schema over JDBC to learn its column order.
columns = (sqlContext.read.format('jdbc')
    .options(url=url, dbtable=table)
    .load()
    .columns)  # note: columns is a property, not a method
# Reorder the dataframe's columns to match before appending.
forecastFrame.select(*columns).write.jdbc(url=url, table=table, mode='append')
Also, be careful with using schema inference on plain dictionaries. It is not only deprecated but also rather unstable.
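A more robust alternative is to build the dataframe from Row objects, so each value carries an explicit field name (a sketch reusing forecastDict from the question):

from pyspark.sql import Row

# Row(**dict) attaches field names to the values, so the schema comes
# from named fields instead of being inferred from a plain dict.
forecastFrame = sqlContext.createDataFrame([Row(**forecastDict)])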