Error when migrating MySQL database to SQLite

I have access to a MySQL database hosted on a remote server. I am attempting to migrate this to a local SQLite database. To do this, I am using this script, as suggested by this question. The usage is
./mysql2sqlite mysqldump-opts db-name | sqlite3 database.sqlite
I tried doing exactly that (with no dump options) and sqlite3 returned an error:
Error: near line 4: near "SET": syntax error
So far, I have found that when I only specify one of my tables in the dump options like so
./mysql2sqlite db-name table-B | sqlite3 database.sqlite
it appears to work fine, but when I specify the first table (let's call it table-A) it returns this error. I'm pretty sure the error comes from the output of mysql2sqlite. The 4th line of the dump (or rather, the command that starts on the 4th actual line) looks like this:
CREATE TABLE "Association_data_interaction" (
"id" int(10) DEFAULT NULL,
...
"Comments" text CHARACTER SET latin1,
...
"Experiment" text CHARACTER SET latin1,
"Methods" text CHARACTER SET latin1,
...
);
(with many other lines removed). I don't really know SQL that well, but as far as I can tell, the migration script is supposed to translate MySQL's dump output into the commands sqlite3 wants for creating an equivalent database, and it is failing to handle the text fields properly. I know that when I run SHOW COLUMNS in the MySQL database, the Comments, Experiment, and Methods columns are of the "text" type. What can I do to make sqlite3 accept the database?
Note: I have editing access to the database, but I would much prefer not to modify it if at all possible. I do not believe I have administrative access to it. Also, if it's relevant, the database has about 1000 tables, most of which have about 10,000 rows and 10-50 columns. I'm not too interested in the performance characteristics of the database; they're currently good enough for me.

That script is buggy; one of the bugs is that it expects a space before the final comma:
gsub( /(CHARACTER SET|character set) [^ ]+ /, "" )
Replace that line with:
gsub( /(CHARACTER SET|character set) [^ ]+/, "" )
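With that one-character change in place, re-running the original command should get past the syntax error; a quick way to verify (a sketch, using the names from the question):
./mysql2sqlite db-name | sqlite3 database.sqlite
sqlite3 database.sqlite '.schema Association_data_interaction'
The .schema output should no longer contain any CHARACTER SET clauses.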

Related

How to restore multiple SQL files to a different database name for each file in MySQL?

I have hundreds of SQL files, and I want to restore each one into a database with a different name.
I looked around for a solution, but what I found was something like concatenating all the files into one SQL file using cat and then restoring from the concatenated file.
But what I want is to restore each file into a different database, so I think concatenation is not suitable for my case.
Here's one solution: interleave USE statements with your SQL files, so you change the default database before each respective database's content. Gather the whole collection together and then pipe it to the input of the mysql client.
Example using bash syntax:
(
echo "USE database1;"
cat file1.sql
echo "USE database2;"
cat file2.sql
...
) | mysql
Another solution is to run the mysql client once for each file, and specify the database name as the argument:
mysql database1 < file1.sql
mysql database2 < file2.sql
...
Re your comment:
You can write a loop in bash too.
for file in *.sql
do
db=...   # decide which database this file belongs to
mysql "$db" < "$file"
done
The tricky part above is the "..." — deciding which db goes with each input SQL file. You haven't described any way to match them, so I don't know what you'd have to do to figure that out. But if you can make that inference somehow from the filename, then you can do this without having to type every file.
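For instance, if each file happened to be named after its target database (an assumption; adapt it to your actual naming scheme), the inference could be as simple as stripping the extension:
for file in *.sql
do
db="${file%.sql}"   # hypothetical convention: dbname.sql restores into database dbname
mysql "$db" < "$file"
done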

How does mysqldump write binary data into files for MySQL logical backup?

I am using mysqldump to back up a table. The schema is as follows:
CREATE TABLE `student` (
`ID` bigint(20) unsigned DEFAULT NULL,
`DATA` varbinary(64) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I can use the following command to back up the data in the table:
mysqldump -uroot -p123456 tdb > dump.sql
Now I want to write my own code using the MySQL C API to generate a file similar to dump.sql.
So I just:
read the data and store it in a char *p (using the function mysql_fetch_row);
write the data into a file using fprintf(f,"%s",p);
However, when I check the table fields written into the file, I find that the files generated by mysqldump and by my own program are different. For example,
one data field in the file generated by mysqldump
'[[ \\^X\í^G\ÑX` C;·Qù^Dô7<8a>¼!{<96>aÓ¹<8c> HÀaHr^Q^^½n÷^Kþ<98>IZ<9f>3þ'
one data field in the file generated by my program
[[ \^Xí^GÑX` C;·Qù^Dô7<8a>¼!{<96>aÓ¹<8c> HÀaHr^Q^^½n÷^Kþ<98>IZ<9f>3þ
So, my question is: why is writing the data out with fprintf(f,"%s",p) not a correct backup? Is it enough to just add ' at the front and end of the string? If so, what if the data in that field happens to contain ' itself?
Also, I wonder what it means to write unprintable characters into a text file.
Also, I read stackoverflow.com/questions/16559086 and tried the --hex-blob option. Is it OK if I transform every byte of the binary data into hex form and then write plain text strings into dump.sql?
Then, instead of getting
'[[ \\^X\í^G\ÑX` C;·Qù^Dô7<8a>¼!{<96>aÓ¹<8c> HÀaHr^Q^^½n÷^Kþ<98>IZ<9f>3þ'
I got something like
0x5B5B095C18ED07D1586009433BB751F95E44F4378ABC217B9661D3B98C0948C0614872111EBD6EF70BFE98495A9F33FE
All the characters are printable now!
However, if I choose this method, I wonder whether I will run into problems with encoding schemes other than latin1.
Also, the above are all my own ideas; I also wonder whether there are other ways to back up the data using the C interface.
Thank you for your help!
latin1, utf8, etc are CHARACTER SETs. They apply to TEXT and VARCHAR columns, not BLOB and VARBINARY columns.
Using --hex-blob is a good idea.
If you have "unprintable characters" in TEXT or CHAR, then either you have been trying to put a BLOB into such -- naughty -- or the print mechanism does is not set for the appropriate charset.

How to have a column with a value equal to the enclosing character in MySQL LOAD DATA INFILE

I'm using mysqlimport, which uses the LOAD DATA INFILE command. My question is the following: assume I have --fields-enclosed-by='"', and that I have a column whose values contain a double quote, such as 5" object (which stands for a 5 inch object). The problem is that when mysql encounters the double quote after the 5, it treats it as the enclosing character, and things get messed up. How do I use mysqlimport with such values? I don't want to just use another enclosing character, because that character may occur in the data as well. So what is a general solution for this?
I guess importing CSV will be troublesome this way.
To solve the above issue in another way:
Export or convert the old data into SQL format rather than CSV format.
Import that SQL data using the mysql command line tool.
mysql -hservername -uusername -p'password' dbname < 'path to your sql file.sql'
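In other words, dump from the source in SQL form and load it directly, so CSV quoting rules never come into play (a sketch with hypothetical host and database names):
mysqldump -h oldhost -u user -p sourcedb > data.sql
mysql -h newhost -u user -p targetdb < data.sql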

Converting SQL Server data to MySQL

I am attempting to convert some SQL Server data into MySQL, but keep running into hurdle after hurdle.
I assumed the easiest way would be to export the SQL Server data as CSV and import it that way (little did I know it wasn't that easy...!). Once I figured out how to get it to add quotes to the CSV, I attempted an import into MySQL. Now, for some reason the data was all imported WITH the quotes. Here is my MySQL query:
LOAD DATA INFILE 'file.csv' INTO TABLE test
FIELDS ENCLOSED BY '"'
TERMINATED BY ","
LINES TERMINATED BY "\n"
This imports, but it seems to ignore the quotes, and when commas are present in the data it completely screws up the row. The data is also imported including the quotes; see this data taken directly from the table:
"1" "1" "0" "{08CA6F70-735D-46ED-8EAB-C17A8BED1FCD}" "Reporting_1" "60" "Reporting" "ABC" "2008-04-21 19:25:28.013000000" "False" "True" "False" "True" "3" "164" "2033" "565077" "7929083" "334980" "2013-01-11 15:35:45.970000000" "False" "" "0" "False" "" "0" "0" "" "" "False" "False" "True" "True" "" "" "False"
I need to do some post-processing with PHP on this data, so I assumed I could just strip the quotes out in my code. The quotes were not removed by the trim() function (it just wouldn't work), so I str_replace'd the quotes, and that did seem to work. However:
I am remapping the data and inserting it into another MySQL table (with different column names). When inserting the processed data above, not all of it makes it across in the DB query. Take this query for instance:
INSERT INTO tesxf.xf_node (`node_id`,`parent_node_id`,`title`,`description`,`node_name`,`node_type_id`,`display_in_list`)
VALUES
('2123','2281','Container Name','','38064','Forum','1')
The node_id and title etc. make it into the new db, but the parent_node_id never does. The value is always inserted as 0. When I copy the query above and run it manually, it inserts the data correctly. Furthermore, if I run the data through mysql_real_escape_string, it comes out like this (only two fields converted here):
INSERT INTO tesxf.xf_node
(`node_id`,`parent_node_id`,`title`,`description`,`node_name`,`node_type_id`,`display_in_list`)
VALUES
('673','\0\06\08\08\0\0','\0\0S\0t\0o\0r\0y\0 \0W\0r\0i\0t\0i\0n\0g\0 \0-\0 \0F\0o\0r\0u\0m\0\0','','240576','Forum','1')
I have never seen this before. It makes me think that perhaps the data received from the imported table is in some odd format with some hidden characters I can't see? How can I strip these out? Could this be the reason the data didn't import properly in the first place?
I am running SQL Server 2012, I think (not too familiar with it). My MySQL instance is running on OS X 10.6 and is usually fine (I have done similar imports using Postgres and Oracle etc.).
Mysql is : mysql Ver 14.12 Distrib 5.0.92, for apple-darwin10.0 (i386) using EditLine wrapper
PHP is: PHP 5.3.26 (cli) (built: Jul 7 2013 18:30:38)
Losing my mind here... I think my next attempt will be piping constructed queries into a text file and then running that manually... I hope there is something simple that I have missed.
CSV files were transferred over FTP in binary mode, and I applied :set ff=unix to them using vim (as I read on here that could be an issue).
I have also attempted to install the SQL Server/FreeTDS wrapper for PHP on this server + 1 other, but don't get me started on that!!
Have you tried using the MySQL Migration Toolkit? If not, you may try it. Here is the link: MySQL Migration Toolkit
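One more thing worth checking in the LOAD DATA statement quoted above: MySQL's grammar expects TERMINATED BY before ENCLOSED BY in the FIELDS clause, which could explain why the quotes were being ignored. A sketch with the clauses reordered (file and table names taken from the question):
LOAD DATA INFILE 'file.csv' INTO TABLE test
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'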

ZF2 Doctrine2 MySql charset error

I've set up a MySQL DB with utf8_unicode_ci collation, and all the tables and columns in it have the same collation.
My Doctrine config has SET NAMES utf8 as a connection option, and my HTML files use the utf8 charset.
The text saved in those tables contains accented characters (á, è, etc.).
The problem is that when I save content to the DB, it is stored with strange characters, as if I were saving ISO text into a UTF-8 table (e.g.: Notícias).
The only workaround I've found is to utf8_decode before saving and utf8_encode before printing.
That means that, for some reason, something in between is mixing up UTF-8 and ISO.
What might it be?
Thanks.
EDIT:
I've set it up to encode before saving and decode before printing, and it prints correctly, but in the DB my characters change to:
XPTÓ -> XPTÓ
This makes searching in DB for "XPTÓ" impossible...
I would print bin2hex($string); at each step of the original workflow (i.e. without encode/decode steps).
Go through each of:
the raw $_POST data
the values you get after form validation
the values that get put in your bound Entity
the values you'd get from the db if you query it directly using PDO (get this from $em->getConnection())
the values that get populated into your Entity on reload (can do this via $em->detach($entity); $entity = $em->find('Entity', $id);)
You'd be looking at the point at which the output changes, and focus your search there.
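As a concrete illustration of what to look for (a sketch; 'XPTÓ' taken from the edit above):
echo -n 'XPTÓ' | xxd    # 58 50 54 c3 93: correct UTF-8, Ó is c3 93
echo -n 'XPTÓ' | xxd   # 58 50 54 c3 83 e2 80 9c: the same bytes re-encoded as if they were latin1 text
If bin2hex() shows the longer byte sequence at some step, that step is where the double encoding happens.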
I would also double check:
On the db: SHOW CREATE TABLE `table` shows CHARSET=utf8 for the whole table (and nothing different for the individual columns)
That the tool you use to see your database values (Navicat, phpMyAdmin) has got the correct encoding set.
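For the first check, the commands would be something like (database and table names are placeholders):
mysql -e "SHOW CREATE TABLE mydb.mytable\G"
mysql -e "SHOW VARIABLES LIKE 'character_set%'"
# character_set_client, character_set_connection and character_set_results
# should all be utf8 on the connection Doctrine uses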