How can I transfer data between 2 MySQL databases?

I want to do that using code and not a tool like "MySQL Migration Toolkit". The easiest way I know is to open a connection (using MySQL connectors) to DB1 and read its data, then open a connection to DB2 and write the data to it. Is there a better/easier way?

First, I'm going to assume you aren't in a position to just copy the data/ directory, because if you are, then your existing snapshot/backup/restore procedure will probably suffice (and will test your backup/restore procedures into the bargain).
In that case, if the two tables have the same structure, generally the quickest, and ironically the easiest, approach will be to use SELECT ... INTO OUTFILE ... on one end and LOAD DATA INFILE ... on the other.
See http://dev.mysql.com/doc/refman/5.1/en/load-data.html and .../select.html for definitive details.
For trivial tables the following will work:
SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n' ;
LOAD DATA INFILE '/tmp/mytable.csv' INTO TABLE mytable
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n' ;
We have also used FIFOs to great effect to avoid the overhead of actually writing to disk, or, if we do need to write to disk for some reason, to pipe the data through gzip.
i.e.
mkfifo /tmp/myfifo
gzip -c /tmp/myfifo > /tmp/mytable.csv.gz &
... SELECT ... INTO OUTFILE '/tmp/myfifo' .....
wait
gunzip -c /tmp/mytable.csv.gz > /tmp/myfifo &
... LOAD DATA INFILE /tmp/myfifo .....
wait
Basically, once you direct the table data to a FIFO you can compress it, munge it, or tunnel it across a network to your heart's content.
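For example, a rough sketch of pushing the data to another host over ssh instead of gzipping it (hostname and paths are placeholders, assuming key-based ssh login):
mkfifo /tmp/myfifo
ssh user@destination 'cat > /tmp/mytable.csv' < /tmp/myfifo &
... SELECT ... INTO OUTFILE '/tmp/myfifo' .....
wait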

The FEDERATED storage engine? Not the fastest one in the bunch, but for one-time, incidental, or small amounts of data it'll do. That is assuming you're talking about two servers. With two databases on one and the same server it's simply:
INSERT INTO databasename1.tablename SELECT * FROM databasename2.tablename;
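If the source table really is on another server, a rough sketch of the FEDERATED route looks like this (the column list must mirror the remote table; user, host and names here are placeholders):
-- local "proxy" table pointing at the remote table
CREATE TABLE remote_tablename (
    id INT NOT NULL,
    name VARCHAR(64),
    PRIMARY KEY (id)
)
ENGINE=FEDERATED
CONNECTION='mysql://user:password@remotehost:3306/databasename2/tablename';
-- then copy the rows over locally
INSERT INTO databasename1.tablename SELECT * FROM remote_tablename;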

You can use mysqldump and mysql (the command-line client). These are command-line tools, and in the question you write that you don't want to use tools, but still, using them (even by running them from your code) is the easiest way; mysqldump solves a lot of problems.
You can select from one database and insert into the other, which is pretty easy. But if you also need to transfer the database schema (CREATE TABLE statements etc.), it gets a little more complicated, which is the reason I recommend mysqldump. A lot of PHP MySQL admin tools also do this, so you can use them or look at their code.
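A rough sketch of that approach, piping mysqldump straight into the other server (host names, credentials and database names are placeholders; passwords on the command line are shown only for brevity):
mysqldump -h db1.example.com -u user -pSECRET --single-transaction sourcedb \
  | mysql -h db2.example.com -u user -pSECRET targetdb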
Or maybe you can use MySQL replication.

from http://dev.mysql.com/doc/refman/5.0/en/rename-table.html:
As long as the two databases are on the same file system, you can use RENAME TABLE to move a table from one database to the other:
RENAME TABLE current_db.tbl_name TO other_db.tbl_name;

If you have enabled binary logging on your current server (and still have all the binlogs), you can set up replication for the second server.
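Roughly, on the second server that means something like the following (log file name, position and credentials are placeholders; newer MySQL versions use CHANGE REPLICATION SOURCE TO / START REPLICA instead):
CHANGE MASTER TO
    MASTER_HOST='current-server',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;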

Related

How to drop data of only one table in mysqldump?

My situation: a webshop running Shopware 6. The database is quite big (34 GB total), but most of that is logs (table log_entry = 28 GB) and saved shopping carts (table cart = 3 GB).
I would like to do a mysqldump, but for the two tables log_entry and cart I would like to save only the schema.
I know how to dump only the schema for all tables with the --no-data flag, the data only with the --no-create-info flag, and how to ignore a table with --ignore-table=[tablename].
Is my best option to do 2 dumps, one with the schema only and a second one with data only where I ignore the 2 tables?
That would then give:
mysqldump -u user -p $dbname --no-data > backup_schema.sql
mysqldump -u user -p $dbname --no-create-info --ignore-table=$dbname.cart --ignore-table=$dbname.log_entry > backup_data.sql
If you want to use native mysqldump, you cannot avoid making two calls, as you already mentioned yourself.
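A minimal sketch of those two calls written into a single dump file (using the same placeholders as your example):
mysqldump -u user -p --no-data $dbname > backup.sql
mysqldump -u user -p --no-create-info --ignore-table=$dbname.cart \
    --ignore-table=$dbname.log_entry $dbname >> backup.sql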
We use the GDPRdump tool by SmileSA for such jobs, where you can leave out (truncate) and even anonymize data during the dump.
There is already a Shopware 6 template for this on GitHub: https://github.com/portaltech-reply/gdpr-dump-shopware
A less sophisticated solution, which basically does what you already tried but in a more flexible way and into one dump file, is https://github.com/amenk/SelfScripts/blob/master/mysql-stripped-dump (self-link).
If that works, it might be your best bet. Although, is it possible to send it SQL statements directly in your environment? Another way might be to export the data in CSV format using an SQL statement that selects exactly the data you want. This would get just the data (username, email and state):
SELECT username, email, state
FROM TABLENAME
INTO OUTFILE '/temp/yourdata.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
This one will get the column headers and the data together:
SELECT 'username', 'email', 'state'
UNION ALL
SELECT username, email, state
FROM TABLENAME
INTO OUTFILE '/temp/yourdatafull.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
I know this might not be the exact answer you were looking for, but it might give you an idea for a secondary or alternative backup method. At the very least it is handy to drop a file that loads easily into Excel, which you can play around with to do some manual calculations or data mining. One drawback: I believe the UNION approach uses the data types of the first SELECT, so if you have dates or numbers after the header row it might confuse things. Also, if you have --secure-file-priv set, you will only be able to write output to the specific directory given in the MySQL settings.
Of course, if you have an environment where you are saving integers and dates as strings, I think you should be fine. That will need some testing for sure; I just stumbled across this over here, if you want more information on this method:
https://www.databasestar.com/mysql-output-file/
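Regarding the --secure-file-priv note above: to see where (if anywhere) the server allows INTO OUTFILE to write, you can check the variable directly:
SHOW VARIABLES LIKE 'secure_file_priv';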

How to output MySQL data tables in CSV format?

I need to know how I can export 10 data tables from one database into CSV format with a daily cron job.
I know this statement:
SELECT *
FROM TABLENAME
INTO OUTFILE '/var/lib/mysql-files/BACKUP.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
But how can I add the other 9 tables to the same job?
Best Regards!
You should look into mysqldump with the --tab option. It runs those INTO OUTFILE statements for you, dumping each table into a separate file.
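A minimal sketch, assuming the database is called mydb, the 10 table names are placeholders, and /var/lib/mysql-files is a directory the server may write to (check secure_file_priv):
# dump_tables.sh -- each table ends up as tablename.sql (schema) and tablename.txt (data)
mysqldump -u user -pSECRET --tab=/var/lib/mysql-files \
    --fields-terminated-by=',' --fields-enclosed-by='"' \
    mydb table1 table2 table3 table4 table5 table6 table7 table8 table9 table10
# crontab entry: run it every night at 02:30
30 2 * * * /path/to/dump_tables.sh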
You don't want all the tables in one file, because it would make it very awkward to import later.
Always be thinking about how you will restore a backup. I tell people, "you don't need a backup strategy, you need a restore strategy." Backing up is just a necessary step to restoring.

Importing Data MySQL

I have a huge dataset. What is the fastest way to upload the data into a MySQL database from PHP, and is there any way to verify whether all the data was imported or not?
Any suggestions or hints will be greatly appreciated. Thanks.
If the data set is merely huge (can be transferred within hours), it is not worth the effort of finding an especially efficient way; any script should be able to do the job. I am assuming you are reading from some non-database format (e.g. plain text)? In that case, simply read and insert.
If you require careful processing before you insert the rows, you might want to consider creating real objects (and their sub-objects) in memory first and then mapping them to rows and tables; object-relational data source patterns will be valuable here. This will, however, be much slower, and I would not recommend it unless it is absolutely necessary, especially if you are doing the import just once.
For very fast access, some people write a direct binary blob of the objects to disk and then read it straight back into an array, but that is available in languages like C/C++; I am not sure if or how it can be done in a scripting language. Again, this is good for READING the data back into memory, not for transferring it to the DB.
The easiest way to verify that the data has been transferred is to compare COUNT(*) in the DB with the number of items in your file. A more advanced way is to compute a hash (e.g. SHA-1) of the primary key sets on both sides.
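A minimal sketch of the row-count comparison (file, database and table names are placeholders):
# lines in the source file
wc -l < /path/to/data.txt
# rows that actually made it into the table
mysql -N -u user -p -e 'SELECT COUNT(*) FROM mydb.mytable'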
I used LOAD DATA; it is the standard MySQL loader. It works fine and is faster, and there are many options.
You can use:
a data file named export_du_histo_complet.txt with multiple lines like this:
"xxxxxxx.corp.xxxxxx.com";"GXTGENCDE";"GXGCDE001";"M_MAG105";"TERMINE";"2013-06-27";"14:08:00";"14:08:00";"00:00:01";"795691"
and an SQL file like this (because I use a Unix shell script which calls the SQL file):
LOAD DATA INFILE '/home2/soron/EXPORT_HISTO/export_du_histo_complet.txt'
INTO TABLE du_histo
FIELDS
TERMINATED BY ';'
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES
STARTING BY ' '
TERMINATED BY '\n'
(server, sess, uproc, ug, etat, date_exploitation, debut_uproc, fin_uproc, duree, num_uproc)
I specified the table fields which I wanted to import (my table has more columns).
Note that there is a MySQL limitation, so you can't use a variable to specify your INFILE path.

Transferring MySQL data to another server

I have a central server and several (around 50) remote servers. I want to transfer some log data from each of the servers to the central server every night. They all run Linux, and the logs are stored in MySQL. I have SSH access to all servers.
What is the best (easiest, safest, most reliable...) practice of transferring the data from remote servers to the central server?
thanks
Depending on your needs and the time you want to put into this: I have been using this script for a long time to back up databases.
It's a low-cost strategy that is tried and tested, very flexible and quite reliable.
You can export the new rows to a CSV file, like this:
SELECT id, name, email INTO OUTFILE '/tmp/result.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM users WHERE timestamp > lastExport
Then transfer it via scp and import it with mysqlimport.
If the database is InnoDB, you should import the referenced (parent) tables first.
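A rough sketch of that scp + mysqlimport step, run from the central server for one remote host (hostnames, paths and credentials are placeholders; note that mysqlimport derives the table name from the file name, here users):
scp backup@remote01:/tmp/result.csv /tmp/users.csv
mysqlimport --local --fields-terminated-by=',' \
    --fields-optionally-enclosed-by='"' --lines-terminated-by='\n' \
    central_db /tmp/users.csv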
In general it is easiest to dump the data with mysqldump and load it back in on the other server. You can use mysqldump's many options to control things such as locking, the MVCC snapshot, which tables to include, and so on.
CSV is more fiddly than mysqldump because both sides need to agree on how fields are terminated, how values are escaped, and so on.
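For example, a hedged sketch of dumping only the log tables with a consistent InnoDB snapshot and without long table locks (database and table names are placeholders):
mysqldump -u user -p --single-transaction --skip-lock-tables \
    logdb access_log error_log > logs.sql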

How to dump temporary MySQL table into a file?

Is there a way to dump/export/save a temporary MySQL table into a file on disk (a .sql file, that is, similar to one created by mysqldump)?
Sorry, I did not read the question properly the first time around... at any rate, the best I can think of is using the SELECT ... INTO OUTFILE statement, like this:
SELECT * INTO OUTFILE 'result.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM temp_table;
This does have many limitations though; for instance, it only dumps the raw data without the field headers. The other thing I found that may or may not be of use is the SHOW CREATE TABLE statement. If you can find some way of combining the output of these two statements, you may be able to get a proper "dump" file like the one produced by the mysqldump command below.
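For reference, the statement that yields the table definition is simply (it has to run in the same session that created the temporary table):
SHOW CREATE TABLE temp_table;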
You should be able to use the mysqldump application:
mysqldump your_database temptable > file.sql
This will dump the table together with its CREATE declaration. (Note that mysqldump opens its own connection, so it cannot see a TEMPORARY table from another session; this works for regular tables.)