Transferring MySQL data to another server - mysql

I have a central server and several (around 50) remote servers. I want to transfer some log data from each of the remote servers to the central server every night. They all run Linux, and the logs are stored in MySQL. I have SSH access to all servers.
What is the best (easiest, safest, most reliable...) practice for transferring the data from the remote servers to the central server?
Thanks

Depending on your needs and the time you want to put into this: I have been using this script for a long time to back up databases.
It's a low-cost strategy that is tried and tested, very flexible and quite reliable.
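The script linked above is not reproduced here, but as a rough sketch of that general approach (the host list, database name, and paths below are placeholders, not taken from the script), a nightly pull over SSH might look like this:
#!/usr/bin/env bash
# Hypothetical nightly pull: dump the log database on each remote host
# over SSH and store the compressed dump on the central server.
HOSTS="web01 web02 web03"   # ... up to ~50 remote servers
DB="logs"                   # assumed name of the log database
DEST="/var/backups/remote-logs"
for HOST in $HOSTS; do
    # mysqldump runs on the remote side; only the gzipped dump crosses the wire
    ssh "$HOST" "mysqldump --single-transaction $DB | gzip" \
        > "$DEST/${HOST}_$(date +%F).sql.gz"
done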

You can export the new rows to a CSV file, like this:
SELECT id, name, email INTO OUTFILE '/tmp/result.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM users WHERE timestamp > lastExport
Then transfer it via scp and import it with mysqlimport.
If the database uses InnoDB, you should import the referenced (parent) tables first, so foreign key constraints are satisfied.
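A minimal sketch of the scp-and-mysqlimport step, assuming the export above (the remote host and central database names are placeholders; note that mysqlimport derives the target table name from the file name, so result.csv is renamed to users.txt):
# copy the export from the remote server to the central one
scp remote01:/tmp/result.csv /tmp/users.txt
# import into the central database; the field/line options must match
# the SELECT ... INTO OUTFILE settings exactly
mysqlimport --local \
    --fields-terminated-by=',' \
    --fields-optionally-enclosed-by='"' \
    --fields-escaped-by='\\' \
    --lines-terminated-by='\n' \
    central_db /tmp/users.txt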

In general it is easiest to dump the data with mysqldump and load it back in on the destination server. You can use mysqldump's many options to control things such as locking, the MVCC snapshot, which tables to include, and so on.
CSV is more difficult than mysqldump because you need to make sure both sides agree on how fields are terminated, how they are escaped, etc.
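For instance (the database name and option choices here are illustrative, not from the answer):
# dump with a consistent InnoDB snapshot instead of locking the tables,
# then load the result on the destination server
mysqldump --single-transaction --quick logs > logs.sql
mysql logs < logs.sql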

Related

Tons of CSV data into new MySQL tables in one database

I got a problem, and after some hours of research I just want to die.
Is there a way to import lots of CSV data into one MySQL database but creating new tables with the file name of the CSV data?
Example: If I import data1.csv into db the table should be named data1 with all the data from data1.csv.
Thanks for your suggestions and answers.
There is no built-in tool/method/command/query to accomplish what you want within MySQL alone.
What is required has two parts:
1st: of course, your MySQL DB where the table will be created.
2nd: some third-party program that can interact with your DB, e.g. Java, JavaScript, Python, or even Unix shell scripting.
Following is a pseudo example of what will be needed.
What this program will have to do is relatively simple.
It will require a couple of inputs:
Database IP, username, password (these can be parameters passed into your program, or, for simplicity of testing, hard-coded directly into the program)
The next input will be your file name: data1.csv
Using the inputs, the program will harvest the 'data1' name as well as the first row of the data1.csv file to name each column.
Once the program collects this info, it can connect to the DB and run the MySQL statement CREATE TABLE TableName (ColumnName1 VARCHAR(255), ColumnName2 VARCHAR(255), etc...)
Finally it can run a MySQL command to import the *.csv file into the newly created table, e.g.:
LOAD DATA LOCAL INFILE 'C:/Stuff/csvFiles/Data1.csv'
INTO TABLE `SchemaName`.`Data1`
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES; -- skip the header row that was used for the column names
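Tying those steps together, a minimal shell sketch (the schema name, credentials, and the all-VARCHAR column typing are assumptions for illustration):
#!/usr/bin/env bash
# Hypothetical sketch: create a table named after a CSV file, with columns
# taken from its header row, then load the rest of the file into it.
CSV="$1"                          # e.g. data1.csv
TABLE=$(basename "$CSV" .csv)     # -> data1
# build "`col1` VARCHAR(255), `col2` VARCHAR(255), ..." from the header row
COLS=$(head -n 1 "$CSV" | tr -d '"\r' |
    awk -F',' '{ for (i = 1; i <= NF; i++)
                   printf "%s`%s` VARCHAR(255)", (i > 1 ? ", " : ""), $i }')
mysql --local-infile=1 -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" SchemaName <<SQL
CREATE TABLE \`$TABLE\` ($COLS);
LOAD DATA LOCAL INFILE '$CSV'
INTO TABLE \`$TABLE\`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
SQL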
Hope this helps clear up your options and approach a little.

How to synchronise data in local MySQL database with Cloud SQL database?

I have a Django application which is deployed on GAE. I have the same models on the prod server and the dev server. However, the content on both databases are different.
Actually, I'd like to do some tests on that data without screwing with the actual data on the cloud. Is there any way that I can pull the data in my Cloud SQL to my local MySQL db?
Assuming you can start fresh in development (empty tables), you can keep the auto_increment primary keys and the foreign key constraints in development.
Perform
SELECT * INTO OUTFILE '/full/path/to/fileParentXXX.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM ParentXXX;
(same concept for the other tables). Grab those exported CSV (comma-separated values) text files and bring them back over the wire to the development server.
Perform LOAD DATA INFILE on development with the Parent tables first, then the Child tables whose foreign key constraints depend on them. The auto_incs should remain happy in development.
The MySQL manual page for LOAD DATA is here: http://dev.mysql.com/doc/refman/5.1/en/load-data.html
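For completeness, the matching import on the development side would look something like this (table and file names follow the export above):
LOAD DATA INFILE '/full/path/to/fileParentXXX.txt'
INTO TABLE ParentXXX
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';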

Insert Magento details to external database

I need to insert data from the Magento place-order form into an external database. Please give details about how I can achieve it.
Currently, when we click on place order, the data is inserted into the table sales_flat_order; I need to save it into an external DB.
As I am new to Magento, please don't mind if this is a simple thing.
When you say external DB, does that mean another database on the same box? Or a remote database on another box? Will the table remain the same, or are all the fields and additional information different?
Approaches:
API: http://www.magentocommerce.com/api/rest/Resources/Orders/sales_orders.html
If it's a remote box, you can use the REST API to pull the orders (once the API is active, the role is created, and the user is assigned and connected) and push the returned information to the new box programmatically.
Dataflow:
You can set up a dataflow for exporting the order information, pull in the CSV/XML, parse it, and upload the needed parts to the new DB.
Dataflow Extension:
Same as above, but instead of doing all the programming yourself, you can install an extension like http://www.wyomind.com/orders-export-tool-magento.html and have it FTP the information to a remote server so you can check/parse the file into the new DB as needed.
Can you reveal a bit more about the environment, the amount of data/orders, etc?
Thanks.
--- Update:
Per your response, it sounds less of a Magento question here and more of a MySQL question.
In this case, you can do something as simple as "replicating" or copying over the table data to your other local db.
If you're not working with too many orders, the following may meet your needs for a one-time deal. If you're dealing with a substantial number of orders, the approach may need to be expanded upon.
## Direct Copy:
# using stage_magento to represent your other DB
# assuming this is done with a user that has correct permissions on both databases
# create the table
CREATE TABLE stage_magento.sales_flat_order LIKE production_magento.sales_flat_order;
# copy the data
INSERT INTO stage_magento.sales_flat_order SELECT * FROM production_magento.sales_flat_order;
#####################
## Option 2, export to file system, import to new db
##Indirect, Export from DB/Table
SELECT * FROM production_magento.sales_flat_order
INTO OUTFILE '/tmp/sales_flat_order.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n';
##Import into New DB/Table
LOAD DATA INFILE '/tmp/sales_flat_order.csv'
INTO TABLE stage_magento.sales_flat_order
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n';

Importing Data MySQL

I have a huge dataset. What is the fastest way to upload the data into a MySQL database from PHP, and is there any way to verify whether all the data was imported?
Any suggestions or hints will be greatly appreciated. Thanks.
If the data set is simply huge (can be transferred within hours), it is not worth the effort of finding an efficient way - any script should be able to do the job. I am assuming you are reading from some non-DB format (e.g. plain text)? In that case, simply read and insert.
If you require careful processing before you insert the rows, you might want to consider creating real objects in memory and their sub-objects first and then mapping them to rows and tables - Object-Relational data source patterns will be valuable here. This will, however, be much slower, and I would not recommend it unless it's absolutely necessary, especially if you are doing it just once.
For very fast access, some people wrote a direct binary blob of objects on the disk and then read it directly into an array, but that is available in languages like C/C++; I am not sure if/how it can be used in a scripted language. Again, this is good for READING the data back into memory, not transferring to DB.
The easiest way to verify that the data has been transferred is to compare the COUNT(*) in the DB with the number of items in your file. A more advanced way is to compute a hash (e.g. SHA-1) of the primary key sets.
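As a quick illustration of that COUNT(*) check (file, database, and table names are placeholders):
# compare the number of lines in the source file with the imported row count
# (subtract 1 from FILE_ROWS if the file has a header line)
FILE_ROWS=$(wc -l < /tmp/data.csv)
DB_ROWS=$(mysql -N -e "SELECT COUNT(*) FROM mydb.mytable")
[ "$FILE_ROWS" -eq "$DB_ROWS" ] && echo "import complete" || echo "row count mismatch"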
I used LOAD DATA, the standard MySQL loader tool. It works fine and is fast, and there are many options.
You can use:
a data file named export_du_histo_complet.txt with multiple lines like this:
"xxxxxxx.corp.xxxxxx.com";"GXTGENCDE";"GXGCDE001";"M_MAG105";"TERMINE";"2013-06-27";"14:08:00";"14:08:00";"00:00:01";"795691"
and an SQL file (because I use a Unix shell which calls the SQL file) with:
LOAD DATA INFILE '/home2/soron/EXPORT_HISTO/export_du_histo_complet.txt'
INTO TABLE du_histo
FIELDS
TERMINATED BY ';'
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES
STARTING BY ' '
TERMINATED BY '\n'
(server, sess, uproc, ug, etat, date_exploitation, debut_uproc, fin_uproc, duree, num_uproc)
I specified the table fields to import (my table has more columns).
Note that there is a MySQL limitation here: you can't use a variable to specify the INFILE path, which is why the file name has to be written into the SQL itself.
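A minimal sketch of that shell workaround, substituting the file name into the statement before mysql ever sees it (the wrapper itself and the database name are assumptions, not the original script):
#!/usr/bin/env bash
# LOAD DATA INFILE cannot take its path from a SQL variable, so build the
# statement in the shell instead; the heredoc reduces the quadruple
# backslashes to the doubled backslashes SQL expects.
INFILE="/home2/soron/EXPORT_HISTO/export_du_histo_complet.txt"
mysql my_database <<SQL
LOAD DATA INFILE '$INFILE'
INTO TABLE du_histo
FIELDS TERMINATED BY ';' ENCLOSED BY '"' ESCAPED BY '\\\\'
LINES STARTING BY ' ' TERMINATED BY '\n'
(server, sess, uproc, ug, etat, date_exploitation, debut_uproc, fin_uproc, duree, num_uproc);
SQL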

How can I transfer data between 2 MySQL databases?

I want to do that using code and not a tool like "MySQL Migration Toolkit". The easiest way I know is to open a connection (using MySQL connectors) to DB1 and read its data, then open a connection to DB2 and write the data to it. Is there a better/easier way?
First I'm going to assume you aren't in a position to just copy the data/ directory, because if you are then using your existing snapshot/backup/restore will probably suffice (and test your backup/restore procedures into the bargain).
In which case, if the two tables have the same structure generally the quickest, and ironically the easiest approach will be to use SELECT...INTO OUTFILE... on one end, and LOAD DATA INFILE... on the other.
See http://dev.mysql.com/doc/refman/5.1/en/load-data.html and .../select.html for definitive details.
For trivial tables the following will work:
SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n' ;
LOAD DATA INFILE '/tmp/mytable.csv' INTO TABLE mytable
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n' ;
We have also used FIFOs to great effect to avoid the overhead of actually writing to disk, or, if we do need to write to disk for some reason, to pipe it through gzip.
i.e.
mkfifo /tmp/myfifo
gzip -c /tmp/myfifo > /tmp/mytable.csv.gz &
... SELECT ... INTO OUTFILE '/tmp/myfifo' .....
wait
gunzip -c /tmp/mytable.csv.gz > /tmp/myfifo &
... LOAD DATA INFILE '/tmp/myfifo' .....
wait
Basically, once you direct the table data to a FIFO you can compress it, munge it, or tunnel it across a network to your heart's content.
The FEDERATED storage engine? Not the fastest one in the bunch, but for one time, incidental, or small amounts of data it'll do. That is assuming you're talking about 2 SERVERS. With 2 databases on one and the same server it'll simply be:
INSERT INTO databasename1.tablename SELECT * FROM databasename2.tablename;
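For the two-server case, a FEDERATED table on the destination can point at the source table, after which the same INSERT ... SELECT works across servers (the host, credentials, and column list below are placeholders; the local definition must match the remote table):
-- on the destination server
CREATE TABLE federated_orders (
    id INT NOT NULL,
    created_at DATETIME,
    total DECIMAL(12,4)
) ENGINE=FEDERATED
CONNECTION='mysql://user:password@remote_host:3306/databasename2/tablename';
-- rows read from this table are fetched from the remote server on the fly
INSERT INTO databasename1.tablename SELECT * FROM federated_orders;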
You can use mysqldump and mysql (the command-line client). These are command-line tools, and in the question you write that you don't want to use them, but still, using them (even by running them from your code) is the easiest way; mysqldump solves a lot of problems.
You can make selects from one database and insert into the other, which is pretty easy. But if you also need to transfer the database schema (create tables etc.), it gets a little bit more complicated, which is the reason I recommend mysqldump. A lot of PHP MySQL admin tools also do this, so you can use them or look at their code.
Or maybe you can use MySQL replication.
from http://dev.mysql.com/doc/refman/5.0/en/rename-table.html:
As long as two databases are on the same file system, you can use RENAME TABLE to move a table from one database to another:
RENAME TABLE current_db.tbl_name TO other_db.tbl_name;
If you enabled binary logging on your current server (and have all the binlogs), you can set up replication for the second server, as sketched below.
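A minimal sketch of that replication setup on the second server (the host, credentials, and binlog coordinates are placeholders; the syntax is the MySQL 5.x CHANGE MASTER TO statement):
-- on the second server: connect to the first server's binlog stream
CHANGE MASTER TO
    MASTER_HOST = '192.0.2.10',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;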