Moving 50GB of data from one MySQL server to another - mysql

I need to move some 50GB of data, spread across some 30 schemas, from one server to another.
I know about the process of exporting a schema to SQL, then sending the file via FTP (for example) to the new server and importing it there.
I also know I can connect directly through MySQL Workbench or on the command line and do it that way. But for 50GB and 30 schemas this would still take days.
Is there any way to make the process shorter?

The best way to do it is by using gzip/gunzip.
You can export your data using the command below:
mysqldump -u [uname] -p[pass] [dbname] | gzip -9 > [backupfile.sql.gz]
Then FTP the file wherever you need it.
To restore the compressed backup file you can do the following:
gunzip < [backupfile.sql.gz] | mysql -u [uname] -p[pass] [dbname]
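With some 30 schemas, the same dump/compress/restore cycle can be scripted per schema; a minimal sketch, assuming the schema names are listed one per line in a file called schemas.txt (a hypothetical name) and that credentials come from ~/.my.cnf or are added as -u/-p options:
# on the old server: one compressed dump per schema
while read db; do
  mysqldump "$db" | gzip -9 > "$db.sql.gz"
done < schemas.txt
# on the new server, after transferring the files:
for f in *.sql.gz; do
  db="${f%.sql.gz}"
  mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
  gunzip < "$f" | mysql "$db"
done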

Related

Importing a MySQL Database on Localhost

So I wanted to format my system, and I had a lot of work on my localhost that involves databases. I followed the normal way of backing up a database by exporting it to an SQL file, but I think I made a mess by backing up everything into one SQL file (I mean the whole localhost was exported to just one SQL file).
The problem now is: when I try to import the backed-up file (localhost.sql), I get an error like
tables already exist.
information_schema
performance_schema
and every other table that comes with XAMPP, which has been preventing me from importing the database.
These tables are the phpMyAdmin tables that came with XAMPP. I have been trying to get past this for days.
My question now is: can I extract different databases from the same combined SQL file?
To import a database you can do the following:
mysql -u username -p database_name < /path/to/database.sql
From within mysql:
mysql> use database_name;
mysql> source database.sql;
The error is quite self-explanatory. The databases information_schema and performance_schema already exist in the MySQL server instance that you are trying to import into.
Both of these databases are default in MySQL, so it is strange that you would be trying to import these into another MySQL installation. The basic syntax to create a .sql file to import from the command line is:
$ mysqldump -u [username] -p [database name] > sqlfile.sql
Or for multiple databases:
$ mysqldump -u [username] -p --databases db1 db2 db3 > sqlfile.sql
Then to import them into another MySQL installation:
$ mysql -u [username] -p [database name] < sqlfile.sql
If the database already exists in MySQL then you need to do:
$ mysqlimport -u [username] -p [database name] sqlfile.sql
This seems to be the command you want to use; however, I have never replaced the information_schema or performance_schema databases, so I'm unsure whether this will cripple your MySQL installation or not.
So an example would be:
$ mysqldump -uDonglecow -p myDatabase > myDatabase.sql
$ mysql -uDonglecow -p myDatabase < myDatabase.sql
Remember not to provide a password on the command line, as this will be visible in plain text in the command history.
The point the previous responders seem to be missing is that the dump file localhost.sql, when fed into mysql using
% mysql -u [username] -p [databasename] < localhost.sql
generates multiple databases, so specifying a single databasename on the command line is illogical.
I had this problem and my solution was to not specify [databasename] on the command line and instead run:
% mysql -u [username] -p < localhost.sql
which works.
Actually, it doesn't work right away, because previous attempts had already created some structure inside MySQL, and those bits in localhost.sql make mysql complain because they already exist from the first time around, so they can't be created a second time.
The solution to THAT is to manually edit localhost.sql with modifications like:
INSERT IGNORE for INSERT (so it doesn't re-insert the same rows, nor complain),
CREATE DATABASE IF NOT EXISTS for CREATE DATABASE,
CREATE TABLE IF NOT EXISTS for CREATE TABLE,
and deleting ALTER TABLE commands entirely if they generate errors, because by then they have already been executed (and perhaps some INSERTs and CREATEs too, for the same reason). You can check the tables with DESCRIBE and SELECT statements to make sure the alterations have taken hold, for confidence.
My own localhost.sql file was 300M, which my favorite editor, emacs, complained about, so I had to pull out pieces using
% head -n 20000 localhost.sql | tail -n 10000 > 2nd_10k_lines.sql
and go through it 10k lines at a time. It wasn't too hard, because Drupal was responsible for the vast majority of the junk in there, and I didn't want to keep any of that, so I could carve away enormous chunks easily.
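Edits like these can also be scripted rather than done by hand in an editor; a minimal sketch with sed, assuming each statement's opening keywords sit at the start of a line and the CREATE statements do not already carry IF NOT EXISTS (newer dumps emit CREATE DATABASE /*!32312 IF NOT EXISTS*/, in which case skip that substitution):
# rewrite the dump in place so re-running it does not collide with existing objects
sed -i \
  -e 's/^INSERT INTO /INSERT IGNORE INTO /' \
  -e 's/^CREATE DATABASE /CREATE DATABASE IF NOT EXISTS /' \
  -e 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' \
  localhost.sql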
The best way to import a database on localhost takes five simple steps:
Zip the SQL file first to compress the database size.
Go to the terminal.
Create an empty database.
Run the unzip-and-import command: unzip -p /pathoffile/database_file.zip | mysql -u username -p database_name
Enter the password.

How to migrate a large database to a new server

I need to migrate my database from my old server to my new server. I have a big problem transferring the database because it is large, about 5GB. I tried the cPanel transfer tool, but it didn't work for me. I need a more efficient way to transfer the data.
Can anyone guide me with the full transfer details? How do I transfer using import and export, or do I need to use another method?
The MySQL table type is MyISAM and the size is 5GB.
You can try the command line if you have SSH access to both servers, as in the commands below; if not, you can try using the Navicat application to sync the databases.
SSH commands
Take a mysqldump of the database:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Create a tarball of the SQL dump file:
tar -zcvf db.tar.gz db.sql
Now upload the tar.gz file to the other server using the scp command:
scp -Cp db.tar.gz {username}@{server}:{path}
Now log in to the other server using SSH.
Untar the file:
tar -zxvf db.tar.gz
Import into the database:
mysql -u{username} -p {database} < db.sql
Please double-check the syntax; it will work, but consider this as direction only.
Thanks.
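If both servers can reach each other over SSH, the intermediate files can be avoided entirely by streaming the dump; a rough sketch using the placeholders above, assuming the remote mysql client picks up its credentials from a ~/.my.cnf (otherwise add -u/-p inside the quotes):
# stream the dump straight from the old server into the new one, no intermediate files
mysqldump -u{username} -p {database} | gzip -c | ssh {username}@{server} "gunzip -c | mysql {database}"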
For large databases I would suggest using mysqldump if you have SSH access to the server.
From the manual:
Use mysqldump --help to see what options are available.
The easiest (although not the fastest) way to move a database between two machines is to run the following commands on the machine on which the database is located:
shell> mysqladmin -h 'other_hostname' create db_name
shell> mysqldump db_name | mysql -h 'other_hostname' db_name
If you want to copy a database from a remote machine over a slow network, you can use these commands:
shell> mysqladmin create db_name
shell> mysqldump -h 'other_hostname' --compress db_name | mysql db_name
You can also store the dump in a file, transfer the file to the target machine, and then load the file into the database there. For example, you can dump a database to a compressed file on the source machine like this:
shell> mysqldump --quick db_name | gzip > db_name.gz
Transfer the file containing the database contents to the target machine and run these commands there:
shell> mysqladmin create db_name
shell> gunzip < db_name.gz | mysql db_name
You can also use mysqldump and mysqlimport to transfer the database. For large tables, this is much faster than simply using mysqldump. In the following commands, DUMPDIR represents the full path name of the directory you use to store the output from mysqldump.
First, create the directory for the output files and dump the database:
shell> mkdir DUMPDIR
shell> mysqldump --tab=DUMPDIR db_name
Then transfer the files in the DUMPDIR directory to some corresponding directory on the target machine and load the files into MySQL there:
shell> mysqladmin create db_name # create database
shell> cat DUMPDIR/*.sql | mysql db_name # create tables in database
shell> mysqlimport db_name DUMPDIR/*.txt # load data into tables
Do not forget to copy the mysql database because that is where the grant tables are stored. You might have to run commands as the MySQL root user on the new machine until you have the mysql database in place.
After you import the mysql database on the new machine, execute mysqladmin flush-privileges so that the server reloads the grant table information.
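Copying the grant tables mentioned above follows the same dump-and-load pattern; a minimal sketch (run with sufficient privileges, and be careful importing the mysql schema between very different server versions):
shell> mysqldump mysql > mysql_grants.sql    # on the old server: dump the grant tables
shell> mysql mysql < mysql_grants.sql        # on the new server: load them
shell> mysqladmin flush-privileges           # reload the grant table information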

Export a large MySQL table as multiple smaller files

I have a very large MySQL table on my local dev server: over 8 million rows of data. I loaded the table successfully using LOAD DATA INFILE.
I now wish to export this data and import it onto a remote host.
I tried LOAD DATA LOCAL INFILE to the remote host. However, after around 15 minutes the connection to the remote host fails. I think that the only solution is for me to export the data into a number of smaller files.
The tools at my disposal are PhpMyAdmin, HeidiSQL and MySQL Workbench.
I know how to export as a single file, but not multiple files. How can I do this?
I just did an import/export of a (partitioned) table with 50 million records; it needed just 2 minutes to export from a reasonably fast machine and 15 minutes to import on my slower desktop. There was no need to split the file.
mysqldump is your friend, and since you have a lot of data it's better to compress it:
#host1:~ $ mysqldump -u <username> -p <database> <table> | gzip > output.sql.gz
#host1:~ $ scp output.sql.gz host2:~/
#host1:~ $ rm output.sql.gz
#host1:~ $ ssh host2
#host2:~ $ gunzip < output.sql.gz | mysql -u <username> -p <database>
#host2:~ $ rm output.sql.gz
Take a look at mysqldump
Your lines should be (from terminal):
Export to backupfile.sql from db_name in your MySQL:
mysqldump -u user -p db_name > backupfile.sql
Import from backupfile to db_name in your MySQL:
mysql -u user -p db_name < backupfile.sql
You have two options in order to split the information:
Split the output text file into smaller files (as many as you need; there are many tools to do this, e.g. split); a sketch follows this list.
Export one table each time using the option to add a table name after the db_name, like so:
mysqldump -u user -p db_name table_name > backupfile_table_name.sql
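For the first option, a minimal sketch using split, assuming the dump is a single file named backupfile.sql; splitting on line boundaries keeps the pieces easy to reassemble on the other side:
# split into pieces of 500,000 lines each: backuppart_aa, backuppart_ab, ...
split -l 500000 backupfile.sql backuppart_
# after transferring the pieces, reassemble them in order on the target and import
cat backuppart_* | mysql -u user -p db_name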
Compressing the files (they are text files) is very efficient and can shrink them to about 20%-30% of their original size.
Copying the files to remote servers should be done with scp (secure copy) and interaction should take place with ssh (usually).
Good luck.
I found that the advanced options in phpMyAdmin allow me to select how many rows to export, plus the start point. This allows me to create as many dump files as required to get the table onto the remote host.
I had to adjust my php.ini settings, plus the phpMyAdmin config 'ExecTimeLimit' setting, as generating the dump files takes some time (500,000 rows in each).
I use HeidiSQL to do the imports.
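A command-line equivalent of that row-range export (not what the answer above used, just a commonly cited mysqldump trick) abuses the --where option to page through the table; the table and column names here are hypothetical:
# first 500,000 rows (schema included), then the next 500,000 (data only), and so on
mysqldump -u user -p db_name big_table --where="1 ORDER BY id LIMIT 500000 OFFSET 0" > big_table_part1.sql
mysqldump -u user -p db_name big_table --no-create-info --where="1 ORDER BY id LIMIT 500000 OFFSET 500000" > big_table_part2.sql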
As an example of the mysqldump approach for a single table
mysqldump -u root -ppassword yourdb yourtable > table_name.sql
Importing is then as simple as
mysql -u username -ppassword yourotherdb < table_name.sql
Use mysqldump to dump the table into a file.
Then use tar with the -z option to gzip the file.
Transfer it to your remote server (with ftp, sftp or another file transfer utility).
Then untar the file on the remote server.
Use mysql to import the file.
There is no reason to split the original file or to export it as multiple files.
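Spelled out, those steps look roughly like this (placeholder names; the transfer is shown with scp, but sftp or ftp work the same way):
mysqldump -u user -p db_name big_table > big_table.sql
tar -zcvf big_table.tar.gz big_table.sql
scp big_table.tar.gz user@remote_host:~/
# then, on the remote server:
tar -zxvf big_table.tar.gz
mysql -u user -p db_name < big_table.sql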
If you are not comfortable with using the mysqldump command line tool, here are two GUI tools that can help you with that problem, although you have to be able to upload them to the server via FTP!
Adminer is a slim and very efficient DB manager tool that is at least as powerful as phpMyAdmin, and it consists of only ONE SINGLE FILE that has to be uploaded to the server, which makes it extremely easy to install. It works way better with large tables / DBs than PMA does.
MySQLDumper is a tool developed especially to export / import large tables / DBs, so it will have no problem with the situation you describe. The only downside is that it is a bit more tedious to install, as there are more files and folders (~350 files in ~1.5MB), but it shouldn't be a problem to upload it via FTP either, and it will definitely get the job done :)
So my advice would be to first try Adminer and if that one also fails go the MySQLDumper route.
How do I split a large MySQL backup file into multiple files?
You can use mysql_export_explode
https://github.com/barinascode/mysql-export-explode
<?php
#Including the class
include 'mysql_export_explode.php';
$export = new mysql_export_explode;
$export->db = 'dataBaseName'; # -- Set your database name
$export->connect('host','user','password'); # -- Connecting to database
$export->rows = array('Id','firstName','Telephone','Address'); # -- Set which fields you want to export
$export->exportTable('myTableName',15); # -- Table name and the number of chunks to split the table into
?>
At the end, the SQL files are created in the directory where the script was executed, in the following format:
---------------------------------------
myTableName_0.sql
myTableName_1.sql
myTableName_2.sql
...

mysqldump compression

I am trying to understand how mysqldump works:
if I execute mysqldump on my PC and connect to a remote server:
mysqldump -u mark -h 34.32.23.23 -pxxx --quick | gzip > dump.sql.gz
will the server compress it and send it over to me as gzip or will my computer receive all the data first and then compress it?
Because I have a very large remote db to export, and I would like to know the fastest way to do it over a network!
You should make use of ssh + scp,
because the dump on localhost is faster,
and you only need to scp over the gzip file (less network overhead).
Likely you can do this:
ssh $username@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip > /tmp/dump.sql.gz"
scp $username@34.32.23.23:/tmp/dump.sql.gz .
(The directory /tmp is optional; change it to whatever directory you're comfortable with.)
Have you tried the --compress parameter?
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_compress
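With --compress, the client/server protocol traffic is compressed in transit, so the command from the question only changes by one flag; the local gzip still produces the compressed file on disk:
mysqldump -u mark -h 34.32.23.23 -pxxx --quick --compress | gzip > dump.sql.gz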
This is how I do it:
Do a partial export using SELECT INTO OUTFILE and create the files on the same server.
If your table contains 10 million rows, do a partial export of 1 million rows at a time, each time into a separate file.
Once the 1st file is ready you can compress and transfer it. In the meantime MySQL can continue exporting data to the next file.
On the other server you can start loading the file into the new database.
BTW, a lot of this can be scripted.
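A sketch of one such chunk, assuming a table named big_table with a numeric primary key id (both hypothetical) and that secure_file_priv allows writing to /tmp:
-- on the source server: export rows 1 through 1,000,000 to a tab-delimited file
SELECT * INTO OUTFILE '/tmp/big_table_part1.txt'
  FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
  FROM big_table
  WHERE id BETWEEN 1 AND 1000000;
-- on the target server, after transferring (and decompressing) the file:
LOAD DATA INFILE '/tmp/big_table_part1.txt' INTO TABLE big_table
  FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';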

Is there a way to copy all the data in a mysql database to another? (phpmyadmin)

I want to copy all the tables, fields, and data from my local MySQL server to my hosting site's MySQL. Is there a way to copy all the data? (It's only 26KB, very small.)
In phpMyAdmin, just export a dump (using the Export tab) and re-import it on the other server using the SQL tab.
Make sure you compare the results; I have had phpMyAdmin screw up the import more than once.
If you have shell access to both servers, a combination of
mysqldump -u username -p databasename > dump.sql
and a
mysql -u username -p databasename < dump.sql
on the target server is the much faster and more reliable alternative in my experience.
Have a look at
Copying MySQL Databases to Another Machine
Copy MySQL database from one server to another remote server
Please follow these steps:
Create the target database using MySQLAdmin or your preferred method. In this example, db2 is the target database, where the source database db1 will be copied.
Execute the following statement on a command line:
mysqldump -h [server] -u [user] -p[password] db1 | mysql -h [server] -u [user] -p[password] db2
Note: There is NO space between -p and [password]
I copied this from Copy/duplicate database without using mysqldump.
It works fine. Please ensure that you are not inside mysql while running this command.
If you have the same version of MySQL on both systems (or versions with a compatible data file format), you may just copy the data files directly. Usually the files are kept in /var/lib/mysql/ on Unix systems.
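A minimal sketch of that approach, assuming MyISAM tables, matching server versions, and that the source server can be stopped briefly so the files are consistent while they are copied (paths and service names vary by distribution):
# on the source server: stop MySQL so the data files are not being written to
sudo service mysql stop
# copy one schema's directory to the same location on the new server
rsync -av /var/lib/mysql/db_name/ user@newserver:/var/lib/mysql/db_name/
# on the new server: fix ownership and restart MySQL
sudo chown -R mysql:mysql /var/lib/mysql/db_name
sudo service mysql start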