Move database to another server - mysql

I am using mysqldump to move my database to another server, but the database has tables with millions of rows and the mysql restore takes too long (4 hours).
Is there any way I can do this faster?

Here's the way I have done this in the past using MySQL replication:
Dump SQL on the source machine with binary logging turned on (use the --master-data option). This gives you the data as of that point in time and lets you import it on the new server while new data is still being written on the old server.
After the import (4 hours, you said?), you can START SLAVE on the new server; it will replay the binary logs, catch up to the old server, and stay in sync until the actual switchover happens.
How to setup mysql replication
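For reference, a minimal sketch of that flow (host names, the repl user, and the binlog file/position are placeholders; the real coordinates are in the CHANGE MASTER TO comment that --master-data writes into the dump):
On the old server:
mysqldump -u root -p --single-transaction --master-data=2 --all-databases > dump.sql
On the new server, after importing dump.sql:
CHANGE MASTER TO MASTER_HOST='old-server', MASTER_USER='repl', MASTER_PASSWORD='repl_pass', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456;
START SLAVE;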

Yes, you can shut down mysqld on the source server; once it is down, copy the entire datadir to the new server and start both servers once the copy is done.
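A rough sketch of that, assuming the default datadir /var/lib/mysql, the same MySQL version on both machines, and "newserver" as a placeholder host:
On the source server:
/etc/init.d/mysqld stop
rsync -a /var/lib/mysql/ root@newserver:/var/lib/mysql/
/etc/init.d/mysqld start
On the new server (before starting mysqld):
chown -R mysql:mysql /var/lib/mysql
/etc/init.d/mysqld start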

Related

MySQL Database migration to new server

We have a common database on MySQL 5.6 and many services are using it. One of the services wants to migrate some tables from the common database to a new MySQL 5.7 server.
The old MySQL server is still in continuous use by the other services. The total data size is around 400GB.
Is there any recommended procedure?
Two Approaches
Approach 1:
Create a slave running MySQL 5.7 and replicate only the common database using the replicate-do-db option (a config sketch follows the commands below).
Once no new writes are hitting the master and the slave has no lag, use the slave as the new server by stopping replication and disconnecting it from the master.
On slave:
STOP SLAVE
To use RESET SLAVE, the slave replication threads must be stopped
$> RESET SLAVE
On Master:
Remove the replication user
FLUSH LOGS
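A sketch of the relevant settings for Approach 1 (server-id, host names, credentials, and binlog coordinates are placeholders):
In my.cnf on the 5.7 slave:
[mysqld]
server-id = 2
replicate-do-db = common_db
On the slave, pointing it at the 5.6 master:
CHANGE MASTER TO MASTER_HOST='old-master', MASTER_USER='repl', MASTER_PASSWORD='repl_pass', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;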
Approach 2:
Try the backup method
Since the DB size is 400 GB, mysqldump won't be a practical option.
Try partial backup method using xtrabackup:
xtrabackup --backup --tables-file=/tmp/tables.txt
Once the backup has completed, verify it and restore it into the new server running version 5.7.
Reference:
https://www.percona.com/doc/percona-xtrabackup/2.4/xtrabackup_bin/xbk_option_reference.html#cmdoption-xtrabackup-tables-file
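A rough sketch of that partial-backup flow (database and table names are placeholder examples; tables.txt takes one db_name.table_name per line):
cat > /tmp/tables.txt <<EOF
common_db.orders
common_db.customers
EOF
xtrabackup --backup --tables-file=/tmp/tables.txt --target-dir=/backups/partial
xtrabackup --prepare --export --target-dir=/backups/partial
Then import the resulting .ibd files into matching empty tables on the 5.7 server (ALTER TABLE ... DISCARD TABLESPACE, copy the files in, ALTER TABLE ... IMPORT TABLESPACE).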
Note: on both approaches, make sure to check the table/MySQL version compatibility [5.6 vs 5.7].

How to get mysql to disk

I am trying to copy MySQL's data to disk without stopping the server, so I can copy that directory over to a new server. Apparently FLUSH and mysqldump don't work for this, since mysqldump writes to a new file and FLUSH only updates the log files.
Is there any way to dump what is in the MySQL database to the filesystem?
From Restoring MySQL database from physical files:
You should be able to restore by copying them in your database folder (In linux, the default location is /var/lib/mysql/)
If the server is running, the database is held (partly) in memory. It feels deficient if MySQL is unable to flush that memory to disk. Please let me know if this is possible while the MySQL server is running.
EDIT: Any type of snapshot, AWS, DigitalOcean, Azure, is worthless without this figured out.

What is the best practice to export and import mysql database?

I have been working on a project that involves MySQL and PHP. While creating a package for testing, I exported the MySQL database as an SQL file from MySQL Workbench and imported it into a Linux machine's MySQL server with
mysql> source mydatabase.sql;
Is this the correct way to export and import a MySQL database? The file contains the schema creation, data inserts, and index creation scripts, and importing it takes a long time. My immediate manager suggests exporting without indexes, importing the database, and then executing the index creation scripts separately. Is that the correct way? Do indexes take a long time during import?
Thanks in advance!
Percona Server comes with a modified mysqldump tool so you can choose the --innodb-optimize-keys option. This exports the tables, data, and index definitions like stock mysqldump, but during import it defers creation of secondary indexes until after the data has been loaded into the table. This takes advantage of InnoDB's fast index creation feature.
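For example (assuming your mysqldump build has the flag; db_user and some_database are placeholders):
mysqldump --innodb-optimize-keys -u db_user -p some_database > some_database.sql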
Another option is to use a physical backup tool, like Percona XtraBackup (as user @Up_One mentioned). This means that the full data files are backed up, including the indexes. On restore, nothing needs to be rebuilt, because the indexes are already populated. There are pros and cons to using physical backup tools too, but restore time is a big advantage.
Export depends on the MySQL version/vendor you are running!
MySQL (older versions) - uses mysqldump
MySQL (Percona) - uses XtraBackup
MySQL (EE - 5.6) - uses MySQL Enterprise Backup, which is more complex and much faster.
Fastest way is to avoid mysqldump.
I like Percona more than the others!
Another way, though more advanced I think, is to create the DB on the destination server and copy the database files directly, assuming the user has access to the destination server's mysql directory:
As root user:
[root@server-A]# /etc/init.d/mysqld stop
[root@server-A]# cd /var/lib/mysql/[databasename]
[root@server-A]# scp * user@otherhost:/var/lib/mysql/[databasename]
[root@server-A]# /etc/init.d/mysqld start
Important things here: stop mysqld on both servers before copying the DB files, and make sure the file ownership and permissions are correct on the destination before starting mysqld there.
[root@server-B]# chown mysql:mysql /var/lib/mysql/[databasename]/*
[root@server-B]# chmod 660 /var/lib/mysql/[databasename]/*
[root@server-B]# /etc/init.d/mysqld start
With time being your priority here, the use of compression will depend on whether the time lost waiting for compression/decompression (with something like gzip) will be greater than the time wasted transmitting uncompressed data; that is, the speed of your connection.
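A rough way to check which side of that trade-off you are on is to time both variants on the same dump file ("newserver" is a placeholder host):
time scp dump.sql someuser@newserver:/tmp/       # plain copy
time scp -C dump.sql someuser@newserver:/tmp/    # same copy with in-flight compression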
I prefer to use mysqldump! (Check this: http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html)

RHEL5 rSync Mysql server

Objective: to be able to synchronize two Linux servers in real time.
My concern is with using rsync to mirror the MySQL server: the only thing it wasn't able to synchronize is the entries (i.e., data inserted into the database with INSERT queries). How can I solve this?
Things I've done:
copied the SSH keys between the two servers so that a password isn't asked for on each transfer
I used
rsync -avc /var/lib/mysql/ root@10.1.99.XXX:/var/lib/mysql/
to sync the databases/tables, but it wasn't able to sync the entries.
Isubaki,
It's not quite as simple as just using rsync, as mysql may have the files open at the time you are pushing them across. Linux will do the file copy ok, but using this technique, the table is locked in memory until the database is restarted.
I do have a script that will do the sync part, but it does require a database restart, which may not be what you want (you mention realtime sync)
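Such a script is essentially a stop/copy/start wrapper around the same rsync; a hypothetical sketch (paths and the target host are taken from the question):
#!/bin/bash
# stop MySQL on both boxes so the files are not being written to during the copy
/etc/init.d/mysqld stop
ssh root@10.1.99.XXX '/etc/init.d/mysqld stop'
rsync -avc /var/lib/mysql/ root@10.1.99.XXX:/var/lib/mysql/
ssh root@10.1.99.XXX '/etc/init.d/mysqld start'
/etc/init.d/mysqld start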

Migrating a MySQL server from one box to another

The databases are prohibitively large (> 400MB), so dump > SCP > source is proving to be hours and hours of work.
Is there an easier way? Can I connect to the DB directly and import from the new server?
You can simply copy the whole /data folder.
Have a look at High Performance MySQL - transferring large files
You can use ssh to pipe your data directly over the Internet. First set up SSH keys for password-less login. Next, try something like this:
$ mysqldump -u db_user -p some_database | gzip | ssh someuser@newserver 'gzip -d | mysql -u db_user --password=db_pass some_database'
Notes:
The basic idea is that you are just dumping standard output straight into a command on the other side, which SSH is perfect for.
If you don't need encryption then you can use netcat but it's probably not worth it
The SQL text data goes over the wire compressed!
Obviously, change db_user to your user and some_database to your database. someuser is the (Linux) system user, not the MySQL user.
You will also have to use --password the long way because having mysql prompt you will be a lot of headache.
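Setting up the password-less login mentioned above is just the usual SSH key exchange (someuser/newserver as in the example):
ssh-keygen -t rsa                 # accept the defaults, or reuse an existing key
ssh-copy-id someuser@newserver    # after this, ssh someuser@newserver needs no password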
You could set up MySQL slave replication and let MySQL copy the data, then make the slave the new master.
400M is really not a large database; transferring it to another machine will only take a few minutes over a 100Mbit network. If you do not have a 100Mbit network between your machines, you are in big trouble!
If they are running the exact same version of MySQL and have identical (or similar ENOUGH) my.cnf and you just want a copy of the entire data, it is safe to copy the server's entire data directory across (while both instances are stopped, obviously). You'll need to delete the data directory of the target machine first of course, but you probably don't care about that.
Backup/restore is usually slowed down by the restoration having to rebuild the table structure, rather than the file copy. By copying the data files directly, you avoid this (subject to the limitations stated above).
If you are migrating a server:
The dump files can be very large, so it is better to compress them before sending, or to use the -C flag of scp. Our methodology for transferring files is to create a full dump in which the incremental logs are flushed (use --master-data=2 --flush-logs, and check that you don't mess up any slave hosts if you have them). Then copy the dump over and play it. Afterwards, flush the logs again (mysqladmin flush-logs), take the recent incremental log (which shouldn't be very large) and play only that. Keep doing this until the last incremental log is very small, so that you can stop the database on the original machine, copy the last incremental log and play it - it should take only a few minutes.
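A hypothetical sketch of that cycle (user names and binlog file names are placeholders; the current binlog name comes from SHOW MASTER STATUS):
On the old server:
mysqldump -u root -p --master-data=2 --flush-logs --all-databases > full.sql
Copy full.sql over and play it on the new server (mysql -u root -p < full.sql).
Then repeat until the increments are tiny:
mysqladmin -u root -p flush-logs
scp /var/lib/mysql/mysql-bin.000124 someuser@newserver:/tmp/
On the new server: mysqlbinlog /tmp/mysql-bin.000124 | mysql -u root -p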
If you just want to copy data from one server to another:
mysqldump -C --host=oldhost --user=xxx --databases yyy -p | mysql -C --host=newhost --user=aaa -p
You will need to set the db users correctly and provide access to external hosts.
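Granting that access on the new server might look like this (the user, host pattern, password, and database names follow the example above and are placeholders):
CREATE USER 'aaa'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON yyy.* TO 'aaa'@'%';
FLUSH PRIVILEGES;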
Try importing the dump on the new server using the mysql console, not an auxiliary tool.
I have no experience with doing this with MySQL, but to me it seems the bottleneck is transferring the actual data.
400 MB isn't that much, but if dump -> SCP is slow, I don't think connecting to the DB server from the remote box would be any faster.
I'd suggest dumping, compressing, then copying over the network (or burning to disk and manually transferring the data).
Compressing such a dump will most likely give you quite a good compression rate since, most likely, there's a lot of repetitive data.
If you are only copying all the databases of the server, copy the entire /data directory.
If you are just copying one or more databases and adding them to an existing mysql server:
create the empty database in the new server, set up the permissions for users etc.
copy the folder for the database in /data/databasename to the new server /data/databasename
I like to use BigDump: Staggered Mysql Dump Importer after Exporting my database from the old server.
http://www.ozerov.de/bigdump/
One thing to note though: if you don't set the export options (namely the maximum length of created queries) appropriately for the load your new server can handle, it'll just fail and you will have to try again with different parameters. Personally, I set mine to about 25,000, but that's just me. Test it out a bit and you'll get the hang of it.