The databases are prohibitively large (> 400MB), so dump > SCP > source is proving to be hours and hours of work.
Is there an easier way? Can I connect to the DB directly and import from the new server?
You can simply copy the whole /data folder.
Have a look at High Performance MySQL - transferring large files
You can use ssh to pipe your data directly over the Internet. First set up SSH keys for password-less login, then try something like this:
$ mysqldump -u db_user -p some_database | gzip | ssh someuser@newserver 'gzip -d | mysql -u db_user --password=db_pass some_database'
Notes:
The basic idea is that you are just dumping standard output straight into a command on the other side, which SSH is perfect for.
If you don't need encryption you could use netcat instead, but it's probably not worth it.
The SQL text data goes over the wire compressed!
Obviously, change db_user to your MySQL user and some_database to your database. someuser is the (Linux) system user, not the MySQL user.
You will also have to spell out --password on the receiving side, because having mysql prompt for it in the middle of a pipeline would be a headache.
You could set up MySQL slave replication and let MySQL copy the data, and then promote the slave to be the new master.
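A rough sketch of that approach, assuming binary logging is already enabled on the old server and using placeholder credentials and binlog coordinates (take the real ones from SHOW MASTER STATUS):
-- on the old server: create a replication user (placeholder name/password)
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_pass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
-- on the new server: point it at the old one and start replicating
CHANGE MASTER TO
    MASTER_HOST='oldhost',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_pass',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
-- once SHOW SLAVE STATUS reports it has caught up, stop writes on the old
-- server, STOP SLAVE, and repoint your application at the new server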
400M is really not a large database; transferring it to another machine will only take a few minutes over a 100Mbit network. If you do not have a 100Mbit network between your machines, you are in big trouble!
If they are running the exact same version of MySQL and have identical (or similar ENOUGH) my.cnf and you just want a copy of the entire data, it is safe to copy the server's entire data directory across (while both instances are stopped, obviously). You'll need to delete the data directory of the target machine first of course, but you probably don't care about that.
Backup/restore is usually slowed down by the restoration having to rebuild the table structure, rather than the file copy. By copying the data files directly, you avoid this (subject to the limitations stated above).
If you are migrating a server:
The dump files can be very large, so it is better to compress them before sending, or to use the -C flag of scp. Our methodology for transferring files is to create a full dump in which the incremental logs are flushed (use --master-data=2 --flush-logs, and check that you don't break any slave hosts if you have them). Then we copy the dump over and play it. Afterwards we flush the logs again (mysqladmin flush-logs), take the most recent incremental log (which shouldn't be very large) and play only that. Keep doing this until the latest incremental log is very small, at which point you can stop the database on the original machine, copy the last incremental log over and play it - it should take only a few minutes.
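A minimal sketch of that methodology, with placeholder hosts, credentials and binlog file names (the exact binlog name comes from SHOW MASTER STATUS or the dump header):
# 1) full dump; --master-data=2 records the binlog coordinates as a comment
#    and --flush-logs rotates to a fresh binary log at the moment of the dump
mysqldump -u dbuser --password=db_pass --all-databases --master-data=2 --flush-logs | gzip > full.sql.gz
# 2) copy it over and play it on the new host
scp full.sql.gz someuser@newhost:
ssh someuser@newhost 'gzip -dc full.sql.gz | mysql -u dbuser --password=db_pass'
# 3) repeat as needed: flush the logs again, then replay only the newest
#    (much smaller) binary log against the new host
mysqladmin -u dbuser --password=db_pass flush-logs
mysqlbinlog /var/lib/mysql/mysql-bin.000002 | ssh someuser@newhost 'mysql -u dbuser --password=db_pass'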
If you just want to copy data from one server to another:
mysqldump -C --host=oldhost --user=xxx --databases yyy -p | mysql -C --host=newhost --user=aaa -p
You will need to set the db users correctly and provide access to external hosts.
Try importing the dump on the new server using the mysql console, not an auxiliary tool.
I have no experience doing this with MySQL, but it seems to me the bottleneck is transferring the actual data?
400 MB isn't that much. But if dump -> SCP is slow, I don't think connecting to the DB server from the remote box would be any faster.
I'd suggest dumping, compressing, then copying over the network, or burning to disk and transferring the data manually.
Compressing such a dump will most likely give you quite a good compression ratio since, most likely, there's a lot of repetitive data.
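For example, a rough sketch with placeholder names (the dump is compressed before it crosses the wire and decompressed on the new server):
# dump and compress locally
mysqldump -u dbuser -p mydb | gzip > mydb.sql.gz
# copy it over (the file is already compressed, so plain scp is fine)
scp mydb.sql.gz someuser@newhost:/tmp/
# then, logged in on the new server:
gzip -dc /tmp/mydb.sql.gz | mysql -u dbuser -p mydb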
If you are only copying all the databases of the server, copy the entire /data directory.
If you are just copying one or more databases and adding them to an existing mysql server:
create the empty database in the new server, set up the permissions for users etc.
copy the folder for the database from /data/databasename on the old server to /data/databasename on the new server
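A rough sketch of that copy, assuming default paths and that both servers are stopped (or at least quiesced) while the files move; note that a raw per-database copy like this is only really safe for MyISAM tables, since InnoDB also keeps data outside the per-database folder:
# on the old server (placeholder paths and host)
scp -r /var/lib/mysql/databasename someuser@newhost:/var/lib/mysql/
# on the new server, before starting MySQL again
chown -R mysql:mysql /var/lib/mysql/databasename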
I like to use BigDump: Staggered MySQL Dump Importer after exporting my database from the old server.
http://www.ozerov.de/bigdump/
One thing to note though, if you don't set the export options (namely the maximum length of created queries) respective to the load your new server can handle, it'll just fail and you will have to try again with different parameters. Personally, I set mine to about 25,000, but that's just me. Test it out a bit and you'll get the hang of it.
Related
I have a very large .SQL file of 90 GB.
It was generated with a dump on a server:
mysqldump -u root -p siafi > /home/user_1/siafi.sql
I downloaded this .SQL file on a computer with Ubuntu 16.04 and MySQL Community Server (8.0.16). It has 8GB of RAM
So I did these steps in Terminal:
# Access
/usr/bin/mysql -u root -p
# I create a database with the same name to receive the .SQL information
CREATE DATABASE siafi;
# I establish the privileges. User reinaldo
GRANT ALL PRIVILEGES ON siafi.* TO 'reinaldo'@'localhost';
# Enable the changes
FLUSH PRIVILEGES;
# Then I open another terminal and type command for the created database to receive the data from the .SQL file
mysql --user=reinaldo --password="type_here" --database=siafi < /home/reinaldo/Documentos/Code/test/siafi.sql
I ran these same commands with other, smaller .SQL files of at most 2 GB, and it worked normally.
But this 90 GB file has been processing for over twelve hours without finishing, and I do not know whether it is working.
Please, is there any more efficient way to do this? Maybe splitting the .SQL file?
Break the file up into smaller chunks and process them separately.
You're probably hitting the logging high-water mark and mysql is trying to roll everything back, and that is a slow process.
Split the file into approx 1Gb chunks, breaking on whole lines. Perhaps using:
split -l 1000000 bigfile.sql part.
Then run them in order using your current command.
You'll have to experiment with split to get the size right, and split implementations/options vary between systems. On GNU split, split --number=l/100 (which splits into 100 pieces without breaking lines) may work for you.
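A sketch of the split-then-import loop, reusing the file and credentials from the question; note that split knows nothing about SQL, so check that the cut points don't land in the middle of a multi-line statement:
# split on whole lines into pieces named part.aa, part.ab, ...
split -l 1000000 /home/reinaldo/Documentos/Code/test/siafi.sql part.
# import the pieces in order (lexical order of the suffixes is the original order)
for f in part.*; do
    mysql --user=reinaldo --password="type_here" --database=siafi < "$f"
done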
2 things that might be helpful:
Use pv to see how much of the .sql file has already been read. This gives you a progress bar which at least tells you it's not stuck (see the sketch after this list).
Log into MySQL and use SHOW PROCESSLIST to see what MySQL currently is executing. If it's still running, just let it run to completion.
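A sketch of the pv idea mentioned above, using the file and credentials from the question:
pv /home/reinaldo/Documentos/Code/test/siafi.sql | mysql --user=reinaldo --password="type_here" --database=siafi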
If binary logging is turned on, it might really help to turn it off for the duration of the restore. Another thing that may or may not be helpful: if you have the choice, try to use the fastest disks available. You may have this kind of option if you're running on a hosting provider like Amazon. You're going to really feel the pain if you're (for example) doing this on a standard EC2 host.
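If you drive the import from a client session you control, one way to skip the binlog for just that session (a sketch, and it requires the SUPER privilege) is:
-- inside the mysql client, before sourcing the dump
SET sql_log_bin = 0;
SOURCE /home/reinaldo/Documentos/Code/test/siafi.sql;
SET sql_log_bin = 1;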
You can use third party tools like
https://philiplb.de/sqldumpsplitter3/
Very easy to use; you can define the chunk size, output location, etc.
Or use this one also
This one does the same thing, but the interface is a bit more colorful and easier to use:
https://sqldumpsplitter.net/
I have been working on a project which involves MySQL and PHP. While creating a package for testing, I exported the MySQL database as an SQL file from MySQL Workbench and imported it into a Linux machine's MySQL server with
mysql>source mydatabase.sql;
Is this the correct way to export and import a MySQL database? Also, the dump file contains the scripts for creating the schema, inserting the data, and creating the indexes, and importing takes a long time with this file. My immediate manager suggests exporting without indexes, importing the database, and then executing the index-creation scripts afterwards. Is that the correct way? Do indexes take a long time while importing a database?
Thanks in advance!
Percona Server comes with a modified mysqldump tool so you can choose the --innodb-optimize-keys option. This exports the tables, data, and index definitions like stock mysqldump, but during import it defers creation of secondary indexes until after the data has been loaded into the table. This takes advantage of InnoDB's fast index creation feature.
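If it's Percona Server's mysqldump you're running, the flag is used like any other dump option; a sketch with placeholder names:
mysqldump -u dbuser -p --innodb-optimize-keys mydatabase > mydatabase.sql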
Another option is to use a physical backup tool, like Percona XtraBackup (as user @Up_One mentioned). This means that the full data files are backed up, including the indexes. On restore, nothing needs to be rebuilt, because the indexes are already populated. There are pros and cons to using physical backup tools too, but restore time is a big advantage.
Export depends on the MySQL version/vendor you are running!
MySQL (older versions) - uses mysqldump
MySQL (Percona) - uses XtraBackup
MySQL (EE - 5.6) - uses MySQL Enterprise Backup, which is more complex and much faster.
The fastest way is to avoid mysqldump.
I like Percona more than the others!
Another way, though more advanced I think, is to create the DB on the destination server and copy the database files directly. This assumes the user has access to the destination server's mysql directory:
As root user:
[root@server-A]# /etc/init.d/mysqld stop
[root@server-A]# cd /var/lib/mysql/[databasename]
[root@server-A]# scp * user@otherhost:/var/lib/mysql/[databasename]
[root@server-A]# /etc/init.d/mysqld start
Important things here are: Stop mysqld on both servers before copying DB files, make sure the file ownership and permissions are correct on the destination before starting mysqld on the destination server.
[root@server-B]# chown mysql:mysql /var/lib/mysql/[databasename]/*
[root@server-B]# chmod 660 /var/lib/mysql/[databasename]/*
[root@server-B]# /etc/init.d/mysqld start
With time being your priority here, the use of compression will depend on whether the time lost waiting for compression/decompression (with something like gzip) will be greater than the time wasted transmitting uncompressed data; that is, the speed of your connection.
I prefer to use mysqldump! (Check this: http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html)
Rather than running mysqldump (e.g.) every morning, would it be fine to just rely on the daily server image backups that Rackspace Cloud Servers do? Or, is there a future headache that I'm not seeing?
The concern would be a potentially inconsistent state if any database operation occurred while the backup image was being created. Some of the database transactions may not have been flushed to disk yet, still residing in memory.
Since mysqldump can use the internal state of the database, I'd recommend using a cron job to regularly perform a mysqldump, and then backing up the output of the mysqldump with Cloud Backups.
Something like the following for the cron job:
#!/bin/sh
mysqldump -h DB_HOST -u DB_USER -p'DB_PASSWORD' \
  DB_NAME > PATH_TO_BACKUPS/db_backup.sql
gzip -f PATH_TO_BACKUPS/db_backup.sql
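Saved as, say, /usr/local/bin/db_backup.sh (a hypothetical path) and made executable, the script could then be scheduled with a crontab entry like:
# run the dump every night at 03:30
30 3 * * * /usr/local/bin/db_backup.sh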
References:
http://www.rackspace.com/cloud/backup/
http://www.rackspace.com/knowledge_center/article/rackspace-cloud-backup-backing-up-databases
Indeed, the future headache would be trying to restore data that was possibly in an inconsistent state when the image was taken.
The problem stems from any files that MySQL had open and writing to at the time of the image. MySQL dump performs a lock to ensure that nothing is being modified while the dump is taking place.
I would recommend using something like Holland Backup as it provides a framework for backing up MySQL.
The benefit of using something like Holland is that you have a bit more control over the process. For example, you can control when to purge out older backups that are no longer needed.
Holland also does a check to make sure you have enough free space to perform a backup.
The default provider is mysqldump, but there are options for other providers such as using XtraBackup.
A Rackspace Image is not an ideal way to perform a MySQL backup; a Cloud Image is a snapshot of the entire server! You'd have to use Rackspace Cloud Backup instead, not a Cloud Image. However, Holland Backup is a good way to perform scheduled backups using cron jobs.
I am dealing with an incremental backup solution for a MySQL database on CentOS. I need to write a Perl script to take incremental backups, and then I will run this script from crontab. I am a bit confused. There are solutions out there, but they are not really helping, and I have done lots of research. There are so many ways to take full and incremental backups of files, and I can understand them easily, but I need to take an incremental backup of a MySQL database and I do not know how to do it. Can anyone help me, either by advising a source or a piece of code?
The incremental backup method you've been looking at is documented by MySQL here:
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
What you are essentially going to want to do is set up your mysql instance to write any changes to your database to this binary log. What this means is any updates, deletes, inserts etc go in the binary log, but not select statements (which don't change the db, therefore don't go in the binary log).
Once you have your mysql instance running with binary logging turned on, you take a full backup and take note of the master position. Then later on, to take an incremental backup, you want to run mysqlbinlog from the master position and the output of that will be all the changes made to your database since you took the full backup. You'll want to take note of the master position again at this point, so you know the point that you want to take the next incremental backup from.
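A minimal sketch of that cycle, with placeholder credentials, paths and binlog names (the real names come from SHOW MASTER STATUS and your data directory):
# full backup; --master-data=2 writes the current binlog file/position into
# the dump as a comment, and --flush-logs starts a fresh binary log
mysqldump -u dbuser -p --all-databases --master-data=2 --flush-logs > full.sql
# later, an incremental backup is just the binary logs written since then;
# rotate first so the newest one is closed before you copy it
mysqladmin -u dbuser -p flush-logs
cp /var/lib/mysql/mysql-bin.000002 /backups/
# to restore: load the full dump, then replay the increments in order
mysql -u dbuser -p < full.sql
mysqlbinlog /backups/mysql-bin.000002 | mysql -u dbuser -p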
Clearly, if you then take multiple incremental backups over and over, you need to retain all those incremental backups. I'd recommend taking a full backup quite often.
Indeed, I'd recommend always doing a full backup, if you can. Taking incremental backups is just going to cause you pain, IMO, but if you need to do it, that's certainly one way to do it.
mysqldump is the ticket.
Example:
mysqldump -u [user_name] -p[password] --database [database_name] >/tmp/databasename.sql
-u = mysql database user name
-p = mysql database password
Note: there is no space after the -p option. And if you have to do this in Perl, you can use the system function to call it like so:
system("mysqldump -u [user_name] -p[password] --database [database_name] >/tmp/databasename.sql") or die "system call failed: $?";
Be aware though of the security risks involved in doing this. If someone happened to do a listing of the current processes running on a system as this was running, they'd be able to see the credentials that were being used for database access.
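One common way around that, an assumption here rather than part of the answer above, is to keep the credentials in a file that only the backup user can read and point mysqldump at it with --defaults-extra-file, so nothing sensitive shows up in the process list:
# contents of /home/backup/.my.backup.cnf (hypothetical path, chmod 600)
[client]
user=backup_user
password=secret
# --defaults-extra-file must come first; no -u/-p needed on the command line
mysqldump --defaults-extra-file=/home/backup/.my.backup.cnf --databases mydb >/tmp/mydb.sql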
I use the following rsync command to backup my MySQL data to a machine within the LAN network. It works as expected.
rsync -avz /mysql/ root:PassWord@192.168.50.180::/root/testme/
I just want to make sure that this is the correct way to use rsync.
I would also like to know whether a 5-minute crontab entry for this will work.
Don't use the root user of the remote machine for this. In fact, never connect directly as root; that's a major security risk. In this case, simply create a new user with few privileges that may only write to the backup location.
Don't use a password for this connection, but instead use public-key authentication
Make sure that MySQL is not running when you do this, or you can easily get a corrupt backup.
Use mysqldump to create a dump of your database while MySQL is running. You can then safely copy that dump.
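Putting those points together, a sketch with placeholder names (an unprivileged backup user on the remote host, key-based ssh, and a dump rather than the live data directory):
# dump while MySQL is running
mysqldump -u backup_user --password=db_pass --all-databases | gzip > /backups/all.sql.gz
# push the dump over ssh with public-key authentication
rsync -avz -e ssh /backups/all.sql.gz backup@192.168.50.180:/backups/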
I find a better way of doing backups of MySQL is to use the replication facility.
Set up your backup machine as a slave of your master. Each transaction is then automatically mirrored.
You can also shut down the slave and perform a full backup to tape from it. When you restart the slave it synchronises with the master again.
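A sketch of that pause-and-backup step on the slave, with placeholder credentials:
# pause replication so the data stops changing
mysql -u root -p -e 'STOP SLAVE'
# take the full backup from the slave (mysqldump here, but tar/tape works too)
mysqldump -u root -p --all-databases > /backups/slave_full.sql
# restart replication; the slave catches up with the master automatically
mysql -u root -p -e 'START SLAVE'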
I don't really know about your rsync command, but I am not sure this is the right/best way to make backups with MySQL; you should probably take a look at this page of the manual: 6.1. Database Backups
DB backups are not necessarily as simple as one might think, considering problems such as locks, delayed writes, and whatever optimizations MySQL can do with its data... especially if your tables are not using the MyISAM engine.
About the "5 minutes crontab": are you doing this backup every five minutes? If your data is that sensitive, you should probably think about something else, like replication to another server, so you always have an up-to-date copy.