Using rsync to backup MySQL

I use the following rsync command to back up my MySQL data to a machine on the LAN. It works as expected.
rsync -avz /mysql/ root:PassWord@192.168.50.180:/root/testme/
I just want to make sure that this is the correct way to use rsync.
I would also like to know whether a 5-minute crontab entry for this will work.

Don't use the remote machine's root user for this. In fact, never connect directly as root; that's a major security risk. Instead, create a new user with minimal privileges that may only write to the backup location.
Don't use a password for this connection; use public-key authentication instead.
Make sure that MySQL is not running when you copy the raw files, or you can easily end up with a corrupt backup.
Use mysqldump to create a dump of your database while MySQL is running. You can then safely copy that dump.
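As a rough illustration of that advice (not taken from the original question), a dump-then-copy step with a dedicated low-privilege system user and key-based SSH might look something like this; the user name, key path, and directories are placeholders:
# Dump while MySQL is running, then copy the dump over SSH with key auth.
# 'backupuser', the key path, and the directories are hypothetical examples.
# --single-transaction gives a consistent dump for InnoDB tables.
mysqldump --single-transaction --all-databases > /var/backups/mysql/all.sql
rsync -avz -e "ssh -i /home/backupuser/.ssh/id_rsa" \
  /var/backups/mysql/all.sql backupuser@192.168.50.180:/home/backupuser/backups/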

I find a better way of doing MySQL backups is to use the replication facility.
Set up your backup machine as a slave of your master; each transaction is then automatically mirrored.
You can also shut down the slave and perform a full backup to tape from it. When you restart the slave it synchronises with the master again.
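For reference, a minimal sketch of attaching a slave to the master; the host, credentials, and log coordinates are placeholders, and newer MySQL releases spell these statements differently (CHANGE REPLICATION SOURCE TO, and so on):
# Run on the slave; assumes binary logging and a replication user already
# exist on the master. All values below are placeholders.
mysql -e "CHANGE MASTER TO
            MASTER_HOST='192.168.50.10',
            MASTER_USER='repl',
            MASTER_PASSWORD='repl_password',
            MASTER_LOG_FILE='mysql-bin.000001',
            MASTER_LOG_POS=4;
          START SLAVE;"
mysql -e "SHOW SLAVE STATUS\G"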

I don't really know about your rsync command, but I am not sure this is the right/best way to make backups with MySQL; you should probably take a look at this page of the manual: 6.1. Database Backups.
DB backups are not necessarily as simple as one might think, considering problems such as locks, delayed writes, and whatever optimizations MySQL can do with its data... especially if your tables are not using the MyISAM engine.
About the "5 minute crontab": are you doing this backup every five minutes? If your data is that sensitive, you should probably think about something else, like replication to another server, so you always have an up-to-date copy.

Related

Is relying on Rackspace Cloud Server backup images an okay way to handle mysql backups?

Rather than running mysqldump (e.g.) every morning, would it be fine to just rely on the daily server image backups that Rackspace Cloud Servers do? Or, is there a future headache that I'm not seeing?
The concern would be a potentially inconsistent state if any database operation occurred while the backup image was being created. Some of the database transactions may not have been flushed to disk yet, still residing in memory.
Since mysqldump can use the internal state of the database, I'd recommend using a cron job to regularly perform a mysqldump, and then backing up the output of the mysqldump with Cloud Backups.
Something like the following for the cron job:
#!/bin/sh
mysqldump -h DB_HOST -u DB_USER -p'DB_PASSWORD' \
  DB_NAME > PATH_TO_BACKUPS/db_backup.sql
gzip -f PATH_TO_BACKUPS/db_backup.sql
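If that script were saved as, say, /usr/local/bin/db_backup.sh (a hypothetical path), the crontab entry itself could be as simple as:
# Run the backup script every night at 03:00 (path and schedule are examples).
0 3 * * * /usr/local/bin/db_backup.sh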
References:
http://www.rackspace.com/cloud/backup/
http://www.rackspace.com/knowledge_center/article/rackspace-cloud-backup-backing-up-databases
Indeed, the future headache would be trying to restore data that was possibly in an inconsistent state when the image was taken.
The problem stems from any files that MySQL had open and was writing to at the time of the image. mysqldump takes a lock to ensure that nothing is being modified while the dump is taking place.
I would recommend using something like Holland Backup as it provides a framework for backing up MySQL.
The benefit of using something like Holland is that you have a bit more control over the process. For example, you can control when to purge out older backups that are no longer needed.
Holland also does a check to make sure you have enough free space to perform a backup.
The default provider is mysqldump, but there are options for other providers such as using XtraBackup.
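Once a backupset is configured, scheduling it is just a matter of putting the holland command in cron; the schedule below is only an example, and the exact invocation may vary between Holland versions:
# Run the configured Holland backupsets nightly at 02:00 (example schedule).
0 2 * * * holland backup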
A Rackspace Image is not an ideal way to perform a MySQL backup; a Cloud Image is a snapshot of the entire server. You'd have to use Rackspace Cloud Backup instead, not a Cloud Image. Holland Backup, however, is a good way to perform scheduled backups using cron jobs.

Efficient way to backup MySQL database

I'm using mysqldump to backup my database. Since the database and webserver are on the same machine, the mysqldump takes all the CPU and the site 'goes down' until the mysqldump finishes.
Is the solution to move database to another machine and do backups on that machine? Are there other alternatives?
It might be a little too much, but I'll suggest to use replication.
There is master-slave replication in MySQL. This will allow you to have an identical, read-only copy of the DB on another machine at all times, and it doesn't require your machine to work too hard since replication happens continuously.
It's also pretty easy to set up. You can read more about it here:
mysql site description
I use MySQL Administrator from the old MySQL GUI Tools to create backups of my website's database on my PC.
A ~90 MB backup takes less than 2 minutes.
If you want smooth backups (without interfering with the production system), master-slave replication is a very good way to do it. However, you might not want to reserve a server for a backup slave, and indeed mysqldump uses a lot of resources.
You can try Percona XtraBackup which is an open source tool. Works on the file system level and much faster than mysqldump. http://www.percona.com/doc/percona-xtrabackup/ You can even try it on your current setup as it doesn't put any locks on tables.
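For example, a basic XtraBackup run looks roughly like this; the target directory is a placeholder, and older releases expose the same steps through the innobackupex wrapper:
# Copy the data files without locking InnoDB tables, then "prepare" the copy
# so it is consistent and ready to be restored.
xtrabackup --backup --target-dir=/data/backups/mysql/
xtrabackup --prepare --target-dir=/data/backups/mysql/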

Big Database backup best practice

I maintain a big MySQL database. I need to back it up every night, but the DB is active all the time; there are queries from users.
Right now I just disable the website and then do the backup, but this is very bad, as the service is down and users don't like it.
What is a good way to backup the data if data is changed during the backup?
What is best practice for this?
I've implemented this scheme using a read-only replication slave of my database server.
MySQL Database Replication is pretty easy to set up and monitor. You can set it up to get all changes made to your production database, then take it off-line nightly to make a backup.
The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.
There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
Here's a link to the docs on MySQL Replication.
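A rough sketch of the nightly job on the slave; the paths are placeholders, and it assumes credentials come from a .my.cnf file:
# Pause the SQL thread so the data stops changing, dump a consistent copy,
# then let replication catch up again.
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases --routines > /var/backups/mysql/nightly.sql
mysql -e "START SLAVE SQL_THREAD;"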
If you have a really large (50 GB+, like mine) MyISAM-only MySQL database, you can use locks and rsync. According to the MySQL documentation, you can safely copy the raw files while a read lock is active; you cannot do this with InnoDB.
So if the goal is zero downtime and you have extra disk space, create a script:
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
Then do the following:
Do FLUSH TABLES;
Run the script
Do FLUSH TABLES WITH READ LOCK;
Run the script again
Do UNLOCK TABLES;
On the first run, rsync will copy a lot without stopping MySQL. The second run will be very short; it only delays write queries, so it is a real zero-downtime solution.
Do another rsync from /tmp/mysql/sync to a remote server, compress, keep incremental versions, anything you like.
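Put together, the whole sequence might look like the sketch below. One caveat: FLUSH TABLES WITH READ LOCK only holds while the session that issued it stays open, so the lock has to be kept alive in a background client while the second rsync runs; the SLEEP value and paths here are arbitrary placeholders.
#!/bin/sh
# Pass 1: bulk copy while MySQL keeps serving queries.
mysql -e "FLUSH TABLES;"
rsync -aP --delete /var/lib/mysql/ /tmp/mysql/sync/

# Pass 2: hold a global read lock in a background session, copy the small
# remainder, then release the lock by ending that session.
mysql -N <<'SQL' &
FLUSH TABLES WITH READ LOCK;
SELECT SLEEP(600);
SQL
LOCK_PID=$!
sleep 2                      # crude wait for the lock to be acquired
rsync -aP --delete /var/lib/mysql/ /tmp/mysql/sync/
kill $LOCK_PID               # closing the session releases the read lock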
This partly depends upon whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own commercial solution for this (InnoDB Hot Backup), but there is an open-source alternative from Percona you may want to look at:
http://www.percona.com/doc/percona-xtrabackup/
What you want to do is called "online backup". Here's a pointer to a matrix of possible options with more information:
http://www.zmanda.com/blogs/?p=19
It essentially boils down to the storage backend that you are using and how much hardware you have available.

Migrating a MySQL server from one box to another

The databases are prohibitively large (> 400 MB), so dump > SCP > source is proving to be hours and hours of work.
Is there an easier way? Can I connect to the DB directly and import from the new server?
You can simply copy the whole /data folder.
Have a look at High Performance MySQL - transferring large files
You can use ssh to pipe your data directly over the Internet. First set up SSH keys for password-less login. Next, try something like this:
$ mysqldump -u db_user -p some_database | gzip | ssh someuser@newserver 'gzip -d | mysql -u db_user --password=db_pass some_database'
Notes:
The basic idea is that you are just dumping standard output straight into a command on the other side, which SSH is perfect for.
If you don't need encryption then you can use netcat but it's probably not worth it
The SQL text data goes over the wire compressed!
Obviously, change db_user to your MySQL user and some_database to your database. someuser is the (Linux) system user, not the MySQL user.
You will also have to pass --password with its value on the command line, because having mysql prompt for it in the middle of a pipeline would be a headache.
You could set up MySQL slave replication and let MySQL copy the data, then make the slave the new master.
400 MB is really not a large database; transferring it to another machine will only take a few minutes over a 100 Mbit network. If you do not have a 100 Mbit network between your machines, you are in big trouble!
If they are running the exact same version of MySQL and have identical (or similar ENOUGH) my.cnf and you just want a copy of the entire data, it is safe to copy the server's entire data directory across (while both instances are stopped, obviously). You'll need to delete the data directory of the target machine first of course, but you probably don't care about that.
Backup/restore is usually slowed down by the restoration having to rebuild the table structure, rather than the file copy. By copying the data files directly, you avoid this (subject to the limitations stated above).
If you are migrating a server:
The dump files can be very large, so it is better to compress them before sending, or to use the -C flag of scp. Our methodology for transferring is to create a full dump in which the incremental logs are flushed (use --master-data=2 --flush-logs, and check that you don't mess up any slave hosts if you have them). Then we copy the dump over and load it. Afterwards we flush the logs again (mysqladmin flush-logs), take the most recent incremental log (which shouldn't be very large) and replay only that. Keep doing this until the latest incremental log is very small; then you can stop the database on the original machine, copy the last incremental log and replay it, which should take only a few minutes.
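A hedged sketch of that sequence; host names, file names, and the binlog number are placeholders, and the exact binlog file names depend on your configuration:
# Full dump that also rotates the binary logs, so later increments start clean.
mysqldump --all-databases --master-data=2 --flush-logs | gzip > full.sql.gz
# ...copy full.sql.gz to the new host and load it there, then repeatedly:
mysqladmin flush-logs
# Replay only the freshly closed binlog(s) against the new server.
mysqlbinlog /var/lib/mysql/mysql-bin.000042 | mysql --host=newhost -u aaa -p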
If you just want to copy data from one server to another:
mysqldump -C --host=oldhost --user=xxx --databases yyy -p | mysql -C --host=newhost --user=aaa -p
You will need to set the db users correctly and provide access to external hosts.
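Setting up that access on the new server could look roughly like this; the user, host, and password are placeholders:
# Run on the new server so the machine driving the copy can connect to it.
mysql -e "CREATE USER 'aaa'@'oldhost.example.com' IDENTIFIED BY 'secret';
          GRANT ALL PRIVILEGES ON yyy.* TO 'aaa'@'oldhost.example.com';
          FLUSH PRIVILEGES;"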
Try importing the dump on the new server using the mysql console rather than auxiliary software.
I have no experience doing this with MySQL, but it seems to me the bottleneck is transferring the actual data?
400 MB isn't that much. But if dump -> SCP is slow, I don't think connecting to the DB server from the remote box would be any faster.
I'd suggest dumping, compressing, then copying over the network, or burning to disk and transferring the data manually.
Compressing such a dump will most likely give you quite a good compression ratio since, most likely, there is a lot of repetitive data.
If you are only copying all the databases of the server, copy the entire /data directory.
If you are just copying one or more databases and adding them to an existing mysql server:
create the empty database in the new server, set up the permissions for users etc.
copy the folder for the database in /data/databasename to the new server /data/databasename
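For example, something along these lines; note that copying raw files like this is really only safe for MyISAM tables, the tables should be flushed and locked (or the server stopped) first, and the paths are placeholders:
# Copy one database's directory across and fix ownership on the new server.
rsync -a /data/databasename/ newserver:/data/databasename/
ssh newserver "chown -R mysql:mysql /data/databasename"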
I like to use BigDump: Staggered MySQL Dump Importer after exporting my database from the old server.
http://www.ozerov.de/bigdump/
One thing to note, though: if you don't set the export options (namely the maximum length of created queries) according to the load your new server can handle, it'll just fail and you will have to try again with different parameters. Personally, I set mine to about 25,000, but that's just me. Test it out a bit and you'll get the hang of it.

How do I backup a MySQL database?

What do I have to consider when backing up a database with millions of entries? Are there any tools (maybe bundled with the MySQL server) that I could use?
Depending on your requirements, there's several options that I have been using myself:
if you don't need hot backups, take down the DB server and back up on the file-system level, e.g. using tar, rsync, or similar.
if you do need the database server to keep running, you can start out with the mysqlhotcopy tool (a perl script), which locks the tables that are being backed up and allows you to select single tables and databases.
if you want the backup to be portable, you might want to use mysqldump, which creates SQL scripts to recreate the data, but which is slower than mysqlhotcopy
if you have a copy of the db at a certain point in time, you could also just keep the binlogs (starting at that point in time) somewhere safe. This can be very easy to do and doesn't interfere with the server's operation, but might not be the fastest to restore, and you have to make sure you don't miss part of the logs.
Methods I haven't tried, but that make sense to me:
if you have a filesystem like ZFS or are running on LVM, it might be a good idea to snapshot the database by taking a filesystem snapshot, because snapshots are very, very quick (a sketch appears at the end of this answer). Just remember to ensure a consistent state of your db during the whole operation, e.g. by doing FLUSH TABLES WITH READ LOCK (and of course, don't forget UNLOCK TABLES afterwards)
Additionally:
you can use a master-slave setup to replicate your production server to either a different machine or a second instance on the same machine, and do any of the above against the replicated copy, leaving your production machine alone. Instead of running it continuously, you can also fire up the slave at regular intervals, let it read the binlog, and switch it off again.
I think MySQL Cluster and the enterprise-licensed version have more tools, but I have never tried them.
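As promised above, a hedged sketch of the LVM snapshot approach; the volume group, volume names, sizes, and mount points are all placeholders:
# In an open mysql session, take the global read lock and keep the session open:
#   FLUSH TABLES WITH READ LOCK;
# From a shell, create the snapshot (this only takes a moment):
lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql
# Back in the mysql session, release the lock:
#   UNLOCK TABLES;
# Mount the snapshot, copy the data off, then drop the snapshot:
mount /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ /backup/mysql/
umount /mnt/mysql_snap
lvremove -f /dev/vg0/mysql_snap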
Mysqlhotcopy is badly described - it only works if you use MyISAM, and it's not hot.
The problem with mysqldump is the time it takes to restore the backup (but it can be made hot if you have all InnoDB tables, see --single-transaction).
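For reference, the hot InnoDB dump mentioned above is just a matter of adding that flag; the options shown are common choices, not taken from the answer:
# Consistent dump of InnoDB tables without blocking writers.
mysqldump --single-transaction --routines --all-databases > backup.sql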
I recommend using a hot backup tool, like what is available in XtraBackup:
http://www.percona.com/docs/wiki/percona-xtrabackup:start
Watch out if using mysqldump on large tables using the MyISAM storage engine; it blocks selects while the dump is running on each table and this can take down busy sites for 5-10 minutes in some cases.
Using InnoDB, by comparison, you get non-blocking backups because of its row-level locking, so this is not such an issue.
If you need to use MyISAM, a common strategy is to replicate to a second MySQL instance and do the mysqldump against the replicated copy instead.
Use the Export tab in phpMyAdmin. phpMyAdmin is a free, easy-to-use web interface for MySQL administration.
I think mysqldump is the proper way of doing it.