I maintain a big MySQL database. I need to back it up every night, but the DB is active all the time; there are queries from users.
Right now I just disable the website and then do the backup, but this is very bad, as the service is down and users don't like it.
What is a good way to back up the data if it is being changed during the backup?
What is the best practice for this?
I've implemented this scheme using a read-only replication slave of my database server.
MySQL replication is pretty easy to set up and monitor. The slave receives all changes made to your production database, and you can take the slave offline nightly to make a backup.
The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.
There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
Here's a link to the docs on MySQL Replication.
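For reference, a minimal sketch of what the nightly job on the slave might look like, assuming credentials come from a .my.cnf file and /backups is a placeholder destination directory:
# pause the SQL thread so the data stops changing, dump, then resume
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases --routines --events > /backups/nightly-$(date +%F).sql
mysql -e "START SLAVE SQL_THREAD;"
With only the SQL thread stopped, the slave still receives events from the master into its relay log, so nothing is lost while the dump runs.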
If you have a really large (50 GB+, like mine) MyISAM-only MySQL database, you can use locks and rsync. According to the MySQL documentation, you can safely copy the raw files while a read lock is active; you cannot do this with InnoDB.
So if the goal is zero downtime and you have extra HD space, create a script:
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
Then do the following:
Do FLUSH TABLES;
Run the script
Do FLUSH TABLES WITH READ LOCK;
Run the script again
Do UNLOCK TABLES;
On the first run rsync will copy a lot without stopping MySQL. The second run will be very short; it only delays write queries briefly, so it is effectively a zero-downtime solution.
Do another rsync from /tmp/mysql/sync to a remote server, compress, keep incremental versions, anything you like.
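Putting the steps above together, a rough sketch of the whole procedure might look like this (paths are the same examples as above, credentials are assumed to come from a .my.cnf, and the second rsync is run through the mysql client's system command so that the connection, and therefore the read lock, stays open for the duration of the copy):
# flush, then do the bulk copy while MySQL keeps serving queries
mysql -e "FLUSH TABLES;"
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
# take the global read lock, re-run the copy for the small delta, then release the lock
mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
system rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
UNLOCK TABLES;
SQL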
This partly depends on whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own paid solution for this (InnoDB Hot Backup), but there is an open-source alternative from Percona you may want to look at:
http://www.percona.com/doc/percona-xtrabackup/
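For what it's worth, a minimal sketch of a full hot backup with XtraBackup looks something like this (the target directory is just an example, and the exact invocation varies by version; older releases use the innobackupex wrapper instead):
# take a hot copy of the data files, then "prepare" it into a consistent, restorable state
xtrabackup --backup --target-dir=/data/backups/full
xtrabackup --prepare --target-dir=/data/backups/full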
What you want to do is called "online backup". Here's a pointer to a matrix of possible options with more information:
http://www.zmanda.com/blogs/?p=19
It essentially boils down to the storage backend that you are using and how much hardware you have available.
I am handling a few databases that are growing very fast. The data is around 12 GB now and will cross 15 GB in the next few months. At the moment I only have the traditional backup process, mysqldump, running in a cron job, and I see significant delays in backup and restore times (hours, even days) for the whole database. Knowing that physical backups are much faster than logical ones, I found these two, but cannot use them because of some limitations and company policy:
MySQL Enterprise Backup: it is commercial.
Percona XtraBackup: Percona built this tool specifically for Linux environments, and not all companies will agree to use third-party tools, even if they are open source.
Please suggest any other better backup and restore mechanism for big databases, especially when they are a mix of InnoDB and MyISAM. I am also looking for good suggestions on snapshots.
I have personally used the Percona XtraBackup toolset. It copies the InnoDB files while the server keeps running and then replays the transaction log, so all committed data is present when the copy finishes. From what I have read, MySQL Enterprise Backup can do this too.
Incremental backups/snapshots can be made with both Percona XtraBackup and MySQL Enterprise Backup, if that's what you're looking for.
Using a read-only slave also keeps the read locks away from the master: if you back up with the free MySQL tools, the locks are only held on the slave.
Another option, if you have a slave, is to shut the slave off and rsync the data somewhere for backup.
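A rough sketch of that slave-copy approach, assuming the backup goes to a local /backups directory and the MySQL service is managed by systemd (both assumptions):
mysql -e "STOP SLAVE;"
mysqladmin shutdown
rsync -a /var/lib/mysql/ /backups/mysql-$(date +%F)/
systemctl start mysql    # or: service mysql start
# replication resumes on restart unless skip-slave-start is configured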
First I would try using --single-transaction with mysqldump, as long as all your tables are InnoDB and you aren't modifying table structure during the dump. This dumps from a single consistent point in time and can be done without locking your tables (a sketch follows at the end of this answer).
I haven't tried xtrabackup but it looks like that will do what you want as well.
You could also try setting up mysql replication and dumping from the slave.
Lastly, LVM snapshots, but that takes a bit more work.
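For the first option, something along these lines is a reasonable sketch (the output path is a placeholder and credentials are assumed to come from a .my.cnf):
# --quick streams rows instead of buffering whole tables in memory
mysqldump --single-transaction --quick --all-databases > /backups/dump-$(date +%F).sql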
Is it correct to use a backup of a VM as a means of restoring a MySQL database?
Are there any dangers in doing this?
My own feeling is that a VM backup/snapshot is at the OS level, not the DB level, and therefore may not back up the database in the correct way. Does anybody have any advice on this?
It's perfectly fine as long as you do one of two things:
Either ensure consistency of the tables, by shutting down the database or by using something like FLUSH TABLES WITH READ LOCK while taking the snapshot (you probably don't want to do this), or
Use a transactionally-safe storage engine such as InnoDB (the default) for all tables that are likely to change around the time of the snapshot, and rely on its ability to recover from what looks like a crashed state, i.e. the copy of a running server.
Once you realise that taking a snapshot of a running VM and booting the snapshot on another machine looks just like pulling the plug on that server and rebooting it, your choice becomes relatively easy: Make sure the system can recover from pulling the plug, and it can recover from a VM snapshot backup.
Based on a recommendation from Jeff Hunter posted on the VMware blog, the answer is no, it's not safe to rely on snapshots for MySQL backups. His recommendation is basically to dump the DB through a separate process (and then let the snapshot copy the dump).
I have a huge MySQL InnoDB database (about 15 GB and 100M rows) on a Debian server.
I have to back up my database to another server every two hours, without affecting performance.
I looked at MySQL replication, but it is not quite what I am looking for, because I also want to be protected against problems that the application itself could cause.
What would be the best way of dealing with it?
Thank you very much!
I think you need incremental backups.
You can use Percona XtraBackup to make fast incremental backups. This works only if your database uses only InnoDB tables.
Refer to the documentation about how to create incremental backups:
http://www.percona.com/doc/percona-xtrabackup/howtos/recipes_ibkx_inc.html
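In outline, an incremental run looks roughly like this (directories are examples; see the recipe above for the prepare and restore steps):
# full base backup, then an incremental backup containing only pages changed since the base
xtrabackup --backup --target-dir=/data/backups/base
xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base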
Have you looked at writing a script that uses mysqldump to dump the contents of the DB, transfers it over to the backup DB (piping it to SSH would work), and inserts it via the command line?
There are options so that mysqldump won't lock the tables and so won't degrade performance too much.
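A rough sketch of that idea, with mydb and backup-host as placeholders and credentials assumed to be in each host's .my.cnf:
# dump without locking InnoDB tables, compress, and load on the backup server in one pipeline
mysqldump --single-transaction --quick mydb | gzip | ssh backup-host 'gunzip | mysql mydb'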
I'm currently using mysqldump to back up databases that are growing rapidly in size. Though I run it late at night, there have been occasional problems when it happens to run during a moment of high traffic (which happens at night sometimes). For example, last night one of my sites locked up just after the time of the database backup with a completely full (and non-clearing) processlist.
Does anyone have a suggestion for a better way to approach this? Putting the site in a temporary maintenance state during the backup is not an option, as the goal is to maximize availability (some SQL dumps take a while). One idea that comes to mind is to run both master and slave copies and shut down + back up the slave copy, leaving the master copy alone during the process. Hopefully there is a simpler solution though; I'd rather not run a slave copy for backup purposes only unless absolutely necessary. Any suggestions?
Thanks.
Two thoughts:
run the slave. If nothing else, it gives you a warm spare for your production traffic in case of failure. You can also run reports and tools from it, freeing up cycles from your production server.
move to InnoDB and use mysqldump --single-transaction (see the man page)
Good luck!
I use Percona XtraBackup, which is similar to InnoDB Hot Backup but with more functionality, and is distributed for free. XtraBackup takes snapshots without locking InnoDB tables and will record the current master logfile info and, if requested, the slave info if you are taking the backup from a slave.
I would recommend running a slave and doing a backup like this or with mysqldump. The slave gives you a hot backup that you can quickly switch over to and be up and running within minutes if your master blows up due to a hardware issue or various software or user error issues that take out the server. The backup with xtrabackup or mysqldump gives you a backup that you can use to restore data in case you accidentally drop a table or delete some rows you shouldn't have, since the replicated server wouldn't save you there.
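As a sketch, taking the backup from the slave while recording its replication coordinates might look like this (the target directory is a placeholder; depending on the XtraBackup version this may instead be an option of the innobackupex wrapper):
xtrabackup --backup --slave-info --target-dir=/data/backups/slave-$(date +%F)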
What do I have to consider when backing up a database with millions of entries? Are there any tools (maybe bundled with the MySQL server) that I could use?
Depending on your requirements, there are several options that I have been using myself:
if you don't need hot backups, take down the DB server and back up at the filesystem level, e.g. using tar, rsync or similar.
if you do need the database server to keep running, you can start out with the mysqlhotcopy tool (a perl script), which locks the tables that are being backed up and allows you to select single tables and databases.
if you want the backup to be portable, you might want to use mysqldump, which creates SQL scripts to recreate the data, but which is slower than mysqlhotcopy
if you have a copy of the db at a certain point in time, you could also just keep the binlogs (starting at that point in time) somewhere safe. This can be very easy to do and doesn't interfere with the server's operation, but might not be the fastest to restore, and you have to make sure you don't miss part of the logs.
Methods I haven't tried, but that make sense to me:
if you have a filesystem like ZFS or are running on LVM, it might be a good idea to snapshot the database by taking a filesystem snapshot, because these are very, very quick. Just remember to ensure a consistent state of your DB during the whole operation, e.g. by doing FLUSH TABLES WITH READ LOCK (and of course, don't forget UNLOCK TABLES afterwards); a rough sketch of this is at the end of this answer.
Additionally:
you can use a master-slave setup to replicate your production server to either a different machine or a second instance on the same machine and do any of the above to the replicated copy, leaving your production machine alone. Instead of running continuously, you can also fire up the slave at regular intervals, let it read the binlog, and switch it off again.
I think MySQL Cluster and the enterprise-licensed version have more tools, but I have never tried them.
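Here is the rough LVM sketch referred to above, assuming the data directory lives on a logical volume /dev/vg0/mysql; volume names, sizes and paths are all placeholders, and filesystem-specific mount options may be needed:
# hold the read lock only while the snapshot is created; the lock is released on disconnect,
# so the snapshot command is run from inside the same client session via the system command
mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
system lvcreate --size 5G --snapshot --name mysql_snap /dev/vg0/mysql
UNLOCK TABLES;
SQL
mount /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ /backups/mysql-$(date +%F)/
umount /mnt/mysql_snap
lvremove -f /dev/vg0/mysql_snap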
Mysqlhotcopy is badly described - it only works if you use MyISAM, and it's not hot.
The problem with mysqldump is the time it takes to restore the backup (but it can be made hot if you have all InnoDB tables, see --single-transaction).
I recommend using a hot backup tool, like what is available in XtraBackup:
http://www.percona.com/docs/wiki/percona-xtrabackup:start
Watch out if using mysqldump on large tables using the MyISAM storage engine; it blocks selects while the dump is running on each table and this can take down busy sites for 5-10 minutes in some cases.
Using InnoDB, by comparison, you get non-blocking backups because of its row-level locking, so this is not such an issue.
If you need to use MyISAM, a common strategy is to replicate to a second MySQL instance and do the mysqldump against the replicated copy instead.
Use the Export tab in phpMyAdmin. phpMyAdmin is the free, easy-to-use web interface for MySQL administration.
I think mysqldump is the proper way of doing it.