Using a VM backup to back up a MySQL database

Is it correct to use a backup of a VM as a means of restoring a MySQL database?
Are there any dangers in doing this?
My own feeling is that a VM backup/snapshot operates at the OS level, not the DB level, and therefore may not back up the database in the correct way. Does anybody have any advice on this?

It's perfectly fine as long as you do one of two things:
Ensure consistency of the tables by shutting down the database or using something like FLUSH TABLES WITH READ LOCK while the snapshot is taken (you probably don't want to do this), or
Use a transactionally-safe storage engine such as InnoDB (the default) for all tables that are likely to change around the time of the snapshot, and rely on its ability to recover from what looks like a crashed state, i.e. the copy of a running server.
Once you realise that taking a snapshot of a running VM and booting that snapshot on another machine looks just like pulling the plug on the server and rebooting it, the choice becomes relatively easy: make sure the system can recover from having its plug pulled, and it can recover from a VM snapshot backup.
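If you go the quiescing route, a minimal sketch of the sequence, assuming you can trigger the snapshot from the hypervisor while a client session holds the lock (the snapshot step itself depends on your virtualisation platform and is only a placeholder here):

mysql> FLUSH TABLES WITH READ LOCK;   -- keep this session open
-- take the VM snapshot from the hypervisor while the lock is held
mysql> UNLOCK TABLES;                 -- release once the snapshot exists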

Based on a recommendation from Jeff Hunter posted on the VMware blog, the answer is no, it's not safe to rely on the snapshots for MySQL backups. His recommendation is basically to dump the database through a separate process (and then let the snapshot copy the dump).
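A minimal sketch of that dump-then-snapshot approach, assuming mostly InnoDB tables and a hypothetical /backups path that the VM-level backup picks up (the schedule and paths are placeholders):

# crontab entry: dump first, let the nightly snapshot copy the dump file
30 1 * * * mysqldump --single-transaction --all-databases | gzip > /backups/mysql-$(date +\%F).sql.gz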

Related

Restoring database to new cluster

We are using Percona Server 5.7.16-10. I would like to expand the current setup with an XtraDB Cluster, so in the meantime I created other machines and started a cluster (running Percona XtraDB Cluster 5.7.17-11-57) and did some testing there (everything seems to be working fine). Now I would like to dump the current database from the running server and load it into the cluster. There is no problem with stopping the cluster (since it is for testing). But when I create a mysqldump as I am used to, I am not able to load it into the cluster because of pxc_strict_mode (info here): with ENFORCING, Percona XtraDB Cluster prohibits the use of LOCK TABLES / FLUSH TABLES <table> WITH READ LOCK, and the script mysqldump generates contains exactly such table locks. I have tested several other options, such as setting pxc_strict_mode to MASTER, which should not check this rule, but it didn't help: the insert queries from the dump get stuck and nothing happens.
Is there any mysqldump option to avoid table-locking statements, or do I have to restore it somehow via XtraBackup and use XtraBackup on the currently running server?
I've read several topics here, but didn't find one with the same issue. Everyone is solving how to restore a cluster after a failure, not how to build one from scratch.
I would be glad for any suggestion for mysqldump, or for the proper way to "insert" the old database into the cluster.
If you can take down your current machine, and if you are building the cluster from scratch, then I think these mysqldump options would avoid that strict-mode error, and possibly some other hiccups:
--skip-add-locks --skip-lock-tables
And do not use
--single-transaction --lock-all-tables
It might also be wise to get the first node in the cluster loaded with the data, then add the other nodes, letting them use SST to load themselves.
If you need to keep the current server alive, then we need to discuss making it a master and making one node of the new cluster a slave. Plus, XtraBackup would probably be better suited. But then the locks and --single-transaction would be necessary, so setting pxc_strict_mode to DISABLED seems right since the cluster is being built and is not yet live.
(Caveat: I have no experience performing your task; if someone else provides a more convincing Answer, go with them.)
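With those caveats, a sketch of a dump and load using those options, assuming a single database called mydb and that the load runs against the first (already bootstrapped) cluster node; all names are placeholders:

# on the old server: dump without LOCK TABLES statements
mysqldump --skip-add-locks --skip-lock-tables --routines --triggers mydb > mydb.sql

# on the first cluster node: create the schema and load the dump
mysql -e "CREATE DATABASE IF NOT EXISTS mydb"
mysql mydb < mydb.sql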

What is an efficient way to maintain a local readonly copy of a live remote MySQL database?

I maintain a server that runs daily cron jobs to aggregate data sources and generate reports, accessible by a private Ruby on Rails application.
One of our data sources is a partial dump of one of our partner's databases. The partner runs an active application and the MySQL DB has hundreds of tables. They have given us read-only access to a relatively underpowered readonly slave of their application DB.
Because of latency issues and performance bottlenecking on their slave DB, we have been maintaining a limited local copy of their DB. We only need about 20 tables for our reports, so I only dump those tables. We also only need the data to a daily granularity, so realtime sync is not a requirement.
For a few months, I had implemented a nightly cron which streamed the dump of the necessary tables into a local production_tmp database. Then, when all tables were imported, I dropped production and renamed production_tmp to production. This was working until the DB grew to over 25GB, and we started running into disk space limitations.
For now, I have removed the redundancy step and am just streaming the dump straight into production on our local server. This feels a bit flimsy to me, and I would like to implement a safer approach. Also, doing the full dump/load currently takes our server over 2 hours, and I'd like to implement an approach that doesn't take as long. The database will only keep growing, so I'd like to implement something future-proof.
Any suggestions would be appreciated!
I take it you have never heard of, or considered MySQL Replication?
The idea is that you do your backup & restore once, and then configure the replica to "subscribe" to a continuous stream of changes as they are made on the primary MySQL instance. Any change applied to the primary is applied automatically to the replica within seconds. You don't have to do the backup & restore procedure again, unless the replica gets damaged.
It takes some care to set up and keep working, but it's a much more efficient method of keeping two instances in sync.
@SusannahPotts mentions hot backups and/or incremental backups. You can get both of these features for free, without paying for MySQL Enterprise, by using Percona XtraBackup.
You can also consider using MySQL Transportable Tablespaces.
You'll need filesystem access to run either Percona XtraBackup or MySQL Enterprise Backup. It's not possible to use these physical backup tools for Amazon RDS, for example.
One alternative is to create a replication slave in the same network as the live system, and run Percona XtraBackup on that slave, where you do have filesystem access.
Another option is to stream the binary logs to another host (see https://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html) and then transfer them periodically to your local instance and replay them.
Each of these solutions has pros and cons. It's hard to recommend which solution is best for you, because you aren't sharing full details about your requirements.
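For the binary-log option above, a rough sketch based on the manual page linked there (host, user, and the starting log file are placeholders):

# continuously pull raw binlogs from the remote server to the local machine
mysqlbinlog --read-from-remote-server --host=partner-db --user=repl -p --raw --stop-never mysql-bin.000001

# later, replay a pulled binlog against the local copy
mysqlbinlog mysql-bin.000123 | mysql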
This was working until the DB grew to over 25GB, and we started running into disk space limitations.
A few questions here:
Why don't you just increase the available disk space for your database? 25 GB seems like nothing when it comes to disk space.
Why don't you modify your script to work table by table: download table1, import it as table1_tmp, drop table1_prod, rename table1_tmp to table1_prod; rinse and repeat (see the sketch after this list).
Other than that:
Why don't you ask your partner for a system with enough performance to run your reports on? I'm quite sure they would prefer that over having you download sensitive data to your "local site" every day.
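A rough sketch of that per-table swap, with placeholder table, schema, and host names; RENAME TABLE can move a table between schemas in one atomic statement:

# dump one table from the partner's slave and load it into a local staging schema
mysqldump -h partner-slave --single-transaction production table1 > table1.sql
mysql staging < table1.sql

# swap it into place, then repeat for the next table
mysql -e "DROP TABLE production.table1; RENAME TABLE staging.table1 TO production.table1;"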
Last thought (requires MySQL Enterprise Backup https://www.mysql.de/products/enterprise/backup.html):
Rather than dumping, downloading and importing 25 GB every day:
Create a full backup
Download and import
Use differential or incremental backups from then on.
The next day you download (and import) only the data-delta: https://dev.mysql.com/doc/mysql-enterprise-backup/4.0/en/mysqlbackup.incremental.html
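The same incremental idea is available for free with Percona XtraBackup (mentioned earlier); a rough sketch, with placeholder backup directories:

# one-time full (base) backup
xtrabackup --backup --target-dir=/backups/base

# on following days, back up only what changed since the previous backup
xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/base

# increments are merged into the base with xtrabackup --prepare before a restore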

Big Database backup best practice

I maintain a big MySQL database. I need to back it up every night, but the DB is active all the time; there are queries from users.
Now I just disable the website and then do a backup, but this is very bad as the service is down and users don't like it.
What is a good way to back up the data if it changes during the backup?
What is best practice for this?
I've implemented this scheme using a read-only replication slave of my database server.
MySQL Database Replication is pretty easy to set up and monitor. You can set it up to get all changes made to your production database, then take it off-line nightly to make a backup.
The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.
There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
Here's a link to the docs on MySQL Replication.
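A rough outline of wiring up the replica once the initial copy of the data is in place (host, credentials, and binlog coordinates are placeholders):

-- on the replica: point it at the primary and start replicating
CHANGE MASTER TO
  MASTER_HOST='primary.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=120;
START SLAVE;

-- make the replica read-only so nothing writes to it directly
SET GLOBAL read_only = ON;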
If you have really large (50 GB+ like me) MyISAM-only MySQL databases, you can use locks and rsync. According to the MySQL documentation, you can safely copy the raw files while a read lock is active; you cannot do this with InnoDB.
So if the goal is zero downtime and you have extra HD space, create a script:
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
Then do the following:
Run FLUSH TABLES
Run the script
Run FLUSH TABLES WITH READ LOCK
Run the script again
Run UNLOCK TABLES
On the first run, rsync will copy a lot of data without stopping MySQL. The second run will be very short; it only delays write queries, so it is effectively a zero-downtime solution.
Do another rsync from /tmp/mysql/sync to a remote server, compress, keep incremental versions, anything you like.
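Putting those steps together, a rough sketch; the key detail is that the read lock only holds as long as the client session that issued it stays open, so the second rsync has to run while that session is alive (here via the mysql client's built-in system command):

# pass 1: flush and bulk-copy while MySQL keeps serving traffic
mysql -e "FLUSH TABLES"
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync

# pass 2: hold the lock in one session and copy the small remainder
mysql> FLUSH TABLES WITH READ LOCK;
mysql> system rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
mysql> UNLOCK TABLES;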
This partly depends on whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own paid solution (InnoDB Hot Backup, now part of MySQL Enterprise Backup), but there is an open-source alternative from Percona you may want to look at:
http://www.percona.com/doc/percona-xtrabackup/
What you want to do is called "online backup". Here's a pointer to a matrix of possible options with more information:
http://www.zmanda.com/blogs/?p=19
It essentially boils down to the storage backend that you are using and how much hardware you have available.

Is copying the /var/lib/mysql directory a good alternative to mysqldump?

Since I'm making a full backup of my entire Debian system, I was wondering whether keeping a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump.
Is all the information needed contained in that directory?
Can single tables be imported into another MySQL instance?
Can there be problems when restoring those files on a (probably slightly) different MySQL server version?
Yes
Yes, if the table uses the MyISAM engine (the default before MySQL 5.5). Not if it uses InnoDB.
Probably not, and if there are, you usually just need to run mysql_upgrade to fix them.
To avoid getting the databases into an inconsistent state, you can either shut down MySQL or use LOCK TABLES followed by FLUSH TABLES before the backup. The second solution is a little better because the MySQL server remains available during the backup (albeit read-only).
This approach is only going to work safely if you shut the database down first. Otherwise you could well end up in an inconsistent state afterwards. Use the /etc/init.d/mysql stop command first. You can then restart it after the backup is taken.
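A minimal sketch of that cold-copy approach (paths and service commands vary by distribution and are assumptions here):

sudo /etc/init.d/mysql stop        # or: sudo systemctl stop mysql
sudo tar czf /backup/mysql-$(date +%F).tar.gz -C /var/lib mysql
sudo /etc/init.d/mysql start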
It's perfectly OK as long as you shut down the MySQL server first and use exactly the same version to restore the "backup". Otherwise it isn't.
For a complete discussion of the 2 strategies, you need to read this: https://dev.mysql.com/doc/refman/5.5/en/backup-types.html
The currently best free and open-source solution seems to be Percona's: http://www.percona.com/software/percona-xtrabackup
I'll go with a strong NO.
From my experience, backing up and restoring raw MySQL data files only works on the same OS and server version. It does not work cross-platform (e.g. Ubuntu to macOS) even with the same server version, nor across different MySQL server versions on the same platform.
Percona XtraBackup (innobackupex), from the Percona MySQL distribution, lets you take live and incremental MySQL backups and gives you backup files that can be restored by copying them into /var/lib/mysql/. You do not have to be running Percona Server; XtraBackup also works against stock MySQL.

How do I backup a MySQL database?

What do I have to consider when backing up a database with millions of entries? Are there any tools (maybe bundled with the MySQL server) that I could use?
Depending on your requirements, there are several options that I have been using myself:
If you don't need hot backups, take down the DB server and back up at the file-system level, e.g. using tar, rsync, or similar.
If you need the database server to keep running, you can start out with the mysqlhotcopy tool (a Perl script), which locks the tables that are being backed up and lets you select single tables and databases.
If you want the backup to be portable, you might want to use mysqldump, which creates SQL scripts to recreate the data but is slower than mysqlhotcopy (see the example after this list).
If you have a copy of the DB at a certain point in time, you could also just keep the binlogs (starting at that point in time) somewhere safe. This can be very easy to do and doesn't interfere with the server's operation, but it might not be the fastest to restore, and you have to make sure you don't miss part of the logs.
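For the portable mysqldump option, a typical round trip could look like this (database name and output path are placeholders; --single-transaction keeps the dump consistent without locking, assuming InnoDB tables):

# dump
mysqldump --single-transaction --routines --triggers mydb | gzip > /backups/mydb.sql.gz

# restore
gunzip < /backups/mydb.sql.gz | mysql mydb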
Methods I haven't tried, but that make sense to me:
If you have a filesystem like ZFS or are running on LVM, it might be a good idea to snapshot the database by taking a filesystem snapshot, because snapshots are very, very quick. Just remember to ensure a consistent state of your DB during the operation, e.g. by doing FLUSH TABLES WITH READ LOCK (and, of course, don't forget UNLOCK TABLES afterwards).
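With LVM, for example, the sequence could look roughly like this (volume group, logical volume, and mount point are placeholders):

# in one mysql session, run FLUSH TABLES WITH READ LOCK and keep that session open
lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql-data
# back in the mysql session, run UNLOCK TABLES

# mount the snapshot read-only and copy the data directory off at leisure
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap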
Additionally:
You can use a master-slave setup to replicate your production server to either a different machine or a second instance on the same machine and do any of the above on the replicated copy, leaving your production machine alone. Instead of running it continuously, you can also fire up the slave at regular intervals, let it catch up on the binlog, and switch it off again.
I think MySQL Cluster and the enterprise-licensed version have more tools, but I have never tried them.
mysqlhotcopy is described inaccurately above: it only works with MyISAM, and it's not hot.
The problem with mysqldump is the time it takes to restore the backup (but it can be made hot if you have all InnoDB tables, see --single-transaction).
I recommend using a hot backup tool, like what is available in XtraBackup:
http://www.percona.com/docs/wiki/percona-xtrabackup:start
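A rough sketch of a basic XtraBackup run (the target directory is a placeholder; check the Percona docs for the options matching your server version):

# take a hot physical backup while MySQL keeps running
xtrabackup --backup --target-dir=/backups/full

# make the copied files consistent so they can be restored later
xtrabackup --prepare --target-dir=/backups/full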
Watch out if using mysqldump on large tables that use the MyISAM storage engine: it takes a read lock on each table while dumping it, which blocks writes (and reads can queue up behind them) and can take down busy sites for 5-10 minutes in some cases.
Using InnoDB, by comparison, you can get non-blocking backups (mysqldump --single-transaction uses a consistent snapshot), so this is not such an issue.
If you need to use MyISAM, a common strategy is to replicate to a second MySQL instance and do the mysqldump against the replicated copy instead.
Use the Export tab in phpMyAdmin. phpMyAdmin is a free, easy-to-use web interface for MySQL administration.
I think mysqldump is the proper way of doing it.