Are docker-hosted databases somehow exempt from backup best practices? - mysql

As far as I was aware, for MS SQL, PostgreSQL, and even MySQL databases (so, I assumed, in general for RDBMS engines), you cannot simply back up the file system they are hosted on, but need to do an SQL-level backup to have any hope of internal consistency and therefore ability to actually restore.
But then answers like this and indeed the official docs referenced seem to suggest that one can just tar away on database data:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
These two ideas seem at odds with one another. Is there something special about how Docker works that makes it unnecessary to use SQL-level backups? If not, what am I missing in my understanding? (Why is something used as the official example when you can't use it to back up a production database? That can't be right...)

Under certain circumstances, it should be safe to use the image of a database on a disk:
The database server is not running.
All persistent data is on the disk system(s) being backed up (logs, tablespaces, temporary storage).
All components are restored together.
You are restoring the image to the same server on the same path.
The last condition is important, because some aspects of the database configuration may be stored in operating system files.
Whenever the server is running, you need to do the backup through the database itself. The server is responsible for the internal consistency of the data, and a disk image taken while it runs may not be complete or recoverable. If the server is not running, then the state of the database in persistent storage should be consistent.
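For example, under those conditions a file-level backup of a Docker volume can be done safely by stopping the container first. A minimal sketch, reusing the dbdata name from the question and assuming it is the container actually running the database server:

docker stop dbdata
docker run --rm --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
docker start dbdata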

Related

What are the best practices for MySQL backup

We have one PHP application and a MySQL server running on one of our production servers.
The MySQL database is currently 4 GB, with the intention to grow to tens or even hundreds of GB.
What I am curious to find out is: what are the best practices for backing up a MySQL database, given that the application must stay live under all circumstances? Which is better: a MySQL replication server on which we run backup scripts, or running them on the live server? Which is more likely to slow things down? We have the possibility to add additional server(s) if needed. Where should I store the MySQL dumps? Is it suggested to FTP-copy the MySQL backup files to a remote server?
What is the best practice for organizing web application backups if the number of server instances is not a problem?
MySQL backup methods are documented in the MySQL documentation.
The ideal backup solution is to use MySQL Enterprise Backup, a licensed product sold in the Oracle store. It is very fast compared to mysqldump.
MySQL Enterprise Backup: A licensed product that performs hot backups of MySQL databases. It offers the most efficiency and flexibility when backing up InnoDB tables, but can also back up MyISAM and other kinds of tables.
If you are looking for a free solution with MySQL Community Edition, you can install another replication server and either run mysqldump to take a backup or make a raw data backup. During a backup on your replication server, your main master database keeps running. Since your data is big or will get bigger, it is recommended to back up the raw data files; this is basically a process of copying the data and log files from disk, as sketched after the quotation below. Details are explained in the MySQL documentation.
For larger databases, where mysqldump would be impractical or inefficient, you can back up the raw data files instead. Using the raw data files option also means that you can back up the binary and relay logs that will enable you to recreate the slave in the event of a slave failure.
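As a rough sketch of that raw-copy approach on a replica (host names, paths, and the service name are assumptions; STOP REPLICA is the modern spelling of STOP SLAVE):

mysql -e "STOP REPLICA;"        # pause replication so the data files stop changing
systemctl stop mysql            # stop the replica entirely for a consistent copy
tar czf /backup/mysql-raw-$(date +%F).tar.gz /var/lib/mysql
systemctl start mysql           # replication normally resumes on restart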
Finally, you should copy the backup files to another physical disk on the same server to recover from disk failures, or to another physical server to easily recover from complete server failures.
Replication protects against hardware errors, for example, a hard disk crash.
Backups protect against software errors, for example, data deleted from a table by human error.
It is definitely good practice to combine both of these technologies by running the backup utility on a replica. This not only reduces the load on the production database, but also covers more recovery scenarios.
In case of a hardware error, you can restore the most up-to-date data from the replica, and in case of data corruption, you can decide from which date to use a backup for recovery. And if both the main server and the replica fail, the backup will still save you.
What is the best way to make backups?
mysqldump is a good solution for small databases. It is a utility for creating logical backups and is included with MySQL Server. As output, the utility creates a .sql file that recreates the database.
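A minimal sketch, assuming a database named mydb and credentials configured elsewhere (e.g. in ~/.my.cnf):

mysqldump --single-transaction --routines --triggers mydb > mydb-$(date +%F).sql   # consistent snapshot for InnoDB tables
mysql mydb < mydb-$(date +%F).sql                                                  # restore by replaying the .sql file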
For large databases, it is better to use a physical backup. There are two ways to do it.
mysqlbackup is a utility included with the MySQL Enterprise solution. As a result, you get a binary file. Such a backup is created much faster than with mysqldump and puts less load on the server.
xtrabackup, from Percona, is a lot like the MySQL Enterprise backup utility, but it's free. A more detailed comparison can be found here.
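A sketch of a full physical backup with XtraBackup (the target directory is an assumption; older releases use the innobackupex wrapper instead):

xtrabackup --backup --target-dir=/backup/full    # copy data files while the server runs
xtrabackup --prepare --target-dir=/backup/full   # apply the redo log so the copy is consistent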
How often should backups be made?
The more often you make backups, the better, but you cannot keep every backup forever, since you will run out of space in the backup storage. There are two ways around this:
Find a compromise between the frequency of backups and the duration of storage.
Use incremental backups. The utilities above support incremental backups, but managing such backups is more complicated (read more here); see the sketch below.
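Building on the full XtraBackup run sketched above, an incremental backup only copies pages changed since the base backup (directories are assumptions):

xtrabackup --backup --target-dir=/backup/inc1 --incremental-basedir=/backup/full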
Where should backups be stored?
Anywhere you prefer, but not in the same place as the MySQL server. Overall, I think cloud storage is a good choice; almost every provider today offers a command-line interface.
How to automate a backup?
The process of creating regular backups should be automated, and a person should intervene in it only in case of failure. A good backup process should include the following steps:
Creating the backup copy
Compression/encryption
Uploading to storage
Sending a success/failure notification
Removing old backups from the storage (so that it does not overflow)
The simplest script that implements this can be found, for example, here.
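A minimal sketch of such a script, with the paths, bucket name, and address all placeholders:

#!/bin/sh
set -e
STAMP=$(date +%F)
# 1. create and compress the backup
mysqldump --single-transaction --all-databases | gzip > /backup/all-$STAMP.sql.gz
# 2. upload to remote storage (bucket name is hypothetical)
aws s3 cp /backup/all-$STAMP.sql.gz s3://my-db-backups/
# 3. send a notification
echo "backup $STAMP uploaded" | mail -s "MySQL backup OK" admin@example.com
# 4. remove old local copies so the storage does not overflow
find /backup -name '*.sql.gz' -mtime +14 -delete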
Something else?
Yes, the most important thing is not creating a backup, but being able to restore it. Therefore, it is best practice to regularly test your recovery scenarios.
Happy backups!
What is better, to have a MySQL replication server on which we will run backup scripts, or to run them on the live server?
It depends on your db size (and time needed to dump it using mysqldump) and your reliability requirements.
If your db is relatively small and mysqldump dumps it in seconds or a few minutes, then it's OK to just run scheduled backups. For most cases it is sufficient to have a daily backup that runs at a time when your app is mostly idle (at night, when your clients are sleeping). You can use the nice tool automysqlbackup for that: it takes care of scheduling and backup rotation; all you need to do is add it as a cron task and set up its config once.
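For instance, a nightly cron entry along these lines (the schedule and path are assumptions; the Debian package normally wires this up for you via cron.daily):

0 3 * * * /usr/sbin/automysqlbackup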
Setting up a replica is only needed if:
Your backup takes a long time (dozens of minutes or hours) to complete, so you cannot just stop your service for that long.
You cannot afford losing any history in case of a main db crash. E.g., if you process financial transactions, you may want to ensure that nothing is lost if the master db server dies.
In these cases you may want a replica with backups. Though you must understand that adding replication adds a new layer of problems: replicas may go out of sync, silently crash (and you will not notice, as the master and your app keep running fine), etc.

Using VM to backup MySQL database

Is it correct to use a backup of a VM as a means of restoring a MySQL database?
Are there any dangers in doing this?
My own feeling is that a VM backup/snapshot is at the OS level, not the DB level, and therefore may not back up the database in the correct way. Has anybody any advice on this?
It's perfectly fine as long as you do one of two things:
Ensure consistency of the tables, either by shutting down the database or by using something like FLUSH TABLES WITH READ LOCK while taking the snapshot (you probably don't want to do this), or
Use a transactionally-safe storage engine such as InnoDB (the default) for all tables that are likely to change around the time of the snapshot, and rely on its ability to recover from what looks like a crashed state, i.e. the copy of a running server.
Once you realise that taking a snapshot of a running VM and booting the snapshot on another machine looks just like pulling the plug on that server and rebooting it, your choice becomes relatively easy: Make sure the system can recover from pulling the plug, and it can recover from a VM snapshot backup.
Based on a recommendation from Jeff Hunter posted on the VMware blog, the answer is no, it's not safe to rely on snapshots for MySQL backups. His recommendation is basically to dump the db through a separate process (and then let the snapshot copy the dump).

Moving mysql files across servers

I have a massive MySQL database (around 10 GB), and I need to copy it to a different server (Slicehost). I don't want to do a DB dump and reimport because I think that would take forever. Is it possible to just move the raw data files from one machine to the next, set up an identical MySQL server, and flip the switch?
Generally, yes. It's preferable to have the same underlying architecture and server version, but those aren't critically necessary. Make sure you stop the source server so that the raw files are a consistent copy.
I do this all the time when overwriting my dev database. We have backups on a replica that are made by tarring up /var/mysql while the server is stopped. I move those to another machine, overwrite the InnoDB log and data files (ib_logfile*, ibdata), then overwrite all the directories in the data directory except for mysql and test.
It should work.
This is the principle that the mysqlhotcopy tool uses, although this tool is meant to be run while the server is operating.
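A rough sketch of the move itself, assuming the same MySQL version on both machines and a hypothetical destination host:

systemctl stop mysql                                     # source: stop so the files are a consistent copy
rsync -a /var/lib/mysql/ root@newserver:/var/lib/mysql/  # copy the raw data files
systemctl start mysql
# then on newserver:
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql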
You don't have a "massive" database; you have a smallish database at 10 GB. So dump/restore should not be a problem.
Copying the files directly might work in a subset of circumstances, but dump/restore is much better (i.e. less chance of problems).
Clearly, try it on a non-production system with the same version(s) of mysql and data size first to ensure that it's going to work on production.

Is copying the /var/lib/mysql directory a good alternative to mysqldump?

Since I'm making a full backup of my entire Debian system, I was wondering whether having a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump.
Is all the information needed contained in that directory?
Can single tables be imported into another MySQL server?
Can there be problems when restoring those files on a (probably slightly) different MySQL server version?
Yes
Yes, if the table uses the MyISAM engine (the default before MySQL 5.5). Not if it uses InnoDB.
Probably not, and if there are, you just need to execute mysql_upgrade to fix them.
To avoid getting databases in an inconsistent state, you can either shut down MySQL or use LOCK TABLES followed by FLUSH TABLES before the backup. The second solution is a little better because the MySQL server remains available during the backup (albeit read-only).
This approach is only going to work safely if you shut the database down first. Otherwise you could well end up in an inconsistent state afterwards. Use the /etc/init.d/mysql stop command first. You can then restart it after the backup is taken.
It's perfectly OK as long as you shut down the MySQL server first and use exactly the same version to read the "backup" back. Otherwise it isn't.
For a complete discussion of the two strategies, you need to read this: https://dev.mysql.com/doc/refman/5.5/en/backup-types.html
The currently best free and open-source solution seems to be Percona's: http://www.percona.com/software/percona-xtrabackup
I'll go with a strong NO.
From my experience, backing up and restoring raw MySQL data files can only be done on the same OS and server version. It does not work cross-platform (e.g., Ubuntu to macOS) even with the same server version, nor across different MySQL server versions on the same platform.
Percona XtraBackup (innobackupex) from the Percona MySQL distribution will let you do live and incremental MySQL backups and produces backup files that can be restored by copying them to /var/lib/mysql/. (Despite the name, XtraBackup also works with stock MySQL, not only Percona Server for MySQL.)

How do I backup a MySQL database?

What do I have to consider when backing up a database with millions of entries? Are there any tools (maybe bundled with the MySQL server) that I could use?
Depending on your requirements, there are several options that I have been using myself:
if you don't need hot backups, take down the db server and back up at the file system level, e.g. using tar, rsync, or similar.
if you do need the database server to keep running, you can start out with the mysqlhotcopy tool (a Perl script), which locks the tables being backed up and allows you to select single tables and databases.
if you want the backup to be portable, you might want to use mysqldump, which creates SQL scripts to recreate the data, but which is slower than mysqlhotcopy.
if you have a copy of the db at a certain point in time, you could also just keep the binlogs (starting at that point in time) somewhere safe. This can be very easy to do and doesn't interfere with the server's operation, but might not be the fastest to restore, and you have to make sure you don't miss part of the logs.
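For the binlog approach, replaying the saved logs on top of the old copy might look like this sketch (file names and the cut-off time are placeholders):

mysqlbinlog --stop-datetime="2015-06-01 12:00:00" binlog.000042 binlog.000043 | mysql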
Methods I haven't tried, but that make sense to me:
if you have a filesystem like ZFS or are running on LVM, it might be a good idea to snapshot the database by doing a filesystem snapshot, because these are very, very quick. Just remember to ensure a consistent state of your db during the whole operation, e.g. by doing FLUSH TABLES WITH READ LOCK (and of course, don't forget UNLOCK TABLES afterwards), as sketched below.
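A sketch of that snapshot dance with LVM (volume names are assumptions; the mysql client's system command runs the snapshot while the same session still holds the lock):

mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF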
Additionally:
you can use a master-slave setup to replicate your production server to either a different machine or a second instance on the same machine, and apply any of the above to the replicated copy, leaving your production machine alone. Instead of running it continuously, you can also fire up the slave at regular intervals, let it read the binlog, and switch it off again.
I think MySQL Cluster and the enterprise-licensed version have more tools, but I have never tried them.
mysqlhotcopy is described inaccurately above - it only works if you use MyISAM, and it's not hot.
The problem with mysqldump is the time it takes to restore the backup (but it can be made hot if you have all InnoDB tables, see --single-transaction).
I recommend using a hot backup tool, like what is available in XtraBackup:
http://www.percona.com/docs/wiki/percona-xtrabackup:start
Watch out if using mysqldump on large tables that use the MyISAM storage engine; it blocks SELECTs on each table while that table is being dumped, and this can take down busy sites for 5-10 minutes in some cases.
Using InnoDB, by comparison, you get non-blocking backups because of its row-level locking, so this is not such an issue.
If you need to use MyISAM, a common strategy is to replicate to a second MySQL instance and do the mysqldump against the replicated copy instead.
Use the export tab in phpMyAdmin. phpMyAdmin is a free, easy-to-use web interface for MySQL administration.
I think mysqldump is the proper way of doing it.