Percona XtraBackup gets a lot of praise, from what I can see, but I find it incredibly frustrating. I'm using:
root@GR-00258:~# xtrabackup --version
xtrabackup version 2.4.9 based on MySQL server 5.7.13 Linux (x86_64) (revision id: a467167cdd4)
I can create backups of one or more individual databases without problems, but there doesn't appear to be any way to restore them. The only way I have found is to restore them as a full backup into an empty /var/lib/mysql, which means mysqld won't start up, of course. It seems like a remarkably poor tool for restores - what is the purpose of being able to make a backup of individual databases if they cannot be restored?
Enough ranting - is there a way to get this to work, or am I just wasting my time? I know I can fall back to mysqldump, but that is not an attractive option when the databases are ~500 GB to ~1 TB.
Percona XtraBackup allows you to make a physical backup of a MySQL datadir without blocking access for clients. It's significantly faster than mysqldump to make a backup.
Restoring is also very fast. All you have to do is copy the backup files to a new datadir (after doing the prepare step, which you could do at the time you create the backup).
When restoring a full backup, you can't do that when the destination MySQL Server is running. You must shut down mysqld then copy the files into place, make sure the files have the right ownership and permissions, then start mysqld.
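For example, a full backup-and-restore cycle with xtrabackup 2.4 looks roughly like this (a minimal sketch; /data/backups/full and the credentials are placeholders):

# 1. Take the backup while the server is running
xtrabackup --backup --user=root --password=SECRET --target-dir=/data/backups/full

# 2. Prepare it (apply the redo log; can be done right after the backup)
xtrabackup --prepare --target-dir=/data/backups/full

# 3. Restore: mysqld must be stopped and the datadir must be empty
service mysql stop
xtrabackup --copy-back --target-dir=/data/backups/full

# 4. Fix ownership, then start the server
chown -R mysql:mysql /var/lib/mysql
service mysql start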
It's not easy to import selected tables or schemas without overwriting. But it is possible:
You must use the --export option when you prepare the backup. Then you can import individual tablespaces to an existing MySQL datadir. But you must do this one tablespace at a time, unfortunately. There's no way to do it in one step for all tables in a schema. You should be able to write a script to do that.
See a complete example of importing tablespaces from a backup here: https://www.percona.com/doc/percona-xtrabackup/8.0/xtrabackup_bin/restoring_individual_tables.html
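A rough sketch of that per-table flow (mydb.t1 is a hypothetical table; the destination table must already exist with the same definition, and innodb_file_per_table must be enabled):

# Prepare the backup with --export to generate the per-table .cfg files
xtrabackup --prepare --export --target-dir=/data/backups/full

# On the destination server, detach the existing tablespace
mysql -e "ALTER TABLE mydb.t1 DISCARD TABLESPACE"

# Copy the exported table files into the destination datadir
cp /data/backups/full/mydb/t1.ibd /data/backups/full/mydb/t1.cfg /var/lib/mysql/mydb/
chown mysql:mysql /var/lib/mysql/mydb/t1.*

# Attach the imported tablespace
mysql -e "ALTER TABLE mydb.t1 IMPORT TABLESPACE"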
Related
I'm looking for some insight into whether rsyncing a copy of the data folder from MariaDB while it is running in Docker will provide a usable backup. I'm deploying several containers with mapped folders in a production environment using Docker.
I'm thinking of using rsnapshot for nightly backups, as it uses hardlinks incrementally and I can specify the number of weekly/daily/monthly copies to keep. For the code and actual files I suspect this will work wonderfully.
For MariaDB I could run mysqldump every night, but this would essentially take a new copy of the database each time instead of an incremental one. If I could rsync the data folder and be 100% sure the backup would be fully intact, that would be advantageous, I presume. Is there any chance this backup method would fail if data was written during the rsync? Would all the files inside of MariaDB change with daily usage (it wouldn't be advantageous if so)?
This is probably a frequent question, but I can't find a really exact match right now.
The answer is NO — you can't use filesystem-level copy tools to back up a MySQL database unless the mysqld process is stopped. In a Docker environment, I would expect the container to stop if the mysqld process stops.
Even if there are no queries running, the InnoDB engine is probably doing writes in the background to flush pages from memory into the tablespace, clean up rolled-back transactions, or finish some deferred index merges.
If you try to use rsync or cp or any other filesystem-level tools to copy InnoDB files, you will only get corrupted files that can't be restored.
Some people use LVM snapshots to get an atomic snapshot of the whole filesystem as of a single instant, and this can be used to get quick backups.
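A rough sketch of that approach, assuming the datadir lives on a logical volume /dev/vg0/mysql (all names and sizes here are placeholders):

# Hold the global read lock only for the instant the snapshot is created
mysql> FLUSH TABLES WITH READ LOCK;
mysql> system lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
mysql> UNLOCK TABLES;

# Mount the snapshot and copy the files at leisure, then drop it
mount /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ /backups/mysql-snapshot/
umount /mnt/snap
lvremove -f /dev/vg0/mysql-snap

Note that a copy restored from such a snapshot still goes through InnoDB crash recovery on startup, as if the server had lost power at that instant.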
Another useful tool is Percona XtraBackup, which copies the InnoDB tablespace files while it is also copying the InnoDB transaction log continually. Only with both of these in sync can the backup be restored. Read the documentation here: https://www.percona.com/doc/percona-xtrabackup/LATEST/index.html
At my current job, we use Percona XtraBackup to make nightly backups for thousands of MySQL instances. We run Percona Server (not MariaDB) in Docker pods, and Percona XtraBackup runs as another container in the pod. It works very well, and it's free, open-source software.
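As a sketch of that sidecar pattern (the container name, credentials, and mount points here are assumptions, not a recipe): the backup container shares the database container's volumes and network namespace so xtrabackup can reach both the datadir and the server.

docker run --rm \
    --volumes-from mysql1 \
    --network container:mysql1 \
    -v /backups:/backups \
    percona/percona-xtrabackup:2.4 \
    xtrabackup --backup --host=127.0.0.1 --user=root --password=SECRET \
        --datadir=/var/lib/mysql --target-dir=/backups/full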
I have been working on a project that involves MySQL and PHP. While creating a package for testing, I exported the MySQL database as an SQL file from MySQL Workbench and imported it into a Linux machine's MySQL server with
mysql> source mydatabase.sql;
Is this the correct way to export and import a MySQL database? The file contains the scripts for creating the schema, inserting the data, and creating the indexes, and importing it takes a long time. My immediate manager suggests exporting without the indexes, importing the database, and then executing the index-creation scripts afterwards. Is that the correct way? Do indexes take a long time during an import?
Thanks in advance!
Percona Server comes with a modified mysqldump tool so you can choose the --innodb-optimize-keys option. This exports the tables, data, and index definitions like stock mysqldump, but during import it defers creation of secondary indexes until after the data has been loaded into the table. This takes advantage of InnoDB's fast index creation feature.
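For instance, assuming Percona Server's patched mysqldump is installed (mydb is a placeholder):

# Dump with secondary-index definitions moved after the row data
mysqldump --innodb-optimize-keys mydb > mydb.sql

# On import, each table is loaded first and its secondary indexes are then
# built in one pass via InnoDB's fast index creation
mysql mydb < mydb.sql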
Another option is to use a physical backup tool like Percona XtraBackup (as user @Up_One mentioned). This means that the full data files are backed up, including the indexes. On restore, nothing needs to be rebuilt, because the indexes are already populated. There are pros and cons to physical backup tools too, but restore time is a big advantage.
Export depends on the MySQL version/vendor you are running:
MySQL (older versions) - mysqldump
MySQL (Percona) - XtraBackup
MySQL (Enterprise Edition, 5.6) - MySQL Enterprise Backup, which is more complex and much faster
The fastest way is to avoid mysqldump. I like Percona more than the others!
Another way, though more advanced I think, is to create the DB on the destination server and copy the database files directly. This assumes the user has access to the destination server's MySQL directory:
As the root user:
[root@server-A]# /etc/init.d/mysqld stop
[root@server-A]# cd /var/lib/mysql/[databasename]
[root@server-A]# scp * user@otherhost:/var/lib/mysql/[databasename]
[root@server-A]# /etc/init.d/mysqld start
Important things here: stop mysqld on both servers before copying the DB files, and make sure the file ownership and permissions are correct on the destination before starting mysqld there.
[root@server-B]# chown mysql:mysql /var/lib/mysql/[databasename]/*
[root@server-B]# chmod 660 /var/lib/mysql/[databasename]/*
[root@server-B]# /etc/init.d/mysqld start
With time being your priority here, the use of compression will depend on whether the time lost waiting for compression/decompression (with something like gzip) will be greater than the time wasted transmitting uncompressed data; that is, the speed of your connection.
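As a sketch, using the same placeholder paths as above, the two compressed variants would be:

# Compressed in transit via scp's built-in compression
scp -C * user@otherhost:/var/lib/mysql/[databasename]

# Or stream a gzipped tarball through ssh
tar czf - -C /var/lib/mysql [databasename] | ssh user@otherhost 'tar xzf - -C /var/lib/mysql'

On a fast LAN the plain copy usually wins; on a slow WAN link the compressed variants usually do.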
I prefer to use mysqldump! (Check this: http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html)
Good day to you.
I was running my MySQL server with the "innodb_file_per_table" option, and now the server has crashed. I want to recover it this way:
Uninstall old MySQL
Install new MySQL
Add "innodb_file_per_table" in MySQL configuration
Copy my database folders (only mine, not the mysql system schema) from the old MySQL/data to the new MySQL/data
In every folder I have two files per table, .frm and .ibd, and it looks like these files hold all the data from my database tables.
But after copying, the tables in these databases didn't work: when I try to open a table, I get the error "Table xxx doesn't exist in engine".
I tried the REPAIR command, but it didn't help.
If you know how to finish this recovery approach, please help.
...I know that I should copy ibdata1 as well, but it looks too far gone to recover, which is why I'm trying this approach instead.
REPAIR command won't help with InnoDB.
If you're lucky enough, the best you can do is:
1. Start MySQL with innodb_force_recovery=4 (try values 5 and 6 if InnoDB fails to start). Make sure innodb_purge_threads=0.
2. Dump the database with the mysqldump tool. Yes, it may be slow, but there is no other choice.
3. Create a new, empty InnoDB table space and reload the dump.
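As a sketch, steps 1 and 2 might look like this in practice (section name and paths follow standard defaults):

# /etc/my.cnf - temporary settings for forced-recovery mode
[mysqld]
innodb_force_recovery = 4
innodb_purge_threads = 0

# Then dump everything while the server is up in this crippled mode
mysqldump --all-databases > all-databases.sql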
If MySQL fails to start even with innodb_force_recovery=6, then recovery from backups is the only option. Well, you can still fetch records from the *.ibd files, but that is a tedious job - google for Percona's data recovery tool.
UPDATE: Data recovery toolkit moved to GitHub
You need to copy everything, not only the per-database folders.
For example, without the ibdata1 file, MySQL doesn't know where the tables are stored.
https://serverfault.com/questions/487159/what-is-the-ibdata1-file-in-my-var-lib-mysql-directory
What is the best method to do a MySQL backup with compression? Also, how do you dump it to a specific directory, such as C:\targetdir?
The mysqldump command will output CREATE TABLE and INSERT statements that are sufficient to recreate your whole database. You can back up individual tables or databases with this command.
You can easily compress this. If you want it to be compressed as it goes, you will need some sort of streaming tool for the command line. On UNIX it would be mysqldump ... | gzip. On Windows, you will have to find a tool that works with pipes.
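For example, on UNIX (mydb is a placeholder; --single-transaction gives a consistent snapshot for InnoDB tables):

mysqldump --single-transaction mydb | gzip > /targetdir/mydb.sql.gz

On Windows, you can at least redirect the dump into the target directory and compress it afterwards:

mysqldump mydb > C:\targetdir\mydb.sql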
This, I think, is what you are looking for. I will list the other options as well, just for completeness.
FLUSH TABLES WITH READ LOCK will flush all data to disk and block changes, which lets you make a copy of the data folder while the lock is held.
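A minimal sketch of that procedure; note that the lock is released as soon as the client session that took it disconnects, so that session must stay open for the duration of the copy:

# Session 1: take the global read lock and keep this session open
mysql> FLUSH TABLES WITH READ LOCK;

# Session 2 (a separate shell): copy the data folder while the lock is held
cp -a /var/lib/mysql /backups/mysql-copy

# Session 1 again: release the lock once the copy has finished
mysql> UNLOCK TABLES;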
Keep in mind, when doing restores, that if you want to preserve the full capability of MySQL binlogs, you will not want to restore parts of a database by touching the files directly. The best option is to have an alternate datadir with the restored files, dump from there, and then feed the dump into your production database through regular MySQL connection channels. Any direct changes to the filesystem will not be recorded by the binlogs.
If you restore the whole database from the files, you will be OK. Just not if you do it in pieces.
mysqldump does not have this problem.
Replication will allow you to back up to another instance of MySQL running on the same or different machine.
Binlogs. Given a static copy of a database, you can use these to move it forward in time. Binlogs are a log of all the commands that ever changed the data. If you have binlogs back to day one, then you may already have what you are looking for: you can replay all the commands from the binlogs, from day one up to any date you wish, and you then have a copy of the database as of that date.
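For example, to roll a restored copy forward to a chosen moment (the log paths and the date are placeholders):

# Replay the binlogs in order, stopping at the desired point in time
mysqlbinlog --stop-datetime="2017-03-01 12:00:00" \
    /var/log/mysql/mysql-bin.000001 /var/log/mysql/mysql-bin.000002 | mysql -u root -p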
I recommend checking out Percona XtraBackup. It's a GPL-licensed alternative to MySQL's paid Enterprise Backup tool and can create consistent, non-blocking backups of databases even while they are being written to. See this article for more information on why you'd want to use this over mysqldump.
You could use a script like AutoMySQLBackup, which automatically takes a backup every day and retains daily, weekly and monthly copies, keeping your backup directory clean and uncluttered while still giving you a long history of backups.
The backups are also compressed, naturally.
Since I'm making a full backup of my entire Debian system, I was wondering whether keeping a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump.
Is all the information needed contained in that directory?
Can single tables be imported into another MySQL instance?
Can there be problems restoring those files on a (probably slightly) different MySQL server version?
Yes
Yes if the table is using the MyISAM (default) engine. Not if it's using InnoDB.
Probably not, and if there are, you just need to execute mysql_upgrade to fix them
To avoid getting databases in an inconsistent state, you can either shut down MySQL or use FLUSH TABLES WITH READ LOCK before the backup. The second solution is a little better because the MySQL server remains available during the backup (albeit read-only).
This approach is only going to work safely if you shut the database down first. Otherwise you could well end up in an inconsistent state afterwards. Use the /etc/init.d/mysql stop command first. You can then restart it after the backup is taken.
It's perfectly OK as long as you shut down the MySQL server first and use exactly the same version to read the "backup" back. Otherwise it isn't.
For a complete discussion of the 2 strategies, you need to read this: https://dev.mysql.com/doc/refman/5.5/en/backup-types.html
The currently best free and open-source solution seems to be Percona's: http://www.percona.com/software/percona-xtrabackup
I'll go with a strong NO.
From my experience, backing up and restoring raw MySQL data files only works on the same OS and server version. It does not work across platforms (e.g. Ubuntu to macOS) even with the same server version, nor across different server versions on the same platform.
Percona XtraBackup (innobackupex) will let you take live and incremental MySQL backups and produces backup files that can be restored by copying them into /var/lib/mysql/. Despite coming from Percona, it works with stock MySQL as well as Percona Server.