I have a huge MySQL InnoDB database (about 15 GB and 100M rows) on a Debian server.
I need to back up my database to another server every two hours, without affecting performance.
I looked at MySQL replication, but it isn't what I'm looking for, because I also want protection against problems the application itself could cause (replication would faithfully copy those over too).
What would be the best way of dealing with it?
Thank you very much!
I think you need incremental backups.
You can use Percona XtraBackup to take fast incremental backups. Note that this works only if your database uses InnoDB tables exclusively.
Refer to the documentation about how to create incremental backups:
http://www.percona.com/doc/percona-xtrabackup/howtos/recipes_ibkx_inc.html
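For instance, a minimal sketch of a two-hourly incremental cycle with the xtrabackup binary (the directory paths are placeholders, and older releases drive this through the innobackupex wrapper instead):

# One-time full backup that serves as the base:
xtrabackup --backup --target-dir=/data/backups/base

# Every two hours, record only the pages changed since the base:
xtrabackup --backup --target-dir=/data/backups/inc1 \
    --incremental-basedir=/data/backups/base

Each incremental directory then has to be prepared against the base before it can be restored; the recipe linked above walks through that.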
Have you looked at writing a script that uses mysqldump to dump the contents of the DB, transfers it to the backup server (piping it through SSH would work), and loads it via the command line?
There are options so that mysqldump won't lock the tables (notably --single-transaction for InnoDB) and so won't degrade performance too much.
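A minimal sketch of such a script, assuming all-InnoDB tables, SSH access, and placeholder host and database names:

#!/bin/sh
# Dump from a consistent snapshot without locking (InnoDB only),
# compress in transit, and load straight into the backup server.
# Assumes the mydb database already exists on the backup server.
mysqldump --single-transaction --quick mydb \
    | gzip \
    | ssh backup@backup-host 'gunzip | mysql mydb'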
I am handling a few databases that are growing very fast. The total is around 12 GB now and will cross 15 GB in the next few months. At the moment I only have the traditional backup process, mysqldump, running in a cron job, and I see significant delays in backup and restore time (hours, even days) for the whole database. Knowing that physical backups are much faster than logical ones, I found these two, but cannot use them because of some limitations and company policy:
MySQL Enterprise Backup: which is commercial.
Percona XtraBackup: Percona built this tool specifically for Linux environments, and not all companies will agree to use third-party tools, even open-source ones.
Please suggest a better backup and restore mechanism for a big database, especially when the databases are a mix of InnoDB and MyISAM. Looking for some good suggestions on snapshots.
I have personally used the Percona XtraBackup toolset. Once it finishes copying the normal data files, it applies the InnoDB log so the backup contains all the committed data as of the end of the copy. From what I've read, MySQL Enterprise Backup can do this as well.
Incremental backups/snapshots can be made with both Percona XtraBackup and MySQL Enterprise Backup, if that's what you're looking for.
Using a read-only slave keeps the read locks off your master: if you back up with the free MySQL tools against the slave, the locks are held there instead of on the production server.
Another option, if you have a slave, is to shut the slave down and rsync its data somewhere for backup.
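A rough sketch of that, run on the slave (the host and paths are placeholders, and credentials are assumed to come from ~/.my.cnf):

# Stop MySQL cleanly so the raw files are consistent on disk:
mysqladmin shutdown
rsync -a /var/lib/mysql/ backup-host:/backups/mysql-$(date +%F)/
# Bring the slave back up; replication will catch up with the master:
service mysql start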
First I would try "--single-transaction" with mysqldump, as long as all your tables are InnoDB and you aren't modifying table structure during the dump. This dumps from a consistent snapshot (a single point in time) without locking your tables.
I haven't tried xtrabackup but it looks like that will do what you want as well.
You could also try setting up mysql replication and dumping from the slave.
Lastly, LVM snapshots, but those take a bit more work.
I'm using mysqldump to backup my database. Since the database and webserver are on the same machine, the mysqldump takes all the CPU and the site 'goes down' until the mysqldump finishes.
Is the solution to move database to another machine and do backups on that machine? Are there other alternatives?
It might be a little too much, but I'd suggest using replication.
MySQL offers master-slave replication. This gives you an identical (read-only) copy of the DB on another machine at all times, and it doesn't make your machine work too hard at any one point, since the copying happens continuously.
It's also pretty easy to set up. You can read more about it here:
the replication section of the MySQL documentation
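A bare-bones sketch of what the setup involves (the server IDs, replication user, and log coordinates are placeholders; the linked docs cover the full procedure):

# In the master's my.cnf:
[mysqld]
server-id = 1
log-bin   = mysql-bin

# In the slave's my.cnf:
[mysqld]
server-id = 2

# Then, on the slave, point it at the master and start replicating:
mysql -e "CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='repl',
    MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4; START SLAVE;"

You also have to seed the slave with an initial copy of the data before starting replication.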
I use MySQL Administrator from the old MySQL GUI Tools to create backups from my website to my PC.
A ~90 MB backup takes less than 2 minutes.
If you want smooth backups (without interfering with the production system), master-slave replication is a very good way to do it. However, you might not want to reserve a server for a backup slave, and indeed mysqldump uses a lot of resources.
You can try Percona XtraBackup, which is an open-source tool. It works at the file system level and is much faster than mysqldump. http://www.percona.com/doc/percona-xtrabackup/ You can even try it on your current setup, as it doesn't put any locks on tables.
I maintain a big MySQL database. I need to back it up every night, but the DB is active all the time; there are queries from users.
Right now I just disable the website and then do the backup, but this is very bad, as the service is down and users don't like it.
What is a good way to backup the data if data is changed during the backup?
What is best practice for this?
I've implemented this scheme using a read-only replication slave of my database server.
MySQL database replication is pretty easy to set up and monitor. You can set it up so the slave receives all changes made to your production database, then take the slave offline nightly to make a backup.
The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.
There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
Here's a link to the docs on MySQL Replication.
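As an illustration, the nightly job on the slave could look roughly like this (the paths are placeholders):

# Freeze replication so the data doesn't change underneath the dump:
mysql -e 'STOP SLAVE;'
mysqldump --all-databases --quick | gzip > /backups/nightly-$(date +%F).sql.gz
# Resume; the slave replays everything the master did in the meantime:
mysql -e 'START SLAVE;'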
If you have a really large (50 GB+, like me) MyISAM-only MySQL database, you can use locks and rsync. According to the MySQL documentation, you can safely copy the raw files while a read lock is active; you cannot do this with InnoDB.
So if the goal is zero downtime and you have spare disk space, create a script:
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
Then do the following:
1. Run FLUSH TABLES;
2. Run the script.
3. Run FLUSH TABLES WITH READ LOCK;
4. Run the script again.
5. Run UNLOCK TABLES;
On the first run, rsync copies the bulk of the data without stopping MySQL. The second run is very short and only delays write queries, so it is a real zero-downtime solution.
Do another rsync from /tmp/mysql/sync to a remote server, compress, keep incremental versions, anything you like.
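One caveat: FLUSH TABLES WITH READ LOCK only holds the lock while the client session that issued it stays open, so steps 3-5 have to happen inside a single mysql session. The client's system command shells out without closing the session, so the whole sequence can look like this:

mysql> FLUSH TABLES;
mysql> system rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
mysql> FLUSH TABLES WITH READ LOCK;
mysql> system rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
mysql> UNLOCK TABLES;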
This partly depends on whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own paid solution (InnoDB Hot Backup, which became MySQL Enterprise Backup), but there is an open-source alternative from Percona you may want to look at:
http://www.percona.com/doc/percona-xtrabackup/
What you want to do is called "online backup". Here's a pointer to a matrix of possible options with more information:
http://www.zmanda.com/blogs/?p=19
It essentially boils down to the storage backend that you are using and how much hardware you have available.
I want to copy some big tables, which are used very often, from the main server to a local server, in production mode of course.
Is it safe?
Any suggestions or tools are welcome. :)
I'm guessing from your mention of .MYI files that all of your tables are MyISAM tables and not InnoDB ones. Each MyISAM table is made up of three files, .frm, .MYD and .MYI, which contain the structure, data and indexes respectively.
People advise against copying these raw files from running systems, but I've found that as long as you're sure nothing is writing to the tables, copying them works fine (I've done this more times than I care to remember).
If you're doing this on a replica, just stop replication on the slave before copying. If it's a single server or the master, I'd recommend running FLUSH TABLES WITH READ LOCK before you start copying the files; this prevents any process from writing to the tables. When you're done, release the lock with UNLOCK TABLES.
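Bear in mind the lock is released as soon as the session that took it closes, so do the copy from inside the same mysql session, e.g. (database, table, and host names here are placeholders):

mysql> FLUSH TABLES WITH READ LOCK;
mysql> system scp /var/lib/mysql/mydb/bigtable.* user@local-server:/var/lib/mysql/mydb/
mysql> UNLOCK TABLES;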
I would always recommend running a CHECK TABLE on tables you've copied in this manner. I think this is what I've used in the past: mysqlcheck --all-tables --fast --auto-repair
If you're using LVM on the server then taking an LVM snapshot could be another way of getting hold of a clean snapshot.
If you're going to be doing this regularly, I'd recommend either using replication to keep the local server up to date, or setting up a dedicated slave that you can use for taking backups (since it's not the main database, there's no problem with stopping it, dumping tables, etc.).
Yes, it's all right, provided you:
shut the MySQL server down cleanly before you copy the files, or at least take a global read lock;
take the .frm, .MYI and .MYD files at the same time;
have an identical my.cnf and run the same MySQL version on each system.
So it's basically OK, but not necessarily a good idea.
If you have different versions of MySQL (you can generally only move to a more recent version, not an older one), or are running with a significantly different my.cnf (i.e. any of the full-text indexing parameters differ and you have full-text indexes), you may need to rebuild the table. Rebuilding can be done on the destination server with ALTER TABLE blah ENGINE=MyISAM; this might still be a bit faster than a mysqldump/restore.
What do I have to consider when backing up a database with millions of entries? Are there any tools (maybe bundled with the MySQL server) that I could use?
Depending on your requirements, there are several options that I have been using myself:
if you don't need hot backups, take down the DB server and back up on the file system level, i.e. using tar, rsync or similar.
if you do need the database server to keep running, you can start out with the mysqlhotcopy tool (a Perl script), which locks the tables that are being backed up and allows you to select single tables and databases.
if you want the backup to be portable, you might want to use mysqldump, which creates SQL scripts to recreate the data, but is slower than mysqlhotcopy.
if you have a copy of the DB at a certain point in time, you could also just keep the binlogs (starting at that point in time) somewhere safe. This can be very easy to do and doesn't interfere with the server's operation, but might not be the fastest to restore, and you have to make sure you don't miss part of the logs.
Methods I haven't tried, but that make sense to me:
if you have a filesystem like ZFS or are running on LVM, it might be a good idea to take a snapshot of the database via a filesystem snapshot, because they are very, very quick. Just remember to ensure a consistent state of your DB during the whole operation, e.g. by doing FLUSH TABLES WITH READ LOCK (and of course, don't forget UNLOCK TABLES afterwards); there's a sketch of this after the list.
Additionally:
you can use a master-slave setup to replicate your production server to either a different machine or a second instance on the same machine, and apply any of the above to the replicated copy, leaving your production machine alone. Instead of running it continuously, you can also fire up the slave at regular intervals, let it read the binlog, and switch it off again.
I think MySQL Cluster and the enterprise-licensed version have more tools, but I have never tried them.
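For the LVM route mentioned above, a rough sketch (the volume names and sizes are placeholders; the lock has to stay held in the session that creates the snapshot, which the mysql client's system command allows):

mysql> FLUSH TABLES WITH READ LOCK;
mysql> system lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
mysql> UNLOCK TABLES;

# With traffic flowing again, copy from the snapshot at leisure:
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ backup-host:/backups/mysql-$(date +%F)/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap

If I remember right, the mylvmbackup tool automates exactly this pattern.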
Mysqlhotcopy is badly described above: it only works if you use MyISAM, and it's not hot.
The problem with mysqldump is the time it takes to restore the backup (but it can be made hot if you have all InnoDB tables, see --single-transaction).
I recommend using a hot backup tool, like what is available in XtraBackup:
http://www.percona.com/docs/wiki/percona-xtrabackup:start
Watch out if using mysqldump on large tables with the MyISAM storage engine; it blocks selects while the dump is running on each table, and this can take down busy sites for 5-10 minutes in some cases.
Using InnoDB, by comparison, you get non-blocking backups because of its row-level locking, so this is not such an issue.
If you need to use MyISAM, a common strategy is to replicate to a second MySQL instance and do the mysqldump against the replicated copy instead.
Use the Export tab in phpMyAdmin. phpMyAdmin is a free, easy-to-use web interface for MySQL administration.
I think mysqldump is the proper way of doing it.