Data changes during a database backup - MySQL

I have a question about MySQL database backups:
1. At 8:00:00 AM, I back up the database using the mysqldump command. It sometimes takes 5 seconds to finish.
2. While the backup is in progress (at 8:00:01 AM), someone makes some changes to the DB.
Will the backup contain the data changes from step 2?
I have googled but not found an explanation yet. Please help me!

Percona provides a free tool for this purpose called xtrabackup.
For InnoDB tables, MySQL uses log files to record DML commands, so that commands can be rolled back (among other things).
You can back up your database in a non-locking way, and after the backup is created you can apply the logs, so that you end up with a backup that reflects the database state at the moment the backup finished. I don't know the commands off the top of my head, sorry; you'll have to look them up in the documentation.
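For reference, with a recent Percona XtraBackup release the two steps look roughly like this (the target directory is a placeholder, and older releases use the innobackupex wrapper instead):
xtrabackup --backup --target-dir=/data/backups/full
xtrabackup --prepare --target-dir=/data/backups/full
The --backup step copies the data files while the server keeps running, and the --prepare step applies the logs so the copy is consistent.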

It depends on your mysqldump command and which table the dump is on at the time of the update:
If you used the --single-transaction option, then, no, it will not contain the late change.
If you did not use that option, then:
If the table being updated has not been dumped yet, then yes, it will contain the change.
If the table being updated has already been dumped, then no, it won't.
Chances are, you don't want that change in your backup, because you would like your backup to be consistent to a point in time. Here is some more discussion about all those kinds of things:
How to obtain a correct dump using mysqldump and single-transaction when DDL is used at the same time?
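For example, a dump that is consistent as of the moment it started (for InnoDB tables) might look like this, where the user and database names are placeholders:
mysqldump --single-transaction -u backup_user -p mydb > /backups/mydb.sql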

Backup frequently a huge MySQL database

I have a huge MySQL InnoDB database (about 15 GB, 100M rows) on a Debian server.
I have to back up my database every two hours to another server, without affecting performance.
I looked at MySQL replication, but it is not quite what I am looking for, because I also want to protect against problems the application itself could cause.
What would be the best way of dealing with this?
Thank you very much!
I think you need incremental backups.
You can use Percona XtraBackup to make fast incremental backups. This works only if your database uses only InnoDB tables.
Refer to the documentation about how to create incremental backups:
http://www.percona.com/doc/percona-xtrabackup/howtos/recipes_ibkx_inc.html
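As a rough sketch (directories are placeholders; check the exact syntax for your installed version):
xtrabackup --backup --target-dir=/data/backups/base
xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base
The first command takes the full base backup; the second copies only the pages that changed since the base.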
Have you looked at writing a script that uses mysqldump to dump the contents of the DB, transfers it over to the backup server (piping it through SSH would work), and loads it via the command line?
There are options so that mysqldump won't lock the tables and so won't degrade performance too much.
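Something along these lines, assuming SSH key authentication is set up and credentials live in each machine's .my.cnf (host and database names are placeholders):
mysqldump --single-transaction mydb | gzip | ssh backup@otherserver 'gunzip | mysql mydb'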

MySQL Cluster lost data after restore

Hi all,
I use mysqldump to back up our MySQL Cluster data, about 10 million rows, daily. Recently our cluster crashed after an update, so we restored from the .sql file generated by mysqldump. During the restore we got key duplication errors, so I used "-f" to force the restore to continue. The restore eventually completed and all the tables are back. Some tables are smaller; we think that is because the duplicate rows were ignored.
But we have since found that some data is missing; it seems some of the duplicated rows were not restored correctly.
Is there a good way to avoid this during the restore, or a way to check for duplicates before running mysqldump?
A couple of suggestions: take a look at the errors that are generated when not using the force option and see if you can figure out how to fix the root cause. Using the force option allows the restore to continue after an error, but the failed rows are still lost.
Is there a reason why you're using mysqldump rather than the backup command within ndb_mgm - which is an online operation? If using the native Cluster (on-line!) backup then you use the ndb_restore command to restore your data.
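For reference, a native Cluster backup is started from the management client and restored per data node with ndb_restore, roughly like this (the connect string, node ID, backup ID, and path are placeholders, and -m restores the metadata so should be passed for one node only):
ndb_mgm -e 'START BACKUP'
ndb_restore -c mgmt_host:1186 -n 2 -b 1 -m -r /var/lib/mysql-cluster/BACKUP/BACKUP-1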

Big Database backup best practice

I maintain a big MySQL database that I need to back up every night, but the DB is active all the time; there are always queries from users.
Right now I just take the website down and do the backup, but this is very bad: the service is unavailable and users don't like it.
What is a good way to backup the data if data is changed during the backup?
What is best practice for this?
I've implemented this scheme using a read-only replication slave of my database server.
MySQL Database Replication is pretty easy to set up and monitor. You can set it up to get all changes made to your production database, then take it off-line nightly to make a backup.
The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.
There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
Here's a link to the docs on MySQL Replication.
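In practice the nightly job on the slave can be as simple as pausing replication, dumping, and resuming (the hostname is a placeholder, credentials assumed to be in .my.cnf):
mysql -h slavehost -e 'STOP SLAVE SQL_THREAD;'
mysqldump -h slavehost --all-databases > /backups/nightly.sql
mysql -h slavehost -e 'START SLAVE SQL_THREAD;'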
If you have a really large (50 GB+, like mine) MyISAM-only database, you can use locks and rsync. According to the MySQL documentation you can safely copy the raw files while a read lock is active; you cannot do this with InnoDB.
So if the goal is zero downtime and you have extra HD space, create a script:
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
Then do the following:
Run FLUSH TABLES;
Run the script
Run FLUSH TABLES WITH READ LOCK;
Run the script again
Run UNLOCK TABLES;
On the first run rsync will copy a lot of data without stopping MySQL. The second run will be very short; it only delays write queries, so it is a real zero-downtime solution.
Do another rsync from /tmp/mysql/sync to a remote server, compress, keep incremental versions, anything you like.
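One wrinkle: FLUSH TABLES WITH READ LOCK only holds while the session that issued it stays connected, so the second rsync has to run from inside that session. On Unix the mysql client's system command makes that possible; a minimal sketch, with paths matching the example above:
mysql -u root -p <<'EOF'
FLUSH TABLES;
system rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
FLUSH TABLES WITH READ LOCK;
system rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
UNLOCK TABLES;
EOF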
This partly depends on whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own paid solution for this (InnoDB Hot Backup), but there is an open-source alternative from Percona you may want to look at:
http://www.percona.com/doc/percona-xtrabackup/
What you want to do is called "online backup". Here's a pointer to a matrix of possible options with more information:
http://www.zmanda.com/blogs/?p=19
It essentially boils down to the storage backend that you are using and how much hardware you have available.

MySQL best backup method? And how to dump the backup to a target directory

What is the best method to do a MySQL backup with compression? Also, how do you dump it to a specific directory such as C:\targetdir?
The mysqldump command outputs the CREATE TABLE and INSERT statements needed to recreate your whole database. You can back up individual tables or databases with it.
You can easily compress this output. If you want it compressed as it is generated, you will need some sort of streaming tool on the command line. On UNIX that would be mysqldump ... | gzip. On Windows, you will have to find a tool that works with pipes.
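For example, on Unix (names and paths are placeholders):
mysqldump -u root -p mydb | gzip > /backups/mydb.sql.gz
On Windows, lacking a pipe-friendly compressor, you can redirect to the target directory and compress afterwards:
mysqldump -u root -p mydb > C:\targetdir\mydb.sql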
This I think is what you are looking for. I will list other options just because.
FLUSH TABLES WITH READ LOCK will flush all data to disk and lock the tables against changes, which you can do while you make a copy of the data folder.
Keep in mind, when doing restores, that if you want to preserve the full capability of the MySQL binlogs, you will not want to restore parts of a database by touching the files directly. The best option is to set up an alternate data dir with the restored files, dump from there, and feed the dump to your production database over regular MySQL connections. Any direct changes to the filesystem will not be recorded by the binlogs.
If you restore the whole database from the files, you will be OK; just not if you do it in pieces.
mysqldump does not have this problem.
Replication will allow you to back up to another instance of MySQL running on the same or different machine.
Binlogs. Given a static copy of a database, you can use these to roll it forward in time. The binlogs are a log of all the commands that ever changed the data. If you have binlogs going back to day one, then you may already have what you are looking for: you can replay all the commands from the binlogs, from day one to any date you wish, and then you have a copy of the database from that date.
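For example, to roll a restored copy forward to a specific moment (log file names and the date are placeholders):
mysqlbinlog --stop-datetime='2013-06-01 00:00:00' mysql-bin.000001 mysql-bin.000002 | mysql -u root -p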
I recommend checking out Percona XtraBackup. It's a GPL licensed alternative to MySQL's paid Enterprise Backup tool and can create consistent non-blocking backups from databases even when they are written to. See this article for more information on why you'd want to use this over mysqldump.
You could use a script like AutoMySQLBackup, which automatically does a backup every day, keeping daily, weekly and monthly backups, keeping your backup directory pretty clean and uncluttered, while still providing you a long history of backups.
The backups are also compressed, naturally.

Writing a Perl script to take MySQL incremental backups with mysqldump

I am working on an incremental backup solution for a MySQL database on CentOS. I need to write a Perl script to take the incremental backup, and then I will run that script from crontab. I am a bit confused: I have done a lot of research, and there are plenty of ways to take full and incremental backups of files, which I understand easily, but I need an incremental backup of a MySQL database and I do not know how to do it. Can anyone help me, either by pointing to a source or a piece of code?
The incremental backup method you've been looking at is documented by MySQL here:
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
What you are essentially going to want to do is set up your MySQL instance to write any changes to your database to this binary log. This means any updates, deletes, inserts, etc. go into the binary log, but not SELECT statements (which don't change the DB and therefore aren't logged).
Once you have your mysql instance running with binary logging turned on, you take a full backup and take note of the master position. Then later on, to take an incremental backup, you want to run mysqlbinlog from the master position and the output of that will be all the changes made to your database since you took the full backup. You'll want to take note of the master position again at this point, so you know the point that you want to take the next incremental backup from.
Clearly, if you then take multiple incremental backups over and over, you need to retain all those incremental backups. I'd recommend taking a full backup quite often.
Indeed, I'd recommend always doing a full backup, if you can. Taking incremental backups is just going to cause you pain, IMO, but if you need to do it, that's certainly one way to do it.
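As a rough sketch of that flow (paths and log file numbers are placeholders):
mysqldump --single-transaction --flush-logs --master-data=2 --all-databases > full.sql
mysqlbinlog /var/log/mysql/mysql-bin.000002 > incremental.sql
Here --flush-logs rotates the binary log when the dump starts, and --master-data=2 writes the starting position into the dump as a comment, so the incremental backup is simply every binlog file written since then.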
mysqldump is the ticket.
Example:
mysqldump -u [user_name] -p[password] --databases [database_name] > /tmp/databasename.sql
-u = mysql database user name
-p = mysql database password
Note: there is no space after the -p option. And if you have to do this in Perl, you can use the system function to call it. Note that system returns 0 on success, so check the return value rather than using "or die" directly:
system("mysqldump -u [user_name] -p[password] --databases [database_name] > /tmp/databasename.sql") == 0 or die "system call failed: $?";
Be aware though of the security risks involved in doing this. If someone happened to do a listing of the current processes running on a system as this was running, they'd be able to see the credentials that were being used for database access.
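One common way around that (assuming a file readable only by the backup user, e.g. chmod 600) is to keep the credentials in an option file:
[client]
user=backup_user
password=secret
and then call mysqldump --defaults-extra-file=/home/backup/.my.cnf --databases mydb > /tmp/mydb.sql instead, noting that --defaults-extra-file must be the first option on the command line.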