Cross Data Center: MySQL Replication vs Simple File Copying?

Does it make sense to simply copy the mysql\data files instead of using MySQL replication between data centers? I have the impression that MySQL replication might be complex when done cross data center. And if I just copy, I could easily switch to the other data center without worrying about which one is the primary and which is the slave. Any thoughts?

MySQL with the InnoDB storage engine uses multiversioning of the rows. This means there may be changes in the database files that are not yet committed (and possibly will be rolled back!). If you simply copy the files, you will end up with an inconsistent state.
If you are using MyISAM, copying the files is safe. Replication, however, will transfer only the changes, while copying will transfer the entire database each time, which is not wise with large databases.
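If you do need a consistent copy of an InnoDB database without setting up replication, a logical dump is the safer route. A minimal sketch, assuming a database named mydb (the name and credentials are illustrative):
mysqldump -u root -p --single-transaction mydb > mydb.sql
The --single-transaction option reads a consistent InnoDB snapshot without blocking writers, but it still transfers the whole database each time, so the size objection above applies to it as well.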

Replication synchronizes the databases between data centers "live", while copying the whole database takes a lot of time, and the databases will desynchronize as soon as the first change is made.
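For completeness, a minimal sketch of pointing a replica at a primary using the classic (pre-8.0) syntax; the hosts, user, and binlog coordinates are illustrative and would come from SHOW MASTER STATUS on the primary:
mysql -h replica.dc2.example.com -u root -p -e "CHANGE MASTER TO MASTER_HOST='primary.dc1.example.com', MASTER_USER='repl', MASTER_PASSWORD='repl_password', MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=120; START SLAVE;"
Once started, the replica pulls only the changes over the WAN link, which is exactly what makes replication cheaper than repeated full copies.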

Related

Why cloning an RDS cluster is faster and more space efficient than snapshots

I want to create a duplicate (clone) of an Aurora DB cluster.
Both the source and the copy are in the same region, and both are for dev purposes.
Both are MySQL.
I want to access each cluster via a different URL.
I have been reading about the copy-on-write protocol for Aurora cloning and about SQL snapshots.
The AWS docs state that: "Creating a clone is faster and more space-efficient than physically copying the data using a different technique such as restoring a snapshot." (source)
Yet I don't quite understand why using a snapshot is an inferior solution.
A snapshot is slower because the first snapshot copies the entire DB storage:
The amount of time it takes to create a DB cluster snapshot varies with the size of your databases. Since the snapshot includes the entire storage volume, the size of files, such as temporary files, also affects the amount of time it takes to create the snapshot.
So if your database has, let's say, 100 GB, the first snapshot of it will require copying 100 GB. This operation can take time.
In contrast, when you clone, no copy is done at first. Both the original and the new database use the same storage; only when a write operation is performed do they start to diverge.
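For reference, a hedged sketch of creating such a clone with the AWS CLI; the cluster identifiers are illustrative:
aws rds restore-db-cluster-to-point-in-time --source-db-cluster-identifier my-source-cluster --db-cluster-identifier my-dev-clone --restore-type copy-on-write --use-latest-restorable-time
The copy-on-write restore type is what makes this a clone: the new cluster shares storage pages with the source until either side writes to them. Note that you still need to add a DB instance to the new cluster before you can connect to it via its own URL.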

Best way of backing up mysql clustered database

I have a MySQL Cluster database spread across 2 servers.
I want to create a backup system for this database based on the following requirements:
1. Recovery/restore should be very easy and quick. Even better if I can switch the connection string at any time I like.
2. The backup must be like snapshots, so I want to keep copies from different days (and maybe keep the latest 7 days, for example).
3. The copied database does not have to be clustered.
The best way to back up a MySQL Cluster is to use the native backup mechanism that gets initiated with the START BACKUP command in the ndb_mgm management client.
1) Backup is easy (just a single command) and relatively quick. Restore is a bit more tricky, but it is at least faster and more reliable than using mysqldump. See also:
http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-backup.html
and
http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-programs-ndb-restore.html
2) The backups are consistent snapshots and are distinguishable by an auto-incrementing backup ID, so having several snapshots is easily possible
3) The backup is clustered by default (every data node stores backup files on its own file system), but you should either have the backup directory pointing to a shared file system mount, or copy the files from all nodes to a central place once a backup has finished
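A hedged sketch of the commands involved; the node ID (2), backup ID (42), and backup path are illustrative:
ndb_mgm -e "START BACKUP"
ndb_restore -n 2 -b 42 -m --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-42
ndb_restore -n 2 -b 42 -r --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-42
The first ndb_restore call (-m) restores the metadata once; the second (-r) restores the data and is repeated for each data node's backup files.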

MySQL Wrong Restore Data Recovery

I'm almost certain about the answer, but the situation is so critical that I have to ask this question even though I'm 99% sure about the answer.
Someone in our office made a backup of a MySQL database and restored it onto the wrong destination database, overwriting everything on that destination (the schemas of both databases were the same). From the structure of MySQL backup files I know that the restore operation drops all the tables first, then recreates them and fills them with the backed-up data. The question is: does the restore module keep the old data anywhere? Is there any way of retrieving any of the old data? (logs? etc.)
Only if you have replicated slaves, or you used to, and have binary logs. Even then you'd need an old copy of the database you can restore, and to configure replication again.
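If binary logs do exist, a hedged sketch of extracting what happened before the bad restore; the log names and cutoff time are illustrative, and you would replay this onto a scratch server together with an old base copy, never onto the damaged database:
mysqlbinlog --stop-datetime="2014-05-01 09:00:00" mysql-bin.000041 mysql-bin.000042 > before_restore.sql
mysql -h scratch-host -u root -p < before_restore.sql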

Fast InnoDB Restore?

I am working on a development copy of a MySQL / InnoDB database that is about 5GB.
I need to restore it pretty frequently for testing alter scripts, and using the mysqldump file is taking quite a while. The file itself is about 900 MB and takes around an hour to load. I've removed the data inserts for unimportant tables, used extended inserts, etc., but it's still pretty slow.
Is there a faster way to do this? I am thinking of just making a copy of the database files from .../mysql/database-name, the ib_logfile# files, and ibdata1, and copying them back when I need to 'reset' the db, but is this viable with InnoDB? Is the ibdata file for one database? I only see one, even though I have multiple InnoDB databases on this box.
Thanks!
I believe you can just copy the file named for the database (with the server daemon down) and all should be well. It seems like something that a little testing on a sample db should answer quickly, no?
I have no idea if it would be faster (it might be slower, in fact, depending on how extensively your tests alter the data), but what if you put all your tests in a transaction and then rolled it back?
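A minimal sketch of that idea; the database name is illustrative:
mysql -u root -p mydb <<'SQL'
START TRANSACTION;
-- ... your data-changing tests here ...
ROLLBACK;
SQL
One important caveat: DDL statements such as ALTER TABLE implicitly commit the current transaction in MySQL, so this trick only helps for data-changing (DML) tests, not for the alter scripts themselves.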
Use an LVM snapshot of the disk. Load your data (lengthy process) and take a snapshot (seconds). Then, whenever you need to go back to the snapshot, play LVM games -- again only seconds to re-load and re-clone any sized disk.
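A hedged sketch with LVM, assuming the data directory lives on a logical volume /dev/vg0/mysql (the volume names are illustrative); stop mysqld or otherwise quiesce the datadir around these steps:
lvcreate --size 1G --snapshot --name mysql-snap /dev/vg0/mysql
lvconvert --merge /dev/vg0/mysql-snap
The lvcreate call takes the instant snapshot after loading your data; the lvconvert --merge call rolls the origin volume back to that snapshot state whenever you need to reset (the merge completes when the volume is next activated).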
You might be able to do something with http://dev.mysql.com/doc/refman/5.1/en/multiple-tablespaces.html
See the ALTER TABLE xxx IMPORT TABLESPACE option especially.
For InnoDB backup and restore speed you might want to look at mydumper/myloader, which are much faster than mysqldump due to their multi-threaded nature: http://vbtechsupport.com/1716/
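A hedged sketch of using the pair; the database name, credentials, and thread count are illustrative:
mydumper -u root -p secret -B mydb -o /backups/mydb -t 4
myloader -u root -p secret -B mydb -d /backups/mydb -t 4 -o
The -t flag sets the number of parallel threads, and myloader's -o overwrites existing tables, which suits the 'reset the db' use case here.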

What is the best way to do incremental backups in MySQL?

We are using MySQL version 5.0 and most of the tables are InnoDB. We run replication to a slave server. We are thinking of backing up the MySQL log files on a daily basis.
Questions
Is there any other way of doing an incremental backup without using the log files?
What are the best practices when doing incremental backups?
AFAIK the only way of doing incremental backups is by using the binary log. You have other options if you want to do full backups (e.g., InnoDB Hot Backup), but incremental means that you need to log all transactions made.
You need to ask yourself why you're backing up data. Since you have a slave for replication, I assume the backup is primarily for reverting data in case of accidental deletion?
I would probably rotate the logs every hour and take a backup of them. That means restoring would leave the data at most 1 hour old, and you can restore to any point in time since the last full snapshot.
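A hedged sketch of that rotation, assuming binary logging is enabled and the binlog basename is mysql-bin (all paths are illustrative):
mysqladmin -u root -p flush-logs
rsync -a /var/lib/mysql/mysql-bin.0* /backup/binlogs/
mysqlbinlog /backup/binlogs/mysql-bin.000101 | mysql -u root -p
The flush-logs command closes the current binlog and opens a new one, rsync copies the closed logs away, and the last line shows the restore side: replaying a saved log on top of the last full backup.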
You can dump your schemas regularly with mysqldump, always using the same file name and path for each schema (i.e., replacing the latest one).
Then combine that with any backup tool that supports incremental/delta backups, for example rdiff-backup, duplicity, Duplicati or Areca Backup. An example from the duplicity docs:
Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup
That way your first backup would be a compressed copy of the 1st full dump, the second would contain the compressed differences between the 1st and 2nd dumps, and so on. You can restore the mysqldump file from any point in time and then load that file into MySQL.
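A hedged sketch of the pairing; the schema name, paths, and backup URL are illustrative:
mysqldump -u root -p --single-transaction mydb > /backups/dumps/mydb.sql
duplicity /backups/dumps file:///mnt/backup/mysql
duplicity -t 3D file:///mnt/backup/mysql /tmp/restored
The second duplicity call restores the dumps directory as it looked 3 days ago (-t 3D); you would then load /tmp/restored/mydb.sql into MySQL.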
A lot of time has passed since the last answer, and during this time several solutions and tools have appeared for implementing incremental backups.
Two main ones:
Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn't lock your database during the backup. It also allows you to create incremental backups. More details here.
It's pretty simple and looks something like this:
xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base
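The restore side is worth sketching too, since incrementals must be applied to the base before a restore; a hedged sketch with the same illustrative paths:
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base
xtrabackup --prepare --target-dir=/data/backups/base --incremental-dir=/data/backups/inc1
xtrabackup --copy-back --target-dir=/data/backups/base
The --apply-log-only flag keeps the base in a state that can accept further increments; the final increment is prepared without it, and --copy-back moves the prepared backup into the (stopped) server's data directory.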
mysqlbackup is a utility included in the MySQL Enterprise Edition. It is a lot like Percona XtraBackup; a detailed comparison can be found here.
It has the parameter --incremental, which allows you to make incremental backups. More details here.
mysqlbackup --defaults-file=/home/dbadmin/my.cnf --incremental --incremental-base=history:last_backup --backup-dir=/home/dbadmin/temp_dir --backup-image=incremental_image1.bi backup-to-image
These two utilities make physical backups (copy database files), but you can still make logical backups of binlog files.
You can either write a script by yourself or use a ready-made script from github:
macournoyer/mysql_s3_backup
Abhishek-S-Patil/mysql-backup-incremental
There are also paid solutions that are, in fact, polished wrappers around these tools:
SqlBak
databasethink
What are the best practices when doing incremental backups?
It all depends on your architecture, the amount of data, the maximum downtime that is acceptable for you, and the maximum acceptable data loss interval. Consider these things before setting up backups.
I would like to mention just one good practice, a very important one that is very often forgotten: test and run the recovery script regularly on another, unrelated server.
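A hedged sketch of such a recurring test; the host, dump path, and sanity-check table are illustrative:
mysql -h restore-test-host -u root -p -e "DROP DATABASE IF EXISTS restore_test; CREATE DATABASE restore_test"
mysql -h restore-test-host -u root -p restore_test < /backups/dumps/mydb.sql
mysql -h restore-test-host -u root -p -N -e "SELECT COUNT(*) FROM restore_test.orders"
If the count comes back empty (or the load errors out), the backup is not actually restorable, which is exactly what this practice is meant to catch before you need it in anger.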