Back up multiple databases [MySQL] at one time? - mysql

Hi, I have multiple databases that need to be backed up daily. Currently, I am using a cron job to run a batch file that backs them up. Here is my situation: I have about 10 databases to back up, and 3 of them are growing pretty fast. Here are the current DB sizes:
DB1 = 35 MB
DB2 = 10 MB
DB3 = 9 MB
the rest: DBx = 5 MB
My batch file code is:
mysqldump -u root -pxxxx DB1 > d:/backup/DB1_datetime.sql
mysqldump -u root -pxxxx DB2 > d:/backup/DB2_datetime.sql
... and so for the rest
I have run this for 2 days and it seems okay to me. But I wonder whether it will affect my website's performance while the batch file is executing.
If this method is not good, how do you back up multiple databases while the site is live and the sizes keep increasing daily?

It depends on the table type. If the tables are InnoDB, then you should be using the --single-transaction flag so that the dumps are coherent. If your tables are MyISAM, you have a small issue. If you run mysqldump as is, the dump will lock the tables (no writing) while the dump is in progress. This obviously becomes a huge bottleneck as the databases get larger. You can override this with the --lock-tables=false option, but then you cannot guarantee that the backups won't contain inconsistent data.
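For example, reusing the command from the question, an InnoDB-friendly dump would be:
mysqldump -u root -pxxxx --single-transaction DB1 > d:/backup/DB1_datetime.sql
and the MyISAM variant that skips the locks (accepting possible inconsistency) would be:
mysqldump -u root -pxxxx --lock-tables=false DB1 > d:/backup/DB1_datetime.sql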
The ideal solution would be to have a backup replication slave server, outside of your production environment, to take the dumps from.

If you're still looking for a way to do that, you'll probably be interested in this.
It creates a separate dump file for each database, and once the initial configuration is done it saves you a lot of time.
Hope it helps, regards.
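A minimal sketch of that per-database idea (not the linked script itself), assuming a Unix shell run from cron; the database names, credentials, and backup path are placeholders echoing the question:
#!/bin/sh
# Dump each database to its own date-stamped file
DATE=$(date +%Y%m%d_%H%M)
for DB in DB1 DB2 DB3; do
    mysqldump -u root -pxxxx --single-transaction "$DB" > "/backup/${DB}_${DATE}.sql"
done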

Related

Mysqldump on RDS Read Replica Slave is 50x slower

I created a read replica of my MySQL database on Amazon RDS.
When executing the following command, it is super fast (half a second) on the master, but it takes more like 30 seconds on the slave. This is super annoying because I wanted to dump from the slave so that I don't slow down the master.
mysqldump --set-gtid-purged=OFF -h myDomain.com -u dev -pmyPassword mySchema > out.sql
There are three issues to consider.
The most significant is that mysqldump does not perform well when run at a distance from the database, due to limitations in the traditional MySQL client/server wire protocol, which makes no allowance for pipelining a series of commands.
The mysqldump utility uses no magic to generate dump files -- it issues SQL statements to the server, and takes the results of those queries to generate its output.
As a result, every single object (schema, table, view, stored function/procedure, event) in the database requires at least one round trip and sometimes more than one.
For each table, mysqldump first issues SHOW CREATE TABLE t1; followed by SELECT * FROM t1; ... so with a round-trip time of 100 ms, extracting a dump of 150 tables wastes 150 × 2 × 0.100 = 30 seconds purely on the distance between the machine running mysqldump and the server -- and this is true even if the tables are completely empty.
This is not a recommendation, but you might take a look at mydumper, which claims to be able to create the backup using multiple database connections in parallel; this could help mitigate the cycles wasted as commands pass to the server and results return to the client, by parallelizing the dump process. I don't know the quality of this code base, but something like it could help.
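For illustration only -- flag names vary between mydumper releases, so check --help on your installed version -- a parallel dump of the schema from the question might look like:
mydumper --host=myDomain.com --user=dev --password=myPassword --database=mySchema --threads=4 --outputdir=/backup/mySchema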
Next, you almost always want to use the --compress option for mysqldump. Contrary to what you might assume, this does not compress the backup file. The generated backup file is identical when this option is used, but when this feature is activated, the server compresses the data it sends to mysqldump on the wire, and mysqldump decompresses the data again before writing it out -- so this option will almost always make for a faster process unless the machine running mysqldump and the database server are connected by a low-latency, high-bandwidth network. Because the generated file is identical, there are no compatibility concerns when using this option.
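For example, added to the command from the question (the resulting out.sql is identical; only the traffic on the wire is compressed):
mysqldump --compress --set-gtid-purged=OFF -h myDomain.com -u dev -pmyPassword mySchema > out.sql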
Finally, there's an issue with newly-created RDS servers that you need to be aware of, so that it doesn't skew your benchmarks. When you create an RDS replica, it is originally seeded with data from a snapshot of the upstream master. This is, behind the scenes, an EBS snapshot of the master's hard drive, and the new database instance is backed by an EBS volume restored from that snapshot. EBS volumes are lazily-loaded from the snapshot, so they have a documented first-touch penalty. This issue could have a substantial impact on the performance of the first complete backup, but should have no meaningful impact after that.

Large MySQL DB in a single file

I have a large MySQL InnoDB database (115 GB) running in single-file (shared tablespace) mode on a MySQL server.
I NEED to move this to file-per-table mode to allow me to optimize and reduce the overall DB size.
I'm looking at various options to do this, but my problem is that there is only a small window of downtime (roughly 5 hours).
1. Set up a clone of the server as a slave. Configure the slave with file_per_table, take a mysqldump from the main DB, load it into the slave, and have it replicating.
I will then look to fail over to the slave.
2. The other option is the usual mysqldump, drop the DB, and then import.
My concern is the time needed to take the mysqldump and the reliability of a dump of this size. I also have BLOB data in the DB.
Can anyone offer advice on a good approach?
Thanks
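For reference, option 2 in its simplest form is sketched below (database name and paths are placeholders; innodb_file_per_table must already be enabled in my.cnf and the server restarted, and a dump/reload of 115 GB may not fit a 5-hour window, so time it on a copy first):
# 1. Dump the database
mysqldump -u root -p --single-transaction bigdb > /backup/bigdb.sql
# 2. Drop and recreate it
mysql -u root -p -e "DROP DATABASE bigdb; CREATE DATABASE bigdb;"
# 3. Reload; each table is now created in its own .ibd file
mysql -u root -p bigdb < /backup/bigdb.sql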

Migrating a MySQL server from one box to another

The databases are prohibitively large (> 400 MB), so dump > SCP > source is proving to be hours and hours of work.
Is there an easier way? Can I connect to the DB directly and import from the new server?
You can simply copy the whole /data folder.
Have a look at High Performance MySQL - transferring large files
You can use ssh to directly pipe your data over the Internet. First set up SSH keys for password-less login. Next, try something like this:
$ mysqldump -u db_user -p some_database | gzip | ssh someuser@newserver 'gzip -d | mysql -u db_user --password=db_pass some_database'
Notes:
The basic idea is that you are just dumping standard output straight into a command on the other side, which SSH is perfect for.
If you don't need encryption, then you can use netcat, but it's probably not worth it.
The SQL text data goes over the wire compressed!
Obviously, change db_user to your user and some_database to your database. someuser is the (Linux) system user, not the MySQL user.
You will also have to spell out --password the long way, because having mysql prompt you will be a big headache.
You could set up MySQL slave replication and let MySQL copy the data, and then make the slave the new master.
400 MB is really not a large database; transferring it to another machine will only take a few minutes over a 100 Mbit network. If you do not have 100 Mbit networking between your machines, you are in big trouble!
If they are running the exact same version of MySQL and have identical (or similar ENOUGH) my.cnf and you just want a copy of the entire data, it is safe to copy the server's entire data directory across (while both instances are stopped, obviously). You'll need to delete the data directory of the target machine first of course, but you probably don't care about that.
Backup/restore is usually slowed down by the restoration having to rebuild the table structure, rather than the file copy. By copying the data files directly, you avoid this (subject to the limitations stated above).
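Assuming the default data directory (a placeholder here) and both servers stopped, that copy can be as simple as:
rsync -a /var/lib/mysql/ newserver:/var/lib/mysql/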
If you are migrating a server:
The dump files can be very large, so it is better to compress them before sending, or to use the -C flag of scp. Our methodology for transferring files is to create a full dump in which the incremental (binary) logs are flushed (use --master-data=2 --flush-logs, and please check that you don't break any slave hosts if you have them). Then we copy the dump and replay it. Afterwards we flush the logs again (mysqladmin flush-logs), take the most recent incremental log (which shouldn't be very large), and replay only that. Keep doing this until the last incremental log is very small, so that you can stop the database on the original machine, copy the last incremental log, and replay it - it should take only a few minutes.
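A rough sketch of that flow (host names, credentials, and the binary-log file name are placeholders):
# Full dump that also rotates the binary log
mysqldump -u root -p --all-databases --master-data=2 --flush-logs | gzip > full.sql.gz
# Copy the dump to the new server and replay it there
scp full.sql.gz newhost:/tmp/
ssh newhost "gunzip < /tmp/full.sql.gz | mysql -u root --password=xxxx"
# Later: rotate the logs again and replay only the newest binary log
mysqladmin -u root -p flush-logs
mysqlbinlog /var/lib/mysql/mysql-bin.000123 | ssh newhost "mysql -u root --password=xxxx"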
If you just want to copy data from one server to another:
mysqldump -C --host=oldhost --user=xxx --databases yyy -p | mysql -C --host=newhost --user=aaa -p
You will need to set the db users correctly and provide access to external hosts.
Try importing the dump on the new server using the mysql console, not an auxiliary tool.
I have no experience doing this with MySQL, but to me it seems the bottleneck is transferring the actual data.
400 MB isn't that much. But if dump -> SCP is slow, I don't think connecting to the DB server from the remote box would be any faster.
I'd suggest dumping, compressing, then copying over the network or burning to disk and manually transferring the data.
Compressing such a dump will most likely give you quite a good compression rate since, most likely, there's a lot of repetitive data.
If you are only copying all the databases of the server, copy the entire /data directory.
If you are just copying one or more databases and adding them to an existing mysql server:
create the empty database in the new server, set up the permissions for users etc.
copy the folder for the database from /data/databasename to /data/databasename on the new server
I like to use BigDump: Staggered MySQL Dump Importer after exporting my database from the old server.
http://www.ozerov.de/bigdump/
One thing to note, though: if you don't set the export options (namely the maximum length of created queries) appropriately for the load your new server can handle, it'll just fail and you will have to try again with different parameters. Personally, I set mine to about 25,000, but that's just me. Test it out a bit and you'll get the hang of it.

mysqldump | mysql yields 'too many open files' error. Why?

I have a RHEL 5 system with a fresh new hard drive I just dedicated to the MySQL server. To get things started, I used "mysqldump --host otherhost -A | mysql", even though I noticed the manpage never explicitly recommends trying this (dumping to an intermediate file is a no-go -- we're talking 500 GB of database).
This process fails at random intervals, complaining that too many files are open (at which point mysqld gets the relevant signal, and dies and respawns).
I tried raising the open-files limit via sysctl and ulimit, but the problem persists. What do I do about it?
mysqldump by default performs a per-table lock of all involved tables. If you have many tables, that can exceed the number of file descriptors available to the mysqld server process.
Try --skip-lock-tables or, if locking is imperative, --lock-all-tables.
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html:
--lock-all-tables, -x
Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
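Applied to the pipeline from the question, the two workarounds look roughly like this -- the first trades consistency for fewer simultaneously opened tables, the second holds one global read lock instead (InnoDB users would more likely want --single-transaction):
mysqldump --host otherhost --skip-lock-tables -A | mysql
mysqldump --host otherhost --lock-all-tables -A | mysql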
mysqldump has been reported to yield that error for larger databases (1, 2, 3). Explanation and workaround from MySQL Bugs:
[3 Feb 2007 22:00] Sergei Golubchik
This is not really a bug.
mysqldump by default has --lock-tables enabled, which means it tries to lock all tables to be dumped before starting the dump. Doing LOCK TABLES t1, t2, ... for a really big number of tables will inevitably exhaust all available file descriptors, as LOCK needs all tables to be opened.
Workarounds: --skip-lock-tables will disable such locking completely. Alternatively, --lock-all-tables will make mysqldump use FLUSH TABLES WITH READ LOCK, which locks all tables in all databases (without opening them). In this case mysqldump will automatically disable --lock-tables because it makes no sense when --lock-all-tables is used.
Edit: Please check Dave's workaround for InnoDB in the comment below.
If your database is that large you've got a few issues.
1. You have to lock the tables to dump the data.
2. mysqldump will take a very, very long time, and your tables will need to be locked during this time.
3. Importing the data on the new server will also take a long time.
Since your database is going to be essentially unusable while #1 and #2 are happening, I would actually recommend stopping the database and using rsync to copy the files to the other server. It's faster than using mysqldump and much faster than importing, because you don't have the added IO and CPU of generating indexes.
In production environments on Linux many people put Mysql data on an LVM partition. Then they stop the database, do an LVM snapshot, start the database, and copy off the state of the stopped database at their leisure.
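A rough sketch of that approach (volume, mount point, and host names are placeholders; size the snapshot for however much data changes while it exists):
# Downtime is only the stop / snapshot / start window
service mysql stop
lvcreate --size 10G --snapshot --name mysql-snap /dev/vg0/mysql-data
service mysql start
# Copy the frozen state at leisure, then drop the snapshot
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ newserver:/var/lib/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap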
I just restarted the MySQL server and then I could use the mysqldump command flawlessly.
Thought this might be a helpful tip here.

What's the quickest way to dump & load a MySQL InnoDB database using mysqldump?

I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1.
What are the best parameters (e.g. --single-transaction) that will result in the quickest dump and load of the data?
As well, when loading the data into the second DB, is it quicker to:
1) pipe the results directly to the second MySQL server instance and use the --compress option
or
2) load it from a text file (i.e. mysql < my_sql_dump.sql)
QUICKLY dumping a quiesced database:
Using the "-T " option with mysqldump results in lots of .sql and .txt files in the specified directory. This is ~50% faster for dumping large tables than a single .sql file with INSERT statements (takes 1/3 less wall-clock time).
Additionally, there is a huge benefit when restoring if you can load multiple tables in parallel, and saturate multiple cores. On an 8-core box, this could be as much as an 8X difference in wall-clock time to restore the dump, on top of the efficiency improvements provided by "-T". Because "-T" causes each table to be stored in a separate file, loading them in parallel is easier than splitting apart a massive .sql file.
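A minimal sketch of that pattern (database name and directory are placeholders; note that the --tab directory must be writable by the mysqld process, because the server itself writes the .txt files):
# Dump: one .sql (schema) and one .txt (tab-delimited data) file per table
mysqldump -u root -p --tab=/var/tmp/dumpdir mydb
# Restore: load the schemas, then import the data files with several threads
cat /var/tmp/dumpdir/*.sql | mysql -u root -p mydb
mysqlimport -u root -p --use-threads=8 mydb /var/tmp/dumpdir/*.txt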
Taking the strategies above to their logical extreme, one could create a script to dump a database widely in parallel. Well, that's exactly what the Maatkit mk-parallel-dump (see http://www.maatkit.org/doc/mk-parallel-dump.html) and mk-parallel-restore tools are: Perl scripts that make multiple calls to the underlying mysqldump program. However, when I tried to use these, I had trouble getting the restore to complete without duplicate key errors that didn't occur with vanilla dumps, so keep in mind that your mileage may vary.
Dumping data from a LIVE database (w/o service interruption):
The --single-transaction switch is very useful for taking a dump of a live database without having to quiesce it, or for taking a dump of a slave database without having to stop replication.
Sadly, -T is not compatible with --single-transaction, so you only get one of the two.
Usually, taking the dump is much faster than restoring it. There is still room for a tool that takes the incoming monolithic dump file and breaks it into multiple pieces to be loaded in parallel. To my knowledge, such a tool does not yet exist.
Transferring the dump over the Network is usually a win
To listen for an incoming dump on one host run:
nc -l 7878 > mysql-dump.sql
Then on your DB host, run
mysqldump $OPTS | nc myhost.mydomain.com 7878
This reduces contention for the disk spindles on the master from writing the dump to disk, slightly speeding up your dump (assuming the network is fast enough to keep up -- a fairly safe assumption for two hosts in the same datacenter). Plus, if you are building out a new slave, this saves the step of having to transfer the dump file after it is finished.
Caveats - obviously, you need to have enough network bandwidth not to slow things down unbearably, and if the TCP session breaks, you have to start all over, but for most dumps this is not a major concern.
Lastly, I want to clear up one point of common confusion.
Despite how often you see these flags in mysqldump examples and tutorials, they are superfluous because they are turned ON by default:
--opt
--add-drop-table
--add-locks
--create-options
--disable-keys
--extended-insert
--lock-tables
--quick
--set-charset
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html:
Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default.
Of those behaviors, "--quick" is one of the most important (it skips caching the entire result set in mysqld before transmitting the first row), and it can be used with "mysql" (which does NOT turn --quick on by default) to dramatically speed up queries that return a large result set (e.g. dumping all the rows of a big table).
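For example, streaming a big result set through the client row by row (table and file names are placeholders):
mysql --quick -e "SELECT * FROM mydb.big_table" > big_table.tsv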
Pipe it directly to another instance, to avoid disk overhead. Don't bother with --compress unless you're running over a slow network, since on a fast LAN or loopback the network overhead doesn't matter.
I think it will be a lot faster, and save you disk space, if you try database replication as opposed to using mysqldump. Personally, I use SQLyog Enterprise for my really heavy lifting, but there are also a number of other tools that can provide the same services -- unless of course you would like to use only mysqldump.
For InnoDB, --order-by-primary --extended-insert is usually the best combo. If you're after every last bit of performance and the target box has many CPU cores, you might want to split the resulting dump file and do parallel inserts in many threads, up to innodb_thread_concurrency/2.
Also, tweak innodb_buffer_pool_size on the target to the maximum you can afford, and increase innodb_log_file_size to 128 or 256 MB (be careful with this: you need to remove the old log files before restarting the MySQL daemon, otherwise it won't restart).
Use mk-parallel-dump tool from Maatkit.
At least that would probably be faster. I'd trust mysqldump more.
How often are you doing this? Is it really an application performance problem? Perhaps you should design a way of doing this which doesn't need to dump the whole data (replication?)
On the other hand, 1.5G is quite a small database so it probably won't be much of a problem.
mydumper is a good choice, with parallel export, even parallel threads per table, and compressed files, see: