I have a MySQL database that is 4 TB in size, and when I dump it using mysqldump it takes around 2 days to produce the .sql file.
Can anyone help me speed up this process?
OS: Ubuntu 14
MySQL 5.6
A single database of 4 TB
Hundreds of tables; the average table size is around 100 to 200 GB
Please help if anyone has a solution for this.
I would:
stop the database,
copy the files to a new location,
restart the database
process the data from the new copy (maybe on another machine); a rough sketch of these steps follows below.
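A minimal sketch of those steps on a typical Ubuntu box (the paths are placeholders; adjust the service name and data directory to your setup):
service mysql stop
cp -a /var/lib/mysql /data/mysql-copy   # or rsync -a the directory to another machine
service mysql start
# then work against /data/mysql-copy, e.g. start a second mysqld with --datadir=/data/mysql-copy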
If you are replicating, just stop replication, process, start replication.
These methods should improve speed because no concurrent processes are accessing the database (and none of the locking logic kicks in).
On databases this large, I would try to avoid making dumps at all. Just work with the MySQL table files if possible.
In any case, 2 days seems like a lot, even for an old machine. Check that you are not swapping, and review your MySQL configuration for possible problems. In general, try to get a better machine; computers are cheaper than the time spent optimizing.
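A quick way to check whether the box is swapping (standard Linux tools, nothing MySQL-specific):
free -m       # look at the Swap: line for used swap
vmstat 1 5    # non-zero si/so columns mean the machine is actively swapping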
I have a 3 GB MySQL dump (.sql) file of a table with 2M+ rows.
Quick info:
It takes 3.69 seconds to run a count query on a production table with 950k rows, while on my local MacBook it takes 1.45 seconds to run the same count query on a similar table with 2M+ rows.
I need to import it into a DB on a live production server while the server is doing the following:
1. Running crons throughout the night.
2. Numerous select and create queries are happening on other tables within the same db.
3. DB backup will be happening at some point in the night.
Will carrying out this command:
source tablename_dump.sql
cause the system to experience:
1. Memory shortage (assuming I do not have infinite memory and that these crons do take up a lot of memory).
2. Locking up of crons / backups.
3. Any other problems I may not have considered?
If there are issues I should be aware of, how can I import this table into the production MySQL database safely?
I'm hoping that, since a dump file is a series of single insert statements, MySQL will not spike, and will carry out the process at a moderate pace until all the records have been fed in, without causing any of the above issues.
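For reference, the same import can also be run from the shell instead of from inside the mysql client (a sketch; the user and database names here are placeholders):
mysql -u app_user -p production_db < tablename_dump.sql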
I have a Windows Server with MySQL Database Server installed.
Multiple databases exist; among them, database A contains a huge table named 'tlog', about 220 GB in size.
I would like to move database A over to another server for backup purposes.
I know I can do an SQL dump or use MySQL Workbench/SQLyog to copy the tables.
But due to limited disk storage on the server (less than 50 GB), an SQL dump is not possible.
The server is handling other work, so CPU and RAM are limited too; as a result, copying the table without using up CPU and RAM is not possible.
Is there any other method for moving the huge database A over to another server?
Thanks in advance.
You have a few ways:
Method 1
Dump and compress at the same time: mysqldump ... | gzip > blah.sql.gz
This method is good because chances are the result will be less than 50 GB: the dump is plain ASCII, and you compress it on the fly, so the uncompressed SQL never touches the disk.
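A concrete sketch of Method 1 (the user and database name are placeholders; --single-transaction assumes InnoDB tables):
mysqldump --single-transaction -u backup_user -p database_a | gzip > database_a.sql.gz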
Method 2
You can use slave replication; this method will require a dump of the data.
Method 3
You can also use xtrabackup.
Method 4
You can shutdown the database, and rsync the data directory.
Note: you don't actually have to shut down the database for the whole process; you can instead do multiple rsync passes, and eventually almost nothing will change between passes (unlikely if the database is busy, so do it during a slow period), which means the data directory will effectively have synced over.
I've had to use this method with fairly large PostgreSQL databases (1 TB+). It takes a few rsync passes, but hey, that's the cost of zero downtime.
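A rough sketch of the repeated-rsync approach, assuming a Linux-style layout for simplicity (the data directory path and target host are placeholders; on Windows the same idea applies with your copy tool of choice):
# run while MySQL is still up; repeat until each pass transfers very little
rsync -av --delete /var/lib/mysql/ backup-host:/var/lib/mysql/
# for a fully consistent copy, stop MySQL for one final, short pass
service mysql stop
rsync -av --delete /var/lib/mysql/ backup-host:/var/lib/mysql/
service mysql start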
Method 5
If you're in a virtual environment you could:
Clone the disk image.
If you're in AWS you could create an AMI.
You could add another disk and just sync locally; then detach the disk, and re-attach to the new VM.
If you're worried about consuming resources during the dump or transfer you can use ionice and renice to limit the priority of the dump/transfer.
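For example (a sketch; the priorities and the actual dump command are up to you, and nice sets the CPU priority at launch where renice adjusts an already-running process):
ionice -c2 -n7 nice -n 19 mysqldump --single-transaction database_a | gzip > database_a.sql.gz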
I have a large MySQL InnoDB database (115 GB) running in single-file (shared tablespace) mode on a MySQL server.
I NEED to move this to file-per-table mode to allow me to optimize and reduce the overall DB size.
I'm looking at various options to do this, but my problem is that there is only a small window of downtime available (roughly 5 hours).
1. Set up a clone of the server as a slave. Set the slave up with file_per_table (see the config note after these options), take a mysqldump from the main DB, load it into the slave, and have it replicating.
I will then look to fail over to the slave.
2. The other option is the usual mysqldump, drop the DB, and then import.
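For reference, file-per-table is a single my.cnf setting on the new server; note that only tables created or rebuilt after it is enabled get their own .ibd files, which the dump/reload takes care of:
[mysqld]
innodb_file_per_table = 1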
My concern is around the time it takes to produce the mysqldump and the reliability of a dump of such a large size. I have BLOB data in the DB as well.
Can anyone offer advice on a good approach?
Thanks
As part of my cron jobs, I make a full mysqldump every night.
My database has a total of 1.5 GB of data in 20 tables.
Nearly every table has indexes.
I make the backup like this:
mysqldump --user=user --password=pass --default-character-set=utf8 database --single-transaction | gzip > "mybackupfile"
I have been doing this for 2 months, and the process took about 1.5 minutes throughout those 2 months.
Last week my hosting company changed my server. Right after the server change, this process started taking 5 minutes. I told the hosting company, and they increased my CPU from 4 GHz to 6 GHz, so the mysqldump process came down to 3.5 minutes. Then they increased it to 12 GHz, but this didn't change the performance.
I checked my shared SSD disk performance with hdparm. It was 70 MB/sec, so I complained again. They changed my hard disk to another one; the disk read speed became 170 MB/sec, and the mysqldump process came down to 3 minutes.
But the duration is still far from the previous value. What could be the cause of this performance degradation? How can I isolate the problem?
(Server is CentOS 6.4, 12 GHz CPU, 8 GB RAM)
Edit: My company changed the server again and I still have the same problem. The old server now has a 3.5 minute backup time; the new server takes 5 minutes. The resulting file is 820 MB zipped and 2.9 GB unzipped.
I'm trying to find out what makes this dump slow.
The dump process started at 11:24:32 and stopped at 11:29:40. You can check this from the screenshots' timestamps.
Screenshots (not reproduced here): general overview, consumed memory, memory and CPU of gzip, memory and CPU of mysqldump, disk operations.
hdparm results:
/dev/sda2:
Timing cached reads: 3608 MB in 1.99 seconds = 1809.19 MB/sec
Timing buffered disk reads: 284 MB in 3.00 seconds = 94.53 MB/sec
/dev/sda2:
Timing cached reads: 2120 MB in 2.00 seconds = 1058.70 MB/sec
Timing buffered disk reads: 330 MB in 3.01 seconds = 109.53 MB/sec
The obvious thing I'd look at is whether the amount of data has increased in recent months. Most database-driven applications collect more data over time, so the database grows. If you still have copies of your nightly backups, look at the file sizes to see if they have been increasing steadily.
Another possibility is that you have other processes doing read queries while you are backing up. By default, mysqldump takes a read lock on the database to ensure a consistent snapshot of the data, but that doesn't block read queries. If queries are still running, they compete for CPU and disk resources and naturally slow things down.
Or there could be other processes besides MySQL on the same server competing for resources.
And finally, as @andrew commented above, there could be other virtual machines on the same physical server competing for the physical resources. This is beyond your control and not even something you can test for from within the virtual machine. It's up to the hosting company to manage a balanced allocation of virtual machines.
The fact that the start of the problems coincides with your hosting company moving your server to another host makes a pretty good case that they moved you onto a busier host. Maybe they're trying to crowd more VMs onto a single host to save rack space or something. This isn't something StackOverflow can answer for you; you should talk to the hosting company.
The number or size of indexes doesn't matter during the backup, since mysqldump just does a SELECT * from each table, and that's a table-scan. No secondary indexes are used for those queries.
If you want a faster backup, here are a few options:
If all your tables are InnoDB, you can use the --single-transaction option, which uses transaction isolation instead of locking to ensure a consistent backup. Then the difference between 3 and 6 minutes isn't so important, because other clients can continue to read and write to the database. (P.S.: You should be using InnoDB anyway.)
Mysqldump with the --tab option. This dumps data into tab-delimited files, one file per table. It's a little bit faster to dump, but much faster to restore (there's a command sketch after this list).
Mydumper, an open source alternative to mysqldump. It can run multi-threaded, backing up tables in parallel. See http://2bits.com/backup/fast-parallel-mysql-backups-and-imports-mydumper.html for a good intro.
Percona XtraBackup performs a physical backup, instead of a logical backup like mysqldump or mydumper. But it's often faster than doing a logical backup. It also avoids locking InnoDB tables, so you can run a backup while clients are reading and writing. Percona XtraBackup is free and open-source, and it works with plain MySQL Community Edition, as well as all variants like Percona Server, MariaDB, and even Drizzle. Percona XtraBackup is optimized for InnoDB, but it also works with MyISAM and any other storage engines (it has to do locking while backing up non-InnoDB tables though).
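A quick sketch of the --tab and mydumper invocations (user, database name, paths and thread count are placeholders; for --tab the MySQL server itself must be allowed to write to the target directory, so check the secure_file_priv setting):
# --tab writes table.sql (schema) plus table.txt (tab-delimited data) for each table
mysqldump --user=user --password=pass --tab=/var/lib/mysql-files database
# mydumper dumps tables in parallel threads and can compress on the fly
mydumper --user=user --password=pass --database=database --outputdir=/backups/database --threads=4 --compress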
My question is: Do you really need a dump or just a copy?
There is a great way that avoids mysqldump entirely: it uses Linux LVM snapshots, for example via mylvmbackup:
http://www.lenzg.net/mylvmbackup/
The idea is to hold the database still for a millisecond, let LVM make a hot copy (which takes another millisecond), and then let the database continue writing data. The LVM copy is then ready for whatever you want: copy the table files, or open it as a new MySQL instance for a dump (which then does not run against the production database!).
This needs some modifications to your system. Maybe those mylvmbackup scripts are not completely finished yet. But if you have the time, you can build on them and do your own work.
By the way: if you really go this way, I'm very interested in the scripts, as I also need a fast solution to clone a database from a production environment to a test system for developers. We decided to use this LVM snapshot feature but, as always, had no time to start on it.
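For reference, the manual steps behind an LVM snapshot backup look roughly like this (a sketch only; the volume group, logical volume, snapshot size and mount point are placeholders, and the MySQL data directory is assumed to live on that volume):
# in an open mysql session, run FLUSH TABLES WITH READ LOCK and keep the session open
lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql_data
# back in the mysql session: UNLOCK TABLES;  production resumes immediately
mount -o ro /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ backup-host:/backups/mysql_snapshot/
umount /mnt/mysql_snap && lvremove -f /dev/vg0/mysql_snap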
I have a huge MySQL InnoDB database (about 15 GB and 100M rows) on a Debian server.
I have to back up my database to another server every two hours, without affecting performance.
I looked at MySQL replication, but it is not what I'm looking for, because I also want protection against problems that the application itself could cause.
What would be the best way of dealing with it?
Thank you very much!
I think you need incremental backups.
You can use Percona XtraBackup to make fast incremental backups. This works only if your database uses only InnoDB tables.
Refer to the documentation about how to create incremental backups:
http://www.percona.com/doc/percona-xtrabackup/howtos/recipes_ibkx_inc.html
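A minimal sketch of what this looks like with the innobackupex wrapper described in that documentation (the backup directory and the name of the base backup directory are placeholders):
# one full base backup
innobackupex /data/backups
# then, every two hours, an incremental backup against the most recent one
innobackupex --incremental /data/backups --incremental-basedir=/data/backups/BASE_BACKUP_DIR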
Have you looked at writing a script that uses mysqldump to dump the contents of the DB, transfers it over to the backup server (piping it through SSH would work), and loads it there via the command line?
There are options so that mysqldump won't lock the tables and so won't degrade performance too much.
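A sketch of that idea (the hostnames and database names are placeholders, and credentials are assumed to be configured in each side's ~/.my.cnf; --single-transaction avoids table locks for InnoDB):
mysqldump --single-transaction --quick sourcedb | gzip -c | ssh backup@backup-host 'gunzip -c | mysql targetdb'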