We have a production MySQL DB of around 35G (InnoDB), and we have noticed that when mysqldump starts, the application gets unstable. We use the following command to run the dump:
mysqldump --password=XXXX --add-drop-table foo | gzip -c > foo.dmp.gz
After googling, people said mysqldump locks tables before dumping data, so they suggested using the --single-transaction flag for InnoDB.
As an experiment I started mysqldump manually and ran some read/write queries on the tables, and it allowed me to perform all operations while mysqldump was running. So how do I reproduce the behavior of mysqldump actually locking my tables and hurting application accessibility?
Does mysqldump block read operations on a table, or just writes?
We also have a few DBs using MyISAM; what should we do in that case to avoid locks?
Use --single-transaction to avoid table locks on InnoDB tables.
There's nothing you can really do about MyISAM, though you really shouldn't be using MyISAM. The best workaround is to create a read replica and make backups from the replica so that the locks don't impact the application.
What you should find is that while a backup is running, a READ LOCAL lock is held on the tables in the single database that is currently being backed up, meaning that you can read from the tables but writes (insert/update/delete) will block except for certain inserts on MyISAM that can be achieved without disturbing the lock. Those may be allowed. The easiest way to see this happening is to repeatedly query SHOW FULL PROCESSLIST; to find threads that are blocking.
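As a rough sketch (reusing the database name and password placeholder from the command above), adding --single-transaction to your existing command looks like this, and you can watch for blocked threads from a second terminal while the dump runs:

mysqldump --password=XXXX --single-transaction --add-drop-table foo | gzip -c > foo.dmp.gz
# In another session, look for threads stuck in "Waiting for table ..." states
watch -n 1 'mysql --password=XXXX -e "SHOW FULL PROCESSLIST"'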
Can someone clarify the duration of the lock used to create the initial snapshot when using mysqldump with the --single-transaction and --quick options?
I have a large table (16GB, 101M rows) in a database (InnoDB) with binary logging enabled. I do not use any FK constraints on this table.
In order to keep the BIN LOG file count manageable I need to periodically update the mysqldump seed. I want to be able to run mysqldump whilst my service continues to add new records (approx 35/sec).
According to the MySQL documentation, a lock occurs when creating the snapshot, and you can continue writing to the table. Is that instant, or does it depend on the size of the table, i.e. does the entire contents need to be read before the lock is released?
I am concerned that while the snapshot is being generated I'll be unable to write to the table.
Can someone please clarify what happens as the table dump begins? Happy for a link that describes the process.
When you use --single-transaction and --master-data, mysqldump does the following at the beginning:
FLUSH TABLES;
FLUSH TABLES WITH READ LOCK;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */;
SHOW MASTER STATUS;
UNLOCK TABLES;
After all of these are done, the actual backup starts. The lock is only necessary so that SHOW MASTER STATUS returns exactly the correct binlog coordinates of the beginning of the transaction.
The backup should not block writes, and writes should not block the backup; however, existing transactions need to be committed or rolled back before the FLUSH statements will finish, and these can interact with established transactions in such a way that new transactions stall waiting for any old transactions that are still open. But the issue will clear itself when those old transactions finish. If you aren't leaving long-running transactions (as you shouldn't be), you should have no issue.
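For illustration, a command along these lines (the database name and output path are placeholders) triggers exactly the sequence above, records the binlog coordinates, and then dumps without blocking your inserts:

mysqldump --single-transaction --master-data=2 --quick mydb | gzip > mydb.sql.gz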
My website is built with Drupal 7 on one Amazon instance, and the database is on Amazon RDS. Whenever I take a backup of the database using MySQL with the root user, and a developer simultaneously tries to clear the cache on the website, the website goes offline until the backup is over, around 30 minutes. This happens every time. What is the solution to avoid this problem?
Try using these options when taking a backup with mysqldump: --lock-tables for the MyISAM storage engine, --single-transaction for the InnoDB storage engine.
--lock-tables Lock all tables before dumping them. The tables are locked with READ LOCAL to allow concurrent inserts in the case of MyISAM tables.
For transactional tables such as InnoDB and BDB, --single-transaction is a much better option, because it does not need to lock the tables at all. --single-transaction produces a checkpoint that allows the dump to capture all data prior to the checkpoint while receiving incoming changes. Those incoming changes do not become part of the dump. That ensures the same point-in-time for all tables.
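As a sketch (the RDS endpoint, user, and database name here are placeholders; adjust them to your setup), the two variants look like this:

# InnoDB tables: consistent snapshot, no table locks
mysqldump -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u root -p --single-transaction drupal_db > drupal_db.sql
# MyISAM tables: READ LOCAL locks, concurrent inserts are still allowed
mysqldump -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u root -p --lock-tables drupal_db > drupal_db.sql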
You could create a read replica and perform the mysqldump against the replica, to keep the load off the main server. https://aws.amazon.com/rds/details/read-replicas/
I have an InnoDB DB, innodb_db_1, and I have turned on innodb_file_per_table.
If I go to /var/lib/mysql/innodb_db_1/ I find the files table_name.ibd, table_name.frm, and db.opt.
Now I'm trying to copy these files to another DB, for example innodb_db_2 (/var/lib/mysql/innodb_db_2/), but nothing happens.
But if my DB is MyISAM, I can copy it this way and everything is OK.
Any suggestions for moving an InnoDB DB by copying its files?
Even when you use file-per-table, the tables keep some of their data and metadata in /var/lib/mysql/ibdata1. So you can't just move .ibd files to a new MySQL instance.
You'll have to backup and restore your database. You can use:
mysqldump, included with MySQL, reliable but slow.
mydumper, a community-contributed substitute for mysqldump, which supports compression, parallel execution, and other neat features.
Percona XtraBackup, which is free and performs high-speed physical backups of InnoDB (and also supports other storage engines). This is recommended to minimize interruption to your live operations, and also if your database is large.
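For the XtraBackup route, a minimal invocation looks roughly like the following (the target directory is an example, and the exact commands vary by XtraBackup version, so check the docs for yours):

# Take a physical backup while the server keeps running
xtrabackup --backup --target-dir=/data/backups/mysql/
# Apply the redo log so the backup is consistent and ready to restore
xtrabackup --prepare --target-dir=/data/backups/mysql/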
Re your comment:
No, you cannot just copy .ibd files. You cannot turn off the requirement for ibdata1. This file includes, among other things, a data dictionary which you can think of like a table of contents for a book. It tells InnoDB what tables you have, and which physical file they reside in.
If you just move a .ibd file into another MySQL instance, this does not add it to that instance's data dictionary. So InnoDB has no idea to look in the new file, or which logical table it goes with.
If you want a workaround, you could ALTER TABLE mytable ENGINE=MyISAM, move that file and its .frm to another instance, and then ALTER TABLE mytable ENGINE=InnoDB to change it back. Remember to FLUSH TABLES WITH READ LOCK before you move MyISAM files.
But these steps are not for beginners. It would be a lot safer for you to use the backup & restore method unless you know what you're doing. I'm trying to save you some grief.
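If you do attempt that conversion workaround, a rough outline follows (the table mydb.mytable and the host "otherhost" are hypothetical placeholders):

mysql -u root -p -e "ALTER TABLE mydb.mytable ENGINE=MyISAM;"
# In a separate interactive mysql session, run FLUSH TABLES WITH READ LOCK and keep
# that session open while copying; the lock is released when the session disconnects.
scp /var/lib/mysql/mydb/mytable.frm /var/lib/mysql/mydb/mytable.MYD /var/lib/mysql/mydb/mytable.MYI otherhost:/var/lib/mysql/mydb/
# When the copy is done: UNLOCK TABLES in that session, then convert back.
mysql -u root -p -e "ALTER TABLE mydb.mytable ENGINE=InnoDB;"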
There is an easy procedure to move a whole MySQL InnoDB database from PC A to PC B.
The conditions to perform the procedure are:
You need to have the innodb_file_per_table option set
You need to be able to shut down the database
In my case I had to move a whole 150 GB MySQL database (the biggest table was approx. 60 GB). Making SQL dumps and loading them back was not an option (too slow).
So what I did was make a "cold backup" of the MySQL database (see the MySQL docs) and then simply move the files to another computer.
The steps to take after moving the database are described on DBA Stack Exchange.
I am writing this because (assuming you can meet the conditions above) this is by far the fastest (and probably the easiest) method to move a (large) MySQL InnoDB database, and nobody has mentioned it yet.
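For reference, a bare-bones outline of that cold-backup move, assuming the default datadir /var/lib/mysql, init-script service management, and a destination host named pcB (all placeholders; adapt to your layout):

service mysql stop                               # clean shutdown on PC A
rsync -av /var/lib/mysql/ pcB:/var/lib/mysql/    # copy the entire datadir, including ibdata1 and the ib_logfile* files
# On PC B: fix ownership, start the server, and check the error log
chown -R mysql:mysql /var/lib/mysql
service mysql start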
You can copy MyISAM tables all day long (safely, as long as they are flushed and locked or the server is stopped) but you can't do this with InnoDB, because the two storage engines handle tables and tablespaces very differently.
MyISAM automatically discovers tables by iterating the files in the directory named for the database.
InnoDB has an internal data dictionary stored in the system tablespace (ibdata1). Not only do the tables have to be consistent, there are identifiers in the .ibd files that must match what the data dictionary has stored internally.
Prior to MySQL 5.6, which introduced transportable tablespaces, this wasn't a supported operation. If you are using MySQL 5.6, the link provides you with information on how this works.
The alternatives:
use mysqldump [options] database_name > dumpfile.sql without the --databases option, which will dump the tables in the specified database but will omit any DATABASE commands (DROP DATABASE, CREATE DATABASE and USE), some or all of which, based on the combination of options specified, are normally added to the dump file. You can then import this with mysql [options] < dumpfile.sql.
CREATE TABLE db2.t1 LIKE db1.t1; INSERT INTO db2.t1 SELECT * FROM db1.t1; (for each table; you'll have to add any foreign key constraints back in)
ALTER TABLE on each table, changing it to MyISAM, then flushing and locking the tables with FLUSH TABLES WITH READ LOCK;, copying them over, then altering everything back to InnoDB. Not the greatest idea, since you'll lose all foreign key declarations and have to add them back on the original tables, but it is an alternative.
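As a small sketch of the second alternative above (db1, db2, and t1 are the placeholder names from that list):

mysql -u root -p -e "CREATE TABLE db2.t1 LIKE db1.t1;"
mysql -u root -p -e "INSERT INTO db2.t1 SELECT * FROM db1.t1;"
# Repeat for each table, then re-add any foreign key constraints on the db2 copies.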
As far as I know, "hot copying" table files is a very bad idea (I've done it twice, only made it work with MyISAM tables, and I did it only because I had no other choice).
My personal recommendation is: use mysqldump. In your shell:
mysqldump -h yourHost -u yourUser -pYourPassword yourDatabase yourTable > dumpFile.sql
To load the data from the dump file into another database, in your shell:
mysql -h yourHost -u yourUser -pYourPassword yourNewDatabase < dumpFile.sql
Check: mysqldump — A Database Backup Program.
If you insist on copying InnoDB files by hand, please read this: Backing Up and Recovering an InnoDB Database
It seems that if you have many tables, you can only perform a mysqldump without locking them all; otherwise you get an error.
What are the side effects of performing a mysqldump without locking all the tables? Is the DB snapshot I get this way consistent? Do I have any other alternative for backing up a MySQL DB with many tables?
The best way (if using InnoDB) is actually to run the backup on a replicated slave. That way locking will be of no consequence.
Else just use the --single-transaction flag as mentioned.
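A sketch of that setup (the replica hostname and backup user are placeholders): point mysqldump at the replica so any locking or load lands there instead of on the primary:

mysqldump -h replica.example.com -u backupuser -p --single-transaction --all-databases | gzip > all-databases.sql.gz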
What storage engine(s) do you use?
If you are using InnoDB, then you can run mysqldump with the --single-transaction flag and get a consistent snapshot without locking the tables.
If you are using MyISAM, then you need to lock the tables to get a consistent snapshot. Otherwise any insert/update/delete statements that run on your MyISAM tables while mysqldump is running may or may not be reflected in the output depending on the timing of those statements.
The --single-transaction flag should work if your DB is of type InnoDB.
For InnoDB, you need to specify --single-transaction in the mysqldump utility to avoid locking and get a consistent snapshot.
For MyISAM, you need to lock the tables to get a consistent snapshot, otherwise you will miss DML that runs while the dump is in progress.
I have a RHEL 5 system with a fresh new hard drive I just dedicated to the MySQL server. To get things started, I used "mysqldump --host otherhost -A | mysql", even though I noticed the manpage never explicitly recommends trying this (mysqldump into a file is a no-go. We're talking 500G of database).
This process fails at random intervals, complaining that too many files are open (at which point mysqld gets the relevant signal, and dies and respawns).
I tried raising the limit with sysctl and ulimit, but the problem persists. What can I do about it?
mysqldump by default performs a per-table lock of all involved tables. If you have many tables, that can exceed the number of file descriptors available to the mysqld process.
Try --skip-lock-tables, or if locking is imperative, --lock-all-tables.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
--lock-all-tables, -x
Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
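Applied to your streaming pipeline (same host as in your original command), either of these is worth trying:

# A single global read lock for the whole dump instead of per-table locks
mysqldump --host otherhost -A --lock-all-tables | mysql
# Or skip locking entirely (weaker consistency guarantees; for InnoDB, --single-transaction is the better choice)
mysqldump --host otherhost -A --skip-lock-tables | mysql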
mysqldump has been reported to yield that error for larger databases (1, 2, 3). Explanation and workaround from MySQL Bugs:
[3 Feb 2007 22:00] Sergei Golubchik
This is not really a bug.
mysqldump by default has --lock-tables enabled, which means it tries to lock all tables to
be dumped before starting the dump. And doing LOCK TABLES t1, t2, ... for really big
number of tables will inevitably exhaust all available file descriptors, as LOCK needs all
tables to be opened.
Workarounds: --skip-lock-tables will disable such a locking completely. Alternatively,
--lock-all-tables will make mysqldump to use FLUSH TABLES WITH READ LOCK which locks all
tables in all databases (without opening them). In this case mysqldump will automatically
disable --lock-tables because it makes no sense when --lock-all-tables is used.
Edit: Please check Dave's workaround for InnoDB in the comment below.
If your database is that large you've got a few issues.
1. You have to lock the tables to dump the data.
2. mysqldump will take a very, very long time, and your tables will need to be locked during this time.
3. Importing the data on the new server will also take a long time.
Since your database is going to be essentially unusable while #1 and #2 are happening I would actually recommend stopping the database and using rsync to copy the files to the other server. It's faster than using mysqldump and much faster than importing because you don't have the added IO and CPU of generating indexes.
In production environments on Linux many people put MySQL data on an LVM partition. Then they stop the database, do an LVM snapshot, start the database, and copy off the state of the stopped database at their leisure.
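A rough sketch of that LVM approach (the volume group, logical volume, snapshot name, mount point, and destination host are all hypothetical):

service mysql stop
lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql-data
service mysql start
# Mount the snapshot read-only and copy the frozen datadir at your leisure
mkdir -p /mnt/mysql-snap
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -av /mnt/mysql-snap/ otherhost:/var/lib/mysql-copy/
umount /mnt/mysql-snap
lvremove /dev/vg0/mysql-snap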
I just restarted the MySQL server and then I could use the mysqldump command flawlessly.
Thought this might be a helpful tip here.