mysqldump locking tables despite --skip-lock-tables & --single-transaction options set - mysql

I'm trying to generate a dump of a database consisting of InnoDB tables.
Having dutifully read the mysqldump section of the relevant (5.6) manual, I used the --skip-lock-tables and --single-transaction options. When I look at the resulting dump file I see "LOCK TABLES" & "UNLOCK TABLES" around the INSERT statements for each table in the database.
--single-transaction on its own produces the same result.
Does anyone have an idea as to why mysqldump is seemingly ignoring these options?
I take it that the LOCK TABLES & UNLOCK TABLES should not be appearing with one or both of these options.

Mmh, you have dutifully but maybe not thoroughly read man mysqldump (or the manual section you mention is incomplete) ;-) Else you'd know that you need to add --skip-add-locks to your mysqldump command.
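For example (a minimal sketch; the database name and credentials are placeholders), combining the two options keeps the dump consistent without locking while also suppressing the LOCK TABLES / UNLOCK TABLES statements written into the file:
mysqldump --single-transaction --skip-add-locks -u USER -p mydb > mydb.sql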

Related

Complete database reset for MySQL dump?

This may seem like a very dumb question but I didn't learn it in any other way and I just want to have some clarification.
I started to use MySQL a while ago and in order to test various scenarios, I back up my databases. I used MySQL dump for that:
Export:
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases > filename.sql
Import:
mysql -hSERVER -uUSER -pPASSWORD < filename.sql
Easy enough, and it worked quite well up until now, when I noticed a little problem with this "setup": it does not fully "reset" the databases and tables. If, for example, an additional table is added AFTER a dump file has been created, that additional table will not disappear when you import the same dump file. Importing essentially only "corrects" tables that are already there and recreates any databases or tables that are missing, but it does not remove extra tables whose names are not in the dump file.
What I want to do is to completely reset all the databases on a server when I import such a dump file. What would be the best solution? Is there a special import function reserved for that purpose or do I have to delete the databases myself first? Or is that a bad idea?
You can use the parameter --add-drop-database to add a "drop database" statement to the dump before each "create database" statement.
e.g.
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --add-drop-database >filename.sql
See the mysqldump documentation for details.
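For illustration, with --add-drop-database the dump then contains something roughly like this before each database's objects (the exact version-comment wrapping varies by MySQL version; the database name is a placeholder):
/*!40000 DROP DATABASE IF EXISTS `mydb`*/;
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `mydb`;
USE `mydb`;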
There's nothing magic about the dump and restore processes you describe. mysqldump writes out SQL statements that describe the current state of the database or databases you are dumping. It has to fetch a list of tables in each database you're dumping, then it has to read the tables one by one and write them out as SQL. On databases of any size, this takes time.
So, if you create a new table while mysqldump is running, it may not pick up that new table. Similarly, if your application software changes contents of tables while mysqldump is running, those changes may or may not show up in the backup.
You can look at the .sql files mysqldump writes out to see what they have picked up. If you want to be sure that your dumped .sql files are perfect, you need to run mysqldump on a quiet server -- one where nobody is running data definition language.
MySQL hot backup solutions are available; you may want to look into those.
The OP may want to look into mysql_install_db if they want a fresh start with the post-install default settings before restoring one or more dumped DBs. For production servers, another useful script is mysql_secure_installation.
Also, they may prefer to dump the DB(s) they created separately:
mysqldump -hSERVER -uUSER -pPASSWORD --databases foo > foo.sql
to avoid inadvertently changing the internal DBs: mysql, information_schema, performance_schema.

mysqlimport using dump

I need to restore a dumped database, but without discarding existing rows in tables.
To dump I use:
mysqldump -u root --password --databases mydatabase > C:\mydatabase.sql
To restore, I do not want to use the mysql command, since it will discard all existing rows; instead, mysqlimport should do the trick, obviously. But how? Running:
mysqlimport -u root -p mydatabase c:\mydatabase.sql
says "table mydatabase.mydatabase does not exist". Why does it look for tables? How to restore dump with entire database without discarding existing rows in existing tables? I could dump single tables if mysqlimport wants it.
What to do?
If you are concerned with stomping over existing rows, you need to mysqldump it as follows:
MYSQLDUMP_OPTIONS="--no-create-info --skip-extended-insert"
mysqldump -uroot -ppassword ${MYSQLDUMP_OPTIONS} --databases mydatabase > C:\mydatabase.sql
This will do the following:
remove CREATE TABLE statements and use only INSERTs.
It will INSERT exactly one row at a time. This helps mitigate problems with rows that have duplicate keys.
With the mysqldump performed in this manner, now you can import like this
mysql -uroot -p --force -Dtargetdb < c:\mydatabase.sql
Give it a Try !!!
WARNING: Dumping with --skip-extended-insert will make the mysqldump file really big, but at least you can handle each duplicate row one by one. It will also increase the time it takes to reload the dump.
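A possible variation, not from the original answer (a sketch; the database name is a placeholder): mysqldump also has an --insert-ignore option that writes INSERT IGNORE instead of INSERT, so rows whose keys already exist in the target are silently skipped on reload rather than raising duplicate-key errors:
mysqldump -uroot -p --no-create-info --insert-ignore --databases mydatabase > C:\mydatabase.sql
mysql -uroot -p < C:\mydatabase.sql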
I would edit the mydatabase.sql file in a text editor, dropping the lines that reference dropping tables or deleting rows, then manually import the file normally using the mysql command as normal.
mysql -u username -p databasename < mydatabase.sql
The mysqlimport command is designed for data files such as those created with SELECT ... INTO OUTFILE (it is a command-line interface to LOAD DATA INFILE), not for the SQL dumps that mysqldump produces.
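For reference, a minimal sketch of the workflow mysqlimport is actually meant for (the table and file names are placeholders): export a table to a tab-delimited file with SELECT ... INTO OUTFILE, then load it with mysqlimport, which derives the target table name from the file's basename:
mysql> SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.txt';
shell> mysqlimport -u root -p mydatabase /tmp/mytable.txt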
This sounds like it is much more complicated than you are describing.
If you do a backup the way you describe, it has all the records in your database. Then you say that you do not want to delete existing rows from your database and load from the backup? Why? The reason why the backup file (the output from mysqldump) has the drop and create table commands is to ensure that you don't wind up with two copies of your data.
The right answer is to load the mysqldump output file using the mysql client. If you don't want to do that, you'll have to explain why to get a better answer.

Is it possible to make mysqldump skip the inserts for specific table?

I'm regularly running mysqldump against a Drupal database and man, those cache tables can get huge. Considering that the first thing I do after reloading the data is clear the cache, I'd love it if I could just skip dumping all those rows altogether. I don't want to skip the table creation (with --ignore-tables), I just want to skip all those rows of cached data.
Is it possible to tell mysqldump to dump the CREATE TABLE statements but skip the INSERT statements for a specific set of tables?
There is a --no-data option that does this, but it affects all tables AFAIK. So, you'll have to run mysqldump twice.
# Dump all but your_special_tbl
mysqldump --ignore-table=db_name.your_special_tbl db_name > dump.sql
# Dump your_special_tbl without INSERT statements.
mysqldump --no-data db_name your_special_tbl >> dump.sql
You have to call mysqldump twice.
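If several cache tables are involved, --ignore-table can be repeated, and the second pass can list all of them; a sketch with hypothetical Drupal table names:
# Dump everything except the cache tables
mysqldump --ignore-table=db_name.cache --ignore-table=db_name.cache_menu db_name > dump.sql
# Append only the CREATE TABLE statements for the cache tables
mysqldump --no-data db_name cache cache_menu >> dump.sql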
The mysql-stripped-dump script does exactly this.

How to obtain a correct dump using mysqldump and single-transaction when DDL is used at the same time?

I'm new to MySQL and I'm figuring out the best way to perform an on-line hot logical backup using mysqldump. This page suggests this command line:
mysqldump --single-transaction --flush-logs --master-data=2 --all-databases > backup_sunday_1_PM.sql
but... if you read the documentation carefully you find that:
While a --single-transaction dump is in process, to ensure a valid dump file
(correct table contents and binary log position), no other connection should use
the following statements: ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE. A
consistent read is not isolated from those statements, so use of them on a table to
be dumped can cause the SELECT performed by mysqldump to retrieve the table contents
to obtain incorrect contents or fail.
So, is there any way to prevent this possible dump corruption scenario?
I.e., is there a command that could block those statements temporarily?
PS: MySQL bug entry on this subject http://bugs.mysql.com/bug.php?id=27850
Open a mysql command window and issue this command:
mysql> FLUSH TABLES WITH READ LOCK;
This will lock all tables in all databases on this MySQL instance until you issue UNLOCK TABLES (or terminate the client connection that holds these read locks).
To confirm this, you can open another command window and try to do an ALTER, DROP, RENAME or TRUNCATE. These commands hang, waiting for the read lock to be released. Hit Ctrl-C to terminate the waiting.
But while the tables have a read lock, you can still perform a mysqldump backup.
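Put together, the sequence looks roughly like this (two separate connections; backup.sql is a placeholder path, and the lock is held only as long as session 1 stays connected):
mysql> FLUSH TABLES WITH READ LOCK;   -- session 1: acquire the global read lock and keep this client open
shell> mysqldump --all-databases > backup.sql   # session 2: run the backup while the lock is held
mysql> UNLOCK TABLES;   -- session 1: release the lock once the dump finishes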
The FLUSH TABLES WITH READ LOCK command may be the same as using the --lock-all-tables option of mysqldump. It's not totally clear, but this doc seems to support it:
Another use for UNLOCK TABLES is to
release the global read lock acquired
with FLUSH TABLES WITH READ LOCK.
Both FLUSH TABLES WITH READ LOCK and --lock-all-tables use the phrase "global read lock," so I think it's likely that these do the same thing. Therefore, you should be able to use that option to mysqldump and protect against concurrent ALTER, DROP, RENAME, and TRUNCATE.
Re. your comment: The following is from Guilhem Bichot in the MySQL bug log that you linked to:
Hi. --lock-all-tables calls FLUSH
TABLES WITH READ LOCK. Thus it is
expected to block ALTER, DROP, RENAME,
or TRUNCATE (unless there is a bug or
I'm wrong). However, --lock-all-tables
--single-transaction cannot work (mysqldump throws an error message):
because lock-all-tables locks all
tables of the server against writes
for the duration of the backup,
whereas single-transaction is intended
to let writes happen during the backup
(by using a consistent-read SELECT in
a transaction), they are incompatible
in nature.
From this, it sounds like you cannot get concurrent access during a backup, and simultaneously block ALTER, DROP, RENAME and TRUNCATE.
I thought the same thing when reading that part of the documentation; however, I found more information:
4.5.4. mysqldump — A Database Backup Program
http://dev.mysql.com/doc/en/mysqldump.html
For InnoDB tables, mysqldump provides a way of making an online
backup:
shell> mysqldump --all-databases --single-transaction > all_databases.sql
This backup acquires a global read lock on all tables (using FLUSH
TABLES WITH READ LOCK) at the beginning of the dump. As soon as this
lock has been acquired, the binary log coordinates are read and the
lock is released. If long updating statements are running when the
FLUSH statement is issued, the MySQL server may get stalled until
those statements finish. After that, the dump becomes lock free and
does not disturb reads and writes on the tables. If the update
statements that the MySQL server receives are short (in terms of
execution time), the initial lock period should not be noticeable,
even with many updates.
There is a conflict between the --opt and --single-transaction options:
--opt
This option is shorthand. It is the same as specifying
--add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. It should give you a fast dump operation and produce a dump file that can be reloaded
into a MySQL server quickly.
The --opt option is enabled by default. Use --skip-opt to disable it.
If I understand your question correctly, you want the actual data and the DDL (Data Definition Language) together, because if you only wanted the DDL you would use --no-data. More information about this can be found at:
http://dev.mysql.com/doc/workbench/en/wb-reverse-engineer-create-script.html
Use the --databases option with mysqldump if you wish to create the
database as well as all its objects. If there is no CREATE DATABASE
db_name statement in your script file, you must import the database
objects into an existing schema or, if there is no schema, a new
unnamed schema is created.
As suggested in The Definitive Guide to MySQL 5 by Michael Kofler, I would recommend the following options:
--skip-opt
--single-transaction
--add-drop-table
--create-options
--quick
--extended-insert
--set-charset
--disable-keys
Additionally, not mentioned is --order-by-primary
Also, if you are using the --databases option, you should also use --add-drop-database, especially if combined with this answer. If you are backing up databases that are on different networks, you may need to use the --compress option.
So a mysqldump command (without using the --compress, --databases, or --add-drop-database options) would be :
mysqldump --skip-opt --order-by-primary --single-transaction --add-drop-table --create-options --quick --extended-insert --set-charset -h db_host -u username --password="myPassword" db_name | mysql --host=other_host db_name
I removed the reference to --disable-keys that was given in the book, as it is not effective with InnoDB as I understand it. The MySQL manual states:
For each table, surround the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; statements. This makes loading the dump file faster because the indexes are created after all rows are inserted. This option is effective only for nonunique indexes of MyISAM tables.
I also found this bug report http://bugs.mysql.com/bug.php?id=64309, which has comments at the bottom from Paul DuBois (who has also written several MySQL books); I have no reference on this specific issue other than those comments found within that bug report.
Now to create the "Ultimate Backup" I would suggest to consider something along the lines of this shell script
https://github.com/red-ant/mysql-svn-backup/blob/master/mysql-svn.sh
You can't get a consistent dump without locking tables. I just do mine during a time of day when the two minutes the dump takes aren't noticed.
One solution is to do replication, then back up the slave instead of the master. If the slave misses writes during the backup, it will just catch up later. This will also leave you with a live backup server in case the master fails. Which is nice.
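A common way to do that (a sketch, not from the original answer; assumes a working replica and a placeholder output path) is to pause the replica's SQL thread so its data holds still for the duration of the dump, then let it catch up afterwards:
mysql> STOP SLAVE SQL_THREAD;   -- on the replica: stop applying changes from the master
shell> mysqldump --all-databases > backup.sql
mysql> START SLAVE SQL_THREAD;   -- resume replication; the replica catches up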
This is late for any answer, but the solution arrived in MariaDB with BACKUP STAGE; open-source time is relative :)

Run MySQLDump without Locking Tables

I want to copy a live production database into my local development database. Is there a way to do this without locking the production database?
I'm currently using:
mysqldump -u root --password=xxx -h xxx my_db1 | mysql -u root --password=xxx -h localhost my_db1
But it's locking each table as it runs.
Does the --lock-tables=false option work?
According to the man page, if you are dumping InnoDB tables you can use the --single-transaction option:
--lock-tables, -l
Lock all tables before dumping them. The tables are locked with READ
LOCAL to allow concurrent inserts in the case of MyISAM tables. For
transactional tables such as InnoDB and BDB, --single-transaction is
a much better option, because it does not need to lock the tables at
all.
For an InnoDB DB:
mysqldump --single-transaction=TRUE -u username -p DB
This is ages too late, but good for anyone who is searching the topic. If you're not using InnoDB, and you're not worried about locking while you dump, simply use the option:
--lock-tables=false
The answer varies depending on what storage engine you're using. The ideal scenario is if you're using InnoDB. In that case you can use the --single-transaction flag, which will give you a coherent snapshot of the database at the time that the dump begins.
--skip-add-locks helped for me
To dump large tables, you should combine the --single-transaction option with --quick.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
This is about as late, compared to the guy who said he was late, as he was to the original answer, but in my case (MySQL via WAMP on Windows 7), I had to use:
--skip-lock-tables
For InnoDB tables use flag --single-transaction
it dumps the consistent state of the database at the time when BEGIN
was issued without blocking any applications
MySQL DOCS
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
Honestly, I would set up replication for this, because if you don't lock tables you will get inconsistent data out of the dump.
If the dump takes a long time, tables that were already dumped might have changed by the time later tables are dumped.
So either lock the tables or use replication.
mysqldump -uuid -ppwd --skip-opt --single-transaction --max_allowed_packet=1G -q db | mysql -u root --password=xxx -h localhost db
When using MySQL Workbench, at Data Export, click in Advanced Options and uncheck the "lock-tables" options.
Per https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_lock-tables :
Some options, such as --opt (which is enabled by default), automatically enable --lock-tables. If you want to override this, use --skip-lock-tables at the end of the option list.
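For example (a sketch; the database name and credentials are placeholders), with --skip-lock-tables placed at the end of the option list:
mysqldump -u USER -p --single-transaction --skip-lock-tables mydb > mydb.sql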
If you use Percona XtraDB Cluster: I found that adding --skip-add-locks to the mysqldump command allows Percona XtraDB Cluster to load the dump file without any issue from the LOCK TABLES statements in the dump file.
Another late answer:
If you are trying to make a hot copy of a server database (in a Linux environment) and the database engine of all tables is MyISAM, you should use mysqlhotcopy.
According to the documentation:
It uses FLUSH TABLES, LOCK TABLES, and cp or scp to make a database
backup. It is a fast way to make a backup of the database or single
tables, but it can be run only on the same machine where the database
directories are located. mysqlhotcopy works only for backing up
MyISAM and ARCHIVE tables.
The LOCK TABLES time depends on how long the server takes to copy the MySQL files (it does not make a dump).
As none of these approaches worked for me, I simply did a:
mysqldump [...] | grep -v "LOCK TABLE" | mysql [...]
It will exclude both LOCK TABLE <x> and UNLOCK TABLES commands.
Note: Hopefully your data doesn't contain that string in it!