I have a Java application that uses MySQL as the back end. Every night we take a backup of MySQL using mysqldump, and the application stops working for that time period (approx. 20 minutes).
The command used for taking the backup:
$MYSQLDUMP -h $HOST --user=$USER --password=$PASS $database > \
$BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql
gzip -f -9 $BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql
Is this normal, or am I doing something wrong that is causing the DB to stall during that time?
Thanks,
K
See https://serverfault.com/questions/224711/backing-up-a-mysql-database-while-it-is-still-in-use/224716#224716
I suspect you are using MyISAM, and the table is locking. I suggest you switch to InnoDB and use the single-transaction flag. That will allow updates to continue and will also preserve a consistent state.
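Under that assumption, the script from the question needs little more than one extra flag. A sketch reusing the question's variables (the --quick flag and the direct pipe into gzip are additions of mine; adjust to taste):

```shell
# Sketch, assuming the tables are InnoDB; the variables ($MYSQLDUMP, $HOST,
# $USER, $PASS, $database, $BACKDIR, $SERVER, $DATE) come from the question's
# script. --single-transaction takes a consistent snapshot instead of locking
# tables; piping straight into gzip avoids the intermediate .sql file.
$MYSQLDUMP -h "$HOST" --user="$USER" --password="$PASS" \
  --single-transaction --quick "$database" \
  | gzip -9 > "$BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql.gz"
```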
mysqldump has to get a read lock on the tables and hold it for the duration of the backup in order to ensure a consistent backup. However, a read lock can stall subsequent reads, if a write occurs in between (i.e. read -> write -> read): the first read lock blocks the write lock, which blocks the second read lock.
This depends in part on your table type. If you are using MyISAM, locks apply to the entire table and thus the entire table will be locked. I believe that the locks in InnoDB work differently, and that this will not lock the entire table.
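A quick way to confirm which engine is actually in play (a sketch; the schema name is a placeholder):

```sql
-- Lists any tables in the named schema still using MyISAM;
-- an empty result means table-level dump locks are not your problem.
SELECT TABLE_NAME, ENGINE
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'your_database'
   AND ENGINE = 'MyISAM';
```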
If tables are stored in the InnoDB storage engine, mysqldump provides a
way of making an online backup of these (see command below).
shell> mysqldump --all-databases --single-transaction > all_databases.sql
This may help... specifically the --single-transaction option, and not the --all-databases one... (from the mysqldump manpage)
You can specify --skip-lock-tables, but this could lead to data being modified as you back up. That could leave your data inconsistent and throw all sorts of errors. Best to just do your backup at the time when the fewest people will be using it.
Related
I have a live MySQL database which is configured as a master to a slave. This slave is already replicating from the master. Additionally, the slave is intentionally behind the master by 10 minutes. Now I have a need to take a mysql dump from the master to start another slave.
If I take a dump from the master using the mysqldump --flush-logs option, like so
$ mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A > ~/dump.sql
would this be OK? My concerns are:
Will the bin-log files be flushed (as in purged), thus causing problems for the existing slave? This slave is relying on the bin-log files to remain up to date.
Or would this just cause a new bin-log file to be created, leaving the older files intact, meaning no problems for the existing slave?
Why even bother with adding --flush-logs?
I think you have mistaken FLUSH for PURGE. The purpose of a flush is to clear and reload caches or put pending writes to disk. In MySQL some writes happen on table close (for example), and sometimes you need the data to be on disk; FLUSH ensures the data is written.
Now, "why bother": in some cases you will want to start replication by dumping the SQL and saving the log position, so that after you import the SQL into the slave you can start replicating from exactly the point at which you took the snapshot, to be sure data is not corrupted (e.g. by a single query from the master being run multiple times on the slave).
BTW: --single-transaction without locks is unsafe for any DB that has writes to MyISAM tables; you could get databases dumped in different states. And if you already have one slave (which I assume is working correctly), why not dump the data from the slave, using FLUSH TABLES WITH READ LOCK for the whole operation? That is the safest way and always works as intended. It also read-locks the whole server during the dump, but if you have a working slave anyway, why bother the master?
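To see for yourself that FLUSH LOGS rotates rather than purges, a quick sketch from a mysql session:

```sql
SHOW BINARY LOGS;   -- note the current binlog file name
FLUSH LOGS;         -- closes the current binlog and opens the next one
SHOW BINARY LOGS;   -- the older files are still listed; nothing was purged
```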
I'm trying to generate a dump of a database comprised of innodb tables.
Having dutifully read the mysqldump section of the relevant (5.6) manual, I used the --skip-lock-tables and --single-transaction options. When I look at the resulting dump file I see "LOCK TABLES" & "UNLOCK TABLES" around the INSERT statements for each table in the database.
--single-transaction on its own produces the same result.
Does anyone have an idea as to why mysqldump is seemingly ignoring these options?
I take it that the LOCK TABLES & UNLOCK TABLES should not be appearing with one or both of these options.
Mmh, you have dutifully but maybe not thoroughly read man mysqldump (or the manual section you mention is incomplete) ;-) Otherwise you'd know that you need to add --skip-add-locks to your mysqldump command.
I need to backup the whole of a MySQL database with the information about all users and their permissions and passwords.
I see the options on http://www.igvita.com/2007/10/10/hands-on-mysql-backup-migration/,
but what should be the options to backup all of the MySQL database with all users and passwords and permissions and all database data?
Just a full backup of MySQL so I can import later on another machine.
At its most basic, the mysqldump command you can use is:
mysqldump -u$user -p$pass -S $socket --all-databases > db_backup.sql
That will include the mysql database, which will have all the users/privs tables.
There are drawbacks to running this on a production system as it can cause locking. If your tables are small enough, it may not have a significant impact. You will want to test it first.
However, if you are running a pure InnoDB environment, you can use the --single-transaction flag, which will create the dump in a single transaction (get it?), thus preventing locking on the database. Note that there are corner cases where the initial FLUSH TABLES command run by the dump can lock the tables. If that is the case, kill the dump and restart it.
I would also recommend that if you are using this for backup purposes, you use the --master-data flag as well to get the binary log coordinates from where the dump was taken. That way, if you need to restore, you can import the dump file and then use the mysqlbinlog command to replay the binary log files from the position where this dump was taken.
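As a small illustration of that restore path: the coordinates recorded by --master-data=2 can be pulled back out of the dump file, since they are written near the top as a SQL comment. A sketch (the helper name is mine):

```shell
# Hedged helper (name is mine): print the binlog coordinates that
# --master-data=2 records near the top of the dump as a SQL comment,
# e.g. "-- CHANGE MASTER TO MASTER_LOG_FILE='...', MASTER_LOG_POS=...;"
show_binlog_coords() {
  grep -m1 'CHANGE MASTER TO' "$1"
}
# usage: show_binlog_coords ~/dump.sql
```

The printed file name and position are what you would feed to mysqlbinlog (or to CHANGE MASTER TO on a new slave).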
If you'd also like to transfer stored procedures and triggers, it may be worth using
mysqldump --all-databases --routines --triggers
If you have master/slave replication you may dump its settings with --dump-slave and/or --master-data.
Oneliner suitable for daily backups of all your databases:
mysqldump -u root -pVeryStrongPassword --all-databases | gzip -9 > ./DBBackup.$(date +"%d.%m.%Y").sql.gz
If put in cron it will create files in the format DBBackup.09.07.2022.sql.gz on a daily basis.
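A possible crontab entry for that one-liner (a sketch; the 02:30 schedule and the /backups path are assumptions, and note that % must be escaped inside a crontab command field):

```shell
# m  h  dom mon dow  command
30 2 * * * mysqldump -u root -pVeryStrongPassword --all-databases | gzip -9 > /backups/DBBackup.$(date +\%d.\%m.\%Y).sql.gz
```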
I'm new to MySQL and I'm figuring out the best way to perform an on-line hot logical backup using mysqldump. This page suggests this command line:
mysqldump --single-transaction --flush-logs --master-data=2
--all-databases > backup_sunday_1_PM.sql
but... if you read the documentation carefully you find that:
While a --single-transaction dump is in process, to ensure a valid dump file
(correct table contents and binary log position), no other connection should use
the following statements: ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE. A
consistent read is not isolated from those statements, so use of them on a table to
be dumped can cause the SELECT performed by mysqldump to retrieve the table contents
to obtain incorrect contents or fail.
So, is there any way to prevent this possible dump corruption scenario?
I.e. a command that could block those statements temporarily.
PS: MySQL bug entry on this subject http://bugs.mysql.com/bug.php?id=27850
Open a mysql command window and issue this command:
mysql> FLUSH TABLES WITH READ LOCK;
This will lock all tables in all databases on this MySQL instance until you issue UNLOCK TABLES (or terminate the client connection that holds these read locks).
To confirm this, you can open another command window and try to do an ALTER, DROP, RENAME or TRUNCATE. These commands hang, waiting for the read lock to be released. Hit Ctrl-C to terminate the waiting.
But while the tables have a read lock, you can still perform a mysqldump backup.
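The whole procedure, sketched as one session (the dump itself runs from a separate shell while the lock is held):

```sql
-- Session 1: take and hold the global read lock
FLUSH TABLES WITH READ LOCK;
-- ... meanwhile, from another shell: mysqldump --all-databases > backup.sql
-- Session 1, once the dump has finished:
UNLOCK TABLES;
```

The lock is tied to the client connection, so session 1 must stay open until the dump completes.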
The FLUSH TABLES WITH READ LOCK command may be the same as using the --lock-all-tables option of mysqldump. It's not totally clear, but this doc seems to support it:
Another use for UNLOCK TABLES is to
release the global read lock acquired
with FLUSH TABLES WITH READ LOCK.
Both FLUSH TABLES WITH READ LOCK and --lock-all-tables use the phrase "global read lock," so I think it's likely that these do the same thing. Therefore, you should be able to use that option to mysqldump and protect against concurrent ALTER, DROP, RENAME, and TRUNCATE.
Re. your comment: The following is from Guilhem Bichot in the MySQL bug log that you linked to:
Hi. --lock-all-tables calls FLUSH TABLES WITH READ LOCK. Thus it is expected to block ALTER, DROP, RENAME, or TRUNCATE (unless there is a bug or I'm wrong). However, --lock-all-tables --single-transaction cannot work (mysqldump throws an error message): because lock-all-tables locks all tables of the server against writes for the duration of the backup, whereas single-transaction is intended to let writes happen during the backup (by using a consistent-read SELECT in a transaction), they are incompatible in nature.
From this, it sounds like you cannot get concurrent access during a backup, and simultaneously block ALTER, DROP, RENAME and TRUNCATE.
I thought the same thing when reading that part of the documentation; however, I found more information:
4.5.4. mysqldump — A Database Backup Program
http://dev.mysql.com/doc/en/mysqldump.html
For InnoDB tables, mysqldump provides a way of making an online
backup:
shell> mysqldump --all-databases --single-transaction > all_databases.sql
This backup acquires a global read lock on all tables (using FLUSH
TABLES WITH READ LOCK) at the beginning of the dump. As soon as this
lock has been acquired, the binary log coordinates are read and the
lock is released. If long updating statements are running when the
FLUSH statement is issued, the MySQL server may get stalled until
those statements finish. After that, the dump becomes lock free and
does not disturb reads and writes on the tables. If the update
statements that the MySQL server receives are short (in terms of
execution time), the initial lock period should not be noticeable,
even with many updates.
There is a conflict with the --opt and --single-transaction options:
--opt
This option is shorthand. It is the same as specifying
--add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. It should give you a fast dump operation and produce a dump file that can be reloaded
into a MySQL server quickly.
The --opt option is enabled by default. Use --skip-opt to disable it.
If I understand your question correctly, you want the actual data and the DDL (Data Definition Language) together, because if you only wanted the DDL you would use --no-data. More information about this can be found at:
http://dev.mysql.com/doc/workbench/en/wb-reverse-engineer-create-script.html
Use the --databases option with mysqldump if you wish to create the
database as well as all its objects. If there is no CREATE DATABASE
db_name statement in your script file, you must import the database
objects into an existing schema or, if there is no schema, a new
unnamed schema is created.
As suggested by The Definitive Guide to MySQL 5 by Michael Kofler, I would suggest the following options:
--skip-opt
--single-transaction
--add-drop-table
--create-options
--quick
--extended-insert
--set-charset
--disable-keys
Additionally, not mentioned is --order-by-primary
Also, if you are using the --databases option, you should also use --add-drop-database, especially if combined with this answer. If you are backing up databases that are connected over different networks, you may need to use the --compress option.
So a mysqldump command (without using the --compress, --databases, or --add-drop-database options) would be:
mysqldump --skip-opt --order-by-primary --single-transaction --add-drop-table --create-options --quick --extended-insert --set-charset -h db_host -u username --password="myPassword" db_name | mysql --host=other_host db_name
I removed the reference to --disable-keys that was given in the book, as it is not effective with InnoDB as I understand it. The MySQL manual states:
For each table, surround the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; statements. This makes loading the dump file faster because the indexes are created after all rows are inserted. This option is effective only for nonunique indexes of MyISAM tables.
I also found this bug report http://bugs.mysql.com/bug.php?id=64309, which has comments at the bottom from Paul DuBois (who has also written a few MySQL books); I have no reference on this specific issue other than those comments found within that bug report.
Now, to create the "ultimate backup", I would suggest considering something along the lines of this shell script:
https://github.com/red-ant/mysql-svn-backup/blob/master/mysql-svn.sh
You can't get a consistent dump without locking tables. I just do mine during a time of day that the 2 minutes it takes to do the dump isn't noticed.
One solution is to do replication, then back up the slave instead of the master. If the slave misses writes during the backup, it will just catch up later. This will also leave you with a live backup server in case the master fails. Which is nice.
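One common way to keep the slave's data still while dumping it, sketched below (the statement names are the classic pre-8.0 ones; newer versions use REPLICA in place of SLAVE):

```sql
-- On the slave: stop applying replicated events so the data stands still
STOP SLAVE SQL_THREAD;
-- ... run mysqldump against the slave from a shell ...
-- then let the slave catch up again
START SLAVE SQL_THREAD;
```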
Hi, this is late for any answer, but the solution arrived in MariaDB with BACKUP STAGE; open-source time is relative :)
I want to copy a live production database into my local development database. Is there a way to do this without locking the production database?
I'm currently using:
mysqldump -u root --password=xxx -h xxx my_db1 | mysql -u root --password=xxx -h localhost my_db1
But it's locking each table as it runs.
Does the --lock-tables=false option work?
According to the man page, if you are dumping InnoDB tables you can use the --single-transaction option:
--lock-tables, -l
Lock all tables before dumping them. The tables are locked with READ
LOCAL to allow concurrent inserts in the case of MyISAM tables. For
transactional tables such as InnoDB and BDB, --single-transaction is
a much better option, because it does not need to lock the tables at
all.
For an InnoDB DB:
mysqldump --single-transaction=TRUE -u username -p DB
This is ages too late, but good for anyone searching the topic. If you're not using InnoDB, and you're not worried about locking while you dump, simply use the option:
--lock-tables=false
The answer varies depending on what storage engine you're using. The ideal scenario is if you're using InnoDB. In that case you can use the --single-transaction flag, which will give you a coherent snapshot of the database at the time that the dump begins.
--skip-add-locks helped for me
To dump large tables, you should combine the --single-transaction option with --quick.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
This is about as late compared to the guy who said he was late as he was to the original answer, but in my case (MySQL via WAMP on Windows 7), I had to use:
--skip-lock-tables
For InnoDB tables use flag --single-transaction
it dumps the consistent state of the database at the time when BEGIN
was issued without blocking any applications
MySQL DOCS
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
Honestly, I would set up replication for this, since if you don't lock tables you will get inconsistent data out of the dump.
If the dump takes a longer time, tables that were dumped early might have changed by the time later tables are dumped.
So either lock the tables or use replication.
mysqldump -uuid -ppwd --skip-opt --single-transaction --max_allowed_packet=1G -q db | mysql -u root --password=xxx -h localhost db
When using MySQL Workbench, at Data Export, click in Advanced Options and uncheck the "lock-tables" options.
Per https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_lock-tables :
Some options, such as --opt (which is enabled by default), automatically enable --lock-tables. If you want to override this, use --skip-lock-tables at the end of the option list.
If you use the Percona XtraDB Cluster: I found that adding --skip-add-locks to the mysqldump command allows the Percona XtraDB Cluster to load the dump file without any issue about LOCK TABLES commands in the dump file.
Another late answer:
If you are trying to make a hot copy of a server database (in a Linux environment) and the database engine of all tables is MyISAM, you should use mysqlhotcopy.
According to the documentation:
It uses FLUSH TABLES, LOCK TABLES, and cp or scp to make a database
backup. It is a fast way to make a backup of the database or single
tables, but it can be run only on the same machine where the database
directories are located. mysqlhotcopy works only for backing up
MyISAM and ARCHIVE tables.
The LOCK TABLES time depends on the time the server takes to copy the MySQL files (it doesn't make a dump).
As none of these approaches worked for me, I simply did a:
mysqldump [...] | grep -v "LOCK TABLE" | mysql [...]
It will exclude both LOCK TABLE <x> and UNLOCK TABLES commands.
Note: Hopefully your data doesn't contain that string in it!
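A slightly safer variant of the same idea (the helper name is mine): anchoring the pattern to the start of the line strips only the statements themselves, so data rows that happen to contain the same text survive.

```shell
# Drop only lines that *begin* with LOCK TABLES / UNLOCK TABLES, rather
# than any line containing "LOCK TABLE" anywhere.
strip_lock_statements() {
  grep -vE '^(LOCK TABLES|UNLOCK TABLES)'
}
# usage: mysqldump [...] | strip_lock_statements | mysql [...]
```

This can still be fooled by a multi-line extended INSERT whose continuation line starts with that text, but that is far less likely than a plain substring match.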