mysqldump and FLUSH TABLES WITH READ LOCK - MySQL

I'm running mysqldump from server1 to server2.
The mysqldump command I'm using is
mysqldump -q -u dump -p######## -h ###.###.###.### --add-drop-database --add-drop-table --set-charset --all-databases > dump.sql
and the user and privileges are correct.
When I run this I do get an output file (dump.sql) but it stops at 948,920 bytes and does not increase in size even if I leave it for 1 hour.
I have now tried the mysqldump 8 times, and the running process always shows the same state:
292186 root localhost RED Query 26 Waiting for release of readlock LOCK TABLES `OLD_RED_NOTES` READ /*!32311 LOCAL */,`RED_ADD` READ /*!32311 LOCAL */,`RED_COUNTRY` RE
If, however, I don't perform the FLUSH TABLES WITH READ LOCK; I can get a 7 GB file without issue.
I simply can't understand why I can't get this to work with the table lock!
Been searching Google for 48 hours with no joy! Please help.

Do "SHOW PROCESSLIST" as root to see what other processes are hitting your table and giving you the lock. If your tables are InnoDB, use SHOW ENGINE INNODB STATUS which will show the InnoDB locks and other information as well. If you are using MyISAM, "SHOW STATUS LIKE 'Table%'" will show you if you're hitting a lot of locks all the time or only with the dumps.
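For example, a minimal diagnostic sequence from the mysql client could look like the following; the thread id 12345 is a placeholder for whichever connection SHOW PROCESSLIST reveals as holding the lock:
SHOW FULL PROCESSLIST;
-- check InnoDB transaction and lock details:
SHOW ENGINE INNODB STATUS\G
-- for MyISAM, compare Table_locks_waited against Table_locks_immediate:
SHOW STATUS LIKE 'Table%';
-- only once you are sure the blocking session can be terminated safely:
KILL 12345;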

Related

How to fix MySQL replication without stopping the server

In the production environment I have a master and a slave, but for some reason
some of the slave's data was not synchronized,
which causes error code 1032.
I saw the suggested solution and used the command:
set global sql_slave_skip_counter=1;
Since the database cannot be shut down, what method can I use to repair my slave?
The master is constantly running insert, delete, and update operations and can't be stopped.
The slave is only used for reads, and I can truncate the slave.
The problems I'm facing are these:
The server cannot be shut down while I take the dump.
If any operation changes the database during the dump, the binlog file and position will change.
How can I solve this problem?
You can record the current binlog file and position at the moment of the dump:
mysqldump -u user -p mydb --set-gtid-purged=OFF --single-transaction --master-data=1 > mydump.sql
--master-data=1
writes the binlog file name and position into the dump as a CHANGE MASTER TO statement.
cat mydump.sql | grep "MASTER_LOG_FILE"
will show you the binlog file and position.
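From there, one hedged way to rebuild the slave is to load that dump on the slave and point replication at the recorded coordinates. Note that with --master-data=1 the CHANGE MASTER TO statement is already written into the dump, so loading the file sets the coordinates for you; the explicit statement below, with placeholder file name and position, is only needed if you use --master-data=2 (the commented-out form) or strip it from the file:
STOP SLAVE;
-- load the dump on the slave, e.g.: mysql -u user -p mydb < mydump.sql
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456789;
START SLAVE;
SHOW SLAVE STATUS\G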

Getting "Lost connection to MySQL server" when using mysqldump even with the max_allowed_packet parameter

I want to dump a specific table from my remote server database, which generally works fine, but one of the tables has 9M rows and I get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359
After reading online I understood that I need to increase max_allowed_packet, and that it's possible to add it to my command.
So I'm running the following command to dump my table:
mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip > dump_test.sql.gz
And for some reason, I still get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499
Am I doing something wrong?
It's weird, only 9M records... not too big.
Try adding the --quick option to your mysqldump command; it works better with large tables. It streams the rows from the resultset to the output rather than slurping the whole table, then writing it out.
mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz
You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Notice that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.
These set the timeouts to one calendar day.
SET GLOBAL wait_timeout=86400;
SET GLOBAL interactive_timeout=86400;
Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.
Try logging into a machine closer to the server and running mysqldump on it.
Then use some other means (sftp?) to copy your gz file to your own machine.
Or, you may have to segment the dump of this file. You can do something like this (not debugged).
mysqldump -uroot -h my.host -p'mypassword' \
db_name table_name --skip-create-options --skip-add-drop-table \
--where="id>=0 AND id < 1000000" | \
gzip....
Then repeat that with these lines.
--where="id>=1000000 AND id < 2000000" | \
--where="id>=2000000 AND id < 3000000" | \
...
until you get all the rows. Pain in the neck, but it will work.
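If you go that segmented route, a small shell loop saves typing each range by hand. This is only a sketch: it assumes an integer primary key named id, ids below 10 million, and the same credentials as above; adjust the step and upper bound to your data.
for start in $(seq 0 1000000 9000000); do
  end=$((start + 1000000))
  mysqldump -uroot -h my.host -p'mypassword' \
    db_name table_name --skip-create-options --skip-add-drop-table \
    --where="id>=$start AND id<$end" | gzip > "dump_${start}.sql.gz"
done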
For me, everything worked fine when I skipped locking the tables:
mysqldump -u xxxxx --password=xxxxx --quick --max_allowed_packet=512M --skip-lock-tables --verbose -h xxx.xxx.xxx.xxx > db.sql
It may create consistency problems, but it allowed me to back up a 5 GB database without any issue.
Other options to try:
net_read_timeout=3600
net_write_timeout=3600
in my.ini/my.cnf, or via SET GLOBAL ...
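As a sketch, the same values can either go in the server's my.cnf/my.ini under the [mysqld] section, so they survive a restart:
[mysqld]
net_read_timeout=3600
net_write_timeout=3600
or be applied at runtime from the mysql client:
SET GLOBAL net_read_timeout=3600;
SET GLOBAL net_write_timeout=3600;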
Following JohnBigs's comment above, the --compress flag was what worked for me.
I had previously tried --single-transaction, --skip-extended-insert, and --quick without success.
Also, make sure your mysql.exe client is the same version as your MySQL server.
So, if your server version is 8.0.23 but your client version is 8.0.17 or 8.0.25, you may have issues. I ran into this problem using an 8.0.17 client against a MySQL 8.0.23 server; changing the client version to match the server version resolved the issue.
I had a similar problem on my server, where MySQL would apparently restart during the nightly backups. It was always the same database, but the actual table sometimes varied.
I tried several suggestions from the other answers here, but in the end it was just a cron job running queries that never finished. It didn't use enough CPU and RAM to trigger the monitoring, but apparently enough that compressing the dump pushed the machine into the OOM killer. Fixing the cron job made the next backup succeed again.
Things to look for:
OOM? dmesg | grep invoked
Process killed? grep killed /var/log/kern.log
If none of the other suggestions work, you can use mysqldump's --where feature to break your huge dump into multiple smaller queries.
It might be tedious but it would most likely work.
e.g.
"C:\Program Files\MySQL\MySQL Workbench 8.0 CE\mysqldump.exe" --defaults-file="C:\...\my_password.cnf"
--host=localhost --protocol=tcp --user=mydbuser --compress=TRUE --port=16861 --default-character-set=utf8 --quick --complete-insert --replace
--where="last_modify > '2022-01-01 00:00:00'"
> "C:\...\dump.txt"
my_password.cnf
[client]
password=xxxxxxxx
[mysqldump]
ignore-table=db.table1
ignore-table=db.table2
Then, for each run, move the last_modify cutoff in the --where clause forward in time, and your one huge dump is split into many smaller ones.
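A rough sketch of how that splitting could be scripted; the database/table names and the monthly windows are placeholders, it reuses the same my_password.cnf, and it assumes last_modify is indexed so each slice stays cheap:
for month in 2022-01 2022-02 2022-03; do
  mysqldump --defaults-file=my_password.cnf --host=localhost --user=mydbuser \
    --quick --complete-insert --replace \
    --where="last_modify >= '${month}-01' AND last_modify < '${month}-01' + INTERVAL 1 MONTH" \
    mydb mytable > "dump_${month}.sql"
done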

MySQL Dump waiting for table metadata lock

I am trying to do a mysqldump along the lines of:
mysqldump -u root -p db > C:\FileLocation
However, when I run the command it never finishes. I therefore used SHOW PROCESSLIST; to see what was going on. The state of my dump query reads 'Waiting for table metadata lock'. There are only two other processes running on the database (besides the SHOW PROCESSLIST command), both of which are sleeping.
I tried killing the other two processes and then doing the dump, which worked for me. However, I'd like the dump to work regardless of whether those two processes are running. Is there a way to go about that?
Managed to figure it out in the end...
All I had to do was add --single-transaction=TRUE to my original command, i.e.:
mysqldump --single-transaction=TRUE -u root -p db > C:\FileLocation
which allows the dump to run without having to lock the tables.
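Separately, if you ever want to see which session is actually holding the metadata lock instead of killing connections blindly, MySQL 5.7+ exposes this through the sys schema (and through performance_schema, provided the wait/lock/metadata/sql/mdl instrument is enabled); this is only a sketch and the available columns vary between versions:
SELECT * FROM sys.schema_table_lock_waits\G
SELECT * FROM performance_schema.metadata_locks WHERE lock_status = 'PENDING'\G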

mysql - how does mysqldump work?

I have a Java application that uses MySQL as its back end. Every night we take a backup of MySQL using mysqldump, and the application stops working for that time period (approx. 20 min).
Command used for taking the backup:
$MYSQLDUMP -h $HOST --user=$USER --password=$PASS $database > \
$BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql
gzip -f -9 $BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql
Is this normal, or am I doing something wrong that is causing the DB to stall during that time?
Thanks,
K
See https://serverfault.com/questions/224711/backing-up-a-mysql-database-while-it-is-still-in-use/224716#224716
I suspect you are using MyISAM and the table is locking. I suggest you switch to InnoDB and use the --single-transaction flag. That will allow updates to continue and will also preserve a consistent state.
mysqldump has to get a read lock on the tables and hold it for the duration of the backup in order to ensure a consistent backup. However, a read lock can stall subsequent reads, if a write occurs in between (i.e. read -> write -> read): the first read lock blocks the write lock, which blocks the second read lock.
This depends in part on your table type. If you are using MyISAM, locks apply to the entire table and thus the entire table will be locked. I believe that the locks in InnoDB work differently, and that this will not lock the entire table.
If tables are stored in the InnoDB storage engine, mysqldump provides a
way of making an online backup of these (see command below).
shell> mysqldump --all-databases --single-transaction > all_databases.sql
This may help... specifically the --single-transaction option, and not the --all-databases one... (from the mysqldump manpage)
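A hedged rewrite of the backup script above along those lines, assuming the databases use InnoDB and keeping the same shell variables, might look like:
$MYSQLDUMP -h $HOST --user=$USER --password=$PASS --single-transaction $database | \
  gzip -9 > $BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql.gz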
You can use --skip-lock-tables, but this could lead to data being modified as you back up. That could leave your data inconsistent and throw all sorts of errors. It's best to just do your backup at a time when the fewest people will be using the database.