Does the command mysqldump generate the dump locally before transferring? [duplicate] - mysql

I want to dump a specific table in my remote server's database, which works fine, but one of the tables has 9M rows and I get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359
After reading online I understood that I need to increase max_allowed_packet, and that it's possible to add it to my command.
So I'm running the following command to dump my table:
mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip > dump_test.sql.gz
and for some reason, I still get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499
Am I doing something wrong?
It's weird, only 9M records... not too big.

Try adding the --quick option to your mysqldump command; it works better with large tables. It streams the rows from the result set to the output rather than slurping the whole table into memory and then writing it out.
mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz
You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Notice that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.
These set the timeouts to one calendar day.
SET GLOBAL wait_timeout=86400;
SET GLOBAL interactive_timeout=86400;
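For example, a minimal sketch of that sequence from a shell, assuming an administrative account (same placeholder credentials as above):
# Raise both idle timeouts on the server; the new values apply to
# connections opened after these statements run.
mysql -uroot -h my.host -p'mypassword' -e "SET GLOBAL wait_timeout=86400; SET GLOBAL interactive_timeout=86400;"
# Then start the dump in a fresh connection.
mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | gzip > dump_test.sql.gz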
Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.
Try logging into a machine closer to the server and running mysqldump on it.
Then use some other means (sftp?) to copy your gz file to your own machine.
Or, you may have to segment the dump of this table. You can do something like this (not debugged).
mysqldump -uroot -h my.host -p'mypassword' \
db_name table_name --skip-create-options --skip-add-drop-table \
--where="id>=0 AND id < 1000000" | \
gzip....
Then repeat that with these lines.
--where="id>=1000000 AND id < 2000000" | \
--where="id>=2000000 AND id < 3000000" | \
...
until you get all the rows. Pain in the neck, but it will work.
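To cut down on the repetition, here is an untested sketch of the same idea as a shell loop; the 1,000,000-row step, the 9,000,000-row upper bound, and the output file names are assumptions you would adjust to your table:
# Dump the table in fixed-size id ranges, one gzip file per chunk.
for start in $(seq 0 1000000 8000000); do
  end=$((start + 1000000))
  mysqldump -uroot -h my.host -p'mypassword' \
    db_name table_name --skip-create-options --skip-add-drop-table \
    --where="id>=$start AND id<$end" | gzip > "dump_part_${start}.sql.gz"
done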

For me, everything worked fine once I skipped locking the tables:
mysqldump -u xxxxx --password=xxxxx --quick --max_allowed_packet=512M --skip-lock-tables --verbose -h xxx.xxx.xxx.xxx > db.sql
It may create problems with consistency, but it allowed me to back up a 5 GB database without any issue.

Another option to try: set
net_read_timeout=3600
net_write_timeout=3600
in my.ini/my.cnf, or via SET GLOBAL ... on the running server.
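A quick sketch of both ways (administrative connection assumed); the values are the same 3600 seconds:
# Either change it on the running server (reverts on restart)...
mysql -uroot -p -e "SET GLOBAL net_read_timeout=3600; SET GLOBAL net_write_timeout=3600;"
# ...or make it permanent by adding these lines under [mysqld] in my.ini/my.cnf
# and restarting the server:
#   net_read_timeout=3600
#   net_write_timeout=3600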

Following JohnBigs's comment above, the --compress flag was what worked for me.
I had previously tried --single-transaction, --skip-extended-insert, and --quick, without success.

Also, make sure your mysql.exe client is the same version as your MySQL server.
If your server version is 8.0.23 but your client version is 8.0.17 or 8.0.25, you may have issues. I ran into this problem using an 8.0.17 client against a MySQL server 8.0.23; changing the client version to match the server version resolved the issue.
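A quick way to compare the two (placeholder connection details):
# Version of the local client tools:
mysqldump --version
# Version of the server you are dumping from:
mysql -uroot -h my.host -p -e "SELECT VERSION();"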

I had a similar problem on my server, where MySQL would apparently restart during the nightly backups. It was always the same database, but the actual table sometimes varied.
I tried several suggestions from the other answers here, but in the end it was just a cronjob executing queries that didn't finish. They didn't cause enough CPU and RAM usage to trigger the monitoring, but apparently enough that compressing the dump made the OOM killer kick in. Fixing the cronjob made the next backup succeed again.
Things to look for:
OOM? dmesg | grep invoked
Process killed? grep killed /var/log/kern.log

If none of the others work, you can use mysqldump's --where feature to break your huge dump into multiple smaller queries.
It might be tedious but it would most likely work.
e.g.
"C:\Program Files\MySQL\MySQL Workbench 8.0 CE\mysqldump.exe" --defaults-file="C:\...\my_password.cnf"
--host=localhost --protocol=tcp --user=mydbuser --compress=TRUE --port=16861 --default-character-set=utf8 --quick --complete-insert --replace
--where="last_modify > '2022-01-01 00:00:00'"
> "C:\...\dump.txt"
my_password.cnf
[client]
password=xxxxxxxx
[mysqldump]
ignore-table=db.table1
ignore-table=db.table2
Then you just adjust the last_modify condition for each run, stepping the date window forward, and your huge table is dumped as many small files.
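For example, an untested sketch of stepping that window month by month from a shell, with placeholder connection options and table name:
# One dump file per calendar month of last_modify.
for month in 2022-01 2022-02 2022-03 2022-04; do
  mysqldump --host=localhost --user=mydbuser -p'mypassword' --quick --complete-insert --replace \
    --where="last_modify >= '${month}-01' AND last_modify < DATE_ADD('${month}-01', INTERVAL 1 MONTH)" \
    db_name big_table > "dump_${month}.sql"
done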

Related

getting Lost connection to mysql when using mysqldump even with max_allowed_packet parameter


Avoid blocking mysql and PC with mysqldump

I have root access to a MySQL server, and I need to dump ALL the databases on that server.
I tried a simple mysqldump, but the server and the PC both seem to lock up due to the large size of the databases and tables. Can I "optimize" this dump to avoid locking the server (and the PC)?
Thank you so much!
EDIT:
I want to EXPORT all the databases from a MySQL server.
I need to understand what options to pass to mysqldump to avoid blocking:
The MySQL server <---- it CAN'T go down
The PC that is going to do this dump
You can disable locking:
mysqldump --skip-lock-tables
Of course you will not be able to create a consistent dump that way, so I would not recommend using that option.
When only using MyISAM and ARCHIVE tables you might want to consider using mysqlhotcopy (included with a regular MySQL package). Similar software is available for other storage engines such as InnoDB.
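For example, a rough sketch of a mysqlhotcopy invocation (it must run on the database host itself, only covers MyISAM/ARCHIVE tables, and the tool has been removed from recent MySQL releases; paths and credentials are placeholders):
# Copies the table files of db_name into /backups (run on the DB server).
mysqlhotcopy --user=root --password='mypassword' db_name /backups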
Another option is using a replication slave for backup.
Fire the dump command from the command line:
mysqldump <other mysqldump options> --routines > outputfile.sql
If we want to back up ONLY the stored procedures and triggers, and not the tables and data, then we should run something like:
mysqldump --routines --no-create-info --no-data --no-create-db --skip-opt <database> > outputfile.sql
If you need to import them to another db/server you will have to run something like:
mysql <database> < outputfile.sql

How do I get a tab-delimited MySQL dump from a remote host?

A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system ?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump xml.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
I want to add to codeman's answer. It worked but needed about 30 minutes of tweaking for my needs.
My web server runs CentOS 6/cPanel, and the flags and sequence codeman used above did not work for me; I had to rearrange them and use different flags.
Also, I used this for a local file dump; it isn't just useful for remote DBs. I had too many issues with SELinux and MySQL user permissions for SELECT INTO OUTFILE commands.
What worked on my CentOS + cPanel server:
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get the column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one. I currently add them to the file after processing.
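One workaround (untested here, with placeholder database and table names) is to write the header row yourself from information_schema and then append the data:
# First line: tab-separated column names; remaining lines: the query results.
mysql -B -s -uUSERNAME -pPASSWORD -e "SELECT GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION SEPARATOR '\t') FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='database_name' AND TABLE_NAME='my_table'" > /path/to/myfile.txt
mysql -B -s -uUSERNAME -pPASSWORD < query.sql >> /path/to/myfile.txt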
Selecting a Database
For some reason, I couldn't include the database name on the command line. I tried
-D databasename
on the command line, but I kept getting permission errors, so I ended up putting the following at the top of my query.sql:
USE database_name;
On many systems, MySQL runs as a distinct user (such as user "mysql") and your mysqldump will fail if the MySQL user does not have write permissions in the dump directory - it doesn't matter what your own write permissions are in that directory. Changing your directory (at least temporarily) to world-writable (777) will often fix your export problem.
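For example, a minimal sketch of that fix, with a placeholder directory; the directory lives on the machine where mysqld runs, since that is where -T writes its files:
# Make the export directory writable by the mysqld process, export, then lock it down again.
mkdir -p /tmp/dump_dir && chmod 777 /tmp/dump_dir
mysqldump -u<username> -p<password> -T/tmp/dump_dir <db_name> --fields-terminated-by=,
chmod 755 /tmp/dump_dir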

export table using ssh

I want to export a table from a MySQL database over SSH; how can I do this?
Export the table to a file, using MySQLDump, PHPMyAdmin or your favorite tool.
Transfer the file to the host of your liking, using scp or your favorite tool.
What do you mean "export using SSH?" SSH is a protocol to securely talk to a server. SCP is a way to copy files between hosts using SSH, so I assume you mean that.
mysqldump db_name tbl_name >dumpfile
scp dumpfile 127.0.0.1:.
If you dislike mysqldump, you can always just SELECT INTO OUTFILE from your favorite MySQL client (or PHP program, or just the commandline mysql). After that you transfer the file to the other host, and run LOAD DATA INFILE to load the table. You'll have to recreate the table as well.
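A rough sketch of that approach (placeholder names; it assumes the MySQL user has the FILE privilege and that secure_file_priv allows writing to the chosen path):
# On the source server: export the table data to a tab-separated file.
mysql -uuser -ppass db_name -e "SELECT * FROM tbl_name INTO OUTFILE '/tmp/tbl_name.txt'"
# Copy the file across (recreate the table structure on the target first).
scp /tmp/tbl_name.txt user@target.host:/tmp/
# On the target server: load the data back in.
mysql -uuser -ppass db_name -e "LOAD DATA INFILE '/tmp/tbl_name.txt' INTO TABLE tbl_name"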
mysqldump -uuser -ppass dbname tablename >dump_table.sql
If the dump is going to be very large, you'll probably want to run it in the background, to make sure that losing your SSH connection doesn't cause a problem.
Also, you could use nice if it is a live server and you don't want to affect its performance too much:
nohup nice mysqldump -uuser -ppassword [other flags] database tablename > dumpfile.sql
You can then use SSH File Transfer to download it to your computer, or use scp to send it to another server:
nohup nice scp dumpfile.sql user@IP:/path/ &
Then you can load it in to mysql
nohup nice mysql -uuser -ppassword database < dumpfile.sql &
Use nohup if it's going to take a long time and losing the connection could be a problem; nohup keeps the command running even if you get disconnected, and the trailing & puts it in the background.
Use nice to lower the priority of the process, e.g. when it's a live server and performance is important. Be aware that nice doesn't stop MySQL itself from being slowed down by a query that takes a long time to finish.

Run MySQLDump without Locking Tables

I want to copy a live production database into my local development database. Is there a way to do this without locking the production database?
I'm currently using:
mysqldump -u root --password=xxx -h xxx my_db1 | mysql -u root --password=xxx -h localhost my_db1
But it's locking each table as it runs.
Does the --lock-tables=false option work?
According to the man page, if you are dumping InnoDB tables you can use the --single-transaction option:
--lock-tables, -l
Lock all tables before dumping them. The tables are locked with READ
LOCAL to allow concurrent inserts in the case of MyISAM tables. For
transactional tables such as InnoDB and BDB, --single-transaction is
a much better option, because it does not need to lock the tables at
all.
For an InnoDB DB:
mysqldump --single-transaction=TRUE -u username -p DB
This is ages too late, but good for anyone searching the topic. If you're not using InnoDB, and you're not worried about locking while you dump, simply use the option:
--lock-tables=false
The answer varies depending on what storage engine you're using. The ideal scenario is if you're using InnoDB. In that case you can use the --single-transaction flag, which will give you a coherent snapshot of the database at the time that the dump begins.
--skip-add-locks helped for me
To dump large tables, you should combine the --single-transaction option with --quick.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
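For example (placeholder credentials):
# Consistent InnoDB snapshot, streamed row by row instead of buffered in memory.
mysqldump -u root -p --single-transaction --quick my_db1 | gzip > my_db1.sql.gz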
This is about as late compared to the guy who said he was late as he was to the original answer, but in my case (MySQL via WAMP on Windows 7), I had to use:
--skip-lock-tables
For InnoDB tables, use the --single-transaction flag:
it dumps the consistent state of the database at the time when BEGIN
was issued without blocking any applications
MySQL DOCS
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
Honestly, I would set up replication for this, because if you don't lock the tables you will get inconsistent data out of the dump.
If the dump takes a long time, tables that were dumped early might have changed by the time the later tables are dumped.
So either lock the tables or use replication.
mysqldump -uuid -ppwd --skip-opt --single-transaction --max_allowed_packet=1G -q db | mysql -u root --password=xxx -h localhost db
When using MySQL Workbench, at Data Export, click on Advanced Options and uncheck the "lock-tables" option.
Per https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_lock-tables :
Some options, such as --opt (which is enabled by default), automatically enable --lock-tables. If you want to override this, use --skip-lock-tables at the end of the option list.
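So the override looks something like this (placeholder credentials), with --skip-lock-tables after the options that would otherwise enable locking:
# --opt is on by default and implies --lock-tables; listing --skip-lock-tables last makes it win.
mysqldump -u root -p --opt --skip-lock-tables my_db1 > my_db1.sql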
If you use Percona XtraDB Cluster:
I found that adding --skip-add-locks to the mysqldump command allows the Percona XtraDB Cluster to load the dump file without any issue about LOCK TABLES commands in the dump file.
Another late answer:
If you are trying to make a hot copy of a server database (in a Linux environment) and the storage engine of all tables is MyISAM, you should use mysqlhotcopy.
According to the documentation:
It uses FLUSH TABLES, LOCK TABLES, and cp or scp to make a database
backup. It is a fast way to make a backup of the database or single
tables, but it can be run only on the same machine where the database
directories are located. mysqlhotcopy works only for backing up
MyISAM and ARCHIVE tables.
The LOCK TABLES time depends on how long the server takes to copy the MySQL files (it doesn't make a dump).
As none of these approaches worked for me, I simply did a:
mysqldump [...] | grep -v "LOCK TABLE" | mysql [...]
It will exclude both LOCK TABLE <x> and UNLOCK TABLES commands.
Note: Hopefully your data doesn't contain that string in it!