I imported a large mysql database using
mysql -uroot -ppassword dbName
The import died partway through ("MySQL server has gone away"), possibly due to a timeout, after a few days...
Is there a way to resume it, or am I out of luck and do I need to delete the existing DB and reimport?
It might help to use the "--ignore" option on the command line to "resume" the import.
The idea is that it ignores any already-imported data and only imports what's not yet there.
Here's the MySQL documentation for the --ignore option:
http://dev.mysql.com/doc/refman/5.0/en/mysqlimport.html#option_mysqlimport_ignore
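For reference, a minimal mysqlimport invocation with that option might look like this (dbName and the data-file path are placeholders; note that mysqlimport loads tab-delimited data files, one per table, rather than SQL dumps):
mysqlimport --ignore --local -uroot -p dbName /path/to/tableName.txt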
I'm using INSERT IGNORE INTO by stream editing (sed) my dump file like this:
nice gunzip < dumpfile.sql.gz | sed -e "s|^INSERT INTO |INSERT IGNORE INTO |g" | nice mysql -uroot -p"password" DBName
If you know the last insertion point in your dump, split your mysqldump file just before that point and replace INSERT with INSERT IGNORE from there on. You probably don't want to INSERT IGNORE the whole dataset, as every insert is still attempted.
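A rough sketch of that split-and-ignore approach, assuming the last table that made it starts at line 123456 of the dump (the line number and file names are placeholders):
tail --lines=+123456 dumpfile.sql | sed -e 's|^INSERT INTO |INSERT IGNORE INTO |' | mysql -uroot -p"password" DBName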
"MySQL server has gone away" can also be indicative of exceeding the max_allowed_packet size.
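For example, to check and raise it on the server (512M here is only an illustration; a SET GLOBAL change lasts until restart unless it is also added to my.cnf):
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet=536870912;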
The "database has gone away" is usually indicative of the server crashing, check your mysql logs /var/log/mysqld.log or if not there run;
SELECT * FROM GLOBAL_VARIABLES WHERE VARIABLE_NAME = 'LOG_ERROR';
I've never had a client disconnect, even in week-long runs over the network. It looks like you're connecting locally, so a disconnect is very unlikely.
If you want to resume, you can do the following:
Check the error log to see the cause of the error and fix that first
Grep the dump file with line numbers: grep -in 'DROP TABLE' dumpfile.sql
Compare the tables already restored against the grep results; note the line of the last match
Create a new file starting from the last matched table (inclusive), e.g. tail --lines=+10000 database.sql > resume.sql (see the sketch below)
OR, as someone else stated, use the --ignore-lines option of mysqlimport
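A rough sketch of the grep/tail steps above (123456 is a placeholder for the line number of the last table that was restored successfully):
grep -in 'DROP TABLE' dumpfile.sql          # list each table's starting point with its line number
tail --lines=+123456 dumpfile.sql > resume.sql
mysql -uroot -p DBName < resume.sql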
Hope this helps
I want to dump a specific table in my remote server database, which works fine, but one of the tables has 9m rows and I get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359
So after reading online I understood I need to increase my max_allowed_packet, and that it's possible to add it to my command.
So I'm running the following command to dump my table:
mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip > dump_test.sql.gz
And for some reason, I still get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499
Am I doing something wrong?
It's weird, it's only 9m records... not that big.
Try adding the --quick option to your mysqldump command; it works better with large tables. It streams the rows from the result set to the output rather than slurping the whole table into memory and then writing it out.
mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz
You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Notice that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
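For example, building on the command above:
mysqldump -uroot -h my.host -p'mypassword' --quick --compress --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz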
It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.
These set the timeouts to one calendar day.
SET GLOBAL wait_timeout=86400;
SET GLOBAL interactive_timeout=86400;
Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.
Try logging into a machine closer to the server and running mysqldump on it.
Then use some other means (sftp?) to copy your gz file to your own machine.
Or, you may have to segment the dump of this table. You can do something like this (not debugged).
mysqldump -uroot -h my.host -p'mypassword' \
db_name table_name --skip-create-options --skip-add-drop-table \
--where="id>=0 AND id < 1000000" | \
gzip....
Then repeat that with these lines.
--where="id>=1000000 AND id < 2000000" | \
--where="id>=2000000 AND id < 3000000" | \
...
until you get all the rows. Pain in the neck, but it will work.
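A rough sketch of automating those ranges in a shell loop, assuming an integer id column and an upper bound of about 9 million rows (both are placeholders to adjust):
#!/bin/bash
# dump the table in 1M-row id ranges; tune MAX_ID and STEP for your data
MAX_ID=9000000
STEP=1000000
for ((start=0; start<MAX_ID; start+=STEP)); do
  end=$((start+STEP))
  mysqldump -uroot -h my.host -p'mypassword' db_name table_name \
    --skip-create-options --skip-add-drop-table \
    --where="id>=$start AND id<$end" | gzip > "dump_${start}.sql.gz"
done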
For me, everything worked fine when I skipped locking tables:
mysqldump -u xxxxx --password=xxxxx --quick --max_allowed_packet=512M --skip-lock-tables --verbose -h xxx.xxx.xxx.xxx > db.sql
It may create problems with consistency, but it allowed me to back up a 5GB database without any issue.
Other options to try:
net_read_timeout=3600
net_write_timeout=3600
on my.ini/my.cnf or via SET GLOBAL ...
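For instance, to change them at runtime (the values revert on restart unless they are also added to the config file):
SET GLOBAL net_read_timeout=3600;
SET GLOBAL net_write_timeout=3600;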
Following JohnBigs's comment above, the --compress flag was what worked for me.
I had previously tried --single-transaction, --skip-extended-insert, and --quick without success.
Also, make sure your mysql.exe client is the same version as your MySQL server.
So, if your server version is 8.0.23 but your client version is 8.0.17 or 8.0.25, you may have issues. I ran into this problem using a version 8.0.17 client against a MySQL 8.0.23 server; changing the client version to match the server version resolved the issue.
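For example, you can compare the two like this:
mysql --version
and, from inside a session:
SELECT VERSION();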
I had a similar problem on my server, where MySQL would apparently restart during the nightly backups. It was always the same database, but the actual table sometimes varied.
I tried several suggestions from the other answers here, but in the end it was just a cronjob executing queries that never finished. The CPU and RAM usage wasn't high enough to trigger the monitoring, but it was apparently enough that compressing the dump pushed the machine into the OOM killer. Fixing the cronjob made the next backup succeed again.
Things to look for:
OOM? dmesg | grep invoked
Process killed? grep killed /var/log/kern.log
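On systemd-based systems the kernel log is also available via journalctl, e.g.:
journalctl -k | grep -i 'out of memory'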
If none of the others work, you can use mysqldump's --where feature to break your huge dump into multiple smaller ones.
It might be tedious but it would most likely work.
e.g.
"C:\Program Files\MySQL\MySQL Workbench 8.0 CE\mysqldump.exe" --defaults-file="C:\...\my_password.cnf"
--host=localhost --protocol=tcp --user=mydbuser --compress=TRUE --port=16861 --default-character-set=utf8 --quick --complete-insert --replace
--where="last_modify > '2022-01-01 00:00:00'"
> "C:\...\dump.txt"
my_password.cnf
[client]
password=xxxxxxxx
[mysqldump]
ignore-table=db.table1
ignore-table=db.table2
Then, you just adjust the last_modify cutoff in the --where clause to walk back through the older data, and your huge table is effectively split into many small dumps.
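For example, successive runs might use ranges like these (the dates are placeholders):
--where="last_modify >= '2022-01-01 00:00:00'"
--where="last_modify >= '2021-01-01 00:00:00' AND last_modify < '2022-01-01 00:00:00'"
--where="last_modify >= '2020-01-01 00:00:00' AND last_modify < '2021-01-01 00:00:00'"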
I'm attempting to dump all the databases from a 500GB RDS instance into a smaller instance (100GB). I have a lot of user permissions saved, so I need to dump the mysql database as well.
mysqldump -h hostname -u username -ppassword --all-databases > dump.sql
Now when I try to upload the data to my new instance I get the following error:
mysql -h hostname -u username -ppassword < dump.sql
ERROR 1044 (42000) at line 2245: Access denied for user 'staging'@'%' to database 'mysql'
I would just use a database snapshot to accomplish this, but my instance is smaller in size.
As a sanity check, I tried dumping the data into the original instance but got the same error. Can someone please advise on what I should do here? Thanks!
You may need to do the databases individually, or at least remove the mysql schema from the existing dump file (perhaps using grep to find the line numbers of the USE `database`; statements and then sed to trim out the troublesome section, or see below), and then generate a dump file that doesn't monkey with the table structures or the proprietary RDS triggers in the mysql schema.
I have not tried to restore the full mysql schema onto an RDS instance, but I can certainly see where it would go awry with the customizations in RDS and the lack of SUPER privilege... but it seems like these options on mysqldump should get you close, at least.
mysqldump --no-create-info # don't try to drop and recreate the mysql schema tables
--skip-triggers # RDS has proprietary triggers in the mysql schema
--insert-ignore # write INSERT IGNORE statements to ignore duplicates
--databases mysql # only one database, "mysql"
--skip-lock-tables # don't generate statements to LOCK TABLES/UNLOCK TABLES during restore
--single-transaction # to avoid locking up the source instance during the dump
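Assembled into a single command, that might look something like this (the host, user, and output file name are placeholders):
mysqldump --no-create-info --skip-triggers --insert-ignore \
  --skip-lock-tables --single-transaction \
  --databases mysql \
  -h hostname -u username -p > mysql_grants.sql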
If this is still too aggressive, then you will need to resort to dumping only the rows from the specific tables whose content you need to preserve ("user" and the other grant tables).
THERE IS NO WARRANTY on the following, but it's one from my collection. It's a one-liner that reads "old_dumpfile.sql" and writes "new_dumpfile.sql"... but switching the output off when it sees the USE or CREATE DATABASE statements with `mysql` on the same line, and switching it back on again the next time such a statement occurs without `mysql` in it. This will need to be modified if your dump file also has the DROP DATABASE statements in it, or you could generate a new dumpfile with --skip-add-drop-database.
Running your existing dump file through this should essentially remove only the mysql schema from that file, allowing you to easily restore it manually, first, and then let the rest of the database data flow in more smoothly.
perl -pe 'if (/(^USE\s|^CREATE\sDATABASE.*\s)`mysql`/) { $x = 1; } elsif (/^USE\s`/ || /^CREATE\sDATABASE/) { $x = 0; }; $_ = "" if $x;' old_dumpfile.sql > new_dumpfile.sql
I guess you can try to use MySQL Workbench. There is a migration function there: create the smaller instance (100GB) first, then use that migration feature to migrate from the 500GB instance to the 100GB one and see if it works.
I have had too many access-denied issues with RDS MySQL, so running the command below on RDS is my way out:
GRANT ALL ON `%`.* TO '<type_the_username_here>'@'%';
I am not sure whether this will be helpful in your case. But it has always been a life saviour for me.
This may seem like a very dumb question, but I never learned this any other way and I just want some clarification.
I started to use MySQL a while ago and in order to test various scenarios, I back up my databases. I used MySQL dump for that:
Export:
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases > filename.sql
Import:
mysql -hSERVER -uUSER -pPASSWORD < filename.sql
Easy enough, and it worked quite well up until now, when I noticed a little problem with this "setup": it does not fully "reset" the databases and tables. If, for example, a table is added AFTER a dump file has been created, that additional table will not disappear if you import the same dump file. The import essentially only "corrects" tables already there and recreates any databases or tables that are missing, but it does not remove additional tables whose names are not in the dump file.
What I want to do is to completely reset all the databases on a server when I import such a dump file. What would be the best solution? Is there a special import function reserved for that purpose or do I have to delete the databases myself first? Or is that a bad idea?
You can use the parameter --add-drop-database to add a "drop database" statement to the dump before each "create database" statement.
e.g.
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --add-drop-database >filename.sql
See the mysqldump documentation for details.
There's nothing magic about the dump and restore processes you describe. mysqldump writes out SQL statements that describe the current state of the database or databases you are dumping. It has to fetch a list of tables in each database you're dumping, then it has to read the tables one by one and write them out as SQL. On databases of any size, this takes time.
So, if you create a new table while mysqldump is running, it may not pick up that new table. Similarly, if your application software changes contents of tables while mysqldump is running, those changes may or may not show up in the backup.
You can look at the .sql files mysqldump writes out to see what they have picked up. If you want to be sure that your dumped .sql files are perfect, you need to run mysqldump on a quiet server -- one where nobody is running data definition language.
MySQL hot backup solutions are available. You may need to look into that.
The OP may want to look into:
mysql_install_db
if they want a fresh start with the post-install default settings before restoring one or more dumped DBs. For production servers, another useful script is:
mysql_secure_installation
Also, they may prefer to dump the DB(s) they created separately:
mysqldump -hSERVER -uUSER -pPASSWORD --databases foo > foo.sql
to avoid inadvertently changing the internal DBs:
mysql, information_schema, performance_schema.
I've exported a mysqldump of a database with InnoDB tables and foreign key relationships in them, using the --single-transaction flag (that I read somewhere I should use for InnoDB). No problems.
But when trying to import that dump into another existing database (same database, different server) I get all sorts of errors when trying to drop the tables because it would break the InnoDB relationships.
I also read that I should use foreign_key_checks=0 to avoid this, but this is a server variable, not part of the dump process. So I'm trying to figure out how to automate all this since I have a script that backs up the DB, it was working when all we had were MyISAM tables:
mysqldump -u user -p'password' --single-transaction -q database | ssh user@backup.com mysql -u user -p'password' database
Thanks.
You can dump into a file, add the required SET FOREIGN_KEY_CHECKS=0; in that file, and then feed the file to mysql.
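A quick sketch of that approach (dump.sql, user, and database are placeholders):
( echo "SET FOREIGN_KEY_CHECKS=0;"; cat dump.sql; echo "SET FOREIGN_KEY_CHECKS=1;" ) | mysql -u user -p database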
It turns out that the mysqldump file is smart enough to detect that they are InnoDB tables and puts the appropriate comments at the top of the file. My problem was that when I exported through PHPMyAdmin it didn't put the correct comments on the file, hence causing all this trouble.
Thanks for your response.
You can also set this on the mysql command line when restoring, without editing the original file. This is very useful, as MySQL backups can become huge and editing a GB+ file takes a lot of time compared with adding it to the command line:
mysql -D YourDatabaseName -u YourUserName -p --init-command="SET @@foreign_key_checks=0" < YourBackupDumpFile.sql