I am currently using the mysqldump command as follows:
mysqldump -u username -p -h hostName database tableName > dump.sql
and it fails, emitting the following error:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `table_name` at row: 1652788
Is there any other way (perhaps a parameter to mysqldump, etc.) to export large MySQL tables?
You can add the --single-transaction parameter to the mysqldump command if you are using the InnoDB engine. This eliminates locks on the table and the connection timeouts they can cause.
Also, ensure that you have set sufficiently large values for max_allowed_packet and innodb_lock_wait_timeout.
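For example, a combined invocation might look like the sketch below (host, names, and the packet size are placeholders; --quick is an extra assumption that streams rows instead of buffering whole tables in memory):

mysqldump -u username -p -h hostName \
  --single-transaction \
  --quick \
  --max-allowed-packet=512M \
  database tableName > dump.sql

On the server side, the relevant variables can be raised as well; the values here are assumptions, not recommendations:

SET GLOBAL max_allowed_packet = 536870912;   -- 512M
SET GLOBAL net_write_timeout = 600;          -- helps with "lost connection" mid-dump
SET GLOBAL innodb_lock_wait_timeout = 120;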
I'm taking a backup from a standalone database using the following command:
mysqldump -u <user> -p --databases <some databases> --no-create-info --no-create-db --skip-triggers --single-transaction --compress --order-by-primary > data.sql
When I'm importing the data into a MySQL Group Replication, I get this error:
ERROR 3098 (HY000) at line 2150: The table does not comply with the requirements by an external plugin.
The last line the restore ran was an ALTER TABLE ... DISABLE KEYS statement, and the error stopped appearing after the matching ALTER TABLE ... ENABLE KEYS.
Managed to figure it out.
One of the MySQL Group Replication requirements is that every table have a primary key, unlike standalone MySQL, which doesn't require one.
I took the data from a standalone MySQL instance and tried to import it into a Group Replication cluster.
As it turned out, only one table didn't have a primary key, so the import always failed on that table with that error.
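To catch such tables up front, one option is to query information_schema on the source server before dumping. This is a sketch; only the system schemas named here are excluded, everything else is generic:

SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema')
  AND c.constraint_name IS NULL;

Any table this returns needs a primary key added before the import into Group Replication can succeed.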
We have a cron job that does a full dump of a MySQL 5 server, and in restore tests on an empty instance it restores all databases, including mysql, with mysql.user carrying the users and permissions along.
In MySQL 8, because the mysql database is a system schema, the --add-drop-database and --all-databases options conflict, and the restore fails with "ERROR 3552 (HY000) at line 19044: Access to system schema 'mysql' is rejected.", since dropping the mysql database is not allowed.
Has anyone managed to get around this in MySQL 8 and carry users and privileges in the same dump file?
This is the command I run to dump:
mysqldump --add-drop-database --flush-logs --single-transaction --ignore-table=mysql.innodb_index_stats --ignore-table=mysql.innodb_table_stats --quick --all-databases --triggers --routines --events -u root --password='senha' -P 3306 -h 1.1.1.1 | bzip2 > /tmp/backup.sql.bz2
The problematic SQL:
/*!40000 DROP DATABASE IF EXISTS `mysql`*/;
The best way to work around this is to open the dump SQL file and delete this statement;
if the file is too big, use sed.
I ran into this same scenario. I dumped a broken instance with all databases, using --add-drop-database to try to save the data, but when I went to restore it I was blocked: you can no longer drop the mysql system database.
My database backup was something like 150 GB, and opening it manually was not an option (a shame, as I could tell by running head -n 50 backup.sql that the problematic statement was within the first few lines).
The statement to remove was:
/*!40000 DROP DATABASE IF EXISTS `mysql`*/;
and the sed command for me was:
sed -i 's/\/\*!40000 DROP DATABASE IF EXISTS `mysql`\*\/;/ /g' backup.sql
I would paste the statement into an empty text file first and run the command on that to confirm it actually works. This way you don't waste a ton of time executing against a very large backup file -- there's a chance that your version of sed, or your OS, resolves the regular expression differently.
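A minimal smoke test along those lines, with the file name as a placeholder, could be:

printf '%s\n' '/*!40000 DROP DATABASE IF EXISTS `mysql`*/;' > statement_test.sql
sed 's/\/\*!40000 DROP DATABASE IF EXISTS `mysql`\*\/;/ /g' statement_test.sql

If the second command prints a nearly blank line instead of the statement, the expression resolves correctly and it should be safe to run sed -i against the real backup file.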
I started a dump of my database; it's over 5 GB, and after downloading 2 GB it now fails with the error mentioned below.
I am confused and have not tried anything yet.
mysqldump -uroot -proot -h123.123.123.123 example >example.sql
mysqldump: Error 2013: Lost connection to MySQL server when dumping table `a_dit` at row: 444444
And I don't want to start all over again.
It stopped at a table; is there any way to resume from that table, or to check and resume the dumping process?
Yes, you can:
mysqldump -uroot -p db_name --skip-create-options table_name --where='id>666666'
Or you can use:
SELECT * INTO OUTFILE 'data_path.sql' FROM table_name WHERE id > 666666;
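With the mysqldump variant, the resumed rows can be appended to the partial dump file with >> rather than > (this assumes the table has a monotonically increasing id column and you know roughly where the dump stopped; --no-create-info, an alternative to --skip-create-options, omits the CREATE TABLE statement entirely so the append restores cleanly):

mysqldump -uroot -p db_name table_name --no-create-info --where='id>666666' >> dump.sql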
I would recommend starting again and dumping every table on its own.
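A minimal sketch of that per-table approach, assuming credentials are read from ~/.my.cnf so the loop is not interrupted by password prompts:

for t in $(mysql -N -e 'SHOW TABLES' db_name); do
  mysqldump db_name "$t" > "$t.sql"
done

If one table fails, only that table's dump needs to be rerun.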
We have a quite large (about 1 TB) MySQL 5.7 DB hosted on RDS. We want to migrate it to Aurora 5.6, because of parallel queries (these are available only for 5.6).
It's not possible to do that by snapshot, because the versions are not the same. We need to do a mysqldump and then restore it.
I tried several options, but most of them failed because of the size of the DB.
For example, a straight import:
nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPass db_name | mysql -u user2 -pPAss2 -h aurora_5_6.amazonaws.com db_name
Error in nohup.out:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table
A dump to an S3 file also failed:
nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPAss db_name | aws s3 cp - s3://bucket/db-dump.sql
error:
An error occurred (InvalidArgument) when calling the UploadPart operation: Part number must be an integer between 1 and 10000, inclusive
Both of the previous methods worked for me on a smaller DB of about 10 GB, but not on 1 TB.
Is there any other way to migrate such a database?
Many thanks.
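The S3 failure, at least, has a likely explanation: aws s3 cp streams stdin as a multipart upload, and with the AWS CLI's default 8 MB chunk size the hard limit of 10,000 parts caps the stream at roughly 80 GB. Raising the chunk size before retrying should lift that cap; this is a sketch assuming an otherwise default CLI configuration:

aws configure set default.s3.multipart_chunksize 128MB

With 128 MB parts, the same 10,000-part limit allows a stream of over 1 TB.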
I have to delete some table data in a production DB, and the records that are going to be deleted should be copied as a backup to another, local DB. This involves two databases residing on two different servers/instances.
Is it possible to do this via an SQL (MySQL) query?
I would use mysqldump with a WHERE condition to get the records out. Once you have everything saved, you can then delete them from prod. These commands should work from the command line; including the password to avoid the prompt is optional.
mysqldump -u user -pPassword -h hostname1 dbname tablename \
  --where 'field1="delete"' \
  --skip-add-drop-table --no-create-db --no-create-info > deleted.sql
mysql -u user -pPassword -h hostname2 dbname < deleted.sql
mysql -u user -pPassword -h hostname1 dbname -e 'DELETE FROM tablename WHERE field1="delete"'
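Before running the DELETE, it may be worth confirming that the copy actually landed. A simple check, reusing the same placeholder names and assuming the table on hostname2 was empty before the restore, is to compare row counts on both hosts:

mysql -u user -pPassword -h hostname1 dbname -e 'SELECT COUNT(*) FROM tablename WHERE field1="delete"'
mysql -u user -pPassword -h hostname2 dbname -e 'SELECT COUNT(*) FROM tablename'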
I'm trying to do exactly the same thing: copy data from a table to another server, then delete it from the original.
So far I see two options:
copy data to a local database then replicate that database to another server
use Federated storage engine
Both require some serious reconfiguration of our servers, as neither Federated nor binary logging (required for replication) is enabled. This would take time, and it would be best if another solution could be found.
The process needs to be executed on a daily basis, so it needs to be fully automated.
Perhaps a third option is to automate things with a cron job (sketched after this list):
copy the data to a separate database on the same server
back up that database with mysqldump into a folder that is also linked on the other server
on the second server, restore the database from the SQL dump
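A minimal sketch of such a cron job, with every name, host, and path a placeholder (and assuming the staging database and its table already exist, credentials come from ~/.my.cnf, and /shared is the folder linked on both servers):

#!/bin/sh
# 1. Copy the rows to a staging database on the same server.
mysql proddb -e "INSERT INTO staging.archive SELECT * FROM proddb.records WHERE archived = 1"

# 2. Dump the staging database into the directory the other server can reach.
mysqldump staging > /shared/staging_dump.sql

# 3. On the second server, restore from that dump (here via ssh).
ssh backup@server2 "mysql archivedb < /shared/staging_dump.sql"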