I'm trying to dump a 13 GB database, and while doing so I'm encountering the following error.
Here is the dump statement:
mysqldump -u user_db -p resolve_production > resolve_production.sql
mysqldump: Error 1053: Server shutdown in progress when dumping table audits at row: 10506716
It is possible that the server was shut down or restarted while the backup was running, in which case your backup is not complete. I recommend you run the dump again; the backup may be corrupted or incomplete.
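One quick way to tell whether a dump actually finished is to look at its last line: unless comments are disabled, mysqldump writes a completion marker at the end. A minimal check, using the file name from the question:
tail -n 1 resolve_production.sql
# a complete dump ends with a line like: -- Dump completed on 2021-01-01 12:00:00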
While importing from a mysqldump file, the following error is encountered:
ERROR 1812 (HY000) at line 753: InnoDB: A general tablespace named TABLESPACE_NAME cannot be found.
Used the following commands to export and import:
Export: mysqldump -h $DB_HOST -u$DB_USER -p$DB_PASSWORD --complete-insert --routines --triggers --single-transaction "$dbname" > "$dbname".sql
Import: mysql -h $server -u $user -p$password $dbname < $sql;
The same scripts produced a dump with no TABLESPACE definitions when run against one of the other database servers, while the one in AWS (RDS 5.7) resulted in a dump with TABLESPACE definitions.
Currently, the dump needs to be exported from MySQL 5.7 and imported into MySQL 5.7 running as a Docker container.
Should the definitions be excluded while dumping, or excluded while importing? Either way, I need help from DB experts about the command options to import such a database.
After some research and tests, this is my understanding.
When tables are created in named tablespaces, mysqldump includes the TABLESPACE references in the CREATE TABLE statements but does not include the tablespace definitions themselves, and mysqldump 5.7 has no option to ignore those references (the --no-tablespaces option only arrived later, in mysqldump 8.0.21). So I had two options after taking the dump:
1. Edit the dump manually and remove the references to the tablespaces.
2. Create the tablespaces before importing the dump.
I had the luxury of importing into a server running as a container, so MySQL listed all the tablespaces that were not available (after each such error was reported, I could throw away the MySQL container and start a fresh one). Since I knew all the tablespaces, I went with the second option; I also felt that hand-editing a dump created by a tool (mysqldump) was not a good idea.
Used the following statement to create each tablespace:
CREATE TABLESPACE <tablespace-name> ADD DATAFILE 'tablespace-name.ibd';
This approach worked for me.
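If you don't already know the tablespace names, you can pull them out of the dump itself and create them in a loop before importing. A sketch, assuming GNU grep and the shell variables from the question's scripts; mysqldump writes the references as version comments like /*!50100 TABLESPACE `name` */:
grep -oP 'TABLESPACE `\K[^`]+' "$dbname".sql | sort -u | while read ts; do
  # create each missing tablespace with a matching .ibd datafile
  mysql -h "$server" -u "$user" -p"$password" \
    -e "CREATE TABLESPACE \`$ts\` ADD DATAFILE '$ts.ibd';"
done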
I had the same problem. I wasn't able to create the tablespace on the DB, because it was shared hosting and doing so requires server admin privileges, but I was able to use the error message to find the textual reference to the tablespace in the dump file. It is consistent every time, so a simple find-and-delete pass took care of the whole file, and then the import completed with no errors.
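That find-and-delete can also be scripted. A sketch with GNU sed, assuming the references appear as /*!50100 TABLESPACE `...` */ version comments (the file name is the question's placeholder):
sed -i 's|/\*!50100 TABLESPACE `[^`]*` \*/||g' "$dbname".sql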
I started a dump of my database. It's over 5 GB, and after downloading 2 GB it now gives the error mentioned below.
I am confused and have not tried anything yet.
mysqldump -uroot -proot -h123.123.123.123 example >example.sql
mysqldump: Error 2013: Lost connection to MySQL server when dumping table `a_dit` at row: 444444
And I don't want to start all over again.
It stopped at a table. Is there any way to resume from the table it left off at, or to check and resume the dumping process?
Yes, you can:
mysqldump -uroot -p --no-create-info --where='id>666666' db_name table_name >> example.sql
(--no-create-info skips the DROP/CREATE TABLE statements, so appending to the partial dump doesn't wipe the rows already in it.)
Or you can also use:
SELECT * INTO OUTFILE 'data_path.sql' FROM table_name WHERE id > 666666;
(INTO OUTFILE writes tab-delimited data on the server, not SQL statements.)
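To find the id value for --where, one option is to look up the id at the row count reported in the error. A sketch, assuming the table has an auto-increment id column and using the connection details from the question (-N drops the column header; OFFSET 444443 returns the id of row 444444, the row the dump died on):
mysql -uroot -proot -h123.123.123.123 -N example \
  -e 'SELECT id FROM a_dit ORDER BY id LIMIT 1 OFFSET 444443;'
Trim any incomplete final INSERT from the partial file before appending the resumed dump.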
I would recommend starting again and dumping every table on its own.
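A sketch of dumping each table to its own file, using the host and credentials from the question (SHOW TABLES drives the loop; -N suppresses the column header):
for t in $(mysql -uroot -proot -h123.123.123.123 -N -e 'SHOW TABLES' example); do
  # one file per table, so a failure only forces a retry of that table
  mysqldump -uroot -proot -h123.123.123.123 example "$t" > "example_$t.sql"
done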
I'm currently using the mysqldump command as follows:
mysqldump -u username -p -h hostName database tableName > dump.sql
and it fails, emitting the following error:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `table_name` at row: 1652788
Is there any other way (perhaps a parameter to mysqldump or etc) for exporting large MySQL tables?
You can add the --single-transaction parameter to the mysqldump command if you are using the InnoDB engine. This eliminates locks on the tables and possible connection timeouts.
Also, ensure that you have set sufficiently large values for max_allowed_packet and innodb_lock_wait_timeout.
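A sketch of the combined command; 512M and 120 are only placeholder values to tune for your server, and the host/database names are the ones from the question:
mysql -u username -p -h hostName -e 'SET GLOBAL innodb_lock_wait_timeout = 120;'
mysqldump -u username -p -h hostName --single-transaction \
  --max_allowed_packet=512M database tableName > dump.sql
SET GLOBAL requires a privileged account, and --max_allowed_packet here raises the client-side limit for the dump session.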
I'm trying to get a dump of my database:
mysqldump myDatabase > myDatabase.sql
but I'm getting this error:
mysqldump: Got error: 1146: Table 'myDatabase.table' doesn't exist when using LOCK TABLES
When I go to mysql:
mysql -u admin -p
I query for the tables:
show tables;
I see the table, but when I query that particular table:
select * from table;
I get the same error:
ERROR 1146 (42S02): Table 'myDatabase.table' doesn't exist
I tried to repair:
mysqlcheck -u admin -p --auto-repair --check --all-databases
but get the same error:
Error : Table 'myDatase.table' doesn't exist
Why am I getting this error, and how can I fix it?
I'll really appreciate your help.
For me the problem was resolved by going to /var/lib/mysql (or wherever your raw database files are stored) and deleting the .frm file for the table that the error says does not exist.
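A cautious sketch of that fix: it moves the orphaned .frm aside instead of deleting it outright, and assumes the default data directory, a systemd-managed server, and the database/table names from the question:
sudo systemctl stop mysql
# keep a copy in case the table turns out to be recoverable
sudo mv /var/lib/mysql/myDatabase/table.frm /root/table.frm.bak
sudo systemctl start mysql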
I had an issue running mysqldump on the server, and I realized that if tables had not been used for a long time, I did not need them anymore (they belonged to old applications that had been shut down).
The case: you cannot back up with mysqldump because there are tables that are corrupted and no longer needed.
First, I get the list of corrupted tables:
mysqlcheck --repair --all-databases -u root -p"${MYSQL_ROOT_PASSWORD}" > repair.log
Then I analyze the log with a Python script that reads the log on stdin (save it as, e.g., analyze.py and run cat repair.log | python3 analyze.py):
#!/usr/bin/env python3
import re
import sys

# read the whole mysqlcheck log from stdin
lines = sys.stdin.read().split("\n")
tables = []
for line in lines:
    if "Error" in line:
        # pull the db.table name out of lines like:
        #   Error    : Table 'db.table' doesn't exist
        matches = re.findall(r"Table '([A-Za-z0-9_.]+)' doesn", line)
        if matches:
            tables.append(matches[0])

# print as {db1.t1,db2.t2,...} so the output can be pasted straight
# into the bash brace expansion of --ignore-table={...}
print('{' + ",".join(tables) + '}', end='')
You will get a brace-wrapped list of corrupted tables.
Do the export with mysqldump, replacing the braces below with the output of the previous command:
mysqldump -h 127.0.0.1 -u root -p"${MYSQL_ROOT_PASSWORD}" -P 3306 --skip-lock-tables --add-drop-table --add-drop-database --add-drop-trigger --all-databases --ignore-table={table1,table2,table3} > dump.sql
Turn off the database server, move /var/lib/mysql to /var/lib/mysql-backup, and start the server again.
On the clean instance, just import dump.sql, restart the database, and enjoy an instance without corrupted tables.
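A sketch of those last steps, assuming a systemd-managed MySQL 5.7; mysqld --initialize-insecure creates a fresh, empty data directory with a passwordless root account:
sudo systemctl stop mysql
sudo mv /var/lib/mysql /var/lib/mysql-backup
sudo mysqld --initialize-insecure --user=mysql
sudo systemctl start mysql
mysql -h 127.0.0.1 -u root -P 3306 < dump.sql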
I recently came across a similar issue on an Ubuntu server that was upgraded to 16.04 LTS. In the process, MySQL was replaced with MariaDB and apparently the old database couldn't be automatically converted to a compatible format. The installer moved the original database from /var/lib/mysql to /var/lib/mysql-5.7.
Interestingly, the original table structure was present under the new /var/lib/mysql/[database_name] in the .frm files. The new ibdata file was 12M and the two logfiles were 48M, from which I concluded that the data must be there, but later I found that initializing a completely empty database results in similar sizes, so that's not indicative.
I installed 16.04 LTS in VirtualBox and installed MySQL on it, then copied over the mysql-5.7 directory and renamed it to mysql. I started the server and dumped everything with mysqldump. I then deleted /var/lib/mysql on the original server, initialized a new one with mysql_install_db, and imported the SQL file from mysqldump.
Note: I was not the one who originally did the system upgrade, so there may be a few details missing, but the symptoms were similar to yours, so maybe this could help.
I have a problem with my MySQL database.
My RAID failed, but I recovered the data from the drive, so now I have my old database back; however, it is full of errors. I want to export it to an SQL file and import it onto the new RAID and drives.
But when I try to dump it with this command:
root@LFCZ:/home# mysqldump -u root -p mc | gzip -9 > mc.sql.gz
It gives me this error:
mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES
Can you help with that? The only thing I need is to get the .sql file. It is a very big database (approx. 13 GB), but it is running on an OVH dedicated server, so it is powerful enough.