I made a backup. The dump is 240 GB. This is the command I executed:
mysql -u dba -p < XXX.sql
After one hour of the restore I got this error:
2013: Lost connection to MySQL server during query
After googling I changed these two parameters:
max_allowed_packet = 2148M and net_write_timeout = 10000
But unfortunately I still get the same error.
I also tried adding innodb_force_recovery = 1 to my.cnf, but then I got this error:
ERROR 1036 at line 2863: Table XXX is read only
Thank you in advance for your help.
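For what it's worth, here is a minimal sketch of the settings that usually get raised for error 2013 on a large restore (the config path and values are assumptions, not taken from this post). Note that MySQL caps max_allowed_packet at 1 GB, so 2148M is out of range and gets clamped:
# server side, e.g. /etc/mysql/my.cnf; restart mysqld after editing
[mysqld]
max_allowed_packet = 1024M   # 1 GB is the server-side maximum
net_read_timeout = 600       # seconds the server waits while reading from the client
net_write_timeout = 600      # seconds the server waits while writing to the client
wait_timeout = 28800         # idle-connection timeout
# client side: match the packet size when re-running the restore
mysql --max_allowed_packet=1024M -u dba -p < XXX.sql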
Related
C:\Program Files\MySQL\MySQL Server 5.7\bin\mysql.exe --host=localhost --port=3306 -u root nd7b265_rahetbally
Task 'MySQL script' started at Sun May 31 09:53:09 MST 2020
ERROR 2013 (HY000) at line 118827: Lost connection to MySQL server during query
Task 'MySQL script' finished at Sun May 31 09:53:13 MST 2020
2020-05-31 09:53:13.363 - IO error: Process failed (exit code = 1). See error log.
2020-05-31 09:53:13.363 - java.io.IOException: Process failed (exit code = 1). See error log.
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.executeProcess(AbstractNativeToolHandler.java:182)
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.doExecute(AbstractNativeToolHandler.java:237)
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.lambda$0(AbstractNativeToolHandler.java:52)
at org.jkiss.dbeaver.runtime.RunnableContextDelegate.lambda$0(RunnableContextDelegate.java:39)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:122)
Just change the connection setting from --host=localhost to --host=127.0.0.1. It worked for me.
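Applied to the command above, only the host flag changes; everything else stays the same:
C:\Program Files\MySQL\MySQL Server 5.7\bin\mysql.exe --host=127.0.0.1 --port=3306 -u root nd7b265_rahetbally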
Via right-click on the DB > SQL Editor > New SQL script, execute the following queries:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
https://stackoverflow.com/a/18979736
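One caveat worth adding (my note, not from the linked answer): SET GLOBAL only affects connections opened after the change and does not survive a server restart, so reconnect before re-running the script, and verify the values took effect:
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'net_buffer_length';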
In the mysql client program, you can configure max_allowed_packet as shown below.
On the MySQL server, max_allowed_packet is set in mysqld.cnf (MySQL 5.7).
If the database is bigger than 7 GB, you have to use the 256 MB trick from this post: https://stackoverflow.com/a/35599592
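Roughly, both sides look like this (the database name, dump file, and config path are assumptions; adjust to your installation):
# client side: raise the packet size for one import
mysql --max_allowed_packet=256M -u root -p mydb < dump.sql
# server side: [mysqld] section of mysqld.cnf (MySQL 5.7), then restart the server
[mysqld]
max_allowed_packet = 256M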
I solved it by adding these lines
wait_timeout = 28800
interactive_timeout = 28800
max_allowed_packet=100M
under the [mysqld] section
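The server has to be restarted for these settings to take effect; on a systemd-based Linux host that is something like:
sudo systemctl restart mysql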
I want to load a large .sql file (1.5 GB), but every time it gets stuck at 623.9 MB with the error: MySQL server has gone away.
I am running the following command:
mysql -u {DB-USER-NAME} -p {DB-NAME} < {db.file.sql path}
I have already changed my my.cnf file with the following values, but that did not help:
wait_timeout = 3600
max_allowed_packet = 100M
innodb_lock_wait_timeout = 2000
What am I missing here?
Refer to this link for various combinations that have worked for this problem:
MySQL Server has gone away when importing large sql file
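The combination that usually does it (a sketch, assuming the limit is being hit on the server side; the service name and size are placeholders): put max_allowed_packet in the [mysqld] section of the config file the server actually reads, restart, and verify before re-running the import:
# my.cnf: the setting must sit under [mysqld], not [mysql] or [client]
[mysqld]
max_allowed_packet = 512M
# restart and verify before re-running the import
sudo systemctl restart mysql
mysql -u {DB-USER-NAME} -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"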
I have a problem with my MySQL database.
My RAID failed, but I recovered the data from the drive and now I have my old database back, though it is full of errors. I want to export it to an SQL file and import it onto the new RAID and drives.
But when I tried to dump it with this command:
root@LFCZ:/home# mysqldump -u root -password mc | gzip -9 > mc.sql.gz
It gives me this error:
mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES
Can you help with that? The only thing I need is to get the .sql file. It is a very big database (approx. 13 GB), but it is running on an OVH dedicated server, so the machine is powerful enough.
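One thing worth trying (a suggestion on my part, not from this thread): skip table locking altogether. For InnoDB tables, --single-transaction produces a consistent dump without LOCK TABLES, and --quick streams rows instead of buffering whole tables in memory, which helps on a 13 GB database:
mysqldump --single-transaction --quick -u root -p mc | gzip -9 > mc.sql.gz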
I don't think this is a duplicate question, as I have read other posts about error 2003 and none resolves my situation.
I have a bash script that executes mysqldump on a nightly basis against many tables in an Amazon RDS database. It has worked without issue for months, but recently, I've started seeing errors. Example results for a recent week:
Day 1: success
Day 2: success
Day 3:
TBL citygrid_state: export entire table
TBL ci_sessions: export entire table
mysqldump: Got error: 2003: Can't connect to MySQL server on 'blah' (110) when trying to connect
... wrlog crit forced halt
Day 4: success
Day 5:
TBL sparefoot.consumer_lead_action_meta: export entire table
TBL sparefoot.consumer_lead_action_type: export entire table
mysqldump: Got error: 2003: Can't connect to MySQL server on 'blah' (110) when trying to connect
... wrlog crit forced halt
Since the script works completely some nights and the calls to mysqldump work dozens of times before an error occurs, my thought is that I have a timeout problem. But where? Might it be a MySQL setting? Or does the issue lie elsewhere?
Some of the MySQL timeout settings:
connect_timeout 10
interactive_timeout 14400
net_read_timeout 30
net_write_timeout 60
wait_timeout 28800
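Error 2003 with errno 110 is a TCP connect timeout, so it happens before any of the server-side timeouts above come into play. One pragmatic sketch (my suggestion, not a confirmed fix; the host, user, and file names are placeholders) is to retry the failing mysqldump with a short backoff instead of halting the whole script:
#!/bin/bash
# Retry mysqldump a few times to ride out transient connect failures.
for attempt in 1 2 3; do
    if mysqldump -h blah -u backup_user -p"$DB_PASS" sparefoot > nightly.sql; then
        exit 0                              # dump succeeded
    fi
    echo "attempt $attempt failed, retrying in 30s" >&2
    sleep 30                                # give the endpoint time to recover
done
echo "mysqldump failed after 3 attempts" >&2
exit 1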
I'm trying to import a large database dump (126 MB, exported with extended inserts) into MAMP (localhost on my computer) and I get errors whatever I do.
First I tried to import it via the terminal with
/applications/MAMP/library/bin/mysql -u root -p databasename < /path/file.sql
but I got this error (line 44 is where the first INSERT INTO is):
ERROR 2006 (HY000) at line 44: MySQL server has gone away
so I copied my-large.cnf into /applications/MAMP/conf/, renamed it to my.cnf, and set a new value for max_allowed_packet:
[mysqldump]
quick
max_allowed_packet = 32M
and also put the line skip-character-set-client-handshake after [mysqld]
# The MySQL server
[mysqld]
skip-character-set-client-handshake
I saved the file, restarted the server, tried to import the database again from the command line, and still got the same error.
I also tried to import it with MY MAMP DUMP, but I got an error message after a few seconds: An error occurred while processing SQL file.
I then tried with bigdump, where I set $max_query_lines = 6000; but the script doesn't even seem to run (yes, I put the file and the script in the same directory and yes, the mysql server is running).
I really don't know what else to do, what could be the problem?
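One thing stands out (my observation, not an answer from the thread): the [mysqldump] group only configures the mysqldump tool, so a max_allowed_packet placed there never reaches the server that is receiving the import. Assuming MAMP reads /applications/MAMP/conf/my.cnf, the setting would need to sit in the server section, roughly:
[mysqld]
skip-character-set-client-handshake
max_allowed_packet = 64M    # server-side limit; restart MAMP's servers after editing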