MySQL Import 4GB+ SQL on MySQL 5.6 - mysql

I’m trying to import a 4GB+ SQL file into MySQL 5.6 (64-bit) on Windows 7 (64-bit).
The problem is that, after a few seconds, I get this message:
ERROR 2006 (HY000) at line 204: MySQL server has gone away
It does import, but only the first 3 tables (the first 2 fully, and only the structure of the 3rd).
I’ve been trying this command:
mysql -u root -p firedb < C:\database_2013-11-12.sql
I’ve tried a lot of things I could find here on Stack Overflow, with no success yet:
[mysqld]
innodb_file_per_table
max_allowed_packet=2048M
wait_timeout=3600
net_read_timeout=3600
net_buffer_length=3600
The SQL file was created on “MySQL 5.1.72-2-log (Debian)” using this command:
mysqldump -u root -p --all-databases
I have also tried setting --max_allowed_packet when running the command like this:
mysql --max_allowed_packet=2048M -u root -p --all-databases

The Documentation states:
The most common reason for the MySQL server has gone away error is
that the server timed out and closed the connection.
By default, the server closes the connection after eight hours if
nothing has happened. You can change the time limit by setting the
wait_timeout variable when you start mysqld. See Section 5.1.4,
“Server System Variables”.
If you have a script, you just have to issue the query again for the
client to do an automatic reconnection. This assumes that you have
automatic reconnection in the client enabled (which is the default for
the mysql command-line client).
So I would start by increasing the timeout.
If this does not help, read the attached documentation link for the other possible causes of the "server has gone away" error.
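For example, a minimal sketch of what that could look like (the values are illustrative, not recommendations; the settings go under the [mysqld] section and need a server restart, or can be set at runtime with SUPER privileges):
[mysqld]
max_allowed_packet=1024M
wait_timeout=28800
interactive_timeout=28800
net_read_timeout=600
net_write_timeout=600
Or, without a restart (the change is lost when the server restarts):
SET GLOBAL max_allowed_packet=1073741824;
SET GLOBAL wait_timeout=28800;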

Related

MySQL Workbench 8.0.28 export issue on MacOS 12.3 [closed]

Up until recently I was using MySQL Workbench 8.0.20 without any issues, until I upgraded my macOS to 12.3, after which the Workbench software itself stopped working. I then upgraded Workbench to version 8.0.28 (the latest version at the time of writing).
After updating to the new version, I initially had issues connecting to my remote databases. I was getting the following error:
Got error: 2026: SSL connection error: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol when trying to connect
But I was able to solve that one by setting the 'Use SSL' option under the SSL tab for the connection to 'No'.
The next issue, though, is that I am now not able to perform exports from the server using mysqldump. The Workbench software is trying to run the following command:
Running: /Applications/MySQLWorkbench.app/Contents/MacOS/mysqldump --defaults-file="/var/folders/fd/jt76prtj4z35dqd6y1y1_jcw0000gn/T/tmppuwxrtig/extraparams.cnf" --host=host.db.com --port=3306 --default-character-set=utf8 --user=logicspice --protocol=tcp --single-transaction=TRUE --column-statistics=0 --skip-triggers "database"
after which I'm getting a similar error:
mysqldump: Got error: 2026: SSL connection error: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol when trying to connect
Is there an update I can do to a certain configuration file for either mysqldump or MySQL Workbench that will disable the use of SSL when trying to use mysqldump?
Your assistance would be much appreciated as this issue is causing delays in my development work. Thanks!
Summary of system -
Operating system - MacOS Monterey 12.3
Processor - 2.4 GHz 8-Core Intel Core i9
MySQL Workbench version - mysql-workbench-community-8.0.28-macos-x86_64.dmg
MySQL version - 5.6.10 (MySQL Community Server (GPL)) on AWS RDS
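One thing that may be worth trying from a terminal first (a sketch only, assuming the bundled 8.0 mysqldump accepts the standard client --ssl-mode option; host, user and database name are taken from the command above):
/Applications/MySQLWorkbench.app/Contents/MacOS/mysqldump --ssl-mode=DISABLED --host=host.db.com --port=3306 --user=logicspice -p --single-transaction --column-statistics=0 --skip-triggers "database" > dump.sql
If that works, the failure is limited to the TLS negotiation between the new client and the old 5.6 server rather than to mysqldump itself.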
Exporting a MySQL or MariaDB database
To export the database, the mysqldump command is used from the console. Once the backup is done, the generated file can be easily moved. To start exporting the database you have to execute the following:
mysqldump -u username -p database_name > data-dump.sql
username : Refers to the name of the database user.
database_name : Must be replaced by the name of the database you want to export.
data-dump.sql : Is the file that will be generated with all the database information.
That command will not produce any visual output, so to make sure that the export completed correctly you can inspect the beginning of the generated file. To do this you can use the following command:
head -n 5 data-dump.sql
That command should return something like this:
-- MySQL dump 10.13 Distrib 5.7.16, for Linux (x86_64)
--
-- Host: localhost Database: database_name
-- ------------------------------------------------------
-- Server version 5.7.16-0ubuntu0.16.04.1
It is also possible to export one or more tables instead of the entire database. To do this, you must indicate in the command the selection you want to make.
mysqldump -u username -p database_name table_name_1 table_name_2 table_name_3 > data-dump.sql
In this case, it is important to take special care with the relationships between the different records. When importing, only those tables that have been selected will be overwritten.
Importing a MySQL or MariaDB database
To import a MySQL or MariaDB dump, the first thing to do is to create the database into which the import will be done. To do this, if you do not have any database manager, you have to connect to the database server as "root" user.
mysql -u root -p
This will open the MySQL or MariaDB shell. You will then be able to create the database.
mysql> CREATE DATABASE new_database;
If everything went well, you will see something like this:
Query OK, 1 row affected (0.00 sec)
Once created, you have to exit this shell by pressing CTRL+D. Once you are in the normal command line, it will be time to launch the command that will perform the database import.
mysql -u username -p new_database < data-dump.sql
username : Is the name of the user with access to the database.
new_database : Is the name of the database where the import will be performed.
data-dump.sql : Is the name of the file containing all the sql statements to be imported.
If any errors occur during the import process, they will be displayed on the screen. As you can see, exporting and importing a MySQL or MariaDB database is a very simple process.
Note: all of this is done in a terminal on Ubuntu, but on macOS it is exactly the same.
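As a compact variant of the steps above (a sketch using the same placeholder names), the database can also be created and the dump loaded without opening the interactive shell:
mysql -u root -p -e "CREATE DATABASE new_database;"
mysql -u root -p new_database < data-dump.sql
The -e option runs the given statement and exits, so the import command can follow immediately.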
Another solution: here is one more thing I have found, in case you still get that error, assuming you are using OpenSSL and not yaSSL.
Refer to the MySQL configuration variable ssl_cipher.
Configure a list of ciphers that includes the @SECLEVEL=1 pseudo-cipher.
For example:
ssl_cipher = "DHE-RSA-AES128-GCM-SHA256:AES128-SHA:@SECLEVEL=1"
If you need a more permissive but still secure cipher list, the following one, taken from https://cipherlist.eu/, could do the job:
"EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES128-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA128:DHE-RSA-AES128-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA128:ECDHE-RSA-AES128-SHA384:ECDHE-RSA-AES128-SHA128:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA128:DHE-RSA-AES128-SHA128:DHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA384:AES128-GCM-SHA128:AES128-SHA128:AES128-SHA128:AES128-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4:@SECLEVEL=1"
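For context, a sketch of where that setting might go. On the machine running the client it can be supplied as a client option, since it is the new client's OpenSSL that is refusing the old server's TLS version (the cipher string here is just the short example from above, not a vetted choice):
mysqldump --ssl-cipher="DHE-RSA-AES128-GCM-SHA256:AES128-SHA:@SECLEVEL=1" -h host.db.com -u logicspice -p "database" > dump.sql
On a self-managed server it would instead go into the option file, under [mysqld], as ssl_cipher = "...", followed by a server restart.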

MySQL Error: The user specified as a definer ('mysql.infoschema'@'localhost') does not exist when trying to dump tablespaces

After I upgraded MySQL 5.7 to MySQL 8.0, I started MySQL again and got this error: The user specified as a definer ('mysql.infoschema'@'localhost') does not exist when trying to dump tablespaces.
I don't understand why this problem occurs, and I want to know how to solve it.
I had the same error when I accidentally downgraded my MySQL version from 8 to 5.7. On its first start, the older version broke something, so that version 8 then showed the error above.
In my case I first had to enter the Docker container where MySQL was running:
docker exec -it mysql bash
Then I basically followed the steps here
mysql -u root -p
mysql> SET GLOBAL innodb_fast_shutdown = 1;
mysql_upgrade -u root -p
This took some minutes but then everything was working again.
It may also occur some time after you set up your new system.
As a suggested solution on Windows, just try the following:
1) open cmd.exe as Administrator
2) run mysql_upgrade.exe -uyour_user_name -pyour_password
mysql_upgrade.exe can be located at
C:\Program Files\MySQL\MySQL Server 8.0\bin
Then run the following to see if the infoschema user has appeared.
select user, host from mysql.user;
In my case, the error was caused by my having changed the host of the DBA user from % to localhost to strengthen security.
I used "abcdba" with DDL rights to create the DB schema, and "abc" with CRUD rights for the web service to use the DB. After the change, the read operations were OK but the write operations failed with the error message in the OP.
Flushing privileges or restarting the server did not solve the problem. Then I changed the host of the DBA user back to %, and things became normal again.
Apparently MySQL does not like changes to the host of the DBA user, and existing databases created by that DBA user will have problems if its host is changed.
Essentially, changing the host of the DBA user amounts to removing the user abcdba@% and creating a new user abcdba@localhost. That is where the error message came from, since abcdba@% and abcdba@localhost are two different fully qualified usernames.
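For illustration, a sketch of how such an account can be inspected and moved back without dropping it (account names are the hypothetical ones from this answer; requires the CREATE USER privilege):
SELECT user, host FROM mysql.user WHERE user = 'abcdba';
RENAME USER 'abcdba'@'localhost' TO 'abcdba'@'%';
RENAME USER keeps the privileges attached to the account, which is usually what you want in this situation.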

Maria DB: MySQL Server has gone away, nothing in error log

I'm trying to slurp a database dump into a new database on my server, and I keep getting the following error
ERROR 2006 (HY000) at line 215: MySQL server has gone away
I've tried setting max_allowed_packet=16M in /etc/my.cnf
And editing the command directly: mysql -u my_db_user -p --max_allowed_packet=1073741824 my_db < my_db.sql
I still get this error. It doesn't create an error message in the log file, either. I'm running the MariaDB fork of MySQL (mysql 15.1, MariaDB 5.5.52) on CentOS 7.3.1611.
Not sure what to do at this point!
Try setting max_allowed_packet=2G in my.cnf.
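A minimal sketch of what that looks like on a CentOS 7 / MariaDB setup (assuming the default config at /etc/my.cnf; 1G is the documented maximum for max_allowed_packet, so anything larger is clamped). In /etc/my.cnf:
[mysqld]
max_allowed_packet=1G
Then restart and verify:
sudo systemctl restart mariadb
mysql -u my_db_user -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"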

MySQL "#2006 - MySQL server has gone away" in phpMyAdmin

The Problem
My MySQL database works fine for my web application. However, when I try to open the database with phpMyAdmin, I get this error message:
#2006 - MySQL server has gone away
And phpMyAdmin disconnects back to the login screen. Other databases work fine.
My max_allowed_packet is set to 16M. I also tried 64M, but it didn't work.
Also, this error only started occurring at some point. The database is about 3 MB in size, so not very big.
Used Software
Debian Squeeze x64
MySQL (current version)
phpMyAdmin (current version)
Question
How can I fix this error in order to view and edit my database in phpMyAdmin again?
I finally found it.
Apparently, there were some incompatibility issues after upgrading to MySQL 5.6.
In order to check for such issues and fix them, you will need to do a MySQL Upgrade.
Just run the following on a terminal:
mysql_upgrade -u root -p
Enter your password, and wait until the upgrade finishes.
This fixed the problem for me.
My reputation is not high enough so I can't comment, so I will answer here: mysql_upgrade -u root -p worked for me. I had the same issue after upgrading MySQL: I could log into the database using phpMyAdmin, but when I tried to add a user it failed with "#2006 - MySQL server has gone away". Note that I also updated phpMyAdmin with yum update phpmyadmin first, before running mysql_upgrade -u root -p. After that, everything worked.
Now all works fine, thanks!

ERROR 2006 (HY000): MySQL server has gone away

I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolve the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line to the my.cnf file solved my problem.
This is useful when the columns have large values, which cause the issue; you can find the explanation here.
On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6"
On Linux (Ubuntu): /etc/mysql
You can increase max_allowed_packet at runtime:
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and couldn't be executed on the server, so the client fails on the first error that occurs.
So you have the following possibilities:
Add the force option (-f) for mysql to proceed and execute the rest of the queries.
This is useful if the database has some large queries related to cache, which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using the --skip-extended-insert option to break down the large queries, then import it again (see the sketch after this list).
Try applying the --max-allowed-packet option to mysql.
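For the --skip-extended-insert route, a short sketch (hypothetical database name my_db; run the dump against the source server and the import against the target):
mysqldump --skip-extended-insert -u root -p my_db > my_db_row_by_row.sql
mysql -u root -p my_db < my_db_row_by_row.sql
Each row then gets its own INSERT statement, so no single query comes close to max_allowed_packet, at the cost of a slower dump and import.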
Common reasons
In general this error could mean several things, such as:
a query to the server is incorrect or too large,
Solution: Increase max_allowed_packet variable.
Make sure the variable is under [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double check the value was set properly by:
mysql -sve "SELECT ##max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
The running thread has been killed, so retry again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
Debugging
Here are few expert-level debug ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP).
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
For reference, check the source code in the sql-common/client.c file, which is responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net, (uchar) command, header, header_length,
                      arg, arg_length))
{
  set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
  goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB sql file by performing these two steps in order:
Created /etc/my.cnf as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appended the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important Note: It was necessary to perform both steps, because if I didn't bother making the changes to /etc/my.cnf file as well as appending those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1073741824
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or SUPER privilege) and do
set global max_allowed_packet=64*1024*1024;
This doesn't require a MySQL restart. Note that you should still fix your my.cnf file as outlined in the other answers:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.
As requested, I'm adding my own answer here. Glad to see it works!
The solution is to increase the values given to the wait_timeout and connect_timeout parameters in your options file, under the [mysqld] tag.
I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
set GLOBAL delayed_insert_timeout=100000
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add a line:
max_allowed_packet=500M
now restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long, and the client is disconnecting. When it reconnects, it's not selecting a database, hence the error. One option here is to run your batch file from the command line and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another option is to run your command via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query.
If you are on Mac and installed mysql through brew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as shown below:
max_allowed_packet=500M
Finally, restart the MySQL service and you are done.
Method 2:
This is the easier way if you are using XAMPP. Open the XAMPP control panel and click on the Config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, then click Start again and wait a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
I encountered this error when using MySQL Cluster; I do not know whether this question is about cluster usage or not. As the error is exactly the same, I will give my solution here.
I was getting this error because the data nodes had suddenly crashed. When the nodes crash, you can still get the correct result using this command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
And mysqld also still works correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data node working, and then I realized the problem. So, try restarting all the data nodes; then the MySQL server comes back and everything is OK.
One thing was weird to me, though: after I lost the MySQL server for some queries, when I used a command like show tables, I could still get return info like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs if you created the SCHEMA with a different COLLATION than the one used in the dump. So, if the dump contains:
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
I had been using utf8mb4_general_ci in the schema, because my script came from a fresh v8 installation; loading the dump into an old 5.7 server crashed and drove me nearly crazy.
So, maybe this helps you save some frustrating hours... :-)
(MacOS 10.3, mysql 5.7)
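A short sketch for checking, and if necessary changing, an existing schema's default collation (schema name as in the hypothetical example above):
SELECT default_character_set_name, default_collation_name
FROM information_schema.SCHEMATA WHERE schema_name = 'myschema';
ALTER SCHEMA myschema DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
Note that ALTER SCHEMA only changes the default for new tables; existing tables keep the collation they were created with.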
Add max_allowed_packet=64M to [mysqld]
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error such as assigning buffer sizes of greater than the architecture's address-space limits, or more than virtual memory capacity. The MySQL error-log will probably have some relevant information; they will be monitoring this if they are competent anyway.
This is a rarer issue, but I have seen it when someone has copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is that the database was running and using log files. It sometimes doesn't work if there are logs in /var/log/mysql. The solution is to copy the /var/log/mysql files as well.
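For reference, a sketch of that kind of file-level copy done more safely (assumptions: a systemd host, default Debian/Ubuntu paths, and enough downtime to stop the server first):
sudo systemctl stop mysql
sudo rsync -a /var/lib/mysql/ /mnt/backup/mysql-data/
sudo rsync -a /var/log/mysql/ /mnt/backup/mysql-logs/
sudo systemctl start mysql
A logical dump with mysqldump remains the more portable option, since raw data files are tied to the server version and configuration.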
For Amazon RDS (my case), you can change the max_allowed_packet parameter value to any number of bytes that makes sense for the biggest data in any insert you may have (e.g. if you have some 50 MB blob values in your inserts, set max_allowed_packet to 64M = 67108864), in a new or existing parameter group. Then apply that parameter group to your MySQL instance (this may require rebooting the instance).
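For illustration, a sketch of doing that with the AWS CLI (the parameter group name is hypothetical; the RDS console works just as well):
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-mysql-params \
  --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"
max_allowed_packet is a dynamic parameter, so ApplyMethod=immediate should not require a reboot, but the parameter group does have to be attached to the instance.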
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table.
That is, I guess, some debug log table and is not really important for the site to work, so all of it can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything in between). It's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
BTW, I tried all the solutions from the answers above and nothing else helped.
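A sketch of avoiding the problem at dump time instead of editing the file afterwards (hypothetical database name drupal_db; --ignore-table skips that table entirely, structure included, so the webprofiler/devel module may need to recreate it on the target):
mysqldump -u root -p --ignore-table=drupal_db.webprofiler drupal_db > drupal_db.sql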
I've tried all of the above solutions, and all failed.
I ended up using -h 127.0.0.1 instead of the default /var/run/mysqld/mysqld.sock socket.
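In other words, something along these lines (a sketch; any host other than "localhost" forces a TCP connection instead of the Unix socket):
mysql -h 127.0.0.1 -P 3306 -u username -p database_name < data-dump.sql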
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported amount of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...
The solution: add more RAM to your server, and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.
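If you suspect this, a quick sketch of how to check before adding RAM (Linux; the grep pattern is just one common form of the OOM killer's log line):
free -h
sudo dmesg | grep -i -E "out of memory|killed process"
If mysqld shows up there, the kernel killed it mid-import, which the client then reports as "MySQL server has gone away".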
Eliminating the errors that triggered warnings was the final solution for me. I also changed max_allowed_packet, which helped with smaller files that contained errors. Eliminating the errors also sped up the process enormously.
If none of these answers solves the problem for you: I solved it by removing the tables and creating them again automatically, in this way:
When creating the backup, first back up the structure only, and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then just run this backup against your DB and it will drop and recreate the tables you need.
Then back up just the data, do the same, and it will work.
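A sketch of that two-pass dump with plain mysqldump (hypothetical database name my_db; --add-drop-table is on by default, and the options only roughly correspond to the checklist above):
mysqldump -u root -p --no-data --add-drop-table --routines --events my_db > structure.sql
mysqldump -u root -p --no-create-info my_db > data.sql
mysql -u root -p my_db < structure.sql
mysql -u root -p my_db < data.sql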
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql