So I am stuck with this error when trying to connect my Node.js application to MySQL.
It won't let me connect to MySQL from localhost; not a single command is working.
MySQL Workbench also says the same.
I can't use any database commands since it's not letting me access MySQL. I've gone through almost all the possible solutions on the internet and none of them worked. Please help me out here; even an explanation for this would help, if not a solution.
In order to get access you must do the following steps:
1. Run the terminal with administrator permissions.
2. Go to the directory where you have MySQL installed.
3. Enter the following command.
mysql.exe -u root -ppasw
-u : the user.
-p : the password, written right after the p with no space.
If it does not work, try this in the Windows cmd.
To restore a single specific database:
mysqlbinlog --database=db_name 'yourFile.00004'
Explanation: the binary log.
It replaces the old update log.
Its job is to bring the databases up to date during a recovery operation.
On replication masters it acts as the record of the statements to be sent to the slave servers.
If no name is specified, the host name is used.
It costs roughly 1% in performance.
The active binary log file must not be opened while the server is running.
If you give the file an extension, it is ignored.
A new binary log file is created when:
The server is restarted
A FLUSH BINARY LOGS is issued
The size specified in max_binlog_size is exceeded.
The generated files have a sequential numeric extension that represents the order of their creation; the list of files is kept in host_name.index.
To enable it, uncomment the log-bin directive in my.ini. If log-bin=filename is used, that name is used as the base name for the sequence of files.
To delete binary log files:
PURGE BINARY LOGS BEFORE 'yyyy-mm-dd hh:mm:ss';  (you can also use NOW() or an INTERVAL expression)
PURGE BINARY LOGS TO 'filename';  deletes up to this file (this one not included)
RESET MASTER;  deletes all the files
To disable the binary log, the session variable SQL_LOG_BIN is used (up to version 5.6 this is the one to use): SET sql_log_bin = 0;
To log only a given database: binlog-do-db=db_name
To exclude a given database: binlog-ignore-db=db_name
The following statements are also written to the binary log:
CREATE DATABASE
ALTER DATABASE
DROP DATABASE
To see the content of a binary log file (it must not be the one currently open):
mysqlbinlog "file with its path"
To restore from several binary log files, it must be done in one step:
mysqlbinlog file1 file2 file3 | mysql -u root -ppassword
When redirecting the restore output to a file:
> overwrites the file
>> appends to the existing content
To restore a single specific database:
mysqlbinlog --database=db_name 'filenamebinlog.00004' | mysql -u root -ppassword
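Putting the pieces together, a point-in-time restore of a single database might look like this (a sketch; the database name, file name and cut-off time are placeholders, and --stop-datetime is optional):
mysqlbinlog --database=db_name --stop-datetime="2023-05-01 12:00:00" host_name-bin.000004 | mysql -u root -ppassword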
If the above does not work, here is another option.
ERROR 1130 (HY000): Host 'localhost' is not allowed to connect to this MySQL server
Cause:
MySQL has only one root user. The root password was changed (selecting MD5 as the hash), submitted, and the server was rebooted.
After that, login fails with "The host 'localhost' is not allowed to connect to this MySQL server..."
Overwriting the user table with one taken from another MySQL installation did not help either, presumably because the versions are different.
Resolution:
Edit my.ini
Add a line to the [mysqld] section: skip-grant-tables
For example :
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
skip-name-resolve
skip-grant-tables
The purpose of this is to bypass MySQL access control; anyone can then log in to the MySQL database as an administrator from the console.
It should be noted that after changing the password, the MySQL server must be stopped and restarted to take effect.
Restart the mysql service!
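Once the server is running with skip-grant-tables you can log in without a password and set a new one. A minimal sketch, assuming MySQL 5.7 or later (older versions use UPDATE mysql.user ... SET Password=PASSWORD('...') instead):
mysql -u root
FLUSH PRIVILEGES;   -- load the grant tables that skip-grant-tables left unloaded
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassword';
Afterwards remove skip-grant-tables from my.ini and restart the MySQL service again.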
I'm moving a large (~80GB) database from its testbed into what will be its production environment. We're working on Windows servers. This is the first time we've worked with MySQL and we're still learning the expected behaviours.
We backed up the data with
mysqldump -u root -p --opt [database name] > [database name].sql
Which took about 3 hours and created a file 45GB in size. It copied over to its new home overnight and, next morning, I used MySQL Workbench to launch a restore. According to its log, it ran
mysql.exe --defaults-file="[a path]\tmpc8tz9l.cnf" --protocol=tcp --host=127.0.0.1 --user=[me] --port=3306 --default-character-set=utf8 --comments --database=[database name] < "H:\[database name].sql"
And it's working - if I connect to the instance I can see the database and some its tables.
The trouble is, it seems to be taking forever. I presumed it would restore in the same 3-4 hour time frame it took to back up, maybe faster because it's restoring onto a more powerful server with SSD drives.
But it's now about 36 hours since the restore started and the DB is apparently 30GB in size. And it appears to be getting slower as it goes on.
I don't want to interrupt it now that it's started working, so I guess I just have to wait. But for future reference: is this treacle-slow restore speed normal? Is there anything we can do to improve matters next time we need to restore a big DB?
Very large imports are notoriously hard to make fast. It sounds like your import is slowing down--processing fewer rows per second--as it progresses. That probably means MySQL is checking each new row to see whether it has key-conflicts with the rows already inserted.
A few things you can do:
Before starting, disable key checking.
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;
After the restore ends, re-enable your key checking.
SET UNIQUE_CHECKS = 1;
SET FOREIGN_KEY_CHECKS = 1;
And, if you can wrap every few thousand lines of INSERT operations in
START TRANSACTION;
INSERT ...
INSERT ...
...
COMMIT;
you'll save a lot of disk churning.
Notice that this only matters for tables with many thousands of rows or more.
mysqldump can be made to create a dump that disables keys. https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_disable-keys
mysqldump --disable-keys
Similarly,
mysqldump --extended-insert --no-autocommit
will make the dumped sql file contain a variant of my suggestion about using transactions.
In your case, if you had used --opt --no-autocommit you probably would have gotten an optimal dump file (you already used --opt).
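For reference, a dump command combining these options might look like this (the database name is a placeholder):
mysqldump -u root -p --opt --no-autocommit [database name] > [database name].sql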
I changed my.ini and got some improvements while also using mysqldump --extended-insert --no-autocommit
my.ini for 16 GB RAM on Windows 10 (XAMPP 7.4)
# Comment the following if you are using InnoDB tables
#skip-innodb
innodb_data_home_dir="C:/xampp74/mysql/data"
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir="C:/xampp74/mysql/data"
#innodb_log_arch_dir = "C:/xampp74/mysql/data"
## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size=16M
innodb_buffer_pool_size=8G
## Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size=5M
innodb_log_file_size=2G
innodb_log_buffer_size=8M
#innodb_flush_log_at_trx_commit=1
#Use for restore only
innodb_flush_log_at_trx_commit=2
innodb_lock_wait_timeout=50
I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolve the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line into my.cnf file solves my problem.
This is useful when the columns have large values, which cause the issues; you can find the explanation here.
On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6"
On Linux (Ubuntu): /etc/mysql
You can increase Max Allowed Packet
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case, while importing the database file via mysql this most likely means that some of the queries in the SQL file are too large to import and could not be executed on the server, so the client fails at the first error that occurs.
So you have the following possibilities (examples after this list):
Add the force option (-f) so mysql proceeds and executes the rest of the queries.
This is useful if the database has some large queries related to cache which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using the --skip-extended-insert option to break down the large queries, then import it again.
Try applying the --max-allowed-packet option for mysql.
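For example, the options above could be applied like this (user, database and file names are placeholders):
mysql -f -u root -p my_db < file.sql                           # keep going past failed statements
mysql --max-allowed-packet=1G -u root -p my_db < file.sql      # raise the client-side packet limit
mysqldump --skip-extended-insert -u root -p my_db > file.sql   # re-dump with one INSERT per row, then import again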
Common reasons
In general this error could mean several things, such as:
a query to the server is incorrect or too large,
Solution: Increase max_allowed_packet variable.
Make sure the variable is under [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double check the value was set properly by:
mysql -sve "SELECT ##max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
The running thread has been killed, so retry again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
Debugging
Here are few expert-level debug ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP).
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
For reference, check the source code in sql-common/client.c file responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net,(uchar) command, header, header_length,
arg, arg_length))
{
set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB sql file by performing these two steps in order:
Created /etc/my.cnf as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appended the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important Note: It was necessary to perform both steps, because if I didn't bother making the changes to /etc/my.cnf file as well as appending those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1072731894
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or SUPER privilege) and do
set global max_allowed_packet=64*1024*1024;
This doesn't require a MySQL restart either. Note that you should still fix your my.cnf file as outlined in other solutions:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.
As requested, I'm adding my own answer here. Glad to see it works!
The solution is to increase the values given to the wait_timeout and connect_timeout parameters in your options file, under the [mysqld] tag.
I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
set GLOBAL delayed_insert_timeout=100000
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add the line
max_allowed_packet=500M
Now restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long, and the client is disconnecting. When it reconnects it's not selecting a database, hence the error. One option here is to run your batch file from the command line and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another is to run your command via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query.
If you are on Mac and installed mysql through brew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as below:
max_allowed_packet=500M
Finally, restart the MySQL service and you're done.
Method 2:
This is the easier way if you are using XAMPP. Open the XAMPP control panel and click on the Config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, then click Start again. Wait for a few minutes.
Then try to run your MySQL query again. Hope it will work.
I encountered this error when using MySQL Cluster. I do not know whether this question comes from cluster usage or not, but the error is exactly the same, so I'll give my solution here.
I was getting this error because the data nodes suddenly crashed. When the nodes crash, you can still get the correct result using the command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
and mysqld also works correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data nodes working, and then I realised the problem. So, try to restart all the data nodes; then the MySQL server comes back and everything is OK.
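For reference, checking and restarting the data nodes from the management client looks roughly like this (a sketch; the node id is just an example):
ndb_mgm -e 'SHOW'                    # see which data nodes are down
ndb_mgm -e '2 RESTART'               # restart the data node with node id 2
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'  # confirm the nodes are reporting again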
One thing was weird to me, though: after I lost the MySQL server for some queries, when I used a command like SHOW TABLES I would still get return info like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs when you created the SCHEMA with a different COLLATION than the one which is used in the dump. So, if the dump contains
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
I had been using utf8mb4_general_ci in the schema, because my script came from a fresh v8 installation; loading the DB on an old 5.7 then crashed and drove me nearly crazy.
So maybe this helps you save some frustrating hours... :-)
(MacOS 10.3, mysql 5.7)
Add max_allowed_packet=64M to [mysqld]
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error such as assigning buffer sizes of greater than the architecture's address-space limits, or more than virtual memory capacity. The MySQL error-log will probably have some relevant information; they will be monitoring this if they are competent anyway.
This is more of a rare issue, but I have seen it when someone copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is that the database was running and using its log files. Sometimes it also fails when there are logs in /var/log/mysql; the solution is to copy those /var/log/mysql files as well.
For Amazon RDS (this was my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g. if you have some 50 MB blob values in your inserts, set max_allowed_packet to 64M = 67108864), in a new or existing parameter group. Then apply that parameter group to your MySQL instance (this may require rebooting the instance).
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table.
That is, I guess, some debug log table and is not really important for the site to work, so all of it can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything in between). It's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
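One way to avoid the problem on the next export (an assumption on my part, not from the linked issue) is to leave the webprofiler table out of the dump entirely with mysqldump's --ignore-table option (the database name is a placeholder):
mysqldump -u root -p --ignore-table=my_drupal_db.webprofiler my_drupal_db > dump.sql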
BTW I tried all solutions from answers above and nothing else helped.
I've tried all of the above solutions; all failed.
I ended up using -h 127.0.0.1 instead of the default socket /var/run/mysqld/mysqld.sock.
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported value of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...
The solution: upgrade your server with more RAM and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.
Eliminating the errors which triggered warnings was the final solution for me. I also changed max_allowed_packet, which helped with smaller files that had errors. Eliminating the errors also sped up the process incredibly.
If none of these answers solves the problem for you, I solved it by removing the tables and creating them again automatically, in this way:
When creating the backup, first back up the structure only and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then just use this backup with your DB and it will remove and recreate the tables you need.
Then back up just the data, do the same, and it will work.
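If you make the backups with mysqldump rather than a GUI export, roughly equivalent flags would be (the database name is a placeholder; adjust the option set to whatever your backup actually contains):
mysqldump -u root -p --no-data --add-drop-table --routines --events my_db > structure.sql   # structure only, with DROP statements
mysqldump -u root -p --no-create-info my_db > data.sql                                      # data only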
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql
My application downloads mails over IMAP and stores them in a MySQL database. Earlier I supported mails up to 10 MB in size, so a 'mediumtext' column to store the mail content was enough. Now I need to support mails up to 30 MB, so I changed the column's datatype to 'largetext'. Yesterday a 25 MB mail was stored. Since then, whenever I execute the mysqldump command it throws this error:
mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table `ib_mailbox_backup` at row: 3369
Row 3369 contains the 25 MB mail.
In the MySQL config I increased 'max_allowed_packet' from 64M to 512M and it still fails with the same error. I am executing the mysqldump command on the same machine where the MySQL server is running. How do I solve this?
You can add --max_allowed_packet=512M to your mysqldump command.
Or add max_allowed_packet=512M to the [mysqldump] section of your my.cnf (thanks @Varun).
Note: it will not work if it is not under the [mysqldump] section...
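That is, the relevant part of my.cnf should look like this:
[mysqldump]
max_allowed_packet=512M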
Some of my scripts stopped working after an upgrade to Debian 9 & MariaDB.
MariaDB on Debian introduces a new config file specifically for mysqldump settings (/etc/mysql/conf.d/mysqldump.cnf). If you had previously set max_allowed_packet to something other than 16M in your standard /etc/mysql/my.cnf, the new config file will override that setting. So be sure to check this new config file and either delete the entry or adjust it to your needs.
I'm not sure if the change was introduced by the swap from MySQL to MariaDB or if Debian made a change in how the config files are laid out in V9.
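For reference, the new file contains something like the following (exact contents vary by version), and its 16M value silently wins over whatever you had set in my.cnf:
[mysqldump]
quick
quote-names
max_allowed_packet = 16M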
I had a similar error that would fail with a 512M packet size at row 0. It was an InnoDB table that was apparently damaged (mysqlcheck showed OK). I ended up re-creating the table, and then it worked fine with a small packet size of just 128M.
This worked for me:
mysqldump --max_allowed_packet=512M --routines=true -u [user] [database] > [path and file name].sql
I'm importing a MySQL dump and getting the following error.
$ mysql foo < foo.sql
ERROR 1153 (08S01) at line 96: Got a packet bigger than 'max_allowed_packet' bytes
Apparently there are attachments in the database, which makes for very large inserts.
This is on my local machine, a Mac with MySQL 5 installed from the MySQL package.
Where do I change max_allowed_packet to be able to import the dump?
Is there anything else I should set?
Just running mysql --max_allowed_packet=32M … resulted in the same error.
You probably have to change it for both the client (which you are running to do the import) AND the daemon mysqld that is running and accepting the import.
For the client, you can specify it on the command line:
mysql --max_allowed_packet=100M -u root -p database < dump.sql
Also, change the my.cnf or my.ini file (usually found in /etc/mysql/) under the mysqld section and set:
max_allowed_packet=100M
or you could run these commands in a MySQL console connected to that same server:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
(Use a very large value for the packet size.)
As michaelpryor said, you have to change it for both the client and the daemon mysqld server.
His solution for the client command-line is good, but the ini files don't always do the trick, depending on configuration.
So, open a terminal, type mysql to get a mysql prompt, and issue these commands:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
Keep the mysql prompt open, and run your command-line SQL execution in a second terminal.
This can be changed in your my.ini file (on Windows, located in \Program Files\MySQL\MySQL Server) under the server section, for example:
[mysqld]
max_allowed_packet = 10M
Re my.cnf on Mac OS X when using MySQL from the mysql.com dmg package distribution
By default, my.cnf is nowhere to be found.
You need to copy one of /usr/local/mysql/support-files/my*.cnf to /etc/my.cnf and restart mysqld. (Which you can do in the MySQL preference pane if you installed it.)
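For example (the exact template file name under support-files may differ between MySQL versions):
sudo cp /usr/local/mysql/support-files/my-default.cnf /etc/my.cnf
sudo /usr/local/mysql/support-files/mysql.server restart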
The fix is to increase the MySQL daemon's max_allowed_packet. You can do this on a running daemon by logging in as a user with the SUPER privilege and running the following commands.
# mysql -u admin -p
mysql> set global net_buffer_length=1000000;
Query OK, 0 rows affected (0.00 sec)
mysql> set global max_allowed_packet=1000000000;
Query OK, 0 rows affected (0.00 sec)
Then to import your dump:
gunzip < dump.sql.gz | mysql -u admin -p database
In /etc/my.cnf, try changing max_allowed_packet and net_buffer_length to
max_allowed_packet=100000000
net_buffer_length=1000000
If this is not working, then try changing them to
max_allowed_packet=100M
net_buffer_length=100K
On CentOS 6, in /etc/my.cnf, under the [mysqld] section, the correct syntax is:
[mysqld]
# added to avoid err "Got a packet bigger than 'max_allowed_packet' bytes"
#
net_buffer_length=1000000
max_allowed_packet=1000000000
#
I resolved my issue with this query:
SET GLOBAL max_allowed_packet=1073741824;
and checked max_allowed_packet with this query:
SHOW VARIABLES LIKE 'max_allowed_packet';
Set the max_allowed_packet variable by issuing a command like
mysql --max_allowed_packet=32M
-u root -p database < dump.sql
Slightly unrelated to your problem, so here's one for Google.
If you didn't mysqldump the SQL, it might be that your SQL is broken.
I just got this error by accidentally having an unclosed string literal in my code. Sloppy fingers happen.
That's a fantastic error message to get for a runaway string, thanks for that MySQL!
Error:
ERROR 1153 (08S01) at line 6772: Got a packet bigger than
'max_allowed_packet' bytes Operation failed with exitcode 1
QUERY:
SET GLOBAL max_allowed_packet=1073741824;
SHOW VARIABLES LIKE 'max_allowed_packet';
max_allowed_packet value range:
Default Value (MySQL >= 8.0.3) 67108864
Default Value (MySQL <= 8.0.2) 4194304
Minimum Value 1024
Maximum Value 1073741824
Sometimes setting:
max_allowed_packet = 16M
in my.ini is not working.
Try defining it in my.ini as follows:
set-variable = max_allowed_packet = 32M
or
set-variable = max_allowed_packet = 1000000000
Then restart the server:
/etc/init.d/mysql restart
It is a security risk to have max_allowed_packet at a higher value, as an attacker can push bigger packets and crash the system.
So, the optimum value of max_allowed_packet should be tuned and tested.
It is better to change it only when required (using SET GLOBAL max_allowed_packet = xxx)
than to keep it permanently in my.ini or my.cnf.
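For example, raise it only for the duration of a large import and put it back afterwards (the values are illustrative):
SET GLOBAL max_allowed_packet = 512*1024*1024;
-- run the import here
SET GLOBAL max_allowed_packet = 64*1024*1024;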
I am working in a shared hosting environment and I have hosted a website based on Drupal. I cannot edit the my.ini or my.cnf file either.
So I deleted all the tables which were related to cache, and that resolved the issue. I am still looking for a perfect solution / way to handle this problem.
Edit: Deleting the tables created problems for me, because Drupal expected these tables to exist. So I emptied the contents of those tables instead, which solved the problem.
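For anyone in the same shared-hosting situation, one way to empty the cache tables without knowing their exact names is to generate the TRUNCATE statements from information_schema (a sketch, assuming Drupal's cache tables all start with "cache"; run the generated statements afterwards):
SELECT CONCAT('TRUNCATE TABLE `', table_name, '`;')
FROM information_schema.tables
WHERE table_schema = DATABASE() AND table_name LIKE 'cache%';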
Set max_allowed_packet to the same value as (or more than) what it was when you dumped the file with mysqldump. If you can't do that, make the dump again with a smaller value.
That is, assuming you dumped it with mysqldump. If you used some other tool, you're on your own.
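For example, if the dump was created with a 64M limit, restore with at least the same limit (values and names are illustrative):
mysqldump --max_allowed_packet=64M -u root -p my_db > dump.sql
mysql --max_allowed_packet=64M -u root -p my_db < dump.sql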