Strange MySQL "read-only" error - mysql

I'm experiencing a strange MySQL error, seemingly related to the database's read-only flag. A web application that uses MySQL is running on Debian 7.9. It had been running well for weeks, if not longer, when suddenly attempts to access the application's website started producing the following error message on a blank page:
Error: 500 - SQLSTATE[HY000]: General error: 1290 The MySQL server is
running with the --read-only option so it cannot execute this
statement
The following are the steps that I performed as part of my investigation:
found and read relevant info on the Internet (some of it pointed to MySQL's read-only flag);
based on the above, tried to find the read-only flag in the MySQL configuration file (my.cnf) - couldn't find it there, but read that the default value for the flag is OFF anyway;
verified the filesystem to make sure there is plenty of disk space (df -h):
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 3.2G 1.4M 3.2G 1% /run
/dev/disk/by-uuid/xxxxxxxxxxxxxxxxx 113G 14G 94G 13% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.3G 72K 7.3G 1% /run/shm
ran mysqlcheck --all-databases: all tables are OK;
verified that there is plenty of RAM available on the server (free):
total used free shared buffers cached
Mem: 32898332 2090268 30808064 0 425436 970348
-/+ buffers/cache: 694484 32203848
Swap: 5105660 0 5105660
finally, I have decided to take a "snapshot" of MySQL-related processes (ps ax | grep mysql) during the problem's existence and after a temporary fix (DB restart), hoping that it could give people additional context for ideas; here are the corresponding results:
Problem:
20307 ? S 0:00 /bin/sh /usr/bin/mysqld_safe
20635 ? Sl 0:37 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
20636 ? S 0:00 logger -t mysqld -p daemon.error
36427 pts/0 S+ 0:00 grep mysql
No problem:
36948 pts/0 S 0:00 /bin/sh /usr/bin/mysqld_safe
37275 pts/0 Sl 0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
37276 pts/0 S 0:00 logger -t mysqld -p daemon.error
38313 pts/0 S+ 0:00 grep mysql
UPDATE:
I just experienced the issue again and decided to check whether the global read-only flag is set to OFF or not, assuming the latter. My assumption was confirmed:
mysql> SELECT @@global.read_only;
+--------------------+
| @@global.read_only |
+--------------------+
| 1 |
+--------------------+
1 row in set (0.00 sec)
I guess that, despite the default OFF value, the flag is being overridden by some process on the system, so I will have to set it to OFF explicitly and permanently via the MySQL configuration file. I will report the results later in an answer.

If you're on AWS Aurora, you might be accessing a replica instance, which is read-only, so you need to use the DB cluster endpoint instead.
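A quick way to check which kind of instance you landed on (a sketch; it assumes MySQL 5.6+ where innodb_read_only is exposed, and that Aurora readers report it as 1):
SELECT @@global.read_only, @@innodb_read_only;
-- 1, 1 on a read-only replica; 0, 0 when connected to the writer via the cluster endpoint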

As I see it, there are two broad reasons why your database is being set to read-only:
1) MySQL is setting itself read only
I'm not sure what might cause MySQL to go read-only - perhaps disk issues or database corruption? In any case I'd expect something to appear in the logs, so check the MySQL (and system) logs.
2) A client is setting the database read only
Clients connecting to MySQL can set the database read only using the command:
SET GLOBAL read_only = ON;
however, to do this the user must have the SUPER privilege. This privilege shouldn't be needed by websites, applications, etc. that use MySQL - keep it only for an admin account that you use for administering the database.
Lock down the permissions of each user so they can only do the things they need on the databases/tables that are applicable. If you're using some out-of-the-box applications, they should come with instructions detailing which permissions are required (e.g. SELECT, INSERT, DELETE, UPDATE).
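As a rough sketch of what such a locked-down application account might look like (the database, user and password names here are placeholders, and the GRANT ... IDENTIFIED BY form applies to MySQL 5.x):
-- hypothetical application account with only the privileges a typical web app needs, and no SUPER
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp_db.* TO 'webapp'@'localhost' IDENTIFIED BY 'choose-a-password';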

Based on my question's comments (special thanks to @Eborbob) and my update, I figured out that some process in the system resets the read-only flag to ON (1), which seems to trigger the issue and results in the website becoming inaccessible. In order to fix the problem as well as make the fix persistent across software and server restarts, I decided to update the MySQL configuration file my.cnf and restart the DB server.
After making the relevant update (in my case, addition) to the configuration file
read_only=0
let's verify that the flag is indeed set to OFF (0):
# mysql
mysql> SELECT @@global.read_only;
+--------------------+
| @@global.read_only |
+--------------------+
| 0 |
+--------------------+
1 row in set (0.00 sec)
Finally, let's restart the MySQL server (for some reason, a dynamic reload of the MySQL configuration (/etc/init.d/mysql reload) didn't work, so I had to restart the database server explicitly):
service mysql stop
service mysql start
Voila! Access to the website is now restored. I will update my answer if anything changes.
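Since the flag is apparently being flipped by another client, one way to narrow down the culprit is to list the accounts that even have the SUPER privilege needed to do so (a sketch against the MySQL 5.x grant tables):
SELECT user, host FROM mysql.user WHERE Super_priv = 'Y';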

set global read_only = off;
Turning read-only mode off this way should make it work.

Not related to the issue, but related to the error 'mysql read-only'.
Make sure you are not trying to write to a slave (replica) instance of MySQL.
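A quick way to tell whether you are connected to a replica (a sketch; SHOW SLAVE STATUS returns rows only on a replica):
SELECT @@global.read_only;
SHOW SLAVE STATUS\G
-- non-empty slave status output means this is a replica: point your writes at the master instead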

I just experienced the same error and fixed it by connecting to the hostname of the mysql server as opposed to the IP address. I'm not sure why this fixed it but it did. Just FYI

The server might be set to recovery mode: look for an innodb_force_recovery line in my.cnf, comment it out, restart the server, and then run the upgrade.
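To confirm whether forced recovery is actually active, you can check the variable from a client (a sketch; 0 means normal mode, while 1 through 6 indicate increasing levels of forced recovery):
SELECT @@innodb_force_recovery;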

As Eborbob says, it's probably a client.
Did you check your backup tool?
Do you use an SQL proxy like ProxySQL or MaxScale?
For example, MaxScale can enforce read-only through its monitoring: https://jira.mariadb.org/browse/MXS-1859
Replication Manager can also change the READ_ONLY flag.

The error below:
The MySQL server is running with the --read-only option so it cannot execute this statement
occurs when a user who does not have write permission on the database tries to insert or update data.
It is a valid security error: it states that you currently have only read rights and therefore cannot execute any statement that writes.
To resolve this error:
Get write access from the DBA.
e.g.
GRANT ALL PRIVILEGES ON database.table TO 'user'@'localhost';
The above query will grant all privileges to the user with username 'user'.
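To confirm what the account ended up with after the grant (a sketch; substitute your own user and host):
SHOW GRANTS FOR 'user'@'localhost';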

Executing the statement below worked for me:
mysql> SET GLOBAL read_only = OFF;

This worked for me, and you can try it.
Make a backup of your .sql file, then change your query:
find all occurrences of Engine=InnoDB
and replace them with Engine=MyISAM,
then try executing it again.

Related

cannot find mysql slow query log file on mac

I am trying to enable slow_query_log on MySQL, but I could not find the log file on my Mac.
I read in the MySQL 5.7 documentation that:
By default, the server writes files for all enabled logs in the data directory.
When I run show variables like '%slow_query%'; in the mysql shell, I see the following:
but I can't see McBook-Pro-6-slow.log in the data directory. Here is all I can see in the data directory:
Could someone let me know why I can't see the slow log file?
In order to enable the slow_query_log, I've read here that I should add slow-query-log=1 to my.cnf. My problem is that I am not sure where the MySQL config file is on my Mac. I've found a my-default.cnf in /usr/local/mysql/support-files/ and another my.cnf file in /etc. Which one should I modify?
Thanks,
Refer to the Stack Overflow question MySQL 'my.cnf' location?, which pertains to Mac OS. As you can see, the permutations of locations are numerous, usually compounded by different distros, MAMP/XAMPP/WAMP bundles and Homebrew. It is not uncommon to have two mysql daemons on a box and not even know it.
Which is why in comments I suggested looking at the output of select @@basedir for the location of the my.ini (Windows) or my.cnf (Linux/Mac). That is not to suggest a configuration file is going to be there, but that is where it should be if one were to exist. Without it, baked-in default values are used. Often there is a stub, a suggested file, named differently (like my-default), awaiting your tweaks and a rename or copy to the appropriate file name of my.ini or my.cnf.
There is also a system variable named slow_query_log_file, and its value is visible if set through SELECT @@slow_query_log_file;. For me right now it has a value of GUYSMILEY-slow.log because I did not set it in my ini (Windows) and it defaults to computername+"-slow.log".
That is the filename without the path. Where the file actually gets written is the datadir, seen with the output of select @@datadir;.
On my system this means (via @@basedir)
C:\Program Files\MySQL\MySQL Server 5.6\my.ini
would have a setting that ends up in a slow log file written to this absolute path (helped by @@datadir):
C:\ProgramData\MySQL\MySQL Server 5.6\data\GUYSMILEY-slow.log
and a fragment inside that log file might show something like this:
Ini and cnf changes require a MySQL daemon restart. In that configuration file there is a section similar to this (my 5.6):
[mysqld]
basedir=C:\\Program Files\\MySQL\\MySQL Server 5.6\\
datadir=C:\\ProgramData\\MySQL\\MySQL Server 5.6\\Data\\
port=3306
log_warnings = 2
and (my 5.7)
[mysqld]
basedir=C:\\Program Files\\MySQL\\MySQL Server 5.7\\
datadir=C:\\ProgramData\\MySQL\\MySQL Server 5.7\\Data\\
port=3307
log_error_verbosity=2
The above is the [mysqld] section used to play with settings. What I suggest is experimenting in this section with an innocuous setting like log_error_verbosity (5.7.2 and up) or similar: save the file, restart the daemon, and determine whether the change took effect (Rick James would call these settings rather than variables, because most really aren't dynamically settable). A sanity check of select @@log_error_verbosity (5.7.2 and up) can confirm the change was picked up. If it was, bingo, you are doing it right.
The Manual Page Server System Variables depicts the variables (settings) and whether or not they can be dynamically set/changed after the config file load via commands. Dynamic changes are reverted upon daemon restart.
How one would dynamically change a variable might look like:
SET GLOBAL log_error_verbosity=2;
Again, only certain variables are available in certain MySQL versions, such as the above, not available in older versions.
Also note that multiple versions of MySQL can run concurrently on a server. On mine I have 5.6.31 and 5.7.14. To access a different one via the command-line tools, use something like the -P 3307 switch to point at the one running on port 3307. Note the uppercase P, as opposed to lowercase p (which means prompt for password).
Determine if multiple instances are running. I use port checks such as
sudo netstat -tulpn (Linux)
netstat -aon | more (Windows, the top part, State=LISTENING)
Unfortunately these types of changes and trial and error take time and are very frustrating. Sorry I do not have a quick and easy answer for all cases.
Addendum
Notes here related to comments. In the below, w-x-y-z is a redacted IP Address.
On a Linux box (amazon ec2 redhat btw):
select @@slow_query_log;
-- 0 (so it is turned off)
SELECT @@slow_query_log_file;
-- /var/lib/mysql/ip-w-x-y-z-slow.log
select @@version;
-- 5.7.14
set global slow_query_log=1;
Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation 0.094 sec
(OK, I was in MySQL Workbench as a dummied-down user; off to do it as root via the MySQL command line ...)
mysql> set global slow_query_log=1;
Query OK, 0 rows affected (0.01 sec)
mysql> select @@slow_query_log;
+------------------+
| @@slow_query_log |
+------------------+
| 1 |
+------------------+
1 row in set (0.00 sec)
BTW, the Workbench user can confirm the above `1`.
At the shell as a Linux user:
[ec2-user@ip-w-x-y-z ~]$ cd /var/lib/mysql
[ec2-user@ip-w-x-y-z mysql]$ sudo ls -la
(there were many files, only one needed to show you below)
-rw-r-----. 1 mysql mysql 179 Sep 19 01:47 ip-w-x-y-z-slow.log
[ec2-user@ip-w-x-y-z mysql]$ sudo vi ip-w-x-y-z-slow.log
(Header stub, the entire contents, no slow queries yet, log seen below):
/usr/sbin/mysqld, Version: 5.7.14 (MySQL Community Server (GPL)). started with:
Tcp port: 3306 Unix socket: /var/lib/mysql/mysql.sock
Time Id Command Argument
Also run SHOW VARIABLES LIKE 'log_output'; to verify that it is set to FILE or FILE,TABLE.
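Putting the steps above together, a minimal sanity-check sequence from the mysql prompt might look like this (a sketch; SET GLOBAL requires SUPER and does not survive a restart unless the setting also goes into my.cnf):
SELECT @@basedir, @@datadir;                      -- where the config should live / where log files land by default
SELECT @@slow_query_log, @@slow_query_log_file;   -- is it enabled, and what file name will be used
SET GLOBAL slow_query_log = 1;                    -- turn it on for the running daemon
SHOW VARIABLES LIKE 'log_output';                 -- should be FILE or FILE,TABLE for a file to appear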

ERROR 2006 (HY000) at line MySQL server has gone away

Problem
I encountered this error during a MySQL DB dump and restore. None of the solutions posted anywhere solved my problem, so I thought I'd post the answer I found on my own for posterity.
Source Env:
CentOS 4 i386 ext3, MySQL 5.5 dump. Most table engines are MyISAM, with a few InnoDB.
Destination Env:
CentOS 6 x86_64 XFS, MySQL 5.6
Source DB is 25GB on disk, and a gzipped dump is 4.5GB.
Dump
Dump command from source -> destination was run like so:
mysqldump $DB_NAME | gzip -c | sudo ssh $USER@$IP_ADDRESS 'cat > /$PATH/$DB_NAME-`date +%d-%b-%Y`.gz'
This makes the dump, gzips it on the fly, and writes it over SSH to the destination. You don't have to do it this way, but it is convenient.
Import
On the new source DB I ran the import like so:
gunzip < /$PATH/$DB_NAME.gz | mysql -u root $DB_NAME
Note that you have to issue CREATE DATABASE DB_NAME to create the new empty destination DB before starting the import.
Every time I tried this, I got this type of error:
ERROR 2006 (HY000) at line MySQL server has gone away
Source DB conf
My source DB is a virtual server using VMware, so I can resize the RAM/CPU as needed. For this project I temporarily scaled up to 8 CPUs/16GB of RAM, and then scaled back down after the import. This is a luxury I had that you may not.
With so much RAM I was able to tune the heck out of the /etc/my.cnf file. Everyone else had suggested increasing
max-allowed-packet
bulk_insert_buffer_size
To double or triple default values. This didn't fix it for me. Then I tried increasing timeouts after reading more online.
interactive_timeout
wait_timeout
net_read_timeout
net_write_timeout
connect_timeout
I did this and it still didn't work. So then I went crazy and set everything unreasonably high. Here is what I ended up with:
key_buffer_size=512M
table_cache=2G
sort_buffer_size=512M
max-allowed-packet=2G
bulk_insert_buffer_size=2G
innodb_flush_log_at_trx_commit = 0
net_buffer_length=1000000
innodb_buffer_pool_size=3G
innodb_file_per_table
interactive_timeout=600
wait_timeout=600
net_read_timeout=300
net_write_timeout=300
connect_timeout=300
Still no luck. I felt deflated. Then I noticed that the import kept failing at the same spot. So I reviewed the SQL. I noticed nothing strange. Nothing in the log files either.
Solution
There's something about the DB structure that's causing the import to fail. I suspect it's size related, but who knows.
To fix it I started splitting the dump up into smaller chunks. The source DB has about 75 tables, so I made 3 dumps with approximately 25 tables each. You just have to pass the table names to the dump command. For example:
mysqldump $DB_NAME $TABLE1 $TABLE2 ... $TABLE25 | gzip -c | sudo ssh $USER@$IP_ADDRESS 'cat > /$PATH/$DB_NAME-TABLES1-25`date +%d-%b-%Y`.gz'
Then I simply imported each chunk independently on the destination. Finally, no errors. Hopefully this is useful to someone else.
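To work out how to batch the tables, a simple query against information_schema gives the full table list for the source database (a sketch; DB_NAME is a placeholder):
SELECT TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'DB_NAME'
ORDER BY TABLE_NAME;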
The answer to this question was to split the dump into chunks by tables. Then do multiple imports. See details in the original post.

Why can't I drop MySQL Database?

Problem
I'm running MySQL 5.5.23 on Mac OS 10.8.2 and am unable to drop a particular database, but I can drop others.
When I attempt to drop the specific database I get this error:
#1548 - Cannot load from mysql.proc. The table is probably corrupted
Attempted Fixes
I have restarted the system
I have tried to restart MySQL via CLI
$ sudo /usr/local/mysql/support-files/mysql.server stop
but received this error ERROR! MySQL server PID file could not be found!
I have repaired the mysql.proc table.
REPAIR TABLE mysql.proc
REPAIR TABLE mysql.proc USE_FRM
I have repaired all mysql.* tables.
REPAIR TABLE mysql.*
When running mysqlcheck from the Command Line
mysqlcheck --repair --all-databases
mysqlcheck --repair specific-db
I received this error : mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/mysql/mysql.sock' (2) when trying to connect
Current Status
I still cannot drop the original specific database, but can drop others.
Update[1] 2013-01-05 11:15 am [New York]
Logs and Feedback (per @Thomas in comments)
To find all logs, I ran (cli):
$(ps auxww|sed -n '/sed -n/d;/mysqld /{s/.* \([^ ]*mysqld\) .*/\1/;p;}') --verbose --help|grep '^log'
I received this feedback:
130105 11:35:21 [Warning] Can't create test file /usr/local/mysql-5.5.23-osx10.6-x86_64/data/wills-mbp.lower-test
130105 11:35:21 [Warning] Can't create test file /usr/local/mysql-5.5.23-osx10.6-x86_64/data/wills-mbp.lower-test
130105 11:35:21 [Note] Plugin 'FEDERATED' is disabled. /usr/local/mysql/bin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
130105 11:35:21 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
I'm looking into the mysql_upgrade.
Update[2] 2013-01-05 4:04 pm [New York]
I ran this :
sudo /usr/local/mysql/support-files/mysql.server stop
And received this error:
ERROR! MySQL server PID file could not be found!
Update[2.1] 2013-01-05 5:37 pm [New York]
I ran ps auxww | grep mysql and found the mysqld process and killed it (sudo kill [process id]). I was then able to restart mysql successfully. However, I'm still having no luck dropping that specific database mentioned above.
Resolved
After trying to manually repair the corruption, along with many of the suggestions and the other answers listed here, reinstalling MySQL was the only thing that solved my problem.
On a Mac (running 10.8.2) I also had to do some manual deletions for a clean install:
sudo rm /usr/local/mysql
sudo rm -rf /usr/local/mysql*
sudo rm -rf /Library/StartupItems/MySQLCOM
sudo rm -rf /Library/PreferencePanes/My*
sudo rm -rf /Library/Receipts/mysql*
sudo rm -rf /Library/Receipts/MySQL*
sudo rm /etc/my.cnf
Articles consulted
MySQL duplicates with CONCAT error 1548 - Cannot load from mysql.proc. The table is probably corrupted
SQL error: BIGINT UNSIGNED value is out of range in (…), but it doesn't make sense
How to repair corrupted table
MySQL manager or server PID file could not be found
PHP/MySQL issue after security update 2010-005
mysql problems after Mac OS X software update
How to remove MySQL completely Mac OS X Leopard
I ran into an issue where queries on my database (named caloriecalculator) were taking too long and it wouldn't drop at all. I followed the steps below and it fixed my issue (a pure-SQL equivalent is sketched after these steps):
See all MySQL processes: mysqladmin processlist -u root -p
Kill all processes relating to caloriecalculator, as they were blocking my next queries from being executed.
mysqladmin -u root -p kill 4
Now run: drop database caloriecalculator;
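The same three steps can also be done from the mysql prompt instead of mysqladmin (a sketch; 4 stands in for whatever process id is blocking you):
SHOW FULL PROCESSLIST;               -- find the blocking connection ids
KILL 4;                              -- kill the blocking thread
DROP DATABASE caloriecalculator;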
I would try:
Backup/save any databases that have important data.
Remove MySQL
Reinstall MySQL
Restore any backed up databases.
I had this happen to me on a Linux server, and the cause was a corrupted database directory.
UPDATE: one thing to do is to go into the MySQL database directory and perform an ls -la, to verify that the evil DB has the same permissions, ownership and so on as the others. For example, here the 'original' database cannot be dropped (it was created by a stupid tool run as root):
drwx------ 2 mysql mysql 4096 Aug 27 2015 _db_graph
drwx------ 2 mysql mysql 4096 Jul 13 11:58 _db_xatex
drwxrw-rw- 2 root root 12288 May 18 14:27 _db_xatex_original
drwx------ 2 mysql mysql 12288 Jun 9 08:23 _db_xatex_contab
drwx------ 2 mysql mysql 12288 May 18 17:58 _db_xatex_copy
drwx------ 2 mysql mysql 4096 Nov 24 2016 _db_xatex_test
Running chown mysql:mysql _db_xatex_original; chmod 700 _db_xatex_original would fix the problem (but check inside the directory to verify that permissions and ownership are copacetic there too).
In the end, I employed the following ugly hack (after trying stopping, restarting and repairing whatever could be targeted by a REPAIR):
created a database "scapegoat"
stopped MySQL Server
copied the directory created by MySQL Server, /var/lib/mysql/scapegoat, to /tmp
restarted MySQL Server, dropped the database "scapegoat", stopped the server
Now I had a copy of a clean, empty DB dir that MySQL no longer knew anything about.
moved the "evildb" directory to /tmp (so that if thing went wrong I could put it back)
moved the "scapegoat" directory to /var/lib/mysql renaming it to "evildb"
started MySQL Server
not sure if I ran any more repairs at this point
and the "evildb" database became droppable!
My explanation is that when asked to drop a database, MySQL Server first performs some checks on the files in the database directory. If these checks fail, the drop also fails. These checks must be subtly different from the ones performed by REPAIR. Maybe in the affected directory there is something unexpected.
I think this was on a MySQL 5.1 or 5.2 on a SuSE 11.2 Linux distribution. Hope it helps.
UPDATE
On thinking back, I don't remember getting errors about "proc". So I'm less sure that the problem lies in the directory. It might be connected with the proc table, without being a table corruption. Have you tried visually inspecting the proc database table, in order to find something there that belongs to the evil DB?
USE mysql;
SELECT * FROM proc;
That, or any errors therefrom, could help in solving the problem. You might, who knows, have some lines with the wrong db column. In a pinch, you could export the proc table and reload it after cleaning (either through SQL or via a disk file).
TEST
I have partial verification for the above update. By intentionally inserting rubbish into the proc table for a newly created database evil, I partially reproduced your symptoms (undroppable database, MySQL connection crashes on the attempt). The error number is not 1548, though maybe it would be if I inserted the right rubbish in that table... Anyway, the useful bit is that by removing all references to the evil db, the latter became droppable again:
mysql> drop database evil;
ERROR 2013 (HY000): Lost connection to MySQL server during query
mysql> use mysql;
No connection. Trying to reconnect...
Connection id: 1
Current database: *** NONE ***
Database changed
mysql> DELETE FROM proc WHERE db = 'evil';
Query OK, 2 rows affected (0.00 sec)
mysql> drop database evil;
Query OK, 0 rows affected (0.00 sec)
I had the same problem and all I did was to delete the database directory from the mysql data directory.
If you are using XAMPP on Windows,
you can also drop your database using phpMyAdmin:
go to Home -> Databases -> click on your [database name] -> Drop.
OR
you can drop your database manually:
go to xampp -> mysql -> data -> [database name]
and delete your [database name] there.

ERROR 2006 (HY000): MySQL server has gone away

I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolve the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line to the my.cnf file solved my problem.
This is useful when the columns have large values, which cause the issue; you can find the explanation here.
On Windows this file is located at: C:\ProgramData\MySQL\MySQL Server 5.6
On Linux (Ubuntu): /etc/mysql
You can increase max_allowed_packet:
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and couldn't be executed on the server, so the client fails on the first error that occurs.
So you have the following possibilities:
Add the force option (-f) for mysql to proceed and execute the rest of the queries.
This is useful if the database has some large queries related to cache which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using --skip-extended-insert option to break down the large queries. Then import it again.
Try applying --max-allowed-packet option for mysql.
Common reasons
In general this error could mean several things, such as the following (a quick check of the relevant variables is sketched right after this list):
a query to the server is incorrect or too large,
Solution: Increase max_allowed_packet variable.
Make sure the variable is under [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double check the value was set properly by:
mysql -sve "SELECT ##max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
The running thread has been killed, so retry again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
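To see at a glance where the variables mentioned above currently stand on your server, one combined query is enough (a sketch):
SELECT @@max_allowed_packet, @@wait_timeout, @@net_read_timeout, @@net_write_timeout;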
Debugging
Here are a few expert-level debugging ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP).
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
For reference, check the source code in sql-common/client.c file responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net,(uchar) command, header, header_length,
arg, arg_length))
{
set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB sql file by performing these two steps in order:
Created /etc/my.cnf as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appending the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important Note: It was necessary to perform both steps, because if I didn't bother making the changes to /etc/my.cnf file as well as appending those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1072731894
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or SUPER privilege) and do
set global max_allowed_packet=64*1024*1024;
This doesn't require a MySQL restart. Note that you should also fix your my.cnf file as outlined in other solutions:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.
As requested, I'm adding my own answer here. Glad to see it works!
The solution is to increase the values given to the wait_timeout and connect_timeout parameters in your options file, under the [mysqld] tag.
I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
set GLOBAL delayed_insert_timeout=100000
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
add a line
max_allowed_packet=500M
now restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long, and the client is disconnecting. When it reconnects it's not selecting a database, hence the error. One option here is to run your batch file from the command line, and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another is to run your command via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query.
If you are on Mac and installed mysql through brew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as shown below:
max_allowed_packet=500M
Finally, restart the MySQL service and you're done.
Method 2:
This is the easier way if you are using XAMPP. Open the XAMPP control panel and click on the Config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, then click Start again. Wait a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
I encountered this error when using MySQL Cluster; I don't know whether this question is about a cluster setup or not, but as the error is exactly the same, I'll give my solution here.
I was getting this error because the data nodes suddenly crashed. When the nodes crash, you can still get the correct result using the command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
And mysqld also appeared to work correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data nodes working. Then I realized the problem. So, try restarting all the data nodes; then the MySQL server comes back and everything is OK.
But one thing is weird to me: after I lost the MySQL server for some queries, when I used a command like show tables, I could still get return info like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs when you created the SCHEMA with a different COLLATION than the one used in the dump. So, if the dump contains
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
I had been using utf8mb4_general_ci in the schema because my script came from a fresh v8 installation; loading the DB on an old 5.7 then crashed and drove me nearly crazy.
So maybe this saves you some frustrating hours... :-)
(MacOS 10.3, mysql 5.7)
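To check what collation an existing schema actually has before importing into it (a sketch; myschema is a placeholder):
SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
FROM information_schema.SCHEMATA
WHERE SCHEMA_NAME = 'myschema';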
Add max_allowed_packet=64M to [mysqld]
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error, such as assigning buffer sizes greater than the architecture's address-space limits, or more than the virtual memory capacity. The MySQL error log will probably have some relevant information; they will be monitoring it anyway if they are competent.
This is a rarer issue, but I have seen it when someone has copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is that the database was running and using log files. It sometimes also fails if there are logs in /var/log/mysql. The solution is to copy the /var/log/mysql files as well.
For amazon RDS (it's my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g.: if you have some 50mb blob values in your insert, set the max_allowed_packet to 64M = 67108864), in a new or existing parameter-group. Then apply that parameter-group to your MySQL instance (may require rebooting the instance).
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table.
That is, I guess, some debug log table and not really important for the site to work, so all of this can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything in between). It's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
BTW I tried all solutions from answers above and nothing else helped.
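If you would rather empty the table in the source database before dumping, instead of editing the dump by hand, a minimal sketch (assuming the table really is just profiler debug data you can afford to lose):
TRUNCATE TABLE webprofiler;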
I've tried all of the above solutions and all failed.
I ended up using -h 127.0.0.1 instead of the default /var/run/mysqld/mysqld.sock socket.
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported amount of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...
The solution: upgrade your server with more RAM and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.
Eliminating the errors which triggered Warnings was the final solution for me. I also changed the max_allowed_packet which helped with smaller files with errors. Eliminating the errors also sped up the process incredibly.
If none of these answers solves the problem for you: I solved it by removing the tables and creating them again automatically in this way:
When creating the backup, first back up the structure and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then just use this backup with your DB and it will remove and recreate the tables you need.
Then back up just the data, do the same, and it will work.
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql

MySQL Error 1153 - Got a packet bigger than 'max_allowed_packet' bytes

I'm importing a MySQL dump and getting the following error.
$ mysql foo < foo.sql
ERROR 1153 (08S01) at line 96: Got a packet bigger than 'max_allowed_packet' bytes
Apparently there are attachments in the database, which makes for very large inserts.
This is on my local machine, a Mac with MySQL 5 installed from the MySQL package.
Where do I change max_allowed_packet to be able to import the dump?
Is there anything else I should set?
Just running mysql --max_allowed_packet=32M … resulted in the same error.
You probably have to change it for both the client (which you are running to do the import) AND the mysqld daemon that is running and accepting the import.
For the client, you can specify it on the command line:
mysql --max_allowed_packet=100M -u root -p database < dump.sql
Also, change the my.cnf or my.ini file (usually found in /etc/mysql/) under the mysqld section and set:
max_allowed_packet=100M
or you could run these commands in a MySQL console connected to that same server:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
(Use a very large value for the packet size.)
As michaelpryor said, you have to change it for both the client and the daemon mysqld server.
His solution for the client command-line is good, but the ini files don't always do the trick, depending on configuration.
So, open a terminal, type mysql to get a mysql prompt, and issue these commands:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
Keep the mysql prompt open, and run your command-line SQL execution in a second terminal.
This can be changed in your my.ini file (on Windows, located in \Program Files\MySQL\MySQL Server) under the server section, for example:
[mysqld]
max_allowed_packet = 10M
Re my.cnf on Mac OS X when using MySQL from the mysql.com dmg package distribution
By default, my.cnf is nowhere to be found.
You need to copy one of /usr/local/mysql/support-files/my*.cnf to /etc/my.cnf and restart mysqld. (Which you can do in the MySQL preference pane if you installed it.)
The fix is to increase the MySQL daemon’s max_allowed_packet. You can do this to a running daemon by logging in as Super and running the following commands.
# mysql -u admin -p
mysql> set global net_buffer_length=1000000;
Query OK, 0 rows affected (0.00 sec)
mysql> set global max_allowed_packet=1000000000;
Query OK, 0 rows affected (0.00 sec)
Then to import your dump:
gunzip < dump.sql.gz | mysql -u admin -p database
In /etc/my.cnf try changing max_allowed_packet and net_buffer_length to
max_allowed_packet=100000000
net_buffer_length=1000000
If that does not work, then try changing them to
max_allowed_packet=100M
net_buffer_length=100K
On CENTOS 6 /etc/my.cnf , under [mysqld] section the correct syntax is:
[mysqld]
# added to avoid err "Got a packet bigger than 'max_allowed_packet' bytes"
#
net_buffer_length=1000000
max_allowed_packet=1000000000
#
I resolved my issue with this query:
SET GLOBAL max_allowed_packet=1073741824;
and checked max_allowed_packet with this query:
SHOW VARIABLES LIKE 'max_allowed_packet';
Use the max_allowed_packet variable by issuing a command like
mysql --max_allowed_packet=32M
-u root -p database < dump.sql
Slightly unrelated to your problem, so here's one for Google.
If you didn't mysqldump the SQL, it might be that your SQL is broken.
I just got this error by accidentally having an unclosed string literal in my code. Sloppy fingers happen.
That's a fantastic error message to get for a runaway string, thanks for that MySQL!
Error:
ERROR 1153 (08S01) at line 6772: Got a packet bigger than
'max_allowed_packet' bytes Operation failed with exitcode 1
QUERY:
SET GLOBAL max_allowed_packet=1073741824;
SHOW VARIABLES LIKE 'max_allowed_packet';
Max value:
Default Value (MySQL >= 8.0.3) 67108864
Default Value (MySQL <= 8.0.2) 4194304
Minimum Value 1024
Maximum Value 1073741824
Sometimes setting:
max_allowed_packet = 16M
in my.ini does not work.
Try defining it in my.ini as follows:
set-variable = max_allowed_packet = 32M
or
set-variable = max_allowed_packet = 1000000000
Then restart the server:
/etc/init.d/mysql restart
It is a security risk to set max_allowed_packet to a higher value, as an attacker can push bigger packets and crash the system.
So, the optimum value of max_allowed_packet should be tuned and tested.
It is better to change it when required (using set global max_allowed_packet = xxx)
than to have it as part of my.ini or my.cnf.
I am working in a shared hosting environment and have hosted a website based on Drupal. I cannot edit the my.ini or my.cnf file either.
So I deleted all the tables related to cache, and that resolved the issue. I am still looking for a perfect solution / way to handle this problem.
Edit: Deleting the tables created problems for me, because Drupal expected these tables to exist. So instead I emptied the contents of these tables, which solved the problem.
Set max_allowed_packet to the same value as (or more than) what it was when you dumped it with mysqldump. If you can't do that, make the dump again with a smaller value.
That is, assuming you dumped it with mysqldump. If you used some other tool, you're on your own.