I got the Error Code: 2013. Lost connection to MySQL server during query error when I tried to add an index to a table using MySQL Workbench.
I noticed also that it appears whenever I run a long query.
Is there a way to increase the timeout value?
New versions of MySQL Workbench have an option to change specific timeouts.
For me it was under Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600
I changed the value to 6000.
I also unchecked "limit rows", as putting a limit in every time I want to search the whole data set gets tiresome.
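If you also want to see what the server side allows, the relevant variables can be checked, and for the current session raised, from any SQL client; a quick sketch (600 is just an example value):
SHOW VARIABLES LIKE '%timeout%';
-- raise the read timeout for the current session only
SET SESSION net_read_timeout = 600;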
If your query has blob data, this issue can be fixed by applying a my.ini change as proposed in this answer:
[mysqld]
max_allowed_packet=16M
By default, this will be 1M (the allowed maximum value is 1024M). If the supplied value is not a multiple of 1024K, it will automatically be rounded to the nearest multiple of 1024K.
While the referenced thread is about the MySQL error 2006, setting the max_allowed_packet from 1M to 16M did fix the 2013 error that showed up for me when running a long query.
For WAMP users: you'll find the flag in the [wampmysqld] section.
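If you cannot edit my.ini right away, the same limit can be checked and raised at runtime; note that a SET GLOBAL change lasts only until the next server restart and affects new connections only. A sketch, with 16M as an example value:
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;  -- 16M, in bytes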
Start the DB server with the command-line options net_read_timeout / wait_timeout and a suitable value (in seconds), for example: --net_read_timeout=100.
For reference, see the MySQL documentation for these variables.
SET @@local.net_read_timeout=360;
Warning: the following will not work when you are applying it in a remote connection:
SET @@global.net_read_timeout=360;
Edit: 360 is the number of seconds.
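To confirm which value a session actually picked up, you can compare the session and global settings:
SELECT @@session.net_read_timeout, @@global.net_read_timeout;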
Add the following to the /etc/mysql/my.cnf file:
innodb_buffer_pool_size = 64M
example:
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
innodb_buffer_pool_size = 64M
In my case, setting the connection timeout interval to 6000 or something higher didn't work.
I just did what the workbench says I can do.
The maximum amount of time the query can take to return data from the DBMS. Set 0 to skip the read timeout.
On Mac
Preferences -> SQL Editor -> Go to MySQL Session -> set connection read timeout interval to 0.
And it works 😄
There are three likely causes for this error message:
Usually it indicates network connectivity trouble, and you should check the condition of your network if this error occurs frequently.
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries.
More rarely, it can happen when the client is attempting the initial connection to the server.
For more detail, see the MySQL documentation on this error.
Cause 2: increase net_read_timeout from its default of 30 seconds to 60 seconds or longer:
SET GLOBAL net_read_timeout=60;
Cause 3: increase connect_timeout:
SET GLOBAL connect_timeout=60;
You should set the 'interactive_timeout' and 'wait_timeout' properties in the mysql config file to the values you need.
Just perform a MySQL upgrade, which will rebuild the InnoDB engine along with many tables required for the proper functioning of MySQL, such as performance_schema and information_schema.
Issue the below command from your shell:
sudo mysql_upgrade -u root -p
If you experience this problem during the restore of a big dump file and can rule out network problems (e.g. execution on localhost), then my solution could be helpful.
My mysqldump held at least one INSERT that was too big for mysql to compute. You can view this variable by typing show variables like "net_buffer_length"; inside your mysql-cli.
You have three possibilities:
increase net_buffer_length inside mysql -> this needs a server restart
create the dump with --skip-extended-insert, so one line is used per insert -> although these dumps are much nicer to read, this is not suitable for big dumps > 1GB because it tends to be very slow
create the dump with extended inserts (the default) but limit the net-buffer_length, e.g. with --net-buffer_length NR_OF_BYTES where NR_OF_BYTES is smaller than the server's net_buffer_length -> I think this is the best solution; although slower, no server restart is needed
I used following mysqldump command:
mysqldump --skip-comments --set-charset --default-character-set=utf8 --single-transaction --net-buffer_length 4096 DBX > dumpfile
From what I have understood, this error is caused by a read timeout, and the max allowed packet default is 4M. If your query file is more than 4 MB, you get the error. This worked for me:
1. Change the read timeout. Go to Workbench Edit → Preferences → SQL Editor.
2. Change max_allowed_packet manually by editing the file my.ini at "C:\ProgramData\MySQL\MySQL Server 8.0\my.ini". The ProgramData folder is hidden, so if you do not see it, select "show hidden files" in the view settings. Set max_allowed_packet = 16M in the my.ini file.
3. Restart MySQL: press Win+R -> services.msc and restart MySQL.
I know it's old, but on Mac:
1. Control-click your connection and choose Connection Properties.
2. Under Advanced tab, set the Socket Timeout (sec) to a larger value.
Sometimes your MySQL server gets into deadlocks; I've run into this problem like 100 times. You can either restart your computer/laptop to restart the server (easy way) OR you can go to Task Manager > Services > YOUR-SERVER-NAME (for me it was MySQL785 or something like that) and right-click > Restart.
Try executing the query again.
Please try unchecking "limit rows" in Edit → Preferences → SQL Queries.
Also, you should set the 'interactive_timeout' and 'wait_timeout' properties in the mysql config file to the values you need.
Change "read time out" time in Edit->Preferences->SQL editor->MySQL session
I got the same issue when loading a .csv file.
Converted the file to .sql.
Using the below command I managed to work around this issue.
mysql -u <user> -p -D <DB name> < file.sql
Hope this helps.
If all the other solutions here fail - check your syslog (/var/log/syslog or similar) to see if your server is running out of memory during the query.
Had this issue when innodb_buffer_pool_size was set too close to physical memory without a swap file configured. MySQL recommends, for a database-specific server, setting innodb_buffer_pool_size to a max of around 80% of physical memory; I had it set to around 90%, and the kernel was killing the mysql process. Moving innodb_buffer_pool_size back down to around 80% fixed the issue.
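As of MySQL 5.7 the buffer pool can also be resized online, which makes it easy to back the setting off before editing my.cnf; a sketch, where the 6 GB figure is just an example (~80% of an 8 GB dedicated box):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- value is in bytes
SET GLOBAL innodb_buffer_pool_size = 6 * 1024 * 1024 * 1024;  -- example: ~80% of 8 GB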
Go to Workbench Edit → Preferences → SQL Editor → DBMS connection read time out: up to 3000.
The error no longer occurred.
I faced this same issue. I believe it happens when you have foreign keys to larger tables (which takes time).
I tried to run the create table statement again without the foreign key declarations and found it worked.
Then, after creating the table, I added the foreign key constraints using an ALTER TABLE query, as in the sketch below.
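A minimal sketch of that pattern, with hypothetical table and column names (parent_table is assumed to already exist):
-- create the table without the foreign key first
CREATE TABLE child_table (
  id INT PRIMARY KEY,
  parent_id INT
);
-- then add the constraint separately
ALTER TABLE child_table
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_table (id);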
Hope this will help someone.
This happened to me because my innodb_buffer_pool_size was set to be larger than the RAM size available on the server. Things were getting interrupted because of this and it issues this error. The fix is to update my.cnf with the correct setting for innodb_buffer_pool_size.
Go to:
Edit -> Preferences -> SQL Editor
In there you can see three fields in the "MySQL Session" group, where you can now set the new connection intervals (in seconds).
Turns out our firewall rule was blocking my connection to MySQL. After the firewall policy was lifted to allow the connection, I was able to import the schema successfully.
I had the same problem - but for me the solution was a DB user with too strict permissions.
I had to allow the EXECUTE privilege on the mysql database. After allowing that, I had no dropped connections anymore.
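A sketch of granting that privilege; the user and host here are hypothetical, and the account must already exist:
-- EXECUTE on everything in the mysql schema (adjust user/host to your setup)
GRANT EXECUTE ON mysql.* TO 'appuser'@'%';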
Check if the indexes are in place first.
SELECT *
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = '<schema>';
I ran into this while running a stored proc which was creating lots of rows in a table in the database.
I could see the error come right after the time crossed the 30-second boundary.
I tried all the suggestions in the other answers. I am sure some of it helped; however, what really made it work for me was switching from Workbench to Sequel Pro.
I am guessing it was some client-side connection setting that I could not spot in Workbench.
Maybe this will help someone else as well.
If you are using SQL Workbench, you can try adding an index to your tables. To add an index, click on the wrench (spanner) symbol on the table; it should open up the setup for the table. Below, click on the index view, type an index name and set the type to index. In the index columns, select the primary column of your table.
Do the same for the other primary keys on the other tables.
There seems to be an answer missing here for those using SSH to connect to their MySQL database. You need to check two places, not one as suggested by other answers:
Workbench Edit → Preferences → SQL Editor → DBMS
Workbench Edit → Preferences → SSH → Timeouts
My default SSH Timeouts were set very low and causing some (but apparently not all) of my timeout issues. After, don't forget to restart MySQL Workbench!
Last, it may be worth contacting your DB Admin and asking them to increase wait_timeout & interactive_timeout properties in mysql itself via my.conf + mysql restart or doing a global set if restarting mysql is not an option.
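If restarting mysql is not an option, the global set mentioned above could look like the sketch below; 28800 seconds (8 hours) is just an example value, and the change only affects connections opened after it:
SET GLOBAL wait_timeout = 28800;
SET GLOBAL interactive_timeout = 28800;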
Hope this helps!
Three things to check and make sure of:
Do multiple queries show the lost connection?
How do you use SET queries in MySQL?
Do you run DELETE and UPDATE queries simultaneously?
Answers:
Always try to remove the DEFINER, as MySQL creates its own definer, and if multiple tables are involved in the update, try to make it a single query, as sometimes multiple queries show a lost connection.
Always SET values at the top, but after the DELETE if its condition doesn't involve the SET value.
Run the DELETE first, then the UPDATE, if both operations are performed on different tables.
I had this error message due to a problem after upgrading MySQL. The error appeared immediately after I tried to run any query.
Check the MySQL error log files in /var/log/mysql (Linux).
In my case, reassigning the mysql owner to the MySQL system folder worked for me:
chown -R mysql:mysql /var/lib/mysql
Establish the connection first:
mysql --host=host.com --port=3306 -u username -p
then select your db: use dbname
then source the dump: source C:\dumpfile.sql
After it's done: \q
Earlier today [11-09-2021] one of our databases in our production environment suddenly dropped its table, for reasons we don't know. We know it happened around 4am because we still had a snapshot of our drive from that time, which is weird, as no one was using or accessing the server at the time. Can someone tell if this normally happens?
This is for sure not normal behavior; you should check the MySQL logs to see what was happening at that time.
In MySQL we need to see often 3 logs which are mostly important:
The Error Log. It contains information about errors that occur while the server is running (also server start and stop)
The General Query Log. This is a general record of what mysqld is doing (connect, disconnect, queries)
The Slow Query Log. It consists of "slow" SQL statements (as indicated by its name).
The one that will be your starting point is The General Query Log.
By default no log files are enabled in MySQL. All errors will be shown in the syslog (/var/log/syslog).
To enable them, just follow the steps below:
1. Go to the mysql conf file (/etc/mysql/my.cnf) and add the following lines to enable the general query log:
general_log_file = /var/log/mysql/mysql.log
general_log = 1
2. Save the file and restart mysql using the following command:
service mysql restart
To read the content of the general query log file in real time, run:
sudo tail -f $(mysql -Nse "SELECT CONCAT(@@datadir, @@general_log_file)")
Hope this will help you to find out what actually happened on your database server.
I tried to import a large sql file through phpMyAdmin, but it kept showing the error
'MySql server has gone away'
What to do?
As stated here:
Two most common reasons (and fixes) for the MySQL server has gone away
(error 2006) are:
Server timed out and closed the connection. How to fix:
check that wait_timeout variable in your mysqld’s my.cnf configuration file is large enough. On Debian: sudo nano
/etc/mysql/my.cnf, set wait_timeout = 600 seconds (you can
tweak/decrease this value when error 2006 is gone), then sudo
/etc/init.d/mysql restart. I didn't check, but the default value for
wait_timeout might be around 28800 seconds (8 hours).
Server dropped an incorrect or too large packet. If mysqld gets a packet that is too large or incorrect, it assumes that something has
gone wrong with the client and closes the connection. You can increase
the maximal packet size limit by increasing the value of
max_allowed_packet in my.cnf file. On Debian: sudo nano
/etc/mysql/my.cnf, set max_allowed_packet = 64M (you can
tweak/decrease this value when error 2006 is gone), then sudo
/etc/init.d/mysql restart.
Edit:
Notice that MySQL option files do not have their commands already available as comments (like in php.ini, for instance). So you must type any change/tweak in my.cnf or my.ini yourself and place the file in the mysql/data directory or in any of the other standard paths, under the proper group of options such as [client], [mysqld], etc. For example:
[mysqld]
wait_timeout = 600
max_allowed_packet = 64M
Then restart the server. To get their values, type in the mysql client:
> select @@wait_timeout;
> select @@max_allowed_packet;
For me this solution didn't work out so I executed
SET GLOBAL max_allowed_packet=1073741824;
in my SQL client.
If you are not able to change this with the MySQL service running, you should stop the service and change the variable in the "my.ini" file.
For example:
max_allowed_packet=20M
If you are working on XAMPP, then you can fix the MySQL server has gone away issue with the following changes:
open your my.ini file
my.ini location is (D:\xampp\mysql\bin\my.ini)
change the following variable values
max_allowed_packet = 64M
innodb_lock_wait_timeout = 500
If you are running with default values then you have a lot of room to optimize your mysql configuration.
The first step I recommend is to increase the max_allowed_packet to 128M.
Then download the MySQL Tuning Primer script and run it. It will provide recommendations for several facets of your config for better performance.
Also look into adjusting your timeout values both in MySQL and PHP.
How big (file size) is the file you are importing and are you able to import the file using the mysql command line client instead of PHPMyAdmin?
If you are using MAMP on OS X, you will need to change the max_allowed_packet value in the template for MySQL.
You can find it at: File > Edit template > MySQL my.cnf
Then just search for max_allowed_packet, change the value and
save.
I had this error and other related ones when I imported a 16 GB SQL file. For me, it was fixed by editing my.ini and setting the following (based on several different posts) in the [mysqld] section:
max_allowed_packet = 110M
innodb_buffer_pool_size=511M
innodb_log_file_size=500M
innodb_log_buffer_size = 800M
net_read_timeout = 600
net_write_timeout = 600
If you are running under Windows, go to the control panel, services, and look at the details for MySQL and you will see where my.ini is. Then after you edit and save my.ini, restart the mysql service (or restart the computer).
If you are using HeidiSQL, you can also set some or all of these using that.
I solved my issue with this short /etc/mysql/my.cnf file:
[mysqld]
wait_timeout = 600
max_allowed_packet = 100M
The other reason this can happen is running out of memory. Check /var/log/messages and make sure that your my.cnf is not set up to cause mysqld to allocate more memory than your machine has.
Your mysqld process can actually be killed by the kernel and then re-started by the "safe_mysqld" process without you realizing it.
Use top and watch the memory allocation while it's running to see what your headroom is.
Make a backup of my.cnf before changing it.
I got the same issue with:
$image_base64 = base64_encode(file_get_contents($_FILES['file']['tmp_name']) );
$image = 'data:image/jpeg;base64,'.$image_base64;
$query = "insert into images(image) values('".$image."')";
mysqli_query($con,$query);
In XAMPP's \xampp\mysql\bin\my.ini file we get only
[mysqldump]
max_allowed_packet=110M
which applies just to mysqldump -u root -p dbname. I resolved my issue by replacing the above with
max_allowed_packet=110M
[mysqldump]
max_allowed_packet=110M
I updated "max_allowed_packet" to 1024M, but it still wasn't working. It turns out my deployment script was running:
mysql --max_allowed_packet=512M --database=mydb -u root < .\db\db.sql
Be sure to explicitly specify a bigger number from the command line if you are doing it this way.
If your data includes BLOB data:
Note that an import of data from the command line seems to choke on BLOB data, resulting in the 'MySQL server has gone away' error.
To avoid this, re-create the mysqldump but with the --hex-blob flag:
http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_hex-blob
which will write out the data file with hex values rather than binary amongst other text.
PhpMyAdmin also has the option "Dump binary columns in hexadecimal notation (for example, "abc" becomes 0x616263)" which works nicely.
Note that there is a long-standing bug (as of December 2015) which means that GEOM columns are not converted:
Back up a table with a GEOMETRY column using mysqldump?
so using a program like PhpMyAdmin seems to be the only workaround (the option noted above does correctly convert GEOM columns).
If it takes a long time to fail, then enlarge the wait_timeout variable.
If it fails right away, enlarge the max_allowed_packet variable; if it still doesn't work, make sure the command is valid SQL. Mine had unescaped quotes, which screwed everything up.
Also, if feasible, consider limiting the number of inserts of a single SQL command to, say, 1000. You can create a script that creates multiple statements out of a single one by reintroducing the INSERT... part every n inserts.
I got a similar error. To solve this, just open the my.ini file (in my copy the setting is at line 36) and change the value of the maximum allowed packet size, i.e. max_allowed_packet = 20M.
Make sure mysqld process does not restart because of service managers like systemd.
I had this problem in Vagrant with CentOS 7. Configuration tweaks didn't help. It turned out it was systemd, which killed the mysqld service every time it took too much memory.
I had a similar error today when duplicating a database (MySQL server has gone away...), but when I ran mysql.server restart I got the error
ERROR! The server quit without updating PID ...
This is how I solved it:
I opened up Applications/Utilities/ and ran Activity Monitor
quit mysqld
then was able to solve the error problem with
mysql.server restart
I am doing some large calculations which involve the MySQL connection staying open a long time and with heavy data. I was facing this "MySQL gone away" issue, so I tried to optimize the queries, but that didn't help me; then I increased the MySQL variable limits, which are set to low values by default:
wait_timeout
max_allowed_packet
Set these to whatever suits you; the value should be some number * 1024 (bytes). You can log in from the terminal using the 'mysql -u username -p' command and check and change these variable limits.
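A sketch of checking and raising both limits from the client; the numbers are examples, and a SET GLOBAL change affects new connections only and is lost on restart:
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL wait_timeout = 28800;                    -- seconds
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;   -- 64M, in bytes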
For GoDaddy shared hosting
On GoDaddy shared hosting accounts, it is tricky to tweak the PHP.ini etc files. However, there is another way and it just worked perfectly for me. (I just successfully uploaded a 3.8Mb .sql text file, containing 3100 rows and 145 cols. Using the IMPORT command in phpMyAdmin, I was getting the dreaded MySQL server has gone away error, and no further information.)
I found that Matt Butcher had the right answer. Like Matt, I had tried all kinds of tricks, from exporting MySQL databases in bite-sized chunks, to writing scripts that break large imports into smaller ones. But here is what worked:
(1) CPANEL ---> FILES (group) ---> BACKUP
(2a) Under "Partial Backups" heading...
(2b) Under "Download a MySQL Database Backup"
(2c) Choose your database and download a backup (this step optional, but wise)
(3a) Directly to the right of 2b, under heading "Restore a MySQL Database Backup"
(3b) Choose the .SQL import file from your local drive
(3c) True happiness will be yours (shortly....) Mine took about 5 seconds
I was able to use this method to import a single table. Nothing else in my database was affected -- but that is what step (2) above is intended to protect against.
Notes:
a. If you are unsure how to create a .SQL import file, use phpMyAdmin to export a table and modify that file structure.
SOURCE:
Matt Butcher 2010 Article
If increasing max_allowed_packet doesn't help.
I was getting the same error as you when importing a .sql file into my database via Sequel Pro.
The error still persisted after upping the max_allowed_packet to 512M so I ran the import in the command line instead with:
mysql --verbose -u root -p DatabaseName < MySQL.sql
It gave the following error:
ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled
I found a couple helpful StackOverflow questions:
Enable binary mode while restoring a Database from an SQL dump
Mysql ERROR: ASCII '\0' while importing sql file on linux server
In my case, my .sql file was a little corrupt or something. The MySQL dump we get comes in two zip files that need to be concatenated together and then unzipped. I think the unzipping was interrupted initially, leaving the file with some odd characters and encodings. Getting a fresh MySQL dump and unzipping it properly worked for me.
Just wanted to add this here in case others find that increasing the max_allowed_packet variable was not helping.
None of the solutions regarding packet size or timeouts made any difference for me. I needed to disable SSL:
mysql -u <user> -p -h myhost.com --disable-ssl db < file.sql
https://dev.mysql.com/doc/refman/5.7/en/encrypted-connections.html
I want to create a table of 325 columns:
CREATE TABLE NAMESCHEMA.NAMETABLE
(
ROW_ID TEXT NOT NULL, -- this is the primary key
-- plus 324 columns of these types:
CHAR(1),
DATE,
DECIMAL(10,0),
DECIMAL(10,7),
TEXT,
LONG,
) ROW_FORMAT=COMPRESSED;
I replaced all the VARCHAR columns with TEXT, and I added Barracuda in the my.ini file of MySQL; these are the attributes added:
innodb_file_per_table=1
innodb_file_format=Barracuda
innodb_file_format_check = ON
but I still have this error:
Error Code: 1118
Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
EDIT: I can't change the structure of the database because it's a legacy application/system/database. The creation of the new table is an export of the legacy database.
EDIT2: I wrote this question, which is similar to others, but it already includes some solutions that I found on the internet, like VARCHAR and Barracuda. I still have the problem, so I decided to open a new question with the classic answers already inside, to see if someone has other answers.
I tried all the solutions here, but only this parameter
innodb_strict_mode = 0
saved my day...
From the manual:
The innodb_strict_mode setting affects the handling of syntax errors
for CREATE TABLE, ALTER TABLE and CREATE INDEX statements.
innodb_strict_mode also enables a record size check, so that an INSERT
or UPDATE never fails due to the record being too large for the
selected page size.
I struggled with the same error code recently, due to a change in MySQL Server 5.6.20.
I was able to solve the problem by changing the innodb_log_file_size in the my.ini text file.
In the release notes, it is explained that an innodb_log_file_size that is too small will trigger a "Row size too large error."
http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-20.html
ERROR 1118 (42000) at line 1852:
Row size too large (> 8126). Changing some columns to TEXT or
BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
[mysqld]
innodb_log_file_size = 512M
innodb_strict_mode = 0
On Ubuntu 16.04, edit this path:
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
on MS Windows the path will be something like:
C:\ProgramData\MySQL\MySQL Server 5.7\my.ini
Don't forget to restart the service (or restart your machine).
I had a similar issue this morning, and the following saved my life:
Did you try turning off innodb_strict_mode?
SET GLOBAL innodb_strict_mode = 0;
and then try to import it again.
innodb_strict_mode defaults to ON in MySQL >= 5.7.7; before that, it was OFF.
The key parameter is: innodb_page_size
Support for 32k and 64k page sizes was added in MySQL 5.7. For both 32k and 64k page sizes, the maximum row length is approximately 16000 bytes.
The trick is that this parameter can only be changed during the INITIALIZATION of the mysql service instance, so it does not have any effect if you change it after the instance is already initialized (the very first run of the instance).
innodb_page_size can only be configured prior to initializing the MySQL instance and cannot be changed afterward. If no value is specified, the instance is initialized using the default page size. See Section 14.6.1, “InnoDB Startup Configuration”.
So if you do not change this value in my.ini before initialization, the default value will be 16K, which has a row size limit of ~8K. That's why the error comes up.
If you increase the innodb_page_size, the innodb_log_buffer_size must be also increased. Set it at least to 16M. Also if the ROW_FORMAT is set to COMPRESSED you cannot increase innodb_page_size to 32k, or 64K. It should be DYNAMIC (default in 5.7).
ROW_FORMAT=COMPRESSED is not supported when innodb_page_size is set to 32KB or 64KB. For innodb_page_size=32k, extent size is 2MB. For innodb_page_size=64k, extent size is 4MB. innodb_log_buffer_size should be set to at least 16M (the default) when using 32k or 64k page sizes.
Furthermore the innodb_buffer_pool_size should be increased from 128M to 512M at least, otherwise you will get an error on initialization of the instance (I do not have the exact error).
After this, the row size error gone.
The problem with this is that you have to create a new MySQL instance and migrate the data from the old database instance to the new one.
Parameters that I changed and works (after creating a new instance and initialized with the my.ini that is first modified with these settings):
innodb_page_size=64k
innodb_log_buffer_size=32M
innodb_buffer_pool_size=512M
All the settings and descriptions in which I found the solution can be found here:
https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html
Hope this helps!
Regards!
For MariaDB users (version >= 10.2.2) and MySQL (version >= 5.7), the simple solution is:
ALTER TABLE `table` ROW_FORMAT=DYNAMIC;
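To confirm the change took effect, a quick check; the table name is a placeholder:
SELECT TABLE_NAME, ROW_FORMAT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'table';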
If InnoDB strict mode is enabled, this error can show up.
Check whether it is enabled or not:
SHOW variables LIKE '%strict%';
If it is enabled, you can disable it:
SET GLOBAL innodb_strict_mode=OFF;
For more detailed information, see the documentation on innodb_strict_mode.
I had the issue when importing SQL dumps (from MySQL 8) into MariaDB on macOS (with Homebrew).
Start by editing your my.cnf.
If you use Homebrew, it's usually stored at /usr/local/etc/:
pico /usr/local/etc/my.cnf
Add this to the config:
[mysqld]
innodb_log_file_size = 1024M
innodb_strict_mode = 0
Then restart MariaDB:
brew services restart mariadb
Please notice that this is a workaround and not a fix, since turning off strict mode does not fix the problem; but since it's my local environment and not a production environment, I'm OK with that.
MySQL is pretty clear about its maximum row size:
Every table (regardless of storage engine) has a maximum row size of
65,535 bytes. Storage engines may place additional constraints on this
limit, reducing the effective maximum row size.
. . .
Individual storage engines might impose additional restrictions that
limit table column count. Examples:
InnoDB permits up to 1000 columns.
InnoDB restricts row size to something less than half a database page
(approximately 8000 bytes), not including VARBINARY, VARCHAR, BLOB, or
TEXT columns.
Different InnoDB storage formats (COMPRESSED, REDUNDANT) use different
amounts of page header and trailer data, which affects the amount of
storage available for rows.
If you have 325 repeating sets of columns, you are exceeding several of the restrictions. This is also a suspicious data format. You should have 325 rows for each row in the table you want, one for each group of columns.
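A minimal sketch of that normalized shape, with hypothetical names (one row per repeating group instead of 324 column sets):
-- hypothetical normalized layout
CREATE TABLE nametable_attr (
  row_id VARCHAR(64) NOT NULL,  -- references the original ROW_ID
  attr_no SMALLINT NOT NULL,    -- which of the repeating groups this row holds
  attr_value TEXT,
  PRIMARY KEY (row_id, attr_no)
);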
I recently created a table with 82 columns and had the same error with InnoDB.
To bypass the problem we switched the table format to MyISAM as it was just used for a basic form.
Changing to MyISAM is not the solution. For InnoDB, the following worked for me with MySQL 8.0.27 on a huge server.
Set the following in my.cnf and re-initialize the instance. Make sure you have taken backups if databases exist, as initializing requires removing the data directory.
innodb-strict-mode=OFF
innodb-page-size=64K
innodb-log-buffer-size=256M
innodb-log-file-size=1G
innodb-data-file-path=ibdata1:2G:autoextend
I just want to help other people with a more serious variant of this problem. In some situations, the error ("Row size too large .. Changing some columns to TEXT or BLOB") will occur even with "alter table drop column" and "alter table modify column" statements!
Consequently, you can become completely stuck, not able to change a varchar to a text or drop columns (trying to solve the problem ironically results in the same message).
If you have this problem, the solution is to alter or drop multiple columns at once. You can do this in MySQL with the syntax "alter table example drop column a, drop column b, drop column c" and if you drop enough columns at once, it will actually execute rather than raising the error.
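For example, with hypothetical table and column names:
ALTER TABLE example
  DROP COLUMN a,
  DROP COLUMN b,
  DROP COLUMN c;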
For MySQL 5.7 on Mac OS X El Capitan:
OS X provides example configuration files at /usr/local/mysql/support-files/my-default.cnf
To add variables, first stop the server and just copy above file to, /usr/local/mysql/etc/my.cnf
cmd : sudo cp /usr/local/mysql/support-files/my-default.cnf /usr/local/mysql/etc/my.cnf
NOTE: create the 'etc' folder under 'mysql' in case it doesn't exist.
cmd : sudo mkdir /usr/local/mysql/etc
Once my.cnf is created under etc, it's time to set the variables inside it.
cmd: sudo nano my.cnf
Set the variables below [mysqld]:
[mysqld]
innodb_log_file_size = 512M
innodb_strict_mode = 0
Now start the server!
innodb_log_file_size=512M
innodb_strict_mode=0
These two lines worked for me in the MySQL configuration!
The following worked for me, nothing else:
SET GLOBAL innodb_log_buffer_size = 80*1024*1024*1024;
and
SET GLOBAL innodb_strict_mode = 0;
Hope this helps someone, because it wasted a couple of days of my time as I was trying to do this in my.cnf with no joy.
I also encountered this. Changing "innodb_log_file_size", "innodb_log_buffer_size" and the other settings in the "my.ini" file did not solve my problem. I got past it by changing my "text" column types to varchar(20) and not using varchar values bigger than 20. Maybe you can decrease the size of your columns too, if possible:
text--->varchar(20)
varchar(256) --> varchar(20)
What fixed mine was to add
SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_file_per_table=ON;
At the beginning of my ".sql" file, as it is said in:
https://gist.github.com/tonykwon/8910261
I was having the same issue. I searched for "innodb_strict_mode" in my.ini but couldn't find it.
I then added it myself; it will still show you the warning, but you can continue. Just add:
innodb_strict_mode = 0
I was using XAMPP on Windows 10 and had this issue using PHPMyAdmin.
When I added innodb_log_file_size = 500M and innodb_log_buffer_size = 800M to my my.ini file, MySQL would not start.
So I tried deleting ib_logfile0 and ib_logfile1 located in (C:\xampp\mysql\data) and this did not help at all.
Luckily, I could re-install (I needed to upgrade XAMPP anyway).
The simple solution in my case was to set innodb_strict_mode=0 in the my.ini file.
After this I was able to create the table.
STEPS:
Close XAMPP completely.
Edit the my.ini file (located in C:\xampp\mysql\bin) and add innodb_strict_mode=0 in the InnoDB section.
Start XAMPP and import the table again.
N.B. Complete these steps as ADMIN.
Tried many things but found the solution by adding the below line in my.ini and restarting the MySQL service.
innodb_strict_mode = 0
sql_mode=""
innodb_strict_mode=0
brew services stop mariadb
brew services start mariadb
MariaDB has a fairly lengthy document specifically on this issue showing how and why with several ways to resolve it.
Troubleshooting Row Size Too Large Errors With InnoDB
Possible Options:
Converting the Table to the DYNAMIC Row Format (this is the default in newer versions, so it may not help if the table is already DYNAMIC)
Converting Some Columns to BLOB or TEXT
Increasing the Length of VARBINARY Columns
Increasing the Length of VARCHAR Columns
Refactoring the Table into Multiple Tables
Refactoring Some Columns into JSON
Disabling InnoDB Strict Mode ("Unsafe" way)
None of the answers to date mention the effect of the innodb_page_size parameter. Possibly because changing this parameter was not a supported operation prior to MySQL 5.7.6. From the documentation:
The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB and TEXT), is slightly less than half of a database page for 4KB, 8KB, 16KB, and 32KB page sizes. For example, the maximum row length for the default innodb_page_size of 16KB is about 8000 bytes. For an InnoDB page size of 64KB, the maximum row length is about 16000 bytes. LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.
Note that increasing the page size is not without its drawbacks. Again from the documentation:
As of MySQL 5.7.6, 32KB and 64KB page sizes are supported but ROW_FORMAT=COMPRESSED is still unsupported for page sizes greater than 16KB. For both 32KB and 64KB page sizes, the maximum record size is 16KB. For innodb_page_size=32k, extent size is 2MB. For innodb_page_size=64k, extent size is 4MB.
A MySQL instance using a particular InnoDB page size cannot use data files or log files from an instance that uses a different page size. This limitation could affect restore or downgrade operations using data from MySQL 5.6, which does support page sizes other than 16KB.
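Since the page size is fixed at initialization, it is worth checking what your instance was initialized with before planning around it:
SHOW VARIABLES LIKE 'innodb_page_size';  -- read-only; set only when the instance is initialized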
FIX FOR MYSQL IN DOCKER
I'm using @fefe's excellent answer here to show how to fix this problem within a few minutes when using Docker (via docker-compose). It's quite easy, as you don't have to touch MySQL's configuration files, but it requires you to export and import your entire data:
The default situation of your MySQL setup probably looks like this. Your data is saved inside the data-mysql volume.
mysql:
image: mysql:5.7.25
container_name: mysql
restart: always
volumes:
- data-mysql:/var/lib/mysql
environment:
- "MYSQL_DATABASE=XXX"
- "MYSQL_USER=XXX"
- "MYSQL_PASSWORD=XXX"
- "MYSQL_ROOT_PASSWORD=XXX"
expose:
- 3306
Make a backup of your entire data/database via SQL export, so you have a .sql.gz or something. I'm using Adminer for this.
To fix this (as explained in @fefe's answer), we have to set up the MySQL instance from zero, meaning we have to delete the mysql docker container and the mysql volume. Do a docker container ls and a docker volume ls to see all your containers and volumes, and pick the two names that are your mysql instance and your mysql volume; for me it's mysql (container) and docker_data-mysql (volume).
Stop your running instances via docker-compose down (or however you usually stop your docker stuff).
To delete them, I do docker container rm mysql and docker volume rm docker_data-mysql (note that there is an underscore AND a dash in the name).
Add these settings to your mysql block in your docker setup:
mysql:
image: mysql:5.7.25
command: ['--innodb_page_size=64k', '--innodb_log_buffer_size=32M', '--innodb_buffer_pool_size=512M']
container_name: mysql
# ...
Restart your instances; the mysql container and mysql volume should be built automatically, now with the new settings.
Import your database dump file, maybe with:
gzip -dc < database.sql.gz | docker exec -i mysql mysql -uroot -pYOURPASSWORD
Voila! Worked very fine for me!
I changed the length from varchar(255) to varchar(25) for all varchar columns, and that solved it.
If you are using MySQL Workbench, you have the option to change query_alloc_block_size (e.g. to 16258) and save it.
Step 1: click on the Options File at the left side.
Step 2: click on General, select the checkbox for query_alloc_block_size and increase its size, for example change 8129 --> 16258.
In my case it was caused by the Limits on Table Column Count and Row Size, and doing the changes described in this answer saved my day.
Add the following to the my.cnf file under [mysqld] section.
innodb_file_per_table
innodb_file_format = Barracuda
ALTER the table to use ROW_FORMAT=COMPRESSED.
ALTER TABLE table_name
ENGINE=InnoDB
ROW_FORMAT=COMPRESSED
KEY_BLOCK_SIZE=8;
https://stackoverflow.com/a/15585700/2195130
If you're getting this error on Google Cloud SQL (mysql 5.7 for example) then it's probably not at this time going to be a simple fix as not all InnoDB flags are supported. If you're coming across from Mysql 5.5 as I was (for an old Wordpress setup) this could mean you need to wrangle some column types in the source database before you export.
Some more information can be found here.
I experienced the same issue on an import of a data dump. Temporarily disabling the innodb strict mode solved my problem.
-- shows the actual value of the variable
SHOW VARIABLES WHERE variable_name = 'innodb_strict_mode';
-- change the value (ON/OFF)
SET GLOBAL innodb_strict_mode=OFF;
In the case that this message appears when changing MariaDB versions: I had exactly the same issue changing to MariaDB 10.6.5, and this is how I solved it:
1. Using phpMyAdmin, I exported the .sql file from the old MariaDB version.
2. Edited the .sql file using an editor such as Notepad++ and added the line
SET GLOBAL innodb_default_row_format='dynamic'; on top, as follows:
-- phpMyAdmin SQL Dump
-- version 5.1.1
-- https://www.phpmyadmin.net/
--
-- Host: (*Your host*)
-- Generation Time: Feb 12, 2022 at 05:22 PM
-- Server version: 10.6.4-MariaDB
-- PHP Version: 8.0.3
SET GLOBAL innodb_default_row_format='dynamic';
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
START TRANSACTION;
SET time_zone = "+00:00";
3. Imported the altered .sql file into MariaDB 10.6.5.
Everything worked fine.
I am importing some data from a large CSV into a MySQL table. I am losing the connection to the server during the process of importing the file into the table.
What is going wrong?
The error code is 2013: Lost connection to the MySQL server during the query.
I am running these queries from an Ubuntu machine remotely against a Windows server.
Try the following 2 things...
1) Add this to your my.cnf / my.ini in the [mysqld] section
max_allowed_packet=32M
(you might have to set this value higher based on your existing database).
2) If the import still does not work, try it like this as well...
mysql -u <user> --password=<password> <database name> < file_to_import
Usually that happens when you exhaust one resource for the db session, such as memory, and mysql closes the connection.
Can you break the CSV file into smaller ones and process them, or commit every 100 rows? The idea is that the transaction you're running shouldn't try to insert a large amount of data.
I forgot to add, this error is related to the configuration property max_allowed_packet, but I can't remember the details of what to change.
The easiest solution I found to this problem was to downgrade from MySQL Workbench to MySQL version 1.2.17. I had browsed some MySQL forums, where it was said that the timeout in MySQL Workbench has been hard-coded to 600, and the suggested methods to change it didn't work for me. If someone is facing the same problem with Workbench, you could try downgrading too.
1) You may have to increase the timeout on your connection.
2) You can get more information about lost connections by starting mysqld with the --log-warnings=2 option.
This logs some of the disconnection errors in the hostname.err file.
You can use that for further investigation.
3) If you are trying to send data to BLOB columns, check the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in the manual section “Packet too large”.
4) You can check the reference manual page for this error.
5) You should check that your available disk space is bigger than the table you're trying to update.
You might like to read this - http://dev.mysql.com/doc/refman/5.0/en/gone-away.html - that very well explains the reasons and fixes for "lost connection during query" scenarios.
In your case, it might be because of the max allowed packet size, as pointed out by Augusto. Or, if you've verified that isn't the case, it might be the connection wait timeout setting, due to which the client is losing the connection. However, I do not think the latter is true here, because it's a CSV file, not one containing queries.
I think you can use the mysql_ping() function.
This function checks whether the connection to the server is alive. If it fails, you can reconnect and proceed with your query.