Lost connection to MySQL during query, MySQL Workbench - mysql

I have the same problem as this when I try to index a very large table on one of its non-unique integer columns. I tried all the solutions proposed in that post that have at least one upvote, but I still couldn't fix it. Any other ideas?
I have enough memory:
max_allowed_packet: 2G,
innodb_buffer_pool_size: 9G
All the timeout settings mentioned in this post and here are set to much higher values than the defaults.

While this is not necessarily an answer for losing the connection in MySQL Workbench, it is a workaround. With long-running queries in MySQL Workbench, even after changing the Workbench parameters, a connection timeout issue still seems to occur. So run the query from the mysql command line and see if it works. If it succeeds from the command line but not from Workbench, you know it's just a Workbench issue and not some other problem.
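For example, a long-running index build can be launched from the shell so that no Workbench timeout applies (a sketch; the database, table, and column names are hypothetical):
mysql -u root -p -e "ALTER TABLE mydb.mytable ADD INDEX idx_mycol (mycol);"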

Related

Drop table times out for non-empty tables; already adjusted timeout interval

I'm having trouble deleting a table in MySQL v8.0 (on Windows 10), either from MySQL Workbench or via a Python script (using mysql-connector-python). In both cases, the DROP TABLE command times out with "Error Code: 2013. Lost connection to MySQL server during query".
I previously set DBMS connection read timeout interval to 500 sec to try and work around this, but no luck.
The table in question has several hundred rows of data, and the entire .ibd file is 176 KB. I suppose deleting the .ibd file directly isn't the greatest database practice?
I can create a new table and delete it, no problem. I'm running MySQL server locally.
Any suggestions on what to try next?
@obe's suggestion to restart the server resolved the issue. So it seems that particular table got locked due to access from both Workbench and Python. The database itself was not locked, since I could create/drop other tables.
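Short of restarting the server, a possible way out of such a lock is to find and kill the client session that is still holding the table (the id below is hypothetical, taken from the Id column of the process list):
SHOW PROCESSLIST;
KILL 1234;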

Do mysql connection ids always increment, even if lower connections have been terminated?

I'm setting up MySQL and have noticed that whenever I connect, the connection id always increments. I thought that might mean connections I believed had terminated actually hadn't, but when I checked the number of connections using sudo mysqladmin processlist, it only listed the connection used for that one command.
Normally I would just assume this was normal behavior and ignore it, but I had some problems uninstalling my old/messy installation from back when I didn't know what I was doing. Can anyone verify that this is normal? I tried checking the mysql manual here but it wasn't specific enough to answer my question.
To list all processes running on a MySQL instance, issue a query like this:
SHOW PROCESSLIST
Each connection is represented here (inactive ones show Sleep in the Command column).
As to your question: no, connection IDs get re-used and do not increment forever, but you can't rely on exactly how they are re-used.
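You can watch this yourself: each new session can ask the server for its own connection id, which is the same number shown in the Id column of SHOW PROCESSLIST:
SELECT CONNECTION_ID();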

Power Outage Affecting MySQL Server - Shuts Down On Its Own

I am new to dealing with MySQL settings and admin-type issues. About 4-5 hours ago, I had two power outages within 30 minutes of each other. As a result, my computer shut down both times, in the middle of what I can only assume was around 20-30 commands running on MySQL at the time. After the first outage, MySQL was unaffected. But after the second, something happened: MySQL Server cannot remain open for more than a few seconds at a time (before the outage, this was not a problem). I am running MySQL Server 5.1.
I can manually start MySQL Server from an admin command line (I am running this on Windows): net start mysql. I get a message saying "The MySQL service was started successfully". Then I run a command or (at most) two, and again everything stops working with a 2013 "Lost connection to MySQL server during query". Then I have to restart the MySQL Server all over again.
I have some important data in the database which I cannot reach because the connection times out before I can get it out. Is there a way I can fix this connection problem easily? I know my data is in there, because I have gotten a fair amount of it out.
Any help would be appreciated. Please let me know what other information you might need and how I can get it. I have been trying to find the error log for MySQL, and have not found it yet.
And yes, if I get through this, and even if I don't, I will make sure to set up a system to back up the data on a regular basis so these types of failures aren't so catastrophic in the future.
Thanks in advance

Error Code: 2013. Lost connection to MySQL server during query

I got the Error Code: 2013. Lost connection to MySQL server during query error when I tried to add an index to a table using MySQL Workbench.
I also noticed that it appears whenever I run a long query.
Is there a way to increase the timeout value?
Newer versions of MySQL Workbench have an option to change specific timeouts.
For me it was under Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600.
I changed the value to 6000.
I also unchecked "Limit Rows", since putting a limit in every time I want to search the whole data set gets tiresome.
If your query has blob data, this issue can be fixed by applying a my.ini change as proposed in this answer:
[mysqld]
max_allowed_packet=16M
By default, this will be 1M (the allowed maximum value is 1024M). If the supplied value is not a multiple of 1024K, it will automatically be rounded to the nearest multiple of 1024K.
While the referenced thread is about the MySQL error 2006, setting the max_allowed_packet from 1M to 16M did fix the 2013 error that showed up for me when running a long query.
For WAMP users: you'll find the flag in the [wampmysqld] section.
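If you would rather not edit my.ini, the value can also be checked and raised at runtime (a sketch; SET GLOBAL needs sufficient privileges and only affects connections opened afterwards):
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 16777216; -- 16M, in bytes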
Start the DB server with the command-line option net_read_timeout / wait_timeout and a suitable value (in seconds) - for example: --net_read_timeout=100.
For reference see here and here.
SET @@local.net_read_timeout=360;
Warning: the following will not work when you apply it over a remote connection:
SET @@global.net_read_timeout=360;
Edit: 360 is the number of seconds.
Add the following to your /etc/mysql/my.cnf file:
innodb_buffer_pool_size = 64M
example:
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
innodb_buffer_pool_size = 64M
In my case, setting the connection timeout interval to 6000 or something higher didn't work.
I just did what Workbench says I can do:
"The maximum amount of time the query can take to return data from the DBMS. Set 0 to skip the read timeout."
On Mac
Preferences -> SQL Editor -> Go to MySQL Session -> set connection read timeout interval to 0.
And it works 😄
There are three likely causes for this error message:
1. Usually it indicates network connectivity trouble; you should check the condition of your network if this error occurs frequently.
2. Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries.
3. More rarely, it can happen when the client is attempting the initial connection to the server.
For more detail, see the MySQL manual's "Lost connection to MySQL server" page.
For cause 2, increase net_read_timeout from its default of 30 seconds to 60 seconds or longer:
SET GLOBAL net_read_timeout=60;
For cause 3:
SET GLOBAL connect_timeout=60;
You should set the 'interactive_timeout' and 'wait_timeout' properties in the mysql config file to the values you need.
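A sketch of the corresponding my.cnf entries (the values shown are the server defaults of 28800 seconds, i.e. 8 hours; choose whatever suits your workload):
[mysqld]
wait_timeout = 28800
interactive_timeout = 28800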
Just perform a MySQL upgrade, which will rebuild the InnoDB engine along with many of the tables required for MySQL to function properly, such as performance_schema and information_schema.
Issue the command below from your shell:
sudo mysql_upgrade -u root -p
If you experience this problem while restoring a big dump file and can rule out a network problem (e.g. execution on localhost), then my solution could be helpful.
My mysqldump contained at least one INSERT that was too big for the server to process. You can view the relevant variable by typing show variables like "net_buffer_length"; inside your mysql CLI.
You have three possibilities:
increase net_buffer_length inside MySQL -> this would need a server restart
create the dump with --skip-extended-insert, so one line is used per INSERT -> although these dumps are much nicer to read, this is not suitable for big dumps > 1GB because it tends to be very slow
create the dump with extended inserts (the default) but limit the net-buffer_length, e.g. with --net-buffer_length NR_OF_BYTES where NR_OF_BYTES is smaller than the server's net_buffer_length -> I think this is the best solution: although slower, no server restart is needed.
I used the following mysqldump command:
mysqldump --skip-comments --set-charset --default-character-set=utf8 --single-transaction --net-buffer_length 4096 DBX > dumpfile
From what I understand, this error is caused by the read timeout, and the max_allowed_packet default is 4M; if your query file is larger than 4 MB, you get the error. This worked for me:
1. Change the read timeout. Go to Workbench Edit → Preferences → SQL Editor.
2. Change max_allowed_packet manually by editing the file my.ini at "C:\ProgramData\MySQL\MySQL Server 8.0\my.ini". The ProgramData folder is hidden, so if you don't see it, enable showing hidden files in the View settings. Set max_allowed_packet = 16M in the my.ini file.
3. Restart MySQL: press Win+R, run services.msc, and restart the MySQL service.
I know it's old, but on Mac:
1. Control-click your connection and choose Connection Properties.
2. Under the Advanced tab, set the Socket Timeout (sec) to a larger value.
Sometimes your MySQL server gets into a deadlock; I've run into this problem many times. You can either restart your computer/laptop to restart the server (the easy way), or open Task Manager > Services, find your server's service name (for me it was something like MySQL785), right-click it, and choose Restart.
Then try executing the query again.
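The same restart can also be done from an elevated command prompt (the service name varies by installation; MySQL80 below is an assumed name for MySQL Server 8.0):
net stop MySQL80
net start MySQL80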
Try unchecking "limit rows" under Edit → Preferences → SQL Queries.
Also, you should set the 'interactive_timeout' and 'wait_timeout' properties in the MySQL config file to the values you need.
Change "read time out" time in Edit->Preferences->SQL editor->MySQL session
I got the same issue when loading a .csv file.
I converted the file to .sql.
Using the command below, I managed to work around the issue:
mysql -u <user> -p -D <DB name> < file.sql
Hope this helps.
If all the other solutions here fail - check your syslog (/var/log/syslog or similar) to see if your server is running out of memory during the query.
I had this issue when innodb_buffer_pool_size was set too close to physical memory without a swapfile configured. For a database-specific server, MySQL recommends setting innodb_buffer_pool_size to a maximum of around 80% of physical memory. I had it set to around 90%, and the kernel was killing the mysql process. Moving innodb_buffer_pool_size back down to around 80% fixed the issue.
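To illustrate that guideline: on a dedicated database server with, say, 16 GB of RAM (a hypothetical figure), the my.cnf entry would look something like:
[mysqld]
# about 75% of 16 GB, safely under the ~80% guideline above
innodb_buffer_pool_size = 12G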
Go to Workbench Edit → Preferences → SQL Editor → DBMS connection read time out: increase it, up to 3000.
The error no longer occurred.
I faced this same issue. I believe it happens when you have foreign keys to larger tables (which take time to build).
I tried to run the CREATE TABLE statement again without the foreign key declarations and found that it worked.
Then, after creating the table, I added the foreign key constraints using an ALTER TABLE query, as sketched below.
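A minimal sketch of that approach (table, column, and constraint names are hypothetical, and the parent table is assumed to exist):
-- create the table without the foreign key first
CREATE TABLE child (
  id INT PRIMARY KEY,
  parent_id INT
);
-- then add the constraint once the table exists
ALTER TABLE child
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent (id);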
Hope this will help someone.
This happened to me because my innodb_buffer_pool_size was set larger than the RAM available on the server. Things were getting interrupted because of this, producing this error. The fix is to update my.cnf with a correct value for innodb_buffer_pool_size.
Go to:
Edit -> Preferences -> SQL Editor
In there you can see three fields in the "MySQL Session" group, where you can now set the new connection intervals (in seconds).
It turned out our firewall rule was blocking my connection to MySQL. After the firewall policy was lifted to allow the connection, I was able to import the schema successfully.
I had the same problem - but for me the solution was a DB user with too-strict permissions.
I had to allow the EXECUTE privilege on the mysql table. After allowing that, I had no dropped connections anymore.
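For reference, a grant of that kind might look like this (user, host, and scope are hypothetical and assume the user already exists; your setup may need something different):
GRANT EXECUTE ON mysql.* TO 'app_user'@'%';
FLUSH PRIVILEGES;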
First, check whether the indexes are in place:
SELECT *
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = '<schema>';
I ran into this while running a stored proc that was creating lots of rows in a table.
I could see the error come right after the time crossed the 30-second boundary.
I tried all the suggestions in the other answers. I am sure some of them helped; however, what really made it work for me was switching from Workbench to SequelPro.
I am guessing it was some client-side connection setting that I could not spot in Workbench.
Maybe this will help someone else as well.
If you are using SQL Workbench, you can try adding an index to your tables. To add an index, click on the wrench (spanner) icon on the table; this opens the table setup. At the bottom, click on the Indexes view, type an index name, and set the type to INDEX. In the index columns, select the primary column in your table.
Do the same for the other primary keys on other tables.
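The GUI steps above amount to a plain CREATE INDEX statement, which can also be run directly (table and column names are hypothetical):
CREATE INDEX idx_orders_customer ON orders (customer_id);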
There seems to be an answer missing here for those using SSH to connect to their MySQL database. You need to check two places, not one as suggested by other answers:
Workbench Edit → Preferences → SQL Editor → DBMS
Workbench Edit → Preferences → SSH → Timeouts
My default SSH timeouts were set very low and were causing some (but apparently not all) of my timeout issues. Afterwards, don't forget to restart MySQL Workbench!
Last, it may be worth contacting your DB admin and asking them to increase the wait_timeout and interactive_timeout properties in MySQL itself via my.conf plus a MySQL restart, or doing a global set if restarting MySQL is not an option.
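The "global set" variant, for when a restart is not possible, would look like this (a sketch; the values are illustrative, and the change only affects sessions opened after it):
SET GLOBAL wait_timeout = 28800;
SET GLOBAL interactive_timeout = 28800;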
Hope this helps!
Three things to check:
1. Do multiple queries show a lost connection?
2. How do you use SET in your MySQL queries?
3. How do you run DELETE and UPDATE queries simultaneously?
Answers:
1. Always try to remove the definer, since MySQL creates its own definer; and if multiple tables are involved in an update, try to combine it into a single query, as multiple queries can sometimes show a lost connection.
2. Always SET values at the top, but after the DELETE if its condition doesn't involve the SET value.
3. Do the DELETE first, then the UPDATE, if the two operations are performed on different tables.
I had this error message due to a problem after upgrading MySQL. The error appeared immediately after I tried to run any query.
Check the MySQL error log files under /var/log/mysql (on Linux).
In my case, reassigning ownership of the MySQL system folder to the mysql user fixed it:
chown -R mysql:mysql /var/lib/mysql
Establish the connection first:
mysql --host=host.com --port=3306 -u username -p
Then select your DB: use dbname
Then source the dump: source C:\dumpfile.sql
After it's done, quit with \q.

Lost connection to MySQL server during query? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Lost connection to MySQL server during query
I am importing some data from a large CSV into a MySQL table. I lose the connection to the server during the process of importing the file into the table.
What is going wrong?
The error code is 2013: Lost connection to the MySQL server during the query.
I am running these queries from an Ubuntu machine remotely against a Windows server.
Try the following 2 things...
1) Add this to your my.cnf / my.ini in the [mysqld] section
max_allowed_packet=32M
(you might have to set this value higher based on your existing database).
2) If the import still does not work, try it like this as well...
mysql -u <user> --password=<password> <database_name> < file_to_import
Usually that happens when you exhaust one resource for the DB session, such as memory, and MySQL closes the connection.
Can you break the CSV file into smaller ones and process them? Or commit every 100 rows? The idea is that the transaction you're running shouldn't try to insert a large amount of data at once.
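A sketch of the commit-every-100-rows idea in Python with mysql-connector-python (the file, table, and column names are hypothetical, and each CSV row is assumed to have two fields):
import csv
import mysql.connector

conn = mysql.connector.connect(user="user", password="secret", database="mydb")
cur = conn.cursor()
insert = "INSERT INTO mytable (col_a, col_b) VALUES (%s, %s)"

with open("data.csv", newline="") as f:
    batch = []
    for row in csv.reader(f):
        batch.append(row)
        if len(batch) == 100:      # keep each transaction small
            cur.executemany(insert, batch)
            conn.commit()
            batch = []
    if batch:                      # flush the remainder
        cur.executemany(insert, batch)
        conn.commit()

cur.close()
conn.close()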
I forgot to add, this error is related to the configuration property max_allowed_packet, but I can't remember the details of what to change.
The easiest solution I found to this problem was to downgrade from MySQL Workbench to MySQL version 1.2.17. I had browsed some MySQL forums, where it was said that the timeout in MySQL Workbench has been hard-coded to 600, and the suggested methods to change it didn't work for me. If you are facing the same problem with Workbench, you could try downgrading too.
1) You may have to increase the timeout on your connection.
2) You can get more information about the lost connections by starting mysqld with the --log-warnings=2 option. This logs some of the disconnection errors in the hostname.err file, which you can use for further investigation.
3) If you are trying to send data to BLOB columns, check the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in “Packet too large”.
4) Check that your available disk space is bigger than the table you're trying to update.
You might like to read this - http://dev.mysql.com/doc/refman/5.0/en/gone-away.html - which explains the reasons and fixes for "lost connection during query" scenarios very well.
In your case, it might be because of the max allowed packet size, as pointed out by Augusto. Or, if you've verified that isn't the case, it might be the connection wait timeout setting causing the client to lose the connection. However, I do not think the latter is true here, because it's a CSV file, not a file containing queries.
I think you can use the mysql_ping() function.
This function checks whether the connection to the server is alive. If it fails, you can reconnect and proceed with your query.
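mysql_ping() is the C API call; most connectors expose an equivalent. A sketch with mysql-connector-python, whose ping() can reconnect automatically (the connection details are hypothetical):
import mysql.connector

conn = mysql.connector.connect(user="user", password="secret", database="mydb")
# ping() raises an error if the server is gone; with reconnect=True it
# re-opens the connection instead, retrying up to 3 times, 2 seconds apart
conn.ping(reconnect=True, attempts=3, delay=2)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())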