Earlier today [11-09-2021], one of the databases in our production environment suddenly dropped its table for reasons we don't know. It happened sometime after 4 AM (we still have a snapshot of the drive from that time), which is weird because no one was using or accessing the server then. Can someone tell me if this normally happens?
This is definitely not normal behavior. You should check the MySQL logs to see what was happening at that time.
In MySQL there are three logs we most often need to look at:
The Error Log. It contains information about errors that occur while the server is running (including server start and stop).
The General Query Log. This is a general record of what mysqld is doing (connects, disconnects, queries).
The Slow Query Log. It consists of "slow" SQL statements (as the name suggests).
The one that will be your starting point is the General Query Log.
By default, no log files are enabled in MySQL; errors go to the syslog (/var/log/syslog).
To enable the general query log, follow the steps below:
1. Open the MySQL config file (/etc/mysql/my.cnf) and add the following lines under the [mysqld] section to enable the general query log:
general_log_file = /var/log/mysql/mysql.log
general_log = 1
2. Save the file and restart MySQL with the following command:
service mysql restart
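If you'd rather not restart, both settings are dynamic and can also be turned on at runtime from a MySQL session; note that runtime changes will not survive a restart, which is why the my.cnf entries above are still worth keeping:
SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
SET GLOBAL general_log = 'ON';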
To follow the general query log in real time, run:
sudo tail -f $(mysql -Nse "SELECT @@general_log_file")
Hope this will help you to find out what actually happened on your database server.
My application is loading very slowly. After doing some research, I learned that MySQL is causing the slowness. Around 15-20 users access this server. After a preliminary investigation (Googling and Stack Overflow), I found that there were some queries running at the time, and those were the culprit. It's annoying to run a query every now and then to look for long-running queries. Is there a workaround for this? And how can I enable email/SMS alerts for such queries?
Execute the statements below in MySQL as an admin/root user:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;
Now get the slow query log file path with the command below:
SHOW VARIABLES LIKE 'slow_query_log_file';
Now go to your MySQL DB machine, check the logged slow queries, and optimize them.
Note: this way you can capture slow queries without restarting the MySQL service, but it is better to also put these entries in your conf file so that slow-query logging survives a service restart.
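For example, the equivalent entries under the [mysqld] section of the conf file (the log file path is just an illustration; pick your own):
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2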
I got the Error Code: 2013. Lost connection to MySQL server during query error when I tried to add an index to a table using MySQL Workbench.
I also noticed that it appears whenever I run a long query.
Is there a way to increase the timeout value?
New versions of MySQL Workbench have an option to change specific timeouts.
For me it was under Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600
I changed the value to 6000.
I also unchecked "limit rows", since re-applying a limit every time I want to search the whole data set gets tiresome.
If your query has blob data, this issue can be fixed by applying a my.ini change as proposed in this answer:
[mysqld]
max_allowed_packet=16M
By default, this will be 1M (the allowed maximum value is 1024M). If the supplied value is not a multiple of 1024K, it will automatically be rounded to the nearest multiple of 1024K.
While the referenced thread is about the MySQL error 2006, setting the max_allowed_packet from 1M to 16M did fix the 2013 error that showed up for me when running a long query.
For WAMP users: you'll find the flag in the [wampmysqld] section.
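If editing my.ini right away is not convenient, the variable can also be raised at runtime; this only affects new connections and resets when the server restarts:
SET GLOBAL max_allowed_packet = 16777216; -- 16M, matching the my.ini value above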
Start the DB server with the command-line option net_read_timeout / wait_timeout and a suitable value (in seconds), for example: --net_read_timeout=100.
For reference see here and here.
SET @@local.net_read_timeout=360;
Warning: The following will not work when you are applying it in remote connection:
SET @@global.net_read_timeout=360;
Edit: 360 is the number of seconds
Add the following to the /etc/mysql/my.cnf file:
innodb_buffer_pool_size = 64M
For example:
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
innodb_buffer_pool_size = 64M
In my case, setting the connection timeout interval to 6000 or something higher didn't work.
I just did what Workbench says I can do:
"The maximum amount of time the query can take to return data from the DBMS. Set 0 to skip the read timeout."
On Mac
Preferences -> SQL Editor -> Go to MySQL Session -> set connection read timeout interval to 0.
And it works 😄
There are three likely causes for this error message:
1. Usually it indicates network connectivity trouble; check the condition of your network if this error occurs frequently.
2. Sometimes the "during query" form happens when millions of rows are being sent as part of one or more queries.
3. More rarely, it can happen when the client is attempting the initial connection to the server.
For cause 2, increase net_read_timeout from its default of 30 seconds to 60 seconds or longer:
SET GLOBAL net_read_timeout=60;
For cause 3, increase connect_timeout:
SET GLOBAL connect_timeout=60;
You should set the 'interactive_timeout' and 'wait_timeout' properties in the mysql config file to the values you need.
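For example, in my.cnf under [mysqld] (28800 seconds is MySQL's default for both; pick values that suit your workload):
[mysqld]
wait_timeout = 28800
interactive_timeout = 28800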
Just perform a MySQL upgrade, which will rebuild the InnoDB engine along with many tables required for the proper functioning of MySQL, such as performance_schema, information_schema, etc.
Issue the below command from your shell:
sudo mysql_upgrade -u root -p
If you experience this problem while restoring a big dump file and can rule out a network problem (e.g. execution on localhost), then my solution could be helpful.
My mysqldump held at least one INSERT that was too big for MySQL to process. You can view the relevant variable by typing show variables like "net_buffer_length"; inside your mysql CLI.
You have three possibilities:
increase net_buffer_length inside mysql -> this would need a server restart
create the dump with --skip-extended-insert, so one INSERT line is used per row -> although these dumps are much nicer to read, this is not suitable for big dumps > 1GB because it tends to be very slow
create the dump with extended inserts (the default) but limit the net_buffer_length, e.g. with --net-buffer_length NR_OF_BYTES where NR_OF_BYTES is smaller than the server's net_buffer_length -> I think this is the best solution; although slower, no server restart is needed.
I used following mysqldump command:
mysqldump --skip-comments --set-charset --default-character-set=utf8 --single-transaction --net-buffer_length 4096 DBX > dumpfile
From what I have understood, this error is caused by the read timeout, and the default max_allowed_packet is 4M; if your query file is more than 4 MB, you get the error. This worked for me:
1. Change the read timeout: go to Workbench Edit → Preferences → SQL Editor.
2. Change max_allowed_packet manually by editing the my.ini file at "C:\ProgramData\MySQL\MySQL Server 8.0\my.ini". The ProgramData folder is hidden, so if you don't see it, enable "show hidden files" in the View settings. Set max_allowed_packet = 16M in the my.ini file.
3. Restart MySQL: press Win+R, run services.msc, and restart the MySQL service.
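Alternatively, restart the service from an elevated command prompt; the service name MySQL80 below is an assumption, so check services.msc for the actual name:
net stop MySQL80
net start MySQL80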
I know it's old, but on Mac:
1. Control-click your connection and choose Connection Properties.
2. Under the Advanced tab, set the Socket Timeout (sec) to a larger value.
Sometimes your MySQL server gets into a deadlock; I've run into this problem many times. You can either restart your computer/laptop to restart the server (the easy way), or go to Task Manager > Services > YOUR-SERVER-NAME (for me it was something like MySQL785), right-click it, and choose Restart.
Then try executing the query again.
Please try unchecking "limit rows" in Edit → Preferences → SQL Queries.
Also set the 'interactive_timeout' and 'wait_timeout' properties in the MySQL config file to the values you need.
Change "read time out" time in Edit->Preferences->SQL editor->MySQL session
I got the same issue when loading a .csv file.
Converted the file to .sql.
Using the command below, I managed to work around the issue.
mysql -u <user> -p -D <DB name> < file.sql
Hope this would help.
If all the other solutions here fail - check your syslog (/var/log/syslog or similar) to see if your server is running out of memory during the query.
I had this issue when innodb_buffer_pool_size was set too close to physical memory without a swap file configured. For a database-specific server, MySQL recommends setting innodb_buffer_pool_size to a maximum of around 80% of physical memory. I had it set to around 90%, and the kernel was killing the mysql process. Moving innodb_buffer_pool_size back down to around 80% fixed the issue.
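To confirm whether the OOM killer was responsible, a quick grep over the syslog (the path varies by distro) can help:
grep -iE 'out of memory|killed process' /var/log/syslog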
Go to Workbench Edit → Preferences → SQL Editor → DBMS connection read time out and raise it (up to 3000).
The error no longer occurred.
I faced this same issue. I believe it happens when you have foreign keys to larger tables (which takes time).
I tried to run the create table statement again without the foreign key declarations and found it worked.
Then, after creating the table, I added the foreign key constraints using an ALTER TABLE query (see the sketch below).
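A minimal sketch, assuming hypothetical orders and customers tables:
-- Create the table without the foreign key first:
CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT
);
-- Then add the constraint once the table exists:
ALTER TABLE orders
    ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (id);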
Hope this will help someone.
This happened to me because my innodb_buffer_pool_size was set larger than the RAM available on the server. Things were getting interrupted because of this, and MySQL issued this error. The fix is to update my.cnf with a correct setting for innodb_buffer_pool_size.
Go to:
Edit -> Preferences -> SQL Editor
In there you can see three fields in the "MySQL Session" group, where you can now set the new connection intervals (in seconds).
It turns out a firewall rule was blocking my connection to MySQL. After the firewall policy was lifted to allow the connection, I was able to import the schema successfully.
I had the same problem, but for me the solution was a DB user with too-strict permissions.
I had to allow the Execute ability on the mysql table. After allowing that, I had no more dropped connections.
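For illustration only, granting that for a hypothetical user might look like this (user and host are placeholders; adjust to your setup):
GRANT EXECUTE ON mysql.* TO 'appuser'@'%';
FLUSH PRIVILEGES;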
Check if the indexes are in place first.
SELECT *
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = '<schema>'
I ran into this while running a stored proc that was creating lots of rows in a table.
I could see the error come right after the time crossed the 30-second boundary.
I tried all the suggestions in the other answers. I am sure some of it helped; however, what really made it work for me was switching from Workbench to SequelPro.
I am guessing it was some client-side connection setting that I could not spot in Workbench.
Maybe this will help someone else as well?
If you are using SQL Workbench, you can try adding an index to your tables. To add an index, click the wrench (spanner) icon on the table; it opens the setup for the table. At the bottom, click the Indexes view, type an index name, and set the type to INDEX. In the index columns, select the primary column in your table.
Do the same for the other primary keys on the other tables.
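The GUI steps above amount to something like this in SQL (table and column names are placeholders):
CREATE INDEX idx_customer_id ON orders (customer_id);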
There seems to be an answer missing here for those using SSH to connect to their MySQL database. You need to check two places, not one as suggested by other answers:
Workbench Edit → Preferences → SQL Editor → DBMS
Workbench Edit → Preferences → SSH → Timeouts
My default SSH timeouts were set very low and were causing some (but apparently not all) of my timeout issues. Afterwards, don't forget to restart MySQL Workbench!
Last, it may be worth contacting your DB admin and asking them to increase the wait_timeout and interactive_timeout properties in MySQL itself via my.conf plus a MySQL restart, or to do a global set if restarting MySQL is not an option.
Hope this helps!
Three things to check and make sure of:
1. Do multiple queries show a lost connection?
2. How do you use SET queries in MySQL?
3. How do you run DELETE and UPDATE queries simultaneously?
Answers:
1. Always try to remove the DEFINER, as MySQL creates its own definer; and if multiple tables are involved in an update, try to combine them into a single query, since multiple queries can sometimes show a lost connection.
2. Always put the SET value at the top, but after the DELETE if its condition doesn't involve the SET value.
3. Do the DELETE first, then the UPDATE, if both operations are performed on different tables.
I had this error message due to a problem after upgrading MySQL. The error appeared immediately after I tried to run any query.
Check the MySQL error log files in /var/log/mysql (Linux).
In my case, reassigning ownership of the MySQL system folder to the mysql user worked:
chown -R mysql:mysql /var/lib/mysql
Establish a connection first:
mysql --host=host.com --port=3306 -u username -p
then select your DB: use dbname;
then source the dump: source C:\dumpfile.sql
After it's done, quit with \q.
I ran the mysqltuner script recently and noticed around 5000 joins done without indexes; this has to be reduced to a small value.
There is an option that allows us to log these queries in MySQL.
I have added the following lines under the [mysqld] section of my.cnf:
log-queries-not-using-indexes
log_slow_queries=/var/log/mysqld.slow.log
But the log still remains empty. How can I get the logging to work so I can optimize those queries?
Did you restart MySQL after doing this?
sudo service mysql restart
If you didn't, it won't log anything. I've also wondered myself if there's a delay in when it actually logs queries. Give it time, the logs will show up.
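One more thing worth checking: on MySQL 5.6 and later, the old log_slow_queries option was removed, so with a newer server the [mysqld] entries would need the current option names (keeping the same log path as in the question):
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysqld.slow.log
log_queries_not_using_indexes = 1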
How do I routinely kill MySQL queries that have been alive for "too long"?
Is there a system information table of sorts that shows all current queries, and their age?
Edit: updated question from "killing connections" to "killing queries"
Install the RubyGem mysql_manager (sudo gem install mysql_manager) and then run a command like this:
mysql-manager --kill --kill:user api --kill:max-query-time 30 --log:level DEBUG
For more options, run mysql-manager --help.
You might need to specify an alternative --db:dsn, --db:username, or --db:password.
Read more about it here: https://github.com/osterman/mysql_manager
You can execute...
SHOW FULL PROCESSLIST;
...to show you the currently executing queries.
However, you shouldn't just kill the connections, as this may cause data integrity issues. Instead, you should use the processlist output as a means of highlighting where potential problems may lie prior to correcting the issues at source. (It's sort of a (very) poor man's MySQL Enterprise Monitor in that sense.)
MySQL 5.0.x only supports the “SHOW FULL PROCESSLIST” command. There’s no ability to query and filter the process list as though it were a SQL table, e.g. SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST. That ability was added in MySQL 5.1+.
MySQL has a KILL command that can kill either a query or the entire connection.
http://dev.mysql.com/doc/refman/5.0/en/kill.html
Still, you’d need a Ruby or Perl script that runs “SHOW FULL PROCESSLIST”, identifies which queries are running “too long”, then issues the appropriate KILL commands.
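On 5.1+, a sketch of that in pure SQL is also possible: generate the KILL statements from the processlist (the 30-second threshold here is arbitrary), inspect them, then execute the ones you want:
-- List KILL statements for queries running longer than 30 seconds:
SELECT CONCAT('KILL QUERY ', ID, ';') AS kill_stmt
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Query'
  AND TIME > 30;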
You can also do this from the command line, e.g.
mysqladmin processlist
mysqladmin kill id[,id...]
The command "mysqladmin processlist" will show the current connection, and a Time column which indicates the time since last activity.
You can do the same with the SQL command "SHOW FULL PROCESSLIST;"