State "Waiting for table flush" in processlist - mysql

If I try to run queries (even ones as simple as select id from table limit 1) on some specific tables in a schema (only a few of them have this problem), I get stuck.
When looking at the processlist, the state is "Waiting for table flush".
Any suggestions on how to unlock these tables so that I can query them?

Pileups of queries stuck in the "Waiting for table flush" state are typically caused by something in the background running FLUSH TABLES or ANALYZE TABLE statements excessively. Most backup methods need to do this briefly when they begin, but if you are seeing it all the time, chances are that a process or a cron job somewhere is issuing these statements too often. Find the source and disable it, and the problem should go away.
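As a quick check (a minimal sketch, assuming you have the PROCESS privilege), you can look for such statements in the live processlist:
-- find FLUSH TABLES / ANALYZE TABLE statements currently running
SELECT id, user, host, time, state, info
FROM information_schema.processlist
WHERE info LIKE 'FLUSH TABLES%'
   OR info LIKE 'ANALYZE TABLE%';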

For MySQL
Identify the processes causing issues.
This command will help to detect processes waiting for disk I/O in D state:
watch "ps -eo pid,user,state,command | awk '\$3 == /D/ { print \$0 }'"
You can also look for queries with a long runtime from within MySQL:
SHOW FULL PROCESSLIST\G
Fix the processes or related queries you previously detected.
If you need to kill a detected process, find its id in the full processlist output above and execute the command: KILL <pid>;
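A minimal sketch of the same check via information_schema (assuming the PROCESS privilege), ordered so the longest-running statements come first:
SELECT id, user, time, state, LEFT(info, 100) AS query
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;
-- then, for each offender: KILL <id>;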
In case you want to fix the same issue for SQL Server:
Find which processes are causing issues.
This query will list them:
SELECT TOP 20
qs.sql_handle,
qs.execution_count,
qs.total_worker_time AS Total_CPU,
total_CPU_inSeconds = --Converted from microseconds
qs.total_worker_time/1000000,
average_CPU_inSeconds = --Converted from microseconds
(qs.total_worker_time/1000000) / qs.execution_count,
qs.total_elapsed_time,
total_elapsed_time_inSeconds = --Converted from microseconds
qs.total_elapsed_time/1000000,
st.text,
qp.query_plan
FROM
sys.dm_exec_query_stats AS qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY
sys.dm_exec_query_plan (qs.plan_handle) AS qp
ORDER BY
qs.total_worker_time DESC
Fix the processes or related queries you previously detected.
If you need to kill the detected processes, list the sessions with sp_who2 to find their SPIDs, then execute the KILL command:
EXEC sp_who2
GO
KILL <SPID>
GO
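Alternatively, a minimal sketch (assuming you have VIEW SERVER STATE permission) that lists only the sessions that are currently blocked, together with the session blocking them:
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;  -- only blocked requests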
You can find more alternatives and details in the following source questions/answers/comments:
https://serverfault.com/questions/316922/how-to-detect-the-process-and-mysql-query-that-makes-high-load-on-server How do I find out what is hammering my SQL Server?
You can also follow this step-by-step SQL Server article: https://www.wearediagram.com/blog/terminating-sql-server-blocking-processes

Related

How to cancel copy table from PhpMyAdmin

I've executed the copy table operation from phpMyAdmin and it is taking too long (big table); now the original and the new table are not responding (I can still browse the other tables in phpMyAdmin).
I think maybe there is a read lock or something worse. Is it possible to cancel the operation, or at least see what's happening?
Two possible things - first, you could restart the webserver to stop any running PHP scripts. That might help if PHPMyAdmin copies the data in batches.
Second, you can execute the show full processlist query to see all running queries. Identifying the hung query should not be too hard. Then use the kill <pid> query (replace <pid> with the actual process ID) to kill it.
On phpMyAdmin's main page, go to Status > Processes. You should be seeing one process with a large value under Time; use the Kill link to stop it.

Single connection on mysql "Waiting for table metadata lock" to DROP INDEX

My MySQL server got into an unexpected state. I had a single connection executing a query, and it stayed in "Waiting for table metadata lock" for a long time (over 1643 secs). The query was a DROP INDEX on a very busy table in my system.
I actually tried the command many times. At first the database was busy and there were other connections performing multiple operations (both read and write). I thought this could be the reason, so I tried, in sequence, to:
Kill any process running a Query on the same table or also in the "Waiting for table metadata lock" state
Kill even processes in the Sleep state
Remove every process and any grants from remote hosts (after this only root could connect and no other user was connected; the application itself was disconnected from the database)
Cancel and re-run the command (ensuring it would be the only active command)
And even in that state the problem persisted for minutes. The only live process query was:
ID: 2398884
USER: root
HOST: localhost
DB: zoom
COMMAND: Query
TIME: 1643
STATE: Waiting for table metadata lock
INFO: DROP INDEX index_x ON tb.schema
Afterwards we decided to restart mysqld, and when the server came back the issue was gone; I was able to run the drop index command.
I haven't been able to find anyone with a similar scenario. Is this normal under some circumstances? I tried to find which transaction was causing the "Waiting for table metadata lock" and was not able to identify one.
Note: besides the drop index and my own root connection to inspect progress and status, there were Binlog Dump replication queries running.
No, this is not normal, and I'm sure you just haven't killed the right thread. A restart of MySQL should not be necessary. If it were, me and the company I work for would be the first to abandon it.
A metadata lock happens when one transaction touches a table and another transaction (your drop index statement) wants to have a lock, but the first transaction isn't committed yet. Sounds too common, but play it through:
session1 > start transaction;
session1 > select * from foo;
That's what I mean by "touching". A simple select is enough, and it can happen anywhere in the transaction. It doesn't matter if you run no more statements after that or if you run another statement (as long as it's not a commit; or rollback;); this transaction prevents other transactions from getting the metadata lock.
session2 > alter table foo add column bar int;
Now session2 is waiting for the metadata lock.
Regarding what you tried:
The thread you have to kill is not necessarily a transaction that is currently running statements on the same table. Killing other statements that are also waiting for the metadata lock doesn't help; they are just victims too. But it doesn't hurt either.
Not a bad idea.
Not sure what you mean by that. But removing grants surely doesn't help. New or removed grants don't apply to sessions that are still open; a session has to be reopened for changed grants to take effect.
This doesn't help at all.
That being said, I surely don't understand why the accepted answer in your linked question has more than 100 upvotes. Those queries indeed do not show the locks at all. The second answer is right, though: kill the transactions that have been running the longest first.
Note though, you have to check the ACTIVE x seconds parts in the output of SHOW ENGINE INNODB STATUS\G in the TRANSACTIONS section. Do not use the time value in the processlist. This only indicates the time since the last status change of this thread.
Read more about metadata locks here.
Oh, and also make sure to read this if you're using MySQL 5.7 or newer.
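For reference, a minimal sketch of that 5.7+ metadata lock instrumentation (assuming performance_schema is enabled; the mdl instrument is off by default in 5.7 and on by default in 8.0):
-- enable metadata lock instrumentation
UPDATE performance_schema.setup_instruments
SET enabled = 'YES'
WHERE name = 'wait/lock/metadata/sql/mdl';
-- then list who holds and who waits for metadata locks
SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks;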

Identifying who makes a lot of INSERT requests in MySQL

Recently, I noticed that my MySQL server processes a lot of INSERTs. How can I detect the user or database responsible for this activity?
insert 33 k 97.96 k 44.21%
SHOW FULL PROCESSLIST will return every connection, user, and query currently active, if you have the PROCESS permission. That's more for immediate problems, but it has the least overhead.
If you use query logging, then instead of the regular query log (which can slow your server down noticeably) use the binary log to keep the overhead minimal. It only tracks actions that change tables, like CREATE/DROP/ALTER and INSERT/UPDATE/REPLACE.
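For example, a sketch of inspecting the binary log from the mysql client (assuming binary logging is enabled; the file name below is a placeholder):
SHOW BINARY LOGS;                                -- list the available binlog files
SHOW BINLOG EVENTS IN 'binlog.000042' LIMIT 20;  -- inspect one of them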
What you should log periodically (once a minute):
SHOW FULL PROCESSLIST;
SHOW GLOBAL STATUS;
With the slow log enabled as well, this gives you a very good chance of answering this kind of question.
If you have binary logging enabled, you can check the time and user of each row insert.
If you have the general log enabled, then everything is logged.
Look in your query logs. They will show every connection to MySQL and every command it executes.
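On MySQL 5.6+ with performance_schema enabled, you can also aggregate statement counts per user directly; a sketch:
SELECT user, event_name, count_star
FROM performance_schema.events_statements_summary_by_user_by_event_name
WHERE event_name = 'statement/sql/insert'
ORDER BY count_star DESC;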

Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the where part, so the same column was set to the same value for all the rows in the table; this was done thousands of times, and the column was indexed, so the corresponding index was probably updated lots of times too.
I noticed something was wrong because it took too long, so I killed the script. I even rebooted my computer since then, but something is stuck in the table, because simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads use the following command in mysql command line interface:
SHOW PROCESSLIST;
It can also be sent from phpMyAdmin if you don't have access to mysql command line interface.
This will display a list of threads with corresponding ids and execution time, so you can KILL the threads that are taking too much time to execute.
In phpMyAdmin you will have a button for stopping threads by using KILL, if you are using command line interface just use the KILL command followed by the thread id, like in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
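To see at a glance who owns each of those transactions, you can join against the processlist (a sketch, assuming MySQL 5.5+ with InnoDB):
SELECT trx.trx_id, trx.trx_started, trx.trx_mysql_thread_id AS thread_id,
       p.user, p.host, LEFT(trx.trx_query, 80) AS query
FROM information_schema.innodb_trx AS trx
JOIN information_schema.processlist AS p
  ON p.id = trx.trx_mysql_thread_id
ORDER BY trx.trx_started;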
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependencies - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
(This table was removed in MySQL 8.0; see the replacement sketch after this list.)
After investigating the results above, you should be able to see what is locking what.
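On MySQL 8.0, the innodb_locks and innodb_lock_waits tables were removed; the equivalent information lives in performance_schema (a sketch; the sys view exists from 5.7 on):
SELECT * FROM performance_schema.data_locks;
SELECT * FROM performance_schema.data_lock_waits;
-- or the friendlier view:
SELECT * FROM sys.innodb_lock_waits;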
The root cause of the issue might be in your code too - please check the related functions, especially for annotations if you use JPA with something like Hibernate.
For example, as described here, misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
Truth is, there is probably some way to optimize either your queries or your DB, but try these two queries for a workaround fix.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
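You can verify that the change took effect (the GLOBAL value applies to new sessions, the SESSION value to the current one):
SHOW GLOBAL VARIABLES LIKE 'innodb_lock_wait_timeout';
SHOW SESSION VARIABLES LIKE 'innodb_lock_wait_timeout';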
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you are not able to acquire the lock, you retry for some time. If the lock still cannot be obtained, the "lock wait timeout exceeded" error is thrown. The reason you cannot acquire the lock is that you are not closing the connection: when you try to get the lock a second time, you cannot acquire it because your previous connection is still open and holding it.
Solution: close the connection or use setAutoCommit(true) (depending on your design) to release the lock.
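At the SQL level, the equivalent fix is simply to end the transaction so its locks are released; a sketch:
COMMIT;               -- or ROLLBACK; to discard the changes
SET autocommit = 1;   -- re-enable autocommit for this session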
Restart MySQL, it works fine.
BUT beware that if such a query is stuck, there is a problem somewhere:
in your query (misplaced char, cartesian product, ...)
very numerous records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is not a long-lived solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see if something is possible there to speed up the query (indexes, complex tests, ...).
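For example, reusing the placeholder names from the question (EXPLAIN accepts UPDATE statements from MySQL 5.6 on; the WHERE clause is illustrative):
EXPLAIN UPDATE some_table SET some_column = 'some_value' WHERE id = 123;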
Go to the process list in MySQL (SHOW PROCESSLIST;) so you can see whether a task is still working.
Kill the particular process or wait until it completes.
I ran into the same problem with an "update" statement. My solution was simply to run through the operations available in phpMyAdmin for the table. I optimized, flushed and defragmented the table (not in that order). No need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force close the SQL process from Task Manager. If that didn't fix it, just restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL and then replace them with updated records (cascade-deleting several related records streamlines removing everything related to a single record).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.
I tried running the delete query for this group of records through the cPanel MySQL access on the web. Got the same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function using the Access script over the ODBC connection to MySQL.
Issue in my case: some updates were made to some rows within a transaction, and before the transaction was committed, the same rows were being updated elsewhere, outside this transaction. Ensuring that all updates to the rows are made within the same transaction resolved my issue.
The issue was resolved in my case by changing DELETE to TRUNCATE.
Issue:
query = "delete from Survey1.sr_survey_generic_details"
mycursor.execute(query)
Fix:
query = "truncate table Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and from control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to restart all access points to the database: close them all and restart.
Fixed it.
Make sure you don't have a mismatched data type in your insert query.
I had an issue where I was trying to store user browser agent data in a VARCHAR(255) column and kept hitting this lock; when I changed the column to TEXT(255) it was fixed.
So most likely it is a data type mismatch.
I solved the problem by dropping the table and restoring it from backup.

Log killed queries in MySQL

I have some strange bug in an application (or is it the MySQL build?) that causes queries to remain in a "locked" state forever, filling up the max number of threads.
I read about setting the wait_timeout variable to kill the "bogus" threads after a period of time. This works OK, but I would like to log the killed queries for further inspection and to make sure backup scripts are not being killed.
Is there any possibility to do that?
Thanks.
You might be able to use the slow log, but I'm not sure it will help if the problem is that the queries never complete. Worth a shot.
Also, you may be able to see what's going on by running SHOW FULL PROCESSLIST while you've got dead threads. It should show you what the problem is and what the query was.
If you can simulate this in a development environment, you could also turn on general query logging (which records every statement) and then just tail the log after it crashes.
In the past, I have tagged queries with a unique comment (per query type):
/* Query_12345 */ SELECT ... FROM ... WHERE ... LIMIT ...
A background process would poll SHOW FULL PROCESSLIST and look for any queries that were more than X seconds long, and tagged with Query_NNNNN.
Finally, it would kill them if they ran on for too long. This allowed the server to breathe while we figured out how to optimize the 80,000,000-record table that was slowing things down.
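A minimal sketch of the polling side of that approach (the Query_ tag and the 30-second threshold are assumptions for illustration):
-- find tagged queries that have been running too long
SELECT id, time, LEFT(info, 120) AS query
FROM information_schema.processlist
WHERE info LIKE '%Query\_%'   -- matches the /* Query_NNNNN */ comment tag
  AND time > 30;
-- then, for each offending id: KILL <id>;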