We are running into a very strange problem with disjoint concurrent PHP processes accessing the same table (using table locks).
There is no replication involved; we're working on a monolith with the mysqli interface of PHP 5.6.40 (I know, upgrading is overdue, we're working on it).
Let's say the initial value of a field named "value" in xyz is 0.
PHP-Process 1: Modifies the table
LOCK TABLES xyz WRITE;
UPDATE xyz SET value = 1;
UNLOCK TABLES;
PHP-Process 2: Depends on a value in that table (e.g. a check for access rights)
SELECT value from xyz;
Now, if we manage to make Process 2 halt and wait for the lock to be released, on a local dev environment (XAMPP, MariaDB 10.1.x) everything is fine: it will get the value 1.
BUT, on our production server (Debian Linux, MySQL 5.6.x) there is a seemingly necessary wait period before the new value materializes in query results:
An immediate SELECT statement delivers 0.
sleep(1) followed by the SELECT delivers 1.
We always assumed that a) LOCK / UNLOCK would flush tables, or b) a manual FLUSH TABLES xyz WITH READ LOCK would also flush caches, forcing a write to disk and generally ensuring that every following query from every other process yields the expected result.
What we tried so far:
FLUSH TABLES as mentioned - no result.
Explicitly acquiring a LOCK before executing the SELECT statement - no result.
Just waiting some time - yielded the result we are looking for, but this is a dirty, unreliable solution.
What do you think? What might be the cause? I was thinking of: the query cache not updating in time, or OS paging not writing data back to disk in time / not invalidating the memory page holding the table data.
Is there any way you know of to reliably guarantee sequential consistency of the data?
The default transaction isolation level differs between MariaDB and MySQL versions.
You have to set up the same mode if you expect the same result. It also seems questionable to test against different MySQL versions.
https://mariadb.com/kb/en/mariadb-transactions-and-isolation-levels-for-sql-server-users/
Your second process may have started its transaction long before the commit was actually issued, so its snapshot predates the UPDATE.
If you don't want to dig into transaction isolation, just try a rollback before the select (but the correct solution is to determine exactly which isolation level your app requires):
ROLLBACK; -- may give an error, but that is okay.
SELECT value FROM xyz;
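The snapshot behaviour behind this advice can be sketched with SQLite in WAL mode, which, much like InnoDB under REPEATABLE READ, lets an open transaction keep seeing the data as of its first read. The file path and table name are illustrative, not from the original setup:

```python
import os
import sqlite3
import tempfile

# Two connections to one database file, standing in for two MySQL sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit
reader = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")
reader.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE xyz (value INTEGER)")
writer.execute("INSERT INTO xyz VALUES (0)")

reader.execute("BEGIN")  # open a transaction in the "second process"
before = reader.execute("SELECT value FROM xyz").fetchone()[0]  # snapshot taken here

writer.execute("UPDATE xyz SET value = 1")  # committed immediately (autocommit)

stale = reader.execute("SELECT value FROM xyz").fetchone()[0]  # still the old snapshot
reader.execute("ROLLBACK")  # end the transaction, discarding the snapshot
fresh = reader.execute("SELECT value FROM xyz").fetchone()[0]  # now sees the commit
```

With READ COMMITTED in MySQL, or simply by ending the stale transaction as above, the second process would see the new value without any sleep().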
My MySQL server got into an unexpected state. I had one single connection executing a query, and it stayed in "Waiting for table metadata lock" for a long time (over 1643 secs). The query was a DROP INDEX on a very busy table within my system.
I actually tried the command many times. At first the database was busy and there were other connections performing multiple operations (both read and write). I thought this could be the reason, so I tried, in sequence:
kill any process running a query on the same table or also in "Waiting for table metadata lock"
kill even processes in the Sleep state
remove every process and all grants from remote hosts (after this only root could connect and no other user was connected; the application itself was disconnected from the database)
cancel and re-run the command (ensuring it would be the only active statement)
Even in that state the problem persisted for minutes. The only live query was:
ID: 2398884
USER: root
HOST: localhost
DB: zoom
COMMAND: Query
TIME: 1643
STATE: Waiting for table metadata lock
INFO: DROP INDEX index_x ON tb.schema
Afterwards we decided to restart mysqld, and when the server came back the issue was gone: I was able to run the DROP INDEX command.
I haven't been able to find anyone with a similar scenario. Is this normal under some circumstances? I tried to find which transaction was causing the "Waiting for table metadata lock" and was not able to identify any.
Note: besides the DROP INDEX and my own root connection (to inspect progress and status), there were only Binlog Dump replication threads running.
No, this is not normal, and I'm sure you just haven't killed the right thread. A restart of MySQL should not be necessary; if it were, the company I work for and I would be the first to abandon it.
A metadata lock happens when one transaction touches a table and another transaction (your DROP INDEX statement) wants a lock on it, but the first transaction isn't committed yet. Sounds too common, but play it through:
session1 > start transaction;
session1 > select * from foo;
That's what I mean by "touching". A simple SELECT is enough, and it can happen anywhere in the transaction. It doesn't matter whether you run no more statements after that or whether you run other statements (as long as none of them is COMMIT; or ROLLBACK;): this transaction prevents other sessions from getting the metadata lock.
session2 > alter table foo add column bar int;
Now session2 is waiting for the metadata lock.
Regarding what you tried:
What you have to kill is not necessarily a transaction that is currently running statements on the same table. Killing other statements that are also waiting for the metadata lock doesn't help; they are just victims too. But it doesn't hurt either.
Not a bad idea.
Not sure what you mean by that, but removing grants certainly doesn't help. New or removed grants don't apply to sessions that are already open; a session has to be reopened for changed grants to take effect.
This doesn't help at all.
That being said, I surely don't understand why the accepted answer in your linked question has more than 100 upvotes. Those queries do indeed not show the locks at all. The second answer is right, though. Kill the transactions that are running the longest time first.
Note though, you have to check the ACTIVE x seconds parts in the output of SHOW ENGINE INNODB STATUS\G in the TRANSACTIONS section. Do not use the time value in the processlist. This only indicates the time since the last status change of this thread.
read more about metadata locks here
Oh, and also make sure to read this if you're using MySQL 5.7 or newer.
On Linux, I thought my MySQL queries were somehow not working, because a query would not show any progress in the amount of data being entered into a table. Is there a way to refresh the data visible to the MySQL command line without exiting and re-entering it?
I have been searching around, but so far have only seen suggestions to run mysql -e inside a bash loop. I'd like to stay in the MySQL command line and run other commands such as DESCRIBE on tables.
What you're seeing is probably the result of a long-running REPEATABLE-READ transaction. Until you begin a new transaction, you can see only data that was committed at the time you started your current transaction.
Normally, the mysql client operates in autocommit mode. That is, every SQL statement implicitly starts and commits its own transaction. In this mode, you should always see current data every time you query. You apparently are not using autocommit mode.
You can turn on autocommit:
mysql> SET SESSION autocommit=1;
Or you can start a new transaction at your convenience:
mysql> BEGIN;
Another option is to operate within a READ-COMMITTED transaction. This means your transaction does not need to preserve a repeatable view of data from the time you started your transaction. It always views the most recently committed changes, even while your transaction is ongoing.
mysql> SET SESSION tx_isolation='READ-COMMITTED';
mysql> BEGIN;
(Note: MySQL 8.0.3 changed the name of tx_isolation to transaction_isolation.)
The link @JimmyB posted in a comment above about transaction isolation levels is helpful reading.
I have a simple use case to solve. Imagine that somebody tells you "hey, this particular set of queries is not transactional!" and your job is to deny that statement. How could one do that?
Assumptions:
the user is able to reproduce this by "clicking one magic button", which triggers, let's say, 3 consecutive INSERTs
we have access to mysql client and all privileges
we do not have access to code base of an application, so no integration tests possible to verify this from code perspective
we are using MySQL server with InnoDb
we can tweak MySQL configuration as we want (slow queries, etc.)
There's a transaction section in the output of:
SHOW ENGINE INNODB STATUS\G
Which looks like (that's from my local MySQL currently not running any queries):
TRANSACTIONS
------------
Trx id counter 900
Purge done for trx's n:o < 0 undo n:o < 0
History list length 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 47, OS thread handle 0x7fc8b85d3700, query id 120 localhost root
SHOW ENGINE INNODB STATUS
I don't know whether you can actively monitor this information so that you see it at the exact moment of your 3 insert operations. You can probably use that last bullet of yours (slow queries) here...
In addition, MySQL has command counters, which can be accessed via:
SHOW GLOBAL STATUS LIKE 'Com\_%';
Each execution of a command increments the counter associated with it. The transaction-related counters are Com_begin, Com_commit and Com_rollback, so you can execute your code and watch those counters.
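One way to put those counters to work is to snapshot them before and after pressing the "magic button" and diff the two snapshots: if Com_begin/Com_commit move by at least one per button press, the workload is wrapped in explicit transactions. The helper below only does the diffing; fetching the rows (with any MySQL client running SHOW GLOBAL STATUS LIKE 'Com\_%') is left out, and the sample dictionaries are illustrative:

```python
def transaction_counter_delta(before, after):
    """Diff transaction-related Com_* counters between two snapshots.

    `before` and `after` map status-variable names to values, as returned
    by SHOW GLOBAL STATUS LIKE 'Com\\_%' (values may arrive as strings).
    """
    watched = ("Com_begin", "Com_commit", "Com_rollback")
    return {name: int(after.get(name, 0)) - int(before.get(name, 0))
            for name in watched}

# Illustrative snapshots taken around one press of the button:
before = {"Com_begin": "17", "Com_commit": "17", "Com_rollback": "2"}
after = {"Com_begin": "18", "Com_commit": "18", "Com_rollback": "2"}
delta = transaction_counter_delta(before, after)
# Com_commit rising by exactly 1 suggests the 3 INSERTs shared one transaction.
```

Run the button press several times on an otherwise quiet server, since other sessions also increment these global counters.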
I'm running the following MySQL UPDATE statement:
mysql> update customer set account_import_id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
I'm not using a transaction, so why would I be getting this error? I even tried restarting my MySQL server and it didn't help.
The table has 406,733 rows.
HOW TO FORCE UNLOCK locked tables in MySQL:
Breaking locks like this may mean atomicity is not enforced for the SQL statements that caused the lock.
This is hackish, and the proper solution is to fix the application that caused the locks. However, when dollars are on the line, a swift kick will get things moving again.
1) Enter MySQL
mysql -u your_user -p
2) Let's see the list of locked tables
mysql> show open tables where in_use>0;
3) Let's see the list of the current processes, one of them is locking your table(s)
mysql> show processlist;
4) Kill one of these processes
mysql> kill <put_process_id_here>;
You are using a transaction; autocommit does not disable transactions, it just makes them commit automatically at the end of each statement.
What could be happening is that some other thread is holding a record lock on some record (you're updating every record in the table!) for too long, and your thread is timing out. Another possibility is that you are running multiple (2+) UPDATE queries against the same row within a single transaction.
You can see more details of the event by issuing a
SHOW ENGINE INNODB STATUS
after the event (in SQL editor). Ideally do this on a quiet test-machine.
mysql> set innodb_lock_wait_timeout=100;
Query OK, 0 rows affected (0.02 sec)
mysql> show variables like 'innodb_lock_wait_timeout';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 100 |
+--------------------------+-------+
Now trigger the lock again. You have 100 seconds time to issue a SHOW ENGINE INNODB STATUS\G to the database and see which other transaction is locking yours.
Take a look at whether your database is well tuned, especially the transaction isolation. Increasing the innodb_lock_wait_timeout variable is not a good idea.
Check your database transaction isolation level in MySQL:
mysql> SELECT @@GLOBAL.tx_isolation, @@tx_isolation, @@session.tx_isolation;
+-----------------------+-----------------+------------------------+
| @@GLOBAL.tx_isolation | @@tx_isolation | @@session.tx_isolation |
+-----------------------+-----------------+------------------------+
| REPEATABLE-READ | REPEATABLE-READ | REPEATABLE-READ |
+-----------------------+-----------------+------------------------+
1 row in set (0.00 sec)
You could get improvements changing the isolation level. Use the Oracle-like READ COMMITTED instead of REPEATABLE READ. REPEATABLE READ is the InnoDB default.
mysql> SET tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)
Also, try to use SELECT FOR UPDATE only if necessary.
Something is blocking the execution of the query. Most likely another query updating, inserting into, or deleting from one of the tables in your query. You have to find out what it is:
SHOW PROCESSLIST;
Once you locate the blocking process, find its id and run :
KILL {id};
Re-run your initial query.
Run SHOW PROCESSLIST; and then KILL the connection that is in the Sleep state. In my case it was id 2156:
mysql> SHOW PROCESSLIST;
mysql> KILL 2156;
100% agree with what MarkR said: autocommit makes each statement a one-statement transaction.
SHOW ENGINE INNODB STATUS should give you some clues as to the reason for the deadlock. Have a good look at your slow query log too, to see what else is querying the table, and try to remove anything that's doing a full table scan. Row-level locking works well, but not when you're trying to lock all of the rows!
Try updating the two parameters below, as they are probably still at their default values:
innodb_lock_wait_timeout = 50
innodb_rollback_on_timeout = ON
To check a parameter's current value, you can use SQL like:
SHOW GLOBAL VARIABLES LIKE 'innodb_rollback_on_timeout';
Can you update any other record within this table, or is the table heavily used? My thinking is that while your statement waits to acquire the lock it needs to update the record, the configured timeout expires. You may be able to increase that timeout, which may help.
If you've just killed a big query, it will take time to rollback. If you issue another query before the killed query is done rolling back, you might get a lock timeout error. That's what happened to me. The solution was just to wait a bit.
Details:
I had issued a DELETE query to remove about 900,000 out of about 1 million rows.
I ran this by mistake (removes only 10% of the rows):
DELETE FROM table WHERE MOD(id,10) = 0
Instead of this (removes 90% of the rows):
DELETE FROM table WHERE MOD(id,10) != 0
I wanted to remove 90% of the rows, not 10%. So I killed the process in the MySQL command line, knowing that it would roll back all the rows it had deleted so far.
Then I ran the correct command immediately, and got a lock timeout exceeded error soon after. I realized that the lock might actually be the rollback of the killed query still happening in the background. So I waited a few seconds and re-ran the query.
In our case the problem did not have much to do with the locks themselves.
The issue was that one of our application endpoints needed to open 2 connections in parallel to process a single request.
Example:
Open 1st connection
Start transaction 1
Lock 1 row in table1
Open 2nd connection
Start transaction 2
Lock 1 row in table2
Commit transaction 2
Release 2nd connection
Commit transaction 1
Release 1st connection
Our application had a connection pool limited to 10 connections.
Unfortunately, under load, as soon as all connections were used the application stopped working and we started having this problem.
We had several requests that needed to open a second connection to complete, but could not due to the connection pool limit. As a consequence, those requests were keeping a lock on the table1 row for a long time leading the following requests that needed to lock the same row to throw this error.
Solution:
In the short term, we patched the problem by increasing the connection pool limit.
In the long term, we removed all nested connections, to fully solve the issue.
Tips:
You can easily check if you have nested connections by trying to lower your connection pool limit to 1 and test your application.
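The starvation described above can be sketched without any database at all: model the pool as a bounded queue and let one request try to take a second, nested connection. The pool size of 1 and the timeout are illustrative; in the real scenario, the row lock in table1 is held for the entire time the inner pool.get() is starving:

```python
import queue

# A connection pool with a single connection, standing in for the real pool.
pool = queue.Queue()
pool.put("conn-1")

def handle_request():
    outer = pool.get()                 # 1st connection: transaction 1 locks a row in table1
    try:
        inner = pool.get(timeout=0.2)  # nested 2nd connection for transaction 2
    except queue.Empty:
        pool.put(outer)                # give the connection back
        return "starved"               # pool exhausted: the row lock was held all along
    pool.put(inner)
    pool.put(outer)
    return "ok"

result = handle_request()
# With a pool of 1 the nested get() can never succeed, so result == "starved";
# a pool of >= 2 (the short-term fix) would return "ok".
```

The long-term fix in the answer, removing nested connections entirely, corresponds to never calling pool.get() while already holding a connection.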
The number of rows is not huge... Create an index on account_import_id if it's not the primary key.
CREATE INDEX idx_customer_account_import_id ON customer (account_import_id);
Make sure the database tables are using the InnoDB storage engine and the READ-COMMITTED transaction isolation level.
You can check it with SELECT @@GLOBAL.tx_isolation, @@tx_isolation; on the mysql console.
If it is not set to READ-COMMITTED, then set it; make sure beforehand that you have SUPER privileges in MySQL.
You can take help from http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html.
By setting this, I think your problem will be solved.
You might also want to check that you aren't attempting to update this in two processes at once. Users (@tala) have encountered similar error messages in this context; maybe double-check that...
I came from Google and I just wanted to add the solution that worked for me. My problem was that I was trying to delete records of a huge table that had a lot of FKs in cascade, so I got the same error as the OP.
I disabled autocommit, and then it worked by just adding COMMIT at the end of the SQL sentence. As far as I understood, this releases the buffer bit by bit instead of waiting until the end of the command.
To keep with the example of the OP, this should have worked:
mysql> set autocommit=0;
mysql> update customer set account_import_id = 1; commit;
Do not forget to reactivate the autocommit again if you want to leave the MySQL config as before.
mysql> set autocommit=1;
Late to the party (as usual); however, my issue was that I wrote some bad SQL (being a novice) and several processes had a lock on the record(s) <-- not sure of the appropriate verbiage. I ended up having to just run SHOW PROCESSLIST and then kill the IDs using KILL <id>.
This kind of thing happened to me when I was using the PHP language construct exit; in the middle of a transaction. The transaction then "hangs" and you need to kill the MySQL process (described above, via the processlist).
In my instance, I was running an abnormal query to fix data. If you lock the tables in your query, then you won't have to deal with the Lock timeout:
LOCK TABLES `customer` WRITE;
update customer set account_import_id = 1;
UNLOCK TABLES;
This is probably not a good idea for normal use.
For more info see: MySQL 8.0 Reference Manual
I ran into this having 2 Doctrine DBAL connections, one of them non-transactional (for important logs); they are intended to run in parallel, not depending on each other.
CodeExecution(
TransactionConnectionQuery()
TransactionlessConnectionQuery()
)
My integration tests were wrapped in transactions for data rollback after every test.
beginTransaction()
CodeExecution(
TransactionConnectionQuery()
TransactionlessConnectionQuery() // CONFLICT
)
rollBack()
My solution was to disable the wrapping transaction in those tests and reset the db data in another way.
We ran into this issue yesterday and after slogging through just about every suggested solution here, and several others from other answers/forums we ended up resolving it once we realized the actual issue.
Due to some poor planning, our database was stored on a mounted volume that was also receiving our regular automated backups. That volume had reached max capacity.
Once we cleared up some space and restarted, this error was resolved.
Note that we did also manually kill several of the processes: kill <process_id>; so that may still be necessary.
Overall, our takeaway was that it was incredibly frustrating that none of our logs or warnings directly mentioned a lack of disk space, but that did seem to be the root cause.
I had a similar error when using Python to access a MySQL database.
The Python program was using a while loop and a for loop.
Closing the cursor and connection at the appropriate line solved the problem:
https://github.com/nishishailesh/sensa_host_com/blob/master/sensa_write.py
see line 230
It appears that repeatedly opening new connections without closing the previous ones produced this error.
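A minimal sketch of that fix, using SQLite in place of MySQL (the loop body is illustrative): open, use, and close the connection inside each iteration instead of letting unclosed connections pile up and hold locks:

```python
import sqlite3

results = []
for i in range(3):
    conn = sqlite3.connect(":memory:")  # one connection per iteration
    cur = conn.cursor()
    cur.execute("SELECT ?", (i,))
    results.append(cur.fetchone()[0])
    cur.close()
    conn.close()                        # releases any locks this connection held
```

A context manager (or a single connection reused across iterations) achieves the same goal; the point is that every connection you open gets closed deterministically.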
I faced a similar issue when doing some testing.
Reason - in my case the transaction was not committed by my Spring Boot application because I killed the @Transactional function during execution (while it was updating some rows). As a result, the transaction was never committed to the database (MySQL).
Result - unable to update those rows from anywhere, but able to update the other rows of the table.
mysql> update some_table set some_value = "Hello World" where id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
Solution - killed all the MySQL processes using
sudo killall -9 mysqld
sudo killall -9 mysqld_safe (mysqld_safe restarts the server when an error occurs and logs runtime information to an error log; killing it was not required in my case)
I had this same error even though I was only updating one table with one entry, but after restarting MySQL it was resolved.