I am trying to delete several rows from a MySQL 5.0.45 database:
delete from bundle_inclusions;
The client works for a while and then returns the error:
Lock wait timeout exceeded; try restarting transaction
It's possible there is some uncommitted transaction out there that has a lock on this table, but I need this process to trump any such locks. How do I break the lock in MySQL?
I agree with Erik; TRUNCATE TABLE is the way to go. However, if you can't use that for some reason (for example, if you don't really want to delete every row in the table), you can try the following options:
Delete the rows in smaller batches (e.g. DELETE FROM bundle_inclusions WHERE id BETWEEN ? AND ?); see the sketch after this list
If it's a MyISAM table (actually, this may work with InnoDB too), try issuing a LOCK TABLE before the DELETE. This should guarantee that you have exclusive access.
If it's an InnoDB table, then after the timeout occurs, use SHOW ENGINE INNODB STATUS. This should give you some insight into why the lock acquisition failed.
If you have the SUPER privilege you could try SHOW FULL PROCESSLIST to see what other connections (if any) are using the table, and then use KILL to get rid of the one(s) you're competing with.
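For the batch-delete option, here is a minimal sketch; the id ranges and step size are assumptions that you would adjust to your data:
-- Under autocommit, each statement is its own short transaction, so locks
-- are held only briefly and other sessions can interleave.
DELETE FROM bundle_inclusions WHERE id BETWEEN 1 AND 10000;
DELETE FROM bundle_inclusions WHERE id BETWEEN 10001 AND 20000;
-- ...repeat in steps until the highest id is covered.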
I'm sure there are many other possibilities; I hope one of these helps.
Linux: in the MySQL configuration file (/etc/my.cnf or /etc/mysql/my.cnf), insert or edit this line:
innodb_lock_wait_timeout = 50
Increase the value sufficiently (it is in seconds), restart the database, and perform your changes. Then revert the change and restart again.
I had the same issue: a rogue transaction without an end. I restarted the mysqld process. You don't need to truncate the table, but you may lose the data from that rogue transaction.
Guessing: truncate table bundle_inclusions
I am running the following UPDATE:
update table_x set name= 'xyz' where id = 121;
and getting:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
I have googled this a number of times, and adding extra time to innodb_lock_wait_timeout is not helping me.
Please let me know the root cause of this issue and how I can solve it.
I am using MySQL 5.6 (master-master replication) on a dedicated server.
Also, table_x (an InnoDB table) is heavily used in the database. Autocommit is on.
Find out what other statement is running at the same time as this UPDATE. It sounds as if it is running a long time and hanging onto the rows that this UPDATE needs. Meanwhile this statement is waiting.
One way to see it is to do SHOW FULL PROCESSLIST; while the UPDATE is hung.
(In my opinion, the default of 50 seconds for innodb_lock_wait_timeout is much too high. Raising the value only aggravates the situation.)
If you give up on fixing the 'root cause' of the conflict, then you might tackle the issue a different way.
Lower innodb_lock_wait_timeout to, say, 5.
Programmatically catch the error when it times out and restart the UPDATE (see the sketch after this list).
Do likewise for all other transactions. Other queries may also be piling up; restarting some may "uncork" the problem.
SHOW VARIABLES LIKE 'tx_isolation'; -- There may be a better setting for it, especially if a long-running SELECT is the villain.
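As a sketch of the catch-and-retry idea in pure SQL (the procedure name is hypothetical; in application code you would do the same with a try/catch, and in real code you should also cap the number of retries):
DELIMITER //
CREATE PROCEDURE retry_update()
BEGIN
  DECLARE got_timeout INT DEFAULT 1;
  WHILE got_timeout = 1 DO
    BEGIN
      -- 1205 = ER_LOCK_WAIT_TIMEOUT; on timeout the handler sets the flag
      -- back to 1 and the WHILE loop runs the UPDATE again.
      DECLARE CONTINUE HANDLER FOR 1205 SET got_timeout = 1;
      SET got_timeout = 0;
      UPDATE table_x SET name = 'xyz' WHERE id = 121;
    END;
  END WHILE;
END//
DELIMITER ;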
It looks like one of your other transactions holds a lock. You can check the InnoDB status using this:
SHOW ENGINE INNODB STATUS\G
Check if there is any lock on the tables like this:
show open tables where in_use>0;
Then kill the processes that hold the locks.
I have solved the problem. I tried different values for innodb_lock_wait_timeout and also tried changing the queries, but I got the same error. I did some research and asked my colleagues about Hibernate.
They were running a number of transactions that updated the main table and only committed at the very end. So I suggested they commit each transaction individually, and now I am no longer getting any lock wait timeout errors.
What's the timeout for MySQL's LOCK TABLES statement?
Can't find it anywhere.
I tried to set the innodb_lock_wait_timeout variable in my.cnf, but it seems it relates to a different kind of locking (row-level), not to table locking.
It simply has no effect on LOCK TABLES.
I want to set a low timeout value in case of deadlock, because if some operation takes a LOCK on the tables and something goes wrong, it will hang up the whole site!
That is unacceptable, for example, when a customer is finishing a purchase on your site.
My work-around is to create a dedicated lock table and just lock a row in that table. This has the advantage of only blocking the processes that specifically want to be locked; other parts of the application can continue to access the tables even if they are at some point touched by the update processes.
Setup
CREATE TABLE `mutex` (
EMPTY ENUM('') NOT NULL,
PRIMARY KEY (EMPTY)
);
Usage
set innodb_lock_wait_timeout = 1;
start transaction;
insert into `mutex` values();
[... do the real work here ... or somewhere else ... even a different machine ...]
delete from `mutex`;
commit;
Why are you using LOCK TABLES?
If you are using MyISAM (which sometimes needs LOCK TABLES), you should convert to InnoDB.
If you are using InnoDB, you should never use LOCK TABLES. Instead, depend on innodb_lock_wait_timeout (default is an unreasonably high 50 seconds). And you should check for errors.
InnoDB Deadlocks are caught and immediately cause an error. Certain non-deadlocks may wait for innodb_lock_wait_timeout.
Edit
Since the transaction looks like
BEGIN;
SELECT ...;
compute some stuff
UPDATE ... (using that stuff);
COMMIT;
You need to add FOR UPDATE to the end of the SELECT.
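A minimal sketch of that pattern (the table and column names are placeholders):
BEGIN;
-- FOR UPDATE takes the row locks during the SELECT, so no other
-- transaction can change the row between the SELECT and the UPDATE.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
-- compute some stuff
UPDATE accounts SET balance = 42 WHERE id = 1;
COMMIT;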
I think you are after the table_lock_wait_timeout variable, which was introduced in MySQL 5.0.10 but subsequently removed in 5.5. Unfortunately, the release notes don't specify an alternative to use, and I'm guessing that the general attitude is to switch over to using InnoDB transactions, as @Rick James has stated in his answer.
I think that removing the variable was unhelpful. Others may regard this as a case of the XY problem, where we are trying to fix a symptom (deadlocks) by changing the timeout period for locking tables, when really we should resolve the root cause by switching over to transactions instead. I think there may still be cases where table locks suit the application better than transactions, and they are perhaps a lot easier to comprehend, even if they perform worse.
The nice thing about using LOCK TABLES is that you can state the tables your queries depend upon before proceeding. With transactions, the locks are grabbed at the last possible moment, and if they can't be acquired and time out, you then need to check for this failure and roll back before trying everything all over again. It's simpler to have a 1-second timeout (minimum) on the LOCK TABLES query and keep retrying until you succeed, then proceed with your queries before unlocking the tables. This logic is at no risk of deadlocks.
I believe the developers' attitude is summed up by the following excerpt from the documentation:
...avoid using the LOCK TABLES statement, because it does not offer
any extra protection, but instead reduces concurrency.
The correct answer is the lock_wait_timeout system variable.
From the documentation:
This variable specifies the timeout in seconds for attempts to acquire
metadata locks. The permissible values range from 1 to 31536000 (1
year). The default is 31536000.
This timeout applies to all statements that use metadata locks. These
include DML and DDL operations on tables, views, stored procedures,
and stored functions, as well as LOCK TABLES, FLUSH TABLES WITH READ
LOCK, and HANDLER statements.
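For example, to keep a stuck LOCK TABLES from hanging the whole site, you can cap the wait per session (the customer table is borrowed from elsewhere on this page as a stand-in):
SET SESSION lock_wait_timeout = 1;  -- give up after 1 second
LOCK TABLES customer WRITE;
-- ... critical section ...
UNLOCK TABLES;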
I think you meant to ask about the default timeout value, which is 50 seconds. Per the MySQL documentation:
innodb_lock_wait_timeout (default: 50) The timeout in seconds an
InnoDB transaction may wait for a row lock before giving up. The
default value is 50 seconds.
I'm running the following MySQL UPDATE statement:
mysql> update customer set account_import_id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
I'm not using a transaction, so why would I be getting this error? I even tried restarting my MySQL server and it didn't help.
The table has 406,733 rows.
HOW TO FORCE UNLOCK for locked tables in MySQL:
Breaking locks like this may mean that atomicity is not enforced for the SQL statements that caused the lock.
This is hackish, and the proper solution is to fix your application that caused the locks. However, when dollars are on the line, a swift kick will get things moving again.
1) Enter MySQL
mysql -u your_user -p
2) Let's see the list of locked tables
mysql> show open tables where in_use>0;
3) Let's see the list of the current processes, one of them is locking your table(s)
mysql> show processlist;
4) Kill one of these processes
mysql> kill <put_process_id_here>;
You are using a transaction; autocommit does not disable transactions, it just makes them automatically commit at the end of the statement.
What could be happening is that some other thread is holding a record lock on some record (you're updating every record in the table!) for too long, and your thread is timing out. Another possibility is that you're running multiple (2+) UPDATE queries on the same row during a single transaction.
You can see more details of the event by issuing a
SHOW ENGINE INNODB STATUS
after the event (in SQL editor). Ideally do this on a quiet test-machine.
mysql> set innodb_lock_wait_timeout=100;
Query OK, 0 rows affected (0.02 sec)
mysql> show variables like 'innodb_lock_wait_timeout';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 100 |
+--------------------------+-------+
Now trigger the lock again. You have 100 seconds time to issue a SHOW ENGINE INNODB STATUS\G to the database and see which other transaction is locking yours.
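On MySQL 5.5-5.7 you can also query the InnoDB tables in information_schema to see directly which transaction is blocking which (in 8.0 this information moved to performance_schema):
-- Each row pairs a waiting transaction with the one blocking it.
SELECT r.trx_id              AS waiting_trx,
       r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_id              AS blocking_trx,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;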
Take a look at whether your database is fine-tuned, especially the transaction isolation. It isn't a good idea to increase the innodb_lock_wait_timeout variable.
Check your database transaction isolation level in MySQL:
mysql> SELECT @@GLOBAL.tx_isolation, @@tx_isolation, @@session.tx_isolation;
+-----------------------+-----------------+------------------------+
| @@GLOBAL.tx_isolation | @@tx_isolation | @@session.tx_isolation |
+-----------------------+-----------------+------------------------+
| REPEATABLE-READ | REPEATABLE-READ | REPEATABLE-READ |
+-----------------------+-----------------+------------------------+
1 row in set (0.00 sec)
You may get improvements by changing the isolation level: use the Oracle-like READ COMMITTED instead of REPEATABLE READ (the InnoDB default).
mysql> SET tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)
Also, try to use SELECT FOR UPDATE only if necessary.
Something is blocking the execution of the query, most likely another query updating, inserting into, or deleting from one of the tables in your query. You have to find out what it is:
SHOW PROCESSLIST;
Once you locate the blocking process, find its id and run:
KILL {id};
Re-run your initial query.
mysql> SHOW PROCESSLIST;
Then kill the process that is in the Sleep state. In my case the id was 2156:
mysql> KILL 2156;
100% agreed with what MarkR said: autocommit makes each statement a one-statement transaction.
SHOW ENGINE INNODB STATUS should give you some clues as to the deadlock reason. Have a good look at your slow query log too to see what else is querying the table, and try to remove anything that's doing a full table scan. Row-level locking works well, but not when you're trying to lock all of the rows!
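If the slow query log isn't already on, you can enable it at runtime (assuming you have the privilege to set global variables; the 1-second threshold is just an example):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log statements running longer than 1 second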
Try updating the two parameters below, as they are probably still at their default values. (Note that innodb_rollback_on_timeout is not a dynamic variable, so it has to be set in the configuration file and requires a server restart.)
innodb_lock_wait_timeout = 50
innodb_rollback_on_timeout = ON
To check a parameter's value, you can use the SQL below.
SHOW GLOBAL VARIABLES LIKE 'innodb_rollback_on_timeout';
Can you update any other record within this table, or is the table heavily used? What I am thinking is that the configured timeout expires while your statement is waiting to acquire the lock it needs to update this record. Increasing the timeout may help.
If you've just killed a big query, it will take time to roll back. If you issue another query before the killed query is done rolling back, you might get a lock timeout error. That's what happened to me. The solution was just to wait a bit.
Details:
I had issued a DELETE query to remove about 900,000 out of about 1 million rows.
I ran this by mistake (removes only 10% of the rows):
DELETE FROM table WHERE MOD(id,10) = 0
Instead of this (removes 90% of the rows):
DELETE FROM table WHERE MOD(id,10) != 0
I wanted to remove 90% of the rows, not 10%. So I killed the process in the MySQL command line, knowing that it would roll back all the rows it had deleted so far.
Then I ran the correct command immediately, and got a lock timeout exceeded error soon after. I realized that the lock might actually be the rollback of the killed query still happening in the background. So I waited a few seconds and re-ran the query.
In our case the problem did not have much to do with the locks themselves.
The issue was that one of our application endpoints needed to open 2 connections in parallel to process a single request.
Example:
Open 1st connection
Start transaction 1
Lock 1 row in table1
Open 2nd connection
Start transaction 2
Lock 1 row in table2
Commit transaction 2
Release 2nd connection
Commit transaction 1
Release 1st connection
Our application had a connection pool limited to 10 connections.
Unfortunately, under load, as soon as all connections were in use, the application stopped working and we started having this problem.
Several requests needed to open a second connection to complete but could not, due to the connection pool limit. As a consequence, those requests kept a lock on the table1 row for a long time, causing subsequent requests that needed to lock the same row to throw this error.
Solution:
In the short term, we patched the problem by increasing the connection pool limit.
In the long term, we removed all nested connections, to fully solve the issue.
Tips:
You can easily check whether you have nested connections by lowering your connection pool limit to 1 and testing your application.
The number of rows is not huge... Create an index on account_import_id if it's not the primary key.
CREATE INDEX idx_customer_account_import_id ON customer (account_import_id);
Make sure the database tables are using the InnoDB storage engine and the READ-COMMITTED transaction isolation level.
You can check it by running SELECT @@GLOBAL.tx_isolation, @@tx_isolation; on the mysql console.
If it is not set to READ-COMMITTED, then you must set it. Before setting it, make sure you have the SUPER privilege in MySQL.
You can take help from http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html.
I think setting this will solve your problem.
You might also want to check that you aren't attempting to update this in two processes at once. Users (@tala) have encountered similar error messages in this context; maybe double-check that...
I came from Google and I just wanted to add the solution that worked for me. My problem was that I was trying to delete records from a huge table that had a lot of FKs in cascade, so I got the same error as the OP.
I disabled autocommit, and then it worked by just adding COMMIT at the end of the SQL sentence. As far as I understood, this releases the buffer bit by bit instead of waiting until the end of the command.
To keep with the example of the OP, this should have worked:
mysql> set autocommit=0;
mysql> update customer set account_import_id = 1; commit;
Do not forget to reactivate autocommit if you want to leave the MySQL configuration as before.
mysql> set autocommit=1;
Late to the party (as usual), but my issue was that I wrote some bad SQL (being a novice) and several processes had a lock on the record(s) <-- not sure of the appropriate verbiage. I ended up having to just run SHOW PROCESSLIST and then kill the IDs using KILL <id>.
This kind of thing happened to me when I was using the PHP language construct exit; in the middle of a transaction. Then the transaction "hangs" and you need to kill the MySQL process (described above with processlist).
In my instance, I was running an abnormal query to fix data. If you lock the tables in your query, then you won't have to deal with the Lock timeout:
LOCK TABLES `customer` WRITE;
update customer set account_import_id = 1;
UNLOCK TABLES;
This is probably not a good idea for normal use.
For more info see: MySQL 8.0 Reference Manual
I ran into this having 2 Doctrine DBAL connections, one of them non-transactional (for important logs); they are intended to run in parallel, not depending on each other.
CodeExecution(
TransactionConnectionQuery()
TransactionlessConnectionQuery()
)
My integration tests were wrapped into transactions for data rollback after every test.
beginTransaction()
CodeExecution(
TransactionConnectionQuery()
TransactionlessConnectionQuery() // CONFLICT
)
rollBack()
My solution was to disable the wrapping transaction in those tests and reset the db data in another way.
We ran into this issue yesterday, and after slogging through just about every suggested solution here, plus several others from other answers/forums, we ended up resolving it once we realized the actual issue.
Due to some poor planning, our database was stored on a mounted volume that was also receiving our regular automated backups. That volume had reached max capacity.
Once we cleared up some space and restarted, this error was resolved.
Note that we did also manually kill several of the processes: kill <process_id>; so that may still be necessary.
Overall, our takeaway was that it was incredibly frustrating that none of our logs or warnings directly mentioned a lack of disk space, but that did seem to be the root cause.
I had a similar error when using Python to access a MySQL database.
The Python program was using a while and a for loop.
Closing the cursor and link at the appropriate line solved the problem:
https://github.com/nishishailesh/sensa_host_com/blob/master/sensa_write.py
see line 230
It appears that requesting repeated links without closing the previous link produced this error.
I've faced a similar issue when doing some testing.
Reason - In my case, the transaction was not committed from my Spring Boot application, because I killed the @Transactional function during execution (when the function was updating some rows). Because of that, the transaction was never committed to the database (MySQL).
Result - not able to update those rows from anywhere, but able to update the other rows of the table.
mysql> update some_table set some_value = "Hello World" where id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
Solution - killed all the MySQL processes using
sudo killall -9 mysqld
sudo killall -9 mysqld_safe (mysqld_safe restarts the server when an error occurs and logs runtime information to an error log; not required in my case)
Had this same error, even though I was only updating one table with one entry, but after restarting mysql, it was resolved.