When does InnoDB time out instead of reporting deadlock? - mysql

I have a "Lock wait timeout exceeded" error from MySQL that I can't reproduce or diagnose. I'm sure it's deadlock (as opposed to a transaction grabbing a lock then twiddling its thumbs), because my logs show that another process started at the same time, also hung, then continued when the first timed out. But normally, InnoDB detects deadlocks without timing out. So I am trying to understand why this deadlock was not detected.
Both transactions are using isolation level serializable. (I have a fair understanding of InnoDB locking in this isolation level.) There is one non-InnoDB (MyISAM) table used in the transaction, which I insert into and update. However, I don't understand how it could be involved in the deadlock, because I believe MyISAM just takes a table lock during the inserts and updates (then immediately releases it since MyISAM is not transactional), so no other lock is taken while this table lock is held.
So I'm convinced that the deadlock involves only InnoDB tables, which brings me back to the question of why it was not detected. The MySQL documentation (http://dev.mysql.com/doc/refman/5.1/en/innodb-deadlock-detection.html) implies that deadlock detection pretty much always works. The problem cases I found while searching involve things like explicit "lock table", "alter table", and "insert delayed". I'm not doing any of these things, just inserts, updates, and selects (some of my selects are "for update").
I tried to reproduce by creating one MyISAM table and a couple of InnoDB tables, then running various sequences of MyISAM inserts and updates and InnoDB "select for update"s. But every time I produced a deadlock, InnoDB reported it immediately; I could not reproduce a timeout.
Any other tips for diagnosing this? I am using mysql 5.1.49.

One tip is that you can use SHOW INNODB STATUS to, you guessed it, show the status of the InnoDB engine.
The information it returns (a big hunk of text) includes info on current table locks and the last detected deadlock (under the heading "LATEST DETECTED DEADLOCK"). Only the most recent deadlock is kept, so this trick isn't that useful well after the fact, but it can help you track down a hung query while it's happening.
mysqladmin debug can also print useful lock-debugging information.
A third trick is to create a magically-named table called innodb_lock_monitor as described at http://dev.mysql.com/doc/refman/5.1/en/innodb-monitors.html which gives more detailed lock debugging.
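For reference, a rough sketch of the first and third tricks on a 5.1 server (per that page, the monitor table's column definition is arbitrary; only the magic table name matters):
SHOW INNODB STATUS; -- dumps lock info, including the "LATEST DETECTED DEADLOCK" section
CREATE TABLE innodb_lock_monitor (a INT) ENGINE=INNODB; -- turns on periodic, detailed lock output in the error log
-- ... reproduce the hang, then read the server error log ...
DROP TABLE innodb_lock_monitor; -- turns the monitor off again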
HTH!
UPDATE:
It may not be detecting a deadlock because it isn't actually a deadlock; more likely, one process is waiting for a row lock on a row that is locked by another process. From the manual, for the innodb_lock_wait_timeout variable:
The timeout in seconds an InnoDB transaction may wait for a row lock before giving up. The default value is 50 seconds. A transaction that tries to access a row that is locked by another InnoDB transaction will hang for at most this many seconds before issuing the following error:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
When a lock wait timeout occurs, the current statement is not executed. The current transaction is not rolled back. (Until MySQL 5.0.13, InnoDB rolled back the entire transaction if a lock wait timeout happened.)
A deadlock occurs, for example, when two processes each need to lock rows that are locked by the other process, and no amount of waiting will resolve the conflict.
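For example, a minimal deadlock between two sessions (the table t and the ids here are hypothetical):
-- Session 1:
START TRANSACTION;
SELECT * FROM t WHERE id = 1 FOR UPDATE; -- locks row 1
-- Session 2:
START TRANSACTION;
SELECT * FROM t WHERE id = 2 FOR UPDATE; -- locks row 2
-- Session 1:
SELECT * FROM t WHERE id = 2 FOR UPDATE; -- blocks, waiting for session 2
-- Session 2:
SELECT * FROM t WHERE id = 1 FOR UPDATE; -- would wait for session 1: a cycle;
-- InnoDB detects it and rolls one transaction back with ERROR 1213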

I managed to reproduce and diagnose the problem. It is a deadlock involving MyISAM and InnoDB. It appears to be an interaction between transactional InnoDB row locking and non-transactional MyISAM table locking. I've filed a bug: http://bugs.mysql.com/bug.php?id=57118. At any rate, I believe the answer to my original question is, InnoDB should always detect deadlocks, unless there is a bug in MySQL. ;-)

Related

Default Concurrency Control Implementation in MySQL

What is the default implementation of concurrency control in MySQL? Is it optimistic locking (multi version concurrency control), or pessimistic locking (2 phase locking)? More specifically, how does InnoDb do it?
Internally, how does mysql (with innodb) decide on the start of a transaction whether to lock the row, or rollback after a conflict?
InnoDB uses optimistic locking.
There is no locking at the start of a transaction. How would it know which rows to lock until you execute a specific query? It doesn't even know which table(s) you will eventually need to lock rows in.
There is no need for a rollback after a lock conflict. If you do a query in one transaction that has to wait because another session holds the lock, then your query waits up to a certain number of seconds (per the config option innodb_lock_wait_timeout, default 50 seconds).
If the other session commits before the timeout, then your session stops waiting, acquires the locks it needs, and proceeds with the query.
If your wait times out before the other session commits, your query returns an error. This still does NOT roll back your transaction; changes you made earlier in the transaction can still be committed. You can even retry the query that timed out.
Exception: in the case of deadlock, InnoDB chooses one of the transactions involved and forcibly rolls it back. It tries to choose the transaction that has modified fewer rows; if the transactions are tied, the choice is arbitrary.
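A sketch of the non-deadlock case, using a hypothetical table t (SET SESSION works on servers where innodb_lock_wait_timeout is dynamic, i.e. the InnoDB Plugin and 5.5+; on older builds it is a startup option):
-- Session A:
START TRANSACTION;
UPDATE t SET x = x + 1 WHERE id = 1; -- holds the row lock; no COMMIT yet
-- Session B:
SET SESSION innodb_lock_wait_timeout = 5; -- shorten the wait for the demo
START TRANSACTION;
UPDATE t SET x = x + 1 WHERE id = 1; -- waits 5 seconds, then ERROR 1205
-- Session B's transaction is still open: earlier changes are intact and
-- the timed-out statement can simply be retried.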

ER_LOCK_DEADLOCK called when there is no lock

Logs show that from time to time this error is raised.
I'm reading the docs and it's very confusing because we're not locking any tables to do inserts and we have no transactions beyond individual SQL calls.
So, might this be happening because we're exhausting the MySQL connection pool in Node? (We've set it to something like 250 simultaneous connections.)
I'm trying to figure out how to replicate this but having no luck.
Every query not run within an explicit transaction runs in an implicit transaction that immediately commits when the query finishes or rolls back if an error occurs... so, yes, you're using transactions.
Deadlocks occur when at least two queries are in the process of acquiring locks, and each of them holds row-level locks that they happened to acquire in such an order that they each now need another lock that the other one holds -- so, they're "deadlocked." An infinite wait condition exists between the running queries. The server notices this.
The error is not so much a fault as it is the server saying, "I see what you did, there... and, you're welcome, I cleaned it up for you because otherwise, you would have waited forever."
What you aren't seeing is that there are two guilty parties -- two different queries that caused the problem -- but only one of them is punished. The query that has accomplished the least amount of work (admittedly, this concept is nebulous) will be killed with the deadlock error, and the other query happily proceeds along its path, having no idea that it was the lucky survivor.
This is why the deadlock error message ends with "try restarting transaction" -- which, if you aren't explicitly using transactions, just means "run your query again."
See https://dev.mysql.com/doc/refman/5.6/en/innodb-deadlocks.html and examine the output of SHOW ENGINE INNODB STATUS;, which will show you the other query -- the one that helped cause the deadlock but that was not killed -- as well as the one that was.

MySQL "LOCK TABLES" timeout?

What's the timeout for mysql LOCK TABLES statement?
Can't find it anywhere.
I tried to set the variable innodb_lock_wait_timeout in my.cnf, but it seems to be related to row-level locking, not table locking.
It simply has no effect on LOCK TABLES.
I want to set some low timeout value for the deadlock case, because if some operation locks tables and something goes wrong, it hangs up the whole site!
Which is stupid, for example, in the case of a customer finishing a purchase on your site.
My work-around is to create a dedicated lock table and just lock a row in that table. This has the advantage of only locking the processes that specifically want to be locked. Other parts of the application can continue to access the tables even if they are at some point touched by the update processes.
Setup
-- A single-row table: the ENUM('') primary key admits exactly one value,
-- so at most one row can ever exist. Holding that row is the mutex.
CREATE TABLE `mutex` (
EMPTY ENUM('') NOT NULL,
PRIMARY KEY (EMPTY)
);
Usage
set innodb_lock_wait_timeout = 1; -- fail fast if the mutex is already held
start transaction;
insert into `mutex` values(); -- blocks if another transaction holds the row
[... do the real work here ... or somewhere else ... even a different machine ...]
delete from `mutex`; -- release the row so the next taker's insert succeeds
commit;
Why are you using LOCK TABLES?
If you are using MyISAM (which sometimes needs LOCK TABLES), you should convert to InnoDB.
If you are using InnoDB, you should never use LOCK TABLES. Instead, depend on innodb_lock_wait_timeout (default is an unreasonably high 50 seconds). And you should check for errors.
InnoDB Deadlocks are caught and immediately cause an error. Certain non-deadlocks may wait for innodb_lock_wait_timeout.
Edit
Since the transaction looks like
BEGIN;
SELECT ...;
-- compute some stuff
UPDATE ... (using that stuff);
COMMIT;
You need to add FOR UPDATE on the end of the SELECT.
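That is, keeping the same placeholders:
BEGIN;
SELECT ... FOR UPDATE; -- now acquires the row locks up front
-- compute some stuff
UPDATE ... (using that stuff);
COMMIT;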
I think you are after the table_lock_wait_timeout variable, which was introduced in MySQL 5.0.10 but subsequently removed in 5.5. Unfortunately, the release notes don't specify an alternative to use, and I'm guessing that the general attitude is to switch over to using InnoDB transactions, as @Rick James has stated in his answer.
I think that removing the variable was unhelpful. Others may regard this as a case of the XY Problem, where we are trying to fix a symptom (deadlocks) by changing the timeout period of locking tables when really we should resolve the root cause by switching over to transactions instead. I think there may still be cases where table locks are more suitable to the application than using transactions and are perhaps a lot easier to comprehend, even if they are worse performing.
The nice thing about using LOCK TABLES is that you can state the tables your queries depend upon before proceeding. With transactions, the locks are grabbed at the last possible moment, and if they can't be fetched and time out, you then need to check for this failure and roll back before trying everything all over again. It's simpler to have a 1-second timeout (the minimum) on the LOCK TABLES query and keep retrying until you acquire the lock(s), then proceed with your queries before unlocking the tables. This logic carries no risk of deadlocks.
I believe the developers' attitude is summed up by the following excerpt from the documentation:
...avoid using the LOCK TABLES statement, because it does not offer any extra protection, but instead reduces concurrency.
The correct answer is the lock_wait_timeout system variable.
From the documentation:
This variable specifies the timeout in seconds for attempts to acquire metadata locks. The permissible values range from 1 to 31536000 (1 year). The default is 31536000.
This timeout applies to all statements that use metadata locks. These include DML and DDL operations on tables, views, stored procedures, and stored functions, as well as LOCK TABLES, FLUSH TABLES WITH READ LOCK, and HANDLER statements.
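A sketch of using it (MySQL 5.5 or later, where lock_wait_timeout exists; some_table is a placeholder):
SET SESSION lock_wait_timeout = 1; -- seconds; governs metadata locks, including LOCK TABLES
LOCK TABLES some_table WRITE; -- fails with ERROR 1205 after about a second instead of hanging the site
-- ... do the work ...
UNLOCK TABLES;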
I think you meant to ask about the default timeout value, which is 50 seconds. Per the MySQL documentation:
innodb_lock_wait_timeout (default: 50): The timeout in seconds an InnoDB transaction may wait for a row lock before giving up. The default value is 50 seconds.

MySQL Lock wait timeout exceeded

I have got the error Lock wait timeout exceeded; try restarting transaction. What are the reasons for this, and how can I solve the problem? FYI: innodb_lock_wait_timeout = 100 in the MySQL config file.
This is a problem of lock contention, which ultimately results in a timeout on one of the locks. Here are a few suggestions:
Make sure you have the correct indexes, so that queries take row-level locks rather than table-level locks. This will reduce the contention.
Make sure you have indexes on the foreign key constraints (see the sketch after this list). To check the relational constraints during an insert or update, some databases lock the whole referenced table if there is no such index (I don't know if this is the case for MySQL).
If the problem is still there, try to make the transactions faster/smaller. Again, this will reduce contention on the database.
Increase the timeout, but keep the value reasonable.
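For the second suggestion, a hypothetical parent/child pair where child.parent_id references parent.id (InnoDB itself requires an index on the referencing column and creates one automatically if needed, but it is worth verifying):
ALTER TABLE child ADD INDEX idx_parent_id (parent_id); -- index the FK column
SHOW INDEX FROM child; -- confirm the index is present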
Is this happening on a high-traffic system where transactions take a long time (i.e. tables are locked for a long time)? If so, you might want to look into your transaction code to make the transactions shorter / more granular / more performant.

MySQL: "lock wait timeout exceeded"

I am trying to delete several rows from a MySQL 5.0.45 database:
delete from bundle_inclusions;
The client works for a while and then returns the error:
Lock wait timeout exceeded; try restarting transaction
It's possible there is some uncommitted transaction out there that has a lock on this table, but I need this process to trump any such locks. How do I break the lock in MySQL?
I agree with Erik; TRUNCATE TABLE is the way to go. However, if you can't use that for some reason (for example, if you don't really want to delete every row in the table), you can try the following options:
Delete the rows in smaller batches (e.g. DELETE FROM bundle_inclusions WHERE id BETWEEN ? AND ?); see the sketch after this list.
If it's a MyISAM table (actually, this may work with InnoDB too), try issuing a LOCK TABLE before the DELETE. This should guarantee that you have exclusive access.
If it's an InnoDB table, then after the timeout occurs, use SHOW INNODB STATUS. This should give you some insight into why the lock acquisition failed.
If you have the SUPER privilege you could try SHOW FULL PROCESSLIST to see what other connections (if any) are using the table, and then use KILL to get rid of the one(s) you're competing with.
I'm sure there are many other possibilities; I hope one of these help.
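For the first option, a sketch of a batched delete (assumes an integer primary key id; the ranges are arbitrary):
DELETE FROM bundle_inclusions WHERE id BETWEEN 1 AND 10000;
DELETE FROM bundle_inclusions WHERE id BETWEEN 10001 AND 20000;
-- ... and so on: with autocommit on, each statement commits on its own,
-- so locks are held only briefly and a timeout costs only one batch.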
Linux: In mysql configuration (/etc/my.cnf or /etc/mysql/my.cnf), insert / edit this line
innodb_lock_wait_timeout = 50
Increase the value sufficiently (it is in seconds), restart the database, and perform your changes. Then revert the change and restart again.
I had the same issue: a rogue transaction without an end. I restarted the mysqld process; you don't need to truncate a table. You may lose the data from that rogue transaction.
Guessing: truncate table bundle_inclusions