All of a sudden (without any changes to the related code) we are getting lock errors through Active Record, such as:
ActiveRecord::StatementInvalid: Mysql2::Error: Lock wait timeout exceeded;
try restarting transaction: UPDATE `items` SET `state` = 'reserved', `updated_at` = '2012-09-15 17:58:21' WHERE `items`.`id` = 248220
and
ActiveRecord::StatementInvalid: Mysql2::Error: Lock wait timeout exceeded;
try restarting transaction: DELETE FROM `sessions` WHERE `sessions`.`id` = 41997883
We aren't doing our own transactions in either of these models, so the only transactions are the built-in Rails ones. There has not been a surge in traffic or request volume.
These errors appear to occur when a "new" query tries to run against a locked table and has to wait. How do we see what it is waiting for? How do we figure out which part of our code is issuing queries that lock tables for extended periods of time?
Any ideas on where we can look or how to investigate the cause of this?
Take a look at pt-deadlock-logger; while not directly related to Rails, it should give you a considerable amount of information about the deadlocks that are occurring.
http://www.percona.com/doc/percona-toolkit/2.1/pt-deadlock-logger.html
There is a nice writeup with some examples:
http://www.mysqlperformanceblog.com/2012/09/19/logging-deadlocks-errors/
The tool is very simple and useful. It monitors the output of SHOW ENGINE INNODB STATUS and logs new deadlocks to a file or to a table that you can review later.
The article goes on to explain that this can log information about the deadlock such as queries involved, which hosts, thread ids, etc.
I've also found it helpful to prefix queries with comments for tracking purposes, recording things such as the file or module, the function, or even which user triggered the query. Query comments are usually passed all the way down to diagnostic tools like this one, and can help track down which parts of the code, and under which circumstances, are causing the deadlocks.
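For example, something along these lines (the tag names and values here are made up, just to show the idea; the table and columns are from the error above):

UPDATE /* module:ReservationJob action:reserve user:1234 */ items
SET state = 'reserved', updated_at = NOW()
WHERE id = 248220;

The comment travels with the statement, so it shows up in SHOW FULL PROCESSLIST, the slow query log, and the deadlock output, telling you which code path issued the query.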
We are encountering a lot of deadlocks, and while we found the problematic foreign key, we could not understand why exactly the deadlocks happened.
I looked into the performance_schema tables to understand, but I don't think I have sufficient knowledge of them. Here's what I thought would help me debug a deadlock:
Look up the transaction ID / thread ID of the two conflicting transactions (available from the output of SHOW ENGINE INNODB STATUS).
I want to see all the statements for the two transactions, after one has failed and one has succeeded. Is that even possible?
Once I have that info, I can get more clarity and hopefully pinpoint why the deadlock happened.
I was focusing on events_statements_history_long, but with the thread ID I got in step 1, I got no rows back within a minute of the deadlock.
Is this the correct approach? If not, where am I going wrong? If it is, is there relevant literature out there that can give more clarity?
Look at the output of SHOW ENGINE INNODB STATUS, as you said.
The LATEST DETECTED DEADLOCK section will show you the two statements that caused the latest deadlock in the system.
If you are encountering many different deadlocks, you may want to enable innodb_print_all_deadlocks to view them in the mysqld error log.
You will be able to see the statements, the lock types, and the tables involved in each deadlock.
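A rough sketch of both of those, plus the performance_schema lookup from the question (the ids below are hypothetical; note that the "MySQL thread id" printed by SHOW ENGINE INNODB STATUS is a connection id and has to be mapped to a performance_schema thread_id first, which may be why the earlier lookup returned no rows):

-- log every deadlock to the mysqld error log, not just the latest one
SET GLOBAL innodb_print_all_deadlocks = ON;

-- map the connection id from the InnoDB status output to a performance_schema thread_id
SELECT thread_id FROM performance_schema.threads WHERE processlist_id = 12345;

-- then pull that thread's recent statements
-- (the events_statements_history_long consumer must be enabled in performance_schema.setup_consumers)
SELECT event_id, sql_text
FROM performance_schema.events_statements_history_long
WHERE thread_id = 67890
ORDER BY event_id;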
Our logs show that from time to time this error is raised.
I'm reading the docs and it's very confusing because we're not locking any tables to do inserts and we have no transactions beyond individual SQL calls.
So, might this be happening because we're exhausting the MySQL connection pool in Node? (We've set it to something like 250 simultaneous connections.)
I'm trying to figure out how to replicate this but having no luck.
Every query not run within an explicit transaction runs in an implicit transaction that immediately commits when the query finishes or rolls back if an error occurs... so, yes, you're using transactions.
Deadlocks occur when at least two queries are in the process of acquiring locks, and each of them holds row-level locks that they happened to acquire in such an order that they each now need another lock that the other one holds -- so, they're "deadlocked." An infinite wait condition exists between the running queries. The server notices this.
The error is not so much a fault as it is the server saying, "I see what you did, there... and, you're welcome, I cleaned it up for you because otherwise, you would have waited forever."
What you aren't seeing is that there are two guilty parties -- two different queries that caused the problem -- but only one of them is punished. The query that has accomplished the least amount of work (admittedly, this concept is nebulous) will be killed with the deadlock error, and the other query happily proceeds along its path, having no idea that it was the lucky survivor.
This is why the deadlock error message ends with "try restarting transaction" -- which, if you aren't explicitly using transactions, just means "run your query again."
See https://dev.mysql.com/doc/refman/5.6/en/innodb-deadlocks.html and examine the output of SHOW ENGINE INNODB STATUS;, which will show you the other query -- the one that helped cause the deadlock but that was not killed -- as well as the one that was.
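To make that concrete, here is a minimal sketch of how two connections end up deadlocked (the table and values are invented):

-- connection 1
START TRANSACTION;
UPDATE accounts SET balance = 0 WHERE id = 1;   -- locks row 1

-- connection 2
START TRANSACTION;
UPDATE accounts SET balance = 0 WHERE id = 2;   -- locks row 2

-- connection 1
UPDATE accounts SET balance = 0 WHERE id = 2;   -- blocks, waiting for connection 2

-- connection 2
UPDATE accounts SET balance = 0 WHERE id = 1;   -- would wait for connection 1: a cycle

InnoDB detects the cycle, kills one of the two statements with ERROR 1213 ("Deadlock found when trying to get lock"), and the surviving transaction carries on.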
I am running the following update:
update table_x set name = 'xyz' where id = 121;
and getting:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
I have googled it a number of times, and adding extra time to innodb_lock_wait_timeout is not helping.
Please let me know the root cause of this issue and how I can solve it.
I am using MySQL 5.6 (master-master replication) on a dedicated server.
Also, table_x (an InnoDB table) is heavily used in the database. Autocommit is on.
Find out what other statement is running at the same time as this UPDATE. It sounds as if it is running a long time and hanging onto the rows that this UPDATE needs. Meanwhile this statement is waiting.
One way to see it is to do SHOW FULL PROCESSLIST; while the UPDATE is hung.
(In my opinion, the default of 50 seconds for innodb_lock_wait_timeout is much too high. Raising the value only aggravates the situation.)
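On MySQL 5.6 you can also ask the INFORMATION_SCHEMA InnoDB tables directly who is blocking whom while the UPDATE is stuck; roughly:

SELECT r.trx_mysql_thread_id AS waiting_connection,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_connection,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;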
If you give up on fixing the 'root cause' of the conflict, then you might tackle the issue a different way.
Lower innodb_lock_wait_timeout to, say, 5.
Programmatically catch the error when it times out and restart the UPDATE.
Do likewise for all other transactions. Other queries may also be piling up; restarting some may "uncork" the problem.
SHOW VARIABLES LIKE 'tx_isolation'; -- There may be a better setting for it, especially if a long-running SELECT is the villain.
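A sketch of those knobs (pick values that suit your workload):

-- fail fast instead of queueing up behind a long lock wait
SET GLOBAL innodb_lock_wait_timeout = 5;    -- new connections
SET SESSION innodb_lock_wait_timeout = 5;   -- the current connection

-- one possible alternative isolation level, if it fits your application;
-- READ-COMMITTED takes fewer gap locks than REPEATABLE-READ
SET SESSION tx_isolation = 'READ-COMMITTED';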
It looks like some other transaction of yours is holding a lock. You can check the status of InnoDB by using this:
SHOW ENGINE INNODB STATUS\G
Check whether any tables are currently locked like this:
show open tables where in_use>0;
And then kill the processes that are holding the locks.
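The sequence looks roughly like this (the id is hypothetical; take it from the processlist output):

SHOW OPEN TABLES WHERE In_use > 0;   -- which tables currently have locks
SHOW FULL PROCESSLIST;               -- which connection is holding them
KILL 12345;                          -- terminate the offending connection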
I have solved the problem. I tried different values for innodb_lock_wait_timeout and also tried changing the queries, but got the same error. I did some research and asked my colleagues about Hibernate.
They were running a number of transactions that updated the main table and only committed at the very end. I suggested that they commit each transaction individually, and I am no longer getting any lock wait timeout errors.
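In other words, something along these lines, so that row locks on the busy table are released after each unit of work instead of at the end of one long batch (a sketch, not their actual Hibernate code):

START TRANSACTION;
UPDATE table_x SET name = 'xyz' WHERE id = 121;
COMMIT;   -- locks on this row are released here, before the next unit of work starts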
Once in a while I get a MySQL error. The error is:
Deadlock found when trying to get lock; try restarting transaction
The query is:
var res = cn.Execute(@"insert ignore into
    Post (desc, item_id, user, flags)
    select @desc, @itemid, @userid, 0",
    new { desc, itemid, userid });
How on earth can this query cause it? When googling, I saw something about how long-running queries lock rows and cause this problem, but no existing rows need to be touched for this insert.
Deadlocks are caused by inter-transaction ordering and lock acquisitions. Generally there is one active transaction per connection (although different databases may work differently). So it is only in the case of multiple connections and thus multiple overlapping transactions that deadlocks can occur. A single connection/transaction cannot deadlock itself because there is no lock it can't acquire: it has it, or it can get it.
An insert deadlock can be caused by a unique constraint - so check for a unique key constraint as a culprit. Other causes could be locks held for select "for update" statements, etc.
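If you suspect that, listing the unique indexes on the target table is a quick first check (Post is the table from the question):

SHOW INDEX FROM Post WHERE Non_unique = 0;   -- primary key plus any unique constraints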
Also, ensure all transactions are completed immediately (committed or rolled back) after the operation(s) that require them. If a transaction is not closed in a timely manner it can lead to such deadlock behavior trivially. While "autocommit" usually handles this, it can be changed and should not be relied upon: I recommend proper manual transaction usage.
See Mysql deadlock explanation needed and How to Cope with Deadlocks for more information. In this case, it is likely sufficient to "just try again".
I have a "Lock wait timeout exceeded" error from MySQL that I can't reproduce or diagnose. I'm sure it's deadlock (as opposed to a transaction grabbing a lock then twiddling its thumbs), because my logs show that another process started at the same time, also hung, then continued when the first timed out. But normally, InnoDB detects deadlocks without timing out. So I am trying to understand why this deadlock was not detected.
Both transactions are using isolation level serializable. (I have a fair understanding of InnoDB locking in this isolation level.) There is one non-InnoDB (MyISAM) table used in the transaction, which I insert into and update. However, I don't understand how it could be involved in the deadlock, because I believe MyISAM just takes a table lock during the inserts and updates (then immediately releases it since MyISAM is not transactional), so no other lock is taken while this table lock is held.
So I'm convinced that the deadlock involves only InnoDB tables, which brings me back to the question of why it was not detected. The MySQL documentation (http://dev.mysql.com/doc/refman/5.1/en/innodb-deadlock-detection.html) implies that deadlock detection pretty much always works. The problem cases I found while searching involve things like explicit "lock table", "alter table", and "insert delayed". I'm not doing any of these things, just inserts, updates, and selects (some of my selects are "for update").
I tried to reproduce by creating one MyISAM table and a couple InnoDB tables and doing various sequences of insert and update into MyISAM, and "select for update"s in InnoDB. But every time I produced a deadlock, InnoDB reported it immediately. I could not reproduce a timeout.
Any other tips for diagnosing this? I am using mysql 5.1.49.
One tip is that you can use SHOW INNODB STATUS to, you guessed it, show the status of the InnoDB engine.
The information it returns (a big hunk of text) includes info on current table locks, and the last detected deadlock (under the heading "LATEST DETECTED DEADLOCK"), so this trick isn't that useful well after the fact, but it can help you track down a hung query while it's happening.
mysqladmin debug can also print useful lock-debugging information.
A third trick is to create a magically-named table called innodb_lock_monitor as described at http://dev.mysql.com/doc/refman/5.1/en/innodb-monitors.html which gives more detailed lock debugging.
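For the MySQL 5.1 series used here, that looks roughly like this; the monitor periodically writes extended lock information to the error log until the table is dropped:

CREATE TABLE innodb_lock_monitor (a INT) ENGINE=INNODB;   -- enable the lock monitor
-- reproduce the problem, then read the error log / SHOW ENGINE INNODB STATUS
DROP TABLE innodb_lock_monitor;                           -- turn it off again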
HTH!
UPDATE:
It may not be detecting a deadlock because it isn't actually a deadlock; more likely, one process is waiting for a row lock on a row that is locked by another process. From the manual for the innodb_lock_wait_timeout variable:
The timeout in seconds an InnoDB transaction may wait for a row lock before giving up. The default value is 50 seconds. A transaction that tries to access a row that is locked by another InnoDB transaction will hang for at most this many seconds before issuing the following error:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
When a lock wait timeout occurs, the current statement is not executed. The current transaction is not rolled back. (Until MySQL 5.0.13, InnoDB rolled back the entire transaction if a lock wait timeout happened.)
A deadlock occurs, for example, when two processes each need to lock rows that are locked by the other process, and no amount of waiting will resolve the conflict.
I managed to reproduce and diagnose the problem. It is a deadlock involving MyISAM and InnoDB. It appears to be an interaction between transactional InnoDB row locking and non-transactional MyISAM table locking. I've filed a bug: http://bugs.mysql.com/bug.php?id=57118. At any rate, I believe the answer to my original question is, InnoDB should always detect deadlocks, unless there is a bug in MySQL. ;-)