Why does MySQL deadlock here?

Once in a while I get a mysql error. The error is
Deadlock found when trying to get lock; try restarting transaction
The query is
var res = cn.Execute("insert ignore into
Post(desc, item_id, user, flags)
select #desc, #itemid, #userid, 0",
new { desc, itemid, userid });
How on earth can this query cause a deadlock? When googling I saw explanations about long-running queries holding row locks, but no existing rows should need to be touched for this insert.

Deadlocks are caused by inter-transaction ordering and lock acquisitions. Generally there is one active transaction per connection (although different databases may work differently). So it is only in the case of multiple connections and thus multiple overlapping transactions that deadlocks can occur. A single connection/transaction cannot deadlock itself because there is no lock it can't acquire: it has it, or it can get it.
An insert deadlock can be caused by a unique constraint - so check for a unique key constraint as a culprit. Other causes could be locks held for select "for update" statements, etc.
Also, ensure all transactions are completed immediately (committed or rolled back) after the operation(s) that require them. If a transaction is not closed in a timely manner it can lead to such deadlock behavior trivially. While "autocommit" usually handles this, it can be changed and should not be relied upon: I recommend proper manual transaction usage.
See Mysql deadlock explanation needed and How to Cope with Deadlocks for more information. In this case, it is likely sufficient to "just try again".
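Since the server's advice is literally "try restarting transaction", the usual remedy is a small retry loop around the failing operation. Below is a minimal Python sketch; `DeadlockError` is a hypothetical stand-in for whatever exception your driver raises with MySQL error code 1213:

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for a driver error carrying MySQL code 1213 (ER_LOCK_DEADLOCK)."""

def run_with_retry(operation, max_attempts=3):
    """Re-run `operation` when it fails with a deadlock, as the
    server's own error message suggests."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            # Randomized backoff so the competing transactions don't
            # collide again in lockstep.
            time.sleep(random.uniform(0.01, 0.05) * attempt)
```

Wrap the whole transaction, not a single statement inside it: after a deadlock the server has already rolled the transaction back, so it has to be restarted from the top.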

Related

Intermittent Lock Wait Timeout Laravel DB Transaction (with 5 retries)

We've been experiencing intermittent lock timeout errors (roughly 1-2 a day out of ~250).
On checkout, we get all of the user's details, save the order, process any payments, and then update the order. I think it may be the secondary update that's causing it.
Example of our code (not exactly the same but close enough):
DB::transaction(function () use ($paymentMethod, $singleUseTokenId, $requiresPayment, $chargeAccount) {
    // create order locally
    $order = Order::create([
        'blah' => $data['blah'],
    ]);

    // handle payment
    $this->handlePayment();

    // update order with new status (with a secondary transaction for safety)
    DB::transaction(function () use ($order) {
        $order->update([
            'status' => 'new status',
        ]);
    }, 5);
}, 5); // Retry transaction 5 times - this reduced the lock timeout errors a lot
And the intermittent error we get back is (actual values removed):
SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction (SQL: insert into `orders` (`user_id`, `customer_uuid`, `type_uuid`, `status_uuid`, `po_number`, `order_details`, `cart_identifier`, `cart_content`, `cart_sub_total`, `cart_tax`, `cart_grand_total`, `payment_type_uuid`, `shipping_address`, `uuid`, `updated_at`, `created_at`)
I've read lots up on it and some people say increase timeout (seems like a workaround), optimistic locking (I thought transactions already do that), and other things.
From what I can tell from database breadcrumbs, the order create sometimes takes a long time (e.g. one at 3s, another at 23s, when it's usually a 50ms insert), and then the other steps happen and it tries to update the order, but the row is still locked from the create().
Notes:
We have 4 foreign keys on the orders table (customer uuid, user uuid, order type uuid, order status uuid) - I feel that these may be causing issues.
Some eloquent creates take 3s, others 23s (only checked issue ones). On most orders, request is 500ms max, so these are outliers.
Any suggestions?
Solution: there was no primary key on the orders uuid column. A very silly mistake. Without an explicit primary key, InnoDB falls back to an internal 6-byte clustered index, and the consecutive insert-then-update pattern locked up on it.
If you see "lock wait timeout" errors, look at the other transactions, long-running ones in particular. You can spot them in SHOW ENGINE INNODB STATUS\G, and a climbing InnoDB history list length is another sign they exist. Currently running long transactions are listed in information_schema.INNODB_TRX.
Note that once a transaction grabs an exclusive lock, the lock is held until the end of the transaction, not the end of the query.
First, rule out long running queries. For example, slow UPDATE will hold a lock for its execution time.
After all queries are made reasonably fast, review your transactions. Make them as short as possible. Quite often clients open a transaction, execute a query or two then go to third-party API calls or do other heavy lifting and keep the transaction open. Other transactions meanwhile will be getting "Lock wait timeout".
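That restructuring can be sketched as follows: the slow third-party call happens before the transaction opens, so row locks are held only for the quick write. `transaction` and `charge_payment` are hypothetical stand-ins for a real DB layer, recording events so the ordering is visible:

```python
events = []

def charge_payment():
    """Slow third-party API call (hypothetical stand-in)."""
    events.append("payment")

class transaction:
    """Minimal stand-in for a database transaction scope."""
    def __enter__(self):
        events.append("begin")
        return self
    def __exit__(self, *exc):
        events.append("commit")

def checkout():
    # Do the heavy lifting outside any transaction...
    charge_payment()
    # ...then hold locks only as long as the quick update takes.
    with transaction():
        events.append("update order")

checkout()
```

The point is the order of events: the expensive work finishes before "begin", so no other transaction waits on our locks while the payment API is slow.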

ER_LOCK_DEADLOCK called when there is no lock

Logs showing that from time to time this error is raised.
I'm reading the docs and it's very confusing because we're not locking any tables to do inserts and we have no transactions beyond individual SQL calls.
So - might this be happening because we're exhausting the MySQL connection pool in Node? (We've set it to something like 250 simultaneous connections.)
I'm trying to figure out how to replicate this but having no luck.
Every query not run within an explicit transaction runs in an implicit transaction that immediately commits when the query finishes or rolls back if an error occurs... so, yes, you're using transactions.
Deadlocks occur when at least two queries are in the process of acquiring locks, and each of them holds row-level locks that they happened to acquire in such an order that they each now need another lock that the other one holds -- so, they're "deadlocked." An infinite wait condition exists between the running queries. The server notices this.
The error is not so much a fault as it is the server saying, "I see what you did, there... and, you're welcome, I cleaned it up for you because otherwise, you would have waited forever."
What you aren't seeing is that there are two guilty parties -- two different queries that caused the problem -- but only one of them is punished. The query that has accomplished the least amount of work (admittedly, this concept is nebulous) will be killed with the deadlock error, and the other query happily proceeds along its path, having no idea that it was the lucky survivor.
This is why the deadlock error message ends with "try restarting transaction" -- which, if you aren't explicitly using transactions, just means "run your query again."
See https://dev.mysql.com/doc/refman/5.6/en/innodb-deadlocks.html and examine the output of SHOW ENGINE INNODB STATUS;, which will show you the other query -- the one that helped cause the deadlock but that was not killed -- as well as the one that was.
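If you want to capture that section programmatically, a rough Python helper can slice the LATEST DETECTED DEADLOCK block out of the status text. The section headers are the ones InnoDB actually prints; the parsing itself is only a sketch and assumes TRANSACTIONS is the next section in the output:

```python
def latest_deadlock(status_text):
    """Return the LATEST DETECTED DEADLOCK section from
    SHOW ENGINE INNODB STATUS output, or None if there is none."""
    marker = "LATEST DETECTED DEADLOCK"
    start = status_text.find(marker)
    if start == -1:
        return None  # no deadlock since the last server restart
    # The next major section in the status output is TRANSACTIONS.
    end = status_text.find("TRANSACTIONS", start)
    return status_text[start:end if end != -1 else len(status_text)].strip()
```

The extracted section shows both transactions involved, their queries, and which one was rolled back.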

How is it possible to have deadlocks without transactions?

My code is a bit of a mess, I'm not sure where the problem is, but I'm getting deadlocks without using any transactions or table locking. Any information about this would help.
I've looked up deadlocks and it seems the only way to cause them is by using transactions.
Error Number: 1213
Deadlock found when trying to get lock; try restarting transaction
UPDATE `x__cf_request` SET `contact_success` = 1, `se_engine_id` = 0, `is_fresh` = 1 WHERE `id` = '28488'
Edit: Why downvotes? It's a valid question. If it's impossible just say why, so that other people can see when they run into this issue.
In InnoDB every statement runs in a transaction; BEGIN (or autocommit=0) is only needed for multi-statement transactions. That said, a deadlock always happens between different transactions.
It seems you either don't have an index on the id field, or more than one record shares the same id. If neither is the case, then index-gap locking is involved. To diagnose further, you need to provide the output of SHOW ENGINE InnoDB STATUS.

How can I troubleshoot MySQL Lock Timeout Errors with Rails?

All of a sudden (without any changes to related code) we are getting lock errors through active record such as:
ActiveRecord::StatementInvalid: Mysql2::Error: Lock wait timeout exceeded;
try restarting transaction: UPDATE `items` SET `state` = 'reserved', `updated_at` = '2012-09-15 17:58:21' WHERE `items`.`id` = 248220
and
ActiveRecord::StatementInvalid: Mysql2::Error: Lock wait timeout exceeded;
try restarting transaction: DELETE FROM `sessions` WHERE `sessions`.`id` = 41997883
We aren't doing our own transactions in either of these models, so the only transactions are the built in rails ones. There has not been a surge in traffic or request volume.
These errors appear to be when a "new" query tries to run on a locked table and has to wait, how do we see what it's waiting for? How do we figure out which part of our code is issuing queries that lock the tables for extended periods of time?
Any ideas on where we can look or how to investigate the cause of this?
Take a look at pt-deadlock-logger, while not directly related to rails, should give you a considerable amount of information about the deadlocks occurring.
http://www.percona.com/doc/percona-toolkit/2.1/pt-deadlock-logger.html
There is a nice writeup with some examples:
http://www.mysqlperformanceblog.com/2012/09/19/logging-deadlocks-errors/
The tool is very simple and useful. It monitors the output of SHOW ENGINE INNODB STATUS and logs new deadlocks to a file or a table that we can later review. Let's see how it works with an example.
The article goes on to explain that this can log information about the deadlock such as queries involved, which hosts, thread ids, etc.
I've also found it helpful to prefix queries with comments to allow tracking, such as the file or module, function, even which user. The query comments usually get passed down all the way to diagnostic tools like this, and could help track down which parts of code and in which circumstances are causing deadlocks.
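As a sketch of that tagging idea, a tiny helper can prepend the comment; the `/* module.function */` format here is just a convention I'm assuming, not anything MySQL requires:

```python
def tag_query(sql, module, function):
    """Prefix a SQL statement with a comment so diagnostic tools
    (SHOW ENGINE INNODB STATUS, pt-deadlock-logger) reveal its origin."""
    return f"/* {module}.{function} */ {sql}"
```

Tagged queries then appear in deadlock output with their comment intact, pointing straight at the code path involved.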

how to avoid deadlock in mysql

I have the following query (all tables are innoDB)
INSERT INTO busy_machines(machine)
SELECT machine FROM all_machines
WHERE machine NOT IN (SELECT machine FROM busy_machines)
and machine_name!='Main'
LIMIT 1
Which causes a deadlock when I run it in threads, obviously because of the inner select, right?
The error I get is:
(1213, 'Deadlock found when trying to get lock; try restarting transaction')
How can I avoid the deadlock? Is there a way to change to query to make it work, or do I need to do something else?
The error doesn't happen always, of course, only after running this query lots of times and in several threads.
To my understanding, a select does not acquire lock and should not be the cause of the deadlock.
Each time you insert, update, or delete a row, a lock is acquired. To avoid deadlocks, you must make sure that concurrent transactions don't update rows in an order that can deadlock. Generally speaking, always acquire locks in the same order, even across different transactions (e.g. always table A first, then table B).
But if within one transaction you insert into only one table, this condition is met, and that alone should not normally lead to a deadlock. Are you doing something else in the transaction?
A deadlock can, however, happen if indexes are missing. When a row is inserted, updated, or deleted, the database needs to check the relational constraints, that is, make sure the relations stay consistent. To do so, it checks the foreign keys in the related tables, which may mean acquiring locks on rows other than the one being modified. So always have an index on your foreign keys (and of course on primary keys); otherwise the check can take a table lock instead of a row lock. With table locks, lock contention is higher and the likelihood of deadlock increases.
Not sure what happens exactly in your case, but maybe it helps.
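The fixed-lock-order rule is the same one that prevents deadlocks between plain threads. A small Python illustration, with two locks standing in for two tables:

```python
import threading

lock_a = threading.Lock()  # stands in for "table A"
lock_b = threading.Lock()  # stands in for "table B"

def transfer():
    # Every code path takes lock_a before lock_b. With a single
    # global order there can be no cycle of waits, hence no deadlock.
    with lock_a:
        with lock_b:
            return "done"
```

If one thread took lock_a then lock_b while another took lock_b then lock_a, each could end up holding the lock the other needs; that is exactly the situation InnoDB detects and breaks with error 1213.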
You will probably get better performance if you replace your "NOT IN" with an outer join.
You can also split this into two statements, to avoid selecting from and inserting into the same table in a single query.
Something like this:
SELECT a.machine
INTO @machine
FROM all_machines a
LEFT OUTER JOIN busy_machines b ON b.machine = a.machine
WHERE a.machine_name != 'Main'
  AND b.machine IS NULL
LIMIT 1;

INSERT INTO busy_machines(machine)
VALUES (@machine);