How is it possible to have deadlocks without transactions? - mysql

My code is a bit of a mess, I'm not sure where the problem is, but I'm getting deadlocks without using any transactions or table locking. Any information about this would help.
I've looked up deadlocks and it seems the only way to cause them is by using transactions.
Error Number: 1213
Deadlock found when trying to get lock; try restarting transaction
UPDATE `x__cf_request` SET `contact_success` = 1, `se_engine_id` = 0, `is_fresh` = 1 WHERE `id` = '28488'
Edit: Why downvotes? It's a valid question. If it's impossible just say why, so that other people can see when they run into this issue.

In InnoDB every statement runs inside a transaction; BEGIN and autocommit=0 are only needed for multi-statement transactions. Having said that, a deadlock always happens between different transactions.
It seems you either have no index on the id field, or more than one record has the same id. If neither is the case, then you have index-gap locking in place. To diagnose further, you need to provide the output of SHOW ENGINE INNODB STATUS
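A quick way to check both hypotheses is a sketch like the following (the table name and id value are taken from the error message above; everything else is standard MySQL):

```sql
-- Is `id` indexed, and is the index unique?
SHOW INDEX FROM x__cf_request;

-- Does a lookup on id use that index, or does it scan (and gap-lock) a range?
EXPLAIN SELECT * FROM x__cf_request WHERE id = '28488';

-- The LATEST DETECTED DEADLOCK section shows the two conflicting statements:
SHOW ENGINE INNODB STATUS\G
```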

Related

mysql deadlocking with autocommit on and no conflict

I have a piece of (Perl) code, of which I have multiple instances running at the same time, all with a different - unique - value for a variable $dsID. Nearly all of them keep falling over when they try to execute the following (prepared) SQL statement:
DELETE FROM ssRates WHERE ssID IN (SELECT id FROM snapshots WHERE dsID=?)
returning the error:
Lock wait timeout exceeded; try restarting transaction
Which sounds clear enough, except for a few things.
I have autocommit enabled, and am not using (explicit) transactions.
I'm using InnoDB which is supposed to use row-level locking.
The argument passed as $dsID is unique to each instance, so there should be no conflicting locks to cause deadlocks.
Actually, at present, there are no rows that match the inner SELECT clause (I have verified this).
Given these things, I cannot understand why I am getting lock problems -- no locks should be waiting on each other, and there is no scope for deadlocks! (Note, though, that the same script later on does insert into the ssRates table, so some instances of the code may be doing that).
Having googled around a little, this looks like it may be a "gap locking" phenomenon, but I'm not entirely sure why, and more to the point, I'm not sure what the right solution is. I have some possible workarounds -- the obvious one being to split the process up: run the SELECT first, then loop over the results issuing DELETE commands. But really, I'd like to understand this, otherwise I'm going to end up in this mess again!
So I have two questions for you friendly experts.
Is this a gap-locking thing?
If not, what is it? If yes, why? I can't see how this situation matches the gap-lock definition.
(NB, server is running MariaDB: 5.5.68-MariaDB; in case this is something fixed in newer versions).
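The split-the-process workaround mentioned in the question can be sketched in plain SQL (table and column names are the ones from the question; this is only one possible shape, and whether it avoids gap locks still depends on an index existing on snapshots.dsID and ssRates.ssID):

```sql
-- Materialize the subquery first, so the DELETE takes locks only on the
-- ssRates rows that actually match, instead of locking the subquery's range:
CREATE TEMPORARY TABLE tmp_ids AS
    SELECT id FROM snapshots WHERE dsID = ?;

DELETE ssRates FROM ssRates
    JOIN tmp_ids ON ssRates.ssID = tmp_ids.id;

DROP TEMPORARY TABLE tmp_ids;
```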

How to find all statements which a failed transaction executed in Mysql

We are encountering a lot of deadlocks, and while we found the problematic foreign key, we could not understand why exactly they happened.
I looked into the performance_schema tables to understand, but I don't think I have sufficient knowledge. Here's what I thought would help me debug the deadlocks:
Look up the transaction ID/thread ID of the two conflicting transactions (available from the output of SHOW ENGINE INNODB STATUS)
I want to see all the statements for the two transactions, after one has failed and one has succeeded. Is that even possible?
Once I have that info, I can get more clarity, and hopefully pinpoint why the deadlock happened
I was focusing on events_statements_history_long, but with the thread ID I got in step 1, I got no rows back within a minute of the deadlock.
Is this the correct approach? If not, where am I going wrong? If so, is there relevant literature out there that can give more clarity?
Look at the output of SHOW ENGINE INNODB STATUS, as you said.
The LATEST DETECTED DEADLOCK section will show you the two statements that caused the most recent deadlock in the system.
If you are encountering many different deadlocks, you may want to enable innodb_print_all_deadlocks so that all of them are written to the mysqld error log.
You will be able to see the statements, the lock types, and the tables involved in each deadlock.
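Concretely, the settings and queries involved look roughly like this (a sketch: the performance_schema consumer and table names are the standard ones, but they may be disabled by default on your build, and the thread ID is a placeholder):

```sql
-- Log every deadlock, not just the latest, to the mysqld error log:
SET GLOBAL innodb_print_all_deadlocks = ON;

-- Make sure the statement-history consumers are actually collecting rows;
-- otherwise events_statements_history_long stays empty:
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME IN ('events_statements_history', 'events_statements_history_long');

-- Then filter the history by thread ID. Note that the id shown by
-- SHOW ENGINE INNODB STATUS is a processlist id; map it to a
-- performance_schema THREAD_ID via performance_schema.threads if needed.
SELECT THREAD_ID, SQL_TEXT
  FROM performance_schema.events_statements_history_long
 WHERE THREAD_ID = 12345;  -- hypothetical thread ID
```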

Why does mysql deadlock here?

Once in a while I get a mysql error. The error is
Deadlock found when trying to get lock; try restarting transaction
The query is
var res = cn.Execute("insert ignore into
Post(desc, item_id, user, flags)
select #desc, #itemid, #userid, 0",
new { desc, itemid, userid });
How on earth can this query cause a deadlock? When googling I saw something about long-running queries locking rows and causing this problem, but no rows need to be touched for this insert.
Deadlocks are caused by inter-transaction ordering and lock acquisitions. Generally there is one active transaction per connection (although different databases may work differently). So it is only in the case of multiple connections and thus multiple overlapping transactions that deadlocks can occur. A single connection/transaction cannot deadlock itself because there is no lock it can't acquire: it has it, or it can get it.
An insert deadlock can be caused by a unique constraint - so check for a unique key constraint as a culprit. Other causes could be locks held by SELECT ... FOR UPDATE statements, etc.
Also, ensure all transactions are completed immediately (committed or rolled back) after the operation(s) that require them. If a transaction is not closed in a timely manner it can lead to such deadlock behavior trivially. While "autocommit" usually handles this, it can be changed and should not be relied upon: I recommend proper manual transaction usage.
See Mysql deadlock explanation needed and How to Cope with Deadlocks for more information. In this case, it is likely sufficient to "just try again".
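Two concrete checks/patterns that follow from this answer, sketched against the table from the question (the column values are placeholders):

```sql
-- 1. Look for unique keys on the target table; duplicate-key checks during
--    INSERT are a common source of insert deadlocks:
SHOW INDEX FROM Post WHERE Non_unique = 0;

-- 2. Keep transactions short and close them explicitly right after the work
--    (`desc` is a reserved word, hence the backticks):
START TRANSACTION;
INSERT IGNORE INTO Post (`desc`, item_id, user, flags)
VALUES ('placeholder', 1, 1, 0);
COMMIT;
```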

InnoDB deadlock with lock modes S and X

In my application, I have two queries that occur from time to time (from different processes), that cause a deadlock.
Query #1
UPDATE tblA, tblB SET tblA.varcharfield=tblB.varcharfield WHERE tblA.varcharfield IS NULL AND [a few other conditions];
Query #2
INSERT INTO tmp_tbl SELECT * FROM tblA WHERE [various conditions];
Both of these queries take a significant time, as these tables have millions of rows. When query #2 is running, it seems that tblA is locked in mode S. It seems that query #1 requires an X lock. Since this is incompatible with an S lock, query #1 waits for up to 30 seconds, at which point I get a deadlock:
Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
Based on what I've read in the documentation, I think I have a couple options:
Set an index on tblA.varcharfield. Unfortunately, I think that this would require a very large index to store the field of varchar(512). (See edit below... this didn't work.)
Disable locking with SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;. I don't understand the implications of this and am worried about corrupt data. I don't use explicit transactions in my application currently, but I might at some point in the future.
Split my time-consuming queries into small pieces so that they can queue and run in MySQL without reaching the 30-second timeout. This wouldn't really fix the heart of the issue, and I am concerned that when my database servers get busy that the problem will occur again.
Simply retrying queries over and over again... not an option I am hoping for.
How should I proceed? Are there alternate methods I should consider?
EDIT: I have tried setting an index on varcharfield, but the table is still locking. I suspect that the locking happens when the UPDATE portion is actually executing. Are there other suggestions to get around this problem?
A. If we assume that indexing varcharField takes a lot of disk space and adding a new column will not hurt you, I can suggest the following approach:
create a new field with datatype tinyint
index it
this field will store 0 if varcharField is NULL and 1 otherwise
rewrite the first query to do the update relying on the new field. In this case it will not lock the entire table.
Hope it helps.
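Under the assumption that the table is tblA and the column is varcharfield (the names used in the question; the flag column name is made up), the approach above looks roughly like this:

```sql
-- New narrow flag column: 0 when varcharfield IS NULL, 1 otherwise:
ALTER TABLE tblA
    ADD COLUMN varchar_filled TINYINT NOT NULL DEFAULT 0,
    ADD INDEX idx_varchar_filled (varchar_filled);

UPDATE tblA SET varchar_filled = 1 WHERE varcharfield IS NOT NULL;

-- Query #1 then filters on the small indexed flag instead of IS NULL:
UPDATE tblA, tblB
   SET tblA.varcharfield = tblB.varcharfield,
       tblA.varchar_filled = 1
 WHERE tblA.varchar_filled = 0 /* AND [a few other conditions] */;
```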
You can index only part of the varchar column; it will still work and will require less space. Just specify the index size:
CREATE INDEX someindex ON sometable (varcharcolumn(32))
I was able to solve the issue by adding explicit LOCK TABLES statements around both queries. This turned out to be a better solution, since each query affects so many records and both of these are background processes. They now wait on each other.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
While this is an okay solution for me, it obviously isn't the answer for everyone. A WRITE lock means that other sessions cannot READ; only a READ lock will allow others to READ.
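A sketch of what that looks like, using the table names from the question (every table a statement touches must appear in the LOCK TABLES list, and tblB only needs a READ lock since it is not modified):

```sql
LOCK TABLES tblA WRITE, tblB READ;

UPDATE tblA, tblB
   SET tblA.varcharfield = tblB.varcharfield
 WHERE tblA.varcharfield IS NULL /* AND [a few other conditions] */;

UNLOCK TABLES;
```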

Mysql with innodb and serializable transaction does not (always) lock rows

I have a transaction with a SELECT and possible INSERT. For concurrency reasons, I added FOR UPDATE to the SELECT. To prevent phantom rows, I'm using the SERIALIZABLE transaction isolation level. This all works fine when there are any rows in the table, but not if the table is empty. When the table is empty, the SELECT FOR UPDATE does not do any (exclusive) locking and a concurrent thread/process can issue the same SELECT FOR UPDATE without being locked.
CREATE TABLE t (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
display_order INT
) ENGINE = InnoDB;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT COALESCE(MAX(display_order), 0) + 1 from t FOR UPDATE;
..
This concept works as expected with SQL Server, but not with MySQL. Any ideas on what I'm doing wrong?
EDIT
Adding an index on display_order does not change the behavior.
There's something fun with this: both transactions are ready to get the real lock. As soon as one of them tries to perform an insert, the lock will be there. If both transactions try it, one will get a deadlock and roll back. If only one of them tries it, it will get a lock wait timeout.
If you detect the lock wait timeout you can roll back, and this will allow the next transaction to perform the insert.
So I think you're likely to get a deadlock exception or a timeout exception quite fast, and this should save the situation. But in terms of true 'serializable' behavior, this is effectively a bad side effect of the empty table. The engine cannot be perfect in all cases; at least no double inserts from two transactions can occur.
Yesterday I saw an interesting case of true serializability vs. engine serializability in the PostgreSQL documentation; check this example, it's funny: http://www.postgresql.org/docs/8.4/static/transaction-iso.html#MVCC-SERIALIZABILITY
Update:
Other interesting resource: Does MySQL/InnoDB implement true serializable isolation?
This is probably not a bug.
The way that the different databases implement specific transaction isolation levels is NOT 100% consistent, and there are a lot of edge-cases to consider which behave differently. InnoDB was meant to emulate Oracle, but even there, I believe there are cases where it works differently.
If your application relies on very subtle locking behaviour in specific transaction isolation modes, it is probably broken:
Even if it "works" right now, it might not if somebody changes the database schema
It is unlikely that engineers maintaining your code will understand how it's using the database if it depends upon subtleties of locking
Did you have a look at this document:
http://dev.mysql.com/doc/refman/5.1/en/innodb-locking-reads.html
If you ask me, MySQL wasn't built to be used in that way...
My recommendation is:
If you can afford it -> lock the whole table.
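Using the schema from the question, the whole-table lock looks like this; it serializes the read-max-then-insert sequence even on an empty table (note that LOCK TABLES implicitly commits any pending transaction):

```sql
LOCK TABLES t WRITE;

SELECT COALESCE(MAX(display_order), 0) + 1 INTO @next FROM t;
INSERT INTO t (display_order) VALUES (@next);

UNLOCK TABLES;
```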