ER_LOCK_DEADLOCK raised when there is no lock - MySQL

Our logs show that this error is raised from time to time.
I'm reading the docs and it's very confusing because we're not locking any tables to do inserts and we have no transactions beyond individual SQL calls.
So -- might this be happening because we're exhausting the MySQL connection pool in Node? (We've set it to something like 250 simultaneous connections.)
I'm trying to figure out how to replicate this but having no luck.

Every query not run within an explicit transaction runs in an implicit transaction that immediately commits when the query finishes or rolls back if an error occurs... so, yes, you're using transactions.
Deadlocks occur when at least two queries are in the process of acquiring locks, and each of them holds row-level locks that they happened to acquire in such an order that they each now need another lock that the other one holds -- so, they're "deadlocked." An infinite wait condition exists between the running queries. The server notices this.
The error is not so much a fault as it is the server saying, "I see what you did, there... and, you're welcome, I cleaned it up for you because otherwise, you would have waited forever."
What you aren't seeing is that there are two guilty parties -- two different queries that caused the problem -- but only one of them is punished. The query that has accomplished the least amount of work (admittedly, this concept is nebulous) will be killed with the deadlock error, and the other query happily proceeds along its path, having no idea that it was the lucky survivor.
This is why the deadlock error message ends with "try restarting transaction" -- which, if you aren't explicitly using transactions, just means "run your query again."
See https://dev.mysql.com/doc/refman/5.6/en/innodb-deadlocks.html and examine the output of SHOW ENGINE INNODB STATUS;, which will show you the other query -- the one that helped cause the deadlock but that was not killed -- as well as the one that was.
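For a feel of how this happens, here is a minimal sketch that provokes exactly this error (assuming the mysql-connector-python package and a throwaway table t (id INT PRIMARY KEY, val INT) containing rows 1 and 2; credentials are hypothetical). Each session locks one row, then asks for the row the other one holds:

import threading
import mysql.connector

barrier = threading.Barrier(2)  # make sure both sessions hold their first lock

def worker(first_id, second_id):
    conn = mysql.connector.connect(user="root", database="test")  # hypothetical credentials
    cur = conn.cursor()
    try:
        cur.execute("START TRANSACTION")
        cur.execute("UPDATE t SET val = val + 1 WHERE id = %s", (first_id,))
        barrier.wait()  # both rows are now locked, one in each session
        # One of the two sessions will now hit ER_LOCK_DEADLOCK (1213);
        # InnoDB kills it, and the other session proceeds normally.
        cur.execute("UPDATE t SET val = val + 1 WHERE id = %s", (second_id,))
        conn.commit()
    except mysql.connector.Error as e:
        print("victim:", e)  # errno 1213, "try restarting transaction"
    finally:
        conn.close()

a = threading.Thread(target=worker, args=(1, 2))
b = threading.Thread(target=worker, args=(2, 1))
a.start(); b.start(); a.join(); b.join()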

Related

mysql deadlocking with autocommit on and no conflict

I have a piece of (Perl) code, of which I have multiple instances running at the same time, all with a different - unique - value for a variable $dsID. Nearly all of them keep falling over when they try to execute the following (prepared) SQL statement:
DELETE FROM ssRates WHERE ssID IN (SELECT id FROM snapshots WHERE dsID=?)
returning the error:
Lock wait timeout exceeded; try restarting transaction
Which sounds clear enough, except for a few things.
I have autocommit enabled, and am not using (explicit) transactions.
I'm using InnoDB which is supposed to use row-level locking.
The argument passed as $dsID is unique to each instance of the code, so there should be no conflicting locks to get into deadlocks.
Actually, at present, there are no rows that match the inner SELECT clause (I have verified this).
Given these things, I cannot understand why I am getting lock problems -- no locks should be waiting on each other, and there is no scope for deadlocks! (Note, though, that the same script later on does insert into the ssRates table, so some instances of the code may be doing that).
Having googled around a little, this looks like it may be a "gap locking" phenomenon, but I'm not entirely sure why, and more to the point, I'm not sure what the right solution is. I have some possible workarounds -- the obvious one being to split the process up: run the SELECT first, then loop over the results issuing individual DELETEs, as sketched below. But really, I'd like to understand this, otherwise I'm going to end up in this mess again!
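That split-up workaround, sketched in Python for brevity (the Perl/DBI version is analogous; the table and column names come from the query above, while the connection details and the ds_id value are hypothetical):

import mysql.connector

ds_id = 42  # each instance of the script has its own unique value
conn = mysql.connector.connect(user="app", database="mydb")  # hypothetical credentials
cur = conn.cursor()

# Run the subquery on its own first...
cur.execute("SELECT id FROM snapshots WHERE dsID = %s", (ds_id,))
ids = [row[0] for row in cur.fetchall()]

# ...then delete by explicit key, so each DELETE locks only those index
# entries rather than whatever range the combined statement would scan.
for ss_id in ids:
    cur.execute("DELETE FROM ssRates WHERE ssID = %s", (ss_id,))
conn.commit()
conn.close()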
So I have two questions for you friendly experts.
Is this a gap-locking thing?
If not, what is it? If yes, why? I can't see how this condition matches the gap lock definition.
(NB, server is running MariaDB: 5.5.68-MariaDB; in case this is something fixed in newer versions).

Django & MariaDB/MySQL: Does select_for_update lock rows from subqueries? Causing deadlocks?

Software: Django 2.1.0, Python 3.7.1, MariaDB 10.3.8, Ubuntu 18.04 LTS
We recently added some load to a new application and started observing lots of deadlocks. After a lot of digging, I found out that the Django select_for_update query resulted in SQL with several subqueries (3 or 4). In all deadlocks I've seen so far, at least one of the transactions involves this SQL with multiple subqueries.
My question is: does select_for_update lock records from every table involved? In my case, would records from the main SELECT and from the other tables used by the subqueries get locked, or only records from the main SELECT?
From Django docs:
By default, select_for_update() locks all rows that are selected by the query. For example, rows of related objects specified in select_related() are locked in addition to rows of the queryset’s model.
However, I'm not using select_related(), at least not explicitly.
Summary of my app:
with transaction.atomic():
    ModelName.objects.select_for_update().filter(...)
    # ...
    # update the record that is locked
    # ...
50+ clients sending queries to the database concurrently
Some of those queries ask for the same record. Meaning different transactions will run the same SQL at the same time.
After a lot of reading, I did the following to try to get the deadlock under control:
1- Try/catch exception error '1213' (deadlock). When this happens, wait 30 seconds and retry the query (see the sketch after this list). Here, I rely on the ROLLBACK the database engine performs automatically.
Also, print output of SHOW ENGINE INNODB STATUS and SHOW PROCESSLIST. But SHOW PROCESSLIST doesn't give useful information.
2- Modify the Django select_for_update so that it doesn't build an SQL with subqueries. Now the generated SQL contains a single WHERE with values and no subqueries.
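For reference, point 1 looks roughly like this (a sketch; ModelName and the update are placeholders from the summary above, 1213 is ER_LOCK_DEADLOCK, and the 30-second wait matches what is described):

import time
from django.db import OperationalError, transaction

def update_with_retry(pk, retries=3):
    for _ in range(retries):
        try:
            with transaction.atomic():
                obj = ModelName.objects.select_for_update().get(pk=pk)
                # ... update the locked record here ...
                obj.save()
            return True
        except OperationalError as e:
            if not (e.args and e.args[0] == 1213):  # 1213 = ER_LOCK_DEADLOCK
                raise
            # InnoDB has already rolled this transaction back; wait and retry.
            time.sleep(30)
    return False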
Anything else that could be done to reduce the deadlocks?
If you have select_for_update inside a transaction, the lock will only be released when the whole transaction commits or rolls back. With nowait set to true, the other concurrent requests will immediately fail with:
(3572, 'Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.')
So if we can't use optimistic locks and cannot make transactions shorter, we can set nowait=True in our select_for_update, and we will see a lot of failures if our assumptions are correct. Here we can just catch the lock failures and retry them with a backoff strategy. This is based on the assumption that everyone is trying to write to the same thing, like an auction item or a ticket booking within a short window of time. If that is not the case, consider changing the DB design a bit to make deadlocks less common.
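A sketch of what that looks like (ModelName and the work inside are placeholders; per the Django docs, a lock already held by another transaction surfaces as a DatabaseError when nowait=True):

from django.db import DatabaseError, transaction

def claim_record(pk):
    try:
        with transaction.atomic():
            obj = ModelName.objects.select_for_update(nowait=True).get(pk=pk)
            # ... do the short critical-section work and save ...
            obj.save()
            return True
    except DatabaseError:
        # Someone else holds the lock (error 3572 above); let the caller
        # retry with backoff instead of queueing up on the row.
        return False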

What will happen if I kill a huge MySQL InnoDb DELETE Query?

I'm currently running a DELETE query that is taking a lot longer than expected (already 10hrs!). I would like to kill it through the phpMyAdmin process list, but I am concerned about what might happen. Will the rollback it automatically does also take a lot of time? The current query status shows "updating".
It depends on the stage your query is in right now.
But generally rollback takes about equal time, sometimes even more than the original operation.
As per point 2 of this document, it's not really advisable.
Also, be sure to verify your MySQL version as it has a VERY nasty bug with delete/update queries rollback in some versions as per this article
Restarting/killing the MySQL process won't help as rollback will resume upon restart.
The rule of thumb is:
Just let it roll back on its own and don't even think about restarting
the DB, as the rollback will resume after the restart; worse, your DB won't be
accessible in the meantime.
Yes, a rollback of huge data (i.e. millions of rows) will be considerably slower than the corresponding commit, and even slower if parallel InnoDB commits are happening in the same database.
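If you want to watch the rollback from a separate session, one option (a sketch, assuming mysql-connector-python and the PROCESS privilege) is to poll information_schema.innodb_trx and watch the transaction state:

import time
import mysql.connector

conn = mysql.connector.connect(user="root")  # hypothetical credentials
cur = conn.cursor()
while True:
    cur.execute("SELECT trx_id, trx_state, trx_rows_modified "
                "FROM information_schema.innodb_trx")
    for trx_id, state, rows_modified in cur.fetchall():
        # trx_state reads 'ROLLING BACK' while the rollback is in progress
        print(trx_id, state, rows_modified)
    time.sleep(10)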

How do I avoid MySql deadlocks?

I'm talking to a MySql database using the jOOQ database abstraction layer.
I keep getting the following error:
SQL [null]; Deadlock found when trying to get lock; try restarting transaction
This is during a bulk insert of about 500 rows into a table. It is likely that more than one of these bulk inserts will be attempted at a time from different threads.
What is causing the deadlock, and how can I avoid it?
A traditional deadlock is when a transaction is trying to lock A and then B where another is trying to lock B and then A, leading to a situation where neither can complete. MySQL produces another sort of deadlock when there are too many pending locks on a particular resource.
You should check SHOW PROCESSLIST to see how many "waiting for lock" processes you have; a quick way to do that is sketched below. It could be that the ones that fail are simply out of luck because there are too many in line.
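A sketch of that check, assuming mysql-connector-python and the PROCESS privilege (credentials are hypothetical):

import mysql.connector

conn = mysql.connector.connect(user="root")  # hypothetical credentials
cur = conn.cursor()
cur.execute("SHOW FULL PROCESSLIST")
for row in cur.fetchall():
    # columns: Id, User, Host, db, Command, Time, State, Info
    state = row[6] or ""
    if "lock" in state.lower():
        print(row)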

Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table, and this was done thousands of times; the column was indexed, so the corresponding index was probably updated lots of times too.
I noticed something was wrong, because it took too long, so I killed the script. I even rebooted my computer since then, but something stuck in the table, because simple queries take a very long time to run and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads use the following command in mysql command line interface:
SHOW PROCESSLIST;
It can also be sent from phpMyAdmin if you don't have access to mysql command line interface.
This will display a list of threads with corresponding ids and execution time, so you can KILL the threads that are taking too much time to execute.
In phpMyAdmin you will have a button for stopping threads by using KILL, if you are using command line interface just use the KILL command followed by the thread id, like in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
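To see the waiting/blocking pairs directly, the innodb_lock_waits and innodb_trx views above can be joined (these information_schema names are for MySQL 5.x; in 8.0 the lock tables moved to performance_schema). A sketch, assuming mysql-connector-python:

import mysql.connector

conn = mysql.connector.connect(user="root")  # hypothetical credentials
cur = conn.cursor()
cur.execute("""
    SELECT r.trx_mysql_thread_id AS waiting_thread,
           r.trx_query           AS waiting_query,
           b.trx_mysql_thread_id AS blocking_thread,
           b.trx_query           AS blocking_query
    FROM information_schema.innodb_lock_waits w
    JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
    JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
""")
for row in cur.fetchall():
    print(row)  # the blocking_thread id is what you would pass to KILL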
The root cause of the issue might be in your code too -- please check the related functions, especially for annotations, if you use a JPA implementation like Hibernate.
For example, as described here, the misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
The truth is there is probably some way to optimize either your queries or your DB, but try these two queries for a workaround fix.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you are not able to acquire the lock, you wait for some time; if the lock is still not obtainable, the "lock wait timeout exceeded" error is thrown. One reason you may not be able to acquire the lock is that you are not closing your connections: when you try to get the lock a second time, your previous connection is still open and holding it.
Solution: close the connection or setAutoCommit(true) (according to your design) to release the lock.
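A sketch of that fix in Python terms (mysql-connector-python assumed; the table and values are placeholders): either enable autocommit or make sure the connection is always closed so its locks are released.

import mysql.connector

conn = mysql.connector.connect(user="app", database="mydb")  # hypothetical credentials
conn.autocommit = True  # the setAutoCommit(true) equivalent
try:
    cur = conn.cursor()
    cur.execute("UPDATE some_table SET some_column = %s WHERE id = %s",
                ("some_value", 1))
finally:
    conn.close()  # always release the connection and its locks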
Restart MySQL; it works fine.
BUT beware that if such a query is stuck, there is a problem somewhere:
in your query (misplaced char, Cartesian product, ...)
very numerous records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is no long-lived solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see if something is possible there to speed up the query (indexes, complex tests, ...).
Go to the process list in MySQL, so you can see whether a task is still running.
Kill the particular process or wait until it completes.
I ran into the same problem with an UPDATE statement. My solution was simply to run through the operations available in phpMyAdmin for the table. I optimized, flushed and defragmented the table (not in that order). No need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force-close the MySQL process from Task Manager. If that doesn't fix it, just restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL and then replace them with updated records (a cascading delete removes several related records, which streamlines deleting all related records for a single record deletion).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.
I tried running the delete query for this group of records via the cPanel MySQL access on the web, and got the same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script over the ODBC Access-to-MySQL connection.
Issue in my case: Some updates were made to some rows within a transaction and before the transaction was committed, in another place, the same rows were being updated outside this transaction. Ensuring that all the updates to the rows are made within the same transaction resolved my issue.
The issue was resolved in my case by changing DELETE to TRUNCATE.
Issue:
query = "DELETE FROM Survey1.sr_survey_generic_details"
mycursor.execute(query)
Fix:
query = "TRUNCATE TABLE Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to reboot all access points to the database: close them all and restart.
Fixed it.
Make sure you don't have a mismatched data type in your insert query.
I had an issue where I was trying to insert "user browser agent data" into a VARCHAR(255) column and kept hitting this lock; when I changed it to TEXT(255), that fixed it.
So most likely it is a data type mismatch.
I solved the problem by dropping the table and restoring it from backup.