MySQL lock wait timeout and deadlock errors

I'm developing a mobile application whose backend is developed in Java and whose database is MySQL.
We have some insert and update operations on database tables with a lot of rows (between 400,000 and 3,000,000). Each operation usually doesn't need to touch every row of the table, but several may be called simultaneously and together update 20% of them.
Sometimes I get these errors:
Deadlock found when trying to get lock; try restarting transaction
and
Lock wait timeout exceeded; try restarting transaction
I have improved my queries, making them smaller and faster, but I still have a big problem when some operations can't be performed at all.
My solutions until now have been:
Increase server performance (AWS Instance from m2.large to c3.2xlarge)
SET GLOBAL tx_isolation = 'READ-COMMITTED';
Avoid checking foreign keys: SET FOREIGN_KEY_CHECKS = 0; (I know this is not safe, but my priority is not locking the database)
Set these values for the timeout variables (SHOW VARIABLES LIKE '%timeout%';):
connect_timeout: 10
delayed_insert_timeout: 300
innodb_lock_wait_timeout: 50
innodb_rollback_on_timeout: OFF
interactive_timeout: 28800
lock_wait_timeout: 31536000
net_read_timeout: 30
net_write_timeout: 60
slave_net_timeout: 3600
wait_timeout: 28800
But I'm not sure whether these changes have decreased performance.
Any idea of how to reduce those errors?
Note: these other SO answers don't help me:
MySQL Lock wait timeout exceeded
MySQL: "lock wait timeout exceeded"
How can I change the default Mysql connection timeout when connecting through python?

Try to update fewer rows per transaction.
Instead of updating 20% of the rows in a single transaction, update 1% of the rows 20 times.
This will significantly improve performance, and you will avoid the timeout.
Note: ORMs are not a good solution for big updates. It is better to use plain JDBC. Use an ORM to retrieve, update, or delete a few records at a time; it speeds up the coding phase, not the execution time.
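A minimal sketch of the chunked approach, assuming a hypothetical orders table with a status column; each small UPDATE holds its row locks only briefly:
DELIMITER //
CREATE PROCEDURE update_in_chunks()
BEGIN
  DECLARE rows_changed INT DEFAULT 1;
  WHILE rows_changed > 0 DO
    UPDATE orders SET status = 'processed' WHERE status = 'pending' LIMIT 10000;  -- one small chunk
    SET rows_changed = ROW_COUNT();  -- 0 once no matching rows remain
    COMMIT;  -- release locks between chunks
  END WHILE;
END //
DELIMITER ;
The same loop can just as easily live on the Java side as a PreparedStatement executed repeatedly until it reports zero affected rows.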

As a comment more than an answer, if you are in the early stages of development, you may wish to consider whether or not you actually need this particular data in a relational database. There are much faster and larger alternatives for storing data from mobile apps depending upon the planned use of the data. [S3 for large files, stored-once, read often (and can be cached); NoSQL (Mongo etc) for unstructured large, write-once, read many, etc.]


What is the good practice on update query with big volume of data to avoid a lock wait timeout?

So basically, I currently have this query:
UPDATE act AS a
INNER JOIN blok AS b
ON b.fav_pat = a.pat_id
SET a.blok_id = b.id
Because of the volume of data I have, it's currently timing out. Is there a way to avoid the timeout without modifying the DB config?
The Flyway package you use does its best to allow any incomplete operation to be rolled back entirely, using the host RDBMS's transaction semantics. That means it is designed to do update operations like the one you showed us in a single ACID-compliant transaction.
If the tables involved are large (millions of rows or more), the transactions can be very large. They can make your MySQL server thrash, spilling transaction logs to disk or SSD. Committing those transaction logs can take a very long time. You didn't mention row counts, but if they are large it is possible that Flyway is not the right tool for this job.
Your lock timeout hints that you are doing this operation on a database with other concurrent activity. You may want to do it on an otherwise quiet database for best results.
You can increase the lock wait timeout by doing this.
show variables like 'innodb_lock_wait_timeout'; -- previous value
SET GLOBAL innodb_lock_wait_timeout = 300; -- five min
Then, perhaps, try again just before sunrise on a holiday or at another quiet time. There is more information in the MySQL documentation.
Consider restoring the lock timeout to its previous value when your Flyway job is done.
You can also consider doing your update in batches, for example 1,000 rows at a time. But Flyway doesn't seem to support that. If you go that route, you can ask another question.
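If you do, a hedged sketch of what batching this particular statement could look like, assuming pat_id is a numeric key you can walk in ranges:
UPDATE act AS a
INNER JOIN blok AS b
ON b.fav_pat = a.pat_id
SET a.blok_id = b.id
WHERE a.pat_id BETWEEN 1 AND 10000;  -- then 10001-20000, and so on
Each range commits on its own, so no single transaction holds locks on millions of rows.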

MYSQL: Lock wait timeout exceeded; try restarting transaction

Facts:
I have a MySQL database with a table named tableA
I am running multiple AWS Batch jobs at the same time, where each batch process communicates with tableA by:
first adding multiple rows to the table
next, deleting multiple rows from the table
Each batch handles its own distinct set of rows
If I run one batch process no problem occurs.
When multiple batch processes run at the same time, I get the following error:
sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1205, 'Lock wait timeout exceeded; try restarting transaction')
It is not related to AWS Batch, as the same problem occurs when I try to do it locally.
Other info:
SELECT @@GLOBAL.transaction_isolation, @@transaction_isolation, @@session.transaction_isolation; ==> repeatable-read, repeatable-read, repeatable-read
show variables like 'innodb_lock_wait_timeout' ==> 50
Question
I can see some solutions recommend setting innodb_lock_wait_timeout to a higher value, which will probably eliminate the error. But my understanding is that if I set innodb_lock_wait_timeout to a higher value, each transaction will just wait for the other transaction to finish. That means these processes will not run in parallel, as each one will wait for the others.
What I want is for these processes to happen without waiting for other transactions (insertions or deletions) that are happening at the moment.
Any recommendations?
Running multiple batch load processes in parallel is difficult.
Speed up the DELETE queries used in your batch process. Run EXPLAIN on them to check whether they use an index, then add any index they need.
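For example (batch_id is a hypothetical stand-in for whatever column identifies each batch's rows):
EXPLAIN DELETE FROM tableA WHERE batch_id = 42;  -- a full table scan here means the index is missing
ALTER TABLE tableA ADD INDEX idx_batch (batch_id);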
Try using SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; before running your batch in each session. If each batch handles its own distinct set of rows, this may (or may not) allow a bit more parallelism.
Try reducing the size (the row count) of the batches. The performance reason for using transaction batches is to avoid doing a costly COMMIT for every row. You get most of the performance benefit with batches of 100 rows that you would get with batches of 10,000 rows.
Try loading each incoming batch into a temporary table outside your transaction. Then use that temporary table inside your transaction to do your update. Something like this code, which is obviously simpler than you need (the column names are placeholders):
CREATE TEMPORARY TABLE batchrows (id INT PRIMARY KEY, col_a INT, col_b INT);
INSERT INTO batchrows (id, col_a, col_b) VALUES (1, 10, 100);
INSERT INTO batchrows (id, col_a, col_b) VALUES (2, 20, 200);
INSERT INTO batchrows (id, col_a, col_b) VALUES (3, 30, 300);
START TRANSACTION;
INSERT INTO maintable SELECT * FROM batchrows;
DELETE FROM maintable WHERE id IN (SELECT id FROM batchrows); /* ??? adapt the key to your schema */
COMMIT;
DROP TEMPORARY TABLE batchrows;
The point of this? Reducing the elapsed time during which the transaction lock is held.
Finally: don't try to do batch loading in parallel. Sometimes the integrity of your data simply requires you to process the batches one after another. Actually, that is what happens now in your system: each batch must wait for the previous one to complete.
Generally speaking, Repeatable Read is not a good default for production. It locks all the rows it touches, which creates a lot of unnecessary locks. Changing to Read Committed will reduce the locks significantly.
Before other tuning, I suggest you enable the InnoDB lock logging to see what the locks are:
SET GLOBAL innodb_status_output_locks = ON;
SET GLOBAL innodb_status_output = ON;
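With those enabled, the lock details appear in the engine status, which you can also dump on demand:
SHOW ENGINE INNODB STATUS\G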
If that lock can be relieved, it will be a big performance boost.
I don't recommend increasing innodb_lock_wait_timeout. If a lock is held for more than 50 seconds, the batch job won't be fast anyway.
In a worse scenario, which I have experienced before, if the database is shared with another application such as an app server, the long lock waits can occupy all of your connections, with the result that your app server cannot serve new requests.

MySQL "LOCK TABLES" timeout?

What's the timeout for mysql LOCK TABLES statement?
Can't find it anywhere.
I tried to set the variable innodb_lock_wait_timeout in my.cnf, but it seems to be related to another kind of (row-level) locking, not to table locking.
It simply has no effect on LOCK TABLES.
I want to set some low timeout value for the deadlock case, because if some operation locks tables and something goes wrong, it will hang up the whole site!
Which is stupid, for example, when a customer is finishing a purchase on your site.
My work-around is to create a dedicated lock table and just lock a row in that table. This has the advantage of only blocking the processes that specifically want to be locked. Other parts of the application can continue to access the tables even if they are at some point touched by the update processes.
Setup
CREATE TABLE `mutex` (
EMPTY ENUM('') NOT NULL,
PRIMARY KEY (EMPTY)
);
Usage
set innodb_lock_wait_timeout = 1;
start transaction;
insert into `mutex` values();
[... do the real work here ... or somewhere else ... even a different machine ...]
delete from `mutex`;
commit;
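To illustrate the contention behaviour (a hedged sketch, but this is standard InnoDB duplicate-key locking): a second session running the same sequence blocks on the insert and, with the one-second timeout, fails fast so you can retry:
set innodb_lock_wait_timeout = 1;
start transaction;
insert into `mutex` values();  -- blocks while another session holds the lock
-- after ~1 second: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
rollback;  -- then retry after a short sleep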
Why are you using LOCK TABLES?
If you are using MyISAM (which sometimes needs LOCK TABLES), you should convert to InnoDB.
If you are using InnoDB, you should never use LOCK TABLES. Instead, depend on innodb_lock_wait_timeout (default is an unreasonably high 50 seconds). And you should check for errors.
InnoDB Deadlocks are caught and immediately cause an error. Certain non-deadlocks may wait for innodb_lock_wait_timeout.
Edit
Since the transaction looks like
BEGIN;
SELECT ...;
-- compute some stuff
UPDATE ... (using that stuff);
COMMIT;
You need to add FOR UPDATE to the end of the SELECT.
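A minimal sketch of that pattern, with a hypothetical inventory table:
START TRANSACTION;
SELECT qty INTO @qty FROM inventory WHERE item_id = 42 FOR UPDATE;  -- row stays locked until COMMIT
-- compute the new quantity from @qty
UPDATE inventory SET qty = @qty - 1 WHERE item_id = 42;
COMMIT;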
I think you are after the table_lock_wait_timeout variable, which was introduced in MySQL 5.0.10 but subsequently removed in 5.5. Unfortunately, the release notes don't specify an alternative to use, and I'm guessing that the general attitude is to switch over to using InnoDB transactions, as @Rick James has stated in his answer.
I think that removing the variable was unhelpful. Others may regard this as a case of the XY Problem, where we are trying to fix a symptom (deadlocks) by changing the timeout period of locking tables when really we should resolve the root cause by switching over to transactions instead. I think there may still be cases where table locks are more suitable to the application than using transactions and are perhaps a lot easier to comprehend, even if they are worse performing.
The nice thing about using LOCK TABLES is that you can state the tables your queries depend upon before proceeding. With transactions, the locks are grabbed at the last possible moment, and if they can't be fetched and time out, you then need to check for this failure and roll back before trying everything all over again. It's simpler to have a 1-second timeout (the minimum) on the LOCK TABLES query and keep retrying to get the lock(s) until you succeed, then proceed with your queries before unlocking the tables. This logic is at no risk of deadlocks.
I believe the developers' attitude is summed up by the following excerpt from the documentation:
...avoid using the LOCK TABLES statement, because it does not offer
any extra protection, but instead reduces concurrency.
The correct answer is the lock_wait_timeout system variable.
From the documentation:
This variable specifies the timeout in seconds for attempts to acquire
metadata locks. The permissible values range from 1 to 31536000 (1
year). The default is 31536000.
This timeout applies to all statements that use metadata locks. These
include DML and DDL operations on tables, views, stored procedures,
and stored functions, as well as LOCK TABLES, FLUSH TABLES WITH READ
LOCK, and HANDLER statements.
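So a session that should fail fast on LOCK TABLES could look like this (the table name is illustrative):
SET SESSION lock_wait_timeout = 5;  -- give up on table/metadata locks after 5 seconds
LOCK TABLES orders WRITE;           -- errors out instead of hanging if the lock isn't granted in time
-- ... do the work ...
UNLOCK TABLES;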
I think you meant to say the default timeout value, which is 50 seconds. Per the MySQL documentation:
innodb_lock_wait_timeout (default: 50): The timeout in seconds an
InnoDB transaction may wait for a row lock before giving up. The
default value is 50 seconds.

How to give priority to certain queries?

On certain occasions, when several back-end processes happen to run at the same time (queue management is something else, I can solve it like that, but this is not the question here),
I get General error: 1205 Lock wait timeout exceeded; try restarting transaction ROLLING BACK
The process with the lower priority is the one that holds the lock on the table, because it started a few minutes before the high-priority one.
How do I give priority to a query over an already running process?
Hope it was clear enough.
Once a query has begun execution, it cannot be paused or interrupted. The only exception is at the DB administration level, where you could forcibly stop the query (think of it as killing a running process in Windows, if you will). However, you don't want to do that, so forget it.
Your best option would be a LOW_PRIORITY chunked operation. Basically, what that means is: if the LOW_PRIORITY query is taking too long to execute, think about ways you could split it up to run quicker without creating orphaned or illegal data in the database.
A very basic use case: imagine an insert that adds 10,000 new rows. By "chunking" the insert so that it runs multiple times with smaller data sets (e.g. 500 rows at a time), each one completes more quickly, and therefore allows any non-LOW_PRIORITY operations to be executed in a more timely manner (see the sketch after the How To below).
How To
Setting something as low priority is as simple as adding the LOW_PRIORITY flag:
INSERT LOW_PRIORITY INTO xxx (a, b, c) VALUES (1, 2, 3);
UPDATE LOW_PRIORITY xxx SET a = b;
DELETE LOW_PRIORITY FROM xxx WHERE a = 'value';
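And a hedged sketch of the chunked version, assuming the 10,000 source rows sit in a staging table with a numeric id:
INSERT LOW_PRIORITY INTO xxx (a, b, c)
SELECT a, b, c FROM xxx_staging WHERE id BETWEEN 1 AND 500;
INSERT LOW_PRIORITY INTO xxx (a, b, c)
SELECT a, b, c FROM xxx_staging WHERE id BETWEEN 501 AND 1000;
-- ...continue in 500-row ranges until the staging table is drained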

High values of innodb_lock_wait_timeout on MySQL

We're trying to get some statistics over our large log tables on MySQL. Some SELECT queries are taking too long to complete, causing exceptions like:
Caused by: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
This causes our whole application to stop serving, with the same error. After some research, we decided to change the innodb_lock_wait_timeout variable of our MySQL server configuration.
But what are the drawbacks of this configuration change?
I am not sure this applies to your issue, but your question is something I dealt with a while ago. I found out that on my system the locks were not needed and were related to queries like CREATE TABLE AS SELECT * FROM table_x..., which apparently lock all records in table_x, even in InnoDB.
The solution was to set the global parameter innodb_locks_unsafe_for_binlog to true (in my.cnf, add the line innodb_locks_unsafe_for_binlog=1), which changes the way InnoDB locks records.
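For reference, the my.cnf entry (note this variable was deprecated in MySQL 5.6 and removed in 8.0, where READ COMMITTED isolation is the suggested replacement):
[mysqld]
innodb_locks_unsafe_for_binlog = 1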
There is some documentation about it in the MySQL manual. It really saved my application from those unexpected locks.
As load increases, you'll need an ever longer timeout. The drawback is a risk of ever-increasing maximum query times for other clients' queries. You need to look into this: I would suggest using the Linux tool mytop to find the long-running queries, then run EXPLAIN on them to see how the locks are being used. Restructure your data and/or queries to lock less.
Finally, MariaDB (a fork of MySQL) puts a lot of focus on reducing the amount of locking needed for operations, so moving to it may help you as well.