how to release a lock automatically in mysql

My question is similar to this question: MySQL rollback on transaction with lost/disconnected connection, but that one is from 5 years ago.
If a client (JDBC or something else) locks one row in a table, executes some statements, and then the network goes down, MySQL will never receive a commit or rollback command from the client. Does MySQL support rolling back this transaction (unlocking the row) automatically?
I looked at innodb_rollback_on_timeout; the documentation says "If --innodb_rollback_on_timeout is specified, a transaction timeout causes InnoDB to abort and roll back the entire transaction", but how long is the transaction timeout and where is it set?
The accepted answer to the similar question is to use wait_timeout. If wait_timeout is set to a small number like 10 seconds, then the idle connections in the pool (if one is used) need to be tested every 10 seconds before they are disconnected by the MySQL server. Is that cost too high? Or are there other ways (a configuration setting would be best) to solve my problem?

Actually, there is no setting for a transaction timeout; wait_timeout or interactive_timeout still applies. What --innodb_rollback_on_timeout affects is the behavior of the rollback (the whole transaction versus only the last statement of the transaction).
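If the goal is simply to have abandoned idle connections (and their open transactions) cleaned up sooner, a minimal sketch; the 300-second value is only illustrative, not a recommendation:
-- check the current idle-connection timeout (the default is 28800 s = 8 h)
SHOW VARIABLES LIKE 'wait_timeout';
-- lower it for new non-interactive connections; once an idle connection exceeds
-- this, the server closes it and InnoDB rolls back its uncommitted transaction
SET GLOBAL wait_timeout = 300;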

Related

MySQL lock wait timeout and deadlock errors

I'm developing a mobile application whose backend is developed in Java and whose database is MySQL.
We have some insert and update operations on database tables with a lot of rows (between 400,000 and 3,000,000). An operation usually doesn't need to touch every row of the table, but several operations may be called simultaneously and together update about 20% of the rows.
Sometimes I get these errors:
Deadlock found when trying to get lock; try restarting transaction
and
Lock wait timeout exceeded; try restarting transaction
I have improved my queries by making them smaller and faster, but I still have a big problem when some operations can't be performed.
My solutions until now have been:
Increase server performance (AWS Instance from m2.large to c3.2xlarge)
SET GLOBAL tx_isolation = 'READ-COMMITTED';
Avoid checking foreign keys: SET FOREIGN_KEY_CHECKS = 0; (I know this is not safe, but my priority is not locking the database)
Set this values for timeout variables (SHOW VARIABLES LIKE '%timeout%';):
connect_timeout: 10
delayed_insert_timeout: 300
innodb_lock_wait_timeout: 50
innodb_rollback_on_timeout: OFF
interactive_timeout: 28800
lock_wait_timeout: 31536000
net_read_timeout: 30
net_write_timeout: 60
slave_net_timeout: 3600
wait_timeout: 28800
But I'm not sure if these things have decreased performance.
Any idea of how to reduce those errors?
Note: these other SO answers don't help me:
MySQL Lock wait timeout exceeded
MySQL: "lock wait timeout exceeded"
How can I change the default Mysql connection timeout when connecting through python?
Try to update fewer rows per transaction.
Instead of updating 20% of the rows in a single transaction, update 1% of the rows 20 times.
This will significantly improve your performance and you will avoid the timeout.
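A minimal sketch of that idea, assuming a hypothetical orders table with an indexed status column; repeat the statement until it reports 0 affected rows:
-- each batch touches at most 10000 rows, so row locks are held only briefly
UPDATE orders
SET    status = 'archived'
WHERE  status = 'pending'
LIMIT  10000;
-- loop in the application (e.g. on the JDBC update count) while rows are still affected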
Note: an ORM is not a good solution for big updates. It is better to use standard JDBC. Use an ORM to retrieve, update, or delete a few records at a time; it speeds up the coding phase, not the execution time.
As a comment more than an answer, if you are in the early stages of development, you may wish to consider whether or not you actually need this particular data in a relational database. There are much faster and larger alternatives for storing data from mobile apps depending upon the planned use of the data. [S3 for large files, stored-once, read often (and can be cached); NoSQL (Mongo etc) for unstructured large, write-once, read many, etc.]

MySQL deadlock: what does 'try restarting transaction' mean and what exactly happens to the locked transactions [duplicate]

This question already has answers here:
Restarting transaction in MySQL after deadlock
I have a situation where 2 transactions create a mysql deadlock.
The following error is fired : Deadlock found when trying to get lock; try restarting transaction
If I'm correct, this error means that the MySQL deadlock timeout has expired, and MySQL tries to do something to remove the deadlock.
What isn't clear to me is what try restarting transaction means. How can a transaction be "restarted"?
What happens to the 2 locked transactions? Are they both canceled (rolled back)? Or is just one of them canceled so the lock can be released?
Thanks in advance
There is no deadlock timeout (though there are lock timeouts). If a deadlock is detected, no amount of time will resolve it, so MySQL reacts immediately.
MySQL will roll back one or more transactions until the deadlock is resolved.
From MySQL docs:
InnoDB tries to pick small transactions to roll back, where the size
of a transaction is determined by the number of rows inserted,
updated, or deleted.
It is up to your application that is making the SQL call to retry the transaction.
MySQL has some recommendations in its documentation How to Cope with Deadlocks.
If you wish to try to avoid the deadlock and are having trouble understanding the cause of the deadlock, I recommend starting another question, and posting the complete affected queries and schema, and ideally the deadlock report from SHOW ENGINE INNODB STATUS.
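If deadlocks happen often, a small sketch for capturing more than just the most recent one (innodb_print_all_deadlocks is available from MySQL 5.5.30 / 5.6 onward):
SHOW ENGINE INNODB STATUS\G
-- the "LATEST DETECTED DEADLOCK" section shows both transactions, the locks
-- involved, and which transaction InnoDB rolled back
SET GLOBAL innodb_print_all_deadlocks = ON;
-- from now on every detected deadlock is also written to the MySQL error log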

innodb rollback on timeout

I have a mysql 5.1 db running a stored proc with
START TRANSACTION
Insert some rows to table 1
Insert some rows to table 2
COMMIT
Calling this stored procedure often fails with
SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction
According to this page here, if the MySQL server is not started with innodb_rollback_on_timeout then only the last statement is rolled back, but START TRANSACTION itself will set autocommit = 0. Does that mean that our MySQL server needs to be started with this parameter so that it doesn't leave the database in an inconsistent state, where some rows are inserted into table 1 but not into table 2?
Yes, either that, or declare a handler FOR 1205 in your procedure, with which you could (e.g.) roll back the transaction and interrupt the process.
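A minimal sketch of that handler approach, with hypothetical tables t1 and t2 (error 1205 is ER_LOCK_WAIT_TIMEOUT; RESIGNAL needs MySQL 5.5+, so on 5.1 you would return a status flag instead):
DELIMITER //
CREATE PROCEDURE insert_both()
BEGIN
  -- on a lock wait timeout, undo everything done so far and re-raise the error
  DECLARE EXIT HANDLER FOR 1205
  BEGIN
    ROLLBACK;
    RESIGNAL;
  END;

  START TRANSACTION;
  INSERT INTO t1 (col) VALUES (1);
  INSERT INTO t2 (col) VALUES (2);
  COMMIT;
END //
DELIMITER ;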
You can rollback yourself if the calling client checks for errors and rolls back the transaction if an error occurs. As stated in the bug log:
In the event of a row level lock timeout, it can be desirable to allow your application to decide what to do (such as ROLLBACK, retry the statement, etc...) so this behavior was added with an option for backwards compatibility if desired.
Otherwise yes - if you don't check for lock wait errors, and you always want the whole transaction to rollback, then you should set innodb_rollback_on_timeout in your my.cnf.
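For reference, innodb_rollback_on_timeout is not a dynamic variable, so it has to be set in the configuration file (or on the command line) and takes effect at the next server start; a minimal my.cnf sketch:
[mysqld]
innodb_rollback_on_timeout = ON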

Deadlock vs Lockwait Timeout on MySQL [closed]

Can anyone explain to me in detail the difference between the deadlock and lock wait timeout errors found in MySQL 5.1? Are they the same thing? When does the deadlock error occur and when does the lock wait timeout occur?
A deadlock occurs whenever a circular dependency arises among the locks that transactions must acquire in order to proceed: for example, imagine that transaction 1 holds lock A but needs to acquire lock B to proceed; and transaction 2 holds lock B but needs to acquire lock A to proceed—the transactions are immediately deadlocked (no timeout required) and neither can proceed until one releases its locks. Thus the database picks a transaction to abort/rollback; application code should detect this eventuality and handle accordingly, usually by attempting the transaction again. A deadlock is analogous to a policeman solving gridlock (the situation at a road junction when no vehicle is able to move forward) by ordering a random participant to reverse.
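A minimal sketch of such a cycle, using a hypothetical table t that contains rows with id = 1 and id = 2:
-- session A
START TRANSACTION;
UPDATE t SET v = v + 1 WHERE id = 1;   -- A locks row 1
-- session B
START TRANSACTION;
UPDATE t SET v = v + 1 WHERE id = 2;   -- B locks row 2
UPDATE t SET v = v + 1 WHERE id = 1;   -- B now waits for A's lock on row 1
-- session A
UPDATE t SET v = v + 1 WHERE id = 2;   -- cycle complete: InnoDB immediately rolls
                                       -- one transaction back with error 1213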
A wait timeout occurs when the configured timeout period (e.g. innodb_lock_wait_timeout in the case of InnoDB locks) elapses while a transaction awaits a lock, perhaps because a slow transaction is holding the lock and has not finished executing, or perhaps because a number of transactions are queuing for the lock. It's possible (even likely) that the lock would have become available and been acquired if the transaction had waited longer, but the timeout exists to avoid applications waiting on the database indefinitely. A wait timeout is analogous to a driver giving up and turning back because of delays.
A deadlock is two threads waiting on each other indefinitely. A lock wait timeout means one thread timed out while waiting to acquire a lock, which keeps it from waiting forever.

Getting "Lock wait timeout exceeded; try restarting transaction" even though I'm not using a transaction

I'm running the following MySQL UPDATE statement:
mysql> update customer set account_import_id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
I'm not using a transaction, so why would I be getting this error? I even tried restarting my MySQL server and it didn't help.
The table has 406,733 rows.
HOW TO FORCE UNLOCK for locked tables in MySQL:
Breaking locks like this may mean that atomicity is not enforced for the SQL statements that caused the lock.
This is hackish, and the proper solution is to fix your application that caused the locks. However, when dollars are on the line, a swift kick will get things moving again.
1) Enter MySQL
mysql -u your_user -p
2) Let's see the list of locked tables
mysql> show open tables where in_use>0;
3) Let's see the list of the current processes, one of them is locking your table(s)
mysql> show processlist;
4) Kill one of these processes
mysql> kill <put_process_id_here>;
You are using a transaction; autocommit does not disable transactions, it just makes them automatically commit at the end of the statement.
What could be happening is that some other thread is holding a record lock on some record (you're updating every record in the table!) for too long, and your thread is timing out. Or you may be running multiple (2+) UPDATE queries on the same row within a single transaction.
You can see more details of the event by issuing a
SHOW ENGINE INNODB STATUS
after the event (in SQL editor). Ideally do this on a quiet test-machine.
mysql> set innodb_lock_wait_timeout=100;
Query OK, 0 rows affected (0.02 sec)
mysql> show variables like 'innodb_lock_wait_timeout';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 100   |
+--------------------------+-------+
Now trigger the lock again. You have 100 seconds to issue a SHOW ENGINE INNODB STATUS\G to the database and see which other transaction is locking yours.
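On MySQL 5.5 through 5.7 the same information is also exposed through information_schema, which is easier to read than the monitor output (on 8.0 these tables moved to performance_schema.data_lock_waits); a sketch:
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM   information_schema.INNODB_LOCK_WAITS w
JOIN   information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id
JOIN   information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id;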
Take a look to see if your database is fine tuned, especially the transaction isolation. It isn't a good idea to increase the innodb_lock_wait_timeout variable.
Check your database transaction isolation level in MySQL:
mysql> SELECT @@GLOBAL.tx_isolation, @@tx_isolation, @@session.tx_isolation;
+-----------------------+-----------------+------------------------+
| @@GLOBAL.tx_isolation | @@tx_isolation  | @@session.tx_isolation |
+-----------------------+-----------------+------------------------+
| REPEATABLE-READ       | REPEATABLE-READ | REPEATABLE-READ        |
+-----------------------+-----------------+------------------------+
1 row in set (0.00 sec)
You could get improvements by changing the isolation level: use the Oracle-like READ COMMITTED instead of REPEATABLE READ (REPEATABLE READ is the InnoDB default).
mysql> SET tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)
Also, try to use SELECT FOR UPDATE only if necessary.
Something is blocking the execution of the query. Most likely another query updating, inserting or deleting from one of the tables in your query. You have to find out what that is:
SHOW PROCESSLIST;
Once you locate the blocking process, find its id and run :
KILL {id};
Re-run your initial query.
mysql> SHOW PROCESSLIST;
mysql> KILL xxxx;
Then kill whichever process is in the Sleep state. In my case it was 2156.
100% agree with what MarkR said. autocommit makes each statement a one-statement transaction.
SHOW ENGINE INNODB STATUS should give you some clues as to the deadlock reason. Have a good look at your slow query log too to see what else is querying the table and try to remove anything that's doing a full tablescan. Row level locking works well but not when you're trying to lock all of the rows!
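A quick sketch for checking whether one of those other queries is doing a full table scan (the WHERE clause here is only an example):
EXPLAIN SELECT * FROM customer WHERE account_import_id = 1;
-- "type: ALL" with an empty "key" column means a full table scan; under
-- REPEATABLE READ, an UPDATE built on such a scan locks every row it examines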
Try updating the two parameters below, as they are probably still at their default values.
innodb_lock_wait_timeout = 50
innodb_rollback_on_timeout = ON
To check a parameter's current value you can use the SQL below.
SHOW GLOBAL VARIABLES LIKE 'innodb_rollback_on_timeout';
Can you update any other record within this table, or is this table heavily used? What I am thinking is that, while it is attempting to acquire the lock it needs to update this record, the configured timeout expires. You may be able to increase the timeout, which may help.
If you've just killed a big query, it will take time to roll back. If you issue another query before the killed query is done rolling back, you might get a lock timeout error. That's what happened to me. The solution was just to wait a bit.
Details:
I had issued a DELETE query to remove about 900,000 out of about 1 million rows.
I ran this by mistake (removes only 10% of the rows):
DELETE FROM table WHERE MOD(id,10) = 0
Instead of this (removes 90% of the rows):
DELETE FROM table WHERE MOD(id,10) != 0
I wanted to remove 90% of the rows, not 10%. So I killed the process in the MySQL command line, knowing that it would roll back all the rows it had deleted so far.
Then I ran the correct command immediately, and got a lock timeout exceeded error soon after. I realized that the lock might actually be the rollback of the killed query still happening in the background. So I waited a few seconds and re-ran the query.
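A sketch for checking whether the killed statement is still rolling back before you retry (the INNODB_TRX table exists from MySQL 5.5 on):
SELECT trx_mysql_thread_id, trx_state, trx_rows_modified
FROM   information_schema.INNODB_TRX;
-- a transaction with trx_state = 'ROLLING BACK' is still undoing its changes;
-- wait until it disappears before re-running the statement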
In our case the problem did not have much to do with the locks themselves.
The issue was that one of our application endpoints needed to open 2 connections in parallel to process a single request.
Example:
Open 1st connection
Start transaction 1
Lock 1 row in table1
Open 2nd connection
Start transaction 2
Lock 1 row in table2
Commit transaction 2
Release 2nd connection
Commit transaction 1
Release 1st connection
Our application had a connection pool limited to 10 connections.
Unfortunately, under load, as soon as all connections were used the application stopped working and we started having this problem.
We had several requests that needed to open a second connection to complete, but could not do so due to the connection pool limit. As a consequence, those requests kept a lock on the table1 row for a long time, causing subsequent requests that needed to lock the same row to throw this error.
Solution:
In the short term, we patched the problem by increasing the connection pool limit.
In the long term, we removed all nested connections, to fully solve the issue.
Tips:
You can easily check whether you have nested connections by lowering your connection pool limit to 1 and testing your application.
The number of rows is not huge... Create an index on account_import_id if it's not the primary key.
CREATE INDEX idx_customer_account_import_id ON customer (account_import_id);
Make sure the database tables are using the InnoDB storage engine and the READ-COMMITTED transaction isolation level.
You can check it with SELECT @@GLOBAL.tx_isolation, @@tx_isolation; in the MySQL console.
If it is not set to READ-COMMITTED then you must set it. Before setting it, make sure you have SUPER privileges in MySQL.
You can take help from http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html.
By setting this, I think your problem will be solved.
You might also want to check you aren't attempting to update this in two processes at once. Users ( #tala ) have encountered similar error messages in this context, maybe double-check that...
I came from Google and I just wanted to add the solution that worked for me. My problem was that I was trying to delete records from a huge table that had a lot of cascading foreign keys, so I got the same error as the OP.
I disabled autocommit and then it worked, just by adding COMMIT at the end of the SQL statement. As far as I understood, this releases the buffer bit by bit instead of waiting until the end of the command.
To keep with the example of the OP, this should have worked:
mysql> set autocommit=0;
mysql> update customer set account_import_id = 1; commit;
Do not forget to reactivate the autocommit again if you want to leave the MySQL config as before.
mysql> set autocommit=1;
Late to the party (as usual), but my issue was that I wrote some bad SQL (being a novice) and several processes had a lock on the record(s) (not sure of the appropriate verbiage). I ended up just having to run SHOW PROCESSLIST and then kill the IDs using KILL <id>.
This kind of thing happened to me when I was using the PHP language construct exit; in the middle of a transaction. The transaction then "hangs" and you need to kill the MySQL process (described above with the processlist).
In my instance, I was running an abnormal query to fix data. If you lock the tables in your query, then you won't have to deal with the Lock timeout:
LOCK TABLES `customer` WRITE;
update customer set account_import_id = 1;
UNLOCK TABLES;
This is probably not a good idea for normal use.
For more info see: MySQL 8.0 Reference Manual
I ran into this with 2 Doctrine DBAL connections, one of them non-transactional (for important logs); they are intended to run in parallel, not depending on each other.
CodeExecution(
TransactionConnectionQuery()
TransactionlessConnectionQuery()
)
My integration tests were wrapped in transactions for data rollback after every test.
beginTransaction()
CodeExecution(
TransactionConnectionQuery()
TransactionlessConnectionQuery() // CONFLICT
)
rollBack()
My solution was to disable the wrapping transaction in those tests and reset the db data in another way.
We ran into this issue yesterday and after slogging through just about every suggested solution here, and several others from other answers/forums we ended up resolving it once we realized the actual issue.
Due to some poor planning, our database was stored on a mounted volume that was also receiving our regular automated backups. That volume had reached max capacity.
Once we cleared up some space and restarted, this error was resolved.
Note that we did also manually kill several of the processes: kill <process_id>; so that may still be necessary.
Overall, our takeaway was that it was incredibly frustrating that none of our logs or warnings directly mentioned a lack of disk space, but that did seem to be the root cause.
I had a similar error when using Python to access a MySQL database.
The Python program was using a while loop and a for loop.
Closing the cursor and the link (connection) at the appropriate line solved the problem:
https://github.com/nishishailesh/sensa_host_com/blob/master/sensa_write.py
see line 230
It appears that requesting a repeated link without closing the previous one produced this error.
I've faced a similar issue when doing some testing.
Reason - in my case, the transaction was not committed from my Spring Boot application because I killed the @Transactional function during execution (while the function was updating some rows). Because of that, the transaction was never committed to the database (MySQL).
Result - not able to update those rows from anywhere, but able to update the other rows of the table.
mysql> update some_table set some_value = "Hello World" where id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
Solution - killed all the MySQL processes using
sudo killall -9 mysqld
sudo killall -9 mysqld_safe (mysqld_safe restarts the server when an error occurs and logs runtime information to an error log; it was not required in my case)
I had this same error even though I was only updating one table with one entry, but after restarting MySQL, it was resolved.