How does GET_LOCK('lockname', 0) of MariaDB/MySQL work?

I have found a use of GET_LOCK('lockname', 0) of MariaDB in a Java application that I am working on.
The timeout value used here is 0, so I suppose it should work in a non-blocking fashion. But after seeing some exceptions in the log file, I got the impression that it is still trying to get the lock using a default timeout. Adding a call to IS_FREE_LOCK('lockname') before the GET_LOCK call makes the application run smoothly.
My question is, what is the impact of using 0 as the timeout value here?

Have you determined the timeout? I can't reproduce the problem from the command line:
Session 1:
MariaDB [(none)]> SELECT GET_LOCK('lock1', 10);
+-----------------------+
| GET_LOCK('lock1', 10) |
+-----------------------+
| 1 |
+-----------------------+
1 row in set (0.000 sec)
Session 2:
MariaDB [(none)]> SELECT GET_LOCK('lock1', 0.5);
+------------------------+
| GET_LOCK('lock1', 0.5) |
+------------------------+
| 0 |
+------------------------+
1 row in set (0.500 sec)
MariaDB [(none)]> SELECT GET_LOCK('lock1', 0);
+----------------------+
| GET_LOCK('lock1', 0) |
+----------------------+
| 0 |
+----------------------+
1 row in set (0.000 sec)
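So with a 0 timeout, GET_LOCK returns immediately in both sessions. If the goal is a non-blocking attempt, a minimal sketch (the lock name is the question's placeholder) is to branch on the return value of GET_LOCK itself rather than calling IS_FREE_LOCK first, because a separate check can race with another session grabbing the lock in between:
SELECT GET_LOCK('lockname', 0);
-- 1    = lock acquired; do the protected work, then release it
-- 0    = another connection holds it; back off or retry, do not proceed
-- NULL = an error occurred (e.g. out of memory, or the thread was killed)
SELECT RELEASE_LOCK('lockname');
-- 1    = released
-- 0    = the lock exists but is owned by another connection
-- NULL = no such lock exists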

"lock wait timeout" has nothing to do with GET_LOCK. It only applies to InnoDB transactions. The default for innodb_lock_wait_timeout is 50 seconds. (In my opinion, that is much too high.)
InnoDB transactions should be designed to finish in very few seconds. Never keep a transaction open while waiting for user interaction; a potty break could lead to "lock wait timeout".
I see that it is a "BatchUpdate". Is this loading a lot of data? Is it coming from some slow source? Could you use autocommit and not put the entire load into a single transaction?
Another thing... If you start a transaction (BEGIN or START TRANSACTION), then do GET_LOCK('foo', 51), and 'foo' is not available, you are asking for a "lock wait timeout".
Please provide the bigger picture (BatchUpdate, reason for GET_LOCK, etc) so we can dig deeper.
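In the meantime, if failing fast is preferable to stalling for the default 50 seconds, the timeout can be lowered for the batch's session only (a sketch; the value 5 is arbitrary):
SET SESSION innodb_lock_wait_timeout = 5;
-- applies only to InnoDB row-lock waits in this connection; a statement that
-- cannot get its row locks within 5 seconds fails with error 1205 instead of
-- holding up the batch for 50 seconds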

LOCK TABLES table WRITE blocks my readings

I am at the REPEATABLE-READ level.
Why does it make me wait?
I understand that all reads (SELECTs) at any level are non-blocking.
What am I missing?
Session 1:
mysql> lock tables users write;
Query OK, 0 rows affected (0.00 sec)
Session 2:
mysql> begin;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from users where id = 1;
(hangs, waiting for lock)
Session 1:
mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)
Session 2:
mysql> select * from users where id = 1;
+----+-----------------+--------------------+------+---------------------+--------------------------------------------------------------+----------------+---------------------+---------------------+------------+
| id | name | email | rol | email_verified_at | password | remember_token | created_at | updated_at | deleted_at |
+----+-----------------+--------------------+------+---------------------+--------------------------------------------------------------+----------------+---------------------+---------------------+------------+
| 1 | Bella Lueilwitz | orlo19#example.com | NULL | 2022-08-01 17:22:29 | $2y$10$92IXUNpkjO0rOQ5byMi.Ye4oKoEa3Ro9llC/.og/at2.uheWG/igi | MvMlaX9TQj | 2022-08-01 17:22:29 | 2022-08-01 17:22:29 | NULL |
+----+-----------------+--------------------+------+---------------------+--------------------------------------------------------------+----------------+---------------------+---------------------+------------+
1 row in set (10.51 sec)
In this question, the opposite is true:
Why doesn't LOCK TABLES [table] WRITE prevent table reads?
You reference a question about MySQL 5.0 posted in 2013. The answer from that time suggests that the client was allowed to get a result that had been cached in the query cache. Since then, MySQL 5.6 and 5.7 disabled the query cache by default, and MySQL 8.0 removed the feature altogether. This is a good thing.
The documentation says:
WRITE lock:
Only the session that holds the lock can access the table. No other session can access it until the lock is released.
This was true in the MySQL 5.0 days too, but the query cache allowed some clients to get around it. But I guess it wasn't reliable even then, because if the client ran a query that happened not to be cached, I suppose it would revert to the documented behavior. Anyway, it's moot, because all currently supported versions of MySQL should have the query cache disabled or removed.
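If you want to confirm what your own server is doing, the cache settings can be listed directly; this is a quick check, not specific to the users table:
SHOW VARIABLES LIKE 'query_cache%';
-- 5.6/5.7: shows query_cache_type (OFF by default) and related settings
-- 8.0:     returns an empty set, because the feature was removed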

Can MySQL UPDATE locks starve due to continuous SHARE locks?

I have 2 different transactions where one is using read locks (FOR SHARE) for SELECT statements and the other uses write locks (FOR UPDATE).
Let's say they are trying to acquire the lock on the same row. Here's the scenario I'm trying to understand what's happening.
Let's say I have continuous stream of requests using the read locks and occasionally I need to acquire the write lock.
Do these locks use a FIFO strategy to avoid starvation, or some other strategy, such as read locks being granted for as long as they can be acquired while the write lock waits for all the reads (including new ones) to drain?
I suspect the second might be happening, but I'm not 100% sure.
I'm investigating an issue and couldn't find a good documentation about this.
If you lack documentation, you can try an experiment:
Window 1:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from tablename for share;
+---------------------+
| ldt |
+---------------------+
| 1969-12-31 16:00:00 |
+---------------------+
1 row in set (0.00 sec)
Window 2:
mysql> update tablename set ldt=now();
(hangs, waiting for lock)
Window 3:
mysql> select * from tablename for share;
(hangs, also waiting for lock)
This indicates that the X-lock request is blocking subsequent S-lock requests.
50 seconds pass, and then:
Window 2:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
And then immediately:
Window 3:
mysql> select * from tablename for share;
+---------------------+
| ldt |
+---------------------+
| 1969-12-31 16:00:00 |
+---------------------+
1 row in set (41.14 sec)
The select in window 3 was blocked while waiting for the update in window 2. When the update timed out, then the select in window 3 was able to proceed.
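If you want to watch that queue while the experiment is running, the waiters can be listed directly; a sketch assuming MySQL 5.7 or 8.0 (where the sys schema ships by default), run from a fourth window while windows 2 and 3 are hanging:
Window 4:
mysql> SELECT waiting_pid, waiting_query, blocking_pid, wait_started
    -> FROM sys.innodb_lock_waits;
(one row per blocked statement, showing which connection it is waiting on)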

Force-release a user-level lock in MySQL/MariaDB

My session management (Zebra Session) uses user-level locks to avoid race conditions between two requests in the same session. To start the session, GET_LOCK is used. After closing the session, RELEASE_LOCK is used.
MariaDB [planner_20201026]> select GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587aaaa', '5');
+-------------------------------------------------------------------+
| GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587aaaa', '5') |
+-------------------------------------------------------------------+
| 1 |
+-------------------------------------------------------------------+
1 row in set (0.000 sec)
MariaDB [planner_20201026]> select RELEASE_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587aaa');
+-----------------------------------------------------------------+
| RELEASE_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587aaa') |
+-----------------------------------------------------------------+
| NULL |
+-----------------------------------------------------------------+
1 row in set (0.000 sec)
Now I am in a situation, for a reason I do not yet know, where the lock was not released properly. GET_LOCK finishes because of the timeout, and RELEASE_LOCK tells me that it cannot release the lock because (according to the documentation) it was established by another thread:
MariaDB [xyz]> select GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b', '5');
+-------------------------------------------------------------------+
| GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b', '5') |
+-------------------------------------------------------------------+
| 0 |
+-------------------------------------------------------------------+
1 row in set (5.015 sec)
MariaDB [xyz]> select RELEASE_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b');
+------------------------------------------------------------------+
| RELEASE_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b') |
+------------------------------------------------------------------+
| 0 |
+------------------------------------------------------------------+
1 row in set (0.000 sec)
The session is now more or less blocked/useless/doomed; each request takes TIMEOUT seconds extra.
Is there any way I can clear that lock, especially after a timeout?
You can only use RELEASE_LOCK() to release a lock acquired in the same thread. A thread has no privilege to force another thread to give up its lock.
That would be a pretty useless locking system if you could acquire a lock but any other thread could unilaterally force you to release it!
One way you could work around this is to call IS_USED_LOCK() to tell you which thread holds the lock. It returns the integer thread id of the holder, or NULL if the lock is not held by anyone.
Then if you have SUPER privilege, your thread can KILL that other thread, and this will force it to release its lock (as well as disconnecting that client). But that's a pretty rude thing to do.
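A minimal sketch of that workaround, using the lock name from the question; the thread id 1234 below is purely illustrative, and KILL requires the appropriate privilege and disconnects the other client:
SELECT IS_USED_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b');
-- returns the connection id of the holder, or NULL if nobody holds the lock
KILL 1234;
-- replace 1234 with the id returned above; the victim connection is closed
-- and its user-level locks are released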
I have a feeling this is an XY Problem. You are searching for a solution to force locks held by other threads to be released, but this is a bad solution because it doesn't solve your real problem.
The real problem is:
Now I am in a situation because of a reason which I do not know yet where the lock was not released properly.
You need to think harder about this and design a system where you do not lose track of who has acquired the lock.
Hint: GET_LOCK(name, 0) may help. This returns immediately (that is, with zero seconds of timeout). If the lock can be acquired, it is acquired, and the return value of GET_LOCK is 1. If it was already held by another thread, the GET_LOCK still returns immediately, but with a return value of 0, telling you that it could not be acquired.

MySQL InnoDB "SELECT FOR UPDATE" - SKIP LOCKED equivalent

Is there any way to skip "locked rows" when we make "SELECT FOR UPDATE" in MySQL with an InnoDB table?
E.g.: terminal t1
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select id from mytable ORDER BY id ASC limit 5 for update;
+-------+
| id |
+-------+
| 1 |
| 15 |
| 30217 |
| 30218 |
| 30643 |
+-------+
5 rows in set (0.00 sec)
mysql>
At the same time, terminal t2:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select id from mytable where id>30643 order by id asc limit 2 for update;
+-------+
| id |
+-------+
| 30939 |
| 31211 |
+-------+
2 rows in set (0.01 sec)
mysql> select id from mytable order by id asc limit 5 for update;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
mysql>
So if I launch a query forcing it to select other rows, it's fine.
But is there a way to skip the locked rows?
I guess this must be a common problem in concurrent processing, but I did not find any solution.
EDIT:
In reality, my different concurrent processes are doing something apparently really simple:
take the first rows (which don't contain a specific flag - e.g.: "WHERE myflag_inUse!=1").
Once I get the result of my "select for update", I update the flag and commit the rows.
So I just want to select the rows which are not already locked and where myflag_inUse!=1...
The following link helps me to understand why I get the timeout, but not how to avoid it:
MySQL 'select for update' behaviour
mysql> SHOW VARIABLES LIKE "%version%";
+-------------------------+-------------------------+
| Variable_name | Value |
+-------------------------+-------------------------+
| innodb_version | 5.5.46 |
| protocol_version | 10 |
| slave_type_conversions | |
| version | 5.5.46-0ubuntu0.14.04.2 |
| version_comment | (Ubuntu) |
| version_compile_machine | x86_64 |
| version_compile_os | debian-linux-gnu |
+-------------------------+-------------------------+
7 rows in set (0.00 sec)
MySQL 8.0 introduced support for both SKIP LOCKED and NOWAIT.
SKIP LOCKED is useful for implementing a job queue (a.k.a. batch queue) so that you can skip over rows that are already locked by a concurrent transaction.
NOWAIT is useful for avoiding waiting until a concurrent transaction releases the locks that we are also interested in locking.
Without NOWAIT, we either have to wait until the locks are released (at commit or rollback time by the transaction that currently holds them) or the lock acquisition times out. NOWAIT acts as a lock timeout with a value of 0.
For more details, see the MySQL documentation on SKIP LOCKED and NOWAIT.
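A minimal sketch of NOWAIT, assuming MySQL 8.0 and the table from the question:
SELECT id FROM mytable ORDER BY id ASC LIMIT 5 FOR UPDATE NOWAIT;
-- if any selected row is already locked by another transaction, this fails
-- immediately with an error instead of waiting innodb_lock_wait_timeout seconds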
This appears to now exist in MySQL starting in 8.0.1:
https://mysqlserverteam.com/mysql-8-0-1-using-skip-locked-and-nowait-to-handle-hot-rows/
Starting with MySQL 8.0.1 we are introducing the SKIP LOCKED modifier
which can be used to non-deterministically read rows from a table
while skipping over the rows which are locked. This can be used by
our booking system to skip orders which are pending. For example:
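The post goes on to illustrate this with its booking tables; a minimal sketch of the same pattern using the question's own table and flag column instead (assuming MySQL 8.0.1 or later):
START TRANSACTION;
SELECT id
  FROM mytable
 WHERE myflag_inUse != 1
 ORDER BY id ASC
 LIMIT 5
   FOR UPDATE SKIP LOCKED;   -- rows locked by other transactions are silently skipped
-- mark whatever rows came back as in use (the id list here is just the one
-- returned in the question's first terminal), then commit to release the locks
UPDATE mytable SET myflag_inUse = 1 WHERE id IN (1, 15, 30217, 30218, 30643);
COMMIT;                      -- other workers never waited on these rows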
However, I think that version is not necessarily production ready.
Unfortunately, it seems that there is no way to skip the locked row in a select for update so far.
It would be great if we could use something like the Oracle 'FOR UPDATE SKIP LOCKED'.
In my case, the queries launched in parallel are exactly the same, and contain a WHERE clause and a GROUP BY over several million rows. Because the queries take between 20 and 40 seconds to run, that was (as I already knew) a big part of the problem.
The only solution I saw, temporary and not the best, was to move some (millions of) rows that I would not directly use, in order to reduce the time the query takes.
So I will still have the same behavior, but I will wait less time...
I was expecting a way to not select the locked row in the select.
I don't mark this as an answer, so if a new clause from mysql is added (or discovered), I can accept it later...
I'm sorry, but I think you are approaching the problem from the wrong angle. If your user wants to list records from a table that satisfy certain selection criteria, then your query should return them all, or return with an error message and provide no resultset whatsoever. But the query should not return only a subset of the results, leading the user to believe that he has all the matching records.
The issue should be addressed by making sure that your application locks as few rows as possible, for as little time as possible.
Walk through the table in chunks of the PRIMARY KEY, using some suitable LIMIT so you are not looking at "too many" rows at once.
By using the PK, you are ordering things in a predictable way; this virtually eliminates deadlocks.
By using LIMIT, you will keep from hogging too much at once. The LIMIT should be embodied as a range over the PK. This makes it quite clear if two threads are about to step on each other.
More details are (indirectly) in my blog on big deletes.
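A minimal sketch of that chunking pattern, reusing the question's table and flag column; the PK range boundaries here are arbitrary:
-- each worker processes one PRIMARY KEY range per (short) transaction
START TRANSACTION;
SELECT id
  FROM mytable
 WHERE id >= 30000 AND id < 31000    -- the current chunk: a range over the PK
   AND myflag_inUse != 1
   FOR UPDATE;
UPDATE mytable SET myflag_inUse = 1
 WHERE id >= 30000 AND id < 31000
   AND myflag_inUse != 1;
COMMIT;
-- then advance to the next range (31000 to 32000, and so on)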

"Lock wait timeout exceeded" without no process in processlist

I got the following error while trying to perform a bulk deletion with a reasonable LIMIT:
query=(DELETE FROM `A` WHERE `id` < 123456 LIMIT 1000)
exception=(1205, 'Lock wait timeout exceeded; try restarting transaction')
And
mysql> SHOW OPEN TABLES like 'A';
+----------+----------------------+--------+-------------+
| Database | Table | In_use | Name_locked |
+----------+----------------------+--------+-------------+
| D | A | 3 | 0 |
+----------+----------------------+--------+-------------+
1 row in set (0.22 sec)
I see that there might be a deadlock, but SHOW FULL PROCESSLIST shows only itself. Where should I dig?
InnoDB, MySQL 5.5
This means there is a transaction somewhere that has not been committed. Check other sessions or other applications that may be operating on this table.
There can also be transactions left open after plain SELECTs. I solved (I hope) such a case by adding a COMMIT/ROLLBACK after standalone SELECTs (ones that are not part of some larger transaction).
The idea looked strange to me, so I spent some time on other attempts before I tried it. And it helped.
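One way to find the transaction that is actually holding things up on MySQL 5.5 is the InnoDB transaction table (a sketch; INFORMATION_SCHEMA.INNODB_TRX is available there when InnoDB is enabled):
SELECT trx_mysql_thread_id, trx_started, trx_state, trx_query
  FROM INFORMATION_SCHEMA.INNODB_TRX;
-- an old trx_started combined with trx_query = NULL usually means an idle,
-- uncommitted transaction; its trx_mysql_thread_id matches the Id column in
-- SHOW PROCESSLIST, where such a connection typically just shows 'Sleep'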