MySQL UPDATE operations on InnoDB occasionally time out

These are simple UPDATEs on very small tables in an InnoDB database. On occasion, an operation appears to lock and doesn't time out. Then every subsequent UPDATE ends with a timeout. The only recourse right now is to ask my ISP to restart the daemon. Every field in the table is used in queries, so all the fields are indexed, including a primary key.
I'm not sure what causes the initial lock, and my ISP doesn't provide enough information to diagnose the problem. They are also reluctant to give me access to any settings.
In a previous job I handled similar information, but instead of updating in place I would INSERT new rows. Periodically, a script would DELETE old records so that not so many rows needed to be filtered. Since the SELECTs used extrapolation techniques, having more than just the most recent data was actually useful. That setup was rock solid; it never hung, even under very heavy usage.
I have no problem replacing the UPDATE with an INSERT and periodic DELETEs, but it just seems so clunky. Has anyone encountered a similar problem and fixed it more elegantly?
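For reference, the pattern I mean looks roughly like this (a sketch only; the history table name and the retention window are hypothetical, not my actual schema):

-- Append a new reading instead of updating in place.
INSERT INTO locations_history (uuid, x, y)
VALUES ('6a5c7e9d-400f-c098-68bd-0a0c850b9c86', 43.630181733, -79.882244160);

-- Run periodically (e.g. from cron) to keep the table small.
DELETE FROM locations_history
WHERE updated < NOW() - INTERVAL 7 DAY;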
Current Configuration
max_heap_table_size: 16 MiB
count(*): 4 (not a typo, four records!)
innodb_buffer_pool_size: 1 GiB
Edit: DB is failing now; locations has 5 records. Sample error below.
MySQL query:
UPDATE locations SET x = "43.630181733", y = "-79.882244160", updated = NULL
WHERE uuid = "6a5c7e9d-400f-c098-68bd-0a0c850b9c86";
MySQL error:
Error #1205 - Lock wait timeout exceeded; try restarting transaction
locations
Field    Type         Null  Default
uuid     varchar(36)  No
x        double       Yes   NULL
y        double       Yes   NULL
updated  timestamp    No    CURRENT_TIMESTAMP
Indexes:
Keyname  Type     Cardinality  Field
PRIMARY  PRIMARY  5            uuid
x        INDEX    5            x
y        INDEX    5            y
updated  INDEX    5            updated

It's a known issue with InnoDB; see MySQL rollback with lost connection. I would welcome something like innodb_rollback_on_disconnect, as mentioned there. What's happening is that your connections are being dropped early, as can happen on the web, and when that occurs in the middle of a modifying query, the thread running it hangs while retaining its locks on the table.
Right now, accessing InnoDB directly from web services is vulnerable to these kinds of disconnects, and there's nothing you can do within FatCow other than ask them to restart the service for you. Your idea to use MyISAM and low priority is okay and will probably not have this problem, but if you want to stay with InnoDB, I would recommend an approach like the following.
1) Use stored procedures, so the transactions are guaranteed to run to completion and not hang in the event of a disconnect. It's a lot of work, but it improves reliability big time. A sketch follows this list.
2) Don't rely on autocommit; ideally set it to zero, and explicitly begin and end each transaction with START TRANSACTION and COMMIT.
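A minimal sketch combining both points, assuming the locations table from the question (the procedure name and parameters are mine; setting updated = NULL deliberately refreshes the timestamp, as in the original query):

DELIMITER //
CREATE PROCEDURE update_location(IN p_uuid VARCHAR(36),
                                 IN p_x DOUBLE,
                                 IN p_y DOUBLE)
BEGIN
  -- Runs entirely on the server, so a dropped client connection
  -- cannot strand the row lock between statements.
  START TRANSACTION;
  UPDATE locations
  SET x = p_x, y = p_y, updated = NULL
  WHERE uuid = p_uuid;
  COMMIT;
END //
DELIMITER ;

-- Usage:
CALL update_location('6a5c7e9d-400f-c098-68bd-0a0c850b9c86', 43.630181733, -79.882244160);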

Here is what happens when gap locks are not taken (for example, under READ COMMITTED): a concurrent insert slips into the locked range, and repeating the same locking read returns a phantom row.
transaction1> START TRANSACTION;
transaction1> SELECT * FROM t WHERE i > 20 FOR UPDATE;
+------+
| i |
+------+
| 21 |
| 25 |
| 30 |
+------+
transaction2> START TRANSACTION;
transaction2> INSERT INTO t VALUES(26);
transaction2> COMMIT;
transaction1> SELECT * FROM t WHERE i > 20 FOR UPDATE;
+------+
| i |
+------+
| 21 |
| 25 |
| 26 |
| 30 |
+------+
What is a gap lock?
A gap lock is a lock on the gap between index records. Thanks to
this gap lock, when you run the same query twice, you get the same
result, regardless of other sessions' modifications to that table.
This makes reads consistent and therefore makes replication
between servers consistent. If you execute SELECT * FROM t WHERE id > 1000
FOR UPDATE twice, you expect to get the same value twice.
To accomplish that, InnoDB locks all index records found by the
WHERE clause with an exclusive lock and the gaps between them with a
shared gap lock.
This lock doesn't only affect SELECT … FOR UPDATE. Here is an example with a DELETE statement:
transaction1 > SELECT * FROM t;
+------+
| age |
+------+
| 21 |
| 25 |
| 30 |
+------+
Start a transaction and delete the record 25:
transaction1 > START TRANSACTION;
transaction1 > DELETE FROM t WHERE age=25;
At this point we might assume that only the record 25 is locked. Then we try to insert other values from a second session:
transaction2 > START TRANSACTION;
transaction2 > INSERT INTO t VALUES(26);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(29);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(23);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(31);
Query OK, 1 row affected (0.00 sec)
After running the DELETE statement in the first session, not only has the affected index record been locked, but also the gaps before and after it with a shared gap lock, preventing other sessions from inserting data into that range.
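One consequence worth knowing (my addition, not from the quoted post): InnoDB does not take gap locks when a statement locks a row through an equality match on a unique index, so deleting by primary key avoids blocking neighboring inserts. A sketch, assuming the column were a unique key:

-- With a UNIQUE or PRIMARY KEY index on the column, an equality match
-- locks only the matching record, not the gaps around it:
DELETE FROM t WHERE pk = 25;
-- Concurrent INSERTs of 23, 26, 29, or 31 are no longer blocked.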

If your UPDATE is literally:
UPDATE locations SET updated = NULL;
then you are locking all rows in the table. If you abandon the transaction while holding locks on all rows, of course all rows will remain locked. InnoDB is not "unstable" in your environment; it would appear that it is doing exactly what you ask. You need to not abandon the open transaction.
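If you have any SQL access at all, you can confirm that an abandoned transaction is holding the locks before resorting to a daemon restart. A sketch against information_schema (available since MySQL 5.1 with the InnoDB plugin):

-- List open InnoDB transactions; an old trx_started with a NULL
-- trx_query is an idle transaction still holding its locks.
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.innodb_trx
ORDER BY trx_started;

-- With sufficient privileges, killing the offending connection
-- rolls the transaction back and releases its locks:
-- KILL <trx_mysql_thread_id>;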

Related

Can MySQL UPDATE locks starve due to continuous SHARE locks?

I have two different transactions: one uses read locks (FOR SHARE) for SELECT statements and the other uses write locks (FOR UPDATE).
Let's say they are trying to acquire the lock on the same row. Here's the scenario I'm trying to understand.
Say I have a continuous stream of requests using read locks, and occasionally I need to acquire the write lock.
Do these locks use a FIFO strategy to avoid starvation, or some other strategy, for example one where read locks keep being granted as long as they can be acquired while the write lock waits for all the reads to drain (including newly arriving ones)?
I suspect the second might be happening, but I'm not 100% sure.
I'm investigating an issue and couldn't find good documentation about this.
If you lack documentation, you can try an experiment:
Window 1:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from tablename for share;
+---------------------+
| ldt |
+---------------------+
| 1969-12-31 16:00:00 |
+---------------------+
1 row in set (0.00 sec)
Window 2:
mysql> update tablename set ldt=now();
(hangs, waiting for lock)
Window 3:
mysql> select * from tablename for share;
(hangs, also waiting for lock)
This indicates that the X-lock request is blocking subsequent S-lock requests.
50 seconds passes, and then:
Window 2:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
And then immediately:
Window 3:
mysql> select * from tablename for share;
+---------------------+
| ldt |
+---------------------+
| 1969-12-31 16:00:00 |
+---------------------+
1 row in set (41.14 sec)
The select in window 3 was blocked while waiting for the update in window 2. When the update timed out, then the select in window 3 was able to proceed.
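On MySQL 5.7+ you can also observe the wait chain directly rather than inferring it from timing, via the sys schema (a sketch; assumes the sys schema is installed):

-- Run from a fourth window while the others are blocked:
SELECT waiting_pid, waiting_query, blocking_pid, blocking_query
FROM sys.innodb_lock_waits;
-- Each row shows one blocked statement and the connection blocking it.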

Force-release a user-level lock in MySQL/MariaDB

My session management (Zebra Session) uses user-level locks to avoid race conditions between two requests in the same session. To start the session, GET_LOCK is used. After closing the session, RELEASE_LOCK is used.
MariaDB [planner_20201026]> select GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587aaaa', '5');
+-------------------------------------------------------------------+
| GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587aaaa', '5') |
+-------------------------------------------------------------------+
| 1 |
+-------------------------------------------------------------------+
1 row in set (0.000 sec)
MariaDB [planner_20201026]> select RELEASE_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587aaa');
+-----------------------------------------------------------------+
| RELEASE_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587aaa') |
+-----------------------------------------------------------------+
| NULL |
+-----------------------------------------------------------------+
1 row in set (0.000 sec)
Now, for a reason I have not yet identified, I am in a situation where the lock was not released properly. GET_LOCK finishes because of the timeout, and RELEASE_LOCK tells me that it cannot release the lock because (according to the documentation) it was established by another thread:
MariaDB [xyz]> select GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b', '5');
+-------------------------------------------------------------------+
| GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b', '5') |
+-------------------------------------------------------------------+
| 0 |
+-------------------------------------------------------------------+
1 row in set (5.015 sec)
MariaDB [xyz]> select RELEASE_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b');
+------------------------------------------------------------------+
| RELEASE_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b') |
+------------------------------------------------------------------+
| 0 |
+------------------------------------------------------------------+
1 row in set (0.000 sec)
The session is now more or less blocked/useless/doomed; each request takes TIMEOUT seconds extra.
Is there any way I can clear that lock, especially after a timeout?
You can only use RELEASE_LOCK() to release a lock acquired in the same thread. A thread has no privilege to force another thread to give up its lock.
That would be a pretty useless locking system if you could acquire a lock but any other thread could unilaterally force you to release it!
One way you could work around this is to call IS_USED_LOCK() to tell you which thread holds the lock. It returns the integer thread id of the holder, or NULL if the lock is not held by anyone.
Then if you have SUPER privilege, your thread can KILL that other thread, and this will force it to release its lock (as well as disconnecting that client). But that's a pretty rude thing to do.
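Put together, the workaround looks like this (a sketch; the thread id shown is made up, and killing another account's thread requires the SUPER privilege):

-- Find which connection currently holds the lock:
SELECT IS_USED_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b');
-- Suppose it returns 1234; killing that connection disconnects the
-- client and releases all of its locks:
KILL 1234;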
I have a feeling this is an XY Problem. You are searching for a solution to force locks held by other threads to be released, but this is a bad solution because it doesn't solve your real problem.
The real problem is:
Now I am in a situation because of a reason which I do not know yet where the lock was not released properly.
You need to think harder about this and design a system where you do not lose track of who has acquired the lock.
Hint: GET_LOCK(name, 0) may help. This returns immediately (that is, with zero seconds of timeout). If the lock can be acquired, it is acquired, and the return value of GET_LOCK is 1. If it was already held by another thread, the GET_LOCK still returns immediately, but with a return value of 0, telling you that it could not be acquired.
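For example (a sketch using the lock name from the question):

-- Probe the lock without blocking:
SELECT GET_LOCK('session_ebe210e9b39f1ad3a409763be60efebff587ac8b', 0);
-- 1 = acquired (you now hold it); 0 = another thread holds it; either
-- way the call returns immediately instead of burning the timeout.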

MySQL: How to hold lock and make other threads wait for an insert that hasn't happened yet

I'm confronted by a seemingly simple problem that has turned out to be pretty hard to figure out. We want to keep a record of every time we're presented with a piece of marketing (a lead) so we don't purchase it more than once in a 90 day period. Many lead providers can present us the same lead many times, often concurrently. We want to return an "accept" to exactly one lead provider.
So let's talk about the scenario that works: We have seen the material in the last 90 days and have a record on the table and there are 3 providers presenting the lead concurrently:
select count(id) from recent_leads where
last_seen_at >= '2019-10-11 00:00:00'
and email = 'yes@example.com' for update;
Thread1 arrives first, and acquires the lock. MySQL returns to Thread1:
+-----------+
| count(id) |
+-----------+
| 1 |
+-----------+
1 row in set (0.00 sec)
Thread1 issues a new insert:
insert into recent_leads (email, last_seen_at)
values ('yes@example.com', '2019-12-12 18:23:35');
Thread2 and Thread3 will block trying to execute the same statement until Thread1 commits or rolls back its transaction. Then Thread2 and Thread3 compete for the lock, and the same process repeats.
So that works as expected and we're happy with it. The wheels come off when there isn't a record.
Thread1, Thread2, and Thread3 all issue the same SQL as above. MySQL now returns this to all three threads immediately, whereas before, only one Thread would proceed:
+-----------+
| count(id) |
+-----------+
| 0 |
+-----------+
1 row in set (0.00 sec)
All three threads now attempt the insert. Two of them will get an error:
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
Is there a way we can get MySQL to behave like the first scenario all the time? Ideally, we want Thread2 and Thread3 to block.
Thank you,
-Jonathan
So we ended up dropping locking. Instead, we commit the row in a separate transaction, then select back all rows for the email except the one identified by last_insert_id(). If we find a row with a lower primary key, we assume another thread is already handling the request. This is nice because it's lock-free, which makes it a bit easier to debug.
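A sketch of that pattern, using the table from the question (the exact column list and the 90-day window are my reconstruction of the description above):

-- 1) Record the sighting unconditionally, in its own transaction:
INSERT INTO recent_leads (email, last_seen_at)
VALUES ('yes@example.com', NOW());
COMMIT;

-- 2) Check whether an earlier row for the same email beat us to it:
SELECT id FROM recent_leads
WHERE email = 'yes@example.com'
  AND id < LAST_INSERT_ID()
  AND last_seen_at >= NOW() - INTERVAL 90 DAY;
-- Any row returned means another thread owns this lead; back off.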

Why isn't REPEATABLE_READ on MariaDB producing phantom reads?

In my tests I have seen that, when using MariaDB, executing the same query twice under REPEATABLE_READ isolation doesn't produce phantom reads, when it should.
For instance:
I have two rows in the bank_account table:
ID | OWNER | MONEY
------------------------
1 | John | 1000
2 | Louis | 2000
The expected flow is shown below:
THREAD 1 (REPEATABLE_READ)                  THREAD 2 (READ_UNCOMMITTED)
    |                                           |
findAll() -> [1|John|1000, 2|Louis|2000]        |
    |                                           |
    |                                       updateAccount(1, +100)
    |                                       createAccount("Charles", 3000)
    |                                       flush()
    |                                           |
    |                                       commitTx()
    |                                           |_
    |
findAll() -> [1|John|1000, 2|Louis|2000,
              3|Charles|3000]
    |
commitTx()
    |_
To sum up, after Thread2.createAccount("Charles", 3000); and its flush, Thread1 would fetch all rows and get:
ID | OWNER | MONEY
------------------------
1 | John | 1000
2 | Louis | 2000
3 | Charles | 3000
Thread1 is protected from uncommitted changes, seeing [1, John, 1000] instead of [1, John, 1100], but it is supposed to see the newly inserted row.
However, what Thread1 retrieves in the second findAll() are the exact same results as the ones from the first findAll():
ID | OWNER | MONEY
------------------------
1  | John  | 1000
2  | Louis | 2000
There are no phantom reads. Why?
This is the code executed by Thread1:
@Transactional(readOnly = true, isolation = Isolation.REPEATABLE_READ)
@Override
public Iterable<BankAccount> findAllTwiceRepeteableRead() {
    printIsolationLevel();
    Iterable<BankAccount> accounts = baDao.findAll();
    logger.info("findAllTwiceRepeteableRead() 1 -> {}", accounts);
    // PAUSE HERE
    ...
}
I pause the execution where it says // PAUSE HERE.
Then Thread2 executes:
bankAccountService.addMoneyReadUncommited(ba.getId(), 200);
bankAccountService.createAccount("Carlos", 3000);
And then Thread1 resumes:
//PAUSE HERE
...
Iterable<BankAccount> accounts = baDao.findAll();
logger.info("findAllTwiceRepeteableRead() 2 -> {}", accounts);
UPDATE:
I've updated the thread transaction flows with what I'm really doing (I am committing the second transaction after the new row is inserted).
This matches what, according to Wikipedia, is a phantom read, and I think it is the very same scenario. So I still don't get why I'm not seeing the phantom read [3|Charles|3000]:
A phantom read occurs when, in the course of a transaction, two
identical queries are executed, and the collection of rows returned by
the second query is different from the first.
This can occur when range locks are not acquired on performing a
SELECT ... WHERE operation. The phantom reads anomaly is a special
case of Non-repeatable reads when Transaction 1 repeats a ranged
SELECT ... WHERE query and, between both operations, Transaction 2
creates (i.e. INSERT) new rows (in the target table) which fulfill
that WHERE clause.
Transaction 1                               Transaction 2
/* Query 1 */
SELECT * FROM users
WHERE age BETWEEN 10 AND 30;
                                            /* Query 2 */
                                            INSERT INTO users(id, name, age)
                                            VALUES (3, 'Bob', 27);
                                            COMMIT;
/* Query 1 */
SELECT * FROM users
WHERE age BETWEEN 10 AND 30;
COMMIT;
What you described as the actual behaviour is in fact the correct behaviour for REPEATABLE READ. The behaviour you are expecting can be achieved by using READ COMMITTED.
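For example (a sketch; run this in thread 1 before starting its transaction):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Each consistent read now uses a fresh snapshot, so the second
-- findAll() would see the newly committed row.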
As the MariaDB documentation on REPEATABLE READ says:
there is an important difference from the READ COMMITTED isolation
level: All consistent reads within the same transaction read the
snapshot established by the first read.
In thread 1, the first findAll() call, returning John and Louis, established the snapshot. The second findAll() simply reused the same snapshot.
This is further corroborated by a Percona blog post on Differences between READ-COMMITTED and REPEATABLE-READ transaction isolation levels:
In REPEATABLE READ, a 'read view' ( trx_no does not see trx_id >= ABC,
sees < ABB ) is created at the start of the transaction, and this
read view (consistent snapshot in Oracle terms) is held open for the
duration of the transaction. If you execute a SELECT statement at 5AM,
and come back in an open transaction at 5PM, when you run the same
SELECT, then you will see the exact same resultset that you saw at
5AM. This is called MVCC (multiple version concurrency control) and
it is accomplished using row versioning and UNDO information.
UPDATE
Caveat: the following references are from the MySQL documentation. However, since they relate to the InnoDB storage engine, I firmly believe they apply to MariaDB's InnoDB storage engine as well.
So, in the InnoDB storage engine under the REPEATABLE READ isolation level, non-locking SELECTs within the same transaction read from the snapshot established by the first read. No matter how many records are inserted, updated, or deleted by concurrent committed transactions, the reads stay consistent. Period.
This is the scenario described by the OP in the question. This would imply that a non-locking read in repeatable read isolation level would not be able to produce a phantom read, right? Well, not exactly.
As MySQL documentation on InnoDB Consistent Nonlocking Reads says:
The snapshot of the database state applies to SELECT statements within
a transaction, not necessarily to DML statements. If you insert or
modify some rows and then commit that transaction, a DELETE or UPDATE
statement issued from another concurrent REPEATABLE READ transaction
could affect those just-committed rows, even though the session could
not query them. If a transaction does update or delete rows committed
by a different transaction, those changes do become visible to the
current transaction. For example, you might encounter a situation like
the following:
SELECT COUNT(c1) FROM t1 WHERE c1 = 'xyz';
-- Returns 0: no rows match.
DELETE FROM t1 WHERE c1 = 'xyz';
-- Deletes several rows recently committed by other transaction.

SELECT COUNT(c2) FROM t1 WHERE c2 = 'abc';
-- Returns 0: no rows match.
UPDATE t1 SET c2 = 'cba' WHERE c2 = 'abc';
-- Affects 10 rows: another txn just committed 10 rows with 'abc' values.

SELECT COUNT(c2) FROM t1 WHERE c2 = 'cba';
-- Returns 10: this txn can now see the rows it just updated.
To sum up: if you use InnoDB in the REPEATABLE READ isolation mode, phantom reads may occur if data modification statements in concurrent committed transactions interact with data modification statements within the current transaction.
The linked Wikipedia article on isolation levels describes a general theoretical model. You always need to read the actual product manual to learn how a certain feature is implemented, because there may be differences.
In the Wikipedia article, only locks are described as a means of preventing phantom reads. InnoDB, however, uses a snapshot to prevent phantom reads in most cases, so there is no need to rely on locks.

"Lock wait timeout exceeded" without no process in processlist

I got the following error while trying to perform a batch deletion with a reasonable LIMIT:
query=(DELETE FROM `A` WHERE `id` < 123456 LIMIT 1000)
exception=(1205, 'Lock wait timeout exceeded; try restarting transaction')
And
mysql> SHOW OPEN TABLES like 'A';
+----------+----------------------+--------+-------------+
| Database | Table | In_use | Name_locked |
+----------+----------------------+--------+-------------+
| D | A | 3 | 0 |
+----------+----------------------+--------+-------------+
1 row in set (0.22 sec)
I suspect there might be a deadlock, but SHOW FULL PROCESSLIST outputs only itself. Where should I dig?
InnoDB, MySQL 5.5
This means there is an open transaction somewhere that should have been committed. Check other sessions or other applications that may operate on this table.
There can also be unclosed transactions left over after SELECTs. I solved (I hope) such a case by adding a COMMIT/ROLLBACK after standalone SELECTs (ones that are not part of a larger transaction).
The idea looked strange to me, so I spent some time on other attempts before trying it. And it helped.
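On MySQL 5.5 you can often find the culprit even when SHOW FULL PROCESSLIST looks empty: the offending connection is usually just sleeping with a transaction still open. A sketch using information_schema:

-- Open transactions whose connection is idle ("Sleep"): these hold
-- locks but run no query, so the processlist looks innocent.
SELECT t.trx_mysql_thread_id AS processlist_id,
       t.trx_started, t.trx_state
FROM information_schema.innodb_trx AS t
JOIN information_schema.processlist AS p ON p.id = t.trx_mysql_thread_id
WHERE p.command = 'Sleep'
ORDER BY t.trx_started;
-- KILL <processlist_id> rolls the transaction back and frees the lock.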