Long-running SELECT with UPDATE locks database - MySQL

I have a select statement that takes a long time to run (around 5 minutes). Because of this I only run the query every hour and save the results to a metadata table. Here is the query:
UPDATE `metadata` SET `value` = (select count(`id`) from `logs`) WHERE `key` = 'logs'
But this is the issue I have been having (and correct me if I am wrong). A select statement does not lock the database, but an update statement does. Since I am running this long-running select query inside of the update query, it ends up locking the DB for about 5 minutes.
Is there a better way to do this: run the select statement and save the result to a variable, and then, once that is done, run the update query (as sketched below)? That way it won't lock the DB.
Also note I don't care about dirty data.
The database has over 300 million rows and has data being added to it constantly.
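For illustration, the two-statement version the question asks about might look like this (a sketch; it assumes both statements run on the same connection so the @logs_count user variable survives between them):
-- Step 1: the slow count, run as a plain SELECT with no UPDATE held open around it
SELECT COUNT(`id`) INTO @logs_count FROM `logs`;
-- Step 2: a quick single-row UPDATE using the saved value
UPDATE `metadata` SET `value` = @logs_count WHERE `key` = 'logs';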

Just to avoid the possibility that the server disconnects between the statement that gets the count and the statement that stores it, leaving your variable unset: beginning in MariaDB 10.1 you can run multiple statements in a single request by putting them in a compound block:
begin not atomic
declare `logs_count` int;
select count(*) into `logs_count` from `logs`;
update `metadata` set `value`=`logs_count` where `key`='logs';
end

I have found that setting the isolation level to READ UNCOMMITTED before the query runs seems to work, and the query runs a whole lot faster. This should keep the UPDATE's inner SELECT from taking shared locks while it executes; we then restore the default isolation level after it has completed.
(Please correct me if I have done something incorrect here)
-- Dirty reads are acceptable here (per the question), so relax the isolation level for the session
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
UPDATE `metadata` SET `value` = (select count(`id`) from `logs`) WHERE `key` = 'logs';
-- Restore the default isolation level once the update has finished
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;

Related

MySQL SET user variable locks rows and doesn't obey REPEATABLE READ

I've encountered undocumented behavior of "SET @my_var = (SELECT ..)" inside a transaction:
The first issue is that it locks rows (depending on whether a unique index is involved or not).
Example -
START TRANSACTION;
SET @my_var = (SELECT id from table_name where id = 1);
select trx_rows_locked from information_schema.innodb_trx;
ROLLBACK;
The output is 1 row locked, which is strange; it shouldn't acquire a read lock.
Also, the equivalent statement SELECT id INTO @my_var won't produce a lock.
It can lead to a deadlock if an UPDATE follows the SET statement (for 2 concurrent requests), as sketched below.
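A sketch of how that deadlock can arise (col is a hypothetical column; each session first takes the shared lock from the SET and then asks for the exclusive lock the UPDATE needs):
-- Session A
START TRANSACTION;
SET @a = (SELECT id FROM table_name WHERE id = 1);   -- shared lock on row 1
-- Session B
START TRANSACTION;
SET @b = (SELECT id FROM table_name WHERE id = 1);   -- shared lock on row 1
-- Session A
UPDATE table_name SET col = col + 1 WHERE id = 1;    -- needs an exclusive lock, waits on B's shared lock
-- Session B
UPDATE table_name SET col = col + 1 WHERE id = 1;    -- needs an exclusive lock, waits on A's shared lock: deadlock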
In REPEATABLE READ -
The SELECT inside the SET statement gets a new snapshot of the data, instead of using the original snapshot.
SESSION 1:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START transaction;
SELECT data FROM my_table where id = 2; # Output : 2
SESSION 2:
UPDATE my_table set data = 3 where id = 2 ;
SESSION 1:
SET @data = (SELECT data FROM my_table where id = 2);
SELECT @data; # Output : 3, instead of 2
ROLLBACK;
However, I would expect @data to contain the original value from the first snapshot (2).
If I use SELECT data INTO @data FROM my_table WHERE id = 2, then I get the expected value, 2.
Do you have any idea what causes the different behavior of SET @var = (SELECT ..) compared to SELECT data INTO @var FROM ..?
Thanks.
Correct — when you SELECT in a context where you're copying the results into a variable or a table, it implicitly works as if you had used a locking read SELECT ... FOR SHARE.
This means it places a shared lock on the rows examined, and it also means that the statement reads only the most recently committed version of rows, as if your transaction were in READ-COMMITTED isolation level.
I'm not sure why SELECT ... INTO @var does not do the same kind of implicit locking in MySQL 8.0. My memory is that in older versions of MySQL it did do locking in that query form. I've searched the manual for an explanation but I can't find one yet.
Other cases that implicitly lock the rows examined by SELECT, and therefore read data as if your transaction were READ-COMMITTED:
INSERT INTO <table> SELECT ...
Multi-table UPDATE or DELETE: even if you don't update or delete a given table, the rows joined from it become locked.
SELECT inside a trigger
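One way to see the difference for yourself, reusing the information_schema.innodb_trx probe from the question (a sketch; table_name and id are the question's hypothetical names):
START TRANSACTION;
SET @my_var = (SELECT id FROM table_name WHERE id = 1);
SELECT trx_rows_locked FROM information_schema.innodb_trx;   -- 1 row locked, as observed in the question
SELECT id INTO @my_var2 FROM table_name WHERE id = 1;
SELECT trx_rows_locked FROM information_schema.innodb_trx;   -- still 1: the INTO form added no lock
ROLLBACK;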

Understanding InnoDB Repeatable Read isolation level snapshots

I have the following table:
CREATE TABLE `accounts` (
`name` varchar(50) NOT NULL,
`balance` int NOT NULL,
PRIMARY KEY (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
And it has two accounts in it. "Bob" has a balance of 100. "Jim" has a balance of 200.
I run this query to transfer 50 from Jim to Bob:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN;
SELECT * FROM accounts;
SELECT SLEEP(10);
SET @bobBalance = (SELECT balance FROM accounts WHERE name = 'bob' FOR UPDATE);
SET @jimBalance = (SELECT balance FROM accounts WHERE name = 'jim' FOR UPDATE);
UPDATE accounts SET balance = @bobBalance + 50 WHERE name = 'bob';
UPDATE accounts SET balance = @jimBalance - 50 WHERE name = 'jim';
COMMIT;
While that query is sleeping, I run the following query in a different session to set Jim's balance to 500:
UPDATE accounts SET balance = 500 WHERE name = 'jim';
What I thought would happen is that this would cause a bug. The transaction would set Jim's balance to 150, because the first read in the transaction (before the SLEEP) would establish a snapshot in which Jim's balance is 200, and that snapshot would be used in the later query to get Jim's balance. So we would subtract 50 from 200 even though Jim's balance has actually been changed to 500 by the other query.
But that's not what happens. Actually, the end result is correct. Bob has 150 and Jim has 450. But I don't understand why this is.
The MySQL documentation says about Repeatable Read:
This is the default isolation level for InnoDB. Consistent reads within the same transaction read the snapshot established by the first read. This means that if you issue several plain (nonlocking) SELECT statements within the same transaction, these SELECT statements are consistent also with respect to each other. See Section 15.7.2.3, “Consistent Nonlocking Reads”.
So what am I missing here? Why does it seem like the SELECT statements in the transaction are not all using a snapshot established by the first SELECT statement?
The repeatable-read behavior only works for non-locking SELECT queries. It reads from the snapshot established by the first query in the transaction.
But any locking SELECT query reads the latest committed version of the row, as if you had started your transaction in READ-COMMITTED isolation level.
A SELECT is implicitly a locking read if it's involved in any kind of SQL statement that modifies data.
For example:
INSERT INTO table2 SELECT * FROM table1 WHERE ...;
The above locks examined rows in table1, even though the statement is just copying them to table2.
SET @myvar = (SELECT ... FROM table1 WHERE ...);
This is also copying a value from table1, into a variable. It locks the examined row in table1.
Likewise SELECT statements that are invoked in a trigger, or as part of a multi-table UPDATE or DELETE, and so on. Anytime the SELECT is part of a larger statement that modifies any data (in a table or in a variable), it locks the rows examined by the SELECT.
And therefore it's a locking read, and behaves like an UPDATE with respect to which row version it reads.
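To make the difference concrete with the accounts table from the question above (a sketch; run session 2 while session 1 sits between its reads):
-- Session 1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT balance FROM accounts WHERE name = 'jim';              -- 200, establishes the snapshot
-- Session 2 (runs and commits while session 1 is still open)
UPDATE accounts SET balance = 500 WHERE name = 'jim';
-- Session 1
SELECT balance FROM accounts WHERE name = 'jim';              -- still 200: non-locking read, same snapshot
SELECT balance FROM accounts WHERE name = 'jim' FOR UPDATE;   -- 500: locking read sees the latest committed row
COMMIT;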

Is it possible to create a Lost Update with MySQL Workbench

I want to create a Lost Update with MySQL Workbench. Therefore, I have 2 connections to my database and 2 transactions. I also changed the transaction isolation level to READ UNCOMMITTED, but transaction A uses the current data when the update statement starts. It never uses the data from the first select statement, and with SELECT ... FOR UPDATE transaction B is blocked.
Transaction A (starts first):
Start transaction;
SELECT * FROM table;
Select sleep(10); -- <- Transaction B executes in this 10 seconds
UPDATE table SET Number = Number + 10 WHERE FirstName = "Name1";
COMMIT;
Transaction B:
Start transaction;
UPDATE table SET Number = Number - 5 WHERE FirstName = "Name1";
COMMIT;
Is it possible to create this failure with MySQL Workbench? What's wrong with my code?
Thanks for your help
The UPDATE in A works with the data as it is after the sleep, when the UPDATE actually executes. The SELECT before it does nothing for the later UPDATE in that transaction; its result is never carried into the write.
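A sketch of a pattern that does produce the lost update: carry the stale value through a user variable, so the later UPDATE writes it back blindly instead of reading the current row (table and column names as in the question):
-- Transaction A
START TRANSACTION;
SELECT Number INTO @old FROM `table` WHERE FirstName = 'Name1';  -- non-locking read of the old value
SELECT SLEEP(10);                                                -- transaction B subtracts 5 and commits here
UPDATE `table` SET Number = @old + 10 WHERE FirstName = 'Name1'; -- writes back the stale value plus 10
COMMIT;                                                          -- B's -5 has been overwritten: a lost update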

MySQL row lock and atomic updates

I am building a "poor man's queuing system" using MySQL. It's a single table containing jobs that need to be executed (the table name is queue). I have several processes on multiple machines whose job it is to call the fetch_next2 sproc to get an item off of the queue.
The whole point of this procedure is to make sure that we never let 2 clients get the same job. I thought that by using the SELECT .. LIMIT 1 FOR UPDATE would allow me to lock a single row so that I could be sure it was only updated by 1 caller (updated such that it no longer fit the criteria of the SELECT being used to filter jobs that are "READY" to be processed).
Can anyone tell me what I'm doing wrong? I just had some instances where the same job was given to 2 different processes so I know it doesn't work properly. :)
CREATE DEFINER=`masteruser`@`%` PROCEDURE `fetch_next2`()
BEGIN
SET @id = (SELECT q.Id FROM queue q WHERE q.State = 'READY' LIMIT 1 FOR UPDATE);
UPDATE queue
SET State = 'PROCESSING', Attempts = Attempts + 1
WHERE Id = @id;
SELECT Id, Payload
FROM queue
WHERE Id = @id;
END
Code for the answer:
CREATE DEFINER=`masteruser`@`%` PROCEDURE `fetch_next2`()
BEGIN
SET @id := 0;
-- The subquery assigns the claimed Id to @id inside the UPDATE itself, so claiming
-- the row and capturing its Id happen in one atomic statement
UPDATE queue SET State='PROCESSING', Id=(SELECT @id := Id) WHERE State='READY' LIMIT 1;
# You can check IF @id != 0 here
SELECT Id, Payload
FROM queue
WHERE Id = @id;
END
The problem with what you are doing is that there is no atomic grouping for the operations. You are using the SELECT ... FOR UPDATE syntax. The docs say that it blocks other transactions "from reading the data in certain transaction isolation levels", but not in all levels. Between your first SELECT and UPDATE, another SELECT can occur from another thread. Are you using MyISAM or InnoDB? MyISAM does not support transactions or row locks, so FOR UPDATE would not help there at all.
The easiest way to make sure this works properly is to lock the table.
[Edit] The method I describe right here is more time consuming than using the Id=(SELECT @id := Id) method in the above code.
Another method would be to do the following:
Have a column that is normally set to 0.
Do an "UPDATE ... SET ColName=UNIQ_ID WHERE ColName=0 LIMIT 1. That will make sure only 1 process can update that row, and then get it via a SELECT afterwards. (UNIQ_ID is not a MySQL feature, just a variable)
If you need a unique ID, you can use a table with auto_increment just for that.
You can also kind of do this with transactions. If you start a transaction on a table, run UPDATE foobar SET LockVar=19 WHERE LockVar=0 LIMIT 1; from one thread, and do the exact same thing on another thread, the second thread will wait for the first thread to commit before it gets its row. That may end up being a complete table blocking operation though.
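A sketch of the claim-column idea (ClaimedBy is a hypothetical extra column defaulting to 0, and CONNECTION_ID() is used here only as a convenient per-session unique value):
-- Claim at most one READY row for this connection in a single atomic statement
UPDATE queue
SET State = 'PROCESSING', Attempts = Attempts + 1, ClaimedBy = CONNECTION_ID()
WHERE State = 'READY' AND ClaimedBy = 0
LIMIT 1;
-- Fetch whatever this connection just claimed (empty result if no row was READY)
SELECT Id, Payload
FROM queue
WHERE State = 'PROCESSING' AND ClaimedBy = CONNECTION_ID();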

MySql InnoDB increment and return a field in a transaction

In my application I want to take a value from an InnoDB table, and then increment and return it within a single transaction. I also want to lock the row that I am going to update, in order to prevent another session from changing the value during the transaction. I wrote this query:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT @no:=`value` FROM `counter` where name='booking' FOR UPDATE;
UPDATE `counter` SET `value` = `value` + 1 where `name`='booking';
SELECT @no;
COMMIT;
I want to know if the isolation level is right and whether there is any need for the FOR UPDATE clause. Am I doing it right?
Yes, what you are doing is perfectly fine.
The lines below are quoted directly from the MySQL documentation:
"If you query data and then insert or update related data within the same transaction, the regular SELECT statement does not give enough protection.
..
To implement reading and incrementing the counter, first perform a locking read of the counter using FOR UPDATE, and then increment the counter. For example:
SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;
A SELECT ... FOR UPDATE reads the latest available data, setting exclusive locks on each row it reads. Thus, it sets the same locks a searched SQL UPDATE would set on the rows.
Reference:
https://dev.mysql.com/doc/refman/5.6/en/innodb-locking-reads.html
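For completeness, a one-statement variant of the same counter pattern is possible with LAST_INSERT_ID(expr); the UPDATE itself takes the exclusive row lock, so no separate FOR UPDATE read is needed (a sketch, using the table from the question):
UPDATE `counter` SET `value` = LAST_INSERT_ID(`value` + 1) WHERE `name` = 'booking';
SELECT LAST_INSERT_ID();   -- returns the incremented value for this connection only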