MySQL isolation level repeatable reads and atomic increments in updates - mysql

For the last few hours I've been studying the documentation on the different SQL transaction isolation levels, found out that MySQL uses REPEATABLE READ by default, and did some experiments.
As far as I understand it, SELECTs in an ongoing transaction should see the same data unless that same transaction updates it.
However, I found a non-repeatable read while using atomic increments (e.g. update table set age=age+1 where id=1).
My test table consists of two columns, id and age, with a single row (1, 20).
Running the following commands in two sessions I get a non-repeatable read:
Transaction 1                             Transaction 2
---------------                           -------------------
begin;                                    begin;

select * from test;                       select * from test;
+----+-----+                              +----+-----+
| id | age |                              | id | age |
+----+-----+                              +----+-----+
|  1 |  20 |                              |  1 |  20 |
+----+-----+                              +----+-----+

update test set
  age=age+1 where id=1;

select * from test;                       select * from test;
+----+-----+                              +----+-----+
| id | age |                              | id | age |
+----+-----+                              +----+-----+
|  1 |  21 |                              |  1 |  20 |
+----+-----+                              +----+-----+

commit;
                                          select * from test;
                                          -- age = 20

                                          update test set age=age+1 where id=1;

                                          select * from test;
                                          -- Expected age=21
                                          -- got age=22 => Non-Repeatable Read
Why does the UPDATE use a different value than a SELECT would return? Imagine I did a SELECT, incremented the returned value by one, and then wrote it back with an UPDATE; I would get different results.

The UPDATE from the connection in the right-hand column blocks until the transaction on the left completes. If you want repeatable reads on both connections, you'll need to use BEGIN / COMMIT on both connections.

The proper way to run such code is to put FOR UPDATE at the end of the first SELECT. Without that, you are asking for exactly the trouble you found (see the sketch after the list below).
What I think happened is (in the right-hand connection):
the second SELECT on the right did a "repeatable read" and got only 20.
the UPDATE saw that 21 had been committed, so it bumped it to 22.
the third SELECT knew that you had changed the row, so it reread it, getting 22.
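A minimal sketch of that pattern on the same test table (the FOR UPDATE read returns the latest committed value and locks the row until COMMIT, so the value you read is exactly the value the UPDATE will increment):
begin;
select age from test where id=1 for update;  -- locking read: sees the latest
                                              -- committed age and locks the row
update test set age=age+1 where id=1;        -- increments exactly the value read
commit;                                       -- releases the row lock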

Related

Why deleted rows still show up in a subsequent select query in concurrently executed MySQL transaction?

I have the following memberships table with some initial data.
CREATE TABLE memberships (
    id SERIAL PRIMARY KEY,
    user_id INT,
    group_id INT
);
INSERT INTO memberships(user_id, group_id)
VALUES (1, 1), (2, 1), (1, 2), (2, 2);
I have two transactions (repeatable read isolation level) deleting all the rows whose group_id is 2 from the memberships table and retrieving the result using a select query, but the result I get is surprising.
time | transaction 1                                | transaction 2
-----+----------------------------------------------+----------------------------------------------
T1   | start transaction                            |
T2   | delete from memberships where group_id = 2   | start transaction
T3   |                                              | select * from memberships
     |                                              |   (this is to make MySQL believe that
     |                                              |    transaction 2 starts before transaction 1
     |                                              |    finishes)
T4   | select * from memberships                    |
     |   (this prints only rows whose group_id      |
     |    is 1)                                     |
T5   | commit                                       |
T6   |                                              | delete from memberships where group_id = 2
T7   |                                              | select * from memberships
     |                                              |   (surprisingly, this prints all rows
     |                                              |    including rows whose group_id is 2)
Below is the result I get from T7.
select * from memberships;
+----+---------+----------+
| id | user_id | group_id |
+----+---------+----------+
| 1 | 1 | 1 |
| 2 | 2 | 1 |
| 3 | 1 | 2 |
| 4 | 2 | 2 |
+----+---------+----------+
4 rows in set (0.00 sec)
This is really surprising since this select query is immediately preceded by a delete query which should remove all the rows whose group_id is 2.
I tried this on MySQL 5.7 and 8.0, and both of them have this issue.
I also tried this on Postgres 14 (also at the repeatable read isolation level); fortunately, Postgres doesn't have this issue. At timestamp T6, I get the error "could not serialize access due to concurrent delete".
Can someone explain to me:
Why MySQL has the issue I described above? How does MySQL implement deletion and how does it work with the MySQL MVCC scheme?
Why Postgres doesn't have the issue? How does Postgres implement deletion and how does it work with the Postgres MVCC implementation?
Thanks a lot!
The repeatable read behavior you are seeing is mentioned in the MySQL documentation:
This is the default isolation level for InnoDB. Consistent reads within the same transaction read the snapshot established by the first read.
This means that the repeatable snapshot which the second transaction sees throughout its transaction is established at T3. Keep in mind that repeatable read is the default isolation level for MySQL.
On Postgres, the default isolation level is not repeatable read but rather read committed. It is under repeatable read (which Postgres implements differently from MySQL) that the delete from the second transaction, interleaving with the first transaction, generates the serialization error. Since that is not the default, you have to set the isolation level explicitly:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
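For completeness, here is a sketch of two ways the second MySQL transaction could have observed the committed delete (assuming the same memberships table as above):
-- Option 1: a locking read bypasses the snapshot and reads the latest
-- committed row versions (and locks them):
select * from memberships for update;   -- returns only the group_id = 1 rows

-- Option 2: end the transaction; the next consistent read establishes a
-- fresh snapshot:
commit;
start transaction;
select * from memberships;              -- returns only the group_id = 1 rows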

MySQL ALTER TABLE taking long in small table

I have two tables in my scenario
table1, which has about 20 tuples
table2, which has about 3 million tuples
table2 has a foreign key referencing table1 "ID" column.
When I try to execute the following query:
ALTER TABLE table1 MODIFY vccolumn VARCHAR(1000);
It takes forever. Why is it taking that long? I have read that it should not, because it only has 20 tuples.
Is there any way to speed it up without server downtime? The query is also locking the table.
I would guess the ALTER TABLE is waiting on a metadata lock, and has not actually started altering anything.
What is a metadata lock?
When you run any query like SELECT/INSERT/UPDATE/DELETE against a table, it must acquire a metadata lock. Those queries do not block each other. Any number of queries of that type can have a metadata lock.
But a DDL statement like CREATE/ALTER/DROP/TRUNCATE/RENAME, or even CREATE TRIGGER or LOCK TABLES, must acquire an exclusive metadata lock. If any transaction still holds a metadata lock on the table, the DDL statement waits.
You can demonstrate this. Open two terminal windows and open the mysql client in each window.
Window 1: CREATE TABLE foo ( id int primary key );
Window 1: START TRANSACTION;
Window 1: SELECT * FROM foo; -- it doesn't matter that the table has no data
Window 2: DROP TABLE foo; -- notice it waits
Window 1: SHOW PROCESSLIST;
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
| 679 | root | localhost | test | Query | 0 | starting | show processlist | 0 | 0 |
| 680 | root | localhost | test | Query | 4 | Waiting for table metadata lock | drop table foo | 0 | 0 |
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
You can see the drop table waiting for the table metadata lock. Just waiting. How long will it wait? Until the transaction in window 1 completes. Eventually it will time out after lock_wait_timeout seconds (by default, this is set to 1 year).
Window 1: COMMIT;
Window 2: Notice it stops waiting, and it immediately drops the table.
So what can you do? Make sure there are no long-running transactions blocking your ALTER TABLE. Even a transaction that ran a quick SELECT against your table earlier will hold its metadata lock until the transaction commits.
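If you need to find out who is holding the metadata lock, one option is to look at performance_schema (a sketch, assuming MySQL 5.7+ with performance_schema enabled and the metadata-lock instrument turned on; on older versions SHOW PROCESSLIST plus inspection of open transactions is all you have):
-- Enable the metadata-lock instrument if it is not already on:
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME = 'wait/lock/metadata/sql/mdl';

-- Show who holds or waits for metadata locks on the table being altered:
SELECT ml.OBJECT_SCHEMA, ml.OBJECT_NAME, ml.LOCK_TYPE, ml.LOCK_STATUS,
       t.PROCESSLIST_ID, t.PROCESSLIST_INFO
FROM performance_schema.metadata_locks ml
JOIN performance_schema.threads t ON t.THREAD_ID = ml.OWNER_THREAD_ID
WHERE ml.OBJECT_NAME = 'table1';
You can also lower lock_wait_timeout in the session that runs the ALTER (for example SET SESSION lock_wait_timeout = 60;) so it gives up quickly instead of queuing behind the blocker and, in turn, blocking every new query against the table.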

Mysql repeatable read get other session's commit when use select ...for update

My table's definition is
CREATE TABLE auto_inc (
  id int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
At first there are four rows:
| id |
| 1 |
| 2 |
| 3 |
| 4 |
I opened session 1 and executed
#session 1
set transaction isolation level REPEATABLE READ
start transaction;
select * from auto_inc
This returned four rows: 1, 2, 3, 4. Then I opened another session 2 and executed
#session 2
insert into auto_inc(`id`) values(null)
and the insert succeeded. Back in session 1 I executed
#session 1
select * from auto_inc;#command 1
select * from auto_inc for update;#command 2
Command 1 returned four rows (1,2,3,4), but command 2 returned 1,2,3,4,5. Could anyone give me some clues as to why command 2 sees session 2's insertion?
Thanks in advance!
Why can session 2 insert new data?
Under REPEATABLE READ, the second SELECT is guaranteed to see the rows it saw at the first SELECT unchanged. New rows may be added by a concurrent transaction, but the existing rows cannot be deleted or changed.
https://stackoverflow.com/a/4036063/3020810
Why can session 1 see the insertion?
Under REPEATABLE READ, consistent reads within the same transaction read the snapshot established by the first read. If you want to see the "freshest" state of the database, use either the READ COMMITTED isolation level or a locking read, and a SELECT ... FOR UPDATE is a locking read.
Consistent Nonlocking Reads: https://dev.mysql.com/doc/refman/5.6/en/innodb-consistent-read.html
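A sketch of both alternatives on the auto_inc table from the question (the row lists assume session 2's insert has committed):
-- Alternative 1: a locking read inside the REPEATABLE READ transaction
-- reads the latest committed rows:
select * from auto_inc for update;   -- returns 1,2,3,4,5

-- Alternative 2: run the session at READ COMMITTED, so every consistent
-- read uses a fresh snapshot:
set session transaction isolation level READ COMMITTED;
start transaction;
select * from auto_inc;              -- returns 1,2,3,4,5
commit;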

Stored procedure hanging

A stored procedure hangs from time to time. Any advice?
BEGIN
    DECLARE bookId int;

    SELECT IFNULL(id,0) INTO bookId FROM products
    WHERE isbn=p_isbn
      and stoc>0
      and status='vizibil'
      and pret_ron=(SELECT MAX(pret_ron) FROM products
                    WHERE isbn=p_isbn
                      and stoc>0
                      and status='vizibil')
    ORDER BY stoc DESC
    LIMIT 0,1;

    IF bookId>0 THEN
        UPDATE products SET afisat='nu' WHERE isbn=p_isbn;
        UPDATE products SET afisat='da' WHERE id=bookId;
        SELECT bookId INTO obookId;
    ELSE
        SELECT id INTO bookId FROM products
        WHERE isbn=p_isbn
          and stoc=0
          and status='vizibil'
          and pret_ron=(SELECT MAX(pret_ron) FROM products
                        WHERE isbn=p_isbn
                          and stoc=0
                          and status='vizibil')
        LIMIT 0,1;

        UPDATE products SET afisat='nu' WHERE isbn=p_isbn;
        UPDATE products SET afisat='da' WHERE id=bookId;
        SELECT bookId INTO obookId;
    END IF;
END
When it hangs it does it on:
| 23970842 | username | sqlhost:54264 | database | Query | 65 | Sending data | SELECT IFNULL(id,0) INTO bookId FROM products
WHERE
isbn= NAME_CONST('p_isbn',_utf8'973-679-50 | 0.000 |
| 1133136 | username | sqlhost:52466 | database _emindb | Query | 18694 | Sending data | SELECT IFNULL(id,0) INTO bookId FROM products
WHERE
isbn= NAME_CONST('p_isbn',_utf8'606-92266- | 0.000 |
First, I'd like to mention the Percona toolkit; it's great for debugging deadlocks and hung transactions. Second, I would guess that at the time of the hang, there are multiple threads executing this same procedure. What we need to know is which locks are being acquired at the time of the hang. The MySQL command SHOW ENGINE INNODB STATUS gives you this information in detail. At the next 'hang', run this command.
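For example (the section names below are standard parts of the InnoDB status output):
SHOW ENGINE INNODB STATUS\G
-- In the output, the TRANSACTIONS section lists each transaction, the locks it
-- holds, and any lock it is waiting for; LATEST DETECTED DEADLOCK shows the
-- last deadlock InnoDB rolled back, if there was one.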
I almost forgot to mention the tool innotop, which is similar, but better: https://github.com/innotop/innotop
Next, I am assuming you are using the InnoDB engine. The default transaction isolation level of REPEATABLE READ may be too high in this situation because of range locking; you may consider trying READ COMMITTED for the body of the procedure (SET to READ COMMITTED at the beginning and back to REPEATABLE READ at the end).
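A minimal sketch of that suggestion, done in the session that calls the procedure (doing it in the caller side-steps any question of which SET statements are allowed inside a procedure body):
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- ... CALL the procedure here ...
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;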
Finally, and perhaps most importantly, notice that your procedure performs SELECTs and UPDATEs (in mixed order) on the same table, perhaps using the same p_isbn value. Imagine this procedure running concurrently: it is a perfect deadlock setup.

Fastest way to count the rows in any database table?

We assume that there is no primary key defined for a table T. In that case, how does one count all the rows in T quickly/efficiently for these databases: Oracle 11g, MySQL, MSSQL?
It seems that count(*) and count(column_name) can be slow and inaccurate respectively. The following seems to be the fastest and most reliable way to do it-
select count(rowid) from MySchema.TableInMySchema;
Can you tell me if the above statement also has any shortcomings? If it is good, do we have similar statements for MySQL and MSSQL?
Thanks in advance.
Source -
http://www.thewellroundedgeek.com/2007/09/most-people-use-oracle-count-function.html
count(column_name) is not inaccurate, it's simply something completely different from count(*).
The SQL standard defines count(column_name) as equivalent to count(*) where column_name IS NOT NULL. So the result is bound to be different if column_name is nullable.
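A tiny illustration of that difference, using a hypothetical table t with a nullable column a:
CREATE TABLE t (a INT);                    -- a is nullable
INSERT INTO t (a) VALUES (1), (NULL), (3);

SELECT COUNT(*) FROM t;                    -- 3: counts all rows
SELECT COUNT(a) FROM t;                    -- 2: counts only rows where a IS NOT NULL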
In Oracle (and possibly other DBMS as well), count(*) will use an available index on a not null column to count the rows (e.g. a PK index). So it will be just as fast.
Additionally there is nothing similar to the rowid in SQL Server or MySQL (in PostgreSQL it would be ctid).
Do use count(*). It's the best option to get the row count. Let the DBMS do any optimization in the background if adequate indexes are available.
Edit
A quick demo on how Oracle automatically uses an index if available and how that reduces the amount of work done by the database:
The setup of the test table:
create table foo (id integer not null, c1 varchar(2000), c2 varchar(2000));
insert into foo (id, c1, c2)
select lvl, c1, c1 from
(
select level as lvl, dbms_random.string('A', 2000) as c1
from dual
connect by level < 10000
);
That generates 9,999 rows, with each row filling up some space in order to make sure the table has a realistic size.
Now in SQL*Plus I run the following:
SQL> set autotrace traceonly explain statistics;
SQL> select count(*) from foo;
Execution Plan
----------------------------------------------------------
Plan hash value: 1342139204
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2740 (1)| 00:00:33 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| FOO | 9999 | 2740 (1)| 00:00:33 |
-------------------------------------------------------------------
Statistics
----------------------------------------------------------
181 recursive calls
0 db block gets
10130 consistent gets
0 physical reads
0 redo size
430 bytes sent via SQL*Net to client
420 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
As you can see, a full table scan is done on the table, which requires 10130 "IO operations" (I know that this is not the right term, but for the sake of the demo it should be a good enough explanation for someone who has never seen this before).
Now I create an index on that column and run the count(*) again:
SQL> create index i1 on foo (id);
Index created.
SQL> select count(*) from foo;
Execution Plan
----------------------------------------------------------
Plan hash value: 129980005
----------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
----------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| I1 | 9999 | 7 (0)| 00:00:01 |
----------------------------------------------------------------------
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
27 consistent gets
21 physical reads
0 redo size
430 bytes sent via SQL*Net to client
420 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
As you can see, Oracle did use the index on the (not null!) column and the amount of IO went drastically down (from 10130 to 27; not something I'd call "grossly inefficient").
The "physical reads" stem from the fact that the index was just created and was not yet in the cache.
I would expect other DBMS to apply the same optimizations.
In Oracle, COUNT(*) is the most efficient. Realistically, COUNT(rowid), COUNT(1), or COUNT('fuzzy bunny') are likely to be equally efficient. But if there is a difference, COUNT(*) will be more efficient.
I always use SELECT COUNT(1) FROM anything; instead of the asterisk...
Some people are of the opinion that MySQL uses the asterisk to invoke the query optimizer and skips any optimization when "1" is used as a static scalar...
IMHO this is straightforward, because you don't reference any column and it's clear that you only want to count all rows.
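If you want to check whether your MySQL version actually treats the two differently, it is easy to compare the plans yourself (a sketch; the MySQL manual states that InnoDB handles COUNT(*) and COUNT(1) in the same way, so expect identical plans):
EXPLAIN SELECT COUNT(*) FROM anything;
EXPLAIN SELECT COUNT(1) FROM anything;
-- If the two plans match, there is nothing to gain from writing COUNT(1).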