I read A Critique of ANSI SQL Isolation Levels, which describes the cursor lost update anomaly as shown below.
But I don't really understand it, so I cannot reproduce it in MySQL.
What is a cursor lost update, and how can I reproduce it in MySQL?
-- initial state
insert into mytable set x = 100;

Trans 1                              Trans 2
start transaction;                   start transaction;
update mytable set x = 75;
                                     update mytable set x = 110;
                                     -- waits because row is locked by trans 1
commit;
                                     -- acquires lock
                                     -- update x = 110 succeeds
                                     commit;
select * from mytable;
-- shows 110
-- where did my value 75 go??
This happens irrespective of cursors. MySQL does not support UPDATE ... WHERE CURRENT OF <cursor> as some other implementations do, so the closest you can get is to issue discrete UPDATE statements (see the sketch below).
Cf. https://dev.mysql.com/doc/refman/8.0/en/cursor-restrictions.html
UPDATE WHERE CURRENT OF and DELETE WHERE CURRENT OF are not implemented, because updatable cursors are not supported.
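A minimal sketch of that closest equivalent: a read-only cursor fetches the value, and a discrete UPDATE stands in for what WHERE CURRENT OF would do (the procedure and variable names are mine):

DELIMITER $$
CREATE PROCEDURE adjust_x()
BEGIN
  DECLARE v_x INT;
  DECLARE cur CURSOR FOR SELECT x FROM mytable;
  OPEN cur;
  FETCH cur INTO v_x;  -- read x through the cursor
  -- MySQL has no UPDATE ... WHERE CURRENT OF cur, so issue a discrete UPDATE
  UPDATE mytable SET x = v_x - 25;
  CLOSE cur;
END $$
DELIMITER ;

The lost-update window is the gap between the FETCH and the UPDATE: if another transaction changes x there, the discrete UPDATE writes a value computed from the stale fetch and silently overwrites the other change, just as in the transcript above.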
Related
I've encountered undocumented behavior of "SET @my_var = (SELECT ..)" inside a transaction:
The first issue is that it locks rows (how many depends on whether the index is unique or not).
Example -
START TRANSACTION;
SET @my_var = (SELECT id from table_name where id = 1);
select trx_rows_locked from information_schema.innodb_trx;
ROLLBACK;
The output is 1 row locked, which is strange; it shouldn't acquire a read lock.
Also, the equivalent statement SELECT id INTO @my_var won't produce a lock.
It can lead to a deadlock when an UPDATE follows the SET statement in 2 concurrent requests, as sketched below.
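A minimal sketch of that deadlock, assuming table_name also has an integer column val (the column is my addition); both sessions run the same statements concurrently:

-- Session 1 and Session 2, interleaved:
START TRANSACTION;
SET @my_var = (SELECT id FROM table_name WHERE id = 1);  -- each session takes a shared (S) lock
UPDATE table_name SET val = val + 1 WHERE id = 1;        -- each session now wants an exclusive (X) lock
COMMIT;

Because S locks are compatible, both sessions acquire one; each UPDATE then waits for the other session's S lock, and InnoDB resolves the cycle by killing one victim with ER_LOCK_DEADLOCK.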
In REPEATABLE READ:
The SELECT inside the SET statement gets a new snapshot of the data instead of using the transaction's original snapshot.
SESSION 1:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START transaction;
SELECT data FROM my_table where id = 2; # Output : 2
SESSION 2:
UPDATE my_table set data = 3 where id = 2 ;
SESSION 1:
SET @data = (SELECT data FROM my_table where id = 2);
SELECT @data; # Output : 3, instead of 2
ROLLBACK;
However, I would expect @data to contain the original value from the first snapshot (2).
If I use SELECT data INTO @data FROM my_table where id = 2, then I get the expected value: 2.
Do you have any idea why SET @var = (SELECT ..) behaves differently from SELECT data INTO @var FROM ..?
Thanks.
Correct: when you SELECT in a context where you're copying the results into a variable or a table, it implicitly works as if you had used a locking read, SELECT ... FOR SHARE.
This means it places a shared lock on the rows examined, and it also means that the statement reads only the most recently committed version of rows, as if your transaction were in READ COMMITTED isolation level.
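A quick way to see that shared lock in MySQL 8.0, reusing the example from the question:

START TRANSACTION;
SET @my_var = (SELECT id FROM table_name WHERE id = 1);
-- performance_schema shows the S record lock held by this transaction
SELECT object_name, lock_type, lock_mode, lock_data
FROM performance_schema.data_locks;
ROLLBACK;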
I'm not sure why SELECT ... INTO @var does not do the same kind of implicit locking in MySQL 8.0. My memory is that in older versions of MySQL it did lock in that query form. I've searched the manual for an explanation but I can't find one yet.
Other cases that implicitly lock the rows examined by SELECT, and therefore read data as if your transaction were in READ COMMITTED (see the sketch after this list):
INSERT INTO <table> SELECT ...
Multi-table UPDATE or DELETE: even for a table you don't update or delete, the rows joined from it become locked.
SELECT inside a trigger
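For example, the first case can be observed the same way (archive_table is a hypothetical target table with matching columns):

START TRANSACTION;
INSERT INTO archive_table SELECT * FROM table_name WHERE id = 1;
-- the SELECT side of the statement holds shared locks on the source rows it read
SELECT trx_rows_locked FROM information_schema.innodb_trx;
ROLLBACK;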
I am trying to benchmark a SQL transaction that takes an exclusive row lock with a SELECT ... FOR UPDATE statement and then inserts a row into another table, as shown below.
START TRANSACTION;
SELECT CurrentSize
FROM testtable
WHERE id = {id} FOR UPDATE;
-- update current size in testtable
UPDATE testtable
SET currentsize = currentsize + 1
WHERE id = {id} ;
-- insert into a different table
insert into testtable2 values(1,2);
COMMIT;
I am getting 2K tps for the above transaction; I assume each transaction takes about 0.5 ms to complete, which is what gives me the 2K tps.
Is it even possible to scale the system beyond this point? If yes, is there any implementation I could try?
I am using a 16xlarge AWS RDS Aurora MySQL instance.
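If the benchmark hits the same id every time, the FOR UPDATE lock serializes all transactions on that one row, so throughput is capped at roughly 1 / lock-hold-time regardless of machine size. One common workaround, sketched here with a hypothetical slot column added to the key (with the 16 slot rows per id pre-created), is to spread the counter across several rows:

-- each transaction increments one of 16 slot rows picked at random
SET @slot = FLOOR(RAND() * 16);
UPDATE testtable
SET currentsize = currentsize + 1
WHERE id = {id} AND slot = @slot;

-- the real size is the sum over the slots
SELECT SUM(currentsize) FROM testtable WHERE id = {id};

Also, if you don't actually need to read CurrentSize first, dropping the separate SELECT ... FOR UPDATE shortens the lock window: the single UPDATE is already atomic.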
I want to create a lost update with MySQL Workbench, so I have 2 connections to my database and 2 transactions. I also changed the transaction isolation level to READ UNCOMMITTED, but transaction A uses the current data when the UPDATE statement starts. It never uses the data from the first SELECT statement, and with SELECT ... FOR UPDATE transaction B is blocked.
Transaction A (starts first):
Start transaction;
SELECT * FROM table;
Select sleep(10); -- <- Transaction B executes in this 10 seconds
UPDATE table SET Number = Number + 10 WHERE FirstName = "Name1";
COMMIT;
Transaction B:
Start transaction;
UPDATE table SET Number = Number - 5 WHERE FirstName = "Name1";
COMMIT;
Is it possible to create this failure with MySQL Workbench? What's wrong with my code?
Thanks for your help.
The UPDATE in A works with the data as it stands after the sleep, at the moment the UPDATE executes. The SELECT before it does nothing for the transaction: it neither locks the row nor pins the value it read. Because Number = Number + 10 is evaluated against the current row, B's change is incorporated rather than lost. To lose an update you must read the value out and then write back a result computed from that stale read, as sketched below.
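A minimal sketch of a genuine lost update using a user variable for the read-modify-write (table and column names follow the question; the variable and the backticks around the reserved word table are mine):

-- Transaction A
START TRANSACTION;
SELECT Number INTO @n FROM `table` WHERE FirstName = 'Name1';  -- plain non-locking read
SELECT SLEEP(10);  -- Transaction B runs and commits its -5 during this window
UPDATE `table` SET Number = @n + 10 WHERE FirstName = 'Name1'; -- writes back a stale result
COMMIT;

The final value is the original Number + 10; B's -5 has been silently overwritten.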
update my_table set limit_id = 2 where id='176846';
start transaction;
update my_table set limit_id = 1 where id='176846';
update my_table set limit_id = 4 where id='176846'; -- <- this one fails
commit;
select limit_id from my_table where id='176846';
I would like to roll this back automatically - I want the script to output 2, not 1. I have no access to the connection policy in use.
reading here:
http://dev.mysql.com/doc/refman/5.5/en/commit.html
By default, MySQL runs with autocommit mode enabled. This means that
as soon as you execute a statement that updates (modifies) a table,
MySQL stores the update on disk to make it permanent. The change
cannot be rolled back.
try something like
SET autocommit = 0;
start transaction;
(...)
commit;
It depends on why a limit_id value of 4 causes an error, but MySQL does not always roll back the entire transaction. See http://dev.mysql.com/doc/refman/5.7/en/innodb-error-handling.html for more information: in several cases MySQL implicitly rolls back only the failed statement, then continues with the transaction. If you need all-or-nothing behavior you can enforce it yourself, as sketched below.
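A minimal sketch using a stored procedure with an exit handler, so any error rolls back both updates (the procedure name is mine; the statements follow the question):

DELIMITER $$
CREATE PROCEDURE set_limit()
BEGIN
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    ROLLBACK;  -- undo the whole transaction on any error
    RESIGNAL;  -- re-raise so the caller still sees the failure
  END;
  START TRANSACTION;
  update my_table set limit_id = 1 where id='176846';
  update my_table set limit_id = 4 where id='176846';
  COMMIT;
END $$
DELIMITER ;

With this in place, the failing second UPDATE undoes the first one too, and the final SELECT returns 2.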
I ran a very long update (it has millions of records to update and is going to take several hours), and I was wondering if there is any way to kill the update without having InnoDB roll back the changes.
I would like the records that were already updated to stay as they are (and the table locks to be released ASAP), so that I can continue the update later when I have time for it.
This is similar to what MyISAM would do when killing an update.
If you mean a single UPDATE statement, I may be wrong but I doubt that's possible. However, you can always split your query into smaller sets. Rather than:
UPDATE foo SET bar=gee
... use:
UPDATE foo SET bar=gee WHERE id BETWEEN 1 AND 100;
UPDATE foo SET bar=gee WHERE id BETWEEN 101 AND 200;
UPDATE foo SET bar=gee WHERE id BETWEEN 201 AND 300;
...
This can be automated in a number of ways, for example with a stored procedure that loops over the id ranges, as sketched below.
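A minimal sketch of that loop (the procedure name and batch size are mine; it assumes foo has an integer primary key id):

DELIMITER $$
CREATE PROCEDURE update_in_batches()
BEGIN
  DECLARE batch_start INT DEFAULT 1;
  DECLARE batch_size INT DEFAULT 100;
  DECLARE max_id INT;
  SELECT MAX(id) INTO max_id FROM foo;
  WHILE batch_start <= max_id DO
    UPDATE foo SET bar = gee
    WHERE id BETWEEN batch_start AND batch_start + batch_size - 1;
    COMMIT;  -- each batch commits on its own, so a kill loses at most one batch
    SET batch_start = batch_start + batch_size;
  END WHILE;
END $$
DELIMITER ;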
My suggestion would be to create a BLACKHOLE table, with fields to match your needs for the update statement.
CREATE TABLE bh_table1 (
  -- field defs: columns must match what the trigger below reads (types are illustrative)
  id INT,
  field1 INT
) ENGINE = BLACKHOLE;
Now create a trigger on the blackhole table.
DELIMITER $$
CREATE TRIGGER ai_bh_table1_each AFTER INSERT ON bh_table1 FOR EACH ROW
BEGIN
-- implicit start transaction happens here
UPDATE table1 t1 SET t1.field1 = NEW.field1 WHERE t1.id = NEW.id;
-- implicit commit happens here
END $$
DELIMITER ;
You can do the update statement as an insert into the blackhole.
INSERT INTO bh_table1 (id, field1)
SELECT id, field1
FROM same_table_with_lots_of_rows
WHERE filter_that_still_leaves_lots_of_rows;
This will still be a lot slower than your initial update.
Let me know how it turns out.