Rollback transactions with LOCK TABLES - MySQL

I have a PHP/5.2-driven application that uses transactions under MySQL/5.1 so it can roll back multiple inserts if an error condition is met. I have different reusable functions to insert different types of items. So far so good.
Now I need to use table locking for some of the inserts. As the official manual suggests, I'm using SET autocommit=0 instead of START TRANSACTION so that LOCK TABLES does not issue an implicit commit. And, as documented, unlocking tables implicitly commits any active transaction:
http://dev.mysql.com/doc/refman/5.1/en/lock-tables-and-transactions.html
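For reference, this is the pattern the manual recommends for combining LOCK TABLES with transactions (t1 and t2 are placeholder tables):
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ;
-- ... work with t1 and t2 here ...
COMMIT;
UNLOCK TABLES;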
And here lies the problem: if I simply avoid UNLOCK TABLES, the second call to LOCK TABLES commits any pending changes anyway!
It appears that the only way is to perform all the necessary locking in a single LOCK TABLES statement. That's a maintenance nightmare.
Does this issue have a sensible workaround?
Here's a little test script:
DROP TABLE IF EXISTS test;
CREATE TABLE test (
test_id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
random_number INT(10) UNSIGNED NOT NULL,
PRIMARY KEY (test_id)
)
COLLATE='utf8_spanish_ci'
ENGINE=InnoDB;
-- No table locking: everything's fine
START TRANSACTION;
INSERT INTO test (random_number) VALUES (ROUND(10000 * RAND()));
SELECT * FROM test ORDER BY test_id;
ROLLBACK;
SELECT * FROM test ORDER BY test_id;
-- Table locking: everything's fine if I avoid START TRANSACTION
SET autocommit=0;
INSERT INTO test (random_number) VALUES (ROUND(10000 * RAND()));
SELECT * FROM test ORDER BY test_id;
ROLLBACK;
SELECT * FROM test ORDER BY test_id;
SET autocommit=1;
-- Table locking: I cannot nest LOCK/UNLOCK blocks
SET autocommit=0;
LOCK TABLES test WRITE;
INSERT INTO test (random_number) VALUES (ROUND(10000 * RAND()));
SELECT * FROM test ORDER BY test_id;
ROLLBACK;
UNLOCK TABLES; -- Implicit commit
SELECT * FROM test ORDER BY test_id;
SET autocommit=1;
-- Table locking: I cannot chain LOCK calls either
SET autocommit=0;
LOCK TABLES test WRITE;
INSERT INTO test (random_number) VALUES (ROUND(10000 * RAND()));
SELECT * FROM test ORDER BY test_id;
-- UNLOCK TABLES;
LOCK TABLES test WRITE; -- Implicit commit
INSERT INTO test (random_number) VALUES (ROUND(10000 * RAND()));
SELECT * FROM test ORDER BY test_id;
-- UNLOCK TABLES;
ROLLBACK;
SELECT * FROM test ORDER BY test_id;
SET autocommit=1;

Apparently, LOCK TABLES cannot be made to play well with transactions. A workaround is to replace it with SELECT ... FOR UPDATE. You don't need any special syntax (you can use a regular START TRANSACTION) and it works as expected:
START TRANSACTION;
SELECT COUNT(*) FROM foo FOR UPDATE; -- Lock issued
INSERT INTO foo (foo_name) VALUES ('John');
SELECT COUNT(*) FROM bar FOR UPDATE; -- Lock issued, no side effects
ROLLBACK; -- Rollback works as expected
Please note that COUNT(*) is just an example; you would normally use the SELECT statement to fetch the data you actually need ;-)
(This information was provided by Frank Heikens.)

Related

Transactions - Locks on INSERT INTO...SELECT/UPDATE...SELECT type queries

What does the bolded text refer to? The "SELECT part acts like READ COMMITTED" part I already understand, with this SQL:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION; -- snapshot 1 for this transaction is created
SELECT * FROM t1; -- result is 1 row, snapshot 1 is used
-- another transaction (different session) inserts and commits new row into t1 table
SELECT * FROM t1; -- result is still 1 row, because its REPEATABLE READ, still using snapshot 1
INSERT INTO t2 SELECT * FROM t1; -- this SELECT creates new snapshot 2
SELECT * FROM t2; -- result are 2 rows
SELECT * FROM t1; -- result is still 1 row, using snapshot 1
Here: https://dev.mysql.com/doc/refman/8.0/en/innodb-consistent-read.html
The type of read varies for selects in clauses like INSERT INTO ... SELECT, UPDATE ... (SELECT), and CREATE TABLE ... SELECT that do not specify FOR UPDATE or FOR SHARE:
By default, InnoDB uses stronger locks for those statements and the SELECT part acts like READ COMMITTED, where each consistent read, even within the same transaction, sets and reads its own fresh snapshot.
I do not understand THIS - what does a stronger lock mean?
InnoDB uses stronger locks for those statements
This question helped me, but I still don't understand that part of the sentence.
Prevent INSERT INTO ... SELECT statement from creating its own fresh snapshot

MySQL - Are inserts with autocommit on considered a single-step or multi-step process?

In MySQL, if a conditional insert is performed with autocommit ON,
i.e.
set autocommit true;
insert into blah (x,y,z) values (1,2,3) where not exists (.....);
Would the above statement be executed atomically and committed at the same time? Or is it possible that there can be a delay between executing the insert and doing the commit?
EDIT:
Updated the insert statement to reflect the actual query:
set autocommit true;
insert into foo (x,y,z) select 1,2,3 from dual where not exists (select 1 from bar where a = 1);
I want to insert only if a row in another table does not exist. What I want to confirm is that in the below scenario there will be a failure:
SESSION1: insert into foo ..... where not exists (select 1 from bar where a = 1);
SESSION2: insert into bar (a) values (1);
SESSION2: commit;
SESSION1: commit; // should fail here.
The way it works is the same as if you were not using autocommit, but began a new transaction, immediately did your INSERT, and then immediately issued COMMIT with no delay:
START TRANSACTION;
INSERT ...
COMMIT;
This is atomic, in the sense that no other client will see your INSERT in a partially-finished state. Atomicity isn't about speed, it's about making sure the change is either committed fully or else not at all. No half-committed state is visible to other sessions.
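To sketch that equivalence (foo here is a hypothetical three-column table):
-- With autocommit on, the INSERT is its own transaction:
SET autocommit=1;
INSERT INTO foo (x, y, z) VALUES (1, 2, 3);
-- From any other session's point of view, that behaves exactly like:
SET autocommit=0;
START TRANSACTION;
INSERT INTO foo (x, y, z) VALUES (1, 2, 3);
COMMIT;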
By the way, the syntax you show, INSERT INTO ... VALUES ... WHERE NOT EXISTS ... is not meaningful. INSERT does not have a WHERE clause. You may be thinking of an INSERT that uses rows output by a SELECT statement:
INSERT INTO ...
SELECT ... FROM ... WHERE ...;
If you do this, you would NOT use a VALUES() clause for your INSERT.
Given your updated question, it cannot work the way you show.
SESSION1: insert into foo ..... where not exists (select 1 from bar where a = 1);
If you use the default transaction isolation level of REPEATABLE-READ, this will acquire a gap lock on bar, covering the place where the row with a=1 would exist. It does this to ensure that the latest committed entries the query was reading cannot change before your transaction finishes.
SESSION2: insert into bar (a) values (1);
This causes session 2 to wait, because it cannot insert into the locked gap. It will time out with an error unless session 1 commits within innodb_lock_wait_timeout seconds (default 50).
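A minimal two-terminal sketch of that interaction, assuming tables foo(x, y, z) and bar(a) from the question and the default REPEATABLE-READ isolation level:
-- Session 1: the SELECT part of INSERT ... SELECT takes shared locks on bar,
-- including the gap where a row with a = 1 would be inserted.
START TRANSACTION;
INSERT INTO foo (x, y, z)
SELECT 1, 2, 3 FROM dual
WHERE NOT EXISTS (SELECT 1 FROM bar WHERE a = 1);
-- Session 2: blocks on that gap lock, and errors out if session 1
-- does not commit within innodb_lock_wait_timeout seconds.
INSERT INTO bar (a) VALUES (1);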

Locking while inserting rows in MySQL

We have this table:
CREATE TABLE TEST_SUBSCRIBERS (
SUBSCRIPTION_ID varchar(255) NOT NULL COMMENT 'Subscriber id in format MSISDN-SERVICE_ID-TIMESTAMP',
MSISDN varchar(12) NOT NULL COMMENT 'Subscriber phone',
STATE enum ('ACTIVE', 'INACTIVE', 'UNSUBSCRIBED_SMS', 'UNSUBSCRIBED_PARTNER', 'UNSUBSCRIBED_ADMIN', 'UNSUBSCRIBED_REBILLING') NOT NULL,
SERVICE_ID varchar(255) NOT NULL COMMENT 'Id of service',
PRIMARY KEY (SUBSCRIPTION_ID)
)
ENGINE = INNODB
CHARACTER SET utf8
COLLATE utf8_general_ci;
In parallel threads we perform actions (in Java) like these:
1. Select active subscribers:
SELECT *
FROM TEST_SUBSCRIBERS
WHERE SERVICE_ID='web-sub-1'
and MSISDN='000000002'
AND STATE IN ('ACTIVE', 'INACTIVE');
2. If there are no such subscribers, insert one:
INSERT INTO TEST_SUBSCRIBERS
(SUBSCRIPTION_ID, MSISDN, STATE, SERVICE_ID)
VALUES ('web-sub-1-000000002-1504624819', '000000002', 'ACTIVE', 'web-sub-1');
Under concurrency, two threads can both try to insert a row with msisdn="000000002" and service-id="web-sub-1" but different SUBSCRIPTION_IDs, because the current timestamp can differ. Both threads perform the first select, get zero results, and both insert. So we tried to join these two queries into a transaction, but there is a problem with locking rows that do not yet exist - we would need a lock for the insert, or something like that.
And we do not want to lock the whole table during these two actions, because we expect our system would become too slow in that case.
We cannot create a unique key for this situation, because for one subscriber there can be multiple rows with the same unsubscribed statuses. And if we try to insert two subscribers for the same service, the primary keys can contain timestamps that differ by seconds.
We tried SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE, but we got deadlocks, and it's a heavy operation for the database server.
For tests we opened 2 terminals and went step by step:
# Window 1
mysql> start transaction;
mysql> SELECT SUBSCRIPTION_ID FROM TEST_SUBSCRIBERS s
WHERE s.SERVICE_ID="web-sub-1" AND s.MSISDN="000000002" FOR UPDATE;
# Window 2
start transaction;
mysql> SELECT SUBSCRIPTION_ID FROM TEST_SUBSCRIBERS s
WHERE s.SERVICE_ID="web-sub-1" AND s.MSISDN="000000002" FOR UPDATE;
# Window 1
mysql> INSERT INTO TEST_SUBSCRIBERS
(SUBSCRIPTION_ID, MSISDN, STATE, SERVICE_ID)
VALUES('web-sub-1-000000002-1504624818', '000000002', 'ACTIVE', 'web-sub-1');
# Window 2
mysql> INSERT INTO TEST_SUBSCRIBERS
(SUBSCRIPTION_ID, MSISDN, STATE, SERVICE_ID)
VALUES('web-sub-1-000000002-1504624819', '000000002', 'ACTIVE', 'web-sub-1');
ERROR 1213 (40001): Deadlock found when trying to get lock;
try restarting transaction
Is there any way to do this without deadlocks and without locking the full table? Other variants that we analyzed were:
1. a separate table
2. inserting and then deleting unwanted rows
Plan A. This will either insert (if necessary) or silently do nothing:
INSERT IGNORE ...;
Plan B. This may be overkill, since nothing needs "updating":
INSERT INTO ...
(...)
ON DUPLICATE KEY UPDATE
...;
Plan C. This statement has mostly been superseded by IODKU (INSERT ... ON DUPLICATE KEY UPDATE, i.e. Plan B):
REPLACE ... (same syntax as INSERT, but it does a silent DELETE first)
A and B (and probably C) are "atomic", so there is no chance of a deadlock.
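As a concrete sketch against the TEST_SUBSCRIBERS table above - both plans detect the duplicate via a unique or primary key, here the PRIMARY KEY on SUBSCRIPTION_ID:
-- Plan A: a duplicate key is silently skipped instead of raising an error.
INSERT IGNORE INTO TEST_SUBSCRIBERS
(SUBSCRIPTION_ID, MSISDN, STATE, SERVICE_ID)
VALUES ('web-sub-1-000000002-10', '000000002', 'ACTIVE', 'web-sub-1');
-- Plan B: on a duplicate key, update the existing row instead (a no-op here).
INSERT INTO TEST_SUBSCRIBERS
(SUBSCRIPTION_ID, MSISDN, STATE, SERVICE_ID)
VALUES ('web-sub-1-000000002-10', '000000002', 'ACTIVE', 'web-sub-1')
ON DUPLICATE KEY UPDATE STATE = VALUES(STATE);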
Following the answer from @RickJames.
Plan D. Use READ-COMMITTED
Window 1
mysql> set tx_isolation='READ-COMMITTED';
mysql> start transaction;
mysql> SELECT SUBSCRIPTION_ID FROM TEST_SUBSCRIBERS s
WHERE s.SERVICE_ID="web-sub-1" AND s.MSISDN="000000002" FOR UPDATE;
Window 2
mysql> set tx_isolation='READ-COMMITTED';
mysql> start transaction;
mysql> SELECT SUBSCRIPTION_ID FROM TEST_SUBSCRIBERS s
WHERE s.SERVICE_ID="web-sub-1" AND s.MSISDN="000000002" FOR UPDATE;
Window 1
mysql> INSERT INTO TEST_SUBSCRIBERS (SUBSCRIPTION_ID, MSISDN, STATE, SERVICE_ID)
VALUES('web-sub-1-000000002-10', '000000002', 'ACTIVE', 'web-sub-1');
Window 2
mysql> INSERT INTO TEST_SUBSCRIBERS (SUBSCRIPTION_ID, MSISDN, STATE, SERVICE_ID)
VALUES('web-sub-1-000000002-10', '000000002', 'ACTIVE', 'web-sub-1');
<begins lock wait>
Window 1
mysql> commit;
Window 2
<lock wait ends immediately>
ERROR 1062 (23000): Duplicate entry 'web-sub-1-000000002-10' for key 'PRIMARY'
The duplicate key error is not a deadlock, but it's still an error. However, it doesn't roll back the entire transaction; it just cancels the attempted insert. You still have an active transaction, with any other changes that executed successfully still pending.
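If you want to swallow just that error and keep the transaction going, here is a stored-program sketch; the subscribe_once procedure is hypothetical, and 1062 is MySQL's ER_DUP_ENTRY code:
DELIMITER $
CREATE PROCEDURE subscribe_once(IN p_id VARCHAR(255), IN p_msisdn VARCHAR(12), IN p_service VARCHAR(255))
BEGIN
  -- The CONTINUE handler turns the duplicate-key error into a no-op,
  -- leaving any other pending changes in the transaction untouched.
  DECLARE CONTINUE HANDLER FOR 1062 BEGIN END;
  INSERT INTO TEST_SUBSCRIBERS (SUBSCRIPTION_ID, MSISDN, STATE, SERVICE_ID)
  VALUES (p_id, p_msisdn, 'ACTIVE', p_service);
END $
DELIMITER ;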
Plan E. Use a queue
Instead of having concurrent Java threads inserting to the database, just have the Java threads enter items into a message queue (e.g. ActiveMQ). Then create one Java thread to do nothing but pull items from the queue and insert them into the database. This prevents deadlocks because there's only one thread inserting to the database.
Plan F. Embrace the deadlocks
You can't prevent all types of deadlocks, you can only handle them when they occur. Concurrent systems should be designed to anticipate some number of deadlocks, and retry operations when necessary.

Transaction with a trigger that uses SELECT MAX(); it returns the wrong result

I have a complex database. I can simplify it like this:
The table a:
CREATE TABLE a
(
Id int(10) unsigned NOT NULL AUTO_INCREMENT,
A int(11) DEFAULT NULL,
CalcUniqId int(11) DEFAULT NULL,
PRIMARY KEY (Id)
) ENGINE=InnoDB;
DELIMITER $
CREATE TRIGGER a_before_ins_tr before INSERT on a
FOR EACH ROW
BEGIN
select Max(CalcUniqId) from a into @MaxCalcUniqId;
set new.CalcUniqId=IfNull(@MaxCalcUniqId,1)+1;
END $
DELIMITER ;
It is used like this:
start transaction;
insert into a(A) ...
-- ... inserts into other tables; these take between 30 and 60 seconds
commit;
The problem is that the trigger returns the same CalcUniqId for all transactions that run at the same time.
Is there any solution or workaround?
Would this be a solution?
start transaction;
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
insert into A(A) values(10);
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
....
commit;
You can run this test:
Session 1:
Step1: start transaction;
Step2: insert into A(A) values(1);
Step3: commit;
Session 2:
Step1: start transaction;
Step2: insert into A(A) values(2);
Step3: commit;
Run steps 1 and 2 in session 1 and steps 1 and 2 in session 2, then step 3 in both. After that, do
select Id, A, CalcUniqId from a;
Both rows have the same CalcUniqId = 2.
Change the SELECT in the trigger to this:
select Max(CalcUniqId) from a into @MaxCalcUniqId
FOR UPDATE; -- add this
That tells the transaction that you intend to change the value; that blocks the other transactions from changing it.
This will probably lead to your 30-60 second transactions running one after another, and probably dying due to exceeding innodb_lock_wait_timeout. Rather than increasing that setting (which is already "too high"), please explain the bigger picture; perhaps we can concoct a workaround that gets the 'correct' value and still runs in parallel.
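Putting it together, the whole trigger would look like this (a sketch; the FOR UPDATE read is what serializes the concurrent inserts):
DELIMITER $
CREATE TRIGGER a_before_ins_tr before INSERT on a
FOR EACH ROW
BEGIN
  -- The locking read blocks until competing transactions commit, so each
  -- insert sees the latest committed MAX(CalcUniqId) instead of a stale one.
  select Max(CalcUniqId) from a into @MaxCalcUniqId FOR UPDATE;
  set new.CalcUniqId = IfNull(@MaxCalcUniqId, 1) + 1;
END $
DELIMITER ;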

MySQL Transaction Isolation

I'm seeing duplicate rows in my history table - however, the transaction that updates it first does DELETE FROM my_history_table; and then re-inserts the data for each row. The other queries are correct (and work just fine most of the time).
How is it that the select statement can see an inconsistent view of the data?
create table my_history_table (
`a` int(11),
`b` int(11),
`timestamp` datetime
) ENGINE=InnoDB;
We update this table as needed with this:
set autocommit=0;
set transaction isolation level read committed;
start transaction;
delete from my_history_table;
<loop on cursor inserting results for each>
commit;
In another stored procedure we read these results back to the client:
select * from my_history_table join anotherTable using (a);
So the answer was that DELETE FROM my_history_table was timing out while waiting for a lock, and as of MySQL 5.0.31 that does not automatically roll back the transaction. So it was doing the inserts without having done the delete, and committing that mess.
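One way to guard against that failure mode is an exit handler in the stored procedure, sketched below; refresh_history and the cursor loop are placeholders, and RESIGNAL requires MySQL 5.5 or later:
DELIMITER $
CREATE PROCEDURE refresh_history()
BEGIN
  -- Any SQL error, including a lock wait timeout, now rolls back the whole
  -- transaction instead of leaving the inserts to be committed without the delete.
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    ROLLBACK;
    RESIGNAL;
  END;
  START TRANSACTION;
  DELETE FROM my_history_table;
  -- <loop on cursor inserting results for each>
  COMMIT;
END $
DELIMITER ;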