Misunderstanding rollback in MySQL

I'm not familiar with MySQL, and I'm wondering what happens when two or more clients use START TRANSACTION and ROLLBACK and one of them hits an error at the same time. For example:
At the same time:
Client 1:
START TRANSACTION -> insert query 1 -> insert query 2 -> insert query 3
Client 2:
START TRANSACTION -> insert query 4 -> error and rollback
If client 2 rolls back, is only query 4 rolled back, or is query 1 rolled back too?
I can't verify this myself; how can I check it?
UPDATE:
In this situation, where ID is an AUTO_INCREMENT primary key:
Client 1:
START TRANSACTION -> insert into tbl1 (ID: 1) -> error and rollback
Client 2:
START TRANSACTION -> insert into tbl1 (ID: 2) -> insert into tbl2 (ID: 2)
Am I right that client 1's transaction is discarded and ID 1 is lost, while later transactions continue with IDs 3, 4, 5, and so on?
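One way to check this yourself is to open two mysql client sessions side by side. The sketch below is only illustrative (an InnoDB table with an AUTO_INCREMENT primary key; the note column is made up): each session's ROLLBACK undoes only its own statements, and an id consumed by a rolled-back insert is skipped rather than reused.
CREATE TABLE tbl1 (ID INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(20)) ENGINE=InnoDB;
-- Session 1:
START TRANSACTION;
INSERT INTO tbl1 (note) VALUES ('client1-a');  -- gets ID 1
-- Session 2, while session 1 is still open:
START TRANSACTION;
INSERT INTO tbl1 (note) VALUES ('client2');    -- gets ID 2
ROLLBACK;                                      -- undoes only session 2's insert
-- Session 1 again:
INSERT INTO tbl1 (note) VALUES ('client1-b');  -- gets ID 3; the rolled-back ID 2 is not reused
COMMIT;
SELECT * FROM tbl1;                            -- only client 1's rows (IDs 1 and 3) remain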


MySQL SELECT FOR UPDATE is locking whole table

MySQL Version 5.7.16
Process 1:
START TRANSACTION;
SELECT * from statistic_activity WHERE activity_id = 1 FOR UPDATE;
Process 2:
START TRANSACTION;
INSERT INTO `statistic_activity` (`activity_id`) values (2678597);
If Process 1's SELECT statement returns results, Process 2 is not blocked (as you would expect).
But if Process 1 returns an empty set (no row exists with activity_id = 1), then the whole table is locked and all INSERTs are blocked until Process 1's transaction ends.
Is this expected behavior?
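One way to see what is actually locked while Process 1's transaction is still open is to look at the InnoDB lock information from a third session (a diagnostic sketch for MySQL 5.7 with the default REPEATABLE READ isolation level):
SHOW ENGINE INNODB STATUS;                            -- the TRANSACTIONS section lists lock waits
SELECT * FROM information_schema.innodb_lock_waits;   -- which transaction is blocking which
SELECT * FROM information_schema.innodb_locks;        -- the locks currently being waited on
If activity_id is not indexed, the SELECT ... FOR UPDATE has to scan the whole table and locks every row it examines, which looks exactly like a table-wide lock; with an index but no matching row, InnoDB still takes a gap lock that can block inserts into that range.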

Is it possible to create a Lost Update with MySQL Workbench

I want to create a lost update with MySQL Workbench, so I have two connections to my database and two transactions. I also changed the transaction isolation level to READ UNCOMMITTED, but transaction A uses the current data when the UPDATE statement starts: it never uses the data from the first SELECT statement, and with SELECT ... FOR UPDATE transaction B is blocked.
Transaction A (starts first):
START TRANSACTION;
SELECT * FROM `table`;
SELECT SLEEP(10); -- <- Transaction B executes during these 10 seconds
UPDATE `table` SET Number = Number + 10 WHERE FirstName = 'Name1';
COMMIT;
Transaction B:
START TRANSACTION;
UPDATE `table` SET Number = Number - 5 WHERE FirstName = 'Name1';
COMMIT;
Is it possible to create this failure with MySQL Workbench? What's wrong with my code?
Thanks for your help
The UPDATE in A works with the data as it is after the sleep; the preceding SELECT plays no part in the transaction, so its values never reach the UPDATE.
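To actually provoke a lost update, the stale value has to be carried from the read into the write, for example through a user variable. This is only a sketch of that idea, reusing the question's table and column names (backticks around `table` because it is a reserved word):
-- Transaction A:
START TRANSACTION;
SELECT Number INTO @old FROM `table` WHERE FirstName = 'Name1';
SELECT SLEEP(10);  -- run Transaction B during this pause
UPDATE `table` SET Number = @old + 10 WHERE FirstName = 'Name1';
COMMIT;
-- Transaction B (during the sleep):
START TRANSACTION;
UPDATE `table` SET Number = Number - 5 WHERE FirstName = 'Name1';
COMMIT;
-- B's -5 is overwritten by A's stale @old + 10, so B's change is lost.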

Insert row if not exists without deadlock

I have a simple table
CREATE TABLE test (
col INT,
data TEXT,
KEY (col)
);
and a simple transaction
START TRANSACTION;
SELECT * FROM test WHERE col = 4 FOR UPDATE;
-- If no results, generate data and insert
INSERT INTO test SET col = 4, data = 'data';
COMMIT;
I am trying to ensure that two copies of this transaction running concurrently result in no duplicate rows and no deadlocks. I also don't want to incur the cost of generating data for col = 4 more than once.
I have tried:
SELECT .. (without FOR UPDATE or LOCK IN SHARE MODE):
Both transactions see that there are no rows with col = 4 (without acquiring a lock), both generate data, and both insert, leaving two copies of the row with col = 4.
SELECT .. LOCK IN SHARE MODE
Both transactions acquire a shared lock on col = 4, generate data, and attempt to insert a row with col = 4. Each then waits for the other to release its shared lock so it can INSERT, resulting in ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction.
SELECT .. FOR UPDATE
I would expect that one transaction's SELECT will succeed and acquire an exclusive lock on col = 4 and the other transaction's SELECT will block waiting for the first.
Instead, both SELECT .. FOR UPDATE queries succeed and the transactions proceed to deadlock just like with SELECT .. LOCK IN SHARE MODE. The exclusive lock on col = 4 just doesn't seem to work.
How can I write this transaction without causing duplicate rows and without deadlock?
Adjust your schema slightly:
CREATE TABLE test (
col INT NOT NULL PRIMARY KEY,
data TEXT
);
With col being a primary key, it cannot be duplicated.
Then use the ON DUPLICATE KEY feature:
INSERT INTO test (col, data) VALUES (4, ...)
ON DUPLICATE KEY UPDATE data=VALUES(data)
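For completeness, a runnable form of that statement with a placeholder standing in for the generated value (the real data would come from the application):
INSERT INTO test (col, data) VALUES (4, 'generated data')
ON DUPLICATE KEY UPDATE data = VALUES(data);
One trade-off worth noting: every session still generates its own copy of the data before issuing the INSERT, so this avoids the duplicate row but not the duplicated generation cost.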
Maybe this...
START TRANSACTION;
INSERT IGNORE INTO test (col, data) VALUES (4, NULL); -- or ''
-- if Rows_affected() == 0, generate data and replace `data`
UPDATE test SET data = 'data' WHERE col = 4;
COMMIT;
Caution: If the PRIMARY KEY is an AUTO_INCREMENT, this may 'burn' an id.
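A sketch of the complete pattern in plain SQL, assuming the affected-row check is done with ROW_COUNT() (application code would read the affected-row count from the driver instead) and that col has a PRIMARY KEY or UNIQUE index as suggested above:
START TRANSACTION;
INSERT IGNORE INTO test (col, data) VALUES (4, NULL);
SELECT ROW_COUNT() INTO @inserted;  -- 1 if our INSERT added the row, 0 if it already existed
-- Only the session that actually inserted the row pays for generating the data:
UPDATE test SET data = 'expensively generated data' WHERE col = 4 AND @inserted = 1;
COMMIT;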
Note that InnoDB has two types of exclusive locks: one for update and delete, and another one for insert. To execute your SELECT ... FOR UPDATE, InnoDB first has to take the lock for update in one transaction; the second transaction then tries to take the same lock and blocks waiting for the first (it could not have succeeded as you claimed in the question). When the first transaction then tries to execute its INSERT, it has to change its lock for update into a lock for insert. The only way InnoDB can do that is to first downgrade the lock to a shared one and then upgrade it back to a lock for insert, and it cannot downgrade the lock while another transaction is waiting to acquire the same exclusive lock. That is why you get a deadlock error in this situation.
The only way for you to execute this correctly is to have a unique index on col, try to INSERT the row with col = 4 (you can put dummy data if you don't want to generate it before the INSERT), roll back on a duplicate-key error, and UPDATE the row with the correct data if the INSERT succeeded.
Note, though, that if you don't want to incur the cost of generating data unnecessarily, that probably means generating it takes a long time, and all that time you will hold an open transaction that inserted the row with col = 4, keeping every other process that tries to insert the same row waiting. I'm not sure that would be significantly better than generating the data first and then inserting it.
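In MySQL that insert-then-handle-the-duplicate logic has to live in application code or in a stored routine; a hypothetical sketch using a handler for the duplicate-key error (1062) might look like this:
DELIMITER //
CREATE PROCEDURE insert_col4_if_missing()
BEGIN
  -- Another session already inserted col = 4: undo our work and stop.
  DECLARE EXIT HANDLER FOR 1062 ROLLBACK;
  START TRANSACTION;
  INSERT INTO test (col, data) VALUES (4, NULL);  -- requires a PRIMARY KEY or UNIQUE index on col
  UPDATE test SET data = 'expensively generated data' WHERE col = 4;
  COMMIT;
END //
DELIMITER ;
The shape is the same in application code: INSERT first, catch the duplicate-key error, and only generate the data when the INSERT succeeded.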
If your goal is to have only one session insert the missing row, and any other sessions do nothing without even attempting to insert DATA, then you need to either lock the entire table (which reduces your concurrency) or insert an incomplete row and follow it with an update.
A. Create a primary key on column COL.
Code:
begin
  insert into test values (4, null);
  update test set data = ... where col = 4;
  commit;
exception
  when dup_val_on_index then
    null;
end;
The first session that attempts the insert on col 4 will succeed and proceed to the update, where you can do the expensive calculation of DATA. Any other session trying to do this will raise a PK violation (ORA-00001, or DUP_VAL_ON_INDEX) and go to the exception handler, which traps it and does nothing (NULL). It will never reach the update statement, so it won't do whatever expensive thing you do to calculate DATA.
Now, this will cause the other sessions to wait while the first session calculates DATA and does the update. If you don't want that wait, you can use NOWAIT to make the lagging sessions throw an exception immediately if the row is locked. If the row doesn't exist, that will also throw an exception, but a different one. It's not great to use exception handling for normal code branches, but hey, it should work.
declare
  var_junk number;
begin
  begin
    select col into var_junk from test where col = 4 for update nowait;
  exception
    when no_data_found then
      insert into test values (4, null);
      update test set data = ... where col = 4;
      commit;
    when others then
      null;
  end;
end;

Transactions not working for my MySQL DB

I'm using a MySQL DB for my site, which is hosted on a Linux shared server.
I wrote a test script which I run using 'mysql' to check whether transactions are working. Running the script I do not get any errors, but the result of executing it is as if transactions were not enabled.
I also made sure to grant ALL privileges to the admin MySQL user which runs the script.
To double-check, I tried the same test script on PostgreSQL, and there the result indicated that the transaction does work. So it's definitely something specific to MySQL.
The script runs on a simple table which I created as follows:
create table a ( id serial primary key);
Following is the test script:
delete from a;
set autocommit = 0;
start transaction;
insert into a(id) values(1);
rollback work;
select count(*) from a;
So the script makes sure the table is empty, then starts a transaction, inserts a row, and rolls back the insert. As the "insert" is rolled back, the "select" should indicate that the table contains 0 rows.
Running this on PostgreSQL:
$ psql db admin < test1
DELETE 0
START TRANSACTION
INSERT 0 1
ROLLBACK
count
-------
0
This is the expected behavior, 0 rows in the table as the insert was rolled back.
Running the same on my MySQL DB:
$ mysql db -u admin < test1
count(*)
1
Having 1 row after the rollback indicates that the "insert" was not rolled back, just as in non-transactional mode.
As mentioned, admin is granted ALL privileges on the DB.
Anything I've missed?
Probably the table was created with the MyISAM storage engine as the default.
The MyISAM storage engine doesn't support transactions.
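A quick way to confirm which engine the table actually uses, and what new tables get by default:
SHOW CREATE TABLE a;                          -- the ENGINE= clause shows the table's engine
SHOW TABLE STATUS LIKE 'a';                   -- the Engine column shows the same
SHOW VARIABLES LIKE '%storage_engine%';       -- the server's default engine for new tables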
Create table
CREATE TABLE a ( id SERIAL PRIMARY KEY) ENGINE = MYISAM;
Query
DELETE FROM a;
SET autocommit = 0;
START TRANSACTION;
INSERT INTO a(id) VALUES(1);
ROLLBACK WORK;
SELECT COUNT(*) FROM a;
Result
count(*)
1
Making the table InnoDB
Query
ALTER TABLE a ENGINE=INNODB;
Query
DELETE FROM a;
SET autocommit = 0;
START TRANSACTION;
INSERT INTO a(id) VALUES(1);
ROLLBACK WORK;
SELECT COUNT(*) FROM a;
Result
count(*)
----------
0