set transaction isolation level repeatable read is giving deadlocks - sql-server-2008

I am updating some records in session 1 with an open transaction:
begin transaction
update aa
set name = 'harry1'
where name = 'harry'
As you can see, no commit/rollback transaction is issued. Now I try to read the records from another session, session 2:
set transaction isolation level repeatable read
select * from aa
Now, the REPEATABLE READ isolation level should give me the same old value that was there before the update statement in session 1, which should be harry and not harry1. Please correct me if I am wrong.
But when I try to read the record in session 2 while the transaction is still open in session 1, I get a deadlock. Can someone tell me why repeatable read is not working and is behaving like read committed?

REPEATABLE READ is the same as READ COMMITTED, but in addition, shared locks are retained on the rows read for the duration of the transaction. In other words, any row that is read cannot be modified by another connection until the transaction commits or rolls back.
So your query in session 2 is waiting for either a commit or a rollback in session 1.
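A minimal sketch of the two sessions, assuming the aa table from the question; the SNAPSHOT alternative at the end is my own suggestion and requires ALLOW_SNAPSHOT_ISOLATION to be enabled on the database:
-- Session 1: open transaction, exclusive lock on the updated row, no commit yet
BEGIN TRANSACTION;
UPDATE aa SET name = 'harry1' WHERE name = 'harry';
-- Session 2: under REPEATABLE READ (or READ COMMITTED) this SELECT blocks,
-- because it needs a shared lock on the row session 1 holds exclusively
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM aa;
-- Session 2, alternative: SNAPSHOT isolation reads the last committed version
-- ('harry') without blocking, once ALLOW_SNAPSHOT_ISOLATION is ON for the database
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * FROM aa;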

While data is being read inside the BEGIN TRAN block, no other transaction is allowed to update it, so a non-repeatable read cannot occur. Only once the transaction has committed can a concurrent process update the customer table.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
--first read
SELECT first_name, last_name from customer WHERE customer_id=5
---second read
SELECT first_name, last_name from customer WHERE customer_id=5
COMMIT TRAN
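To see the retained shared locks in action, a second session could try to modify the row between the two reads; a hedged sketch, assuming the same customer table (the value 'Smith' is purely illustrative):
-- In another session, while the transaction above is still open:
UPDATE customer SET last_name = 'Smith' WHERE customer_id = 5;
-- This UPDATE blocks until the REPEATABLE READ transaction commits or rolls back,
-- which is what guarantees the second read returns the same values.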

Deadlock in transaction with isolation level serializable

I was trying to understand how locking works with isolation levels. I have gone through this question but cannot understand the flow given below.
Here I am starting two transactions in different terminals and reading the same row in both of them. When I try to update the row, both terminals keep waiting for the update. No other query is running apart from this.
Here is the series of steps I did:
conn1: START TRANSACTION;
conn1: SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
conn2: START TRANSACTION;
conn2: SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
conn1: SELECT * from users WHERE id = 1;
conn2: SELECT * from users WHERE id = 1;
conn1: UPDATE users set name = 'name' WHERE id = 1; waiting...
conn2: UPDATE users set name = 'name' WHERE id = 1; waiting...
Here is my first question:
I want to understand why both connections are waiting and, if they are, who holds the lock needed to update the row?
If I change the above steps to:
conn1: START TRANSACTION;
conn1: SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
conn2: START TRANSACTION;
conn2: SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
conn1: UPDATE users set name = 'name' WHERE id = 1;
conn2: SELECT * from users WHERE id = 1; waiting...
conn1: commit
conn2: updated results
In this case the difference is that I can see conn1 holds the lock, and until it either commits or rolls back the changes, all other requests will wait and will get the updated results once conn1 commits.
Here is my second question:
Is this the correct way if I want to lock a row and, while it is locked, have other connections wait (even for reads) until the lock is released (commit or rollback), or should I use the FOR UPDATE clause?
DB: MySQL 5.7
As the MySQL documentation on the SERIALIZABLE isolation level says:
This level is like REPEATABLE READ, but InnoDB implicitly converts all plain SELECT statements to SELECT ... LOCK IN SHARE MODE
The clause about autocommit (in the full documentation text) does not apply here, since you explicitly start a transaction.
This means that in the first scenario both transactions obtain a shared lock on the same record. Then the first transaction (T1) tries to execute an update, which needs an exclusive lock. That cannot be granted, since T2 holds a shared lock. Then T2 tries to update, but cannot either, because T1 still holds its shared lock. Neither lock can be upgraded while the other connection keeps its shared lock, so both updates wait until the lock wait times out or InnoDB's deadlock detection rolls one of the transactions back.
Whether you use an atomic update or a SELECT ... FOR UPDATE statement to lock records depends on the application logic you need to apply. If you need to fetch the record's data and do some complex calculations with it before updating the record, then use the SELECT ... FOR UPDATE approach, as sketched below. Otherwise, go for the atomic update.
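A minimal sketch of the SELECT ... FOR UPDATE variant, assuming the users table from the question:
-- conn1: the locking read takes an exclusive lock up front,
-- so there is no shared lock that later needs upgrading
START TRANSACTION;
SELECT * FROM users WHERE id = 1 FOR UPDATE;
UPDATE users SET name = 'name' WHERE id = 1;
COMMIT;
-- conn2 runs the same block; its SELECT ... FOR UPDATE simply waits
-- until conn1 commits, then sees the updated row, and no deadlock occurs.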

How do I know which transaction runs first

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
INSERT INTO Students VALUES('Jason',50);
UPDATE Students SET mark = mark + 10;
COMMIT
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
INSERT INTO Students VALUES ('Kylie',70);
SELECT SUM(mark) FROM Students;
COMMIT
If I have two transactions that run simultaneously, how do I know which runs first and what values would be returned by the query? I understand that SERIALIZABLE isolates T1, but beyond that I don't know how to proceed.
If you run both at the same time, the READ COMMITTED transaction will wait for the SERIALIZABLE one to finish: the SERIALIZABLE transaction holds exclusive locks on the rows it inserts and updates until it commits, so the SELECT SUM(mark) in the second transaction blocks on them. By the way, it seems BEGIN TRANSACTION is missing from your T-SQL:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO Students VALUES('Jason',50);
UPDATE Students SET mark = mark + 10;
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRANSACTION
INSERT INTO Students VALUES ('Kylie',70);
SELECT SUM(mark) FROM Students;
COMMIT TRANSACTION
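For instance, assuming Students starts out empty (a hypothetical starting state, purely for illustration) and the SERIALIZABLE transaction commits first, Jason's row ends up with a mark of 60 (50 + 10), Kylie's insert then adds 70, and SELECT SUM(mark) returns 130. If the READ COMMITTED transaction instead completed before the SERIALIZABLE one started, its SUM would see only Kylie's 70.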

Issue with inserting values that are fetched from locked rows

I have two scripts, each wrapped in a transaction:
The first:
START TRANSACTION;
update product set price = 70;
SELECT SLEEP(20);
rollback;
The second:
START TRANSACTION;
insert into product_order(product_id, amount, price) select id, amount, price from product;
commit;
The second transaction starts executing while the first one is in the 'sleep' state.
So I expected that the second one would finish while the first transaction was sleeping.
Unexpectedly, the second transaction waits until the first one comes out of the sleep state.
I know it is something connected with row locking, but the second script does not update the rows involved in the first transaction, it only reads them.
My question: what is the reason for this behaviour, and how can I get rid of it?
It looks like the locks are only released at the end of the transaction (you can't read the data, because if the transaction fails, the database has to roll back to the previous state). Under the default REPEATABLE READ level, the INSERT ... SELECT takes shared locks on the rows it reads from product, and those rows are exclusively locked by the uncommitted UPDATE, so the second transaction has to wait.
Before your insert you should set the session's transaction isolation level so it can read data that is not yet committed:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
insert into product_order(product_id, amount, price) select id, amount, price from product;
COMMIT;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;

MySQL 'REPEATABLE READ' transaction unexpected behavior

The default isolation level for MySQL transactions is REPEATABLE READ.
According to another Stack Overflow question (Difference between read commit and repeatable read):
"Repeatable read is a higher isolation level, that in addition to the guarantees of the read committed level, it also guarantees that any data read cannot change, if the transaction reads the same data again, it will find the previously read data in place, unchanged, and available to read."
Here is my test table:
mysql> select * from people;
+------+---------+
| name | howmany |
+------+---------+
| alex |     100 |
| bob  |     100 |
+------+---------+
slow.sql
START TRANSACTION;
SELECT @new_val := howmany FROM people WHERE name = 'alex';
SELECT SLEEP(10);
SET @new_val = @new_val - 5;
UPDATE people SET howmany = @new_val WHERE name = 'alex';
COMMIT;
fast.sql
START TRANSACTION;
SELECT @new_val := howmany FROM people WHERE name = 'alex';
-- SELECT SLEEP(10);
SET @new_val = @new_val - 5;
UPDATE people SET howmany = @new_val WHERE name = 'alex';
COMMIT;
If I run slow.sql and, before it returns, run fast.sql multiple times, fast.sql prints 95, 90, 85, and so on.
I think the repeatable read isolation level should make fast.sql fail to run, or else I misunderstand 'repeatable read'.
I'm running MySQL 5.7 from Ubuntu 16.10.
Thanks very much.
If I am not wrong, REPEATABLE READ is about consistent reads within the same transaction, not across different transactions. From the MySQL documentation:
REPEATABLE READ
This is the default isolation level for InnoDB. Consistent reads
within the same transaction read the snapshot established by the first
read. This means that if you issue several plain (nonlocking) SELECT
statements within the same transaction, these SELECT statements are
consistent also with respect to each other.
The repeatable read isolation level guarantees consistency within a single transaction, and you are executing multiple transactions. For the behaviour you're expecting, you would need to look into locking reads, as sketched below. See here for more info: https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html#isolevel_repeatable-read
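A minimal sketch of such a locking read, assuming the people table from the question; replacing the plain read in slow.sql and fast.sql with SELECT ... FOR UPDATE makes concurrent runs serialize on the row instead of all starting from the same value:
START TRANSACTION;
-- FOR UPDATE takes an exclusive lock, so a concurrent copy of this script
-- blocks on this read until the lock is released at COMMIT
SELECT @new_val := howmany FROM people WHERE name = 'alex' FOR UPDATE;
SET @new_val = @new_val - 5;
UPDATE people SET howmany = @new_val WHERE name = 'alex';
COMMIT;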
If some data is changed during your transaction by a process outside of it, that has no effect on the data already read in said transaction.
I don't see how fast.sql would fail to run because of this isolation level (or any isolation level).

NHibernate query deadlock in case of multiple connections

I have the following transaction:
Desc d = new Desc();
d.Descr = "new";
_sess.Transaction.Begin();
_sess.SaveOrUpdate(d);
var desc = _sess.CreateCriteria(typeof(Desc)).List<Desc>();
_sess.Transaction.Commit();
This transaction issues the following queries:
BEGIN TRANSACTION
INSERT
SELECT
COMMIT TRANSACTION
When I run this code in two processes, I get a deadlock, because:
Process 1 performs its INSERT and locks the key.
Process 2 performs its INSERT and locks the key.
Process 1 wants to perform the SELECT and goes into a waiting/timeout state.
Process 2 wants to perform the SELECT and goes into a waiting/timeout state.
Result: deadlock.
DB: MS SQL Server 2008 R2
Two questions:
How do I set an UPDATE lock on all the tables included in the transaction?
If I use this code:
Desc d = new Desc();
d.Descr = "new";
_sess.Transaction.Begin(IsolationLevel.Serializable);
_sess.SaveOrUpdate(d);
var desc = _sess.CreateCriteria(typeof(Desc)).List();
_sess.Transaction.Commit();
nothing changes.
What does IsolationLevel.Serializable do?
UPDATE:
I need to get the following:
USE Test
BEGIN TRANSACTION
SELECT TOP 1 Id FROM [Desc] (UPDLOCK)
INSERT INTO [Desc] (Descr) VALUES ('33333')
SELECT * FROM [Desc]
COMMIT TRANSACTION
How do I perform the following with the help of NHibernate?
SELECT TOP 1 Id FROM [Desc] (UPDLOCK)
I would change the transaction isolation level to SNAPSHOT. This avoids locks when reading data, allows much more concurrency, and in particular produces no deadlocks in read-only transactions.
The reason for the deadlock is the following: the inserts do not conflict with each other; each one only locks its own newly inserted row. The SELECT, however, is blocked, because it tries to read the row newly inserted by the other transaction. So you get two queries, each waiting for the other transaction to complete, which is a deadlock. With SNAPSHOT isolation, the query doesn't care about uncommitted rows at all. Instead of waiting for locks to be released, it only "sees" rows that were already committed. This avoids deadlocks in queries.
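A minimal sketch of how this could look on the SQL Server side, assuming the Test database and [Desc] table from the question; on the NHibernate side this would presumably mean passing IsolationLevel.Snapshot to Transaction.Begin, assuming the ADO.NET provider supports it:
-- One-time database setting that enables row versioning for SNAPSHOT isolation
ALTER DATABASE Test SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Each session then does its work under SNAPSHOT isolation: the SELECT reads
-- the last committed version of rows inserted by the other session instead of
-- blocking on them, so the INSERT/SELECT pattern no longer deadlocks
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
INSERT INTO [Desc] (Descr) VALUES ('new');
SELECT * FROM [Desc];
COMMIT TRANSACTION;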