I ran the following query and now my records are locked: I can't read, update, or delete them. For testing purposes I didn't COMMIT the transaction, and now these records are stuck. How can I release the locks that are already in place?
BEGIN TRAN
SELECT * from inquiry with (XLOCK,READPAST) where inquiry_id=228563
You should find the blocking session id via sys.dm_tran_locks and kill it manually:
SELECT * FROM sys.dm_tran_locks WHERE resource_type = 'OBJECT'
and then run KILL with the request_session_id value.
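If you still have the original query window open, the simplest fix is to run COMMIT or ROLLBACK there. Otherwise, a sketch of the kill approach (the table name matches the question; the session id 53 is just an illustration):

```sql
-- Find the session holding locks on the inquiry table in the current database
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
  AND resource_associated_entity_id = OBJECT_ID('dbo.inquiry');

-- Suppose the query above reports session 53 as the lock holder:
KILL 53;  -- rolls back that session's open transaction and releases its locks
```

Note that KILL rolls back whatever that session had done in its open transaction, so only use it when you are sure the session is abandoned.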
I have a select statement that takes a long time to run (around 5 minutes). Because of this I only run the query every hour and save the results to a metadata table. Here is the query:
UPDATE `metadata` SET `value` = (select count(`id`) from `logs`) WHERE `key` = 'logs'
But here is the issue I have been having (and correct me if I am wrong): a plain SELECT statement does not lock the table, but an UPDATE statement does. Since I am running this long-running SELECT inside the UPDATE, it ends up locking the DB for about 5 minutes.
Is there a better way to do this: run the SELECT, save the result to a variable, and only then run the UPDATE? That way it won't lock the DB.
Also note I don't care about dirty data.
The database has over 300 million rows and has data being added to it constantly.
Just to avoid the possibility that the server disconnects between the statement getting the count and the statement storing it, leaving your variable unset: beginning in MariaDB 10.1 you can run multiple statements in a single request by putting them in a compound block:
begin not atomic
declare `logs_count` int;
select count(*) into `logs_count` from `logs`;
update `metadata` set `value`=`logs_count` where `key`='logs';
end
I have found that setting the isolation level before the query runs works and runs a whole lot faster. This should keep the DB from locking while the query executes; we then restore the default isolation level once it has completed.
(Please correct me if I have done something incorrect here.)
BEGIN
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
UPDATE `metadata` SET `value` = (select count(`id`) from `logs`) WHERE `key` = 'logs';
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
END
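The same idea can be combined with the variable approach from the question: take the expensive count as a dirty (non-locking) read, then run a fast single-row update. A sketch, using the table and column names from the question:

```sql
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- The expensive part: a non-locking count into a user variable
SET @logs_count = (SELECT COUNT(*) FROM `logs`);

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- The cheap part: a single-row update that holds its lock only briefly
UPDATE `metadata` SET `value` = @logs_count WHERE `key` = 'logs';
```

This way the UPDATE no longer contains the 5-minute subquery, so its lock on the metadata row lasts only milliseconds.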
I want to create a lost update with MySQL Workbench. For this I have 2 connections to my database and 2 transactions. I also changed the transaction isolation level to READ UNCOMMITTED, but transaction A uses the current data when the UPDATE statement starts. It never uses the data from the first SELECT statement, and with SELECT ... FOR UPDATE transaction B is blocked.
Transaction A (starts first):
Start transaction;
SELECT * FROM table;
SELECT SLEEP(10); -- <- Transaction B executes during these 10 seconds
UPDATE table SET Number = Number + 10 WHERE FirstName = "Name1";
COMMIT;
Transaction B:
Start transaction;
UPDATE table SET Number = Number - 5 WHERE FirstName = "Name1";
COMMIT;
Is it possible to create this failure with MySQL Workbench? What's wrong with my code?
Thanks for your help.
The UPDATE in A works with the data as it exists after the sleep. The SELECT before it has no effect on the UPDATE: `Number = Number + 10` reads the current value of the row at update time, not the value your earlier SELECT returned.
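To actually produce a lost update, transaction A has to carry the value it read across B's concurrent write, i.e. compute the new value outside the UPDATE instead of using `Number + 10`. A sketch with a user variable, using the table and column names from the question:

```sql
-- Transaction A
START TRANSACTION;
SELECT Number INTO @n FROM `table` WHERE FirstName = 'Name1';  -- read the old value
SELECT SLEEP(10);              -- transaction B runs and commits Number - 5 here
UPDATE `table` SET Number = @n + 10 WHERE FirstName = 'Name1'; -- overwrites B's result
COMMIT;                        -- B's -5 is lost
```

Because A writes a literal value computed from its stale read, B's committed change is silently overwritten, which is exactly the lost-update anomaly.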
1:
I was trying this and it was working fine:
start transaction;
select * from orders where id = 21548 LOCK IN SHARE MODE;
update orders set amount = 1500 where id = 21548;
commit;
According to the definition of LOCK IN SHARE MODE, it locks the table with an IS lock and the selected rows with S locks.
When a row is locked with an S lock, how can it be modified without releasing the lock?
It needs an X lock to be modified, right? Or is that only true for a different connection's transaction?
2:
//session1
start transaction;
select * from orders where id = 21548 FOR UPDATE;
Keep session 1 open and try this in a different session:
//session2
select * from orders where id = 21548; //working
update orders set amount = 2000 where id = 21548; //waiting
FOR UPDATE locks the table in IX mode and the selected row in X mode.
As X mode is incompatible with S mode, how come the SELECT query in the second session executes?
One answer might be that the SELECT query is not asking for an S lock, which is why it runs successfully. But the UPDATE query in the second session is also not explicitly asking for an X lock, yet as soon as you execute it, it starts waiting for the lock held by session 1.
I have read a lot about this but cannot clear up my doubts. Please help.
Bill Karwin answered this question through email. He said:
The same transaction that holds an S lock can promote the lock to an X lock. This is not a conflict.
The SELECT in session 1 with FOR UPDATE acquires an X lock. A simple SELECT query with no locking clause specified does not need to acquire an S lock.
Any UPDATE or DELETE needs to acquire an X lock. That's implicit. Those statements don't have any special locking clause for that.
For more details on IS/IX locks and FOR UPDATE/LOCK IN SHARE MODE, please see the MySQL manual page shared-and-exclusive-locks.
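The three points above can be checked directly with two sessions; a sketch, assuming an `orders` table with an `id` primary key as in the question:

```sql
-- Session 1: take an S lock, then promote it to an X lock
START TRANSACTION;
SELECT * FROM orders WHERE id = 21548 LOCK IN SHARE MODE;  -- S lock on the row
UPDATE orders SET amount = 1500 WHERE id = 21548;          -- same transaction: S promoted to X, no conflict

-- Session 2, while session 1 is still open:
SELECT * FROM orders WHERE id = 21548;             -- returns immediately (consistent read, no S lock taken)
UPDATE orders SET amount = 2000 WHERE id = 21548;  -- blocks: it implicitly needs an X lock that session 1 holds
```

A COMMIT (or ROLLBACK) in session 1 releases the X lock and lets session 2's UPDATE proceed.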
I want to implement parallel processing of multiple DB transactions which lock only a few rows for short periods of time. For example, we have this query executed every time a user opens the page:
START TRANSACTION;
SELECT * FROM table_1 WHERE worktime < UNIX_TIMESTAMP() FOR UPDATE;
...WORK...
...UPDATE...
COMMIT;
In a multiuser environment, this kind of row locking leads to lock waits (and potentially deadlocks) every time the SELECT statement is executed concurrently. Currently I would solve the problem using a second table to store the locked IDs:
START TRANSACTION;
LOCK TABLE table_1 WRITE, table_locks WRITE;
SELECT id FROM table_1 WHERE worktime < UNIX_TIMESTAMP() AND id NOT IN (SELECT id FROM table_locks);
...insert locked Ids into Table "table_locks"...
...this prevents other calls to read from this table...
UNLOCK TABLES;
COMMIT;
...Perform calculations and Updates...
DELETE FROM table_locks WHERE id = ...
The problem with this method is that if something goes wrong after "locking" a row by storing its ID in table_locks, that row will never be updated again. Of course I could set a timeout to release such locks automatically after some time, but that doesn't seem like a proper solution to me. Is there something possible like:
SELECT * FROM table_1 WHERE worktime < UNIX_TIMESTAMP() AND NOT LOCKED BY OTHER TRANSACTION FOR UPDATE
?
You could mark rows to be done by your session:
UPDATE table_1
SET marked_by_connection_id = CONNECTION_ID(),
marked_time = NOW()
WHERE worktime < UNIX_TIMESTAMP() AND marked_by_connection_id IS NULL;
Then you can feel free to work on any row that has your connection id, knowing that another session will not try to claim them:
SELECT * FROM table_1 WHERE marked_by_connection_id = CONNECTION_ID();
. . .
No locking or non-autocommit transaction is needed.
At the end of your session, unmark any rows you had marked:
UPDATE table_1 SET marked_by_connection_id = NULL
WHERE marked_by_connection_id = CONNECTION_ID();
Or alternatively your app could unmark individual rows as it processes them.
But perhaps your session dies before it can unmark those rows. So some rows were marked, but never processed. Run a cron job that clears such abandoned marked rows, allowing them to get re-processed by another worker, although a bit late.
UPDATE table_1 SET marked_by_connection_id = NULL
WHERE marked_time < NOW() - INTERVAL 30 MINUTE;
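As an alternative to the marking scheme: if you are on MySQL 8.0 or later, the hypothetical `NOT LOCKED BY OTHER TRANSACTION` clause from the question effectively exists as SKIP LOCKED, which makes a locking read silently skip rows that are already row-locked by another transaction. A sketch of the original transaction rewritten that way:

```sql
START TRANSACTION;
-- Lock only the unclaimed due rows; rows locked by other workers are skipped
SELECT * FROM table_1
WHERE worktime < UNIX_TIMESTAMP()
FOR UPDATE SKIP LOCKED;
-- ... work ...
-- ... UPDATE ...
COMMIT;
```

Since the locks live only inside the transaction, a crashed worker's rows are released automatically on rollback, so no cleanup job is needed.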
I have the following transaction:
Desc d = new Desc();
d.Descr = "new";
_sess.Transaction.Begin();
_sess.SaveOrUpdate(d);
var desc = _sess.CreateCriteria(typeof(Desc)).List<Desc>();
_sess.Transaction.Commit();
This transaction issues the following statements:
BEGIN TRANSACTION
INSERT
SELECT
COMMIT TRANSACTION
When I run this code in two processes I get a deadlock, because:
Process 1: performs the INSERT and locks the key
Process 2: performs the INSERT and locks the key
Process 1: wants to perform the SELECT and waits
Process 2: wants to perform the SELECT and waits
Result: deadlock
DB: MS SQL Server 2008 R2
2 questions:
How do I set an UPDATE lock on all the tables involved in the transaction?
If I use this code:
Desc d = new Desc();
d.Descr = "new";
_sess.Transaction.Begin(IsolationLevel.Serializable);
_sess.SaveOrUpdate(d);
var desc = _sess.CreateCriteria(typeof(Desc)).List();
_sess.Transaction.Commit();
Nothing changes.
What does IsolationLevel.Serializable do?
UPDATE:
I need to get the following:
USE Test
BEGIN TRANSACTION
SELECT TOP 1 Id FROM [Desc] WITH (UPDLOCK)
INSERT INTO [Desc] (Descr) VALUES ('33333')
SELECT * FROM [Desc]
COMMIT TRANSACTION
How do I perform the following with the help of NHibernate:
SELECT TOP 1 Id FROM [Desc] WITH (UPDLOCK)
?
I would change the transaction isolation level to snapshot. This avoids locks when reading data, allows much more concurrency, and in particular produces no deadlocks in read-only statements.
The reason for the deadlock is the following: the inserts do not conflict with each other; each one locks only its newly inserted row. The SELECT, however, is blocked, because it tries to read the uncommitted row inserted by the other transaction. So you get two queries each waiting for the other transaction to complete, which is a deadlock. With snapshot isolation, the query doesn't care about uncommitted rows at all. Instead of waiting for locks to be released, it only "sees" rows that had already been committed. This avoids deadlocks in queries.
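Snapshot isolation has to be enabled at the database level before a session can request it. A sketch (the database name `MyAppDb` is an assumption; the table matches the question):

```sql
-- One-time setup, per database
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Optionally make plain READ COMMITTED use row versioning as well:
-- ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

-- Per session/transaction
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
INSERT INTO [Desc] (Descr) VALUES ('new');
SELECT * FROM [Desc];  -- versioned read; does not block on the other process's insert
COMMIT TRANSACTION;
```

On the NHibernate side you can then start the transaction with `_sess.Transaction.Begin(IsolationLevel.Snapshot)` instead of `IsolationLevel.Serializable`.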