I've run into a weird problem where a simple query fails due to a deadlock.
Here is the query:
UPDATE myproblematictable SET mycolumn = (mycolumn-0) WHERE id = '59'
The weird part is that this query fails only when my PHP server is located on a slower machine on a remote network.
Before I run this query, the following happens:
transaction starts
insert new row in table 5
select 1 row from myproblematictable
insert new row in table 6
update table 4
UPDATE myproblematictable SET mycolumn = (mycolumn-0) WHERE id = '<id>'
update table 3
Commit Transaction
The strange thing is that the same query fails each time with the following error
Error Number: 1213: Deadlock found when trying to get lock; try restarting transaction
The SHOW ENGINE INNODB STATUS output does not seem to mention myproblematictable.
Any clues?
This can be the result of another query updating the tables in a different order. I would try to see whether there's a predetermined order in which the tables should be updated and, if so, rewrite the updates to follow it.
If not, I would suggest trying to find the offending quer(ies) and seeing what order they update the tables in. What type of table engine are you using? Keep in mind that MyISAM locks the entire table.
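For example, here is a hypothetical interleaving with two made-up tables t1 and t2 that produces exactly this error:
-- Session A:
START TRANSACTION;
UPDATE t1 SET x = 1 WHERE id = 1;  -- A locks the row in t1
-- Session B:
START TRANSACTION;
UPDATE t2 SET y = 1 WHERE id = 1;  -- B locks the row in t2
-- Session A:
UPDATE t2 SET y = 2 WHERE id = 1;  -- A now waits for B's lock on t2
-- Session B:
UPDATE t1 SET x = 2 WHERE id = 1;  -- B waits for A's lock on t1: deadlock,
                                   -- and InnoDB rolls one transaction back (error 1213)
If both sessions updated t1 first and t2 second, one would simply wait for the other instead of deadlocking.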
My MySQL table contains more than 20 million records. I want to delete them starting from the lowest index by running:
DELETE FROM mydb.dailyreportdetails WHERE idDailyReportDetails > 0 ORDER BY idDailyReportDetails ASC LIMIT 1000;
While running the above query I got the error mentioned below:
Operation failed: There was an error while applying the SQL script to the database.
ERROR 1205: 1205: Lock wait timeout exceeded; try restarting transaction
SQL Statement:
Is there any way to run the query in the MySQL background, or any faster way to delete those records?
You could first find the cutoff id...
SELECT idDailyReportDetails
FROM mydb.dailyreportdetails
WHERE idDailyReportDetails > 0
ORDER BY idDailyReportDetails ASC LIMIT 1000, 1;
Then a straightforward delete, using the value from the select...
DELETE FROM mydb.dailyreportdetails
WHERE idDailyReportDetails < ID;
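If you'd rather not copy the value by hand, the same thing can be done with a session variable (a sketch; @cutoff is just a name I've picked, and it stays NULL if fewer than 1001 rows match, so the DELETE then removes nothing):
SELECT idDailyReportDetails INTO @cutoff
FROM mydb.dailyreportdetails
WHERE idDailyReportDetails > 0
ORDER BY idDailyReportDetails ASC LIMIT 1000, 1;

DELETE FROM mydb.dailyreportdetails
WHERE idDailyReportDetails < @cutoff;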
You can check the InnoDB locks with:
SHOW ENGINE InnoDB STATUS;
That will show you the current lock status and any recent errors.
If you want to increase the lock wait timeout in your case, use:
SET innodb_lock_wait_timeout = 3000;
After that, run your delete query.
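For reference, you can check the current value first; the default is 50 seconds, and depending on your MySQL version the variable may be settable per session or only globally:
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
SET innodb_lock_wait_timeout = 3000;  -- session scope on recent versions
-- run the DELETE here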
The quickest way to delete rows from a table is to reference them by id, which is also preferable for the binary log. Hence the best approach is to use some programming language to fetch a batch of row ids in a loop and then delete those rows by explicitly listing them in a WHERE idDailyReportDetails IN (...) clause, as sketched below.
Range queries (WHERE id < something) can fail over and over because of locks if the database is being used by other processes.
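A sketch of that loop's SQL, assuming idDailyReportDetails is the primary key (the surrounding loop lives in your application code):
-- Step 1: fetch the next batch of ids
SELECT idDailyReportDetails
FROM mydb.dailyreportdetails
WHERE idDailyReportDetails > 0
ORDER BY idDailyReportDetails ASC LIMIT 1000;

-- Step 2: delete exactly those rows by primary key
DELETE FROM mydb.dailyreportdetails
WHERE idDailyReportDetails IN (1, 2, 3 /* ...the ids fetched in step 1... */);

-- Repeat both steps until the SELECT returns no rows.
Each DELETE then only locks the rows it actually removes, so concurrent traffic is far less likely to hit lock waits.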
I'm running the following query to record aggregated stats for traffic according to the country of origin:
INSERT INTO stats_countries (country_id, country_traffic, country_updated)
VALUES (?,1,UNIX_TIMESTAMP())
ON DUPLICATE KEY UPDATE
country_traffic = country_traffic + 1,
country_updated = UNIX_TIMESTAMP()
Unfortunately the application runs this query so frequently that it seems to be causing some kind of row deadlock condition, where I have numerous connections going to sleep forever.
Is there any way I can fix this easily while keeping the "on duplicate" condition?
Or will I have to manually create one row for each country_id and just run a simple update?
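For reference, the fallback I have in mind would be something like this (assuming country_id is a primary or unique key, which ON DUPLICATE KEY already requires):
-- One-time seeding, once per country:
INSERT IGNORE INTO stats_countries (country_id, country_traffic, country_updated)
VALUES (?, 0, UNIX_TIMESTAMP());

-- Hot path, a plain single-row update:
UPDATE stats_countries
SET country_traffic = country_traffic + 1,
    country_updated = UNIX_TIMESTAMP()
WHERE country_id = ?;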
If I have a query like:
UPDATE table_x SET a = 1 WHERE id = ? AND (
SELECT SUM(a) < 100 FROM table_x
)
And
hundreds of these queries could be issued at exactly the same time
I need to be certain that a never exceeds 100
Do I need to lock the table or will table_x be locked automatically as it's a subquery?
Assuming this is an InnoDB table, you will have row-level locking. So even if 100 of these are happening at a time, only one transaction will be able to acquire the lock on those rows and finish processing before the next transaction can proceed. There is no difference between how the update and the subquery are processed: to the InnoDB engine this is all one transaction, not two separate transactions.
If you want to see what is going on behind the scenes when you run your query, run SHOW ENGINE INNODB STATUS from the command line while the query is running.
Here is a great walkthrough on what all that output means.
If you want to read more about InnoDB and row-level locking, follow the link here.
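A minimal two-session illustration of that row-level locking (a sketch; the id value is made up):
-- Session 1:
START TRANSACTION;
UPDATE table_x SET a = 1 WHERE id = 42;  -- takes an exclusive lock on that row
-- Session 2, issued before session 1 commits:
UPDATE table_x SET a = 1 WHERE id = 42;  -- blocks until session 1 commits or rolls back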
It is unclear to me (from reading the MySQL docs) whether the following query, run against InnoDB tables on MySQL 5.1, would take a write lock on each of the rows the database updates internally (5000 in total) one at a time, or lock all the rows in the batch. As the database is under really heavy load, this is very important.
UPDATE `records`
INNER JOIN (
SELECT id, name FROM related LIMIT 0, 5000
) AS `j` ON `j`.`id` = `records`.`id`
SET `name` = `j`.`name`
I'd expect it to be per row, but as I don't know a way to make sure of that, I decided to ask someone with deeper knowledge. If this is not the case and the database would lock all the rows in the set, I'd be thankful for an explanation of why.
The UPDATE runs in a transaction; it's an atomic operation, which means that if one of the rows fails (because of a unique constraint, for example) it won't update any of the 5000 rows. This is one of the ACID properties of a transactional database.
Because of this, the UPDATE holds a lock on all of the rows for the entire transaction. Otherwise another transaction could further update the value of a row based on its current value (say, UPDATE records SET value = value * 2). That statement would produce a different result depending on whether the first transaction commits or rolls back, so it has to wait for the first transaction to finish all 5000 updates.
If you want the locks released sooner, just do the update in (smaller) batches.
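A sketch of one such batch; the ORDER BY is my addition (LIMIT without it returns rows in no guaranteed order), and the loop advancing the offset would live in application code:
UPDATE `records`
INNER JOIN (
    SELECT id, name FROM related ORDER BY id LIMIT 0, 1000
) AS `j` ON `j`.`id` = `records`.`id`
SET `records`.`name` = `j`.`name`;
-- commit, then repeat with LIMIT 1000, 1000 and so on until no rows are matched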
P.S. autocommit controls whether each statement is issued in its own transaction, but it does not affect the execution of a single query.
I have an InnoDB table read by a lot of different instances (cloud).
A daemon in each instance takes 100 rows from this table to "do things" with, but I don't want 2 (or more) instances to take the same rows.
So I have a "status" column ("todo", "doing", "done").
INSTANCE 1 takes 100 rows where status = "todo"... then I need to UPDATE these rows to status "doing" as soon as possible, so INSTANCES 2, 3, ..., x can't take the same rows.
How can I do it?
Please, I would like a solution that does not LOCK the WHOLE table, just the rows (that's why I use InnoDB)... I have read a lot about this (LOCK IN SHARE MODE, FOR UPDATE, COMMITs...) but I can't figure out the right way.
You could use the LOCK TABLES and UNLOCK TABLES statements to do this:
http://dev.mysql.com/doc/refman/5.1/en/lock-tables.html
Use a transaction and then SELECT ... FOR UPDATE when you read the records.
This way the records you read are locked. Once you have all the data, update the records to "doing" and COMMIT the transaction.
Maybe what you were missing is the use of a transaction, or the correct order of commands. Here is a basic example:
START TRANSACTION;
SELECT * FROM table WHERE status = 'todo' LIMIT 100 FOR UPDATE;  -- LIMIT 100 matches the batch size from the question
-- loop over the results in code, saving the necessary data to an array/list...
UPDATE table SET status = 'doing' WHERE ...;
COMMIT;
-- process the data...
UPDATE table SET status = 'done' WHERE ...;