I have a medium-volume application that does about 40 writes per second. All of them are UPDATE statements of the form:
UPDATE item SET values=... WHERE row_id='18273-3749d-8743'
Furthermore, no item is ever updated by more than one writer at the same time. It seems simple enough, and there should never be a deadlock if we ignore everything else. However, I have a fulltext field that seems to acquire a pseudo-table-level lock instead of a row-level lock, if I'm troubleshooting this correctly. Here is what SHOW ENGINE INNODB STATUS reports:
LATEST DEADLOCK DETECTED
...
RECORD LOCKS space ... index FTS_DOC_ID_INDEX of table
So, if I'm interpreting this correctly, it seems like FTS_DOC_ID_INDEX is taking some sort of "more-than-row-level" lock to update the search index. Is this what is occurring? And if so, what's the correct way to deal with it? I cannot decrease the number of writes to the application, so is there a way to do a "safe update" on the FTS field? Or do I need to write the updates such that I remove the FTS field and queue those updates separately (which would seem like a huge pain)? What's the best way to deal with this?
(After seeing the SELECT, etc., I may have a better answer, but here is some starting advice.)
Whenever you may get a deadlock, your code should provide for recovering from it. That is, check for a deadlock error after doing the UPDATE, then re-execute the update once.
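A minimal sketch of that pattern, expressed as the SQL the client would re-issue (the column name item_values is made up, standing in for the question's values=...):

-- The write, run as its own short transaction. On a deadlock, MySQL returns
-- error 1213 (ER_LOCK_DEADLOCK) and has already rolled the transaction back,
-- so the client simply re-issues this block once.
START TRANSACTION;
UPDATE item SET item_values = '...' WHERE row_id = '18273-3749d-8743';
COMMIT;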
I have a piece of (Perl) code, of which I have multiple instances running at the same time, each with a different, unique value for a variable $dsID. Nearly all of them keep falling over when they try to execute the following (prepared) SQL statement:
DELETE FROM ssRates WHERE ssID IN (SELECT id FROM snapshots WHERE dsID=?)
returning the error:
Lock wait timeout exceeded; try restarting transaction
Which sounds clear enough, except for a few things.
I have autocommit enabled, and am not using (explicit) transactions.
I'm using InnoDB which is supposed to use row-level locking.
The argument passed as $dsID is unique to each instance, so there should be no conflicting locks to get into deadlocks.
Actually, at present, there are no rows that match the inner SELECT clause (I have verified this).
Given these things, I cannot understand why I am getting lock problems -- no locks should be waiting on each other, and there is no scope for deadlocks! (Note, though, that the same script later on does insert into the ssRates table, so some instances of the code may be doing that).
Having googled around a little, this looks like it may be a "gap locking" phenomenon, but I'm not entirely sure why, and more to the point, I'm not sure what the right solution is. I have some possible workarounds, the obvious one being to split the process up: do the SELECT first, then loop over the results issuing DELETE commands (sketched below). But really, I'd like to understand this, otherwise I'm going to end up in this mess again!
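For concreteness, the split workaround would look something like this (just a sketch, deleting one id at a time):

SELECT id FROM snapshots WHERE dsID=?
-- then, for each id returned:
DELETE FROM ssRates WHERE ssID=?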
So I have two questions for you friendly experts.
Is this a gap-locking thing?
If not, what is it? If yes, why? I can't see how this condition matches the gap-lock definition.
(NB, server is running MariaDB: 5.5.68-MariaDB; in case this is something fixed in newer versions).
In my code I need to do the following:
Check a MySQL table (InnoDB) if a particular row (matching some criteria) exists. If it does, return it. If it doesn't, create it and then return it.
The problem I seem to have is race conditions. Every now and then two processes run so closely together, that they both check the table at the same time, don't see the row, and both insert it - thus duplicate data.
I'm reading MySQL documentation trying to come up with some way to prevent this. What I've come up so far:
Unique indexes seem to be one option, but they're not universal (they only work when the criterion is something unique across all rows).
Transactions even at SERIALIZABLE level don't protect against INSERT, period.
Neither do SELECT ... LOCK IN SHARE MODE or SELECT ... FOR UPDATE.
A LOCK TABLE ... WRITE would do it, but it's a very drastic measure - other processes won't be able to read from the table, and I need to lock ALL tables that I intend to use until I unlock them.
Basically, I'd like to do either of the following:
Prevent all INSERTs to the table from processes other than mine, while allowing SELECT/UPDATE (this is probably impossible because it makes so little sense most of the time).
Organize some sort of manual locking. The two processes would coordinate among themselves which one gets to do the select/insert dance, while the other waits. This needs some sort of operation that waits until the lock is released. I could probably implement a spin-lock (one process repeatedly checks if the other has released the lock), but I'm afraid that it would be too resource intensive.
I think I found an answer myself. Transactions plus SELECT ... FOR UPDATE on an InnoDB table can provide a synchronization lock (a mutex). Have all processes lock a specific row in a specific table before they start their work. Then only one will be able to run at a time, and the rest will wait until the first one finishes its transaction.
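A minimal sketch of that pattern, with a made-up lock table and row name:

CREATE TABLE app_mutex (name VARCHAR(64) PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO app_mutex (name) VALUES ('item-creation');

-- every process wraps its check-then-insert like this:
START TRANSACTION;
SELECT name FROM app_mutex WHERE name='item-creation' FOR UPDATE;
-- ... check whether the target row exists, insert it if not ...
COMMIT;  -- releases the lock; the next waiting process proceeds

MySQL also offers named locks (GET_LOCK()/RELEASE_LOCK()) that can serve the same purpose without a dedicated table.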
What is the exact difference between the two locking read clauses:
SELECT ... FOR UPDATE
and
SELECT ... LOCK IN SHARE MODE
And why would you need to use one over the other?
I have been trying to understand the difference between the two. I'll document what I have found in hopes it'll be useful to the next person.
Both LOCK IN SHARE MODE and FOR UPDATE ensure no other transaction can update the rows that are selected. The difference between the two is in how they treat locks while reading data.
LOCK IN SHARE MODE does not prevent another transaction from reading the same row that was locked.
FOR UPDATE prevents other locking reads of the same row (non-locking reads can still read that row; LOCK IN SHARE MODE and FOR UPDATE are locking reads).
This matters in cases like updating counters, where you read the value in one statement and update it in another. Here LOCK IN SHARE MODE allows two transactions to read the same initial value, so if both transactions increment the counter by 1, the ending count might increase by only 1, since both started from the same value.
Using FOR UPDATE would have blocked the second transaction from reading the value until the first one was done, ensuring the counter is incremented by 2.
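To see this concretely, here is a sketch with two mysql sessions and a made-up counters table:

-- session 1:
START TRANSACTION;
SELECT value FROM counters WHERE id=1 FOR UPDATE;  -- reads 10 and locks the row
-- session 2 now runs the same SELECT ... FOR UPDATE and blocks
UPDATE counters SET value=11 WHERE id=1;
COMMIT;
-- session 2 unblocks, reads 11, and writes 12: the counter gains 2 as expected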
FOR UPDATE: You're informing MySQL that the selected rows may be updated in the next steps (before the end of this transaction), so MySQL doesn't grant any shared locks on the same set of rows to any other transaction at that moment. Another transaction that wants to lock the same rows (whether for a locking read or a write) must wait until the first transaction is finished.
FOR SHARE: Indicates to MySQL that you're selecting the rows from the table only for reading, and that they must not be modified before the end of the transaction. Any number of transactions can hold a shared (read) lock on the rows.
Note: there is a risk of deadlock if these clauses (FOR UPDATE, FOR SHARE) are not used carefully.
Either way the integrity of your data will be guaranteed, it's just a question of how the database guarantees it. Does it do so by raising runtime errors when transactions conflict with each other (i.e. FOR SHARE), or does it do so by serializing any transactions that would conflict with each other (i.e. FOR UPDATE)?
FOR SHARE (a.k.a. LOCK IN SHARE MODE): Transactions face a higher probability of failure due to deadlock, because they delay blocking until the moment an update statement is received; at that point they either block until all read locks are released, or fail with a deadlock error if another write is in progress. Only one client will eventually succeed: the others will fail with a deadlock when they try to update, and will have to retry their transactions.
FOR UPDATE: Transactions won't fail due to deadlock, because they won't be allowed to run concurrently. This may be desirable for example because it makes it easier to reason about multi-threading if all updates are serialized across all clients. However, it limits the concurrency you can achieve because all other transactions block until the first transaction is finished.
Pro-Tip: As an exercise I recommend taking some time to play with a local test database and a couple mysql clients on the command line to prove this behavior for yourself. That is how I eventually understood the difference myself, because it can be very abstract until you see it in action.
In my application, I have two queries that occur from time to time (from different processes), that cause a deadlock.
Query #1
UPDATE tblA, tblB SET tblA.varcharfield=tblB.varcharfield WHERE tblA.varcharfield IS NULL AND [a few other conditions];
Query #2
INSERT INTO tmp_tbl SELECT * FROM tblA WHERE [various conditions];
Both of these queries take a significant time, as these tables have millions of rows. When query #2 is running, it seems that tblA is locked in mode S. It seems that query #1 requires an X lock. Since this is incompatible with an S lock, query #1 waits for up to 30 seconds, at which point I get a deadlock:
Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
Based on what I've read in the documentation, I think I have a couple options:
Set an index on tblA.varcharfield. Unfortunately, I think that this would require a very large index to store the field of varchar(512). (See edit below... this didn't work.)
Disable locking with SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. I don't understand the implications of this, and am worried about corrupt data. I don't use explicit transactions in my application currently, but I might at some point in the future.
Split my time-consuming queries into small pieces so that they can queue and run in MySQL without reaching the 30-second timeout. This wouldn't really fix the heart of the issue, and I am concerned that when my database servers get busy that the problem will occur again.
Simply retrying queries over and over again... not an option I am hoping for.
How should I proceed? Are there alternate methods I should consider?
EDIT: I have tried setting an index on varcharfield, but the table is still locking. I suspect that the locking happens when the UPDATE portion is actually executing. Are there other suggestions to get around this problem?
A. If we assume that indexing varcharfield takes a lot of disk space and adding a new column will not hit you hard, I can suggest the following approach:
Create a new field with datatype TINYINT.
Index it.
This field will store 0 if varcharfield is NULL and 1 otherwise.
Rewrite the first query to filter on the new field instead. That way it will not lock the entire table (a sketch follows below).
Hope it helps.
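A rough sketch of the approach, with made-up names for the flag column and index:

ALTER TABLE tblA ADD COLUMN is_filled TINYINT NOT NULL DEFAULT 0,
                 ADD INDEX idx_is_filled (is_filled);
UPDATE tblA SET is_filled = 1 WHERE varcharfield IS NOT NULL;  -- one-time backfill

-- query #1 rewritten to rely on the indexed flag:
UPDATE tblA, tblB SET tblA.varcharfield = tblB.varcharfield, tblA.is_filled = 1
WHERE tblA.is_filled = 0;  -- plus the other conditions from the question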
You can index just a prefix of the varchar column; it will still work and will require less space. Just specify the index length:
CREATE INDEX someindex ON sometable (varcharcolumn(32))
I was able to solve the issue by adding explicit LOCK TABLES statements around both queries. This turned out to be a better solution, since each query affects so many records and both of them are background processes. They now wait on each other.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
While this is an okay solution for me, it obviously isn't the answer for everyone. Locking with WRITE means that no other session can READ the table; only a READ lock will allow others to READ.
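A sketch of what that wrapping looks like, using the queries from the question (every table a statement touches has to appear in the lock list):

-- around query #1:
LOCK TABLES tblA WRITE, tblB READ;
UPDATE tblA, tblB SET tblA.varcharfield = tblB.varcharfield
WHERE tblA.varcharfield IS NULL;  -- plus the other conditions
UNLOCK TABLES;

-- around query #2:
LOCK TABLES tmp_tbl WRITE, tblA READ;
INSERT INTO tmp_tbl SELECT * FROM tblA;  -- plus the WHERE conditions
UNLOCK TABLES;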
I have a very slow query that I need to run on a MySQL database from time to time.
I've discovered that attempts to update the table that is being queried are blocked until the query has finished.
I guess this makes sense, as otherwise the results of the query might be inconsistent, but it's not ideal for me, as the query is of much lower importance than the update.
So my question really has two parts:
Out of curiosity, what exactly does MySQL do in this situation? Does it lock the table for the duration of the query? Or try to lock it before the update?
Is there a way to make the slow query not blocking? I guess the options might be:
Kill the query when an update is needed.
Run the query on a copy of the table as it was just before the update took place
Just let the query go wrong.
Anyone have any thoughts on this?
It sounds like you are using a MyISAM table, which uses table level locking. In this case, the SELECT will set a shared lock on the table. The UPDATE then will try to request an exclusive lock and block and wait until the SELECT is done. Once it is done, the UPDATE will run like normal.
MyISAM Locking
If you switched to InnoDB, then your SELECT will set no locks by default. There is no need to change transaction isolation levels as others have recommended (repeatable read is default for InnoDB and no locks will be set for your SELECT). The UPDATE will be able to run at the same time. The multi-versioning that InnoDB uses is very similar to how Oracle handles the situation. The only time that SELECTs will set locks is if you are running in the serializable transaction isolation level, you have a FOR UPDATE/LOCK IN SHARE MODE option to the query, or it is part of some sort of write statement (such as INSERT...SELECT) and you are using statement based binary logging.
InnoDB Locking
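If switching engines is an option, a sketch of the check and the conversion (table name made up; the ALTER rebuilds the table, so run it in a quiet window):

SHOW TABLE STATUS LIKE 'mytable';   -- the Engine column shows MyISAM or InnoDB
ALTER TABLE mytable ENGINE=InnoDB;  -- converts and rebuilds the table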
For the purposes of the select statement, you should probably issue a:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
command on the connection, which causes the subsequent select statements to operate without locking.
Don't use the 'SELECT ... FOR UPDATE', as that definitely locks the table rows that are affected by the select statement.
The full list of MySQL transaction isolation levels is in the docs.
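If you'd rather not change the whole session, SET TRANSACTION without the SESSION keyword applies to the next transaction only; a sketch with a made-up table name:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
SELECT COUNT(*) FROM big_table;  -- the slow query, now a dirty read
COMMIT;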
First of all, you need to know which engine you're using (MyISAM or InnoDB).
This is clearly a transaction problem.
Take a look at the section 13.4.6, SET TRANSACTION Syntax, in the MySQL manual.
UPDATE LOW_PRIORITY ... may be helpful, though note that per the MySQL docs it delays the UPDATE until no other clients are reading from the table, so it is the user requesting the update who waits (and it only affects engines that use table-level locking, such as MyISAM).
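For reference, the syntax (table and column names made up; effective only with table-locking engines such as MyISAM):

UPDATE LOW_PRIORITY mytable SET mycol = 'x' WHERE id = 1;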
What table types are you using? If you are on MyISAM, switching to InnoDB (if you can; it had no full-text indexing before MySQL 5.6) opens up more options for this sort of thing, as it supports transactions and row-level locking.
I don't know MySQL well, but it sounds like a transaction problem.
You should be able to set the transaction type to a dirty read (READ UNCOMMITTED) for your SELECT query.
That won't necessarily give you correct results, but it shouldn't be blocked.
Better still would be to make the first query go faster: do some analysis and check whether you can speed it up with correct indexing and so on.