InnoDB deadlock with lock modes S and X - mysql

In my application, I have two queries that occur from time to time (from different processes), that cause a deadlock.
Query #1
UPDATE tblA, tblB SET tblA.varcharfield=tblB.varcharfield WHERE tblA.varcharfield IS NULL AND [a few other conditions];
Query #2
INSERT INTO tmp_tbl SELECT * FROM tblA WHERE [various conditions];
Both of these queries take a significant time, as these tables have millions of rows. When query #2 is running, it seems that tblA is locked in mode S. It seems that query #1 requires an X lock. Since this is incompatible with an S lock, query #1 waits for up to 30 seconds, at which point I get a deadlock:
Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
Based on what I've read in the documentation, I think I have a couple options:
Set an index on tblA.varcharfield. Unfortunately, I think that this would require a very large index to store the field of varchar(512). (See edit below... this didn't work.)
Disable locking with SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. I don't understand the implications of this and am worried about corrupt data. I don't use explicit transactions in my application currently, but I might at some point in the future.
Split my time-consuming queries into small pieces so that they can queue and run in MySQL without reaching the 30-second timeout. This wouldn't really fix the heart of the issue, and I am concerned that when my database servers get busy that the problem will occur again.
Simply retrying queries over and over again... not an option I am hoping for.
How should I proceed? Are there alternate methods I should consider?
EDIT: I have tried setting an index on varcharfield, but the table is still locking. I suspect that the locking happens when the UPDATE portion is actually executing. Are there other suggestions to get around this problem?

A. If we assume that indexing varcharField would take a lot of disk space, and that adding a new column will not hit you hard, I can suggest the following approach:
create a new field with datatype "tinyint"
index it.
this field will store 0 if varcharField is NULL and 1 otherwise.
rewrite the first query to do the update relying on the new field, as sketched below. In that case it will not lock the entire table.
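A rough sketch of what I mean (the flag column name, index name, and join column here are made up; carry over whatever other conditions your original query had):
ALTER TABLE tblA ADD COLUMN varchar_is_set TINYINT NOT NULL DEFAULT 0, ADD INDEX idx_varchar_is_set (varchar_is_set);
-- backfill: 0 when varcharfield is NULL, 1 otherwise
UPDATE tblA SET varchar_is_set = IF(varcharfield IS NULL, 0, 1);
-- Query #1 rewritten to drive the update off the small indexed flag
UPDATE tblA, tblB SET tblA.varcharfield = tblB.varcharfield, tblA.varchar_is_set = 1 WHERE tblA.varchar_is_set = 0 AND tblA.joincol = tblB.joincol;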
Hope it helps.

You can index only part of the varchar column; it will still work and will require less space. Just specify the prefix length:
CREATE INDEX someindex ON sometable (varcharcolumn(32))

I was able to solve the issue by adding explicit LOCK TABLES statements around both queries. This turned out to be a better solution, since each query affects so many records and both of them are background processes. They now wait on each other.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
While this is an okay solution for me, it obviously isn't the answer for everyone. Locking a table with WRITE means nobody else can READ it; only a READ lock will allow others to keep reading.
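Roughly, something like this (a sketch based on the two queries above; note that LOCK TABLES requires locking every table the statements touch):
LOCK TABLES tblA WRITE, tblB READ;
UPDATE tblA, tblB SET tblA.varcharfield=tblB.varcharfield WHERE tblA.varcharfield IS NULL AND [a few other conditions];
UNLOCK TABLES;
and
LOCK TABLES tmp_tbl WRITE, tblA READ;
INSERT INTO tmp_tbl SELECT * FROM tblA WHERE [various conditions];
UNLOCK TABLES;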

Related

mysql deadlocking with autocommit on and no conflict

I have a piece of (Perl) code, of which I have multiple instances running at the same time, all with a different - unique - value for a variable $dsID. Nearly all of them keep falling over when they try to execute the following (prepared) SQL statement:
DELETE FROM ssRates WHERE ssID IN (SELECT id FROM snapshots WHERE dsID=?)
returning the error:
Lock wait timeout exceeded; try restarting transaction
Which sounds clear enough, except for a few things.
I have autocommit enabled, and am not using (explicit) transactions.
I'm using InnoDB which is supposed to use row-level locking.
The argument passed as $dsID is unique to each instance, so there should be no conflicting locks to get into deadlocks.
Actually, at present, there are no rows that match the inner SELECT clause (I have verified this).
Given these things, I cannot understand why I am getting lock problems -- no locks should be waiting on each other, and there is no scope for deadlocks! (Note, though, that the same script later on does insert into the ssRates table, so some instances of the code may be doing that).
Having googled around a little, this looks like it may be a "gap locking" phenomenon, but I'm not entirely sure why, and more to the point, I'm not sure what the right solution is. I have some possible workarounds -- the obvious one being to split the process up: run the SELECT first, then loop over the results issuing DELETE commands (sketched at the end of this question). But really, I'd like to understand this, otherwise I'm going to end up in this mess again!
So I have two questions for you friendly experts.
Is this a gap-locking thing?
If not, what is it? If yes, why? I can't see how this condition matches the gap-lock definition.
(NB, server is running MariaDB: 5.5.68-MariaDB; in case this is something fixed in newer versions).
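For reference, the split-up workaround I mentioned would look something like this (just a sketch, with the same placeholders as the prepared statement above):
SELECT id FROM snapshots WHERE dsID=?;
-- then, for each id returned (or in small batches):
DELETE FROM ssRates WHERE ssID=?;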

InnoDB full-text-search deadlock

I have a medium-volume application that does about 40 writes per second. All of them are in the format of:
UPDATE item SET values=... WHERE row_id='18273-3749d-8743'
Furthermore, no item is updated multiple times at the same time. It seems simple enough and there should never be a deadlock if we ignore everything else. However, I have a fulltext field that seems to acquire a pseudo-table level lock, instead of a row-level lock, if I'm troubleshooting this correctly. Here is the error that it gives on SHOW ENGINE INNODB STATUS:
LATEST DEADLOCK DETECTED
...
RECORD LOCKS space ... index FTS_DOC_ID_INDEX of table
So, if I'm interpreting this correctly, it seems like FTS_DOC_ID_INDEX is doing some sort of "more-than-row-level-lock" to update the search index. Is this what is occurring? And if so, what's the correct way to deal with this -- as in, I cannot decrease the number of writes to the application, is there a way to do a "safe update" (?) on the FTS field? Or do I need to write the updates such that I remove the fts field and queue those separately (which would seem like a huge pain to do). What's the best way to deal with this?
(After seeing the SELECT, etc., I may have a better answer, but here is some starting advice.)
Wherever you may get a deadlock, your code should provide for recovering from it. That is, check for a deadlock error after doing the UPDATE, then re-execute the UPDATE once.
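Most people do this retry in the application layer, but purely as an illustration, here is one possible shape in SQL itself (a sketch only: the procedure name and the VARCHAR(36) length are assumptions, the table and column names come from the question, and it assumes autocommit so each UPDATE is its own transaction):
DELIMITER //
CREATE PROCEDURE update_item_with_retry(IN p_row_id VARCHAR(36), IN p_values TEXT)
BEGIN
  DECLARE deadlocked BOOL DEFAULT FALSE;
  -- MySQL error 1213 = ER_LOCK_DEADLOCK
  DECLARE CONTINUE HANDLER FOR 1213 SET deadlocked = TRUE;
  UPDATE item SET `values` = p_values WHERE row_id = p_row_id;
  IF deadlocked THEN
    SET deadlocked = FALSE;
    -- retry exactly once
    UPDATE item SET `values` = p_values WHERE row_id = p_row_id;
    IF deadlocked THEN
      SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'deadlock persisted after one retry';
    END IF;
  END IF;
END //
DELIMITER ;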

MySql count(*) super slow with concurrent queries

I have a script that tries to read all the rows from a table like this:
select count(*) from table where col1 = 'Y' or col1 is null;
col1 and col2 are not indexed, and this query usually takes ~20 seconds, but if someone is already running this query, it takes ages and gets blocked.
We only have around 100k rows in the table, and I tried it without the WHERE clause and it causes the same issue.
The table uses InnoDB, so it doesn't store the exact count, but I am curious whether there is any concurrency parameter I should look into. I am not sure if the absence of indexes on the table causes the issue, but it doesn't make sense to me.
Thanks!
If they are not indexed, MySQL has to read your tables' entire disk files to find your data. A single hard disk cannot handle concurrent read-intensive operations very well. You have to index.
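For example (yourtable and idx_col1 are placeholder names; a single-column index on col1 can be used both for the = 'Y' test and the IS NULL test):
CREATE INDEX idx_col1 ON yourtable (col1);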
It looks like your SELECT COUNT(*)... query is being serialized with other operations on your table. Unless you tell the MySQL server otherwise, your query will do its best to be very precise.
Try changing the transaction isolation level by issuing this command immediately before your query.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Setting this enables so-called dirty reads, which means you might not count everything in the table that changes during your operation. But that probably will not foul up your application too badly.
(Adding appropriate indexes is always a good idea, but not the cause of the problem you ask about.)

Mysql Lock times in slow query log

I have an application that has been running fine for quite awhile, but recently a couple of items have started popping up in the slow query log.
All the queries are complex and ugly multi-join SELECT statements that could use refactoring. I believe all of them have blobs, meaning they get written to disk. The part that gets me curious is why some of them have a lock time associated with them. None of the queries have any specific locking protocols set by the application. As far as I know, by default you can read against locks unless explicitly specified.
So my question: what scenarios would cause a SELECT statement to have to wait for a lock (and thereby be reported in the slow query log)? Assume both InnoDB and MyISAM environments.
Could the disk interaction be listed as some sort of lock time? If yes, is there documentation around that says this?
thanks in advance.
MyISAM will give you concurrency problems: an entire table is completely locked while an INSERT is in progress.
InnoDB should have no problem with reads, even while a write/transaction is in progress, due to its MVCC.
However, just because a query is showing up in the slow-query log doesn't mean the query is slow - how many seconds does it take, and how many records are being examined?
Put EXPLAIN in front of the query to get a breakdown of how it is being executed.
Here's a good resource for learning about EXPLAIN (outside of the excellent MySQL documentation about it)
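For example (table and column names here are invented, just to show where EXPLAIN goes):
EXPLAIN SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id WHERE o.created_at > '2020-01-01';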
I'm not certain about MySQL, but I know that in SQL Server SELECT statements do NOT read against locks. Doing so would allow you to read uncommitted data and potentially see duplicate records or miss a record entirely. The reason is that if another process is writing to the table, the database engine may decide it's time to reorganize some data and shift it around on disk. So it moves a record you have already read to the end and you see it again, or it moves one from the end up to a spot you have already passed.
There's a guy on the net somewhere who actually wrote a couple of scripts to prove that this happens and I tried them once and it only took a few seconds before a duplicate showed up. Of course, he designed the scripts in a fashion that would make it more likely to happen, but it proves that it definitely can happen.
This is okay behaviour if your data doesn't need to be accurate and can certainly help prevent deadlocks. However, if you're working on an application dealing with something like people's money then that's very bad.
In SQL Server you can use the WITH NOLOCK hint to tell your select statement to ignore locks. I'm not sure what the equivalent in MySql would be but maybe someone else here will say.

What does MySQL do if you attempt to update a table that is being queried?

I have a very slow query that I need to run on a MySQL database from time to time.
I've discovered that attempts to update the table that is being queried are blocked until the query has finished.
I guess this makes sense, as otherwise the results of the query might be inconsistent, but it's not ideal for me, as the query is of much lower importance than the update.
So my question really has two parts:
Out of curiosity, what exactly does MySQL do in this situation? Does it lock the table for the duration of the query? Or try to lock it before the update?
Is there a way to make the slow query non-blocking? I guess the options might be:
Kill the query when an update is needed.
Run the query on a copy of the table as it was just before the update took place.
Just let the query go wrong.
Anyone have any thoughts on this?
It sounds like you are using a MyISAM table, which uses table level locking. In this case, the SELECT will set a shared lock on the table. The UPDATE then will try to request an exclusive lock and block and wait until the SELECT is done. Once it is done, the UPDATE will run like normal.
MyISAM Locking
If you switched to InnoDB, then your SELECT will set no locks by default. There is no need to change transaction isolation levels as others have recommended (repeatable read is default for InnoDB and no locks will be set for your SELECT). The UPDATE will be able to run at the same time. The multi-versioning that InnoDB uses is very similar to how Oracle handles the situation. The only time that SELECTs will set locks is if you are running in the serializable transaction isolation level, you have a FOR UPDATE/LOCK IN SHARE MODE option to the query, or it is part of some sort of write statement (such as INSERT...SELECT) and you are using statement based binary logging.
InnoDB Locking
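To make that concrete (some_table and the id value are just examples), under InnoDB's default REPEATABLE READ the first statement below is a non-locking consistent read, while the other two are locking reads that do take row locks:
SELECT * FROM some_table WHERE id = 42;
SELECT * FROM some_table WHERE id = 42 LOCK IN SHARE MODE;
SELECT * FROM some_table WHERE id = 42 FOR UPDATE;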
For the purposes of the select statement, you should probably issue a:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
command on the connection, which causes the subsequent select statements to operate without locking.
Don't use the 'SELECT ... FOR UPDATE', as that definitely locks the table rows that are affected by the select statement.
The full list of MySQL transaction isolation levels is in the docs.
First of all, you need to know what engine you're using (MyISAM or InnoDB).
This is clearly a transaction problem.
Take a look at section 13.4.6, SET TRANSACTION Syntax, in the MySQL manual.
UPDATE LOW_PRIORITY .... may be helpful. The MySQL docs aren't clear on whether this lets the user requesting the update continue while the update happens when it can (which is what I think happens) or whether the user has to wait (which would be worse than at present), and I can't remember.
What table types are you using? If you are on MyISAM, switching to InnoDB (if you can - it has no full-text indexing) opens up more options for this sort of thing, as it supports transactional features and row-level locking.
I don't know MySQL, but it sounds like a transaction problem.
You should be able to set the transaction type to a dirty read for your select query.
That won't necessarily give you correct results, but it shouldn't be blocked.
Better would be to make the first query go faster. Do some analysis and check whether you can speed it up with correct indexing and so on.