Query takes a long time in 'statistics' state - MySQL

One of my queries takes a long time (more than 300 seconds) to execute, even though it is a simple query, and it stalls in the 'statistics' state.
It happens during concurrent execution of the same query:
"select 1 from <table_name> where id = <value> for update"
This is despite having 'optimizer_search_depth' set to 0 and a buffer pool size of 14GB.

SELECT ... FOR UPDATE sets an exclusive lock on the rows it returns until the transaction is done; therefore, when you run the same query multiple times concurrently, the other sessions have to wait for the locks to be released.
I assume you are using InnoDB as the engine for your table?
Please see innodb-locking-reads for further information on locking with FOR UPDATE.
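A minimal two-session sketch of that blocking behavior (the table name t and the id value 42 are assumed for illustration):

-- Session 1:
START TRANSACTION;
SELECT 1 FROM t WHERE id = 42 FOR UPDATE;   -- acquires an exclusive lock on the row
-- ... transaction left open ...

-- Session 2, at the same time:
SELECT 1 FROM t WHERE id = 42 FOR UPDATE;
-- blocks until session 1 commits or rolls back,
-- or until innodb_lock_wait_timeout expires

-- Session 1:
COMMIT;   -- session 2's query now proceeds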

Related

MySQL Locking scenarios

We have a large table of about 100 million records and 100+ fields, and there are frequent select and update queries running against this table.
Now we have a requirement to set almost 50+ fields to null, and we are planning to perform this update based on the primary key.
We are aware that there will be a locking mechanism when two updates are trying to update the same record.
Our question is: what happens when an update and a select query try to access the same record?
For example, in our case:
Case 1: If we are selecting some 10000 records in one thread, and during this select query's execution we try to update one of those 10000 records to null in another thread, will both queries execute without waiting for each other? How will the locking mechanism behave in this scenario?
Case 2: If we are updating 10000 records to null, and during this update query's execution we try to select one of those 10000 records, will both queries execute without waiting for each other? How will the locking mechanism behave in this scenario?
We are using MySQL 5.7 with the InnoDB engine; assume all MySQL parameters are at their defaults.
Apologies for this basic question.
Given your premise that you use InnoDB as the storage engine and default configuration options:
Readers do not block writers, and vice-versa, full stop.
A SELECT query (unless it's a locking read) can read rows even if they are locked by a concurrent transaction. The result of the query will be based on the latest version of the rows that were committed at the time the SELECT query's transaction snapshot started.
Likewise, an UPDATE can lock a row even if it is being read by a concurrent transaction.
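A minimal sketch of both points (table t, primary key pk, and the values are assumed for illustration):

-- Session 1:
START TRANSACTION;
UPDATE t SET col = NULL WHERE pk = 1;   -- takes an exclusive row lock; not committed yet

-- Session 2:
SELECT col FROM t WHERE pk = 1;
-- returns immediately with the last committed value of col;
-- this nonlocking consistent read is not blocked by session 1's lock

-- Session 1:
COMMIT;   -- transactions whose snapshot starts after this see col = NULL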

Can MySQL InnoDB handle heavy parallel processing?

I have a MySQL system with a table of 1.7M records. This is a production system. It was previously MyISAM and very resilient, but as a test I have converted it (and the PHP script) to InnoDB in the hope that it would run faster and that row-level locking would make it even more resilient. It is serviced by 30 robots using PHP 7 CLI. Each of them scans the table for records that need to be updated, updates them, then continues as part of the team until the job is done. They do this in chunks of 40 rows, which means the script is run about 42,500 times.
But during testing I have noticed some features of InnoDB transactions that I had not expected and that seem to be showstoppers. Before I roll it back I thought I'd ask others for their views, whether I've completely misunderstood something, or to prove or disprove my findings. The issue centres around one db call (all search fields are indexed); below is pseudo-code:
update table set busy=$token where condition=true order by id $order limit $units
if affected rows != $units
    do function to clear
    return
else
    do stuff.....
endif
BEFORE
Under MyISAM the result is that the robots each take a run at getting table-level locks and just queue until they get them. This can produce bottlenecks, but all are resolved within a minute.
AFTER
Under InnoDB the call is fine for one robot, but any attempt at multi-user working results in 'Lock wait timeout exceeded; try restarting transaction'.
Changing wait_timeout / autocommit / tx_isolation makes no difference. Nor does converting this to a transaction and using:
begin
select .... for update
update
test
commit or rollback
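(For reference: the timeout behind 'Lock wait timeout exceeded' is governed by innodb_lock_wait_timeout rather than wait_timeout, so that is the variable worth checking. A hedged example:)

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';   -- default is 50 seconds
SET SESSION innodb_lock_wait_timeout = 120;       -- dynamic per session in MySQL 5.5+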
It seems to me that:
1. InnoDB creates an implicit transaction for every update, even if you don't set up a transaction explicitly. If these take too long then parallel processing is not possible.
2. Much more importantly, when InnoDB locks rows it does not 'know' which rows it locked. You can't do:
begin
select 10 rows where condition=this for update
update the rows I locked
commit
You have to do two identical calls like this:
begin
select 10 rows where condition=this for update
update 10 rows where condition=this
commit
This is a recipe for deadlocks: robot1 may lock 40 rows, robot2 locks 40 others, and so on, but then robot1 updates 40 rows which may be completely different from the ones it just locked. This continues until all rows are locked and the robots cannot write back to the table.
So where I have 30 robots contending for chunks of rows that need updating, it seems to me that InnoDB is useless for my purposes. It is clever, but not clever enough to handle heavy parallel processing.
Any thoughts...
Ponder this approach:
SET autocommit = 1;
Restart:
    $left_off = 0;
Loop:
    # grab one item:
    BEGIN;
    $id = SELECT id FROM tbl WHERE condition AND id > $left_off
              ORDER BY id LIMIT 1 FOR UPDATE;
    if nothing returned, you are at the end of the table; COMMIT and GOTO Restart
    UPDATE tbl SET busy = $token WHERE id = $id;
    COMMIT;
    do stuff
    UPDATE tbl SET busy = $free WHERE id = $id;   -- release it
    $left_off = $id;
    GOTO Loop
Notes:
It seems that the only reason to set busy is if "do stuff" hangs onto the row "too long". Am I correct?
I chose to lock only one at a time -- less complexity.
$left_off is to avoid scanning over lots of rows again and again. No, OFFSET is not a viable alternative.
BEGIN overrides autocommit. So that transaction lasts until COMMIT.
The second UPDATE is run with autocommit=1, so it is a transaction unto itself.
Be sure to tailor the number of robots -- too few = too slow; too many = too much contention. It is hard to predict the optimal value.
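If grabbing a chunk at a time is preferred over single rows, a hedged variation of the same idea (table and column names taken from the pseudo-code above; the ids are carried in the client between the two statements) is to update exactly the rows that were locked, by primary key, so the UPDATE cannot drift onto rows another robot has locked:

BEGIN;
SELECT id FROM tbl WHERE condition
    ORDER BY id LIMIT 40 FOR UPDATE;          -- collect the returned ids in the client
UPDATE tbl SET busy = $token
    WHERE id IN (...the ids just selected...);   -- targets only the locked rows
COMMIT;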
During my tests of InnoDB vs MyISAM I found that, once I did resolve the contention issues, the InnoDB model was 40% slower than MyISAM. But I do believe that with further tweaking this can be reduced so that it runs on a par with MyISAM.
What I did notice was that MyISAM would queue 'waiting for table-level lock' indefinitely, which actually suited me but punished the hard disk, whereas InnoDB is a much more democratic process and the disk access is more even. I have rolled it back for the moment but will pursue the InnoDB version in a few weeks with the adjustments I commented on above.
In answer to my own question: yes, InnoDB can handle heavy parallel processing with a lot of tweaking and rationalizing of your database design. Disappointing that no one answered my question about whether InnoDB record locking has an awareness of which records it locks.

InnoDB lock exhaustion with batched transactional DELETEs under DBI

I am using Perl and DBI to perform deletes in chunks of 1000 on a very large MySQL table, but I am receiving this error: DBD::mysql::db do failed: The total number of locks exceeds the lock table size.
Here is the Perl code with the SQL statement that performs the deletes:
my $q = q{
    DELETE FROM table
    WHERE date_format(date, '%Y-%m') > '2015-01'
    LIMIT 1000
};
my $rc = '';
until ($rc eq '0E0') {        # DBI's do() returns '0E0' when zero rows are affected
    $rc = $dbh->do($q);
    $dbh->commit();
}
In my experience this error has only occurred when trying to delete or insert a very large number of records all at once with one statement. In fact, the viable solutions I have been able to find are:
1. Increase the InnoDB buffer pool size using the innodb_buffer_pool_size global variable.
2. Perform the delete in chunks.
I have not tried solution 1 for two reasons: first, it seems that in my specific situation it would only increase the time before the buffer is eventually filled (though I am not sure about that), and second, we are not certain what effect it may have on the application using the database.
I would like to know:
- Why is this error occurring even though I am deleting in chunks?
- Is there a quick high-level solution to this problem with Perl and/or DBI?
- Any other info that could lead to a solution.
Why is this error occurring even though I am deleting in chunks?
InnoDB uses row-level locking:
14.5.8 Locks Set by Different SQL Statements in InnoDB
A locking read, an UPDATE, or a DELETE generally set record locks on every index record that is scanned in the processing of the SQL statement. It does not matter whether there are WHERE conditions in the statement that would exclude the row. InnoDB does not remember the exact WHERE condition, but only knows which index ranges were scanned. The locks are normally next-key locks that also block inserts into the “gap” immediately before the record.
[...]
DELETE FROM ... WHERE ... sets an exclusive next-key lock on every record the search encounters.
(emphasis added)
This means that your query will lock every row it scans, even rows that don't match the condition in your WHERE clause.
I don't know the exact execution details of your query, but I imagine that with a large table, it wouldn't be difficult to overrun the default 128 MB of innodb_buffer_pool_size (which I believe is shared by all sessions; other sessions could be locking rows at the same time as your query). Especially so if your query doesn't use indexes and triggers a table scan.
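A hedged way to check this (using the table and column names from the question, with backticks since 'table' is a reserved word) is to EXPLAIN the equivalent SELECT; wrapping the column in date_format() prevents MySQL from using an index on it, so a full scan is likely:

EXPLAIN SELECT * FROM `table`
WHERE date_format(`date`, '%Y-%m') > '2015-01';
-- type: ALL here means a full table scan, and a locking DELETE
-- with this WHERE clause would lock every row it scans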
Is there a quick high level solution to this problem?
The MySQL manual describes a simple workaround for exactly this situation:
If you are deleting many rows from a large table, you may exceed the lock table size for an InnoDB table. To avoid this problem, or simply to minimize the time that the table remains locked, the following strategy (which does not use DELETE at all) might be helpful:
Select the rows not to be deleted into an empty table that has the same structure as the original table:
INSERT INTO t_copy SELECT * FROM t WHERE ... ;
Use RENAME TABLE to atomically move the original table out of the way and rename the copy to the original name:
RENAME TABLE t TO t_old, t_copy TO t;
Drop the original table:
DROP TABLE t_old;
No other sessions can access the tables involved while RENAME TABLE executes, so the rename operation is not subject to concurrency problems. See Section 13.1.20, “RENAME TABLE Syntax”.
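Applied to the query from the question, a hedged sketch (names assumed; note the inverted condition, since t_copy must receive the rows to keep, and rows where `date` is NULL would need separate handling):

CREATE TABLE t_copy LIKE `table`;
INSERT INTO t_copy
    SELECT * FROM `table`
    WHERE date_format(`date`, '%Y-%m') <= '2015-01';   -- keep only rows NOT being deleted
RENAME TABLE `table` TO t_old, t_copy TO `table`;
DROP TABLE t_old;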
Have INDEX(date), where date is of type DATETIME, DATE, or TIMESTAMP.
Perform the query this way:
DELETE FROM table
WHERE date > '2015-01-31'
ORDER BY date DESC
LIMIT 1000
Stop when the DELETE has rows_affected == 0.
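A hedged sketch of that index (table name taken from the question's placeholder):

ALTER TABLE `table` ADD INDEX idx_date (`date`);
-- with this index, the range WHERE date > '2015-01-31' can be walked directly,
-- so each chunked DELETE touches and locks only the rows it removes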

MySQL: while a query like "select * from user limit 0,1000" runs, is a delete operation allowed?

I have a MySQL lock question:
If I run this SQL: select * from user order by id asc limit 0,1000
and another thread simultaneously deletes one of those first 1000 rows in the user table, is that allowed?
In the MySQL Documentation for InnoDB, it states InnoDB does locking on the row level and runs queries as nonlocking consistent reads by default.
More directly, however is Internal Locking Methods, which says MySQL uses table-level locking for MyISAM, MEMORY, and MERGE tables, allowing only one session to update those tables at a time. Also, this:
MySQL grants table write locks as follows:
1. If there are no locks on the table, put a write lock on it.
2. Otherwise, put the lock request in the write lock queue.
MySQL grants table read locks as follows:
1. If there are no write locks on the table, put a read lock on it.
2. Otherwise, put the lock request in the read lock queue.
Okay, let's digest that: in InnoDB, each row has its own lock, which means your query would loop through the table until it hit a row that has a lock. However, in MyISAM, there is only one lock for the entire table, which is set before the query is executed.
In other words, for InnoDB, if the DELETE operation removed the row before the SELECT operation read the row, then the row would not show up in the results. However, if the SELECT operation read the row first, then it would be returned in the result set, but any future SELECT operations would not show the row. If you want to intentionally lock the entire result set in InnoDB, look into SELECT ... FOR UPDATE.
In MyISAM, the table is locked by default, so it depends which query began execution first: if the DELETE operation started first, then the row would not be returned with the SELECT. But if the SELECT operation began execution first, then the row would indeed be returned.
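A minimal two-session sketch of the InnoDB case (the user table from the question; the id value is assumed; default REPEATABLE READ isolation):

-- Session 1:
START TRANSACTION;
SELECT * FROM user ORDER BY id ASC LIMIT 0,1000;   -- nonlocking consistent read

-- Session 2, concurrently (autocommit):
DELETE FROM user WHERE id = 5;   -- not blocked by session 1's plain SELECT

-- Session 1:
SELECT * FROM user ORDER BY id ASC LIMIT 0,1000;
-- still sees the row with id = 5, because the transaction reads from
-- the snapshot taken at its first read; a new transaction would not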
There is more about how these operations interleave here: http://dev.mysql.com/doc/refman/5.0/en/select.html
And also here: Any way to select without causing locking in MySQL?

Select statement blocks the read/write operation on the InnoDB table

I have a select query which executes against a transactional table having more than 4 million records. Whenever I execute this query, I observe that all write and update operations on that particular table become suspended, and we start getting exceptions from the Java side: 'Lock wait timeout exceeded; try restarting transaction'. Currently the lock wait timeout is set to 200 seconds. I am unable to understand why a select statement can create such locks on the table and block all insert/update statements. The table's storage engine is InnoDB and the primary key is an auto-increment key. The MySQL version is 5.1.40.
Also, I am not starting any transaction before executing the select statement.
Any idea?
So, yes, your SELECT in one transaction read-locks the records of that table, and write operations which touch the same records have to wait until the read transaction completes (if it follows two-phase locking).
This document may help in understanding the InnoDB lock model.
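A hedged way to see who is blocking whom while the SELECT runs (both commands are available in MySQL 5.1):

SHOW FULL PROCESSLIST;          -- which statements are currently running or waiting
SHOW ENGINE INNODB STATUS\G     -- TRANSACTIONS section shows lock waits and lock holders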