MySQL: Deadlock found when trying to get lock; do I need to remove a key? - mysql

I count page view statistics in MySQL and sometimes get a deadlock.
How can I resolve this problem? Maybe I need to remove one of the keys?
But what would that do to read performance? Or would it not be affected?
Table:
CREATE TABLE `pt_stat` (
`stat_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`post_id` int(11) unsigned NOT NULL,
`stat_name` varchar(50) NOT NULL,
`stat_value` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`stat_id`),
KEY `post_id` (`post_id`),
KEY `stat_name` (`stat_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8
Error: "Deadlock found when trying to get lock; try restarting transaction".
UPDATE pt_stat SET stat_value = stat_value + 1 WHERE post_id = "21500" AND stat_name = 'day_20170111';

When dealing with deadlocks, the first thing to do, always, is to see whether you have complex transactions deadlocking against each other. This is the normal case. Based on your question, I assume the UPDATE statement is in its own transaction, so there are no complex interdependencies among writes from a logical database perspective.
Certain multi-threaded databases (including MySQL) can have single statements deadlock against themselves due to write dependencies between threads working on the same query. MySQL is not alone here, by the way; MS SQL Server has been known to have similar problems with some workloads. The problem (as you seem to grasp) is that a thread updating one index can deadlock against another thread updating another index (and remember, InnoDB tables are indexes whose leaf nodes contain the row data).
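If you want to confirm exactly which locks are colliding, InnoDB keeps a record of the most recent deadlock it resolved:
SHOW ENGINE INNODB STATUS;
The LATEST DETECTED DEADLOCK section of the output shows both waiting statements and the index records they were locking.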
In these cases there are three things you can look at doing:
If the problem is not severe, then the best option is generally to retry the transaction in case of deadlock (see the sketch after this list).
You could reduce the number of background threads but this will affect both read and write performance, or
You could try removing an index (key). However, keep in mind that unindexed scans on MySQL are slow.
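For the first option, the retry can live in application code or in a stored procedure. Here is a minimal sketch of the stored-procedure variant; the procedure name increment_stat and the three-attempt limit are my own assumptions, not anything from your setup. It traps SQLSTATE '40001' (error 1213, the deadlock error) and re-runs the UPDATE:
DELIMITER //
CREATE PROCEDURE increment_stat(IN p_post_id INT UNSIGNED, IN p_stat_name VARCHAR(50))
BEGIN
  DECLARE attempts INT DEFAULT 0;
  DECLARE deadlocked BOOL DEFAULT FALSE;
  -- CONTINUE handler: on deadlock, note it and fall through to the retry check
  DECLARE CONTINUE HANDLER FOR SQLSTATE '40001' SET deadlocked = TRUE;
  retry_loop: LOOP
    SET deadlocked = FALSE;
    UPDATE pt_stat SET stat_value = stat_value + 1
    WHERE post_id = p_post_id AND stat_name = p_stat_name;
    IF NOT deadlocked OR attempts >= 3 THEN
      LEAVE retry_loop;  -- success, or give up after three attempts
    END IF;
    SET attempts = attempts + 1;
  END LOOP;
END //
DELIMITER ;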

Related

Why do I get a deadlock error when unique index is present

This is a follow-up to a previous question of mine...
I have a table with two fields: pos (point of sale) and voucher_number. I need to have a unique sequence per pos.
CREATE TABLE `test_table` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`date_event` DATETIME NOT NULL,
`pos` TINYINT NOT NULL,
`voucher_number` INT NOT NULL,
`origin` CHAR(1) NULL,
PRIMARY KEY (`id`))
ENGINE = InnoDB;
I perform the tests described in my answer and everything works fine. Basically there are two or more scripts trying to do the following at the same time:
//Notice the SELECT ... FOR UPDATE
$max_voucher_number = select max(voucher_number) as max_voucher_number from vouchers where pos = $pos for update;
$max_voucher_number = $max_voucher_number + 1;
insert into vouchers (pos, voucher_number) values ($pos, $max_voucher_number);
but the first script sets a sleep(10) before the insert in order to test locking of the sequence.
The problem arises when I add a UNIQUE INDEX:
ALTER TABLE `test_table`
ADD UNIQUE INDEX `per_pos_sequence` (`pos` ASC, `voucher_number` ASC);
Then I get this error:
SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when
trying to get lock; try restarting transaction
Why do I get that error if an index is present?
Is it possible to maintain the index and get no errors?
I'd guess you are running into the behavior described in this bug report: https://bugs.mysql.com/bug.php?id=98324
It matches an increase in deadlocks we have observed at my company since upgrading to MySQL 5.7.26 or later. Many different applications and tables saw an increased frequency of deadlocks, and the only thing they have in common is that they use tables with a PRIMARY KEY and also a secondary UNIQUE KEY.
The response to the bug report says that deadlocks are not a bug, but a natural consequence of concurrent clients requesting locks. If the locks cannot be granted atomically, then there is a chance of deadlocks. Application clients should not treat this as a bug or an error; just follow the instructions in the deadlock message and retry the failed transaction.
The only ways I know of to avoid deadlocks are:
Avoid defining multiple unique keys on a given table.
Disallow concurrent clients requesting locks against the same table.
Use pessimistic table-locks to ensure clients access the table serially.
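As a rough illustration of the last option, the voucher allocation could be serialized with an explicit table lock. This is a minimal sketch against the test_table definition above (pos = 1 is just an example value), and it trades away all concurrency on the table:
-- only one client can hold the WRITE lock, so allocations run serially
LOCK TABLES test_table WRITE, test_table AS src READ;
INSERT INTO test_table (date_event, pos, voucher_number)
SELECT NOW(), 1, COALESCE(MAX(voucher_number), 0) + 1
FROM test_table AS src
WHERE pos = 1;
UNLOCK TABLES;
The second lock entry is needed because, under LOCK TABLES, every alias through which a table is read or written must be locked explicitly.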

MySQL performance - Selecting and deleting from a large table

I have a large table called "queue". It has 12 million records right now.
CREATE TABLE `queue` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`userid` varchar(64) DEFAULT NULL,
`action` varchar(32) DEFAULT NULL,
`target` varchar(64) DEFAULT NULL,
`name` varchar(64) DEFAULT NULL,
`state` int(11) DEFAULT '0',
`timestamp` int(11) DEFAULT '0',
`errors` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `idx_unique` (`userid`,`action`,`target`),
KEY `idx_userid` (`userid`),
KEY `idx_state` (`state`)
) ENGINE=InnoDB;
Multiple PHP workers (150) use this table simultaneously.
They select a record, perform a network request using the selected data and then delete the record.
I get mixed execution times from the select and delete queries. Is the delete command locking the table?
What would be the best approach for this scenario?
SELECT record + NETWORK request + DELETE the record
SELECT record + NETWORK request + MARK record as completed + DELETE completed records using a cron from time to time (I don't want an even bigger table).
Note: The queue gets new records every minute but the INSERT query is not the issue here.
Any help is appreciated.
"Don't queue it, just do it". That is, if the tasks are rather fast, it is better to simply perform the action and not queue it. Databases don't make good queuing mechanisms.
DELETE does not lock an InnoDB table. However, you can write a DELETE that behaves as if it did. Let's see your actual SQL so we can work on improving it.
12M records? That's a huge backlog; what's up?
Shrink the datatypes so that the table is not gigabytes:
action is only a small set of possible values? Normalize it down to a 1-byte ENUM or TINYINT UNSIGNED.
Ditto for state -- surely it does not need a 4-byte code?
There is no need for INDEX(userid) since there is already an index (UNIQUE) starting with userid.
If state has only a few values, the index won't be used. Let's see your enqueue and dequeue queries so we can discuss how to either get rid of that index or make it 'composite' (and useful).
What's the current value of MAX(id)? Is it threatening to exceed your current limit of about 4 billion for INT UNSIGNED?
How does PHP use the queue? Does it hang onto an item via an InnoDB transaction? That defeats any parallelism! Or does it change state? Show us the code; perhaps the lock and unlock can be made less invasive. It should be possible to run a single autocommitted UPDATE to grab a row and its id (see the sketch after these notes), then later do an autocommitted DELETE with very little impact.
I do not see a good index for grabbing a pending item. Again, let's see the code.
150 seems like a lot -- have you experimented with fewer? They may be stumbling over each other.
Is the Slowlog turned on (with a low value for long_query_time)? If so, I wonder what is the 'worst' query. In situations like this, the answer may be surprising.
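Here is a minimal sketch of the grab-a-row pattern mentioned above, assuming state = 0 means 'pending' and state = 1 means 'claimed' (your encoding may differ). It uses the LAST_INSERT_ID(expr) trick so the connection can read back the id of the row it just claimed:
-- claim one pending row; with autocommit on, this is a single short transaction
UPDATE queue
SET state = 1, id = LAST_INSERT_ID(id)
WHERE state = 0
ORDER BY id
LIMIT 1;
-- claimed = 0 means nothing was pending (ignore claimed_id in that case)
SELECT ROW_COUNT() AS claimed, LAST_INSERT_ID() AS claimed_id;
-- ... do the network request in the PHP worker ...
DELETE FROM queue WHERE id = 42;  -- 42 stands in for claimed_id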

INSERT ... ON DUPLICATE KEY UPDATE - Lock wait timeout

I am struggling with INSERT .. ON DUPLICATE KEY UPDATE for file loads into a big InnoDB table.
My values table saves the details for each entity belonging to a client. An entity can have only one value for a particular key, so when a change happens we update the existing row. The table looks something like this:
CREATE TABLE `key_values` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`client_id` int(11) NOT NULL COMMENT 'customer/tenant id',
`key_id` int(11) NOT NULL COMMENT 'reference to the keys',
`entity_id` bigint(20) NOT NULL,
`value` text,
`modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `client_id` (`client_id`,`entity_id`,`key_id`),
KEY `client_id_2` (`client_id`,`key_id`)
) ;
All write queries are of the form:
INSERT INTO `key_values`
(client_id, key_id, entity_id,value)
values
(23, 47, 147, 'myValue'), (...), (...)...
ON DUPLICATE KEY UPDATE value = values(value);
The table is around 350M records by now and is growing pretty fast.
Writes to the table can come from real-time integrations, typically inserting fewer than 10 rows at a time, or as bulk loads of 25K rows from offline sources.
For a given client, only one bulk operation can run at a time; this is to reduce row-lock contention between inserts.
The lock wait timeout is set to 50 seconds.
Currently, when the offline loads are running, we sometimes (not always) get a lock wait timeout. What changes could avoid the timeout?
A design change at the moment is not possible ( sharding/partitioning/cluster).
REPLACE is another candidate, but I don't want to give DELETE privileges in production to anything running from code.
INSERT IGNORE and then UPDATE is a good candidate, but will it give much improvement?
What other options do I have?
Thanks in advance for all suggestion and answers.
Regarding the lock wait timeout: this can be changed via the MySQL configuration setting innodb_lock_wait_timeout, which can be modified dynamically (without restarting MySQL), in addition to changing it in your my.cnf.
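For example (120 seconds is just an illustration, not a recommendation):
SET GLOBAL innodb_lock_wait_timeout = 120;   -- sessions opened after this pick up the new value
SET SESSION innodb_lock_wait_timeout = 120;  -- current session only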
Regarding the lock waits themselves, one thing to consider with MySQL is the default transaction isolation level, which is REPEATABLE READ. The side effect of this setting is that much more locking occurs than you might expect (especially if you have a SQL Server background, since its default isolation level is READ COMMITTED). If you don't need REPEATABLE READ, you can change the isolation level either per session, using the SET TRANSACTION ISOLATION LEVEL syntax, or for the whole server, using the config setting transaction-isolation. I recommend READ COMMITTED, and consider whether there are other places in your application where even 'dirtier' reads are acceptable (in which case you can use READ UNCOMMITTED).
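For instance, the session that runs the bulk upserts could lower its own isolation level before starting:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- all subsequent transactions in this session now use READ COMMITTED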

InnoDB strange performance

I'm doing some tests with a really simple InnoDB table (named Test) with the following structure:
Id int(10) unsigned NOT NULL AUTO_INCREMENT
UserId int(10) NOT NULL
Body varchar(512) COLLATE utf8_unicode_ci NOT NULL
CreatedAt datetime NOT NULL
one additional index on UserId:
KEY Idx_Test_UserId (UserId) USING BTREE
When I try to execute this query...
INSERT INTO Test (UserId,Body,CreatedAt) VALUES (1,'This is a test',NOW());
...sometimes I get the operation completed in a few milliseconds but some other times it takes around a second.
I'm the only person running tests on this specific table, and I really don't understand why the execution times differ so much.
One last note: when I run the same tests on a MyISAM table, I don't have any issues.
InnoDB works by default in AUTOCOMMIT mode, which means that every insert requires two separate write-to-disk operations. If you have only one disk drive in your machine, you might sometimes need to wait a bit for that. Also, as far as I remember, InnoDB used to have (not sure if it's still the case) some performance problems with writing to disk on Windows, but I think that involved concurrency higher than 1.
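One quick way to test whether the per-commit flush is the culprit is to batch a few inserts into one explicit transaction, so the commit cost is paid once; a minimal sketch using the table from the question:
START TRANSACTION;
INSERT INTO Test (UserId, Body, CreatedAt) VALUES (1, 'This is a test', NOW());
INSERT INTO Test (UserId, Body, CreatedAt) VALUES (1, 'Another test row', NOW());
COMMIT;  -- the log is flushed to disk here, once for the whole batch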

How to optimize a 'col = col + 1' UPDATE query that runs on 100,000+ records?

See this previous question for some background. I'm trying to renumber a corrupted MPTT tree using SQL. The script works fine logically; it is just much too slow.
I repeatedly need to execute these two queries:
UPDATE `tree`
SET `rght` = `rght` + 2
WHERE `rght` > currentLeft;
UPDATE `tree`
SET `lft` = `lft` + 2
WHERE `lft` > currentLeft;
The table is defined as such:
CREATE TABLE `tree` (
`id` char(36) NOT NULL DEFAULT '',
`parent_id` char(36) DEFAULT NULL,
`lft` int(11) unsigned DEFAULT NULL,
`rght` int(11) unsigned DEFAULT NULL,
... (a couple of more columns) ...,
PRIMARY KEY (`id`),
KEY `parent_id` (`parent_id`),
KEY `lft` (`lft`),
KEY `rght` (`rght`),
... (a few more indexes) ...
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The database is MySQL 5.1.37. There are currently ~120,000 records in the table. Each of the two UPDATE queries takes roughly 15 - 20 seconds to execute. The WHERE condition may apply to a majority of the records, so that almost all records need to be updated each time. In the worst case both queries are executed as many times as there are records in the database.
Is there a way to optimize this query by keeping the values in memory, delaying writing to disk, delaying index updates or something along these lines? The bottleneck seems to be hard disk throughput right now, as MySQL seems to be writing everything back to disk immediately.
Any suggestion appreciated.
I've never used it, but if you have enough memory, try a MEMORY table.
Create a table with the same structure as tree, insert into ... select from ..., run your scripts against the memory table, and then write the results back.
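A minimal sketch of that round trip, assuming the remaining columns are MEMORY-compatible (the engine does not support TEXT/BLOB) and that only lft and rght need to be written back; tree_mem and @currentLeft are placeholders:
-- copy the data into a MEMORY table and index the two hot columns
CREATE TABLE tree_mem ENGINE=MEMORY AS SELECT * FROM tree;
ALTER TABLE tree_mem ADD INDEX (lft) USING BTREE, ADD INDEX (rght) USING BTREE;
-- run the renumbering against memory instead of disk
UPDATE tree_mem SET rght = rght + 2 WHERE rght > @currentLeft;
UPDATE tree_mem SET lft = lft + 2 WHERE lft > @currentLeft;
-- write the results back in one pass
UPDATE tree JOIN tree_mem USING (id)
SET tree.lft = tree_mem.lft, tree.rght = tree_mem.rght;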
Expanding on some ideas from the comments, as requested:
The default is to flush the log to disk after every commit. You can wrap multiple updates in a single commit, or change this parameter:
http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
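For example, relaxing the flush (at the cost of possibly losing up to a second of committed transactions on a crash):
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- write the log at commit, fsync roughly once per second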
The isolation level is simple to change; just make sure the level fits your design. This probably won't help here because a range update is being used, but it's good to know about when looking for more concurrency:
http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html
Ultimately, given the range update in the query, your best bet is the MEMORY table that andrem pointed out. You'll probably also be able to gain some performance by using BTREE indexes instead of the MEMORY engine's default of HASH:
http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/
You're updating indexed columns - indexes negatively impact (read: slow down) INSERT/UPDATEs.
If this is a one-time need to get things correct:
Drop/delete the indexes on the columns being updated (lft, rght)
Run the update statements
Re-create the indexes (this can take time, possibly equivalent to what you already experience in total)
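A sketch of that sequence, using the index names from the CREATE TABLE above:
ALTER TABLE tree DROP INDEX lft, DROP INDEX rght;  -- updates now touch only row data
-- ... run all the renumbering UPDATE statements ...
ALTER TABLE tree ADD INDEX lft (lft), ADD INDEX rght (rght);  -- one index rebuild at the end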