So I'm working on a project that has the following schema on a MySQL 5.7 database:
CREATE TABLE `voucher` (
`id` varchar(36) NOT NULL,
`voucher_status_id` int(11) NOT NULL,
`situation_code` varchar(30) DEFAULT NULL,
`organization_id` int(11) DEFAULT NULL,
`code` varchar(15) NOT NULL,
`authorization_code` varchar(7) NOT NULL
-- ... other columns omitted for brevity
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
This table has ~2M rows and stores a pool of vouchers.
The system fetches vouchers with voucher_status_id = 1 to sell.
The sale process is basically:
The user inputs the quantity X of vouchers they want to buy.
The application queries the database with this select: SELECT v.* FROM voucher v WHERE v.voucher_status_id = ? AND v.organization_id = ? LIMIT X FOR UPDATE
Then it does the rest (updates the voucher status, etc.) and commits the transaction (sketched below).
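For clarity, here is a minimal sketch of that flow as a single transaction (the "sold" status value 2 and the placeholder ids are assumptions for illustration):

BEGIN;
SELECT v.* FROM voucher v
 WHERE v.voucher_status_id = 1 AND v.organization_id = 5
 LIMIT 5 FOR UPDATE;
-- mark the vouchers returned above as sold; status 2 is an assumed value
UPDATE voucher SET voucher_status_id = 2
 WHERE id IN ('uuid-1', 'uuid-2', 'uuid-3', 'uuid-4', 'uuid-5');
COMMIT;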
Things to notice:
- There's no SKIP LOCKED feature available, because this application is using MySQL 5.7.
- This pool of vouchers exists because the process of generating a voucher code is complex and is done by a scheduled process on another system that loads this table.
- The id column is a UUID v4.
As you may have noticed, the select statement above does lock some rows, and it causes a lot of problems for the other sales happening concurrently, frequently resulting in Lock wait timeout exceeded errors.
By reading the MySQL documentation, I found that it locks all the rows it has to scan while executing SELECT ... FOR UPDATE statements:
A SELECT ... FOR UPDATE reads the latest available data, setting
exclusive locks on each row it reads. Thus, it sets the same locks a
searched SQL UPDATE would set on the rows.
The application is getting tons of Lock wait timeout exceeded errors. How can this be avoided in this case?
Another dev proposed selecting rows randomly using ORDER BY RAND(), but it would end up with the same problem, as a test I made clearly showed us:
Transaction A:
begin;
select * from voucher v where v.voucher_status_id = 1
and organization_id = 5
order by rand()
limit 5
for update;
Transaction B:
begin;
select * from voucher v where v.voucher_status_id = 1
and organization_id = 5
order by rand()
limit 10
for update;
Transaction B stays locked until Transaction A is committed.
How would you solve such a situation?
Related
I have the following table:
CREATE TABLE `accounts` (
`name` varchar(50) NOT NULL,
`balance` int NOT NULL,
PRIMARY KEY (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
And it has two accounts in it. "Bob" has a balance of 100. "Jim" has a balance of 200.
I run this query to transfer 50 from Jim to Bob:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN;
SELECT * FROM accounts;
SELECT SLEEP(10);
SET @bobBalance = (SELECT balance FROM accounts WHERE name = 'bob' FOR UPDATE);
SET @jimBalance = (SELECT balance FROM accounts WHERE name = 'jim' FOR UPDATE);
UPDATE accounts SET balance = @bobBalance + 50 WHERE name = 'bob';
UPDATE accounts SET balance = @jimBalance - 50 WHERE name = 'jim';
COMMIT;
While that query is sleeping, I run the following query in a different session to set Jim's balance to 500:
UPDATE accounts SET balance = 500 WHERE name = 'jim';
What I thought would happen is that this would cause a bug. The transaction would set Jim's balance to 150, because the first read in the transaction (before the SLEEP) would establish a snapshot in which Jim's balance is 200, and that snapshot would be used in the later query to get Jim's balance. So we would subtract 50 from 200 even though Jim's balance has actually been changed to 500 by the other query.
But that's not what happens. Actually, the end result is correct. Bob has 150 and Jim has 450. But I don't understand why this is.
The MySQL documentation says about Repeatable Read:
This is the default isolation level for InnoDB. Consistent reads within the same transaction read the snapshot established by the first read. This means that if you issue several plain (nonlocking) SELECT statements within the same transaction, these SELECT statements are consistent also with respect to each other. See Section 15.7.2.3, “Consistent Nonlocking Reads”.
So what am I missing here? Why does it seem like the SELECT statements in the transaction are not all using a snapshot established by the first SELECT statement?
The repeatable-read behavior only works for non-locking SELECT queries. It reads from the snapshot established by the first query in the transaction.
But any locking SELECT query reads the latest committed version of the row, as if you had started your transaction in READ-COMMITTED isolation level.
A SELECT is implicitly a locking read if it's involved in any kind of SQL statement that modifies data.
For example:
INSERT INTO table2 SELECT * FROM table1 WHERE ...;
The above locks examined rows in table1, even though the statement is just copying them to table2.
SET @myvar = (SELECT ... FROM table1 WHERE ...);
This is also copying a value from table1, into a variable. It locks the examined row in table1.
Likewise SELECT statements that are invoked in a trigger, or as part of a multi-table UPDATE or DELETE, and so on. Anytime the SELECT is part of a larger statement that modifies any data (in a table or in a variable), it locks the rows examined by the SELECT.
And therefore it's a locking read, and behaves like an UPDATE with respect to which row version it reads.
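To see the difference directly, here is a minimal sketch against the accounts table from the question (the session labels and the interleaving are assumptions):

-- Session 1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN;
SELECT balance FROM accounts WHERE name = 'jim';            -- 200, establishes the snapshot

-- Session 2 (autocommit)
UPDATE accounts SET balance = 500 WHERE name = 'jim';

-- Session 1, continued
SELECT balance FROM accounts WHERE name = 'jim';            -- still 200: nonlocking snapshot read
SELECT balance FROM accounts WHERE name = 'jim' FOR UPDATE; -- 500: locking read sees the latest committed version
COMMIT;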
MySQL Version 5.7.16
Process 1:
START TRANSACTION;
SELECT * from statistic_activity WHERE activity_id = 1 FOR UPDATE;
Process 2:
START TRANSACTION;
INSERT INTO `statistic_activity` (`activity_id`) values (2678597);
If Process 1's SELECT statement returns results, Process 2 is not blocked (as you would expect).
But if Process 1 returns an empty set (no rows exist with activity_id = 1), then the whole table is locked and all INSERTs are blocked until Process 1's transaction ends.
Is this expected behavior?
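While Process 2 is waiting, the locks involved can be inspected from a third session (these information_schema views exist in MySQL 5.7; they were replaced by performance_schema.data_locks in 8.0):

-- run while Process 2 is blocked
SELECT lock_mode, lock_type, lock_table, lock_index, lock_data
 FROM information_schema.INNODB_LOCKS;
SELECT requesting_trx_id, blocking_trx_id
 FROM information_schema.INNODB_LOCK_WAITS;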
First, this is OpenCart
I have two tables:
1. oc_product (product_id, model, price, event_start, event_end, etc.)
2. oc_product_to_category (product_id, category_id)
Every product has a Start Date and an End Date. I created a MySQL event that catches every product with an expired date (event_end < NOW()) and stores it in the category "Archive" with id = 68.
Here is the code of the MySQL EVENT:
CREATE EVENT move_to_archive_category
ON SCHEDULE EVERY 1 MINUTE
STARTS NOW()
DO
INSERT INTO `oc_product_to_category` (product_id, category_id)
SELECT product_id, 68 as category_id
FROM oc_product p WHERE p.event_end < NOW() AND p.event_end <> '0000-00-00';
When the event starts it works properly! BUT when I go to administration and publish a new product with an expired date, I wait 1 minute to see the product in the "Archive" category and nothing happens.
I looked at SHOW PROCESSLIST and everything is OK:
event_scheduler localhost NULL Daemon 67 Waiting for next activation NULL
and also "SHOW EVENTS" looks good
Db: events
Name: move_to_archive_category
Definer: root@localhost
Time zone: SYSTEM
Type: RECURRING
Execute at: NULL
Interval value: 1
Interval field: MINUTE
Starts: 2016-08-15 13:37:54
Ends: NULL
Status: ENABLED
Originator: 1
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: utf8_general_ci
I'm working locally, not live
Any ideas?
Thanks in advance! :)
I suggest turning on the sonar. I have 3 event links hanging off my profile page, and I created a few helper tables (which can also be seen in those links) to assist in turning on the sonar and seeing what is up in your events. Note that you can expand on it for performance tracking, as I did in those links.
Remember that Events succeed or fail (in your mind) based on the data, and they do so silently. But by tracking what is going on, you can vastly increase your happiness level when developing with them.
Event:
DROP EVENT IF EXISTS move_to_archive_category;
DELIMITER $$
CREATE EVENT move_to_archive_category
ON SCHEDULE EVERY 1 MINUTE STARTS '2015-09-01 00:00:00'
ON COMPLETION PRESERVE
DO
BEGIN
DECLARE incarnationId int default 0;
DECLARE evtAlias varchar(20);
SET evtAlias:='move_2_archive';
INSERT incarnations(usedBy) VALUES (evtAlias);
SELECT LAST_INSERT_ID() INTO incarnationId;
INSERT EvtsLog(incarnationId,evtName,step,debugMsg,dtWhenLogged)
SELECT incarnationId,evtAlias,1,'Event Fired, begin looking',now();
INSERT INTO `oc_product_to_category` (product_id, category_id)
SELECT product_id, 68 as category_id
FROM oc_product p WHERE p.event_end < NOW() AND p.event_end <> '0000-00-00';
-- perhaps collect metrics for above insert and use that in debugMsg below
-- perhaps with a CONCAT into a msg
INSERT EvtsLog(incarnationId,evtName,step,debugMsg,dtWhenLogged)
SELECT incarnationId,evtAlias,10,'INSERT finished',now();
-- pretend there is more stuff
-- ...
-- ...
INSERT EvtsLog(incarnationId,evtName,step,debugMsg,dtWhenLogged)
SELECT incarnationId,evtAlias,99,'Event Finished',now();
END $$
DELIMITER ;
Tables:
create table oc_product_to_category
( product_id INT not null,
category_id INT not null
);
create table oc_product
( product_id INT not null,
event_end datetime not null
);
drop table if exists incarnations;
create table incarnations
( -- NoteA
-- a control table used to feed incarnation id's to events that want performance reporting.
-- The long and short of it: insert a row here merely to acquire an auto_increment id
id int auto_increment primary key,
usedBy varchar(50) not null
-- could use other columns perhaps, like how used or a datetime
-- but mainly it feeds back an auto_increment
-- the usedBy column is like a dummy column just to be fed a last_insert_id()
-- but the insert has to insert something, so we use usedBy
);
drop table if exists EvtsLog;
create table EvtsLog
( id int auto_increment primary key,
incarnationId int not null, -- See NoteA (above)
evtName varchar(20) not null, -- allows for use of this table by multiple events
step int not null, -- facilitates reporting on event level performance
debugMsg varchar(1000) not null,
dtWhenLogged datetime not null
-- tweak this with whatever indexes you can bear to have
-- run maintenance on this table to rid it of unwanted rows periodically,
-- as it impacts performance. So, move the rows out to an archive table or whatever.
);
Turn on Events:
show variables where variable_name='event_scheduler'; -- OFF currently
SET GLOBAL event_scheduler = ON; -- turn her on
SHOW EVENTS in so_gibberish; -- confirm
Confirm Evt is firing:
SELECT * FROM EvtsLog WHERE step=1 ORDER BY id DESC; -- verify with our sonar
For more details on those helper tables, visit the links for Events off my profile page. It is pretty much just the one link, for Performance Tracking and Reporting.
You will also note that, for the moment, it is of no concern whether there is any data in the actual tables you were originally focusing on. That can come later, and can be reported on in the event log table by doing a custom string CONCAT into a string variable (for the counts etc.) and reporting it in a step #, like step 10 or 20, as sketched below.
The point is, without something like this you are completely blind as to what is going on.
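For example, here is a hedged sketch of that step-10 metrics idea, using MySQL's ROW_COUNT() right after the INSERT inside the event body (the message format is just an assumption):

-- immediately after the INSERT ... SELECT in the event body
SET @rowsAdded := ROW_COUNT(); -- rows written by the statement just above
INSERT EvtsLog(incarnationId,evtName,step,debugMsg,dtWhenLogged)
SELECT incarnationId,evtAlias,10,CONCAT('INSERT finished, rows added: ',@rowsAdded),now();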
So, I saw the following errors in the MySQL log:
160816 10:18:00 [ERROR] Event Scheduler: [root@localhost][events.move_to_archive_category] Duplicate entry '29-68' for key 'PRIMARY'
160816 10:18:00 [Note] Event Scheduler: [root@localhost].[events.move_to_archive_category] event execution failed.
and I just added IGNORE to the SQL INSERT... so the final result is:
INSERT IGNORE INTO `oc_product_to_category` (product_id, category_id)
Yesterday I found a problem on my project. For some reason, on a usual insert, MySQL increased the auto-increment value from 8 digits to 10. In the binlogs I found this:
SET INSERT_ID=2147483646/*!*/;
# at 2638426
#140514 18:49:36 server id 31245 end_log_pos 2638810 Query thread_id=178500933 exec_time=0 error_code=0
SET TIMESTAMP=1400093376/*!*/;
INSERT INTO deals SET NAME = '###', PRICE = 125
But it should be around 26513863.
ID field is: `ID` int(10) NOT NULL AUTO_INCREMENT
Table type: InnoDB
Mysql version: 5.5.31
Does anyone know how this can happen, or have any ideas?
A failed insert can still cause the auto-increment column to increase. If your program went into an infinite loop of failures it could cause the limit to be reached.
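For instance, a duplicate-key failure still consumes an id; a minimal sketch with a hypothetical table:

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, u INT, UNIQUE KEY (u)) ENGINE=InnoDB;
INSERT INTO t (u) VALUES (1); -- succeeds, id = 1
INSERT INTO t (u) VALUES (1); -- fails with a duplicate-key error, but id 2 is still consumed
INSERT INTO t (u) VALUES (2); -- succeeds with id = 3, leaving a gap at 2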
It's also possible to set the auto-increment programmatically to a specific value.
ALTER TABLE yourtable AUTO_INCREMENT = 12345;
from "Mark Byers" in this question stackoverflow
I'm having a strange problem with a SELECT on an InnoDB table: it never returns. I waited more than two hours to see if I would get the results, but no, still waiting.
CREATE TABLE `example` (
`id` int(11) NOT NULL,
`done` tinyint(2) NOT NULL DEFAULT '0',
`agent` tinyint(4) NOT NULL DEFAULT '0',
`text` varchar(256) NOT NULL,
PRIMARY KEY (`id`),
KEY `da_idx` (`done`,`agent`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
The query whose results I can't obtain is:
SELECT id, text FROM example WHERE done = 0 AND agent = 0 LIMIT 120;
First I thought of some index optimization or a lock problem, and I spent some time researching that, but then I found this:
SELECT id FROM example WHERE done = 0 AND agent = 0 LIMIT 120;
...
...
...
120 rows in set (0.27 sec)
SELECT text FROM example WHERE done = 0 AND agent = 0 LIMIT 120;
...
...
...
120 rows in set (0.83 sec)
Now I'm lost: how can obtaining the id or text column separately, with the exact same query (same WHERE and LIMIT), work perfectly, while obtaining both of them does not?
Executing the "SELECT id, text..." again after those two queries has the same effect: it never returns.
Any help is appreciated; an InnoDB guru could help ;)
Added information:
It doesn't look like a transaction lock problem; look at the exponential increase in the response times of the following queries:
SELECT id, text FROM example WHERE done = 0 AND agent = 0 limit 109;
...
109 rows in set (0.31 sec)
SELECT id, text FROM example WHERE done = 0 AND agent = 0 limit 110;
...
110 rows in set (3.98 sec)
SELECT id, text FROM example WHERE done = 0 AND agent = 0 limit 111;
...
111 rows in set (4 min 5.00 sec)
I found the solution to my own question and I want to share it here, because it was quite strange, at least for me.
I always thought that the time the mysql client reports after each query (for example "120 rows in set (0.27 sec)") was the time it takes the mysql server to generate that result, but no.
The problem was a network problem! It had no relation at all to the mysql server!
So I found, contrary to what I always thought, that the time shown by the mysql client after each query includes the network delay. The same requests issued at the same time from a server in the same datacenter as the mysql server and from another server in another country return considerably different times.
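One way to separate server execution time from network delay is the mysql profiler (SET profiling is deprecated but still works in MySQL 5.x); a minimal sketch:

SET profiling = 1;
SELECT id, text FROM example WHERE done = 0 AND agent = 0 LIMIT 120;
SHOW PROFILES; -- the Duration column is measured on the server, so a large client-side gap points at the network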