MySQL: UPDATE Concurrency

I have a table with two columns: item_id (int, auto increment) and item_counter (int, default value 0).
A user on a web page is allotted a few items by submitting a form. This runs:
SELECT * FROM itemtable WHERE item_counter<100 ORDER BY RAND() LIMIT 5
This is followed by an UPDATE query increasing the item_counter of each of those items.
UPDATE itemtable SET item_counter=item_counter + 1 WHERE item_id=:item_id
About 20-50 users will be doing this operation at once.
A simple application. The 1 SELECT and 5 UPDATE operations run sequentially for each user, and I want item_counter to stay accurate (avoiding this scenario: an item_id is selected as having item_counter 99 but gets updated by some other user before this user is able to update it).
Should I use concurrency/locking in this?
I don't know whether InnoDB's row-level locking applies to all UPDATE operations automatically or whether any syntax changes are required. I'm wondering what I should use here, or whether I should use anything at all.

BEGIN;
SELECT ... FOR UPDATE; -- "FOR UPDATE" is the secret sauce.
UPDATE ...
COMMIT;
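A minimal sketch of how that pattern could look for the table above (keeping the random pick from the question; the bound parameter :item_id is as in the original code):

BEGIN;
-- lock the candidate rows so another session cannot bump them past 100 in between
SELECT item_id, item_counter
FROM itemtable
WHERE item_counter < 100
ORDER BY RAND()
LIMIT 5
FOR UPDATE;
-- for each item_id returned above, run from the application:
UPDATE itemtable SET item_counter = item_counter + 1 WHERE item_id = :item_id;
COMMIT;

Note that with ORDER BY RAND(), InnoDB may lock every row it examines while sorting, not just the five rows returned, so the lock footprint can be larger than expected.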

Related

MySQL concurrency and auto_incrementing key

I have a MySQL table of Users, and a table of Actions performed by the Users (linked to the User by the primary key, userid). The Actions table has an incrementing key indx. Whenever I add a new row to that table, I then update the latest column of the relevant Users row with the indx of the row I just added to the Actions table. So something like:
INSERT INTO actions(indx,actionname,userid) VALUES(default, "myaction", 1);
UPDATE users SET latest=LAST_INSERT_ID() WHERE userid=1;
The idea being that I can check for updates for a User by seeing if latest is higher than the last time I checked.
My issue is that if more than one connection is opened on the database and they both try to add an Action for the same User at the same time, connection2 could conceivably run its INSERT and UPDATE between the INSERT and UPDATE of connection1, and the latest entry of the User they're both trying to update would no longer hold the indx of the most recent Actions entry.
I've been reading up on transactions, isolation levels, etc., but haven't really found a way around this (though my understanding of how these work exactly is pretty shaky, so maybe I just misunderstood). I think I need a way to lock the Actions table until the Users table is updated. This application only gets used by a few hundred users tops, so I don't think the performance hit from momentarily locking the table will be too bad.
So is that something that can be done in MySQL? Is there a better solution? I imagine this general pattern must be pretty common: having one table with a bunch of varieties of rows, and a second table with a row that tracks metadata for each variety in table A and needs to be updated atomically each time the first table is changed. So I'm hoping there's a solution that isn't too complex.
Use SELECT ... FOR UPDATE to lock the row, in order to serialize access and prevent race conditions:
START TRANSACTION;
SELECT any_column FROM users WHERE userid=1 FOR UPDATE;
INSERT INTO actions(indx,actionname,userid) VALUES(default, "myaction", 1);
UPDATE users SET latest=LAST_INSERT_ID() WHERE userid=1;
COMMIT;
However, this will slow down your INSERT rate, because these transactions from all sessions will be serialized.
The better option is not to store the last ID in the users table at all. Just use SELECT MAX(indx) FROM actions WHERE userid = xxxx in all places where this number is required. With an index on actions(userid) this query will be very fast (assuming that indx is the primary key of that table), and the inserts will not be slowed down.
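A minimal sketch of that approach, assuming indx is the AUTO_INCREMENT primary key of actions (the index name is illustrative):

ALTER TABLE actions ADD INDEX idx_actions_userid (userid);
-- "latest" for a given user is simply their newest action:
SELECT MAX(indx) AS latest FROM actions WHERE userid = 1;

With InnoDB, a secondary index on (userid) implicitly includes the primary key, so this MAX() can typically be resolved from the index alone.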

MySQL performance when updating row with FK

I have two tables
spies
---------
id        | PK
weapon_id | FK
name      |

weapons
---------
id   | PK
name |
I'm trying to clarify whether there is a difference between these two SQL updates (when using MySQL InnoDB).
Query 1:
UPDATE spies SET name = 'Bond', weapon_id = 1 WHERE id = 1
OR
Query 2:
UPDATE spies SET name = 'Bond' WHERE id = 1
I have heard that updating a row with an FK creates a read-only lock (not sure if that's the correct term) on the parent.
Would using Query 2 avoid the lock on the parent table?
Consider the following schema:
(Commented-out statements left in for your convenience):
-- drop table if exists weapons;
-- weapons must be created first so the foreign key in spies has something to reference
create table weapons
( id int primary key,
name varchar(100) not null
)engine=InnoDB;
-- drop table if exists spies;
create table spies
( id int primary key,
weapon_id int not null,
name varchar(100) not null,
key(weapon_id),
foreign key (weapon_id) references weapons(id)
)engine=InnoDB;
insert weapons(id,name) values (1,'slingshot'),(2,'Ruger');
insert spies(id,weapon_id,name) values (1,2,'Sally');
-- truncate table spies;
Now, we have 2 processes, P1 and P2. It is best to test this where P1 is perhaps MySQL Workbench and P2 is a MySQL command-line window. In other words, you have to set this up as separate connections. You will need a meticulous eye for running these step by step in the proper fashion (described in the Narrative below) and for watching the impact on the other process window.
Consider the following queries, keeping in mind that a MySQL query not wrapped in an explicit transaction is itself an implicit transaction. Below, though, I went with explicit transactions:
Q1:
START TRANSACTION;
-- place1
UPDATE spies SET name = 'Bond', weapon_id = 1 WHERE id = 1;
-- place2
COMMIT;
Q2:
START TRANSACTION;
-- place1
UPDATE spies SET name = 'Bond' WHERE id = 1;
-- place2
COMMIT;
Q3:
START TRANSACTION;
-- place1
SELECT id into @mine_to_use from weapons where id=1 FOR UPDATE; -- place2
-- place3
COMMIT;
Q4:
START TRANSACTION;
-- place1
SELECT id into @mine_to_use from spies where id=1 FOR UPDATE; -- place2
-- place3
COMMIT;
Q5 (hodge podge of queries):
SELECT * from weapons;
SELECT * from spies;
Narrative
Q1: When P1 begins Q1 and gets to place2, it has obtained row-level locks in both tables, weapons and spies, for the id=1 row (2 rows total, 1 row in each table): an exclusive lock on the spies row being updated and, per Note1 below, a shared lock on the referenced weapons row. This can be proved by P2 starting to run Q3, getting to place1, but blocking on place2, and only being freed when P1 gets around to calling COMMIT. Everything I just said about P2 running Q3 holds equally for P2 running Q4. In summary, on the P2 screen, place2 freezes until the P1 COMMIT.
A note again about implicit transactions: your real Q1 query is going to run very fast and will perform an implicit commit on the way out. The prior paragraph, however, breaks down what happens were you to have more time-costly routines running.
Q2: When P1 begins Q2 and gets to place2, it has obtained an exclusive row-level lock only on the spies row with id=1 (1 row total, nothing in weapons). As a result, P2 has no issue running Q3 against weapons, but P2 does block when running Q4 at place2 against spies.
So, the difference between Q1 and Q2 comes down to MySQL knowing whether the FK column is involved in the UPDATE: when weapon_id is not among the updated columns, no constraint check (and no lock on the parent row) is needed. The manual states this in Note1 below.
When P1 runs Q1, P2 has no problems running the read-only, non-lock-acquiring Q5 types of queries. The only question is which rendition of the data P2 sees, based on the ISOLATION LEVEL in place.
Note1: From the MySQL Manual Page entitled Locks Set by Different SQL Statements in InnoDB:
If a FOREIGN KEY constraint is defined on a table, any insert, update,
or delete that requires the constraint condition to be checked sets
shared record-level locks on the records that it looks at to check the
constraint. InnoDB also sets these locks in the case where the
constraint fails.
The above is why, with Q2, P2 is free to perform an UPDATE on weapons, or to acquire a momentary exclusive lock on it: the engine is not updating weapon_id in P1's statement and thus does not hold a row-level lock in that table.
To pull this back to 50,000 feet, one's biggest concern is how long a lock is held, whether in an implicit transaction (one with no START/COMMIT) or in an explicit transaction before its COMMIT. In theory, a peer process can be kept from acquiring the lock it needs for an UPDATE indefinitely, but each attempt at acquiring that lock is governed by the setting of innodb_lock_wait_timeout, which means that, by default, it times out after about 50 seconds. For a view of your setting, run:
select @@innodb_lock_wait_timeout;
For me, at the moment, it is 50 (seconds).
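If you want lock waits to fail faster while experimenting, the variable can also be set per session (a minimal sketch; 5 seconds is just an illustrative value):

SET SESSION innodb_lock_wait_timeout = 5;  -- this session now gives up waiting for a row lock after 5 seconds
SELECT @@innodb_lock_wait_timeout;         -- confirm the new value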
Why not run EXPLAIN for this query and check it for yourself?
So, let's run!
EXPLAIN UPDATE spies SET name = 'Bond', weapon_id = 1 WHERE id = 1\G
Check the rows column in the output to see how many rows this query scans.
Do the same for the one below as well.
EXPLAIN UPDATE spies SET name = 'Bond' WHERE id = 1\G
Now, coming to your question: InnoDB will lock the affected row for every update you make in a table. But remember, this is row-level locking.
So, to answer your question, updating a row with or without a foreign key will not make a difference if it's the same row and the same table.
It will make a difference if it's a different row or a different table.

How safe is SELECT FOR UPDATE in MySQL

Here is my Query:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
DROP TEMPORARY TABLE IF EXISTS taken;
CREATE TEMPORARY Table taken(
id int,
invoice_id int
);
INSERT INTO taken(id, invoice_id)
SELECT id, $invoice_id FROM `licenses` l
WHERE l.`status` = 0 AND `type` = $type
LIMIT $serial_count
FOR UPDATE;
UPDATE `licenses` SET `status` = 1
WHERE id IN (SELECT id FROM taken);
If I'm going to face high concurrency, is the query above thread-safe? I mean, I don't want to assign records which have already been assigned to someone else.
With your FOR UPDATE clause, you are locking all selected licenses until your transaction ends, so you can be sure that there will not be a concurrency problem on those records.
The only problem I can see is that if your query takes a lot of time to perform (how many licenses do you expect to process in each query?) and other queries need those same licenses at the same time (even reads that take locks are blocked), your system will be slowed down.
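Since the row locks are held until the transaction ends, the main mitigation is to keep that window short: do nothing else between the locking SELECT and the COMMIT. A minimal sketch of the same flow with an explicit COMMIT added (it reuses the question's placeholders and assumes the temporary table taken already exists):

START TRANSACTION;
-- lock and remember the licenses being handed out
INSERT INTO taken (id, invoice_id)
  SELECT id, $invoice_id FROM licenses
  WHERE status = 0 AND type = $type
  LIMIT $serial_count
  FOR UPDATE;
-- mark them as assigned, then release the locks immediately
UPDATE licenses SET status = 1 WHERE id IN (SELECT id FROM taken);
COMMIT;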

Performance of mysql counting rows in a big table

This fairly obvious question has very few (I couldn't find any) solid answers.
I do a simple select from a table of 2 million rows.
select count(id) as total from big_table
On any machine I try this query on, it usually takes at least 5 seconds to complete. This is unacceptable for realtime queries.
The reason I need an exact count of rows is for precise statistical calculations later on.
Using the last auto increment value is unfortunately not an option, because rows also get deleted periodically.
It can indeed be slow when running on an InnoDB engine. As stated in section 14.24 of the MySQL 5.7 Reference Manual, “InnoDB Restrictions and Limitations”, 3rd bullet point:
InnoDB does not keep an internal count of rows in a table because concurrent transactions might “see” different numbers of rows at the same time. Consequently, SELECT COUNT(*) statements only count rows visible to the current transaction.
For information about how InnoDB processes SELECT COUNT(*) statements, refer to the COUNT() description in Section 12.20.1, “Aggregate Function Descriptions”.
The suggested solution is a counter table. This is a separate table with one row and one column, holding the current record count. It could be kept updated via triggers. Something like this:
create table big_table_count (rec_count int default 0);
-- one-shot initialisation:
insert into big_table_count select count(*) from big_table;
create trigger big_insert after insert on big_table
for each row
update big_table_count set rec_count = rec_count + 1;
create trigger big_delete after delete on big_table
for each row
update big_table_count set rec_count = rec_count - 1;
You can try this in a fiddle: alter the insert/delete statements in the build section and see the effect on:
select rec_count from big_table_count;
You could extend this to several tables, either by creating such a counter table for each, or by reserving a row per table in the above counter table, keyed by a column "table_name".
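A minimal sketch of that multi-table variant (the table and column names are illustrative):

create table table_counts (
    table_name varchar(64) primary key,
    rec_count  int not null default 0
);
-- one-shot initialisation for big_table:
insert into table_counts values ('big_table', (select count(*) from big_table));
-- the trigger bodies then become, for example:
--   update table_counts set rec_count = rec_count + 1 where table_name = 'big_table';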
Improving concurrency
The above method does have an impact if you have many concurrent sessions inserting or deleting records, because they need to wait for each other to complete the update of the counter.
A solution is to not let the triggers update the same, single record, but to let them insert a new record, like this:
create trigger big_insert after insert on big_table
for each row
insert into big_table_count (rec_count) values (1);
create trigger big_delete after delete on big_table
for each row
insert into big_table_count (rec_count) values (-1);
The way to get the count then becomes:
select sum(rec_count) from big_table_count;
Then, once in a while (e.g. daily) you should re-initialise the counter table to keep it small:
truncate table big_table_count;
insert into big_table_count select count(*) from big_table;

MySQL - update a certain column right after select

I ran into a problem and can't choose the right solution.
I have a SELECT query that selects records from table.
These records have a status column, as seen below.
SELECT id, <...>, status FROM table WHERE something
Now, right after this SELECT I have to UPDATE the status column.
How can I do it to avoid a race condition?
What I want to achieve is that once somebody (a session) has selected something, it cannot be selected by anybody else until I release it manually (for example, using a status column).
Thoughts?
There is some MySQL documentation that may be of interest for your task. I'm not sure it fits your needs exactly, but it describes the right way to do a SELECT followed by an UPDATE.
The technique described does not prevent other sessions from reading, but it does prevent writing to the selected record until the end of the transaction.
It contains an example similar to your problem:
SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;
It is required that your tables use the InnoDB engine and that your programs use transactions.
If you need locking only for a short time, i.e. one session selects a row with a lock, updates it, and releases the lock within that same session, then you do not need a status field at all. Just use SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE: if all sessions use these in conjunction with transactions (SELECT ... FOR UPDATE then UPDATE to modify, and SELECT ... LOCK IN SHARE MODE to just read), this will satisfy your requirements.
If you need to lock for a long time, selecting and locking in one session and then updating and releasing in another, then you are right that you need some storage to keep the lock status, and all sessions should use it as follows. Updating scenario: SELECT ... FOR UPDATE and set the status and status owner in one session; then, in the other session, SELECT ... FOR UPDATE, check the status and owner, update, and clear the status. Read scenario: SELECT ... LOCK IN SHARE MODE and check the status.
You can do it with some preparations. Add a column sessionId to your table. It has to be NULL-able and it will contain the unique ID of the session that acquires the row. Also add an index on this new column; we'll use the column to search for rows in the table.
ALTER TABLE `tbl`
  ADD COLUMN `sessionId` CHAR(32) DEFAULT NULL,
  ADD INDEX `sessionId` (`sessionId`);
When a session needs to acquire some rows (based on some criteria) run:
UPDATE `tbl`
SET `sessionId` = 'aaa'
WHERE `sessionId` IS NULL
AND ...
LIMIT bbb
Replace aaa with the current session ID and ... with the conditions you need to select the correct rows. Replace bbb with the number of rows you need to acquire. Add an ORDER BY clause if you need to process the rows in a certain order (if some of them have higher priority than others). You can also add status = ... in the UPDATE clause to change the status of the acquired rows (to pending, for example) to let other instances of the code know those rows are being processed right now.
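A filled-in sketch of the acquire step, with purely illustrative values for the session ID, conditions, and limit:

UPDATE `tbl`
SET `sessionId` = 'c4ca4238a0b923820dcc509a6f75849b',   -- current session's ID (illustrative)
    `status` = 'pending'                                -- assumed status value
WHERE `sessionId` IS NULL
  AND `status` = 'new'                                  -- assumed selection condition
ORDER BY `id`                                           -- assumed priority order
LIMIT 5;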
The query above acquires some rows. Next, run:
SELECT *
FROM `tbl`
WHERE `sessionId` = 'aaa'
This query gets the acquired rows to be processed in the client code.
After each row is processed, you either DELETE the row or UPDATE it and set sessionId to NULL (release the row) and status to reflect its new status.
Also you should release the rows (using the same procedure as above) when the session is closed.
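A minimal sketch of the release step after processing a row (illustrative names and values; dropping the id condition would release all of this session's rows, e.g. when the session closes):

UPDATE `tbl`
SET `sessionId` = NULL,    -- release the row
    `status` = 'done'      -- assumed new status
WHERE `sessionId` = 'c4ca4238a0b923820dcc509a6f75849b'  -- this session's ID (illustrative)
  AND `id` = 42;           -- the row that was just processed (illustrative)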