Understanding MySQL concurrency/isolation levels

I am working on the backend of an application that needs to protect an external API from too many requests per user per month, so I need to keep track of the number of requests from each user. I have a lot of experience with concurrent programming but almost none with database management or MySQL.
So, suppose I want to execute the equivalent of the following pseudocode, where I mix SQL statements with application-level logic, and where lookups is a table:
mutex mtx;
set @userid = 'usrid1';
set @date = CURDATE();
set @month = CONCAT_WS('-', YEAR(@date), MONTH(@date));
mtx.lock()
select counter from lookups where userid=@userid and month=@month;
if returned rows == 0:
    insert into lookups set month=@month, userid=@userid, counter=1;
else:
    update lookups set counter=counter+1 where userid=@userid and month=@month;
mtx.unlock()
Except, of course, I don't have access to that mutex. At first I thought it would be enough to wrap the whole thing inside a transaction, but upon closer inspection of the MySQL reference it seems that may not be enough to avoid race conditions, such as two threads/processes reading the same counter value. Is it good enough then, in MySQL with default settings, to do the following:
set @userid = 'usrid1';
set @date = CURDATE();
set @month = CONCAT_WS('-', YEAR(@date), MONTH(@date));
start transaction;
select counter from lookups where userid=@userid and month=@month for update;
if returned rows == 0:
    insert into lookups set month=@month, userid=@userid, counter=1;
else:
    update lookups set counter=counter+1 where userid=@userid and month=@month;
commit;
From what I can glean from the reference, it looks like it should be enough, and it should cause neither race conditions nor deadlocks, but the reference is long-winded and complex, so I wanted to ask here to be sure. Performance isn't important. The reference states that MySQL's default isolation level is REPEATABLE READ.

I suggest this solution:
create table lookups (userid varchar(20), yearmonth date, counter int, primary key (userid, yearmonth));
insert into lookups set userid = 'usrid1',
yearmonth = date_format(curdate(), '%Y-%m-01'),
counter = last_insert_id(1)
on duplicate key update
counter = last_insert_id(counter + 1);
select last_insert_id(); -- returns the new value, whether 1 or the updated value.
This means you don't have to check whether a row exists; it will either insert it or update it atomically.
The last_insert_id(<expression>) trick is documented at the end of the entry for that function: https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id
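To enforce the monthly cap, the application can then compare the returned counter against its limit. A minimal sketch, assuming a hypothetical cap of 1000 requests per month:
-- run the INSERT ... ON DUPLICATE KEY UPDATE above, then:
select last_insert_id() <= 1000 as allowed;
-- allowed = 1 means this request is within the cap; 0 means the user is over quota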

Related

UPDATE primary key in POSTGRES database [duplicate]

Several months ago I learned from an answer on Stack Overflow how to perform multiple updates at once in MySQL using the following syntax:
INSERT INTO table (id, field, field2) VALUES (1, A, X), (2, B, Y), (3, C, Z)
ON DUPLICATE KEY UPDATE field=VALUES(field), field2=VALUES(field2);
I've now switched over to PostgreSQL and apparently this is not correct. It's referring to all the correct tables so I assume it's a matter of different keywords being used but I'm not sure where in the PostgreSQL documentation this is covered.
To clarify, I want to insert several things and if they already exist to update them.
PostgreSQL since version 9.5 has UPSERT support via the ON CONFLICT clause, with syntax similar to MySQL's:
INSERT INTO the_table (id, column_1, column_2)
VALUES (1, 'A', 'X'), (2, 'B', 'Y'), (3, 'C', 'Z')
ON CONFLICT (id) DO UPDATE
SET column_1 = excluded.column_1,
column_2 = excluded.column_2;
Searching PostgreSQL's mailing-list archives for "upsert" leads to an example in the manual of what you possibly want to do:
Example 38-2. Exceptions with UPDATE/INSERT
This example uses exception handling to perform either UPDATE or INSERT, as appropriate:
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
LOOP
-- first try to update the key
-- note that "a" must be unique
UPDATE db SET b = data WHERE a = key;
IF found THEN
RETURN;
END IF;
-- not there, so try to insert the key
-- if someone else inserts the same key concurrently,
-- we could get a unique-key failure
BEGIN
INSERT INTO db(a,b) VALUES (key, data);
RETURN;
EXCEPTION WHEN unique_violation THEN
-- do nothing, and loop to try the UPDATE again
END;
END LOOP;
END;
$$
LANGUAGE plpgsql;
SELECT merge_db(1, 'david');
SELECT merge_db(1, 'dennis');
There's possibly an example of how to do this in bulk, using CTEs in 9.1 and above, in the hackers mailing list:
WITH foos AS (SELECT (UNNEST(%foo[])).*),
updated AS (UPDATE foo SET foo.a = foos.a ... RETURNING foo.id)
INSERT INTO foo SELECT foos.* FROM foos LEFT JOIN updated USING(id)
WHERE updated.id IS NULL;
See a_horse_with_no_name's answer for a clearer example.
Warning: this is not safe if executed from multiple sessions at the same time (see caveats below).
Another clever way to do an "UPSERT" in postgresql is to do two sequential UPDATE/INSERT statements that are each designed to succeed or have no effect.
UPDATE table SET field='C', field2='Z' WHERE id=3;
INSERT INTO table (id, field, field2)
SELECT 3, 'C', 'Z'
WHERE NOT EXISTS (SELECT 1 FROM table WHERE id=3);
The UPDATE will succeed if a row with "id=3" already exists, otherwise it has no effect.
The INSERT will succeed only if row with "id=3" does not already exist.
You can combine these two into a single string and run them both with a single SQL statement executed from your application. Running them together in a single transaction is highly recommended.
This works very well when run in isolation or on a locked table, but it is subject to race conditions: it can still fail with a duplicate-key error if a row is inserted concurrently, or terminate with no row inserted when a row is deleted concurrently. A SERIALIZABLE transaction on PostgreSQL 9.1 or higher handles it reliably, at the cost of a very high serialization failure rate, meaning you'll have to retry a lot. See "why is upsert so complicated", which discusses this case in more detail.
This approach is also subject to lost updates in read committed isolation unless the application checks the affected row counts and verifies that either the insert or the update affected a row.
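A minimal sketch of the serializable variant described above, reusing the UPDATE/INSERT pair from earlier; the application must roll back and retry the whole transaction whenever it fails with SQLSTATE 40001 (serialization_failure):
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE table SET field='C', field2='Z' WHERE id=3;
INSERT INTO table (id, field, field2)
SELECT 3, 'C', 'Z'
WHERE NOT EXISTS (SELECT 1 FROM table WHERE id=3);
COMMIT;  -- on SQLSTATE 40001, retry from BEGIN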
With PostgreSQL 9.1 this can be achieved using a writeable CTE (common table expression):
WITH new_values (id, field1, field2) as (
values
(1, 'A', 'X'),
(2, 'B', 'Y'),
(3, 'C', 'Z')
),
upsert as
(
update mytable m
set field1 = nv.field1,
field2 = nv.field2
FROM new_values nv
WHERE m.id = nv.id
RETURNING m.*
)
INSERT INTO mytable (id, field1, field2)
SELECT id, field1, field2
FROM new_values
WHERE NOT EXISTS (SELECT 1
FROM upsert up
WHERE up.id = new_values.id)
See these blog entries:
Upserting via Writeable CTE
WAITING FOR 9.1 – WRITABLE CTE
WHY IS UPSERT SO COMPLICATED?
Note that this solution does not prevent a unique key violation but it is not vulnerable to lost updates.
See the follow up by Craig Ringer on dba.stackexchange.com
In PostgreSQL 9.5 and newer you can use INSERT ... ON CONFLICT UPDATE.
See the documentation.
A MySQL INSERT ... ON DUPLICATE KEY UPDATE can be directly rephrased as an ON CONFLICT UPDATE. Neither is SQL-standard syntax; they're both database-specific extensions. There are good reasons MERGE wasn't used for this; a new syntax wasn't created just for fun. (MySQL's syntax also has issues that mean it wasn't adopted directly.)
e.g. given setup:
CREATE TABLE tablename (a integer primary key, b integer, c integer);
INSERT INTO tablename (a, b, c) values (1, 2, 3);
the MySQL query:
INSERT INTO tablename (a,b,c) VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=c+1;
becomes:
INSERT INTO tablename (a, b, c) values (1, 2, 10)
ON CONFLICT (a) DO UPDATE SET c = tablename.c + 1;
Differences:
You must specify the column name (or unique constraint name) to use for the uniqueness check. That's the ON CONFLICT (columnname) DO clause.
The keyword SET must be used, as if this was a normal UPDATE statement
It has some nice features too:
You can have a WHERE clause on your UPDATE (letting you effectively turn ON CONFLICT UPDATE into ON CONFLICT IGNORE for certain values)
The proposed-for-insertion values are available as the row-variable EXCLUDED, which has the same structure as the target table. You can get the original values in the table by using the table name. So in this case EXCLUDED.c will be 10 (because that's what we tried to insert) and tablename.c will be 3 because that's the current value in the table. You can use either or both in the SET expressions and WHERE clause, for example:
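A small sketch of such a WHERE clause, building on the tablename example above (overwrite only when the proposed value is larger; otherwise the conflict is effectively ignored):
INSERT INTO tablename (a, b, c) VALUES (1, 2, 10)
ON CONFLICT (a) DO UPDATE
SET c = EXCLUDED.c
WHERE tablename.c < EXCLUDED.c;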
For background on upsert see How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
I was looking for the same thing when I came here, but the lack of a generic "upsert" function bothered me a bit, so I thought you could just pass the update and insert SQL as arguments to the function from the manual.
That would look like this:
CREATE FUNCTION upsert (sql_update TEXT, sql_insert TEXT)
RETURNS VOID
LANGUAGE plpgsql
AS $$
BEGIN
LOOP
-- first try to update
EXECUTE sql_update;
-- check if the row is found
IF FOUND THEN
RETURN;
END IF;
-- not found so insert the row
BEGIN
EXECUTE sql_insert;
RETURN;
EXCEPTION WHEN unique_violation THEN
-- do nothing and loop
END;
END LOOP;
END;
$$;
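A usage sketch, reusing the db table from the manual's example above (the two quoted statements are whatever UPDATE/INSERT pair fits your schema):
SELECT upsert(
    $$UPDATE db SET b = 'david' WHERE a = 1$$,
    $$INSERT INTO db (a, b) VALUES (1, 'david')$$
);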
And perhaps, to do what you initially wanted (a batch "upsert"), you could use Tcl to split the sql_update and loop the individual updates; the performance hit will be very small, see http://archives.postgresql.org/pgsql-performance/2006-04/msg00557.php
The highest cost is executing the query from your code; on the database side, the execution cost is much smaller.
There is no simple command to do it.
The most correct approach is to use a function, like the one from the docs.
Another solution (although not that safe) is to do an update with RETURNING, check which rows were updated, and insert the rest of them.
Something along the lines of:
update table
set column = x.column
from (values (1,'aa'),(2,'bb'),(3,'cc')) as x (id, column)
where table.id = x.id
returning id;
assuming id:2 was returned:
insert into table (id, column) values (1, 'aa'), (3, 'cc');
Of course it will bail out sooner or later (in a concurrent environment), as there is a clear race condition here, but usually it will work.
Here's a longer and more comprehensive article on the topic.
I use this merge function:
CREATE OR REPLACE FUNCTION merge_tabla(key INT, data TEXT)
RETURNS void AS
$BODY$
BEGIN
IF EXISTS(SELECT a FROM tabla WHERE a = key)
THEN
UPDATE tabla SET b = data WHERE a = key;
RETURN;
ELSE
INSERT INTO tabla(a,b) VALUES (key, data);
RETURN;
END IF;
END;
$BODY$
LANGUAGE plpgsql;
Personally, I've set up a "rule" attached to the insert statement. Say you had a "dns" table that recorded dns hits per customer on a per-time basis:
CREATE TABLE dns (
"time" timestamp without time zone NOT NULL,
customer_id integer NOT NULL,
hits integer
);
You wanted to be able to re-insert rows with updated values, or create them if they didn't exist already. Keyed on the customer_id and the time. Something like this:
CREATE RULE replace_dns AS
ON INSERT TO dns
WHERE (EXISTS (SELECT 1 FROM dns WHERE ((dns."time" = new."time")
AND (dns.customer_id = new.customer_id))))
DO INSTEAD UPDATE dns
SET hits = new.hits
WHERE ((dns."time" = new."time") AND (dns.customer_id = new.customer_id));
Update: This has the potential to fail if simultaneous inserts are happening, as it will generate unique_violation exceptions. However, the non-terminated transaction will continue and succeed, and you just need to repeat the terminated transaction.
However, if there are tons of inserts happening all the time, you will want to put a table lock around the insert statements: SHARE ROW EXCLUSIVE locking will prevent any operations that could insert, delete or update rows in your target table. However, updates that do not update the unique key are safe, so if no operation will do this, use advisory locks instead.
Also, the COPY command does not use RULES, so if you're inserting with COPY, you'll need to use triggers instead.
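Since COPY bypasses rules, a trigger-based equivalent of replace_dns might look like the following sketch (the function and trigger names are hypothetical):
CREATE OR REPLACE FUNCTION dns_replace() RETURNS trigger AS $$
BEGIN
    UPDATE dns SET hits = NEW.hits
    WHERE "time" = NEW."time" AND customer_id = NEW.customer_id;
    IF FOUND THEN
        RETURN NULL;  -- row updated in place, suppress the insert
    END IF;
    RETURN NEW;       -- no existing row, let the insert proceed
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER replace_dns_trigger
BEFORE INSERT ON dns
FOR EACH ROW EXECUTE PROCEDURE dns_replace();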
Similar to the most-liked answer, but works slightly faster:
WITH upsert AS (UPDATE spider_count SET tally=1 WHERE date='today' RETURNING *)
INSERT INTO spider_count (spider, tally) SELECT 'Googlebot', 1 WHERE NOT EXISTS (SELECT * FROM upsert)
(source: http://www.the-art-of-web.com/sql/upsert/)
I customized the "upsert" function above for the case where you want to INSERT first AND then REPLACE:
CREATE OR REPLACE FUNCTION upsert(sql_insert text, sql_update text)
RETURNS void AS
$BODY$
BEGIN
-- first try to insert; if the key already exists, update instead. Note: the insert includes the pk, the update does not...
EXECUTE sql_insert;
RETURN;
EXCEPTION WHEN unique_violation THEN
EXECUTE sql_update;
IF FOUND THEN
RETURN;
END IF;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION upsert(text, text)
OWNER TO postgres;
And then to execute it, do something like this:
SELECT upsert($$INSERT INTO ...$$, $$UPDATE ... $$);
It is important to use double-dollar quoting ($$ ... $$) to avoid escaping errors.
Check the speed...
According to the PostgreSQL documentation of the INSERT statement, handling the ON DUPLICATE KEY case is not supported. That part of the syntax is a proprietary MySQL extension.
I have the same issue for managing account settings as name value pairs.
The design criteria is that different clients could have different settings sets.
My solution, similar to JWP's, is to bulk-erase and replace, generating the merge record within your application.
This is pretty bulletproof and platform independent, and since there are never more than about 20 settings per client, this is only 3 fairly low-load db calls - probably the fastest method.
The alternative - updating individual rows, checking for exceptions, then inserting, or some combination thereof - is hideous code, slow, and often breaks because (as mentioned above) nonstandard SQL exception handling changes from db to db, or even release to release.
#This is pseudo-code - within the application:
BEGIN TRANSACTION - get transaction lock
SELECT all current name value pairs where id = $id into a hash record
create a merge record from the current and update record
(set intersection where shared keys in new win, and empty values in new are deleted).
DELETE all name value pairs where id = $id
COPY/INSERT merged records
END TRANSACTION
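A hedged sketch of that pseudocode in SQL, assuming a hypothetical settings table of name/value pairs:
BEGIN;
-- the merged records are computed in the application from this read:
SELECT name, value FROM settings WHERE id = 42;
DELETE FROM settings WHERE id = 42;
INSERT INTO settings (id, name, value) VALUES
    (42, 'theme', 'dark'),
    (42, 'lang', 'en');
COMMIT;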
CREATE OR REPLACE FUNCTION save_user(_id integer, _name character varying)
RETURNS boolean AS
$BODY$
BEGIN
UPDATE users SET name = _name WHERE id = _id;
IF FOUND THEN
RETURN true;
END IF;
BEGIN
INSERT INTO users (id, name) VALUES (_id, _name);
EXCEPTION WHEN OTHERS THEN
UPDATE users SET name = _name WHERE id = _id;
END;
RETURN TRUE;
END;
$BODY$
LANGUAGE plpgsql VOLATILE STRICT;
For merging small sets, using the above function is fine. However, if you are merging large amounts of data, I'd suggest looking into http://mbk.projects.postgresql.org
The current best practice that I'm aware of is:
COPY new/updated data into temp table (sure, or you can do INSERT if the cost is ok)
Acquire Lock [optional] (advisory is preferable to table locks, IMO)
Merge (the fun part), as sketched below.
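A hedged sketch of that flow; the staging and target table names, columns, and file path are all hypothetical:
BEGIN;
CREATE TEMP TABLE staging (LIKE target INCLUDING ALL) ON COMMIT DROP;
COPY staging FROM '/path/to/data.csv' WITH (FORMAT csv);
-- merge: update the matches, then insert the rest
UPDATE target t SET value = s.value FROM staging s WHERE t.id = s.id;
INSERT INTO target (id, value)
SELECT s.id, s.value FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
COMMIT;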
UPDATE will return the number of modified rows. If you use JDBC (Java), you can then check this value against 0 and, if no rows have been affected, fire INSERT instead. If you use some other programming language, maybe the number of the modified rows still can be obtained, check documentation.
This may not be as elegant, but you get much simpler SQL that is easier to use from the calling code. On the other hand, if you write the ten-line script in PL/pgSQL, you probably should have a unit test of one kind or another just for it.
Edit: This does not work as expected. Unlike the accepted answer, this produces unique key violations when two processes repeatedly call upsert_foo concurrently.
Eureka! I figured out a way to do it in one query: use UPDATE ... RETURNING to test if any rows were affected:
CREATE TABLE foo (k INT PRIMARY KEY, v TEXT);
CREATE FUNCTION update_foo(k INT, v TEXT)
RETURNS SETOF INT AS $$
UPDATE foo SET v = $2 WHERE k = $1 RETURNING $1
$$ LANGUAGE sql;
CREATE FUNCTION upsert_foo(k INT, v TEXT)
RETURNS VOID AS $$
INSERT INTO foo
SELECT $1, $2
WHERE NOT EXISTS (SELECT update_foo($1, $2))
$$ LANGUAGE sql;
The UPDATE has to be done in a separate procedure because, unfortunately, this is a syntax error:
... WHERE NOT EXISTS (UPDATE ...)
Now it works as desired:
SELECT upsert_foo(1, 'hi');
SELECT upsert_foo(1, 'bye');
SELECT upsert_foo(3, 'hi');
SELECT upsert_foo(3, 'bye');
PostgreSQL >= v15
Big news on this topic: as of PostgreSQL v15, it is possible to use the MERGE command. In fact, this long-awaited feature was listed first among the improvements of the v15 release.
This is similar to INSERT ... ON CONFLICT but more batch-oriented. It has a powerful WHEN MATCHED vs WHEN NOT MATCHED structure that gives the ability to INSERT, UPDATE or DELETE on such conditions.
It not only eases bulk changes, it even adds more control than traditional UPSERT and INSERT ... ON CONFLICT.
Take a look at this very complete sample from official page:
MERGE INTO wines w
USING wine_stock_changes s
ON s.winename = w.winename
WHEN NOT MATCHED AND s.stock_delta > 0 THEN
INSERT VALUES(s.winename, s.stock_delta)
WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
UPDATE SET stock = w.stock + s.stock_delta
WHEN MATCHED THEN
DELETE;
PostgreSQL v9, v10, v11, v12, v13, v14
If your version is below v15 and at least v9.5, probably the best choice is to use UPSERT syntax with the ON CONFLICT clause.
Here is an example of how to do an upsert with parameters and without special SQL constructions, for when you have a special condition (sometimes you can't use 'on conflict' because you can't create the needed constraint):
WITH upd AS
(
update view_layer set metadata=:metadata where layer_id = :layer_id and view_id = :view_id returning id
)
insert into view_layer (layer_id, view_id, metadata)
(select :layer_id layer_id, :view_id view_id, :metadata metadata FROM view_layer l
where NOT EXISTS(select id FROM upd WHERE id IS NOT NULL) limit 1)
returning id
Maybe it will be helpful.

MySQL row lock and atomic updates

I am building a "poor man's queuing system" using MySQL. It's a single table containing jobs that need to be executed (the table name is queue). I have several processes on multiple machines whose job it is to call the fetch_next2 sproc to get an item off of the queue.
The whole point of this procedure is to make sure that we never let 2 clients get the same job. I thought that by using the SELECT .. LIMIT 1 FOR UPDATE would allow me to lock a single row so that I could be sure it was only updated by 1 caller (updated such that it no longer fit the criteria of the SELECT being used to filter jobs that are "READY" to be processed).
Can anyone tell me what I'm doing wrong? I just had some instances where the same job was given to 2 different processes so I know it doesn't work properly. :)
CREATE DEFINER=`masteruser`@`%` PROCEDURE `fetch_next2`()
BEGIN
SET @id = (SELECT q.Id FROM queue q WHERE q.State = 'READY' LIMIT 1 FOR UPDATE);
UPDATE queue
SET State = 'PROCESSING', Attempts = Attempts + 1
WHERE Id = @id;
SELECT Id, Payload
FROM queue
WHERE Id = @id;
END
Code for the answer:
CREATE DEFINER=`masteruser`@`%` PROCEDURE `fetch_next2`()
BEGIN
SET @id := 0;
-- the assignment inside the UPDATE captures the claimed row's Id atomically
UPDATE queue SET State='PROCESSING', Id=(SELECT @id := Id) WHERE State='READY' LIMIT 1;
# You can do an IF @id != 0 check here
SELECT Id, Payload
FROM queue
WHERE Id = @id;
END
The problem with what you are doing is that there is no atomic grouping of the operations. You are using the SELECT ... FOR UPDATE syntax. The docs say that it blocks others "from reading the data in certain transaction isolation levels", but not all levels (I think). Between your first SELECT and the UPDATE, another SELECT can occur from another thread. Are you using MyISAM or InnoDB? MyISAM might not support it.
The easiest way to make sure this works properly is to lock the table.
[Edit] The method I describe right here is more time-consuming than using the Id=(SELECT @id := Id) method in the above code.
Another method would be to do the following:
Have a column that is normally set to 0.
Do an UPDATE ... SET ColName=UNIQ_ID WHERE ColName=0 LIMIT 1. That will make sure only one process can update that row; get it via a SELECT afterwards. (UNIQ_ID is not a MySQL feature, just a placeholder for a unique value.)
If you need a unique ID, you can use a table with auto_increment just for that.
You can also kind of do this with transactions. If you start a transaction on a table, run UPDATE foobar SET LockVar=19 WHERE LockVar=0 LIMIT 1; from one thread, and do the exact same thing on another thread, the second thread will wait for the first thread to commit before it gets its row. That may end up being a complete table blocking operation though.
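A sketch of that two-session interaction, with the hypothetical foobar table from above:
-- session 1:
START TRANSACTION;
UPDATE foobar SET LockVar=19 WHERE LockVar=0 LIMIT 1;  -- takes a row lock
-- session 2 now runs the same UPDATE and blocks on the lock...
COMMIT;  -- ...until this commit, after which session 2 claims a different row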

Calling a stored procedure in parallel to increase a counter and ensure atomic increments

I'm creating a stored procedure which can increment the value of a counter and return if that invocation was responsible for reaching the MaxValue. The tricky part is this procedure will be call quickly and in parallel from different threads and different machines.
Example scenario:
Two threads executing in parallel call the same stored procedure to increment the same counter. Let's assume CounterId = 5 is passed in as a parameter for both. Before either executes, the counter record has field values of CounterValue = 9 and MaxValue = 10.
What I want to happen is for one of the procedures to successfully increment the CounterValue to 10 and return a result indicating it was responsible for the change that caused CounterValue to reach MaxValue. The other procedure should not increment the value (since it would go past 10) and should return a result indicating that the max was already reached for the counter.
I thought about performing a query before or after, but it seems that could leave a 'hole' where a change could be made by a separate thread and cause a false positive/negative to be returned.
This is just the start of an idea for the procedure. I feel like it needs locking, a transaction, or something:
UPDATE SomeCounters
SET CounterValue = (CounterValue + @AddValue),
    MaxReached = CASE WHEN MaxValue = (CounterValue + @AddValue) THEN 1 ELSE 0 END
WHERE CounterId = @CounterId
AND MaxReached = 0
Use OUTPUT
DECLARE @temp TABLE (MaxReached BIT NOT NULL);
UPDATE SomeCounters
SET CounterValue = (CounterValue + @AddValue),
    MaxReached = CASE WHEN MaxValue = (CounterValue + @AddValue) THEN 1 ELSE 0 END
OUTPUT INSERTED.MaxReached INTO @temp
WHERE CounterId = @CounterId
AND MaxReached = 0;
The update is atomic, and you can then select the value out of the @temp table variable and do whatever you want with it. This way you'll be able to capture the exact update that caused MaxReached to be set to true (1).
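A usage sketch, reading the flag back out of the table variable declared above:
SELECT MaxReached FROM @temp;  -- an empty result means no row was updated (max already reached)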
You need to wrap it in a transaction and add a select within the same transaction, as follows:
BEGIN TRANSACTION;
UPDATE SomeCounters
SET CounterValue = (CounterValue + @AddValue)
WHERE CounterId = @CounterId;
SELECT CASE WHEN MaxValue = CounterValue THEN 1 ELSE 0 END AS MaxReached
FROM SomeCounters
WHERE CounterId = @CounterId;
COMMIT TRANSACTION;
You can put that last part into an output parameter so that it's returned from the proc.
One way to achieve what you are looking for is to take an optimistic approach: each stored procedure only updates a record if it wasn't modified by another execution in the meantime, and tries again otherwise. To do this, you read the current value before the update, then update the record with a WHERE clause that expects the value to be unchanged. You also need a loop if you want the call to eventually succeed. With this approach only one stored procedure at a time will update the row, and the others retry until either they succeed or the max is reached.
Something like this:
DECLARE @savedValue int
DECLARE @maxedReached int
-- read current values for concurrency
SELECT @savedValue = CounterValue, @maxedReached = MaxReached
FROM SomeCounters WHERE CounterId = @CounterId
WHILE (@maxedReached = 0)
BEGIN
    UPDATE SomeCounters
    SET CounterValue = (CounterValue + @AddValue),
        MaxReached = CASE WHEN MaxValue = (CounterValue + @AddValue) THEN 1 ELSE 0 END
    WHERE
        CounterId = @CounterId
        AND MaxReached = 0
        -- the next clause ensures that only one stored procedure will succeed
        AND CounterValue = @savedValue
    IF (@@ROWCOUNT = 0)
    BEGIN
        -- failed... another procedure made the change?
        -- If @maxedReached becomes 1, the loop will exit and you will
        -- know the maximum was reached; if not, the loop will try updating
        -- the value again
        -- read the values for concurrency again.
        SELECT @savedValue = CounterValue, @maxedReached = MaxReached
        FROM SomeCounters WHERE CounterId = @CounterId
    END
    ELSE
        BREAK  -- our update succeeded
END
Another strategy I'm investigating is the use of sp_getapplock within a transaction. It seems this would allow me create a unique string for the counter I'm trying to update and block other concurrent executions until it is finished.
This seems particularly useful since my procedure will also contain some IF EXISTS ... ELSE ... logic which will handle either creating the counter record for the first time or updating an existing one.
http://msdn.microsoft.com/en-us/library/ms189823.aspx - sp_getapplock
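A sketch of that approach; the resource name is hypothetical, built from the counter id so that different counters don't block each other:
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'Counter_5', @LockMode = 'Exclusive';
-- the IF EXISTS ... ELSE ... create-or-increment logic runs here, serialized per counter
COMMIT TRANSACTION;  -- a transaction-owned applock is released on commit/rollback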
Assuming that MaxValue is well-known, and is the same for each counter, then you don't need transactions:
UPDATE CounterTable
SET Counter = Counter + 1
WHERE CounterId = @CounterId
This is a database, not a multi-threaded program. This is a request to SQL Server to increment the value of the Counter column of one row of the table. SQL Server will do that - I don't think that it will permit the table to lose one of the requests.
So, at worst, you might wind up with Counter > MaxValue. But if you know what MaxValue is, then you know that any value above it really means MaxValue. There's no need to instantly schedule the work in the same transaction.
So, depending on how time-critical the "extra work" is, simply have a job or other program query the table looking for any counter values greater or equal to MaxValue, and do the work right there. At worst, create a trigger to go off on every UPDATE, which only does any work when the counter value is high.
No need for transactions, unless you need the "extra work" to execute in the same transaction that does the counter update. Since you don't say that you're using transactions for that now, I suspect that you don't need the "extra work" to occur in the same transaction.

MySQL transaction and triggers

Hey guys, here is one I am not able to figure out. We have a table in the database where PHP inserts records. I created a trigger to compute a value to be inserted as well. The computed value should be unique. However, it happens from time to time that I get the exact same number for a few rows in the table. The number is a combination of year, month and day, plus the number of the order for that day. I thought that a single insert operation is atomic and the table is locked while the transaction is in progress. I need the computed value to be unique... The server is version 5.0.88, running on Linux CentOS 5 with a dual-core processor.
Here is the trigger:
CREATE TRIGGER bi_order_data BEFORE INSERT ON order_data
FOR EACH ROW BEGIN
SET NEW.auth_code = get_auth_code();
END;
Corresponding routine looks like this:
CREATE FUNCTION `get_auth_code`() RETURNS bigint(20)
BEGIN
DECLARE my_auth_code, acode BIGINT;
SELECT MAX(d.auth_code) INTO my_auth_code
FROM orders_data d
JOIN orders o ON (o.order_id = d.order_id)
WHERE DATE(NOW()) = DATE(o.date);
IF my_auth_code IS NULL THEN
SET acode = ((DATE_FORMAT(NOW(), "%y%m%d")) + 100000) * 10000 + 1;
ELSE
SET acode = my_auth_code + 1;
END IF;
RETURN acode;
END
I thought that single operation of
insert is atomic and table is locked
while transaction is in progress
Either the table is locked (MyISAM is used) or records may be locked (InnoDB is used), not both.
Since you mentioned "transaction", I assume that InnoDB is in use.
One of InnoDB's advantages is the absence of table locks, so nothing will prevent many triggers' bodies from being executed simultaneously and producing the same result.
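One common fix, sketched here as an assumption rather than this answer's prescription, is to generate the number with an atomic increment instead of MAX(), using a hypothetical per-day sequence table and the LAST_INSERT_ID(expr) trick from earlier on this page:
CREATE TABLE daily_auth_seq (day DATE PRIMARY KEY, seq BIGINT NOT NULL);
INSERT INTO daily_auth_seq (day, seq)
VALUES (CURDATE(), LAST_INSERT_ID((DATE_FORMAT(NOW(), '%y%m%d') + 100000) * 10000 + 1))
ON DUPLICATE KEY UPDATE seq = LAST_INSERT_ID(seq + 1);
SELECT LAST_INSERT_ID();  -- unique per-day auth code, safe under concurrent inserts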

MySQL UPDATE and SELECT in one pass

I have a MySQL table of tasks to perform, each row having parameters for a single task.
There are many worker apps (possibly on different machines), performing tasks in a loop.
The apps access the database using MySQL's native C APIs.
In order to own a task, an app does something like that:
Generate a globally-unique id (for simplicity, let's say it is a number)
UPDATE tasks
SET guid = %d
WHERE guid = 0 LIMIT 1
SELECT params
FROM tasks
WHERE guid = %d
If the last query returns a row, we own it and have the parameters to run
Is there a way to achieve the same effect (i.e. 'own' a row and get its parameters) in a single call to the server?
Try it like this:
UPDATE `lastid` SET `idnum` = (SELECT `id` FROM `history` ORDER BY `id` DESC LIMIT 1);
The above code worked for me.
You may create a procedure that does it:
CREATE PROCEDURE prc_get_task (in_guid BINARY(16), OUT out_params VARCHAR(200))
BEGIN
DECLARE task_id INT;
SELECT id, params
INTO task_id, out_params
FROM tasks
WHERE guid = 0
LIMIT 1
FOR UPDATE;
UPDATE tasks
SET guid = in_guid
WHERE id = task_id;
END;
START TRANSACTION;
CALL prc_get_task(@guid, @params);
COMMIT;
If you are looking for a single query, then it can't happen. An UPDATE statement only returns the number of rows that were updated, and a SELECT doesn't alter a table, only returns values.
Using a procedure will indeed turn it into a single call, and it can be handy if locking is a concern for you. If your biggest concern is network traffic (i.e. passing too many queries) then use the procedure. If your concern is server overload (i.e. the DB is working too hard) then the extra overhead of a procedure could make things worse.
I have the exact same issue. We ended up using PostgreSQL instead, and UPDATE ... RETURNING:
The optional RETURNING clause causes UPDATE to compute and return value(s) based on each row actually updated. Any expression using the table's columns, and/or columns of other tables mentioned in FROM, can be computed. The new (post-update) values of the table's columns are used. The syntax of the RETURNING list is identical to that of the output list of SELECT.
Example: UPDATE my_table SET status = 1 WHERE id = (SELECT id FROM my_table WHERE status = 0 LIMIT 1) RETURNING *;
Or, in your case: UPDATE tasks SET guid = %d WHERE id = (SELECT id FROM tasks WHERE guid = 0 LIMIT 1) RETURNING params;
Sorry, I know this doesn't answer the question with MySQL, and it might not be easy to just switch to PostgreSQL, but it's the best way we've found to do it. Even 6 years later, MySQL still doesn't support UPDATE ... RETURNING. It might be added at some point in the future, but for now MariaDB only has it for DELETE statements.
Edit: There is a task (low priority) to add UPDATE ... RETURNING support to MariaDB.
I don't know about the single-call part, but what you're describing is a lock. Locks are an essential element of relational databases.
I don't know the specifics of locking a row, reading it, and then updating it in MySQL, but with a bit of reading of the MySQL lock documentation you could do all kinds of lock-based manipulations.
The Postgres documentation of locks has a great example describing exactly what you want to do: lock the table, read the table, modify the table.
UPDATE tasks
SET guid = %d, params = (@params := params)
WHERE guid = 0 LIMIT 1;
It will return 1 or 0, depending on whether the values were effectively changed.
SELECT @params AS params;
This one just selects the variable from the connection.
From: here