I'm working on a ticketing system where users escrow a large number of tickets at once (essentially all tickets that are not out of stock) before claiming them. These tickets are shown to the user, who can then select whichever ticket they want to claim.
This escrow system could introduce race conditions if two users try to escrow the same tickets at the same time and there aren't enough tickets, as in:
Tickets left: 1
User A hits the page, checks number of tickets left. 1 ticket left
User B hits the page, checks number of tickets left. 1 ticket left
Since each sees one ticket left, they would both escrow it, making tickets left -1.
I'd like to avoid locking if at all possible and am wondering if a statement with subqueries like
INSERT INTO ticket_escrows (`ticket`, `count`)
SELECT tickets.id, tickets_per_escrow FROM tickets WHERE tickets.total > (
    COALESCE(
        (SELECT SUM(ticket_escrows.count) FROM ticket_escrows
         WHERE ticket_escrows.ticket = tickets.id
           AND ticket_escrows.valid = 1)
    , 0)
    +
    COALESCE(
        (SELECT SUM(ticket_claims.count)
         FROM ticket_claims
         WHERE ticket_claims.ticket = tickets.id)
    , 0)
)
will be atomic and allow me to prevent race conditions without locking.
Specifically I'm wondering if the above query will prevent the following from happening:
Max tickets: 50 Claimed/Escrowed tickets: 49
T1: start tx -> sums ticket escrows --> 40
T2: start tx -> sums ticket escrows --> 40
T1: sums ticket claims --> 9
T2: sums ticket claims --> 9
T1: Inserts new escrow since there is 1 ticket left --> 0 tickets left
T2: Inserts new escrow since there is 1 ticket left --> -1 tickets left
I'm using InnoDB.
To answer your question "if a statement with subqueries ... will be atomic": in your case, yes.
A single SQL statement is executed atomically because it runs inside a transaction. Since you state you're using InnoDB, the query, even with subqueries, is one SQL statement and as such is executed in a transaction. Quoting the documentation:
In InnoDB, all user activity occurs inside a transaction. If autocommit mode is enabled, each SQL statement forms a single transaction on its own.
...If a statement returns an error, the commit or rollback behavior depends on the error.
Also, isolation levels matter.
In terms of the SQL:1992 transaction isolation levels, the default InnoDB level is REPEATABLE READ
REPEATABLE READ may not be enough for you, depending on the logic of your program. It prevents transactions from writing data that was read by another transaction until the reading transaction completes; phantom reads are still possible, however. See SET TRANSACTION for how to change the isolation level.
To answer your second question "if the above query will prevent the following from happening ...": In a transaction with the SERIALIZABLE isolation level it cannot happen. I believe the default level should be safe as well in your case (supposing tickets.total doesn't change), but I'd prefer having it confirmed by someone.
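For example, to run the escrow statement under SERIALIZABLE, you could do something like the following sketch (the INSERT ... SELECT body is the one from the question, elided here):

```sql
-- Applies to the next transaction started in this session
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

START TRANSACTION;

INSERT INTO ticket_escrows (`ticket`, `count`)
SELECT ...;  -- the availability-checking INSERT ... SELECT from the question

COMMIT;
```

Note that SERIALIZABLE makes plain reads locking reads, so concurrent escrow attempts will block or deadlock rather than both succeed; be prepared to retry on deadlock errors.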
You have really left a lot of information out about how you want this to work, which is why you haven't gotten more/better answers.
Ticketing is a matter of trade-offs. If you show someone there are 10 tickets available, either you immediately make all 10 tickets unavailable for everyone else (which is bad for everyone else) or you don't, which means that person could potentially order a ticket that someone else snapped up while they were deciding which ticket to take. An "escrow" system doesn't really help matters, as it just moves the problem from which tickets to purchase to which tickets to escrow.
During the period where you are not locking everyone else out, the best practice is to craft your SQL in such a way that updates or inserts will fail if someone else has modified the data while you were working on it. This can be as simple as incrementing a counter in the row every time the row is changed and using that counter (plus the primary key) in the WHERE clause of the UPDATE statement. If the counter changed, then the update fails and you know you've lost the race.
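A minimal sketch of that counter technique (the `version` and `holder` column names, and the literal values, are hypothetical):

```sql
-- Read the row, remembering the counter value (say it returns version = 3)
SELECT id, version FROM tickets WHERE id = 42;

-- Later, try to claim the ticket; the WHERE clause fails if anyone else
-- modified the row (and bumped version) in the meantime
UPDATE tickets
SET holder = 7, version = version + 1
WHERE id = 42 AND version = 3;
-- If the affected-rows count is 0, you lost the race and must re-read.
```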
I don't understand what you want to have happen or your data structures enough to give you much more advice.
I know that single queries in MySQL are executed atomically (autocommit mode is enabled by default).
But look at this query:
UPDATE products SET Amount = Amount - 1 WHERE Amount > 0 AND ID_PRODUCT = 5;
But what about concurrency? More than one user can execute the same query at about the same time. For instance, two users buy the same product when the availability is 1: when each of them views the product there is one unit left, but by the time the query executes in the backend the other user has already purchased the product, so the condition Amount > 0 is not satisfied and the update is not applied. I would like to know whether this model is robust and safe for my application, or whether I have to use a lock or something like that?
This statement is atomic. It can be run more than once, even concurrently, but you will need to pay close attention to the result to see if any rows were modified.
In the case of running out of stock you'll get a result indicating no rows were modified, or in other words, it failed to subtract stock due to the condition.
Some systems prefer to move the stock around from a stock table like this to another "order" table, much like a ledger, so you can be sure you're not subtracting inventory that then goes missing if not properly purchased. A ledger makes it easy to unwind and return stock if someone abandons an order, makes a return, etc.
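Putting that together, the flow could look like this sketch (the `stock_ledger` table and the literal ids are hypothetical):

```sql
START TRANSACTION;

-- Atomically take one unit; affects 0 rows if out of stock
UPDATE products
SET Amount = Amount - 1
WHERE Amount > 0 AND ID_PRODUCT = 5;

-- In application code, check the affected-rows count here;
-- if it is 0, ROLLBACK and report "out of stock".

-- Ledger variant: record where the unit went, so it can be
-- returned if the order is abandoned or refunded
INSERT INTO stock_ledger (product_id, order_id, qty)
VALUES (5, 1001, 1);

COMMIT;
```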
Currently we have a ticket management system, and like all ticketing systems it needs to assign cases to agents in a round-robin manner. At the same time, an agent can apply their own filtering logic and work on their own queue.
The problem,
The table with the tickets is very large now, spans over 10 million rows.
One ticket should never be assigned to two different users.
To solve the above problem, this is the flow we have:
1) A select query is fired with the filter criteria and LIMIT 0,1.
2) The row returned by that query is then selected by its id and locked FOR UPDATE.
3) Lastly we fire the update saying user X has picked the case.
While step 3 executes, other users cannot get a lock on the same case, so they re-run the step 1 query, possibly multiple times, to get the next available case. As the number of users increases, the time spent on these retries grows higher and higher.
We tried doing the SELECT ... FOR UPDATE in the step 1 query itself, but it makes the entire query slow, presumably because of the huge number of rows the select has to examine.
Questions,
Is there a different approach we need to take altogether?
Would doing the select and update in a stored procedure ensure the same result as doing a SELECT ... FOR UPDATE and then the update?
P.S. - I have asked the same question on Stack Exchange.
The problem is that you are trying to use MySQL-level locking to ensure that a ticket cannot be assigned to more than one person. With that approach there is no way for the application to tell whether a ticket is currently locked by a user.
I would implement an application level lock by adding 2 lock related fields to the tickets table: a timestamp when the lock was applied and a user id field telling you which user holds the lock. The lock related fields may be held in another table (shopping cart, for example can be used for this purpose).
When a user selects a ticket, then you try to update these lock fields with a conditional update statement:
UPDATE tickets
SET lock_time = NOW(), lock_user = ...
WHERE ticket_id = ... AND lock_time IS NULL
Values in place of ... are supplied by your application. The lock_time IS NULL criterion is there to make sure that if the ticket has already been selected by another user, the later user does not override the lock. After the update statement, check the number of rows affected. If it is one, the current user acquired the lock. If it is 0, someone else has locked the ticket.
If you have the locking data in another table, then place a unique restriction on the ticket id field in that table and use insert to acquire a lock. If the insert succeeds, then the lock is acquired. If it fails, then another user has locked the ticket.
The lock is usually held for a number of minutes, after that your application must release the lock (set locking fields to null or delete the locking record from the other table).
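A sketch of the separate-table variant (the `ticket_locks` table and the literal ids are hypothetical; `ticket_id` is assumed to carry a UNIQUE constraint):

```sql
-- Acquire: succeeds only if no lock row for this ticket exists yet
INSERT INTO ticket_locks (ticket_id, user_id, lock_time)
VALUES (42, 7, NOW());
-- A duplicate-key error here means another user already holds the lock.

-- Release the lock when the user is done:
DELETE FROM ticket_locks
WHERE ticket_id = 42 AND user_id = 7;

-- Periodically expire stale locks older than, say, 10 minutes:
DELETE FROM ticket_locks
WHERE lock_time < NOW() - INTERVAL 10 MINUTE;
```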
Let's say I have a table called tickets which has 4 rows, each representing a ticket to a show (in this scenario these are the last 4 tickets available to this show).
3 users are attempting a purchase simultaneously and each want to buy 2 tickets and all press their "purchase" button at the same time.
Is it enough to handle the assignment of each set of 2 via a TRANSACTION or do I need to explicitly call LOCK TABLE on each assignment to protect against the possibility that 2 of the tickets will be assigned to two users.
The desire is for one of them to get nothing and be told that the system was mistaken in thinking there were available tickets.
I'm confused by the documentation which says that the LOCK will be implicitly released when I start a TRANSACTION, and was hoping to get some clarity on the correct way to handle this.
If you use a transaction, MySQL takes care of locking the rows you modify automatically. Note, though, that plain SELECTs inside a transaction do not block other sessions, so a transaction by itself does not stop two sessions from reading the same availability and both proceeding; you need the conditional update below as well.
You could use "optimistic locking": When updating the ticket as sold, make sure you include the condition that the ticket is still available. Then check if the update failed (you get a count of rows updated, can be 1 or 0).
For example, instead of
UPDATE tickets SET sold_to = ? WHERE id = ?
do
UPDATE tickets SET sold_to = ? WHERE id = ? AND sold_to IS NULL
This way, the database will ensure that you don't get conflicting updates. No need for explicit locking (the normal transaction isolation will be sufficient).
If a purchase covers two tickets, you still need to wrap the two updates in a single transaction (and roll back if either of them failed).
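The two-ticket case could be sketched like this (the literal ids and buyer value are hypothetical; the affected-rows checks happen in application code):

```sql
START TRANSACTION;

UPDATE tickets SET sold_to = 7 WHERE id = 1 AND sold_to IS NULL;
-- check affected rows in application code; if 0, ROLLBACK and stop

UPDATE tickets SET sold_to = 7 WHERE id = 2 AND sold_to IS NULL;
-- check affected rows again; if 0, ROLLBACK and stop

COMMIT;  -- only reached if both updates affected exactly one row
```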
I have this online shop that I built a while ago, and I've been using LOCK TABLES on the product stock tables to make sure stock doesn't go below 0 and is updated properly.
Lately there's been a lot of traffic and lots of operations (some other LOCK TABLES too) on the database so I have been asking myself if there was a faster and safer way to keep the data consistent.
I'm trying to understand transactions and how they work and I'm wondering if the following logic would successfully apply to my checkout process:
1) START TRANSACTION (so that any following query can be rolled back, if one of them fails or some other condition like stock is lower than needed)
2) insert customer in the 'customers' table
3) insert order info in the 'orders' table
4) similar queries to different tables
5) for each product in the cart:
5.1) update products set stock = stock - x where stock - x >= 0 (x is whatever units the customer wants to buy)
5.2) check affected rows and if affected rows == 0 then ROLLBACK and exit (not enough stock for the current product so cancel the order/throw an error)
5.3) some other queries... etc..
6) COMMIT
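As SQL, the steps above would look roughly like this sketch (column lists are elided, `?` and `x` are placeholders supplied by the application):

```sql
START TRANSACTION;

INSERT INTO customers (...) VALUES (...);   -- step 2
INSERT INTO orders (...) VALUES (...);      -- step 3

-- step 5.1, repeated for each product in the cart
UPDATE products SET stock = stock - x
WHERE id = ? AND stock - x >= 0;
-- step 5.2: if the affected-rows count is 0, ROLLBACK and abort

COMMIT;                                     -- step 6
```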
Does that sound correct?
What I don't get (and don't know if I should be concerned about in the first place) is what happens to the 'products' table and the stock figures, if some concurrent session (another customer) tries to place the same order or an order containing some of the same products.
Would the second transaction wait for the first one to be finished and the second transaction use the latest stock figures or would they both run concurrently and probably both fail in some cases or what?
Your workflow is correct, even though I would take step 2 (save customer's details) out of the transaction: you probably want to remember your customer, even if the order couldn't be placed.
When a row is updated within a transaction, the row (or, in some unfortunate cases, the whole table) becomes locked in exclusive mode until the end of the transaction. It means that a simultaneous attempt to place an order for the same product would be put on hold when trying to update the stock.
When the first transaction is committed (or rolled back), the lock is released, and a concurrent update would be updating the new value.
Recommended reading: this manual chapter, in full, and this page in particular. Yes, it's a lot (but don't worry if you don't understand everything at the beginning -- the rabbit hole is very, very deep)
I'm developing a hospital management system which, as you know, is a huge system. To make things clear: reception will open a new ticket for each patient. The ticket number has a customized form, month-year-number (112011000101), which means ticket no. 101 in the month of November 2011.
When saving a ticket, my application reads the last saved ticket number and increments it by one (SELECT Tick_No FROM tickets WHERE Tick_No LIKE '112011%' ORDER BY Tick_No DESC LIMIT 1). This returns the last saved ticket number (112011000101); incremented by one, the new ticket number will be 112011000102.
So if two employees save new tickets at the same time, is there a chance of getting a duplicate ticket number?
I'm using transactions with a MySQL database, which gives row-level locking but still leaves the table available for other queries.
I need a clear answer, please, on whether duplicates are possible.
Note: in my code, if something goes wrong a rollback is issued and an automatic retry of the saving function is triggered (on each save the application re-checks the last number).
You should use table-level locking in this case; you could, for example, use the MyISAM table engine, or an explicit LOCK TABLES. This should help: http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
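A sketch of serializing the read-increment-insert sequence with an explicit table lock (the column list on the INSERT is elided; the new ticket number is computed by the application):

```sql
-- No other session can read or write tickets until UNLOCK TABLES
LOCK TABLES tickets WRITE;

SELECT Tick_No FROM tickets
WHERE Tick_No LIKE '112011%'
ORDER BY Tick_No DESC LIMIT 1;

-- application computes last number + 1, then:
INSERT INTO tickets (Tick_No, ...) VALUES ('112011000102', ...);

UNLOCK TABLES;
```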