What's the correct way to protect against multiple sessions getting the same data? - mysql

Let's say I have a table called tickets which has 4 rows, each representing a ticket to a show (in this scenario these are the last 4 tickets available to this show).
3 users attempt a purchase simultaneously: each wants to buy 2 tickets, and all press their "purchase" button at the same time.
Is it enough to handle the assignment of each set of 2 via a TRANSACTION, or do I need to explicitly call LOCK TABLES on each assignment to protect against the possibility that the same 2 tickets will be assigned to two different users?
The desire is for one of them to get nothing and be told that the system was mistaken in thinking there were available tickets.
I'm confused by the documentation which says that the LOCK will be implicitly released when I start a TRANSACTION, and was hoping to get some clarity on the correct way to handle this.

If you use a transaction, InnoDB takes care of the row-level locking for you -- that's a large part of the point of transactions. Note, however, that plain SELECTs inside a transaction are non-locking reads, so a transaction by itself does not stop two sessions from reading the same tickets as available; for that you need either locking reads (SELECT ... FOR UPDATE) or conditional updates.
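For illustration, here is a minimal sketch of the locking-read approach, assuming a tickets table with id and sold_to columns (the names are placeholders for this example):

START TRANSACTION;
-- Lock two available tickets; a concurrent session blocks here until we finish
-- (on MySQL 8.0+ you could add SKIP LOCKED to jump to other free rows instead).
SELECT id FROM tickets WHERE sold_to IS NULL ORDER BY id LIMIT 2 FOR UPDATE;
-- If fewer than 2 rows came back, ROLLBACK and tell the user.
-- Otherwise assign the returned ids (42 is a placeholder buyer id):
UPDATE tickets SET sold_to = 42 WHERE id IN (1, 2);
COMMIT;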

You could use "optimistic locking": when updating the ticket as sold, include the condition that the ticket is still available. Then check whether the update failed (you get a count of rows updated, which can be 1 or 0).
For example, instead of
UPDATE tickets SET sold_to = ? WHERE id = ?
do
UPDATE tickets SET sold_to = ? WHERE id = ? AND sold_to IS NULL
This way, the database will ensure that you don't get conflicting updates. No need for explicit locking (the normal transaction isolation will be sufficient).
If you have two tickets, you still need to wrap the two calls into a single transaction (and roll back if either of them fails), as sketched below.
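As a sketch, using the conditional update from above for both tickets and MySQL's ROW_COUNT() to read the affected-row count (ticket ids 1 and 2 and buyer id 42 are placeholders):

START TRANSACTION;
UPDATE tickets SET sold_to = 42 WHERE id = 1 AND sold_to IS NULL;
SELECT ROW_COUNT(); -- must be 1; if 0, the ticket was taken: ROLLBACK
UPDATE tickets SET sold_to = 42 WHERE id = 2 AND sold_to IS NULL;
SELECT ROW_COUNT(); -- must be 1; if 0: ROLLBACK
COMMIT;

In application code you would read the affected-row count from the driver instead of issuing SELECT ROW_COUNT() by hand.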

Related

Is my way of handling purchases made at the same time in MySQL correct?

I know that single queries in MySQL are executed atomically (autocommit mode is enabled by default). But look at this query:
UPDATE products SET Amount = Amount - 1 WHERE Amount > 0 AND ID_PRODUCT = 5;
What about concurrency? More than one user can execute the same query at roughly the same time. For instance, 2 users buy the same product when the availability is 1: when they place the purchase there is one unit left, but by the time the backend executes the query for the second user, the first has already bought the product, so the condition Amount > 0 is not satisfied and the update is not applied. I would kindly like to know whether this model is robust and safe for my application, or whether I have to use a lock or something like that.
This statement is atomic. It can be run more than once, even concurrently, but you will need to pay close attention to the result to see if any rows were modified.
In the case of running out of stock you'll get a result indicating no rows were modified, or in other words, it failed to subtract stock due to the condition.
Some systems prefer to move the stock around from a stock table like this to another "order" table, much like a ledger, so you can be sure you're not subtracting inventory that then goes missing if not properly purchased. A ledger makes it easy to unwind and return stock if someone abandons an order, makes a return, etc.
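A rough sketch of that ledger idea, using the products table from the question and an assumed order_items table (names are hypothetical):

START TRANSACTION;
-- Take the stock first; this touches 0 rows if nothing is left.
UPDATE products SET Amount = Amount - 1 WHERE ID_PRODUCT = 5 AND Amount > 0;
-- If the affected-row count is 0: ROLLBACK, we are out of stock.
-- Otherwise record where the unit went, so it can be returned later:
INSERT INTO order_items (order_id, product_id, qty) VALUES (123, 5, 1);
COMMIT;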

MySQL Logic Optimization

Currently we have a ticket management system, and like all ticketing systems it needs to assign cases to the agents in a round-robin manner. At the same time, an agent can apply their own filtering logic and work on their own queue.
The problem:
The table with the tickets is very large now, spanning over 10 million rows.
One ticket should never be assigned to two different users.
To solve the above problem, this is the flow we have:
1) A SELECT query is fired with the filter criteria and LIMIT 0,1.
2) The row returned by that query is then selected by id and locked FOR UPDATE.
3) Lastly, we fire the UPDATE saying user X has picked the case.
3.a) While step 3 executes, other users cannot get a lock on the same case, so they re-fire the query from step 1, possibly multiple times, to get the next available case.
As the number of users increases, the time spent in this retry loop goes higher and higher.
We tried putting the FOR UPDATE into the filtered SELECT itself, but that makes the entire query slow, presumably because of the huge number of rows the SELECT examines.
Questions:
Is there a different approach we need to take altogether?
Would doing a select and update in a stored procedure ensure the same results as doing a select for update and then update?
P.S. - I have asked the same question on Stack Exchange.
The problem is that you are trying to use MySQL-level locking to ensure that a ticket cannot be assigned to more than one person, but that way there is no means to tell whether a ticket is already locked by a user.
I would implement an application-level lock by adding 2 lock-related fields to the tickets table: a timestamp for when the lock was applied, and a user id field telling you which user holds the lock. The lock-related fields may also be held in another table (a shopping cart table, for example, can be used for this purpose).
When a user selects a ticket, then you try to update these lock fields with a conditional update statement:
UPDATE tickets
SET lock_time = NOW(), lock_user = ...
WHERE ticket_id = ... AND lock_time IS NULL
Values in place of ... are supplied by your application. The lock_time IS NULL criterion is there to make sure that if the ticket has already been selected by another user, the later user does not override the lock. After the update statement, check the number of rows affected: if it is 1, the current user acquired the lock; if it is 0, someone else has locked the ticket.
If you keep the locking data in another table, then place a unique constraint on the ticket id field in that table and use INSERT to acquire a lock. If the insert succeeds, the lock is acquired; if it fails, another user has locked the ticket.
The lock is usually held for a number of minutes; after that, your application must release it (set the locking fields to NULL, or delete the locking record from the other table).
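For the separate-table variant, a minimal sketch (table and column names are assumptions):

CREATE TABLE ticket_locks (
  ticket_id INT NOT NULL,
  lock_user INT NOT NULL,
  lock_time DATETIME NOT NULL,
  UNIQUE KEY uq_ticket (ticket_id)
);

-- Acquire: succeeds for exactly one user; everyone else gets a
-- duplicate-key error on the same ticket_id.
INSERT INTO ticket_locks (ticket_id, lock_user, lock_time) VALUES (101, 42, NOW());

-- Release (or have a cleanup job delete rows older than N minutes):
DELETE FROM ticket_locks WHERE ticket_id = 101 AND lock_user = 42;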

Locking MySQL Transaction

We have a table (say, child) that has a relation to another table (say, parent). In a perfect world, it will always have a parent row, and sometimes a child row. It should never have more than one child row, but it may in the future (so a Unique index is not suitable long-term).
Right now, we use transactions and lock the rows. However, because MySQL bases each transaction's snapshot on the point in time at which the transaction starts, each transaction (if one starts before the other commits) can see no existing child row and create its own. Then both inserts take effect and we end up with two rows. Each transaction locks only its own new row, which, until committed, is hidden from the other thread. Basically, the transactions have a chicken-and-egg type of problem.
How can we enforce a policy of at most one row? We can add a unique index, and the second transaction will fail when its insert collides with the first. But then we remove the ability to add multiple rows in the future (when one parent would have two children), which is problematic.
This has to be solved somehow. I just don't know how personally.
Edit 1: Updated Schema information (I'm using a job schema to represent the problem)
Table: job (the "Parent")
job_id
job_title
job_payment
Table: job_assignment (the "Child")
job_id
user_id (assigned worker)
est_hours
opt_insurance
Our application is a SaaS-based product that helps manage workflows. We check whether everything necessary is okay beforehand (like whether the job is still in the right status, whether the person trying to accept the job was given access, and so on). Then, if that is true, we assign the worker (insert or update the row in the job_assignment table).
Our problem is that our system takes 2 to 3 seconds for the rest of the assignment to happen (place payment holds, insert the actual row, email the worker that they are assigned, move the status to assigned, and so on). During this time, another user also tries to accept the job, his thread validates everything up front (where we check if it's still available), and it is. We then start the process for him too, since each thread is its own transaction and the changes haven't been committed.
Then we get two assignment rows. For us, that's bad right now since we only pay one worker.
We would use application locking with temp files or something, but we're on a load-balanced (HA) environment and cannot guarantee that both users hit the same server.
This seems really rudimentary, but I can't figure out how to solve it. Other than a unique index, the only other way I see is to invest heavily in hardware for the DB and get that window as small as we can.
Does this clarify anything?
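To make the race concrete, the interleaving described above looks roughly like this (job id 42 and user ids 7 and 8 are placeholders):

-- t1, session A: START TRANSACTION;
-- t2, session A: SELECT * FROM job_assignment WHERE job_id = 42; -- no rows
-- t3, session B: START TRANSACTION;
-- t4, session B: SELECT * FROM job_assignment WHERE job_id = 42; -- no rows either
-- t5, session A: INSERT INTO job_assignment (job_id, user_id) VALUES (42, 7);
-- t6, session B: INSERT INTO job_assignment (job_id, user_id) VALUES (42, 8);
-- t7: both commit; the job now has two assignment rows.

Neither SELECT sees the other's uncommitted insert, so both validations pass.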

Applying MySQL transactions to my shopping cart checkout process

I have this online shop that I built a while ago, and I've been using LOCK TABLES on the product stock tables to make sure stock doesn't go below 0 or get updated improperly.
Lately there's been a lot of traffic and lots of operations (some other LOCK TABLES too) on the database, so I have been asking myself if there is a faster and safer way to keep the data consistent.
I'm trying to understand transactions and how they work and I'm wondering if the following logic would successfully apply to my checkout process:
1) START TRANSACTION (so that any following query can be rolled back if one of them fails or some other condition arises, like stock being lower than needed)
2) insert customer in the 'customers' table
3) insert order info in the 'orders' table
4) similar queries to different tables
5) for each product in the cart:
5.1) UPDATE products SET stock = stock - x WHERE id = ? AND stock - x >= 0 (x is however many units the customer wants to buy, ? is the product's id)
5.2) check affected rows and if affected rows == 0 then ROLLBACK and exit (not enough stock for the current product so cancel the order/throw an error)
5.3) some other queries... etc..
6) COMMIT
Does that sound correct?
What I don't get (and don't know if I should be concerned about in the first place) is what happens to the products table and the stock figures if some concurrent session (another customer) tries to place the same order, or an order containing some of the same products.
Would the second transaction wait for the first one to finish and then use the latest stock figures, or would they both run concurrently and perhaps both fail in some cases?
Your workflow is correct, even though I would take step 2 (save customer's details) out of the transaction: you probably want to remember your customer, even if the order couldn't be placed.
When a row is updated within a transaction, the row (or, in some unfortunate cases, the whole table) becomes locked in exclusive mode until the end of the transaction. It means that a simultaneous attempt to place an order on the same product would be put on hold when it tries to update the stock.
When the first transaction is committed (or rolled back), the lock is released, and the concurrent update proceeds against the new value.
Recommended reading: this manual chapter, in full, and this page in particular. Yes, it's a lot (but don't worry if you don't understand everything at the beginning -- the rabbit hole is very, very deep)
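A condensed sketch of the workflow from the question, with step 5 spelled out for one cart line (product id 5 and quantity 2 are placeholders; real code would branch on the driver's affected-row count):

START TRANSACTION;
-- steps 2-4: INSERTs into customers, orders, etc. go here
-- step 5.1/5.2, repeated for every product in the cart:
UPDATE products SET stock = stock - 2 WHERE id = 5 AND stock - 2 >= 0;
-- if the affected-row count is 0: ROLLBACK and report "not enough stock"
COMMIT;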

How to properly avoid MySQL Race Conditions

I know this has been asked before, but I'm still confused and would like to avoid any problems before I go into programming if possible.
I plan on having an internal website with at least 100 users active at any given time. Users would post an item (inserted into the db with 0 as its value), and that item would be shown via a PHP site (db query). Users then get the option to press a button and lock that item as theirs (the item's value is set to their id).
How do I ensure that 2 or more users don't claim the same item at the same time? I know that in a language like C++ I would just use a plain old mutex lock. Is there an equivalent in MySQL that will lock just one item row like that? I've seen references to LOCK TABLES and GET_LOCK and many others, so I'm still very confused about what would be best.
There is potential for many people all racing to press that one button and it would be disastrous if multiple people get a confirmation.
I know this is a prime example of a race condition, but mysql is foreign territory for me.
I obviously will query the value of the item before I update it and make sure it hasn't been claimed, but what is the best way to ensure that this race condition is avoided?
Thanks in advance.
To achieve this, you will need to lock the record somehow.
Add a column LockedBy defaulting to 0.
When someone pushes the button, execute a query resembling this:
UPDATE table SET LockedBy = ? WHERE LockedBy = 0 AND id = ?;
After the update, verify the affected rows (in PHP, mysql_affected_rows). If the value is 0, the query did not update anything because the LockedBy column is no longer 0 and the row is thus locked by someone else.
Hope this helps
When you post a row, set the column to NULL, not 0.
Then when a user updates the row to make it their own, update it as follows:
UPDATE MyTable SET ownership = COALESCE(ownership, $my_user_id) WHERE id = ...
COALESCE() returns its first non-null argument. So even if you and I are updating concurrently, the first one to commit gets to set the value. The second one will not override that value.
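One caveat worth noting: the losing UPDATE still matches the row, and whether the driver reports it as "affected" depends on client settings (rows changed vs. rows matched), so the unambiguous check is to read the value back afterwards (user id 42 is a placeholder):

UPDATE MyTable SET ownership = COALESCE(ownership, 42) WHERE id = 1;
SELECT ownership FROM MyTable WHERE id = 1; -- 42 only if we won the race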
You may consider transactions:
START TRANSACTION;
SELECT ownership FROM ...;
UPDATE ...; -- set the ownership if the row is not owned yet
COMMIT;
You can also ROLLBACK all the queries in the transaction if you catch an error!
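One caution: as written, the plain SELECT takes no lock under InnoDB's default isolation, so two sessions could both read ownership as NULL and both proceed. A locking read closes that gap; a minimal sketch, reusing the MyTable/ownership names from the previous answer (row id 1 and user id 42 are placeholders):

START TRANSACTION;
-- FOR UPDATE takes a row lock, so a concurrent session waits here.
SELECT ownership FROM MyTable WHERE id = 1 FOR UPDATE;
-- If ownership came back NULL, claim the row:
UPDATE MyTable SET ownership = 42 WHERE id = 1;
COMMIT;
-- On any error in between, issue ROLLBACK instead of COMMIT.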