SQL row locks and transactions - MySQL

I’m really new to relational databases. I’m working on a project that involves finances, so I want any actions that affect a balance not to occur at the same time, and I want to achieve that using locks; however, I’m not sure how to use them. Here is the vision I have now:
I want to have a separate table for each action and a balance field in the users table, whose value would be derived from all the relevant tables. That being said, I’m never actually going to update existing records - only add them. I want to make sure only one record per user is being inserted at a time in these tables. For instance: 3 transactions occur at the same time, and so 3 records are about to be added to the relevant tables. Two of the records have the same userid, a foreign key to my users table, and the other one has a different one. I want the records with the same foreign key to be pipelined, and the other one can be done whenever. How do I achieve this? Are there any better ways to approach this?

I want any actions that affect balance not to occur at the same time
Why?
I want to achieve that using locks
Why?
To give you a counter example. Let's say you want to avoid having negative account balances. When a user withdraws $500, how can you model that without locks?
UPDATE accounts
SET balance = balance - 500
WHERE accountholderid = 42
AND balance >= 500;
This works without any explicit locks and is safe for concurrent access. You will have to check the update count; if it is 0, the withdrawal would have overdrawn the account.
(I'm aware MySQL will still acquire a row lock)
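For illustration, here is the full pattern with the count check; MySQL exposes the affected-row count via ROW_COUNT() (most client libraries also return it directly):
UPDATE accounts
SET balance = balance - 500
WHERE accountholderid = 42
AND balance >= 500;
-- ROW_COUNT() reports how many rows the last statement changed:
-- 1 means the withdrawal went through; 0 means it would have overdrawn the account.
SELECT ROW_COUNT();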
It still makes sense to have a ledger, but even there the need for locks is not obvious to me.

Use ENGINE=InnoDB for all your tables.
Use transactions:
BEGIN;
do all the work for a single action
COMMIT;
The classic example of a single action is to remove money from one account and add it to another account. The removing would include a check for overdraft, in which case you would have code to ROLLBACK instead of COMMIT.
The locks you get ensure that everything for the single action is either completely done, or nothing at all is done. This even applies if the system crashes between the BEGIN and COMMIT.
Without BEGIN and COMMIT, but with autocommit=ON, each statement is implicitly surrounded by its own begin and commit. That is, the UPDATE example in the previous answer is 'atomic'. However, if the money deducted from one account needs to be added to another account, what will happen if a crash occurs just after the UPDATE? The money vanishes. So, you really need
BEGIN;
if not enough funds, ROLLBACK and exit
UPDATE to take money from one account
UPDATE to add that money to another account
INSERT into some log or audit trail to track all transactions
COMMIT;
Check after each step -- ROLLBACK and take evasive action on any unexpected error.
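As a concrete sketch of that outline (table and column names are invented for illustration; the checks after each step live in application code):
BEGIN;
-- Take the money, but only if the funds are there.
UPDATE accounts SET balance = balance - 100 WHERE id = 1 AND balance >= 100;
-- If this affected 0 rows: not enough funds, so ROLLBACK and exit.
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- Audit trail; assumes a hypothetical transfer_log table exists.
INSERT INTO transfer_log (from_id, to_id, amount, logged_at)
VALUES (1, 2, 100, NOW());
COMMIT;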
What happens if 2 (or more) actions happen at the "same time"?
Either one waits for the other, or there is a deadlock and a ROLLBACK is forced on one of them.
But, in no case, will the data be messed up.
A further note... In some cases you need FOR UPDATE:
BEGIN;
SELECT some stuff from a row FOR UPDATE;
test the stuff, such as account balance
UPDATE that same row;
COMMIT;
The FOR UPDATE says to other threads "Keep your hands off this row, I'm likely to change it; please wait until I am finished." Without FOR UPDATE, another thread could sneak in and drain the account of the money you thought was there.
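A concrete version of that pattern, again with illustrative names:
BEGIN;
-- Lock the row; other transactions wanting it must wait until our COMMIT.
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
-- (application code verifies the balance covers the withdrawal)
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;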
Comments on some of your thoughts:
One table is usually sufficient for many users and their account. It would contain the "current" balance for each account. I mentioned a "log"; that would be a separate table; it would contain a "history" (as opposed to just the "current" info).
FOREIGN KEYs are mostly irrelevant in this discussion. They serve 2 purposes: Verify that another table has a row that should be there; and implicitly create an INDEX to make that check faster.
Pipelining? If you are not doing more than a hundred 'transactions' per second, the BEGIN..COMMIT logic is all you need to worry about.
"Same time" and "simultaneously" are misused terms. It is very unlikely that two users will hit the database at the "same time" -- consider browser delays, network delays, OS delays, etc. Plus the fact that most of those steps force activity to go single-file. The network forces one message to get there before another. Meanwhile, if one of your 'transactions' takes 0.01 second, who cares if the "simultaneous" request has to wait for it to finish. The point is that what I described will force the "wait" if needed to avoid messing up the data.
All that said, there still can be some "at the same time" -- If transactions don't touch the same rows, then the few milliseconds it takes from BEGIN to COMMIT could overlap. Consider this timeline of two transactions that came in almost simultaneously:
BEGIN; -- A
pull money from Alice -- A
BEGIN; -- B
pull money from Bobby -- B
give Alice's money to Alan -- A
give Bobby's money to Betty -- B
COMMIT; -- A
COMMIT; -- B

Related

Concurrent writes to MySQL and testing solutions

I was practicing some "system design" coding questions and I was interested in how to solve a concurrency problem in MySQL. The problem was "design an inventory checkout system".
Let's say you are trying to check out a specific item from an inventory, a library book for instance.
If two people are on the website, looking to book it, is it possible that they both check it out? Let's assume the query is updating the status of the row to mark a boolean checked_out to True.
Would transactions solve this issue? Would they cause the second query that runs to fail (assuming they are the same query)?
Alternatively, we insert rows into a checkouts table. Since both queries read that the item is not checked out currently, they could both insert into the table. I don't think a transaction would solve this, unless the transaction includes reading the table to see if a checkout currently exists for this item that hasn't yet ended.
How would I simulate two writes at the exact same time to test this?
No, transactions alone do not address concurrency issues. Let's quickly revisit MySQL's definition of transactions:
Transactions are atomic units of work that can be committed or rolled back. When a transaction makes multiple changes to the database, either all the changes succeed when the transaction is committed, or all the changes are undone when the transaction is rolled back.
To sum it up: transactions are a way to ensure data integrity.
RDBMSs use various types of locking, isolation levels, and storage-engine-level solutions to address concurrency. People often mistake transactions for a means of controlling concurrency, because transactions affect how long certain locks are held.
Focusing on InnoDB: when you issue an UPDATE statement, MySQL places an exclusive lock on the record being updated. Only the transaction holding the exclusive lock can modify the given record; the others have to wait until that transaction is committed.
How does this help you prevent multiple users from checking out the same book? Let's say you have an id field uniquely identifying the books and a checked_out field indicating the status of the book.
You can use the following atomic update to check out a book:
update books set checked_out=1 where id=xxx and checked_out=0;
The checked_out=0 criteria makes sure that the update only succeeds if the book is not checked out yet. So, if the above statement affects a row, then the current user checks out the book. If it does not affect any rows, then someone else has already checked out the book. The exclusive lock makes sure that only one transaction can update the record at any given time, thus serializing the access to that record.
If you want to use a separate checkouts table for reserving books, then you can use a unique index on book ids to prevent the same book being checked out more than once.
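A sketch of that idea, assuming a hypothetical checkouts table and that a row is deleted when a book is returned:
CREATE TABLE checkouts (
    book_id INT NOT NULL,
    user_id INT NOT NULL,
    checked_out_at DATETIME NOT NULL,
    PRIMARY KEY (book_id)
) ENGINE=InnoDB;

-- Of two concurrent attempts for the same book_id, the second INSERT
-- fails with a duplicate-key error and can be reported as "already checked out".
INSERT INTO checkouts (book_id, user_id, checked_out_at) VALUES (123, 42, NOW());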
Transactions don't cause updates to fail. They cause sequences of queries to be serialized. Only one accessor can run the sequence of queries; others wait.
Everything in SQL is a transaction, single-statement update operations included. The kind of transaction denoted by START TRANSACTION; ... COMMIT; bundles a series of queries together.
I don't think a transaction would solve this, unless the transaction includes reading the table to see if a checkout currently exists for this item.
That's generally correct. Checkout schemes must always read availability from the database. The purpose of the transaction is to avoid race conditions when multiple users attempt to check out the same item.
SQL doesn't have thread-safe atomic test-and-set instructions like multithreaded processor cores have. So you need to use transactions for this kind of thing.
The simplest form of checkout uses a transaction, something like this.
START TRANSACTION;
SELECT is_item_available, id FROM item WHERE catalog_number = whatever FOR UPDATE;
/* if the item is not available, tell the user and commit the transaction without updating */
UPDATE item SET is_item_available = 0 WHERE id = itemIdPreviouslySelected;
/* tell the user the checkout succeeded. */
COMMIT;
It's clearly possible for two or more users to attempt to check out the same item more-or-less simultaneously. But only one of them actually gets the item.
A more complex checkout scheme, not detailed here, uses a two-step system. First step: a transaction to reserve the item for a user, rejecting the reservation if someone else has it checked out or reserved. Second step: reservation holder has a fixed amount of time to accept the reservation and check out the item, or the reservation expires and some other user may reserve the item.

Producer/consumer pattern via MySQL

I have 2 processes that act as a producer/consumer via a table.
One process only INSERTs into the table, while the other process does a SELECT for new records and then an UPDATE of these records to mark them as finished once it has processed them.
This keeps happening constantly.
As far as I can see there is no need for any locking or transactions for this simple interaction. Am I right on this?
Am I overlooking something?
I would say the prime consideration to take into account is a scenario where multiple workers retrieve the same row.
The UPDATE and SELECT operations themselves should be fine, but if you have multiple workers consuming via SELECT on the same table, then you might get two workers simultaneously processing the same row.
If each worker is required to process separate rows, locking on SELECT may be required with careful consideration of deadlock if there's a significant unit of work associated with your process.
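If you are on MySQL 8.0 or later, FOR UPDATE ... SKIP LOCKED is one way to hand disjoint rows to each worker; a sketch with a hypothetical jobs table:
BEGIN;
-- Rows already locked by another worker are skipped rather than waited on,
-- so each worker claims a different batch.
SELECT id FROM jobs WHERE finished = 0 ORDER BY id LIMIT 100 FOR UPDATE SKIP LOCKED;
-- ...process the selected rows, UPDATE them as finished...
COMMIT;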

How to minimize the performance hit of deleting lots of rows from a highly active table

I have a table which has tens of thousands of new rows added an hour.
Based on certain events, I set a given row to a state of complete by setting its status field to 1 and updating its status_timestamp; then, when querying the table with SELECTs, I ignore all rows with a status of 1.
But this leads to a huge number of rows that I no longer need, all with a status of 1. I may also need these rows at a later point for logging purposes, but for the everyday purposes of my application such rows aren't needed.
I could delete the row instead of updating the field to 1 but I figure a delete is more costly than an update and many inserts are happening per second.
Ultimately I would like a way to move all the rows with status 1 into some kind of log table, without impacting on the current table which has many inserts and updates happening per second.
This is a difficult question to answer. Indeed, updates (on a non-indexed field) should be faster than deletes. In a simple environment, you would do the delete, along with a trigger that logged the information that you wanted.
I find it hard to believe that there is no downtime for the database. Can't you do the deletes at 2:00 a.m. once per week on Sunday, in some time zone?
Normally, if such a maintenance window isn't an option, it is because you have high-availability requirements. And in such a circumstance, you would have a replicated database. Most of the time, inserts, updates, and queries would go to both databases. During database maintenance periods, only one might be up while the other "does maintenance". Then that database catches up with the transactions, takes over the user load, and the other database "does maintenance". In your case, "does maintenance" means doing the delete and logging.
If you have a high availability requirement and you are not using replication of some sort, your system has bigger vulnerabilities than simply accumulating to-be-deleted data.
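As for the move itself, one common approach (a sketch; table names are hypothetical, and it assumes the log table has the same columns and that completed rows are no longer modified) is to archive and delete in bounded batches, so each pass holds its locks only briefly:
BEGIN;
INSERT INTO events_log
    SELECT * FROM events WHERE status = 1 AND id < 100000;
DELETE FROM events WHERE status = 1 AND id < 100000;
COMMIT;
-- Repeat with the next id range until no rows remain to move.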

How to atomically select rows in MySQL?

I have 5+ simultaneous processes selecting rows from the same MySQL table. Each process SELECTs 100 rows, PROCESSes them, and DELETEs the selected rows.
But I'm getting the same row selected and processed two or more times.
How can I keep that from happening, on the MySQL side or the Ruby on Rails side?
The app is built on Ruby On Rails...
Your table appears to be a workflow, which means you should have a field indicating the state of the row ("claimed", in your case). The other processes should be selecting for unclaimed rows, which will prevent the processes from stepping on each others' rows.
If you want to take it a step further, you can use process identifiers so that you know what is working on what, and maybe how long is too long to be working, and whether it's finished, etc.
And yeah, go back to your old questions and approve some answers. I saw at least one that you definitely missed.
Eric's answer is good, but I think I should elaborate a little...
Add some additional columns to your table, say:
lockhost VARCHAR(60),
lockpid INT,
locktime INT, -- Or your favourite timestamp.
Default them all to NULL.
Then you have the worker processes "claim" the rows by doing:
UPDATE tbl SET lockhost='myhostname', lockpid=12345, locktime=UNIX_TIMESTAMP()
WHERE lockhost IS NULL
ORDER BY id
LIMIT 100;
Then you process the claimed rows with SELECT ... WHERE lockhost='myhostname' AND lockpid=12345.
After you finish processing a row, you make whatever updates are necessary, and set lockhost, lockpid and locktime back to NULL (or delete it).
This stops the same row being processed by more than one process at once. You need the hostname, because you might have several hosts doing processing.
If a process crashes while it is processing a batch, you can check if the "locktime" column is very old (much older than processing can possibly take, say several hours). Then you can just reclaim some rows which have an old "locktime" even though their lockhost is not null.
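For example, the reclaim can be folded into the claim statement itself; the 6-hour threshold here is purely illustrative:
UPDATE tbl SET lockhost='myhostname', lockpid=12345, locktime=UNIX_TIMESTAMP()
WHERE lockhost IS NULL
   OR locktime < UNIX_TIMESTAMP() - 6*60*60  -- stale: claimed hours ago, never finished
ORDER BY id
LIMIT 100;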
This is a pretty common "queue pattern" in databases; it is not extremely efficient. If you have a very high rate of items entering / leaving the queue, consider using a proper queue server instead.
http://api.rubyonrails.org/classes/ActiveRecord/Transactions/ClassMethods.html
should do it for you

MySQL - Prevent double booking

I am trying to work out the best way to stop double 'booking' in my application.
I have a table of unique id's each can be sold only once.
My current idea is to use a transaction to check if the chosen products are available; if they are, I set a 'status' column to 'reserved' along with a 'time of update', and then if the user goes on to pay I update the status to 'sold'.
Every 10 minutes a cron job checks for rows with 'status' = 'reserved' that were updated more than 10 minutes ago and deletes them.
Is there a better way? I have never used transactions (I have just heard the word bandied around), so if someone could explain how I would do this, that would be ace.
Despite what others here have suggested, transactions are not the complete solution.
It sounds like you have a web application here, and selecting and purchasing a reservation takes a couple of pages (steps). That means you would have to hold a transaction open across a couple of pages, which is not possible.
Your approach (a status column) is correct; however, I would implement it differently. Instead of a status column, add two columns: reserved_by and reserved_ts.
When reserving a product, set reserved_by to the primary key of the user or the session, and reserved_ts to NOW().
When looking for unreserved products, look for ones where reserved_ts is NULL or more than 10 minutes old. (I would actually look for a couple of minutes older than whatever you tell your user, to avoid possible race conditions.)
A cron job to clear old reservations becomes unnecessary.
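A sketch of that reservation update (the products table and the ids are made up; reserved_by and reserved_ts are the columns described above):
-- Reserve product 101 for user 42, but only if it is unreserved
-- or the previous reservation has expired.
UPDATE products
SET reserved_by = 42, reserved_ts = NOW()
WHERE id = 101
AND (reserved_ts IS NULL OR reserved_ts < NOW() - INTERVAL 10 MINUTE);
-- 1 affected row: the reservation succeeded; 0: someone else holds it.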
What you're attempting to do with your "reserved" status is essentially to emulate transactional behavior. You're much better off letting an expert (mysql) handle it for you.
Have a general read about database transactions and then how to use them in MySQL. They aren't too complicated. Feel free to post questions about them here later, and I'll try to respond.
Edit: Now that I think about your requirements... perhaps only using database transactions isn't the best solution - having tons of transactions open and waiting for user action to commit the transactions is probably not a good design choice. Instead, continue what you were doing with "status"="reserved" design, but use transactions in the database to set the value of "status", to ensure that the row isn't "reserved" by two users at the same time.
You do not need to have any added state to do this.
In order to avoid dirty reads, you should set the database to an isolation level that will avoid them, namely REPEATABLE READ or SERIALIZABLE.
You can set the isolation level globally, or session specific. If all your sessions might need the isolation, you may as well set it globally.
Once the isolation level is set, you just need to use a transaction that starts before you SELECT, and optionally UPDATEs the status if the SELECT revealed that it wasn't reserved yet.
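A minimal sketch of that flow, with hypothetical names. Under SERIALIZABLE, InnoDB silently converts the plain SELECT into a locking read, so two sessions cannot both read the row as available and then both reserve it:
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;

START TRANSACTION;
SELECT status FROM products WHERE id = 101;
-- application code: continue only if the status showed the product as available
UPDATE products SET status = 'reserved' WHERE id = 101;
COMMIT;
-- If two sessions race, one deadlocks and is rolled back;
-- on retry it will see the row as already reserved.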