Transaction not working as I expected in Laravel to prevent duplicate entries - mysql

Within a Laravel-based platform there are three relevant tables:
users
codes
users_codes
Users can claim a code, in which case the system should get the next available code from the codes table, mark it as allocated, and then create a new entry in users_codes tying the user to the code.
Nothing special would be needed while the site is under low load; however, anticipating high load, this was initially written as a transaction, which I had thought would prevent the same code being allocated twice.
DB::transaction(function () use ($user) {
    $code = Codes::getNextAvailableCode(); // Not the actual function, but works for the sake of example
    $code->allocated = 1;
    $code->save();

    $uc = new UserCode();
    $uc->user_id = $user->id;
    $uc->code_id = $code->id;
    $uc->save();
});
Now that the site is under high load, a couple of times the same code has been allocated to two different users. So it's clear that a transaction alone isn't doing what I want.
Thinking it through, I initially thought that replacing the transaction with locking would be an option, but the more I think about it, I don't think I can really lock a whole table.
Instead, I think I need to focus on checking, before creating and saving the new UserCode(), that there is no existing UserCode with the same $code->id?
Any suggestions? Is there a way that I've not considered that will allow this to work smoothly under high load (i.e. not continually throw errors back to the user when they try to claim a code that has been taken a millisecond before)?

You want a PESSIMISTIC_WRITE type of lock, which obtains an exclusive lock and prevents the data from being updated, deleted, or read under another lock by concurrent transactions.
Laravel's query builder supports Pessimistic Locking.
The query builder also includes a few functions to help you achieve "pessimistic locking" when executing your select statements. To execute a statement with a "shared lock", you may call the sharedLock method. A shared lock prevents the selected rows from being modified until your transaction is committed:
DB::table('users')
    ->where('votes', '>', 100)
    ->sharedLock()
    ->get();
I think you definitely need a lock here, not just a transaction.
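Applied to the code-claiming flow from the question, a minimal sketch might look like the following (it assumes the codes table is InnoDB and swaps the placeholder getNextAvailableCode() for an explicit locking query on the allocated column):

DB::transaction(function () use ($user) {
    // lockForUpdate() issues SELECT ... FOR UPDATE, so a second request asking
    // for an unallocated code blocks here until this transaction commits.
    $code = Codes::where('allocated', 0)
        ->orderBy('id')
        ->lockForUpdate()
        ->firstOrFail();

    $code->allocated = 1;
    $code->save();

    $uc = new UserCode();
    $uc->user_id = $user->id;
    $uc->code_id = $code->id;
    $uc->save();
});

Once the first transaction commits, the waiting query re-reads the row, sees it no longer matches allocated = 0, and moves on to the next free code, so two concurrent claims end up with different codes.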

Related

Laravel lockforupdate misunderstanding SELECTS

Simple question.
If I'm using DB::transaction() and I do the following:
DB::transaction(function () {
    $result = DB::table('orders')
        ->select('id')
        ->where('id', '>', 17)
        ->lockForUpdate()
        ->get();
});
What happens if I execute this script twice at exactly the same split second?
Laravel says:
Alternatively, you may use the lockForUpdate method. A "for update"
lock prevents the rows from being modified or from being selected with
another shared lock.
Does lockForUpdate prevent a read from happening at the same time, or does it only come into effect when doing a subsequent UPDATE to the row?
Can I guarantee that if a script is already reading from this row, a concurrent script hitting it in the same millisecond will WAIT for the transaction to release the lock before trying to run its code?
I haven't found a super clear answer anywhere, all examples are trying to update or insert. I just want to guard against a concurrent select.
This is an old thread, but as there are no answers, here is my input. I faced a similar situation, and after several rounds of trial and error I finally managed to have the records that are to be modified locked by only one exclusive transaction at a time.
Probably not worth mentioning, but is your table's storage engine set to InnoDB? After several failed attempts, I discovered that my table's storage engine was MyISAM, which supports neither transactions nor row-level locks.
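If you want to check that from the Laravel side, a quick sketch (the orders table name is taken from the question; converting the engine is normally something you would do once, in a migration):

// Row-level FOR UPDATE locks only exist on InnoDB; MyISAM supports neither
// transactions nor row locks, so lockForUpdate() buys you nothing there.
$status = DB::select("SHOW TABLE STATUS WHERE Name = 'orders'");

if (($status[0]->Engine ?? null) !== 'InnoDB') {
    DB::statement('ALTER TABLE orders ENGINE = InnoDB');
}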

MySQL Blocking/locking operation

I have read a lot about locking databases, tables, and rows, but I want a lower-level lock, one that covers only an "operation" (I don't know what to call it). Let's say I have this function in PHP:
function update_table()
{
    // queries like SELECT, UPDATE, INSERT etc.
}
Now, I want only one person to be able to use this function at a time, and others will need to wait until the first one finishes.
But is it possible to lock not the table/row but only this function, so that other queries (SELECTs etc.) can still be executed without a problem? Or is it better to just lock the table? (The problem is that this table is very important and I don't want to lock it, because other users may be visiting the site for other reasons/other pages, and if many people end up waiting in line for the code above, those other visitors would also have to wait even though they have no interest in the page that uses it.)
Yes. You can use the GET_LOCK and RELEASE_LOCK MySQL functions.
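For example, a minimal sketch with plain PDO (the connection details, lock name, and 10-second timeout are all assumptions):

// Only the caller that obtains the named lock runs update_table(); everyone else
// waits up to 10 seconds. The lock is independent of table and row locks, so
// unrelated SELECTs elsewhere on the site are not affected.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

if ($pdo->query("SELECT GET_LOCK('update_table_lock', 10)")->fetchColumn() == 1) {
    try {
        update_table(); // the SELECT / UPDATE / INSERT queries from the function above
    } finally {
        $pdo->query("SELECT RELEASE_LOCK('update_table_lock')");
    }
} else {
    // Timed out waiting for the lock; retry or report the failure.
}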

MySQL: How to lock tables and start a transaction?

TL;DR - MySQL doesn't let you lock a table and use a transaction at the same time. Is there any way around this?
I have a MySQL table I am using to cache some data from a (slow) external system. The data is used to display web pages (written in PHP.) Every once in a while, when the cached data is deemed too old, one of the web connections should trigger an update of the cached data.
There are three issues I have to deal with:
Other clients will try to read the cache data while I am updating it
Multiple clients may decide the cache data is too old and try to update it at the same time
The PHP instance doing the work may be terminated unexpectedly at any time, and the data should not be corrupted
I can solve the first and last issues by using a transaction, so clients will be able to read the old data until the transaction is committed, when they will immediately see the new data. Any problems will simply cause the transaction to be rolled back.
I can solve the second problem by locking the tables, so that only one process gets a chance to perform the update. By the time any other processes get the lock they will realise they have been beaten to the punch and don't need to update anything.
This means I need to both lock the table and start a transaction. According to the MySQL manual, this is not possible. Starting a transaction releases the locks, and locking a table commits any active transaction.
Is there a way around this, or is there another way entirely to achieve my goal?
This means I need to both lock the table and start a transaction
This is how you can do it:
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
For more info, see the MySQL docs.
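If you drive this from PHP, a rough sketch of the same sequence with PDO (the cache table name is an assumption):

$pdo->exec("SET autocommit = 0");
$pdo->exec("LOCK TABLES cache WRITE");
try {
    // ... rewrite the cached rows here ...
    $pdo->exec("COMMIT");
} catch (Exception $e) {
    $pdo->exec("ROLLBACK");
    throw $e;
} finally {
    // UNLOCK TABLES runs after the COMMIT/ROLLBACK, so nothing is implicitly committed.
    $pdo->exec("UNLOCK TABLES");
    $pdo->exec("SET autocommit = 1");
}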
If it were me, I'd use the advisory locking function within MySQL to implement a mutex for updating the cache, and a transaction for read isolation. e.g.
begin_transaction(); // although reading a single row doesn't really require this
$cached = runquery("SELECT * FROM cache WHERE key=$id");
end_transaction();
if (is_expired($cached)) {
    $cached = refresh_data($cached, $id);
}
...
function refresh_data($cached, $id)
{
    $lockname = some_deterministic_transform($id);
    if (1 == runquery("SELECT GET_LOCK('$lockname', 0)")) {
        $cached = fetch_source_data($id);
        begin_transaction();
        write_data($cached, $id);
        end_transaction();
        runquery("SELECT RELEASE_LOCK('$lockname')");
    }
    return $cached;
}
(BTW: bad things may happen if you try this with persistent connections)
I'd suggest solving the issue by removing the contention altogether.
Add a timestamp column to your cached data.
When you need to update the cached data:
Just add new cached data to your table using the current timestamp
Remove cached data older than, let's say, 24 hours.
When you need to serve the cached data:
Sort by timestamp (DESC) and return the newest cached data
At any given time your clients will retrieve records which are never deleted by any other process. Moreover, you don't care if a client gets cached data belonging to different writes (i.e. with different timestamps)
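A sketch of that append-only scheme with plain PDO (the cache table and its cache_key, payload, and created_at columns are assumptions):

// Refresh: writers only ever INSERT a new row and then prune stale ones;
// nothing a reader might currently be holding is updated in place.
$pdo->prepare("INSERT INTO cache (cache_key, payload, created_at) VALUES (?, ?, NOW())")
    ->execute([$key, $payload]);
$pdo->prepare("DELETE FROM cache WHERE created_at < NOW() - INTERVAL 24 HOUR")
    ->execute();

// Read: the newest row wins; no locking needed.
$stmt = $pdo->prepare(
    "SELECT payload FROM cache WHERE cache_key = ? ORDER BY created_at DESC LIMIT 1"
);
$stmt->execute([$key]);
$payload = $stmt->fetchColumn();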
The second problem may be solved without involving the database at all. Have a lock file for the cache update procedure so that other clients know that someone is already on it. This may not catch each and every corner case, but is it that big of a deal if two clients are updating the cache at the same time? After all, they are doing the update in transactions, so the cache will still be consistent.
You may even implement the lock yourself by having the last cache update time stored in a table. When a client wants to update the cache, make it lock that table, check the last update time, and then update the field.
I.e., implement your own locking mechanism to prevent multiple clients from updating the cache. Transactions will take care of the rest.
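A sketch of that home-made lock (it assumes a one-row cache_meta table holding the last update time, and a refresh_cache() helper that does the slow work):

$pdo->beginTransaction();

// Whoever wins this row lock checks the timestamp; everyone else blocks briefly,
// then sees the refreshed timestamp and skips the work.
$stmt = $pdo->query("SELECT updated_at FROM cache_meta WHERE id = 1 FOR UPDATE");
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if (strtotime($row['updated_at']) < time() - 3600) {   // stale for more than an hour
    refresh_cache($pdo);                                // assumed helper: slow fetch + write
    $pdo->exec("UPDATE cache_meta SET updated_at = NOW() WHERE id = 1");
}

$pdo->commit(); // releases the row lock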

How to ensure multiple requests to a database aren't mimicked

I am part of the coding team of a high request game.
We've experienced some problems lately whereby multiple requests can be sent in at the exact same time and end up triggering duplicate actions (which would not be possible if they ran entirely one after another).
The problematic routine reads a row in an InnoDB table and, if the row is present, continues on its way until all the other checks are okay, at which point it completes and deletes the row.
What appears to be happening is that the reads are hitting the row simultaneously (despite the row-level locking) and continuing on down the routine path, by which point the deletes make no difference. The result is that the routine is being duplicated by players smart enough to try their luck.
Does anyone have any suggestions for a way to approach fixing this?
Example routine.
// check database row exists (create the initial lock)
// proceed
// check quantity in the row
// if all is okay (few other checks needed here)
// delete the row
// release the lock either way (for the next request to go through)
MySQL has a couple of different lock modes:
http://dev.mysql.com/doc/refman/5.6/en/innodb-lock-modes.html
I think you'll want to enforce an exclusive lock when executing an update/delete. This way the subsequent requests will wait until the lock is released and the appropriate action has completed.
You may also want to examine the indexes being used for these concurrent queries. An appropriate indexing regime will minimize the number of rows that need to be locked during a given query.
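For example, a sketch of the routine above using an explicit exclusive row lock with plain PDO (the claims table and its columns are assumptions standing in for your real schema):

$pdo->beginTransaction();
try {
    // Concurrent requests for the same row block on this SELECT ... FOR UPDATE
    // until the first transaction finishes, so only one of them still finds the row.
    $stmt = $pdo->prepare("SELECT quantity FROM claims WHERE id = ? FOR UPDATE");
    $stmt->execute([$claimId]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($row && $row['quantity'] > 0 /* ...plus the other checks... */) {
        // ... perform the rewarded action here ...
        $pdo->prepare("DELETE FROM claims WHERE id = ?")->execute([$claimId]);
    }

    $pdo->commit(); // releases the lock either way
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}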

Updating account balances with mysql

I have a field on a User table that holds the account balance for the user. Users can perform a lot of actions with my service that will result in rapid changes to their balance.
I'm trying to use mysql's serializable isolation level to make sure that multiple user actions will not update the value incorrectly. (Action A and action B simultaneously want to deduct 1 dollar from the balance.) However, I'm getting a lot of deadlock errors.
How do I do this correctly without getting all these deadlocks, and still keeping the balance field up to date?
simple schema: user has an id and a balance.
I'm using Doctrine, so I'm doing something like the following:
$con->beginTransaction();
$tx = $con->transaction;
$tx->setIsolation('SERIALIZABLE');
$user = UserTable::getInstance()->find($userId);
$user->setBalance($user->getBalance() + $change);
$user->save();
$con->commit();
First, trying to use the serializable isolation level on your transaction is a good idea. It means you know, at a minimum, what a transaction is and that the isolation level is one of the biggest problems here.
Note that serializable is not really true serializability. More on that in this previous answer, when you have some time to read it :-).
But the most important part is that you should consider automatic rollbacks on your transactions due to failed serializability to be a normal fact of life, and that the right thing to do is to build your application so that transactions can fail and be replayed.
One simple solution, and for accounting things I like this simple solution as we can predict all the facts with no surprises, is to perform table locks. This is not a fine and elegant solution, no row-level locks, just simple big table locks (always taken in the same order). After that you can do your operation as a single player and then release the locks. No multi-user concurrency on the rows of the tables, no magical next-key lock failures (see the previous link). This will certainly slow down your write operations, but if everybody takes the table locks in the same order you'll only get lock timeout problems, no deadlocks and no 'unserializable auto-rollback'.
Edit
From your code sample, I'm not sure you can set the transaction isolation level after the begin. You should activate the query log on MySQL and see what is actually executed, then check which isolation level the transactions run by the CMS actually end up using.
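As a rough sketch of both points with plain PDO (table and column names are assumptions; the question's Doctrine setup would wrap this differently): set the isolation level before the transaction starts, and replay the transaction when MySQL rolls it back.

$attempts = 0;
do {
    try {
        // Applies to the *next* transaction only, so it has to run before the begin.
        $pdo->exec("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
        $pdo->beginTransaction();

        // A single relative UPDATE also sidesteps the read-modify-write race.
        $pdo->prepare("UPDATE user SET balance = balance + ? WHERE id = ?")
            ->execute([$change, $userId]);

        $pdo->commit();
        break;
    } catch (PDOException $e) {
        if ($pdo->inTransaction()) {
            $pdo->rollBack();
        }
        // 1213 = deadlock, 1205 = lock wait timeout: replay a few times, then give up.
        $code = $e->errorInfo[1] ?? null;
        if (!in_array($code, [1213, 1205]) || ++$attempts >= 3) {
            throw $e;
        }
    }
} while (true);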