Spring Data JPA - Best Way to Update Concurrently Accessed "Total" Field - mysql

(Using Spring Boot 2.3.3 w/ MySQL 8.0.)
Let's say I have an Account entity that contains a total field, and one of those account entities represents some kind of master account. That is, the master account has its total field updated by almost every transaction, and it's important that any update to that total field is applied to the most recent value.
Which is the better choice within such a transaction:
Using a PESSIMISTIC_WRITE lock, fetch the master account, increment the total field, and commit the transaction. Or,
Have a dedicated query that essentially does something like UPDATE Account SET total = total + x as part of the transaction? I'm assuming I'd still need the same pessimistic lock in this case for the UPDATE query, e.g. via @Query and @Lock. (Both options are sketched below.)
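For reference, a rough sketch of what the two options might look like in a Spring Data repository (the entity, repository, and method names are placeholders, not anything from the actual codebase):
import java.math.BigDecimal;
import java.util.Optional;
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.*;
import org.springframework.data.repository.query.Param;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // Option 1: fetch the row under a pessimistic write lock (SELECT ... FOR UPDATE
    // on MySQL), then increment total in Java before the transaction commits.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select a from Account a where a.id = :id")
    Optional<Account> findByIdForUpdate(@Param("id") Long id);

    // Option 2: push the increment into a single atomic UPDATE; the database
    // serializes concurrent increments, so no explicit lock annotation is needed.
    @Modifying
    @Query("update Account a set a.total = a.total + :amount where a.id = :id")
    int addToTotal(@Param("id") Long id, @Param("amount") BigDecimal amount);
}
Both methods would be called from within a @Transactional service method.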
Also, is it an anti-pattern to retry a failed transaction a set number of times due to a lock-acquisition timeout (or other lock-based exception)? Or is it better to let it fail, report it to the client, and let the client try to call the transaction/service again?
Apologies for the basic question, but, it's been some time since I've had to worry about doing such a thing in Spring.
Thanks in advance!

After exercising my Google Fu a bit more and digging even deeper, it seems variations of this question have already been asked, at least insofar as the 'locking' portion goes.
That is, while the Spring Data JPA docs mention redeclaring repository methods and adding the @Lock annotation, it seems that this is meant strictly for queries that read data. This is what I'd originally thought, as it wouldn't make much sense to "lock" an UPDATE query unless there were some additional magic happening with the JPQL query.
As for retrying, it does seem to be the way to go, but of course with a number of retries that makes sense for the situation.
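For what it's worth, Spring Retry can keep the retry loop out of the business code; a minimal sketch, assuming the hypothetical repository from the sketch above and spring-retry with @EnableRetry configured (you'd also want to confirm the retry advice wraps the transaction so each attempt gets a fresh transaction):
import java.math.BigDecimal;
import org.springframework.dao.PessimisticLockingFailureException;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    private final AccountRepository accountRepository;

    public AccountService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    // Retry a few times on lock-acquisition failures with a short backoff;
    // if every attempt fails, the exception propagates to the caller.
    @Retryable(value = PessimisticLockingFailureException.class,
               maxAttempts = 3, backoff = @Backoff(delay = 100))
    @Transactional
    public void credit(Long masterAccountId, BigDecimal amount) {
        accountRepository.addToTotal(masterAccountId, amount);
    }
}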
Hopefully this helps someone else in the future who has a brain cramp like I did.

Related

Yii2 database transaction behaviour to support repeatable read

I have the following question. I have a web application (written in PHP, Yii2) where multiple POST requests are expected to hit the application server within a very short time. The business logic should be very strict, meaning that only the very first request's data should be inserted into the MySQL table; all the rest should be ignored.
The clients send both the parent's and the latest child record's id in the POST request.
I use Yii's DB transactions in this way:
$transaction = Yii::$app->db->beginTransaction();
$parent = ObjectParent::findOne(Yii::$app->request->post('parent_id'));
$latest_child = ObjectChild::findOne(Yii::$app->request->post('latest_child_id'));
if ($parent->latest_child_id == $latest_child->id) {
    try {
        $new_child = $latest_child->createNewChild();
        $parent->setLatestChild($new_child->id);
        $transaction->commit();
    } catch (\Exception $e) {
        $transaction->rollBack();
    }
}
If the requests arrived sequentially, the second request would be ignored, because the latest child record's id would not match the one coming from the client. But my problem is that multiple rows get inserted into the database. The database's isolation level is REPEATABLE READ, which (to my knowledge) should ensure that the rows read within the transaction are guaranteed not to change until the commit happens. If that were true, it wouldn't be a problem, because it would make the second transaction "break".
The problem might be that Yii doesn't use, or isn't aware of, these DB locks, so it doesn't know that the record is already part of a transaction and validates against the current state of the object. The DB of course doesn't know anything about the validation rules, so everything is fine from its point of view as well.
My ideas to solve this:
Set the Yii transaction explicitly to REPEATABLE READ as well. This might change its behaviour, though I doubt it, because according to the documentation, without defining it explicitly, it uses the DB default (REPEATABLE READ).
Put the validation logic a little later, closer to the commit and after $parent->setLatestChildId($new_child->id);
I don't know if that is a 100% solution, so I don't want to start rewriting the tested code. Note that the skeleton code above is only a simplified version of the original.
Solve the whole thing with database triggers, so it bypasses the application context.
Please let me know what the best practice is in these scenarios. Unfortunately I am not that experienced with these concurrency issues, and it's quite hard to test and simulate concurrent requests.
thanks
REPEATABLE READ only assures that if you read rows in a transaction, re-reading those rows within the same transaction gets the same result. It does not stop another transaction from modifying those rows.
To put some locking on them, the following is possible:
SELECT ... [LOCK IN SHARE MODE|FOR UPDATE]
However, for your case of ensuring that a parent/child insert happens only once, I recommend making (parent_id, child_id) a unique or primary key in your table; that way a duplicate insert will generate a duplicate-key error.
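To make the locking-read option concrete, here is a rough sketch of the SELECT ... FOR UPDATE pattern (shown in plain JDBC rather than Yii; the table and column names are guesses based on the question):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ReserveChild {

    // A concurrent transaction blocks on the locked parent row until this one
    // commits, so its check of latest_child_id sees the updated value and bails out.
    static boolean tryCreateChild(Connection conn, long parentId, long latestChildId)
            throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement lock = conn.prepareStatement(
                "SELECT latest_child_id FROM object_parent WHERE id = ? FOR UPDATE")) {
            lock.setLong(1, parentId);
            try (ResultSet rs = lock.executeQuery()) {
                if (rs.next() && rs.getLong(1) == latestChildId) {
                    // ... INSERT the new child and UPDATE object_parent.latest_child_id ...
                    conn.commit();
                    return true;
                }
            }
        }
        conn.rollback();   // someone else got there first
        return false;
    }
}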

Implementing a quota system to limit requests in a web based app

I want to limit my users to 25k requests per hour/day/whatever.
My first idea was to simply use MySQL and have a column in the users table where I would store a request counter, incrementing it each time the user makes a request. The problem with this approach is that sometimes you end up writing to that column at the same time and you get a deadlock from MySQL, so this isn't really a good way to go about it, is it?
Another way would be, instead of incrementing a counter column, to insert log records in a separate table and then count those records for a given timespan, but this way you can easily end up with a table of millions of records, and the query can be too slow.
When using an RDBMS, another aspect to take into consideration is that on each request you'd have to read the user's quota from the database, and this can take time with either of the above-mentioned methods.
My second idea was to use something like Redis/memcached (not sure of alternatives or which of them is faster) and store the request counters there. This would be fast to query and increment, certainly faster than an RDBMS, but I haven't tried it with huge amounts of data, so I am not sure how it will perform just yet.
My third idea is to keep the quota data in memory in a map, something like map[int]int where the key is the user_id and the value is the quota usage, and protect the map access with a mutex. This would be the fastest solution of all, but if your app crashes for some reason, you lose all the data about how many requests each user made. One way around that would be to catch the crash, loop through the map, and update the database. Is this feasible?
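(For illustration, a bare-bones sketch of this third idea, written here in Java with a ConcurrentHashMap standing in for the mutex-protected map; all names are invented:)
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class QuotaCounter {

    private final ConcurrentHashMap<Integer, AtomicLong> counts = new ConcurrentHashMap<>();
    private final long limit;

    public QuotaCounter(long limit) {
        this.limit = limit;
    }

    // Returns true if this request still fits inside the user's quota.
    public boolean tryConsume(int userId) {
        long used = counts.computeIfAbsent(userId, id -> new AtomicLong()).incrementAndGet();
        return used <= limit;
    }

    // Snapshot for periodic persistence (flush to the DB on a timer or at shutdown).
    public Map<Integer, Long> snapshot() {
        Map<Integer, Long> out = new HashMap<>();
        counts.forEach((user, count) -> out.put(user, count.get()));
        return out;
    }
}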
Not sure if either of the above is the right approach, but I am open to suggestions.
I'm not sure what you mean by "get a deadlock from MySQL" when you try to update a row at the same time. But a simple UPDATE rate_limit SET count = count + 1 WHERE user_id = ? should do what you want.
Personally I have had great success with Redis for doing rate limiting. There are lots of resources out there to help you understand the appropriate approach for your use case. Here is one I just glanced at that seems to handle things correctly: https://www.binpress.com/tutorial/introduction-to-rate-limiting-with-redis/155. Using pipelines (MULTI) or Lua scripts may make things even nicer.
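One common shape of that approach is a fixed-window counter built on INCR and EXPIRE; a rough sketch assuming the Jedis client (key naming and window size are arbitrary):
import redis.clients.jedis.Jedis;

public class RedisRateLimiter {

    private static final long LIMIT_PER_HOUR = 25_000;

    // One counter per user per hour; Redis expires old windows automatically.
    public boolean allow(Jedis jedis, long userId) {
        long hourBucket = System.currentTimeMillis() / 3_600_000L;
        String key = "quota:" + userId + ":" + hourBucket;
        long count = jedis.incr(key);          // atomic increment
        if (count == 1) {
            jedis.expire(key, 3600);           // set TTL on the first hit of the window
        }
        return count <= LIMIT_PER_HOUR;
    }
}
The INCR/EXPIRE pair isn't atomic by itself, which is exactly where the pipelines (MULTI) or Lua scripts mentioned above come in.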
You can persist your map[int]int to an RDBMS or just to the file system from time to time and in a defer function. You can even use it as a cache instead of Redis. It will surely be faster than connecting to a third-party service on every request. You could also store counters on the user's side, simply in cookies. A smart user can clear cookies, of course, but is that really so dangerous? You can also put some identification info in the cookies to make clearing them inconvenient.

What is the best way (in Rails/AR) to ensure writes to a database table are performed synchronously, one after another, one at a time?

I have noticed that using something like delayed_job without a UNIQUE constraint on a table column can still create double entries in the DB. I had assumed delayed_job would run jobs one after another. The Rails app runs on Apache with Phusion Passenger. I am not sure if that is the reason why this happens, but I would like to make sure that every item in the queue is persisted to AR/DB one after another, in sequence, and that no more than one write to this DB table ever happens at the same time. Is this possible? What would be some of the issues that I would have to deal with?
update
The race conditions arise because an AJAX API is used to send data to the application. The application receives batches of data, and each batch is identified as belonging together by a Session ID (SID). In the end, the final state of the database has to reflect the latest, most up-to-date AJAX PUT request to the API. Sometimes requests arrive at the exact same time for the same SID -- so I need a way to make sure they aren't all persisted at the same time, but one after the other, or simply the last one sent by AJAX request to the API.
I hope that makes my particular use-case easier to understand...
You can lock a specific table (or tables) with the LOCK TABLES statement.
In general I would say that relying on this is poor design and will likely lead to scalability problems down the road, since you're creating a bottleneck in your application flow.
With your further explanations, I'd be tempted to add some extra columns to the table used by delayed_job, with a unique index on them. If (for example) you only ever wanted 1 job per user you'd add a user_id column and then do
something.delay(:user_id => user_id).some_method
You might need more attributes if the pattern is more sophisticated, e.g. there are lots of different types of jobs and you only wanted one per person, per type, but the principle is the same. You'd also want to be sure to rescue ActiveRecord::RecordNotUnique and deal with it gracefully.
For non-delayed_job stuff, optimistic locking is often a good compromise: it handles the concurrent cases well without slowing down the non-concurrent cases.
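For reference, optimistic locking boils down to a version column that every UPDATE checks; a minimal sketch of the idea using JPA's @Version (Rails' lock_version column behaves the same way, and the entity name here is only illustrative):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class OrderHeader {

    @Id
    @GeneratedValue
    private Long id;

    private String status;

    // Incremented automatically on every update; if two sessions load version 5
    // and both try to save, the second UPDATE matches zero rows and the provider
    // throws an optimistic-lock exception instead of silently overwriting.
    @Version
    private long version;

    // getters/setters omitted
}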
If you are worried about multiple processes writing to the 'same' rows - as in several users updating the same order_header row - I'd suggest setting some marker bound to the current_user.id on the row once /order_headers/:id/edit is called, and removing it again once the current_user releases the row, either by updating or by canceling the edit.
Your use case (from your description) seems a bit different to me, so I'd suggest you leave it to the DB: with a fairly recent MySQL (as in post-5.1), you'd add a trigger/function which does the actual update, and there you could implement logic similar to the above - some marker bound to a sequenced job id of sorts.

Alternatives to LINQ To SQL on high loaded pages

To begin with, I LOVE LINQ TO SQL. It's so much easier to use than direct querying.
But, there's one great problem: it doesn't work well under heavy load. I have some actions in my ASP.NET MVC project that are called hundreds of times every minute.
I used to have LINQ to SQL there, but since the number of requests is gigantic, LINQ to SQL almost always returned "Row not found or changed" or "X of X updates failed". And it's understandable. For instance, I have to increase some value by one with every request.
var stat = DB.Stats.First();
stat.Visits++;
// ....
DB.SubmitChanges();
But while ASP.NET was working on those // ... instructions, the stat.Visits value stored in the table had already changed.
I found a solution: I created a stored procedure
UPDATE Stats SET Visits=Visits+1
It works well.
Unfortunately I'm now running into more and more cases like that, and it sucks to create stored procedures for all of them.
So my question is: how do I solve this problem? Are there any alternatives that would work here?
I hear that Stack Overflow runs on LINQ to SQL, and it's more heavily loaded than my site.
This isn't exactly a problem with Linq to SQL, per se, it's an expected result with optimistic concurrency, which Linq to SQL uses by default.
Optimistic concurrency means that when you update a record, you check the current version in the database against the copy that was originally retrieved before making any offline updates; if they don't match, report a concurrency violation ("row not found or changed").
There's a more detailed explanation of this here. There's also a fairly sizable guide on handling concurrency errors. Typically the solution involves simply catching ChangeConflictException and picking a resolution, such as:
try
{
    // Make changes
    db.SubmitChanges();
}
catch (ChangeConflictException)
{
    foreach (var conflict in db.ChangeConflicts)
    {
        conflict.Resolve(RefreshMode.KeepCurrentValues);
    }
}
The above version will overwrite whatever is in the database with the current values, regardless of what other changes were made. For other possibilities, see the RefreshMode enumeration.
Your other option is to disable optimistic concurrency entirely for fields that you expect might be updated. You do this by setting the UpdateCheck option to UpdateCheck.Never. This has to be done at the field level; you can't do it at the entity level or globally at the context level.
Maybe I should also mention that you haven't picked a very good design for the specific problem you're trying to solve. Incrementing a "counter" by repeatedly updating a single column of a single row is not a very good/appropriate use of a relational database. What you should be doing is actually maintaining a history table - such as Visits - and if you really need to denormalize the count, implement that with a trigger in the database itself. Trying to implement a site counter at the application level without any data to back it up is just asking for trouble.
Use your application to put actual data in your database, and let the database handle aggregates - that's one of the things databases are good at.
Use a producer/consumer or message queue model for updates that don't absolutely have to happen immediately, particularly status updates. Instead of trying to update the database immediately keep a queue of updates that the asp.net threads can push to and then have a writer process/thread that writes the queue to the database. Since only one thread is writing, there will be much less contention on the relevant tables/roles.
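A bare-bones sketch of that queue-plus-single-writer idea (in Java; the Update payload and the persistence step are placeholders):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class UpdateWriter {

    record Update(long statId, int delta) {}           // placeholder payload

    private final BlockingQueue<Update> queue = new LinkedBlockingQueue<>();

    // Called from the request threads: cheap, non-blocking, no DB contention.
    public void enqueue(Update u) {
        queue.offer(u);
    }

    // Single writer thread: drains the queue and writes batches to the database,
    // so only one connection ever touches the hot rows.
    public void runWriterLoop() throws InterruptedException {
        List<Update> batch = new ArrayList<>();
        while (true) {
            batch.add(queue.take());                    // block until work arrives
            queue.drainTo(batch, 500);                  // grab whatever else is queued
            persistBatch(batch);
            batch.clear();
        }
    }

    private void persistBatch(List<Update> batch) {
        // placeholder: aggregate the deltas and issue UPDATE ... SET x = x + ? statements
    }
}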
For reads, use caching. For high volume sites even caching data for a few seconds can make a difference.
Firstly, you could call DB.SubmitChanges() right after stats.Visits++, and that would greatly reduce the problem.
However, that still is not going to save you from the concurrency violation (that is, simultaneously modifying a piece of data by two concurrent processes). To fight that, you may use the standard mechanism of transactions. With LINQ-to-SQL, you use transactions by instantiating a TransactionScope class, thusly:
using (TransactionScope t = new TransactionScope())
{
    var stats = DB.Stats.First();
    stats.Visits++;
    DB.SubmitChanges();
}
Update: as Aaronaught correctly pointed out, TransactionScope is not going to help here, actually. Sorry. But read on.
Be careful, though, not to make the body of a transaction too long, as it will block other concurrent processes, and thus, significantly reduce your overall performance.
And that brings me to the next point: your very design is probably flawed.
The core principle in dealing with highly shared data is to design your application in such way that the operations on that data are quick, simple, and semantically clear, and they must be performed one after another, not simultaneously.
The one operation that you're describing - counting visits - is pretty clear and simple, so it should be no problem once you add the transaction. I must add, however, that while this will be clear, type-safe and otherwise "good", the solution with a stored procedure is actually the much preferred one. This is actually exactly the way database applications were designed in ye olden days. Think about it: why would you need to fetch the counter all the way from the database to your application (potentially over the network!) if there is no business logic involved in processing it? The database server may increment it just as well, without even sending anything back to the application.
Now, as for the other operations that are hidden behind // ..., it seems (from your description) that they're somewhat heavy and long. I can't tell for sure, because I don't see what's there, but if that's the case, you probably want to split them into smaller and quicker ones, or otherwise rethink your design. I really can't tell anything else with this little information.

mySQL - Prevent double booking

I am trying to work out the best way to stop double 'booking' in my application.
I have a table of unique ids; each can be sold only once.
My current idea is to use a transaction to check whether the chosen products are available; if they are, set a 'status' column to 'reserved' along with a 'time of update', and then, if the user goes on to pay, update the status to 'sold'.
Every 10 minutes a cron job checks for rows with 'status' = 'reserved' that were updated more than 10 minutes ago and deletes them.
Is there a better way? I have never used transactions (I have just heard the word bandied around), so if someone could explain how I would do this, that would be ace.
Despite what others here have suggested, transactions are not the complete solution.
It sounds like you have a web application here, and selecting and purchasing a reservation takes a couple of pages (steps). This means you would have to hold a transaction open across a couple of pages, which is not possible.
Your approach (a status column) is correct; however, I would implement it differently. Instead of a status column, add two columns: reserved_by and reserved_ts.
When reserving a product, set reserved_by to the primary key of the user or the session, and reserved_ts to now().
When looking for unreserved products, look for ones where reserved_ts is NULL or more than 10 minutes old. (I would actually look for a couple of minutes older than whatever you tell your user, to avoid possible race conditions.)
A cron job to clear old reservations becomes unnecessary.
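One nice property of the reserved_by/reserved_ts columns is that claiming a product can be a single atomic UPDATE, so the availability check and the reservation can't interleave; a small JDBC sketch (column names follow the answer above, everything else is assumed):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Reservations {

    // Atomically claim a product: the WHERE clause only matches if the row is
    // unreserved or the previous reservation is stale, so of two concurrent
    // callers exactly one sees an update count of 1.
    static boolean tryReserve(Connection conn, long productId, long userId)
            throws SQLException {
        String sql = "UPDATE products " +
                     "SET reserved_by = ?, reserved_ts = NOW() " +
                     "WHERE id = ? " +
                     "AND (reserved_ts IS NULL OR reserved_ts < NOW() - INTERVAL 10 MINUTE)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, userId);
            ps.setLong(2, productId);
            return ps.executeUpdate() == 1;
        }
    }
}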
What you're attempting to do with your "reserved" status is essentially to emulate transactional behavior. You're much better off letting an expert (MySQL) handle it for you.
Have a general read about database transactions and then how to use them in MySQL. They aren't too complicated. Feel free to post questions about them here later, and I'll try to respond.
Edit: Now that I think about your requirements... perhaps only using database transactions isn't the best solution - having tons of transactions open and waiting for user action to commit them is probably not a good design choice. Instead, continue with the "status"="reserved" design you were using, but use transactions in the database to set the value of "status", to ensure that the row isn't "reserved" by two users at the same time.
You do not need to have any added state to do this.
In order to avoid dirty reads, you should set the database to an isolation level that will avoid them, namely REPEATABLE READ or SERIALIZABLE.
You can set the isolation level globally, or session specific. If all your sessions might need the isolation, you may as well set it globally.
Once the isolation level is set, you just need to use a transaction that starts before you SELECT, and optionally UPDATEs the status if the SELECT revealed that it wasn't reserved yet.