I have a classic balance update concurrency situation:
using (var context = GenerateContext())
{
var account = context.Accounts.First(x => x.id == accountId);
var balanceBefore = account.balance;
// ... do some other stuff ...
account.balance = balanceBefore + depositAmount;
context.SubmitChanges();
}
I would like to lock the whole table for reads/writes during this update.
Can it be done with Linq2SQL?
Edit:
So I tried the following, which does lock the table during the transaction, but it doesn't perform the update; the balance never changes.
using (var context = GenerateContext())
{
context.Connection.Open();
context.Transaction = context.Connection.BeginTransaction(IsolationLevel.Serializable);
var account = context.ExecuteQuery<Account>("SELECT TOP 1 * FROM [Account] WHERE [Id] = 10 WITH (TABLOCKX)").First();
var balanceBefore = account.balance;
// ... do some other stuff ...
account.balance = balanceBefore + depositAmount;
context.SubmitChanges();
}
What am I doing wrong?
To start with, balance updates are best handled on the back end:
UPDATE Account
SET balance += @deposit
OUTPUT inserted.*
WHERE ID = @id;
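If the client is LINQ to SQL, that back-end statement can still be issued through the data context. A minimal sketch, assuming the Account entity and GenerateContext() from the question (ExecuteQuery maps the OUTPUT row back onto the entity class):

using (var context = GenerateContext())
{
    // The arithmetic happens atomically in SQL Server; no read-modify-write race.
    var updated = context.ExecuteQuery<Account>(
        @"UPDATE Account
          SET balance += {0}
          OUTPUT inserted.*
          WHERE ID = {1}", depositAmount, accountId).Single();
}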
But let's assume you can only code logic on the client. Even then, you should use optimistic concurrency. Its benefits under load are too big to dismiss: while pessimistic concurrency is easier to code (there is no concurrency-related update failure to handle), it performs poorly under load.
Linq2SQL supports optimistic concurrency; see Optimistic Concurrency: Overview. The key is the UpdateCheck attribute. Handling concurrency violations in the app is trickier than merely detecting them, but that topic depends entirely on the actual app logic.
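A minimal sketch of what that looks like, assuming the question's Account entity (the mapping and retry loop are illustrative, not the only way to resolve conflicts):

[Table(Name = "Account")]
public class Account
{
    [Column(IsPrimaryKey = true)] public int id;
    [Column(UpdateCheck = UpdateCheck.Always)] public decimal balance; // re-checked on UPDATE
}

// Retry on conflict: re-read the row and re-apply the deposit.
for (var attempt = 0; attempt < 3; attempt++)
{
    using (var context = GenerateContext())
    {
        var account = context.Accounts.First(x => x.id == accountId);
        account.balance += depositAmount;
        try
        {
            context.SubmitChanges();
            break; // no concurrent writer interfered
        }
        catch (ChangeConflictException)
        {
            // another writer changed the balance first; loop and retry on fresh data
        }
    }
}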
For the extremely rare case when Optimistic concurrency is not appropriate, the first solution to try is ... still optimistic concurrency, but augmented with application locks.
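One way to read "application locks": SQL Server's sp_getapplock, taken around the critical section. A minimal sketch against the question's context (the lock resource name is illustrative): it serializes writers for one account while leaving the table free for everyone else.

context.Connection.Open();
context.Transaction = context.Connection.BeginTransaction();
// Exclusive and transaction-owned: released automatically at Commit/Rollback.
context.ExecuteCommand(
    "EXEC sp_getapplock @Resource = {0}, @LockMode = 'Exclusive', @LockOwner = 'Transaction'",
    "Account:" + accountId);
// ... read the balance, apply the deposit, context.SubmitChanges() ...
context.Transaction.Commit();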
And if, say, the world would come to an end were you to use optimistic concurrency, and you decide you absolutely must take the pessimistic route, use (UPDLOCK, ROWLOCK) and have an appropriate index on ID.
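A minimal sketch of that pessimistic variant, reusing the question's ExecuteQuery approach but locking only the one row:

// Inside a transaction: take an update lock on just the target row until commit.
var account = context.ExecuteQuery<Account>(
    "SELECT * FROM [Account] WITH (UPDLOCK, ROWLOCK) WHERE [Id] = {0}",
    accountId).First();

With an index on ID the engine takes a key lock on that single row instead of scanning; a second transaction attempting the same read blocks until the first commits.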
Okay, I managed to make it work.
I had to wrap everything in a TransactionScope and apply TransactionOptions with the desired isolation level to the scope.
It now looks like this:
using (var context = GenerateContext())
{
var txnOptions = new TransactionOptions();
txnOptions.IsolationLevel = System.Transactions.IsolationLevel.Serializable;
using (var txnScope = new TransactionScope(TransactionScopeOption.Required, txnOptions))
{
var account = context.ExecuteQuery<Account>("SELECT TOP 1 * FROM [Account] WITH (TABLOCKX) WHERE [Id] = 10").First();
var balanceBefore = account.balance;
// ... do some other stuff ...
account.balance = balanceBefore + depositAmount;
context.SubmitChanges();
txnScope.Complete();
}
}
Related
I have a scenario where I have a running total of user deposits.
I am trying to implement a concurrency mechanism that ensures two concurrent operations will not interleave.
I could have used optimistic concurrency, but it seems it won't do the job in my case.
In my case a new deposit transaction depends on the previous one, so I will have one read and one write in the database.
As I understand it, I should have something like this:
public DepositTransaction DepositAdd(int userId, decimal ammount)
{
using (var cx = this.Database.GetDbContext())
{
using (var trx = cx.Database.BeginTransaction(System.Data.IsolationLevel.RepeatableRead))
{
try
{
//here the last deposit ammount is read and new created with same context
var transaction = this.DepositTransaction(userId, ammount, SharedLib.BalanceTransactionType.Deposit, cx);
trx.Commit();
return transaction;
}
catch
{
trx.Rollback();
throw;
}
}
}
}
I spawn multiple threads that call the function, but it seems they are not able to see the last data committed by the previous call, nor does the function block and wait for the previous thread to complete.
After digging deeper into the SQL Server documentation, I found that the correct isolation level to achieve this is Serializable.
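A minimal sketch of that change (identical to the method above except for the isolation level; names like DepositTransaction are from the question):

using (var trx = cx.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
{
    // Serializable key-range locks keep a concurrent transaction from writing
    // the rows this one has read, until commit.
    var transaction = this.DepositTransaction(userId, ammount, SharedLib.BalanceTransactionType.Deposit, cx);
    trx.Commit();
    return transaction;
}

One caveat: two Serializable transactions that both read before writing can deadlock; SQL Server rolls one back as the deadlock victim, so the call should be retried when that happens.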
I am not sure if this is a behavior of the DB or the application. If a transaction fails at a point where some records have already been added to a table with auto-generated IDs, say ID 22, and there is a rollback, is there a way to re-use ID 22? That way the table is restored to what it was, even with regard to the next auto ID, before the failed transaction.
Here is the code I have been running with the point of failure indicated:
transaction
{
grC = ORMExecuteQuery( "FROM Relation1 WHERE somid=#form.someid#" );
if( ArrayLen( grC ) GT 0 )
{
oGR.id = [];
for( i =1; i LTE ArrayLen(grC); i = i + 1 )
{
grCNew = EntityNew( "Relation2" );
grCNew.setFirstName( grC[i].getFirstname() );
.........
EntitySave( grCNew );
ORMflush();
oGR.id[i] = grCNew.getID();
EntityDelete( grC ); //<<--- **POINT OF FAILURE**
}
}
}
How can I prevent losing IDs in Relation2 whenever there's a transaction rollback due to a failure to delete the corresponding entity/record in Relation1? Is it the database I should change, or my application?
As the MySQL documentation explains (link below), there will still be gaps: InnoDB does not reuse auto-increment values that were handed out to a transaction that later rolled back. Not sure what your use case is, but the gaps shouldn't be a problem. If you need consecutive numbers, you should generate them yourself in another way, e.g. outside MySQL in PHP or another language. Good luck.
http://dev.mysql.com/doc/refman/5.0/en/innodb-auto-increment-handling.html
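If consecutive numbers really are a requirement, a common alternative is a one-row counter table bumped in the same transaction as the insert: roll back and the number is never consumed. A minimal sketch in C# with MySql.Data (the relation2_seq table name is made up for illustration):

using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        // LAST_INSERT_ID(expr) stores the bumped value so it can be read back race-free.
        new MySqlCommand("UPDATE relation2_seq SET n = LAST_INSERT_ID(n + 1)", conn, tx)
            .ExecuteNonQuery();
        long nextId = Convert.ToInt64(
            new MySqlCommand("SELECT LAST_INSERT_ID()", conn, tx).ExecuteScalar());

        // ... INSERT the Relation2 row with nextId as its key ...
        tx.Commit(); // a rollback here also rolls back the counter, leaving no gap
    }
}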
In the LockService documentation: https://developers.google.com/apps-script/service_lock it states that "getPublicLock() - Gets a lock that prevents concurrent access to a section of code by simultaneous executions for the current user"
So the question is about the phrase "section of code". If I have multiple sections of code that use LockService.getPublicLock(), are they essentially independent locks?
For example:
function test1() {
var lock = LockService.getPublicLock();
if (lock.tryLock(10000)) {
// Do some critical stuff
lock.releaseLock();
}
}
function test2() {
var lock = LockService.getPublicLock();
if (lock.tryLock(10000)) {
// Do some critical stuff
lock.releaseLock();
}
}
If I have two invocations of my script concurrently executing, with one user accessing test1() and another user accessing test2(), will they both succeed? Or as it alludes to in this post: http://googleappsdeveloper.blogspot.co.uk/2011/10/concurrency-and-google-apps-script.html are the locks simply at the script level? So for this scenario, only one of test1() or test2() would succeed but not both.
If it is truly as the documentation states, and both will succeed, what denotes a 'section of code'? Is it the lines on which LockService.getPublicLock() appears, or the surrounding function?
There is only one public lock and only one private lock.
If you wish to have several locks you'll need to implement some sort of named lock service yourself. An example below, using the script database functionality:
var validTime = 60*1000; // maximum number of milliseconds for which a lock may be held
var lockType = "Named Locks"; // just a type in the database to identify these entries
function getNamedLock( name ) {
return {
locked: false,
db : ScriptDb.getMyDb(),
key: {type: lockType, name:name },
lock: function( timeout ) {
if ( this.locked ) return true;
if ( timeout===undefined ) timeout = 10000;
var endTime = Date.now()+timeout;
while ( (this.key.time=Date.now()) < endTime ) {
this.key = this.db.save( this.key );
if ( this.db.query(
{type: lockType,
name:this.key.name,
time:this.db.between( this.key.time-validTime, this.key.time+1 ) }
).getSize()==1 )
return this.locked = true; // no other or earlier key in the last valid time, so we have it
this.db.remove( this.key ); // someone else has, or might be trying to get, this lock, so try again
Utilities.sleep(Math.random()*200); // sleep randomly to avoid another collision
}
return false;
},
unlock: function () {
if (this.locked) this.db.remove(this.key);
this.locked = false;
}
}
}
To use this service, we'd do something like:
var l = getNamedLock( someObject );
if ( l.lock() ) {
// critical code, can use some fields of l for convenience, such as
// l.db - the database object
// l.key.time - the time at which the lock was acquired
// l.key.getId() - database ID of the lock, could be a convenient unique ID
} else {
// recover somehow
}
l.unlock();
Notes:
This assumes that the database operation db.save() is essentially indivisible - I think it must be, because otherwise there would be BIG trouble in normal use.
Because the time is in milliseconds we have to assume that it is possible for more than one task to try the lock with the same time stamp, otherwise the function could be simplified.
We assume that locks are never held for more than a minute (but you can change this) since the execution time limit will stop your script anyway.
Periodically you should remove all locks from the database that are more than a minute old, to save it getting cluttered with old locks from crashed scripts.
This question is old, and getPublicLock() is no longer available.
According to https://developers.google.com/apps-script/reference/lock
LockService currently offers these three scopes:
getDocumentLock(): Gets a lock that prevents any user of the current document from concurrently running a section of code.
getScriptLock(): Gets a lock that prevents any user from concurrently running a section of code.
getUserLock(): Gets a lock that prevents the current user from concurrently running a section of code.
I have something similar to the code below in LINQPad, using C# Statements. My goal is to get the actual SQL INSERT statements without actually updating the database.
I can easily delete the data after the insert with this small sample, but I will need this for a larger push of data. I hope I have missed something simple in either L2S or LINQPad.
Is there an easier way to retrieve the SQL INSERTs?
var e1 = new MyEntity(){ Text = "First" };
var e2 = new MyEntity(){ Text = "Second" };
MyEntities.InsertOnSubmit(e1);
MyEntities.InsertOnSubmit(e2);
SubmitChanges();
A quick-n-dirty way is to wrap everything in a transaction scope that is never committed:
using(TransactionScope ts = new TransactionScope())
{
var e1 = new MyEntity(){ Text = "First" };
var e2 = new MyEntity(){ Text = "Second" };
MyEntities.InsertOnSubmit(e1);
MyEntities.InsertOnSubmit(e2);
SubmitChanges();
// Deliberately not committing the transaction.
}
This works well for small volumes. If the data volume is large and the database uses the full recovery model, transaction log growth might become a problem.
When we did the samples for "LINQ in Action", we used the following method which gets the scheduled changes from the context:
public String GetChangeText(System.Data.Linq.DataContext context)
{
    // GetChangeText is an internal method of DataContext, so it has to be fetched
    // via reflection (requires using System.Data.Linq and System.Reflection).
    MethodInfo mi = typeof(DataContext).GetMethod("GetChangeText",
        BindingFlags.NonPublic | BindingFlags.Instance);
    return mi.Invoke(context, null).ToString();
}
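In LINQPad's C# Statements mode the query runs inside the typed DataContext, so `this` is the context itself; assuming the helper above is pasted into the same query, usage would look something like:

MyEntities.InsertOnSubmit(e1);
MyEntities.InsertOnSubmit(e2);
// Dump the pending INSERTs without ever calling SubmitChanges().
GetChangeText(this).Dump();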
If you want to see this in action, download the samples in LINQPad (see http://www.thinqlinq.com/Default/LINQ-In-Action-Samples-available-in-LINQPad.aspx) and check out chapter 6 example 6.29.
I have a need to select a set of records which contain an IsLocked field. I then need to immediately update the IsLocked value from false to true, within a transaction, such that other programs do not fetch those already-fetched records for processing, but can fetch other, unlocked records.
Here's the code I have so far. Is this correct? And how do I do the update? Visit each record in a foreach, update the value, and then SubmitChanges()? It seems that when I run the code below, I lose the collection associated with emails and thus cannot do the processing I need. Does closing the transaction early result in losing the records loaded?
To focus the question: how does one load and update records in a transaction, close the transaction so locks are not held longer than necessary, process the records, and then save subsequent changes back to the database?
using (ForcuraDaemonDataContext ctx = new ForcuraDaemonDataContext(props.EmailLogConnectionString))
{
System.Data.Common.DbTransaction trans = null;
IQueryable<Email> emails = null;
try
{
// get unlocked & unsent emails, then immediately lock the set for processing
ctx.Connection.Open();
trans = ctx.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
ctx.Transaction = trans;
emails = ctx.Emails.Where(e => !(e.IsLocked || e.IsSent));
/// ???
ctx.SubmitChanges();
trans.Commit();
}
catch (Exception ex)
{
if (trans != null)
trans.Rollback();
eventLog.WriteEntry("Error. Could not lock and load emails.", EventLogEntryType.Information);
}
finally
{
if (ctx.Connection.State == ConnectionState.Open)
ctx.Connection.Close();
}
// more stuff on the emails here
}
Please see this question for an answer to a similar, simpler form of the problem.
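For completeness, a minimal sketch of that pattern against the question's context (not the linked answer verbatim): materialize the rows inside the transaction, mark them locked, commit, and only then process.

List<Email> emails;
using (var ctx = new ForcuraDaemonDataContext(props.EmailLogConnectionString))
{
    ctx.Connection.Open();
    using (var trans = ctx.Connection.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        ctx.Transaction = trans;
        // ToList() executes the query now. The original IQueryable was deferred,
        // which is why the collection seemed to vanish once the transaction closed.
        emails = ctx.Emails.Where(e => !(e.IsLocked || e.IsSent)).ToList();
        foreach (var email in emails)
            email.IsLocked = true;
        ctx.SubmitChanges();
        trans.Commit();
    }
    ctx.Transaction = null; // detach the completed transaction before further submits
    // Process the already-materialized emails here, set IsSent etc.,
    // then call ctx.SubmitChanges() again outside the transaction.
}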