How to select and update transactionally using Linq-To-SQL?

I have a need to select a set of records which contain an IsLocked field. I then need to immediately update the IsLocked value from false to true, within a transaction, such that other programs do not fetch those already-fetched records for processing, but can fetch other, unlocked records.
Here's the code I have so far. Is this correct? And how do I do the update? Visit each record in a foreach, update the value and then SubmitChanges()? It seems that when I run the code below, I lose the collection associated with emails, thus cannot do the processing I need to do. Does closing the transaction early result in losing the records loaded?
To focus the question: how does one load and update records in a transaction, close the transaction so locks are held no longer than necessary, process the records, and then save subsequent changes back to the database?
using (ForcuraDaemonDataContext ctx = new ForcuraDaemonDataContext(props.EmailLogConnectionString))
{
    System.Data.Common.DbTransaction trans = null;
    IQueryable<Email> emails = null;
    try
    {
        // get unlocked & unsent emails, then immediately lock the set for processing
        ctx.Connection.Open();
        trans = ctx.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
        ctx.Transaction = trans;
        emails = ctx.Emails.Where(e => !(e.IsLocked || e.IsSent));
        /// ???
        ctx.SubmitChanges();
        trans.Commit();
    }
    catch (Exception ex)
    {
        if (trans != null)
            trans.Rollback();
        eventLog.WriteEntry("Error. Could not lock and load emails.", EventLogEntryType.Information);
    }
    finally
    {
        if (ctx.Connection.State == ConnectionState.Open)
            ctx.Connection.Close();
    }
    // more stuff on the emails here
}

Please see this question for an answer to a similar, simpler form of the problem.
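For reference, a minimal sketch of one way to fill in the /// ??? step, using the names from the code above; the processing body is a placeholder. The rows are materialized and locked inside the transaction, the transaction is committed so the database locks are held only briefly, and the records remain available for processing afterwards:

using (ForcuraDaemonDataContext ctx = new ForcuraDaemonDataContext(props.EmailLogConnectionString))
{
    List<Email> emails;
    ctx.Connection.Open();
    using (var trans = ctx.Connection.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        ctx.Transaction = trans;
        // ToList() materializes the rows now; a bare IQueryable is a deferred
        // query and would hit the database again after the transaction is gone.
        emails = ctx.Emails.Where(e => !(e.IsLocked || e.IsSent)).ToList();
        foreach (Email email in emails)
            email.IsLocked = true;
        ctx.SubmitChanges(); // write the lock flags inside the transaction
        trans.Commit();      // database locks are released here
    }
    ctx.Transaction = null;  // detach the completed transaction

    // process the already-materialized records here

    ctx.SubmitChanges();     // persist changes made during processing
}

The ToList() call is likely the missing piece behind "I lose the collection": an IQueryable is a deferred query, not a result set, so enumerating it after the transaction and connection are closed fails; materializing it inside the transaction keeps the records available afterwards.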

Related

How to ensure the database is updated only at a certain interval

I am working on a Spring / Java 8 application.
I have a function that generates labels (PDF generation) asynchronously.
It contains a loop that usually runs more than 1000 times, generating more than 1000 PDF labels.
After every iteration we update the database to save the status: initially we save numberOfgeneratedCount = 0, and after each label we increment the variable and update the table.
It is not necessary to save this incremented count to the database at the end of every iteration; we only need to update the database at fixed intervals, to reduce the write load on the database.
Currently my code is like this:
// Label is a database model class; labeldb is a variable of that type.
// commonDao.saveLabelToDb(...) saves a Label object.
int numberOfgeneratedCount = 0;
labeldb.setProcessedOrderCount(numberOfgeneratedCount);
commonDao.saveLabelToDb(labeldb);
for (Order order : orders) {
    boolean generated = true;
    try {
        // pdf generation code
    } catch (Exception e) {
        // catch block here
        generated = false;
    }
    if (generated) {
        numberOfgeneratedCount++;
        labeldb.setProcessedOrderCount(numberOfgeneratedCount);
        commonDao.saveLabelToDb(labeldb);
    }
}
To improve performance, we want to update the database only at an interval of 10 seconds. Any help would be appreciated.
I have done this using the following code. I am not sure whether this is a good solution; could someone improve it using built-in functions?
int numberOfgeneratedCount = 0;
labeldb.setProcessedOrderCount(numberOfgeneratedCount);
commonDao.saveLabelToDb(labeldb);
int nowSecs = LocalTime.now().toSecondOfDay();
int lastSecs = nowSecs;
for (Order order : orders) {
    nowSecs = LocalTime.now().toSecondOfDay();
    boolean generated = true;
    try {
        // pdf generation code
    } catch (Exception e) {
        // catch block here
        generated = false;
    }
    if (generated) {
        numberOfgeneratedCount++;
        labeldb.setProcessedOrderCount(numberOfgeneratedCount);
        if (nowSecs - lastSecs > 10) {
            lastSecs = nowSecs;
            commonDao.saveLabelToDb(labeldb);
        }
    }
}
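A possible refinement, sketched here using the question's own names (orders, labeldb, commonDao); everything else is illustrative. LocalTime.toSecondOfDay() wraps around at midnight, so an elapsed-time check based on System.currentTimeMillis() is safer, and a final save after the loop ensures the last increments are not lost when the loop finishes less than 10 seconds after the previous flush:

long flushIntervalMillis = 10_000;                // flush to the DB at most every 10 seconds
long lastFlushMillis = System.currentTimeMillis();
int numberOfgeneratedCount = 0;
labeldb.setProcessedOrderCount(numberOfgeneratedCount);
commonDao.saveLabelToDb(labeldb);
for (Order order : orders) {
    boolean generated = true;
    try {
        // pdf generation code
    } catch (Exception e) {
        generated = false;
    }
    if (generated) {
        numberOfgeneratedCount++;
        labeldb.setProcessedOrderCount(numberOfgeneratedCount);
        long now = System.currentTimeMillis();
        if (now - lastFlushMillis >= flushIntervalMillis) {
            commonDao.saveLabelToDb(labeldb);     // periodic flush
            lastFlushMillis = now;
        }
    }
}
commonDao.saveLabelToDb(labeldb);                 // final flush so the last count is persisted

A ScheduledExecutorService that flushes the count every 10 seconds would also work, but for a single sequential loop the inline time check is simpler.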

Jooq batchInsert().execute()

My process looks like this:
select some data, 50 rows per select
do something with the data (set some values)
transform each row into an object of another table
call batchInsert(myListOfRecords).execute()
My problem is how to control when the data should be inserted. In my current setup, the data is only inserted at the end of my loop. This is a problem for me because I want to process much more data than I do in my tests, so if I leave it this way my process will end with an OutOfMemory exception. Where should I define the maximum amount of data per batch insert?
The important thing here is to not fetch all the rows you want to process into memory in one go. When using jOOQ, this is done using ResultQuery.fetchLazy() (possibly along with ResultQuery.fetchSize(int)). You can then fetch the next 50 rows using Cursor.fetchNext(50) and proceed with your insertion as follows:
try (Cursor<?> cursor = ctx
        .select(...)
        .from(...)
        .fetchSize(50)
        .fetchLazy()) {
    Result<?> batch;
    for (;;) {
        batch = cursor.fetchNext(50);
        if (batch.isEmpty())
            break;

        // Do your stuff here
        // Do your insertion here
        ctx.batchInsert(...).execute();
    }
}
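To make the batching concrete, the loop body could transform each fetched chunk and insert it before fetching the next one, so that at most 50 source rows and 50 new records are in memory at a time. This is only a sketch: TARGET_TABLE, TargetRecord, and the field mapping are placeholders, not names from the question.

// inside the for (;;) loop, after fetchNext(50):
List<TargetRecord> inserts = new ArrayList<>();
for (Record r : batch) {
    TargetRecord t = ctx.newRecord(TARGET_TABLE);
    // ... copy/derive values from r onto t ...
    inserts.add(t);
}
ctx.batchInsert(inserts).execute(); // one batch per 50-row chunk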

EF6 Transaction

I have a scenario where I keep a running total of user deposits.
I am trying to implement a concurrency mechanism that ensures two concurrent operations cannot take place.
I could have used optimistic concurrency, but it seems it won't do the job in my case: a new deposit transaction depends on the previous one, so I have one read and one write in the database.
As I understand it, I should have something like this:
public DepositTransaction DepositAdd(int userId, decimal ammount)
{
    using (var cx = this.Database.GetDbContext())
    {
        using (var trx = cx.Database.BeginTransaction(System.Data.IsolationLevel.RepeatableRead))
        {
            try
            {
                // here the last deposit amount is read and the new one created with the same context
                var transaction = this.DepositTransaction(userId, ammount, SharedLib.BalanceTransactionType.Deposit, cx);
                trx.Commit();
                return transaction;
            }
            catch
            {
                trx.Rollback();
                throw;
            }
        }
    }
}
I spawn multiple threads that call the function, but it seems they are not able to see the last data committed by the previous call, nor does the function block and wait for the previous thread to complete.
After digging deeper into the SQL Server documentation, I found that the correct isolation level to achieve this is Serializable.
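For completeness, a minimal sketch of the method with the isolation level changed, reusing the names from the code above. One caveat: under Serializable, two concurrent read-then-write transactions over the same rows tend to deadlock rather than queue, so SQL Server kills one as the victim (error 1205) and the call must be retried. Depending on how EF surfaces the error, the SqlException filter below may need adjusting, so treat it as illustrative.

public DepositTransaction DepositAdd(int userId, decimal ammount)
{
    const int maxAttempts = 3;
    for (int attempt = 1; ; attempt++)
    {
        using (var cx = this.Database.GetDbContext())
        using (var trx = cx.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            try
            {
                // the last deposit amount is read and the new one created with the same context
                var transaction = this.DepositTransaction(userId, ammount, SharedLib.BalanceTransactionType.Deposit, cx);
                trx.Commit();
                return transaction;
            }
            catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
            {
                trx.Rollback(); // deadlock victim: retry the whole transaction
            }
            catch
            {
                trx.Rollback();
                throw;
            }
        }
    }
}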

LockService ambiguity

In the LockService documentation: https://developers.google.com/apps-script/service_lock it states that "getPublicLock() - Gets a lock that prevents concurrent access to a section of code by simultaneous executions for the current user"
So the question is about the phrase "section of code". If I have multiple sections of code that use LockService.getPublicLock(), are they essentially independent locks?
For example:
function test1() {
    var lock = LockService.getPublicLock();
    if (lock.tryLock(10000)) {
        // Do some critical stuff
        lock.releaseLock();
    }
}

function test2() {
    var lock = LockService.getPublicLock();
    if (lock.tryLock(10000)) {
        // Do some critical stuff
        lock.releaseLock();
    }
}
If I have two invocations of my script concurrently executing, with one user accessing test1() and another user accessing test2(), will they both succeed? Or, as this post alludes to: http://googleappsdeveloper.blogspot.co.uk/2011/10/concurrency-and-google-apps-script.html, are the locks simply at the script level? In that scenario, only one of test1() or test2() would succeed, but not both.
If it is truly as the documentation states and both will succeed, what denotes a 'section of code'? Is it the lines on which LockService.getPublicLock() appears, or the surrounding function?
There is only one public lock and only one private lock.
If you wish to have several locks you'll need to implement some sort of named lock service yourself. An example below, using the script database functionality:
var validTime = 60*1000;      // maximum number of milliseconds for which a lock may be held
var lockType = "Named Locks"; // just a type in the database to identify these entries

function getNamedLock( name ) {
    return {
        locked: false,
        db: ScriptDb.getMyDb(),
        key: { type: lockType, name: name },
        lock: function( timeout ) {
            if ( this.locked ) return true;
            if ( timeout === undefined ) timeout = 10000;
            var endTime = Date.now() + timeout;
            while ( (this.key.time = Date.now()) < endTime ) {
                this.key = this.db.save( this.key );
                if ( this.db.query(
                         { type: lockType,
                           name: this.key.name,
                           time: this.db.between( this.key.time - validTime, this.key.time + 1 ) }
                     ).getSize() == 1 )
                    return this.locked = true;        // no other or earlier key in the last valid time, so we have it
                this.db.remove( this.key );           // someone else has, or might be trying to get, this lock, so try again
                Utilities.sleep( Math.random()*200 ); // sleep randomly to avoid another collision
            }
            return false;
        },
        unlock: function () {
            if ( this.locked ) this.db.remove( this.key );
            this.locked = false;
        }
    };
}
To use this service, we'd do something like:

var l = getNamedLock( someObject );
if ( l.lock() ) {
    // critical code, can use some fields of l for convenience, such as
    // l.db - the database object
    // l.key.time - the time at which the lock was acquired
    // l.key.getId() - database ID of the lock, could be a convenient unique ID
} else {
    // recover somehow
}
l.unlock();
Notes:
This assumes that the database operation db.save() is essentially indivisible - I think it must be, because otherwise there would be BIG trouble in normal use.
Because the time is in milliseconds we have to assume that it is possible for more than one task to try the lock with the same time stamp, otherwise the function could be simplified.
We assume that locks are never held for more than a minute (but you can change this) since the execution time limit will stop your script anyway.
Periodically you should remove all locks from the database that are more than a minute old, to save it getting cluttered with old locks from crashed scripts.
This question is old, and getPublicLock() is no longer available.
According to https://developers.google.com/apps-script/reference/lock, LockService now provides these three scopes:
getDocumentLock(): Gets a lock that prevents any user of the current document from concurrently running a section of code.
getScriptLock(): Gets a lock that prevents any user from concurrently running a section of code.
getUserLock(): Gets a lock that prevents the current user from concurrently running a section of code.
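For illustration, a minimal sketch of the script-scoped variant, which is the closest replacement for the old public lock (the function name and body are placeholders):

function updateSharedResource() {
    var lock = LockService.getScriptLock();
    // wait up to 10 seconds for other executions of this script to release the lock;
    // waitLock() throws an exception if the lock cannot be obtained in time
    lock.waitLock(10000);
    try {
        // critical section: at most one execution of this script runs this at a time,
        // regardless of which user triggered it
    } finally {
        lock.releaseLock();
    }
}

Note that the ScriptDb service used in the named-lock workaround above has also since been retired, so that approach would need a different backing store today.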

SQL Bulk Insert Vs Update - DeadLock issues

I have two processes which sometimes run concurrently.
The first one is a bulk insert:
using (var connection = new SqlConnection(connectionString))
{
    var bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock)
    {
        DestinationTableName = "CTRL.CTRL_DATA_ERROR_DETAIL",
    };
    connection.Open();
    try
    {
        bulkCopy.WriteToServer(errorDetailsDt);
    }
    catch (Exception e)
    {
        throw new Exception("Error Bulk writing Error Details for Data Stream ID: " + dataStreamId +
                            " Details of Error : " + e.Message);
    }
    connection.Close();
}
The second one is a bulk update from a stored procedure:
--Part of code from Stored Procedure--
UPDATE [CTL].[CTRL].[CTRL_DATA_ERROR_DETAIL]
SET [MODIFIED_CONTAINER_SEQUENCE_NUMBER] = @containerSequenceNumber
   ,[MODIFIED_DATE] = GETDATE()
   ,[CURRENT_FLAG] = 'N'
WHERE [DATA_ERROR_KEY] = @DataErrorKey
  AND [CURRENT_FLAG] = 'Y'
The first process runs for a while (depending on the incoming record load), and the second process always ends up as the deadlock victim.
Should I set SqlBulkCopyOptions.TableLock so that the second process waits until the resources are released?
By default, SqlBulkCopy doesn't take exclusive locks, so while it's doing its thing and inserting data, your update process kicks off and causes a deadlock. To get around this you can instruct SqlBulkCopy to take an exclusive table lock, as you already suggested, or you can set the batch size of the bulk insert to a reasonable number so locks are released between batches.
If you can get away with it, I think the table lock idea is the best option.
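For reference, a sketch of the recommended option, using the table name from the question; the commented-out BatchSize line shows the alternative, and 5000 is an arbitrary illustrative value:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // TableLock makes the bulk load take an exclusive table lock, so the
    // concurrent UPDATE waits for it to finish instead of deadlocking.
    using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.TableLock, null))
    {
        bulkCopy.DestinationTableName = "CTRL.CTRL_DATA_ERROR_DETAIL";
        // Alternative: drop TableLock and commit in smaller batches instead,
        // so locks are released as each batch completes:
        // bulkCopy.BatchSize = 5000;
        bulkCopy.WriteToServer(errorDetailsDt);
    }
}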