EF6 Transaction - MySQL

I have a scenario where I keep a running total of user deposits.
I am trying to implement a concurrency mechanism that ensures two concurrent operations cannot take place at the same time.
I could have used optimistic concurrency, but it seems it won't do the job in my case: a new deposit transaction depends on the previous one, so I have one read and one write in the database.
As I understand it, I should have something like this:
public DepositTransaction DepositAdd(int userId, decimal ammount)
{
    using (var cx = this.Database.GetDbContext())
    {
        using (var trx = cx.Database.BeginTransaction(System.Data.IsolationLevel.RepeatableRead))
        {
            try
            {
                // here the last deposit amount is read and a new one is created with the same context
                var transaction = this.DepositTransaction(userId, ammount, SharedLib.BalanceTransactionType.Deposit, cx);
                trx.Commit();
                return transaction;
            }
            catch
            {
                trx.Rollback();
                throw;
            }
        }
    }
}
I spawn multiple threads that call this function, but they do not see the last data committed by the previous call, nor does the function block and wait for the previous thread to complete.

After digging deeper into the SQL Server documentation, I found that the correct isolation level to achieve this is Serializable.
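Only the isolation level in the snippet above needs to change. A minimal sketch, reusing the context and helper method from the question:

    using (var trx = cx.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        // Serializable turns the read into a locking read, so a concurrent
        // DepositAdd call blocks until this transaction commits or rolls back.
        var transaction = this.DepositTransaction(userId, ammount, SharedLib.BalanceTransactionType.Deposit, cx);
        trx.Commit();
        return transaction;
    }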


EF Core Async Stored Procedure Lock wait timeout exceeded

In my context, I'm running 2 stored procedures asynchronously using EF Core. This is causing a deadlock and a timeout issue.
Below is the code of the method that calls the other two methods that invoke the stored procedures:
public PortfolioPublishJobStep...
private async Task DoExecuteAsync(ProcessingContext context)
{
    var (startDate, endDate) = GetInterval(context);
    var portfolioApiId = context.Message.ManagedPortfolioApiId;

    using var transactionScope = TransactionScopeFactory.CreateTransactionScope(timeout: Timeout, transactionScopeAsyncFlowOption: TransactionScopeAsyncFlowOption.Enabled);

    var asyncTasks = new List<Task>();
    foreach (var publishableService in _publishableServices)
    {
        var asyncTask = publishableService.PublishAsync(portfolioApiId, startDate, endDate);
        asyncTasks.Add(asyncTask);
    }

    await Task.WhenAll(asyncTasks.ToArray()).ConfigureAwait(continueOnCapturedContext: false);
    transactionScope.Complete();
}
And below are the classes/methods that invoke their respective procedures...
PortfolioFinancialBenchmarkDataService...
public async Task PublishAsync(string portfolioApiId, DateTime startDate, DateTime endDate)
{
    if (string.IsNullOrWhiteSpace(portfolioApiId))
    {
        throw new ArgumentException(nameof(portfolioApiId));
    }

    var repository = UnitOfWork.Repository<PortfolioFinancialBenchmarkData>();
    await repository.RemoveAsync(x => x.PortfolioApiId == portfolioApiId && x.ReferenceDate >= startDate && x.ReferenceDate <= endDate).ConfigureAwait(continueOnCapturedContext: false);

    var parameters = new[]
    {
        DbParameterFactory.CreateDbParameter<MySqlParameter>("#PortfolioApiId", portfolioApiId),
        DbParameterFactory.CreateDbParameter<MySqlParameter>("#StartDate", startDate),
        DbParameterFactory.CreateDbParameter<MySqlParameter>("#EndDate", endDate)
    };

    await repository.ExecuteSqlCommandAsync("CALL PublishPortfolioFinancialBenchmarkData(#PortfolioApiId, #StartDate, #EndDate);", parameters).ConfigureAwait(continueOnCapturedContext: false);
}
And this:
PortfolioFinancialDataService...
public async Task PublishAsync(string portfolioApiId, DateTime startDate, DateTime endDate)
{
    if (string.IsNullOrWhiteSpace(portfolioApiId))
    {
        throw new ArgumentException(nameof(portfolioApiId));
    }

    var repository = UnitOfWork.Repository<PortfolioFinancialData>();
    await repository.RemoveAsync(x => x.PortfolioApiId == portfolioApiId && x.ReferenceDate >= startDate && x.ReferenceDate <= endDate).ConfigureAwait(continueOnCapturedContext: false);

    var parameters = new[]
    {
        DbParameterFactory.CreateDbParameter<MySqlParameter>("#PortfolioApiId", portfolioApiId),
        DbParameterFactory.CreateDbParameter<MySqlParameter>("#StartDate", startDate),
        DbParameterFactory.CreateDbParameter<MySqlParameter>("#EndDate", endDate)
    };

    await repository.ExecuteSqlCommandAsync("CALL PublishPortfolioFinancialData(#PortfolioApiId, #StartDate, #EndDate);", parameters).ConfigureAwait(continueOnCapturedContext: false);
}
I believe the problem is the simultaneous connections to the database.
I thought I had mitigated this using TransactionScopeAsyncFlowOption, as I've seen suggested elsewhere, but the problem persists.
During the execution of the procedures, a deadlock occurs on one of the tables that one of the procedures feeds, and a timeout error follows.
And the exception message:
MySqlConnector.MySqlException (0x80004005): Lock wait timeout exceeded; try restarting transaction
At some point I also received the following message:
MySqlConnector.MySqlException (0x80004005): XA_RBDEADLOCK: Transaction branch was rolled back: deadlock was detected
Tests I performed:
Set the database timeout from 50 to 100 s: fail.
Set the PortfolioPublishJobStep timeout from 2 to 3 min, and the PortfolioFinancialBenchmarkDataService and PortfolioFinancialDataService timeouts from 1 to 2 min: fail.
Run only 1 of the 2 stored procedures: success.
Run the procedures synchronously: success.
Thus, I conclude that the problem may lie in opening 2 transactions, and I believe that one may be waiting for the other to finish...
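Given that the synchronous test succeeds, a sketch of the sequential fallback I could use, reusing the names from DoExecuteAsync above:

    // Sequential execution: each service finishes before the next starts,
    // so the two procedures never hold locks on the same tables concurrently.
    foreach (var publishableService in _publishableServices)
    {
        await publishableService.PublishAsync(portfolioApiId, startDate, endDate)
            .ConfigureAwait(continueOnCapturedContext: false);
    }
    transactionScope.Complete();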
There are many situations in which MySQL takes locks. For example:
Performing a DML operation without committing, then performing a delete operation, will lock the table.
Inserting and then updating the same row within the same transaction.
Improper table index design, which can lead to deadlocks in the database.
Long transactions blocking DDL, which in turn blocks all subsequent operations on the same table.
Solution
Emergency method:
Run SHOW FULL PROCESSLIST; and kill the problematic process:
SHOW FULL PROCESSLIST; KILL <id>;
Sometimes the processlist does not show where the lock wait is: when two transactions are in the commit phase, it is not reflected in the processlist.
The cure method:
SELECT * FROM information_schema.INNODB_TRX;
This shows which transactions are holding table resources.
Using this approach requires some understanding of InnoDB.
innodb_lock_wait_timeout: the time an InnoDB DML operation waits for a row-level lock.
innodb_lock_wait_timeout is the longest time a transaction waits to obtain a resource.
If the resource is not acquired within this time, failure is returned to the application; when the lock wait exceeds the configured time, the following error is reported: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction.
The parameter's unit is seconds; the minimum is 1 s (generally not set that low), the maximum is 1073741824 s, and the default is 50 s.
How to modify the value of innodb_lock_wait_timeout?
SET innodb_lock_wait_timeout = 100; SET GLOBAL innodb_lock_wait_timeout = 100;
Or modify the parameter file /etc/my.cnf: innodb_lock_wait_timeout = 50

Jooq batchInsert().execute()

My process looks like:
select some data, 50 rows per select,
do something with the data (set some values),
transform each row into an object of another table,
call batchInsert(myListOfRecords).execute()
My problem is how to control when the data is inserted. In my current setup, data is only inserted at the end of my loop. This is a problem for me, because I want to process much more data than I do in my tests, so my process would end with an OutOfMemory exception. Where should I define the maximum amount of data per batch for the insert call?
The important thing here is to not fetch all the rows you want to process into memory in one go. When using jOOQ, this is done using ResultQuery.fetchLazy() (possibly along with ResultQuery.fetchSize(int)). You can then fetch the next 50 rows using Cursor.fetchNext(50) and proceed with your insertion as follows:
try (Cursor<?> cursor = ctx
        .select(...)
        .from(...)
        .fetchSize(50)
        .fetchLazy()) {
    Result<?> batch;
    for (;;) {
        batch = cursor.fetchNext(50);
        if (batch.isEmpty())
            break;

        // Do your stuff here
        // Do your insertion here
        ctx.batchInsert(...);
    }
}

LockService ambiguity

In the LockService documentation: https://developers.google.com/apps-script/service_lock it states that "getPublicLock() - Gets a lock that prevents concurrent access to a section of code by simultaneous executions for the current user"
So the query is around the comment: "section of code". If I have multiple sections of code that use the LockService.getPublicLock(), are they essentially independent locks?
For example:
function test1() {
  var lock = LockService.getPublicLock();
  if (lock.tryLock(10000)) {
    // Do some critical stuff
    lock.releaseLock();
  }
}

function test2() {
  var lock = LockService.getPublicLock();
  if (lock.tryLock(10000)) {
    // Do some critical stuff
    lock.releaseLock();
  }
}
If I have two invocations of my script concurrently executing, with one user accessing test1() and another user accessing test2(), will they both succeed? Or as it alludes to in this post: http://googleappsdeveloper.blogspot.co.uk/2011/10/concurrency-and-google-apps-script.html are the locks simply at the script level? So for this scenario, only one of test1() or test2() would succeed but not both.
If it is truly as the documentation states, and both will succeed, what denotes a 'section of code'? Is it the line numbers that LockService.getPublicLock() appears on, or is it the surrounding function?
There is only one public lock and only one private lock.
If you wish to have several locks you'll need to implement some sort of named lock service yourself. An example below, using the script database functionality:
var validTime = 60*1000;      // maximum number of milliseconds for which a lock may be held
var lockType = "Named Locks"; // just a type in the database to identify these entries

function getNamedLock( name ) {
  return {
    locked: false,
    db: ScriptDb.getMyDb(),
    key: { type: lockType, name: name },
    lock: function( timeout ) {
      if ( this.locked ) return true;
      if ( timeout === undefined ) timeout = 10000;
      var endTime = Date.now() + timeout;
      while ( (this.key.time = Date.now()) < endTime ) {
        this.key = this.db.save( this.key );
        if ( this.db.query(
               { type: lockType,
                 name: this.key.name,
                 time: this.db.between( this.key.time - validTime, this.key.time + 1 ) }
             ).getSize() == 1 )
          return this.locked = true; // no other or earlier key in the last valid time, so we have it
        this.db.remove( this.key ); // someone else has, or might be trying to get, this lock, so try again
        Utilities.sleep( Math.random()*200 ); // sleep randomly to avoid another collision
      }
      return false;
    },
    unlock: function() {
      if ( this.locked ) this.db.remove( this.key );
      this.locked = false;
    }
  };
}
To use this service, we'd do something like:
var l = getNamedLock( someObject );
if ( l.lock() ) {
  // critical code, can use some fields of l for convenience, such as
  // l.db - the database object
  // l.key.time - the time at which the lock was acquired
  // l.key.getId() - database ID of the lock, could be a convenient unique ID
} else {
  // recover somehow
}
l.unlock();
Notes:
This assumes that the database operation db.save() is essentially indivisible - I think it must be, because otherwise there would be BIG trouble in normal use.
Because the time is in milliseconds we have to assume that it is possible for more than one task to try the lock with the same time stamp, otherwise the function could be simplified.
We assume that locks are never held for more than a minute (but you can change this) since the execution time limit will stop your script anyway.
Periodically you should remove all locks from the database that are more than a minute old, to save it getting cluttered with old locks from crashed scripts.
This question is old, and getPublicLock() is no longer available.
According to https://developers.google.com/apps-script/reference/lock
LockService currently provides these three scopes:
getDocumentLock(): Gets a lock that prevents any user of the current document from concurrently running a section of code.
getScriptLock(): Gets a lock that prevents any user from concurrently running a section of code.
getUserLock(): Gets a lock that prevents the current user from concurrently running a section of code.

How to select and update transactionally using Linq-To-SQL?

I have a need to select a set of records which contain an IsLocked field. I then need to immediately update the IsLocked value from false to true, within a transaction, such that other programs do not fetch those already-fetched records for processing, but can fetch other, unlocked records.
Here's the code I have so far. Is this correct? And how do I do the update? Visit each record in a foreach, update the value and then SubmitChanges()? It seems that when I run the code below, I lose the collection associated with emails, thus cannot do the processing I need to do. Does closing the transaction early result in losing the records loaded?
To focus the question: how does one load-and-update-in-a-transaction records, close the transaction to not lock for any longer than necessary, process the records, then save subsequent changes back to the database.
using (ForcuraDaemonDataContext ctx = new ForcuraDaemonDataContext(props.EmailLogConnectionString))
{
    System.Data.Common.DbTransaction trans = null;
    IQueryable<Email> emails = null;

    try
    {
        // get unlocked & unsent emails, then immediately lock the set for processing
        ctx.Connection.Open();
        trans = ctx.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
        ctx.Transaction = trans;

        emails = ctx.Emails.Where(e => !(e.IsLocked || e.IsSent));
        /// ???
        ctx.SubmitChanges();
        trans.Commit();
    }
    catch (Exception ex)
    {
        if (trans != null)
            trans.Rollback();
        eventLog.WriteEntry("Error. Could not lock and load emails.", EventLogEntryType.Information);
    }
    finally
    {
        if (ctx.Connection.State == ConnectionState.Open)
            ctx.Connection.Close();
    }

    // more stuff on the emails here
}
Please see this question for an answer to a similar, simpler form of the problem.
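For the load-and-update part itself, one possible shape is below; a sketch only, assuming the question's own context, entity, and property names. The key point is that ToList() forces the query to execute while the transaction is still open; an unenumerated IQueryable runs only when iterated, which here would happen after the connection is closed, and that explains the lost collection.

    List<Email> emails;
    using (var ctx = new ForcuraDaemonDataContext(props.EmailLogConnectionString))
    {
        ctx.Connection.Open();
        var trans = ctx.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
        ctx.Transaction = trans;
        try
        {
            // ToList() executes the query now, inside the transaction,
            // so the rows remain usable after the transaction ends.
            emails = ctx.Emails.Where(e => !(e.IsLocked || e.IsSent)).ToList();
            foreach (var email in emails)
                email.IsLocked = true;  // claim the rows for this process
            ctx.SubmitChanges();        // persist the lock flags
            trans.Commit();             // end the transaction as early as possible
        }
        catch
        {
            trans.Rollback();
            throw;
        }
        // process 'emails' here, then call ctx.SubmitChanges() again
        // to save any further changes made during processing
    }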

SQL Bulk Insert Vs Update - DeadLock issues

I've got two processes which sometimes run concurrently.
The first one is a bulk insert:
using (var connection = new SqlConnection(connectionString))
{
    var bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock)
    {
        DestinationTableName = "CTRL.CTRL_DATA_ERROR_DETAIL",
    };

    connection.Open();
    try
    {
        bulkCopy.WriteToServer(errorDetailsDt);
    }
    catch (Exception e)
    {
        throw new Exception("Error Bulk writing Error Details for Data Stream ID: " + dataStreamId +
                            " Details of Error : " + e.Message);
    }
    connection.Close();
}
The second one is a bulk update in a stored procedure:
--Part of code from Stored Procedure--
UPDATE [CTL].[CTRL].[CTRL_DATA_ERROR_DETAIL]
SET [MODIFIED_CONTAINER_SEQUENCE_NUMBER] = #containerSequenceNumber
,[MODIFIED_DATE] = GETDATE()
,[CURRENT_FLAG] = 'N'
WHERE [DATA_ERROR_KEY] = #DataErrorKey
AND [CURRENT_FLAG] ='Y'
The first process runs for a while (depending on the incoming record load), and the second process always ends up as the deadlock victim.
Should I set SqlBulkCopyOptions.TableLock so that the second process waits until the resources are released?
By default SqlBulkCopy doesn't take exclusive locks, so while it's doing its thing and inserting data, your update process kicks off and causes a deadlock. To get around this you could instruct SqlBulkCopy to take an exclusive table lock, as you already suggested, or you can set the batch size of the bulk insert to a reasonable number.
If you can get away with it, I think the table lock idea is the best option.
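If you go the batch-size route instead, a minimal sketch reusing the names from the question (connectionString, errorDetailsDt); the 5000 value is an assumption to tune for your record volume:

    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "CTRL.CTRL_DATA_ERROR_DETAIL";
            // BatchSize splits the load into smaller round trips, so locks
            // accumulate per batch rather than across the whole insert.
            bulkCopy.BatchSize = 5000; // arbitrary; tune for your load
            bulkCopy.WriteToServer(errorDetailsDt);
        }
    }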