TransactionScope incorrectly committing the transaction - sql-server-2008

I have the following method in my DAL:
public void SavePlan()
{
    using (TransactionScope scope =
        new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        CallSaveDataProc();
        CallLogMsgProc();
        scope.Complete();
    }
}
I have deliberately put a COMMIT TRANSACTION in the CallLogMsgProc procedure without a matching BEGIN TRANSACTION. This results in a SqlException being thrown from the CallLogMsgProc procedure, and scope.Complete() never executes.
However, in my database I'm still seeing the records saved by the first call, CallSaveDataProc. Am I doing something wrong?

Starting and committing transactions have to be paired, and each pair should ideally be in the same scope (though one pair doesn't have to be in the same scope as another pair).
So you have: a transaction started by the new TransactionScope, followed by the unpaired COMMIT in your stored procedure, which commits the only open transaction and saves the work (as you are seeing), followed by an attempt to complete the transaction "seen" by TransactionScope, which has by then become invalid.
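To make the failure concrete, here is a minimal sketch of what goes wrong, assuming an ambient-enlisted SqlConnection and using the question's procedure name (connectionString is a placeholder):

// The scope's ambient transaction is the only open transaction when
// the procedure runs (@@TRANCOUNT = 1), so the procedure's unpaired
// COMMIT commits it, and SQL Server typically raises error 266
// ("Transaction count after EXECUTE indicates a mismatching number of
// BEGIN and COMMIT statements") as the procedure exits.
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
using (var connection = new SqlConnection(connectionString))
{
    connection.Open(); // auto-enlists in the ambient transaction

    using (var command = new SqlCommand("dbo.CallLogMsgProc", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.ExecuteNonQuery(); // throws SqlException, but the proc's
                                   // COMMIT has already persisted everything
                                   // done so far, including CallSaveDataProc's work
    }

    scope.Complete(); // never reached; rollback on Dispose() finds
                      // nothing left to roll back
}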


Using @Async inside a transaction in a Spring application

I have a Spring application which updates a particular entity's details in a MySQL DB inside a @Transactional method. Within the same method, I call (via @Async) an endpoint of a second Spring application, which reads the same entity from the MySQL DB and updates a value in Redis storage.
Now the problem is that every time I update some value for the entity, sometimes it's updated in Redis and sometimes it's not.
When I tried to debug, I found that the second application sometimes picks up the old value instead of the updated value when it reads the entity from MySQL.
Can anyone suggest what can be done to avoid this and make sure the second application always picks up the updated value of that entity from MySQL?
The answer from M. Deinum is good, but there is still another way to achieve this which may be simpler for your case, depending on the state of your current application.
You could simply wrap the call to the async method in an event that is processed after your current transaction commits, so the updated entity is read from the DB correctly every time.
It's quite simple to do; let me show you:
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Transactional
public void doSomething() {
    // application code here

    // this code will still execute async - but only after the
    // outer transaction that surrounds this lambda has completed
    executeAfterTransactionCommits(() -> theOtherServiceWithAsyncMethod.doIt());

    // more business logic here, in the same transaction
}

private void executeAfterTransactionCommits(Runnable task) {
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            task.run();
        }
    });
}
Basically what happens here is that we register a synchronization with the current transaction and override only the afterCommit method - there are other methods there that might be useful, so check them out. To avoid typing the same boilerplate code elsewhere, and to keep the calling method readable, I extracted the registration into a helper method.
The solution is not that hard: apparently you want to trigger an update after the data has been written to the database. A @Transactional method only commits after the method has finished executing. If an @Async method is called at the end of that method, then depending on the duration of the commit (or of the actual REST call) the transaction might or might not have committed yet.
As something outside of your transaction can only see committed data, it might see the updated value (if already committed) or still the old one. This also depends on the isolation level of your transaction, but you generally don't want to use an exclusive lock on the database for performance reasons.
To fix this, the @Async method should not be called from inside the @Transactional method but right after it returns. That way the data is always committed and the other service will see the updated data.
@Service
public class WrapperService {

    private final TransactionalEntityService service1;
    private final AsyncService service2;

    public WrapperService(TransactionalEntityService service1, AsyncService service2) {
        this.service1 = service1;
        this.service2 = service2;
    }

    public void updateAndSyncEntity(Entity entity) {
        service1.update(entity); // Update in DB first
        service2.sync(entity);   // After the commit, trigger a sync with the remote system
    }
}
This service is non-transactional, so the service1.update call, which presumably is @Transactional, will commit its update to the database. When that is done you can trigger the external sync.

Grails Immediate commit for objects in a transaction

In my project there is a table called process_detail. A row is inserted into this table as soon as a cron process starts, and it is updated when the cron process completes. We are using Grails, which handles the transaction at the service-method level, i.e. the transaction starts at the beginning of the method, commits if the method executes successfully, and rolls back on any exception.
What happens here is that if the transaction fails, this row is rolled back as well. I don't want that, because this is a kind of log table. I tried creating a nested transaction to save this row and update it at the end, but that fails with a lock acquisition exception.
I am thinking of using MyISAM for this particular table; that way I don't have to worry about transactions, because MyISAM doesn't support them: writes take effect immediately and no rollback is possible. Here's pseudocode for what I am trying to achieve:
def someProcess() {
    // Transaction starts
    saveProcessDetail(details) // Commit this immediately; should not roll back if the code below fails
    someOtherWork
    updateProcessDetail(details) // Commit this immediately as well
    // Transaction ends
}
Pseudocode for saving and updating the process detail:
def saveProcessDetail(processName, processStatus) {
    ProcessDetail pd = new ProcessDetail(processName, processStatus)
    pd.save()
}

def updateProcessDetail(processDetail, processStatus) {
    processDetail.processStatus = processStatus
    processDetail.save()
}
Please advise if there is a better way of doing this with InnoDB. The answer can be at the MySQL level; I can work out the Grails side myself. Let me know if any other info is required.
Make someProcess @NonTransactional, then manage the transactional behavior yourself. Write the initial saveProcessDetail with a flush: true, then make the remainder of the processing transactional, with withTransaction?
Or
@NonTransactional
def someProcess() {
    saveProcessDetail(details) // I'd still use a flush: true
    transactionalProcessWork()
}

@Transactional
def transactionalProcessWork() {
    someOtherWork()
    updateProcessDetail(details)
}

CakePHP 3.0 - How do I log an error on transactionals that don't use execute?

I'm trying to run a function that goes something like this:
$records->connection()->transactional(function () use ($records, $entities) {
    foreach ($entities as $record) {
        $records->save($record, ['atomic' => false]);
    }
});
Is there a way to check and see if this transactional threw an error, or is there just something inherently wrong with running a transactional in this fashion vs. executes?
The end goal is to update a set number of entities, but update none if an error was thrown. The functions above are abstract enough to allow different aspects of the entity to be changed, so this method was played around with rather than specific executes to allow for ease of saving through entities.
ConnectionInterface::transactional() will issue a rollback in case the callback returns false or throws an exception (the exception is re-thrown afterwards).
Quote from the docs:
[...]
The transactional method will do the following:
Call begin.
Call the provided closure.
If the closure raises an exception, a rollback will be issued. The original exception will be re-thrown.
If the closure returns false, a rollback will be issued.
If the closure executes successfully, the transaction will be committed.
[...]
Cookbook > Database Access & ORM > Database Basics > Using Transactions
[...]
Returns mixed The return value of the callback.
[...]
API > \Cake\Datasource\ConnectionInterface::transactional()
So exceptions are already covered (you can simply catch them), and the return value of transactional() is whatever your closure returns, so all you may additionally need is to return false from your closure in case a Table::save() call fails.

DataContext connection closed or transaction completed unexpectedly while submitting changes within a TransactionScope transaction?

Code
double timeout_in_hours = 6.0;
MyDataContext db = new MyDataContext();
using (TransactionScope tran = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
        Timeout = TimeSpan.FromHours(timeout_in_hours)
    },
    EnterpriseServicesInteropOption.Automatic))
{
    int total_records_processed = 0;
    foreach (DataRow datarow in data.Rows)
    {
        // Code runs some commands on the DataContext (db),
        // possibly reading/writing records and calling db.SubmitChanges
        total_records_processed++;
        try
        {
            db.SubmitChanges();
        }
        catch (Exception err)
        {
            MessageBox.Show(err.Message);
        }
    }
    tran.Complete();
    return total_records_processed;
}
While the above code is running, it successfully completes 6 or 7 hundred loop iterations. However, after 10 to 20 minutes, the catch block above catches the following error:
{"The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements."}
The tran.Complete call is never made, so why is it saying the transaction associated with the connection is completed?
Why, after successfully submitting hundreds of changes, does the connection associated with the DataContext suddenly enter a closed state? (That's the other error I sometimes get here).
When profiling SQL Server, there are just a lot of consecutive selects and inserts with really nothing else while it's running. The very last thing the profiler catches is a sudden "Audit Logout", and I'm not sure whether that's the cause of the problem or a side effect of it.
Wow, the max timeout is limited by machine.config: http://forums.asp.net/t/1587009.aspx/1
"OK, we resolved this issue. apparently the .net 4.0 framework doesn't
allow you to set your transactionscope timeouts in the code as we have
done in the past. we had to make the machine.config changes by adding
< system.transactions> < machineSettings maxTimeout="02:00:00"/>
< defaultSettings timeout="02:00:00"/> < /system.transactions>
to the machine.config file. using the 2.0 framework we did not have
to make these entries as our code was overriding teh default value to
begin with."
It seems that the timeout you set in TransactionScope's constructor is ignored or defeated by a maximum timeout setting in the machine.config file. There is no mention of this in the documentation for the TransactionScope constructor that accepts a timeout parameter: http://msdn.microsoft.com/en-us/library/9wykw3s2.aspx
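You can verify the effective ceiling in code; a quick sketch (the machine-wide maxTimeout defaults to 10 minutes, which lines up with the failure appearing after 10 to 20 minutes):

// Prints the machine-wide cap from machine.config's
// <machineSettings maxTimeout> and the default timeout; any
// TransactionScope timeout above the cap is silently clamped to it.
Console.WriteLine(System.Transactions.TransactionManager.MaximumTimeout);
Console.WriteLine(System.Transactions.TransactionManager.DefaultTimeout);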
This makes me wonder, what if this was a shared hosting environment I was dealing with, where I could not access the machine.config file? There's really no way to break up the transaction, since it involves creating data in multiple tables with relationships and identity columns whose values are auto-incremented. What a poor design decision. If this was meant to protect servers with shared hosting, it's pointless, because such a long-running transaction would be isolated to my own database only. Also, if a program specifies a longer timeout, then it obviously expects a transaction to take a longer amount of time, so it should be allowed. This limitation is just a pointless handicap IMO that's going to cause problems. See also: TransactionScope maximumTimeout

TransactionScope and Isolation Level

We have a problem using TransactionScope. TransactionScope gives us very good flexibility for using transactions across our Data Access Layer; this way we can use transactions implicitly or explicitly. There is some performance cost compared with ADO.NET transactions, but at this time that is not really a problem. However, we have a problem with locking. In the example code below, although the isolation level is set to ReadCommitted, it is not possible to run a SELECT statement against the testTable table from another client until the main transaction (in the Main method) commits, because there is a lock on the whole table. We also tried using a single connection across all the methods, with the same behavior. Our DBMS is SQL Server 2008. Is there something we didn't understand?
Regards
Anton Kalcik
See this sample code:
class Program
{
    public class DAL
    {
        private const string _connectionString = @"Data Source=localhost\fsdf;Initial Catalog=fasdfsa;Integrated Security=SSPI;";
        private const string inserttStr = @"INSERT INTO dbo.testTable (test) VALUES(@test);";

        /// <summary>
        /// Execute command on DBMS.
        /// </summary>
        /// <param name="command">Command to execute.</param>
        private void ExecuteNonQuery(IDbCommand command)
        {
            if (command == null)
                throw new ArgumentNullException("Parameter 'command' can't be null!");

            using (IDbConnection connection = new SqlConnection(_connectionString))
            {
                command.Connection = connection;
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        public void FirstMethod()
        {
            IDbCommand command = new SqlCommand(inserttStr);
            command.Parameters.Add(new SqlParameter("@test", "Hello1"));
            using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Required))
            {
                ExecuteNonQuery(command);
                sc.Complete();
            }
        }

        public void SecondMethod()
        {
            IDbCommand command = new SqlCommand(inserttStr);
            command.Parameters.Add(new SqlParameter("@test", "Hello2"));
            using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Required))
            {
                ExecuteNonQuery(command);
                sc.Complete();
            }
        }
    }

    static void Main(string[] args)
    {
        DAL dal = new DAL();
        TransactionOptions tso = new TransactionOptions();
        tso.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;

        using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Required, tso))
        {
            dal.FirstMethod();
            dal.SecondMethod();
            sc.Complete();
        }
    }
}
I don't think your issue has anything to do with the .NET TransactionScope concept. Rather, it sounds like you're describing the expected behavior of SQL Server transactions. Also, changing the isolation level only affects "data reads" not "data writes". From SQL Server BOL:
"Choosing a transaction isolation level does not affect the locks acquired to protect data modifications. A transaction always gets an exclusive lock on any data it modifies, and holds that lock until the transaction completes, regardless of the isolation level set for that transaction. For read operations, transaction isolation levels primarily define the level of protection from the effects of modifications made by other transactions."
What that means is that you can prevent the blocking behavior by changing the isolation level for the client issuing the SELECT statement(s). The READ COMMITTED isolation level (the default) won't prevent blocking. To prevent blocking the client, you would use the READ UNCOMMITTED isolation level, but you would have to account for the possibility that records may be retrieved that have been updated/inserted by an open transaction (i.e. they might go away if the transaction rolls back).
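For illustration, a minimal sketch of a reading client that opts into READ UNCOMMITTED for its session (the connection string, table, and column are assumed from the question's sample code):

// Sketch: this reader is not blocked by the writer's exclusive locks,
// but rows from the still-open transaction are visible and may vanish
// if that transaction rolls back (a dirty read).
using (var connection = new SqlConnection(_connectionString))
using (var command = new SqlCommand(
    "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; " +
    "SELECT test FROM dbo.testTable;", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader.GetString(0));
    }
}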
Good question for talking about transactions.
Your Main method is keeping the transaction open: the inner scopes enlist in the ambient transaction, so even though you call Complete() within the other methods, the locks on those rows are held until the outer transaction commits. A client reading under READ COMMITTED will not be able to read that table, which is expected, until you commit the locking transaction.
After the first method returns, the outer transaction holds an exclusive lock on the inserted row; after the second method returns, one more lock is added. If we execute a SELECT statement from another query window, that session shows a wait status. Once the Main method's transaction commits, the SELECT returns its results, and the lock output then shows only the shared lock taken by the SELECT itself.
In that lock output, X means an exclusive lock and IX an intent lock. You can read more in my blog post about transactions.
If you want to read without waiting, you can use the NOLOCK hint. If you want reads to be possible after the first method commits, you can remove the outer scope.
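The table-hint variant, as a sketch (same assumed table and connection string as the sample code); unlike a session-level isolation setting, WITH (NOLOCK) applies only to the one table reference:

// Sketch: a per-table dirty read via a lock hint. The reader sees
// "Hello1"/"Hello2" even while Main's transaction is still open, and
// those rows would disappear if it rolled back instead of committing.
using (var connection = new SqlConnection(_connectionString))
using (var command = new SqlCommand(
    "SELECT test FROM dbo.testTable WITH (NOLOCK);", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader.GetString(0));
    }
}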