We have a problem using TransactionScope. TransactionScope gives us great flexibility for using transactions across our Data Access Layer; this way we can use transactions implicitly or explicitly. There is some performance difference compared to plain ADO.NET transactions, but that is not really a problem at the moment. The problem is locking. In the example code below, although the isolation level is set to ReadCommitted, it is not possible to run a SELECT statement against the testTable table from another client until the main transaction (in the Main method) is committed, because the whole table is locked. We also tried using a single connection across all the methods, but the behavior is the same. Our DBMS is SQL Server 2008. Is there something we have misunderstood?
Regards
Anton Kalcik
See this sample code:
class Program
{
    public class DAL
    {
        private const string _connectionString = @"Data Source=localhost\fsdf;Initial Catalog=fasdfsa;Integrated Security=SSPI;";
        private const string inserttStr = @"INSERT INTO dbo.testTable (test) VALUES(@test);";

        /// <summary>
        /// Execute command on DBMS.
        /// </summary>
        /// <param name="command">Command to execute.</param>
        private void ExecuteNonQuery(IDbCommand command)
        {
            if (command == null)
                throw new ArgumentNullException("Parameter 'command' can't be null!");

            using (IDbConnection connection = new SqlConnection(_connectionString))
            {
                command.Connection = connection;
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        public void FirstMethod()
        {
            IDbCommand command = new SqlCommand(inserttStr);
            command.Parameters.Add(new SqlParameter("@test", "Hello1"));

            using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Required))
            {
                ExecuteNonQuery(command);
                sc.Complete();
            }
        }

        public void SecondMethod()
        {
            IDbCommand command = new SqlCommand(inserttStr);
            command.Parameters.Add(new SqlParameter("@test", "Hello2"));

            using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Required))
            {
                ExecuteNonQuery(command);
                sc.Complete();
            }
        }
    }

    static void Main(string[] args)
    {
        DAL dal = new DAL();
        TransactionOptions tso = new TransactionOptions();
        tso.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;

        using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Required, tso))
        {
            dal.FirstMethod();
            dal.SecondMethod();
            sc.Complete();
        }
    }
}
I don't think your issue has anything to do with the .NET TransactionScope concept. Rather, it sounds like you're describing the expected behavior of SQL Server transactions. Also, changing the isolation level only affects "data reads" not "data writes". From SQL Server BOL:
"Choosing a transaction isolation level does not affect the locks acquired to protect data modifications. A transaction always gets an exclusive lock on any data it modifies, and holds that lock until the transaction completes, regardless of the isolation level set for that transaction. For read operations, transaction isolation levels primarily define the level of protection from the effects of modifications made by other transactions."
What that means is that you can prevent the blocking behavior by changing the isolation level for the client issuing the SELECT statement(s). The READ COMMITTED isolation level (the default) won't prevent blocking. To prevent blocking the client, you would use the READ UNCOMMITTED isolation level, but you would have to account for the possibility of retrieving records that have been updated/inserted by an open transaction (i.e. they might go away if the transaction rolls back).
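For illustration, here is a minimal sketch (not from the original post) of what the reading client could look like; it assumes the same dbo.testTable and a connection string like the one in the question:
// Sketch only: a reader that opts into READ UNCOMMITTED so the SELECT is not
// blocked by the writer's exclusive locks. Uncommitted rows may appear in the
// result and may later disappear if the writing transaction rolls back.
// (Assumes: using System; using System.Data.SqlClient; using System.Transactions;)
string connectionString = "..."; // same connection string as in the question

TransactionOptions readOptions = new TransactionOptions();
readOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;

using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, readOptions))
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand select = new SqlCommand("SELECT test FROM dbo.testTable;", connection))
{
    connection.Open();
    using (SqlDataReader reader = select.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader.GetString(0));
    }
    scope.Complete();
}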
Good question for talking about transactions.
Your Main method holds the outer transaction open. Even though the inner methods complete their scopes, the locks on those inserted rows are kept until the outer transaction commits. You will not be able to read that table under READ COMMITTED, which is expected, until you commit the locking transaction.
Here is the lock list after the first method returns:
After the second method returns, one more lock is added to the table.
If we execute a SELECT statement from a query window with SPID 55, you will see it sitting in a wait state.
After your Main method's transaction commits, you will get the SELECT statement's result, and the lock list will only show the shared lock from our SELECT query.
X means exclusive lock, IX means intent lock. You can read more in my blog post about transactions.
If you want to read without waiting, you can use the NOLOCK hint (a sketch follows below). If you want to read after the first method commits, you can remove that outer scope.
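As a rough sketch of the NOLOCK option (again assuming the same dbo.testTable and connection string as in the question), the reading side would look like this; it carries the same dirty-read caveat as READ UNCOMMITTED:
// Sketch only: read dbo.testTable without waiting on the writer's locks.
// WITH (NOLOCK) behaves like READ UNCOMMITTED for this table, so rows from
// open (uncommitted) transactions can show up in the result.
string connectionString = "..."; // same connection string as in the question

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand select = new SqlCommand("SELECT test FROM dbo.testTable WITH (NOLOCK);", connection))
{
    connection.Open();
    using (SqlDataReader reader = select.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader.GetString(0));
    }
}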
Related
I have a Spring application which updates particular entity details in a MySQL DB using a @Transactional method. Within the same method, I am trying to call another endpoint using @Async; that endpoint belongs to a second Spring application which reads the same entity from the MySQL DB and updates the value in Redis storage.
Now the problem is that every time I update some value for the entity, sometimes it is updated in Redis and sometimes it's not.
When I tried to debug, I found that the second application sometimes picks up the old value instead of the updated value when it reads the entity from MySQL.
Can anyone suggest what can be done to avoid this and make sure that the second application always picks up the updated value of that entity from MySQL?
The answer from M. Deinum is good, but there is still another way to achieve this which may be simpler for your case, depending on the state of your current application.
You could simply wrap the call to the async method in an event that is processed after your current transaction commits, so you will read the updated entity from the DB correctly every time.
It is quite simple to do; let me show you:
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Transactional
public void doSomething() {
    // application code here

    // this code will still execute async - but only after the
    // outer transaction that surrounds this lambda is completed
    executeAfterTransactionCommits(() -> theOtherServiceWithAsyncMethod.doIt());

    // more business logic here in the same transaction
}

private void executeAfterTransactionCommits(Runnable task) {
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            task.run();
        }
    });
}
Basically what happens here is that we supply an implementation of the current transaction's synchronization callback and only override the afterCommit method - there are other methods there that might be useful, check them out. To avoid typing the same boilerplate wherever you want to use this, and to keep the method readable, I extracted it into a helper method.
The solution is not that hard: apparently you want to trigger an update after the data has been written to the database. A @Transactional method only commits after it finishes executing. If an @Async method is called at the end of that method, then depending on the duration of the commit (or the actual REST call) the transaction might or might not have committed yet.
As something outside of your transaction can only see committed data, it might see the updated value (if already committed) or still the old one. This also depends on the isolation level of your transaction, but you generally don't want to take an exclusive lock on the database for performance reasons.
To fix this, the @Async method should not be called from inside the @Transactional method but right after it. That way the data is always committed and the other service will see the updated data.
@Service
public class WrapperService {

    private final TransactionalEntityService service1;
    private final AsyncService service2;

    public WrapperService(TransactionalEntityService service1, AsyncService service2) {
        this.service1 = service1;
        this.service2 = service2;
    }

    public void updateAndSyncEntity(Entity entity) {
        service1.update(entity); // Update in DB first
        service2.sync(entity);   // After commit trigger a sync with remote system
    }
}
This service is non-transactional, and as such service1.update, which presumably is @Transactional, will update the database. When that is done you can trigger the external sync.
In my project there is a table called process_detail. A row is inserted into this table as soon as a cron process starts and is updated when the cron process completes. We are using Grails, which handles transactions at the service-method level, i.e. a transaction starts at the beginning of the method, commits if the method executes successfully, and rolls back if there is any exception.
What happens is that if the transaction fails, this row is rolled back as well, which I don't want, because this is a kind of log table. I tried creating a nested transaction to save this row and update it at the end, but that failed with a lock acquisition exception.
I am thinking of using MyISAM for this particular table; that way I don't have to worry about transactions, because MyISAM does not support them, so the write is applied immediately and no rollback is possible. Here's pseudo code for what I am trying to achieve.
def someProcess() {
    // Transaction starts
    saveProcessDetail(details);   // Commit this immediately, should not roll back if the code below fails
    someOtherWork;
    updateProcessDetail(details); // Commit this immediately as well
    // Transaction ends
}
Pseudo code for save and update of the process detail:
def saveProcessDetail(processName, processStatus) {
    ProcessDetail pd = new ProcessDetail(processName, processStatus);
    pd.save();
}

def updateProcessDetail(processDetail, processStatus) {
    processDetail.processStatus = processStatus;
    processDetail.save();
}
Please advise if there is a better way of doing this with InnoDB. The answer can be at the MySQL level; I can work out the Grails part myself. Let me know if any other info is required.
Make someProcess @NonTransactional, then manage the transactional behavior yourself. Write the initial saveProcessDetail with a flush: true, then make the remainder of the processing transactional, perhaps with withTransaction?
Or
@NonTransactional
def someProcess() {
    saveProcessDetail(details) // I'd still use a flush: true
    transactionalProcessWork()
}

@Transactional
def transactionalProcessWork() {
    someOtherWork()
    updateProcessDetail(details)
}
I have the following method in my DAL:
public void SavePlan()
{
    using (TransactionScope scope =
        new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        CallSaveDataProc();
        CallLogMsgProc();
        scope.Complete();
    }
}
I have deliberately put a COMMIT TRANSACTION in CallLogMsgProc without starting a transaction there. This results in a SqlException being thrown from the CallLogMsgProc procedure, and scope.Complete() never executes.
However, in my database I'm still seeing the records saved by the first call, CallSaveDataProc. Am I doing something wrong?
Starting and committing transactions have to be paired, and each pair should ideally be in the same scope (though one pair doesn't have to be in the same scope as another pair).
So you have a case of starting a transaction via the new TransactionScope, followed by a COMMIT in your stored procedure (which saves the work, as you are seeing), followed by an attempt to commit the transaction "seen" by the TransactionScope, which has now become invalid.
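To make the pairing rule concrete, here is the SavePlan pattern from the question again, annotated as a sketch only; it assumes CallLogMsgProc can be changed so that it no longer issues an unmatched COMMIT:
public void SavePlan()
{
    // The scope owns the transaction: it begins here and is committed (or rolled
    // back) only when the using block ends. Nothing inside should COMMIT on its own.
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        CallSaveDataProc();   // enlists in the ambient transaction
        CallLogMsgProc();     // must not COMMIT a transaction it did not BEGIN;
                              // inside the proc, either pair BEGIN TRAN with COMMIT or drop the COMMIT
        scope.Complete();     // only this marks the ambient transaction to be committed
    }
}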
Code
double timeout_in_hours = 6.0;
MyDataContext db = new MyDataContext();

using (TransactionScope tran = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
        Timeout = TimeSpan.FromHours(timeout_in_hours)
    },
    EnterpriseServicesInteropOption.Automatic))
{
    int total_records_processed = 0;

    foreach (DataRow datarow in data.Rows)
    {
        // Code runs some commands on the DataContext (db),
        // possibly reading/writing records and calling db.SubmitChanges
        total_records_processed++;

        try
        {
            db.SubmitChanges();
        }
        catch (Exception err)
        {
            MessageBox.Show(err.Message);
        }
    }

    tran.Complete();
    return total_records_processed;
}
While the above code is running, it successfully completes 6 or 7 hundred loop iterations. However, after 10 to 20 minutes, the catch block above catches the following error:
{"The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements."}
The tran.Complete call is never made, so why is it saying the transaction associated with the connection is completed?
Why, after successfully submitting hundreds of changes, does the connection associated with the DataContext suddenly enter a closed state? (That's the other error I sometimes get here).
When profiling SQL Server, there are just a lot of consecutive SELECTs and INSERTs and really nothing else while it's running. The very last thing the profiler catches is a sudden "Audit Logout", and I'm not sure whether that's the cause of the problem or a side effect of it.
Wow, the max timeout is limited by machine.config: http://forums.asp.net/t/1587009.aspx/1
"OK, we resolved this issue. apparently the .net 4.0 framework doesn't
allow you to set your transactionscope timeouts in the code as we have
done in the past. we had to make the machine.config changes by adding
< system.transactions> < machineSettings maxTimeout="02:00:00"/>
< defaultSettings timeout="02:00:00"/> < /system.transactions>
to the machine.config file. using the 2.0 framework we did not have
to make these entries as our code was overriding teh default value to
begin with."
It seems that the timeout you set in the TransactionScope constructor is ignored or defeated by a maximum timeout setting in the machine.config file. There is no mention of this in the documentation for the TransactionScope constructor that accepts a timeout parameter: http://msdn.microsoft.com/en-us/library/9wykw3s2.aspx
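If you want to check the ceiling that is actually in effect at runtime, System.Transactions exposes it through TransactionManager; a quick sketch:
// Prints the machine-wide ceiling (machineSettings maxTimeout) and the default
// timeout; a TransactionScope timeout larger than MaximumTimeout is clamped to it.
Console.WriteLine(System.Transactions.TransactionManager.MaximumTimeout);
Console.WriteLine(System.Transactions.TransactionManager.DefaultTimeout);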
This makes me wonder, what if this was a shared hosting environment I was dealing with, where I could not access the machine.config file? There's really no way to break up the transaction, since it involves creating data in multiple tables with relationships and identity columns whose values are auto-incremented. What a poor design decision. If this was meant to protect servers with shared hosting, it's pointless, because such a long-running transaction would be isolated to my own database only. Also, if a program specifies a longer timeout, then it obviously expects a transaction to take a longer amount of time, so it should be allowed. This limitation is just a pointless handicap IMO that's going to cause problems. See also: TransactionScope maximumTimeout
Using the following code:
EntityManager manager = factory.createEntityManager();
manager.setFlushMode(FlushModeType.AUTO);
PhysicalCard card = new PhysicalCard();
card.setIdentifier("012345ABCDEF");
card.setStatus(CardStatusEnum.Assigned);
manager.persist(card);
manager.close();
when the code runs past this line, the "card" record does not appear in the database. However, if I use FlushModeType.COMMIT and a transaction, like this:
EntityManager manager = factory.createEntityManager();
manager.setFlushMode(FlushModeType.COMMIT);
manager.getTransaction().begin();
PhysicalCard card = new PhysicalCard();
card.setIdentifier("012345ABCDEF");
card.setStatus(CardStatusEnum.Assigned);
manager.persist(card);
manager.getTransaction().commit();
manager.close();
it works fine. From EclipseLink's log I can see that the first snippet doesn't issue an INSERT statement, while the second one does.
Am I missing something here? I'm using EclipseLink 2.3 and MySQL Connector/J 5.1.
I am assuming that you are using EclipseLink in a Java SE application, or in a Java EE application but with an application managed EntityManager instead of a container managed EntityManager.
In both scenarios, all updates made to the persistence context are flushed only when the transaction associated with the EntityManager commits (using EntityTransaction.commit), or when the EntityManager's persistence context is flushed (using EntityManager.flush). This is why the second code snippet issues the INSERT: it invokes the EntityTransaction's begin and commit methods, while the first doesn't; a call to persist on its own does not issue an INSERT.
As far as FlushModeType values are concerned, the API documentation states the following:
COMMIT
public static final FlushModeType COMMIT
Flushing to occur at transaction commit. The provider may flush at
other times, but is not required to.
AUTO
public static final FlushModeType AUTO
(Default) Flushing to occur at query execution.
Since no queries are executed in the first case, no flushing occurs, i.e. no INSERT statement corresponding to the persistence of the PhysicalCard entity is issued. It is the explicit commit of the EntityTransaction in the second case that results in the INSERT statement being issued.