I'm building a database application using the MySQL connector with EF6.
I'm looking for a way to handle concurrent updates, e.g.:
Client A updates a row.
Client B comes in and updates the same row.
Client B should have to wait until Client A commits his update.
Code:
using (myEntities db = new myEntities())
{
    db.Database.Connection.Open();
    try
    {
        using (var scope = db.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            var test = db.customer_table.Where(x => x.id == 38).FirstOrDefault();
            test.bank_holder_name = "CLIENT A";
            db.SaveChanges();
            scope.Commit();
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
While debugging, I purposely paused Client A at the SaveChanges() step.
But Client B could finish his update without any wait.
Is anyone here experiencing this issue?
P/S: I have done a lot of reading and experimenting around Entity Framework concurrency, e.g.:
a) Optimistic locking with a row_version column (does not work reliably if the timestamp is not unique enough to catch the concurrency)
b) TransactionScope (same result as above)
Does anyone have an idea how to stop concurrent updates in Entity Framework? Thanks in advance!
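What I am effectively after is something like the following pessimistic lock. This is just a rough sketch, assuming InnoDB row locks via SELECT ... FOR UPDATE and assuming the MySQL provider accepts the @id parameter syntax used below; the entity and column names are taken from the code above:

using (var db = new myEntities())
{
    using (var tx = db.Database.BeginTransaction())
    {
        // DbSet.SqlQuery returns tracked entities, so SaveChanges picks up the change.
        // FOR UPDATE takes an InnoDB row lock, so a second client running the same
        // statement blocks until this transaction commits or rolls back.
        var test = db.customer_table
            .SqlQuery("SELECT * FROM customer_table WHERE id = @id FOR UPDATE",
                      new MySql.Data.MySqlClient.MySqlParameter("@id", 38))
            .FirstOrDefault();

        test.bank_holder_name = "CLIENT A";
        db.SaveChanges();
        tx.Commit(); // Client B's FOR UPDATE on the same row waits until this point
    }
}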
I implemented a dashboard that checks, at every program start, a list of Requirement objects for a number of different characteristics such as progress, missing data and the like, and sets a dedicated beacon on the UI for each characteristic.
protected void initializePerformanceIndicator() {
try {
updateA();
updateB();
...
updateF();
updateG();
} catch (Exception e) {
ErrorHandler.showError("Cannot show KPI Performance", e);
}
}
The checks have different compute demands, some are faster and some are slower, so each of these checks runs in a dedicated Task to give the user some feedback. The skeleton of such a Task is always the same:
protected void updateA() throws Exception {
    Task<Void> task = new Task<Void>() {
        @Override
        protected Void call() throws Exception {
            embeddedBudgetKPIController.setHyperlink("Budget", null);
            embeddedBudgetKPIController.setToolTip("...");
            ObservableList<UserRequirement> issues = FXCollections.observableArrayList();
            List<UserRequirement> requirements = reqService.getAllUserRequirements(false); // all requirements of the selected product
            for (UserRequirement req : requirements) {
                if (*some criteria*) {
                    issues.add(req);
                }
            }
            if (issues.isEmpty()) {
                embeddedBudgetKPIController.setBeaconColor(Color.GREEN);
            } else {
                embeddedBudgetKPIController.setBeaconColor(Color.RED);
            }
            return null;
        }
    };
    task.setOnSucceeded(e -> {
        // Nothing to do
    });
    Thread tt = new Thread(task);
    tt.start();
}
Before initializePerformanceIndicator is called, I have already retrieved the data from the database elsewhere, querying a number of Spring repositories:
protected final ObservableList<UserRequirement> allUserRequirements = FXCollections.observableArrayList();

public synchronized ObservableList<UserRequirement> getAllUserRequirements(boolean forceUpdate) throws Exception {
    logger.debug(""); // show that this method is called
    Product selectedProduct = SelectedScope.getSelectedProduct();
    if (selectedProduct == null) {
        throw new Exception("No selProduct selected");
    }
    if (forceUpdate || allUserRequirements.isEmpty()) {
        allUserRequirements.clear();
        allUserRequirements.addAll(epicRepository.findByProductAndRevisionSuccessorIsNull(selectedProduct));
        allUserRequirements.addAll(themeRepository.findByProductAndRevisionSuccessorIsNull(selectedProduct));
        allUserRequirements.addAll(userStoryRepository.findByProductAndRevisionSuccessorIsNull(selectedProduct));
        allUserRequirements.addAll(tangibleRepository.findByProductAndRevisionSuccessorIsNull(selectedProduct));
    }
    return allUserRequirements;
}
As you can see, updateBudgetKPIController calls getAllUserRequirements with the parameter false, so it returns the buffered result set and does not re-fetch the data from the database. So far everything is fine.
I can run each of these Tasks individually without problems, and I have tried a number of combinations with two Tasks, which also work fine. But the program never shows more than three or four beacons. Which ones are shown differs as well, which is to be expected given the different Tasks. If I exceed three or four Tasks I often get no error at all; the UI just never shows more than three or four beacons.
Sometimes I do get an error message, which is
WARN 08:14 o.h.e.j.s.SqlExceptionHelper.logExceptions:137: SQL Error: 0, SQLState: S1009
ERROR 08:14 o.h.e.j.s.SqlExceptionHelper.logExceptions:142: No operations allowed after statement closed.
I debugged it and realized that I was generating far too many select statements. The UserRequirement entity has almost a dozen @OneToMany relations, some of which were defined with FetchType.LAZY, so I thought it would be better anyway to configure all these relations as
@OneToMany(fetch = FetchType.LAZY, mappedBy = "parent", cascade = CascadeType.ALL)
Because of the LAZY loading, every Task tries to load additional data in the if(*some criteria*) part.
The problem did not disappear, but I got more information, as the error is now
WARN 11:02 o.h.c.i.AbstractPersistentCollection.withTemporarySessionIfNeeded:278: Unable to close temporary session used to load lazy collection associated to no session
WARN 11:02 o.h.e.j.s.SqlExceptionHelper.logExceptions:137: SQL Error: 0, SQLState: S1009
ERROR 11:02 o.h.e.j.s.SqlExceptionHelper.logExceptions:142: No operations allowed after statement closed.
So I do have a LAZY loading issue.
I am using Spring Boot 2.1.6, MySQL 8.0.15 Community Server, Hibernate Core 5.3.10.Final, Java 1.8.0_211 and com.mysql.cj.jdbc.Driver.
From a former issue, I have in my properties file the following configuration
# Prevent LazyInitializationException
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
I don't know whether this has any side effects.
Probably changing the LAZY loading to EAGER will fix it - haven't tried yet - but it would delay program start significantly. Therefore I would prefer a solution with LAZY loading.
Any ideas? I also appreciate any ideas on how to further isolate the root cause, as the error message is not really explicit and I can't see which part of my code triggers it. Plus, when I debug it the behavior changes, because all Tasks then run sequentially rather than in parallel. Thank you in advance.
The issue was caused by different Tasks accessing the same getter of some of the entities. If the first getter call opened a connection, the second call piggybacked on it, and when the first call then closed the ResultSet, the second call was in trouble. Synchronizing the getter methods solved the problem.
In short, what I am trying to solve is how to recover from certain database errors in a Grails application using Hibernate and continue with the transaction, skipping over the failed row updates that are part of a batch of changes.
The application uses Grails 2.3.11 but I have also tried with version 1.3.8 with similar failed results.
Basically there is a Grails service class that iterates over a list of imported records and attempts to update the associated master records appropriately. In certain situations exceptions might occur during the domain.save(flush:true) call, e.g. an org.hibernate.exception.DataException thrown due to issues like "Data truncation: Data too long for column ...".
At this point I have tried:
Disabling transactions
Using domainObj.withTransaction() for each individual record
Trying various #Transactional annotations
Calling domain.clearErrors() and domain.discard() after catching the exception
Tried using a nested service with Transactional annotation with noRollbackFor as shown below
A number of other approaches but nothing I've tried has worked
Example code:
@Transactional
class UpdateService {
    public void updateBatch(Integer batchId) {
        ...
        list.each { record ->
            record.value = 123
            try {
                nestedService.saveDomain(record)
            } catch (e) {
                record.clearErrors()
                record.discard()
            }
        }
        batch.status = "POSTED"
        batch.save()
    }
}

@Transactional
class NestedService {
    @Transactional(propagation = Propagation.REQUIRED, noRollbackFor = RuntimeException.class)
    public void saveDomain(domainObj) throws RuntimeException {
        if (domainObj.validate() && domainObj.save(flush: true)) {
            log.info "domain $domainObj was saved"
        }
    }
}
Once an error occurs I can't seem to clear out the hibernate session. On each subsequent record being updated I receive the error:
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction
where it indicates the original failed domain id.
Revision:
Vahid, thanks for the suggestions, I have tried that. I realized one issue is that I am passing objects across transactional boundaries, so I experimented with having the NestedService class do something along the lines of:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void saveDomain(domainObj) {
    def newObj = Domain.get(domainObj.id)
    newObj.properties = domainObj.properties
    if (newObj.validate() && newObj.save(flush: true)) { ... }
}
I expected that to work but the original domainObj still fails even though I'm not calling the save on it. Very strange...
A simple approach would be to loop and then use validate(). If it does fail, then just store the id of the failed entity and proceed.
if (!domainObject.validate()) {
    // store the id to try it again later?
} else {
    // save
}
I am writing an application that utilizes Windows Phone's LocalDB feature. I realized that I need to ensure that only one thread is performing operations on a given database, so I have created an AutoResetEvent object to coordinate the various threads vying for access to the database. My code goes pretty much like this:
class SomeClass
{
    AutoResetEvent DatabaseLock = new AutoResetEvent(true);

    public async void AddData(Person person)
    {
        await Task.Run(() =>
        {
            MyDataContext db = null;
            try
            {
                this.DatabaseLock.WaitOne();
                db = MyDataContext.GetInstance();
                db.People.InsertOnSubmit(person);
                db.SubmitChanges();
            }
            finally
            {
                if (db != null)
                    db.Dispose();
                this.DatabaseLock.Set();
            }
        });
    }
}
Obviously that's not the real code, but it's the same general pattern. Anyway, I decided to use the AutoResetEvent object here, as I have seen suggested online in multiple locations. However, I would be inclined to use a lock {...} statement instead.
Is there any reason to use AutoResetEvent? I feel like it's slow compared to locking an object.
You should use an AutoResetEvent when you need to signal to another thread.
In this case, you're just locking a resource, so the lock statement would be a better choice.
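For this case, a minimal sketch of the lock-based version could look like the following, reusing the hypothetical MyDataContext and Person types from the question. Since nothing inside the critical section awaits, lock is safe here:

class SomeClass
{
    private readonly object _databaseLock = new object();

    public async void AddData(Person person)
    {
        await Task.Run(() =>
        {
            // lock gives mutual exclusion for the database work; unlike AutoResetEvent
            // it cannot signal across threads, but that is not needed here.
            lock (_databaseLock)
            {
                using (var db = MyDataContext.GetInstance())
                {
                    db.People.InsertOnSubmit(person);
                    db.SubmitChanges();
                }
            }
        });
    }
}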
I've got an integration test that looks like this:
using (var tran = Connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
try
{
// Act.
var result = controller.Create(something);
// Assert.
Assert.IsNotNull(result);
}
catch (Exception exc)
{
Assert.Fail(exc.Message);
}
finally
{
tran.Rollback();
Connection.Close();
}
}
Now, in that Create method, I end up calling a stored procedure which returns multiple result sets.
Here's the code to call that SP:
var cmd = Database.Connection.CreateCommand();
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandText = "exec dbo.MySP @SomeParam";
cmd.Parameters.Add(new SqlParameter { Value = "test", ParameterName = "SomeParam" });
using (var rdr = cmd.ExecuteReader()) // <-- exception thrown here
{
    // code to read the result sets
}
I get the following exception:
System.InvalidOperationException: ExecuteReader requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
Which I guess makes sense, but I would have thought it would inherit the pending local transaction?
I previously had the above code open a new connection, but it just timed out due to an apparent lock the pending transaction had on certain tables, despite the read uncommitted isolation level.
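If I understand the message correctly, in plain ADO.NET I would have to enlist the command explicitly, something along these lines (pendingTransaction below is hypothetical; I don't have a clean way to hand the test's transaction down to this code):

var cmd = Database.Connection.CreateCommand();
cmd.Transaction = pendingTransaction;   // hypothetical: the SqlTransaction opened by the test
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandText = "dbo.MySP";           // with CommandType.StoredProcedure the text is just the procedure name
cmd.Parameters.Add(new SqlParameter { ParameterName = "SomeParam", Value = "test" });

using (var rdr = cmd.ExecuteReader())   // no longer throws once the command is enlisted
{
    // read the result sets
}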
Basically, I need to be able to have an integration test which:
Opens a transaction
Does some stuff, including saving some records to the db, then calling another stored procedure which accesses data that includes the newly created records
Rolls back the transaction.
Any ideas, guys?
This works:
using (new TransactionScope(TransactionScopeOption.Required,
new TransactionOptions
{
IsolationLevel = IsolationLevel.ReadUncommitted
}))
This doesn't:
using (var tran = Connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
It must have something to do with the fact that TransactionScope lives outside of the actual Connection, so it wraps all connections that are opened inside it, whilst the latter code starts the transaction on one specific connection.
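Put together with the test from the question, the working variant looks roughly like this (a sketch only, reusing the controller and something names from above; note that this IsolationLevel is the System.Transactions one, not System.Data):

using (new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
    // Act.
    var result = controller.Create(something);

    // Assert.
    Assert.IsNotNull(result);

    // No call to Complete(), so everything is rolled back when the scope is disposed.
}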
I am trying to learn how best to use the Reactive Extensions library and have set up a simple test WPF application to view a logging database table. In a ViewModel class I am populating an ObservableCollection with the first 100 log entries from a LINQ to SQL DataContext, and I'm trying to use Rx to keep the UI responsive.
The following snippet works unless the database is unavailable, at which point the app throws an exception and crashes. Where would be the best place to handle database connection exceptions, and why are they not handled by the OnError method of the observer?
ObservableCollection<LogEntry> _logEntries = new ObservableCollection<LogEntry>();
DataContext dataContext = new DataContext( "connection string" );
(from e in dataContext.LogEntries
select e).Take( 100 ).ToObservable()
.SubscribeOn( Scheduler.ThreadPool )
.ObserveOnDispatcher()
.Subscribe( _logEntries.Add, ex => System.Diagnostics.Debug.WriteLine( ex.ToString() ) );
Try this instead of ToObservable:
public static IObservable<T> SafeToObservable<T>(this IEnumerable<T> This)
{
    return Observable.Create<T>(subj => {
        try {
            foreach (var v in This) {
                subj.OnNext(v);
            }
            subj.OnCompleted();
        } catch (Exception ex) {
            subj.OnError(ex);
        }
        return Disposable.Empty;
    });
}
In general though, this isn't a great use of Rx since the data source isn't very easy to Rx'ify - in fact, the code will execute most of the work on the UI thread, send it out to random worker threads, then send it back (i.e. completely wasted work). Task + Dispatcher.BeginInvoke might suit you better here.
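If you go the Task route, a rough sketch might look like this, reusing the dataContext and _logEntries fields from the question: the query runs on a worker thread and only the collection updates are marshalled back to the UI thread.

Task.Factory.StartNew(() =>
    (from e in dataContext.LogEntries select e).Take(100).ToList())
.ContinueWith(t =>
{
    if (t.Exception != null)
    {
        // Database/connection failures surface here instead of crashing the app.
        System.Diagnostics.Debug.WriteLine(t.Exception.InnerException);
        return;
    }
    Application.Current.Dispatcher.BeginInvoke(new Action(() =>
    {
        foreach (var entry in t.Result)
            _logEntries.Add(entry);
    }));
});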