Linq2SQL locking object in multithreaded environment? - linq-to-sql

I have a Windows service application with an object that is processed over a rather long time. While it is being processed, the user can interact with the object from a GUI by calling WCF services on the service.
Sometimes (I haven't been able to reproduce the problem) it seems that when the user updates a child object on my main object, the processing can no longer find the object in the repository. Can this really happen?
Would wrapping the calls to the repository in a TransactionScope help?
ProcessThread: works on the object
WCF service: updates some child objects in a property on the object
ProcessThread: can't find the object
Any clues?
I'm creating a new DataContext every time, so it's not shared in any way.

It seems it was the concurrency problem "Missing and Double Reads Caused by Row Updates" described at http://technet.microsoft.com/en-us/library/ms190805.aspx.
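If that is the cause, a TransactionScope only helps if it raises the isolation level. A minimal sketch of one way to do that with snapshot isolation, assuming ALLOW_SNAPSHOT_ISOLATION has been enabled on the database (OrdersDataContext, Orders, and orderId are hypothetical names, not from the original question):

    using System;
    using System.Linq;
    using System.Transactions;

    public void ProcessOrder(int orderId)
    {
        // Snapshot isolation gives this reader a consistent view of the data,
        // so a concurrent UPDATE can no longer move a row and cause it to be
        // missed mid-scan.
        var options = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        using (var db = new OrdersDataContext()) // fresh context per operation, as you already do
        {
            var order = db.Orders.SingleOrDefault(o => o.Id == orderId);
            if (order != null)
            {
                // ... process the object ...
            }
            scope.Complete(); // commit; without this the transaction rolls back on Dispose
        }
    }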

Related

ActionScript3: Cannot save and load objects - AIR

I am trying to create functions to save and load objects. I am storing the objects in File.applicationStorageDirectory...and using file streams. At some point I thought my code worked, and then it stopped soon after. It also saved the last string I entered, but none of the others. I have pastebinned the functions I think are relevant; even a pointer in the right direction would help. I am aiming to publish this on an Apple iPad...
http://pastebin.com/61WLLUAB
I am in the process of learning to program, and appreciate any constructive criticism with my layout.
Thanks
I think flash.utils.registerClassAlias() and SharedObjects will be the key to loading/saving objects. Be sure to implement IExternalizable on every object that will be processed while you save and load your game.
As for why your function doesn't work: first, you don't actually restore your state between sessions; instead you initialize all the objects anew, write them to files, and then read them back, erasing the previous data. You need to try reading each object out of the file system before instantiating it anew, for each of the charID.IDs you have there. Your IDs are not instantiated when your Main constructor starts, so Main doesn't know whether any of the objects were saved beforehand.

Does a long lived Linq2Sql DataContext ever remove references to tracked objects?

We have inherited some code that makes use of Linq2Sql, and we've just found that an instance of the DataContext is being stored in a private static class member. As this class is used inside a web application, once the class is first used an instance of the data context is created and assigned to the static member. The result is a single data context instance that is used by all instances of that class (so across all users, a potential problem in itself) and that also lives for the duration of the web application, since it is held in a static class member.
Ignoring the bad design decisions that were originally taken, my question is: what happens to the data read into the data context? I know the NHibernate session keeps a reference to any object it reads or creates so it can track changes, and that it can slowly grow and never clears out unless you explicitly tell it to. Does Linq2Sql do something similar? If the web application lived forever (without recycling), would this Linq2Sql context slowly grow until the machine ran out of memory, or until it had potentially read the entire database while satisfying incoming requests? My understanding is that a context is not like a cache, which evicts items that expire or, when hitting a memory limit, starts removing the least-used items; because of what a data context has to do, I don't think it can ever evict anything. Has anyone had experience of this who can either confirm my understanding or point to a reference showing how a long-lived data context can be prevented from growing like this?
Yes, the DataContext will keep track of objects it has read if its ObjectTrackingEnabled property is set to true, which it is by default. You cannot change ObjectTrackingEnabled to false once the DataContext is already tracking state.
We set ObjectTrackingEnabled to false when we are only doing reads with no intention of making changes. Note that you cannot call SubmitChanges() when ObjectTrackingEnabled is false.
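For example, a minimal sketch of a read-only context (NorthwindDataContext and Customers are placeholder names):

    using (var db = new NorthwindDataContext())
    {
        // Must be set before the first query; changing it after the context
        // has started tracking state throws an InvalidOperationException.
        db.ObjectTrackingEnabled = false;

        var customers = db.Customers.Where(c => c.City == "London").ToList();

        // db.SubmitChanges(); // would throw: tracking is disabled
    }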
I hope that helps.
Yes, a DataContext will retain references to all of the objects that have ever been associated with it. Objects that have been deleted may be expelled from the cache, but otherwise it holds on to them, no matter how much memory that costs.
Depending on the architecture and the size of the data set, the worker process will at some point run out of memory and crash. The remedies are to disable object tracking (through ObjectTrackingEnabled), to refactor the code so that it uses a single DataContext per request, or (and this is not recommended) to regularly recycle the application-wide DataContext.
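To illustrate the growth (a sketch only; NorthwindDataContext, Customers, and CustomerID are placeholder names): a shared context keeps a tracked copy of every row it ever materializes, while a per-operation context lets them be garbage collected.

    // Shared context: every entity materialized in the loop stays referenced
    // by 'shared' for change tracking, so memory use grows with every batch.
    var shared = new NorthwindDataContext();
    for (int page = 0; page < 1000; page++)
    {
        var batch = shared.Customers
            .OrderBy(c => c.CustomerID)
            .Skip(page * 100).Take(100).ToList();
    }

    // Scoped context: everything it tracked becomes garbage when the using
    // block ends, so memory use stays flat.
    for (int page = 0; page < 1000; page++)
    {
        using (var db = new NorthwindDataContext())
        {
            var batch = db.Customers
                .OrderBy(c => c.CustomerID)
                .Skip(page * 100).Take(100).ToList();
        }
    }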

Problem using ExtendedPersistenceContext and ApplicationException

I'm trying to use an ExtendedPersistenceContext to implement the detached object pattern with EJB 3 and Seam.
I also have a business rule engine that processes my object when I merge it based on the data on the database.
When something goes wrong in a business rule, the app throws an exception marked with
@ApplicationException(rollback = true)
Unfortunately, according to the EJB spec and this SO question, Forcing a transaction to rollback on validation errors in Seam, that annotation forces all the objects to become detached.
So basically my object is in the same state as before (it contains the modifications made by the user), but it can't resolve its relations through the ExtendedPersistenceContext, since it's in the detached state.
This breaks my whole page, since I have AJAX calls that I still want to resolve even after the business engine fails.
I can't merge the object again, otherwise the modifications would be propagated to the DB, and I don't want that when there is an ApplicationException.
I want to roll back the transaction if a business validation fails, but I want my object to remain in a managed state so that it can resolve its relations using the extended persistence context.
Any suggestion?
To detach a single object you can use entityManager.detach(object); to detach all of the objects managed by an EntityManager, use entityManager.clear().
You can clone the objects to preserve the changes being made and keep them from being lost when the transaction rolls back on the exception. Make the changes on the cloned object, then apply them to the managed object before persisting.
If the object is detached, you will have to perform entityManager.refresh(object) to make it managed again, and then apply the changes from the cloned object to it.

Creating a static DataContext or creating one whenever it's needed: which is better, and why?

I have a function, and inside it I create a new DataContext() every time the function is called. What is the cost of creating a new DataContext()? Could I instead create a static DataContext() and use it everywhere? Since the DataContext keeps a full record of all the changes when SubmitChanges() fails, is there a way to remove those specific changes from the DataContext after a failed SubmitChanges()? My question is: which is better, creating a static DataContext() or creating one whenever it's needed?
This topic has been discussed quite a bit, and you should read this article about DataContext lifetime management. The short answer is that a DataContext is meant to be used for a single unit of work, typically one request. DataContext objects are cheap to construct, and there is no database overhead in creating one.
The main reasons to avoid a shared DataContext instance are thread safety and change tracking. Every modification you make to a repository object is captured and translated into update/insert/delete operations when you call SubmitChanges(). This feature breaks down when a single DataContext object is shared.
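This also answers the question about removing failed changes: if each call gets its own context, the pending changes from a failed SubmitChanges() simply die with that context. A minimal sketch, with OrdersDataContext, Orders, and Status as placeholder names:

    using System;
    using System.Data.Linq;
    using System.Linq;

    public void UpdateOrderStatus(int orderId, string status)
    {
        // One context per unit of work; nothing leaks across calls or threads.
        using (var db = new OrdersDataContext())
        {
            var order = db.Orders.Single(o => o.Id == orderId);
            order.Status = status;
            try
            {
                db.SubmitChanges();
            }
            catch (ChangeConflictException)
            {
                // No cleanup needed: the tracked changes are discarded along
                // with this context when the using block disposes it.
                throw;
            }
        }
    }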

grails and mysql batch processing

I'm trying to implement the advice found in this great blog post for batch processing in Grails with MySQL. The problem I'm having is that the periodic calls to session.clear() in my loop cause org.hibernate.LazyInitializationExceptions to be thrown. There's a quote down in the comments section of the page:
Your second point about potentially causing LIEs is absolutely true. If you're doing other things outside of importing with the current thread, you'll want to make sure to reattach any objects to the session after you're done with your clearing.
But how do I do that? Can anyone help me understand specifically how to "reattach any objects to the session" after I'm done clearing?
I'm also interested in parallelizing the database insertion process so that I can take advantage of having a multi core processor. Can anyone provide advice on how to do that in Grails?
Grails has a few methods to help with this (they leverage Hibernate under the covers).
If you know an object is detached, you can use the attach() method to reconnect it.
If you've made changes to the object while it was detached, you can use merge().
If for whatever reason you're not sure whether an object is attached to the session, you can use the isAttached() method to find out.
It might also be worth reviewing the Hibernate documentation on Session.