What will detach objects implicitly from a SQLAlchemy session?

We have a situation in which, at some point in our code, we see certain objects as detached, even though we never explicitly detach them ourselves. Which SQLAlchemy methods/actions can cause objects to become detached implicitly? Closing a session, perhaps, or something similar?
Note: I've read the SQLAlchemy documentation, which does cover re-attaching objects to sessions, but says relatively little about what can actually detach instances implicitly.

session.close() will detach all objects. A rollback, as noted, will also expunge those objects that were INSERTed within the rolled-back transaction.
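A minimal sketch of the session.close() case, assuming an in-memory SQLite database and a made-up User model (both purely for illustration):

```python
from sqlalchemy import create_engine, inspect, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):  # hypothetical model, just for this demonstration
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
user = User(name="alice")
session.add(user)
session.commit()
assert inspect(user).persistent   # still attached after commit

session.close()                   # closing the session detaches every instance
assert inspect(user).detached
```

sqlalchemy.inspect() exposes the instance state (transient/pending/persistent/detached), which is a convenient way to see exactly when an object crosses into the detached state.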

I guess the main one would be a rollback of the session. As the docs say:
Objects which were initially in the pending state when they were added
to the Session within the lifespan of the transaction are expunged,
corresponding to their INSERT statement being rolled back. The state
of their attributes remains unchanged.
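The quoted behavior can be observed directly. In this sketch (again assuming an in-memory SQLite database and a toy model, both invented for illustration), an object added as pending is flushed, the transaction is rolled back, and the instance is expunged while its attribute values survive:

```python
from sqlalchemy import create_engine, inspect, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):  # hypothetical model, just for this demonstration
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
item = Item(name="widget")
session.add(item)          # pending
session.flush()            # INSERT issued inside the open transaction
assert inspect(item).persistent

session.rollback()         # INSERT rolled back: the object is expunged...
assert inspect(item).transient
assert item.name == "widget"   # ...but its attribute values are unchanged
```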

Related

Does a long lived Linq2Sql DataContext ever remove references to tracked objects?

We have inherited some code that uses Linq2Sql, and we've just found that an instance of the DataContext is being stored in a private static class member. As this class is used inside a web application, a DataContext instance is created and assigned to that static member as soon as the class is first used. The result is a single DataContext shared by all instances of the class (and so across all users, a potential problem in itself), and that instance will also live for the duration of the web application, since it is held in a static member.
Ignoring the bad design decisions that were originally taken, my question is: what happens to the data read into the data context? I know the session in NHibernate keeps a reference to any objects it reads or creates so it can track changes, and that the session can slowly grow and never clears out unless you explicitly tell it to. Does Linq2Sql do something similar? If the web application lived forever (without recycling), would this Linq2Sql context slowly grow until the machine either ran out of memory, or the context had potentially read the entire database while satisfying incoming requests?
It's my understanding that a context isn't usually like a cache, which evicts items that "expire" or, when hitting a memory limit, starts removing least-used items. Because of what it does, I don't think the data context is ever able to do that. Has anyone had experience of this who can either confirm my understanding or provide a reference showing what a long-lived data context can do to stop this happening?
Yes, the DataContext will keep track of objects it has read if its ObjectTrackingEnabled property is set to true, which it is by default. You cannot change ObjectTrackingEnabled to false once the DataContext is already tracking state.
We set ObjectTrackingEnabled to false when we are only doing reads with no intention of making any changes. Note that you cannot call SubmitChanges while ObjectTrackingEnabled is false.
I hope that helps.
Yes, a DataContext will retain references to all of the objects that have ever been associated with it. Objects that have been deleted may be expelled from its cache, but otherwise it holds on to them, no matter how much memory that costs.
Depending on the architecture and the size of the data set, the worker process will at some point run out of memory and crash. The only remedies are to disable object tracking (via ObjectTrackingEnabled), to refactor the code to use a single DataContext per request, or (not recommended) to recycle the application-wide DataContext regularly.

Linq2Sql locking an object in a multithreaded environment?

I have a Windows service application in which an object is processed over a rather long time. During processing, the user can interact with the object from a GUI by calling WCF services on the service.
Sometimes (I haven't been able to reproduce the problem) it seems that when the user updates a child object of my main object, the processing thread can no longer find the object in the repository. Can this really happen?
Would wrapping the calls to the repository in a TransactionScope help?
ProcessThread: Works on the object
WCF-service: updates some child objects in a property on the object
ProcessThread: can't find object
Any clues?
I'm creating a new DataContext each time, so it's not shared in any way.
It turned out to be a concurrency problem: "Missing and Double Reads Caused by Row Updates", as described at http://technet.microsoft.com/en-us/library/ms190805.aspx

AS3 execute code atomically

I have two objects that are searching for one another on the stage. They move in a certain direction at a certain speed, driven by Event.ENTER_FRAME. Once one object finds the other, it starts making certain modifications to both objects, including stopping their movement.
Now a certain problem came to mind. What if Object A finds Object B, starts modifying Object B, and the CPU is taken from Object A and given to Object B? Object B will then find Object A and start modifying it, even though Object A is already in the middle of doing the same thing. This could be fixed with a very simple technique: once Object A finds Object B, it calls a lock() method, and Object B won't check for the other object while locked. The problem is that I don't know how to make the distance check between the objects (which is how they find each other) and the locking atomic.
P.S. I have done a lot of multithreaded programming in Java in the past months, so maybe these things don't apply here.
Thanks.
There should be no problem. Flash doesn't do multi-threading.
You can be sure that once an event handler is called, it will run to completion without interruption by other events. The only problem you may need to consider is that you don't know the order in which multiple enter-frame handlers will be executed. If the order matters, you should use a single event handler which calls your objects' methods in the desired order.

Problem using ExtendedPersistenceContext and ApplicationException

I'm trying to use an ExtendedPersistenceContext to implement the detached-object pattern using EJB 3 and Seam.
I also have a business rule engine that processes my object when I merge it, based on the data in the database.
When something goes wrong in a business rule, the app throws an exception marked with
@ApplicationException(rollback = true)
Unfortunately, according to the EJB specification and this SO question, Forcing a transaction to rollback on validation errors in Seam, that annotation forces all the objects to become detached.
So basically my object is in the same state as before (it contains the modifications made by the user), but it can't resolve its relations using the ExtendedPersistenceContext, since it is in the detached state.
This breaks my whole page, since I have AJAX calls that I want to keep working even after the business engine fails.
I can't merge the object again, because then the modifications would be propagated to the DB, and I don't want that if there is an ApplicationException.
I want to roll back the transaction if a business validation fails, but I want my object to remain managed so that it can still resolve its relations using the extended persistence context.
Any suggestion?
To detach a single object you can use entityManager.detach(object); to detach all objects managed by an EntityManager, use entityManager.clear().
You can clone the objects to preserve the changes being made and prevent them from being lost on rollback. Make the changes on the cloned object, then apply them to the managed object just before persisting.
If the object is detached, you first have to re-attach it (refresh only works on managed instances, so use something like entityManager.merge(object)) and then apply the cloned object's changes to the managed copy.

Creating a static DataContext() or creating one whenever it's needed: which is better, and why?

I have a function, and inside it I create a new DataContext() every time the function is called. What is the cost of creating a new DataContext()? Can I instead create a static DataContext() and use it everywhere? Since the DataContext keeps a full record of all changes, when SubmitChanges() fails, is there a way to remove those specific changes from the DataContext? My question is which is better: creating a static DataContext() or creating one whenever it's needed?
This topic has been discussed quite a bit and you should read this article about DataContext lifetime management. The short answer is that DataContext is meant to be used for a unit of work, typically a single request. DataContext objects are cheap to construct and there is no database overhead to creating one.
The main reason to avoid a shared instance of DataContext is because of thread safety and change tracking. Each modification you make to a repository object is captured and translated into update/insert/delete operations when you call SubmitChanges(). This feature breaks down when using a single DataContext object.