We have inherited some code that uses Linq2Sql, and we've just found that an instance of the DataContext is being stored in a private static class member. Because this class is used inside a web application, the DataContext is created and assigned to the static member as soon as the class is first used. That single instance is then shared by every instance of the class (so across all users, a potential problem in itself), and it also lives for the duration of the web application, since it is held in a static class member.
Ignoring the bad design decisions that were originally taken, my question is: what happens to the data read into the data context? I know the session in NHibernate keeps a reference to any objects it reads or creates so it can track changes, and that it can slowly grow and grow, never clearing out unless you explicitly tell it to. Does Linq2Sql do something similar? If the web application lived forever (without recycling), would this Linq2Sql context slowly grow until the machine either ran out of memory or had read the entire database by satisfying the incoming requests? My understanding is that a context isn't like a cache, which evicts items that expire or, on hitting a memory limit, starts removing least-used items; given what a context has to do, I don't think it can ever do that. Has anyone had experience of this who can either confirm my understanding or provide a reference showing how a long-lived data context can avoid this?
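For concreteness, the pattern we found looks roughly like this (illustrative names, not our actual code):

```csharp
using System.Linq;

public class CustomerRepository
{
    // Created once when the type is first used, then shared by every
    // request for the lifetime of the web application.
    private static readonly MyDataContext db = new MyDataContext();

    public Customer GetCustomer(int id)
    {
        return db.Customers.Single(c => c.CustomerId == id);
    }
}
```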
Yes, the DataContext will keep track of the objects it has read if its ObjectTrackingEnabled property is set to true, which it is by default. Note that you cannot change ObjectTrackingEnabled to false once the DataContext is already tracking state.
We set ObjectTrackingEnabled to false when we are just doing reads with no intention of making any changes. You cannot call SubmitChanges when ObjectTrackingEnabled is set to false.
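For example, a minimal sketch of a read-only context (MyDataContext and the Customers table are placeholders for your own generated code):

```csharp
using System.Collections.Generic;
using System.Linq;

public List<Customer> GetActiveCustomers()
{
    using (var db = new MyDataContext())
    {
        // Must be set before the first query; flipping it after the
        // context has started tracking throws an InvalidOperationException.
        db.ObjectTrackingEnabled = false;

        return db.Customers.Where(c => c.IsActive).ToList();

        // db.SubmitChanges() would throw here, since tracking is disabled.
    }
}
```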
I hope that helps.
Yes, a DataContext will retain references to all of the objects that have ever been associated with it. Objects that have been deleted may be expelled from the cache, but otherwise it holds on to them, no matter how much memory that costs.
Depending on the architecture and the size of the data set, the worker process will at some point run out of memory and crash. The only remedies are to disable object tracking (through ObjectTrackingEnabled), to refactor the code so that it uses a single DataContext per request, or (and this is not recommended) to regularly recycle the application-wide DataContext.
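For the per-request option in classic ASP.NET, a sketch might look like this (MyDataContext is a placeholder for the generated context):

```csharp
using System.Web;

public static class RequestDataContext
{
    private const string Key = "RequestDataContext";

    // One context per request, created lazily on first use.
    public static MyDataContext Current
    {
        get
        {
            var ctx = (MyDataContext)HttpContext.Current.Items[Key];
            if (ctx == null)
            {
                ctx = new MyDataContext();
                HttpContext.Current.Items[Key] = ctx;
            }
            return ctx;
        }
    }

    // Call from Application_EndRequest in Global.asax so the context
    // (and everything it tracks) dies with the request.
    public static void DisposeCurrent()
    {
        var ctx = (MyDataContext)HttpContext.Current.Items[Key];
        if (ctx != null)
        {
            ctx.Dispose();
            HttpContext.Current.Items.Remove(Key);
        }
    }
}
```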
I'm using AFNetworking for all my connections in my app. I created a singleton 'client' class that takes care of all the AFNetworking code and uses AFHTTPRequestOperationManager. What I am confused about is whether the AFHTTPRequestOperationManager object should be a property, or whether I should recreate one every time my client is asked for a connection. If it is a property, can my client be called many times asynchronously, or will that cause problems, since the same instance of AFHTTPRequestOperationManager may be used at the same time?
Typically, your singleton 'client' class would be a subclass of AFHTTPRequestOperationManager. It could also be a property, but then you won't be able to override methods. Some commonly overridden methods are:
- HTTPRequestOperationWithRequest:success:failure:, to modify how all request operations are constructed (for example, if you need an identical header in every request)
- initWithBaseURL:, to apply additional customization to the operation manager
That said, a property could work fine depending on your needs. (See Prefer composition over inheritance? for some delightful weekend reading.)
And finally:
If it is a property, can my client be called many times asynchronously, or will that cause problems, since the same instance of AFHTTPRequestOperationManager may be used at the same time?
Yes, that's safe: AFHTTPRequestOperationManager is designed to be thread-safe, so you can call it from different threads. (Note that its completion blocks are always called on the main thread, since UI work is typically done there.)
I'm trying to use an ExtendedPersistenceContext to implement the detached object pattern using EJB 3 and Seam.
I also have a business rule engine that processes my object when I merge it, based on the data in the database.
When something goes wrong in a business rule, the app throws an Exception marked with
@ApplicationException(rollback = true)
Unfortunately, according to the EJB specification and this question from SO, Forcing a transaction to rollback on validation errors in Seam, that annotation forces all the objects to become detached.
So basically my object is in the same state as before (it contains the modifications made by the user), but it can't resolve its relations through the ExtendedPersistenceContext, since it is in the detached state.
This breaks my whole page, since I have AJAX calls that still need to resolve even after the business engine has failed.
I can't merge the object again, otherwise the modifications would be propagated to the DB, and I don't want that to happen when there is an ApplicationException.
I want to roll back the transaction if a business validation fails, but I want my object to remain in the persistent state so that it can resolve its relations using the extended persistence context.
Any suggestions?
To detach a single object you can use entityManager.detach(object); to detach all of the objects managed by an EntityManager, use entityManager.clear().
You can clone the objects to preserve the changes being made and prevent them from being lost when the transaction rolls back on the exception. Make the changes on the cloned object, then apply them to the managed object just before persisting.
If the object is detached, you have to perform entityManager.refresh(object) to make it managed again, and then apply the cloned object's changes to it accordingly.
I have a function, and inside it I create a new DataContext() every time the function is called. What is the cost of creating a new DataContext()? Could I instead create a static DataContext() and use it everywhere? Also, since the DataContext keeps a full record of all the changes, when SubmitChanges() fails, is there a way I can remove those specific changes from the DataContext? My question is: which is better, creating a static DataContext() or creating one whenever it's needed?
This topic has been discussed quite a bit and you should read this article about DataContext lifetime management. The short answer is that DataContext is meant to be used for a unit of work, typically a single request. DataContext objects are cheap to construct and there is no database overhead to creating one.
The main reasons to avoid a shared instance of DataContext are thread safety and change tracking. Each modification you make to a repository object is captured and translated into update/insert/delete operations when you call SubmitChanges(), and this feature breaks down when a single DataContext object is shared.
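A minimal unit-of-work sketch (MyDataContext and the Orders table are illustrative names):

```csharp
using System.Linq;

public void MarkShipped(int orderId)
{
    using (var db = new MyDataContext())
    {
        var order = db.Orders.Single(o => o.OrderId == orderId);
        order.Status = "Shipped";

        // Tracked changes are translated into SQL here; if this throws,
        // disposing the short-lived context discards the tracked state
        // instead of leaving it behind in a long-lived instance.
        db.SubmitChanges();
    }
}
```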
I'm new to linq-to-sql (and sql for that matter), and I've started to gather evidence that maybe I'm not doing things the right way, so I wanted to see what you all have to say.
In my staff allocation application I allow the user to create assignments between employees and projects. At the beginning of the application, I open up a linq-to-sql data context to my management database. Throughout the program, I never let that data context go. As a matter of fact, most of the form constructors take this data context as one of their arguments.
I kinda thought that this was the way to do things until I read through another SO question where the asker discussed repeatedly re-creating the data context throughout his program and then "attaching" the entities to the new data contexts as needed. That approach would help me get around the problem I've been having wherein things are "sneaking" into my database.
So where would you use the first style (and don't be ashamed to say never), and where would you use the second style?
If you are writing a web application in, say, ASP.NET MVC, or a web service, you will be recreating the DataContext each time, as the application is "stateless" between page GETs and POSTs.
If you are writing a Winforms or WPF application, you can do it the same way, although holding a DataContext open can be easier, since those applications are stateful (i.e., you have a container in which the DataContext can live).
In general, it is probably sensible to open a DataContext each time you need to complete a "unit of work." The DataContext itself is fairly lightweight, so opening one for each "transaction" is not that big of a deal. This practice is also consistent with software layers in enterprise applications (i.e. Database, DAL, Service Layer, Repository, etc.), and helps to enforce separation of concerns between the requisite layers.
The generally recommended way of doing things is to create a new DataContext for each atomic operation. DataContexts are actually quite cheap to instantiate and are very well suited to rapid turnover.
As a general rule of thumb, I tend to instantiate a DataContext, perform a CRUD operation, then dispose of it again. This could be the updating of a single entity, or inserting a load of objects. Do whatever makes the most sense for your scenario.
Just be careful if you're passing entities from your context around, as exceptions will be thrown if you try to enumerate them or lazily load related data once the context has been disposed. It's best practice to transform the LINQ entities into independent objects (for example, a Person LINQ entity could be transformed into a PersonResult, which is consumed by the logic layer of your solution).
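A sketch of that transformation, assuming a generated Person entity with FirstName/LastName columns (illustrative names):

```csharp
using System.Linq;

public class PersonResult
{
    public int PersonId { get; set; }
    public string FullName { get; set; }
}

public PersonResult GetPerson(int id)
{
    using (var db = new MyDataContext())
    {
        // The projection runs while the context is still alive, so nothing
        // tries to lazy-load after the context has been disposed.
        return db.Persons
                 .Where(p => p.PersonId == id)
                 .Select(p => new PersonResult
                 {
                     PersonId = p.PersonId,
                     FullName = p.FirstName + " " + p.LastName
                 })
                 .Single();
    }
}
```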
Hope that helps!
I'm trying to find the most efficient way to send my Linq2Sql objects to my jQuery plugins via JSON, preferably without additional code for each class.
The EntitySets are the main issue: not only do they cause recursion, but even when recursion is ignored (using JSON.NET's ReferenceLoopHandling feature) a silly amount of data can be retrieved, when I only really need one or two levels. This gets really bad when you're talking about Users, Roles and Permissions, as you get the User's Role, the User's Permissions, the Role's Permissions, and the Role's Users all in your JSON before it hits recursion and stops. Compare this to what I actually want, which is just the RoleId.
My initial approach was to send a "simplified" version of the object, where I reflect the entity and set any EntitySets to null, but of course in the above example Roles gets set to null and so RoleId is null. Setting only the second-level properties to null kind of works, but there's still too much data: the EntitySets that weren't nulled out (the first-level ones) repopulate their associated tables when the JsonSerializer reflects over them, and I still get all those Permission objects that I just don't need.
I definitely don't want to get into the situation of creating a lightweight version of every class and implementing "From" and "To" style methods on them, as this is a lot of work and seems wasteful.
Another option is to put a JsonIgnoreAttribute on the relevant properties, but this is going to cause a nightmare scenario whenever classes need to be re-generated.
My current favourite solution, which I like and hate at the same time, is to put the classes into opt-in serialization mode, but because I can't add attributes to the real properties I'd have to create JSON-only properties in a partial class. Again, this seems wasteful, but I think it's the best so far.
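A sketch of what I mean, assuming JSON.NET and a generated User entity (JsonRoleId is a made-up name):

```csharp
using Newtonsoft.Json;

// Attributes on partial declarations are merged onto the generated type,
// so the whole User class becomes opt-in without touching the .dbml code.
[JsonObject(MemberSerialization.OptIn)]
public partial class User
{
    // The only member JSON.NET will emit: just the RoleId, no object graph.
    [JsonProperty("roleId")]
    public int? JsonRoleId
    {
        get { return Role != null ? (int?)Role.RoleId : null; }
    }
}
```

Serializing is then just JsonConvert.SerializeObject(user), and regenerating the designer code leaves the partial class untouched.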
Any suggestions gratefully received!
Have you tried to set the Serialization Mode in the dbml file?
It's a standard property under Code Generation in the designer, and when you set it to Unidirectional it won't generate all the additional levels of your table structure. I've used this with Silverlight and WCF to send data, because the data contracts don't allow additional levels to be sent (Silverlight is very limited in what you can and can't do).
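For illustration, once the entities are generated with [DataContract]/[DataMember] attributes, something like this should serialize without chasing back-references (MyDataContext and Role are illustrative names):

```csharp
using System.IO;
using System.Linq;
using System.Runtime.Serialization.Json;
using System.Text;

public string SerializeFirstRole()
{
    using (var db = new MyDataContext())
    {
        var role = db.Roles.First();
        var serializer = new DataContractJsonSerializer(typeof(Role));

        using (var stream = new MemoryStream())
        {
            // Child EntitySets are emitted, but the child-to-parent
            // EntityRefs are not, so the cycles are gone.
            serializer.WriteObject(stream, role);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}
```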
Hope this helps!