Best use pattern for a DataContext - linq-to-sql

What's the best lifetime model for a DataContext? Should I just create a new one whenever I need it (aka, function level), should I keep one available in each class that would use it (class level), or should I create a static class with a static DataContext (app-domain level)? Are there any considered best practices on this?

You pretty much need to keep the same DataContext available throughout the lifetime of the operations you want to perform if you're ever going to store changes that will be .SubmitChanges()'d later; otherwise you will lose those changes.
If you're just querying, it's fine to create them as needed, but if you later want to .SubmitChanges() you'll have to refactor your code a lot, so you may as well adopt the pattern of effectively keeping the DataContext global throughout your app from the beginning.
Note that the DataContext is disconnected. The connection is only made when the query results are enumerated (not when you first build the query; it's a 'lazy' data type, so it only fetches data when it's needed), and it is closed immediately afterwards. On .SubmitChanges() the connection is opened to submit the changes, then closed immediately afterwards. So don't think that keeping the DataContext around keeps a connection open; it doesn't (you can hook the StateChange event of the connection to confirm this for yourself, which is how I'm sure).
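To see this for yourself, here's a minimal sketch (NorthwindDataContext and its Customers table are hypothetical names for a generated LINQ-to-SQL context):

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // NorthwindDataContext is a hypothetical generated LINQ-to-SQL context.
        var db = new NorthwindDataContext();

        // Watch the underlying connection open and close.
        db.Connection.StateChange += (sender, e) =>
            Console.WriteLine("Connection: {0} -> {1}", e.OriginalState, e.CurrentState);

        var query = db.Customers.Where(c => c.City == "London"); // nothing opens yet

        foreach (var c in query)      // the connection opens here, on enumeration...
            Console.WriteLine(c.ContactName);
        // ...and closes again as soon as enumeration completes.
    }
}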
There is a great article over at Rick Strahl's blog which covers this topic in depth, far more than my answer here does!

I think Jeff Atwood talked about this in the Herding Code podcast, when he was asked about this exact thing. Listen to roughly the last 15-20 minutes of it.
I think on SO, the DataContext is created in the Controller class. I'm not sure about a lot of the details here, but that's what it looked like.
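For what it's worth, a hedged sketch of what that controller-scoped pattern might look like in ASP.NET MVC (QuestionsController and SiteDataContext are hypothetical names):

using System.Linq;
using System.Web.Mvc;

// One DataContext per request, disposed along with the controller.
public class QuestionsController : Controller
{
    private readonly SiteDataContext db = new SiteDataContext();

    public ActionResult Index()
    {
        var recent = db.Questions
                       .OrderByDescending(q => q.CreationDate)
                       .Take(30)
                       .ToList();
        return View(recent);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) db.Dispose();
        base.Dispose(disposing);
    }
}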

Google realtime object pool

This question is a little "meta" for SO, but there doesn't seem to be a better place to ask it...
According to Google, realtime collaborative objects are never deleted from the model. So it makes sense to pool objects where possible, rather than not-really-deleting them and subsequently creating new ones, thus preventing an unnecessary increase in file size and overhead.
And here's the problem: in an "undo" scenario, this would mean pulling a deleted object out of the trash pool. But "undo" only applies to operations by the local user, and I can't see how the realtime engine could cope if that "deleted" object had already been claimed by a different user.
My question is, am I missing something or wrong-thinking, and/or is there an alternative to a per-user pool?
(It also occurs to me that as a feature, the API could handle pooling deleted objects, automatically minimizing file-bloat.)
I think you have to be very careful about reusing objects in the way you describe. It's really hard to get right. Are you actually running into size issues? In general, as long as you don't constantly create and throw out objects, it shouldn't be a big deal.
You can delete the contents of the collab object when it's not being used to free up space. That should generally be enough.
(Note: yes, the API could theoretically handle this object cleanup automatically. It turns out to be a really tricky problem to get right, due to features like undo. It might show up as a future feature if it becomes a real issue for people.)
Adding to Cheryl's answer, the one thing that I see as particularly challenging (actually, impossible) is the pulling-an-object-from-the-pool stuff:
Let's say you have a pool of objects, which (currently) contains a single object O1.
When a client needs a new object it will first check the pool. If the pool is not empty it will pull an object from there (the O1 object) and use it, right?
Now consider the scenario where two clients (a.k.a. editors/collaborators) need a new object at the same time. Each of these clients will run the logic described in the previous paragraph. That is: both clients will check whether the pool is empty, and both clients will pull O1 off of the pool.
So the losing client will "think" for some time that it succeeded. It will grab an object from the pool and do some things with it. Later on it will receive an event (E) telling it that the object was actually pulled by another client. At this point the "losing" client will need to create another object and re-apply whatever changes it made to the first object to this second object.
Given that you do not know if/when the (E) event is going to fire, this effectively means that every client needs to be prepared to replace every collaborative object it uses with a new one. That seems quite difficult. Making it even harder is the fact that you cannot make model changes from event handlers (as this will trump the redo/undo stack), so the actual reaction to the (E) event needs to be carried out outside of the (E) event handler. Thus, in the time between receiving the (E) event and fixing up the model, your UI layer will not be able to use the model.

LifeStylePerWebRequest - how does it work?

I have a bit of an issue. I'm trying to put a "session" class into my container. I want it to stay alive while the user is on the site. It will simply contain various pieces of information that I will use across my controllers.
I assume, but I am not entirely sure, that LifeStylePerWebRequest is what I need.
BUT when I use that, it seems to create a new session object every single time I submit the page. Maybe that makes sense, if it's per web request...
So, have I misunderstood PerWebRequest? Does it really create a new class every time I do a postback?
What else can I do? Singleton seems to work, but then all visitors will share the same instance, right?
PerWebRequest creates a brand new instance on each request, but the same instance will be reused within the same request if your component is needed (by other components' dependencies) more than once.
If you need a lifestyle tied to the session you may have to write your own lifestyle or, if it fits your needs, simply have a singleton that internally uses the session.
Have a look at hybrid lifestyles.
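A minimal sketch of that singleton-over-session option, assuming ASP.NET (SessionStore, UserSession and the session key are all hypothetical names):

using System.Web;

// The per-visitor data you want to keep around (hypothetical).
public class UserSession
{
    public string DisplayName { get; set; }
}

// Register this as a singleton in the container; it is stateless itself
// and reads/writes the actual data through the ASP.NET session, so every
// visitor still gets their own UserSession.
public class SessionStore
{
    public UserSession Current
    {
        get
        {
            var session = HttpContext.Current.Session;
            var user = session["UserSession"] as UserSession;
            if (user == null)
            {
                user = new UserSession();
                session["UserSession"] = user; // one instance per visitor
            }
            return user;
        }
    }
}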

changed object state after behavior that used the state

I want to give my previous question a second chance since I think I have chosen a bad example.
The question is how I should deal with situations where an object can still change after I have used it to do something, and the new state is relevant to what is being done.
Example in pseudo-code:
class Book
    method 'addChapter': adds a chapter
class Person
    method 'readBook': read an object of class Book
Now when I ask the person to read a book, at least in PHP where the object is passed by reference, the book object can still change. I could insert a chapter between chapters 3 and 4 while the person is already reading chapter 6. How can I deal with these kinds of situations?
Maybe notifying the person that the book has changed? You can do it with events (not sure how events work in PHP). Another way is to implement the Observer/Observable pattern.
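For illustration, here's the shape of that Observer idea in C# (the question is PHP, but the structure is the same; all names here are hypothetical):

using System;
using System.Collections.Generic;

public class Book
{
    private readonly List<string> chapters = new List<string>();

    // Observers subscribe to be told when a chapter is inserted.
    public event Action<int> ChapterInserted;

    public void InsertChapter(int index, string text)
    {
        chapters.Insert(index, text);
        ChapterInserted?.Invoke(index); // notify all current readers
    }
}

public class Person
{
    private int currentChapter;

    public void ReadBook(Book book)
    {
        book.ChapterInserted += index =>
        {
            // A chapter inserted at or before our position shifts us forward.
            if (index <= currentChapter) currentChapter++;
        };
    }
}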
Any of the above answers could lead to a good solution, depending on your business demands.
You asked "How should I deal with situations...", and the answer is: it depends.
Is adding a chapter after you have already shipped the book to the user legal? If not, you should throw an exception (I do not know if PHP supports exceptions, but either way you should treat it as an error situation).
Another solution would be to make sure that you expose an object that is already whole and does not expect changes to be made to it. This may be a valid solution, especially if your decision to enable this kind of 'streaming' is performance oriented but you don't have real evidence that this section is a performance bottleneck.
Now let's say the addition of a new chapter is legal.
Do you want the change to be known to existing clients, or only to new clients?
If the former, you should implement some kind of notification logic (one of the suggested forms is the publisher/subscriber pattern, but there are others).
If the latter, you should make your book object immutable, so mutating operations will not be seen by existing clients; rather, they would create an entirely new book that would be passed to new clients (Persons).
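A sketch of that immutable option (again in C# for illustration; ImmutableBook is a hypothetical name). Mutating operations return a new book, so existing readers keep the snapshot they started with:

using System.Collections.Generic;

public sealed class ImmutableBook
{
    private readonly List<string> chapters;

    public ImmutableBook(IEnumerable<string> chapters)
    {
        this.chapters = new List<string>(chapters);
    }

    // "Mutation" builds a new book; readers of the old one are unaffected.
    public ImmutableBook InsertChapter(int index, string text)
    {
        var copy = new List<string>(chapters);
        copy.Insert(index, text);
        return new ImmutableBook(copy);
    }

    public string GetChapter(int index) => chapters[index];
}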
I could go on and on, but I suggest that next time you elaborate more on the exact problem you are trying to solve, since as you can see, the same problem can have different solutions in different domains.
Seems to me you are attempting to perform concurrent tasks. You might want to consider serializing activities to your objects instead, certainly in the case of PHP.

How do you refactor a class that is constantly being edited?

Over the course of time, my team has created a central class that handles an agglomeration of responsibilities and runs to over 8,000 lines, all of it hand-written, not auto-generated.
The mandate has come down. We need to refactor the monster class. The biggest part of the plan is to define categories of functionality into their own classes with a has-a relationship with the monster class.
That means that a lot of references that currently read like this:
var monster = new orMonster();
var timeToOpen = monster.OpeningTime.Subtract(DateTime.Now);
will soon read like this:
var monster = new Monster();
var timeToOpen = monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now);
The question is: How on Earth do we coordinate such a change? References to "orMonster" litter every single business class. Some methods are called in literally thousands of places in the code. It's guaranteed that any time we make such a change, someone else (probably multiple someone elses) on the team will have code checked out that calls the .OpeningTime property.
How do you coordinate such a large scale change without productivity grinding to a halt?
You should make the old method call the new method. Then over time change the references to the old method to call the new method instead. Once all the client references are changed, you can delete the old method.
For more information, see Move Method in Martin Fowler's classic, Refactoring.
One thing you can do is to temporarily leave proxy methods in the monster class that will delegate to the new method. After a week or so, once you are sure all code is using the new method, then you can safely remove the proxy.
I've handled this before by going ahead and refactoring the code, but then adding methods that match the old signature that forward the calls to the new method. If you add the "Obsolete" attribute to these temporary methods, your code will still build with both the old method calls and the new method calls. Then over time you can go back through and upgrade the code that is calling the old method. The difference here is that you'll get "Warnings" during the build to help you find all of the code that needs upgrading.
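For example, a hedged sketch of that forwarding-plus-Obsolete approach, using the TimeKeeper example from the question (TimeKeeper's exact shape is my assumption):

using System;

public class TimeKeeper
{
    public DateTime OpeningTime { get; set; }
}

public class Monster
{
    public TimeKeeper TimeKeeper { get; } = new TimeKeeper();

    // Temporary proxy so existing callers keep compiling; the attribute makes
    // every remaining call site show up as a build warning. Delete it once
    // all callers have been migrated.
    [Obsolete("Use Monster.TimeKeeper.OpeningTime instead.")]
    public DateTime OpeningTime
    {
        get { return TimeKeeper.OpeningTime; }
        set { TimeKeeper.OpeningTime = value; }
    }
}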
I'm not sure what language you're using but in .Net you can create compiler warnings which will allow you to leave the old references for a time so that they will function as expected but place a warning for your other developers to see.
http://dotnettipoftheday.org/tips/ObsoleteAttribute.aspx
Develop your changes in a branch. Break out a subset of code to a new class, make changes across the client base, test thoroughly, and then merge back.
That concentrates the breakage to when you merge — not the entire development cycle.
Combine this with Patrick's suggestion to have the monster call the small monsters. That'll let you easily revert if your merged client code breaks changes to that client. As Patrick says, you'll be able to remove the monster's methods (now stubs) once you prove nobody's using it.
I also echo several posters' advice to expose the broken out classes directly — not via the monster. Why apply only half a cure? With the same effort, you could apply a complete cure.
Finally: write unit tests. Write lots of unit tests. Oh, boy, do you need unit tests to safely pull this one off. Did I mention you need unit tests?
Keep the old method in place and forward to the new method (as others have said) but also send a log message in the forwarding method to remind yourself to remove it.
You could just add a comment but that's too easy to miss.
I suggest using a tool such as NDepend to identify all of the references to the class's methods. The output from NDepend can be used to give you a better idea about how to group the methods.
var monster = new Monster();
var timeToOpen = monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now);
I'm not sure that divvying it up and just making portions of it publicly available is any better. That's violating the Law of Demeter and can lead to NullReference pain.
I'd suggest exposing timekeeper to people without involving the monster.
If anything you'd be well off analysing the API and seeing what you can cut and encapsulate within monster. Certainly giving monster toys to play with as opposed to making monster do all of the work itself is a good call. The main effort is defining the toys monster needs to simplify his work.
Don't refactor it.
Start over and follow the law of demeter. Create a second monster class and start from scratch. When the second monster class is finished and working, then replace occurrences of the first monster. Swap it out. Hopefully they share an interface, or you can make that happen.
And instead of this: "monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now)"
Do this: monster.SubtractOpeningTime(DateTime.Now). Don't kill yourself with dot-notation (hence the Demeter).
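In code, that might look like this sketch (SubtractOpeningTime is the name suggested above; TimeKeeper's shape is assumed):

using System;

public class TimeKeeper
{
    public DateTime OpeningTime { get; set; }
}

public class Monster
{
    private readonly TimeKeeper timeKeeper = new TimeKeeper();

    // Callers get the answer from the monster instead of navigating
    // monster.TimeKeeper.OpeningTime themselves.
    public TimeSpan SubtractOpeningTime(DateTime now) =>
        timeKeeper.OpeningTime.Subtract(now);
}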
Several people have provided good answers regarding the orchestration of the refactor itself. That's key. But you also asked about coordinating the changes between multiple people (which I think was the crux of your question). What source control are you using? Anything like CVS, SVN, etc. can handle incoming changes from multiple developers at once. The key to making it go smoothly is that each person must make their commits granular and atomic, and each developer should pull other people's commits often.
I would look first at using partial classes to split the single monster class over many files, grouping methods into categories.
You will need to stop anyone from editing the monster class while you split it into the files.
From then on you are likely to get fewer merge conflicts, as there will be fewer edits to each file. You can then change each method in the monster class (one method per check-in) to call your new classes.
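A sketch of the partial-class split (the file names and members are illustrative):

// Monster.Timekeeping.cs
public partial class Monster
{
    public System.DateTime OpeningTime { get; set; }
}

// Monster.Feeding.cs -- same class, different file
public partial class Monster
{
    public void Feed() { /* ... */ }
}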
Such a huge class is really an issue. Since it grew so big and nobody felt uncomfortable, there must be something wrong with the project's policies. I'd say you should split into pairs and do pair programming. Create a branch for every pair of programmers, work for 1-2 days on refactoring, then compare your results. This will help you avoid the situation where the refactoring heads in the wrong direction from the start, which would ultimately lead to having to rewrite the monster class from scratch.

Most common examples of misuse of singleton class

When should you NOT use a singleton class although it might be very tempting to do so? It would be very nice if we had a list of most common instances of 'singletonitis' that we should take care to avoid.
Do not use a singleton for something that might evolve into a multipliable resource.
This probably sounds silly, but if you declare something a singleton you're making a very strong statement that it is absolutely unique. You're building code around it, more and more. And when you then find out after thousands of lines of code that it is not a singleton at all, you have a huge amount of work in front of you because all the other objects expect "the" sacred object of class WizBang to be a singleton.
Typical example: "This application has only one database connection, thus it is a singleton." - Bad idea. You may want to have several connections in the future. Better to create a pool of database connections and populate it with just one instance. It acts like a singleton, but all the other code accesses it through the pool, which can grow.
EDIT: I understand that theoretically you can extend a singleton into several objects. Yet there is no real life cycle (like pooling/unpooling) which means there is no real ownership of objects that have been handed out, i.e. the now multi-singleton would have to be stateless to be used simultaneously by different methods and threads.
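For concreteness, here's a minimal sketch of the pool-populated-with-one-instance idea (ConnectionPool and its members are hypothetical):

using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;

public class ConnectionPool
{
    private readonly string connectionString;
    private readonly ConcurrentBag<IDbConnection> connections =
        new ConcurrentBag<IDbConnection>();

    // Starts out holding a single connection, i.e. it behaves like a
    // singleton, but growing it later only means changing 'size'.
    public ConnectionPool(string connectionString, int size = 1)
    {
        this.connectionString = connectionString;
        for (int i = 0; i < size; i++)
            connections.Add(new SqlConnection(connectionString));
    }

    // Callers never know (or care) how many connections sit behind the pool.
    public IDbConnection Acquire()
    {
        IDbConnection conn;
        return connections.TryTake(out conn) ? conn : new SqlConnection(connectionString);
    }

    public void Release(IDbConnection conn)
    {
        connections.Add(conn);
    }
}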
Well, singletons for the most part are just making things static anyway. So you're either in effect making data global (and we all know global variables are bad), or you're writing static methods, and that's not very OO now, is it?
Here is a more detailed rant on why singletons are bad, by Steve Yegge. Basically, you shouldn't use singletons in almost all cases; you can't really know that the thing is never going to be needed in more than one place.
I know many have answered with "when you have more than one", etc.
Since the original poster wanted a list of cases when you shouldn't use Singletons (rather than the top reason), I'll chime in with:
Whenever you're using it because you're not allowed to use a global!
I can't count the number of times I've had a junior engineer use a Singleton because they knew that I didn't accept globals in code reviews. They often seem shocked when I point out that all they did was replace a global with the Singleton pattern, and they still just have a global!
Here is a rant by my friend Alex Miller... It does not exactly enumerate "when you should NOT use a singleton" but it is a comprehensive, excellent post and argues that one should only use a singleton in rare instances, if at all.
I'm guilty of a big one a few years back (thankfully I've learned my lesson since then).
What happened is that I came on board a desktop app project that had converted to .Net from VB6, and was a real mess. Things like 40-page (printed) functions and no real class structure. I built a class to encapsulate access to the database. Not a real data tier (yet), just a base class that a real data tier could use. Somewhere I got the bright idea to make this class a singleton. It worked okay for a year or so, and then we needed to build a web interface for the app as well. The singleton ended up being a huge bottleneck for the database, since all web users had to share the same connection. Again... lesson learned.
Looking back, it probably actually was the right choice for a short while, since it forced the other developers to be more disciplined about using it and made them aware of scoping issues not previously a problem in the VB6 world. But I should have changed it back after a few weeks before we had too much built up around it.
Singletons are virtually always a bad idea and generally useless/redundant since they are just a very limited simplification of a decent pattern.
Look up how Dependency Injection works. It solves the same problems, but in a much more useful way; in fact, you'll find it applies to many more parts of your design.
Although you can find DI libraries out there, you can also roll a basic one yourself, it's pretty easy.
I try to have only one singleton - an inversion of control / service locator object.
IService service = IoC.GetImplementationOf<IService>();
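For illustration, a minimal hand-rolled locator matching that call (the IoC name and GetImplementationOf signature come from the line above; the dictionary-based implementation is my assumption):

using System;
using System.Collections.Generic;

public static class IoC
{
    private static readonly Dictionary<Type, Func<object>> factories =
        new Dictionary<Type, Func<object>>();

    // Register once at startup, e.g. IoC.Register<IService>(() => new ConcreteService());
    public static void Register<TInterface>(Func<TInterface> factory)
        where TInterface : class
    {
        factories[typeof(TInterface)] = () => factory();
    }

    // Resolve anywhere: IService service = IoC.GetImplementationOf<IService>();
    public static T GetImplementationOf<T>() where T : class
    {
        return (T)factories[typeof(T)]();
    }
}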
One of the things that tends to make singletons a nightmare is if they contain modifiable global state. I worked on a project where Singletons were used all over the place for things that should have been solved in a completely different way (pass in strategies, etc.). The "de-singletonification" was in some cases a major rewrite of parts of the system. I would argue that in the majority of cases when people use a Singleton, it's just wrong, because it looks nice at first but turns into a problem, especially in testing.
When you have multiple applications running in the same JVM.
A singleton is a singleton across the entire JVM, not just a single application. Even if multiple threads or applications seem to be creating a new singleton object, they're all using the same one if they run in the same JVM.
Sometimes, you assume there will only be one of a thing, then you turn out to be wrong.
Example, a database class. You assume you will only ever connect to your app's database.
// It's our database! We'll never need another
class Database
{
};
But wait! Your boss says to hook up to some other guy's database. Say you want to add phpBB to the website and would like to poke its database to integrate some of its functionality. Should we make a new singleton or another instance of Database? Most people agree that a new instance of the same class is preferred; there is no code duplication.
You'd rather have
Database ourDb;
Database otherDb;
than to copy-paste Database and make:
// Copy-pasted from our home-grown database.
class OtherGuysDatabase
{
};
The slippery slope here is that you might stop thinking about making new instances of classes and instead begin thinking it's OK to have one type per instance.
In the case of a connection (for instance), it makes sense that you wouldn't want to make the connection itself a singleton: you might need four connections, or you may need to destroy and recreate the connection a number of times.
But why wouldn't you access all of your connections through a single interface (i.e. connection manager)?
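A sketch of that single-interface idea (ConnectionManager and its members are hypothetical): the manager is the one point of access, but it can hand out any number of named connections:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class ConnectionManager
{
    private readonly Dictionary<string, string> connectionStrings;

    // e.g. one entry for our database and one for the phpBB database
    public ConnectionManager(Dictionary<string, string> connectionStrings)
    {
        this.connectionStrings = connectionStrings;
    }

    // One point of access, any number of connections behind it.
    public IDbConnection Open(string name)
    {
        var conn = new SqlConnection(connectionStrings[name]);
        conn.Open();
        return conn;
    }
}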