Most common examples of misuse of singleton class - language-agnostic

When should you NOT use a singleton class, even though it might be very tempting to do so? It would be very nice if we had a list of the most common instances of 'singletonitis' that we should take care to avoid.

Do not use a singleton for something that might evolve into a multipliable resource.
This probably sounds silly, but if you declare something a singleton you're making a very strong statement that it is absolutely unique. You're building code around it, more and more. And when you then find out after thousands of lines of code that it is not a singleton at all, you have a huge amount of work in front of you because all the other objects expect "the" sacred object of class WizBang to be a singleton.
Typical example: "There is only one database connection this application has, thus it is a singleton." - Bad idea. You may want to have several connections in the future. Better to create a pool of database connections and populate it with just one instance. It acts like a singleton, but all other code goes through the pool, which can grow as needed.
EDIT: I understand that theoretically you can extend a singleton into several objects. Yet there is no real life cycle (like pooling/unpooling) which means there is no real ownership of objects that have been handed out, i.e. the now multi-singleton would have to be stateless to be used simultaneously by different methods and threads.
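A minimal sketch of the pool idea in C# (the ConnectionPool class and its members are illustrative, not from any particular library):

using System;
using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;

// Holds exactly one connection today, but its interface never
// promises uniqueness, so growing it later is a one-line change.
public class ConnectionPool
{
    private readonly ConcurrentBag<IDbConnection> _connections =
        new ConcurrentBag<IDbConnection>();

    public ConnectionPool(string connectionString, int size = 1)
    {
        for (int i = 0; i < size; i++)
            _connections.Add(new SqlConnection(connectionString));
    }

    // Callers borrow a connection and give it back when done;
    // this is the "real life cycle" (pooling/unpooling) mentioned above.
    public IDbConnection Acquire()
    {
        IDbConnection connection;
        if (!_connections.TryTake(out connection))
            throw new InvalidOperationException("No free connections.");
        return connection;
    }

    public void Release(IDbConnection connection)
    {
        _connections.Add(connection);
    }
}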

Well, singletons for the most part just make things static anyway. So you're either making data global in effect, and we all know global variables are bad, or you're writing static methods, and that's not very OO now, is it?
Here is a more detailed rant on why singletons are bad, by Steve Yegge. Basically, you shouldn't use singletons in almost all cases: you can't really know that something is never going to be needed in more than one place.

I know many have answered with "when you have more than one", etc.
Since the original poster wanted a list of cases when you shouldn't use Singletons (rather than the top reason), I'll chime in with:
Whenever you're using it because you're not allowed to use a global!
I've lost count of the times a junior engineer has used a Singleton because they knew I didn't accept globals in code reviews. They often seem shocked when I point out that all they did was replace a global with the Singleton pattern - they still just have a global!

Here is a rant by my friend Alex Miller... It does not exactly enumerate "when you should NOT use a singleton" but it is a comprehensive, excellent post and argues that one should only use a singleton in rare instances, if at all.

I'm guilty of a big one a few years back (thankfully I've learned my lesson since then).
What happened is that I came on board a desktop app project that had converted to .Net from VB6, and was a real mess. Things like 40-page (printed) functions and no real class structure. I built a class to encapsulate access to the database. Not a real data tier (yet), just a base class that a real data tier could use. Somewhere I got the bright idea to make this class a singleton. It worked okay for a year or so, and then we needed to build a web interface for the app as well. The singleton ended up being a huge bottleneck for the database, since all web users had to share the same connection. Again... lesson learned.
Looking back, it probably actually was the right choice for a short while, since it forced the other developers to be more disciplined about using it and made them aware of scoping issues not previously a problem in the VB6 world. But I should have changed it back after a few weeks before we had too much built up around it.

Singletons are virtually always a bad idea and generally useless/redundant since they are just a very limited simplification of a decent pattern.
Look up how Dependency Injection works. It solves the same problems, but in a much more useful way--in fact, you find it applies to many more parts of your design.
Although you can find DI libraries out there, you can also roll a basic one yourself; it's pretty easy.
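A hand-rolled version can be as little as constructor injection plus a single composition root; a minimal C# sketch (all names here are illustrative):

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// The consumer declares what it needs; it neither knows nor cares
// whether one instance or many exist.
public class ReportGenerator
{
    private readonly IClock _clock;

    public ReportGenerator(IClock clock)
    {
        _clock = clock;
    }

    public string Stamp() { return "Generated at " + _clock.Now; }
}

// Composition root - the one place that decides lifetimes:
// var generator = new ReportGenerator(new SystemClock());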

I try to have only one singleton - an inversion of control / service locator object.
IService service = IoC.GetImplementationOf<IService>();
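IoC here is presumably a static facade over a container; a minimal sketch of what such a locator might look like inside (a hypothetical implementation, not from the original post):

using System;
using System.Collections.Generic;

public static class IoC
{
    private static readonly Dictionary<Type, Func<object>> Registry =
        new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<T> factory) where T : class
    {
        Registry[typeof(T)] = () => factory();
    }

    public static T GetImplementationOf<T>() where T : class
    {
        return (T)Registry[typeof(T)]();
    }
}

// At startup: IoC.Register<IService>(() => new ConcreteService());
// Anywhere else: IService service = IoC.GetImplementationOf<IService>();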

One of the things that tend to make it a nightmare is if it contains modifiable global state. I worked on a project where Singletons were used all over the place for things that should have been solved in a completely different way (passing in strategies, etc.). The "de-singletonification" was in some cases a major rewrite of parts of the system. I would argue that in most cases where people use a Singleton it's just wrong, because it looks nice at first but turns into a problem, especially in testing.

When you have multiple applications running in the same JVM.
A singleton is a singleton across the entire JVM (strictly speaking, across a classloader), not just a single application. Even if multiple threads or applications seem to be creating a new singleton object, they're all using the same one if they run in the same JVM.

Sometimes, you assume there will only be one of a thing, then you turn out to be wrong.
Example, a database class. You assume you will only ever connect to your app's database.
// Its our database! We'll never need another
class Database
{
};
But wait! Your boss says to hook up to some other guy's database. Say you want to add phpBB to the website and would like to poke its database to integrate some of its functionality. Should we make a new singleton, or another instance of Database? Most people agree that a new instance of the same class is preferred: there is no code duplication.
You'd rather have
Database ourDb;
Database otherDb;
than copy-paste Database and make:
// Copy-pasted from our home-grown database.
class OtherGuysDatabase
{
};
The slippery slope here is that you might stop thinking about making new instances of classes and instead begin thinking it's OK to have one type for every instance.

In the case of a connection (for instance), it makes sense that you wouldn't want to make the connection itself a singleton: you might need four connections, or you may need to destroy and recreate a connection a number of times.
But why wouldn't you access all of your connections through a single interface (i.e. connection manager)?

Related

SOLID principles, and hard code configuration inside a class

I have noticed in a lot of code lately that people put hard-coded configuration values (like port numbers, etc.) deep inside classes/methods, making them difficult to find and impossible to configure.
Is this a violation of the SOLID principles? If not, is there another "principle" that I can cite to my team members about why it's not a good idea? I don't want to just say "it's bad because I don't like it" but I am having trouble thinking of a good argument.
A good argument against hardcoding a TCP port number in a class would be 'Context independence' violation. From GOOS, with my emphasis:
Context Independence
... the "context independence" rule helps us decide whether an object hides too much or hides the wrong information. A system is easier to change if its objects are context-independent; that is, if each object has no built-in knowledge about the system in which it executes. This allows us to take units of behavior (objects) and apply them in new situations. To be context-independent, whatever an object needs to know about the larger environment it’s running in must be passed in.
In this specific case of Context Independence I would call it 'Environment Independence'. In other words, a class with a hardcoded port number has an inappropriate dependency on the runtime OS environment, essentially stating 'I know that port 7778 will always be available', which is clearly wrong.
The SOLID principles cover class design.
I suspect the idea that you should store configuration in configuration files isn't normally regarded as controversial enough to warrant inventing a special principle to persuade people! :)
Most people just figure it out from experience, the first time they try get the software running anywhere other than their own development workstation.
While not strictly SOLID, another principle of OOD is the Common Closure Principle, which states that classes that change together are packaged together. While configuration values are not exactly classes, you can stretch this idea to configuration information: since port numbers, for example, change based on different criteria than the surrounding code, hardcoding them seems to violate this.
The Single Responsibility Principle (the S in SOLID) states that a class should only have one reason to change. This article gives an example of a Modem interface, and discusses how the details of how to connect and hang up are a separate responsibility from the communication of data, and will probably change for different reasons. You could use this to make a similar case for why port numbers are an extra "reason for change", separate from the class's main responsibility.
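To make the argument concrete, a minimal C# sketch (StatusListener and the config key are illustrative): the port is passed in rather than baked into the class, keeping it context-independent.

using System.Net;
using System.Net.Sockets;

public class StatusListener
{
    private readonly int _port;

    public StatusListener(int port)
    {
        _port = port; // no built-in knowledge of the environment
    }

    public void Start()
    {
        var listener = new TcpListener(IPAddress.Any, _port);
        listener.Start();
    }
}

// At the application boundary, read the value from configuration:
// var listener = new StatusListener(
//     int.Parse(ConfigurationManager.AppSettings["statusPort"]));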

Proper object oriented structure for global classes

What are the pros and cons of using a singleton class for sound management?
I am concerned that it's very anti-OOP in structure and don't want to potentially get caught up going down the wrong path, but what is a better alternative?
It's an awkward topic, but I'd say that a sound manager style class is a good candidate for a class that should never be instantiated - that is, a completely static class.
Similarly, I would find it OK if a keyboard input manager style class was a completely static class.
Some notes for my reasoning:
You would not expect more than one instance that deals with all sounds.
It's easily accessible, which is a good thing in this case because sound seems more like an application-level utility than something that should only be accessed by certain objects. Making the Player class of a game static, for example, would be a very poor design choice, because almost no other classes in a game need a direct reference to the Player.
In the context of a game, imagine the number of classes that would need a reference to an instance of a sound manager: enemies, effects, items, the UI, the environment. What a nightmare - a static sound manager class eliminates this requirement.
There aren't many cases I can think of where it makes no sense at all to have access to sounds. A sound can relevantly be triggered by almost anything - the move of the mouse, an explosion effect, the loading of a dialogue, etc. Static classes are bad when they have almost no relevance or use for the majority of the other classes in your application; sound is relevant to most of them.
Anyway, that's my point of view, offered to offset the likely opposing answers that will appear here.
They are bad for the same reasons globals are bad; some useful reading:
http://blogs.msdn.com/b/scottdensmore/archive/2004/05/25/140827.aspx
A better alternative is to have it as a member of your application class and pass references of it to only modules that need to deal with sound.
"Managers" are usually classes that are very complex in nature, and thus likely violate the Single Responsibility Principle. To paraphrase Uncle Bob Martin: Any time you feel yourself tempted to call a class "Manager" of something - that's a code smell.
In your case, you are dealing with at least three different responsibilities:
Loading and storing the sounds.
Playing the sounds when needed.
Controlling output parameters, such as volume and panning.
Of these, two might be implemented as singletons, but you should always be very careful with this pattern, because in itself, it violates the SRP, and if used the wrong way, it causes your code to be tightly coupled (instead, you should use the Dependency Injection pattern, possibly by means of a framework, such as SwiftSuspenders, but not necessarily):
Loading and storing sounds is essentially a strictly data related task, and should thus be handled by the SoundModel, of which you only need one instance per application.
Controlling output parameters is something that you probably want to handle in a central place, to be able to change global volume settings, etc. This may be implemented as a singleton, but more likely is a tree-like structure, where a master SoundController handles global settings, and several child SoundControllers are responsible for more specific contexts, such as UI sound effects, game sounds, music, etc.
Playing the sound is something that will occur in many places and in many different ways: there may be loops (to which you need to keep references in order to stop them later), or effect sounds (which are usually short and play only once), or music (where each song is usually played once, but subsequent songs need to be started automatically when the end is reached). For each of those variations (and whichever ones you come up with), you should create a different class implementing a common SoundPlayer interface, e.g. LoopSoundPlayerImpl, SequentialSoundPlayerImpl, EFXSoundPlayerImpl, etc. The interface should be as simple as play(), pause(), rewind() - you can easily exchange implementations later and will not have any problems with tightly coupled libraries.
Each SoundPlayer can hold a reference to both the master SoundController and its content-specific one, as well as - possibly - the SoundModel. These, then, can be static references: since they are all parts of your own sound plugin, they will usually be deployed as a package, and therefore tight coupling won't do much damage here. It is important, though, not to cross the boundary of the plugin: instantiate everything within your Main partition and pass the instances on to all classes that need them; only the SoundPlayer interface should show up within your game logic, etc.
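A rough C# sketch of the shape described above (the original context appears to be ActionScript; names follow the answer's examples, with method casing adapted):

public class SoundModel { /* loading and storing sounds */ }
public class SoundController { /* volume, panning, parent/child settings */ }

public interface SoundPlayer
{
    void Play();
    void Pause();
    void Rewind();
}

// One variation: a looping player that keeps its own handle
// so the loop can be stopped later.
public class LoopSoundPlayerImpl : SoundPlayer
{
    private readonly SoundModel _model;
    private readonly SoundController _controller;

    public LoopSoundPlayerImpl(SoundModel model, SoundController controller)
    {
        _model = model;   // injected, not looked up via a singleton
        _controller = controller;
    }

    public void Play() { /* start or resume the loop */ }
    public void Pause() { /* halt without losing position */ }
    public void Rewind() { /* return to the start of the loop */ }
}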

What programming practice that you once liked have you since changed your mind about? [closed]

As we program, we all develop practices and patterns that we use and rely on. However, over time, as our understanding, maturity, and even technology usage changes, we come to realize that some practices that we once thought were great are not (or no longer apply).
An example of a practice I once used quite often, but have in recent years changed, is the use of the Singleton object pattern.
Through my own experience and long debates with colleagues, I've come to realize that singletons are not always desirable - they can make testing more difficult (by inhibiting techniques like mocking) and can create undesirable coupling between parts of a system. Instead, I now use object factories (typically with an IoC container) that hide the nature and existence of singletons from the parts of the system that don't care or need to know. Those parts simply rely on a factory (or service locator) to acquire access to such objects.
My questions to the community, in the spirit of self-improvement, are:
What programming patterns or practices have you reconsidered recently, and now try to avoid?
What did you decide to replace them with?
//Coming out of university, we were taught to ensure we always had an abundance
//of commenting around our code. But applying that to the real world made it
//clear that over-commenting not only has the potential to confuse/complicate
//things but can make the code hard to follow. Now I spend more time on
//improving the simplicity and readability of the code and inserting fewer yet
//relevant comments, instead of spending that time writing overly-descriptive
//commentaries all throughout the code.
Single return points.
I once preferred a single return point for each method, because with that I could ensure that any cleanup needed by the routine was not overlooked.
Since then, I've moved to much smaller routines - so the likelihood of overlooking cleanup is reduced and in fact the need for cleanup is reduced - and find that early returns reduce the apparent complexity (the nesting level) of the code. Artifacts of the single return point - keeping "result" variables around, keeping flag variables, conditional clauses for not-already-done situations - make the code appear much more complex than it actually is, make it harder to read and maintain. Early exits, and smaller methods, are the way to go.
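A contrived C# sketch of the difference (Customer here is hypothetical):

public class Customer
{
    public bool IsActive;
    public int Orders;
}

public static class Pricing
{
    // Single exit: a result variable and nesting accumulate.
    public static decimal DiscountSingleExit(Customer c)
    {
        decimal result = 0m;
        if (c != null)
        {
            if (c.IsActive)
            {
                result = c.Orders > 10 ? 0.1m : 0.05m;
            }
        }
        return result;
    }

    // Early returns: each precondition is handled and dismissed.
    public static decimal DiscountEarlyReturn(Customer c)
    {
        if (c == null) return 0m;
        if (!c.IsActive) return 0m;
        return c.Orders > 10 ? 0.1m : 0.05m;
    }
}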
Trying to code things perfectly on the first try.
Trying to create perfect OO model before coding.
Designing everything for flexibility and future improvements.
In one word: overengineering.
Hungarian notation (both Forms and Systems).
I used to prefix everything. strSomeString or txtFoo.
Now I use someString and textBoxFoo. It's far more readable and easier for someone new to come along and pick up. As an added bonus, it's trivial to keep it consistent: camelCase the control and append a useful/descriptive name. Forms Hungarian has the drawback of not always being consistent, and Systems Hungarian doesn't really gain you much. Chunking all your variables together isn't really that useful, especially with modern IDEs.
The "perfect" architecture
I came up with THE architecture a couple of years ago. Pushed myself technically as far as I could so there were 100% loosely coupled layers, extensive use of delegates, and lightweight objects. It was technical heaven.
And it was crap. The technical purity of the architecture just slowed my dev team down, aiming for perfection over results, and I almost achieved complete failure.
We now have much simpler less technically perfect architecture and our delivery rate has skyrocketed.
The use of caffeine. It once kept me awake and in a glorious programming mood, where the code flew from my fingers with feverish fluidity. Now it does nothing, and if I don't have it I get a headache.
Commenting out code. I used to think that code was precious and that you can't just delete those beautiful gems that you crafted. I now delete any commented-out code I come across unless there's a TODO or NOTE attached, because it's too perilous to leave it in. To wit, I've come across old classes with huge commented-out portions, and it really confused me why they were there: were they recently commented out? Is this a dev-environment change? Why is this unrelated block here?
Seriously consider not commenting out code and just deleting it instead. If you need it, it's still in source control. YAGNI though.
The overuse / abuse of #region directives. It's just a little thing, but in C#, I previously would use #region directives all over the place, to organize my classes. For example, I'd group all class properties together in a region.
Now I look back at old code and mostly just get annoyed by them. I don't think it really makes things clearer most of the time, and sometimes they just plain slow you down.
So I have now changed my mind and feel that well laid out classes are mostly cleaner without region directives.
Waterfall development in general, and in specific, the practice of writing complete and comprehensive functional and design specifications that are somehow expected to be canonical and then expecting an implementation of those to be correct and acceptable. I've seen it replaced with Scrum, and good riddance to it, I say. The simple fact is that the changing nature of customer needs and desires makes any fixed specification effectively useless; the only way to really properly approach the problem is with an iterative approach. Not that Scrum is a silver bullet, of course; I've seen it misused and abused many, many times. But it beats waterfall.
Never crashing.
It seems like such a good idea, doesn't it? Users don't like programs that crash, so let's write programs that don't crash, and users should like the program, right? That's how I started out.
Nowadays, I'm more inclined to think that if it doesn't work, it shouldn't pretend it's working. Fail as soon as you can, with a good error message. If you don't, your program is going to crash even harder just a few instructions later, but with some nondescript null-pointer error that'll take you an hour to debug.
My favorite "don't crash" pattern is this:
// The anti-pattern: swallow the exception, log it where the user
// will never see it, and hand back a fake User object.
public User readUserFromDb(int id) {
    User u = null;
    try {
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM user WHERE id = " + id);
        if (rs.next()) {
            u = new User();
            u.setFirstName(rs.getString("fname"));
            u.setSurname(rs.getString("sname"));
            // etc
        }
    } catch (Exception e) {
        log.info(e);
    }
    if (u == null) {
        u = new User();
        u.setFirstName("error communicating with database");
        u.setSurname("error communicating with database");
        // etc
    }
    u.setId(id);
    return u;
}
Now, instead of asking your users to copy/paste the error message and sending it to you, you'll have to dive into the logs trying to find the log entry. (And since they entered an invalid user ID, there'll be no log entry.)
I thought it made sense to apply design patterns whenever I recognised them.
Little did I know that I was actually copying styles from foreign programming languages, while the language I was working with allowed for far more elegant or easier solutions.
Using multiple (very) different languages opened my eyes and made me realise that I don't have to mis-apply other people's solutions to problems that aren't mine. Now I shudder when I see the factory pattern applied in a language like Ruby.
Obsessive testing. I used to be a rabid proponent of test-first development. For some projects it makes a lot of sense, but I've come to realize that it is not only unfeasible, but rather detrimental to many projects to slavishly adhere to a doctrine of writing unit tests for every single piece of functionality.
Really, slavishly adhering to anything can be detrimental.
This is a small thing, but: Caring about where the braces go (on the same line or next line?), suggested maximum line lengths of code, naming conventions for variables, and other elements of style. I've found that everyone seems to care more about this than I do, so I just go with the flow of whoever I'm working with nowadays.
Edit: The exception to this being, of course, when I'm the one who cares the most (or is the one in a position to set the style for a group). In that case, I do what I want!
(Note that this is not the same as having no consistent style. I think a consistent style in a codebase is very important for readability.)
Perhaps the most important "programming practice" I have since changed my mind about, is the idea that my code is better than everyone else's. This is common for programmers (especially newbies).
Utility libraries. I used to carry around an assembly with a variety of helper methods and classes with the theory that I could use them somewhere else someday.
In reality, I just created a huge namespace with a lot of poorly organized bits of functionality.
Now, I just leave them in the project I created them in. In all probability I'm not going to need it, and if I do, I can always refactor them into something reusable later. Sometimes I will flag them with a //TODO for possible extraction into a common assembly.
Designing more than I coded.
After a while, it turns into analysis paralysis.
The use of a DataSet to perform business logic. This binds the code too tightly to the database; the DataSet is usually created from SQL, which makes things even more fragile. If the SQL or the database changes, the breakage tends to trickle through everything the DataSet touches.
Performing any business logic inside an object constructor. Inheritance and the ability to create overloaded constructors tend to make maintenance difficult.
Abbreviating variable/method/table/... Names
I used to do this all of the time, even when working in languages with no enforced limits on the lengths of names (well, they were probably 255 or something). One of the side effects was a lot of comments littered throughout the code explaining the (non-standard) abbreviations. And of course, if the names were changed for any reason...
Now I much prefer to call things what they really are, with good descriptive names, including only standard abbreviations. No need for useless comments, and the code is far more readable and understandable.
Wrapping existing Data Access components, like the Enterprise Library, with a custom layer of helper methods.
It doesn't make anybody's life easier.
It's more code that can have bugs in it.
A lot of people know how to use the EntLib data access components; no one but the local team knows how to use the in-house data access solution.
I first heard about object-oriented programming while reading about Smalltalk in 1984, but I didn't have access to an o-o language until I used the cfront C++ compiler in 1992. I finally got to use Smalltalk in 1995. I had eagerly anticipated o-o technology, and bought into the idea that it would save software development.
Now, I just see o-o as one technique that has some advantages, but it's just one tool in the toolbox. I do most of my work in Python, and I often write standalone functions that are not class members, and I often collect groups of data in tuples or lists where in the past I would have created a class. I still create classes when the data structure is complicated, or I need behavior associated with the data, but I tend to resist it.
I'm actually interested in doing some work in Clojure when I get the time, which doesn't provide o-o facilities, although it can use Java objects if I understand correctly. I'm not ready to say anything like o-o is dead, but personally I'm not the fan I used to be.
In C#, using _notation for private members. I now think it's ugly.
I then changed to this.notation for private members, but found I was inconsistent in using it, so I dropped that too.
I stopped going by the university recommended method of design before implementation. Working in a chaotic and complex system has forced me to change attitude.
Of course I still do code research, especially when I'm about to touch code I've never touched before, but normally I try to focus on implementations as small as possible to get something going first. This is the primary goal. Then I gradually refine the logic and let the design emerge by itself. Programming is an iterative process and works very well with an agile approach and with lots of refactoring.
The code will not look at all what you first thought it would look like. Happens every time :)
I used to be big into design-by-contract. This meant putting a lot of error checking at the beginning of all my functions. Contracts are still important, from the perspective of separation of concerns, but rather than try to enforce what my code shouldn't do, I try to use unit tests to verify what it does do.
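For example, a minimal sketch (NUnit-style [Test] and Assert assumed; PriceCalculator is hypothetical): rather than guarding against bad inputs at the top of Total, a test verifies the behavior that matters.

public class PriceCalculator
{
    // Bulk orders (10 or more units) get 10% off.
    public decimal Total(decimal unitPrice, int quantity)
    {
        decimal total = unitPrice * quantity;
        return quantity >= 10 ? total * 0.9m : total;
    }
}

[Test]
public void Total_AppliesBulkDiscount()
{
    var calc = new PriceCalculator();
    Assert.AreEqual(90m, calc.Total(unitPrice: 10m, quantity: 10));
}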
I would use statics in a lot of methods/classes, as it was more concise. When I started writing tests, that practice changed very quickly.
Checked Exceptions
An amazing idea on paper - defines the contract clearly, no room for mistake or forgetting to check for some exception condition. I was sold when I first heard about it.
Of course, it turned out to be such a mess in practice. To the point that libraries today, like Spring JDBC, list hiding legacy checked exceptions as one of their main features.
That anything worthwhile was only coded in one particular language. In my case I believed that C was the best language ever and I never had any reason to code anything in any other language... ever.
I have since come to appreciate many different languages and the benefits/functionality they offer. If I want to code something small - quickly - I would use Python. If I want to work on a large project I would code in C++ or C#. If I want to develop a brain tumour I would code in Perl.
When I needed to do some refactoring, I thought it was faster and cleaner to start straightaway and implement the new design, fixing up the connections until they work. Then I realized it's better to do a series of small refactorings to slowly but reliably progress towards the new design.
Perhaps the biggest thing that has changed in my coding practices, as well as in others', is the acceptance of outside classes and libraries downloaded from the internet as the basis for behaviors and functionality in applications. When I attended college, we were encouraged to figure out how to make things better via our own code and to rely upon the language to solve our problems. With the advances in all aspects of user interfaces and service/data consumption, this is no longer a realistic notion.
There are certain things which will never change in a language, and having a library that wraps this code in a simpler transaction and in fewer lines of code that I have to write is a blessing. Connecting to a database will always be the same. Selecting an element within the DOM will not change. Sending an email via a server-side script will never change. Having to write this time and again wastes time that I could be using to improve my core logic in the application.
Initializing all class members.
I used to explicitly initialize every class member with something, usually NULL. I have come to realize that this:
normally means that every variable is initialized twice before ever being read
is silly, because most languages automatically initialize variables to NULL
actually incurs a slight performance hit in most languages
can bloat code on larger projects
Like you, I also have embraced IoC patterns in reducing coupling between various components of my apps. It makes maintenance and parts-swapping much simpler, as long as I can keep each component as independent as possible. I'm also utilizing more object-relational frameworks such as NHibernate to simplify database management chores.
In a nutshell, I'm using "mini" frameworks to aid in building software more quickly and efficiently. These mini-frameworks save lots of time, and if done right can make an application super simple to maintain down the road. Plug 'n Play for the win!

How do you refactor a class that is constantly being edited?

Over the course of time, my team has created a central class that handles an agglomeration of responsibilities and runs to over 8,000 lines, all of it hand-written, not auto-generated.
The mandate has come down. We need to refactor the monster class. The biggest part of the plan is to define categories of functionality into their own classes with a has-a relationship with the monster class.
That means that a lot of references that currently read like this:
var monster = new orMonster();
var timeToOpen = monster.OpeningTime.Subtract(DateTime.Now);
will soon read like this:
var monster = new Monster();
var timeToOpen = monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now);
The question is: How on Earth do we coordinate such a change? References to "orMonster" litter every single business class. Some methods are called in literally thousands of places in the code. It's guaranteed that, any time we make such a change, someone else (probably multiple someone elses) on the team will have code checked out that calls the .OpeningTime property.
How do you coordinate such a large scale change without productivity grinding to a halt?
You should make the old method call the new method. Then over time change the references to the old method to call the new method instead. Once all the client references are changed, you can delete the old method.
For more information, see Move Method in Martin Fowler's classic, Refactoring.
One thing you can do is to temporarily leave proxy methods in the monster class that will delegate to the new method. After a week or so, once you are sure all code is using the new method, then you can safely remove the proxy.
I've handled this before by going ahead and refactoring the code, but then adding methods that match the old signature that forward the calls to the new method. If you add the "Obsolete" attribute to these temporary methods, your code will still build with both the old method calls and the new method calls. Then over time you can go back through and upgrade the code that is calling the old method. The difference here is that you'll get "Warnings" during the build to help you find all of the code that needs upgrading.
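For example, a sketch (Monster, TimeKeeper and OpeningTime are taken from the question; the wiring is illustrative):

using System;

public class TimeKeeper
{
    public DateTime OpeningTime { get; set; }
}

public class Monster
{
    public TimeKeeper TimeKeeper { get; private set; }

    public Monster()
    {
        TimeKeeper = new TimeKeeper();
    }

    // Temporary forwarder: existing callers still compile,
    // but every call site now produces a build warning.
    [Obsolete("Use Monster.TimeKeeper.OpeningTime instead.")]
    public DateTime OpeningTime
    {
        get { return TimeKeeper.OpeningTime; }
    }
}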
I'm not sure what language you're using but in .Net you can create compiler warnings which will allow you to leave the old references for a time so that they will function as expected but place a warning for your other developers to see.
http://dotnettipoftheday.org/tips/ObsoleteAttribute.aspx
Develop your changes in a branch. Break out a subset of code to a new class, make changes across the client base, test thoroughly, and then merge back.
That concentrates the breakage to when you merge — not the entire development cycle.
Combine this with Patrick's suggestion to have the monster call the small monsters. That'll let you easily revert if your merged code breaks a given client. As Patrick says, you'll be able to remove the monster's methods (now stubs) once you prove nobody's using them.
I also echo several posters' advice to expose the broken out classes directly — not via the monster. Why apply only half a cure? With the same effort, you could apply a complete cure.
Finally: write unit tests. Write lots of unit tests. Oh, boy, do you need unit tests to safely pull this one off. Did I mention you need unit tests?
Keep the old method in place and forward to the new method (as others have said) but also send a log message in the forwarding method to remind yourself to remove it.
You could just add a comment but that's too easy to miss.
Suggest using a tool such as nDepend to identify all of the references to the class methods. The output from nDepend can be used to give you a better idea about how to group the methods.
var monster = new Monster();
var timeToOpen = monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now);
I'm not sure that divvying it up and just making portions of it publicly available is any better. That violates the Law of Demeter and can lead to NullReference pain.
I'd suggest exposing timekeeper to people without involving the monster.
If anything you'd be well off analysing the API and seeing what you can cut and encapsulate within monster. Certainly giving monster toys to play with as opposed to making monster do all of the work itself is a good call. The main effort is defining the toys monster needs to simplify his work.
Don't refactor it.
Start over and follow the law of demeter. Create a second monster class and start from scratch. When the second monster class is finished and working, then replace occurrences of the first monster. Swap it out. Hopefully they share an interface, or you can make that happen.
And instead of this: "monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now)"
Do this: monster.SubtractOpeningTime(DateTime.Now). Don't kill yourself with dot-notation (hence the Demeter).
Several people have provided good answers regarding the orchestration of the refactor itself. That's key. But you also asked about coordinating the changes between multiple people (which I think was the crux of your question). What source control are you using? Anything like CVS, SVN, etc can handle incoming changes from multiple developers at once. The key to making it go smoothly is that each person must make their commits granular and atomic, and each developer should pull other people's commits often.
I would look first at using partial classes to split the single monster class over many files, grouping methods into categories.
You will need to stop anyone from editing the monster class while you split it into the files.
From then on you are likely to get less merge conflicts as there will be less edits to each file. You can then change each method in the monster class, (one method per checkin) to call your new classes.
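In C#, the first step might look like this (file names and members are illustrative):

// Monster.TimeKeeping.cs
using System;

public partial class Monster
{
    public DateTime OpeningTime { get; set; }
}

// Monster.Feeding.cs
public partial class Monster
{
    public void Feed()
    {
        // feeding logic moved here from the 8,000-line file
    }
}

Each partial file can then be migrated to a real class, one method per checkin, exactly as described above.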
Such a huge class is really an issue. Since it grew so big and nobody felt uncomfortable, there must be something wrong with project policies. I'd say you should split into pairs and do pair programming. Create a branch for every pair of programmers. Work for 1-2 days on refactoring. Compare your results. This will help you avoid the situation when the refactoring would go from the start into the wrong direction and finally that would lead to the need of rewriting the monster class from scratch.

Best use pattern for a DataContext

What's the best lifetime model for a DataContext? Should I just create a new one whenever I need it (aka, function level), should I keep one available in each class that would use it (class level), or should I create a static class with a static DataContext (app-domain level)? Are there any considered best practices on this?
You pretty much need to keep the same data context available throughout the lifetime of the operations you want to perform if you're ever going to store changes that are to be .SubmitChanges()'d later, as otherwise you will lose those changes.
If you're just querying stuff then it's fine to create them as needed, but then if later you want to .SubmitChanges() you'll have to refactor your code a lot, so you may as well adopt the pattern of effectively keeping the datacontext global throughout your app from the beginning.
Note the data context is disconnected. The connection is only made when the query data is enumerated (not when you first run the query; it's a 'lazy' data type, so it only provides data when needed), and it is closed immediately afterwards. On .SubmitChanges() the connection is opened to submit the changes, then closed immediately afterwards. So don't think keeping the datacontext around keeps a connection open; it doesn't (you can hook the StateChange event of the connection to confirm this for yourself; that's how I'm sure).
There is a great article over at Rick Strahl's Blog which covers this topic in depth, far more than my answer here provides!!
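For what it's worth, a common compromise is one DataContext per unit of work; a rough sketch, assuming a LINQ to SQL generated context called MyDataContext with a Users table (both hypothetical here):

// Query-only work: a short-lived context is fine.
using (var db = new MyDataContext())
{
    var activeUsers = db.Users.Where(u => u.IsActive).ToList();
}

// Update work: one context lives for the whole operation,
// so its change tracking survives until SubmitChanges().
using (var db = new MyDataContext())
{
    var user = db.Users.Single(u => u.Id == 42);
    user.LastLogin = DateTime.Now;
    db.SubmitChanges();
}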
I think Jeff Atwood talked about this in the Herding Code podcast, when he was questioned about exactly the same thing. Listen to the last 15-20 minutes or so.
I think on SO the datacontext is created in the Controller class. I'm not sure about a lot of the details, but that's what it looked like.