In Mercurial there is the fetch extension that you can use to emulate something like svn update, i.e. to merge with incoming changes without even looking at them. But even if you don't use hg fetch, most of your merges will "miraculously" work without resulting in a conflict. This is great, but how safe is it to trust such merges to be valid merges of, say, Java code?
Are there any examples to demonstrate why or when these merges should be (or shouldn't be) trusted?
Well, they are quite safe as long as you don't start reorganizing your code.
Consider the following example:
class A {
    public void methodA() {
        // do something here
        // that should be great
    }

    public void methodB() {
        // And now I'll
        // do something here
        // and it's gonna be good.
    }

    public void methodC() {
        // Finally, let's
        // do something here
    }
}
Now, you start working and decide to add some instructions to methodC.
Meanwhile, a co-worker decides that methodC, for whatever reason, should be moved to the top of the class.
You will end up with two versions to merge.
Yours:
class A {
    public void methodA() {
        // do something here
        // that should be great
    }

    public void methodB() {
        // And now I'll
        // do something here
        // and it's gonna be good.
    }

    public void methodC() {
        // Finally, let's
        // do something here
        // and here your marvelous changes
    }
}
And your co-worker's:
class A {
    public void methodC() {
        // Finally, let's
        // do something here
    }

    public void methodA() {
        // do something here
        // that should be great
    }

    public void methodB() {
        // And now I'll
        // do something here
        // and it's gonna be good.
    }
}
When the merge occurs, because the matching context is only three lines wide by default, the automatic merge might consider this a valid result:
class A {
    public void methodC() {
        // Finally, let's
        // do something here
    }

    public void methodA() {
        // do something here
        // that should be great
    }

    public void methodB() {
        // And now I'll
        // do something here
        // and it's gonna be good.
    }

    public void methodC() {
        // Finally, let's
        // do something here
        // and here your marvelous changes
    }
}
For that reason, when possible, I try to keep methods organized: accessors grouped together and left untouched, private methods grouped by common logic, and so on. But it is not always possible.
Hopefully this case is quite rare, and if there are too many changes, Mercurial will ask you to merge the classes manually.
I've been using Mercurial at work for several months now. The overall team is 50+ software developers, divided into sub-groups of 5-10 developers. We've yet to see a failed merge that was not the result of:
a parent file being broken/corrupted going into the merge (i.e. a coding error)
incorrect resolution of merge conflicts by developer
So, we've been perfectly happy with the merge results for standard text files (.c, .h, .pl, makefile, linkerfiles, etc..).
We have identified a case in which merging is a bad idea and can result in some issues, and this involves auto-generated code or models being stored as text files. Mercurial will not try to merge binary files, but it will happily merge models, auto-generated .c/.h files, and so on. You can specify merge strategies per directory or file type, but even with these settings in place, inappropriate merging can occur due to Mercurial's premerge. Outside of these cases, Hg merging has been very effective for us.
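For the curious, a rough sketch of what those settings look like in an .hgrc (the paths and tool name here are hypothetical, not our actual configuration):

[merge-patterns]
# Hypothetical patterns: take the incoming side of generated files
# wholesale instead of attempting a textual merge.
gen/**.c = internal:other
gen/**.h = internal:other

[merge-tools]
# For an external tool, disabling premerge stops Mercurial from
# attempting its own textual merge before invoking the tool.
mymergetool.premerge = False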
ps: if you are interested in the model/auto-gen case, find a suggestion here.
A merge commit is as serious as a "regular" code changing commit. So all rules for regular development also apply for a merge. There is no magic which makes any VCS do the right thing with the code during a merge, basically they use sophisticated text search-and-replace algorithms to place the changes of both branches into the output document. Merge conflicts only arise when the text replacement fails, but not when the semantic of the code goes wrong.
The developer performing the merge has to decide whether these changes are correct, typically by reviewing the differences against both merge parents (and checking that the software compiles, running the unit tests, getting a peer review, and ... you know the drill).
I use hg fetch all the time. I let it auto pull/merge/commit and then I build and test. If something breaks, you can either roll back the last transaction (hg rollback) or just fix what's broken and commit a new changeset before pushing.
If you're suspicious of the merge process, the horse has already left the barn.
Coordinated development requires just that: coordination. You shouldn't be simultaneously working on the same segment of code. Whether that means not working in the same function, class or file depends on your circumstances and the scope of your changes, a subject too deep for a simple explanation here.
The point is, unless you know you set out upon the changes with good coordination, neither an automated merge process nor a manual one, however smooth, provides a guarantee of a good outcome. At best it can tell you that you ended up not working in the same piece of code, but that's no comfort against semantic and logical changes that break your code or, worse, subtly pollute it. That's true whether or not it merges without complaint, and even if it compiles.
The best defense is a test suite, which allows you to inspect the end product in an automated fashion. Luckily, that also happens to be one of the best approaches to ongoing maintenance and development anyway.
All that said, most of the merges I have done have gone without a hitch. The ones that have caused trouble made themselves known as conflicts in the merge process, which was enough to tell us to look more closely at the code in question but also, and more importantly, the techniques we were using to partition our work. It's better to get it right going in, but it's also hard to know what "right" is until you've messed up a couple times. That doesn't mean we didn't introduce logical errors, and we don't test all of our code, but the world isn't perfect either.
What it boils down to is that Mercurial's merge process does everything a careful manual process would, just faster: a net positive. You get to skip hand-merging all of those changes that look innocuous; even if they did contain logical errors, you would probably have missed them in a cursory inspection such as a manual merge anyway. It is faster, and it zeroes in on the conflicts, which are your best smoking gun for the logical errors too in any case.
The only real answer is to explicitly coordinate your efforts up front as well as invest in a testing methodology, such as TDD with unit testing. Blaming the merge is weaksauce.
Related
Check: http://checkstyle.sourceforge.net/config_design.html#DesignForExtension
False positives: Checkstyle "Method Not Designed For Extension" error being incorrectly issued?
checkstyle Method is not designed for extension - needs to be abstract, final or empty
https://sourceforge.net/p/checkstyle/bugs/688/
It looks like everyone switches this Check off in their configurations.
Could anybody show a real code example where this Check is useful?
Is it useful for developers in practice, not just in theory?
The documentation you linked to already explains the rationale behind the check. This can be useful in some situations. In practice, I've never turned it on, mostly because it is too cumbersome to administer, and you certainly don't want it for all your classes.
But you asked for a code example. Consider these two classes (YourSuperClass is part of the API you provide, and TheirSubClass is provided by the users of your API):
public abstract class YourSuperClass
{
    public final void execute() {
        doStuff();
        hook();
        doStuff();
    }

    private void doStuff() {
        calculateStuff();
        // do lots of stuff
    }

    protected abstract void hook();

    protected final void calculateStuff() {
        // perform your calculation of stuff
    }
}

public class TheirSubClass extends YourSuperClass
{
    protected void hook() {
        // do whatever the hook needs to do as part of execute(), e.g.:
        calculateStuff();
    }

    public static void main(final String[] args) {
        TheirSubClass cmd = new TheirSubClass();
        cmd.execute();
    }
}
In this example, TheirSubClass cannot change the way execute works (do stuff, call hook, do stuff again). It also cannot change the calculateStuff() method. Thus, YourSuperClass is "designed for extension", because TheirSubClass cannot break the way it operates (like it could, if, say, execute() wasn't final). The designer of YourSuperClass remains in control, providing only specific hooks for subclasses to use. If the hook is abstract, TheirSubClass is forced to provide an implementation. If it is simply an empty method, TheirSubClass can choose to not use the hook.
Checkstyle's Check is a real-life example of a class designed for extension. Ironically, it would still fail the check, because getAcceptableTokens() is public but not final.
I have this rule enabled for all of my Spring-based projects. It's a royal PITA at first, because it does represent a lot of code cleanup. But I've learned to love the principle of this rule. I find it useful for forcing everyone to be consistent and clear in their thinking about which classes should be designed for extension and which shouldn't. In the code-bases I've worked with, there are in reality only a handful of classes that should be open to extension. Most are just asking for bugs by allowing extension: maybe not today, but down the road, when the current engineer is long gone, that section of code is forgotten, and a quick change needs to go in to fix "customer X is complaining about Y and they just need Z".
Consider:
It's too easy to subclass anything in Java willy-nilly, so behavior can change over time through different extended classes. New programmers may get OOP-happy, and suddenly everything extends something generic just because it can.
Overly deep class hierarchies are difficult to reason about, and while you might be an amazing developer who would never do something that atrocious... I've worked in code-bases where it was the norm, and those deeply nested HTML form generators were awful to work with and to reason about. So this rule would, in theory, make the original engineer think twice before writing something so awful for their peers.
By enforcing the final rule, or by documenting a class as designed for extension, you may avoid the bugs that inadvertent extension of a class can cause. I don't personally like the idea that some subclass could alter the behavior of my application, because that can cause unintended side-effects in weird ways (especially in large, partially tested applications). And it's in these extended classes, where complex behavior is hidden, that the hard-to-solve bugs live.
The DesignForExtension rule forces conversations among developers whenever extension is reached for as the quick-fix to get something out the door, when what should really happen is that the developers meet, discuss what's changing, and discuss whether extension is actually appropriate. Often, modifying the main class is more appropriate, with additional tests written for the new circumstances. This promotion of conversation is healthy for long-term code quality and intra-organizational knowledge sharing.
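For reference, a minimal sketch of the three method shapes the check accepts in a non-final class, matching the "abstract, final or empty" wording of the error:

public abstract class Template {
    // final: subclasses cannot break the algorithm's skeleton
    public final void run() {
        step();
        onFinish();
    }

    // abstract: a subclass is forced to supply an implementation
    protected abstract void step();

    // empty hook: a subclass may override it, and there is no
    // superclass behavior for the override to accidentally lose
    protected void onFinish() {
    }
}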
That being said, I do add to my checkstyle-suppressions.xml in my code for Spring-specific classes that cannot be declared final. Because, frameworks.
<suppress checks="DesignForExtension" files=".*Configuration\.java"/>
When adding a new feature to an existing system, if you come across an existing function that almost does what you need, is it best practice to:
Copy the existing function and make your changes on the new copy (knowing that copying code makes your fellow devs cry).
-or-
Edit the existing function to handle both the existing case and your new case risking that you may introduce new bugs into existing parts of the system (which makes the QA team cry)
If you edit the existing function, where do you draw the line before you should just create a new independent function (based on a copy)? 10% of the function changed? 50%?
You can't always do this, but one solution here is to split your existing function into smaller parts, allowing you to use the parts you need without editing all the code, and making each small piece easier to edit.
That being said, if you think you might introduce new bugs into existing parts of the system without noticing, you should probably think about using unit tests.
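A minimal Java sketch of that splitting idea (all names hypothetical): the old entry point keeps its exact behavior, and the new feature reuses only the pieces it needs:

class ReportWriter {
    // Existing entry point: behavior unchanged, still header + body.
    String fullReport(String data) {
        return header() + body(data);
    }

    // New feature: reuses header() instead of copying it.
    String summaryReport(String data) {
        return header() + firstLine(data);
    }

    private String header() {
        return "ACME Report\n";
    }

    private String body(String data) {
        return data;
    }

    private String firstLine(String data) {
        return data.split("\n", 2)[0];
    }
}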
The rule of thumb I tend to follow is that if I can cover the new behaviour by adding an extra parameter (or a new valid value) to the existing function, while leaving the code more-or-less "obviously the same" for the existing cases, then there's not much danger in changing the function.
For example, old code:
def utf8len(s):
    return len(s.encode('utf8'))  # or maybe something more memory-efficient
New use case - I'm writing some code in a style that uses the null object pattern, so I want utf8len(None) to return None instead of throwing an exception. I could define a new function utf8len_nullobjectpattern, but that's going to get quite annoying quite quickly, so:
def utf8len(s):
    if s is not None:
        return len(s.encode('utf8'))  # old code path is untouched
    else:
        return None  # new code path introduced
Then even if the unit tests for utf8len were incomplete, I can bet that I haven't changed the behavior for any input other than None. I also need to check that nobody was ever relying on utf8len to throw an exception for a None input, which is a question of (1) quality of documentation and/or tests; and (2) whether people actually pay any attention to defined interfaces, or just Use The Source. If the latter, I need to look at calling sites, but if things are done well then I pretty much don't.
Whether the old allowed inputs are still treated "obviously the same" isn't really a question of what percentage of the code is modified; it's how it's modified. I've picked a deliberately trivial example, since the whole of the old function body is visibly still there in the new function, but I think it's something you know when you see it. Another example would be making something that was fixed configurable (perhaps by passing a value, or a dependency that's used to get a value) with a default parameter that just provides the old fixed value. Every instance of the old fixed thing is replaced with (a call to) the new parameter, so it's reasonably easy to see on a diff what the change means. You have (or write) at least some tests to give confidence that you haven't broken the old inputs via some stupid typo, so you can go ahead even without total confidence in your test coverage.
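In Java, the same "configurable with a default" move can be sketched with an overload standing in for Python's default parameter (names hypothetical); the old signature keeps exactly the old behavior:

class Greeter {
    private static final String DEFAULT_GREETING = "Hello";

    // Old signature: every existing call site behaves as before.
    static String greet(String name) {
        return greet(name, DEFAULT_GREETING);
    }

    // New signature: the formerly fixed value is now configurable.
    static String greet(String name, String greeting) {
        return greeting + ", " + name + "!";
    }
}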
Of course you want comprehensive testing, but you don't necessarily have it. There are also two competing maintenance imperatives here: 1 - don't duplicate code, since if it has bugs in it, or behavior that might need to change in future, then you're duplicating the bugs / current behavior. 2 - the open/closed principle, which is a bit high-falutin' but basically says, "write stuff that works and then don't touch it". 1 says that you should refactor to share code between these two similar operations, 2 says no, you've shipped the old one, either it's usable for this new thing or it isn't, and if it isn't then leave it alone.
You should always strive to avoid code duplication. Therefore I would suggest that you try to write a new function that modifies the return value of the already existing function to implement your new feature.
I do realize that in some cases it might not be possible to do that. In those cases you definitely should consider rewriting the existing function without changing its interface. And introducing new bugs by doing that should be prevented by unit tests that can be run on the modified function before you add it to the project code.
And if you need only part of the existing function, consider extracting a new function from the existing one and use this new "helper" function in the existing and in your new function. Again confirming everything is working as intended via unit tests.
Imagine I have a function with a bug in it:
Pseudo-code:
void Foo(LPVOID o)
{
    // implementation details omitted
}
The problem is the user passed null:
Object bar = null;
...
Foo(bar);
Then the function might crash due to an access violation, but it could also happen to work fine. The bug is that the function should have been checking for the invalid case of passing null, but it just never did. It was never an issue because developers were trusted to know what they were doing.
If I now change the function to:
Pseudo-code:
void Foo(LPVOID o)
{
    if (o == null) throw new EArgumentNullException("o");
    // implementation details omitted
}
then people who were happily using the function, and happened not to get an access violation, will now suddenly begin seeing an EArgumentNullException.
Do I continue to let people use the function improperly, and create a new version of the function? Or do I fix the function to include the check it should have had originally?
So now the moral dilemma: do you ever add new sanity checks, safety checks, or assertions to existing code? Or do you declare the old function abandoned and create a new one?
Consider a bug so common that Microsoft had to fix it for developers:
MessageBox(GetDesktopWindow(), ...);
You never, ever, ever want to make a window modal against the desktop. You'll lock up the system. Do you continue to let developers lock up the user's computer? Or do you change the function to:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow())
        throw new Exception("hWndParent cannot be the desktop window. Use NULL instead.");
    ...
}
In reality, Microsoft changed the window manager to auto-fix the bad parameter:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow())
        hWndParent = 0;
    ...
}
In my made-up example there is no way to auto-fix the bad parameter: if I wasn't given an object, I can't do what I need to do with it.
Do you risk breaking existing code by adding parameter validation? Do you let existing code continue to be wrong, getting incorrect results?
The problem is that not only are you fixing a bug, but you are changing the semantic signature of the method by introducing an error case.
From a software engineering perspective, I would advocate specifying methods as precisely as possible (for instance, using pre- and post-conditions), but once a method is out there, specification changes are a no-go (or at least you would have to check every call site), and a new method is the better option.
I'd keep the old function and simply have it emit a warning on every (possibly) wrong use, and then I'd just kick the developer who used it wrong until he uses it properly.
You cannot catch everything. What if someone wrote MakeLocation("Ian Boyd", "is stupid")? Would you create a new function, or change the function to catch that? No, you would fire the developer (or at least punish him).
Of course this requires that you document what your function requires as input.
This is where having automated tests (unit tests, integration tests, automated functional tests) is great: they give you the power to change existing code with confidence.
When making changes like this, I would suggest finding all usages and ensuring they behave how you believe they should.
I myself would make bug fixes to the existing function rather than duplicate it 99% of the time. If the fix changes behavior a lot, and there are a lot of calls to the function, you need to be very sure of your change.
So go ahead: make your change, run your unit tests, then your automated functional tests. Fix any errors and you're golden!
If your code has a bug in it, you should do what you normally do when any bug is reported. One part of that is assessing the impact of fixing it and of not fixing it. Sometimes the right thing to do with a bug is to not fix it, because the behaviour it exposes has become accepted. Sometimes the cost of fixing it, or the inconvenience of releasing a fix outside the normal release cycle, stops you from releasing the fix for a while. This isn't a moral dilemma; it's an economic question of costs and benefits. If you are disturbed by the thought of having known bugs in your published code, publish a known-bugs list.
One option none of the other respondents seems to have suggested is to wrap the buggy function in another function which imposes the new behaviour that you require. In a world where functions can run to many lines, it is sometimes less likely to introduce new bugs if you preserve a 99%-correct piece of code and address the change without modifying the existing code. Of course, this is not always possible.
Two choices:
Give the error-checking version a new name and deprecate the old version: one version later, have the old one start issuing warnings (compile-time if possible, run-time if necessary); two versions later, remove it. (A sketch of this staged approach follows below.)
[not always possible] Place the newly introduced error check in such a way that it only triggers if the unmodified version would crash or produce undefined behavior. (So that users who were taking care in their code don't get any unpleasant surprises.)
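A sketch of the first option in Java (hypothetical names): the checked version gets a new name, and @Deprecated has the compiler issue the transition warnings:

import java.util.Objects;

class Locations {
    /** @deprecated Performs no argument checking; use {@link #createLocation}. */
    @Deprecated
    static String makeLocation(String owner, String label) {
        return owner + ":" + label;   // old behavior, untouched
    }

    // New, checked version under a new name.
    static String createLocation(String owner, String label) {
        Objects.requireNonNull(owner, "owner");
        Objects.requireNonNull(label, "label");
        return owner + ":" + label;
    }
}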
It entirely depends on you, your codebase, and your users.
If you are Microsoft and you have a bug in your API that is used by millions of devs around the world, then you will probably want to just create a new function and update the docs on the old one. If you can, you would also want to update the compiler to give warnings as well. (Though even then you may be able to change the existing system; remember when MS moved VC to the C++ standard and you had to change your #include <iostream.h> directives to #include <iostream> and add using namespace std; to get simple, existing console apps working again?)
It basically depends on what the function is. If it is something basic with massive ripple effects, fixing it could break a lot of code. If it is just an ancillary function, you may as well fix it. Of course, if you are Microsoft and your other code depends on a bug in one of your functions, you probably should fix it, since keeping it is just plain embarrassing. If other devs rely on the bug (that you created), then you may have an obligation to your users not to break the code that you caused to be buggy.
If you are a small company or an independent developer, then sure, go ahead and fix the function. If you only need to update yourself or a few people on the new usage, then fixing it is the best solution, especially since all it really requires is an added note in the docs for the function, e.g. "do not pass NULL" or "an exception is thrown if hWnd is the desktop", etc.
Another option, as a sort of compromise, is to create a wrapper function. You could create a small, inline function that checks the args and then calls the existing function. That way you don't really have to do much in the short term, and eventually, when people have moved to the new one, you can deprecate or even remove the old one, moving its code into the new one between the checks.
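A minimal sketch of that compromise (hypothetical names): the wrapper holds the new checks and delegates, so the legacy function and its existing callers stay untouched until they migrate:

class Legacy {
    // Existing function: left exactly as it was.
    static void foo(Object o) {
        // ... existing implementation ...
    }

    // Thin wrapper: validates, then delegates.
    static void fooChecked(Object o) {
        if (o == null) {
            throw new IllegalArgumentException("o must not be null");
        }
        foo(o);
    }
}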
In most scenarios, it is better to fix a buggy function, particularly if you are merely adding argument checks as opposed to completely changing its behavior. It is not really a good idea to facilitate (read: encourage) bad coding just because fixing it would break some existing code (especially if the code is free!). Think about it: if someone is creating a new program, they can do it right from the start instead of relying on a bug. If they are re-compiling an old program that depends on the bug, they can just update their code. Again, it depends on how messy and convoluted the code is, how many people are affected, and whether or not they pay you, but it is quite common to have to update old code to, for example, initialize variables that hadn't been, check for error codes, and so on.
To sum up, in your specific example (given the information provided), you should just fix it.
I'm in the process of catching up on technical documentation for a project I completed some months ago, and one I'm coming close to finishing. I used repositories to abstract out the data access layer in both and was writing a short summary of the pattern on our wiki at work.
It was whilst writing this summary that I realised I took a slightly different approach the second time.
One used an explicit InsertOnSubmit method coupled with a Unit of Work, with updates happening implicitly via the UoW tracking changes. The other had a Save method which inserted new entries and updated existing ones (without a UoW).
Which approach would you typically favour? Considering the usual CRUD scenarios, where should the responsibility for each of them lie?
I think whether a repository uses Unit of Work, caching, or any other related concepts should be left to the implementation. I prefer for the interface to resemble a data store which is aligned with the domain model at hand. So that a customer repository would look something like this:
interface ICustomerRepository
{
    Customer Load(int id);
    IEnumerable<Customer> Search(CustomerQuery q);
    void Save(Customer c);
    void Delete(Customer c);
}
This can be easily implemented by something like NHibernate, or NHibernate with NHibernate.Linq, or a direct SQL library, or even an XML or flat-file store. If possible, I like to keep the concept of a transaction outside of the repository, or at a more global scope, so that the operations of several repositories may be part of a single transaction.
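As a rough illustration that the interface can stay the same while the implementation remains a detail, here is an in-memory version (sketched in Java rather than C#, with insert-or-update Save semantics assumed):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

class InMemoryCustomerRepository {
    private final Map<Integer, Customer> rows = new HashMap<>();

    Customer load(int id) {
        return rows.get(id);
    }

    List<Customer> search(String nameContains) {
        List<Customer> hits = new ArrayList<>();
        for (Customer c : rows.values()) {
            if (c.name.contains(nameContains)) hits.add(c);
        }
        return hits;
    }

    // Save doubles as insert-or-update; no Unit of Work is exposed.
    void save(Customer c) {
        rows.put(c.id, c);
    }

    void delete(Customer c) {
        rows.remove(c.id);
    }
}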
I was confused when I first started to see anti-singleton commentary. I have used the singleton pattern in some recent projects, and it was working out beautifully. So much so, in fact, that I have used it many, many times.
Now, after running into some problems, reading this SO question, and especially this blog post, I understand the evil that I have brought into the world.
So: How do I go about removing singletons from existing code?
For example:
In a retail store management program, I used the MVC pattern. My Model objects describe the store, the user interface is the View, and I have a set of Controllers that act as liaison between the two. Great. Except that I made the Store into a singleton (since the application only ever manages one store at a time), and I also made most of my Controller classes into singletons (one mainWindow, one menuBar, one productEditor...). Now, most of my Controller classes access the other singletons like this:
Store managedStore = Store::getInstance();
managedStore.doSomething();
managedStore.doSomethingElse();
//etc.
Should I instead:
Create one instance of each object and pass references to every object that needs access to them?
Use globals?
Something else?
Globals would still be bad, but at least they wouldn't be pretending.
I see #1 quickly leading to horribly inflated constructor calls:
someVar = SomeControllerClass(managedStore, menuBar, editor, sasquatch, ...)
Has anyone else been through this yet? What is the OO way to give many individual classes access to a common variable without it being a global or a singleton?
Dependency Injection is your friend.
Take a look at these posts on the excellent Google Testing Blog:
Singletons are pathologic liars (but you probably already understand this if you are asking this question)
A talk on Dependency Injection
Guide to Writing Testable Code
Hopefully someone has made a DI framework/container for the C++ world? Looks like Google has released a C++ Testing Framework and a C++ Mocking Framework, which might help you out.
It's not the Singleton-ness that is the problem. It's fine to have an object that there will only ever be one instance of. The problem is the global access. Your classes that use Store should receive a Store instance in the constructor (or have a Store property / data member that can be set) and they can all receive the same instance. Store can even keep logic within it to ensure that only one instance is ever created.
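Sketched in Java (controller names hypothetical), this is plain constructor passing; only the composition root ever creates the Store:

class Store {
    // logic could live here to ensure only one instance is ever created
}

class ProductEditor {
    private final Store store;   // received, not fetched via getInstance()
    ProductEditor(Store store) { this.store = store; }
}

class MenuBar {
    private final Store store;
    MenuBar(Store store) { this.store = store; }
}

class App {
    public static void main(String[] args) {
        Store store = new Store();                       // created once
        ProductEditor editor = new ProductEditor(store); // same instance
        MenuBar menuBar = new MenuBar(store);            // everywhere
    }
}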
My way of avoiding singletons derives from the idea that "application global" doesn't mean "VM global" (i.e. static). Therefore I introduce an ApplicationContext class which holds much of the formerly static singleton state that should be application-global, like the configuration store. This context is passed into all structures. If you use an IoC container or service manager, you can use it to get access to the context.
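A bare-bones sketch of such a context (the fields shown are hypothetical):

import java.util.Properties;

class ApplicationContext {
    final Properties configuration;   // formerly a static singleton
    ApplicationContext(Properties configuration) {
        this.configuration = configuration;
    }
}

class ReportService {
    private final ApplicationContext context;   // context is passed in
    ReportService(ApplicationContext context) {
        this.context = context;
    }
}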
There's nothing wrong with using a global or a singleton in your program. Don't let anyone get dogmatic on you about that kind of crap. Rules and patterns are nice rules of thumb. But in the end it's your project and you should make your own judgments about how to handle situations involving global data.
Unrestrained use of globals is bad news. But as long as you are diligent, they aren't going to kill your project. Some objects in a system deserve to be singleton. The standard input and outputs. Your log system. In a game, your graphics, sound, and input subsystems, as well as the database of game entities. In a GUI, your window and major panel components. Your configuration data, your plugin manager, your web server data. All these things are more or less inherently global to your application. I think your Store class would pass for it as well.
It's clear what the cost of using globals is. Any part of your application could be modifying it. Tracking down bugs is hard when every line of code is a suspect in the investigation.
But what about the cost of NOT using globals? Like everything else in programming, it's a trade off. If you avoid using globals, you end up having to pass those stateful objects as function parameters. Alternatively, you can pass them to a constructor and save them as a member variable. When you have multiple such objects, the situation worsens. You are now threading your state. In some cases, this isn't a problem. If you know only two or three functions need to handle that stateful Store object, it's the better solution.
But in practice, that's not always the case. If every part of your app touches your Store, you will be threading it to a dozen functions. On top of that, some of those functions may have complicated business logic. When you break that business logic up with helper functions, you have to -- thread your state some more! Say for instance you realize that a deeply nested function needs some configuration data from the Store object. Suddenly, you have to edit 3 or 4 function declarations to include that store parameter. Then you have to go back and add the store as an actual parameter to everywhere one of those functions is called. It may be that the only use a function has for a Store is to pass it to some subfunction that needs it.
Patterns are just rules of thumb. Do you always use your turn signals before making a lane change in your car? If you're the average person, you'll usually follow the rule, but if you are driving at 4am on an empty highway, who gives a crap, right? Sometimes it'll bite you in the butt, but that's a managed risk.
Regarding your inflated constructor call problem, you could introduce parameter classes or factory methods to alleviate the problem for you.
A parameter class moves some of the parameter data into its own class, e.g. like this:
var parameterClass1 = new MenuParameter(menuBar, editor);
var parameterClass2 = new StuffParameters(sasquatch, ...);
var ctrl = new MyControllerClass(managedStore, parameterClass1, parameterClass2);
It sort of just moves the problem elsewhere, though. You might want to housekeep your constructor instead: only keep parameters that are important when constructing/initializing the class in question, and handle the rest with getter/setter methods (or properties if you're doing .NET).
A factory method is a method that creates all the instances of a class you need, and it has the benefit of encapsulating the creation of said objects. Factory methods are also quite easy to refactor towards from a Singleton, because a Singleton's getInstance method is already very similar. Say we have the following non-threadsafe, simple singleton example:
// The Rather Unfortunate Singleton Class
public class SingletonStore {
    private static SingletonStore _singleton = new SingletonStore();

    private SingletonStore() {
        // Do some privatised constructing in here...
    }

    public static SingletonStore getInstance() {
        return _singleton;
    }

    // Some methods and stuff to be down here
}

// Usage:
// var singleInstanceOfStore = SingletonStore.getInstance();
It is easy to refactor this towards a factory method. The solution is to remove the static reference:
public class StoreWithFactory {
    public StoreWithFactory() {
        // Whether the constructor is private or public doesn't matter,
        // unless you do TDD, in which case you need a public
        // constructor so you can create the object and test it.
    }

    // The method returning an instance of Singleton is now a
    // factory method.
    public static StoreWithFactory getInstance() {
        return new StoreWithFactory();
    }
}

// Usage:
// var myStore = StoreWithFactory.getInstance();
Usage is still the same, but you're no longer bogged down with having a single instance. Naturally, you would move this factory method into its own class, as the Store class shouldn't concern itself with its own creation (and, coincidentally, you follow the Single Responsibility Principle by moving the factory method out).
From here you have many choices, but I'll leave that as an exercise for yourself. It is easy to over-engineer (or overheat) on patterns here. My tip is to only apply a pattern when there is a need for it.
Okay, first of all, the "singletons are always evil" notion is wrong. You use a Singleton whenever you have a resource which won't or can't ever be duplicated. No problem.
That said, in your example, there's an obvious degree of freedom in the application: someone could come along and say "but I want two stores."
There are several solutions. The first that comes to mind is to build a factory class: when you ask for a Store, it gives you one keyed by some universal name (e.g., a URI). Inside that store, you need to be sure that multiple copies don't step on one another, via critical regions or some method of ensuring atomicity of transactions.
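A rough Java sketch of that factory (hypothetical: one Store per name, created on first request):

import java.util.HashMap;
import java.util.Map;

class Store {
    private final String name;   // e.g. a URI identifying the store
    Store(String name) { this.name = name; }
}

class StoreFactory {
    private final Map<String, Store> stores = new HashMap<>();

    // Hands out the Store registered under a name, creating it on
    // demand; synchronized so two callers can't create duplicates.
    synchronized Store get(String name) {
        return stores.computeIfAbsent(name, Store::new);
    }
}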
Miško Hevery has a nice article series on testability, covering among other things the singleton, in which he not only talks about the problems but also about how you might solve them (see "Fixing the flaw").
I like to encourage the use of singletons where necessary while discouraging the use of the Singleton pattern. Note the difference in the case of the word. The singleton (lower case) is used wherever you only need one instance of something. It is created at the start of your program and is passed to the constructor of the classes that need it.
class Log
{
public:
    void logmessage(const char *msg)
    {
        // do some stuff
    }
};

class Database
{
    Log &_log;
public:
    Database(Log &log) : _log(log) {}

    void Open(const char *name)
    {
        _log.logmessage("opening database");
    }
};

int main()
{
    Log log;            // the one and only instance, created up front
    Database db(log);   // the dependency is passed in, not fetched
    // do some more stuff
}
Using a singleton gives you all of the capabilities of the Singleton anti-pattern, but it makes your code more easily extensible, and it makes it testable (in the sense of the word defined on the Google testing blog). For example, if we decide that we sometimes also need to log to a web service, using the singleton we can easily do that without significant changes to the code.
By comparison, the Singleton pattern is just another name for a global variable. It should never be used in production code.