Interface segregation principle and single responsibility principle - solid-principles

I have an interface with 9 methods, each doing something different but all within one context. That is, when I inject this interface as a dependency, all 9 methods are used in that one specific scope. Should I create 9 different interfaces, one per method, and 9 classes to implement them, if I always use all 9 methods after creating an instance?

If you use all or most of these nine methods in every use case, then you do not need interface segregation.
However, the reason you are even asking is probably that nine methods feels like too many. That may well be the case, and it sounds like some refactoring could help, but no one can tell you for sure without studying your code.
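For illustration, here is a rough Java sketch of what segregation would buy you only if different clients needed different slices of the behaviour (all names are invented for this example):

// A wide interface that every current caller happens to use in full.
interface OrderWorkflow {
    void validate();
    void reserveStock();
    void charge();
    void ship();
    // ...plus five more methods in the same business context
}

// Segregation only pays off when some client needs just a slice:
interface PaymentSteps {
    void charge();
}

interface FulfilmentSteps {
    void reserveStock();
    void ship();
}

// One class can still implement several slices, so callers that need
// everything lose nothing, while narrow callers depend on less.
class DefaultOrderWorkflow implements PaymentSteps, FulfilmentSteps {
    public void validate() { /* ... */ }
    public void reserveStock() { /* ... */ }
    public void charge() { /* ... */ }
    public void ship() { /* ... */ }
}

If every consumer really calls all nine methods, the narrow interfaces above only add ceremony, which is exactly the point made above.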

Related

Logical way to sort your interfaces

I currently put them in an interfaces folder, but this won't help readability for people who don't know the code base any more than lumping all of your implementation classes into a folder called 'implementation'.
How do you logically sort your project interfaces?
I assume you're talking about the kind of interfaces that classes implement in OO languages.
I'd say it's better to name the folder by function, if you really want to separate the interface from implementing classes - call the folder 'listeners' or whatever these interfaces represent. The fact they're interfaces (or abstract classes) should be obvious from the way they're named and used.
Then again, if it's not some form of a framework other people will use, but you end up with an interface and two or three implementing classes that you write and leave be, you might as well stick them all together in the same package. I don't think that making a package for a single class/interface does much for clarity.
Not part of the question but I'll write it anyway - I'm also not a fan of the "I" prefix for interfaces. If it's not obvious without it, then it could probably use a different name/structure.

Why should you ever have to care whether an object reference is an interface or a class?

I often seem to run into the discussion of whether or not to apply some sort of prefix/suffix convention to interface type names, typically adding "I" to the beginning of the name.
Personally I'm in the camp that advocates no prefix, but that's not what this question is about. Rather, it's about one of the arguments I often hear in that discussion:
You can no longer see at a glance whether something is an interface or a class.
The question that immediately pops up in my head is: apart from object creation, why should you ever have to care whether an object reference is a class or an interface?
I've tagged this question as language agnostic, but as has been pointed out it may not be. I contend that it is because while specific language implementation details may be interesting, I'd like to keep this on a conceptual level. In other words, I think that, conceptually, you'd never have to care whether an object reference is typed as a class or an interface but I'm not sure, hence the question.
This is not a discussion about IDEs and what they do or don't do when visualizing the different types; caring about the type of an object is certainly a necessity when browsing through code (packages/sources/whatever form). Nor is it a discussion about the pros and cons of either naming convention. I just can't seem to figure out in what scenario, other than object creation, you actually care whether you're referencing a concrete type or an interface.
Most of the time, you probably don't care. But here are some instances that I can think of where you would. There are several, and it does vary a little bit by language. Some languages don't mind as much as others.
In the case of inversion of control (where someone PASSES you a parameter) you probably don't care if it's an interface or an object as far as calling its methods etc. But when dealing with types, it definitely can make a difference.
In managed languages such as the .NET languages, a class can inherit from only one class but implement many interfaces, and the order of the base class versus interfaces matters in a class declaration. So you need to know which is which when defining a new class or interface (see the sketch after this list).
In Delphi / VCL, interfaces are reference counted and automatically collected, whereas classes must be explicitly freed, so lifecycle management as a whole is affected, not just creation.
Interfaces may not be viable sources for class references (you can't necessarily get from an interface reference back to the implementing object).
Interfaces can be cast to compatible interfaces, but in many languages, they cannot be cast to compatible classes. Classes can be cast to either.
Interfaces may be passed to parameters of type IID, or IUnknown, whereas classes cannot (without a cast and a supporting interface).
An interface's implementation is unknown. Its input and output are defined, but the implementation that produces the output is abstracted away. In general, one's attitude may be that when working with a class, one may know how the class works, but when working with an interface, no such assumption should be made. In a perfect world it might make no difference, but in reality this can certainly affect your design.
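As a small Java illustration of the declaration point above (type names are made up):

// A class extends at most one class but may implement many interfaces,
// and "extends" has to come before "implements" in the declaration.
abstract class Device { }
interface Closeable { void close(); }
interface Flushable { void flush(); }

class BufferedDevice extends Device implements Closeable, Flushable {
    public void close() { /* ... */ }
    public void flush() { /* ... */ }
}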
I agree with you (and thereby do not use an "I" prefix for interfaces). We shouldn't have to care whether it is an abstract class or an interface.
It's worth noting that Java needs a notion of interface only because it does not support multiple inheritance. Otherwise, the "abstract class" concept would suffice (which may be fully abstract, partially abstract, or almost concrete with just one tiny abstract bit, whatever).
Things that a concrete class can have and an interface can't:
Constructors
Instance fields
Static methods (before Java 8) and non-constant static fields (interface fields are implicitly public static final)
So if you use the convention of starting all interface names with 'I', it indicates to the user of your library that the particular type will not have any of the above-mentioned things.
But personally I feel that this is not reason enough to start all interface names with 'I'. Modern IDEs are powerful enough to indicate whether some type is an interface. It also hides the true meaning of an interface name: imagine if the Runnable and List interfaces were named IRunnable and IList respectively.
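A short Java sketch of that difference, with made-up names:

// An abstract class can hold construction logic and per-instance state...
abstract class AbstractTask {
    private final String name;            // instance field

    protected AbstractTask(String name) { // constructor
        this.name = name;
    }

    public String name() { return name; }

    public abstract void run();
}

// ...while an interface only describes what an implementor can do.
interface Task {
    void run();
}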
When a class is used, I can assume that I will get objects from a relatively small and fairly well-defined range of subclasses. That's because subclassing is - or at least it should be - a decision that isn't made too easily, especially in languages that don't support multiple inheritance. In contrast, interfaces can be implemented by any class, and an implementation can be added later to any class.
So the information is useful, especially when browsing through code, and trying to get a feeling what the code author intended to do - but I think it should be enough, if the IDE shows interfaces/classes as distinctive icons.
You want to see at a glance which are the "interfaces" and which are the "concrete classes" so that you can focus your attention on the abstractions in the design instead of the details.
Good designs are based on abstractions - if you know and understand them you understand the system without knowing any of the details. So you know you can skip the classes without the I prefix, and focus on the ones that do have it while you are understanding the code, and you also know to avoid building new code around non-interface classes without having to refer to some other design document.
I agree that the I* naming convention is just not appropriate for modern OO languages, but truth is this question isn't really language agnostic. There are legitimate cases where you have an interface not for any architectural reason but because you simply don't have an implementation, or don't have access to one. For those cases you can read I* as *Stub or similar, and in those cases it might make sense to have both an IBlah and a Blah class.
These days, though, you rarely come up against this, and in modern OO languages when you say interface you actually mean interface, not just "I don't have the code for this". So there is no need for the I*, and in fact it encourages really bad OO design because you won't get the natural naming conflicts that would tell you something's gone wrong in your architecture. Say you had a List and an IList... what's the difference? When would you use one over the other? If you wanted to implement IList, would you be constrained (conceptually at least) by what List does? I'll tell you what: if I found both an IBlah and a Blah class in any of my codebases I would purge one at random and take away that person's commit privileges.
Interfaces don't have fields, hence when you use IDisposable (or whatever), you know you're only declaring what you can do. That seems to me the main point of it.
Distinguishing between interfaces and classes may be useful, anywhere the type is referenced, in the IDE or out, to determine:
Can I make a new implementation of this type?
Can I implement this interface in a language that does not support multiple inheritance of implementation classes (e.g., Java)?
Can there be multiple implementations of this type?
Can I easily mock this interface in an arbitrary mocking framework?
It is worth noting that UML distinguishes between interfaces and implementation classes. In addition, the "I" prefix is used in the examples in "The Unified Modeling Language User Guide" by the three amigos Booch, Jacobson and Rumbaugh. (Incidentally, this also provides an example why IDE syntax coloring alone is not sufficient to distinguish in all contexts.)
You should care, because:
An interface (with a capital "I") enables anyone, namely you or your co-workers, to use any implementation that implements it. If in the future you figure out a better way to do something, say a better list-sorting algorithm, you can swap it in; otherwise you would be stuck having to change ALL of the invoking methods as well.
It helps in understanding code - e.g. you don't need to memorize all 10 implementations of, say, I_SortableList; you just care that it sorts a list (or something like that). Your code becomes practically self-documenting here.
To complete the discussion, here is a pseudocode example illustrating the above:
//Pseudocode - define ISortableList and its implementations
interface ISortableList { void set(int[] nums); }
class SortList1 : ISortableList { public void set(int[] nums) { /* ... */ } }
class SortList2 : ISortableList { public void set(int[] nums) { /* ... */ } }
class SortList3 : ISortableList { public void set(int[] nums) { /* ... */ } }
//Pseudocode - the interface way
void Populate(ISortableList list, int[] nums)
{
    list.set(nums);
}
//Pseudocode - the "I don't care" way
void Populate2(SortList1 list, int[] nums)
{
    list.set(nums);
}
...
//Pseudocode - create instances
SortList1 list1 = new SortList1();
SortList2 list2 = new SortList2();
SortList3 list3 = new SortList3();
//Invoke Populate() - the "interface way"
Populate(list1, nums); //OK, list1 is an ISortableList implementation
Populate(list2, nums); //OK, list2 is an ISortableList implementation
Populate(list3, nums); //OK, list3 is an ISortableList implementation
//Invoke Populate2() - the "I don't care way"
Populate2(list1, nums); //OK, list1 is an instance of SortList1
Populate2(list2, nums); //Not OK, list2 is not of the required argument type, won't compile
Populate2(list3, nums); //Same as above
Hope this helps,
Jas.

Interfaces vs Public Class Members

I've noticed that some programmers like to make interfaces for just about all their classes. I like interfaces for certain things (such as checking if an object supports a certain behavior and then having an interface for that behavior) but overuse of interfaces can sometimes bloat the code. When I declare methods or properties as public I'd expect people to just use my concrete classes and I don't really understand the need to create interfaces on top of that.
I'd like to hear your take on interfaces. When do you use them and for what purposes?
Thank you.
Applying any kind of design pattern or idea without thinking, just because somebody told you it's good practice, is a bad idea.
That of course includes creating a separate interface for each and every class you create. You should at least be able to give a good reason for every design decision, and "because Joe says it's good practice" is not a good enough reason.
Interfaces are good for decoupling the interface of some unit of code from its implementation. A reason to create an interface is because you foresee that there might be multiple implementations of it in the future. It can also help with unit testing; you can make a mock implementation of the services that the unit you want to test depends on, and plug the mock implementations in instead of "the real thing" for testing.
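As a hand-rolled illustration of that testing point (the names here are hypothetical, not from the question):

// The unit under test depends only on this interface.
interface MailSender {
    void send(String to, String body);
}

class WelcomeService {
    private final MailSender sender;

    WelcomeService(MailSender sender) { this.sender = sender; }

    void welcome(String user) {
        sender.send(user, "Welcome, " + user + "!");
    }
}

// In a test, a trivial fake replaces the real SMTP-backed implementation.
class RecordingMailSender implements MailSender {
    final java.util.List<String> sent = new java.util.ArrayList<>();
    public void send(String to, String body) { sent.add(to + ": " + body); }
}

A test can then construct new WelcomeService(new RecordingMailSender()) and assert on what was "sent" without touching any real mail infrastructure.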
Interfaces are a powerful tool for abstraction. With them, you can more freely substitute (for example) test classes and thereby decouple your code. They are also a way to narrow the scope of your code; you probably don't need the full feature set of a given class in a particular place - exactly what features do you need? That's a client-focused way of thinking about interfaces.
Unit tests.
With an interface describing all of a class's methods and properties, it is within the reach of a click to create a mock class that simulates behavior outside the scope of said test.
It's all about expecting and preparing for change.
One approach that some use (and I'm not necessarily advocating it) is to create an IThing and a ThingFactory. All code will reference IThing (instead of ConcreteThing), and all object creation can be done via the factory method ThingFactory.CreateThing(some params).
So, today we only have AmericanThing, and the possibility is that we may never need another. However, if experience has taught me anything, it is that we will ALWAYS need another.
You may not need EuropeanThing, but TexasAmericanThing is a distinct possibility.
So, in order to minimize the impact on my code, I can change the creational line to:
ThingFactory.CreateThing(account)
and create my class TexasAmericanThing : IThing.
Other than building the class, the only change is to the ThingFactory, which will require a change from
public static IThing CreateThing(Account a)
{
    return new AmericanThing();
}
to
public static IThing CreateThing(Account a)
{
    if (a.State == State.TEXAS) return new TexasAmericanThing();
    return new AmericanThing();
}
I've seen plenty of mindless interfaces myself. However, when used intelligently, they can save the day. You should use interfaces for decoupling two components or two layers of an application. This can enable you to easily plug in varying implementations of the interface without affecting the client, or simply insulate the client from constant changes to the implementation, as long as you stay true to the contract of the interface. This can make the code more maintainable in the long term and can save the effort of refactoring later.
However, overly aggressive decoupling can make for non-intuitive code, and its overuse can become a nuisance. You should carefully identify the cohesive parts of your application and the boundaries between them, and use interfaces there. Another benefit of using interfaces between such parts is that they can be developed in parallel and tested independently using mock implementations of the interfaces they use.
OTOH, having client code access public member methods directly is perfectly okay if you really don't foresee any changes to the class that might also necessitate changes in the client. In any case, however, I think having public member fields is not good. That is extremely tight coupling! You are basically exposing the architecture of your class and making the client code dependent on it. Tomorrow, if you realize that another data structure for a particular field will perform better, you can't change it without also changing the client code.
I primarily use interfaces for IoC to enable unit testing.
On the one hand, this could be interpreted as premature generalization. On the other hand, using interfaces as a rule helps you write code that is more easily composable and hence testable. I think the latter wins out in many cases.
I like interfaces:
* to define a contract between parts/modules/subsystems or 3rd party systems
* when there are exchangeable states or algorithms (state/strategy)
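A minimal strategy-style sketch for that last point, with invented names:

// Exchangeable algorithms behind one interface (strategy).
interface PricingStrategy {
    double priceFor(int quantity);
}

class FlatPricing implements PricingStrategy {
    public double priceFor(int quantity) { return quantity * 10.0; }
}

class BulkPricing implements PricingStrategy {
    public double priceFor(int quantity) { return quantity * (quantity > 100 ? 8.0 : 10.0); }
}

class Checkout {
    private final PricingStrategy pricing;   // the client sees only the contract
    Checkout(PricingStrategy pricing) { this.pricing = pricing; }
    double total(int quantity) { return pricing.priceFor(quantity); }
}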

Is Method Overloading considered polymorphism? [closed]

Is Method Overloading considered part of polymorphism?
There are different types of polymorphism:
overloading polymorphism (also called Ad-hoc polymorphism)
overriding polymorphism
So yes it is part of polymorphism.
"Polymorphism" is just a word and doesn't have a globally agreed-upon, precise definition. You will not be enlightened by either a "yes" or a "no" answer to your question because the difference will be in the chosen definition of "polymorphism" and not in the essence of method overloading as a feature of any particular language. You can see the evidence of that in most of the other answers here, each introducing its own definition and then evaluating the language feature against it.
Strictly speaking, polymorphism (from Wikipedia):
is the ability of one type, A, to appear as and be used like another type, B.
So method overloading as such is not considered part of polymorphism by this definition, as the overloads are defined as part of one type.
If you are talking about inclusion polymorphism (normally thought of as overriding), that is a different matter and then, yes it is considered to be part of polymorphism.
inclusion polymorphism is a concept in type theory wherein a name may denote instances of many different classes as long as they are related by some common super class.
There are two types of polymorphism:
static
dynamic
Overloading is static polymorphism; overriding comes under dynamic (or run-time) polymorphism.
See http://en.wikipedia.org/wiki/Polymorphism_(computer_science), which describes this in more detail.
No, overloading is not. Maybe you refer to method overriding which is indeed part of polymorphism.
To further clarify, from the Wikipedia article:
Polymorphism is not the same as method overloading or method overriding. Polymorphism is only concerned with the application of specific implementations to an interface or a more generic base class.
So I'd say method overriding AND method overloading are convenient features of some languages related to polymorphism, but not the main concern of polymorphism (in object-oriented programming), which concerns only the capability of an object to act as if it were another object in its hierarchy chain.
Method overriding or overloading is not polymorphism.
The right way to put it is that Polymorphism can be implemented using method overriding or overloading and using other ways as well.
In order to implement Polymorphism using method overriding, you can override the behaviour of a method in a sub-class.
In order to implement polymorphism using method overloading, you write several methods with the same name but different parameter lists, and implement different behaviours in these methods. That is also polymorphism.
Other ways to implement polymorphism are operator overloading and implementing interfaces.
Wikipedia pedantics aside, one way to think about polymorphism is: the ability for a single line of code / single method call to do different things at runtime depending on the type of the object instance used to make the call.
Method overloading does not change behaviors at runtime. Overloading gives you more choices for argument lists on the same method name when you're writing and compiling the code, but when it's compiled the choice is fixed in code forever.
Not to be confused with method overriding, which is part of polymorphism.
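A small Java illustration of that distinction (the classes are hypothetical):

class Animal {
    void speak() { System.out.println("..."); }
}

class Dog extends Animal {
    @Override
    void speak() { System.out.println("Woof"); }   // overriding: resolved at runtime
}

class Printer {
    void print(int value)    { System.out.println("int: " + value); }
    void print(String value) { System.out.println("String: " + value); } // overloading: resolved at compile time
}

class Demo {
    public static void main(String[] args) {
        Animal a = new Dog();
        a.speak();                  // prints "Woof" - the runtime type of 'a' decides
        new Printer().print("hi");  // which overload runs was fixed when this line was compiled
    }
}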
It's a necessary evil that is, and should only be, used as a complement. In the end, overloads should only convert and then forward to the main method. Overloading is necessary because most VMs for statically dispatched environments don't know how to convert one type to another so that the parameter fits the target, and this is where one uses overloads to help out.
StringBuilder
Append(String) // main
Append(Boolean) // converts and calls Append(String)
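In Java terms, that convert-and-forward idea might look roughly like this (a sketch, not the real StringBuilder source):

class TextBuilder {
    private final StringBuilder out = new StringBuilder();

    // "Main" overload: does the actual work.
    TextBuilder append(String s) {
        out.append(s);
        return this;
    }

    // Convenience overload: converts its argument and forwards to the main overload.
    TextBuilder append(boolean b) {
        return append(String.valueOf(b));
    }
}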

Singletons: good design or a crutch? [closed]

Singletons are a hotly debated design pattern, so I am interested in what the Stack Overflow community thinks about them.
Please provide reasons for your opinions, not just "Singletons are for lazy programmers!"
Here is a fairly good article on the issue, although it is against the use of Singletons:
scientificninja.com: performant-singletons.
Does anyone have any other good articles on them? Maybe in support of Singletons?
In defense of singletons:
They are not as bad as globals because globals have no standard-enforced initialization order, and you could easily see nondeterministic bugs due to naive or unexpected dependency orders. Singletons (assuming they're allocated on the heap) are created after all globals, and in a very predictable place in the code.
They're very useful for resource-lazy / -caching systems such as an interface to a slow I/O device. If you intelligently build a singleton interface to a slow device, and no one ever calls it, you won't waste any time. If another piece of code calls it from multiple places, your singleton can optimize caching for both simultaneously, and avoid any double look-ups. You can also easily avoid any deadlock condition on the singleton-controlled resource.
Against singletons:
In C++, there's no nice way to auto-clean-up after singletons. There are work-arounds, and slightly hacky ways to do it, but there's just no simple, universal way to make sure your singleton's destructor is always called. This isn't so terrible memory-wise -- just think of it as more global variables, for this purpose. But it can be bad if your singleton allocates other resources (e.g. locks some files) and doesn't release them.
My own opinion:
I use singletons, but avoid them if there's a reasonable alternative. This has worked well for me so far, and I have found them to be testable, although slightly more work to test.
Google has a Singleton Detector for Java that I believe started out as a tool that must be run on all code produced at Google. The nutshell reason to remove Singletons:
because they can make testing difficult and hide problems with your design
For a more explicit explanation see 'Why Singletons Are Controversial' from Google.
A singleton is just a bunch of global variables in a fancy dress.
Global variables have their uses, as do singletons, but if you think you're doing something cool and useful with a singleton instead of using a yucky global variable (everyone knows globals are bad mmkay), you're unfortunately misled.
The purpose of a Singleton is to ensure a class has only one instance, and provide a global point of access to it. Most of the time the focus is on the single instance point. Imagine if it were called a Globalton. It would sound less appealing as this emphasizes the (usually) negative connotations of a global variable.
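For reference, the textbook shape of the pattern in Java is something like this (a minimal sketch using eager initialization; real code may need more care):

class Configuration {
    private static final Configuration INSTANCE = new Configuration();

    private Configuration() { }            // no one else can construct it

    static Configuration getInstance() {   // the global access point
        return INSTANCE;
    }
}

Both halves of the intent are visible here: the private constructor enforces "only one instance", and getInstance() is the global access point that attracts most of the criticism.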
Most of the good arguments against singletons have to do with the difficulty they present in testing as creating test doubles for them is not easy.
There are three pretty good blog posts about singletons by Miško Hevery on the Google Testing Blog.
Singletons are Pathological Liars
Where Have All the Singletons Gone?
Root Cause of Singletons
Singleton is not a horrible pattern, although it is misused a lot. I think this misuse is because it is one of the easier patterns, and most people new to it are attracted to its global side effect.
Erich Gamma has said the singleton is a pattern he wishes hadn't been included in the GoF book, and that it's a bad design. I tend to disagree.
If the pattern is used in order to create a single instance of an object at any given time then the pattern is being used correctly. If the singleton is used in order to give a global effect, it is being used incorrectly.
Disadvantages:
You are coupling to one class throughout the code that calls the singleton
Creates a hassle with unit testing because it is difficult to replace the instance with a mock object
If the code needs to be refactored later on because more than one instance is needed, it is more painful than if the singleton class had been passed (via an interface) into the objects that use it
Advantages:
One instance of a class is represented at any given point in time.
By design you are enforcing this
Instance is created when it is needed
Global access is a side effect
Chicks dig me because I rarely use singleton and when I do it's typically something unusual. No, seriously, I love the singleton pattern. You know why? Because:
I'm lazy.
Nothing can go wrong.
Sure, the "experts" will throw around a bunch of talk about "unit testing" and "dependency injection" but that's all a load of dingo's kidneys. You say the singleton is hard to unit test? No problem! Just declare everything public and turn your class into a fun house of global goodness. You remember the show Highlander from the 1990's? The singleton is kind of like that because: A. It can never die; and B. There can be only one. So stop listening to all those DI weenies and implement your singleton with abandon. Here are some more good reasons...
Everybody is doing it.
The singleton pattern makes you invincible.
Singleton rhymes with "win" (or "fun" depending on your accent).
I think there is a great misunderstanding about the use of the Singleton pattern. Most of the comments here refer to it as a place to access global data. We need to be careful here - Singleton as a pattern is not for accessing globals.
Singleton should be used to have only one instance of the given class. Pattern Repository has great information on Singleton.
One of the colleagues I have worked with was very singleton-minded. Whenever there was something that was kind of a manager or boss-like object, he would make it into a singleton, because he figured that there should be only one boss. And each time the system took on some new requirements, it turned out there were perfectly valid reasons to allow multiple instances.
I would say that a singleton should be used if the domain model dictates (not "suggests") that there is exactly one. All other cases are just accidentally single instances of a class.
I've been trying to think of a way to come to the poor singleton's rescue here, but I must admit it's hard. I've seen very few legitimate uses of them, and with the current drive toward dependency injection and unit testing they are just hard to use. They are definitely the "cargo cult" manifestation of programming with design patterns: I have worked with many programmers who have never cracked the "GoF" book but who know "Singleton" and thus know "Patterns".
I do have to disagree with Orion though; most of the time I've seen singletons overused, it's not global variables in a dress, but more like global services (methods) in a dress. It's interesting to note that if you try to use singletons in SQL Server 2005 in safe mode through the CLR interface, the system will flag the code. The problem is that you have persistent data beyond any given transaction that may run; of course, if you make the instance variable read-only you can get around the issue.
That issue led to a lot of rework for me one year.
Holy wars! OK, let me see... Last time I checked, the design police said:
Singletons are bad because they hinder auto testing - instances cannot be created afresh for each test case.
Instead the logic should be in a class (A) that can be easily instantiated and tested. Another class (B) should be responsible for constraining creation. Single Responsibility Principle to the fore! It should be team-knowledge that you're supposed to go via B to access A - sort of a team convention.
I concur mostly..
Many applications require that there is only one instance of some class, so the pattern of having only one instance of a class is useful. But there are variations to how the pattern is implemented.
There is the static singleton, in which the class forces that there can be only one instance of the class per process (in Java, actually one per ClassLoader). Another option is to create only one instance yourself.
Static singletons are evil - a sort of global variable. They make testing harder, because it's not possible to execute the tests in full isolation. You need complicated setup and teardown code for cleaning the system between every test, and it's very easy to forget to clean some global state properly, which in turn may result in unspecified behaviour in the tests.
Creating only one instance is good. You just create one instance when the program starts, and then pass the pointer to that instance to all other objects which need it. Dependency injection frameworks make this easy - you just configure the scope of the object, and the DI framework will take care of creating the instance and passing it to all who need it. For example, in Guice you would annotate the class with @Singleton, and the DI framework will create only one instance of the class (per application - you can have multiple applications running in the same JVM). This makes testing easy, because you can create a new instance of the class for each test and let the garbage collector destroy the instance when it is no longer used. No global state will leak from one test to another.
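A minimal sketch of that Guice approach, using a hypothetical CacheService class:

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Singleton;

@Singleton                 // one instance per injector, enforced by the DI framework
class CacheService { }

class Application {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector();
        CacheService a = injector.getInstance(CacheService.class);
        CacheService b = injector.getInstance(CacheService.class);
        System.out.println(a == b);   // true: same instance within this injector
        // In a test you can simply call new CacheService() and throw it away afterwards.
    }
}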
For more information:
The Clean Code Talks - "Global State and Singletons"
Singleton as an implementation detail is fine. Singleton as an interface or as an access mechanism is a giant PITA.
A static method that takes no parameters returning an instance of an object is only slightly different from just using a global variable. If instead an object has a reference to the singleton object passed in, either via constructor or other method, then it doesn't matter how the singleton is actually created and the whole pattern turns out not to matter.
It was not just a bunch of variables in a fancy dress, because it had dozens of responsibilities, like communicating with the persistence layer to save/retrieve data about the company, dealing with employee and price collections, etc.
I must say you're not really describing something that should be a single object, and it's debatable whether any of those responsibilities, other than data serialization, should have been a singleton.
I can see at least 3 sets of classes that I would normally design in there, but I tend to favor smaller, simpler objects that do a narrow set of tasks very well. I know that this is not the nature of most programmers. (Yes, I work on 5000-line class monstrosities every day, and I have a special love for the 1200-line methods some people write.)
I think the point is that in most cases you don't need a singleton, and often you're just making your life harder.
The biggest problem with singletons is that they make unit testing hard, particularly when you want to run your tests in parallel but independently.
The second is that people often believe that lazy initialisation with double-checked locking is a good way to implement them.
Finally, unless your singletons are immutable, then they can easily become a performance problem when you try and scale your application up to run in multiple threads on multiple processors. Contended synchronization is expensive in most environments.
Singletons have their uses, but one must be careful in using and exposing them, because they are way too easy to abuse, difficult to truly unit test, and it is easy to create circular dependencies based on two singletons that access each other.
It is helpful, however, when you want to be sure that all your data is synchronized across multiple instances; configurations for a distributed application, for example, may rely on singletons to make sure that all connections use the same up-to-date set of data.
I find you have to be very careful about why you're deciding to use a singleton. As others have mentioned, it's essentially the same issue as using global variables. You must be very cautious and consider what you could be doing by using one.
It's very rare to use them and usually there is a better way to do things. I've run into situations where I've done something with a singleton and then had to sift through my code to take it out after I discovered how much worse it made things (or after I came up with a much better, more sane solution)
I've used singletons a bunch of times in conjunction with Spring and didn't consider it a crutch or lazy.
What this pattern allowed me to do was create a single class for a bunch of configuration-type values and then share the single (non-mutable) instance of that specific configuration instance between several users of my web application.
In my case, the singleton contained client configuration criteria - CSS file location, DB connection criteria, feature sets, etc. - specific to that client. These classes were instantiated and accessed through Spring and shared by users with the same configuration (i.e. 2 users from the same company). (I know there's a name for this type of application but it's escaping me.)
I feel it would've been wasteful to create (then garbage collect) new instances of these "constant" objects for each user of the app.
I'm reading a lot about "Singleton", its problems, when to use it, etc., and these are my conclusions until now:
Confusion between the classic implementation of Singleton and the real requirement: TO HAVE JUST ONE INSTANCE OF A CLASS!
It's generally badly implemented. If you want a unique instance, don't use the (anti)pattern of a static GetInstance() method returning a static object. That makes a class responsible both for instantiating a single instance of itself and for performing its logic, which breaks the Single Responsibility Principle. Instead, this should be implemented by a factory class whose responsibility is to ensure that only one instance exists (see the sketch after this list).
It's used in constructors because it's easy to use and doesn't have to be passed as a parameter. This should be resolved using dependency injection, which is a great pattern for achieving a good and testable object model.
Not TDD. If you do TDD, dependencies are extracted from the implementation because you want your tests to be easy to write. This makes your object model better. If you use TDD, you won't write a static GetInstance =). BTW, if you think in terms of objects with clear responsibilities instead of classes, you'll get the same effect =).
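A rough sketch of that factory-based alternative, with invented names:

// The class itself knows nothing about being "the only one",
// so tests can instantiate it freely.
class AppSettings {
    // ... settings logic
}

// A separate factory owns the "only one instance" rule.
class AppSettingsFactory {
    private AppSettings instance;

    synchronized AppSettings get() {
        if (instance == null) {
            instance = new AppSettings();
        }
        return instance;
    }
}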
I really disagree on the bunch of global variables in a fancy dress idea. Singletons are really useful when used to solve the right problem. Let me give you a real example.
I once developed a small piece of software for a place I worked at, and some forms had to use some info about the company, its employees, services and prices. In its first version, the system kept loading that data from the database every time a form was opened. Of course, I soon realized this approach was not the best one.
Then I created a singleton class, named Company, which encapsulated everything about the place, and it was completely filled with data by the time the system was opened.
It was not just a bunch of variables in a fancy dress, because it had dozens of responsibilities, like communicating with the persistence layer to save/retrieve data about the company, dealing with employee and price collections, etc.
Plus, it was a fixed, system-wide, easily accessible point to have the company data.
Singletons are very useful, and using them is not in and of itself an anti-pattern. However, they've gotten a bad reputation largely because they force any consuming code to acknowledge that they are a singleton in order to interact with them. That means if you ever need to "un-Singletonize" them, the impact on your codebase can be very significant.
Instead, I'd suggest hiding the Singleton behind a factory. That way, if you need to alter the service's instantiation behavior in the future, you can just change the factory rather than all the types that consume the Singleton.
Even better, use an inversion of control container! Most of them allow you to separate instantiation behavior from the implementation of your classes.
One scary thing about singletons in, for instance, Java is that you can end up with multiple instances of the same singleton in some cases. The JVM uniquely identifies a class based on two elements: the class's fully qualified name and the classloader responsible for loading it.
That means the same class can be loaded by two classloaders unaware of each other, and different parts of your application would have different instances of this singleton that they interact with.
Write normal, testable, injectable objects and let Guice/Spring/whatever handle the instantiation. Seriously.
This applies even in the case of caches or whatever the natural use cases for singletons are. There's no need to repeat the horror of writing code to try to enforce one instance. Let your dependency injection framework handle it. (I recommend Guice for a lightweight DI container if you're not already using one).