How do Factories and Patterns relate? - language-agnostic

I was just reading a thread on SO that was discussing the merits of Singleton vs. Static Classes.
Some people mentioned that pattern X appeared to be more of a 'factory' rather than a Singleton 'pattern'.
What are the differences between a 'factory' and a 'design pattern'?

A "factory" is a specific design pattern:
http://en.wikipedia.org/wiki/Factory_method_pattern
Similarly "singleton" is also a design pattern:
http://en.wikipedia.org/wiki/Singleton_pattern

Factories and Singletons are two of the many design patterns.
A factory pattern can be implemented as a singleton pattern that produces objects. A factory could also be an instanced class, and therefore not a singleton. Likewise, a singleton can be a factory, but it can also be something else, like a global settings manager or event registry.
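To make the overlap concrete, here is a minimal Java sketch (names are illustrative, not from any thread mentioned above) of a factory that is itself implemented as a singleton:

class Connection {}

class ConnectionFactory {
    // Singleton part: exactly one factory instance ever exists
    private static final ConnectionFactory INSTANCE = new ConnectionFactory();
    private ConnectionFactory() {}
    public static ConnectionFactory getInstance() { return INSTANCE; }

    // Factory part: each call produces a fresh object
    public Connection create() { return new Connection(); }
}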

a 'factory' rather than a Singleton 'pattern'
Let me flesh that out and place the quotation marks correctly:
a 'factory pattern' rather than a 'singleton pattern'
Both are design patterns.

A Factory is a Design Pattern - not the other way around.

You got it slightly wrong. "Factory" is a pattern too and contrasts here with "Singleton".

'Factory' is a type of design pattern. Depending on the context, you can see a couple of examples: the Abstract Factory here, or the Factory Method here.

Factory is a design pattern :-) as is Singleton. One could argue that a singleton is a kind of factory: it creates an object when needed and uses a set caching policy (always return the same object once it has been created). But that's academic and would generally just be confusing in most debates about structure.

A factory is a type of design pattern. Basically, a factory returns a class dependent on the needs of the calling class. All classes returned by the factory should share the same interface so you can invoke the same public methods on them (although how each class implements the methods may differ).
Here's a good link http://en.wikipedia.org/wiki/Factory_method_pattern

Lots of answers, but none seem to actually differentiate between the two patterns well. Let me try and see if I can't confuse the issue more.
A Singleton is a pattern that restricts your system to creating only one instance of a given class. The restriction is usually implemented by creating a Factory that will either create an instance of the class (if none already exist) or return the already-created instance on subsequent calls.
A factory is used to create singletons and in other situations. It can be used to replace "new" in many cases. One advantage is that you can write your factory so that the type of object it returns can be "set". This way your testing framework can "set" a mock object instead of the real one, and the rest of your system will then use the mock object.
Another case might be to have the factory evaluate from parameters which type to return, or from data (perhaps XML). They are also used to implement Dependency Injection where the factory looks at what you need and builds chains of objects to fulfill those needs.
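A minimal Java sketch of that "settable" factory idea (all names here are hypothetical): production code asks the factory for an object, and a test can substitute a mock first.

interface Mailer {
    void send(String to, String body);
}

class SmtpMailer implements Mailer {
    public void send(String to, String body) { /* real SMTP call would go here */ }
}

class MailerFactory {
    private static Mailer instance = new SmtpMailer();

    // tests call this to "set" a mock before the system under test runs
    public static void set(Mailer mailer) { instance = mailer; }

    public static Mailer get() { return instance; }
}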

Related

How do you describe new to a new CS student?

In OOP, there are entities (e.g. Person) which have attributes (e.g. name, address, etc.) and methods. How do you describe new? Is it a method or just a special token to bring an abstract entity to a real one?
Sometimes it's a method, sometimes it's just syntactic sugar that invokes an allocator method. The language matters.
To your CS student? Don't sugar-coat it; they need to be able to get their heads around the concepts pretty quickly, and using metaphors unrelated to the computer field, while fine for trying to explain it to your 80-year-old grandmother, will not help a CS student much.
Simply put, tell them that a class is a specification for something and that an object is a concrete instance of that something. All new does is create a concrete instance based on the specification. This includes both creation (not necessarily class-specific, which is why I'd hesitate to call it a method, reserving that term for class-bound functions) and initialisation (which is class-specific).
Depending on the language new is a keyword which can be used as an operator or as a modifier. For instance in C# new can be used:
new operator - used to create objects on the heap and invoke constructors.
new modifier - used to hide an inherited member from a base class member
For brand new students I would describe new only as a keyword and leave the modifier out of the discussion. I describe classes as blueprints and new as the mechanism by which those blueprints turn into something tangible - objects.
You may want to checkout my question: How to teach object oriented programming to procedural programmers for other great answers on teaching OOP to new developers.
In most object-oriented languages, new is simply a convention for naming a factory method. But it's only one of many conventions.
In Ruby, for example, it is conventional to name the factory method [] for collection classes. In Python, classes are simply their own factories. In Io, the factory method is generally called clone, in Ioke and Seph it is called mimic.
In Smalltalk, factory methods often have more descriptive names than just new:. Something like fromList: or with:.
Here's a simile that has worked for me in the past.
An object definition is a Jello Mold. "new" is the process that actually makes a Jello snack from that mold. Each Goopy Jello thing that you give to your new neighbors can be different, this one's green, this one has bits of fruit in it, etc. It's its own unique "object." But the mold is the same.
Or you can use a factory analogy or something, (or blueprints vs building).
As far as its role in the syntax, it's just a keyword that lets the compiler know to allocate memory on the heap and run the constructor. There's not much more to it.
Smalltalk: it's an instance method on the metaclass. So "new is a method that returns a newly-allocated instance."
I tell people that a class is like a plan on how to make an object. An object is made from the class by new. If they need more than that, well, I just don't know what to say. ;-)
new is a keyword that calls the class constructor of the class to the right of it with the arguments listed inside ().
String str = new String("asdf");
str is defined as being a String class variable using the constructor and argument "asdf"
At least that's how it was presented to me.
In Ruby, I believe it's an instance method on the metaclass. In CLOS it's a generic function called make-instance but otherwise roughly the same.
In some languages, like Java, new has special syntax, and the metaclass part is hidden. In the case where you have to teach somebody OO with such a language, I don't know that there's much you can do. (Taking a break from teaching OO and Java to teach a second object system would almost certainly just confuse them further!) Just explain what it does, and that it's a special case.
You can say that a class is a prototype/blueprint for an object. When you give it the keyword new, that prototype/blueprint comes to life. It's like you're giving a breath of life to those dead instances.
In Java,
new allocates memory for a new class instance (object)
new runs a class's constructor to initialize that instance
new returns a reference to that new instance
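For example, a small self-contained sketch (not from the original answer) makes all three steps visible in one line:

public class NewDemo {
    static class Point {
        final int x, y;
        Point(int x, int y) {   // step 2: new runs this constructor to initialize the instance
            this.x = x;
            this.y = y;
        }
    }

    public static void main(String[] args) {
        // step 1: memory is allocated; step 2: the constructor runs;
        // step 3: a reference to the new instance is returned and stored in p
        Point p = new Point(3, 4);
        System.out.println(p.x + "," + p.y);
    }
}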
As far as the relationship between object/instance and class, I sometimes think:
class is to instance as blueprint is to building
new in most languages does some variation of the following:
designate some memory region for a class instance, and if necessary, inform the garbage collector about how to free that memory later.
initialize that memory region in the manner specific to that class, transforming the bytes of raw memory into bytes of a valid instance of the class
return a reference to the memory location to the caller.

Should we avoid using Object as the input parameter/output value of a method?

Take Java syntax as an example, though the question itself is language-independent. If the following snippet takes an object MyAbstractEmailTemplate as an input argument in the method setTemplate, the class MyGateway will then become tightly coupled with MyAbstractEmailTemplate, which lessens the reusability of the class MyGateway.
A compromise is to use dependency injection to ease the instantiation of MyAbstractEmailTemplate. This might solve the coupling problem to some extent, but the interface is still rigid, hardly providing enough flexibility to other developers/applications.
So if we only use primitive data types (or even plain XML in web services) as the input/output of a method, it seems the coupling problem no longer exists. So what do you think?
public class MyGateway {
    protected MyAbstractEmailTemplate template;

    public void setTemplate(MyAbstractEmailTemplate template) {
        this.template = template;
    }
}
It's pretty difficult to understand what you are really asking, but going the route of typing everything to Object does not lead to loose coupling because you can't do anything with the input without downcasting, which would break the Liskov Substitution Principle.
Taken to the extreme it leads you here:
public class MyClass
{
    public object Invoke(object obj);
}
This is not loose coupling, it's just obscure and hard-to-maintain code.
The name MyAbstractEmailTemplate makes me believe that you are talking about an abstract class.
You should always program against interfaces, so instead of having MyGateway depend on MyAbstractEmailTemplate, it should depend on an EmailTemplate interface, where MyAbstractEmailTemplate implements EmailTemplate. Then, you can pass your custom implementations around as you want to, without further tight coupling.
Combine this with DI and you've got yourself a pretty decent solution.
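A quick Java sketch of that suggestion (the render() method is assumed here for illustration, since the question doesn't show the template's members):

interface EmailTemplate {
    String render();
}

class WelcomeEmailTemplate implements EmailTemplate {
    public String render() { return "Welcome!"; }
}

class MyGateway {
    protected EmailTemplate template;

    // MyGateway now depends only on the interface, not on any concrete class
    public void setTemplate(EmailTemplate template) {
        this.template = template;
    }
}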
Not exactly sure what you mean with "the interface is still rigid", but obviously you should design your interface in such a way that it provides the functionality you need.
MyGateway has to assume something about the inputs. Even if it used XML, it would have to assume something about the structure and content of the XML. Coupling isn't an evil in its own right; it expresses the contract between two pieces of code. The oft-repeated advice to avoid tight coupling is really just saying that coupling should express the essence of a contract, not more and not less. Passing a specific type (particularly an interface type) is a very good way to achieve this balance.
The first problem you will run into is that a lot of types are simply not representable by a primitive data type (It's a Java problem that there are primitive types at all.).
The coupling should be reduced by using a proper inheritance hierarchy. What does proper mean? The method should take exactly that part of the interface as a parameter that is needed. Not more, not less.
After all, you won't be able to avoid dependencies. Methods have to know what they can do with their input, or have to be able to make assumptions (see C++ concepts) about the capabilities of the input.
IMHO there is nothing inherently wrong in using objects (with a small o, not Objects) as method parameters and/or class members. Yes, these create dependencies. You can manage this in (at least) two ways:
acknowledge that by creating this dependency, the two classes become tightly coupled. This is entirely appropriate in many cases, where two (or more) classes in fact form a component, which is a meaningful unit of reuse in itself, and its parts may not make much sense or be interchangeable.
if there are multiple interchangeable candidates for a method parameter, these are obvious candidates to form a class hierarchy. Then you program for the interface and can pass any object of any class implementing that interface as parameter to your method. Note that the phrase "there are multiple interchangeable candidates for a method parameter" is a loose rephrasing of the Liskov Substitution Principle, which is the foundation of polymorphism.
in some languages, e.g. C++, the third way would be using templates. Then you need no common interface; only specific methods/members need to be resolvable when the template is instantiated. However, since instantiation happens at compile time, this is entirely static binding.
The problem, I would say, is that the best Java can offer is interfaces, and people are starting to see that they are too rigid. It would be interesting to use something like what is in the Go language, which allows flexible checking for all methods of an interface to be present in a type, so you do not have to be explicit about implementing some interface. We also need something better than interfaces to specify constraints - maybe some sort of contracts. Another thing is interface evolution.

Can I change class types in a setter with an object-oriented language?

Here is the problem statement: calling a setter on an object should cause the object to change to an object of a different class. Which languages can support this?
Ex. I have a class called "Man" (parent class) and two children, namely "Toddler" and "Old Man"; they are its children because they override a behaviour in Man called walk (i.e. a Toddler sometimes walks using both his hands and legs, kneeled down, and the Old Man uses a stick to support himself).
The Man class has an attribute called age, and I have a setter on Man, say setAge(int ageValue). I have 3 objects: 2 toddlers, 1 old man. (The system is up and running; I guess when we say objects it is obvious.) I will make this call, toddler.setAge(80), and I expect the toddler to change to an object of type Old Man. Is this possible? Please suggest.
Thanks,
This sounds to me like the model is wrong. What you have is a Person whose relative temporal grouping and some specific behavior changes with age.
Perhaps you need a method named getAgeGroup() which returns an appropriate Enum, depending on what the current age is. You also need an internal state object which encapsulates the state-specific behavior to which your Person delegates behavior which changes with age.
That said, changing the type of an instantiated object dynamically will likely be doable only with dynamically typed languages; certainly it's not doable in Java, and probably not doable in C# and most other statically typed languages.
This is a common problem that you can solve using combination of OO modelling and design patterns.
You will model the classes the way you have, where Toddler and OldMan inherit from the Man base class. You will need to introduce a Proxy (see the GoF design patterns) class as your access point to your Man class. Internally, the proxy holds an object/pointer/reference to either Toddler or OldMan. The proxy will expose all the interfaces that are exposed by the Man class so that you can use it as-is, and in your scenario you will implement setAge similar to the pseudo code below:
public void setAge(int age)
{
    if (age > TODDLER_MAX && myMan is Toddler)
        myMan = new OldMan();
    else
        .....
    myMan.setAge(age);
}
If your language does not support changing the classtype at runtime, take a look at the decorator and strategy patterns.
Objects in Python can change their class by setting the __class__ attribute. Otherwise, use the Strategy pattern.
I wonder if subclassing is really the best solution here. A property (enum, probably) that has different types of people as its possible values is one alternative. Or, for that matter, a derived property or method that tells you the type of person based on the age.
Javascript can do this. At any time you can take an existing object and add new methods to it, or change its existing methods. This can be done at the individual object level.
Douglas Crockford writes about this in Classical Inheritance in JavaScript:
Class Augmentation
JavaScript's dynamism allows us to add or replace methods of an existing class. We can call the method method at any time, and all present and future instances of the class will have that method. We can literally extend a class at any time. Inheritance works retroactively. We call this Class Augmentation to avoid confusion with Java's extends, which means something else.
Object Augmentation
In the static object-oriented languages, if you want an object which is slightly different than another object, you need to define a new class. In JavaScript, you can add methods to individual objects without the need for additional classes. This has enormous power because you can write far fewer classes and the classes you do write can be much simpler. Recall that JavaScript objects are like hashtables. You can add new values at any time. If the value is a function, then it becomes a method.
Common Lisp can: use the generic function CHANGE-CLASS.
I am surprised no one so far seems to have noticed that this is the exact case for the State design pattern (although @Fadrian in fact described the core idea of the pattern quite precisely - without mentioning its name).
The state pattern is a behavioral software design pattern, also known as the objects for states pattern. This pattern is used in computer programming to represent the state of an object. This is a clean way for an object to partially change its type at runtime.
The referenced page gives examples in Java and Python. Obviously it can be implemented in other strongly typed languages as well. (OTOH weakly typed languages have no need for State, as these support such behaviour out of the box.)
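A hedged Java sketch of the State pattern applied to the question's example (class names, the walk behaviours, and the age cutoff are all assumed for illustration):

interface WalkBehavior {
    void walk();
}

class ToddlerState implements WalkBehavior {
    public void walk() { System.out.println("crawls on hands and knees"); }
}

class OldManState implements WalkBehavior {
    public void walk() { System.out.println("walks with a supporting stick"); }
}

class Person {
    private WalkBehavior state = new ToddlerState();

    public void setAge(int age) {
        // the object "partially changes its type" by swapping its internal state object;
        // 65 is an arbitrary cutoff for illustration
        state = (age >= 65) ? new OldManState() : new ToddlerState();
    }

    public void walk() { state.walk(); }
}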

Am I overdoing it with my Factory Method?

Part of our core product is a website CMS which makes use of various page widgets. These widgets are responsible for displaying content, listing products, handling event registration, etc. Each widget is represented by a class which derives from the base widget class. When rendering a page, the server grabs the page's widgets from the database and then creates an instance of the correct class for each. The Factory Method pattern, right?
Private Function WidgetFactory(typeId)
    Dim oWidget
    Select Case typeId
        Case widgetType.ContentBlock
            Set oWidget = New ContentWidget
        Case widgetType.Registration
            Set oWidget = New RegistrationWidget
        Case widgetType.DocumentList
            Set oWidget = New DocumentListWidget
        Case widgetType.DocumentDisplay
            Set oWidget = New DocumentDisplayWidget
    End Select
    Set WidgetFactory = oWidget
End Function
Anyways, this is all fine, but as time has gone on the number of widget types has increased to around 50, meaning the factory method is rather long. Every time I create a new type of widget, I go to add another couple of lines to the method, and a little alarm rings in my head that maybe this isn't the best way to do things. I tend to just ignore that alarm, but it's getting louder.
So, am I doing it wrong? Is there a better way to handle this scenario?
I think the question you should ask yourself is: Why am I using a Factory method here?
If the answer is "because of A", and A is a good reason, then continue doing it, even if it means some extra code. If the answer is "I don't know; because I've heard that you are supposed to do it this way?" then you should reconsider.
Let's go over the standard reasons for using factories. Here's what Wikipedia says about the Factory method pattern:
[...], it deals with the problem of creating objects (products) without specifying the exact class of object that will be created. The factory method design pattern handles this problem by defining a separate method for creating the objects, whose subclasses can then override to specify the derived type of product that will be created.
Since your WidgetFactory is Private, this is obviously not the reason why you use this pattern. What about the "Factory pattern" itself (independent of whether you implement it using a Factory method or an abstract class)? Again, Wikipedia says:
Use the factory pattern when:
The creation of the object precludes reuse without significantly duplicating code.
The creation of the object requires access to information or resources not appropriate to contain within the composing object.
The lifetime management of created objects needs to be centralised to ensure consistent behavior.
From your sample code, it does not look like any of this matches your need. So, the question (which only you can answer) is: (1) How likely is it that you will need the features of a centralized Factory for your widgets in the future and (2) how costly is it to change everything back to a Factory approach if you need it in the future? If both are low, you can safely drop the Factory method for the time being.
EDIT: Let me get back to your special case after this generic elaboration: Usually, it's a = new XyzWidget() vs. a = WidgetFactory.Create(WidgetType.Xyz). In your case, however, you have some (numeric?) typeId from a database. As Mark correctly wrote, you need to have this typeId -> className map somewhere.
So, in that case, the good reason for using a factory method could be: "I need some kind of huge ConvertWidgetTypeIdToClassName select-case-statement anyway, so using a factory method takes no additional code plus it provides the factory method advantages for free, if I should ever need them."
As an alternative, you could store the class name of the widget in the database (you probably already have some WidgetType table with primary key typeId anyway, right?) and create the class using reflection (if your language allows for this type of thing). This has a lot of advantages (e.g. you could drop in DLLs with new widgets and don't have to change your core CMS code) but also disadvantages (e.g. "magic string" in your database which is not checked at compile time; possible code injection, depending on who has access to that table).
The WidgetFactory method is really a mapping from a typeId enumeration to concrete classes. In general it's best if you can avoid enumerations entirely, but sometimes (particularly in web applications) you need to round-trip to an environment (e.g. the browser) that doesn't understand polymorphism and you need such measures.
Refactoring contains a pretty good explanation of why switch/select case statements are code smells, but that mainly addresses the case where you have many similar switches.
If your WidgetFactory method is the only place where you switch on that particular enum, I would say that you don't have to worry. You need to have that map somewhere.
As an alternative, you could define the map as a dictionary, but the amount of code lines wouldn't decrease significantly - you may be able to cut the lines of code in half, but the degree of complexity would stay equivalent.
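For illustration, a dictionary-based version might look like this in Java (a sketch only; the widget names echo the question, and the registry wiring is assumed):

import java.util.Map;
import java.util.function.Supplier;

interface Widget {}
class ContentWidget implements Widget {}
class RegistrationWidget implements Widget {}
class DocumentListWidget implements Widget {}

class WidgetRegistry {
    // one entry per widget type; adding a new widget is a single line here
    private static final Map<Integer, Supplier<Widget>> REGISTRY = Map.of(
            1, ContentWidget::new,
            2, RegistrationWidget::new,
            3, DocumentListWidget::new);

    static Widget create(int typeId) {
        Supplier<Widget> s = REGISTRY.get(typeId);
        if (s == null) {
            throw new IllegalArgumentException("Unknown widget typeId: " + typeId);
        }
        return s.get();
    }
}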
Your application of the factory pattern is correct. You have information which dictates which of N types is created. A factory is what knows how to do that. (It is a little odd as a private method. I would expect it to be on an IWidgetFactory interface.)
Your implementation, though, tightly couples the implementation to the concrete types. If you instead mapped typeId -> widgetType, you could use Activator.CreateInstance(widgetType) to make the factory understand any widget type.
Now, you can define the mappings however you want: a simple dictionary, discovery (attributes/reflection), in the configuration file, etc. You have to know all the types in one place somewhere, but you also have the option to compose multiple sources.
The classic way of implementing a factory is not to use a giant switch or if-ladder, but instead to use a map which maps object type name to an object creation function. Apart from anything else, this allows the factory to be modified at run-time.
Whether it's proper or not, I've always believed that the time to use a Factory is when the decision of what object type to create will be based upon information that is not available until run-time.
You indicated in a followup comment that the widget type is stored in a database. Since your code does not know what objects will be created until run-time, I think that this is a perfectly valid use of the Factory pattern. By having the factory, you enable your program to defer the decision of which object type to use until the time when the decision can actually be made.
It's been my experience that Factories grow so their dependencies don't have to. If you see this mapping duplicating itself in other places then you have cause for worry.
Try categorizing your widgets, maybe based on their functionality.
If a few of them logically depend on each other, create them with a single construction.

Why do most system architects insist on first programming to an interface?

Almost every Java book I read talks about using the interface as a way to share state and behaviour between objects that when first "constructed" did not seem to share a relationship.
However, whenever I see architects design an application, the first thing they do is start programming to an interface. How come? How do you know all the relationships between objects that will occur within that interface? If you already know those relationships, then why not just extend an abstract class?
Programming to an interface means respecting the "contract" created by using that interface. And so if your IPoweredByMotor interface has a start() method, future classes that implement the interface, be they MotorizedWheelChair, Automobile, or SmoothieMaker, in implementing the methods of that interface, add flexibility to your system, because one piece of code can start the motor of many different types of things, because all that one piece of code needs to know is that they respond to start(). It doesn't matter how they start, just that they must start.
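In Java terms, a minimal sketch of that contract (the interface name comes from the answer; the method bodies are invented for illustration):

interface IPoweredByMotor {
    void start();
}

class MotorizedWheelChair implements IPoweredByMotor {
    public void start() { System.out.println("Wheelchair motor engaged."); }
}

class SmoothieMaker implements IPoweredByMotor {
    public void start() { System.out.println("Blades spinning."); }
}

class MotorStarter {
    // this code can start anything with a motor; it never names a concrete class
    static void startAll(java.util.List<IPoweredByMotor> things) {
        for (IPoweredByMotor t : things) {
            t.start();
        }
    }
}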
Great question. I'll refer you to Josh Bloch in Effective Java, who writes (item 16) why to prefer the use of interfaces over abstract classes. By the way, if you haven't got this book, I highly recommend it! Here is a summary of what he says:
Existing classes can be easily retrofitted to implement a new interface. All you need to do is implement the interface and add the required methods. Existing classes cannot be retrofitted easily to extend a new abstract class.
Interfaces are ideal for defining mix-ins. A mix-in interface allows classes to declare additional, optional behavior (for example, Comparable). It allows the optional functionality to be mixed in with the primary functionality. Abstract classes cannot define mix-ins -- a class cannot extend more than one parent.
Interfaces allow for non-hierarchical frameworks. If you have a class that has the functionality of many interfaces, it can implement them all. Without interfaces, you would have to create a bloated class hierarchy with a class for every combination of attributes, resulting in combinatorial explosion.
Interfaces enable safe functionality enhancements. You can create wrapper classes using the Decorator pattern, a robust and flexible design. A wrapper class implements and contains the same interface, forwarding some functionality to existing methods, while adding specialized behavior to other methods. You can't do this with abstract methods - you must use inheritance instead, which is more fragile.
What about the advantage of abstract classes providing basic implementation? You can provide an abstract skeletal implementation class with each interface. This combines the virtues of both interfaces and abstract classes. Skeletal implementations provide implementation assistance without imposing the severe constraints that abstract classes force when they serve as type definitions. For example, the Collections Framework defines the type using interfaces, and provides a skeletal implementation for each one.
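A simplified, hypothetical sketch of that combination (modeled loosely on how the Collections Framework pairs an interface like List with a skeletal class like AbstractList; the names here are invented):

interface Counter {
    void increment();
    void incrementBy(int n);
    int value();
}

// Skeletal implementation: supplies the convenience method in terms of the primitive one
abstract class AbstractCounter implements Counter {
    @Override
    public void incrementBy(int n) {
        for (int i = 0; i < n; i++) {
            increment();
        }
    }
}

// Concrete classes only fill in the essentials
class SimpleCounter extends AbstractCounter {
    private int count;
    @Override public void increment() { count++; }
    @Override public int value() { return count; }
}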
Programming to interfaces provides several benefits:
Required for GoF type patterns, such as the visitor pattern
Allows for alternate implementations. For example, multiple data access object implementations may exist for a single interface that abstracts the database engine in use (AccountDaoMySQL and AccountDaoOracle may both implement AccountDao)
A Class may implement multiple interfaces. Java does not allow multiple inheritance of concrete classes.
Abstracts implementation details. Interfaces may include only public API methods, hiding implementation details. Benefits include a cleanly documented public API and well documented contracts.
Used heavily by modern dependency injection frameworks, such as http://www.springframework.org/.
In Java, interfaces can be used to create dynamic proxies - http://java.sun.com/j2se/1.5.0/docs/api/java/lang/reflect/Proxy.html. This can be used very effectively with frameworks such as Spring to perform Aspect Oriented Programming. Aspects can add very useful functionality to Classes without directly adding java code to those classes. Examples of this functionality include logging, auditing, performance monitoring, transaction demarcation, etc. http://static.springframework.org/spring/docs/2.5.x/reference/aop.html.
Mock implementations, unit testing - When dependent classes are implementations of interfaces, mock classes can be written that also implement those interfaces. The mock classes can be used to facilitate unit testing.
I think one of the reasons abstract classes have largely been abandoned by developers might be a misunderstanding.
When the Gang of Four wrote:
Program to an interface not an implementation.
there was no such thing as a java or C# interface. They were talking about the object-oriented interface concept, that every class has. Erich Gamma mentions it in this interview.
I think following all the rules and principles mechanically without thinking leads to a difficult to read, navigate, understand and maintain code-base. Remember: The simplest thing that could possibly work.
How come?
Because that's what all the books say. Like the GoF patterns, many people see it as universally good and don't ever think about whether or not it is really the right design.
How do you know all the relationships between objects that will occur within that interface?
You don't, and that's a problem.
If you already know those relationships, then why not just extend an abstract class?
Reasons to not extend an abstract class:
You have radically different implementations and making a decent base class is too hard.
You need to burn your one and only base class for something else.
If neither apply, go ahead and use an abstract class. It will save you a lot of time.
Questions you didn't ask:
What are the down-sides of using an interface?
You cannot change them. Unlike an abstract class, an interface is set in stone. Once you have one in use, extending it will break code, period.
Do I really need either?
Most of the time, no. Think really hard before you build any object hierarchy. A big problem in languages like Java is that it makes it way too easy to create massive, complicated object hierarchies.
Consider the classic example LameDuck inherits from Duck. Sounds easy, doesn't it?
Well, that is until you need to indicate that the duck has been injured and is now lame. Or indicate that the lame duck has been healed and can walk again. Java does not allow you to change an object's type, so using sub-types to indicate lameness doesn't actually work.
Programming to an interface means respecting the "contract" created by using that interface
This is the single most misunderstood thing about interfaces.
There is no way to enforce any such contract with interfaces. Interfaces, by definition, cannot specify any behaviour at all. Classes are where behaviour happens.
This mistaken belief is so widespread as to be considered the conventional wisdom by many people. It is, however, wrong.
So this statement in the OP
Almost every Java book I read talks about using the interface as a way to share state and behavior between objects
is just not possible. Interfaces have neither state nor behaviour. They can define properties, that implementing classes must provide, but that's as close as they can get. You cannot share behaviour using interfaces.
You can make an assumption that people will implement an interface to provide the sort of behaviour implied by the name of its methods, but that's not anything like the same thing. And it places no restrictions at all on when such methods are called (eg that Start should be called before Stop).
This statement
Required for GoF type patterns, such as the visitor pattern
is also incorrect. The GoF book uses exactly zero interfaces, as they were not a feature of the languages used at the time. None of the patterns require interfaces, although some can use them. IMO, the Observer pattern is one in which interfaces can play a more elegant role (although the pattern is normally implemented using events nowadays). In the Visitor pattern it is almost always the case that a base Visitor class implementing default behaviour for each type of visited node is required, IME.
Personally, I think the answer to the question is threefold:
Interfaces are seen by many as a silver bullet (these people usually labour under the "contract" misapprehension, or think that interfaces magically decouple their code)
Java people are very focussed on using frameworks, many of which (rightly) require classes to implement their interfaces
Interfaces were the best way to do some things before generics and annotations (attributes in C#) were introduced.
Interfaces are a very useful language feature, but are much abused. Symptoms include:
An interface is only implemented by one class
A class implements multiple interfaces. Often touted as an advantage of interfaces, usually it means that the class in question is violating the principle of separation of concerns.
There is an inheritance hierarchy of interfaces (often mirrored by a hierarchy of classes). This is the situation you're trying to avoid by using interfaces in the first place. Too much inheritance is a bad thing, both for classes and interfaces.
All these things are code smells, IMO.
It's one way to promote loose coupling.
With low coupling, a change in one module will not require a change in the implementation of another module.
A good use of this concept is the Abstract Factory pattern. In the Wikipedia example, the GUIFactory interface produces the Button interface. The concrete factory may be WinFactory (producing WinButton) or OSXFactory (producing OSXButton). Imagine if you were writing a GUI application and you had to go look around all instances of the OldButton class and change them to WinButton. Then next year, you need to add an OSXButton version.
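A condensed Java sketch of that Wikipedia example (the names GUIFactory, WinButton, and OSXButton come from the answer; the method bodies are assumed):

interface Button {
    void paint();
}
class WinButton implements Button {
    public void paint() { System.out.println("Windows-style button"); }
}
class OSXButton implements Button {
    public void paint() { System.out.println("OS X-style button"); }
}

interface GUIFactory {
    Button createButton();
}
class WinFactory implements GUIFactory {
    public Button createButton() { return new WinButton(); }
}
class OSXFactory implements GUIFactory {
    public Button createButton() { return new OSXButton(); }
}

class Application {
    // client code never names a concrete button class, so swapping
    // WinFactory for OSXFactory requires no changes here
    static void run(GUIFactory factory) {
        Button b = factory.createButton();
        b.paint();
    }
}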
In my opinion, you see this so often because it is a very good practice that is often applied in the wrong situations.
There are many advantages to interfaces relative to abstract classes:
You can switch implementations w/o re-building code that depends on the interface. This is useful for: proxy classes, dependency injection, AOP, etc.
You can separate the API from the implementation in your code. This can be nice because it makes it obvious when you're changing code that will affect other modules.
It allows developers writing code that is dependent on your code to easily mock your API for testing purposes.
You gain the most advantage from interfaces when dealing with modules of code. However, there is no easy rule to determine where module boundaries should be. So this best practice is easy to over-use, especially when first designing some software.
I would assume (with @eed3s9n) that it's to promote loose coupling. Also, without interfaces unit testing becomes much more difficult, as you can't mock up your objects.
Why extends is evil. This article is pretty much a direct answer to the question asked. I can think of almost no case where you would actually need an abstract class, and plenty of situations where it is a bad idea. This does not mean that implementations using abstract classes are bad, but you will have to take care so you do not make the interface contract dependent on artifacts of some specific implementation (case in point: the Stack class in Java).
One more thing: it is not necessary, or good practice, to have interfaces everywhere. Typically, you should identify when you need an interface and when you do not. In an ideal world, the second case should be implemented as a final class most of the time.
There are some excellent answers here, but if you're looking for a concrete reason, look no further than Unit Testing.
Consider that you want to test a method in the business logic that retrieves the current tax rate for the region where a transaction occurs. To do this, the business logic class has to talk to the database via a Repository:
interface IRepository<T> { T Get(string key); }

class TaxRateRepository : IRepository<TaxRate>
{
    protected internal TaxRateRepository() { }

    public TaxRate Get(string key)
    {
        // retrieve a TaxRate (obj) from the database
        return obj;
    }
}
Throughout the code, use the type IRepository&lt;TaxRate&gt; instead of TaxRateRepository.
The repository has a non-public constructor to encourage users (developers) to use the factory to instantiate the repository:
public static class RepositoryFactory
{
    // static constructor: a static class cannot have an instance constructor
    static RepositoryFactory()
    {
        TaxRateRepository = new TaxRateRepository();
    }

    public static IRepository<TaxRate> TaxRateRepository { get; private set; }

    public static void SetTaxRateRepository(IRepository<TaxRate> rep)
    {
        TaxRateRepository = rep;
    }
}
The factory is the only place where the TaxRateRepository class is referenced directly.
So you need some supporting classes for this example:
class TaxRate
{
    public string Region { get; set; }
    public decimal Rate { get; set; } // public setter so the test's object initializer compiles
}
static class Business
{
    public static decimal GetRate(string region)
    {
        var taxRate = RepositoryFactory.TaxRateRepository.Get(region);
        return taxRate.Rate;
    }
}
And there is also another implementation of IRepository - the mock up:
class MockTaxRateRepository : IRepository<TaxRate>
{
    public TaxRate ReturnValue { get; set; }
    public bool GetWasCalled { get; protected set; }
    public string KeyParamValue { get; protected set; }

    public TaxRate Get(string key)
    {
        GetWasCalled = true;
        KeyParamValue = key;
        return ReturnValue;
    }
}
Because the live code (the Business class) uses a factory to get the repository, in the unit test you plug in the MockTaxRateRepository for the TaxRateRepository. Once the substitution is made, you can hard-code the return value and make the database unnecessary.
class MyUnitTestFixture
{
    MockTaxRateRepository rep = new MockTaxRateRepository();

    [FixtureSetup]
    void ConfigureFixture()
    {
        RepositoryFactory.SetTaxRateRepository(rep);
    }

    [Test]
    void Test()
    {
        var region = "NY.NY.Manhattan";
        var rate = 8.5m;
        rep.ReturnValue = new TaxRate { Rate = rate };

        // r is the decimal rate returned by the business logic
        var r = Business.GetRate(region);

        Assert.IsTrue(rep.GetWasCalled);
        Assert.AreEqual(region, rep.KeyParamValue);
        Assert.AreEqual(rate, r);
    }
}
Remember, you want to test the business logic method only, not the repository, database, connection string, etc... There are different tests for each of those. By doing it this way, you can completely isolate the code that you are testing.
A side benefit is that you can also run the unit test without a database connection, which makes it faster, more portable (think multi-developer team in remote locations).
Another side benefit is that you can use the Test-Driven Development (TDD) process for the implementation phase of development. I don't strictly use TDD but a mix of TDD and old-school coding.
In one sense, I think your question boils down to simply, "why use interfaces and not abstract classes?" Technically, you can achieve loose coupling with both -- the underlying implementation is still not exposed to the calling code, and you can use Abstract Factory pattern to return an underlying implementation (interface implementation vs. abstract class extension) to increase the flexibility of your design. In fact, you could argue that abstract classes give you slightly more, since they allow you to both require implementations to satisfy your code ("you MUST implement start()") and provide default implementations ("I have a standard paint() you can override if you want to") -- with interfaces, implementations must be provided, which over time can lead to brittle inheritance problems through interface changes.
Fundamentally, though, I use interfaces mainly due to Java's single inheritance restriction. If my implementation MUST inherit from an abstract class to be used by calling code, that means I lose the flexibility to inherit from something else even though that may make more sense (e.g. for code reuse or object hierarchy).
One reason is that interfaces allow for growth and extensibility. Say, for example, that you have a method that takes an object as a parameter,
public void drink(coffee someDrink)
{
}
Now let's say you want to use the exact same method, but pass a hotTea object. Well, you can't. You just hard-coded that above method to only use coffee objects. Maybe that's good, maybe that's bad. The downside of the above is that it strictly locks you in with one type of object when you'd like to pass all sorts of related objects.
By using an interface, say IHotDrink,
interface IHotDrink { }
and rewriting your above method to use the interface instead of the object,
public void drink(IHotDrink someDrink)
{
}
Now you can pass all objects that implement the IHotDrink interface. Sure, you can write the exact same method that does the exact same thing with a different object parameter, but why? You're suddenly maintaining bloated code.
It's all about designing before coding.
If you don't know all the relationships between two objects after you have specified the interface, then you have done a poor job of defining the interface - which is relatively easy to fix.
If you had dived straight into coding and realised halfway through that you were missing something, it would be a lot harder to fix.
You could see this from a Perl/Python/Ruby perspective: when you pass an object as a parameter to a method, you don't pass its type; you just know that it must respond to some methods.
I think considering Java interfaces as an analogy to that would best explain this. You don't really pass a type, you just pass something that responds to a method (a trait, if you will).
I think the main reason to use interfaces in Java is the limitation to single inheritance. In many cases this lead to unnecessary complication and code duplication. Take a look at Traits in Scala: http://www.scala-lang.org/node/126 Traits are a special kind of abstract classes, but a class can extend many of them.