How exactly do "Objects communicate with each other by passing messages"? - language-agnostic

In several introductory texts on Object-oriented programming, I've come across the above statement.
From wikipedia, "In OOP, each object is capable of receiving messages, processing data, and sending messages to other objects and can be viewed as an independent 'machine' with a distinct role or responsibility."
What exactly does the statement mean in code?
class A
{
    methodA()
    {
    }
}

class B
{
    methodB()
    {
    }
}

class C
{
    main()
    {
        A a = new A();
        B b = new B();
        a.methodA(); // does this mean msgs passing??
        b.methodB(); // or does this?? I may be completely off-track here..
        a.methodB(); // or are these the messages?
        b.methodA();
    }
}

If we are talking about OOP, then the term "message passing" comes from Smalltalk. In a few words, the basic Smalltalk principles are:
The object is the basic unit of an object-oriented system.
Objects have their own state.
Objects communicate by sending and receiving messages.
If you are interested in Smalltalk, take a look at Pharo or Squeak.
Java/C#/C++ and many other languages use a slightly different approach, probably derived from Simula: you invoke a method instead of passing a message.
I think these terms are more or less equivalent. Maybe the only interesting difference is that message passing (at least in Smalltalk) always relies on dynamic dispatch and late binding, while method invocation can also use static dispatch and early binding. For example, C++ (AFAIK) does early binding by default until the "virtual" keyword appears somewhere...
Anyway, regardless of which formalism your programming language uses for communication between two objects (message passing or method invocation), it's always considered good OOP style to forbid direct access to instance variables (in Smalltalk terminology), data members (in C++ terminology), or whatever your language calls them.
Smalltalk directly prohibits access to instance variables at the syntax level. As I mentioned above, objects in a Smalltalk program can interact only by passing and receiving messages. Many other languages allow access to instance variables at the syntax level, but it's considered bad practice. For example, the famous Effective C++ book contains the corresponding recommendation: Item 22: Declare data members private.
The reasons are:
syntactic consistency (the only way for clients to access an object is via member functions or message passing);
more precise control over the accessibility of data members (you can implement no access, read-only access, read-write access, and even write-only access);
you can later replace the data member without breaking your public interface (a sketch follows the quote below).
The last one is the most important. It's the essence of encapsulation: information hiding at the class level.
The point about encapsulation is more important than it might initially appear. If you hide your data members from your clients (i.e., encapsulate them), you can ensure that class invariants are always maintained, because only member functions can affect them. Furthermore, you reserve the right to change your implementation decisions later. If you don't hide such decisions, you'll soon find that even if you own the source code to a class, your ability to change anything public is extremely restricted, because too much client code will be broken. Public means unencapsulated, and practically speaking, unencapsulated means unchangeable, especially for classes that are widely used. Yet widely used classes are most in need of encapsulation, because they are the ones that can most benefit from the ability to replace one implementation with a better one.
(c) Scott Meyers, Effective C++: 55 Specific Ways to Improve Your Programs and Designs (3rd Edition)
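As an illustration of the third point above, here is a minimal Java sketch (the class and names are mine, not from the book): the public interface stays fixed while the private representation changes underneath it.
class Temperature {
    private double celsius;                       // today: stored in Celsius

    public double getCelsius() { return celsius; }
    public void setCelsius(double c) { celsius = c; }
}

// Later, the representation can change without breaking any client code:
class Temperature2 {
    private double kelvin;                        // now: stored in Kelvin

    public double getCelsius() { return kelvin - 273.15; }
    public void setCelsius(double c) { kelvin = c + 273.15; }
}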

Not exactly an answer to your question, but a little digression about message dispatch vs. method invocation:
The term message refers to the fact that you don't know which method will be invoked, due to polymorphism. You ask an object to do something (hence the term message) and it acts accordingly. The term method invocation is misleading, as it suggests you pick one exact method.
The term message is also closer to the reality of dynamic languages, where you can actually send a message that the object doesn't understand (see doesNotUnderstand in Smalltalk). In that case you can't really speak of method invocation, since no matching method exists, and the message dispatch will fail. In statically typed languages, this problem is prevented.
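To make that concrete, here is a minimal Java sketch (my own names): the caller "sends" speak() without knowing which method body will run.
class Animal {
    void speak() { System.out.println("..."); }
}

class Dog extends Animal {
    @Override
    void speak() { System.out.println("woof"); }
}

public class Dispatch {
    public static void main(String[] args) {
        Animal a = new Dog(); // static type Animal, runtime type Dog
        a.speak();            // prints "woof": the receiver decides which method runs
    }
}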

"Passing a message" is an abstraction.
Most OO languages today implement that abstraction in the form of feature invocation. By feature I mean an operation (see the edit below), a property, or something similar. Bertrand Meyer in OOSC2 argues that feature invocation is the basic unit of computation in modern OO languages; this is a perfectly valid and coherent way to implement the old abstract idea that "objects communicate by message passing".
Other implementation techniques are possible. For example, objects managed by some middleware systems communicate by passing messages via a queue facility.
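As a rough Java sketch of that queue-based style (plain java.util.concurrent; the names are mine): two threads communicate only by posting to and taking from a shared mailbox.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

        Thread receiver = new Thread(() -> {
            try {
                // take() blocks until a message arrives
                System.out.println("received: " + mailbox.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        mailbox.put("hello");  // the sending object posts a message
        receiver.join();
    }
}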
In summary: don't confuse abstractions with code. In the olden days, programmers cared a lot about theory because programming barely existed as a mainstream profession. Today, most programmers are rarely familiar with the theory behind the code. :-)
By the way, and for the theory-inclined: agent-oriented modelling and programming approaches often emphasise message passing as a communication mechanism between agents, arguing that it derives from speech act theory. No method invocation is involved there.
Edit. In most OO languages, you call operations rather than methods. It is the runtime engine that decides which particular method to invoke in response to the operation you have called. This enables the oft-mentioned mechanism of polymorphism. This little nuance is usually forgotten in routine parlance when we refer to "calling a method"; however, it is necessary in order to differentiate between operations (features that implement message passing) and methods (specific versions of those operations).

They're referring to the fact that a client can invoke a method on a receiving object and pass data to it, but that object decides autonomously what to do with the data and maintains its own state as required.
The client object can't manipulate the state of the receiving object directly. This is an advantage of encapsulation - the receiving object can enforce its own state independently and change its implementation without affecting how clients interact with it.

In OOP, objects don't necessarily communicate with each other by passing messages. They communicate with each other in some way that allows them to specify what they want done, but leaves the implementation of that behavior to the receiving object. Passing a message is one way of achieving that separation of the interface from the implementation. Another way is to call a (virtual) method in the receiving object.
As to which of your member function calls would really fit those requirements, it's a bit difficult to say on a language-agnostic basis. Just for example, in Java member functions are virtual by default, so your calls to a.methodA() and b.methodB() would be equivalent to passing a message. Your attempted calls to b.methodA() and a.methodB() wouldn't compile, because Java is statically typed.
By contrast, in C++ member functions are not virtual by default, so none of your calls is equivalent to message passing. To get something equivalent, you'd need to explicitly declare at least one of the member functions virtual:
class A {
    virtual void methodA() {}
};
As it stands, however, this is basically a "distinction without a difference." To see what the difference means, you need to use some inheritance:
#include <iostream>

struct base {
    void methodA() { std::cout << "base::methodA\n"; }
    virtual void methodB() { std::cout << "base::methodB\n"; }
};

struct derived : base {
    void methodA() { std::cout << "derived::methodA\n"; }
    void methodB() override { std::cout << "derived::methodB\n"; }
};

int main() {
    base *b1 = new base;
    base *b2 = new derived;

    b1->methodA(); // "base::methodA"
    b1->methodB(); // "base::methodB"
    b2->methodA(); // "base::methodA" (non-virtual: resolved from the static type)
    b2->methodB(); // "derived::methodB" (virtual: dispatched on the runtime type)
    return 0;
}

What you have posted will not compile in any OOP language, as methodB does not belong to class A and methodA doesn't belong to class B.
If you called the correct method, then both of these are message passing by object C:
a.methodA();
b.methodB();
From wikipedia:
The process by which an object sends data to another object or asks the other object to invoke a method.

Your example won't work with Java or Python, so I have corrected and annotated your main:
class C
{
    main()
    {
        A a = new A();
        B b = new B();
        a.methodA(); // C says to a that methodA should be executed

        // C says to b that methodB should be executed
        // and b says to C that the result is answer
        answer = b.methodB();
    }
}

Some of the early academic work on OO described it in terms of objects passing messages to each other in order to invoke behavior. Some early OO languages were actually written that way (Smalltalk, for example).
Modern languages like C++, C# and Java do not work that way at all. They simply have code call methods on objects. This is exactly like a procedural language, except that a hidden reference to the object being called is passed in the call ("this").
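A small Java sketch of that hidden reference made explicit (the class is hypothetical):
class Counter {
    private int count;

    // Instance method: the receiver is passed implicitly as 'this'.
    void increment() { this.count++; }

    // The procedural equivalent: the receiver is an explicit parameter.
    static void increment(Counter self) { self.count++; }
}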

You can pass an object as a parameter to a method of an object of a different class type.
That way you pass the attributes of one object to another object of a different class,
simply by calling a method on an object of that other class.
So you can create an object of one class to get information from an object of a different class.
Note:
This is not method overriding; the methods can have the same name but belong to different class types.
Overriding is when you inherit a method in a subclass and change the behavior of that method inherited from the superclass.
Which method gets called depends on the arguments you pass and their data types; the system calls the right method, whether it is located in an object of a superclass or in an object of a subclass.
A lot of people ask the same question when they are working with OOP.
I recommend reading these old books to understand what OOP is, rather than just learning to program object-oriented in a language such as C++, Java, or PHP:
An Introduction to Object-Oriented Programming (Timothy Budd)
Object-Oriented Programming: An Evolutionary Approach
(Brad J. Cox, Andrew J. Novobilski)
And don't forget to read Bjarne Stroustrup's newer C++ books.
#include <iostream>
#include <string>
using namespace std;

class Car {
    string brand;
public:
    void setBrand(string newBrand) { this->brand = newBrand; }
    void Driver() { cout << " IS DRIVING THIS CAR BRAND " << brand << endl; }
    void Brake() { cout << "IS BRAKING" << endl; }
};

class Person {
private:
    string name;
public:
    void setName(string newName) { this->name = newName; }

    // Here we call methods of the Car class; this is redefinition, not overriding.
    void Driver(Car objectOfClassCar) {
        cout << this->name << ends;
        objectOfClassCar.Driver();
    }
    void Brake(string str, Car objectOfClassCar) {
        cout << this->name << " " << str << ends;
        objectOfClassCar.Brake();
    }
};

int main() {
    Car corolla;
    corolla.setBrand("TOYOTA");
    Person student;
    student.setName("MIGUEL");
    student.Driver(corolla);
    student.Brake("CAR", corolla);
    // This opens up a lot of opportunities to do the same.
    return 0;
}

Does that code even work?
Anyway, you're off track...
Message passing is one means of interprocess communication, among many others. It means that two (or more) objects can talk to each other only through messages, which say from whom, to whom, and what...
You can see it's very different from shared memory, for example...

Related

How do you describe new to a new CS student?

In OOP, there are entities (e.g. Person) which have attributes (e.g. name, address, etc.) and methods. How do you describe new? Is it a method, or just a special token to bring an abstract entity to a real one?
Sometimes it's a method, sometimes it's just syntactic sugar that invokes an allocator method. The language matters.
To your CS student? Don't sugar-coat it; they need to be able to get their heads around the concepts pretty quickly, and using metaphors unrelated to the computer field, while fine for explaining it to your 80-year-old grandmother, will not help a CS student much.
Simply put, tell them that a class is a specification for something and that an object is a concrete instance of that something. All new does is create a concrete instance based on the specification. This includes both creation (not necessarily class-specific, which is why I'd hesitate to call it a method, reserving that term for class-bound functions) and initialisation (which is class-specific).
Depending on the language, new is a keyword which can be used as an operator or as a modifier. For instance, in C#, new can be used as:
new operator - used to create objects on the heap and invoke constructors.
new modifier - used to hide an inherited member from a base class member
For brand new students I would describe new only as a keyword and leave the modifier out of the discussion. I describe classes as blueprints and new as the mechanism by which those blueprints turn into something tangible - objects.
You may want to check out my question, How to teach object oriented programming to procedural programmers, for other great answers on teaching OOP to new developers.
In most object-oriented languages, new is simply a convention for naming a factory method. But it's only one of many conventions.
In Ruby, for example, it is conventional to name the factory method [] for collection classes. In Python, classes are simply their own factories. In Io, the factory method is generally called clone, in Ioke and Seph it is called mimic.
In Smalltalk, factory methods often have more descriptive names than just new:. Something like fromList: or with:.
Here's a simile that has worked for me in the past.
An object definition is a Jello Mold. "new" is the process that actually makes a Jello snack from that mold. Each Goopy Jello thing that you give to your new neighbors can be different, this one's green, this one has bits of fruit in it, etc. It's its own unique "object." But the mold is the same.
Or you can use a factory analogy or something, (or blueprints vs building).
As far as its role in the syntax goes, it's just a keyword that tells the compiler to allocate memory on the heap and run the constructor. There's not much more to it.
Smalltalk: it's an instance method on the metaclass. So "new is a method that returns a newly-allocated instance."
I tell people that a class is like a plan on how to make an object. An object is made from the class by new. If they need more than that, well, I just don't know what to say. ;-)
new is a keyword that calls the class constructor of the class to the right of it with the arguments listed inside ().
String str = new String("asdf");
str is defined as a String class variable, created using the constructor with the argument "asdf".
At least that's how it was presented to me.
In Ruby, I believe it's an instance method on the metaclass. In CLOS it's a generic function called make-instance, but otherwise it's roughly the same.
In some languages, like Java, new has special syntax, and the metaclass part is hidden. In the case where you have to teach somebody OO with such a language, I don't know that there's much you can do. (Taking a break from teaching OO and Java to teach a second object system would almost certainly just confuse them further!) Just explain what it does, and that it's a special case.
You can say that a class is a prototype/blueprint for an object. When you use the keyword new, that prototype/blueprint comes to life. It's like breathing life into an otherwise dead definition.
In Java,
new allocates memory for a new class instance (object)
new runs a class's constructor to initialize that instance
new returns a reference to that new instance
As far as the relationship between object/instance and class, I sometimes think:
class is to instance as blueprint is to building
new in most languages does some variation of the following (a small sketch follows this list):
designate some memory region for a class instance, and if necessary, inform the garbage collector about how to free that memory later.
initialize that memory region in the manner specific to that class, transforming the bytes of raw memory into bytes of a valid instance of the class
return a reference to the memory location to the caller.
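A minimal Java sketch of those three steps (Point is a hypothetical class):
class Point {
    final int x, y;

    Point(int x, int y) { // the constructor that 'new' runs
        this.x = x;
        this.y = y;
    }
}

public class NewDemo {
    public static void main(String[] args) {
        Point p = new Point(3, 4); // allocate, run the constructor, return a reference
        System.out.println(p.x + ", " + p.y);
    }
}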

What's the difference between closures and traditional classes?

What are the pros and cons of closures against classes, and vice versa?
Edit:
As user Faisal put it, both closures and classes can be used to "describe an entity that maintains and manipulates state", so closures provide a way to program in an object oriented way using functional languages. Like most programmers, I'm more familiar with classes.
The intention of this question is not to open another flame war about which programming paradigm is better, or whether closures and classes are fully equivalent or poor man's versions of one another.
What I'd like to know is if anyone found a scenario in which one approach really beats the other, and why.
Functionally, closures and objects are equivalent. A closure can emulate an object and vice versa. So which one you use is a matter of syntactic convenience, or which one your programming language can best handle.
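For instance, here is a minimal Java sketch (my own example) of a closure emulating a one-method object: the captured array is its private state.
import java.util.function.IntSupplier;

class ClosureAsObject {
    static IntSupplier makeCounter() {
        int[] count = {0};        // mutable state captured by the lambda
        return () -> ++count[0];  // the "method" that manipulates that state
    }

    public static void main(String[] args) {
        IntSupplier counter = makeCounter();
        System.out.println(counter.getAsInt()); // 1
        System.out.println(counter.getAsInt()); // 2
    }
}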
In C++ (before C++11 lambdas) closures are not syntactically available, so you are forced to go with "functors", which are objects that override operator() and may be called in a way that looks like a function call.
In Java you don't even have functors, so you get things like the Visitor pattern, which would just be a higher order function in a language that supports closures.
In standard Scheme you don't have objects, so sometimes you end up implementing them by writing a closure with a dispatch function, executing different sub-closures depending on the incoming parameters.
In a language like Python, the syntax of which has both functors and closures, it's basically a matter of taste and which you feel is the better way to express what you are doing.
Personally, I would say that in any language that has syntax for both, closures are a much more clear and clean way to express objects with a single method. And vice versa, if your closure starts handling dispatch to sub-closures based on the incoming parameters, you should probably be using an object instead.
Personally, I think it's a matter of using the right tool for the job...more specifically, of properly communicating your intent.
If you want to explicitly show that all your objects share a common definition and want strong type-checking of such, you probably want to use a class. The disadvantage of not being able to alter the structure of your class at runtime is actually a strength in this case, since you know exactly what you're dealing with.
If instead you want to create a heterogeneous collection of "objects" (i.e. state represented as variables closed over by some function, with inner functions to manipulate that data), you might be better off creating a closure. In this case, there's no real guarantee about the structure of the object you end up with, but you get all the flexibility of defining it exactly as you like at runtime.
Thank you for asking, actually; I'd responded with a sort of knee-jerk "classes and closures are totally different!" attitude at first, but with some research I realize the problem isn't nearly as cut-and-dry as I'd thought.
Closures are only loosely related to classes. Classes let you define fields and methods, while closures hold information about local variables from a function call. There is no real comparison of the two in a language-agnostic manner: they don't serve the same purpose at all. Besides, closures are much more related to functional programming than to object-oriented programming.
For instance, look at the following C# code:
static void Main(String[] args)
{
    int i = 4;
    Action myDelegate = delegate()
    {
        i = 5;
    };
    Console.WriteLine(i);
    myDelegate();
    Console.WriteLine(i);
}
This gives "4" then "5". myDelegate, being a delegate, is a closure and knows about all the variables currently used by the function. Therefore, when I call it, it is allowed to change the value of i inside the "parent" function. This would not be permitted for a normal function.
Classes, if you know what they are, are completely different.
A possible reason for your confusion is that when a language has no support for closures, it's possible to simulate them using classes that hold every variable we need to keep around. For instance, we could rewrite the above code like this:
class MainClosure
{
    public int i;

    public void Apply()
    {
        i = 5;
    }
}

static void Main(String[] args)
{
    MainClosure closure = new MainClosure();
    closure.i = 4;

    Console.WriteLine(closure.i);
    closure.Apply();
    Console.WriteLine(closure.i);
}
We've transformed the delegate into a class that we've called MainClosure. Instead of creating the variable i inside the Main function, we've created a MainClosure object, which has an i field. This is the one we'll use. Also, we've put the code the delegate executes inside an instance method, instead of inside an anonymous method.
As you can see, even though this was an easy example (only one variable), it is considerably more work. In a context where you want closures, using objects is a poor solution. However, classes are not only useful for creating closures; their usual purpose is far different.

Should inheritance (of non-interface types) be removed from programming languages?

This is quite a controversial topic, and before you say "no", is it really, really needed?
I have been programming for about 10 years, and I can't honestly say that I can recall a time where inheritance solved a problem that couldn't be solved another way. On the other hand, I can recall many times when I used inheritance because I felt like I had to, or because I thought I was being clever, and ended up paying for it.
I can't really see any circumstances where, from an implementation stand point, aggregation or another technique could not be used instead of inheritance.
My only caveat to this is that we would still allow inheritance of interfaces.
(Update)
Let's give an example of why it's needed instead of saying, "sometimes it's just needed." That really isn't helpful at all. Where is your proof?
(Update 2 Code Example)
Here's the classic shape example, more powerful and more explicit IMO, without inheritance. It is almost never the case in the real world that something really "is a" something else. Almost always, "is implemented in terms of" is more accurate.
public interface IShape
{
    void Draw();
}

public class BasicShape : IShape
{
    public void Draw()
    {
        // All shapes in this system have a dot in the middle except squares.
        DrawDotInMiddle();
    }
}

public class Circle : IShape
{
    private BasicShape _basicShape;

    public void Draw()
    {
        // Draw the circle part
        DrawCircle();
        _basicShape.Draw();
    }
}

public class Square : IShape
{
    private BasicShape _basicShape;

    public void Draw()
    {
        // Draw the square part (no dot in the middle for squares)
        DrawSquare();
    }
}
I blogged about this as a wacky idea a while ago.
I don't think it should be removed, but I think classes should be sealed by default to discourage inheritance when it's not appropriate. It's a powerful tool to have available, but it's like a chain-saw - you really don't want to use it unless it's the perfect tool for the job. Otherwise you might start losing limbs.
There are potential language features, such as mix-ins, which would make it easier to live without, IMO.
Inheritance can be rather useful in situations where your base class has a number of methods with the same implementation for each derived class, to save every single derived class from having to implement boiler-plate code. Take the .NET Stream class for example which defines the following methods:
// note: in the real class, Read is abstract
public abstract int Read(byte[] buffer, int index, int count);

public virtual int ReadByte()
{
    // note: this is only an approximation to the real implementation
    var buffer = new byte[1];
    if (this.Read(buffer, 0, 1) == 1)
    {
        return buffer[0];
    }
    return -1;
}
Because inheritance is available the base class can implement the ReadByte method for all implementations without them having to worry about it. There are a number of other methods like this on the class which have default or fixed implementations. So in this type of situation it's a very valuable thing to have, compared with an interface where your options are either to make everyone re-implement everything, or to create a StreamUtil type class which they can call (yuk!).
To clarify, with inheritance all I need to write to create a DerivedStream class is something like:
public class DerivedStream : Stream
{
    public override int Read(byte[] buffer, int index, int count)
    {
        // my read implementation
    }
}
Whereas if we're using interfaces and a default implementation of the methods in StreamUtil I have to write a bunch more code:
public class DerivedStream : IStream
{
    public int Read(byte[] buffer, int index, int count)
    {
        // my read implementation
    }

    public int ReadByte()
    {
        return StreamUtil.ReadByte(this);
    }
}
So it's not a huge amount more code, but multiply this by a few more methods on the class and it's just unnecessary boilerplate which the compiler could handle instead. Why make things more painful to implement than necessary? I don't think inheritance is the be-all and end-all, but it can be very useful when used correctly.
Of course you can write great programs happily without objects and inheritance; functional programmers do it all the time. But let us not be hasty. Anybody interested in this topic should check out the slides from Xavier Leroy's invited lecture about classes vs modules in Objective Caml. Xavier does a beautiful job laying out what inheritance does well and does not do well in the context of different kinds of software evolution.
All languages are Turing-complete, so of course inheritance isn't necessary. But as an argument for the value of inheritance, I present the Smalltalk blue book, especially the Collection hierarchy and the Number hierarchy. I'm very impressed that a skilled specialist can add an entirely new kind of number (or collection) without perturbing the existing system.
I will also remind questioner of the "killer app" for inheritance: the GUI toolkit. A well-designed toolkit (if you can find one) makes it very, very easy to add new kinds of graphical interaction widgets.
Having said all that, I think that inheritance has innate weaknesses (your program logic is smeared out over a large set of classes) and that it should be used rarely and only by skilled professionals. A person graduating with a bachelor's degree in computer science barely knows anything about inheritance; such persons should be permitted to inherit from other classes at need, but should never, ever write code from which other programmers inherit. That job should be reserved for master programmers who really know what they're doing. And they should do it reluctantly!
For an interesting take on solving similar problems using a completely different mechanism, people might want to check out Haskell type classes.
I wish languages would provide some mechanisms to make it easier to delegate to member variables. For example, suppose interface I has 10 methods, and class C1 implements this interface. Suppose I want to implement class C2 that is just like a C1 but with method m1() overridden. Without using inheritance, I would do this as follows (in Java):
public class C2 implements I {
    private I c1;

    public C2() {
        c1 = new C1();
    }

    public void m1() {
        // This is the method C2 is overriding.
    }

    public void m2() {
        c1.m2();
    }

    public void m3() {
        c1.m3();
    }

    ...

    public void m10() {
        c1.m10();
    }
}
In other words, I have to explicitly write code to delegate the behavior of methods m2..m10 to the member variable c1. That's a bit of a pain. It also clutters the code up, so that it's harder to see the real logic in class C2. It also means that whenever new methods are added to interface I, I have to explicitly add more code to C2 just to delegate these new methods to c1.
I wish languages would allow me to say: C2 implements I, but if C2 is missing some method from I, automatically delegate to member variable c1. That would cut the size of C2 down to just:
public class C2 implements I(delegate to c1) {
    private I c1;

    public C2() {
        c1 = new C1();
    }

    public void m1() {
        // This is the method C2 is overriding.
    }
}
If languages allowed us to do this, it would be much easier to avoid use of inheritance.
Here's a blog article I wrote about automatic delegation.
Inheritance is one of those tools that can be used, and of course can be abused, but I think languages have to have more changes before class-based inheritance could be removed.
Let's take my world at the moment, which is mainly C# development.
For Microsoft to take away class-based inheritance, they would have to build in much stronger support for handling interfaces: things like aggregation, where I currently need to add lots of boilerplate code just to wire up an interface to an internal object. This really should be done anyway, but it would be a requirement in such a case.
In other words, the following code:
public interface IPerson { ... }
public interface IEmployee : IPerson { ... }

public class Employee : IEmployee
{
    private Person _Person;
    ...
    public String FirstName
    {
        get { return _Person.FirstName; }
        set { _Person.FirstName = value; }
    }
}
This would basically have to be a lot shorter; otherwise I'd have lots of these properties just to make my class mimic a person well enough, something like this:
public class Employee : IEmployee
{
    private Person _Person implements IPerson;
    ...
}
This could auto-generate the code necessary, instead of me having to write it. Just returning the internal reference if my object is cast to an IPerson would do no good.
So things would have to be better supported before class-based inheritance could be taken off the table.
Also, you would lose things like visibility levels. An interface really has just two visibility settings: there, and not there. In some cases you would be, or so I think, forced to expose more of your internal data just so that someone else can more easily use your class.
For class-based inheritance, you can usually expose some access points that a descendant can use, but outside code can't, and you would generally have to just remove those access points, or make them open to everyone. Not sure I like either alternative.
My biggest question would be what specifically the point of removing such functionality would be, even if the plan would be to, as an example, build D#, a new language, like C#, but without the class-based inheritance. In other words, even if you plan on building a whole new language, I still am not entirely sure what the ultimate goal would be.
Is the goal to remove something that can be abused if not in the right hands? If so, I have a list a mile long of features in various programming languages that I would really like to see addressed first.
At the top of that list: The with keyword in Delphi. That keyword is not just like shooting yourself in the foot, it's like the compiler buys the shotgun, comes to your house and takes aim for you.
Personally I like class-based inheritance. Sure, you can write yourself into a corner. But we can all do that. Remove class-based inheritance, and I'll just find a new way of shooting myself in the foot.
Now where did I put that shotgun...
Have fun implementing ISystemObject on all of your classes so that you have access to ToString() and GetHashcode().
Additionally, good luck with the ISystemWebUIPage interface.
If you don't like inheritance, my suggestion is to stop using .NET altogether. There are way too many scenarios where it saves time (see DRY: don't repeat yourself).
If using inheritance is blowing up your code, then you need to take a step back and rethink your design.
I prefer interfaces, but they aren't a silver bullet.
For production code I almost never use inheritance. I go with interfaces for everything (this helps with testing and improves readability, i.e. you can just look at the interface to read the public methods and see what is going on, thanks to well-named methods and class names). Pretty much the only time I would use inheritance is when a third-party library demands it. Using interfaces, I would get the same effect, but I would mimic inheritance by using 'delegation'.
For me, not only is this more readable but it is much more testable and also makes refactoring a whole lot easier.
The only time I can think of that I would use inheritance in testing would be to create my own specific TestCases used to differentiate between types of tests I have in my system.
So I probably wouldn't get rid of it but I choose not to use it as much as possible for the reasons mentioned above.
No. Sometimes you need inheritance. And for those times where you don't, don't use it. You can always "just" use interfaces (in languages that have them), and abstract classes without data members work like interfaces in those languages that don't have them. But I see no reason to remove what is sometimes a necessary feature just because you feel it isn't always needed.
No. Just because it's not often needed doesn't mean it's never needed. Like any other tool in a toolkit, it can be (and has been, and will be) misused. However, that doesn't mean it should never be used. In fact, in some languages (C++), there is no such thing as an 'interface' at the language level, so without a major change, you couldn't prohibit it.
No, it is not needed, but that does not mean it does not provide an overall benefit, which I think is more important than worrying about whether it is absolutely necessary.
In the end, almost all modern software language constructs amount to syntactic sugar - we could all be writing assembly code (or using punch cards, or working with vacuum tubes) if we really had to.
I find inheritance immensely useful when I truly want to express an "is-a" relationship. Inheritance seems to be the clearest means of expressing that intent. If I used delegation for all implementation re-use, I would lose that expressiveness.
Does this allow for abuse? Of course it does. I often see questions asking how the developer can inherit from a class but hide a method because that method should not exist on the subclass. That person obviously misses the point of inheritance, and should be pointed toward delegation instead.
I don't use inheritance because it is needed, I use it because it is sometimes the best tool for the job.
I guess I have to play the devil's advocate. If we didn't have inheritance then we wouldn't be able to inherit from abstract classes that use the template method pattern. There are lots of examples where this is used in frameworks such as .NET and Java. Thread in Java is such an example:
// Alternative 1:
public class MyThread extends Thread {
    // Hook method overridden from Thread,
    // aka. the "template method" (GoF design pattern)
    public void run() {
        // ...
    }
}

// Usage:
MyThread t = new MyThread();
t.start();
The alternative is, in my opinion, verbose when you have to use it. Visual clutter and complexity go up. This is because you need to create the Thread before you can actually use it.
// Alternative 2:
public class MyThread implements Runnable {
    // Method to implement from Runnable:
    public void run() {
        // ...
    }
}

// Usage:
MyThread m = new MyThread();
Thread t = new Thread(m);
t.start();

// ...or if you have a curious perversion towards one-liners
Thread t = new Thread(new MyThread());
t.start();
With my devil's advocate hat off, I guess you could argue that the gain of the second implementation is dependency injection or separation of concerns, which helps in designing testable classes. Depending on your definition of what an interface is (I've heard of at least three), an abstract class could be regarded as an interface.
Needed? No. You can write any program in C, for example, which doesn't have any sort of inheritance or objects. You could write it in assembly language, although it would be less portable. You could write it in a Turing machine and have it emulated. Somebody designed a computer language with exactly one instruction (something like subtract and branch if not zero), and you could write your program in that.
So, if you're going to ask if a given language feature is necessary (like inheritance, or objects, or recursion, or functions), the answer is no. (There are exceptions - you have to be able to loop and do things conditionally, although these need not be supported as explicit concepts in the language.)
Therefore, I find questions of this sort useless.
Questions like "When should we use inheritance" or "When shouldn't we" are a lot more useful.
A lot of the time I find myself choosing a base class over an interface just because I have some standard functionality. In C#, I can now use extension methods to achieve that, but it still doesn't achieve the same thing in several situations.
Is inheritance really needed? Depends what you mean by "really". You could go back to punch cards or flicking toggle switches in theory, but it's a terrible way to develop software.
In procedural languages, yes, class inheritance is a definite boon. It gives you a way to elegantly organise your code in certain circumstances. It should not be overused, as any other feature should not be overused.
For example, take the case of digiarnie in this thread. He/she uses interfaces for nearly everything, which is just as bad as (possibly worse than) using lots of inheritance.
Some of his points :
this helps with testing and improves readability
It doesn't do either thing. You never actually test an interface; you always test an object, that is, an instantiation of a class. And does having to look at a completely different bit of code help you understand the structure of a class? I don't think so.
Ditto for deep inheritance hierarchies though. You ideally want to look in one place only.
Using interfaces, I would get the same effect but I would mimic inheritance by using 'delegation'.
Delegation is a very good idea, and should often be used instead of inheritance (for example, the Strategy pattern is all about doing exactly this). But interfaces have zero to do with delegation, because you cannot specify any behaviour at all in an interface.
also makes refactoring a whole lot easier.
Early commitment to interfaces usually makes refactoring harder, not easier, because there are then more places to change. Overusing inheritance early is better (well, less bad) than overusing interfaces, as pulling out delegate classes is easier if the classes being modified do not implement any interfaces. And it's quite often from those delegates that you get useful interfaces.
So overuse of inheritance is a bad thing. Overuse of interfaces is a bad thing. And ideally, a class will neither inherit from anything (except maybe "object" or the language equivalent), nor implement any interfaces. But that doesn't mean either feature should be removed from a language.
If there is a framework class that does almost exactly what you want, but a particular function of its interface throws a NotSupported exception or for some other reason you only want to override one method to do something specific to your implementation, it's much easier to write a subclass and override that one method rather than write a brand new class and write pass-throughs for each of the other 27 methods in the class.
Similarly, what about Java, where every object inherits from Object and therefore automatically has implementations of equals, hashCode, etc.? I don't have to re-implement them, and it "just works" when I want to use the object as a key in a hashtable. I don't have to write a default pass-through to a Hashtable.hashcode(Object o) method, which frankly seems like it's moving away from object orientation.
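A small Java sketch of that point (my own example): a class that declares nothing still works as a map key, because it inherits equals/hashCode from Object.
import java.util.HashMap;
import java.util.Map;

class Token { } // declares nothing, yet inherits equals/hashCode/toString from Object

public class InheritedDefaults {
    public static void main(String[] args) {
        Map<Token, String> map = new HashMap<>();
        Token t = new Token();
        map.put(t, "value");            // works thanks to the inherited identity-based hashCode
        System.out.println(map.get(t)); // "value"
    }
}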
My initial thought was: you're crazy. But after thinking about it a while, I kinda agree with you. I'm not saying remove class inheritance entirely (abstract classes with partial implementations, for example, can be useful), but I have often inherited (pun intended) badly written OO code with multi-level class inheritance that added nothing, other than bloat, to the code.
Note that inheritance means it is no longer possible to supply the base class functionality by dependency injection, in order to unit test a derived class in isolation of its parent.
So if you're deadly serious about dependency injection (which I'm not, but I do wonder whether I should be), you can't get much use out of inheritance anyway.
Here's a nice view at the topic:
IS-STRICTLY-EQUIVALENT-TO-A by Reg Braithwaite
I believe a better mechanism for code re-use, which is sometimes achieved through inheritance, is traits. Check this link (pdf) for a great discussion on the subject, including the distinction between traits and mixins, and why traits are favored.
There's some research that introduces traits into C# (pdf).
Perl has traits through Moose::Roles. Scala traits are like mixins, as in Ruby.
The question is, "Should inheritance (of non-interface types) be removed from programming languages?"
I say, "No", as it will break a hell of a lot of existing code.
That aside, should you use inheritance, other than inheritance of interfaces? I'm predominantly a C++ programmer and I follow a strict object model of multiple inheritance of interfaces followed by a chain of single inheritance of classes. The concrete classes are a "secret" of a component and its friends, so what goes on there is nobody's business.
To help implement interfaces, I use template mixins. This allows the interface designer to provide snippets of code to help implement the interface for common scenarios. As a component developer I feel like I can go mixin shopping to get the reusable bits without being encumbered by how the interface designer thought I should build my class.
Having said that, the mixin paradigm is pretty much unique to C++. Without this, I expect that inheritance is very attractive to the pragmatic programmer.

How should I refactor my code to remove unnecessary singletons?

I was confused when I first started to see anti-singleton commentary. I have used the singleton pattern in some recent projects, and it was working out beautifully. So much so, in fact, that I have used it many, many times.
Now, after running into some problems, reading this SO question, and especially this blog post, I understand the evil that I have brought into the world.
So: How do I go about removing singletons from existing code?
For example:
In a retail store management program, I used the MVC pattern. My Model objects describe the store, the user interface is the View, and I have a set of Controllers that act as liaisons between the two. Great. Except that I made the Store into a singleton (since the application only ever manages one store at a time), and I also made most of my Controller classes into singletons (one mainWindow, one menuBar, one productEditor...). Now, most of my Controller classes access the other singletons like this:
Store managedStore = Store::getInstance();
managedStore.doSomething();
managedStore.doSomethingElse();
//etc.
Should I instead:
Create one instance of each object and pass references to every object that needs access to them?
Use globals?
Something else?
Globals would still be bad, but at least they wouldn't be pretending.
I see #1 quickly leading to horribly inflated constructor calls:
someVar = SomeControllerClass(managedStore, menuBar, editor, sasquatch, ...)
Has anyone else been through this yet? What is the OO way to give many individual classes access to a common variable without it being a global or a singleton?
Dependency Injection is your friend.
Take a look at these posts on the excellent Google Testing Blog:
Singletons are pathologic liars (but you probably already understand this if you are asking this question)
A talk on Dependency Injection
Guide to Writing Testable Code
Hopefully someone has made a DI framework/container for the C++ world? It looks like Google has released a C++ Testing Framework and a C++ Mocking Framework, which might help you out.
It's not the Singleton-ness that is the problem. It's fine to have an object that there will only ever be one instance of. The problem is the global access. Your classes that use Store should receive a Store instance in the constructor (or have a Store property / data member that can be set) and they can all receive the same instance. Store can even keep logic within it to ensure that only one instance is ever created.
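A minimal Java sketch of that idea (all names hypothetical): the Store is created once at startup and handed to whoever needs it, rather than fetched via getInstance().
class Store {
    void doSomething() { /* ... */ }
}

class ProductEditor {
    private final Store store; // received in the constructor, not fetched globally

    ProductEditor(Store store) { this.store = store; }

    void edit() { store.doSomething(); }
}

public class App {
    public static void main(String[] args) {
        Store store = new Store();                       // the one instance, created at startup
        ProductEditor editor = new ProductEditor(store); // everyone gets the same instance
        editor.edit();
    }
}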
My way to avoid singletons derives from the idea that "application global" doesn't mean "VM global" (i.e. static). Therefore I introduce an ApplicationContext class which holds much of the formerly static singleton information that should be application global, like the configuration store. This context is passed into all structures. If you use any IoC container or service manager, you can use it to get access to the context.
There's nothing wrong with using a global or a singleton in your program. Don't let anyone get dogmatic on you about that kind of crap. Rules and patterns are nice rules of thumb. But in the end it's your project and you should make your own judgments about how to handle situations involving global data.
Unrestrained use of globals is bad news. But as long as you are diligent, they aren't going to kill your project. Some objects in a system deserve to be singleton. The standard input and outputs. Your log system. In a game, your graphics, sound, and input subsystems, as well as the database of game entities. In a GUI, your window and major panel components. Your configuration data, your plugin manager, your web server data. All these things are more or less inherently global to your application. I think your Store class would pass for it as well.
It's clear what the cost of using globals is. Any part of your application could be modifying it. Tracking down bugs is hard when every line of code is a suspect in the investigation.
But what about the cost of NOT using globals? Like everything else in programming, it's a trade off. If you avoid using globals, you end up having to pass those stateful objects as function parameters. Alternatively, you can pass them to a constructor and save them as a member variable. When you have multiple such objects, the situation worsens. You are now threading your state. In some cases, this isn't a problem. If you know only two or three functions need to handle that stateful Store object, it's the better solution.
But in practice, that's not always the case. If every part of your app touches your Store, you will be threading it to a dozen functions. On top of that, some of those functions may have complicated business logic. When you break that business logic up with helper functions, you have to -- thread your state some more! Say, for instance, you realize that a deeply nested function needs some configuration data from the Store object. Suddenly, you have to edit 3 or 4 function declarations to include that store parameter. Then you have to go back and add the store as an actual parameter everywhere one of those functions is called. It may be that the only use a function has for a Store is to pass it to some subfunction that needs it.
Patterns are just rules of thumb. Do you always use your turn signals before making a lane change in your car? If you're the average person, you'll usually follow the rule, but if you are driving at 4am on an empty highway, who gives a crap, right? Sometimes it'll bite you in the butt, but that's a managed risk.
Regarding your inflated constructor call problem, you could introduce parameter classes or factory methods to alleviate the problem for you.
A parameter class moves some of the parameter data into its own class, e.g. like this:
var parameterClass1 = new MenuParameter(menuBar, editor);
var parameterClass2 = new StuffParameters(sasquatch, ...);
var ctrl = new MyControllerClass(managedStore, parameterClass1, parameterClass2);
It sort of just moves the problem elsewhere, though. You might want to housekeep your constructor instead: only keep parameters that are important when constructing/initiating the class in question, and do the rest with getter/setter methods (or properties if you're doing .NET).
A factory method is a method that creates all the instances of a class you need, and it has the benefit of encapsulating the creation of said objects. Factory methods are also quite easy to refactor towards from a Singleton, because they're similar to the getInstance methods you see in Singleton patterns. Say we have the following non-threadsafe simple Singleton example:
// The Rather Unfortunate Singleton Class
public class SingletonStore {
    private static SingletonStore _singleton
        = new SingletonStore();

    private SingletonStore() {
        // Do some privatised constructing in here...
    }

    public static SingletonStore getInstance() {
        return _singleton;
    }

    // Some methods and stuff to be down here
}
// Usage:
// var singleInstanceOfStore = SingletonStore.getInstance();
It is easy to refactor this towards a factory method. The solution is to remove the static instance field:
public class StoreWithFactory {

    public StoreWithFactory() {
        // Whether the constructor is private or public doesn't matter,
        // unless you do TDD, in which case you need a public
        // constructor so you can create the object and test it.
    }

    // The method returning an instance of the former singleton is
    // now a factory method.
    public static StoreWithFactory getInstance() {
        return new StoreWithFactory();
    }
}
// Usage:
// var myStore = StoreWithFactory.getInstance();
Usage is still the same, but you're no longer bogged down with having a single instance. Naturally you would move this factory method into its own class, as the Store class shouldn't concern itself with its own creation (and coincidentally you'd follow the Single Responsibility Principle as an effect of moving the factory method out).
From here you have many choices, but I'll leave that as an exercise for you. It is easy to over-engineer (or overheat) on patterns here. My tip is to apply a pattern only when there is a need for it.
Okay, first of all, the "singletons are always evil" notion is wrong. You use a Singleton whenever you have a resource which won't or can't ever be duplicated. No problem.
That said, in your example, there's an obvious degree of freedom in the application: someone could come along and say "but I want two stores."
There are several solutions. The one that occurs to me first is to build a factory class; when you ask for a Store, it gives you one named with some universal name (e.g., a URI). Inside that store, you need to be sure that multiple copies don't step on one another, via critical regions or some method of ensuring atomicity of transactions.
Miško Hevery has a nice article series on testability, covering among other things the singleton, in which he doesn't only talk about the problems, but also about how you might solve them (see 'Fixing the flaw').
I like to encourage the use of singletons where necessary while discouraging the use of the Singleton pattern. Note the difference in the case of the word. The singleton (lower case) is used wherever you only need one instance of something. It is created at the start of your program and is passed to the constructor of the classes that need it.
class Log
{
public:
    void logmessage(...)
    {
        // do some stuff
    }
};

class Database
{
    Log &_log;
public:
    Database(Log &log) : _log(log) {}

    void Open(...)
    {
        _log.logmessage(whatever);
    }
};

int main()
{
    Log log;
    Database database(log); // the single log instance is passed in
    // do some more stuff
}
Using a singleton gives all of the capabilities of the Singleton anti-pattern, but it makes your code more easily extensible, and it makes it testable (in the sense of the word defined in the Google testing blog). For example, we may decide that we also need the ability to log to a web service at times; using the singleton, we can easily do that without significant changes to the code.
By comparison, the Singleton pattern is another name for a global variable. It should never be used in production code.

Why do most system architects insist on first programming to an interface?

Almost every Java book I read talks about using the interface as a way to share state and behaviour between objects that when first "constructed" did not seem to share a relationship.
However, whenever I see architects design an application, the first thing they do is start programming to an interface. How come? How do you know all the relationships between objects that will occur within that interface? If you already know those relationships, then why not just extend an abstract class?
Programming to an interface means respecting the "contract" created by using that interface. And so if your IPoweredByMotor interface has a start() method, future classes that implement the interface, be they MotorizedWheelChair, Automobile, or SmoothieMaker, in implementing the methods of that interface, add flexibility to your system, because one piece of code can start the motor of many different types of things, because all that one piece of code needs to know is that they respond to start(). It doesn't matter how they start, just that they must start.
Great question. I'll refer you to Josh Bloch in Effective Java, who writes (item 16) why to prefer the use of interfaces over abstract classes. By the way, if you haven't got this book, I highly recommend it! Here is a summary of what he says:
Existing classes can be easily retrofitted to implement a new interface. All you need to do is implement the interface and add the required methods. Existing classes cannot be retrofitted easily to extend a new abstract class.
Interfaces are ideal for defining mix-ins. A mix-in interface allows classes to declare additional, optional behavior (for example, Comparable). It allows the optional functionality to be mixed in with the primary functionality. Abstract classes cannot define mix-ins -- a class cannot extend more than one parent.
Interfaces allow for non-hierarchical frameworks. If you have a class that has the functionality of many interfaces, it can implement them all. Without interfaces, you would have to create a bloated class hierarchy with a class for every combination of attributes, resulting in combinatorial explosion.
Interfaces enable safe functionality enhancements. You can create wrapper classes using the Decorator pattern, a robust and flexible design. A wrapper class implements and contains the same interface, forwarding some functionality to existing methods, while adding specialized behavior to other methods (a minimal wrapper sketch follows this list). You can't do this with abstract classes - you must use inheritance instead, which is more fragile.
What about the advantage of abstract classes providing basic implementation? You can provide an abstract skeletal implementation class with each interface. This combines the virtues of both interfaces and abstract classes. Skeletal implementations provide implementation assistance without imposing the severe constraints that abstract classes force when they serve as type definitions. For example, the Collections Framework defines the type using interfaces, and provides a skeletal implementation for each one.
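Here is the wrapper sketch promised in point 4 above, a minimal Java Decorator (all names hypothetical):
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("email: " + message); }
}

// The wrapper implements and contains the same interface, forwarding the call
// while adding specialized behavior around it.
class LoggingNotifier implements Notifier {
    private final Notifier inner;

    LoggingNotifier(Notifier inner) { this.inner = inner; }

    public void send(String message) {
        System.out.println("log: about to send " + message); // added behavior
        inner.send(message);                                 // forwarded call
    }
}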
Programming to interfaces provides several benefits:
Required for GoF type patterns, such as the visitor pattern
Allows for alternate implementations. For example, multiple data access object implementations may exist for a single interface that abstracts the database engine in use (AccountDaoMySQL and AccountDaoOracle may both implement AccountDao); a rough sketch follows this list.
A Class may implement multiple interfaces. Java does not allow multiple inheritance of concrete classes.
Abstracts implementation details. Interfaces may include only public API methods, hiding implementation details. Benefits include a cleanly documented public API and well documented contracts.
Used heavily by modern dependency injection frameworks, such as http://www.springframework.org/.
In Java, interfaces can be used to create dynamic proxies - http://java.sun.com/j2se/1.5.0/docs/api/java/lang/reflect/Proxy.html. This can be used very effectively with frameworks such as Spring to perform Aspect Oriented Programming. Aspects can add very useful functionality to Classes without directly adding java code to those classes. Examples of this functionality include logging, auditing, performance monitoring, transaction demarcation, etc. http://static.springframework.org/spring/docs/2.5.x/reference/aop.html.
Mock implementations, unit testing - When dependent classes are implementations of interfaces, mock classes can be written that also implement those interfaces. The mock classes can be used to facilitate unit testing.
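A rough Java sketch of the alternate-implementation and mock points above (all names hypothetical):
interface AccountDao {
    String findOwner(long accountId);
}

class AccountDaoMySQL implements AccountDao {
    public String findOwner(long accountId) {
        // a MySQL-specific query would run here
        return "...";
    }
}

// A hand-written mock implementing the same interface, for unit tests:
class AccountDaoMock implements AccountDao {
    public String findOwner(long accountId) {
        return "test-owner";
    }
}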
I think one of the reasons abstract classes have largely been abandoned by developers might be a misunderstanding.
When the Gang of Four wrote:
Program to an interface not an implementation.
there was no such thing as a Java or C# interface. They were talking about the object-oriented interface concept that every class has. Erich Gamma mentions it in this interview.
I think following all the rules and principles mechanically without thinking leads to a difficult to read, navigate, understand and maintain code-base. Remember: The simplest thing that could possibly work.
How come?
Because that's what all the books say. Like the GoF patterns, it's something many people treat as universally good without ever asking whether it is really the right design.
How do you know all the relationships between objects that will occur within that interface?
You don't, and that's a problem.
If you already know those relationships, then why not just extend an abstract class?
Reasons to not extend an abstract class:
You have radically different implementations and making a decent base class is too hard.
You need to burn your one and only base class for something else.
If neither apply, go ahead and use an abstract class. It will save you a lot of time.
Questions you didn't ask:
What are the down-sides of using an interface?
You cannot change them. Unlike an abstract class, an interface is set in stone. Once you have one in use, extending it will break code, period.
Do I really need either?
Most of the time, no. Think really hard before you build any object hierarchy. A big problem in languages like Java is that it makes it way too easy to create massive, complicated object hierarchies.
Consider the classic example LameDuck inherits from Duck. Sounds easy, doesn't it?
Well, that is until you need to indicate that the duck has been injured and is now lame, or that the lame duck has been healed and can walk again. Java does not allow you to change an object's type, so using sub-types to indicate lameness doesn't actually work.
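A quick sketch of the problem (the Duck and LameDuck names come from the example above; modelling lameness as mutable state is one possible alternative, shown in the invented StatefulDuck):
class Duck {
    void walk() { System.out.println("waddle waddle"); }
}

class LameDuck extends Duck {
    @Override void walk() { throw new UnsupportedOperationException("lame"); }
}

// Java gives you no way to turn an existing Duck into a LameDuck, or back.
// Keeping lameness as state instead of type sidesteps that:
class StatefulDuck {
    private boolean lame;
    void injure() { lame = true; }
    void heal()   { lame = false; }
    void walk() {
        if (lame) throw new IllegalStateException("this duck is lame");
        System.out.println("waddle waddle");
    }
}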
Programming to an interface means respecting the "contract" created by
using that interface
This is the single most misunderstood thing about interfaces.
There is no way to enforce any such contract with interfaces. Interfaces, by definition, cannot specify any behaviour at all. Classes are where behaviour happens.
This mistaken belief is so widespread as to be considered the conventional wisdom by many people. It is, however, wrong.
So this statement in the OP
Almost every Java book I read talks about using the interface as a way to share state and behavior between objects
is just not possible. Interfaces have neither state nor behaviour. They can define properties that implementing classes must provide, but that's as close as they can get. You cannot share behaviour using interfaces.
You can assume that people will implement an interface to provide the sort of behaviour implied by the names of its methods, but that's not anything like the same thing. And it places no restrictions at all on when such methods are called (e.g. that Start should be called before Stop).
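To make the point concrete: the compiler checks only signatures, so an implementation that betrays every expectation the names suggest still compiles (the Motor names below are invented for illustration):
interface Motor {
    void start();
    void stop();
}

// Satisfies the interface perfectly; honours none of the implied "contract".
class PerverseMotor implements Motor {
    public void start() { System.out.println("shutting down"); }
    public void stop()  { /* does nothing at all */ }
}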
This statement
Required for GoF type patterns, such as the visitor pattern
is also incorrect. The GoF book uses exactly zero interfaces, as they were not a feature of the languages used at the time. None of the patterns require interfaces, although some can use them. IMO, the Observer pattern is one in which interfaces can play a more elegant role (although the pattern is normally implemented using events nowadays). In the Visitor pattern it is almost always the case that a base Visitor class implementing default behaviour for each type of visited node is required, IME.
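As a sketch of that last point, here is a base Visitor class providing default behaviour for each node type (all names below are invented for illustration):
interface Node { void accept(NodeVisitor v); }
class TextNode implements Node { public void accept(NodeVisitor v) { v.visit(this); } }
class ImageNode implements Node { public void accept(NodeVisitor v) { v.visit(this); } }

// Base visitor with do-nothing defaults: concrete visitors override
// only the node types they actually care about.
abstract class NodeVisitor {
    void visit(TextNode n)  { }
    void visit(ImageNode n) { }
}

class TextCounter extends NodeVisitor {
    int count;
    @Override void visit(TextNode n) { count++; }
}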
Personally, I think the answer to the question is threefold:
Interfaces are seen by many as a silver bullet (these people usually labour under the "contract" misapprehension, or think that interfaces magically decouple their code)
Java people are very focussed on using frameworks, many of which (rightly) require classes to implement their interfaces
Interfaces were the best way to do some things before generics and annotations (attributes in C#) were introduced.
Interfaces are a very useful language feature, but are much abused. Symptoms include:
An interface is only implemented by one class
A class implements multiple interfaces. Often touted as an advantage of interfaces, but usually it means that the class in question is violating the principle of separation of concerns.
There is an inheritance hierarchy of interfaces (often mirrored by a hierarchy of classes). This is the situation you're trying to avoid by using interfaces in the first place. Too much inheritance is a bad thing, both for classes and interfaces.
All these things are code smells, IMO.
It's one way to promote loose coupling.
With low coupling, a change in one module will not require a change in the implementation of another module.
A good use of this concept is the Abstract Factory pattern. In the Wikipedia example, the GUIFactory interface produces Button interfaces. The concrete factory may be WinFactory (producing WinButton) or OSXFactory (producing OSXButton). Imagine writing a GUI application where you had to hunt down every use of the OldButton class and change it to WinButton, and then next year add an OSXButton version.
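A condensed sketch of that Wikipedia example (the Client.render method is invented for illustration):
interface Button { void paint(); }
class WinButton implements Button { public void paint() { System.out.println("WinButton"); } }
class OSXButton implements Button { public void paint() { System.out.println("OSXButton"); } }

interface GUIFactory { Button createButton(); }
class WinFactory implements GUIFactory { public Button createButton() { return new WinButton(); } }
class OSXFactory implements GUIFactory { public Button createButton() { return new OSXButton(); } }

class Client {
    // Never names a concrete button, so adding OSXButton later touches
    // only the factory wiring, not every call site.
    static void render(GUIFactory factory) {
        factory.createButton().paint();
    }
}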
In my opinion, you see this so often because it is a very good practice that is often applied in the wrong situations.
There are many advantages to interfaces relative to abstract classes:
You can switch implementations without rebuilding code that depends on the interface. This is useful for: proxy classes, dependency injection, AOP, etc.
You can separate the API from the implementation in your code. This can be nice because it makes it obvious when you're changing code that will affect other modules.
It allows developers writing code that is dependent on your code to easily mock your API for testing purposes.
You gain the most advantage from interfaces when dealing with modules of code. However, there is no easy rule to determine where module boundaries should be. So this best practice is easy to over-use, especially when first designing some software.
I would assume (with #eed3s9n) that it's to promote loose coupling. Also, without interfaces unit testing becomes much more difficult, as you can't mock your objects.
Why extends is evil. This article is pretty much a direct answer to the question asked. I can think of almost no case where you would actually need an abstract class, and plenty of situations where it is a bad idea. This does not mean that implementations using abstract classes are bad, but you will have to take care so you do not make the interface contract dependent on artifacts of some specific implementation (case in point: the Stack class in Java).
One more thing: it is neither necessary nor good practice to have interfaces everywhere. Typically, you should identify when you need an interface and when you do not. In an ideal world, the second case should be implemented as a final class most of the time.
There are some excellent answers here, but if you're looking for a concrete reason, look no further than Unit Testing.
Consider that you want to test a method in the business logic that retrieves the current tax rate for the region where a transaction occurs. To do this, the business logic class has to talk to the database via a repository:
interface IRepository<T> { T Get(string key); }

class TaxRateRepository : IRepository<TaxRate> {
    protected internal TaxRateRepository() {}
    public TaxRate Get(string key) {
        TaxRate obj = null; // ... retrieve the TaxRate from the database by key ...
        return obj; }
}
Throughout the code, use the type IRepository&lt;TaxRate&gt; instead of TaxRateRepository.
The repository has a non-public constructor to encourage users (developers) to use the factory to instantiate the repository:
public static class RepositoryFactory {
    static RepositoryFactory() {
        TaxRateRepository = new TaxRateRepository(); }
    public static IRepository<TaxRate> TaxRateRepository { get; private set; }
    public static void SetTaxRateRepository(IRepository<TaxRate> rep) {
        TaxRateRepository = rep; }
}
The factory is the only place where the TaxRateRepository class is referenced directly.
So you need some supporting classes for this example:
class TaxRate {
    public string Region { get; set; }
    public decimal Rate { get; set; }
}
static class Business {
    public static decimal GetRate(string region) {
        var taxRate = RepositoryFactory.TaxRateRepository.Get(region);
        return taxRate.Rate; }
}
And there is another implementation of IRepository - the mock:
class MockTaxRateRepository : IRepository<TaxRate> {
    public TaxRate ReturnValue { get; set; }
    public bool GetWasCalled { get; private set; }
    public string KeyParamValue { get; private set; }
    public TaxRate Get(string key) {
        GetWasCalled = true;
        KeyParamValue = key;
        return ReturnValue; }
}
Because the live code (the Business class) uses the factory to get the repository, in the unit test you plug the MockTaxRateRepository in as the TaxRateRepository. Once the substitution is made, you can hard-code the return value and make the database unnecessary.
[TestFixture] // NUnit-style attributes assumed here and below
class MyUnitTestFixture {
    readonly MockTaxRateRepository rep = new MockTaxRateRepository();

    [TestFixtureSetUp]
    public void ConfigureFixture() {
        RepositoryFactory.SetTaxRateRepository(rep); }

    [Test]
    public void Test() {
        var region = "NY.NY.Manhattan";
        var rate = 8.5m;
        rep.ReturnValue = new TaxRate { Rate = rate };
        var r = Business.GetRate(region);
        Assert.IsTrue(rep.GetWasCalled);
        Assert.AreEqual(region, rep.KeyParamValue);
        Assert.AreEqual(rate, r); }
}
Remember, you want to test the business logic method only, not the repository, database, connection string, etc... There are different tests for each of those. By doing it this way, you can completely isolate the code that you are testing.
A side benefit is that you can also run the unit test without a database connection, which makes it faster, more portable (think multi-developer team in remote locations).
Another side benefit is that you can use the Test-Driven Development (TDD) process for the implementation phase of development. I don't strictly use TDD but a mix of TDD and old-school coding.
In one sense, I think your question boils down to simply, "why use interfaces and not abstract classes?" Technically, you can achieve loose coupling with both -- the underlying implementation is still not exposed to the calling code, and you can use Abstract Factory pattern to return an underlying implementation (interface implementation vs. abstract class extension) to increase the flexibility of your design. In fact, you could argue that abstract classes give you slightly more, since they allow you to both require implementations to satisfy your code ("you MUST implement start()") and provide default implementations ("I have a standard paint() you can override if you want to") -- with interfaces, implementations must be provided, which over time can lead to brittle inheritance problems through interface changes.
Fundamentally, though, I use interfaces mainly due to Java's single inheritance restriction. If my implementation MUST inherit from an abstract class to be used by calling code, that means I lose the flexibility to inherit from something else even though that may make more sense (e.g. for code reuse or object hierarchy).
One reason is that interfaces allow for growth and extensibility. Say, for example, that you have a method that takes an object as a parameter,
public void drink(Coffee someDrink)
{
}
Now let's say you want to use the exact same method, but pass a HotTea object. Well, you can't. You've hard-coded the method above to accept only Coffee objects. Maybe that's good, maybe that's bad. The downside is that it locks you into one type of object when you'd like to pass all sorts of related objects.
By using an interface, say IHotDrink,
interface IHotDrink { }
and rewriting your above method to use the interface instead of the object,
public void drink(IHotDrink someDrink)
{
}
Now you can pass all objects that implement the IHotDrink interface. Sure, you can write the exact same method that does the exact same thing with a different object parameter, but why? You're suddenly maintaining bloated code.
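To round the example out, a quick sketch of the call sites (the Coffee and HotTea classes are invented for illustration, and drink is assumed to be in scope):
class Coffee implements IHotDrink { }
class HotTea implements IHotDrink { }

// Both calls now flow through the one drink method:
drink(new Coffee());
drink(new HotTea());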
It's all about designing before coding.
If you don't know all the relationships between two objects after you have specified the interface, then you have done a poor job of defining the interface -- which is relatively easy to fix.
If you had dived straight into coding and realised halfway through that you were missing something, it would be a lot harder to fix.
You could see this from a Perl/Python/Ruby perspective: when you pass an object as a parameter to a method, you don't pass its type; you just know that it must respond to some methods.
I think considering Java interfaces as an analogy to that explains it best. You don't really pass a type; you just pass something that responds to a method (a trait, if you will).
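A tiny Java sketch of that analogy (the Quacks interface and the poke method are invented for illustration):
interface Quacks { void quack(); }

class Mallard implements Quacks { public void quack() { System.out.println("quack"); } }
class RubberDuck implements Quacks { public void quack() { System.out.println("squeak"); } }

class Pond {
    // We never ask what the object is, only what it responds to.
    static void poke(Quacks q) { q.quack(); }
}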
I think the main reason to use interfaces in Java is the limitation to single inheritance. In many cases this leads to unnecessary complication and code duplication. Take a look at traits in Scala: http://www.scala-lang.org/node/126 Traits are a special kind of abstract class, but a class can mix in many of them.