Related
Check: http://checkstyle.sourceforge.net/config_design.html#DesignForExtension
False positives: Checkstyle "Method Not Designed For Extension" error being incorrectly issued?
checkstyle Method is not designed for extension - needs to be abstract, final or empty
https://sourceforge.net/p/checkstyle/bugs/688/
It looks like everyone switches this Check off in their configurations.
Could anybody show a real code example where this Check is useful?
Is it useful for developers in practice, not just in theory?
The documentation you linked to already explains the rationale behind the check. This can be useful in some situations. In practice, I've never turned it on, mostly because it is too cumbersome to administer, and you certainly don't want it for all your classes.
But you asked for a code example. Consider these two classes (YourSuperClass is part of the API you provide, and TheirSubClass is provided by the users of your API):
public abstract class YourSuperClass
{
    public final void execute() {
        doStuff();
        hook();
        doStuff();
    }

    private void doStuff() {
        calculateStuff();
        // do lots of stuff
    }

    protected abstract void hook();

    protected final void calculateStuff() {
        // perform your calculation of stuff
    }
}
public class TheirSubClass extends YourSuperClass
{
    protected void hook() {
        // do whatever the hook needs to do as part of execute(), e.g.:
        calculateStuff();
    }

    public static void main(final String[] args) {
        TheirSubClass cmd = new TheirSubClass();
        cmd.execute();
    }
}
In this example, TheirSubClass cannot change the way execute works (do stuff, call hook, do stuff again). It also cannot change the calculateStuff() method. Thus, YourSuperClass is "designed for extension", because TheirSubClass cannot break the way it operates (like it could, if, say, execute() wasn't final). The designer of YourSuperClass remains in control, providing only specific hooks for subclasses to use. If the hook is abstract, TheirSubClass is forced to provide an implementation. If it is simply an empty method, TheirSubClass can choose to not use the hook.
Checkstyle's Check is a real-life example of a class designed for extension. Ironically, it would still fail the check, because getAcceptableTokens() is public but not final.
I have this rule enabled for all of my Spring-based projects. It's a royal PITA at first, because it does represent a lot of code cleanup. But I've learned to love the principle of this rule. I find it useful for making everyone consistent and clear in their thinking about which classes should be designed for extension and which shouldn't. In the code-bases I've worked with, there are, in reality, only a handful of classes that should be open to extension. Most are just asking for bugs by allowing extension. Maybe not today, but down the road, when the current engineer is long gone or that section of code is forgotten about and a quick change needs to come in to fix "X customer is complaining about Y and they just need Z".
Consider:
It's too easy to subclass anything in Java willy-nilly and therefore behavior can change over time through different extended classes. New programmers may get OOP happy and everything then extends something generic just because they can.
Overly-extended class-depth is difficult to reason about, and while you might be an amazing developer who'd never do something as atrocious as that... I've worked in code-bases where that was the norm, and those deeply nested HTML Form generators were awful to work with and to reason about. So, this rule would, in theory, make the original engineer think twice about writing something so awful for their peers.
By enforcing the final rule, or by documenting a class as designed for extension, the possible bugs that could occur through inadvertent extension of a class may be avoided. I don't personally like the idea that some subclass could alter the behavior of my application, because that could cause unintended side-effects in weird ways (especially in large and partially tested applications). And it's in these extended classes, where complex behavior is hidden, that the hard-to-solve bugs live.
The DesignForExtension rule forces conversations amongst developers whenever something might be extended as an initial choice to get a quick-fix out the door when really what should happen is that the developers need to meet up and discuss what's changing, and discuss why extension might be appropriate. Many times, modifications to the main class are more appropriate and additional tests would be written given the new circumstances. This promotion of conversations is healthy for long-term code quality and intra-organizational knowledge sharing.
That being said, I do add to my checkstyle-suppressions.xml in my code for Spring-specific classes that cannot be declared final. Because, frameworks.
<suppress checks="DesignForExtension" files=".*Configuration\.java"/>
Reflection can be used to access just about any object's internals, which is kind of not cool. I tried to list the legitimate uses of it, but in the end only serialization came to my mind.
What other (legitimate) uses can you find to using reflection in order to access otherwise inaccessible members?
EDIT I mean stuff you can't do if you don't have reflection.
Unit testing comes to mind: when you want to test a private method without exposing it (or, in .NET, without using the InternalsVisibleToAttribute).
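For instance, a minimal Java sketch (the class and method names here are invented for illustration):

import java.lang.reflect.Method;

class Calculator {
    private int addInternal(int a, int b) { return a + b; } // not part of the public API
}

public class PrivateMethodTest {
    public static void main(String[] args) throws Exception {
        Method m = Calculator.class.getDeclaredMethod("addInternal", int.class, int.class);
        m.setAccessible(true); // bypass the private modifier
        int result = (int) m.invoke(new Calculator(), 2, 3);
        System.out.println(result); // prints 5
    }
}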
One is persisting objects to a DB. Hibernate, for example, does use reflection: the object's ID might be a private member with a private setter/getter, because it is of no use in the Java domain. However, it is needed in the DB, so Hibernate sets/gets it in the background and uses it to determine object identity.
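As a rough sketch of what such a framework does under the hood (the entity class here is hypothetical):

import java.lang.reflect.Field;

class Order {
    private Long id; // no public accessor; only the persistence layer cares
}

public class FieldAccessDemo {
    public static void main(String[] args) throws Exception {
        Order order = new Order();
        Field idField = Order.class.getDeclaredField("id");
        idField.setAccessible(true);            // bypass the private modifier
        idField.set(order, 42L);                // roughly what an ORM does when loading a row
        System.out.println(idField.get(order)); // prints 42
    }
}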
Another is writing the initial unit tests for badly designed legacy code, where you can't just start refactoring since you have no unit tests to safeguard yourself. In such cases, if other means (described in Working Effectively with Legacy Code) can't help you, reflection may be the last resort to start working towards maintainable code.
I guess one big area where it is used is the context of managed environments, that is, when a framework or environment is supposed to provide some facilities and might require access to private data.
Public/private access modifiers are a concern at the application design level, and a field shouldn't be made public just for the sake of making a framework happy. Examples include frameworks like Hibernate, which manages object persistence; dependency injection frameworks, which can inject data into private fields; or, more generally, application servers, which work regardless of access modifiers.
Even more generally, this falls under the umbrella of meta-programming: some code inspects and modifies other objects dynamically, without knowing their structure up front.
Profiling or monitoring can be useful reasons to use reflection to private members and functions. For example, if you want to know how often a database connection is actually made, you can monitor access to a private SqlConnection.
This can better be done with AOP, but, if trying to get AOP to be acceptable is too difficult politically, then reflection can help out while you are in development.
You can also use reflection to invoke methods based on an input parameter (like an enum), essentially replacing a switch statement.
for example:
using System.Reflection;

public class Test
{
    public enum Status
    {
        Add,
        Delete,
        Update
    }

    public void Save(Status status)
    {
        // use reflection to obtain the corresponding private method
        MethodInfo method = this.GetType().GetMethod(status.ToString(),
            BindingFlags.NonPublic | BindingFlags.Instance);
        // invoke it on this instance (the methods take no arguments)
        method.Invoke(this, null);
    }

    // you don't want to expose these methods
    private void Add()
    {
    }

    private void Delete()
    {
    }

    private void Update()
    {
    }
}
Debugging and profiling come to mind immediately -- they need to be able to get a more complete view of the internals of objects.
Garbage collection is another -- yes, the JVM has a garbage collector built in, but someone has to write that code, and while most JVMs aren't written in Java, there's no reason they couldn't be.
I was confused when I first started to see anti-singleton commentary. I have used the singleton pattern in some recent projects, and it was working out beautifully. So much so, in fact, that I have used it many, many times.
Now, after running into some problems, reading this SO question, and especially this blog post, I understand the evil that I have brought into the world.
So: How do I go about removing singletons from existing code?
For example:
In a retail store management program, I used the MVC pattern. My Model objects describe the store, the user interface is the View, and I have a set of Controllers that act as liaison between the two. Great. Except that I made the Store into a singleton (since the application only ever manages one store at a time), and I also made most of my Controller classes into singletons (one mainWindow, one menuBar, one productEditor...). Now, most of my Controller classes access the other singletons like this:
Store &managedStore = Store::getInstance();
managedStore.doSomething();
managedStore.doSomethingElse();
//etc.
Should I instead:
Create one instance of each object and pass references to every object that needs access to them?
Use globals?
Something else?
Globals would still be bad, but at least they wouldn't be pretending.
I see #1 quickly leading to horribly inflated constructor calls:
someVar = SomeControllerClass(managedStore, menuBar, editor, sasquatch, ...)
Has anyone else been through this yet? What is the OO way to give many individual classes access to a common variable without it being a global or a singleton?
Dependency Injection is your friend.
Take a look at these posts on the excellent Google Testing Blog:
Singletons are pathologic liars (but you probably already understand this if you are asking this question)
A talk on Dependency Injection
Guide to Writing Testable Code
Hopefully someone has made a DI framework/container for the C++ world? Looks like Google has released a C++ Testing Framework and a C++ Mocking Framework, which might help you out.
It's not the Singleton-ness that is the problem. It's fine to have an object that there will only ever be one instance of. The problem is the global access. Your classes that use Store should receive a Store instance in the constructor (or have a Store property / data member that can be set) and they can all receive the same instance. Store can even keep logic within it to ensure that only one instance is ever created.
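A minimal sketch of that idea (class names invented for illustration):

class Store {
    // logic can still ensure only one instance is ever created
    void doSomething() { /* ... */ }
}

class ProductEditor {
    private final Store store;

    ProductEditor(Store store) { this.store = store; } // receives the instance

    void edit() { store.doSomething(); }
}

class App {
    public static void main(String[] args) {
        Store theStore = new Store();               // created once, at startup
        ProductEditor editor = new ProductEditor(theStore);
        editor.edit();                              // no global access anywhere
    }
}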
My way of avoiding singletons derives from the idea that "application global" doesn't mean "VM global" (i.e. static). Therefore I introduce an ApplicationContext class which holds much of the formerly static singleton information that should be application-global, like the configuration store. This context is passed into all structures. If you use an IoC container or service manager, you can use it to get access to the context.
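A hedged sketch of such a context, with invented names:

import java.util.Properties;

class ApplicationContext {
    private final Properties config;

    ApplicationContext(Properties config) { this.config = config; }

    String getSetting(String key) { return config.getProperty(key); }
}

class ReportService {
    private final ApplicationContext context; // application-global, not VM-global

    ReportService(ApplicationContext context) { this.context = context; }
}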
There's nothing wrong with using a global or a singleton in your program. Don't let anyone get dogmatic on you about that kind of crap. Rules and patterns are nice rules of thumb. But in the end it's your project and you should make your own judgments about how to handle situations involving global data.
Unrestrained use of globals is bad news. But as long as you are diligent, they aren't going to kill your project. Some objects in a system deserve to be singleton. The standard input and outputs. Your log system. In a game, your graphics, sound, and input subsystems, as well as the database of game entities. In a GUI, your window and major panel components. Your configuration data, your plugin manager, your web server data. All these things are more or less inherently global to your application. I think your Store class would pass for it as well.
It's clear what the cost of using globals is. Any part of your application could be modifying it. Tracking down bugs is hard when every line of code is a suspect in the investigation.
But what about the cost of NOT using globals? Like everything else in programming, it's a trade off. If you avoid using globals, you end up having to pass those stateful objects as function parameters. Alternatively, you can pass them to a constructor and save them as a member variable. When you have multiple such objects, the situation worsens. You are now threading your state. In some cases, this isn't a problem. If you know only two or three functions need to handle that stateful Store object, it's the better solution.
But in practice, that's not always the case. If every part of your app touches your Store, you will be threading it to a dozen functions. On top of that, some of those functions may have complicated business logic. When you break that business logic up with helper functions, you have to -- thread your state some more! Say for instance you realize that a deeply nested function needs some configuration data from the Store object. Suddenly, you have to edit 3 or 4 function declarations to include that store parameter. Then you have to go back and add the store as an actual parameter to everywhere one of those functions is called. It may be that the only use a function has for a Store is to pass it to some subfunction that needs it.
Patterns are just rules of thumb. Do you always use your turn signals before making a lane change in your car? If you're the average person, you'll usually follow the rule, but if you are driving at 4am on an empty highway, who gives a crap, right? Sometimes it'll bite you in the butt, but that's a managed risk.
Regarding your inflated constructor call problem, you could introduce parameter classes or factory methods to alleviate it.
A parameter class moves some of the parameter data into its own class, e.g. like this:
var parameterClass1 = new MenuParameter(menuBar, editor);
var parameterClass2 = new StuffParameters(sasquatch, ...);
var ctrl = new MyControllerClass(managedStore, parameterClass1, parameterClass2);
It sort of just moves the problem elsewhere though. You might want to housekeep your constructor instead. Only keep parameters that are important when constructing/initiating the class in question and do the rest with getter/setter methods (or properties if you're doing .NET).
A factory method is a method that creates all the instances you need of a class, and it has the benefit of encapsulating the creation of said objects. Factory methods are also quite easy to refactor to from a Singleton, because they're similar to the getInstance methods that you see in Singleton implementations. Say we have the following non-threadsafe simple singleton example:
// The Rather Unfortunate Singleton Class
public class SingletonStore {
    private static SingletonStore _singleton = new SingletonStore();

    private SingletonStore() {
        // Do some privatised constructing in here...
    }

    public static SingletonStore getInstance() {
        return _singleton;
    }

    // Some methods and stuff to be down here
}
// Usage:
// var singleInstanceOfStore = SingletonStore.getInstance();
It is easy to refactor this towards a factory method. The solution is to remove the static reference:
public class StoreWithFactory {

    public StoreWithFactory() {
        // Whether the constructor is private or public doesn't matter,
        // unless you do TDD, in which case you need a public
        // constructor so you can create the object and test it.
    }

    // The method returning an instance of Singleton is now a
    // factory method.
    public static StoreWithFactory getInstance() {
        return new StoreWithFactory();
    }
}
// Usage:
// var myStore = StoreWithFactory.getInstance();
Usage is still the same, but you're not bogged down with having a single instance. Naturally you would move this factory method into its own class, as the Store class shouldn't concern itself with its own creation (and, coincidentally, you'd follow the Single Responsibility Principle by moving the factory method out).
From here you have many choices, but I'll leave that as an exercise for yourself. It is easy to over-engineer (or overheat) on patterns here. My tip is to only apply a pattern when there is a need for it.
Okay, first of all, the "singletons are always evil" notion is wrong. You use a Singleton whenever you have a resource which won't or can't ever be duplicated. No problem.
That said, in your example, there's an obvious degree of freedom in the application: someone could come along and say "but I want two stores."
There are several solutions. The one that occurs first of all is to build a factory class; when you ask for a Store, it gives you one named with some universal name (eg, a URI.) Inside that store, you need to be sure that multiple copies don't step on one another, via critical regions or some method of ensuring atomicity of transactions.
Miško Hevery has a nice article series on testability, among other things on the singleton, where he not only talks about the problems but also about how you might solve them (see 'Fixing the flaw').
I like to encourage the use of singletons where necessary while discouraging the use of the Singleton pattern. Note the difference in the case of the word. The singleton (lower case) is used wherever you only need one instance of something. It is created at the start of your program and is passed to the constructor of the classes that need it.
class Log
{
public:
    void logmessage(...)
    {
        // do some stuff
    }
};

class Database
{
    Log &_log;

public:
    Database(Log &log) : _log(log) {}

    void Open(...)
    {
        _log.logmessage(/* whatever */);
    }
};

int main()
{
    Log log;
    Database database(log); // the one instance is passed to whoever needs it
    // do some more stuff
}
Using a singleton gives you all of the capabilities of the Singleton anti-pattern, but it makes your code more easily extensible, and it makes it testable (in the sense of the word defined in the Google testing blog). For example, if we later decide that we sometimes need to log to a web service as well, the singleton lets us do that without significant changes to the code.
By comparison, the Singleton pattern is just another name for a global variable. It should never be used in production code.
Almost every Java book I read talks about using the interface as a way to share state and behaviour between objects that when first "constructed" did not seem to share a relationship.
However, whenever I see architects design an application, the first thing they do is start programming to an interface. How come? How do you know all the relationships between objects that will occur within that interface? If you already know those relationships, then why not just extend an abstract class?
Programming to an interface means respecting the "contract" created by using that interface. So if your IPoweredByMotor interface has a start() method, future classes that implement the interface (be they MotorizedWheelChair, Automobile, or SmoothieMaker) add flexibility to your system: one piece of code can start the motor of many different types of things, because all it needs to know is that they respond to start(). It doesn't matter how they start, just that they must start.
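A quick sketch of that idea in Java (only IPoweredByMotor, start(), and the implementing class names come from the paragraph above; the bodies are invented):

interface IPoweredByMotor {
    void start();
}

class MotorizedWheelChair implements IPoweredByMotor {
    public void start() { System.out.println("wheelchair motor on"); }
}

class SmoothieMaker implements IPoweredByMotor {
    public void start() { System.out.println("blades spinning"); }
}

class MotorStarter {
    // one piece of code can start the motor of many different types of things
    static void startAll(java.util.List<IPoweredByMotor> things) {
        for (IPoweredByMotor thing : things) {
            thing.start(); // how each one starts doesn't matter here
        }
    }
}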
Great question. I'll refer you to Josh Bloch in Effective Java, who writes (item 16) why to prefer the use of interfaces over abstract classes. By the way, if you haven't got this book, I highly recommend it! Here is a summary of what he says:
Existing classes can be easily retrofitted to implement a new interface. All you need to do is implement the interface and add the required methods. Existing classes cannot be retrofitted easily to extend a new abstract class.
Interfaces are ideal for defining mix-ins. A mix-in interface allows classes to declare additional, optional behavior (for example, Comparable). It allows the optional functionality to be mixed in with the primary functionality. Abstract classes cannot define mix-ins -- a class cannot extend more than one parent.
Interfaces allow for non-hierarchical frameworks. If you have a class that has the functionality of many interfaces, it can implement them all. Without interfaces, you would have to create a bloated class hierarchy with a class for every combination of attributes, resulting in combinatorial explosion.
Interfaces enable safe functionality enhancements. You can create wrapper classes using the Decorator pattern, a robust and flexible design. A wrapper class implements and contains the same interface, forwarding some functionality to existing methods, while adding specialized behavior to other methods. You can't do this with abstract classes - you must use inheritance instead, which is more fragile.
What about the advantage of abstract classes providing basic implementation? You can provide an abstract skeletal implementation class with each interface. This combines the virtues of both interfaces and abstract classes. Skeletal implementations provide implementation assistance without imposing the severe constraints that abstract classes force when they serve as type definitions. For example, the Collections Framework defines the type using interfaces, and provides a skeletal implementation for each one.
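To make that last point concrete, here is a hedged sketch of the interface-plus-skeletal-implementation idiom (the names are invented; the Collections Framework does the same with, e.g., List and AbstractList):

interface Playable {
    void start();
    void stop();
    boolean isRunning();
}

// Skeletal implementation: extend it when convenient, or implement
// Playable directly when you need a different superclass.
abstract class AbstractPlayable implements Playable {
    private boolean running;

    public void start() { running = true; onStart(); }
    public void stop()  { running = false; }
    public boolean isRunning() { return running; }

    protected abstract void onStart(); // the only thing subclasses must supply
}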
Programming to interfaces provides several benefits:
Required for GoF type patterns, such as the visitor pattern
Allows for alternate implementations. For example, multiple data access object implementations may exist for a single interface that abstracts the database engine in use (AccountDaoMySQL and AccountDaoOracle may both implement AccountDao)
A Class may implement multiple interfaces. Java does not allow multiple inheritance of concrete classes.
Abstracts implementation details. Interfaces may include only public API methods, hiding implementation details. Benefits include a cleanly documented public API and well documented contracts.
Used heavily by modern dependency injection frameworks, such as http://www.springframework.org/.
In Java, interfaces can be used to create dynamic proxies - http://java.sun.com/j2se/1.5.0/docs/api/java/lang/reflect/Proxy.html. This can be used very effectively with frameworks such as Spring to perform Aspect-Oriented Programming. Aspects can add very useful functionality to classes without directly adding Java code to those classes. Examples of this functionality include logging, auditing, performance monitoring, transaction demarcation, etc. (see the sketch after this list). http://static.springframework.org/spring/docs/2.5.x/reference/aop.html
Mock implementations, unit testing - When dependent classes are implementations of interfaces, mock classes can be written that also implement those interfaces. The mock classes can be used to facilitate unit testing.
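As promised above, a minimal sketch of a dynamic proxy (the AccountDao name is borrowed from the earlier bullet; the logging behaviour is invented for illustration):

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface AccountDao {
    void save(String account);
}

public class ProxyDemo {
    public static void main(String[] args) {
        AccountDao real = account -> System.out.println("saving " + account);

        // roughly what an AOP framework generates around your bean
        AccountDao proxy = (AccountDao) Proxy.newProxyInstance(
                AccountDao.class.getClassLoader(),
                new Class<?>[] { AccountDao.class },
                (Object p, Method method, Object[] methodArgs) -> {
                    System.out.println("before " + method.getName());
                    Object result = method.invoke(real, methodArgs);
                    System.out.println("after " + method.getName());
                    return result;
                });

        proxy.save("acct-1"); // logging happens without touching the real class
    }
}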
I think one of the reasons abstract classes have largely been abandoned by developers might be a misunderstanding.
When the Gang of Four wrote:
Program to an interface not an implementation.
there was no such thing as a Java or C# interface. They were talking about the object-oriented interface concept, which every class has. Erich Gamma mentions it in this interview.
I think following all the rules and principles mechanically, without thinking, leads to a code-base that is difficult to read, navigate, understand and maintain. Remember: the simplest thing that could possibly work.
How come?
Because that's what all the books say. Like the GoF patterns, many people see it as universally good and don't ever think about whether or not it is really the right design.
How do you know all the relationships between objects that will occur within that interface?
You don't, and that's a problem.
If you already know those relationships, then why not just extend an abstract class?
Reasons to not extend an abstract class:
You have radically different implementations and making a decent base class is too hard.
You need to burn your one and only base class for something else.
If neither apply, go ahead and use an abstract class. It will save you a lot of time.
Questions you didn't ask:
What are the down-sides of using an interface?
You cannot change them. Unlike an abstract class, an interface is set in stone. Once you have one in use, extending it will break code, period.
Do I really need either?
Most of the time, no. Think really hard before you build any object hierarchy. A big problem in languages like Java is that it makes it way too easy to create massive, complicated object hierarchies.
Consider the classic example LameDuck inherits from Duck. Sounds easy, doesn't it?
Well, that is until you need to indicate that the duck has been injured and is now lame, or that the lame duck has been healed and can walk again. Java does not allow you to change an object's type, so using sub-types to indicate lameness doesn't actually work.
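A hedged sketch of the alternative this implies: model lameness as mutable state on one class rather than as a subtype (all names invented):

class Duck {
    private boolean lame; // state, not a LameDuck subtype

    void injure() { lame = true; }  // the same object becomes lame...
    void heal()   { lame = false; } // ...and can be healed again

    boolean canWalk() { return !lame; }
}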
Programming to an interface means respecting the "contract" created by using that interface
This is the single most misunderstood thing about interfaces.
There is no way to enforce any such contract with interfaces. Interfaces, by definition, cannot specify any behaviour at all. Classes are where behaviour happens.
This mistaken belief is so widespread as to be considered the conventional wisdom by many people. It is, however, wrong.
So this statement in the OP
Almost every Java book I read talks about using the interface as a way to share state and behavior between objects
is just not possible. Interfaces have neither state nor behaviour. They can define properties that implementing classes must provide, but that's as close as they can get. You cannot share behaviour using interfaces.
You can make an assumption that people will implement an interface to provide the sort of behaviour implied by the name of its methods, but that's not anything like the same thing. And it places no restrictions at all on when such methods are called (eg that Start should be called before Stop).
This statement
Required for GoF type patterns, such as the visitor pattern
is also incorrect. The GoF book uses exactly zero interfaces, as they were not a feature of the languages used at the time. None of the patterns require interfaces, although some can use them. IMO, the Observer pattern is one in which interfaces can play a more elegant role (although the pattern is normally implemented using events nowadays). In the Visitor pattern it is almost always the case that a base Visitor class implementing default behaviour for each type of visited node is required, IME.
Personally, I think the answer to the question is threefold:
Interfaces are seen by many as a silver bullet (these people usually labour under the "contract" misapprehension, or think that interfaces magically decouple their code)
Java people are very focussed on using frameworks, many of which (rightly) require classes to implement their interfaces
Interfaces were the best way to do some things before generics and annotations (attributes in C#) were introduced.
Interfaces are a very useful language feature, but are much abused. Symptoms include:
An interface is only implemented by one class
A class implements multiple interfaces. Often touted as an advantage of interfaces, usually it means that the class in question is violating the principle of separation of concerns.
There is an inheritance hierarchy of interfaces (often mirrored by a hierarchy of classes). This is the situation you're trying to avoid by using interfaces in the first place. Too much inheritance is a bad thing, both for classes and interfaces.
All these things are code smells, IMO.
It's one way to promote loose coupling.
With low coupling, a change in one module will not require a change in the implementation of another module.
A good use of this concept is the Abstract Factory pattern. In the Wikipedia example, the GUIFactory interface produces Button interfaces. The concrete factory may be WinFactory (producing WinButton) or OSXFactory (producing OSXButton). Imagine if you were writing a GUI application and had to go hunting for all instances of the OldButton class and change them to WinButton. Then, next year, you need to add an OSXButton version.
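A minimal Java sketch of that example (the type names come from the paragraph above; the method bodies are invented):

interface Button {
    void paint();
}

class WinButton implements Button {
    public void paint() { System.out.println("Windows button"); }
}

class OSXButton implements Button {
    public void paint() { System.out.println("OSX button"); }
}

interface GUIFactory {
    Button createButton();
}

class WinFactory implements GUIFactory {
    public Button createButton() { return new WinButton(); }
}

class OSXFactory implements GUIFactory {
    public Button createButton() { return new OSXButton(); }
}

class Application {
    public static void main(String[] args) {
        GUIFactory factory = new OSXFactory(); // swap the factory, nothing else changes
        factory.createButton().paint();
    }
}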
In my opinion, you see this so often because it is a very good practice that is often applied in the wrong situations.
There are many advantages to interfaces relative to abstract classes:
You can switch implementations without rebuilding code that depends on the interface. This is useful for proxy classes, dependency injection, AOP, etc.
You can separate the API from the implementation in your code. This can be nice because it makes it obvious when you're changing code that will affect other modules.
It allows developers writing code that is dependent on your code to easily mock your API for testing purposes.
You gain the most advantage from interfaces when dealing with modules of code. However, there is no easy rule to determine where module boundaries should be. So this best practice is easy to over-use, especially when first designing some software.
I would assume (with #eed3s9n) that it's to promote loose coupling. Also, without interfaces unit testing becomes much more difficult, as you can't mock up your objects.
Why extends is evil. This article is pretty much a direct answer to the question asked. I can think of almost no case where you would actually need an abstract class, and plenty of situations where it is a bad idea. This does not mean that implementations using abstract classes are bad, but you will have to take care so you do not make the interface contract dependent on artifacts of some specific implementation (case in point: the Stack class in Java).
One more thing: it is not necessary, or good practice, to have interfaces everywhere. Typically, you should identify when you need an interface and when you do not. In an ideal world, the second case should be implemented as a final class most of the time.
There are some excellent answers here, but if you're looking for a concrete reason, look no further than Unit Testing.
Consider that you want to test a method in the business logic that retrieves the current tax rate for the region where a transaction occurs. To do this, the business logic class has to talk to the database via a Repository:
interface IRepository<T> {
    T Get(string key);
}

class TaxRateRepository : IRepository<TaxRate> {
    protected internal TaxRateRepository() {}

    public TaxRate Get(string key) {
        TaxRate obj = null; // retrieve the TaxRate from the database here
        return obj;
    }
}
Throughout the code, use the type IRepository<TaxRate> instead of TaxRateRepository.
The repository has a non-public constructor to encourage users (developers) to use the factory to instantiate the repository:
public static class RepositoryFactory {
    static RepositoryFactory() {
        TaxRateRepository = new TaxRateRepository();
    }

    public static IRepository<TaxRate> TaxRateRepository { get; private set; }

    public static void SetTaxRateRepository(IRepository<TaxRate> rep) {
        TaxRateRepository = rep;
    }
}
The factory is the only place where the TaxRateRepository class is referenced directly.
So you need some supporting classes for this example:
class TaxRate {
    public string Region { get; set; }
    public decimal Rate { get; set; } // settable so the test below can build one
}

static class Business {
    public static decimal GetRate(string region) {
        var taxRate = RepositoryFactory.TaxRateRepository.Get(region);
        return taxRate.Rate;
    }
}
And there is also another implementation of IRepository - the mock:
class MockTaxRateRepository : IRepository<TaxRate> {
    public TaxRate ReturnValue { get; set; }
    public bool GetWasCalled { get; protected set; }
    public string KeyParamValue { get; protected set; }

    public TaxRate Get(string key) {
        GetWasCalled = true;
        KeyParamValue = key;
        return ReturnValue;
    }
}
Because the live code (the Business class) uses the factory to get the repository, in the unit test you plug the MockTaxRateRepository in as the TaxRateRepository. Once the substitution is made, you can hard-code the return value and make the database unnecessary.
class MyUnitTestFixture {
    MockTaxRateRepository rep = new MockTaxRateRepository();

    [FixtureSetup]
    void ConfigureFixture() {
        RepositoryFactory.SetTaxRateRepository(rep);
    }

    [Test]
    void Test() {
        var region = "NY.NY.Manhattan";
        var rate = 8.5m;
        rep.ReturnValue = new TaxRate { Rate = rate };

        var r = Business.GetRate(region);

        Assert.IsTrue(rep.GetWasCalled);
        Assert.AreEqual(region, rep.KeyParamValue);
        Assert.AreEqual(rate, r);
    }
}
Remember, you want to test the business logic method only, not the repository, database, connection string, etc... There are different tests for each of those. By doing it this way, you can completely isolate the code that you are testing.
A side benefit is that you can also run the unit test without a database connection, which makes it faster, more portable (think multi-developer team in remote locations).
Another side benefit is that you can use the Test-Driven Development (TDD) process for the implementation phase of development. I don't strictly use TDD but a mix of TDD and old-school coding.
In one sense, I think your question boils down to simply, "why use interfaces and not abstract classes?" Technically, you can achieve loose coupling with both -- the underlying implementation is still not exposed to the calling code, and you can use Abstract Factory pattern to return an underlying implementation (interface implementation vs. abstract class extension) to increase the flexibility of your design. In fact, you could argue that abstract classes give you slightly more, since they allow you to both require implementations to satisfy your code ("you MUST implement start()") and provide default implementations ("I have a standard paint() you can override if you want to") -- with interfaces, implementations must be provided, which over time can lead to brittle inheritance problems through interface changes.
Fundamentally, though, I use interfaces mainly due to Java's single inheritance restriction. If my implementation MUST inherit from an abstract class to be used by calling code, that means I lose the flexibility to inherit from something else even though that may make more sense (e.g. for code reuse or object hierarchy).
One reason is that interfaces allow for growth and extensibility. Say, for example, that you have a method that takes an object as a parameter,
public void drink(Coffee someDrink)
{
}
Now let's say you want to use the exact same method, but pass a HotTea object. Well, you can't. You just hard-coded the above method to accept only Coffee objects. Maybe that's good, maybe that's bad. The downside is that it strictly locks you in to one type of object when you'd like to pass all sorts of related objects.
By using an interface, say IHotDrink,
interface IHotDrink { }
and rewriting your above method to use the interface instead of the concrete class,
public void drink(IHotDrink someDrink)
{
}
Now you can pass all objects that implement the IHotDrink interface. Sure, you can write the exact same method that does the exact same thing with a different object parameter, but why? You're suddenly maintaining bloated code.
It's all about designing before coding.
If you don't know all the relationships between two objects after you have specified the interface, then you have done a poor job of defining the interface -- which is relatively easy to fix.
If you had dived straight into coding and realised halfway through that you were missing something, it would be a lot harder to fix.
You could see this from a Perl/Python/Ruby perspective: when you pass an object as a parameter to a method, you don't pass its type; you just know that it must respond to certain methods.
I think considering Java interfaces as an analogy to that best explains this. You don't really pass a type; you just pass something that responds to a method (a trait, if you will).
I think the main reason to use interfaces in Java is the single-inheritance limitation. In many cases this leads to unnecessary complication and code duplication. Take a look at Traits in Scala: http://www.scala-lang.org/node/126. Traits are a special kind of abstract class, but a class can extend many of them.
Here is a problem I've struggled with ever since I first started learning object-oriented programming: how should one implement a logger in "proper" OOP code?
By this, I mean an object that has a method that we want every other object in the code to be able to access; this method would output to console/file/whatever, which we would use for logging--hence, this object would be the logger object.
We don't want to establish the logger object as a global variable, because global variables are bad, right? But we also don't want to have to pass the logger object in the parameters of every single method we call on every single object.
In college, when I brought this up to the professor, he couldn't actually give me an answer. I realize that there are actually packages (for say, Java) that might implement this functionality. What I am ultimately looking for, though, is the knowledge of how to properly and in the OOP way implement this myself.
You do want to establish the logger as a global variable, because global variables are not bad. At least, they aren't inherently bad. A logger is a great example of the proper use of a globally accessible object. Read about the Singleton design pattern if you want more information.
There are some very well thought out solutions. Some involve bypassing OO and using another mechanism (AOP).
Logging doesn't really lend itself too well to OO (which is okay, not everything does). If you have to implement it yourself, I suggest just instantiating "Log" at the top of each class:
private final Log log = new Log(this);
and all your logging calls are then trivial: log.print("Hey");
Which makes it much easier to use than a singleton.
Have your logger figure out what class you are passing in and use that to annotate the log. Since you then have an instance of Log, you can do things like:
log.addTag("Bill");
And log can add the tag "Bill" to each entry so that you can implement better filtering for your display.
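Putting those pieces together, a rough sketch of such a Log class (all names are assumed, not from any library):

import java.util.ArrayList;
import java.util.List;

class Log {
    private final String className;
    private final List<String> tags = new ArrayList<>();

    Log(Object owner) {
        this.className = owner.getClass().getSimpleName(); // annotate entries with the owning class
    }

    void addTag(String tag) { tags.add(tag); }

    void print(String message) {
        System.out.println("[" + className + "]" + tags + " " + message);
    }
}

// inside any class that wants logging:
// private final Log log = new Log(this);
// log.addTag("Bill");
// log.print("Hey");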
log4j and chainsaw are a perfect out of the box solution though--if you aren't just being academic, use those.
A globally accessible logger is a pain for testing. If you need a "centralized" logging facility create it on program startup and inject it into the classes/methods that need logging.
How do you test methods that use something like this:
public class MyLogger
{
    public static void Log(String message) {}
}
How do you replace it with a mock?
Better:
public interface ILog
{
    void Log(String message);
}

public class MyLog : ILog
{
    public void Log(String message) {}
}
I've always used the Singleton pattern to implement a logging object.
You could look at the Singleton pattern.
Create the logger as a singleton class and then access it using a static method.
I think you should use AOP (aspect-oriented programming) for this, rather than OOP.
In practice, a singleton / global method works fine, in my opinion. Preferably the global thing is just a framework to which you can connect different listeners (observer pattern), e.g. one for console output, one for database output, one for Windows EventLog output, etc.
Beware of overdesign though; I find that in practice a single class with just global methods can work quite nicely.
Or you could use the infrastructure the particular framework you work in offers.
The Enterprise Library Logging Application Block that comes from Microsoft's Pattern & Practices group is a great example of implementing a logging framework in an OOP environment. They have some great documentation on how they have implemented their logging application block and all the source code is available for your own review or modification.
There are other similar implementations: log4net, log4j, log4cxx
The way they have implemented the Enterprise Library Logging Application Block is to have a static Logger class with a number of different methods that actually perform the log operation. If you were looking at patterns, this would probably be one of the better uses of the Singleton pattern.
I am all for AOP together with log4*. This really helped us.
Google gave me this article for instance. You can try to search more on that subject.
(IMHO) how 'logging' happens isn't part of your solution design, it's more part of whatever environment you happen to be running in - like System and Calendar in Java.
Your 'good' solution is one that is as loosely coupled to any particular logging implementation as possible so think interfaces. I'd check out the trail here for an example of how Sun tackled it as they probably came up with a pretty good design and laid it all out for you to learn from!
Use a static class; it has the least overhead and is accessible from all project types within a simple assembly reference.
Note that a Singleton is equivalent, but involves unnecessary allocation.
If you are using multiple app domains, beware that you may need a proxy object to access the static class from domains other than the main one.
Also, if you have multiple threads, you may need to lock around the logging functions to avoid interleaving the output.
IMHO logging alone is insufficient; that's why I wrote CALM.
Good luck!
Maybe inserting Logging in a transparent way would rather belong in the Aspect Oriented Programming idiom. But we're talking OO design here...
The Singleton pattern may be the most useful, in my opinion: you can access the Logging service from any context through a public, static method of a LoggingService class.
Though this may seem a lot like a global variable, it is not: it's properly encapsulated within the singleton class, and not everyone has access to it. This enables you to change the way logging is handled even at runtime, but protects the workings of the logging from 'villain' code.
In the system I work on, we create a number of Logging 'singletons', in order to be able to distinguish messages from different subsystems. These can be switched on/off at runtime, filters can be defined, writing to file is possible... you name it.
I've solved this in the past by adding an instance of a logging class to the base class(es) (or interface, if the language supports that) of the classes that need access to logging. When you log something, the logger looks at the current call stack and determines the invoking code from that, setting the proper metadata about the logging statement (source method, line of code if available, class that logged, etc.). This way a minimal number of classes have loggers, and the loggers don't need to be specifically configured with metadata that can be determined automatically.
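A hedged sketch of the call-stack inspection described above (the class name is invented):

class StackAwareLog {
    void print(String message) {
        // index 0 is getStackTrace, 1 is this method, 2 is the invoking code
        StackTraceElement caller = Thread.currentThread().getStackTrace()[2];
        System.out.println(caller.getClassName() + "." + caller.getMethodName()
                + " (line " + caller.getLineNumber() + "): " + message);
    }
}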
This does add considerable overhead, so it is not necessarily a wise choice for production logging, but aspects of the logger can be disabled conditionally if you design it in such a way.
Realistically, I use commons-logging most of the time (I do a lot of work in java), but there are aspects of the design I described above that I find beneficial. The benefits of having a robust logging system that someone else has already spent significant time debugging has outweighed the need for what could be considered a cleaner design (that's obviously subjective, especially given the lack of detail in this post).
I have had issues with static loggers causing permgen memory issues (at least, I think that's what the problem is), so I'll probably be revisiting loggers soon.
To avoid global variables, I propose to create a global REGISTRY and register your globals there.
For logging, I prefer to provide a singleton class or a class which provides some static methods for logging.
Actually, I'd use one of the existing logging frameworks.
One other possible solution is to have a Log class which encapsulates the logging / stored-procedure call. That way you can just instantiate a new Log() whenever you need it, without having to use a singleton.
This is my preferred solution, because the only dependency you need to inject is the database, if you're logging via the database. If you're logging to files, you potentially don't need to inject any dependencies at all. You can also entirely avoid a global or static logging class/function.