Class design frustrations moving from Java to ActionScript - actionscript-3

I primarily work in Java, but I've recently started using ActionScript 3.0 for a multi-player Flash game that I am helping to develop. The project is in its early stages, so I'm still working on the class structure. I keep running into limitations with the ActionScript language when I try to use many of the OOP features I expect in Java.
For example:
I need an abstract Character class. There is no reason why Character would ever be instantiated, but ActionScript doesn't support abstract classes. As a result my code has this comment at the top:
Character should be an abstract class, but AS doesn't support abstract classes.
DO NOT CREATE AN INSTANCE OF THIS CLASS. Only instantiate classes that extend this one (e.g. Player, Zombie).
As a result of the design of Flixel (the library we are using), I need to have a CharacterGroup class with an inner class Character so that a CharacterGroup can also contain other sprites like guns and stuff. In Java, I would use an inner class. ActionScript doesn't support inner classes. There is something called a "helper class", but helper classes are not inherited, which makes them useless in this context.
My question is this: Is ActionScript's ability to deal with OOP design just less developed, or am I finding ActionScript so frustrating because I am trying to write it as if it were Java instead of wrapping my head around how ActionScript was designed?
In other words, is the "correct" way of doing OO design different in ActionScript than it is in Java?
(Note: I'm not asking for opinions about why ActionScript is better/worse than Java. I'm only asking if I am coding correctly or trying to pull too much of my experience from Java.)
Thanks!

AS3 is not missing features, and you cannot define it as 'less developed'.
First, for your problem: there are ways around the lack of abstract classes.
For your abstract class Character, you can make it so the developer receives an error on trying to instantiate it directly.
package com.strangemother.database.abstract
{
    import flash.events.EventDispatcher;

    public class CentralDispatch extends EventDispatcher
    {
        private static var _centralDispatch:CentralDispatch;

        public static function getInstance():CentralDispatch
        {
            if (!_centralDispatch)
                _centralDispatch = new CentralDispatch(SingletonLock);
            return _centralDispatch;
        }

        public function CentralDispatch(lock:Class)
        {
            // Only this file can see SingletonLock, so only getInstance() can pass it in.
            if (lock != SingletonLock)
            {
                throw new Error("CentralDispatch is a singleton. Use CentralDispatch.getInstance() to use.");
            }
        }
    }
}

internal class SingletonLock {}
As you can see, this must be used via the getInstance() method - but beyond that, only this class can make a new instance of itself, as it's the only class that can see the internal class SingletonLock.
For your purpose, you may remove the getInstance() method and provide another way for the user to receive an instance of this class.
This should also demonstrate the ability to make internal classes. These cannot be seen by any other class - only code in the same source file (here, CentralDispatch) can use them.
Another way to get abstract-method behaviour is to write the methods into an interface:
package com.strangemother.database.engines
{
    import com.strangemother.database.Model;
    import com.strangemother.database.ModelSchema;
    import com.strangemother.database.events.ModelEvent;

    public interface IEngine
    {
        /**
         * Reads modelSchema and calls generateModel upon
         * each model found.
         * */
        function generateModelSchema(modelSchema:ModelSchema=null):String;

        /**
         * Generates your CREATE syntax model to output to your SQL.
         *
         * If you are extending the framework with your own database
         * engine, you must override this with your own model generation
         * format.
         * */
        function generateModel(model:Model):String;
    }
}
Then at any point, to use this, you implement it at class level:
public class SQLite3 extends EngineBase implements IEngine
{
Now my SQLite3 class must have the methods defined in IEngine.
I much prefer to write classes with defined functions which are overridden when extended.
AbstractBase.as
/**
 * Connects to the database. This is not usually
 * required, as the abstraction layer should
 * automatically connect when required.
 * */
public function connect(onComplete:Function=null):void
{
    // base implementation
}
SQLite3, which extends AbstractBase, overrides this at some point:
override public function connect(onComplete:Function=null):void
Now, to refute @Allan's comment about it being less developed (sorry dude):
No operator overloading - that's correct, but Java doesn't have it either; it was left out to keep AS3 readable.
Function overloading - you can't hard-type it, but you can have function makeTea(...args), passing in as much or as little data as you wish. You've also got getters/setters.
For inline functions, you can create anonymous functions:
var myFunction:Function = function(name:String):String { return name + ' - rocks!'; };
You've got dynamic classes, and therefore class-level overloading.
A good example of real code is Flex Lib - it's open source, and you can see how all these elements are managed by glancing through the code.

It is somewhat subjective, but personally, I would say that the "correct" way of doing OO design in AS3 is the same as Java and yes AS3 is just less developed.
AS2 was very much prototype-based, much like JavaScript is currently, although as with JavaScript you could still program it to fit a classical style. Then along came AS3, which was based on a draft of ECMAScript edition 4. That update to ECMAScript made it more classical, akin to Java (JavaScript 2 was going to be based on the same draft, but it was dropped because members of the committee deemed it changed too much). So while AS3 is now a more classical, Java-style language, as you have found out, it is light on language features. Off the top of my head it is missing things like:
operator overloading
function overloading
generics
abstract classes
inline functions
inner classes, which you pointed out
and probably a lot more things that other languages have that I am not aware of. It is understandable that it is annoying not to be able to use language features that you are used to, but most of the time I have come to learn that the things lacking are luxuries*. You can get by without them; it's just that their absence can at times make your code a little more dangerous and verbose, and that's something you have to learn to live with.
There are a few hack ways to try and emulate some of these features but I rarely bother.
*You can also try looking at the Haxe language, which compiles to ABC bytecode (among other targets). The Haxe language supports generics, inline functions, conditional compilation (and more). I use it whenever I am writing a library.

Related

Important Concepts I want to Hone my Understanding of

So I've been programming for a little under a year now, and I've taught myself parts of Python, PHP, and Javascript. I'm really trying to become a better programmer and understand the theory behind programming (which seems, in essence, manipulation of information).
Basically, it seems that to write good code, it must be modular - this reduces the complexity of programs, which in turn makes them easier to read and makes similar effects easier to reproduce. This is done with functions and classes.
What is the difference between a function and a class? It seems like they are both something that can have an argument passed to it - I understand that a class is "higher" than a function.
What is a namespace? How does it differ from a variable? A data structure? Where can I find resources on this sort of information? I've looked on the internet, but a lot of it seems like gobbledygook. I also already have my bachelor's, so I am not looking to go back for a second degree. I just want to become a better programmer at this point. I've done enough "hacking" that I get the basic concept, but the theoretical underpinnings still aren't all there.
Any advice would be helpful.
Part of the problem with the question (which the relevancy police will probably be closing shortly) is that a lot of the terms you're using have different meanings depending on the language paradigm you're using. A function in PHP is a different beast from a function in JavaScript.
I recommend starting with "JavaScript: the Good Parts" for an understanding from the particular perspective of someone working in JS.
Okay, let's go from the top down:
A namespace is a logical ordering of code.
A real-world analogy is that of a library. A library contains all the books, but it makes sense to have sections of the library devoted to specific areas, e.g. books (code) about physics.
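To make that concrete, here is a minimal Java sketch (the package and class names are made up for illustration):
// A namespace groups related code the way a library section groups related books.
// In Java, the namespace mechanism is the package declaration.
package physics; // everything in this file lives in the "physics" namespace

public class Gravity {
    public static final double G = 6.674e-11; // gravitational constant
}
Code elsewhere refers to it by its qualified name, physics.Gravity, so a chemistry.Gravity could coexist without any clash.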
A Class is a model (almost always derived from real-world objects) which exposes functions and properties. Classes can (and should) encapsulate (hide) properties and functions which the developer doesn't wish other developers to be able to reach. Consider:
public class Car{
    public Car(){} // default constructor

    public Car(int tirecount){ // this constructor allows initialisation of the class to some 'safe' state
        Tires = new Tire[tirecount];
    }

    // properties
    public Tire[] Tires{get;set;} // bad: at any point you can remove a tire from the car
    public bool IsStopped{get;private set;} // safe: you can check if the car is stopped outside the class, but the value can only change inside Car

    // functions (...methods)
    public void Start(){ // starts car
        IsStopped = false;
    }

    public void Stop(){
        IsStopped = true;
    }

    public void RemoveTire(int tireIndex)
    {
        if(!this.IsStopped) this.Stop();
        Tires[tireIndex].Remove(); // safe to remove tire when stopped
    }
}
In order to get code re-use and polymorphic behavior, you must read about interfaces. Interfaces allow the definition of a contract: the internal implementation of the contract can change, but the methods already defined cannot be changed without breaking the original code that relies on that contract. Extra agreements can be added without breaking old implementations. Example:
Class Man implements ITalk
Class Dog implements ITalk
The ITalk contract states "I have a function Speak", i.e.:
interface ITalk{
    void Speak();
}
class World
{
    List<ITalk> beings;

    public World(List<ITalk> beingsToPopulateWorldWith)
    {
        beings = beingsToPopulateWorldWith;
    }

    public void MakeAllAnimalsTalk()
    {
        // Because we know all objects in the beings list implement the ITalk
        // interface (contract), we KNOW we can call .Speak(). What each ITalk
        // does in Speak is up to it, but we know we can call it.
        foreach(var b in beings) b.Speak();
    }
}
So Man.Speak() may output "Hi" and Dog.Speak() may output "Woof".
Classes can also extend, so consider the Man/Dog example. Their base class could be Animal. Animal defines IsAlive. Man derives from Animal, so it gains IsAlive, as does Dog; however, Man could then define behaviour that Dog does not have, e.g. AbilityToMakeTools.
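A minimal Java sketch of the hierarchy just described (the names follow the example above; the code itself is illustrative):
abstract class Animal {
    protected boolean isAlive = true; // defined once, gained by every subclass
}

class Dog extends Animal { }

class Man extends Animal {
    // behaviour Man adds that Dog does not have
    void makeTools() { System.out.println("making tools"); }
}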
I find that as soon as you start to envisage classes as real-world objects/processes (everything can be modelled, even the most abstract 'thing'), classes start to make logical sense.
HTH

Language-integrated design patterns

I've noticed that getting started with design patterns is pretty difficult for beginners. Understanding the structure of the design patterns requires a lot of time. Applying the design patterns in your practice requires a lot of time too. You can't see the differences between the various design patterns at first if you're not familiar with them. This problem is partially solved if your classes have suitable names. You can also break the design-patterned class structure you implement if you miss some rule while writing your code, or if you're not experienced with the design patterns.
Compilers can protect you and help you to implement interfaces - if you're not implementing an interface, you can't compile your application. It's a good and safe approach. What if compilers could protect you when you implement design-pattern classes too? Look, a lot of programming languages support a "foreach" statement. What if programming languages could provide support for factories, bridges, proxies, mementos, etc.? If they could, you could use something like the following to apply the abstract and concrete factory patterns (I prefer C# as the base language for the pseudocode; it's assumed that the contextual keywords are used):
public abstract factory class AF {
    public product AP1 GetProduct1();
    public product AP2 GetProduct2();
};

public concrete factory class CF1 : AF {
    public product CP1 GetProduct1() { ... }
    public product CP2 GetProduct2() { ... }
};
I think it could help you to understand new source code and maintain the integrity of the application's source code structure. What do you think about this?
If I understand what you're saying, you think that new language features ought to overcome the need for the boilerplate code usually associated with implementing design patterns.
This is already happening, it is nothing new.
Take the singleton, for example, one of the most well known patterns. Everyone knows how to implement it: you declare the constructor private, you keep a single global instance of the object as a static property, and add a public method to retrieve it.
It's quite a few lines of code for what is conceptually very simple.
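For reference, the classic hand-rolled version looks something like this in Java (a minimal sketch; the class name is arbitrary):
public final class Config {
    // the single global instance, kept as a static field
    private static final Config INSTANCE = new Config();

    // private constructor: nobody outside can instantiate
    private Config() { }

    // the public method that retrieves the single instance
    public static Config getInstance() { return INSTANCE; }
}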
In Scala, you don't need any boilerplate to create a singleton. To complement the class keyword, Scala has an object keyword, which declares a singleton object:
object MainApp {
def main(args: Array[String]) {
println("Hello, world!")
}
}
At runtime there will be one single, global instance of MainApp. There is no need to instantiate it using new; in fact, you can't use new MainApp at all.
There is an argument that the existence of a design pattern in a language demonstrates a weakness in the design of the language itself, and that the next generation of languages should learn from the design patterns that were common in the previous generation.
For example, see Peter Norvig's famous presentation about design patterns being invisible in dynamic languages.
In fact, it's easy to come up with examples of this process already happening - as you say, foreach loops are arguably embedded iterators, Ruby has a Singleton mixin to inherit from, any language with multimethods doesn't need a Visitor pattern, and Groovy has built-in Builders.
Your specific example of a factory sounds a bit like Noop's integration of dependency injection into the language spec.
Of course there's only so far a type-checker can go in assuring correctness of code (at the moment). And embedding design patterns into the language isn't going to obviate the need for familiarity with the core concepts, or to think hard about the application to the problem at hand.
Your example is interesting, you suggest adding several keywords and rules to the language that (and I'm not that familiar with C#) add no clear benefit. What would the "factory" keyword tell the type checker (or another programmer) that isn't clear from declaring "AF" as the equivalent of a Java interface, and having "product" as the return type for its methods?
I think you are on to something. But where you miss the point (in my opinion) is that you're trying to specify a design pattern which, as MHarris said, tends to become deprecated or obsolete as time passes; making the language dependent on design patterns is not such a good idea.
What I think is that there could be a 'composite' language, where you have two artifacts: the design specification language (coupled to the implementation language, don't think UML) and the implementation. So, taking your example, it could be done like this:
Design specification:
single public Factory
    methods:
        T Get[T]
Notice that if it's done like this, because the design specification is meant to be abstract (no need to specify low-level details right there and then), it can have constructs that facilitate writing specifications. Note that in the specification you don't need to say whether a method is public or private; design specifications (excluding algorithm pseudocode) only care about publicly visible behavior, not private implementation details.
Implementation:
public class ConcreteFactory : Factory {
    public Product1 GetProduct1() { ... }
    public Product2 GetProduct2() { ... }
};
Here two approaches could be used:
The compiler could take the design as an artifact, and then check if the implementation code is congruent with it.
The compiler or runtime could provide part of the implementation that can be automatically generated (like the singleton implementation), and the code itself could assume it's a singleton and not need to re-specify it. For example:
class ConcreteFactory : Factory {
    Product1 GetProduct1 { new ConcreteProduct1 }
    Product2 GetProduct2 { new ConcreteProduct2 }
}
Notice that the implementation can omit higher-level details like the visibility of the class and the visibility of the methods, because these are already specified at the design level (DRY). If you ask me, this type of language would have to come with a specialized IDE too, so as to provide contextual information about the design for a type. As Jeff Atwood has commented, any new language should come with its own specialized IDE.

Should inheritance (of non-interface types) be removed from programming languages?

This is quite a controversial topic, and before you say "no", is it really, really needed?
I have been programming for about 10 years, and I can't honestly say that I can recall a time where inheritance solved a problem that couldn't be solved another way. On the other hand, I can recall many times when I used inheritance because I felt like I had to, or because I thought I was being clever, and ended up paying for it.
I can't really see any circumstances where, from an implementation stand point, aggregation or another technique could not be used instead of inheritance.
My only caveat to this is that we would still allow inheritance of interfaces.
(Update)
Let's give an example of why it's needed instead of saying, "sometimes it's just needed." That really isn't helpful at all. Where is your proof?
(Update 2 Code Example)
Here's the classic shape example - more powerful and more explicit, IMO, without inheritance. It is almost never the case in the real world that something really "is a" something else. Almost always "is implemented in terms of" is more accurate.
public interface IShape
{
    void Draw();
}

public class BasicShape : IShape
{
    public void Draw()
    {
        // All shapes in this system have a dot in the middle except squares.
        DrawDotInMiddle();
    }
}

public class Circle : IShape
{
    private BasicShape _basicShape;

    public void Draw()
    {
        // Draw the circle part
        DrawCircle();
        _basicShape.Draw();
    }
}

public class Square : IShape
{
    private BasicShape _basicShape;

    public void Draw()
    {
        // Draw the square part (no dot in the middle, so _basicShape.Draw() is not called)
        DrawSquare();
    }
}
I blogged about this as a wacky idea a while ago.
I don't think it should be removed, but I think classes should be sealed by default to discourage inheritance when it's not appropriate. It's a powerful tool to have available, but it's like a chain-saw - you really don't want to use it unless it's the perfect tool for the job. Otherwise you might start losing limbs.
There are potential language features, such as mix-ins, which would make it easier to live without, IMO.
Inheritance can be rather useful in situations where your base class has a number of methods with the same implementation for each derived class, to save every derived class from having to implement boilerplate code. Take the .NET Stream class, for example, which defines the following methods:
public abstract int Read(byte[] buffer, int index, int count);

public int ReadByte()
{
    // note: this is only an approximation to the real implementation
    var buffer = new byte[1];
    if (this.Read(buffer, 0, 1) == 1)
    {
        return buffer[0];
    }
    return -1;
}
Because inheritance is available the base class can implement the ReadByte method for all implementations without them having to worry about it. There are a number of other methods like this on the class which have default or fixed implementations. So in this type of situation it's a very valuable thing to have, compared with an interface where your options are either to make everyone re-implement everything, or to create a StreamUtil type class which they can call (yuk!).
To clarify, with inheritance all I need to write to create a DerivedStream class is something like:
public class DerivedStream : Stream
{
    public override int Read(byte[] buffer, int index, int count)
    {
        // my read implementation
    }
}
Whereas if we're using interfaces and a default implementation of the methods in StreamUtil I have to write a bunch more code:
public class DerivedStream : IStream
{
    public int Read(byte[] buffer, int index, int count)
    {
        // my read implementation
    }

    public int ReadByte()
    {
        return StreamUtil.ReadByte(this);
    }
}
So it's not a huge amount more code, but multiply this by a few more methods on the class and it's just unnecessary boilerplate which the compiler could handle instead. Why make things more painful to implement than necessary? I don't think inheritance is the be-all and end-all, but it can be very useful when used correctly.
Of course you can write great programs happily without objects and inheritance; functional programmers do it all the time. But let us not be hasty. Anybody interested in this topic should check out the slides from Xavier Leroy's invited lecture about classes vs modules in Objective Caml. Xavier does a beautiful job laying out what inheritance does well and does not do well in the context of different kinds of software evolution.
All languages are Turing-complete, so of course inheritance isn't necessary. But as an argument for the value of inheritance, I present the Smalltalk blue book, especially the Collection hierarchy and the Number hierarchy. I'm very impressed that a skilled specialist can add an entirely new kind of number (or collection) without perturbing the existing system.
I will also remind questioner of the "killer app" for inheritance: the GUI toolkit. A well-designed toolkit (if you can find one) makes it very, very easy to add new kinds of graphical interaction widgets.
Having said all that, I think that inheritance has innate weaknesses (your program logic is smeared out over a large set of classes) and that it should be used rarely and only by skilled professionals. A person graduating with a bachelor's degree in computer science barely knows anything about inheritance - such persons should be permitted to inherit from other classes when needed, but should never, ever write code from which other programmers inherit. That job should be reserved for master programmers who really know what they're doing. And they should do it reluctantly!
For an interesting take on solving similar problems using a completely different mechanism, people might want to check out Haskell type classes.
I wish languages would provide some mechanisms to make it easier to delegate to member variables. For example, suppose interface I has 10 methods, and class C1 implements this interface. Suppose I want to implement class C2 that is just like a C1 but with method m1() overridden. Without using inheritance, I would do this as follows (in Java):
public class C2 implements I {
    private I c1;

    public C2() {
        c1 = new C1();
    }

    public void m1() {
        // This is the method C2 is overriding.
    }

    public void m2() {
        c1.m2();
    }

    public void m3() {
        c1.m3();
    }

    ...

    public void m10() {
        c1.m10();
    }
}
In other words, I have to explicitly write code to delegate the behavior of methods m2..m10 to the member variable c1. That's a bit of a pain. It also clutters the code up, so that it's harder to see the real logic in class C2. It also means that whenever new methods are added to interface I, I have to explicitly add more code to C2 just to delegate these new methods to c1.
I wish languages would allow me to say: C2 implements I, but if C2 is missing some method from I, automatically delegate to member variable c1. That would cut down the size of C2 to just:
public class C2 implements I(delegate to c1) {
    private I c1;

    public C2() {
        c1 = new C1();
    }

    public void m1() {
        // This is the method C2 is overriding.
    }
}
If languages allowed us to do this, it would be much easier to avoid use of inheritance.
Here's a blog article I wrote about automatic delegation.
Inheritance is one of those tools that can be used, and of course can be abused, but I think languages have to have more changes before class-based inheritance could be removed.
Let's take my world at the moment, which is mainly C# development.
For Microsoft to take away class-based inheritance, they would have to build in much stronger support for handling interfaces - things like aggregation, where I currently need to add lots of boilerplate code just to wire up an interface to an internal object. This really should be done anyway, but it would be a hard requirement in such a case.
In other words, the following code:
public interface IPerson { ... }
public interface IEmployee : IPerson { ... }

public class Employee : IEmployee
{
    private Person _Person;
    ...
    public String FirstName
    {
        get { return _Person.FirstName; }
        set { _Person.FirstName = value; }
    }
}
This would basically have to be a lot shorter; otherwise I'd have lots of these properties just to make my class mimic a person well enough. Something like this:
public class Employee : IEmployee
{
    private Person _Person implements IPerson;
    ...
}
This could auto-create the code necessary, instead of me having to write it. Just returning the internal reference if I cast my object to an IPerson would do no good.
So things would have to be better supported before class-based inheritance could be taken off the table.
Also, you would remove things like visibility. An interface really has just two visibility settings: there, and not there. In some cases you would be, or so I think, forced to expose more of your internal data just so that someone else can more easily use your class.
For class-based inheritance, you can usually expose some access points that a descendant can use, but outside code can't, and you would generally have to just remove those access points, or make them open to everyone. Not sure I like either alternative.
My biggest question would be what, specifically, the point of removing such functionality would be, even if the plan were, as an example, to build D#, a new language like C# but without class-based inheritance. In other words, even if you plan on building a whole new language, I am still not entirely sure what the ultimate goal would be.
Is the goal to remove something that can be abused if not in the right hands? If so, I have a list a mile long for various programming languages that I would really like to see addressed first.
At the top of that list: The with keyword in Delphi. That keyword is not just like shooting yourself in the foot, it's like the compiler buys the shotgun, comes to your house and takes aim for you.
Personally, I like class-based inheritance. Sure, you can write yourself into a corner. But we can all do that. Remove class-based inheritance, and I'll just find a new way of shooting myself in the foot.
Now where did I put that shotgun...
Have fun implementing ISystemObject on all of your classes so that you have access to ToString() and GetHashCode().
Additionally, good luck with the ISystemWebUIPage interface.
If you don't like inheritance, my suggestion is to stop using .NET altogether. There are way too many scenarios where it saves time (see DRY: don't repeat yourself).
If using inheritance is blowing up your code, then you need to take a step back and rethink your design.
I prefer interfaces, but they aren't a silver bullet.
For production code I almost never use inheritance. I go with using interfaces for everything (this helps with testing and improves readability i.e. you can just look at the interface to read the public methods and see what is going on because of well-named methods and class names). Pretty much the only time I would use inheritance would be because a third party library demands it. Using interfaces, I would get the same effect but I would mimic inheritance by using 'delegation'.
For me, not only is this more readable but it is much more testable and also makes refactoring a whole lot easier.
The only time I can think of that I would use inheritance in testing would be to create my own specific TestCases used to differentiate between types of tests I have in my system.
So I probably wouldn't get rid of it but I choose not to use it as much as possible for the reasons mentioned above.
No. Sometimes you need inheritance. And for those times when you don't - don't use it. You can always "just" use interfaces (in languages that have them), and abstract classes without data members work like interfaces in those languages that don't have them. But I see no reason to remove what is sometimes a necessary feature just because you feel it isn't always needed.
No. Just because it's not often needed doesn't mean it's never needed. Like any other tool in a toolkit, it can be (and has been, and will be) misused. However, that doesn't mean it should never be used. In fact, in some languages (C++), there is no such thing as an 'interface' at the language level, so without a major change, you couldn't prohibit it.
No, it is not needed, but that does not mean it does not provide an overall benefit, which I think is more important than worrying about whether it is absolutely necessary.
In the end, almost all modern software language constructs amount to syntactic sugar - we could all be writing assembly code (or using punch cards, or working with vacuum tubes) if we really had to.
I find inheritance immensely useful those times that I truly want to express an "is-a" relationship. Inheritance seems to be the clearest means of expressing that intent. If I used delegation for all implementation re-use, I lose that expressiveness.
Does this allow for abuse? Of course it does. I often see questions asking how the developer can inherit from a class but hide a method, because that method should not exist on the subclass. That person obviously misses the point of inheritance and should be pointed toward delegation instead (see the sketch below).
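For illustration, a minimal Java sketch of that delegation advice (all names here are hypothetical):
class Engine {
    void start() { System.out.println("started"); }
    void selfDestruct() { System.out.println("boom"); } // the method we do NOT want to expose
}

class SafeEngine {
    private final Engine engine = new Engine(); // a delegate, not a superclass

    // expose only the methods that make sense; selfDestruct() simply never appears
    void start() { engine.start(); }
}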
I don't use inheritance because it is needed, I use it because it is sometimes the best tool for the job.
I guess I have to play the devil's advocate. If we didn't have inheritance, then we wouldn't be able to extend abstract classes that use the template method pattern. There are lots of examples where this is used in frameworks such as .NET and Java. Thread in Java is such an example:
// Alternative 1:
public class MyThread extends Thread {
    // Method to override from Thread,
    // aka. a "template method" (GoF design pattern)
    public void run() {
        // ...
    }
}

// Usage:
MyThread t = new MyThread();
t.start();
The alternative is, in my opinion, verbose when you have to use it. Visual clutter and complexity go up, because you need to create the Thread before you can actually use it.
// Alternative 2:
public class MyThread implements Runnable {
    // Method to implement from Runnable:
    public void run() {
        // ...
    }
}

// Usage:
MyThread m = new MyThread();
Thread t = new Thread(m);
t.start();

// …or if you have a curious perversion towards one-liners
Thread t = new Thread(new MyThread());
t.start();
With my devil's advocate hat off, I guess you could argue that the gain in the second implementation is dependency injection or separation of concerns, which helps in designing testable classes. Depending on your definition of what an interface is (I've heard of at least three), an abstract class could be regarded as an interface.
Needed? No. You can write any program in C, for example, which doesn't have any sort of inheritance or objects. You could write it in assembly language, although it would be less portable. You could write it in a Turing machine and have it emulated. Somebody designed a computer language with exactly one instruction (something like subtract and branch if not zero), and you could write your program in that.
So, if you're going to ask if a given language feature is necessary (like inheritance, or objects, or recursion, or functions), the answer is no. (There are exceptions - you have to be able to loop and do things conditionally, although these need not be supported as explicit concepts in the language.)
Therefore, I find questions of this sort useless.
Questions like "When should we use inheritance" or "When shouldn't we" are a lot more useful.
A lot of the time I find myself choosing a base class over an interface just because I have some standard functionality. In C#, I can now use extension methods to achieve that, but it still doesn't achieve the same thing in several situations.
Is inheritance really needed? Depends what you mean by "really". You could go back to punch cards or flicking toggle switches in theory, but it's a terrible way to develop software.
In procedural languages, yes, class inheritance is a definite boon. It gives you a way to elegantly organise your code in certain circumstances. It should not be overused, as any other feature should not be overused.
For example, take the case of digiarnie in this thread. He/she uses interfaces for nearly everything, which is just as bad as (possibly worse than) using lots of inheritance.
Some of his points :
this helps with testing and improves readability
It doesn't do either of those things. You never actually test an interface; you always test an object, that is, an instantiation of a class. And does having to look at a completely different bit of code help you understand the structure of a class? I don't think so.
Ditto for deep inheritance hierarchies though. You ideally want to look in one place only.
Using interfaces, I would get the same effect but I would mimic inheritance by using
'delegation'.
Delegation is a very good idea, and should often be used instead of inheritance (for example, the Strategy pattern is all about doing exactly this; see the sketch below). But interfaces have nothing to do with delegation, because you cannot specify any behaviour at all in an interface.
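As a point of reference, a minimal Java sketch of the Strategy idea (names are illustrative):
interface SortStrategy {
    int[] sort(int[] data);
}

class Sorter {
    private final SortStrategy strategy; // the behaviour lives in a delegate, not a superclass

    Sorter(SortStrategy strategy) { this.strategy = strategy; }

    int[] run(int[] data) {
        return strategy.sort(data); // the delegate does the work
    }
}
The interface only declares what a strategy must provide; the behaviour itself comes from whichever class is plugged in.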
also makes refactoring a whole lot easier.
Early commitment to interfaces usually makes refactoring harder, not easier, because there are then more places to change. Overusing inheritance early is better (well, less bad) than overusing interfaces, as pulling out delegate classes is easier if the classes being modified do not implement any interfaces. And it's quite often from those delegates that you get useful interfaces.
So overuse of inheritance is a bad thing. Overuse of interfaces is a bad thing. And ideally, a class will neither inherit from anything (except maybe "object" or the language equivalent), nor implement any interfaces. But that doesn't mean either feature should be removed from a language.
If there is a framework class that does almost exactly what you want, but a particular function of its interface throws a NotSupported exception or for some other reason you only want to override one method to do something specific to your implementation, it's much easier to write a subclass and override that one method rather than write a brand new class and write pass-throughs for each of the other 27 methods in the class.
Similarly, what about Java, for example, where every object inherits from Object and therefore automatically has implementations of equals, hashCode, etc.? I don't have to re-implement them, and it "just works" when I want to use the object as a key in a hashtable. I don't have to write a default passthrough to a Hashtable.hashcode(Object o) method, which frankly seems like it's moving away from object orientation.
My initial thought was: you're crazy. But after thinking about it a while, I kinda agree with you. I'm not saying remove class inheritance fully (abstract classes with partial implementation, for example, can be useful), but I have often inherited (pun intended) badly written OO code with multi-level class inheritance that added nothing, other than bloat, to the code.
Note that inheritance means it is no longer possible to supply the base class functionality by dependency injection, in order to unit test a derived class in isolation from its parent.
So if you're deadly serious about dependency injection (which I'm not, but I do wonder whether I should be), you can't get much use out of inheritance anyway.
Here's a nice view at the topic:
IS-STRICTLY-EQUIVALENT-TO-A by Reg Braithwaite
I believe a better mechanism for the code re-use that is sometimes achieved through inheritance is traits. Check this link (pdf) for a great discussion on this, including the distinction between traits and mixins, and why traits are favored.
There's some research that introduces traits into C# (pdf).
Perl has traits through Moose::Roles. Scala traits are like mixins, as in Ruby.
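As a rough Java-flavoured illustration of trait-style reuse (this uses interface default methods, which give only a limited form of what the papers above describe; the names are made up):
interface Greeter {
    String name(); // required method, supplied by the implementing class

    // reusable behaviour carried by the trait-like interface, with no base class needed
    default String greet() { return "Hello, " + name(); }
}

class World implements Greeter {
    public String name() { return "World"; }
}
A class can implement several such interfaces and pick up each one's behaviour, which is the kind of composition the traits papers argue for.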
The question is, "Should inheritance (of non-interface types) be removed from programming languages?"
I say, "No", as it will break a hell of a lot of existing code.
That aside, should you use inheritance, other than inheritance of interfaces? I'm predominantly a C++ programmer, and I follow a strict object model of multiple inheritance of interfaces followed by a chain of single inheritance of classes. The concrete classes are a "secret" of a component and its friends, so what goes on there is nobody's business.
To help implement interfaces, I use template mixins. This allows the interface designer to provide snippets of code to help implement the interface for common scenarios. As a component developer I feel like I can go mixin shopping to get the reusable bits without being encumbered by how the interface designer thought I should build my class.
Having said that, the mixin paradigm is pretty much unique to C++. Without this, I expect that inheritance is very attractive to the pragmatic programmer.

Immutable Collections Actionscript 3

I've been trying lately to implement some clean coding practices in AS3. One of these has been to not give away references to Arrays from a containing object. The point being that I control addition and removal from one class, and all other users of the Array receive a read-only version.
At the moment that read-only version is an ArrayIterator class I wrote, which implements a typical Iterator interface (hasNext, getNext). It also extends Proxy, so it can be used in for each loops just as an Array can.
So my question is: should this not be a fundamental feature of many languages - the ability to pass around references to read-only views of collections?
Also, now that there is improved type safety for collections in AS3, in the form of the Vector class, when I wrap a Vector in a VectorIterator I lose typing for the sake of immutability. Is there a way to satisfy the two desires, immutability and typing, in AS3?
It seems that using an Iterator pattern is the best way currently in AS3 to pass a collection around a system, while guaranteeing that it will not be modified.
The IIterator interface I use is modeled on the Java Iterator, but I do not implement the remove() method, as this is considered a design mistake by many in the Java community, due to it allowing the user to remove array elements. Below is my IIterator implementation:
public interface IIterator
{
    function get hasNext():Boolean;
    function next():*;
}
This is then implemented by classes such as ArrayIterator, VectorIterator etc.
For convenience I also extend Proxy on my concrete Iterator classes, and provide support for the for-each loops in AS3 by overriding the nextNameIndex() and nextValue() methods. This means code that typically used Arrays does not need to change when using my IIterator.
var array:Array = ["one", "two", "three"];
for each (var eachNumber:String in array)
{
    trace(eachNumber);
}

// The second loop uses a different variable name to avoid a
// duplicate-variable-definition warning in the same scope.
var iterator:IIterator = new ArrayIterator(array);
for each (var eachItem:String in iterator)
{
    trace(eachItem);
}
The only problem is... there is no way for the user to look at the IIterator interface and know that they can use a for each loop to iterate over the collection. They would have to look at the implementation of ArrayIterator to see this.
Some would argue that the fact that you can implement such patterns as libraries is an argument against adding features to the language itself (for example, the C++ language designers typically say that).
Do you have the immutability you want via the proxy object or not? Note that you can have the VectorIterator constructor take a mandatory Class parameter. Admittedly this is not designer-friendly at the moment, but let's hope things will improve in the future.
I have created a small library of immutable collection classes for AS3, including a typed ordered list, which sounds like it would meet your needs. See this blog post for details.
Something I do to achieve this is to have the class that maintains the list only return a copy of that list in a getter, via slice(). As an example, my game engine has a class Scene which maintains a list of all the Beings that have been added to it. That list is then exposed as a copy, like so:
public function get beings():Vector.<Being>
{
return _beings.slice();
}
(Sorry to revive an old thread, I came across this while looking for ways to implement exactly what Brian's answer covers and thought I would throw my 2 cents in on the matter).

Why do most system architects insist on first programming to an interface?

Almost every Java book I read talks about using the interface as a way to share state and behaviour between objects that when first "constructed" did not seem to share a relationship.
However, whenever I see architects design an application, the first thing they do is start programming to an interface. How come? How do you know all the relationships between objects that will occur within that interface? If you already know those relationships, then why not just extend an abstract class?
Programming to an interface means respecting the "contract" created by using that interface. And so if your IPoweredByMotor interface has a start() method, future classes that implement the interface, be they MotorizedWheelChair, Automobile, or SmoothieMaker, in implementing the methods of that interface, add flexibility to your system, because one piece of code can start the motor of many different types of things, because all that one piece of code needs to know is that they respond to start(). It doesn't matter how they start, just that they must start.
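A minimal Java sketch of that idea (IPoweredByMotor and start() come from the paragraph above; the class bodies are made up):
interface IPoweredByMotor {
    void start();
}

class Automobile implements IPoweredByMotor {
    public void start() { System.out.println("engine on"); }
}

class SmoothieMaker implements IPoweredByMotor {
    public void start() { System.out.println("blades spinning"); }
}

// One piece of code can start the motor of many different types of things:
class MotorStarter {
    static void startAll(java.util.List<IPoweredByMotor> things) {
        for (IPoweredByMotor t : things) {
            t.start(); // how each one starts is its own business
        }
    }
}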
Great question. I'll refer you to Josh Bloch in Effective Java, who writes (item 16) why to prefer the use of interfaces over abstract classes. By the way, if you haven't got this book, I highly recommend it! Here is a summary of what he says:
Existing classes can be easily retrofitted to implement a new interface. All you need to do is implement the interface and add the required methods. Existing classes cannot be retrofitted easily to extend a new abstract class.
Interfaces are ideal for defining mix-ins. A mix-in interface allows classes to declare additional, optional behavior (for example, Comparable). It allows the optional functionality to be mixed in with the primary functionality. Abstract classes cannot define mix-ins -- a class cannot extend more than one parent.
Interfaces allow for non-hierarchical frameworks. If you have a class that has the functionality of many interfaces, it can implement them all. Without interfaces, you would have to create a bloated class hierarchy with a class for every combination of attributes, resulting in combinatorial explosion.
Interfaces enable safe functionality enhancements. You can create wrapper classes using the Decorator pattern, a robust and flexible design. A wrapper class implements and contains the same interface, forwarding some functionality to existing methods while adding specialized behavior to other methods. You can't do this with abstract classes - you must use inheritance instead, which is more fragile. (A wrapper sketch follows this list.)
What about the advantage of abstract classes providing basic implementation? You can provide an abstract skeletal implementation class with each interface. This combines the virtues of both interfaces and abstract classes. Skeletal implementations provide implementation assistance without imposing the severe constraints that abstract classes force when they serve as type definitions. For example, the Collections Framework defines the type using interfaces, and provides a skeletal implementation for each one.
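To illustrate the wrapper point above, a minimal Java sketch of a Decorator (all names are invented for the example):
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { /* deliver via email */ }
}

// The wrapper implements and contains the same interface:
class LoggingNotifier implements Notifier {
    private final Notifier inner;

    LoggingNotifier(Notifier inner) { this.inner = inner; }

    public void send(String message) {
        System.out.println("sending: " + message); // added behaviour
        inner.send(message);                       // forwarded behaviour
    }
}
Any code that expects a Notifier accepts the wrapped version unchanged, which is what makes the enhancement "safe".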
Programming to interfaces provides several benefits:
Required for GoF type patterns, such as the visitor pattern
Allows for alternate implementations. For example, multiple data access object implementations may exist for a single interface that abstracts the database engine in use (AccountDaoMySQL and AccountDaoOracle may both implement AccountDao)
A class may implement multiple interfaces. Java does not allow multiple inheritance of concrete classes.
Abstracts implementation details. Interfaces may include only public API methods, hiding implementation details. Benefits include a cleanly documented public API and well documented contracts.
Used heavily by modern dependency injection frameworks, such as http://www.springframework.org/.
In Java, interfaces can be used to create dynamic proxies - http://java.sun.com/j2se/1.5.0/docs/api/java/lang/reflect/Proxy.html. This can be used very effectively with frameworks such as Spring to perform Aspect Oriented Programming. Aspects can add very useful functionality to Classes without directly adding java code to those classes. Examples of this functionality include logging, auditing, performance monitoring, transaction demarcation, etc. http://static.springframework.org/spring/docs/2.5.x/reference/aop.html.
Mock implementations, unit testing - When dependent classes are implementations of interfaces, mock classes can be written that also implement those interfaces. The mock classes can be used to facilitate unit testing.
I think one of the reasons abstract classes have largely been abandoned by developers might be a misunderstanding.
When the Gang of Four wrote:
Program to an interface not an implementation.
there was no such thing as a Java or C# interface. They were talking about the object-oriented interface concept that every class has. Erich Gamma mentions it in this interview.
I think following all the rules and principles mechanically, without thinking, leads to a code-base that is difficult to read, navigate, understand and maintain. Remember: the simplest thing that could possibly work.
How come?
Because that's what all the books say. Like the GoF patterns, many people see it as universally good and don't ever think about whether or not it is really the right design.
How do you know all the relationships between objects that will occur within that interface?
You don't, and that's a problem.
If
you already know those relationships,
then why not just extend an abstract
class?
Reasons to not extend an abstract class:
You have radically different implementations and making a decent base class is too hard.
You need to burn your one and only base class for something else.
If neither apply, go ahead and use an abstract class. It will save you a lot of time.
Questions you didn't ask:
What are the down-sides of using an interface?
You cannot change them. Unlike an abstract class, an interface is set in stone. Once you have one in use, extending it will break code, period.
Do I really need either?
Most of the time, no. Think really hard before you build any object hierarchy. A big problem in languages like Java is that it makes it way too easy to create massive, complicated object hierarchies.
Consider the classic example LameDuck inherits from Duck. Sounds easy, doesn't it?
Well, that is until you need to indicate that the duck has been injured and is now lame. Or indicate that the lame duck has been healed and can walk again. Java does not allow you to change an object's type, so using sub-types to indicate lameness doesn't actually work (one alternative is sketched below).
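A minimal Java sketch of the alternative the paragraph implies - lameness as state, not as a subtype (names are illustrative):
class Duck {
    private boolean lame; // can change over the duck's lifetime

    void injure() { lame = true; }  // the duck becomes lame...
    void heal()   { lame = false; } // ...and can be healed again

    boolean isLame() { return lame; }
}
No type change is ever needed, because being lame is a mutable property rather than an identity.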
Programming to an interface means respecting the "contract" created by
using that interface
This is the single most misunderstood thing about interfaces.
There is no way to enforce any such contract with interfaces. Interfaces, by definition, cannot specify any behaviour at all. Classes are where behaviour happens.
This mistaken belief is so widespread as to be considered the conventional wisdom by many people. It is, however, wrong.
So this statement in the OP
Almost every Java book I read talks about using the interface as a way
to share state and behavior between objects
is just not possible. Interfaces have neither state nor behaviour. They can define properties that implementing classes must provide, but that's as close as they can get. You cannot share behaviour using interfaces.
You can make an assumption that people will implement an interface to provide the sort of behaviour implied by the name of its methods, but that's not anything like the same thing. And it places no restrictions at all on when such methods are called (e.g. that Start should be called before Stop).
This statement
Required for GoF type patterns, such as the visitor pattern
is also incorrect. The GoF book uses exactly zero interfaces, as they were not a feature of the languages used at the time. None of the patterns require interfaces, although some can use them. IMO, the Observer pattern is one in which interfaces can play a more elegant role (although the pattern is normally implemented using events nowadays). In the Visitor pattern it is almost always the case that a base Visitor class implementing default behaviour for each type of visited node is required, IME.
Personally, I think the answer to the question is threefold:
Interfaces are seen by many as a silver bullet (these people usually labour under the "contract" misapprehension, or think that interfaces magically decouple their code)
Java people are very focussed on using frameworks, many of which (rightly) require classes to implement their interfaces
Interfaces were the best way to do some things before generics and annotations (attributes in C#) were introduced.
Interfaces are a very useful language feature, but are much abused. Symptoms include:
An interface is only implemented by one class
A class implements multiple interfaces. Often touted as an advantage of interfaces, usually it means that the class in question is violating the principle of separation of concerns.
There is an inheritance hierarchy of interfaces (often mirrored by a hierarchy of classes). This is the situation you're trying to avoid by using interfaces in the first place. Too much inheritance is a bad thing, both for classes and interfaces.
All these things are code smells, IMO.
It's one way to promote loose coupling.
With low coupling, a change in one module will not require a change in the implementation of another module.
A good use of this concept is the Abstract Factory pattern. In the Wikipedia example, the GUIFactory interface produces the Button interface. The concrete factory may be WinFactory (producing WinButton) or OSXFactory (producing OSXButton). Imagine if you were writing a GUI application and you had to hunt down all the instances of the OldButton class and change them to WinButton. Then next year, you need to add an OSXButton version. (A sketch of this factory follows.)
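A condensed Java sketch of that Wikipedia-style example (method bodies are placeholders):
interface Button {
    void paint();
}

class WinButton implements Button {
    public void paint() { /* render Windows look */ }
}

class OSXButton implements Button {
    public void paint() { /* render OSX look */ }
}

interface GUIFactory {
    Button createButton();
}

class WinFactory implements GUIFactory {
    public Button createButton() { return new WinButton(); }
}

class OSXFactory implements GUIFactory {
    public Button createButton() { return new OSXButton(); }
}

// Client code depends only on the interfaces; swapping the factory swaps every button,
// with no hunting through the code for concrete button classes.
class Client {
    static void render(GUIFactory factory) {
        factory.createButton().paint();
    }
}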
In my opinion, you see this so often because it is a very good practice that is often applied in the wrong situations.
There are many advantages to interfaces relative to abstract classes:
You can switch implementations w/o re-building code that depends on the interface. This is useful for: proxy classes, dependency injection, AOP, etc.
You can separate the API from the implementation in your code. This can be nice because it makes it obvious when you're changing code that will affect other modules.
It allows developers writing code that is dependent on your code to easily mock your API for testing purposes.
You gain the most advantage from interfaces when dealing with modules of code. However, there is no easy rule to determine where module boundaries should be. So this best practice is easy to over-use, especially when first designing some software.
I would assume (along with @eed3s9n) that it's to promote loose coupling. Also, without interfaces unit testing becomes much more difficult, as you can't mock up your objects.
Why extends is evil. This article is pretty much a direct answer to the question asked. I can think of almost no case where you would actually need an abstract class, and plenty of situations where it is a bad idea. This does not mean that implementations using abstract classes are bad, but you will have to take care so you do not make the interface contract dependent on artifacts of some specific implementation (case in point: the Stack class in Java).
One more thing: it is not necessary, or good practice, to have interfaces everywhere. Typically, you should identify when you need an interface and when you do not. In an ideal world, the second case should be implemented as a final class most of the time.
There are some excellent answers here, but if you're looking for a concrete reason, look no further than Unit Testing.
Consider that you want to test a method in the business logic that retrieves the current tax rate for the region where a transaction occurs. To do this, the business logic class has to talk to the database via a Repository:
interface IRepository<T> { T Get(string key); }

class TaxRateRepository : IRepository<TaxRate> {
    protected internal TaxRateRepository() {}

    public TaxRate Get(string key) {
        // retrieve a TaxRate (obj) from the database
        return obj;
    }
}
Throughout the code, use the type IRepository instead of TaxRateRepository.
The repository has a non-public constructor to encourage users (developers) to use the factory to instantiate the repository:
public static class RepositoryFactory {
    // a static constructor: static classes cannot have instance constructors
    static RepositoryFactory() {
        TaxRateRepository = new TaxRateRepository();
    }

    public static IRepository<TaxRate> TaxRateRepository { get; private set; }

    public static void SetTaxRateRepository(IRepository<TaxRate> rep) {
        TaxRateRepository = rep;
    }
}
The factory is the only place where the TaxRateRepository class is referenced directly.
So you need some supporting classes for this example:
class TaxRate {
    public string Region { get; protected set; }
    public decimal Rate { get; set; } // settable so the mock test below can assign it
}
static class Business {
    public static decimal GetRate(string region) {
        var taxRate = RepositoryFactory.TaxRateRepository.Get(region);
        return taxRate.Rate;
    }
}
And there is also another implementation of IRepository - the mock:
class MockTaxRateRepository : IRepository<TaxRate> {
    public TaxRate ReturnValue { get; set; }
    public bool GetWasCalled { get; protected set; }
    public string KeyParamValue { get; protected set; }

    public TaxRate Get(string key) {
        GetWasCalled = true;
        KeyParamValue = key;
        return ReturnValue;
    }
}
Because the live code (the Business class) uses a factory to get the repository, in the unit test you plug in the MockTaxRateRepository in place of the TaxRateRepository. Once the substitution is made, you can hard-code the return value and make the database unnecessary.
class MyUnitTestFixture {
    MockTaxRateRepository rep = new MockTaxRateRepository();

    [FixtureSetup]
    void ConfigureFixture() {
        RepositoryFactory.SetTaxRateRepository(rep);
    }

    [Test]
    void Test() {
        var region = "NY.NY.Manhattan";
        var rate = 8.5m;
        rep.ReturnValue = new TaxRate { Rate = rate };
        var r = Business.GetRate(region);
        Assert.IsTrue(rep.GetWasCalled);
        Assert.AreEqual(region, rep.KeyParamValue);
        Assert.AreEqual(rate, r);
    }
}
Remember, you want to test the business logic method only, not the repository, database, connection string, etc... There are different tests for each of those. By doing it this way, you can completely isolate the code that you are testing.
A side benefit is that you can also run the unit test without a database connection, which makes it faster, more portable (think multi-developer team in remote locations).
Another side benefit is that you can use the Test-Driven Development (TDD) process for the implementation phase of development. I don't strictly use TDD but a mix of TDD and old-school coding.
In one sense, I think your question boils down to simply, "why use interfaces and not abstract classes?" Technically, you can achieve loose coupling with both -- the underlying implementation is still not exposed to the calling code, and you can use the Abstract Factory pattern to return an underlying implementation (interface implementation vs. abstract class extension) to increase the flexibility of your design. In fact, you could argue that abstract classes give you slightly more, since they allow you to both require implementations to satisfy your code ("you MUST implement start()") and provide default implementations ("I have a standard paint() you can override if you want to") -- with interfaces, all implementations must be provided, which over time can lead to brittle inheritance problems through interface changes.
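For reference, a tiny Java sketch of that required-plus-default mix (the method names follow the quotes above):
abstract class Widget {
    // "you MUST implement start()"
    abstract void start();

    // "I have a standard paint() you can override if you want to"
    void paint() { System.out.println("default paint"); }
}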
Fundamentally, though, I use interfaces mainly due to Java's single inheritance restriction. If my implementation MUST inherit from an abstract class to be used by calling code, that means I lose the flexibility to inherit from something else even though that may make more sense (e.g. for code reuse or object hierarchy).
One reason is that interfaces allow for growth and extensibility. Say, for example, that you have a method that takes an object as a parameter,
public void drink(coffee someDrink)
{
}
Now let's say you want to use the exact same method, but pass a hotTea object. Well, you can't. You just hard-coded that above method to only use coffee objects. Maybe that's good, maybe that's bad. The downside of the above is that it strictly locks you in with one type of object when you'd like to pass all sorts of related objects.
By using an interface, say IHotDrink,
interface IHotDrink { }
and rewriting your above method to use the interface instead of the object,
public void drink(IHotDrink someDrink)
{
}
Now you can pass all objects that implement the IHotDrink interface. Sure, you can write the exact same method that does the exact same thing with a different object parameter, but why? You're suddenly maintaining bloated code.
It's all about designing before coding.
If you don't know all the relationships between two objects after you have specified the interface, then you have done a poor job of defining the interface -- which is relatively easy to fix.
If you had dived straight into coding and realised halfway through that you were missing something, it's a lot harder to fix.
You could see this from a Perl/Python/Ruby perspective:
when you pass an object as a parameter to a method, you don't pass its type; you just know that it must respond to some methods.
I think considering Java interfaces as an analogy to that would best explain this. You don't really pass a type; you just pass something that responds to a method (a trait, if you will).
I think the main reason to use interfaces in Java is the limitation to single inheritance. In many cases this leads to unnecessary complication and code duplication. Take a look at traits in Scala: http://www.scala-lang.org/node/126 Traits are a special kind of abstract class, but a class can extend many of them.