Terminology for an API where users implement methods

There is an approach in library API design where the user must implement a subclass (or, sometimes, a set of functions) in order to use the API. For example, libraries may provide an (abstract) base class which the user must extend, instantiate, and then pass back to the library.
Is there a specific name for this kind of approach?
(The phrase "Service Provider Interface" appears in Java but not much elsewhere. The term is also widely used in "plug-in" architectures, but that does not seem to be the same thing.)

The term abstract has a precise, well-known definition within the programming community, so I think we could say
We provide an abstract API consisting of a set of abstract classes
and interfaces designed for you to extend and customise with concrete
implementations.
and most developers would be familiar with the intended meaning.
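For example, a minimal Java sketch of the shape being described (all names here are hypothetical, invented for illustration):
// The library ships an abstract base class with a hook the user must fill in.
abstract class Codec {
    protected abstract byte[] encode(String input);
}

// The user extends it with a concrete implementation...
class Utf8Codec extends Codec {
    @Override
    protected byte[] encode(String input) {
        return input.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}

// ...instantiates it, and passes it back to the library:
// library.register(new Utf8Codec());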

Related

How to monkey patch a generic type tag function table

I found it interesting to read about one of the ways you can do functional dynamic dispatch in SICP - using a table of type tag + name -> function that you can fetch from or add to.
I was wondering: is this a typical type-dispatch mechanism for a dynamic non-OO language?
Also, what would be the typical way to monkey patch this - using a chained list of tables (if you don't find it in the first table, try the next table recursively)? Rebinding the table within local scope to a modified copy? Etc.?
I believe this is a typical type dispatch mechanism, even for non-dynamic non-OO languages, based on this article about the JHC Haskell compiler and how it implements type classes. The implication in the article is that most Haskell compilers implement type classes (a kind of type dispatch) by passing dictionaries. His alternative is direct case analysis, which likely would not be applicable in dynamically typed languages, since you don't know ahead of time what the types of the constituents of your expression will be. On the other hand, this isn't extensible at run-time either.
As for dynamic non-OO languages, I'm not aware of many examples outside Lisp/Scheme. Common Lisp's CLOS makes Lisp a proper OO language and provides dynamic dispatch as well as multiple dispatch (you can add or remove generics and methods at run-time, and they can key off the type of more than just the first parameter). I don't know how this is usually implemented, but I do know that it is usually an add-on facility rather than a built-in facility, which implies it's using functionality available to the would-be monkey-patcher, and also that certain versions have been criticized for their lack of speed (CLISP, I think, but they may have resolved this). Of course, you could implement this type of parallel dispatch mechanism within an OO language as well, and you can probably find plenty of examples of that.
If you were using purely functional persistent maps or dictionaries, you could certainly implement this facility without even needing the chain of inherited maps; as you "modify" the map, you get a new map back, but all the existing references to the old map would still be valid and see it as the old version. If you were implementing a language with this facility, you could interpret it by putting the type->function map in the Reader monad and wrapping your interpreter in it.
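To make the table idea concrete, here is a rough sketch in Java (not SICP's Scheme; all names are invented) of a type tag + operation -> function table with the chained-fallback lookup described above:
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class DispatchTable {
    private final Map<String, Function<Object, Object>> entries = new HashMap<>();
    private final DispatchTable next; // next table in the chain, or null

    DispatchTable(DispatchTable next) { this.next = next; }

    void put(String typeTag, String op, Function<Object, Object> fn) {
        entries.put(typeTag + "/" + op, fn);
    }

    // Try the local table first; on a miss, recurse into the next table.
    Function<Object, Object> get(String typeTag, String op) {
        Function<Object, Object> fn = entries.get(typeTag + "/" + op);
        if (fn != null) return fn;
        if (next != null) return next.get(typeTag, op);
        throw new IllegalArgumentException("no entry for " + typeTag + "/" + op);
    }
}

// "Monkey patching" is then just layering a new table over the old one;
// code holding a reference to the base table never sees the patch:
// DispatchTable patched = new DispatchTable(base);
// patched.put("rational", "print", r -> "patched: " + r);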

Application design - When should interfaces be used?

I kind of understand an interface as being a contract that can be applied to classes that would otherwise have nothing in common (ex: Comparable in Java). However, in what situation(s) would you have the reflex of adding an interface at the design stage?
Whenever you are using a statically typed language and you want to make it possible for developers to use your code while providing an alternate implementation - in other words, whenever, in such a language, you need to achieve low(er) coupling.
Languages that use duck typing as a rule rather than strict type checking - Python, for example - generally have no need for interfaces.
"I kind of understand an interface as being a contract that can be applied to classes that would otherwise have nothing in common" - that's probably not the way to think about what an Interface is.
An interface describes behavior, and implementing an interface means a class enters into a contract to deliver that behavior.
By programming to an interface, rather than an implementation, you enable polymorphism and get more flexible code with lower coupling. For example, this method can take any instance that implements IQuack:
public void DoSomething(IQuack quacker)
{
    quacker.Quack(); // assuming IQuack declares a Quack() method
}
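Any class that implements IQuack - a real implementation or a test double - can be passed in without DoSomething changing at all.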
If you are designing a product and you know it is going to interact with some type of device or service, but not necessarily which one, you can use an interface to move forward with the overall architecture - PROVIDED that you know enough about that type of device to write an interface any given device of the type can successfully implement. Of course, if you are in the design phase, you had better have that knowledge. It's not uncommon to do high-level designs using only interface declarations. I'm not saying it's good or bad, but it seems to be a pretty common practice among those who use software (like Rational Rose) to generate a skeleton from UML.
Another time would be if you know exactly what device you are going to use but you think there might be a chance that you will need to work with different or multiple types of that device down the road.
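For instance, a device-family interface might be no more than this (a Java sketch; the names are invented):
// Enough is known about receipt printers as a category to fix a contract,
// even though the concrete vendor/model is still undecided.
interface ReceiptPrinter {
    void print(String text);
    boolean isOnline();
}

// The rest of the architecture is written against ReceiptPrinter, and the
// vendor-specific class is supplied later:
// checkout.attachPrinter(new VendorXPrinter());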
A third usage of interfaces is to reduce duplicated code. This is probably the only place people ever get carried away with interface usage, and if it weren't for that, I'd be comfortable saying don't ask "Should this be an interface?" but "Can this be an interface?".

Why should you ever have to care whether an object reference is an interface or a class?

I often seem to run into the discussion of whether or not to apply some sort of prefix/suffix convention to interface type names, typically adding "I" to the beginning of the name.
Personally I'm in the camp that advocates no prefix, but that's not what this question is about. Rather, it's about one of the arguments I often hear in that discussion:
You can no longer see at a glance whether something is an interface or a class.
The question that immediately pops up in my head is: apart from object creation, why should you ever have to care whether an object reference is a class or an interface?
I've tagged this question as language agnostic, but as has been pointed out it may not be. I contend that it is because while specific language implementation details may be interesting, I'd like to keep this on a conceptual level. In other words, I think that, conceptually, you'd never have to care whether an object reference is typed as a class or an interface but I'm not sure, hence the question.
This is not a discussion about IDEs and what they do or don't do when visualizing the different types; caring about the type of an object is certainly a necessity when browsing through code (packages/sources/whatever form). Nor is it a discussion about the pros and cons of either naming convention. I just can't seem to figure out in what scenario, other than object creation, you actually care whether you're referencing a concrete type or an interface.
Most of the time, you probably don't care. But here are some instances that I can think of where you would. There are several, and it does vary a little bit by language. Some languages don't mind as much as others.
In the case of inversion of control (where someone PASSES you a parameter), you probably don't care whether it's an interface or an object as far as calling its methods goes. But when dealing with types, it definitely can make a difference.
In managed languages such as the .NET languages, a class can inherit from only one class but implement many interfaces, while an interface can extend several interfaces but no class. The order of the base class vs. the interfaces also matters in a class declaration (the base class must come first). So you need to know which is which when defining a new class or interface.
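Java follows the same rules; a minimal sketch (invented names):
class Vehicle { }
interface Floats { void floatOn(); }
interface Drives { void drive(); }

// A class extends at most one class but may implement many interfaces:
class AmphibiousCar extends Vehicle implements Floats, Drives {
    public void floatOn() { /* ... */ }
    public void drive()   { /* ... */ }
}

// An interface, by contrast, may extend several interfaces at once:
interface Amphibious extends Floats, Drives { }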
In Delphi / VCL, interfaces are reference-counted and automatically collected, whereas classes must be explicitly freed, so lifecycle management on the whole is affected, not just creation.
Interfaces may not be viable sources for class references.
Interfaces can be cast to compatible interfaces, but in many languages, they cannot be cast to compatible classes. Classes can be cast to either.
Interfaces may be passed to parameters of type IID, or IUnknown, whereas classes cannot (without a cast and a supporting interface).
An interface's implementation is unknown. Its input and output are defined, but the implementation that creates the output is abstracted away. In general, one's attitude may be that when working with a class, one may know how the class works, but when working with an interface, no such assumption should be made. In a perfect world it might make no difference, but in reality this most certainly can affect your design.
I agree with you (and thereby do not use an "I" prefix for interfaces). We shouldn't have to care whether it is an abstract class or an interface.
It's worth noting that Java needs a notion of interface solely because it does not support multiple inheritance. Otherwise, the "abstract class" concept would suffice (which may be entirely abstract, partially abstract, or almost concrete with just one tiny bit abstract, whatever).
Things that a concrete class can have and an interface can't:
Constructors
Instance fields
Static methods and mutable static fields (an interface's fields are implicitly public static final constants, and only since Java 8 may an interface declare static methods)
So if you use the convention of starting all interface names with 'I', it indicates to the users of your library that the particular type will have none of the above-mentioned things.
But personally I feel that this is not reason enough to start all interface names with 'I'. Modern IDEs are powerful enough to indicate whether some type is an interface. It also hides the true meaning of an interface name: imagine if the Runnable and List interfaces were named IRunnable and IList respectively.
When a class is used, I can make the assumption that I will get objects from a relatively small and almost well-defined range of subclasses. That's because subclassing is - or at least should be - a decision that isn't made too easily, especially in languages that don't support multiple inheritance. In contrast, interfaces can be implemented by any class, and the implementation can be added later to any class.
So the information is useful, especially when browsing through code and trying to get a feeling for what the author intended - but I think it should be enough if the IDE shows interfaces and classes as distinctive icons.
You want to see at a glance which are the "interfaces" and which are the "concrete classes" so that you can focus your attention to the abstractions in the design instead of the details.
Good designs are based on abstractions - if you know and understand them you understand the system without knowing any of the details. So you know you can skip the classes without the I prefix, and focus on the ones that do have it while you are understanding the code, and you also know to avoid building new code around non-interface classes without having to refer to some other design document.
I agree that the I* naming convention is just not appropriate for modern OO languages, but the truth is this question isn't really language-agnostic. There are legitimate cases where you have an interface not for any architectural reason but because you simply don't have an implementation, or don't have access to one. For these cases you can read I* as *Stub or similar, and in these cases it might make sense to have both an IBlah and a Blah class.
These days, though, you rarely come up against this, and in modern OO languages when you say "interface" you actually mean interface, not just "I don't have the code for this". So there is no need for the I*, and in fact it encourages really bad OO design, as you won't get the natural naming conflicts that would tell you something has gone wrong in your architecture. Say you had a List and an IList... what's the difference? When would you use one over the other? If you wanted to implement IList, would you be constrained (conceptually at least) by what List does? I'll tell you what: if I found both an IBlah and a Blah class in any of my codebases, I would purge one at random and take away that person's commit privileges.
Interfaces don't have fields, hence when you use IDisposable (or whatever), you know you're only declaring what you can do. That seems to me the main point of it.
Distinguishing between interfaces and classes may be useful, anywhere the type is referenced, in the IDE or out, to determine:
Can I make a new implementation of this type?
Can I implement this interface in a language that does not support multiple inheritance of implementation classes (e.g., Java)?
Can there be multiple implementations of this type?
Can I easily mock this interface in an arbitrary mocking framework?
It is worth noting that UML distinguishes between interfaces and implementation classes. In addition, the "I" prefix is used in the examples in "The Unified Modeling Language User Guide" by the three amigos Booch, Jacobson and Rumbaugh. (Incidentally, this also provides an example why IDE syntax coloring alone is not sufficient to distinguish in all contexts.)
You should care, because:
An interface (with a capital "I") enables you or your co-workers to use any implementation that implements the interface. If in the future you figure out a better way to do something, say a better list-sorting algorithm, you can swap it in; without the interface, you would be stuck having to change ALL of the invoking methods as well.
It helps in understanding code - e.g. you don't need to memorize all 10 implementations of, say, ISortableList; you just care that it sorts a list (or something like that). Your code becomes practically self-documenting here.
To complete the discussion, here is a pseudocode example illustrating the above:
//Pseudocode - define implementations of ISortableList
class SortList1 : ISortableList { /* ... */ }
class SortList2 : ISortableList { /* ... */ }
class SortList3 : ISortableList { /* ... */ }

//Pseudocode - the interface way
void Populate(ISortableList list, int[] nums)
{
    list.Set(nums);
}

//Pseudocode - the "I don't care" way
void Populate2(SortList1 list, int[] nums)
{
    list.Set(nums);
}

//Pseudocode - create instances
SortList1 list1 = new SortList1();
SortList2 list2 = new SortList2();
SortList3 list3 = new SortList3();

//Invoke Populate() - the "interface way"
Populate(list1, nums); //OK, list1 is an ISortableList implementation
Populate(list2, nums); //OK, list2 is an ISortableList implementation
Populate(list3, nums); //OK, list3 is an ISortableList implementation

//Invoke Populate2() - the "I don't care" way
Populate2(list1, nums); //OK, list1 is an instance of SortList1
Populate2(list2, nums); //Not OK, list2 is not of the required argument type - won't compile
Populate2(list3, nums); //same as above
Hope this helps,
Jas.

Why use the Singleton pattern?

So... I can't understand why I should even use the Singleton pattern in ActionScript 3. Can anyone explain this to me? Maybe I just don't understand its purpose. I mean, how does it differ from other patterns? How does it work?
I checked the PureMVC source and it's full of Singletons. Why are they using them in the View, Module, Controller?
I have next to no practical experience with PureMVC so I can't argue for or against their use of Singletons. Hence, I'll try to keep my answer more generic.
A singleton is a type of object that can only be instantiated once and is globally accessible.
Typically, this kind of pattern is used in order to have easy access to services of some kind, perhaps a service facade used to retrieve data from a server or an application model that holds information about settings or such.
The singleton pattern is considered by many to be an anti-pattern, for a number of reasons, a few of which are mentioned below:
They carry state, making certain tasks such as unit testing virtually impossible.
They inherently violate the Single Responsibility Principle.
They promote tight coupling between classes due to them being globally accessible.
I won't list all of the reasons why a singleton may be an anti-pattern; there are plenty of resources on the subject.
The singleton pattern restricts the instantiation of an object to only one instance. Sometimes in systems this pattern is used so an object that controls parts of the system can't be just created at-will. If you have some object that manages settings, for example, you would want something that changes settings to only modify that one object, and not create a new one.
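As a concrete illustration of that settings example, here is a minimal classic singleton, sketched in Java rather than ActionScript 3 since the structure is the same (names invented):
// Only one instance can ever exist, and it is globally reachable.
final class AppSettings {
    private static final AppSettings INSTANCE = new AppSettings();

    private String theme = "default";

    // The private constructor prevents outside instantiation.
    private AppSettings() { }

    static AppSettings getInstance() { return INSTANCE; }

    String getTheme() { return theme; }
    void setTheme(String theme) { this.theme = theme; }
}

// Every call site sees the same object, so a settings change made in one
// place is visible everywhere:
// AppSettings.getInstance().setTheme("dark");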

Interfaces vs Public Class Members

I've noticed that some programmers like to make interfaces for just about all their classes. I like interfaces for certain things (such as checking if an object supports a certain behavior and then having an interface for that behavior) but overuse of interfaces can sometimes bloat the code. When I declare methods or properties as public I'd expect people to just use my concrete classes and I don't really understand the need to create interfaces on top of that.
I'd like to hear your take on interfaces. When do you use them and for what purposes?
Thank you.
Applying any kind of design pattern or idea without thinking, just because somebody told you it's good practice, is a bad idea.
That of course includes creating a separate interface for each and every class you create. You should at least be able to give a good reason for every design decision, and "because Joe says it's good practice" is not a good enough reason.
Interfaces are good for decoupling the interface of some unit of code from its implementation. One reason to create an interface is that you foresee multiple implementations of it in the future. Interfaces can also help with unit testing: you can make mock implementations of the services that the unit under test depends on, and plug the mocks in instead of "the real thing" for testing.
Interfaces are a powerful tool for abstraction. With them, you can more freely substitute (for example) test classes and thereby decouple your code. They are also a way to narrow the scope of your code; you probably don't need the full feature set of a given class in a particular place - exactly what features do you need? That's a client-focused way of thinking about interfaces.
Unit tests.
With an interface describing all of a class's methods and properties, creating a mock-up class to simulate behavior that is outside the scope of the test is within reach of a click.
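For example (a Java sketch; the Clock interface and both implementations are invented for illustration):
interface Clock {
    long now();
}

// Production implementation backed by the real system time.
class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

// Hand-rolled mock for tests: time is fixed and fully controllable.
class FixedClock implements Clock {
    private final long fixedMillis;
    FixedClock(long fixedMillis) { this.fixedMillis = fixedMillis; }
    public long now() { return fixedMillis; }
}

// Code written against Clock can be tested with FixedClock and shipped
// with SystemClock, without changing a line of it.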
It's all about expecting and preparing for change.
One approach that some use (and I'm not necessarily advocating it) is to create an IThing and a ThingFactory. All code will reference IThing (instead of ConcreteThing), and all object creation can be done via the factory method:
ThingFactory.CreateThing(some params);
So, today we only have AmericanThing, and the possibility is that we may never need another. However, if experience has taught me anything, it is that we will ALWAYS need another.
You may not need EuropeanThing, but TexasAmericanThing is a distinct possibility.
So, in order to minimize the impact on my code, I can change the creational line to:
ThingFactory.CreateThing(account);
and create my class TexasAmericanThing : IThing.
Other than building the class, the only change is to the ThingFactory, which will require a change from
public static IThing CreateThing(Account a)
{
    return new AmericanThing();
}
to
public static IThing CreateThing(Account a)
{
    if (a.State == State.TEXAS) return new TexasAmericanThing();
    return new AmericanThing();
}
I've seen plenty of mindless interfaces myself. However, when used intelligently, they can save the day. You should use interfaces for decoupling two components or two layers of an application. This can enable you to easily plug in varying implementations of the interface without affecting the client, or simply insulate the client from constant changes to the implementation, as long as you stay true to the contract of the interface. This can make the code more maintainable in the long term and save the effort of refactoring later.
However, overly aggressive decoupling can make for non-intuitive code, and overuse can become a nuisance. You should carefully identify the cohesive parts of your application and the boundaries between them, and use interfaces there. Another benefit of using interfaces between such parts is that they can be developed in parallel and tested independently using mock implementations of the interfaces they use.
OTOH, having client code access public member methods directly is perfectly okay if you really don't foresee any changes to the class that might also necessitate changes in the client. In any case, however, I think having public member fields is not good - that is extremely tight coupling! You are basically exposing the architecture of your class and making the client code dependent on it. If tomorrow you realize that another data structure would perform better for a particular field, you can't change it without also changing the client code.
I primarily use interfaces for IoC to enable unit testing.
On the one hand, this could be interpreted as premature generalization. On the other hand, using interfaces as a rule helps you write code that is more easily composable and hence testable. I think the latter wins out in many cases.
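A small sketch of what that looks like (Java; all names invented):
interface MessageSender {
    void send(String to, String body);
}

class OrderService {
    private final MessageSender sender;

    // The dependency is injected rather than constructed internally.
    OrderService(MessageSender sender) { this.sender = sender; }

    void confirm(String customer) {
        sender.send(customer, "Your order is confirmed.");
    }
}

// Production wiring: new OrderService(new SmtpSender());
// Unit test wiring: new OrderService(new RecordingSender()); no mail
// server needed, and the test can assert on what was recorded.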
I like interfaces:
* to define a contract between parts/modules/subsystems or 3rd party systems
* when there are exchangeable states or algorithms (state/strategy) - see the sketch below
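For the strategy case, a brief Java sketch (invented names):
interface CompressionStrategy {
    byte[] compress(byte[] data);
}

class Archiver {
    private final CompressionStrategy strategy;
    Archiver(CompressionStrategy strategy) { this.strategy = strategy; }

    byte[] archive(byte[] data) {
        // The algorithm is exchangeable without touching Archiver itself.
        return strategy.compress(data);
    }
}

// new Archiver(new GzipStrategy()) vs. new Archiver(new LzmaStrategy())
// - the client picks the algorithm; Archiver's code never changes.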