What are some different ways of implementing a plugin system? - language-agnostic

I'm not looking so much for language-specific answers, just general models for implementing a plugin system (if you want to know, I'm using Python). I have my own idea (register callbacks, and that's about it), but I know others exist. What's normally used, and what else is reasonable?
What do you mean by a plugin system? Do Dependency Injection and IoC containers sound like a good solution?
I mean, uh, well, a way to insert functionality into the base program without altering it. I didn't intend to define it when I set out. Dependency Injection doesn't look particularly suitable for what I'm doing, but I don't know much about it.
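For reference, the register-callbacks model the question mentions can be very small. A minimal Python sketch (the hook name and the example plugin are invented):

_hooks = {}

def register(hook_name, callback):
    """Called by plugin code to attach a callback to a named hook."""
    _hooks.setdefault(hook_name, []).append(callback)

def fire(hook_name, *args, **kwargs):
    """Called by the host application at each of its extension points."""
    return [cb(*args, **kwargs) for cb in _hooks.get(hook_name, [])]

# a plugin module registers itself at import time...
register("document_saved", lambda path: print("backing up", path))

# ...and the host fires the hook after saving a document
fire("document_saved", "notes.txt")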

A simple plugin architecture can define a plugin interface with all the methods the plugin ought to implement. The plugin handles events from the application, and can use the application's standard code, model objects, etc. to get things done. Basically the same as an ASP.NET Form does, except that you're overriding rather than implementing.
Nobody taught me this part, and I'm no expert, but I feel that, in general, a plugin will be less stable than its host application, so the application should always be in control and only give the plugin periodic opportunities to act. If a plugin can register an observer, then calls to that delegate should be wrapped in try/catch.
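A minimal Python sketch of both points together (the interface and method names are invented): the host defines the plugin interface, stays in control, and guards every call into plugin code.

import logging

class Plugin:
    """Interface every plugin ought to implement."""
    def on_startup(self, app): pass
    def on_document_saved(self, app, path): pass

class Application:
    def __init__(self, plugins):
        self.plugins = plugins

    def notify(self, method_name, *args):
        for plugin in self.plugins:
            try:
                # a misbehaving plugin is logged and skipped,
                # never allowed to take the host down
                getattr(plugin, method_name)(self, *args)
            except Exception:
                logging.exception("plugin %r failed in %s", plugin, method_name)

app = Application([Plugin()])
app.notify("on_document_saved", "notes.txt")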

There is a very good episode of Software Engineering Radio, which you may be interested in.
For future reference, I have reproduced here the "Rules for Enablers" (alternative link) given in the excellent Contributing to Eclipse by Erich Gamma and Kent Beck.
Invitation Rule - Whenever possible, let others contribute to your contributions.
Lazy Loading Rule - Contributions are only loaded when they are needed.
Safe Platform Rule - As the provider of an extension point, you must protect yourself against misbehavior on the part of extenders.
Fair Play Rule - All clients play by the same rules, even me.
Explicit Extension Rule - Declare explicitly where a platform can be extended.
Diversity Rule - Extension points accept multiple extensions.
Good Fences Rule - When passing control outside your code, protect yourself.
Explicit API Rule - Separate the API from internals.
Stability Rule - Once you invite someone to contribute, don't change the rules.
Defensive API Rule - Reveal only the API in which you are confident, but be prepared to reveal more API as clients ask for it.

In Python you can use the entry-point system provided by setuptools and pkg_resources. Each entry point should be a function that returns information about the plugin -- name, author, setup and teardown functions, etc.
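A sketch of how that looks (the group name "myapp.plugins" and the returned dict are illustrative, not part of any real API; pkg_resources has since been superseded by importlib.metadata, but it is the module this answer names). In the plugin package's setup.py:

from setuptools import setup

setup(
    name="myapp-fancy-plugin",
    version="0.1",
    py_modules=["fancy_plugin"],
    entry_points={
        "myapp.plugins": ["fancy = fancy_plugin:plugin_info"],
    },
)

And in the host application:

import pkg_resources

for entry_point in pkg_resources.iter_entry_points("myapp.plugins"):
    plugin_info = entry_point.load()   # the function the plugin exported
    info = plugin_info()               # e.g. {"name": ..., "setup": ..., "teardown": ...}
    info["setup"]()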

How about abstract factory? Your base program defines how the abstract concepts interact with each other, but the caller has to provide the implementation.
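A small Python sketch of that idea (the factory and product classes are invented): the base program codes against the abstract factory, and the caller supplies the concrete family of objects.

from abc import ABC, abstractmethod

# the base program defines the abstract roles and how they interact...
class ExportFactory(ABC):
    @abstractmethod
    def make_formatter(self): ...

def export(records, factory):
    return factory.make_formatter().format(records)

# ...and the caller provides the implementation
class CsvFormatter:
    def format(self, records):
        return "\n".join(",".join(map(str, row)) for row in records)

class CsvExportFactory(ExportFactory):
    def make_formatter(self):
        return CsvFormatter()

print(export([(1, "a"), (2, "b")], CsvExportFactory()))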

Is it wrong to use deprecated methods or classes in Java?

I am using Eclipse to develop a web application. Just today I updated my Struts version by changing the JAR file. I am getting warnings in some places that methods are deprecated, but the code works fine.
I want to know a couple of things:
Is it wrong to use deprecated methods or classes in Java?
What if I don't change any methods and run my application with the warnings I have - will it create any performance issue?
1. Is it wrong to use Deprecated methods or classes in Java?
From the definition of deprecated:
A program element annotated @Deprecated is one that programmers are discouraged from using, typically because it is dangerous, or because a better alternative exists.
The method is kept in the API for backward compatibility for an unspecified period of time, and may in future releases be removed. That is, no, it's not wrong, but there is a better way of doing it, which is more robust against API changes.
2. What if I don't change any methods and run my application with the warnings I have - will it create any performance issue?
Most likely no. It will continue to work as before the deprecation. The contract of the API method will not change. If some internal data structure changes in favor of a new, better method, there could be a performance impact, but it's quite unlikely.
The funniest deprecation in the Java API is, IMO, FontMetrics.getMaxDecent. The reason for deprecation: a spelling error.
Deprecated. As of JDK version 1.1.1, replaced by getMaxDescent().
You can still use deprecated code without performance being changed, but the whole point of deprecating a method/class is to let users know there's now a better way of using it, and that in a future release the deprecated code is likely to be removed.
Terminology
From the official Sun glossary:
deprecation: Refers to a class, interface, constructor, method or field that is no longer recommended, and may cease to exist in a future version.
From the how-and-when to deprecate guide:
You may have heard the term, "self-deprecating humor," or humor that minimizes the speaker's importance. A deprecated class or method is like that. It is no longer important. It is so unimportant, in fact, that you should no longer use it, since it has been superseded and may cease to exist in the future.
The @Deprecated annotation goes a step further and warns of danger:
A program element annotated @Deprecated is one that programmers are discouraged from using, typically because it is dangerous, or because a better alternative exists.
References
java.sun.com Glossary
Language guide/How and When to Deprecate APIs
Annotation Type Deprecated API
Right or wrong?
The question of whether it's right or wrong to use deprecated methods has to be examined on an individual basis. Here are ALL the quotes where the word "deprecated" appears in Effective Java, 2nd Edition:
Item 7: Avoid finalizers: The only methods that claim to guarantee finalization are System.runFinalizersOnExit and its evil twin Runtime.runFinalizersOnExit. These methods are fatally flawed and have been deprecated.
Item 66: Synchronize access to shared mutable data: The libraries provide the Thread.stop method, but this method was deprecated long ago because it's inherently unsafe -- its use can result in data corruption.
Item 70: Document thread safety: The System.runFinalizersOnExit method is thread-hostile and has been deprecated.
Item 73: Avoid thread groups: They allow you to apply certain Thread primitives to a bunch of threads at once. Several of these primitives have been deprecated, and the remainder are infrequently used. [...] thread groups are obsolete.
So at least with all of the above methods, it's clearly wrong to use them, at least according to Josh Bloch.
With other methods, you'd have to consider the issues individually and understand WHY they were deprecated; but generally speaking, when the decision to deprecate is justified, continuing to use them tends to lean more toward wrong than right.
Related questions
Difference between a Deprecated and Legacy API?
Aside from all the excellent responses above, I found there is another reason to remove deprecated API calls.
By researching why a call is deprecated, I often find myself learning interesting things about Java, the API, or the framework. There is often a good reason why a method is being deprecated, and understanding those reasons leads to deeper insights.
So from a learning/growing perspective, it is also a worthwhile effort.
It certainly doesn't create a performance issue. Deprecated means that in the future the function likely won't be part of the library anymore, so you should avoid using it in new code and change your old code to stop using it, so that you don't run into problems one day when you upgrade Struts and find that the function is no longer present.
It's not wrong, it's just not recommended. It generally means that at this point there is a better way of doing things, and you'd do well to use the new, improved way. Some deprecated features are really dangerous and should be avoided altogether. The new way can yield better performance than the deprecated one, but that's not always the case.
You may have heard the term, "self-deprecating humor". That is humor that minimizes your importance. A deprecated class or method is like that. It is no longer important. It is so unimportant, in fact, that it should no longer be used at all, as it will probably cease to exist in the future.
Try to avoid it
Generally no, it's not absolutely wrong to use deprecated methods, as long as you have a good contingency plan to avoid problems if/when those methods disappear from the library you're using. With the Java API itself this essentially never happens, but with just about anything else deprecation means the method is eventually going to be removed. If you specifically plan not to upgrade your software's supporting libraries (although you most likely should in the long run), then there's no problem in using deprecated methods.
No.
Yes, it is wrong.
Deprecated methods or classes will be removed in future versions of Java and should not be used. In each case, there should be an alternative available. Use that.
There are a couple of cases when you have to use a deprecated class or method in order to meet a project goal. In this case, you really have no choice but to use it. Future versions of Java may break that code, but if it's a requirement you have to live with that. It probably isn't the first time you had to do something wrong in order to meet a project requirement, and it certainly won't be the last.
When you upgrade to a new version of Java or some other library, sometimes a method or a class you were using becomes deprecated. Deprecated methods are not supported, but shouldn't produce unexpected results. That doesn't mean that they won't, though, so switch your code ASAP.
The deprecation process is there to make sure that authors have enough time to change their code over from an old API to a new API. Make use of this time. Change your code over ASAP.
It is not wrong, but some deprecated methods are removed in future versions of the software, so you may end up with non-working code.
Is it wrong to use deprecated methods or classes in Java?
Not wrong as such, but avoiding them can save you some trouble. Here is an example where using a deprecated method is strongly discouraged:
http://java.sun.com/j2se/1.4.2/docs/guide/misc/threadPrimitiveDeprecation.html
Why is Thread.stop deprecated?
Because it is inherently unsafe. Stopping a thread causes it to unlock all the monitors that it has locked. (The monitors are unlocked as the ThreadDeath exception propagates up the stack.) If any of the objects previously protected by these monitors were in an inconsistent state, other threads may now view these objects in an inconsistent state. Such objects are said to be damaged. When threads operate on damaged objects, arbitrary behavior can result. This behavior may be subtle and difficult to detect, or it may be pronounced. Unlike other unchecked exceptions, ThreadDeath kills threads silently; thus, the user has no warning that his program may be corrupted. The corruption can manifest itself at any time after the actual damage occurs, even hours or days in the future.
What if I don't change any method and run my application with the warnings I have - will it create any performance issue?
There should be no issues in terms of performance. The standard API is designed to respect some backward compatibility so applications can be gradually adapted to newer versions of Java.
Is it wrong to use Deprecated methods or classes in Java?
It is not "wrong", still working but avoid it as much as possible.
Suppose there is a security vulnerability associated with a method and the developers determine that it is a design flaw. So they may decide to deprecate the method and introduce the new way.
So if you still use the old method, you have a threat. So be aware of the reason to the deprecation and check whether how it affects to you.
what if don't change any method and run my application with warnings that I have, will it create any performance issue.
If the deprecation is due to a performance issue, then you will suffer from a performance issue, otherwise there is no reason to have such a problem. Again would like to point out, be aware of the reason to deprecation.
In Java it's @Deprecated, in C# it's [Obsolete].
I think I prefer C#'s terminology. It just means it's obsolete. You can still use it if you want to, but there's probably a better way.
It's like using Windows 3.1 instead of Windows 7, if you believe that Windows 3.1 is obsolete. You can still use it, but there are probably better features in the newer version; plus, the newer versions will probably be supported while the obsolete one won't be.
Same for Java's @Deprecated - you can still use the method, but at your own risk. In future, it might have better alternatives, and might not even be supported.
If you are using code that is deprecated, it's usually fine, as long as you don't have to upgrade to a newer API - the deprecated code might not exist there. I suggest if you see something that is using deprecated code, to update to use the newer alternatives (this is usually pointed out on the annotation or in a Javadoc deprecated comment).
Edit: And as pointed out by Michael, if the reason for deprecation is due to a flaw in the functionality (or because the functionality should not even exist), then obviously, one shouldn't use the deprecated code.
Of course not - since the whole of Java is getting @Deprecated :-) You can feel free to use them for as long as Java lasts; you're not going to notice any difference anyway, unless it's something really broken. Meaning: you have to read about it and then decide.
In .NET, however, when something is declared [Obsolete], go and read about it immediately, even if you have never used it before - there's about a 50% chance that it's more efficient and/or easier to use than its replacement :-))
So in general, it can be quite beneficial to be techno-conservative these days, but you have to do your reading chore first.
I feel that a deprecated method means there is an alternative method available which is better in every respect than the existing one. Better to use the good method than the old one. For backward compatibility, the old methods are left in place as deprecated.

Is it possible to use SecureSWF and still utilize reflection?

I have just inherited a project that uses SecureSWF. I am trying to utilize RobotLegs (which uses SwiftSuspenders for reflection to implement dependency injection) and have just discovered that SecureSWF breaks the build. Has anyone had a similar problem? Is there a workaround? Is it possible to obfuscate a SWF that's built with RobotLegs at all?
It's straightforward, actually: you need NAMES for reflection, and names are the primary target for ANY kind of obfuscation and mangling. Since we absolutely cannot abuse the verify mechanism in the Flash Player VM (which is damn good), we have no way of getting around it.
I'm using SecureSWF too, and I have a mechanism for sewing skins and controllers together with describeType() and a hell of a lot of checking of types and members. I exclude my obfuscation-sensitive classes from the protection workflow. They are of no use to a hacker anyway.

Framework vs. Toolkit vs. Library [duplicate]

What is the difference between a Framework, a Toolkit and a Library?
The most important difference, and in fact the defining difference between a library and a framework is Inversion of Control.
What does this mean? Well, it means that when you call a library, you are in control. But with a framework, the control is inverted: the framework calls you. (This is called the Hollywood Principle: Don't call Us, We'll call You.) This is pretty much the definition of a framework. If it doesn't have Inversion of Control, it's not a framework. (I'm looking at you, .NET!)
Basically, all the control flow is already in the framework, and there's just a bunch of predefined white spots that you can fill out with your code.
A library on the other hand is a collection of functionality that you can call.
I don't know if the term toolkit is really well defined. Just the word "kit" seems to suggest some kind of modularity, i.e. a set of independent libraries that you can pick and choose from. What, then, makes a toolkit different from just a bunch of independent libraries? Integration: if you just have a bunch of independent libraries, there is no guarantee that they will work well together, whereas the libraries in a toolkit have been designed to work well together – you just don't have to use all of them.
But that's really just my interpretation of the term. Unlike library and framework, which are well-defined, I don't think that there is a widely accepted definition of toolkit.
Martin Fowler discusses the difference between a library and a framework in his article on Inversion of Control:
Inversion of Control is a key part of what makes a framework different to a library. A library is essentially a set of functions that you can call, these days usually organized into classes. Each call does some work and returns control to the client.
A framework embodies some abstract design, with more behavior built in. In order to use it you need to insert your behavior into various places in the framework either by subclassing or by plugging in your own classes. The framework's code then calls your code at these points.
To summarize: your code calls a library but a framework calls your code.
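A toy Python sketch of that difference (TinyFramework and its API are invented purely for illustration):

# library: your code is in control and calls in; control returns after each call
import json
data = json.loads('{"x": 1}')

# framework: the control flow lives in the framework, which calls
# the code you plugged into its predefined white spots
class TinyFramework:
    def __init__(self):
        self.handlers = {}

    def route(self, name):
        def decorator(func):
            self.handlers[name] = func
            return func
        return decorator

    def run(self, requests):
        for name in requests:
            print(self.handlers[name]())   # the framework calls you

app = TinyFramework()

@app.route("hello")
def hello():
    return "Hello, world"

app.run(["hello"])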
Diagram
If you are a more visual learner, there is a diagram at the link below that makes it clearer:
(Credits: http://tom.lokhorst.eu/2010/09/why-libraries-are-better-than-frameworks)
The answer provided by Barrass is probably the most complete. However, the explanation could easily be stated more clearly. Most people miss the fact that these are all nested concepts. So let me lay it out for you.
When writing code:
eventually you discover sections of code that you're repeating in your program, so you refactor those into Functions/Methods.
eventually, after having written a few programs, you find yourself copying functions you already made into new programs. To save yourself time you bundle those functions into Libraries.
eventually you find yourself creating the same kind of user interfaces every time you make use of certain libraries. So you refactor your work and create a Toolkit that allows you to create your UIs more easily from generic method calls.
eventually, you've written so many apps that use the same toolkits and libraries that you create a Framework that has a generic version of this boilerplate code already provided so all you need to do is design the look of the UI and handle the events that result from user interaction.
Generally speaking, this completely explains the differences between the terms.
Introduction
There are various terms relating to collections of related code, which have both historical (pre-1994/5 for the purposes of this answer) and current implications, and the reader should be aware of both, particularly when reading classic texts on computing/programming from the historic era.
Library
Both historically, and currently, a library is a collection of code relating to a specific task, or set of closely related tasks which operate at roughly the same level of abstraction. It generally lacks any purpose or intent of its own, and is intended to be used by (consumed) and integrated with client code to assist client code in executing its tasks.
Toolkit
Historically, a toolkit is a more focused library, with a defined and specific purpose. Currently, this term has fallen out of favour, and is used almost exclusively (to this author's knowledge) for graphical widgets and GUI components in the current era. A toolkit will most often operate at a higher layer of abstraction than a library, and will often consume and use libraries itself. Unlike libraries, toolkit code will often be used to execute the task of the client code, such as building a window, resizing a window, etc. The lower levels of abstraction within a toolkit are either fixed, or can themselves be operated on by client code in a prescribed manner. (Think of window style, which can either be fixed, or which could be altered in advance by client code.)
Framework
Historically, a framework was a suite of inter-related libraries and modules which were separated into either 'General' or 'Specific' categories. General frameworks were intended to offer a comprehensive and integrated platform for building applications by offering general functionality, such as cross-platform memory management, multi-threading abstractions, and dynamic structures (and generic structures in general). Historical general frameworks (without dependency injection, see below) have almost universally been superseded by polymorphic templated (parameterised) packaged language offerings in OO languages, such as the STL for C++, or by packaged libraries for non-OO languages (guaranteed Solaris C headers). General frameworks operated at differing layers of abstraction, but universally low level, and like libraries relied on the client code carrying out its specific tasks with their assistance.
'Specific' frameworks were historically developed for single (but often sprawling) tasks, such as "Command and Control" systems for industrial systems, and early networking stacks, and operated at a high level of abstraction; like toolkits, they were used to carry out execution of the client code's tasks.
Currently, the definition of a framework has become more focused and taken on the "Inversion of Control" principle as mentioned elsewhere as a guiding principle, so program flow, as well as execution is carried out by the framework. Frameworks are still however targeted either towards a specific output; an application for a specific OS for example (MFC for MS Windows for example), or for more general purpose work (Spring framework for example).
SDK: "Software Development Kit"
An SDK is a collection of tools to assist the programmer to create and deploy code/content which is very specifically targeted to either run on a very particular platform or in a very particular manner. An SDK can consist of simply a set of libraries which must be used in a specific way only by the client code and which can be compiled as normal, up to a set of binary tools which create or adapt binary assets to produce its (the SDK's) output.
Engine
An Engine (In code collection terms) is a binary which will run bespoke content or process input data in some way. Game and Graphics engines are perhaps the most prevalent users of this term, and are almost universally used with an SDK to target the engine itself, such as the UDK (Unreal Development Kit) but other engines also exist, such as Search engines and RDBMS engines.
An engine will often, but not always, allow only a few of its internals to be accessible to its clients, most often to target a different architecture, to change the presentation of the engine's output, or for tuning purposes. Open source engines are by definition open to clients to change and alter as required, and some proprietary engines are fixed completely. The most often used engines in the world, however, are almost certainly JavaScript engines. Embedded into every browser everywhere, there is a whole host of JavaScript engines which will take JavaScript as input, process it, and then produce output for rendering.
API: "Application Programming Interface"
The final term I am answering is a personal bugbear of mine. API was historically used to describe the external interface of an application or environment which was itself capable of running independently, or at least of carrying out its tasks without any necessary client intervention after initial execution. Applications such as databases, word processors and windowing systems would expose a fixed set of internal hooks or objects to the external interface which a client could then call/modify/use, etc. to carry out capabilities which the original application could carry out. APIs varied in how much functionality was available through the API, and also in how much of the core application was (re)used by the client code. (For example, a word processing API may require the full application to be background loaded when each instance of the client code runs, or perhaps just one of its linked libraries; whereas a running windowing system would create internal objects to be managed by itself and pass back handles to the client code to be utilised instead.)
Currently, the term API has a much broader range, and is often used to describe almost every other term within this answer. Indeed, the most common definition applied to this term is that an API offers up a contracted external interface to another piece of software (Client code to the API). In practice this means that an API is language dependent, and has a concrete implementation which is provided by one of the above code collections, such as a library, toolkit, or framework.
To look at a specific area, protocols for example: an API is different from a protocol, which is a more generic term representing a set of rules. However, an individual implementation of a specific protocol/protocol suite that exposes an external interface to other software would most often be called an API.
Remark
As noted above, historic and current definitions of the above terms have shifted, and this can be seen to be down to advances in scientific understanding of the underlying computing principles and paradigms, and also down to the emergence of particular patterns of software. In particular, the GUI and Windowing systems of the early nineties helped to define many of these terms, but since the effective hybridisation of OS Kernel and Windowing system for mass consumer operating systems (bar perhaps Linux), and the mass adoption of dependency injection/inversion of control as a mechanism to consume libraries and frameworks, these terms have had to change their respective meanings.
P.S. (A year later)
After thinking carefully about this subject for over a year I reject the IoC principle as the defining difference between a framework and a library. There ARE a large number of popular authors who say that it is, but there are an almost equal number of people who say that it isn't. There are simply too many 'Frameworks' out there which DO NOT use IoC to say that it is the defining principle. A search for embedded or micro controller frameworks reveals a whole plethora which do NOT use IoC and I now believe that the .NET language and CLR is an acceptable descendant of the "general" framework. To say that IoC is the defining characteristic is simply too rigid for me to accept I'm afraid, and rejects out of hand anything putting itself forward as a framework which matches the historical representation as mentioned above.
For details of non-IoC frameworks, see, as mentioned above, many embedded and micro frameworks, as well as any historical framework in a language that does not provide callback through the language (OK. Callbacks can be hacked for any device with a modern register system, but not by the average programmer), and obviously, the .NET framework.
A library is simply a collection of methods/functions wrapped up into a package that can be imported into a code project and re-used.
A framework is a robust library or collection of libraries that provides a "foundation" for your code. A framework follows the Inversion of Control pattern. For example, the .NET Framework is a large collection of cohesive libraries on top of which you build your application. You can argue there isn't a big difference between a framework and a library, but when people say "framework" it typically implies a larger, more robust suite of libraries which will play an integral part of an application.
I think of a toolkit the same way I think of an SDK. It comes with documentation, examples, libraries, wrappers, etc. Again, you can say this is the same as a framework and you would probably be right to do so.
They can almost all be used interchangeably.
Very, very similar. A framework is usually a bit more developed and complete than a library, and a toolkit can simply be a collection of similar libraries and frameworks.
It's a really good question, maybe even the slightest bit subjective in nature, but I believe that is about the best answer I could give.
Library
I think it's unanimous that a library is code that has already been written which you can use so as not to have to write it again. The code must be organized in a way that allows you to look up the functionality you want and use it from your own code.
Most programming languages come with standard libraries, especially code that implements some kind of collection. This is always for convenience, so that you don't have to code these things yourself. Similarly, most programming languages have constructs that allow you to look up functionality in libraries, with things like dynamic linking, namespaces, etc.
So code that often needs to be re-used is great code to put inside a library.
Toolkit
A set of tools used for a particular purpose. This is unanimous. The question is, what is considered a tool and what isn't. I'd say there's no fixed definition, it depends on the context of the thing calling itself a toolkit. Example of tools could be libraries, widgets, scripts, programs, editors, documentation, servers, debuggers, etc.
Another thing to note is the "particular purpose". This is always true, but the scope of the purpose can easily change based on who made the toolkit. So it can easily be a programmer's toolkit, or it can be a string-parsing toolkit. One is so broad it could have tools touching everything programming-related, while the other is more precise.
SDKs are generally toolkits, in that they try and bundle a set of tools (often of multiple kind) into a single package.
I think the common thread is that a tool does something for you, either completely, or it helps you do it. And a toolkit is simply a set of tools which all perform or help you perform a particular set of activities.
Framework
Frameworks aren't quite as unanimously defined. It seems to be a bit of a blanket term for anything that can frame your code. Which would mean: any structure that underlies or supports your code.
This implies that you build your code against a framework, whereas you build a library against your code.
But it seems that sometimes the word framework is used in the same sense as toolkit or even library. The .NET Framework is mostly a toolkit, because it's composed of the FCL, which is a library, and the CLR, which is a virtual machine. So you would consider it a toolkit for C# development on Windows, with Mono being a toolkit for C# development on Linux. Yet they called it a framework. It makes sense to think of it this way too, since it kind of frames your code; but a frame should support and hold things together more than it does any kind of work, so my opinion is that this is not the way you should use the word.
And I think the industry is trying to move toward having "framework" mean an already-written program with missing pieces that you must provide or customize. Which I think is a good thing, since toolkit and library are great, precise terms for the other usages of "framework".
Framework: installed on your machine, allowing you to interact with it. Without the framework you can't send programming commands to your machine.
Library: aims to solve a certain problem (or several problems in the same category).
Toolkit: a collection of many pieces of code that can solve multiple problems on multiple issues (just like a toolbox).
It's a little bit subjective, I think. The toolkit is the easiest: it's just a bunch of methods and classes that can be used.
For the library vs. framework question, I distinguish by the way they are used. I read the perfect answer somewhere a long time ago: the framework calls your code, but your code calls the library.
In relation to the correct answer from Mittag:
A simple example. Let's say you implement the ISerializable interface (.NET) in one of your classes. You are then making use of the framework qualities of .NET, rather than its library qualities. You fill in the "white spots" (as Mittag said) and you have the skeleton completed. You must know in advance how the framework is going to "react" to your code. Actually .NET IS a framework, and here is where I disagree with Mittag's view.
The full, complete answer to your question is given very lucidly in Chapter 19 (the whole chapter is devoted to just this theme) of this book, which is a very good book by the way (and not at all "just for Smalltalk").
Others have noted that .NET may be a framework, a library, and a toolkit depending on which part you use, but perhaps an example helps. Entity Framework, for dealing with databases, is a part of .NET that does use the inversion of control pattern: you let it know about your models and it figures out what to do with them. As a programmer it requires you to understand "the mind of the framework", or more realistically the mind of its designers and what they are going to do with your inputs. DataReader and related calls, on the other hand, are simply a tool to get or put data to and from a table/view and make it available to you. A DataReader would never understand how to take a parent-child relationship and translate it from objects to relational form; you'd use multiple tools to do that. But you would have much more control over how that data was stored, when, transactions, etc.

Do any "major" frameworks make use of monkey-patching/open classes

I am curious about the usage of the feature known as open classes or monkey-patching in languages like e.g. Ruby, Python, Groovy etc. This feature allows you to make modifications (like adding or replacing methods) to existing classes or objects at runtime.
Does anyone know if major frameworks (such as Rails/Grails/Zope) make (extensive) use of this opportunity in order to provide services to the developer? If so, please provide examples.
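For concreteness, this is the kind of runtime modification meant here, sketched in Python with an invented class:

class Greeter:
    def greet(self):
        return "hello"

def shout(self):
    return self.greet().upper() + "!"

Greeter.shout = shout                # add a brand-new method
Greeter.greet = lambda self: "hi"    # replace an existing one

g = Greeter()
print(g.greet())   # hi
print(g.shout())   # HI!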
Rails does this to a (IMHO) ridiculous extent.
.NET allows it via extension methods.
LINQ, specifically, relies heavily on extension methods "monkey-patched" onto the IEnumerable interface.
An example of its use on the Java platform (since you mentioned Groovy) is load-time weaving with something like AspectJ and JVM instrumentation. In this particular case, however, you have the option of using compile-time weaving instead. Interestingly, one of my recent SO questions was related to problems with using this load-time weaving, with some recommending compile-time as the only reliable option.
An example of AspectJ using load-time (run-time) weaving to provide a helpful service to the developer is Spring's @Configuration annotation, which allows you to use Dependency Injection on objects not instantiated by Spring's BeanFactory.
You specifically mentioned modifying a method (or how it works), and an example of that being used is an aspect which intercepts an HTTP request before it is sent to the handler (some Controller method, doPost, etc.) and checks whether the user is authorized to access that resource. Your aspect could then decide to return, prematurely, a response with a redirect to login. While not modifying the contents of the method per se, you are still modifying the way the method works by changing the return value it would otherwise give.

Why would you want Dependency Injection without configuration?

After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice, with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't the whole point of using a DI infrastructure that you can alter the behaviour (the "strategy", so to speak) after building/releasing the code?
Can anyone give me a good use case that validates using a non-configured DI like Ninject?
I don't think you want a DI-framework without configuration. I think you want a DI-framework with the configuration you need.
I'll take spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime you basically define which component roles your application contains. So a given service may, for instance, have one implementation for the "regular runtime", while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When looking at the problem this way, you quickly realize that most applications only contain a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to this role; it is really the reason-of-existence of that implementation.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all.
Of course, there's always going to be exceptions, but then you just handle the exceptions instead.
I'm on a path with krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it would make sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mockup, so no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something the DI framework should know already :)
In my apps, I use the class object as the key to look up implementations and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
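A minimal Python sketch of that convention (all names are invented): the class object is the lookup key, and resolving an unregistered key just instantiates the key itself, so the key doubles as its own default implementation.

class Container:
    def __init__(self):
        self._overrides = {}

    def register(self, key, implementation):
        self._overrides[key] = implementation

    def resolve(self, key):
        # fall back to the key itself when no override is configured
        return self._overrides.get(key, key)()

class SmtpMailer:
    def send(self, message):
        print("smtp:", message)

class FakeMailer(SmtpMailer):
    def send(self, message):
        print("fake:", message)

container = Container()
container.resolve(SmtpMailer).send("hi")    # default: smtp: hi
container.register(SmtpMailer, FakeMailer)  # one of the rare overrides
container.resolve(SmtpMailer).send("hi")    # override: fake: hi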
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that - just create and inject the mocks in the unit test code.
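A sketch of that in Python, assuming the standard library's unittest.mock and an invented service class; the mock is wired in by hand, with no container or configuration involved:

from unittest.mock import Mock

class OrderService:
    def __init__(self, notifier):
        self.notifier = notifier

    def place(self, order_id):
        self.notifier.send("order %s placed" % order_id)

notifier = Mock()
OrderService(notifier).place(42)
notifier.send.assert_called_once_with("order 42 placed")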
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject!
Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config. Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
You don't even need to use a DI framework to apply the dependency injection pattern. You can simply make use of static factory methods for creating your objects, if you don't need configurability apart from recompiling code.
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without code recompilation, you'll want something you can configure via text or XML files.
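A sketch of the static-factory approach in Python (the classes are invented): the factory is the single place that decides which implementations are used, and changing that wiring means editing and rebuilding the code.

class PostgresUserStore:
    def load(self, user_id):
        return {"id": user_id}

class UserService:
    def __init__(self, store):
        self.store = store

    def get(self, user_id):
        return self.store.load(user_id)

def make_user_service():
    # the one place that decides which implementations are used
    return UserService(PostgresUserStore())

service = make_user_service()
print(service.get(7))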
I'll second the use of DI for testing. I only really consider using DI at the moment for testing, as our application doesn't require any configuration-based flexibility - it's also far too large to consider at the moment.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example control the injection of the components in your business logic. Or use a DI framework to make unit testing easier.
You should read about PRISM in .NET (it's a set of best practices for building composite applications in .NET). In those best practices, each module "exposes" its implementation types inside a shared container. This way each module has clear responsibilities regarding "who provides the implementation for this interface". I think it will be clear enough once you understand how PRISM works.
When you use inversion of control you are helping to make your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to zip it and then email it.
public class ZipProcessor : IFileProcessor
{
    private readonly IZipService _zipService;
    private readonly IEmailService _emailService;

    // dependencies arrive through the constructor; the processor never
    // needs to know which concrete classes implement them
    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        _zipService = zipService;
        _emailService = emailService;
    }

    public void Process(string fileName)
    {
        _zipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        _emailService.SendEmailTo(/* ... */);
    }
}
Why would this class need to actually do the zipping and the emailing when you could have dedicated classes to do this for you? Obviously you wouldn't, but that's only a lead up to my point :-)
In addition to not implementing the Zip and email why should the class know which class implements the service? If you pass interfaces to the constructor of this processor then it never needs to create an instance of a specific class, it is given everything it needs to do the job.
Using a D.I.C. you can configure which classes implement certain interfaces and then just get it to create an instance for you, it will inject the dependencies into the class.
var processor = Container.Resolve<ZipProcessor>();
So now not only have you cleanly separated the class's functionality from the shared functionality, but you have also prevented the consumer/provider from having any explicit knowledge of each other. This makes the code easier to understand, because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor your mocked services will merely assert that the class attempted to send an email rather than it really trying to send one.
//Mock the zip service
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));
//Mock the email service
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(/* ... */));
//Exercise the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");
//Assert it used the services in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So in short, you would want to do it to:
1: Prevent consumers from knowing explicitly which provider implements the services they need, which means there's less to understand at once when you read code.
2: Make unit testing easier.
Pete