Correct design using dependency inversion principle across modules? - solid-principles

I understand dependency inversion when working inside a single module, but I would also like to apply it when I have a cross-module dependency. In the following diagrams I have an existing application and I need to implement some new requirements for reference data services. I thought I would create a new jar (potentially a stand-alone service in the future). The first figure shows the normal way I have approached such things in the past. The referencedataservices jar has an interface which the app will use to invoke it.
The second figure shows my attempt to use DIP: the app now owns its abstraction, so it is not subject to change just because the reference data service changes. This seems to be the wrong design, though, because it creates a circular dependency: MyApp depends on the referencedataservices jar, and the referencedataservices jar depends on MyApp.
So the third figure gets back to the more conventional dependency by creating an extra layer of abstraction. Am I right? Or is this really not what DIP was intended for? I'm interested in hearing about other approaches or advice.

The second example is on the right track by separating the implementation from its abstraction. To achieve modularity, a concrete class should not be in the same package (module) as its abstract interface.
The fault in the second example is that the client owns the abstraction, while the service owns the implementation. These two roles must be reversed: services own interfaces; clients own implementations. In this way, the service presents a contract (API) for the client to implement. The service guarantees interaction with any client that adheres to its API. In terms of dependency inversion, the client injects a dependency into the service.
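A minimal sketch of that arrangement (in C# to match the other examples on this page; all names here are hypothetical): the service module publishes the contract, and the client supplies the implementation that gets injected into the service.

// referencedataservices module: owns the contract its clients must satisfy.
public interface IRefDataCallback
{
    void Receive(string refData);
}

public class RefDataService
{
    private readonly IRefDataCallback callback;

    // The client's implementation is injected into the service.
    public RefDataService(IRefDataCallback callback)
    {
        this.callback = callback;
    }

    public void PublishAll()
    {
        callback.Receive("EUR/USD");  // push reference data to the client
    }
}

// MyApp module: implements the service's contract.
public class MyAppRefDataHandler : IRefDataCallback
{
    public void Receive(string refData) { /* consume the reference data */ }
}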
Kirk K. is something of an authority on modularity in Java. He had a blog that eventually turned into a book on the subject. His blog seems to be missing at the moment, but I was able to find it in the Wayback Machine. I think you would be particularly interested in the four-part series titled Applied Modularity. In terms of other approaches or alternatives to DIP, take a look at Fun With Modules, which covers three of them.

In the second approach you presented, if you move the RefDataSvc abstraction into a separate package, you break the cycle: the referencedataservices package then depends only on the package containing the RefDataSvc abstraction.
All code in the MyApp package apart from the Composition Root should likewise depend only on the RefDataSvc abstraction. In the Composition Root of your application you then compose all the dependencies your app needs, as in the sketch below.
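Sketched concretely (hypothetical names, and in C# like the other examples here, though the same shape applies to Java jars), the cycle disappears once the abstraction lives in its own module:

// refdata-api module: contains only the abstraction.
public interface IRefDataSvc
{
    string Lookup(string code);
}

// referencedataservices module: depends only on refdata-api.
public class RefDataSvc : IRefDataSvc
{
    public string Lookup(string code) { return "value-for-" + code; }
}

// MyApp module: also depends only on refdata-api...
public class MyApp
{
    private readonly IRefDataSvc refData;
    public MyApp(IRefDataSvc refData) { this.refData = refData; }
    public void Run() { System.Console.WriteLine(refData.Lookup("EUR")); }
}

// ...except for the Composition Root, the one place that sees both sides.
public static class CompositionRoot
{
    public static void Main()
    {
        var app = new MyApp(new RefDataSvc());
        app.Run();
    }
}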

Related

Does Castle.Windsor allow Dependency Inversion Principle?

We're trying to stick to DIP in our C# application, which means a higher level class in one project depends on an interface that is in the same project, and that interface is implemented by a lower level class in another project. The lower level project naturally depends on the higher level one.
This way, the higher level project can happily work with the interface without any knowledge of, or dependency on, its implementation. But now I would like to run Windsor installers to register my needed dependencies top-down, so my main class wants to register the implementations of the interfaces it uses while it doesn't know them (has no dependency or project reference on them). I need to somehow run installers in projects without depending on them. I was hoping Castle had a way of "finding" those implementations and doing this for me cleanly, but it's probably not possible, is it?
I ended up running the installer in every assembly in the same directory as my main assembly, which works and is simple enough. It just doesn't feel clean... I wish there were a way of just asking for the implementation of my interface without depending on that project...
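For what it's worth, Windsor ships a helper aimed at exactly this "run the installers next to my main assembly" approach; roughly (a hedged sketch, so check the API against your Windsor version):

using System;
using Castle.Windsor;
using Castle.Windsor.Installer;

var container = new WindsorContainer();

// Runs every IWindsorInstaller found in assemblies in the application's
// base directory, so the main project needs no compile-time reference to
// the lower-level projects that register the implementations.
container.Install(
    FromAssembly.InDirectory(
        new AssemblyFilter(AppDomain.CurrentDomain.BaseDirectory)));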

Logging crosscutting concern needs access to data layer

Say I have an architecture similar to the Layered Architecture Sample. Let's also assume each large box is its own project. The Frameworks box and each layer would then be its own project. If we don't use IoC and instead go the traditional layered approach without interfaces, Service would reference Business which would reference Data and all of these would reference Frameworks.
Now the requirement is that logging be done to the database, so presumably Frameworks would need to reference Service to reach Data. There are two issues with this:
1. You now have a circular dependency (remember, no interfaces). Most of this can be solved if you use interfaces with IoC (Dependency Injection or Service Locator) and a composition root. Is this the only way, or can you somehow do it without an interface?
2. If you're in Business and need to log something, it has to make the jump to Service again. Is there any way to make the jump only from Presentation, and not from Service/Business/Data, without a composition root?
I know I'm somewhat answering my own question, but I basically want to understand if this architecture is feasible at all without IoC.
Without some inversion of control, there is not too much you can do.
Assuming your language supports something like Reflection in .NET, you could try to dynamically invoke code and invert the control at runtime. You don't need interfaces, but you might have to mark/decorate, or have a convention for, the types you need in the upper layers; a sketch of this follows.
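To make that concrete, here is a hedged .NET sketch (all names hypothetical): the upper layer decorates its logger with an attribute, and the lower layer discovers and invokes it via reflection, with no interface and no compile-time reference.

using System;
using System.Linq;
using System.Reflection;

// Upper layer marks the type it wants discovered.
[AttributeUsage(AttributeTargets.Class)]
public class DatabaseLoggerAttribute : Attribute { }

[DatabaseLogger]
public class ServiceLayerLogger
{
    public void Log(string message) { /* write to the database */ }
}

// Frameworks layer: no compile-time reference to the upper layer.
public static class LogDispatcher
{
    public static void Log(string message)
    {
        // Scan loaded assemblies for the decorated type by convention.
        var loggerType = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .First(t => t.GetCustomAttribute<DatabaseLoggerAttribute>() != null);

        var logger = Activator.CreateInstance(loggerType);
        loggerType.GetMethod("Log")?.Invoke(logger, new object[] { message });
    }
}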
I'm just thinking now about crazy, non-pragmatic approaches: you could post-process the binary and inject the logging code in every layer.
For this example, I would call the logging methods in the Business Layer where logging is needed. It really doesn't make any sense to call up a level. That would be an unnecessary abstraction, and it sounds like you've gathered as much.
Is there any abstraction provided in the Services layer for logging that you would require when logging from the business layer? If so, perhaps some sort of facade could be created for the purpose of business layer logging. If you do not require this abstraction, though, I would call the Business logging methods directly.
IMO, since logging is a cross-cutting concern, it should not reference your Data layer. In your question you assumed that you are logging to the database. Even if that is your requirement, you should keep the database-connection and log-record-insertion code separate from your application's Data layer. It will be part of your logging library rather than part of the Data layer. Do NOT treat it as part of the Data layer. It is this perspective that lets you continue to develop and enhance the logging framework while keeping it separate from your Data layer.
From my perspective, the Data layer consists only of application data access, not logging. For concrete examples, look at the NLog or log4net libraries and see how neither is concerned with the application's data access strategy.
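A sketch of that separation (hypothetical types): the logging library owns both the abstraction and the database target, and the application's Data layer is never involved.

using System.Data.SqlClient;

// Logging library: owns both the abstraction and the database target.
public interface IAppLogger
{
    void Log(string message);
}

public class DatabaseLogger : IAppLogger
{
    private readonly string connectionString;

    public DatabaseLogger(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Log(string message)
    {
        // Opens its own connection; nothing here touches the Data layer.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("INSERT INTO Log (Message) VALUES (@m)", conn))
        {
            cmd.Parameters.AddWithValue("@m", message);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}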
Hope this helps.

Windsor WCF facility and typed factories

My question has to do with the WCF and typed factory facilities that are provided by Windsor. I am quite new to IoC container concepts, and especially facilities, but started looking into them after evaluating a project we wrote a while back. The program is n-layered and not very unit-testable because dependency injection is not implemented everywhere. The issue is that since it is n-layered, doing proper DI would end up making the top layer responsible for creating an instance of something that gets used five layers down; so I turned to IoC.
However, after reading many a SO article and other sites I am now stuck with a couple of issues. One of the main issues initially was to decouple a class from a physical implementation of a WCF service and I did it as follows:
service = ManagerService.IoC.Resolve<IGridSubmitWorkService>(new { binding = new BasicHttpBinding(), remoteAddress = new EndpointAddress(WorkerAddress) });
result = service.SubmitWorkUnit(out errorString, aWorkUnit);
ManagerService.IoC.Release(service);
However, I found multiple mentions of how you should not be using IoC.Resolve<>() and should rather use factories and the WCF facility to remove the dependency on your IoC container, because Resolve is the service locator pattern, which some people consider an anti-pattern.
My issue is this: the above three lines of code solve almost all my problems, but to follow the proper patterns I would instead have to create a WCF facility, a typed factory (to provide instances of the class where this code is used), and a new interface for the factory. This feels like over-engineering and adding unnecessary complication to the code just for the sake of pleasing a pattern.
Question 1: Is there something fundamental I am missing here?
Question 2: As you can see from the code, I am calling a non-empty constructor for the web service. Will this still be as simple if I implement a WCF facility?
Question 3: Can you point me to a decent tutorial on using WCF facilities since I found the explanations on the Windsor wiki to be very brief and not really useful?
Thanks
Q1. Yes, I think there is something important being missed. In your three-line example, all you have done is replace one implementation dependency (your GridSubmitWorkService) with another (ManagerService.IoC). To illustrate, why not make it even simpler with the following?
service = new GridSubmitWorkService(binding,remoteAddress);
result = service.SubmitWorkUnit();
This is simpler again, but it has the hardwired implementation dependency. The point is that constructor and property injection allow you to write clean components that do not have to understand how the plumbing is done. In both your code and my code that principle is violated - the components are managing their own plumbing, and in a large application that becomes very painful, very quickly. This is why Service Locator is considered by many to be an anti-pattern. A constructor-injected version is sketched below.
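For contrast, a constructor-injected version might look like this (WorkSubmitter and WorkUnit are hypothetical names; IGridSubmitWorkService is from the question):

public class WorkSubmitter
{
    private readonly IGridSubmitWorkService service;

    // The implementation is supplied from outside (by a container or a
    // test); this class knows nothing about bindings, addresses, or IoC.
    public WorkSubmitter(IGridSubmitWorkService service)
    {
        this.service = service;
    }

    public void Submit(WorkUnit workUnit)
    {
        string errorString;
        var result = service.SubmitWorkUnit(out errorString, workUnit);
        // ...handle result and errorString as before...
    }
}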
The name of your service locator itself gives the game away. It's called IoC - an abbreviation for Inversion of Control. But when you look at the structure of these two bits of code, you can clearly see that nothing has been inverted - the code does pretty much the same thing in both cases; your example is just a little more abstract and convoluted.
Also, no one offering IoC is trying to force you to stick to a pattern. If you don't see the benefit, or the benefits simply aren't there for your app, then please don't use the pattern.
Q2. Any decent DI container is based on constructor injection. Using the Windsor WCF facility is very straightforward, and it supports a variety of hosting methods, so things like bindings and addresses are handled properly anyway.
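As a rough sketch of registering a WCF client with a non-default binding and address (hedged; the exact fluent API varies between Windsor versions, and the literal address stands in for the question's WorkerAddress):

using System.ServiceModel;
using Castle.Facilities.WcfIntegration;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

var container = new WindsorContainer();
container.AddFacility<WcfFacility>();

// Binding and address are set once at registration time; consumers then
// just take IGridSubmitWorkService in their constructors.
container.Register(
    Component.For<IGridSubmitWorkService>()
        .AsWcfClient(new DefaultClientModel
        {
            Endpoint = WcfEndpoint
                .BoundTo(new BasicHttpBinding())
                .At("http://worker-address")  // the question's WorkerAddress
        }));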
Q3. There is plenty of information around, ranging from forums to questions here on stackoverflow.com, in addition to the Windsor docs and wikis. I suggest posting a specific question if there is something you are unable to resolve.

Dependency injection - best practice for fully decoupled components?

I want to use dependency injection (Unity), and at the moment I'm thinking about how to set up my project (it's a fancy demo I'm working on).
So, to fully decouple all components and have no more assembly dependencies, is it advisable to create an assembly ".Contracts" or something similar and put all interfaces and shared data structures there?
Would you consider this the best practice or am I on a wrong track here?
What I want to accomplish:
Full testability; I want all components as sharply decoupled as possible and to inject everything. No component will ever talk directly to a concrete implementation anymore.
The first and probably most important step is to program to interfaces, rather than concrete implementations.
Doing so, the application will be loosely coupled whether or not DI is used.
I wouldn't separate the interfaces into another assembly. If you have to interact with something that is part of your domain, why separate it? Examples of such interfaces are repositories, an email sender, etc. Suppose you have a Model assembly where you keep your domain objects. This assembly exposes the interfaces, and the implementations, obviously, reference Model in order to implement them.
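Concretely, with Unity (which the question mentions), the wiring might look like the following hedged sketch; the repository names are hypothetical, and the interface could live in Model or in a separate Contracts assembly without changing the shape:

using Microsoft.Practices.Unity;

public class Customer { }

// Model assembly: owns the domain object and the abstraction.
public interface ICustomerRepository
{
    Customer FindById(int id);
}

// Data assembly: references Model and implements the interface.
public class SqlCustomerRepository : ICustomerRepository
{
    public Customer FindById(int id) { /* query the database */ return new Customer(); }
}

// Composition root in the host application: the only place that maps
// abstractions to concrete types.
public static class Program
{
    public static void Main()
    {
        var container = new UnityContainer();
        container.RegisterType<ICustomerRepository, SqlCustomerRepository>();

        var repository = container.Resolve<ICustomerRepository>();
    }
}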

Why would you want Dependency Injection without configuration?

After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice, with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't the whole point of using a DI infrastructure that you can alter the behaviour (the "strategy", so to speak) after building/releasing the code?
Can anyone give me a good use case that validates using a non-configured DI like Ninject?
I don't think you want a DI-framework without configuration. I think you want a DI-framework with the configuration you need.
I'll take Spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime, you basically define which component roles your application contains. A given service may, for instance, have one implementation for the regular runtime, while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When you look at the problem this way, you quickly realize that most applications contain only a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to that role; it is really that implementation's reason for existing.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all. Of course, there are always going to be exceptions, but then you just handle the exceptions instead.
I'm on the same path as krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it makes sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mockup, so there is no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something that the DI framework should know already :)
In my apps, I use the class object as the key to look up implementations and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
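That convention can be sketched as follows (hypothetical code, not any particular container's API): look for an override in the configuration, and fall back to instantiating the key type itself.

using System;
using System.Collections.Generic;

public static class ConventionResolver
{
    // Four entries instead of a thousand: only real overrides go in here.
    private static readonly Dictionary<Type, Type> Overrides =
        new Dictionary<Type, Type>();

    public static object Resolve(Type key)
    {
        Type implementation;
        if (!Overrides.TryGetValue(key, out implementation))
            implementation = key;  // default: the key *is* the implementation

        // Assumes a parameterless constructor, for brevity.
        return Activator.CreateInstance(implementation);
    }
}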
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that; just create and inject the mocks in the unit test code.
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject!
Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config. Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
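In Ninject terms, the bindings Nate describes are plain code, and a module can consult app.config for the few bindings you deliberately want configurable; a hedged sketch with hypothetical service names:

using System.Configuration;
using Ninject.Modules;

public interface IMailSender { }
public class SmtpMailSender : IMailSender { }
public interface IPaymentGateway { }
public class StubPaymentGateway : IPaymentGateway { }
public class LivePaymentGateway : IPaymentGateway { }

public class ServiceModule : NinjectModule
{
    public override void Load()
    {
        // A typical binding: type-safe, refactorable, in code.
        Bind<IMailSender>().To<SmtpMailSender>();

        // One binding deliberately made configurable via app.config.
        if (ConfigurationManager.AppSettings["useStubPayments"] == "true")
            Bind<IPaymentGateway>().To<StubPaymentGateway>();
        else
            Bind<IPaymentGateway>().To<LivePaymentGateway>();
    }
}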
You don't even need to use a DI framework to apply the dependency injection pattern. You can simply make use of static factory methods for creating your objects, if you don't need configurability apart from recompiling code.
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without code recompilation, you'll want something you can configure via text or XML files.
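For example, the no-framework version is just a static factory at the composition root; swapping implementations means editing one method and recompiling (hypothetical names):

public interface IReportRenderer { }
public class PdfReportRenderer : IReportRenderer { }

public static class ServiceFactory
{
    // The single place that names a concrete type. Change it and
    // recompile to re-plug the application.
    public static IReportRenderer CreateReportRenderer()
    {
        return new PdfReportRenderer();
    }
}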
I'll second the use of DI for testing. I only really consider using DI at the moment for testing, as our application doesn't require any configuration-based flexibility - it's also far too large to consider at the moment.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example, controlling the injection of the components in your business logic, or using a DI framework to make unit testing easier.
You should read about PRISM in .NET (a set of best practices for building composite applications in .NET). In those practices, each module "exposes" its implementation types inside a shared container. This way each module has clear responsibility over who provides the implementation for a given interface. I think it will be clear enough once you understand how PRISM works.
When you use inversion of control, you are helping your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to ZIP the file and then email it.
using System.IO;
public class ZipProcessor : IFileProcessor
{
    private readonly IZipService zipService;
    private readonly IEmailService emailService;

    // Dependencies arrive through the constructor (see below).
    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        this.zipService = zipService;
        this.emailService = emailService;
    }

    public void Process(string fileName)
    {
        zipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        emailService.SendEmailTo(/* recipient details elided in the original */);
    }
}
Why would this class need to actually do the zipping and the emailing when you could have dedicated classes do this for you? Obviously you wouldn't, but that's only a lead-up to my point :-)
In addition to not implementing the zip and email itself, why should the class know which class implements each service? If you pass interfaces to the constructor of this processor, then it never needs to create an instance of a specific class; it is given everything it needs to do the job.
Using a D.I.C. you can configure which classes implement certain interfaces and then just get it to create an instance for you; it will inject the dependencies into the class.
var processor = Container.Resolve<ZipProcessor>();
So now not only have you cleanly separated the class's own functionality from the shared functionality, but you have also prevented the consumer and provider from having any explicit knowledge of each other. This makes the code easier to understand because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor, your mocked services will merely assert that the class attempted to send an email rather than really sending one.
//Mock the ZIP service
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));
//Mock the email send
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(/* ... */));
//Test the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");
//Assert the processor used the services in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So, in short, you would want to do it to:
01: Prevent consumers from knowing explicitly which provider implements the services they need, which means there's less to understand at once when you read the code.
02: Make unit testing easier.
Pete