I have several specific repository classes and I want to share the same db context among all of them. Right now I am creating a new context for each repository. I am thinking of using mvc.unity. I don't see a very useful example about sharing the db context, so I just want to know the experts' opinions before implementing that.
Thanks,
I would take a look at StructureMap for a DI container and scope your DbContext at the request level, so all the repositories invoked in the same request get the same instance.
Knowing the kind of application you're building would be very useful for giving a good answer, though.
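A rough sketch of what request-scoped sharing can look like with Unity. The context and repository names here are illustrative, and PerRequestLifetimeManager comes from the Unity MVC integration package, so treat the registration details as an assumption to verify against your Unity version:

using Microsoft.Practices.Unity;        // namespace varies by Unity version
using Microsoft.Practices.Unity.Mvc;    // assumed home of PerRequestLifetimeManager

// Repositories receive the context instead of newing one up themselves.
public class OrderRepository : IOrderRepository
{
    private readonly MyDbContext _context;

    public OrderRepository(MyDbContext context)
    {
        _context = context;
    }
}

public static class Bootstrapper
{
    public static IUnityContainer BuildContainer()
    {
        var container = new UnityContainer();
        // One DbContext per HTTP request; every repository resolved during
        // that request receives the same instance.
        container.RegisterType<MyDbContext>(new PerRequestLifetimeManager());
        container.RegisterType<IOrderRepository, OrderRepository>();
        return container;
    }
}

If you are not hosting in ASP.NET MVC, a HierarchicalLifetimeManager with a child container per unit of work is a common alternative.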
Suppose you have a schema that is used in a UI app (e.g. Vue) and in a Node.js or Spring Boot server, has to be validated against databases (e.g. SQL, MongoDB, ...), and is perhaps also used by some microservices running on whatever.
How and where do I manage this JSON Schema, so that if I have to change it for whatever reason, every architectural component can handle the new JSON Schema(s)?
Otherwise I need to update the schema in up to 10 projects so that none of them becomes incompatible.
Is it really as simple as having a git project containing just JSON Schemas, or do I need specific loaders for each language/environment?
Are there best practices that I am unaware of?
PS: I don't really think I need the schemas automatically synchronized at runtime, so I don't think I need another microservice to achieve that.
That being said, if a Microservice is the best way to go, then getting a Microservice it is.
If you keep them in a git project, how do you load them? Clone the project each time the app starts? It may work, but I would go with a more flexible approach that shouldn't take too much effort to implement:
Build a JSON schema repository accessible via a REST API
When the app starts, it makes a request to grab the schema (latest, or a specific version)
That way you get a uniform (and scalable) way of playing with the schemas. Even if you think about hot-reloading sometime in the future, you can leverage this approach to do that.
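A minimal sketch of the startup fetch, assuming a hypothetical schema-registry URL and schema name (adjust to whatever your REST API actually exposes):

using System.Net.Http;
using System.Threading.Tasks;

public static class SchemaLoader
{
    private static readonly HttpClient Http = new HttpClient();

    // Fetch a named schema, either "latest" or a pinned version.
    public static async Task<string> LoadAsync(string name, string version = "latest")
    {
        var response = await Http.GetAsync($"https://schemas.example.com/{name}/{version}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();   // raw JSON Schema document
    }
}

Pinning a version per deployment keeps components from silently drifting apart when the schema changes.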
Here is an old project in this direction; you may give it a shot to see if it works (or use it for some inspiration, at least).
My question has to do with the WCF and typed factory facilities that are provided by Windsor. I am quite new to IoC container concepts, and especially facilities, but started looking into it after evaluating a project we wrote a while back. The program is n-layered and not very unit testable because dependency injection is not implemented everywhere. The issue is that since it is n-layered, doing proper DI would end up making the top layer responsible for creating an instance of something that gets used five layers down, so I turned to IoC.
However, after reading many a SO article and other sites I am now stuck with a couple of issues. One of the main issues initially was to decouple a class from a physical implementation of a WCF service and I did it as follows:
service = ManagerService.IoC.Resolve<IGridSubmitWorkService>(new { binding = new BasicHttpBinding(), remoteAddress = new EndpointAddress(WorkerAddress) });
result = service.SubmitWorkUnit(out errorString, aWorkUnit);
ManagerService.IoC.Release(service);
However, I found multiple mentions that you should not be using IoC.Resolve<>(), that you should rather use factories and WCF facilities to remove the dependency on your IoC container, and that this is the Service Locator pattern, which some people consider to be an anti-pattern.
My issue is this: the above three lines of code solve almost all my problems, but to follow the proper patterns I would instead have to create a WCF facility, a typed factory (that deals with providing instances of the class where this code is used), and a new interface for the factory. That feels like over-engineering and adding unnecessary complication to the code just for the sake of pleasing a pattern.
Question 1: Is there something fundamental I am missing here?
Question 2: As you can see from the code, I am calling a non-empty constructor for the web service. Will this still be as simple if I implement a WCF facility?
Question 3: Can you point me to a decent tutorial on using WCF facilities since I found the explanations on the Windsor wiki to be very brief and not really useful?
Thanks
Q1. Yes, I think there is something important being missed. In your three-line example, all you have done is replace one implementation dependency (your GridSubmitWorkService) with another one (ManagerService.IoC). To illustrate, why not make it even simpler with the following?
service = new GridSubmitWorkService(binding, remoteAddress);
result = service.SubmitWorkUnit();
This is even simpler again, but it has the hard-wired implementation dependency. The point is that constructor and property injection allow you to write clean components that do not have to understand how the plumbing is done. In both your code and my code that principle is violated - the components are managing their own plumbing, and in a large application that becomes very painful, very quickly. This is why Service Locator is considered by many to be an anti-pattern.
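A hedged sketch of what the constructor-injected version might look like (the class name WorkSubmitter and the WorkUnit type are illustrative, not from the original project):

public class WorkSubmitter
{
    private readonly IGridSubmitWorkService _service;

    // The component only declares what it needs; which implementation,
    // binding and address get supplied is decided elsewhere.
    public WorkSubmitter(IGridSubmitWorkService service)
    {
        _service = service;
    }

    public void Submit(WorkUnit unit)
    {
        string errorString;
        var result = _service.SubmitWorkUnit(out errorString, unit);
        // ... handle result / errorString ...
    }
}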
The name of your service locator itself gives the game away. It's called IoC - an abbreviation for Inversion of Control. But when you look at the structure of these two bits of code, you can clearly see that nothing has been inverted - the code is doing pretty much the same thing in both cases; your example is just a little more abstract and convoluted.
Also, no one offering IoC is trying to force you to stick to a pattern. If you don't see the benefit, or the benefits simply aren't there for your app, then please don't use the pattern.
Q2. Any decent DI container is based on constructor injection. Using the Windsor WCF facility is very straight-forward and it supports a variety of hosting methods so things like bindings and addresses are handled properly anyway.
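As a rough sketch of what that registration could look like with the Windsor WCF facility - written from memory of the Castle.Facilities.WcfIntegration API, so verify the details against your Windsor version; the service address is a placeholder:

using System.ServiceModel;
using Castle.Facilities.WcfIntegration;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public static class WcfClientBootstrapper
{
    public static IWindsorContainer Configure()
    {
        var container = new WindsorContainer();
        container.AddFacility<WcfFacility>();

        // Binding and remote address live in the composition root instead of
        // inside the component that uses the service.
        container.Register(
            Component.For<IGridSubmitWorkService>()
                     .AsWcfClient(new DefaultClientModel
                     {
                         Endpoint = WcfEndpoint.BoundTo(new BasicHttpBinding())
                                               .At("http://worker.example.com/GridSubmitWork.svc")
                     }));
        return container;
    }
}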
Q3. There is plenty of information around, ranging from forums to questions here on stackoverflow.com, in addition to the Windsor docs and wikis. I suggest posting a specific question if there is something you are unable to resolve.
First off, apologies for the long description of my brainspace below. I'm still wrapping my head around lots of these new ideas, so I'm sure I'm describing something incorrectly. Please feel free to correct me where I'm wrong.
We are in the R&D phase of a new ASP.net MVC2 site and want to ensure that we can 1) decouple our data store from our application, 2) allow for our application to be tested via unit tests and 3) allow us to change out our datastore or use something other than Linq2SQL down the line.
This seemingly simple goal has opened up a whole new world to me that includes the Repository pattern, IoC, DI, and all sorts of other things that are making my head swim. Here's what is so far coming into focus, or at least what I believe is a somewhat correct plan to reach our goals:
We will have a number of ISpecificRepository interfaces that define the contract between users of the interface and the underlying data store.
The SpecificRepository implementations will query specific datastores and return POCO representing our domain objects (or collections of them).
Our Service Layer will perform the application specific business logic using an instance of ISpecificRepository passed to the various service methods and pass these POCO domain objects back to our presentation layer.
As mentioned, we are planning to use Linq2SQL to implement our specific repositories for the application, and have decided to decouple our service layer from this implementation by creating POCOs for our domain objects and a mapping to and from these objects and the LINQ-generated entities. In the service layer, we can then write business logic to query the repository, add data, and do whatever else we need to do for each use case. This seems fine, but my concern is that since we're using Linq2SQL, our specific Linq repository implementation will now have to house all of the many Get queries that the service layer requires to implement the business logic efficiently.
I'm curious as to whether this somehow breaks the Repository pattern since we're now housing application specific logic not in the service layer but in the repository instead.
The reason I feel we need to do it this way is so that I can write more efficient Linq queries in my specific Linq repository using various DataLoadOptions, etc., without returning IQueryable from my repository up to my service layer, where it would seem that sort of logic actually belongs. Also, all of the example IRepository interfaces I've seen seem very lightweight and only provide a few methods such as GetByID, GetAll, Find, Insert, Delete, and SubmitChanges to the underlying data store. In my case, it sounds like my specific repositories will be doing a great deal more than that.
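For illustration only (the entity and domain type names are invented), a heavier repository along those lines might look roughly like this, with both the query shaping and the entity-to-POCO mapping kept inside the repository:

using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;

public class OrderRepository : IOrderRepository
{
    private readonly MyDataContext _db;

    public OrderRepository(MyDataContext db)
    {
        _db = db;
        // Eager-load order lines in one round trip; LoadOptions must be set
        // before the first query runs on this DataContext.
        var options = new DataLoadOptions();
        options.LoadWith<OrderEntity>(o => o.OrderLines);
        _db.LoadOptions = options;
    }

    public IList<Order> GetOpenOrders(int customerId)
    {
        var entities = _db.OrderEntities
                          .Where(o => o.CustomerId == customerId && !o.IsClosed)
                          .ToList();                       // materialize here, not in the service layer
        return entities.Select(MapToDomain).ToList();      // only POCOs cross the boundary
    }

    private static Order MapToDomain(OrderEntity e)
    {
        return new Order { Id = e.Id, CustomerId = e.CustomerId };
    }
}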
Thanks for reading this far. Any and all help that can clarify my misconceptions would be greatly appreciated.
-Mustafa
our specific Linq repository implementation will now have to house all of the many Get queries that the service layer requires to implement the business logic efficiently.
I'm curious as to whether this somehow breaks the Repository pattern
Not at all. A Repository is a collection of domain entities. If I have a Repository of Accounts, it is perfectly reasonable to want Accounts.ThatAreOverdue().
I personally prefer fluent naming. Accounts.ThatAreOverdue() feels better than AccountRepository.GetOverdue() .. but I suppose that is a point of preference.
Also, all of the example IRepository interfaces I've seen seem very lightweight and only provide a few methods to GetByID, GetAll, Find, Insert, Delete, and SubmitChanges to the underlying data store.
A Repository interface can be thin. Find is meant to be used with the Specification pattern: encapsulate the criteria in another object. The implementation of the criteria can be handed Linq2Sql objects from which to query - but it will be more difficult to reuse the criteria classes against in-memory domain objects (versus in the database, where Linq2Sql is involved).
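A hedged sketch of the Specification idea (the Account properties here are invented purely for illustration):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public interface ISpecification<T>
{
    // Expressed as an expression so a Linq2Sql implementation can translate it
    // to SQL, while an in-memory implementation can just Compile() and filter.
    Expression<Func<T, bool>> ToExpression();
}

public class OverdueAccountSpecification : ISpecification<Account>
{
    public Expression<Func<Account, bool>> ToExpression()
    {
        return account => account.DueDate < DateTime.Today && !account.IsPaid;
    }
}

// The repository stays generic; the criteria live in the specification object.
public interface IAccountRepository
{
    IEnumerable<Account> Find(ISpecification<Account> spec);
}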
Our Service Layer will perform the application specific business logic using an instance of ISpecificRepository passed to the various service methods and pass these POCO domain objects back to our presentation layer.
Are you saying that your logic will all be in Services and the "domain objects" will be bags of properties that the view binds to?
I don't think I'd recommend that.
If the same object that is used in the application logic is also used in the view, then you have tightly coupled the two application layers and experience says that causes problems. It will be very difficult to maintain coherence in the Services and Domain through changes if the View uses the same objects. The View will need pieces of data and they will inevitably get stuck onto places they don't really belong in the domain.
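A small, purely illustrative sketch of the separation being suggested (all type and property names are invented):

// Domain object: business data only.
public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }
}

// View model: bound to the view, free to accumulate presentation concerns
// without polluting the domain.
public class AccountViewModel
{
    public int Id { get; set; }
    public string FormattedBalance { get; set; }
    public bool IsSelected { get; set; }
}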
After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice, with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't the whole point of using a DI infrastructure that you can alter the behaviour (the "strategy", so to speak) after building/releasing the code?
Can anyone give me a good use case that validates using a non-configured DI like Ninject?
I don't think you want a DI-framework without configuration. I think you want a DI-framework with the configuration you need.
I'll take Spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime, you basically define which component roles your application contains. A given service may for instance have one implementation for the "regular runtime", while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When looking at the problem this way, you quickly realize that most applications only contain a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to this role; it is really the reason of existence of that implementation.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all.
Of course, there's always going to be exceptions, but then you just handle the exceptions instead.
I'm on the same path as krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it makes sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mockup, so no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something that the DI framework should know already :)
In my apps, I use the class object as the key to look up implementations, and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
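A toy illustration of that lookup rule (this is not any real framework's API, just a sketch of the convention):

using System;
using System.Collections.Generic;

public static class TinyContainer
{
    private static readonly Dictionary<Type, Type> Overrides = new Dictionary<Type, Type>();

    public static void Override<TKey, TImpl>() where TImpl : TKey
    {
        Overrides[typeof(TKey)] = typeof(TImpl);
    }

    public static T Resolve<T>()
    {
        Type impl;
        if (!Overrides.TryGetValue(typeof(T), out impl))
            impl = typeof(T);                    // no override: the key is the default implementation
        return (T)Activator.CreateInstance(impl);
    }
}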
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that; just create and inject the mocks in the unit test code.
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject!
Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config.
Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
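A hedged sketch of what that can look like in practice. The service interfaces and the app setting name are made up; the NinjectModule base class and the Bind/Rebind calls are the standard Ninject API as far as I recall:

using System.Configuration;
using Ninject.Modules;

public class ServiceModule : NinjectModule
{
    public override void Load()
    {
        // Type-safe, refactorable binding kept in code.
        Bind<IEmailService>().To<SmtpEmailService>();

        // Only the genuinely environment-specific choice comes from app.config.
        if (ConfigurationManager.AppSettings["UseStubMailer"] == "true")
            Rebind<IEmailService>().To<StubEmailService>();
    }
}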
You don't even need to use a DI framework to apply the dependency injection pattern; you can simply make use of static factory methods for creating your objects if you don't need configurability beyond recompiling the code.
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without code recompilation, you'll want something you can configure via text or XML files.
I'll second the use of DI for testing. I only really consider using DI at the moment for testing, as our application doesn't require any configuration-based flexibility - it's also far too large to consider at the moment.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example, controlling the injection of the components in your business logic, or using a DI framework to make unit testing easier.
You should read about PRISM in .NET (it describes best practices for building composite applications in .NET). Following these practices, each module "exposes" its implementation type inside a shared container. This way each module has clear responsibilities over "who provides the implementation for this interface". I think it will become clear once you understand how PRISM works.
When you use inversion of control, you are helping to make your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to ZIP it and then email it.
public class ZipProcessor : IFileProcessor
{
    private readonly IZipService ZipService;
    private readonly IEmailService EmailService;

    // The processor is handed its collaborators; it never creates them itself.
    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        ZipService = zipService;
        EmailService = emailService;
    }

    public void Process(string fileName)
    {
        ZipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        EmailService.SendEmailTo(................);
    }
}
Why would this class need to actually do the zipping and the emailing when you could have dedicated classes to do this for you? Obviously you wouldn't, but that's only a lead up to my point :-)
In addition to not doing the zipping and emailing itself, why should the class know which class implements each service? If you pass interfaces to the constructor of this processor, then it never needs to create an instance of a specific class; it is given everything it needs to do the job.
Using a D.I.C. you can configure which classes implement certain interfaces and then just get it to create an instance for you; it will inject the dependencies into the class.
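For instance, the registration half might look roughly like this (the container API and the concrete class names are illustrative, not any specific framework):

Container.Register<IZipService, SharpZipService>();
Container.Register<IEmailService, SmtpEmailService>();

and then resolving the processor pulls both dependencies in automatically: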
var processor = Container.Resolve<ZipProcessor>();
So now not only have you cleanly separated the class's functionality from the shared functionality, but you have also prevented the consumer/provider from having any explicit knowledge of each other. This makes the code easier to understand because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor your mocked services will merely assert that the class attempted to send an email rather than it really trying to send one.
//Mock the ZIP
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));
//Mock the email send
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(................));
//Test the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");
//Assert it used the services in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So in short, you would want to do it to:
01: Prevent consumers from knowing explicitly which provider implements the services they need, which means there's less to understand at once when you read the code.
02: Make unit testing easier.
Pete
In my repositories, I am making assignments to my domain objects from the Linq entity queries. I then have a service layer that acts on these objects returned from the repositories.
Should my Domain objects be in the repository like this? Or should my repositories be restricted to the Entities and Data Access, and instead have my service layer make assignments to the domain objects?
Doing all the assignments in the repository seems easier, but now the distinction between my database and domain objects is not apparent. What is proper practice here? tia
IMO if the app is relatively simple and you can't imagine ripping out the data access, go ahead and make the assignments in the repository. But if you think the app will get more complicated in the future, or that you may want to change the data access, keep this functionality out of the repositories.
I have done apps with the assignment in the repositories, others with it in the service layer, and in yet another one I had a separate conversion layer (it was not a one-to-one conversion and the objects were complex).
One thing to remember about best practices: they're there to help. If one makes things more difficult, then don't use it.
I used to not like it, but now I usually never look back. Basically the thing is that if you need to change to an external data source that is structured differently, you can set up a new mapping along with the implementation of the repository code and be done with it.
It is about data mapping. Check this link: http://www.martinfowler.com/eaaCatalog/repository.html
Also check this related question: IRepository confusion on objects returned. I have used a similar mapper, but made it operate at the IQueryable level, which made it possible to do some pretty interesting stuff while working with the domain objects after the mapping.
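To make the mapping idea concrete, here is a small hedged sketch (the entity and domain type names are invented) of a repository that does the assignments and hands only domain objects to the service layer:

using System.Linq;

public class CustomerRepository : ICustomerRepository
{
    private readonly MyDataContext _db;   // Linq-to-SQL DataContext

    public CustomerRepository(MyDataContext db)
    {
        _db = db;
    }

    public Customer GetById(int id)
    {
        var entity = _db.CustomerEntities.Single(c => c.Id == id);
        // The database/domain mapping lives here; the service layer never
        // sees the Linq entity.
        return new Customer { Id = entity.Id, Name = entity.Name };
    }
}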