Specification Pattern defined in Domain - linq-to-sql

Using Linq to SQL and a DDD-style domain layer with decoupled repositories, does anyone have any good ideas on how to implement the Specification pattern without bleeding L2S concerns up into the domain layer, in a way that is still understandable? :)
We have complex business logic surrounding the selection of a set of transaction data, and would like those rules/specifications to be owned by the domain. We've also done a good job of keeping our domain persistence-ignorant.
This presents a problem, because in order to implement a Specification, the domain (as far as I can tell) needs to see the types being queried (L2S types).
Any ideas?
Also, nHibernate is out of the question for reasons I don't want to explain.. :)

Have you considered mapping your generic Specifications into an Expression tree that would translate into proper L2S syntax? It seems that is the main problem you are hitting here. The Specification pattern isn't the problem, but the mapping to L2S is.
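For illustration, a minimal sketch of that idea, assuming a hypothetical ISpecification<T> interface and a Transaction domain type (all names here are made up): the domain owns the rule as an expression tree, and the L2S repository simply passes it to the provider.

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    // Domain-owned contract; nothing in here references L2S.
    public interface ISpecification<T>
    {
        Expression<Func<T, bool>> ToExpression();
    }

    public class Transaction
    {
        public decimal Amount { get; set; }
        public DateTime? PostedOn { get; set; }
    }

    public class OverdrawnTransactionSpecification : ISpecification<Transaction>
    {
        public Expression<Func<Transaction, bool>> ToExpression()
        {
            // Pure domain rule, expressed as an expression tree.
            return t => t.Amount < 0 && t.PostedOn != null;
        }
    }

    // In the L2S repository implementation, the same expression is translated to SQL:
    // var results = dataContext.GetTable<Transaction>().Where(spec.ToExpression()).ToList();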

Linq-To-Sql classes can be partial. This means that you can extend them by adding a partial class that implements a common interface. That interface can be shared between layers without the "bleeding" problem you are describing. The rest is just the details of your "IsSatisfiedBy", which should be easy to encapsulate.
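A rough sketch of that arrangement, assuming a hypothetical Account L2S entity and an IAccount domain interface (names invented for illustration):

    using System;

    // Lives in the domain assembly; no reference to L2S.
    public interface IAccount
    {
        DateTime? DueDate { get; }
        decimal Balance { get; }
    }

    public class OverdueSpecification
    {
        public bool IsSatisfiedBy(IAccount account)
        {
            return account.Balance > 0
                && account.DueDate.HasValue
                && account.DueDate.Value < DateTime.Today;
        }
    }

    // Stand-in for the designer-generated half of the L2S entity.
    public partial class Account
    {
        public DateTime? DueDate { get; set; }
        public decimal Balance { get; set; }
    }

    // Hand-written partial that exposes the entity through the domain interface.
    public partial class Account : IAccount { }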

I recently had the same issue. Different pattern, but still LINQ to SQL (L2S). I tried two different ways to avoid the leakage.
First we tried using DTOs and a mapping layer. So we wrote super simple objects that had a one-to-one mapping to the tables. They were all decorated with L2S attributes. We then wrote a mapping layer to map the DTOs to our business objects. All of this was hidden behind the Repository pattern from Domain-Driven Design, so consumers of the business objects had no idea that L2S was under the hood.
Next, mostly for variety, we tried using the XML mapping features of L2S so the objects themselves needed no attributes. For collections we exposed IEnumerable instead of any of the L2S collection types. If you looked at the internals of the business classes you could still detect some usage of L2S (EntitySet or EntityRef), but consumers of the class had no idea. So some bits of leakage, but nothing drastic.
In the end we stuck with the first pattern. The second worked, and we could have replaced L2S without changing the interface of the business layer, but I was never happy with the XML mapping. The first pattern had a much cleaner separation between the database and the business objects. It took more code, but it worked better for us because it allowed us to evolve the business objects independently of the tables. In the early days of the project the XML mapping worked because our objects were pretty much one-to-one with the tables.
So in the end we put a layer between L2S and the domain. It worked. It took more code, but it was really simple stuff. And it was all very testable.
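For a concrete picture of the first approach, here is a hedged sketch with made-up names (CustomerDto, Customer, CustomerRepository): an attribute-decorated L2S DTO, a persistence-ignorant business object, and a repository that maps between them.

    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    [Table(Name = "Customers")]
    public class CustomerDto
    {
        [Column(IsPrimaryKey = true)] public int Id { get; set; }
        [Column] public string Name { get; set; }
    }

    // Persistence-ignorant business object; no L2S attributes anywhere.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerRepository
    {
        private readonly DataContext _context;
        public CustomerRepository(DataContext context) { _context = context; }

        public IEnumerable<Customer> GetAll()
        {
            // Mapping layer: project DTO rows onto domain objects.
            return _context.GetTable<CustomerDto>()
                           .ToList()
                           .Select(d => new Customer { Id = d.Id, Name = d.Name })
                           .ToList();
        }
    }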

If you want to avoid referencing Linq2Sql from your domain layer, you must work against interfaces that represent your entities instead of working with the actual entities themselves. You then need a mapping layer between your interfaces and your entities.
I've worked this way and found it to be a severe hindrance. I switched to NHibernate for new projects and for the older projects I simply stopped worrying about the domain referencing Linq2Sql entities directly. Overcoming that restriction is simply too much of a time-cost in my opinion.

Question about Domain Objects, A Service Layer, and Using Linq2SQL and ASP.net MVC with the Repository Pattern

First off, apologies for the long description of my brainspace below. I'm still wrapping my head around lots of these new ideas, so I'm sure I'm describing something incorrectly. Please feel free to correct me where I'm wrong.
We are in the R&D phase of a new ASP.net MVC2 site and want to ensure that we can 1) decouple our data store from our application, 2) allow for our application to be tested via unit tests and 3) allow us to change out our datastore or use something other than Linq2SQL down the line.
This seemingly simple goal has opened up a whole new world to me that includes the Repository pattern, IoC, DI, and all sorts of other things that are making my head swim. Here's what is so far coming into focus, or at least what I believe is a somewhat correct plan to reach our goals:
We will have a number of ISpecificRepository interfaces that define the contract between users of the interface and the underlying data store.
The SpecificRepository implementations will query specific datastores and return POCOs representing our domain objects (or collections of them).
Our Service Layer will perform the application specific business logic using an instance of ISpecificRepository passed to the various service methods and pass these POCO domain objects back to our presentation layer.
As mentioned, we are planning on using Linq2SQL to implement our specific repositories for the application, and have decided to decouple our service layer from this implementation by creating POCOs for our domain objects and creating a mapping to and from these objects and the LINQ-generated entities. In the service layer, we can then create business logic to query the repository, add data, and do whatever else we need to do for each use case. This seems fine, but my concern is that since we're using Linq2SQL, our specific Linq repository implementation will now have to house all of the many Get queries that the service layer requires to implement the business logic efficiently.
I'm curious as to whether this somehow breaks the Repository pattern since we're now housing application specific logic not in the service layer but in the repository instead.
The reason I feel that we need to do it this way is so that I can write more efficient Linq queries on my specific Linq repository using various DataLoadOptions, etc. without returning IQueryable from my repository up to my service layer, where it would seem that sort of logic actually belongs. Also, all of the example IRepository interfaces I've seen seem very lightweight and only provide a few methods to GetByID, GetAll, Find, Insert, Delete, and SubmitChanges to the underlying data store. In my case, it sounds like my specific repositories will be doing a great deal more than that.
Thanks for reading this far. Any and all help that can clarify my misconceptions would be greatly appreciated.
-Mustafa
"our specific Linq repository implementation will now have to house all of the many Get queries that the service layer requires to implement the business logic efficiently. I'm curious as to whether this somehow breaks the Repository pattern"
Not at all. A Repository is a collection of domain entities. If I have a Repository of Accounts, it is perfectly reasonable to want Accounts.ThatAreOverdue().
I personally prefer fluent naming. Accounts.ThatAreOverdue() feels better than AccountRepository.GetOverdue(), but I suppose that is a point of preference.
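A small sketch of that fluent style, assuming an invented Account class and an Accounts repository wrapping an IQueryable source:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Account
    {
        public DateTime? DueDate { get; set; }
        public decimal Balance { get; set; }
    }

    public class Accounts
    {
        private readonly IQueryable<Account> _source;
        public Accounts(IQueryable<Account> source) { _source = source; }

        // Reads like the domain language: Accounts.ThatAreOverdue()
        public IEnumerable<Account> ThatAreOverdue()
        {
            return _source.Where(a => a.Balance > 0
                                   && a.DueDate != null
                                   && a.DueDate < DateTime.Today)
                          .ToList();
        }
    }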
"Also, all of the example IRepository interfaces I've seen seem very lightweight and only provide a few methods to GetByID, GetAll, Find, Insert, Delete, and SubmitChanges to the underlying data store."
A Repository interface can be thin. Find is meant to be used with the Specification pattern. Encapsulate the criteria in another object. The implementation of the criteria can be passed Linq2Sql objects from which to query - but it will be more difficult to re-use the criteria classes against in-memory domain objects (versus in database, where Linq2Sql is involved).
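As a hedged sketch of that idea (all names here are invented), the criteria object exposes an expression tree that Find can hand to Linq2Sql, and the same rule can be compiled for use against in-memory objects:

    using System;
    using System.Collections.Generic;
    using System.Linq.Expressions;

    public class Order
    {
        public DateTime PlacedOn { get; set; }
        public bool Shipped { get; set; }
    }

    // Specification/criteria object, owned by the domain.
    public class UnshippedOrdersOlderThan
    {
        private readonly TimeSpan _age;
        public UnshippedOrdersOlderThan(TimeSpan age) { _age = age; }

        public Expression<Func<Order, bool>> Criteria
        {
            get
            {
                var cutoff = DateTime.Today - _age;
                return o => !o.Shipped && o.PlacedOn < cutoff;
            }
        }

        // The same rule, reused against in-memory domain objects.
        public bool IsSatisfiedBy(Order order) { return Criteria.Compile()(order); }
    }

    // The thin repository contract: Find takes the criteria.
    public interface IOrderRepository
    {
        IEnumerable<Order> Find(Expression<Func<Order, bool>> criteria);
    }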
"Our Service Layer will perform the application specific business logic using an instance of ISpecificRepository passed to the various service methods and pass these POCO domain objects back to our presentation layer."
Are you saying that your logic will all be in Services and the "domain objects" will be bags of properties that the view binds to directly?
I don't think I'd recommend that.
If the same object that is used in the application logic is also used in the view, then you have tightly coupled the two application layers, and experience says that causes problems. It will be very difficult to maintain coherence in the Services and Domain through changes if the View uses the same objects. The View will need pieces of data, and they will inevitably get tacked onto places where they don't really belong in the domain.
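A minimal sketch of the alternative, assuming an invented Customer domain object and a screen-specific view model that the view binds to instead:

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal CreditLimit { get; set; }
    }

    // Shaped for one screen; display-only concerns live here, not in the domain.
    public class CustomerSummaryViewModel
    {
        public string DisplayName { get; set; }
        public string CreditLimitText { get; set; }

        public static CustomerSummaryViewModel From(Customer customer)
        {
            return new CustomerSummaryViewModel
            {
                DisplayName = customer.Name,
                CreditLimitText = customer.CreditLimit.ToString("C")
            };
        }
    }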

Why use the Singleton pattern?

So... I can't understand why I should even use the Singleton pattern in ActionScript 3. Can anyone explain this to me? Maybe I just don't understand the purpose of it. I mean, how does it differ from other patterns? How does it work?
I checked the PureMVC source and it's full of Singletons. Why are they using them in the View, Model, Controller?
I have next to no practical experience with PureMVC so I can't argue for or against their use of Singletons. Hence, I'll try to keep my answer more generic.
A singleton is a type of object that can only be instantiated once and is globally accessible.
Typically, this kind of pattern is used in order to have easy access to services of some kind, perhaps a service facade used to retrieve data from a server or an application model that holds information about settings or such.
The singleton pattern is considered by many to be an anti-pattern for a number of reasons, a few of which are mentioned below:
They carry state, making certain tasks such as unit testing virtually impossible.
They inherently violate the Single Responsibility Principle.
They promote tight coupling between classes due to them being globally accessible.
I won't list all of the reasons why a singleton may be an anti-pattern; there are plenty of resources on the subject.
The singleton pattern restricts the instantiation of an object to only one instance. Sometimes this pattern is used so that an object which controls parts of the system can't just be created at will. If you have some object that manages settings, for example, you would want something that changes settings to modify only that one object, and not create a new one.
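To make that concrete, here is a generic illustration of the pattern (shown in C# rather than ActionScript 3, with an invented Settings class): a single lazily created instance, globally reachable, with a private constructor so no one can create another.

    using System;
    using System.Collections.Generic;

    public sealed class Settings
    {
        private static readonly Lazy<Settings> _instance =
            new Lazy<Settings>(() => new Settings());

        private readonly Dictionary<string, string> _values =
            new Dictionary<string, string>();

        // Private constructor prevents additional instances from being created.
        private Settings() { }

        // The single, globally accessible instance.
        public static Settings Instance { get { return _instance.Value; } }

        public string Get(string key)
        {
            string value;
            return _values.TryGetValue(key, out value) ? value : null;
        }

        public void Set(string key, string value) { _values[key] = value; }
    }

    // Anywhere in the application:
    // Settings.Instance.Set("theme", "dark");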

Migrating from LINQ to SQL to Entity Framework 4.0 - Tips, Documentation, etc

I tried out EF back in .NET 3.5 SP1, and I was one of the many who got frustrated and decided to learn LINQ to SQL instead. Now that I know EF is the "chosen" path forward, plus EF 4.0 has some exciting new features, I'd like to migrate my app to EF 4.0.
Can anyone suggest any good resources that are specifically targeted towards 4.0 and L2S migration? NOTE: I can find plenty of blogs and articles related to migrating from L2S to EF on .NET 3.5, but I feel like many of those were obviously dated and unhelpful to someone using 4.0.
I'd really like as much deep, under-the-hood stuff as I can get; I want to really come away feeling like I know EF 4.0 the way I currently know L2S 3.5.
TIA!
I have done loads of this very type of conversion and FWIW, I would say there are more similarities than differences. I don't think there is any definitive documentation that will make you feel like an expert in EF4, beyond the stuff that is already out there...
http://msdn.microsoft.com/en-us/library/ex6y04yf(VS.100).aspx
What I can give you are the more obvious "gotchas." Specifically, Linq2Sql wanted to combine the business layer and the data layer a lot more obviously. It really pushed you to create your own partial classes. I could go on and on about why, but the most specific reason is the way the one-to-one mapper will create public parent and child properties for all relations.
If you attempt to use any type of serialization against this model, you will likely run into circular reference problems as a serializer moves from a parent to a child and then back to the parent, because the Linq2Sql serialization behavior automatically includes all children in the graph. This can also be really annoying when you try to grab a customer record to check the "Name" property and automatically get all the related order records included in the graph. You can set these parent and child navigation properties to be either "public" or "internal", which means that if you want access to them but don't want the serializers to automatically create circular references, you pretty much have to access them in partial classes.
Once you start down the partial class path you generally just continue the pattern and eventually will start to add helper methods for accessing your data into your individual entity classes. Also, with the Linq2Sql DataContext being more lightweight, you often find people using some kind of Singleton pattern or Repository pattern for their context. You don't see this as much at all with EF 3.5 / 4.
So let's say you have some environment similar to the one described and you want to start converting. Well, you need to find out when your DataContext is going to be created/destroyed... some people will just start each Business Layer method with a using() statement and let the context pretty much live for the lifetime of the method. Obviously this means you can get into some hairy situations that require adding .ToList() or some other extension method to the ends of your queries so you can have a fully in-memory collection of your objects to pass to a child method or whatever, and even then you can have problems with attempting to update entities on a context that they weren't originally retrieved from.
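Roughly, the "context per business method" shape looks like the sketch below. OrdersContext and Order are placeholders for designer-generated classes, so this is only an illustration of the pattern, not a drop-in implementation.

    using System.Collections.Generic;
    using System.Linq;

    public class OrderService
    {
        public List<Order> GetOrdersForCustomer(int customerId)
        {
            using (var context = new OrdersContext())
            {
                // Materialize before the context is disposed; without ToList(),
                // deferred execution would fail once the using block ends.
                return context.Orders
                              .Where(o => o.CustomerId == customerId)
                              .ToList();
            }
        }
    }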
You'll also need to figure out how to move much of the business logic incorporated in your Linq2Sql partial classes out into another layer if it doesn't deal explicitly with the data operations. This will not be painless as you figure out when you need/don't need your context, but it is for the best...
Next, you will want to deal with the object graph situation. Because of the difference in the way lazy-loading works (they made this configurable in EF 4.0 to make it behave more like Linq2Sql for those who wanted it) you will probably need to check any implied uses of child objects in the graph from your Linq2Sql implementation and verify that it doesn't now require an explicit .Include() or a .Load() to get the child objects in the graph.
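For example (again with placeholder generated types, here ShopDataContext / ShopEntities and a Customer with an Orders collection), the loading style changes roughly like this:

    using System.Data.Linq;
    using System.Linq;

    public static class LoadingExamples
    {
        // LINQ to SQL: children came along implicitly, or were configured up front.
        public static Customer GetCustomerL2s(ShopDataContext context, int id)
        {
            var options = new DataLoadOptions();
            options.LoadWith<Customer>(c => c.Orders);
            context.LoadOptions = options;
            return context.Customers.First(c => c.Id == id);
        }

        // Entity Framework 4: with lazy loading disabled, ask for children explicitly.
        public static Customer GetCustomerEf(ShopEntities context, int id)
        {
            return context.Customers.Include("Orders").First(c => c.Id == id);
        }
    }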
Finally, you will need to decide on a serialization solution in general. By default, the DataContract and DataMember attributes that are generated as part of an EF model work great with WCF, but not at all great with the XmlSerializer used for things like old .asmx web services. Even in this circumstance you might be able to get away with it if you never need to serialize child objects over the wire. Since that usually isn't the case, you are going to want to move to WCF if you have a more service-oriented architecture, which will add a whole new host of opportunities, as well as headaches.
In order to deal with the partial classes situation, the hefty DataContext, and even the serialization issues, there are a number of new code-generation templates available with EF 4.0. The POCO Entity template has a lot of people excited as it creates POCO classes, just as you'd expect (the trouble is that this excludes any class or member attributes for WCF, etc.). Also, the Self-Tracking Entities template pretty much solves the context issue, because you can pass your entities around and let them remember when and how they were updated, so you can create/dispose your contexts much more freely (like Linq2Sql). As another bonus, this template is the go-to template for WCF or anything that builds on WCF, like RIA Services or WCF Data Services, so they have the [DataContract], [DataMember], and [KnownType] attributes already figured out.
Here is a link to the POCO template (not included out of the box):
(EDIT: I cannot post two hyperlinks, so just visit the visualstudio gallery website and search for "ADO.NET C# POCO Entity Generator")
Be sure to read the link on the ADO.NET team blog about implementing this. You might like the bit about splitting your context and your entities into separate projects/assemblies if you fall into the web service vs. WCF service category. The "Add Service Reference..." proxy generation doesn't do namespaces the same way "Add Web Reference..." used to, so you might like to actually reference your entity class assembly in your client app; that way you can "reuse types in referenced assemblies" (or whatever the option is called) on your service references and avoid a lot of ambiguous references from multiple services which use the same EF model and expose those entities...
I know this is long and rambling, but these little gotchas were waaay more of an issue for me than remembering to use context.EntityCollection.AddObject() instead of context.EntityCollection.InsertOnSubmit() and context.SaveChanges() instead of context.SubmitChanges()...
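A side-by-side sketch of that API rename, using placeholder names for the generated contexts (NorthwindDataContext for L2S, NorthwindEntities for EF) and a Customer entity:

    public class MigrationExamples
    {
        // LINQ to SQL style
        public void AddCustomerL2s(NorthwindDataContext context, Customer customer)
        {
            context.Customers.InsertOnSubmit(customer);
            context.SubmitChanges();
        }

        // Entity Framework 4 (ObjectContext) style
        public void AddCustomerEf(NorthwindEntities context, Customer customer)
        {
            context.Customers.AddObject(customer);
            context.SaveChanges();
        }
    }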
For EF Code First, it's mostly about reverse engineering the existing tables into EF classes. EF Power Tools now does this for you:
http://msdn.microsoft.com/en-us/data/jj200620.aspx
The rest is the obvious work of modifying your existing code to use those generated classes to talk to the database instead of LINQ to SQL.

Is there a simple Linq to SQL generator with bi-directional serialization attributes?

I'm trying to find a way to generate Linq to SQL classes with bi-directional serialization attributes. Basically I want a DataMember tag (with an appropriate order) on every association property, not just the ones where the class is the primary key (like the Visual Studio generator and SqlMetal do). I checked MyGeneration, but didn't really find anything that worked for me. I thought the T4 Toolbox was going to be my solution; it would be quite easy to modify it to add the attributes, but I get an exception on the calling side of my WCF service, and I've gotten no response back on the issue. I'm about to try installing CodeSmith and using PLINQO, but I'd prefer something free.
I'm pretty close to just writing my own T4 generator, but before I do that, I was hoping to find an pre-built solution to this rather simple problem first.
I ended up writing my own code generator for our L2S classes. We actually generate two sets of classes. One is a "lightweight" set of entities for client application use. These classes have no L2S plumbing. But they have the full datamember attributes with proper order. Then we have our L2S entities, which are strictly for backend use. This has worked out quite well.
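For illustration, a lightweight client-side entity of the kind described might look like this (names and members invented): no L2S plumbing, just DataContract/DataMember attributes with explicit ordering.

    using System.Runtime.Serialization;

    [DataContract]
    public class OrderLite
    {
        [DataMember(Order = 1)] public int Id { get; set; }
        [DataMember(Order = 2)] public string CustomerName { get; set; }
        [DataMember(Order = 3)] public decimal Total { get; set; }
    }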
Be careful using PLINQO. I've looked at that product extensively. In fact, much of my code generator is based on the code PLINQO generates. However, they have a "major flaw" (their words) in how they have implemented many-to-many relationships.
You might want to also look at a product named "Reegenerator".
Randy
This turned out to be the solution to my problem. I had just resigned myself to start researching my own generator when I stumbled upon that. It has a bidirectional serialization option and it works great! Here's a link to the author's blog, which contains a great video example of how to get started.

Does Model Driven Architecture play nice with LINQ-to-SQL or Entity Framework?

My newly created system was created using the Model Driven Architecture approach, so all I have is the model (let's say comprehensive 'Order' and 'Product' classes). These are fully tested classes that support the business of my application. Now it's time to persist these classes as objects on the hard drive and at some later time retrieve them in the same state (thinking very abstractly here). Typically I'd create an IOrderRepository interface and eventually an ADO.NET-driven OrderRepository class with methods such as GetAll(), GetById(), Save(), etc., or at some point a BinaryFormatter-driven OrderRepository class that serves a similar purpose through this same common interface.
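As a sketch, the contract described above might look something like this (member names follow the question's wording; the Order body is just a placeholder):

    using System.Collections.Generic;

    // The fully tested model class; business behavior lives here.
    public class Order
    {
        public int Id { get; set; }
        // ...
    }

    public interface IOrderRepository
    {
        IEnumerable<Order> GetAll();
        Order GetById(int id);
        void Save(Order order);
    }

    // An ADO.NET- or L2S-driven OrderRepository and a BinaryFormatter-driven one
    // would both implement this same interface.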
Is this approach just not conducive to LINQ to SQL or the Entity Framework? Something that attempts to build my model from a pre-existing DB structure just seems wrong. Could I take advantage of these technologies but retain this 'MDA' approach to software engineering?
... notice I did not mention that this was a Web App. It may or may not be -- and shouldn't matter.
In general, I think that you should not make types implementing business methods and types used for O/R mapping the same type. I think this violates the single responsibility principle. The point of your entity types is to bridge the gap between relational space and object space. The point of your business types is to have collections of testable behavior. Instead, I would suggest that you project from your entity types onto your business types when materializing objects from the database. Separating these two allows your business methods and data mappings to evolve independently, which is very important, especially if you cannot always control the schema of the database. I explain this idea more fully in this presentation.
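A hedged sketch of that projection idea, with invented type names: the mapping type mirrors the table, and materialization projects onto a separate business type that carries the behavior.

    using System.Collections.Generic;
    using System.Linq;

    // O/R mapping type: mirrors the shape of the table, no behavior.
    public class OrderRow
    {
        public int Id { get; set; }
        public decimal Subtotal { get; set; }
        public decimal Tax { get; set; }
    }

    // Business type: testable behavior, no persistence concerns.
    public class Order
    {
        public int Id { get; set; }
        public decimal Subtotal { get; set; }
        public decimal Tax { get; set; }
        public decimal Total { get { return Subtotal + Tax; } }
    }

    public static class OrderMaterializer
    {
        // Project from the entity type onto the business type when materializing.
        public static List<Order> Project(IQueryable<OrderRow> rows)
        {
            return rows.Select(r => new Order { Id = r.Id, Subtotal = r.Subtotal, Tax = r.Tax })
                       .ToList();
        }
    }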