Class model to use in DataMapper - datamapper

When implementing the DataMapper pattern, should the class model that I implement in the DataMapper package more closely resemble the domain model or the data model?

The whole point of a data mapper is to make the mapping transparent. So your class should be constructed in terms of the object model. The gory details of mapping it back and forth to storage (be it a database or otherwise) reside in the mapper.
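A minimal sketch of that separation, assuming a plain ADO.NET IDbConnection (class, table, and column names here are made up for illustration):

```csharp
using System.Data;

// Domain class: knows nothing about how it is stored.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Mapper: owns all the storage details.
public class CustomerMapper
{
    private readonly IDbConnection _connection;

    public CustomerMapper(IDbConnection connection)
    {
        _connection = connection;
    }

    public Customer FindById(int id)
    {
        using (var command = _connection.CreateCommand())
        {
            command.CommandText = "SELECT Id, Name FROM Customers WHERE Id = @id";
            var parameter = command.CreateParameter();
            parameter.ParameterName = "@id";
            parameter.Value = id;
            command.Parameters.Add(parameter);

            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read()) return null;
                // Only the mapper knows which column maps to which property.
                return new Customer
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                };
            }
        }
    }
}
```

Swapping the database for a file or an in-memory store then only touches the mapper, never the Customer class.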

Related

WPF+REST+EF: what is the best way to organize DTO's?

I have a WPF MVVM app with 3 layers:
UI
Services
DAL
and some item, for example Order. I need 3 DTO:
Class for MVVM layer, with PropertyChanged notification;
Class for Json deserializer (get objects by REST API)
Class for Entity Framework (cache data in DB).
Well, I could use ONE class for all three cases, but that would be a mix of different attributes (from EF, JSON, MVVM) and would create excess dependencies between layers.
Another way: make 3 classes, one per layer, and use AutoMapper for fast conversion between them. Not bad, but 3 almost identical (90%) copies of each DTO class... not an elegant solution.
What is the best approach? What do you use?
Thanks.
What is the best approach? What do you use?
The second approach, i.e. you define your business objects in a separate assembly that you can reference from all your applications. These classes should not implement any client-specific interfaces such as INotifyPropertyChanged; they should be pure POCO classes that contain business logic only.
In your WPF application, you then create a view model class that implements the INotifyPropertyChanged interface and wraps any properties of the business object that it makes sense to expose to and bind to from the view.
The view model then has a reference to the model and the view binds to the view model. This is typically how the MVVM design pattern is (or should be) implemented in a WPF application. The view model class contains your application logic, for example how to notify the view when a data bound property value is changed, and the model contains the business logic that is the same across all platforms and applications.
Of course this means that you will end up with a larger number of classes in total but this is not necessarily a bad thing as each class has its own responsibility.
The responsibility of a view model is to act as a model for the application-specific XAML view, whereas the responsibility of the model class is to implement the business logic, and the responsibility of the DTO class is to simply transfer the data between the different tiers. This is a far better solution - at least in my opinion and probably in most enterprise architects' opinions as well - than defining a single class that implements all kinds of UI-specific logic just for the sake of reducing the number of classes.
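A short sketch of the wrapping described above (the Order class and its properties are invented for the example):

```csharp
using System.ComponentModel;

// Pure POCO in the shared assembly: no UI dependencies.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// WPF-side view model: wraps the model and adds change notification.
public class OrderViewModel : INotifyPropertyChanged
{
    private readonly Order _order;

    public OrderViewModel(Order order)
    {
        _order = order;
    }

    // The view binds to this property, not to the model directly.
    public decimal Total
    {
        get { return _order.Total; }
        set
        {
            if (_order.Total == value) return;
            _order.Total = value;
            OnPropertyChanged(nameof(Total));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string name)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }
}
```

The JSON and EF attributes then live on their own DTO classes in those layers, and the POCO stays clean.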

Entity Framework 4.1 and T4 class generation. Is this design overkill?

I am trying to get some design validation on modeling a domain using EF4.1 and T4.
At design time I run a customized T4 POCO generator template that reads the edmx and creates 3 partial classes:
1) A domain-level class (where any specific business methods will reside). This is only generated one time; once generated, it's owned.
2) A POCO class with just properties and virtual navigation properties to related objects, loaded lazily. This can be regenerated if/when any underlying columns in the database change.
3) A metadata class with an internal class whose properties are decorated with data annotations to provide additional column-level validation before inserting/updating data.
Is this overkill? I liked the separation, namely between the poco and domain object so that I can add methods to the partial domain object at any time without having to worry about method loss when needing to rerun the T4 template after underlying data specs may change. What about the metadata class? Is that unnecessary if my application will be performing field validation?
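For reference, the three-partial-class layout described above might look roughly like this (the Order class and its members are hypothetical; the metadata "buddy class" uses MetadataTypeAttribute from System.ComponentModel.DataAnnotations):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// 1) Hand-owned partial: business methods, generated once, never regenerated.
public partial class Order
{
    public bool IsOverdue()
    {
        return DueDate < DateTime.Today;
    }
}

// 2) T4-generated POCO partial: properties only, safe to regenerate
//    whenever the underlying columns change.
public partial class Order
{
    public int Id { get; set; }
    public DateTime DueDate { get; set; }
    public string CustomerName { get; set; }
    public virtual Customer Customer { get; set; }
}

// 3) Metadata partial: points at a buddy class carrying the
//    column-level data annotations.
[MetadataType(typeof(OrderMetadata))]
public partial class Order { }

internal class OrderMetadata
{
    [Required, StringLength(100)]
    public string CustomerName { get; set; }
}
```

Because the business methods live in their own file, rerunning the T4 template only overwrites the POCO and metadata partials.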

Debugging Entity Framework DBContext API Mappings

I am mapping some pre-existing business objects to our database using Entity Framework. These objects were originally using a home-grown data access method, but we wanted to try out Entity Framework now that it supports Code First. It was my expectation that this would be fairly simple, but now I am having some doubts.
I am trying to use only attributes to accomplish this so that I don't have some of the mapping here, some of it there, and still more of it over there....
When I query for entities, I am getting System.Data.Entity.DynamicProxies.MyClass_23A498C7987EFFF2345908623DC45345 and similar objects back. These objects have the data from the associated record there as well as related objects (although those are DynamicProxies also).
What is happening here? Is something going wrong with my mapping? Why is it not bringing back MyBusinessObject.MyClass instead?
That has nothing to do with mapping. The types you see are called dynamic proxies. At runtime, EF derives a class from every type you map and uses it instead of your type. These classes have some additional internal logic inside overridden property setters and getters. That logic is needed for lazy loading and dynamic change tracking of attached entities.
This behaviour can be turned off in context instance:
context.Configuration.ProxyCreationEnabled = false;
Your navigation properties will not be loaded automatically once you do this and you will have to use eager loading (Include method in queries) or explicit loading.
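Putting it together, a sketch along these lines (MyDbContext, Orders, and Customer are placeholder names for your own context and entities):

```csharp
using (var context = new MyDbContext())
{
    // Turn off dynamic proxy generation for this context instance;
    // queries now return your own POCO types.
    context.Configuration.ProxyCreationEnabled = false;

    // Lazy loading no longer works, so load navigation properties
    // eagerly with Include...
    var orders = context.Orders
                        .Include("Customer")
                        .ToList();

    // ...or explicitly, per entity, via the Entry API:
    var order = context.Orders.Find(1);
    context.Entry(order).Reference(o => o.Customer).Load();
}
```

With proxies off, the debugger shows MyBusinessObject.MyClass instead of the DynamicProxies type names.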

AutoMapper classes with a Transient lifestyle in IoC

I'm using AutoMapper to map domain entities to view models in an ASP.NET MVC app. I register these mapping classes in Castle Windsor so they are available to the controller through ctor dependency injection. These mapping classes have a virtual CreateMap method where I can override AutoMapper's mapping, telling it how to map fields from the entity to the view model, which fields to ignore, pointing to methods that transform the data, etc. All of this is working well; big kudos to the people behind AutoMapper!
So far I've been registering the mapping classes with a Singleton lifestyle in Windsor, but one of them needs to use the IAuthorizationRepository from Rhino.Security which needs to have its components registered as Transient. This forces me to register the mapping classes also as transient, because a singleton mapping class holding a reference to a transient IAuthorizationRepository causes problems the second time the mapper is used (i.e., ISession is already closed errors).
Is it a waste of resources to register all of these mapping classes with a Transient lifestyle, which will cause the mapping class to be instantiated and the CreateMap method to run each time the system wants to map a domain entity to a view model?
Or should I try to find a way to separate the IAuthorizationRepository from the mapping class so I can keep the mapping classes as Singletons?
Thanks
Dan
Another way around it is to use the TypedFactoryFacility; then, instead of injecting IAuthorizationRepository into your singletons, you can inject Func<IAuthorizationRepository>.
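A rough sketch of that setup, assuming Windsor's delegate-based typed factory support (the mapping class and repository implementation names are invented):

```csharp
// Windsor registration: the facility teaches the container to
// resolve Func<T> as an on-demand factory for T.
container.AddFacility<TypedFactoryFacility>();
container.Register(
    Component.For<IAuthorizationRepository>()
             .ImplementedBy<AuthorizationRepository>()
             .LifeStyle.Transient,
    Component.For<OrderMappingProfile>()
             .LifeStyle.Singleton);

// The singleton mapping class no longer holds a repository instance;
// it asks the factory for a fresh transient one each time it maps.
public class OrderMappingProfile
{
    private readonly Func<IAuthorizationRepository> _repositoryFactory;

    public OrderMappingProfile(Func<IAuthorizationRepository> repositoryFactory)
    {
        _repositoryFactory = repositoryFactory;
    }

    public void Map()
    {
        var repository = _repositoryFactory(); // new instance per call
        // ... use repository while configuring/performing the mapping ...
    }
}
```

That way the mapping classes stay singletons (CreateMap runs once) while the short-lived repository, and the ISession behind it, is created per use.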

Data Mapper Pattern

Up until now I've been using Active Record in all my C# database-driven applications. But now my application requires that my persistence code be split from my business objects. I have read a lot of posts regarding Martin Fowler's Data Mapper pattern, but my knowledge of this pattern is still very limited.
Let's use the following example:
I have 2 tables: Customer and CustomerParameters. The CustomerParameters table contains default Customer values for creating a new Customer.
I will then have to create a CustomersMapper class to handle all of the Customer persistence. My Customer and CustomersList class will then collaborate with this mapper class in order to persist customer data.
I have the following questions:
How would I transfer raw data to and from my Customer class and the mapper without breaking certain business rules? DTOs?
Is it acceptable to have a SaveAll and LoadAll method in my Mapper class for updating and loading multiple customers' data? If so, in case of SaveAll, how will the mapper know when to update or insert data?
Will the Customer mapper class be responsible for retrieving the default values from the CustomerParameters table as well, or will it be better to create a CustomerParameters mapper?
An O/R mapper tool is not really an option here. The database I'm using is transactional and requires that I write my own mapper.
Any ideas and comments will be greatly appreciated.
Shaun, I would answer your questions this way:
ad 1) The Mapper is responsible for creating the Customer object. Your Mapper object will have something like a RetrieveById method (for example). It will accept an ID and somehow (that's the responsibility of the Mapper object) construct a valid Customer object. The same is true the other way: when you call Mapper.Update with a valid Customer object, the Mapper is responsible for making sure that all the relevant data is persisted (wherever appropriate: db, memory, file, etc.).
ad 2) As I noted above, retrieve/persist are methods on the Mapper object. It is its responsibility to provide such functionality. Therefore LoadAll and SaveAll (probably passing an array of value objects) are valid Mapper methods.
ad 3) I would say yes. But you can separate various aspects of Mapper objects into separate classes (if you want to/need to): default values, rule validation, etc.
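On the insert-vs-update question in point 2, one common convention is to treat an unassigned identity as "new". A sketch under that assumption (Customer here is a minimal stand-in for your own class, with Id == 0 meaning not yet persisted):

```csharp
using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }       // 0 until the row is inserted
    public string Name { get; set; }
}

public class CustomerMapper
{
    // SaveAll decides per object whether to INSERT or UPDATE
    // based on whether the identity has been assigned yet.
    public void SaveAll(IEnumerable<Customer> customers)
    {
        foreach (var customer in customers)
        {
            if (customer.Id == 0)
                Insert(customer);   // no identity yet -> new row
            else
                Update(customer);   // existing identity -> update row
        }
    }

    private void Insert(Customer customer)
    {
        // INSERT INTO Customers ... ; then assign the generated Id
        // back onto the object.
    }

    private void Update(Customer customer)
    {
        // UPDATE Customers SET ... WHERE Id = @id
    }
}
```

An alternative is an explicit IsNew (or IsDirty) flag on the object, which also lets SaveAll skip unchanged customers.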
I hope it helps. I really recommend you read Martin Fowler's book Patterns of Enterprise Application Architecture.
I would suggest that you take a look at an O/R-mapper tool before you try to implement the Data Mapper pattern yourself. It will save you a lot of time. A popular choice of O/R-mapper is NHibernate.
You could check out iBATIS.NET as an alternative to NHibernate. It's also an O/R tool, but I've found it to be a little easier to use than NHibernate.
http://ibatis.apache.org/