I have posted a question about multi-language database design here: What are best practices for multi-language database design? I like Martin's suggestion, but now I have a question: what is the best way to create the business objects? If I create a Product which contains ProductTranslation objects, binding and working with it in the UI will be complex; if I use only the localized object, I will have to create different objects for the CMS. Thanks a lot!
Difficult to answer, since this depends on your exact needs. What we have in one place is this (based on the DB model described in the other question):
the business objects are modeled after the database, meaning we have a class Product which has a collection of ProductTranslation objects
the Product class has properties for the multilingual data, e.g. Description
the getters of these properties look up the correct translation object (based on the current language) and return the corresponding value
a very simple example (showing only the relevant parts):
public class ProductTranslation
{
    public string Language;
    public string Description;
}

public class Product
{
    private List<ProductTranslation> _translations;
    private const string DefaultLanguage = "en"; // whatever your default language is

    private ProductTranslation GetTranslation(string language)
    {
        // return the translation for the specified language,
        // or fall back to the translation for the default language
        return _translations.FirstOrDefault(t => t.Language == language)
            ?? _translations.FirstOrDefault(t => t.Language == DefaultLanguage);
    }

    public string Description
    {
        get { return GetTranslation(GetCurrentLanguage()).Description; }
    }
}
We chose this approach for an ASP.NET web application. The CurrentLanguage may be different for each user (users can select their preferred language for the UI and the data). This approach allows us to cache data globally for all users.
Depending on your needs, this approach might not be the best. E.g. it might be better to model the Product and ProductTranslation tables as one business object (Product) which is then loaded for a specific language (e.g. if the data is read-only and it is not required to cache it application-wide).
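For illustration, here is a rough sketch of that single-object approach; MyDataContext, the table names and LocalizedProduct are all made up, it simply assumes a LINQ to SQL style context over the Product and ProductTranslation tables:

public class LocalizedProduct
{
    public int Id { get; set; }
    public string Description { get; set; }   // already localized at load time
}

public class LocalizedProductRepository
{
    private readonly MyDataContext _db;

    public LocalizedProductRepository(MyDataContext db)
    {
        _db = db;
    }

    public LocalizedProduct GetById(int id, string language)
    {
        // the translation is resolved while loading, so the business object
        // only ever carries data for one language
        return (from p in _db.Products
                join t in _db.ProductTranslations on p.Id equals t.ProductId
                where p.Id == id && t.Language == language
                select new LocalizedProduct { Id = p.Id, Description = t.Description })
               .SingleOrDefault();
    }
}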
This is a common requirement in many web-based projects: an entity has to show information about another related entity. For example, a book in an e-commerce site has to show relevant information about its author.
Let's say I model both the book and the author as entities; how should I implement a feature which displays a book and its author's information on the same page?
I can make a call to the BookRepo to retrieve the book's information, and then another call to the AuthorRepo to retrieve the author, using the authorId inside the book entity. That is two queries.
I can write a query where I join the Book and Author tables together and retrieve both pieces of information in one query. But which repo does this query go in? Does this break DDD because I am assuming details about the Book and Author entities?
Which is the best practice, and what other ways are there to approach this problem?
(I am assuming the use of standard SQL queries [such as PHP + MySQL], since in EF 4 you would define associations between Book and Author which would solve the problem rather easily.)
There is no silver bullet solution to this, but you have a few options.
Your first proposed solution of making a call to two repositories is perfectly valid and happens all the time in practice. For example, it takes over a hundred different services to render an Amazon product page. Each service is responsible for providing data specific to its bounded context. You can create a service called something like BookService which calls the two repositories and returns a reporting object or a DTO that has all the data you need for the particular view. If you feel that the performance cost of two repository calls is going to be an issue, you can employ caching, or CQRS to create appropriate read models, but don't jump to those solutions prematurely.
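As a rough sketch of that first option (BookService, the repository interfaces and the DTO shape are assumptions made up to match the question, not a prescribed API):

// a read-only DTO shaped for the "book details" view
public class BookDetailsDto
{
    public string Title { get; set; }
    public string AuthorName { get; set; }
}

public class BookService
{
    private readonly IBookRepository _books;
    private readonly IAuthorRepository _authors;

    public BookService(IBookRepository books, IAuthorRepository authors)
    {
        _books = books;
        _authors = authors;
    }

    public BookDetailsDto GetBookDetails(int bookId)
    {
        var book = _books.GetById(bookId);              // query 1
        var author = _authors.GetById(book.AuthorId);   // query 2

        return new BookDetailsDto
        {
            Title = book.Title,
            AuthorName = author.Name
        };
    }
}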
But which repo does this query go in?
I would just add it to the BookRepository, or even a whole new repository called BookDetailsReportingRepository, perhaps a method called GetBookDetails. This method would not return an editable entity, but a reporting object which is a projection of values from multiple entities.
Does this break DDD because I am assuming details about the Book and Author entities?
This does not violate DDD and in my opinion makes it easier to apply. Just regard the data returned by the aforementioned repository as reporting objects, not entities.
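And here is a sketch of the single-query variant, reusing the BookDetailsDto shape from above; the join happens in one SQL statement and the result is a read-only projection, never an editable entity (the connection handling, table and column names are illustrative only):

public class BookDetailsReportingRepository
{
    private readonly IDbConnection _connection;

    public BookDetailsReportingRepository(IDbConnection connection)
    {
        _connection = connection;
    }

    public BookDetailsDto GetBookDetails(int bookId)
    {
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText =
                @"SELECT b.Title, a.Name
                  FROM Book b
                  JOIN Author a ON a.AuthorId = b.AuthorId
                  WHERE b.BookId = @bookId";

            var p = cmd.CreateParameter();
            p.ParameterName = "@bookId";
            p.Value = bookId;
            cmd.Parameters.Add(p);

            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    return null;

                return new BookDetailsDto
                {
                    Title = reader.GetString(0),
                    AuthorName = reader.GetString(1)
                };
            }
        }
    }
}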
But even though you don't seem to be using an ORM, you probably have to populate your entities from your SQL queries, and your entities have to relate to each other in some way (collections, navigation properties, etc.).
If your domain contains entities that have no associations between each other, I wouldn't call it DDD since you lack some important ingredients like aggregates, value objects, bi-/unidirectional relationships.
But what do I know :-) Maybe you have made your puzzle well and this last piece is to merge entities into a "view" that can be useful for your clients.
Since repositories normally operate on an aggregate-root entity, you can have repository methods like ListBooksByAuthor (BookRepository) or ListAuthors (AuthorRepository).
When you want to display complex data from many different aggregates on a web page, I recommend using Data Transfer Objects. Let that DTO be unique to that page or use case, acting as a "view" that displays all (or most of) the data that web page needs.
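For example, a page-specific DTO for an "author page" might flatten data from both aggregates; every name below is invented for illustration, and ListBooksByAuthor is the repository method mentioned above:

// a DTO shaped for one particular web page; it is not a domain entity
public class AuthorPageDto
{
    public string AuthorName { get; set; }
    public List<string> BookTitles { get; set; }
}

public class AuthorPageService
{
    private readonly IAuthorRepository _authors;
    private readonly IBookRepository _books;

    public AuthorPageService(IAuthorRepository authors, IBookRepository books)
    {
        _authors = authors;
        _books = books;
    }

    public AuthorPageDto GetAuthorPage(int authorId)
    {
        var author = _authors.GetById(authorId);
        var books = _books.ListBooksByAuthor(authorId);

        // hand-rolled mapping; a tool like AutoMapper can generate this kind of code for you
        return new AuthorPageDto
        {
            AuthorName = author.Name,
            BookTitles = books.Select(b => b.Title).ToList()
        };
    }
}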
I also recommend NOT using DTOs everywhere unless you're using web services. Using DTOs gives you both pros and cons. Together with a service layer they give you a nice anti-corruption layer, and also give you a place to inject the Book and Author repositories. From the service layer you can then assemble and reassemble DTOs (look at AutoMapper or similar; it helps you a lot).
BUT too many DTOs everywhere also gives you overhead in maintaining the application. It adds another layer to maintain.
I prefer to just use it for certain clients/web pages.
I hope you understand what I'm trying to explain :-)
If you look at this page it describes two ways to load and relate the aggregate roots. Linking this back to your example:
The Book class would encapsulate the relevant Author information as a value type, so when the Book information is displayed on the web page it has all the information on the author it needs. If the user decides to view more information on the Author, they can follow a link to an Author page (whatever the requirement is).
If you have a service method called FindBookByTitle, then loading the Book entity would also load the relevant author information from the BookRepo.
class Author
{
    public Author(int authorID, FullName name)
    {
        AuthorID = authorID;
        Name = name;
    }
    public int AuthorID { get; private set; }
    public FullName Name { get; private set; }
    public List<BookDetails> AuthoredBooks { get; set; }
}

class BookDetails
{
    public BookDetails(int bookID, string title)
    {
        BookID = bookID;
        Title = title;
    }
    public int BookID { get; private set; }
    public string Title { get; private set; }
}

class Book
{
    public Book(int bookID, string title)
    {
        BookID = bookID;
        Title = title;
    }
    public int BookID { get; private set; }
    public string Title { get; private set; }
    public List<AuthorDetails> Writers { get; set; }
}

class AuthorDetails
{
    public AuthorDetails(int authorID, FullName name)
    {
        AuthorID = authorID;
        Name = name;
    }
    public int AuthorID { get; private set; }
    public FullName Name { get; private set; }
}

class FullName
{
    public FullName(string name, string surname)
    {
        Name = name;
        Surname = surname;
    }
    public string Name { get; private set; }
    public string Surname { get; private set; }
}
I am working on a project following the suggested repository pattern in Steven Sanderson's excellent book "Pro ASP.NET MVC 2 Framework".
Take the following example: I have a table for "Products" and one for "Images". Each has its own repository that creates a new DataContext in its constructor. Now, I want to establish a many-to-many relationship between the two entities called "ImagesForProducts".
Should I create a separate repository for the ImagesForProducts entities? If so, how can I share the DataContext between all the entities? In that case I have to instantiate my ProductController with two repositories (for Products and for ImagesForProducts), right?
I'd rather access the images using my product instances, so that I can write myProduct.AddImage(img). But how can I persist the relation in the database using the ProductRepository?
As you can see, I am not sure about the overall architecture and would highly appreciate a basic code example.
Thanks in advance!
After some careful research and consideration, I decided to let the repositories handle image attachments instead of the product instances (mostly because the instances shouldn't deal with any database-related stuff).
I already have an ImagesForProducts entity because I am using LINQ-to-SQL mapping. I therefore added a Table of that type to my product repository, which I can initialize with the current DataContext of the product repository. That way, both tables always use a shared DataContext and I can simply implement a method AttachImageToProduct like this:
public class MsSqlProductsRepository : MsSqlRepository<Product>, IProductsRepository
{
    protected Table<ImagesForProducts> imageRelationsTable { get; set; }

    public MsSqlProductsRepository(string connectionString)
        : base(connectionString)
    {
        imageRelationsTable = DataContext.GetTable<ImagesForProducts>();
    }

    public void AttachImageToProduct(Image image, Product product)
    {
        // do nothing if the relation already exists
        if (imageRelationsTable.Any(r => r.ImageId == image.Id && r.ProductId == product.Id))
            return;

        ImagesForProducts rel = new ImagesForProducts();
        rel.ImageId = image.Id;
        rel.ProductId = product.Id;

        imageRelationsTable.InsertOnSubmit(rel);
        DataContext.SubmitChanges();
    }
}
Do you have any general concerns about this solution?
The repository pattern should be used to represent an in-memory store for your domain objects. Since you want your domain model to be ignorant of the persistence internals and also have everything designed around aggregate roots, it does not make sense to have an ImagesForProducts entity, and thus it doesn't make sense to have a separate repository for ImagesForProducts entities.
First of all, I would recommend building your domain model with POCO objects that can be used in any persistence scenario (LINQ to SQL, EF, stored procedures...).
You should have only two repositories (ProductRepository and ImageRepository) and resolve the many-to-many relation as "relational" properties in both domain objects. For example, you can add an Image collection to the Product domain object and a Product collection to the Image domain object. Once you build your POCO objects, you can handle mappings to the specific persistence store inside your repositories (preferably in the constructor).
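A minimal sketch of such POCOs, with the many-to-many resolved into navigation collections on both sides (property names here are illustrative):

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Image> Images { get; set; }     // many-to-many, seen from the product side
}

public class Image
{
    public int Id { get; set; }
    public string Url { get; set; }
    public ICollection<Product> Products { get; set; } // the other side of the same relation
}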
Once you implement the plumbing, you can add an image to the product:
product.Images.Add(image);
Then you can call your repository like this:
productRepository.Add(product);
So, I'm developing some software, and trying to keep myself using TDD and other best practices.
I'm trying to write tests to define the classes and repository.
Let's say I have the classes, Customer, Order, OrderLine.
Now, do I create the Order class as something like
abstract class Entity {
int ID { get; set; }
}
class Order : Entity {
Customer Customer { get; set; }
List<OrderLine> OrderLines { get; set; }
}
Which will serialize nicely, but if I don't care about the OrderLines, or the Customer details, it is not as lightweight as one would like. Or do I just store the IDs of the related items and add a function for getting them?
class Order : Entity {
int CustomerID { get; set; }
List<OrderLine> GetOrderLines() { /* ... */ }
}
class OrderLine : Entity {
int OrderID { get; set; }
}
And how would you structure the repository for something like this?
Do I use an abstract CRUD repository with methods GetByID(int), Save(entity), Delete(entity) that each item's repository inherits from and adds its own specific methods to, something like this?
public abstract class RepositoryBase<T, TID> : IRepository<T, TID> where T : AEntity<TID>
{
    private List<T> Entities { get; set; }

    public RepositoryBase()
    {
        Entities = new List<T>();
    }

    public T GetByID(TID id)
    {
        return Entities.SingleOrDefault(x => x.Id.Equals(id));
    }

    public T Save(T entity)
    {
        // replace any existing entity with the same ID
        Entities.RemoveAll(x => x.Id.Equals(entity.Id));
        Entities.Add(entity);
        return entity;
    }

    public T Delete(T entity)
    {
        Entities.RemoveAll(x => x.Id.Equals(entity.Id));
        return entity;
    }
}
What's the 'best practice' here?
Entities
Let's start with the Order entity. An order is an autonomous object, which isn't dependent on a 'parent' object. In domain-driven design this is called an aggregate root; it is the root of the entire order aggregate. The order aggregate consists of the root and several child entities, which are the OrderLine entities in this case.
The aggregate root is responsible for managing the entire aggregate, including the lifetime of the child entities. Other components are not allowed to access the child entities; all changes to the aggregate must go through the root. Also, if the root ceases to exist, so do the children, i.e. order lines cannot exist without a parent order.
The Customer is also an aggregate root. It isn't part of an order, it's only related to an order. If an order ceases to exist, the customer doesn't. And the other way around, if a customer ceases to exist, you'll want to keep the orders for bookkeeping purposes. Because Customer is only related, you'll want to have just the CustomerId in the order.
class Order
{
int OrderId { get; }
int CustomerId { get; set; }
IEnumerable<OrderLine> OrderLines { get; private set; }
}
Repositories
The OrderRepository is responsible for loading the entire Order aggregate, or parts of it, depending on the requirements. It is not responsible for loading the customer. If you need the customer, load it from the CustomerRepository, using the CustomerId from the order.
class OrderRepository
{
Order GetById(int orderId)
{
// implementation details
}
Order GetById(int orderId, OrderLoadOptions loadOptions)
{
// implementation details
}
}
enum OrderLoadOptions
{
All,
ExcludeOrderLines,
// other options
}
If you ever need to load the order lines afterwards, you should use the tell, don't ask principle. Tell the order to load its order lines, and which repository to use. The order will then tell the repository the information it needs to know.
class Order
{
int OrderId { get; }
int CustomerId { get; set; }
IEnumerable<OrderLine> OrderLines { get; private set; }
void LoadOrderLines(IOrderRepository orderRepository)
{
// simplified implementation
this.OrderLines = orderRepository.GetOrderLines(this.OrderId);
}
}
Note that the code uses an IOrderRepository to retrieve the order lines, rather than a separate repository for order lines. Domain-driven design states that there should be a repository for each aggregate root. Methods for retrieving child entities belong in the repository of the root and should only be accessed by the root.
Abstract/base repositories
I have written abstract repositories with CRUD operations myself, but I found that it didn't add any value. Abstraction is useful when you want to pass instances of subclasses around in your code. But what kind of code will accept any BaseRepository implementation as a parameter?
Also, the CRUD operations can differ per entity, making a base implementation useless. Do you really want to delete an order, or just set its status to deleted? If you delete a customer, what will happen to the related orders?
My advice is to keep things simple. Stay away from abstraction and generic base classes. Sure, all repositories share some kind of functionality and generics look cool. But do you actually need it?
I would divide my project up into the relevant parts: Data Transfer Objects (DTOs) and Data Access Objects (DAOs). I would want the DTOs to be as simple as possible; terms like POJO (Plain Old Java Object) and POCO (Plain Old CLR Object) are used here. Simply put, they are container objects with very little, if any, functionality built into them.
The DTOs are basically the building blocks of the whole application and will marry up the layers. For every object that is modeled in the system, there should be at least one DTO. How you then put these into collections is entirely up to the design of the application. Obviously there are natural one-to-many relationships floating around, such as a Customer having many Orders. But the fundamentals of these objects are what they are. For example, an order has a relationship with a customer, but can also stand alone, and so needs to be separate from the customer object. All many-to-many relationships should be resolved down into one-to-many relationships, which is easy when dealing with nested classes.
Presumably there will be CRUD objects within the Data Access Objects category. This is where it gets tricky, as you have to manage all the relationships discovered in design and the lifetime model of each. When fetching DTOs back from the DAO, the loading options are essential, as they can mean the difference between your system crawling because of over-eager loading, or high network traffic from fetching data back and forth between your application and the store through lazy loading.
I won't go into flags and loading options as others here have done all that.
class OrderDAO
{
    public OrderDTO Create(OrderDTO order)
    {
        // Code here that will create the actual order and store it, updating the
        // fields in the OrderDTO where necessary, one being the GUID field of the new ID.
        // I stress GUID as this means better scalability.
        return order;
    }
}
As you can see the OrderDTO is passed into the Create Method.
For the Create method, when dealing with brand new nested objects, there will have to be some code dealing with marrying up data that has already been stored (for example a customer with old orders) with the new order. The system will have to deal with the fact that some of the operations are update statements, whilst others are creates.
However, one piece of the puzzle that is always missed is that of multi-user environments, where DTOs (plain objects) are disconnected from the application and returned back to the DAO for CRUD. This usually involves some concurrency control, which can be nasty and get complicated. A simple mechanism such as a DateTime or version number works here, although when doing CRUD on a nested object you must define the rules for what gets updated and in what order, and if an update fails its concurrency check, you have to decide whether to fail the whole operation or only part of it.
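A very small sketch of the version-number variant; the table, the column names and the ExecuteNonQuery helper are all assumptions made for illustration:

public void Update(OrderDTO order)
{
    // optimistic concurrency: only update if the row still carries the version we originally read
    int rowsAffected = ExecuteNonQuery(
        @"UPDATE Orders
          SET Status = @status, Version = Version + 1
          WHERE OrderId = @id AND Version = @expectedVersion",
        new { status = order.Status, id = order.Id, expectedVersion = order.Version });

    if (rowsAffected == 0)
        throw new DBConcurrencyException("The order was changed or deleted by another user.");

    order.Version++;   // keep the disconnected DTO in sync with the store
}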
Why not create separate Order classes? It sounds to me like you're describing a base Order object, which would contain the basic order and customer information (or maybe not even the customer information), and a separate Order object that has line items in it.
In the past, I've done as Niels suggested, and either used boolean flags or enums to describe optionally loading child objects, lists, etc. In Clean Code, Uncle Bob says that these variables and function parameters are excuses that programmers use to not refactor a class or function into smaller, easier to digest pieces.
As for your class design, I'd say that it depends. I assume that an Order could exist without any OrderLines, but could not exist without a Customer (or at least a way to reference the customer, like Niels suggested). If this is the case, why not create a base Order class and a second FullOrder class? Only FullOrder would contain the list of OrderLines. Following that thought, I'd create separate repositories to handle CRUD operations for Order and FullOrder.
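Something along these lines (just a sketch of the split, reusing the Entity base class from the question):

// lightweight order header, enough for lists and summaries
class Order : Entity
{
    int CustomerID { get; set; }
}

// full aggregate, loaded only when the line items are actually needed
class FullOrder : Order
{
    List<OrderLine> OrderLines { get; set; }
}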
If you are interested in domain driven design (DDD) implementation with POCOs along with explanations take a look at the following 2 posts:
http://devtalk.dk/2009/06/09/Entity+Framework+40+Beta+1+POCO+ObjectSet+Repository+And+UnitOfWork.aspx
http://www.primaryobjects.com/CMS/Article122.aspx
There is also a project that implements domain-driven patterns (repository, unit of work, etc.) for various persistence frameworks (NHibernate, Entity Framework, etc.) called NCommon.
If it's important to keep data access 'away' from business and presentation layers, what alternatives or approaches can I take so that my LINQ to SQL entities can stay in the data access layer?
So far I seem to be simply duplicating the classes produced by sqlmetal, and passing those objects around instead, simply to keep the two layers apart.
For example, I have a table in my DB called Books. If a user is creating a new book via the UI, the Book class generated by sqlmetal seems like a perfect fit although I'm tightly coupling my design by doing so.
What I do is have all my DataAccess (LINQ to SQL in your case) in one project and then have another business project which uses the DataAccess project, thereby segregating the DataAccess project from the UI layer.
In your example for books, my business layer would have a class called Book:
public class Book
{
private IAuthorRepository _authorRepository = new LinqToSqlAuthorRepository();
private IBookRepository _bookRepository = new LinqToSqlBookRepository();
public int BookId { get { return _bookId; }}
private int _bookId;
public virtual string BookName { get; set; }
public virtual string ISBN {get;set;}
// ...Other properties
public Book()
{
// When creating a new book
_bookId = 0;
}
public Book(int id)
{
// For an existing book
_bookId = id;
Load();
}
protected void Load()
{
BookEntity book = _bookRepository.GetBook(BookId);
BookName = book.BookName;
ISBN = book.ISBN;
}
public void Save()
{
BookEntity book = MapEntityFromThisClass();
_bookRepository.Save(book);
}
public Author GetAuthor()
{
return _authorRepository.GetAuthor();
}
}
This then means that your UI is totally separated from the actual data access and that all of your Book logic is contained sensibly within a class.
You can make this further separated by using IoC with a system such as Microsoft Unity or Castle Windsor, so that you don't have to write = new LinqToSqlXYZ(); and can instead write something along the lines of IoC.Resolve<IBookRepository>(); (depending on your implementation). This then means your Book class is not tied down to LINQ to SQL anymore either.
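With Unity, for example, the wiring might look roughly like this (the type names reuse the ones above; Castle Windsor has an equivalent registration API):

using Microsoft.Practices.Unity;

// composition root: map the abstractions to the LINQ to SQL implementations once
var container = new UnityContainer();
container.RegisterType<IBookRepository, LinqToSqlBookRepository>();
container.RegisterType<IAuthorRepository, LinqToSqlAuthorRepository>();

// everywhere else, resolve against the interface only
var bookRepository = container.Resolve<IBookRepository>();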
Linq to Sql offers a 1:1 mapping between entities and your database tables. It could be argued that the entities themselves are a level of abstraction away from the database, and that is what you are tied down to.
If you are making a 1:1 duplication of the entities offered up by linq to sql, then it may mean that its not worth having them there, because you are still just as tied to those classes as you are to the entities offered by linq to sql.
By creating another layer, you are also eliminating the benefits of change tracking provided by linq to sql, meaning you have to copy any changes from your classes into the entities provided by linq to sql to perform data operations.
If you would like to abstract away the DataContext type code from any presentation or business layers, and control the interface to your data more tightly, then the repository pattern is good. You can always have your repository return the entity types created by linq to sql, which means you are not duplicating types, you also get change tracking, but you are still keeping the code that controls the DataContext inside the repository.
You may consider projecting the data into a different class for the benefit of your presentation (a view model), or business logic. This is the route I tend to go down, if I want to use linq to sql, but I don't want a 1:1 mapping between the entities and my view models.
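For example, a repository method might project straight from the linq to sql entities into a view model; BookViewModel, dataContext and the property names are assumptions here, not an existing API:

// the DataContext never leaves the repository; only the view model does
public IList<BookViewModel> GetBookList()
{
    return (from b in dataContext.Books
            select new BookViewModel
            {
                Title = b.BookName,
                Isbn = b.ISBN
            }).ToList();
}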
I'm trying to decide on the best pattern for data access in my MVC application.
Currently, having followed the MVC Storefront series, I am using repositories, exposing IQueryable to a service layer, which then applies filters. Initially I have been using LINQ to SQL, e.g.
public interface IMyRepository
{
IQueryable<MyClass> GetAll();
}
Implemented in:
public class LINQtoSQLRepository : IMyRepository
{
public IQueryable<MyClass> GetAll()
{
return from table in dbContext.table
select new MyClass
{
Field1 = table.field1,
// ... etc.
};
}
}
Filter for IDs:
public static class TableFilters
{
public static MyClass WithID(this IQueryable<MyClass> qry, string id)
{
return (from t in qry
where t.ID == id
select t).SingleOrDefault();
}
}
Called from service:
public class TableService
{
public MyClass RecordsByID(string id)
{
return _repository.GetAll()
.WithID(id);
}
}
I ran into a problem when I experimented with implementing the repository using Entity Framework with LINQ to Entities. The filters class in my project contains some more complex operations than the "WHERE ... == ..." in the example above, which I believe require different implementations depending on the LINQ provider. Specifically I have a requirement to perform a SQL "WHERE ... IN ..." clause. I am able to implement this in the filter class using:
string[] aParams = // array of IDs
qry = qry.Where(t => aParams.Contains(t.ID));
However, in order to perform this against Entity Framework, I need to provide a solution such as the BuildContainsExpression which is tied to the Entity Framework. This means I have to have 2 different implementations of this particular filter, depending on the underlying provider.
I'd appreciate any advice on how I should proceed from here.
It seemed to me that exposing an IQueryable from my repository would allow me to perform filters on it regardless of the underlying provider, enabling me to switch between providers if and when required. However, the problem I describe above makes me think I should be performing all my filtering within the repositories and returning IEnumerable, IList or single classes.
Many thanks,
Matt
This is a very popular question. One that I constantly ask myself. I've always felt it best to return IEnumerable rather than IQueryable from a repository.
The purpose of a repository is to encapsulate the database infrastructure so the client need not worry about the data source. However, if you return IQueryable you are at the mercy of the consumer as to what kind of query will get run against your db, and whether they will do something that the LINQ provider doesn't support.
Take paging for example. Let's say you have a Customer entity and your database could have hundreds of thousands of customers. Which code would you rather have your client write?
var customers = repos.GetCustomers().Skip(skipCount).Take(pageSize).ToList();
OR
var customers = repos.GetCustomers(pageIndex, pageSize);
In the first approach you make it impossible for the repository to restrict the number of records retrieved from the data source. Also, your consumer has to calculate the skipCount.
In the second approach you provide a more coarse-grained interface to your client. Now your repository can enforce constraints on the pageSize in order to optimize the query. You also encapsulate the calculation of the skipCount.
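A sketch of that second approach, with the repository owning the paging rules (the MaxPageSize constraint and dbContext are illustrative, not part of the question's code):

public IList<Customer> GetCustomers(int pageIndex, int pageSize)
{
    const int MaxPageSize = 100;                 // constraint enforced by the repository
    if (pageSize > MaxPageSize)
        pageSize = MaxPageSize;

    int skipCount = pageIndex * pageSize;        // encapsulated here, not in the consumer

    return dbContext.Customers
                    .OrderBy(c => c.Id)          // paging needs a stable ordering
                    .Skip(skipCount)
                    .Take(pageSize)
                    .ToList();
}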
However, that being said, in your situation your client is your service. So I suppose the question really comes down to a separation of concerns. Where is it better to perform such validation logic? Well that answer may very well be "in the service". But what about the answer to "Where is it better to contain query logic?". To me the answer is clearly "The Repository". That is its intended area of expertise.