If I have declared an entity relationship in my model as virtual, then there is no need to use the Include statement in my LINQ query, right?
For example, this is my model class:
public class Brand
{
    public int BrandID { get; set; }
    public string BrandName { get; set; }
    public string BrandDesc { get; set; }
    public string BrandUrl { get; set; }
    public virtual ICollection<Product> Products { get; set; }
}
Now, for the above model class, I don't need to use var brandsAndProduct = pe.Brands.Include("Products").Single(brand => brand.BrandID == 22);.
Instead, I can just use the simpler var brandsAndProduct = pe.Brands.Where(brand => brand.BrandID == 22); and the related entity will automatically be available when accessed.
Am I correct in my understanding?
Also, please tell me in which situations I should prefer one over the other.
You are correct, but the rules that make it really work as expected are more complex. If you declare your navigation property as virtual, EF will at runtime create a new class (a dynamic proxy) derived from your Brand class and use it instead. This dynamically created class contains the logic to load the navigation property when it is accessed for the first time. This feature is called lazy loading (or, better, transparent lazy loading).
What rules must be met to make this work:
All navigation properties in the class must be virtual.
Dynamic proxy creation must not be disabled (context.Configuration.ProxyCreationEnabled). It is enabled by default.
Lazy loading must not be disabled (context.Configuration.LazyLoadingEnabled). It is enabled by default.
The entity must be attached to the context (the default if you load the entity from the database) and the context must not be disposed: lazy loading works only within the scope of the living context that was used to load the entity from the database (or to which the proxied entity was attached).
The opposite of lazy loading is called eager loading, and that is what Include does. If you use Include, your navigation property is loaded together with the main entity.
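A minimal sketch of the difference, using the pe context from the question:

// Eager loading: the Brand and its Products come back in a single query.
var brandEager = pe.Brands.Include("Products").Single(b => b.BrandID == 22);

// Lazy loading: only the Brand is queried here...
var brandLazy = pe.Brands.Single(b => b.BrandID == 22);
// ...and the Products collection is loaded by a second query the first time
// it is accessed (the context must still be alive at that point).
var products = brandLazy.Products.ToList();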
Whether to use lazy loading or eager loading depends on your needs and on performance. Include loads all data in a single database query, but it can produce a huge result set when you use a lot of includes or load a lot of entities. If you are sure that you will need the Brand and all of its Products for processing, you should use eager loading.
Lazy loading, in turn, is used when you are not sure which navigation property you will need. For example, if you load 100 brands but only need to access the products of one brand, there is no need to load products for all brands in the initial query. The disadvantage of lazy loading is a separate query (database roundtrip) for each navigation property: if you load 100 brands without Include and then access the Products property on each Brand instance, your code will generate another 100 queries to populate those navigation properties. Eager loading would use just a single query, whereas lazy loading here uses 101 queries (this is called the N + 1 problem).
In more complex scenarios you may find that neither of these strategies performs as you need; in that case you can use a third strategy called explicit loading, or run separate queries to load the brands and then the products for all the brands you need.
Explicit loading has similar disadvantages to lazy loading, but you must trigger it manually:
context.Entry(brand).Collection(b => b.Products).Load();
The main advantage of explicit loading is the ability to filter the relation. You can call Query() before Load() and apply any filtering, or even eager loading of nested relations.
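For instance, a filtered explicit load might look like the sketch below (the InStock property is purely illustrative and not part of the question's model):

// Load() here is the IQueryable extension from the System.Data.Entity namespace.
context.Entry(brand)
       .Collection(b => b.Products)
       .Query()
       .Where(p => p.InStock)   // hypothetical filter, for illustration only
       .Load();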
So I know this is possible using a superclass; however, that is very limiting in flexibility. So my question is: can I use an interface instead? Something like:
interface Taggable {
    /* Adds tag(s) and returns a list of currently set tags */
    List<String> addTags(String... tag)
    /* Removes tag(s) and returns a list of currently set tags */
    List<String> removeTags(String... tag)
}

class User implements Taggable {
    String username
    static hasMany = [tags: Tag]
}

class Tag {
    String name
    static hasMany = [references: Taggable]
    static belongsTo = Taggable
    static constraints = {
        name(nullable: false, blank: false, unique: true)
    }
}
I'm interested in a reference back to the object that has a given tag. This object, however, can't extend a concrete class. That's why I'm wondering whether this can be done with an interface instead.
So, can it be done?
Hibernate can map an interface - see the example. I doubt that Grails supports this in its by-convention mapping, but you can try using the mapping annotations from the example above, or XML config.
edit: answering a comment question:
On the database level, you have to have a Taggable table for Tag.references to reference with a foreign key.
A discriminator will NOT defeat polymorphism if it's added automatically - for instance, in table-per-hierarchy mapping, Hibernate/GORM adds a class field in order to determine the concrete class when reading an object from the db.
If you map your Taggables to two tables - the Taggable part to Taggable and everything else to a specific table, referenced 1:1 - all the discriminator work should be done for you by Hibernate.
BTW, the class field is pretty long - it contains the fully qualified class name.
edit 2:
Either way, it's getting pretty complex, and I'd personally go with the approach I suggested in another question:
dynamically query all the classes implementing the Taggable interface for the hasMany = [tags: Tag] property;
or, less preferably, have a hand-crafted child table and a discriminator.
I am working on a project following the suggested repository pattern in Steven Sanderson's excellent book "Pro ASP.NET MVC 2 Framework".
Take the following example: I have a table for "Products" and one for "Images". Each has its own repository that creates a new DataContext in its constructor. Now, I want to establish a many-to-many relationship between the two entities called "ImagesForProducts".
Should I create a separate repository for the ImagesForProducts entities? If so, how can I share the DataContext between all the entities? In that case I would have to instantiate my ProductController with two repositories (for Products and for ImagesForProducts), right?
I'd rather access the images using my product instances, so that I can write myProduct.AddImage(img). But how can I persist the relation in the database using the ProductRepository?
As you can see, I am not sure about the overall architecture and would highly appreciate a basic code example.
Thanks in advance!
After some careful research and consideration, I decided to let the repositories handle image attachments instead of the product instances (mostly because the instances shouldn't deal with any database related stuff).
I already have an ImagesForProducts entity because I am using LINQ to SQL mapping. I therefore added a Table of that type to my product repository, which I can initialize with the current DataContext of the product repository. That way, both tables always use the shared DataContext and I can simply implement an "AttachImageToProduct" method like this:
public class MsSqlProductsRepository : MsSqlRepository<Product>, IProductsRepository
{
    protected Table<ImagesForProducts> imageRelationsTable { get; set; }

    public MsSqlProductsRepository(string connectionString)
        : base(connectionString)
    {
        imageRelationsTable = DataContext.GetTable<ImagesForProducts>();
    }

    public void AttachImageToProduct(Image image, Product product)
    {
        // Use FirstOrDefault so an empty result doesn't throw; skip if the relation already exists.
        if (imageRelationsTable.FirstOrDefault(r => r.ImageId == image.Id && r.ProductId == product.Id) != null)
            return;

        ImagesForProducts rel = new ImagesForProducts();
        rel.ImageId = image.Id;
        rel.ProductId = product.Id;

        imageRelationsTable.InsertOnSubmit(rel);
        entitiesTable.Context.SubmitChanges();
    }
}
Do you have any general concerns about this solution?
The repository pattern should be used to represent an in-memory store of your domain objects. Since you want your domain model to be ignorant of the persistence internals and to have everything designed around aggregate roots, it does not make sense to have an ImagesForProducts entity, and thus it doesn't make sense to have a separate repository for ImagesForProducts entities.
First of all, I would recommend building your domain model with POCO objects that can be used in any persistence scenario (LINQ to SQL, EF, stored procedures, ...).
You should have only two repositories (ProductRepository and ImageRepository) and resolve the many-to-many relation as "relational" properties on both domain objects. For example, you can add an Image collection to the Product domain object and a Product collection to the Image domain object. Once you have built your POCO objects, you can handle the mapping to the specific persistence store inside your repositories (preferably in the constructors).
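A minimal sketch of such POCOs (the property names are illustrative, not taken from the question):

public class Product
{
    public Product()
    {
        Images = new List<Image>();
    }

    public int Id { get; set; }
    public string Name { get; set; }

    // The many-to-many relation resolved as a plain collection on the domain object.
    public ICollection<Image> Images { get; set; }
}

public class Image
{
    public Image()
    {
        Products = new List<Product>();
    }

    public int Id { get; set; }
    public string Url { get; set; }
    public ICollection<Product> Products { get; set; }
}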
Once you implement the plumbing, you can add an image to the product:
product.Images.Add(image);
Then you can call your repository like this:
productRepository.Add(product);
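How the repository persists the relation is then an implementation detail. A hedged sketch of what the mapping back to LINQ to SQL could look like (the DataContext, entity and property names here are assumptions, not code from the question):

public class ProductRepository
{
    private readonly ProductsDataContext _context;   // assumed generated DataContext type

    public ProductRepository(ProductsDataContext context)
    {
        _context = context;
    }

    public void Add(Product product)
    {
        // Map the POCO to the generated LINQ to SQL entity and insert it first,
        // so the identity ProductId is assigned by the database.
        var productEntity = new ProductEntity { Name = product.Name };
        _context.GetTable<ProductEntity>().InsertOnSubmit(productEntity);
        _context.SubmitChanges();

        // Persist one link row per attached image for the many-to-many relation.
        foreach (var image in product.Images)
        {
            _context.GetTable<ImagesForProducts>().InsertOnSubmit(new ImagesForProducts
            {
                ProductId = productEntity.ProductId,
                ImageId = image.Id
            });
        }
        _context.SubmitChanges();
    }
}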
So, I'm developing some software, and trying to keep myself using TDD and other best practices.
I'm trying to write tests to define the classes and repository.
Let's say I have the classes, Customer, Order, OrderLine.
Now, do I create the Order class as something like
abstract class Entity
{
    int ID { get; set; }
}

class Order : Entity
{
    Customer Customer { get; set; }
    List<OrderLine> OrderLines { get; set; }
}
This will serialize nicely but, if I don't care about the OrderLines or the Customer details, it is not as lightweight as one would like. Or do I just store the IDs of related items and add a function for getting them?
class Order : Entity
{
    int CustomerID { get; set; }
    List<OrderLine> GetOrderLines() { }
}

class OrderLine : Entity
{
    int OrderID { get; set; }
}
And how would you structure the repository for something like this?
Do I use an abstract CRUD repository with methods GetByID(int), Save(entity) and Delete(entity) that each item's repository inherits from and adds its own specific methods to, something like this?
public abstract class RepositoryBase<T, TID> : IRepository<T, TID> where T : AEntity<TID>
{
    // Shared in-memory store; initialize it only once so that constructing a
    // new repository doesn't wipe entities saved through another instance.
    private static List<T> Entities { get; set; }

    public RepositoryBase()
    {
        if (Entities == null)
        {
            Entities = new List<T>();
        }
    }

    public T GetByID(TID id)
    {
        return Entities.SingleOrDefault(x => x.Id.Equals(id));
    }

    public T Save(T entity)
    {
        // Upsert: remove any entity with the same ID, then add the new version.
        Entities.RemoveAll(x => x.Id.Equals(entity.Id));
        Entities.Add(entity);
        return entity;
    }

    public T Delete(T entity)
    {
        Entities.RemoveAll(x => x.Id.Equals(entity.Id));
        return entity;
    }
}
What's the 'best practice' here?
Entities
Let's start with the Order entity. An order is an autonomous object, which isn't dependent on a 'parent' object. In domain-driven design this is called an aggregate root; it is the root of the entire order aggregate. The order aggregate consists of the root and several child entities, which are the OrderLine entities in this case.
The aggregate root is responsible for managing the entire aggregate, including the lifetime of the child entities. Other components are not allowed to access the child entities; all changes to the aggregate must go through the root. Also, if the root ceases to exist, so do the children, i.e. order lines cannot exist without a parent order.
The Customer is also an aggregate root. It isn't part of an order, it's only related to an order. If an order ceases to exist, the customer doesn't. And the other way around, if a customer ceases to exist, you'll want to keep the orders for bookkeeping purposes. Because Customer is only related, you'll want to have just the CustomerId in the order.
class Order
{
    int OrderId { get; }
    int CustomerId { get; set; }
    IEnumerable<OrderLine> OrderLines { get; private set; }
}
Repositories
The OrderRepository is responsible for loading the entire Order aggregate, or parts of it, depending on the requirements. It is not responsible for loading the customer. If you need the customer, load it from the CustomerRepository, using the CustomerId from the order.
class OrderRepository
{
    Order GetById(int orderId)
    {
        // implementation details
    }

    Order GetById(int orderId, OrderLoadOptions loadOptions)
    {
        // implementation details
    }
}

enum OrderLoadOptions
{
    All,
    ExcludeOrderLines,
    // other options
}
If you ever need to load the order lines afterwards, you should use the tell, don't ask principle. Tell the order to load its order lines, and which repository to use. The order will then tell the repository the information it needs to know.
class Order
{
    int OrderId { get; }
    int CustomerId { get; set; }
    IEnumerable<OrderLine> OrderLines { get; private set; }

    void LoadOrderLines(IOrderRepository orderRepository)
    {
        // simplified implementation
        this.OrderLines = orderRepository.GetOrderLines(this.OrderId);
    }
}
Note that the code uses an IOrderRepository to retrieve the order lines, rather than a separate repository for order lines. Domain-driven design states that there should be a repository for each aggregate root. Methods for retrieving child entities belong in the repository of the root and should only be accessed by the root.
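The snippets above imply a repository interface roughly like the following (a sketch; the exact signatures aren't spelled out in the answer):

interface IOrderRepository
{
    Order GetById(int orderId);
    Order GetById(int orderId, OrderLoadOptions loadOptions);

    // Child entities of the Order aggregate are retrieved through the root's repository.
    IEnumerable<OrderLine> GetOrderLines(int orderId);
}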
Abstract/base repositories
I have written abstract repositories with CRUD operations myself, but I found that it didn't add any value. Abstraction is useful when you want to pass instances of subclasses around in your code. But what kind of code will accept any BaseRepository implementation as a parameter?
Also, the CRUD operations can differ per entity, making a base implementation useless. Do you really want to delete an order, or just set its status to deleted? If you delete a customer, what will happen to the related orders?
My advice is to keep things simple. Stay away from abstraction and generic base classes. Sure, all repositories share some kind of functionality and generics look cool. But do you actually need it?
I would divide my project up into the relevant parts: Data Transfer Objects (DTOs) and Data Access Objects (DAOs). I would want the DTOs to be as simple as possible; terms like POJO (Plain Old Java Object) and POCO (Plain Old CLR Object) are used here. Simply put, they are container objects with very little, if any, functionality built into them.
The DTOs are basically the building blocks of the whole application and will marry up the layers. For every object that is modeled in the system, there should be at least one DTO. How you then put these into collections is entirely up to the design of the application. Obviously there are natural one-to-many relationships floating around, such as a Customer having many Orders. But the fundamentals of these objects are what they are. For example, an order has a relationship with a customer but can also stand alone, so it needs to be separate from the customer object. All many-to-many relationships should be resolved down into one-to-many relationships, which is easy when dealing with nested classes.
Presumably there should be CRUD objects within the Data Access Objects category. This is where it gets tricky, as you have to manage all the relationships discovered during design and the lifetime model of each. When fetching DTOs back from the DAO, the loading options are essential, as they can mean the difference between your system running like a dog from over-eager loading and high network traffic from shuttling data back and forth between your application and the store with lazy loading.
I won't go into flags and loading options as others here have done all that.
class OrderDAO
{
    public OrderDTO Create(IOrderDTO order)
    {
        // Code here that will create the actual order and store it, updating
        // the fields in the OrderDTO where necessary, one being the GUID field
        // of the new ID. I stress GUID as this means better scalability.
        return (OrderDTO)order;
    }
}
As you can see, the OrderDTO is passed into the Create method.
For the Create method, when dealing with brand new nested objects, there will have to be some code marrying up data that has already been stored (for example a customer with old orders) with the new order. The system will have to deal with the fact that some of the operations are updates, whilst others are creates.
However, one piece of the puzzle that is always missed is that of multi-user environments, where DTOs (plain objects) are disconnected from the application and returned to the DAO for CRUD. This usually involves some concurrency control, which can be nasty and get complicated. A simple mechanism such as a DateTime stamp or a version number works here, although when doing CRUD on a nested object you must define the rules on what gets updated and in what order, and also decide, if an update fails the concurrency check, whether you fail the whole operation or only part of it.
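A toy sketch of such a version-number check (the OrderDTO.Version property and the storage helpers LoadRecord/SaveRecord are hypothetical, shown only to illustrate the idea):

public void Update(OrderDTO order)
{
    // Hypothetical: load the currently stored record and compare versions first.
    OrderRecord stored = LoadRecord(order.Id);
    if (stored.Version != order.Version)
        throw new InvalidOperationException("Concurrency conflict: the order was changed by another user.");

    stored.Status = order.Status;
    stored.Version++;   // bump the version so later writers detect this change
    SaveRecord(stored);
}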
Why not create separate Order classes? It sounds to me like you're describing a base Order object, which would contain the basic order and customer information (or maybe not even the customer information), and a separate Order object that has line items in it.
In the past, I've done as Niels suggested, and either used boolean flags or enums to describe optionally loading child objects, lists, etc. In Clean Code, Uncle Bob says that these variables and function parameters are excuses that programmers use to not refactor a class or function into smaller, easier to digest pieces.
As for your class design, I'd say that it depends. I assume that an Order could exist without any OrderLines, but could not exist without a Customer (or at least a way to reference the customer, like Niels suggested). If this is the case, why not create a base Order class and a second FullOrder class. Only FullOrder would contain the list of OrderLines. Following that thought, I'd create separate repositories to handle CRUD operations for Order and FullOrder.
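A rough sketch of that split, reusing the Entity base class from the question (purely illustrative):

class Order : Entity
{
    int CustomerID { get; set; }
}

class FullOrder : Order
{
    List<OrderLine> OrderLines { get; set; }
}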
If you are interested in domain driven design (DDD) implementation with POCOs along with explanations take a look at the following 2 posts:
http://devtalk.dk/2009/06/09/Entity+Framework+40+Beta+1+POCO+ObjectSet+Repository+And+UnitOfWork.aspx
http://www.primaryobjects.com/CMS/Article122.aspx
There is also a project called NCommon that implements domain-driven patterns (repository, unit of work, etc.) for various persistence frameworks (NHibernate, Entity Framework, etc.).
If it's important to keep data access 'away' from the business and presentation layers, what alternatives or approaches can I take so that my LINQ to SQL entities stay in the data access layer?
So far I seem to be simply duplicating the classes produced by sqlmetal and passing those objects around instead, simply to keep the two layers apart.
For example, I have a table in my DB called Books. If a user is creating a new book via the UI, the Book class generated by sqlmetal seems like a perfect fit although I'm tightly coupling my design by doing so.
What I do is have all my data access (LINQ to SQL in your case) in one project and then have another business project which uses the DataAccess project, thereby segregating the DataAccess project from the UI layer.
In your example for books, my business layer would have a class called Book:
public class Book
{
    private IAuthorRepository _authorRepository = new LinqToSqlAuthorRepository();
    private IBookRepository _bookRepository = new LinqToSqlBookRepository();

    public int BookId { get { return _bookId; } }
    private int _bookId;

    public virtual string BookName { get; set; }
    public virtual string ISBN { get; set; }
    // ...other properties

    public Book()
    {
        // When creating a new book
        _bookId = 0;
    }

    public Book(int id)
    {
        // For an existing book
        _bookId = id;
        Load();
    }

    protected void Load()
    {
        BookEntity book = _bookRepository.GetBook(BookId);
        BookName = book.BookName;
        ISBN = book.ISBN;
    }

    public void Save()
    {
        BookEntity book = MapEntityFromThisClass();
        _bookRepository.Save(book);
    }

    public Author GetAuthor()
    {
        return _authorRepository.GetAuthor();
    }
}
This then means that your UI is totally separated from the actual data access and that all of your Book logic is contained sensibly within a class.
You can separate this further by using IoC with a system such as Microsoft Unity or Castle, so that you don't have to write = new LinqToSqlXYZ(); and can instead write something along the lines of IoC.Resolve<IBookRepository>(); (depending on your implementation). This then means your Book class is not tied down to LINQ to SQL anymore either.
LINQ to SQL offers a 1:1 mapping between entities and your database tables. It could be argued that the entities themselves are a level of abstraction away from the database, and that is what you are tied to.
If you are making a 1:1 duplication of the entities offered up by LINQ to SQL, it may not be worth having them there, because you are still just as tied to those classes as you are to the entities offered by LINQ to SQL.
By creating another layer, you are also eliminating the benefit of the change tracking provided by LINQ to SQL, meaning you have to copy any changes from your classes into the entities provided by LINQ to SQL to perform data operations.
If you would like to abstract the DataContext-style code away from the presentation and business layers, and control the interface to your data more tightly, then the repository pattern is a good fit. You can always have your repository return the entity types created by LINQ to SQL, which means you are not duplicating types and you still get change tracking, while the code that controls the DataContext stays inside the repository.
You may consider projecting the data into a different class for the benefit of your presentation (a view model) or business logic. This is the route I tend to go down if I want to use LINQ to SQL but don't want a 1:1 mapping between the entities and my view models.
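A hedged sketch of that projection approach, with the repository owning the DataContext (the context, table and view model names are assumptions, not from the question):

public class BookViewModel
{
    public string BookName { get; set; }
    public string ISBN { get; set; }
}

public class BookRepository
{
    private readonly BooksDataContext _context;   // generated by sqlmetal (assumed name)

    public BookRepository(string connectionString)
    {
        _context = new BooksDataContext(connectionString);
    }

    public IList<BookViewModel> GetBooks()
    {
        // Project the LINQ to SQL entities into a simple view model so the
        // presentation layer never sees the generated entity types.
        return _context.Books
                       .Select(b => new BookViewModel { BookName = b.BookName, ISBN = b.ISBN })
                       .ToList();
    }
}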