In SQL, I have a 1:1 relationship defined between two tables which are linked by two mapping tables (four tables in total). I have no influence on the database schema.
I'd like to reflect this in my Code First model so that I can say Foo.Bar and Bar.Foo rather than Foo.Mapping1.Mapping2.Bar (or similar). Is this possible using the Fluent API? I know you can specify a many-to-many relationship using the designer, which results in Foo.Bars and Bar.Foos, so hopefully this is possible.
I don't know whether you can map it with the Fluent API, but you can create an extension class with an extension method that handles the mapping, like so:
public static class FooExtension
{
    public static Bar Bar(this Foo foo)
    {
        // Walk the two mapping entities to reach the related Bar
        return foo.Mapping1.Mapping2.Bar;
    }
}
Then you would call the extension method:
var foosBar = foo.Bar();
I'm new to MVC 3 and Entity Framework so I'd like to know what is the best approach.
Basically, I have three entities: Course, Module and Chapter. Each is a parent of the next with a one-to-many relationship (a Course has many Modules and a Module has many Chapters). Modules and Chapters have a SortOrder column so they can be ordered sequentially.
My idea is to use partial views for the child entities when updating the parent.
I have 3 views in mind:
Create/Update Course: all basic details for a course
Course Modules (basically a different view for Update Course) which has an option to add multiple partial views, each creating a Module
Course Timeline (another view for updating a Course) which lists all Modules (in separate divs) and has the option to add multiple partial views, each creating a Chapter
Does my plan sound right and plausible? I plan to use hidden fields to store IDs. I also want the saves to occur asynchronously.
Any advice or information would be highly appreciated. Thanks!
I think this is what you're after, but I'm not sure. You can handle persistence of child/grandchild entities in several ways. One option is to perform CRUD operations on each entity separately; that would involve, for example, saving the modules by themselves with a reference to the course, probably CourseId.
Or you can save just the aggregate root, which in this case looks like your Course entity. That involves loading the course, populating the modules on the course, and populating the chapters for each module. Then when you call db.Courses.Add(newCourse) followed by db.SaveChanges(), all the entities will be persisted. You have to make sure your foreign key and model references are set up correctly.
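For instance, here is a rough sketch of persisting a brand-new course graph in one call (the Title property and the collection initializers are assumptions about your model):
var chapter = new Chapter { Title = "Chapter 1", SortOrder = 1 };
var module = new Module { Title = "Module 1", SortOrder = 1, Chapters = new List<Chapter> { chapter } };
var newCourse = new Course { Title = "New course", Modules = new List<Module> { module } };

using (var db = new MyDbContext())
{
    // Adding the aggregate root persists the whole graph: modules and chapters included
    db.Courses.Add(newCourse);
    db.SaveChanges();
}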
For example, to add a child entity to an existing course:
public ActionResult DoSomething(int courseId, Module newModule)
{
    using (var db = new MyDbContext())
    {
        // Load the course through the same context so EF tracks the newly added module
        var course = db.Courses.Find(courseId);
        course.Modules.Add(newModule);
        db.SaveChanges();
    }
    return RedirectToAction("Success");
}
Or you can save individually:
public ActionResult DoSomething(Module newModule)
{
    using (var db = new MyDbContext())
    {
        // You will need to make sure newModule.CourseId is set correctly
        db.Modules.Add(newModule);
        db.SaveChanges();
    }
    return RedirectToAction("Success");
}
Depending on your views, you will be able to judge which way is best. Regarding asynchronous saving, you can call these endpoints with jQuery, posting the models as JSON. On a side note, one thing to look at is creating a custom anti-forgery token validator for JSON requests (example).
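As a rough sketch, a JSON-friendly endpoint for such asynchronous saves could look like this (the action name, the Id property and the response shape are illustrative assumptions):
[HttpPost]
public JsonResult AddModule(Module newModule)
{
    using (var db = new MyDbContext())
    {
        db.Modules.Add(newModule);
        db.SaveChanges();
    }
    // Return the generated key so the client can store it in its hidden fields
    return Json(new { success = true, moduleId = newModule.Id });
}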
I am working on a project following the suggested repository pattern in Steven Sanderson's excellent book "Pro ASP.NET MVC 2 Framework".
Take the following example: I have a table for "Products" and one for "Images". Each has its own repository that creates a new DataContext in its constructor. Now I want to establish a many-to-many relationship between the two entities called "ImagesForProducts".
Should I create a separate repository for the ImagesForProducts entities? If so, how can I share the DataContext between all the entities? In that case I have to instantiate my ProductController with two repositories (for Products and for ImagesForProducts), right?
I'd rather access the images using my product instances, so that I can write myProduct.AddImage(img). But how can I persist the relation in the database using the ProductRepository?
As you can see, I am not sure about the overall architecture and would highly appreciate a basic code example.
Thanks in advance!
After some careful research and consideration, I decided to let the repositories handle image attachments instead of the product instances (mostly because the instances shouldn't deal with any database-related concerns).
I already have an ImagesForProducts entity because I am using LINQ to SQL mapping. I therefore added a Table<ImagesForProducts> to my product repository, which I initialize with the product repository's current DataContext. That way, both tables always share the same DataContext and I can simply implement an "AttachImageToProduct" method like this:
public class MsSqlProductsRepository : MsSqlRepository<Product>, IProductsRepository
{
    protected Table<ImagesForProducts> imageRelationsTable { get; set; }

    public MsSqlProductsRepository(string connectionString)
        : base(connectionString)
    {
        imageRelationsTable = DataContext.GetTable<ImagesForProducts>();
    }

    public void AttachImageToProduct(Image image, Product product)
    {
        // FirstOrDefault returns null when no relation exists; First would throw
        if (imageRelationsTable.FirstOrDefault(r => r.ImageId == image.Id && r.ProductId == product.Id) != null)
            return;

        ImagesForProducts rel = new ImagesForProducts();
        rel.ImageId = image.Id;
        rel.ProductId = product.Id;

        imageRelationsTable.InsertOnSubmit(rel);
        entitiesTable.Context.SubmitChanges();
    }
}
Do you have any general concerns about this solution?
The repository pattern should be used to represent an in-memory store for your domain objects. Since you want your domain model to be ignorant of the persistence internals and have everything designed around aggregate roots, it does not make sense to have an ImagesForProducts entity, and therefore it does not make sense to have a separate repository for ImagesForProducts entities.
First of all, I would recommend building your domain model with POCO objects that can be used in any persistence scenario (LINQ to SQL, EF, stored procedures...).
You should have only two repositories (ProductRepository and ImageRepository) and resolve the many-to-many relation as "relational" properties on both domain objects. For example, you can add an Image collection to the Product domain object and a Product collection to the Image domain object. Once you build your POCO objects, you can handle the mappings to the specific persistence store inside your repositories (preferably in the constructor).
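A minimal sketch of what such persistence-ignorant POCOs could look like (the property names are purely illustrative):
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Image> Images { get; set; }

    public Product()
    {
        // Initialize the navigation collection so callers can add images right away
        Images = new List<Image>();
    }
}

public class Image
{
    public int Id { get; set; }
    public string Url { get; set; }
    public ICollection<Product> Products { get; set; }

    public Image()
    {
        Products = new List<Product>();
    }
}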
Once you implement the plumbing, you can add an image to the product:
product.Images.Add(image);
Then you can call your repository like this:
productRepository.Add(product);
I am porting an existing application from Linq to SQL to Entity Framework 4 (default code generation).
One difference I noticed between the two is that a foreign key property is not updated when the object reference is set. Now I need to decide how to deal with this.
For example, suppose you have two entity types, Company and Employee, where one Company has many Employees.
In Linq To SQL, setting the company also sets the company id:
var company = new Company { ID = 1 };
var employee = new Employee();
Debug.Assert(employee.CompanyID == 0);
employee.Company = company;
Debug.Assert(employee.CompanyID == 1); // Works fine!
In Entity Framework (and without using any code template customization) this does not work:
var company = new Company { ID = 1 };
var employee = new Employee();
Debug.Assert(employee.CompanyID == 0);
employee.Company = company;
Debug.Assert(employee.CompanyID == 1); // Fails, since CompanyID was not updated!
How can I make EF behave the same way as LINQ to SQL? I had a look at the default code generation T4 template, but I could not figure out how to make the necessary changes. It seems like a one-liner should do the trick, but I don't see how to get at the ID property for a given reference.
From what I can see in the default T4 template, the foreign key properties of entities are not directly linked to the entity reference associated with the key.
There are a couple of approaches to your issue regarding the migration from LINQ to SQL to EF4. One of them is to subscribe to the AssociationChanged event of your associations so that the foreign key field is updated automatically. In your context, that could look something like this:
// Extends the generated Employee entity
public partial class Employee
{
    private void CompanyChanged(object sender, CollectionChangeEventArgs e)
    {
        // Apply reactive changes: keep the scalar CompanyID in sync with the reference
        if (e.Action == CollectionChangeAction.Add)
        {
            CompanyID = ((Company)e.Element).ID;
        }
    }

    // A default constructor that registers the event handler
    public Employee()
    {
        this.CompanyReference.AssociationChanged += CompanyChanged;
    }
}
Personally, if you want to limit the maintenance this sort of logic requires, I'd suggest changing your T4 template (either change it yourself or find one) so that it sets CompanyID whenever Company is changed, as shown in the edit below.
Gil Fink wrote a pretty good introduction to T4 templates with EF4, and Scott Hanselman has collected a good set of useful links and resources for working with T4 templates.
On a last note, unless I'm mistaken, accessing foreign keys directly as properties of an entity is new going from EF 3.5 to EF4. In 3.5, the only way you could access the key was through the associated entity (Employee.Company.CompanyID). I believe the feature was added in EF4 so that you didn't have to load associations (using Include) in order to get the foreign key when selecting from the data store.
Perhaps EF's take on this would be: if you have the association, go through the association to get the ID, first and foremost. But that's just speculation, as I have no quotes to back it up.
[EDIT 2010-06-16]:
After a quick read-through and analysis of the EDMX XML elements, I found one called ReferentialConstraint, which appears to contain the foreign key fields for a specific FK relation.
Here's the snippet to modify inside a default EDMX T4 template, in the Write Navigation Properties section (Template_RegionNavigationProperties), around line 388 of an unmodified template. Try to ignore the horrible formatting...
<#=code.SpaceAfter(NewModifier(navProperty))#><#=Accessibility.ForProperty(navProperty)#> <#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#> <#=code.Escape(navProperty)#>
{
    <#=code.SpaceAfter(Accessibility.ForGetter(navProperty))#>get
    {
        return ((IEntityWithRelationships)this).RelationshipManager.GetRelatedReference<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>>("<#=navProperty.RelationshipType.FullName#>", "<#=navProperty.ToEndMember.Name#>").Value;
    }
    <#=code.SpaceAfter(Accessibility.ForSetter(navProperty))#>set
    {
        // edit begins here
        if (value != null)
        {
            // Automatically sets the foreign key attributes according to linked entity
<#
    AssociationType association = GetSourceSchemaTypes<AssociationType>().FirstOrDefault(_ => _.FullName == navProperty.RelationshipType.FullName);
    foreach (var cons in association.ReferentialConstraints)
    {
        foreach (var metadataProperty in cons.FromProperties)
        {
#>
            this.<#=metadataProperty.Name#> = value.<#=metadataProperty.Name#>;
            //this._<#=metadataProperty.Name#> = value._<#=metadataProperty.Name#>; // use private field to bypass the OnChanged events, property validation and the likes..
<#
        }
    }
#>
        }
        else
        {
            // what usually happens in LINQ to SQL when an association is set to null
            // goes here
        }
        // edit ends here
        ((IEntityWithRelationships)this).RelationshipManager.GetRelatedReference<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>>("<#=navProperty.RelationshipType.FullName#>", "<#=navProperty.ToEndMember.Name#>").Value = value;
    }
}
I tested it roughly, but it's a given that some validation and such is missing. Perhaps it can point you towards a solution regardless.
Thanks for this solution. I've enhanced it (it no longer depends on specific naming conventions) and included it in a fix that also addresses another issue with the Entity Framework template.
Check here for my solution and the fixed code generation template.
If it's important to keep data access 'away' from business and presentation layers, what alternatives or approaches can I take so that my LINQ to SQL entities can stay in the data access layer?
So far I seem to be simply duplicating the classes produced by SqlMetal and passing those objects around instead, simply to keep the two layers apart.
For example, I have a table in my DB called Books. If a user is creating a new book via the UI, the Book class generated by SqlMetal seems like a perfect fit, although I'm tightly coupling my design by doing so.
What I do is keep all my data access (LINQ to SQL in your case) in one project and have a separate business project that uses the data access project, thereby segregating the data access project from the UI layer.
In your example for books, my business layer would have a class called Book:
public class Book
{
    private IAuthorRepository _authorRepository = new LinqToSqlAuthorRepository();
    private IBookRepository _bookRepository = new LinqToSqlBookRepository();

    public int BookId { get { return _bookId; } }
    private int _bookId;

    public virtual string BookName { get; set; }
    public virtual string ISBN { get; set; }
    // ...other properties

    public Book()
    {
        // When creating a new book
        _bookId = 0;
    }

    public Book(int id)
    {
        // For an existing book
        _bookId = id;
        Load();
    }

    protected void Load()
    {
        BookEntity book = _bookRepository.GetBook(BookId);
        BookName = book.BookName;
        ISBN = book.ISBN;
    }

    public void Save()
    {
        // Map this class's properties onto the data-layer entity before saving
        BookEntity book = MapEntityFromThisClass();
        _bookRepository.Save(book);
    }

    public Author GetAuthor()
    {
        return _authorRepository.GetAuthor();
    }
}
This then means that your UI is totally separated from the actual data access and that all of your Book logic is contained sensibly within a class.
You can separate this further by using IoC with a container such as Microsoft Unity or Castle Windsor, so that you don't have to write = new LinqToSqlXYZ(); and can instead write something along the lines of IoC.Resolve<IBookRepository>(); (depending on your implementation). Your Book class is then no longer tied to LINQ to SQL either.
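As a rough sketch, assuming constructor injection rather than a static locator (the interface names follow the example above):
public class Book
{
    private readonly IAuthorRepository _authorRepository;
    private readonly IBookRepository _bookRepository;

    // The container (Unity, Castle Windsor, ...) supplies the implementations,
    // so this class never references a LinqToSql* type directly
    public Book(IAuthorRepository authorRepository, IBookRepository bookRepository)
    {
        _authorRepository = authorRepository;
        _bookRepository = bookRepository;
    }

    // ...same properties, Load() and Save() as above
}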
LINQ to SQL offers a 1:1 mapping between entities and your database tables. It could be argued that the entities themselves are a level of abstraction away from the database, and that is what you are tied to.
If you are making a 1:1 duplication of the entities offered up by LINQ to SQL, it may not be worth having them there, because you are still just as tied to those classes as you are to the LINQ to SQL entities.
By creating another layer, you also lose the change tracking provided by LINQ to SQL, meaning you have to copy any changes from your classes onto the LINQ to SQL entities in order to perform data operations.
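For example, a save on such a duplicated class typically ends up doing something like this (BookModel, the property names and the dataContext member are illustrative assumptions):
public void Save(BookModel model)
{
    // Re-fetch the tracked LINQ to SQL entity and copy the changes back by hand,
    // since the duplicated class has no change tracking of its own
    var entity = dataContext.Books.Single(b => b.BookId == model.BookId);
    entity.BookName = model.BookName;
    entity.ISBN = model.ISBN;
    dataContext.SubmitChanges();
}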
If you would like to abstract the DataContext-type code away from the presentation and business layers, and control the interface to your data more tightly, then the repository pattern is a good fit. You can always have your repository return the entity types created by LINQ to SQL, which means you are not duplicating types and you keep change tracking, while the code that controls the DataContext stays inside the repository.
You may also consider projecting the data into a different class for the benefit of your presentation (a view model) or business logic. This is the route I tend to go down if I want to use LINQ to SQL but don't want a 1:1 mapping between the entities and my view models.
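A rough sketch of such a projection inside a repository (BookSummary and the property names are purely illustrative):
public IEnumerable<BookSummary> GetBookSummaries()
{
    // Project the LINQ to SQL entities into a presentation-friendly class;
    // the query is translated to SQL and only the listed columns are selected
    return (from b in dataContext.Books
            select new BookSummary
            {
                Id = b.BookId,
                Title = b.BookName
            }).ToList();
}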
I'm trying to decide on the best pattern for data access in my MVC application.
Currently, having followed the MVC Storefront series, I am using repositories that expose IQueryable to a service layer, which then applies filters. Initially I have been using LINQ to SQL, e.g.:
public interface IMyRepository
{
    IQueryable<MyClass> GetAll();
}
Implemented in:
public class LINQtoSQLRepository : IMyRepository
{
    public IQueryable<MyClass> GetAll()
    {
        return from table in dbContext.table
               select new MyClass
               {
                   Field1 = table.field1,
                   // ... etc.
               };
    }
}
Filter for IDs:
public static class TableFilters
{
    public static MyClass WithID(this IQueryable<MyClass> qry, string id)
    {
        return (from t in qry
                where t.ID == id
                select t).SingleOrDefault();
    }
}
Called from service:
public class TableService
{
    public MyClass RecordsByID(string id)
    {
        return _repository.GetAll()
                          .WithID(id);
    }
}
I ran into a problem when I experimented with implementing the repository using Entity Framework with LINQ to Entities. The filters class in my project contains some operations more complex than the "WHERE ... == ..." in the example above, and I believe these require different implementations depending on the LINQ provider. Specifically, I need to perform a SQL "WHERE ... IN ..." clause. I am able to implement this in the filter class using:
string[] aParams = // array of IDs
qry = qry.Where(t => aParams.Contains(t.ID));
However, in order to make this work against Entity Framework, I need a solution such as BuildContainsExpression, which is tied to Entity Framework. That means I have to maintain two different implementations of this particular filter, depending on the underlying provider.
I'd appreciate any advice on how I should proceed from here.
It seemed to me that exposing an IQueryable from my repository would allow me to apply filters to it regardless of the underlying provider, enabling me to switch providers if and when required. However, the problem I describe above makes me think I should be performing all my filtering within the repositories and returning IEnumerable, IList or single classes.
Many thanks,
Matt
This is a very popular question, and one I constantly ask myself. I've always felt it best to return IEnumerable rather than IQueryable from a repository.
The purpose of a repository is to encapsulate the database infrastructure so the client need not worry about the data source. However, if you return IQueryable you are at the mercy of the consumer as to what kind of query will be run against your database, and whether they will do something the LINQ provider doesn't support.
Take paging for example. Let's say you have a Customer entity and your database could have hundreds of thousands of customers. Which code would you rather have your client write?
var customers = repos.GetCustomers().Skip(skipCount).Take(pageSize).ToList();
OR
var customers = repos.GetCustomers(pageIndex, pageSize);
In the first approach you make it impossible for the repository to restrict the number of records retrieved from the data source. Also, your consumer has to calculate the skipCount.
In the second approach you provide a more coarse-grained interface to your client. Now your repository can enforce constraints on pageSize in order to optimize the query, and it encapsulates the calculation of the skipCount.
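A minimal sketch of what that coarse-grained method might look like (MaxPageSize, the Customers set and the ordering column are assumptions):
private const int MaxPageSize = 100;

public IList<Customer> GetCustomers(int pageIndex, int pageSize)
{
    // The repository caps the page size and hides the skip calculation
    if (pageSize > MaxPageSize)
        pageSize = MaxPageSize;

    return dbContext.Customers
                    .OrderBy(c => c.Id)   // a stable order is needed before Skip/Take
                    .Skip(pageIndex * pageSize)
                    .Take(pageSize)
                    .ToList();
}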
However, that being said, in your situation your client is your service, so I suppose the question really comes down to separation of concerns. Where is it better to perform such validation logic? That answer may very well be "in the service". But what about "where is it better to contain query logic?" To me the answer is clearly the repository; that is its intended area of expertise.