How do I populate a Data Access Layer Model Efficiently? - linq-to-sql

I'm working on developing my first Data Driven Domain using Dependency Injection in ASP.NET.
In my Data Access Layer I have created some domain data models, for example:
public class Company {
    public Guid CompanyId { get; set; }
    public string Name { get; set; }
}

public class Employee {
    public Guid EmployeeId { get; set; }
    public Guid CompanyId { get; set; }
    public string Name { get; set; }
}
I have then developed an interface such as:
public interface ICompanyService {
    IEnumerable<Model.Company> GetCompanies();
    IEnumerable<Model.Employee> GetEmployees();
    IEnumerable<Model.Employee> GetEmployees(Guid companyId);
}
In a separate module I have implemented this interface using Linq to Sql:
public class CompanyService : ICompanyService {
    public IEnumerable<Model.Employee> GetEmployees()
    {
        return EmployeeDb
            .OrderBy(e => e.Name)
            .Select(e => e.ToDomainEntity())
            .AsEnumerable();
    }
}
Where ToDomainEntity() is implemented in the employee repository class as an extension method to the base entity class:
public Model.Employee ToDomainEntity()
{
    return new Model.Employee {
        EmployeeId = this.EmployeeId,
        CompanyId = this.CompanyId,
        Name = this.Name
    };
}
To this point, I have more or less followed the patterns as described in Mark Seeman's excellent book 'Dependency Injection in .NET' - and all works nicely.
I would like however to extend my basic models to also include key reference models, so the domain Employee class would become:
public class Employee {
    public Guid EmployeeId { get; set; }
    public Guid CompanyId { get; set; }
    public Company Company { get; set; }
    public string Name { get; set; }
}
and the ToDomainEntity() function would be extended to:
public Model.Employee ToDomainEntity()
{
    return new Model.Employee {
        EmployeeId = this.EmployeeId,
        CompanyId = this.CompanyId,
        Company = (this.Company == null) ? null : this.Company.ToDomainEntity(),
        Name = this.Name
    };
}
I suspect that this might be 'bad practice' from a domain modelling point of view, but the problem I have encountered would also, I think, hold true if I were to develop a specific View Model to achieve the same purpose.
In essence, the problem I have run into is the speed/efficiency of populating the data models. If I use the ToDomainEntity() approach described above, Linq to Sql creates a separate SQL call to retrieve the data for each Employee's Company record. This, as you would expect, increases the time taken to evaluate the SQL expression quite considerably (from around 100ms to 7 seconds on our test database), particularly if the data tree is complex (as separate SQL calls are made to populate each node/sub-node of the tree).
If I create the data model 'inline...
public IEnumerable<Model.Employee> GetEmployees()
{
    return EmployeeDb
        .OrderBy(e => e.Name)
        .Select(e => new Model.Employee {
            EmployeeId = e.EmployeeId,
            /* Other field mappings */
            Company = new Model.Company {
                CompanyId = e.Company.CompanyId,
                /* Other field mappings */
            }
        }).AsEnumerable();
}
Linq to SQL produces a nice, tight SQL statement that natively uses the 'inner join' method to associate the Company with the Employee.
I have two questions:
1) Is it considered 'bad practice' to reference associated data classes from within a domain class object?
2) If this is the case, and a specific View Model is created for the purpose, what is the right way of populating the model without having to resort to creating inline assignment blocks to build the expression tree?
Any help/advice would be much appreciated.

The problem is caused by having both data layer entities and domain layer entities and needing a mapping between the two. Although you can get this to work, this makes everything very complex, as you are already experiencing. You are making mappings between data and domain, and will soon add many more mappings for these same entities, because of performance reasons and because other business logic and presentation logic will need different data.
The only real solution is to ditch your data entities and create POCO model objects that can directly be serialized to your backend store (SQL server).
POCO entities have been supported in LINQ to SQL from day one, but I think it would be better to migrate to Entity Framework Code First.
When doing this, you can expose IQueryable<T> interfaces from your repositories (you currently call your repository ICompanyService, but a better name would be ICompanyRepository). This allows you to write efficient LINQ queries. When querying directly against a query provider you can avoid loading complete entities. For instance:
from employee in this.repository.GetEmployees()
where employee.Company.Name.StartsWith(searchString)
select new
{
    employee.Name,
    employee.Company.Location
};
When working with IQueryable<T>, LINQ to SQL and Entity Framework will translate this into a very efficient SQL query that only returns the employee name and company location from the database, with the filtering done inside the database (as opposed to filtering in your .NET application when GetEmployees() returns an IEnumerable<T>).
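For illustration, a minimal sketch of what such a repository contract might look like (the names ICompanyRepository, MyDataContext and its table properties are assumptions, not from the original post):

// Sketch: a repository that exposes IQueryable<T> so callers can compose
// queries that the LINQ provider translates into a single SQL statement.
public interface ICompanyRepository
{
    IQueryable<Model.Employee> GetEmployees();
    IQueryable<Model.Company> GetCompanies();
}

public class CompanyRepository : ICompanyRepository
{
    // Hypothetical LINQ to SQL / EF context; the name is illustrative only.
    private readonly MyDataContext db;

    public CompanyRepository(MyDataContext db)
    {
        this.db = db;
    }

    public IQueryable<Model.Employee> GetEmployees()
    {
        // No ToList()/AsEnumerable() here: returning IQueryable<T> lets the
        // caller's Where/Select/OrderBy run inside the database.
        return db.Employees;
    }

    public IQueryable<Model.Company> GetCompanies()
    {
        return db.Companies;
    }
}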

You can ask Linq2Sql to preload certain entities (as opposed to lazy loading them) using the DataLoadOptions.LoadWith method; see: http://msdn.microsoft.com/en-us/library/bb534268.aspx.
If you do this with the Company entity, then I think Linq2Sql won't have to go back to the database to fetch it again.
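A minimal sketch of that approach, assuming a LINQ to SQL DataContext called MyDataContext with Employee and Company entities (those names are placeholders, not from the question):

// Requires: using System.Data.Linq;  (for DataLoadOptions)
using (var db = new MyDataContext()) // hypothetical DataContext name
{
    var options = new DataLoadOptions();
    // Eagerly load each Employee's Company in the same query (via a join)
    // rather than issuing a separate SELECT per employee.
    options.LoadWith<Employee>(e => e.Company);
    db.LoadOptions = options; // must be set before the first query executes

    var employees = db.Employees
        .OrderBy(e => e.Name)
        .ToList();
}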

Related

EF 4.1 Code First doesn't create column for List<string>

I have been playing around quite a lot with EF4 Code First and I do love it. However, I cannot seem to sort this easy one out.
When trying to create something like this, no columns are created in my database:
public IList<String> Recievers { get; set; }
public List<String> RecieversTest { get; set; }
public virtual List<String> RecieversAnotherTest { get; set; }
public virtual ICollection<Int32> RecieversYetAnotherTest { get; set; }
I've tried annotations to map it to a different column name, and I've tried IEnumerable and all sorts of other collections, but it refuses to create a column for it.
After an hour on Google I found one person who claims she has done it, but I'm starting to doubt that. Should it even be possible?
I can't really see why it just doesn't create a column and use JSON or CSV.
It can't be that rare, can it? In my case I just want to store a list of emails.
What am I missing? The project creates all other types without problems, and I've inspected the database to see how other properties I add for testing get created, while these get ignored.
So the problem must lie in some setting I'm missing or some configuration....
EF 4.1 RTW on an SQL Server 2008 db.
I have bad news for you. EF doesn't do anything like that. If you want any serialization and deserialization you must do it yourself: you must expose and map a property containing the serialized value:
private IList<String> _receivers;

// This will be skipped
public IList<String> Receivers
{
    get { return _receivers; }
    set { _receivers = value; }
}

// This will be mapped
public string ReceiversSer
{
    get { return String.Join(";", _receivers); }
    set { _receivers = value.Split(';').ToList(); }
}
Now ReceiversSer will be mapped to a column in the database.
You can't have a column based on a collection/list of something. A column is a singular item, such as public string Receiver.
If you are expecting EF CF to take your IList or List and make several columns out of it, you are correct in that it won't.
In EF CF you create lists in your entity to represent a relationship to another table. An Order may have many Items in it. In this case you would have an Order class with a list of OrderItem objects.
You would then have an OrderItem class to describe the OrderItem table. This essentially represents the one-to-many relationship between Order and OrderItems.
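A rough sketch of that shape in EF Code First (class and property names here are illustrative):

// Illustrative Code First classes: the collection property models a
// one-to-many relationship, which EF maps to a foreign key on the
// OrderItems table rather than to a column on Orders.
public class Order
{
    public int OrderId { get; set; }
    public virtual ICollection<OrderItem> Items { get; set; }
}

public class OrderItem
{
    public int OrderItemId { get; set; }
    public int OrderId { get; set; }          // foreign key back to Order
    public virtual Order Order { get; set; }
    public string ProductName { get; set; }
    public int Quantity { get; set; }
}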

Entity Framework/Linq to sql model to business model

I'm coming from a stored-procedure, hand-coded data access layer approach. I am trying to understand where I should fit LINQ to SQL or Entity Framework into my normal planning. I normally separate the business layer from the DAL layer and use a repository in between.
It seems that people will either use the generated classes from LINQ to SQL, extend them using partial classes, or do a full separation and map the generated LINQ classes to separate business entities. I am partial to the separate business entities. However, this seems to be counterintuitive.
One of my last projects used DDD and the Entity Framework. When an object needed updating, it moved the business entity to the repository layer, which, on reaching the DAL layer, would create a context and then requery the object. It would then update the values and resubmit.
I didn't see the point, as the data context wasn't saved and an extra query was required to grab the object before updating. Normally I would just do the update (if concurrency wasn't an issue).
So my questions come down to:
Does it make sense to separate LINQ to SQL generated classes into business entities?
Should the data context be saved, or is that impractical?
Thanks for your time; I'm just trying to make sure I understand. I normally like to separate things out as it makes them cleaner to understand, even in some smaller projects.
I currently hand roll my own Dto classes and Datacontext instead of using auto-generated code files from Linq to Sql. To give some background of my solution architecture/modeling, I have a "Contract" project, and a "Dal" project. (Also a "Model" project, but I'll try to stay focused here on Dal only). Hand-rolling my own Dtos and Datacontext, makes everything a lot smaller and simpler, I'll give a few examples of how I do that here.
I never return a Dto object outside of the Dal; in fact I make sure to declare them as internal. The way I return them is to cast them as an interface (the interfaces are located in my "Contract" layer). We'll make a simple "PersonRepository" that implements the "IPersonRetriever" and "IPersonSaver" interfaces.
Contracts:
public interface IPersonRetriever
{
    IPerson GetPersonById(Guid personId);
}

public interface IPersonSaver
{
    void SavePerson(IPerson person);
}
Dal:
public class PersonRepository : IPersonSaver, IPersonRetriever
{
    private string _connectionString;

    public PersonRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    IPerson IPersonRetriever.GetPersonById(Guid id)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            return dc.PersonDtos.FirstOrDefault(p => p.Id == id);
        }
    }

    void IPersonSaver.SavePerson(IPerson person)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            var personDto = new PersonDto
            {
                Id = person.Id,
                FirstName = person.FirstName,
                Age = person.Age
            };
            dc.PersonDtos.InsertOnSubmit(personDto);
            dc.SubmitChanges();
        }
    }
}
PersonDataContext:
internal class PersonDataContext : System.Data.Linq.DataContext
{
    // An explicit MappingSource is necessary for pre-compiled LINQ queries in .NET 4.0+
    static MappingSource _mappingSource = new AttributeMappingSource();

    internal PersonDataContext(string connectionString) : base(connectionString, _mappingSource) { }

    internal Table<PersonDto> PersonDtos { get { return GetTable<PersonDto>(); } }
}

[Table(Name = "dbo.Persons")]
internal class PersonDto : IPerson
{
    [Column(Name = "PersonIdentityId", IsPrimaryKey = true, IsDbGenerated = false)]
    internal Guid Id { get; set; }

    [Column]
    internal string FirstName { get; set; }

    [Column]
    internal int Age { get; set; }

    #region IPerson implementation
    Guid IPerson.Id { get { return this.Id; } }
    string IPerson.FirstName { get { return this.FirstName; } }
    int IPerson.Age { get { return this.Age; } }
    #endregion
}
You will need to add the "Column" attribute to all of your Dto properties, but notice that when there is a one-to-one correlation between the name you want exposed on the interface and the name of the actual table column, you won't need to add any of the named parameters. In this example my person id is stored in the database as "PersonIdentityId", yet I only want my interface to expose the field as "Id".
That's how I do my Dal layer, I believe this layer should be dumb, real dumb. Dumb in the sense that it is only there for CRUD (Create, Retrieve, Update and Delete) operations. All of the business logic would go into my "Model" project, which would consume and utilize the IPersonSaver and IPersonRetriever interfaces.
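For illustration only, a consumer of those contracts in the "Model" project might look something like this (the PersonService class is my own sketch, not part of the original answer):

// Hypothetical business-logic class that depends only on the Contract interfaces,
// so it never sees the LINQ to SQL Dtos or the DataContext.
public class PersonService
{
    private readonly IPersonRetriever _retriever;
    private readonly IPersonSaver _saver;

    public PersonService(IPersonRetriever retriever, IPersonSaver saver)
    {
        _retriever = retriever;
        _saver = saver;
    }

    public IPerson GetPerson(Guid personId)
    {
        // Any business rules live here; the Dal stays "dumb" CRUD.
        return _retriever.GetPersonById(personId);
    }
}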
Hope this helps!

PLINQO / LINQ-To-SQL - Generated Entity Self Save Method?

Hi I'm trying to create a basic data model / layer
The idea is to have:
Task task = TaskRepository.GetTask(2);
task.Description = "The task has changed";
task.Save();
Is this possible? I've tried the code below
Note: The TaskRepository.GetTask() method detaches the Task entity.
I'd expect this to work; any ideas why it doesn't?
Thanks
public partial class Task
{
    // Place custom code here.
    public void Save()
    {
        using (TinyTaskDataContext db = new TinyTaskDataContext { Log = Console.Out })
        {
            db.Task.Attach(this);
            db.SubmitChanges();
        }
    }

    #region Metadata

    // For more information about how to use the metadata class visit:
    // http://www.plinqo.com/metadata.ashx
    [CodeSmith.Data.Audit.Audit]
    internal class Metadata
    {
        // WARNING: Only attributes inside of this class will be preserved.
        public int TaskId { get; set; }

        [Required]
        public string Name { get; set; }

        [Now(EntityState.New)]
        [CodeSmith.Data.Audit.NotAudited]
        public System.DateTime DateCreated { get; set; }
    }

    #endregion
}
Having done some reading I've realised I was implementing the Repository pattern incorrectly. I should have been adding the Save method to the repository for convention's sake.
However, the actual problem I was having with committing the disconnected entity was due to optimistic concurrency. The data context's job is to keep track of the state of its entities; when entities become disconnected you lose that state.
I've found that you either need to add a timestamp field to the database table, or set the UpdateCheck option on each column in the dbml file.
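For reference, here is a rough sketch of both options expressed with LINQ to SQL attribute mapping (the TaskDto class and column names are illustrative, not taken from the question):

// Requires: using System.Data.Linq; using System.Data.Linq.Mapping;
[Table(Name = "dbo.Tasks")]
public class TaskDto
{
    [Column(IsPrimaryKey = true)]
    public int TaskId { get; set; }

    // Option 1: exclude this column from the optimistic-concurrency check,
    // so re-attaching a disconnected entity doesn't need its original value.
    [Column(UpdateCheck = UpdateCheck.Never)]
    public string Description { get; set; }

    // Option 2: add a rowversion/timestamp column; LINQ to SQL then uses it
    // as the sole concurrency check when updating.
    [Column(IsVersion = true, IsDbGenerated = true)]
    public Binary RowVersion { get; set; }
}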
Here is some info about the UpdateCheck
Some useful links about disconnected Linq and plinqo
Great info on implementing the Repository pattern with LINQ
Short tutorial for implementing for updating and reattaching entities
Previously answered question
Rick Strahl on LINQ to SQL and attaching Entities
There is no need for this line (Task task = new Task();). The above should work although I've never seen it implemented in this manner. Have you thought about using the managers? Are you running into any runtime errors?
Thanks
-Blake Niemyjski

Share Json data between Asp.Net MVC 2 and Asp.Net server side C# code?

I created and love my Asp.Net MVC2 application. It's a very nice DDD app with Domain Model classes, View Model classes, a repository, and Json action methods to expose data.
My coworker wants to consume my data from his ASP.NET Web Forms based C# code. He wants to pull a class definition (like a data contract) over the Internet, then fill it with my JSON results, effectively using something like a remote repository.
Any links or ideas on how to provide him with data contracts and data?
Darin Dimitrov had an excellent idea of consuming JSON data using data contracts here. Just wondering if it's possible to use MVC as the source for these items, then let him create the objects on his side, filled with data from my side.
The key to this question is how to send him my data classes, then send him my data.
class Program
{
    [DataContract]
    class Person
    {
        [DataMember(Name = "name")]
        public string Name { get; set; }

        [DataMember(Name = "surname")]
        public string Surname { get; set; }

        [DataMember(Name = "age")]
        public int Age { get; set; }
    }

    static void Main(string[] args)
    {
        var json = @"{""name"" : ""michael"", ""surname"" : ""brown"", ""age"" : 35}";
        var serializer = new DataContractJsonSerializer(typeof(Person));
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            var person = (Person)serializer.ReadObject(stream);
            Console.WriteLine("Name : {0}, Surname : {1}, Age : {2}",
                person.Name, person.Surname, person.Age);
        }
    }
}
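On the serving side, a hedged sketch of an ASP.NET MVC 2 action that exposes the data as JSON might look like this (controller, action, and repository names are assumptions, not from the question):

// Hypothetical MVC 2 controller exposing domain data as JSON for the Forms app.
public class PeopleController : Controller
{
    private readonly IPersonRepository repository; // assumed repository abstraction

    public PeopleController(IPersonRepository repository)
    {
        this.repository = repository;
    }

    public JsonResult GetPerson(Guid id)
    {
        var person = repository.GetPersonById(id);
        // MVC 2 blocks JSON over GET by default; AllowGet opts in explicitly.
        return Json(person, JsonRequestBehavior.AllowGet);
    }
}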
Write an OData service. The format is JSON, but the tools to consume it easily -- from many languages -- are already written for you.
The nice thing about this is that your data is now not only consumable by your JS and your friend's ASP.NET app, it's consumable by Excel, PHP, etc.
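As a rough sketch, a WCF Data Services (OData) endpoint over an existing Entity Framework context can be as small as this (ProductsContext is a placeholder for your own context type):

// Requires a reference to System.Data.Services.
using System.Data.Services;
using System.Data.Services.Common;

// Hypothetical read-only OData endpoint over an existing EF context.
public class MyDataService : DataService<ProductsContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose all entity sets for reading; writes stay disabled.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}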

Repository without ORM for saving object graph

I know it's fairly straightforward to create a repository for retrieving domain models without an ORM (Repository Pattern without LINQ or other ORM?). However, what about saving domain models and their internal object graph?
public class Car : IAggregateRoot, IEntity, ICar
{
    public IEnumerable<IWheel> Wheels { get; set; }
}

public class CarRepository
{
    public void Save(ICar car)
    {
        // calls Dao
        // update/insert all wheels as required
        // update/insert car as required
    }
}
Here we need to think about change tracking, etc. How does one go about implementing it?
For my specific implementation I'm treating LINQ to SQL as the Dao. LINQ to SQL does change tracking, but the domain models that I created are not tracked. They are straight POCOs and are not mapped directly by LINQ to SQL. Everything is done by a custom DataMapper:
public class CarDataMapper : IMapper<LinqData.Car, Domain.ICar>
{
    public ICar Map(LinqData.Car linqCar)
    {
        return new Car
        {
            Wheels = linqCar.Wheels.Select(w => new WheelDataMapper().Map(w))
        };
    }
}
Is there any straightforward way to implement a repository that saves an object graph without exposing LINQ to SQL or NHibernate to the domain layer? Or am I missing something here?
I am also struggling to find a solution.
DDD without an ORM, is it possible?
Look at http://solveme.wordpress.com/2009/11/11/ddd-without-any-orm-tool-is-it-possible/