Entity Framework/Linq to sql model to business model - linq-to-sql

I'm coming from a stored-procedure approach where I create the data access layer manually. I am trying to understand where I should fit Linq to SQL or Entity Framework into my normal planning. I normally separate the business layer from the DAL and use a repository in between.
It seems that people will either use the generated classes from Linq to SQL directly, extend them via partial classes, or do a full separation and map the generated Linq classes to separate business entities. I am partial to the separate business entities. However, this seems to be counterintuitive.
One of my last projects used DDD and Entity Framework. When an object needed updating, the business entity was passed to the repository layer, which, when going to the DAL, would create a context and then requery the object. It would then update the values and resubmit.
I didn't see much point in this, as the data context wasn't kept around and an extra query was required to grab the object before updating it. Normally I would just do the update (if concurrency wasn't an issue).
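To illustrate, here's a rough sketch of the pattern I mean (hypothetical names, LINQ to SQL-style, concurrency checks omitted):
public void UpdatePerson(Person changed)
{
    using (var dc = new MyDataContext(_connectionString))
    {
        // Extra round-trip: requery the current row before applying the changes.
        var entity = dc.Persons.Single(p => p.Id == changed.Id);
        entity.FirstName = changed.FirstName;
        entity.Age = changed.Age;
        dc.SubmitChanges(); // the UPDATE statement is computed from the tracked changes
    }
}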
So my questions come down to:
Does it make sense to separate Linq to SQL generated classes into business entities?
Should the data context be saved or is that impractical?
Thanks for your time; I'm trying to make sure I understand. I normally like to separate things out, as it makes everything cleaner to understand, even in smaller projects.

I currently hand-roll my own Dto classes and DataContext instead of using the auto-generated code files from Linq to Sql. To give some background on my solution architecture/modeling, I have a "Contract" project and a "Dal" project. (Also a "Model" project, but I'll try to stay focused on the Dal here.) Hand-rolling my own Dtos and DataContext makes everything a lot smaller and simpler; I'll give a few examples of how I do that here.
I never return a Dto object outside of the Dal; in fact I make sure to declare them as internal. The way I return them is to cast them to an interface (interfaces are located in my "Contract" layer). We'll make a simple "PersonRepository" that implements the "IPersonRetriever" and "IPersonSaver" interfaces.
Contracts:
public interface IPersonRetriever
{
    IPerson GetPersonById(Guid personId);
}

public interface IPersonSaver
{
    void SavePerson(IPerson person);
}
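The IPerson interface lives in the Contract project as well; it isn't shown here, but inferring from the PersonDto implementation below, it would look something like this (a sketch, not the original code):
public interface IPerson
{
    Guid Id { get; }
    string FirstName { get; }
    int Age { get; }
}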
Dal:
public class PersonRepository : IPersonSaver, IPersonRetriever
{
    private string _connectionString;

    public PersonRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    IPerson IPersonRetriever.GetPersonById(Guid id)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            // Note: the Dto property is Id (mapped to the PersonIdentityId column).
            return dc.PersonDtos.FirstOrDefault(p => p.Id == id);
        }
    }

    void IPersonSaver.SavePerson(IPerson person)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            var personDto = new PersonDto
            {
                Id = person.Id,
                FirstName = person.FirstName,
                Age = person.Age
            };
            dc.PersonDtos.InsertOnSubmit(personDto);
            dc.SubmitChanges();
        }
    }
}
PersonDataContext:
internal class PersonDataContext : System.Data.Linq.DataContext
{
    static MappingSource _mappingSource = new AttributeMappingSource(); // necessary for pre-compiled linq queries in .Net 4.0+

    internal PersonDataContext(string connectionString)
        : base(connectionString, _mappingSource) { }

    internal Table<PersonDto> PersonDtos { get { return GetTable<PersonDto>(); } }
}
[Table(Name = "dbo.Persons")]
internal class PersonDto : IPerson
{
    [Column(Name = "PersonIdentityId", IsPrimaryKey = true, IsDbGenerated = false)]
    internal Guid Id { get; set; }

    [Column]
    internal string FirstName { get; set; }

    [Column]
    internal int Age { get; set; }

    #region IPerson implementation
    Guid IPerson.Id { get { return this.Id; } }
    string IPerson.FirstName { get { return this.FirstName; } }
    int IPerson.Age { get { return this.Age; } }
    #endregion
}
You will need to add the "Column" attribute to all of your Dto properties, but notice that when there is a one-to-one correlation between the name you want the field exposed as on the interface and the name of the actual table column, you won't need any of the named parameters. In this example the person id is stored in the database as "PersonIdentityId", yet I only want my interface to expose the field as "Id".
That's how I do my Dal layer. I believe this layer should be dumb, real dumb: it is only there for CRUD (Create, Retrieve, Update and Delete) operations. All of the business logic goes into my "Model" project, which consumes and utilizes the IPersonSaver and IPersonRetriever interfaces.
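For example, a minimal sketch of what a consumer in the "Model" project might look like (a hypothetical class; the validation is only there to suggest where business logic lives):
public class PersonService
{
    private readonly IPersonRetriever _retriever;
    private readonly IPersonSaver _saver;

    public PersonService(IPersonRetriever retriever, IPersonSaver saver)
    {
        _retriever = retriever;
        _saver = saver;
    }

    public void RegisterPerson(IPerson person)
    {
        // Business rules live here, not in the Dal.
        if (person.Age < 0) throw new ArgumentException("Age cannot be negative.");
        _saver.SavePerson(person);
    }
}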
Hope this helps!

Related

Implementing with both Adapter Design Pattern and Facade Design pattern

I'm new to design patterns.
I'm implementing a tool which can connect to different databases as the user needs.
This is my code structure.
In the controllers I have my API calls. Below I paste the POST API call to get all databases on the server:
@PostMapping("/allDatabases")
public List<String> getDatabases(@RequestBody DatabaseModel db)
        throws IOException, SQLException {
    return migrationInterface.getAllDatabases(db);
}
For now I'm getting the response by calling a method on an interface inside the service package. But when the database server changes (e.g. Postgres, MySQL) I have to use different queries. For example:
public class PostgresPreparedStatements {
    public PreparedStatement getAllDbs(Connection con) throws SQLException {
        return con.prepareStatement(
            "SELECT datname FROM pg_database WHERE datistemplate = false;");
    }
}
This query does not work in a MySQL database (the MySQL equivalent is SHOW DATABASES), so I'll keep different prepared statements for different databases. My idea is to call a BaseAdapter from the controller and check the server type, like below:
public class BaseAdapter {
    public void checkServerType(String server) {
        switch (server) {
            case "postgres":
                // postgres functions
                break;
            case "mysql":
                // mysql functions
                break;
            default:
                break;
        }
    }
}
I want to call PostgresConnector.java if the server is Postgres. From the Connector I want to call a Facade to call functions and related queries.
Any idea how to do this?
Please note: for now I'm implementing this for Postgres and MySQL, but in the future this should work with any database.
The Adapter pattern is not used when you want to add new behaviour, such as new databases in your case. The goal of an adapter class is to allow other classes to access existing functionality: an adapter converts the interface of one class into something that may be used by another class.
It looks like BaseAdapter has the responsibility of choosing the SQL statement for different databases. We can paraphrase this responsibility as: we want the SQL query to be generated based on the database. So it looks like
we can replace this switch statement with a Hashtable (Java) or Dictionary (C#). And this Hashtable or Dictionary can be a simple factory that creates SQL queries. And our generated SQL queries can be strategies for the concrete databases.
So let's dive in code.
It looks like this is a place where Strategy pattern can be used:
Strategy pattern is a behavioral software design pattern that enables
selecting an algorithm at runtime. Instead of implementing a single
algorithm directly, code receives run-time instructions as to which in
a family of algorithms to use.
Let me show an example in C#. I am sorry, I am not a Java guy; however, I provided comments about how the code could look in Java.
We need some common behaviour that will be shared across all strategies. In our case, it is just one GetAllDbs() method for the different data providers:
public interface IDatabaseStatement
{
    IEnumerable<string> GetAllDbs();
}
And its concrete implementations. These are exchangeable strategies:
public class PostgresDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "PostgresDatabaseStatement" };
    }
}

public class MySQLDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "MySQLDatabaseStatement" };
    }
}

public class SqlServerDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "SqlServerDatabaseStatement" };
    }
}
We need a place where all strategies are stored and from which we can get the necessary strategy. This is a place where a simple factory can be used. (A simple factory is not the Factory Method pattern, nor Abstract Factory.)
public enum DatabaseName
{
    SqlServer, Postgres, MySql
}

public class DatabaseStatementFactory
{
    private Dictionary<DatabaseName, IDatabaseStatement> _statementByDatabaseName
        = new Dictionary<DatabaseName, IDatabaseStatement>()
        {
            { DatabaseName.SqlServer, new SqlServerDatabaseStatement() },
            { DatabaseName.Postgres, new PostgresDatabaseStatement() },
            { DatabaseName.MySql, new MySQLDatabaseStatement() },
        };

    public IDatabaseStatement GetInstanceByType(DatabaseName databaseName) =>
        _statementByDatabaseName[databaseName];
}
and then you can get an instance of the desired strategy easily:
DatabaseStatementFactory databaseStatementFactory = new();
IDatabaseStatement databaseStatement = databaseStatementFactory
    .GetInstanceByType(DatabaseName.MySql);
IEnumerable<string> allDatabases = databaseStatement.GetAllDbs();
// OUTPUT: MySQLDatabaseStatement
This design is compliant with the open/closed principle.
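To see why: supporting a new database later only requires a new strategy class plus one registration (a hypothetical Oracle example; nothing existing changes apart from adding the enum member and the dictionary entry):
public class OracleDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "OracleDatabaseStatement" };
    }
}

// ...plus one extra entry in DatabaseStatementFactory's dictionary:
// { DatabaseName.Oracle, new OracleDatabaseStatement() }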

How do I populate a Data Access Layer Model Efficiently?

I'm working on developing my first Data Driven Domain using Dependency Injection in ASP.net.
In my Data Access Layer I have created some domain data models, for example:
public class Company {
    public Guid CompanyId { get; set; }
    public string Name { get; set; }
}

public class Employee {
    public Guid EmployeeId { get; set; }
    public Guid CompanyId { get; set; }
    public string Name { get; set; }
}
I have then developed an interface such as:
public interface ICompanyService {
    IEnumerable<Model.Company> GetCompanies();
    IEnumerable<Model.Employee> GetEmployees();
    IEnumerable<Model.Employee> GetEmployees(Guid companyId);
}
In a separate module I have implemented this interface using Linq to Sql:
public class CompanyService : ICompanyService {
    public IEnumerable<Model.Employee> GetEmployees()
    {
        return EmployeeDb
            .OrderBy(e => e.Name)
            .Select(e => e.ToDomainEntity())
            .AsEnumerable();
    }
}
Where ToDomainEntity() is implemented in the employee repository class as an extension method to the base entity class:
public Model.Employee ToDomainEntity()
{
    return new Model.Employee {
        EmployeeId = this.EmployeeId,
        CompanyId = this.CompanyId,
        Name = this.Name
    };
}
To this point, I have more or less followed the patterns as described in Mark Seemann's excellent book 'Dependency Injection in .NET' - and all works nicely.
I would like however to extend my basic models to also include key reference models, so the domain Employee class would become:
public class Employee {
    public Guid EmployeeId { get; set; }
    public Guid CompanyId { get; set; }
    public Company Company { get; set; }
    public string Name { get; set; }
}
and the ToDomainEntity() function would be extended to:
public Model.Employee ToDomainEntity()
{
    return new Model.Employee {
        EmployeeId = this.EmployeeId,
        CompanyId = this.CompanyId,
        Company = (this.Company == null) ? null : this.Company.ToDomainEntity(),
        Name = this.Name
    };
}
I suspect that this might be 'bad practice' from a domain modelling point of view, but the problem I have encountered would also, I think, hold true if I were to develop a specific View Model to achieve the same purpose.
In essence, the problem I have run into is the speed/efficiency of populating the data models. If I use the ToDomainEntity() approach described above, Linq to Sql creates a separate SQL call to retrieve the data for each Employee's Company record. This, as you would expect, increases the time taken to evaluate the SQL expression quite considerably (from around 100ms to 7 seconds on our test database), particularly if the data tree is complex (as separate SQL calls are made to populate each node/sub-node of the tree).
If I create the data model 'inline'...
public IEnumerable<Model.Employee> GetEmployees()
{
    return EmployeeDb
        .OrderBy(e => e.Name)
        .Select(e => new Model.Employee {
            EmployeeId = e.EmployeeId,
            /* Other field mappings */
            Company = new Model.Company {
                CompanyId = e.Company.CompanyId,
                /* Other field mappings */
            }
        }).AsEnumerable();
}
Linq to SQL produces a nice, tight SQL statement that natively uses the 'inner join' method to associate the Company with the Employee.
I have two questions:
1) Is it considered 'bad practice' to reference associated data classes from within a domain class object?
2) If this is the case, and a specific View Model is created for the purpose, what is the right way of populating the model without having to resort to creating inline assignment blocks to build the expression tree?
Any help/advice would be much appreciated.
The problem is caused by having both data layer entities and domain layer entities and needing a mapping between the two. Although you can get this to work, this makes everything very complex, as you are already experiencing. You are making mappings between data and domain, and will soon add many more mappings for these same entities, because of performance reasons and because other business logic and presentation logic will need different data.
The only real solution is to ditch your data entities and create POCO model objects that can directly be serialized to your backend store (SQL server).
POCO entities are something that LINQ to SQL has supported from day one, but I think it would be better to migrate to Entity Framework Code First.
When doing this, you can expose IQueryable<T> interfaces from your repositories (you currently called your repository ICompanyService, but a better name would be ICompanyRepository). This allows you to do efficient LINQ queries. When querying directly over a query provider you can prevent loading complete entities. For instance:
from employee in this.repository.GetEmployees()
where employee.Company.Name.StartsWith(searchString)
select new
{
    employee.Name,
    employee.Company.Location
};
When working with IQueryable<T>, LINQ to SQL and Entity Framework will translate this to a very efficient SQL query that only returns the employee name and company location from the database, with the filtering done inside the database (as opposed to filtering in your .NET application when GetEmployees() returns an IEnumerable<T>).
You can ask Linq2Sql to preload certain entities (as opposed to lazy loading them) using the DataLoadOptions.LoadWith method; see: http://msdn.microsoft.com/en-us/library/bb534268.aspx.
If you do this with the Company entity then I think Linq2Sql won't have to reach out to the database to fetch it again.
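For example, a minimal sketch against the Employee/Company model from the question (the CompanyDataContext name is hypothetical):
var db = new CompanyDataContext(connectionString); // hypothetical context type
var loadOptions = new DataLoadOptions();           // from System.Data.Linq
// Eager-load each Employee's Company in the same SQL statement,
// instead of issuing one lazy query per employee row.
loadOptions.LoadWith<Employee>(e => e.Company);
db.LoadOptions = loadOptions; // must be assigned before the first query executes

var employees = db.Employees.OrderBy(e => e.Name).ToList();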

Localization using a DI framework - good idea?

I am working on a web application which I need to localize and internationalize. It occurred to me that I could do this using a dependency injection framework. Let's say I declare an interface ILocalResources (using C# for this example but that's not really important):
interface ILocalResources {
    string OkString { get; }
    string CancelString { get; }
    string WrongPasswordString { get; }
    ...
}
and create implementations of this interface, one for each language I need to support. I would then set up my DI framework to instantiate the proper implementation, either statically or dynamically (for example, based on the requesting browser's preferred language).
Is there some reason I shouldn't be using a DI framework for this sort of thing? The only objection I can come up with myself is that it might be a bit overkill, but if I'm using a DI framework in my web app anyway, I might as well use it for internationalization as well?
A DI framework is built to do dependency injection, and localization could just be one of your services, so in that case there's no reason not to use a DI framework IMO. Perhaps we should start by discussing the provided ILocalResources interface. While I'm in favor of having compile-time support, I'm not sure the supplied interface will help you, because that interface will probably be the type in your system that changes the most, and with it the type(s) that implement it. Perhaps you should go with a different design.
When we look at most localization frameworks/providers/factories (or whatever), they're all string based. Because of this, think about the following design:
public interface ILocalResources
{
    string GetStringResource(string key);
    string GetStringResource(string key, CultureInfo culture);
}
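Hypothetical usage of this key-based design (the keys and culture shown are just examples):
ILocalResources resources = GetLocalResources(); // e.g. resolved from your DI container (hypothetical helper)
string ok = resources.GetStringResource("OK");
string cancel = resources.GetStringResource("Cancel", new CultureInfo("nl-NL"));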
This would allow you to add keys and cultures to the underlying message data store without changing the interface. The downside, of course, is that you should never change a key, because renaming keys will probably be hell.
Another approach could be an abstract base type:
public abstract class LocalResources
{
    public string OkMessage { get { return this.GetString("OK"); } }
    public string CancelMessage { get { return this.GetString("Cancel"); } }
    ...

    protected abstract string GetStringResource(string key,
        CultureInfo culture);

    private string GetString(string key)
    {
        CultureInfo culture = CultureInfo.CurrentCulture;
        string resource = GetStringResource(key, culture);

        // When the resource is not found, fall back to the neutral culture.
        while (resource == null && culture != CultureInfo.InvariantCulture)
        {
            culture = culture.Parent;
            resource = this.GetStringResource(key, culture);
        }

        if (resource == null) throw new KeyNotFoundException(key);

        return resource;
    }
}
An implementation of this type could look like this:
public sealed class SqlLocalResources : LocalResources
{
    protected override string GetStringResource(string key,
        CultureInfo culture)
    {
        using (var db = new LocalResourcesContext())
        {
            return (
                from resource in db.StringResources
                where resource.Culture == culture.Name
                where resource.Key == key
                select resource.Value).FirstOrDefault();
        }
    }
}
This approach takes the best of both worlds, because the keys won't be scattered through the application and adding new properties just has to be done in one single place. Using your favorite DI library, you can register an implementation like this:
container.RegisterSingleton<LocalResources>(new SqlLocalResources());
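Consumers can then depend on the abstract LocalResources type and stay unaware of the storage (resolution shown here with a Simple Injector-style GetInstance call; substitute your own container's equivalent):
LocalResources resources = container.GetInstance<LocalResources>();
string ok = resources.OkMessage; // looks up "OK" for the current culture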
And since the LocalResources type has exactly one abstract method that does all the work, it is easy to create a decorator that adds caching to prevent requesting the same data from the database:
public sealed class CachedLocalResources : LocalResources
{
    private readonly Dictionary<CultureInfo, Dictionary<string, string>> cache =
        new Dictionary<CultureInfo, Dictionary<string, string>>();
    private readonly LocalResources decoratee;

    public CachedLocalResources(LocalResources decoratee)
    {
        this.decoratee = decoratee;
    }

    protected override string GetStringResource(string key, CultureInfo culture)
    {
        lock (this.cache)
        {
            string res;
            var cultureCache = this.GetCultureCache(culture);
            if (!cultureCache.TryGetValue(key, out res))
            {
                cultureCache[key] = res = this.decoratee.GetStringResource(key, culture);
            }
            return res;
        }
    }

    private Dictionary<string, string> GetCultureCache(CultureInfo culture)
    {
        Dictionary<string, string> cultureCache;
        if (!this.cache.TryGetValue(culture, out cultureCache))
        {
            this.cache[culture] = cultureCache = new Dictionary<string, string>();
        }
        return cultureCache;
    }
}
You can apply the decorator as follows:
container.RegisterSingleton<LocalResources>(
new CachedLocalResources(new SqlLocalResources()));
Do note that this decorator caches the string resources indefinitely, which might cause memory leaks, so you may wish to wrap the strings in WeakReference instances or add some sort of expiration timeout. But the idea is that you can apply caching without having to change any existing implementation.
I hope this helps.
If you cannot use an existing resource framework (like that built into ASP.Net) and would have to build your own, I will assume that you at some point will need to expose services that provide localized resources.
DI frameworks are used to handle service instantiation. Your localization framework will expose services providing localization. Why shouldn't that service be served up by the framework?
Not using DI for its purpose here is like saying, "I'm building a CRM app but cannot use DI because DI is not built for customer relations management".
So yes, if you're already using DI in the rest of your application, IMO it would be wrong to not use it for the services handling localization.
The only disadvantage I can see is that for any update to "resources", you would have to recompile the assembly containing them. And depending on your project, this disadvantage may be a good reason to only use a DI framework for resolving a ResourceService of some kind, rather than the values themselves.

Problems creating my own Custom DataAttribute for LinqToSql

I'm building LINQ models by hand, because I want to understand what's really happening.
There is a great lightweight tutorial on turning standard classes into LINQ models that I am reading here.
For my sample application I have created some models that look like:
public class LoginModel
{
    [Column(IsPrimaryKey = true,
        DbType = "UniqueIdentifier NOT NULL",
        CanBeNull = false)]
    public Guid LoginID { get; set; }

    // ...and more properties not relevant to the question...
}
I'm definitely seeing a pattern for the primary key, which led me to create...
[AttributeUsage(AttributeTargets.Property
    | AttributeTargets.Field,
    AllowMultiple = false)]
public sealed class ColumnPrimaryKeyAttribute : DataAttribute
{
    public ColumnPrimaryKeyAttribute()
    {
        CanBeNull = false;
        IsPrimaryKey = true;
        DbType = "UniqueIdentifier NOT NULL";
    }

    // etc, etc...
}
So when I use my new attribute, LINQ is not picking it up (even though it inherits from the same DataAttribute as Column). Is there a step I'm missing, or should I abandon this idea?
Try inheriting from ColumnAttribute...
public class ColumnPrimaryKeyAttribute : ColumnAttribute
Edit:
Never mind, I see that ColumnAttribute is sealed. You may be out of luck as my guess is LINQ is doing a System.Attribute.GetCustomAttributes(typeof(ColumnAttribute));
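A quick way to check that guess (a hypothetical snippet; it filters attributes the way the attribute-based mapping presumably does):
var prop = typeof(LoginModel).GetProperty("LoginID");
// Filtering for ColumnAttribute will not return a sibling subclass of
// DataAttribute, so ColumnPrimaryKeyAttribute stays invisible to the mapper.
var columns = Attribute.GetCustomAttributes(prop, typeof(ColumnAttribute));
Console.WriteLine(columns.Length); // 0 when only [ColumnPrimaryKey] is applied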

Is there any way to serialize a linq object for Memcached?

I'm just starting to switch to memcached and am currently testing with it.
I have 2 objects. I created one object and put [Serializable] on it (let's call this Object1); the other object was created using a Linq DBML (Object2).
I tried to memcache List<Object1>; it worked just fine, like a charm; everything was cached and loaded properly.
But then I moved on to the Linq object. When I try to add List<Object2> to memcached, it does not work: nothing was added to memcached at all, no key was added.
I moved on and changed the Serialization Mode to Unidirectional and tried the add again; still no hope.
Is there any way to make this work?
Here is the simple test I just wrote, using the MemcachedProvider from CodePlex, to demonstrate:
public ActionResult Test()
{
    var returnObj = DistCache.Get<List<Post>>("testKey");
    if (returnObj == null)
    {
        DataContext _db = new DataContext();
        returnObj = _db.Posts.ToList();
        DistCache.Add("testKey", returnObj, new TimeSpan(29, 0, 0, 0));
        _db.Dispose();
    }
    return Content(returnObj.First().TITLE);
}
This is from memcached; no STORE was ever called:
> NOT FOUND _x_testKey
>532 END
<528 get _x_testKey
> NOT FOUND _x_testKey
>528 END
<516 get _x_testKey
> NOT FOUND _x_testKey
>516 END
And in my SQL profiler, it ran 3 queries for the 3 test runs, which proves that the object coming back from Memcached is null, so it queries the database each time.
It looks like the default implementation (DefaultTranscoder) is to use BinaryFormatter; the "unidirectional" stuff is an instruction to a different serializer (DataContractSerializer), and doesn't add [Serializable].
(Note: I've added a memo to myself to try to write a protobuf-net transcoder for memcached; that would be cool and would fix most of this for free)
I haven't tested, but a few options present themselves:
1. write a different transcoder implementation that detects [DataContract] and uses DataContractSerializer, and hook this transcoder in
2. add [Serializable] to your types via a partial class (I'm not convinced this will work, due to the LINQ field types not being serializable)
3. add an ISerializable implementation in a partial class that uses DataContractSerializer
4. like 3, but using protobuf-net, which a: works with "unidirectional", and b: is faster and smaller than DataContractSerializer
5. write a serializable DTO and map your types to that
The last is simple but may add more work (see the sketch after this list).
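For example, a minimal sketch of that last option (the Post/PostDto shapes here are hypothetical, inferred from the test code above):
[Serializable]
public class PostDto
{
    public int Id { get; set; }
    public string Title { get; set; }

    public static PostDto FromEntity(Post post)
    {
        // Copy plain values only; none of the LINQ plumbing comes along.
        return new PostDto { Id = post.ID, Title = post.TITLE };
    }
}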
I'd be tempted to look at the 3rd option first, as the 1st involves rebuilding the provider; the 4th option would also definitely be on my list of things to test.
I struggled with 3, due to the DCS returning a different object during deserialization; I switched to protobuf-net instead, so here's a version that shows adding a partial class to your existing [DataContract] type to make it work with BinaryFormatter. Actually, I suspect (with evidence) this will also make it much more efficient (than raw [Serializable]), too:
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using ProtoBuf;

/* DBML generated */
namespace My.Object.Model
{
    [DataContract]
    public partial class MyType
    {
        [DataMember(Order = 1)]
        public int Id { get; set; }

        [DataMember(Order = 2)]
        public string Name { get; set; }
    }
}

/* Your extra class file */
namespace My.Object.Model
{
    // this adds **extra** code into the existing MyType
    [Serializable]
    public partial class MyType : ISerializable
    {
        public MyType() { }

        void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
        {
            Serializer.Serialize(info, this);
        }

        protected MyType(SerializationInfo info, StreamingContext context)
        {
            Serializer.Merge(info, this);
        }
    }
}

/* quick test via BinaryFormatter */
namespace My.App
{
    using My.Object.Model;

    static class Program
    {
        static void Main()
        {
            BinaryFormatter bf = new BinaryFormatter();
            MyType obj = new MyType { Id = 123, Name = "abc" }, clone;
            using (MemoryStream ms = new MemoryStream())
            {
                bf.Serialize(ms, obj);
                ms.Position = 0;
                clone = (MyType)bf.Deserialize(ms);
            }
            Console.WriteLine(clone.Id);
            Console.WriteLine(clone.Name);
        }
    }
}