I'm just starting to switch to memcached and am currently testing it.
I have two objects. One I created myself and marked with [Serializable] (let's call it Object1); the other is generated from a LINQ to SQL DBML (Object2).
I tried caching a List<Object1> and it works just fine, like a charm; everything is cached and loaded back properly.
But when I moved on to the LINQ object and tried to add a List<Object2> to memcached, it doesn't work: nothing is added and no key is created.
I then changed the DBML's Serialization Mode to Unidirectional and tried the add again, still no luck.
Is there any way to make this work?
Here is the simple test I just wrote, using MemcachedProvider from CodePlex, to demonstrate:
public ActionResult Test()
{
    var returnObj = DistCache.Get<List<Post>>("testKey");
    if (returnObj == null)
    {
        DataContext _db = new DataContext();
        returnObj = _db.Posts.ToList();
        DistCache.Add("testKey", returnObj, new TimeSpan(29, 0, 0, 0));
        _db.Dispose();
    }
    return Content(returnObj.First().TITLE);
}
This is from the memcached log; no STORE command was ever issued:
> NOT FOUND _x_testKey
>532 END
<528 get _x_testKey
> NOT FOUND _x_testKey
>528 END
<516 get _x_testKey
> NOT FOUND _x_testKey
>516 END
And in SQL Profiler I see three queries for the three test runs, which proves the object coming back from memcached is null, so it hits the database every time.
It looks like the default implementation (DefaultTranscoder) is to use BinaryFormatter; the "unidirectional" stuff is an instruction to a different serializer (DataContractSerializer), and doesn't add [Serializable].
(Note: I've added a memo to myself to try to write a protobuf-net transcoder for memcached; that would be cool and would fix most of this for free)
I haven't tested, but a few options present themselves:
1. write a different transcoder implementation that detects [DataContract] and uses DataContractSerializer, and hook this transcoder up
2. add [Serializable] to your types via a partial class (I'm not convinced this will work, due to the LINQ field types not being serializable)
3. add an ISerializable implementation in a partial class that uses DataContractSerializer
4. like 3, but using protobuf-net, which a: works with "unidirectional", and b: is faster and smaller than DataContractSerializer
5. write a serializable DTO and map your types to that
The last is simple but may add more work.
I'd be tempted to look at the 3rd option first, as the 1st involves rebuilding the provider; the 4th option would also definitely be on my list of things to test.
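For illustration, the heart of option 1 is just swapping BinaryFormatter for DataContractSerializer when the type is marked [DataContract]; a minimal round-trip sketch of that serializer is below (the transcoder hook itself depends on the provider's API, so the helper class and method names here are mine, not part of the provider):
using System.IO;
using System.Runtime.Serialization;

static class DcsHelper
{
    // serialize a [DataContract] type to the byte[] a transcoder would store
    public static byte[] Serialize<T>(T value)
    {
        var dcs = new DataContractSerializer(typeof(T));
        using (var ms = new MemoryStream())
        {
            dcs.WriteObject(ms, value);
            return ms.ToArray();
        }
    }

    // deserialize the stored byte[] back into the [DataContract] type
    public static T Deserialize<T>(byte[] data)
    {
        var dcs = new DataContractSerializer(typeof(T));
        using (var ms = new MemoryStream(data))
        {
            return (T)dcs.ReadObject(ms);
        }
    }
}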
I struggled with 3, due to the DCS returning a different object during deserialization, so I switched to protobuf-net instead; here's a version that shows adding a partial class to your existing [DataContract] type to make it work with BinaryFormatter. Actually, I suspect (with evidence) this will also make it much more efficient than raw [Serializable], too:
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using ProtoBuf;

/* DBML generated */
namespace My.Object.Model
{
    [DataContract]
    public partial class MyType
    {
        [DataMember(Order = 1)]
        public int Id { get; set; }

        [DataMember(Order = 2)]
        public string Name { get; set; }
    }
}

/* Your extra class file */
namespace My.Object.Model
{
    // this adds **extra** code into the existing MyType
    [Serializable]
    public partial class MyType : ISerializable
    {
        public MyType() { }

        void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
        {
            Serializer.Serialize(info, this);
        }

        protected MyType(SerializationInfo info, StreamingContext context)
        {
            Serializer.Merge(info, this);
        }
    }
}

/* quick test via BinaryFormatter */
namespace My.App
{
    using My.Object.Model;

    static class Program
    {
        static void Main()
        {
            BinaryFormatter bf = new BinaryFormatter();
            MyType obj = new MyType { Id = 123, Name = "abc" }, clone;
            using (MemoryStream ms = new MemoryStream())
            {
                bf.Serialize(ms, obj);
                ms.Position = 0;
                clone = (MyType)bf.Deserialize(ms);
            }
            Console.WriteLine(clone.Id);
            Console.WriteLine(clone.Name);
        }
    }
}
I am trying to map my Name column to a dynamic object. This is how the raw JSON data looks (note that this is SQL-morphed from our old relational data and I am not able to generate or interact with this column via EF Core):
{ "en": "Water", "fa": "آب", "ja": "水", ... }
Just to note, available languages are stored in a separate table and thus are dynamically defined.
Through T-SQL I can interact with these objects perfectly well, e.g.
SELECT *
FROM [MyObjects]
WHERE JSON_VALUE(Name, '$.' + @languageCode) = @searchQuery
But it seems EF Core doesn't want to even deserialize these objects as a whole, let alone query them.
What I get in a simple GetAll query is an empty Name. Other columns are not affected though.
What I have tried so far:
Using an empty class with a [JsonExtensionData] dictionary inside
Using a class that inherits from DynamicObject and implements GetDynamicMembers, TryGetMember, TrySetMember, TryCreateInstance
Directly mapping to a string dictionary.
Combining 1 & 2 and adding an indexer operator on top.
All yield the same results: an empty Name.
I have other options, like going back to a relational junction table (which I have many issues with), hardcoding the languages (not really intuitive, and it might cause problems in the future), or using HasJsonConversion (which basically destroys the performance of any search action)... so I'm basically stuck here.
I think currently it's not fully supported:
You cannot use dynamic operations in an expression tree (such as a Select statement) because it needs to be translated.
JsonValue and JsonQuery require a path to be resolved.
If you specify OwnsOne(entity => entity.Owned, owned => owned.ToJson()) and the JSON cannot be parsed, you will get an error.
I suggest this workaround while the EF team improves the functionality.
Create a static class with static methods to be used as decoys in the expression tree. These will be mapped to the server's built-in functions.
public static class DBF
{
    public static string JsonValue(this string column, [NotParameterized] string path)
        => throw new NotSupportedException();

    public static string JsonQuery(this string column, [NotParameterized] string path)
        => throw new NotSupportedException();
}
Register the database functions in your OnModelCreating method.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    modelBuilder.HasDbFunction(
        typeof(DBF).GetMethod(nameof(DBF.JsonValue))!
    ).HasName("JSON_VALUE").IsBuiltIn();

    modelBuilder.HasDbFunction(
        typeof(DBF).GetMethod(nameof(DBF.JsonQuery))!
    ).HasName("JSON_QUERY").IsBuiltIn();

    // ...

    // the entity type here is whichever entity owns the JSON column
    // (FileInformation in the query example below)
    modelBuilder.Entity<FileInformation>(entity =>
    {
        // treat the JSON column as plain text
        entity.Property(x => x.Metadata)
              .HasColumnType("varchar")
              .HasMaxLength(8000);
    });
}
Call them dynamically with LINQ.
var a = await _context.FileInformation
    .AsNoTracking()
    .Where(x => x.Metadata!.JsonValue("$.Property1") == "some value")
    .Select(x => x.Metadata!.JsonValue("$.Property2"))
    .ToListAsync();
You can add casts or even build anonymous types with this method.
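Applied to the MyObjects.Name column from the question (assuming Name is mapped as a plain string, the same way Metadata is above, and that the DbSet is called MyObjects), the original T-SQL search becomes roughly:
var languageCode = "en";
var searchQuery = "Water";

var results = await _context.MyObjects
    .AsNoTracking()
    // "$." + languageCode is evaluated client-side into the literal path "$.en"
    .Where(x => x.Name!.JsonValue("$." + languageCode) == searchQuery)
    .ToListAsync();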
My solution was to add a new class with Key and Value properties to represent the dictionary I needed:
public class DictionaryObject
{
    public string Key { set; get; }
    public string Value { set; get; }
}
and instead of having this line in the JSON class:
public Dictionary<string, string> Name { get; set; }
I changed it to:
public List<DictionaryObject> Name { get; set; }
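With that shape, reading a localized name after loading the entity is a simple in-memory lookup, e.g. (a sketch; the Id property and context names are assumed):
// load the entity, then pick the value for the requested language code
var obj = await _context.MyObjects.FirstAsync(o => o.Id == id);
var name = obj.Name.FirstOrDefault(n => n.Key == languageCode)?.Value;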
Hope it helps.
I've got the following setup: C#, ServiceStack, MariaDB, POCOs with objects and structs, JSON.
The main question is: how to use ServiceStack to store POCOs to MariaDB having complex types (objects and structs) blobbed as JSON and still have working de/serialization of the same POCOs? All of these single tasks are supported, but I had problems when all put together mainly because of structs.
... finally, while writing this, I found a solution, so it may look like I answered my own question, but I would still like to hear from more skilled people, because the solution I found seems a little complicated to me. Details and two subquestions come up later in the context.
Sorry for the length and for possible misinformation caused by my limited knowledge.
A simple example. This is the final working version I ended up with; at the beginning there were no SomeStruct.ToString()/Parse() methods and no JsConfig settings.
using Newtonsoft.Json;
using ServiceStack;
using ServiceStack.DataAnnotations;
using ServiceStack.OrmLite;
using ServiceStack.Text;
using System.Diagnostics;

namespace Test
{
    public class MainObject
    {
        public int Id { get; set; }
        public string StringProp { get; set; }
        public SomeObject ObjectProp { get; set; }
        public SomeStruct StructProp { get; set; }
    }

    public class SomeObject
    {
        public string StringProp { get; set; }
    }

    public struct SomeStruct
    {
        public string StringProp { get; set; }

        public override string ToString()
        {
            // Unable to use .ToJson() here (ServiceStack does not serialize structs).
            // Unable to use ServiceStack's JSON.stringify here because it just takes ToString() => stack overflow.
            // => Therefore Newtonsoft.Json used.
            var serializedStruct = JsonConvert.SerializeObject(this);
            return serializedStruct;
        }

        public static SomeStruct Parse(string json)
        {
            // This method behaves differently for just deserialization or when part of Save().
            // Details in the text.
            // After playing with different options of altering the json input I ended with just taking what comes.
            // After all it is not necessary, but maybe useful in other situations.
            var structItem = JsonConvert.DeserializeObject<SomeStruct>(json);
            return structItem;
        }
    }

    internal class ServiceStackMariaDbStructTest
    {
        private readonly MainObject _mainObject = new MainObject
        {
            ObjectProp = new SomeObject { StringProp = "SomeObject's String" },
            StringProp = "MainObject's String",
            StructProp = new SomeStruct { StringProp = "SomeStruct's String" }
        };

        public ServiceStackMariaDbStructTest()
        {
            // This one line is needed to store complex types as blobbed JSON in MariaDB.
            MySqlDialect.Provider.StringSerializer = new JsonStringSerializer();

            JsConfig<SomeStruct>.RawSerializeFn = someStruct => JsonConvert.SerializeObject(someStruct);
            JsConfig<SomeStruct>.RawDeserializeFn = json => JsonConvert.DeserializeObject<SomeStruct>(json);
        }

        public void Test_Serialization()
        {
            try
            {
                var json = _mainObject.ToJson();
                if (!string.IsNullOrEmpty(json))
                {
                    var objBack = json.FromJson<MainObject>();
                }
            }
            catch (System.Exception ex)
            {
                Debug.WriteLine(ex.Message);
            }
        }

        public void Test_Save()
        {
            var cs = "ConnectionStringToMariaDB";
            var dbf = new OrmLiteConnectionFactory(cs, MySqlDialect.Provider);
            using var db = dbf.OpenDbConnection();
            db.DropAndCreateTable<MainObject>();

            try
            {
                db.Save(_mainObject);
                var dbObject = db.SingleById<MainObject>(_mainObject.Id);
            }
            catch (System.Exception ex)
            {
                Debug.WriteLine(ex.Message);
            }
        }
    }
}
What (I think) I know / have tried, but which at first didn't help me solve it myself:
ServiceStack stores complex types in the DB as blobbed JSV by default (last paragraph of the first section: https://github.com/ServiceStack/ServiceStack.OrmLite), so it is necessary to set it the way it is proposed: MySqlDialect.Provider.StringSerializer = new JsonStringSerializer(); (https://github.com/ServiceStack/ServiceStack.OrmLite#pluggable-complex-type-serializers) => the default JSV is changed to JSON.
ServiceStack's serialization does not work with structs out of the box; it is necessary to treat them in a special way:
a) according to https://github.com/ServiceStack/ServiceStack.Text#c-structs-and-value-types and example https://github.com/ServiceStack/ServiceStack.Text/#using-structs-to-customize-json it is necessary to implement TStruct.ToString() and static TStruct.ParseJson()/ParseJsv() methods.
b) according to https://github.com/ServiceStack/ServiceStack.Text/#typeserializer-details-jsv-format and unit tests https://github.com/ServiceStack/ServiceStack.Text/blob/master/tests/ServiceStack.Text.Tests/CustomStructTests.cs it shall be TStruct.ToString() (the same as in a) and static TStruct.Parse().
Subquestion #1: which one is the right one? For me, ParseJson() was never called, but Parse() was. Is this a documentation issue, or is ParseJson() used in some other situation?
I implemented option b). Results:
IDbConnection.Save(_mainObject) saved the item to MariaDB. Success.
Through the saving process ToString() and Parse() were called. In Parse, incoming JSON looked this way:
"{\"StringProp\":\"SomeStruct's String\"}". Fine.
Serialization worked. Success.
Deserialization failed. I don't know the reason, but JSON incoming to Parse() was "double-escaped":
"{\\\"StringProp\\\":\\\"SomeStruct's String\\\"}"
Subquestion #2: Why the "double-escaping" in Parse on deserialization?
I tried to solve structs with JsConfig (and Newtonsoft.Json to get proper JSON):
JsConfig<SomeStruct>.SerializeFn = someStruct => JsonConvert.SerializeObject(someStruct);
JsConfig<SomeStruct>.DeSerializeFn = json => JsonConvert.DeserializeObject<SomeStruct>(json);
a) at first without ToString() and Parse() defined in the TStruct. Results:
Save failed: the json input to JsonConvert.DeserializeObject(json), which is used during Save, was just the type name "WinAmbPrototype.SomeStruct".
De/serialization worked.
b) then I implemented ToString(), also using Newtonsoft.Json. During Save, ToString() was used instead of JsConfig.SerializeFn even though JsConfig.SerializeFn was still set (maybe by design, I do not judge). Results:
Save failed, but the json input of the DeserializeFn called during Save changed: now it was JSV-like "{StringProp:SomeStruct's String}", still not deserializable as JSON.
De/serialization worked.
Then (during writing this I was still without any solution) I found JsConfig.Raw* "overrides" and tried them:
JsConfig<SomeStruct>.RawSerializeFn = someStruct => JsonConvert.SerializeObject(someStruct);
JsConfig<SomeStruct>.RawDeserializeFn = json => JsonConvert.DeserializeObject<SomeStruct>(json);
a) at first without ToString() and Parse() defined in the TStruct. Results are the same as in 2a.
b) then I implemented ToString(). Results:
BOTH WORKED. No Parse() method needed for this task.
But it is a very fragile setup:
if I removed ToString(), it failed (now I understand why: the default ToString() produced just the type name, as in 2a and 3a).
if I removed RawSerializeFn setting, it failed in RawDeserializeFn ("double-escaped" JSON).
Is there some simpler solution? I would be very glad if someone points me in a better direction.
Two solutions would be acceptable (both of them needed, for different circumstances):
if I am the TStruct owner: just pure TStruct.ToString() and static TStruct.Parse() supporting out-of-the-box de/serialization and DB storage by ServiceStack (without Parse() receiving different input in the two cases).
if I am a consumer of a TStruct with no JSON support implemented and without access to its code: so far I have not found a way; if ToString() is not implemented, Save to DB does not work. It would be fine if the JsConfig serialize functions alone were enough both for de/serialization and when used during saving to the DB.
And the best solution would avoid employing another dependency (e.g. Newtonsoft.Json) to serialize structs. Maybe something like JsConfig.ShallProcessStructs = true; (WARNING: just an idea, not working as of 2021-04-02) would be fine for such situations.
ServiceStack treats structs like a single scalar value type, just like most of the core BCL value types (e.g. TimeSpan, DateTime, etc). Overloading the Parse() and ToString() methods and the struct's constructor lets you control the serialization/deserialization of custom structs.
Docs have been corrected. Structs use Parse whilst classes use ParseJson/ParseJsv
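For example, a custom struct following that convention might look like this (the type and its string format here are illustrative, not from the question):
using System.Globalization;

public struct Money
{
    public decimal Amount { get; set; }

    // ServiceStack serializes a struct by calling ToString()...
    public override string ToString() =>
        Amount.ToString(CultureInfo.InvariantCulture);

    // ...and deserializes it via the static Parse(string) method
    public static Money Parse(string value) =>
        new Money { Amount = decimal.Parse(value, CultureInfo.InvariantCulture) };
}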
If you want to serialize a model's properties, I'd suggest you use a class instead, as the behavior you're looking for is that of a POCO DTO.
If you want to have structs serialized as DTOs in your RDBMS, an alternative you can try is to just use JSON.NET for the complex type serialization, e.g.:
public class JsonNetStringSerializer : IStringSerializer
{
    public To DeserializeFromString<To>(string serializedText) =>
        JsonConvert.DeserializeObject<To>(serializedText);

    public object DeserializeFromString(string serializedText, Type type) =>
        JsonConvert.DeserializeObject(serializedText, type);

    public string SerializeToString<TFrom>(TFrom from) =>
        JsonConvert.SerializeObject(from);
}
MySqlDialect.Provider.StringSerializer = new JsonNetStringSerializer();
I have the following dependency chain:
IUserAppService
IUserDomainService
IUserRepository
IUserDataContext - UserDataContextImpl(string conn)
All interfaces above and their implementations are registered in a Castle Windsor container. When I use one connection string, everything works fine.
Now we want to support multiple databases. In UserAppServiceImpl.cs, we want to get a different IUserRepository (a different IUserDataContext) according to userId, as below:
// UserAppServiceImpl.cs
public UserInfo GetUserInfo(long userId)
{
    var connStr = userId % 2 == 0 ? "conn1" : "conn2";
    //var repo = container.Resolve<IUserRepository>(....)
}
How can I pass the argument connStr to UserDataContextImpl?
Since the connection string is runtime data in your case, it should not be injected directly into the constructor of your components, as explained here. However, since the connection string is contextual data, it would be awkward to pass it along through all the public methods in your object graph.
Instead, you should hide it behind an abstraction that allows you to retrieve the proper value for the current request. For instance:
public interface ISqlConnectionFactory
{
    SqlConnection Open();
}
An implementation of the ISqlConnectionFactory itself could depend on a dependency that allows retrieving the current user id:
public interface IUserContext
{
    int UserId { get; }
}
Such a connection factory might therefore look like this:
public class SqlConnectionFactory : ISqlConnectionFactory
{
    private readonly IUserContext userContext;
    private readonly string con1;
    private readonly string con2;

    public SqlConnectionFactory(IUserContext userContext,
        string con1, string con2)
    {
        ...
    }

    public SqlConnection Open()
    {
        var connStr = userContext.UserId % 2 == 0 ? con1 : con2;
        var con = new SqlConnection(connStr);
        con.Open();
        return con;
    }
}
This leaves us with an IUserContext implementation. Such implementation will depend on the type of application we are building. For ASP.NET it might look like this:
public class AspNetUserContext : IUserContext
{
    public int UserId => int.Parse((string)HttpContext.Current.Session["UserId"]);
}
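The data context from the question would then depend on the factory instead of a raw connection string; a sketch (IUserDataContext's members aren't shown in the question, so the GetUserInfo member below is assumed):
public class UserDataContextImpl : IUserDataContext
{
    private readonly ISqlConnectionFactory connectionFactory;

    // no runtime data in the constructor, so Windsor can wire this up normally
    public UserDataContextImpl(ISqlConnectionFactory connectionFactory)
    {
        this.connectionFactory = connectionFactory;
    }

    public UserInfo GetUserInfo(long userId)
    {
        using (var con = connectionFactory.Open())
        {
            // run the query against 'con' here (ADO.NET, Dapper, etc.); omitted for brevity
            throw new NotImplementedException();
        }
    }
}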
You have to start at the root of your dependency resolution and resolve all of your derived dependencies against "named" registrations.
GitHub docs link: https://github.com/castleproject/Windsor/blob/master/docs/inline-dependencies.md
Example:
I have my IDataContext for MSSQL and another for MySQL.
This example is in Unity, but I am sure Windsor can do this; a rough Windsor equivalent is sketched at the end of this answer.
container.RegisterType(Of IDataContextAsync, dbEntities)("db", New InjectionConstructor())
container.RegisterType(Of IUnitOfWorkAsync, UnitOfWork)("UnitOfWork", New InjectionConstructor(New ResolvedParameter(Of IDataContextAsync)("db")))
'Exceptions example
container.RegisterType(Of IRepositoryAsync(Of Exception), Repository(Of Exception))("iExceptionRepository",
    New InjectionConstructor(New ResolvedParameter(Of IDataContextAsync)("db"),
                             New ResolvedParameter(Of IUnitOfWorkAsync)("UnitOfWork")))
sql container
container.RegisterType(Of IDataContextAsync, DataMart)(New HierarchicalLifetimeManager)
container.RegisterType(Of IUnitOfWorkAsync, UnitOfWork)(New HierarchicalLifetimeManager)
'brands
container.RegisterType(Of IRepositoryAsync(Of Brand), Repository(Of Brand))
controller code:
No changes required at the controller level.
results:
I can now have my MSSQL context do its work and MySQL do its work without any developer having to understand my container configuration. The developer simply consumes the correct service and everything is implemented.
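As mentioned above, Windsor can do the same thing with named components and inline dependencies (see the GitHub docs link); a rough C# sketch using the same types as the Unity example (I haven't run this, so treat it as a starting point):
using Castle.MicroKernel.Registration;

container.Register(
    Component.For<IDataContextAsync>()
             .ImplementedBy<dbEntities>()
             .Named("db"),
    Component.For<IUnitOfWorkAsync>()
             .ImplementedBy<UnitOfWork>()
             .Named("UnitOfWork")
             // wire the named "db" context into this unit of work
             .DependsOn(Dependency.OnComponent(typeof(IDataContextAsync), "db")),
    Component.For<IRepositoryAsync<Exception>>()
             .ImplementedBy<Repository<Exception>>()
             .Named("iExceptionRepository")
             .DependsOn(Dependency.OnComponent(typeof(IDataContextAsync), "db"),
                        Dependency.OnComponent(typeof(IUnitOfWorkAsync), "UnitOfWork")));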
I'm using EF 4.1 Code First. I have an entity defined with a property like this:
public class Publication
{
    // other stuff
    public virtual MailoutTemplate Template { get; set; }
}
I've configured this foreign key using fluent style like so:
modelBuilder.Entity<Publication>()
    .HasOptional(p => p.Template)
    .WithMany()
    .Map(p => p.MapKey("MailoutTemplateID"));
I have an MVC form handler with some code in it that looks like this:
public void Handle(PublicationEditViewModel publicationEditViewModel)
{
    Publication publication = Mapper.Map<PublicationEditViewModel, Publication>(publicationEditViewModel);
    publication.Template = _mailoutTemplateRepository.Get(publicationEditViewModel.Template.Id);

    if (publication.Id == 0)
    {
        _publicationRepository.Add(publication);
    }
    else
    {
        _publicationRepository.Update(publication);
    }

    _unitOfWork.Commit();
}
In this case, we're updating an existing Publication entity, so we're going through the else path. When the _unitOfWork.Commit() fires, an UPDATE is sent to the database that I can see in SQL Profiler and Intellitrace, but it does NOT include the MailoutTemplateID in the update.
What's the trick to get it to actually update the Template?
Repository Code:
public virtual void Update(TEntity entity)
{
    _dataContext.Entry(entity).State = EntityState.Modified;
}

public virtual TEntity Get(int id)
{
    return _dbSet.Find(id);
}
UnitOfWork Code:
public void Commit()
{
    _dbContext.SaveChanges();
}
depends on your repository code. :) If you were setting publication.Template while the Publication was being tracked by the context, I would expect it to work. When you are disconnected and then attach (with the scenario that you have a navigation property but no explicit FK property), I'm guessing the context just doesn't have enough info to work out the details when SaveChanges is called. I'd do some experiments: 1) do an integration test where you query the pub and keep it attached to the context, then add the template, then save; 2) stick a MailOutTemplateId property on the Publication class and see if it works. Not suggesting #2 as a solution, just as a way of groking the behavior. I'm tempted to do this experiment, but got some other work I need to do. ;)
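For what it's worth, experiment #2 from the comment above would look something like this (the FK property name is assumed; the mapping switches from MapKey to HasForeignKey once the property is exposed):
public class Publication
{
    // other stuff
    public int? MailoutTemplateId { get; set; }            // exposed FK column
    public virtual MailoutTemplate Template { get; set; }
}

modelBuilder.Entity<Publication>()
    .HasOptional(p => p.Template)
    .WithMany()
    .HasForeignKey(p => p.MailoutTemplateId);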
I found a way to make it work. The reason why I didn't initially want to have to do a Get() (aside from the extra DB hit) was that then I couldn't do this bit of AutoMapper magic to get the values:
Publication publication = Mapper.Map<PublicationEditViewModel, Publication>(publicationEditViewModel);
However, I found another way to do the same thing that doesn't use a return value, so I updated my method like so and this works:
public void Handle(PublicationEditViewModel publicationEditViewModel)
{
    Publication publication = _publicationRepository.Get(publicationEditViewModel.Id);
    _mappingEngine.Map(publicationEditViewModel, publication);
    // publication = Mapper.Map<PublicationEditViewModel, Publication>(publicationEditViewModel);
    publication.Template = _mailoutTemplateRepository.Get(publicationEditViewModel.Template.Id);

    if (publication.Id == 0)
    {
        _publicationRepository.Add(publication);
    }
    else
    {
        _publicationRepository.Update(publication);
    }

    _unitOfWork.Commit();
}
I'm injecting an IMappingEngine now into the class, and have wired it up via StructureMap like so:
For<IMappingEngine>().Use(() => Mapper.Engine);
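For context, the handler class now receives its collaborators through the constructor; a sketch (the class name and repository/unit-of-work interface names are assumed; only IMappingEngine is AutoMapper's):
public class PublicationEditViewModelHandler
{
    private readonly IMappingEngine _mappingEngine;
    private readonly IPublicationRepository _publicationRepository;
    private readonly IMailoutTemplateRepository _mailoutTemplateRepository;
    private readonly IUnitOfWork _unitOfWork;

    public PublicationEditViewModelHandler(IMappingEngine mappingEngine,
        IPublicationRepository publicationRepository,
        IMailoutTemplateRepository mailoutTemplateRepository,
        IUnitOfWork unitOfWork)
    {
        _mappingEngine = mappingEngine;
        _publicationRepository = publicationRepository;
        _mailoutTemplateRepository = mailoutTemplateRepository;
        _unitOfWork = unitOfWork;
    }

    // Handle(PublicationEditViewModel) as shown above
}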
For more on this, check out Jimmy's AutoMapper and IOC post.
I'm coming from a background of stored procedures and manually created data access layers. I am trying to understand where I should fit LINQ to SQL or Entity Framework into my normal planning. I normally separate the business layer from the DAL layer and use a repository in between.
It seems that people will either use the generated classes from LINQ to SQL, extend them using partial classes, or do a full separation and map the generated LINQ classes to separate business entities. I am partial to the separate business entities. However, this seems to be counterintuitive.
One of my last projects used DDD and the Entity Framework. When it needed to update an object, it moved the business entity to the repository layer, which, when going to the DAL layer, would create a context and then requery the object. It would then update the values and resubmit.
I didn't see much point, as the data context wasn't kept around and an extra query was required to grab the object before updating. Normally I would just do the update (if concurrency wasn't an issue).
So my questions come down to:
Does it make sense to separate LINQ to SQL generated classes into business entities?
Should the data context be kept around, or is that impractical?
Thanks for your time; I'm just trying to make sure I understand. I normally like to separate things out, as it makes things cleaner to understand, even in some smaller projects.
I currently hand-roll my own Dto classes and DataContext instead of using the auto-generated code files from LINQ to SQL. To give some background on my solution architecture/modeling, I have a "Contract" project and a "Dal" project. (Also a "Model" project, but I'll try to stay focused here on the Dal only.) Hand-rolling my own Dtos and DataContext makes everything a lot smaller and simpler; I'll give a few examples of how I do that here.
I never return a Dto object outside of the Dal; in fact, I make sure to declare them as internal. The way I return them out is by casting them to an interface (the interfaces are located in my "Contract" layer). We'll make a simple PersonRepository that implements the IPersonRetriever and IPersonSaver interfaces.
Contracts:
public interface IPersonRetriever
{
    IPerson GetPersonById(Guid personId);
}

public interface IPersonSaver
{
    void SavePerson(IPerson person);
}
Dal:
public class PersonRepository : IPersonSaver, IPersonRetriever
{
    private string _connectionString;

    public PersonRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    IPerson IPersonRetriever.GetPersonById(Guid id)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            return dc.PersonDtos.FirstOrDefault(p => p.Id == id);
        }
    }

    void IPersonSaver.SavePerson(IPerson person)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            var personDto = new PersonDto
            {
                Id = person.Id,
                FirstName = person.FirstName,
                Age = person.Age
            };
            dc.PersonDtos.InsertOnSubmit(personDto);
            dc.SubmitChanges();
        }
    }
}
PersonDataContext:
internal class PersonDataContext : System.Data.Linq.DataContext
{
    // necessary for pre-compiled linq queries in .Net 4.0+
    static MappingSource _mappingSource = new AttributeMappingSource();

    internal PersonDataContext(string connectionString) : base(connectionString, _mappingSource) { }

    internal Table<PersonDto> PersonDtos { get { return GetTable<PersonDto>(); } }
}

[Table(Name = "dbo.Persons")]
internal class PersonDto : IPerson
{
    [Column(Name = "PersonIdentityId", IsPrimaryKey = true, IsDbGenerated = false)]
    internal Guid Id { get; set; }

    [Column]
    internal string FirstName { get; set; }

    [Column]
    internal int Age { get; set; }

    #region IPerson implementation
    Guid IPerson.Id { get { return this.Id; } }
    string IPerson.FirstName { get { return this.FirstName; } }
    int IPerson.Age { get { return this.Age; } }
    #endregion
}
You will need to add the "Column" attribute to all of your Dto properties, but notice that if there is a one-to-one correlation between the name you want the field exposed as on the interface and the name of the actual table column, you won't need to add any of the named parameters. In this example my PersonId in the database is stored as "PersonIdentityId", yet I only want my interface to expose the field as "Id".
That's how I do my Dal layer. I believe this layer should be dumb, real dumb: dumb in the sense that it is only there for CRUD (Create, Retrieve, Update and Delete) operations. All of the business logic goes into my "Model" project, which consumes and utilizes the IPersonSaver and IPersonRetriever interfaces.
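For example, consuming the repository through the Contract interfaces looks roughly like this (the connection string and id values are placeholders):
// construct the concrete repository once, e.g. at the composition root
var repository = new PersonRepository("your-connection-string");

// consumers (e.g. the Model project) only ever see the Contract interfaces
IPersonRetriever retriever = repository;
IPerson person = retriever.GetPersonById(somePersonId);

IPersonSaver saver = repository;
saver.SavePerson(person);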
Hope this helps!