I'm building an application on top of a legacy database (which I cannot change). I'm using Linq to SQL for the data access, which means I have a (Linq to SQL) class for each table.
My domain model does not match the database. For example, there are two tables named Users and Employees, so I have two Linq to SQL classes named User and Employee. But in my domain model I'd like a single User class that contains some fields from each of those tables (and I don't care about a lot of their other fields).
I'm not sure how I should design my repositories:
should the repositories perform the mapping from the Linq to SQL classes (e.g. User, Employee) to the domain classes (User) and only return the domain classes to the application,
or should my repositories return the Linq to SQL classes and leave the mapping to the caller?
The first approach seems to make more sense to me, but is this the correct way to implement my repositories?
The purist (I try to stay pure) will tell you that your model represents your data, and therefore anything that needs to be persisted is persisted only when needed, through repositories. Also, when you have complex entities, you want to use a service to combine them. For example, User + Employee = a UserEmployee entity that is only accessible through an IUserEmployeeService.
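In code, that service boundary might look roughly like this (a minimal sketch; UserEmployee, IUserEmployeeService, and the members shown are illustrative names of mine, not anything generated for you):

// Hypothetical combined entity: a few fields from Users, a few from Employees.
public class UserEmployee
{
    public int UserId { get; set; }
    public string UserName { get; set; }        // from the Users table
    public string EmployeeNumber { get; set; }  // from the Employees table
}

// The only way the rest of the application gets at the combined entity.
public interface IUserEmployeeService
{
    UserEmployee GetByUserId(int userId);
    void Save(UserEmployee userEmployee);
}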
With those vague statements, you have an excellent opportunity here.
Build an anti-corruption layer, which allows you to start moving off of the legacy DB at the same time.
This is another chapter in the DDD playbook. An Anti-Corruption Layer is used to interface with a legacy system using Facades, Translators, and Adapters to isolate the legacy DB from your pure Domain model.
Now, this may be a lot more work than you wanted. So, you have to ask yourself at this point:
Do I want to start the process of moving off of this legacy DB, or will it remain for the life of the application?
If your answer is you can start migrating, then model your actual domain the way you want it. Persist it with normal repositories and services. Have fun designing it the way YOU want it stored. Then, use the services of the aggregate roots to reach into the anti-corruption layer and pull out the entities, store/update them locally, and translate into your domain's entities.
If the answer is that the legacy DB will remain for the life of the project, then your task is much easier. Use your domain's services (e.g. UserEmployeeService) to reach into the anti-corruption's UserFacade and EmployeeFacade (similar to a "Remote Service" concept).
Within the Facades, access the legacy DB using the Adapters (e.g. LegacyDbSqlDatabase) to get a raw legacyUser(). The next step would be to use a UserTranslator() and EmployeeTranslator() mapper that converts the legacy user data into your actual domain's version of the User() entity, and return it from the UserFacade back to your UserEmployeeService, where it is combined with the Employee entity that came from the same place.
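A rough sketch of that chain, with every type name here being illustrative (LegacyUserRow stands in for whatever your Linq to SQL designer generates for the legacy Users table):

// Raw legacy shape vs. the domain's own User entity.
public class LegacyUserRow { public int UserId; public string FullName; }
public class User { public int Id; public string Name; }

// Adapter: the only code that talks to the legacy DB (Linq to SQL, ADO.NET, whatever).
public class LegacyDbSqlDatabase
{
    public LegacyUserRow GetLegacyUser(int id)
    {
        // a Linq to SQL query against the legacy DataContext would go here
        return new LegacyUserRow { UserId = id, FullName = "stub" };
    }
}

// Translator: converts the raw legacy shape into the domain's User.
public class UserTranslator
{
    public User Translate(LegacyUserRow row)
    {
        return new User { Id = row.UserId, Name = row.FullName };
    }
}

// Facade: what UserEmployeeService calls; it never sees the legacy types.
public class UserFacade
{
    private readonly LegacyDbSqlDatabase db = new LegacyDbSqlDatabase();
    private readonly UserTranslator translator = new UserTranslator();

    public User GetUser(int id)
    {
        return translator.Translate(db.GetLegacyUser(id));
    }
}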
Whoa, that was a lot of typing...
With your Adapters and Facades of your Anti-Corruption layer, you can do your Linq-to-Sql or whatever you want to do. It doesn't matter because you have completely isolated the legacy DB/system away from your nice and pure Domain - your domain that has its own version of User() and Employee() entities and value objects.
DDD and Linq To SQL don't go together very well because the generated classes are not meant to deviate significantly from your DB table structure. You'll have to either map your classes in a way that makes working with Linq to SQL a pain or just live with a non-ideal object model.
If you really want to utilize DDD and the repository pattern, go for Entity Framework or, even better, NHibernate.
I'm still kind of new to Grails (and Groovy), so apologies if this question seems dumb.
I'm trying to access a SQL database, and it seems that I could use SQL commands in the Controller (taken from this StackOverflow question):
import groovy.sql.Sql

class MyFancySqlController {

    def dataSource // the Spring bean "dataSource" is auto-injected

    def list = {
        def db = new Sql(dataSource)                          // create a new instance of groovy.sql.Sql with the DB of the Grails app
        def result = db.rows("SELECT foo, bar FROM my_view")  // perform the query
        [result: result]                                      // return the results as the model
    }
}
I know that if I were to create a domain class with some variables, it would create a database table in SQL:
package projecttracker2

class ListProject {
    String name
    String description
    Date dueDate

    static constraints = {
    }
}
but this would create a table named "list_projects". If instead I just created the SQL table outside of Grails, and given that this follow-up question says you can disconnect the Domain class from your database, what purpose do Domain classes serve? I'm looking to do some SQL queries to insert, update, delete, etc., so I'm wondering what's the best way to do this.
Domain classes are used to model your domain of knowledge within your application. That covers not only the structure of the data but also how those models interact within your domain of knowledge.
That said, there is no reason why you can't create a Grails project without any domain classes and use your own SQL statements to create, read, update, and delete data in your database. I have worked on projects where there were no domain classes and everything was modeled using DTOs (data transfer objects) and services for accessing an already existing database and tables.
Of course by not using Domain classes you lose the integration with GORM, but that doesn't seem like an issue for your case (nor was it in the case I outlined above).
That's part of the beauty of Grails. You don't have to use all of it, you can use only the parts that make sense for your project.
In one of my projects I needed to dump the contents of a MySQL database into a Lucene index. Creating the whole domain class structure for such a one-off operation would be overkill, so the Groovy SQL API did just fine.
So, my answer is no, you DON'T have to use the domain classes if you don't want to.
I agree with what @joshua-moore has said; plus, domain classes can drastically simplify your project if you use them properly.
I agree with both answers, but for your particular case I would suggest having a domain class for the underlying table.
Reasons:
You mentioned all CRUD operations in your requirements. With a domain class it is convenient to let GORM handle the boilerplate code for any CRUD operation.
When using raw SQL you have to handle transactions manually for update operations, if transactions are a requirement. With GORM and Hibernate, that is handled automatically.
Code will be DRY. You do not have to create a Sql instance every time you need an operation done.
You can conveniently create domain classes for existing tables using the db-reverse-engineer plugin.
Domain classes give you a level of abstraction. In the future, if there is a plan to replace the MySQL DB with Oracle or a NoSQL DB, then in most cases all that is needed is to change the driver (with MongoDB there will be a bit of churn involved, but far less than rewriting SQL queries).
Auditing is easily achieved if domain classes are used.
These operations (add/update/delete) can easily be exposed as a service, if required.
Data validation is easier in domain classes.
Better protection against SQL injection compared to plain vanilla queries.
First a background. Our application is built on ASP.NET MVC3, .NET 4.0, and uses Linq-to-Sql (PLINQO) as its primary means of data access. Our web application is a multi-tenant/multi-client system where each client gets their own Sql Server database. Each Sql Server database up to now has had exactly the same schema.
Oftentimes, clients will ask us to track custom fields in their DB that other clients don't track. The way we've handled this is by reserving a number of custom fields in our main tables. For example, our Widget table may have CustomText1, CustomText2 .. CustomText10 fields and CustomDate1, CustomDate2 .. CustomDate10 fields. Again, all our schemas across clients are the same, so Linq-to-Sql handles these fields just as easily as any other field.
Now we are running into an issue where a client wants several hundred CustomBool fields, but doesn't need the others. So, basically, we are researching ways to keep using Linq-to-Sql, but have it work against potentially different schemas depending on the database it is connected to (although they differ in a very specific way).
Too much code has already been built on Linq-to-Sql and the Widget classes it generates for me to just fall back to straight SQL.
I've seen answers here and on the web on ways for Linq to Sql to access different tables that have the same schema, but I have not found a good answer for using the same table name across different DBs with different columns.
Is this possible?
If the main objective is to store a few extra fields for existing domain objects, then why not create a generic table that can store key-value pairs? This is extremely flexible, since there is no need to change your schema when a customer requires a new property.
We do this frequently and normally have some helpers to correctly cast the properties, e.g.
Service.GetProperty<bool>("SomeCustomProperty")
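A rough sketch of how such a helper might be implemented (the key-value table, the WidgetPropertyService name, and the loading code are all assumptions made for illustration):

using System;
using System.Collections.Generic;

// Assumes a generic table like WidgetProperty(WidgetId, Name, Value), with Value stored as a string.
public class WidgetPropertyService
{
    private readonly Dictionary<string, string> properties;   // loaded from the key-value table elsewhere

    public WidgetPropertyService(Dictionary<string, string> properties)
    {
        this.properties = properties;
    }

    public T GetProperty<T>(string name)
    {
        string raw;
        if (!properties.TryGetValue(name, out raw))
            return default(T);                                 // property not set for this record
        return (T)Convert.ChangeType(raw, typeof(T));          // cast the stored string to the requested type
    }
}

// Usage, mirroring the call above:
// bool someFlag = service.GetProperty<bool>("SomeCustomProperty");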
If you are looking for a more "pluggable" domain model that can be completely different for each tenant, I think you will struggle if you are following a database-driven approach and using the L2S designer to generate your code.
To achieve this you really need to be generating your database based on your code (domain-driven design), which will give you much more flexibility, i.e. you can load a tenant-specific configuration (set of classes, business rules, etc.) at runtime and use it to generate/validate your schema.
Update
It would be good if you could elaborate on exactly what design approach you have taken i.e. are you using the Linq designer and generating your model from the database?
It's clear that a generic key value pair store is not going to meet your querying requirements.
It's hard to provide a solution without suggesting a different technology. Relational SQL databases aren't really suited for dynamic domain models. You may be better off with a document database such as MongoDb or RavenDb where you are not tied to a specific schema. You could even make use of these just for your custom properties.
If that's not ideal then another solution would be to use something like Dapper to construct your queries. Assuming you are developing against interfaces, you can have an implementation of your data service per tenant that makes use of their custom fields.
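For example (a sketch only; IWidgetService, the tenant-specific class, the connection handling, and the column names are all assumptions):

using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool CustomBool1 { get; set; }
    public bool CustomBool2 { get; set; }
}

public interface IWidgetService
{
    Widget GetWidget(int id);
}

// Tenant-specific implementation that knows about this tenant's extra CustomBool columns.
public class TenantAWidgetService : IWidgetService
{
    private readonly string connectionString;

    public TenantAWidgetService(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public Widget GetWidget(int id)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            return connection.Query<Widget>(
                "SELECT Id, Name, CustomBool1, CustomBool2 FROM Widget WHERE Id = @Id",
                new { Id = id }).SingleOrDefault();
        }
    }
}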
Ayende did a whole series of posts on Multitenancy and covers tenant specific domain models. It starts here and may be of some use to you.
I'm trying to design an application that will have a UI with a database in the backend.
I will be using Linq-to-SQL as the database layer to update and insert.
Now I'm trying to find out the best practice to use in designing the project. Suppose I have two tables in the DB (Customers, Orders):
Shall I depend on the generated Linq-to-SQL classes, or shall I still create classes for Customers, Orders?
Shall I wrap the generated Linq-to-SQL inside another class to add validations?
I hope my questions are clear.
L2S is, in my opinion, an excellent lightweight data access method. If you have control over the database and have limited application data processing logic, it is often a good choice.
If you have a two-tier app with a UI communicating directly with the DB, then you can depend on the L2S generated classes. If you have a multi-tier app with a client communicating with e.g. a WCF service, you probably need Data Transfer Objects.
Use the partial methods on the L2S classes for validation.
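For example, assuming the designer generated a Customer entity with a Name property, the partial-method hooks would look roughly like this (the rules themselves are made up):

using System;
using System.Data.Linq;   // for ChangeAction

// Your half of the generated partial class; the designer generates the other half.
public partial class Customer
{
    // Called by Linq to SQL before the entity is inserted, updated, or deleted.
    partial void OnValidate(ChangeAction action)
    {
        if (action == ChangeAction.Insert || action == ChangeAction.Update)
        {
            if (string.IsNullOrEmpty(this.Name))
                throw new InvalidOperationException("Customer name is required.");
        }
    }

    // Called whenever the Name property is about to change.
    partial void OnNameChanging(string value)
    {
        if (value != null && value.Length > 100)
            throw new ArgumentException("Customer name is too long.");
    }
}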
I think you should use another ORM for a better DAL implementation, for example Entity Framework or NHibernate. These ORMs allow a Model First approach without attributes,
and you should separate the validation logic into other classes, for example MyEntityValidator.
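A sketch of what such a separate validator might look like (the Customer entity and the rules are invented for illustration):

using System.Collections.Generic;

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// Validation kept outside the entity, in its own class.
public class CustomerValidator
{
    public IList<string> Validate(Customer customer)
    {
        var errors = new List<string>();
        if (string.IsNullOrEmpty(customer.Name))
            errors.Add("Name is required.");
        if (string.IsNullOrEmpty(customer.Email) || !customer.Email.Contains("@"))
            errors.Add("A valid email address is required.");
        return errors;
    }
}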
It is also a good approach to use the Repository pattern, so your code doesn't depend on the data access technology (EF or NHibernate).
Also have a look at this: Entity Framework and Repository.
LINQ will generate a set of classes from the SQL files. Should these be used directly, or should they be wrapped in another class so the model is not so dependent on the implementation?
You can do it either way. Generally I wrap the Linq to SQL classes in a repository, but if the app is small you can use the repository methods directly.
If the app is larger you can add a business layer.
If you actually need to abstract away from your SQL database's model, then Linq-To-Sql is probably the wrong choice. Sure, you can make it work (but that isn't what it was made for).
If you need that level of abstraction, you will want to move on to a more "enterprisey" ORM like Entity Framework. They require more configuration, which is used to specify the more intricate mappings that allow your object model and database model to not resemble each other.
On the other hand, if this is overkill, then use Linq to Sql. It's simple and it's easy, as long as you can stick with its simplified approach to mappings.
I think it's fine to use the generated model classes directly in your business and presentation tiers - however, I would definitely encapsulate data access for those entities inside a repository pattern of some description (GetOne(), Save(), Search(), Delete() etc).
The main reason for doing so is to 'disconnect' query results before returning them to a calling layer, so that clients don't inadvertently execute queries directly against the database when they use LINQ on returned results. Eg, calling ToList() on an IQueryable<T> will return a local copy of the sequence that can be managed using plain LINQ to Objects.
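A minimal sketch of that kind of repository (assuming a designer-generated MyDataContext with a Widgets table; the names are placeholders):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class WidgetRepository
{
    public Widget GetOne(int id)
    {
        using (var context = new MyDataContext())
        {
            return context.Widgets.SingleOrDefault(w => w.Id == id);
        }
    }

    public List<Widget> Search(Expression<Func<Widget, bool>> predicate)
    {
        using (var context = new MyDataContext())
        {
            // ToList() executes the query here and returns a disconnected copy,
            // so callers end up using LINQ to Objects rather than a live IQueryable<T>.
            return context.Widgets.Where(predicate).ToList();
        }
    }
}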
It also promotes better separation of layers and less coupling, as clients will interact via interface methods on the repository, rather than use LINQ to SQL directly for data access, so if you do decide to chuck LINQ to SQL in favour of the Entity Framework (shudders), it's easier to do the refactoring.
The one exception I would make is when LINQ to SQL objects need to cross a service boundary, ie, sent as data transfer objects to or from a WCF service. In this case, I think it's a good idea to have a separate, light-weight object model that supports serialization - don't send your LINQ to SQL objects directly over the wire.
First a bit about the environment:
We use a program called Clearview to manage service relationships with our customers, including call center and field service work. In order to better support clients and our field technicians, we also developed a web site to provide access to the service records in Clearview and reporting. Over time our need to customize the behavior and add new features led to more and more things being tied to this website and its database.
At this point we're dealing with things like a Company being defined partly in the Clearview database and partly in the website database. For good measure we're also starting to tie the scripting for our phone system into the same website, which will require talking to the phone system's own database as well.
All of this is set up and working... BUT we don't have a good data layer to work with it all. We moved to Linq to SQL and now have two DBMLs that we can use, along with some custom classes I wrote before I'd ever heard of Linq, and some of the old-style ADO DataSets. So yeah, basically things are a mess.
What I want is a data layer that provides a single front end for our applications, and on the back end manages everything into the correct database.
I had heard something about Entity Framework allowing classes to be built from multiple sources, but it turns out there can only be one database. So the question is, how could I proceed with this?
I'm currently thinking of getting the Linq To SQL classes all set for each database, then manually writing Linq compatible front ends that tie those together. Seems like a lot of work, and given Linq's limitations (such as not being able to refresh) I'm not sure it's a good idea.
Could I do something with Entity Framework that would turn out better? Should I look into another tool? Am I crazy?
The Entity Framework does give a certain measure of database independence, insofar as you can build an entity model from one database, and then connect it to a different database by using a different entity connect string. However, as you say, it's still just one database, and, moreover, it's limited to databases which support the Entity Framework. Many do, but not all of them. You could use multiple entity models within a single application in order to combine multiple databases using the Entity Framework. There is some information on this on the ADO.NET team blog. However, the Entity Framework support for doing this is, at best, in an early stage.
My approach to this problem is to abstract my use of the Entity Framework behind the Repository pattern. The most immediate benefit of this, for me, is to make unit testing very simple; instead of trying to mock my Entity model, I simply substitute a mock repository which returns IQueryables. But the same pattern is also really good for combining multiple data sources, or data sources for which there is no Entity Framework provider, such as a non-data-services-aware Web service.
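As a sketch, the repository abstraction and an in-memory test double might look something like this (IOrderRepository and Order are illustrative names, not anything from the framework):

using System.Collections.Generic;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    IQueryable<Order> Orders { get; }
}

// Test double: backs the repository with an in-memory list instead of the Entity Framework.
public class FakeOrderRepository : IOrderRepository
{
    private readonly List<Order> orders;

    public FakeOrderRepository(IEnumerable<Order> seed)
    {
        orders = new List<Order>(seed);
    }

    public IQueryable<Order> Orders
    {
        get { return orders.AsQueryable(); }
    }
}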
So I'm not going to say, "Don't use the Entity Framework." I like it, and use it, myself. In view of recent news from Microsoft, I believe it is a better choice than LINQ to SQL. But it will not, by itself, solve the problem you describe. Use the Repository pattern.
If you want to use tools like Linq2Sql or EF and don't want to have to manage multiple DBMLs (or whatever it's called in EF or other tools), you could create views in your website database that reference back to the ClearView or phone system's DB.
This allows you to decouple your web site from their database structure. I believe Linq2Sql and EF can use a view as the source for an entity. If they can't, look at NHibernate.
This will also let you have composite entities that are pulled from the various data sources. There are some limitations on updating views in SQL Server; however, you can define your own INSTEAD OF trigger(s) on the view, which can then perform the actual insert, update, and delete statements.
L2S works perfectly with views in my project. You only need one small trick:
1. Add the secondary DB's table to the current DB as a view.
2. In the designer, add a primary key attribute to an ID field on the view.
3. Only now, add an association to whatever other table you want in the original DB.
Now you should see the view available for navigation.