Multiple DBML files - type sharing? - linq-to-sql

I have a Client/Server application, where the Client and Server have some common tables (which are kept in synchronisation as part of the application).
We currently store these tables (e.g. FileDetails) in a Shared.dbml file. Until now, any stored proc that returns a result set of FileDetails has been placed in the Shared.dbml, even if it is a Server-only SP.
I realised that LINQ to SQL supports a Base Class property on the DBML, and I thought that perhaps I could have a Server.dbml that extends my Shared.dbml. In theory this would give me a ServerDataContext with all the shared tables and SPs, as well as the server-specific elements. Normally in the LINQ to SQL designer I would drag and drop the SP over the FileDetails table to show that this is what it returns; however, as the class is in a different DBML this is not possible, and in the XML I don't think the ElementType IdRef="1" approach will work (as the ref would need to point into another file).
I found I can get around that problem by editing the XML's return type manually:
<Function Name="dbo.SELECT_FTS_FILES" Method="SELECT_FTS_FILES">
<Return Type="ISingleResult&lt;DataTypes.FileDetails&gt;" />
</Function>
My question is: does anyone have any experience with this kind of approach, and could you point me to further resources? Are there any obvious drawbacks to it (other than the manual XML updates)?
All feedback welcome

You could inherit from your DataContext. However, in your new DataContext you wouldn't be able to use the LINQ designer; you would have to code things out manually.
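For illustration, here is a rough sketch of what that hand-coded route can look like, assuming the designer-generated SharedDataContext and FileDetails classes from Shared.dbml and reusing the SELECT_FTS_FILES procedure from the question (the names and signature are assumptions, not tested against your model):

using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Reflection;

// Derives from the Shared.dbml context and hand-maps a server-only stored procedure.
public class ServerDataContext : SharedDataContext
{
    public ServerDataContext(string connection) : base(connection) { }

    [Function(Name = "dbo.SELECT_FTS_FILES")]
    public ISingleResult<FileDetails> SELECT_FTS_FILES()
    {
        // The same plumbing the designer would normally generate for a mapped SP.
        IExecuteResult result = this.ExecuteMethodCall(
            this, (MethodInfo)MethodInfo.GetCurrentMethod());
        return (ISingleResult<FileDetails>)result.ReturnValue;
    }
}

The derived context then exposes everything from Shared.dbml plus the server-only members, without any manual edits to the generated XML.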
Is there any reason you don't want two DataContexts?
Inheritance and LINQ to SQL don't play nicely together in general. If you have a deep need for it, you should look into another ORM such as NHibernate.

Related

Refactor Entity Framework code to use views instead of tables?

We are possibly looking at switching our tables for views in EF 4.3.1.
We are using db first via the edmx file, so it generates our entities and dbcontext.
Has anyone got any tips for remapping our entities from tables to views?
Is this prone to disaster? We've had trouble updating the EDMX file via the designer in the past, where the underlying changes weren't reflected somewhere deep in the generated code and we ended up with missing columns.
Or will views act very similar to tables in the EF world?
The designer handles views in a completely different way. First of all, any view used by EF through the designer is read-only unless you map stored procedures or custom SQL commands to the insert, update and delete operations for each entity you want to modify.
Normally, if you have an updatable view, you can simply modify the SSDL part of the EDMX and cheat it into pretending that the view is actually a table, but this has two consequences:
You must modify the EDMX directly as XML.
You must not use Update from database any more, because it always deletes the whole SSDL part and creates a new one without your changes; you must maintain your EDMX manually or buy a VS extension which allows you to update only selected tables.

Why does LinqPad create Fields instead of Properties?

I recently took on a project of creating a tool for LinqPad that would Dump query results into CSV format, in order to use the tool on massive databases for quick results. One thing I wanted out of the tool is for it to be able to work in both Visual Studio and LinqPad. Thus, whether I was using LINQ to SQL in VS2010 or LinqPad, I could dump results quickly to a CSV file and then open it in Excel to view the results.
The biggest hiccup in the project came from how LinqPad builds its DataContexts vs. how Visual Studio builds its DataContexts. The best information I could find on how LinqPad does it comes from here. Basically, what I found from my project was that VS2010 creates properties for its DataContexts, but LinqPad creates fields. Thus, when using reflection:
LinqPad:
dataContextType.GetProperties() // returns 0 properties
dataContextType.GetFields() // returns the fields from the LinqPad-created DataContext
VS 2010 LINQ to SQL:
dataContextType.GetProperties() // returns the properties from the VS-created DataContext
dataContextType.GetFields() // returns 0 fields
So why does LinqPad use Fields instead of Properties in their DataContexts? Wouldn't it have been more feasible to copy the Visual Studio LinqToSQL pattern?
Update
Based on a comment I decided to ask the same question within the LinqPad forum as well.
This is a good question. The main reason for LINQPad using fields to map columns is for performance when building the typed DataContext that backs database-connected queries.
We're not talking about the speed of executing the properties themselves (there's actually very little overhead in executing simple accessors, and the JIT may even inline them). The overhead is when building the typed DataContext via Reflection.Emit. A field is simply that: one item of metadata, whereas a property requires emitting a field definition, a property definition and two methods for the accessors (each with IL to get/set the underlying field). Because users may point LINQPad at databases with upwards of 1000 tables and functions, this can add up in terms of the time taken to build the assembly - as well as its size (increasing HDD activity and working set).
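To make the cost difference concrete, here is a rough standalone illustration (not LINQPad's actual code) of emitting one column as a field versus as a property with Reflection.Emit:

using System;
using System.Reflection;
using System.Reflection.Emit;

class EmitCostDemo
{
    static void Main()
    {
        AssemblyBuilder asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
            new AssemblyName("Demo"), AssemblyBuilderAccess.Run);
        ModuleBuilder module = asm.DefineDynamicModule("DemoModule");
        TypeBuilder type = module.DefineType("Row", TypeAttributes.Public);

        // A column mapped as a field: a single piece of metadata.
        type.DefineField("Name", typeof(string), FieldAttributes.Public);

        // The same column mapped as a property: backing field, property
        // definition and an accessor method with an IL body (plus a setter
        // in the real case).
        FieldBuilder backing = type.DefineField("_name", typeof(string), FieldAttributes.Private);
        PropertyBuilder prop = type.DefineProperty("Name2", PropertyAttributes.None, typeof(string), null);
        MethodBuilder getter = type.DefineMethod("get_Name2",
            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig,
            typeof(string), Type.EmptyTypes);
        ILGenerator il = getter.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldfld, backing);
        il.Emit(OpCodes.Ret);
        prop.SetGetMethod(getter);

        Type built = type.CreateType();
        Console.WriteLine(built.GetFields().Length);     // 1 (the public field)
        Console.WriteLine(built.GetProperties().Length); // 1 (the property)
    }
}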
You have raised an interesting issue in the lack of unification between PropertyInfo and FieldInfo in the reflection object model. It would be nice if there was an interface that unified fields and (non-indexed) properties.
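For the CSV tool itself, one workaround is a small helper (hypothetical, not part of LINQPad or the original tool) that treats public fields and readable, non-indexed properties as a single set of columns:

using System;
using System.Collections.Generic;
using System.Reflection;

static class RowReflection
{
    // Yields name/value pairs for every public field and readable, non-indexed
    // property on a row object, so the same CSV code works against LINQPad's
    // field-based rows and Visual Studio's property-based rows.
    public static IEnumerable<KeyValuePair<string, object>> GetColumns(object row)
    {
        Type type = row.GetType();

        foreach (FieldInfo field in type.GetFields(BindingFlags.Public | BindingFlags.Instance))
            yield return new KeyValuePair<string, object>(field.Name, field.GetValue(row));

        foreach (PropertyInfo prop in type.GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (prop.CanRead && prop.GetIndexParameters().Length == 0)
                yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(row, null));
        }
    }
}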

Multiple Linq data models with the same table being mapped in each - re-use mapping?

I've implemented the repository pattern on the data access layer of our current service layer.
We have an object model where the same class, "historical notes", is mapped onto multiple objects (currently 6, but soon to be more!).
Part of the best practice for the use of LINQ to SQL is not to have a single DBML file containing every table in the database, but instead to break it down; this way there isn't a huge performance hit when the context is created.
Unfortunately, the logical places to separate the objects leave the historical notes in 5 different DBML files. When the LINQ generator creates the classes, it generates a different class in each namespace.
I have a historical note object in the domain model, but I don't want to re-map the domain object model into the data model every time we use the historical notes.
One of the things I don't want to do is break the "reading" of the data into multiple queries.
Is there a way I can map the historical note into multiple data models but only write the mapping once?
Thanks
Pete
Solution
Thanks for the help, I think I'm going to move back to one data context for all the data tables.
The workarounds involved in setting up the multiple models aren't worth the extra complexity and potential fragility of the code. Having to write the same left-hand/right-hand code to map the historical notes is too much work, and too many places to keep the code in sync.
Thanks guys for the input
"Part of the best practice for the use of LINQ to SQL is not to have a single DBML file containing every table in the database, but instead to break it down; this way there isn't a huge performance hit when the context is created."
Where did you hear that? I don't agree. The DataContext is generally a fairly lightweight object, regardless of the number of tables.
See here for an analysis of the issues involving multiple data contexts:
LINQ to SQL: Single Data Context or Multiple Data Contexts?
http://craftycodeblog.com/2010/07/19/linq-to-sql-single-data-context-or-multiple/
In my opinion, you should have one datacontext per database. This would also solve your mapping problems.
See also LINQ to SQL: Multiple / Single .dbml per project?
One option could be to put the historical notes in their own DataContext, and keep the relationships between this object and the rest of your model as IDs (so just foreign keys in the database). That's how I would do it, anyway.
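A rough sketch of that id-based approach; OrdersDataContext, NotesDataContext, HistoricalNote and the key names are hypothetical stand-ins for your own models:

// Read an entity from one context, then follow the stored foreign key by hand
// in the notes context; no cross-context association is mapped anywhere.
static List<HistoricalNote> LoadHistory(int orderId)
{
    using (var orders = new OrdersDataContext())
    using (var notes = new NotesDataContext())
    {
        var order = orders.Orders.Single(o => o.OrderId == orderId);

        return notes.HistoricalNotes
                    .Where(n => n.HistoricalNoteId == order.HistoricalNoteId)
                    .ToList();
    }
}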

What are the advantages or disadvantages of using dbml for linq2sql queries?

I am currently reading Pro ASP.NET MVC, and they build all of their LINQ to SQL entity classes by hand, mapping them with the LINQ mapping attributes. However, everyone else I see (from Google searches) talking about LINQ to SQL seems to be using the visual designer to build their entities. Which is the preferred way to build L2S entities, and what are the advantages/disadvantages of each?
The only difference I have noticed so far is that I can't seem to do inheritance mapping when using the visual designer, although MSDN says I should be able to, so I might just be missing it in VS 2010's interface. However, I'm not so sure I should use inheritance anyway, as that could add additional joins when I don't need the sub-table data.
As a PS, l2s will not be doing any modification of my schema, I will be generating schema changes manually and then replicating them in linq2sql.
Thanks,
We used the designer all the time. It does introduce an added step (every time you make a change to the schema you need to import the table into the designer again), but I think that effort pales in comparison to the amount of code you need to write if you bypass the designer.
Also note that the designer creates partial classes; you can create an additional file for the partial class that includes additional implementation details. That way, when the table gets refreshed in the designer, it leaves your additional code alone. We do this to add a lot of helper functions to the classes, and also to provide strictly typed enumerated properties that overlay the primitive integer FK fields.
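A minimal sketch of that partial-class pattern, assuming a designer-generated Order entity with an int StatusId column (both names invented for the example); this goes in a separate file the designer never regenerates:

public enum OrderStatus { Open = 1, Shipped = 2, Cancelled = 3 }

// Lives alongside the generated Order class and survives designer refreshes.
public partial class Order
{
    // Strictly typed view over the primitive integer lookup/FK column.
    public OrderStatus Status
    {
        get { return (OrderStatus)this.StatusId; }
        set { this.StatusId = (int)value; }
    }

    // Example helper added to the entity.
    public bool IsOpen
    {
        get { return this.Status == OrderStatus.Open; }
    }
}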
It's true that inheritance would be very difficult to accomplish well, but I think if you need that sort of data layer, L2S may not be the best solution. I prefer to keep my data layer clean and simple, just using L2S to get the data in and out, and then put more complicated logic in the business layer. If we really needed to do things like object inheritance in our data layer, I would probably explore a more advanced and complicated technology like EF.
We've built our entire application framework backend using L2S, and I developed most of it. I started out using the DBML designer but quickly realized this was a royal pain. Every schema change required a change to the table(s) in the designer. Plus, the entities created by the designer all get stuffed into a single class file, and didn't have all the functionality I wanted, like support for M2M relationships, and more. So it didn't take long before I realized I wanted a better way.
I ended up writing my own code generator that generates the L2S entities the way I want them, and it also generates a "lightweight" set of entities that are used in the application layer. These don't have any L2S plumbing. The code generator creates all these entities, and other code, directly from a target database. No more DBML!
This has worked very well for us and our entities are exactly the way we want them, and generated automatically each time our database schema changes.

What is the best way to build a data layer across multiple databases?

First a bit about the environment:
We use a program called Clearview to manage service relationships with our customers, including call center and field service work. In order to better support clients and our field technicians, we also developed a website to provide access to the service records in Clearview, as well as reporting. Over time, our need to customize the behavior and add new features led to more and more things being tied to this website and its database.
At this point we're dealing with things like a Company being defined partly in the Clearview database and partly in the website database. For good measure we're also starting to tie the scripting for our phone system into the same website, which will require talking to the phone system's own database as well.
All of this is set up and working... BUT we don't have a good data layer to work with it all. We moved to LINQ to SQL and now have two DBMLs that we can use, along with some custom classes I wrote before I'd ever heard of LINQ, plus some of the old-style ADO datasets. So yeah, basically things are a mess.
What I want is a data layer that provides a single front end for our applications, and on the back end manages everything into the correct database.
I had heard something about Entity Framework allowing classes to be built from multiple sources, but it turns out there can only be one database. So the question is, how could I proceed with this?
I'm currently thinking of getting the Linq To SQL classes all set for each database, then manually writing Linq compatible front ends that tie those together. Seems like a lot of work, and given Linq's limitations (such as not being able to refresh) I'm not sure it's a good idea.
Could I do something with Entity Framework that would turn out better? Should I look into another tool? Am I crazy?
The Entity Framework does give a certain measure of database independence, insofar as you can build an entity model from one database, and then connect it to a different database by using a different entity connection string. However, as you say, it's still just one database, and, moreover, it's limited to databases which support the Entity Framework. Many do, but not all of them. You could use multiple entity models within a single application in order to combine multiple databases using the Entity Framework. There is some information on this on the ADO.NET team blog. However, the Entity Framework support for doing this is, at best, in an early stage.
My approach to this problem is to abstract my use of the Entity Framework behind the Repository pattern. The most immediate benefit of this, for me, is to make unit testing very simple; instead of trying to mock my Entity model, I simply substitute a mock repository which returns IQueryables. But the same pattern is also really good for combining multiple data sources, or data sources for which there is no Entity Framework provider, such as a non-data-services-aware Web service.
So I'm not going to say, "Don't use the Entity Framework." I like it and use it myself. In view of recent news from Microsoft, I believe it is a better choice than LINQ to SQL. But it will not, by itself, solve the problem you describe. Use the Repository pattern.
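A bare-bones sketch of that repository shape (ICompanyRepository, Company and ClearviewEntities are hypothetical names): production code hands out IQueryable from the EF context, while tests substitute an in-memory list.

using System.Collections.Generic;
using System.Linq;

public interface ICompanyRepository
{
    IQueryable<Company> Companies { get; }
}

// Production implementation backed by the Entity Framework model.
public class EfCompanyRepository : ICompanyRepository
{
    private readonly ClearviewEntities _context = new ClearviewEntities();

    public IQueryable<Company> Companies
    {
        get { return _context.Companies; }
    }
}

// Test double: no database and no Entity Framework involved.
public class FakeCompanyRepository : ICompanyRepository
{
    private readonly List<Company> _companies = new List<Company>();

    public IQueryable<Company> Companies
    {
        get { return _companies.AsQueryable(); }
    }
}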
If you want to use tools like Linq2Sql or EF and don't want to have to manage multiple DBMLs (or whatever it's called in EF or other tools), you could create views in your website database that reference back to the Clearview or phone system's DB.
This allows you to decouple your website from their database structure. I believe Linq2Sql and EF can use a view as the source for an entity. If they can't, look at NHibernate.
This will also let you have composite entities that are pulled from the various data sources. There are some limitations on updating views in SQL Server; however, you can define your own INSTEAD OF trigger(s) on the view, which can then do the actual insert/update/delete statements.
L2S works perfectly with views in my project. You only need one small trick:
1. Add the secondary DB's table to the current DB as a view.
2. In the designer, add a primary key attribute to an id field on the view.
3. Only now, add an association to whatever other table you want in the original DB.
Now you should see the view available for navigation.
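In attribute-mapping terms the result looks roughly like this (a sketch only; vwPhoneCall and its columns are invented names, and in practice the designer generates the equivalent for you):

using System.Data.Linq.Mapping;

// Step 1: the secondary database's data surfaced in the current DB as a view.
[Table(Name = "dbo.vwPhoneCall")]
public class PhoneCall
{
    // Step 2: a primary key so L2S can identify rows and allow associations.
    [Column(IsPrimaryKey = true)]
    public int CallId { get; set; }

    // Step 3: the designer can now draw an association from this column
    // to a table in the original database.
    [Column]
    public int CompanyId { get; set; }

    [Column]
    public string Notes { get; set; }
}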