IQueryable to DataSet - linq-to-sql

I have seen numerous methods and tricks around the net today. What I need is to convert my Linq to SQL queries (IQueryable results) into a DataSet for reporting purposes. The reporting tool is XtraReports from DevExpress.
A promising solution I found in another post is modelshredder. I am still concerned, though, about the whole object graph: what if I need all the hierarchical data for my report, i.e. the related association data (EntitySet, EntityRef)? For example, I have loaded Customers together with their Orders and OrderDetails. Is there something that supports converting this into the appropriate DataSet with the related DataTables and all the data I need to generate a report that draws values from numerous DataTables? I understand I could use the previous tool to convert the related data one by one into DataTables inside the DataSet and pass that as the DataSource to the report.
[EDIT] DataContext.GetCommand(IQueryable) looks like another useful solution.
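For what it's worth, GetCommand can be combined with a plain SqlDataAdapter to fill a DataTable per query; a minimal sketch (db, Customer and Country are placeholder names, and the cast assumes a SQL Server back end):
// query is whatever IQueryable you would otherwise hand to the report.
IQueryable<Customer> query = db.Customers.Where(c => c.Country == "Germany");

DataSet ds = new DataSet();
// GetCommand returns the DbCommand Linq to SQL would execute, already
// wired to the DataContext's connection and with its parameters set.
using (SqlCommand cmd = (SqlCommand)db.GetCommand(query))
using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
{
    adapter.Fill(ds, "Customers");
}
// Repeat per query (Orders, OrderDetails) to build up the related DataTables,
// then assign ds as the report's DataSource.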
I am looking for more views on the subject and would like to hear from anyone who has used Linq to SQL with XtraReports (or any reporting tool that asks for an IList, IBindingList, or ITypedList data source) to do the job.

The best solution is described on the official DevExpress site.

Linq 2 Sql and Dynamic table schemas

First, some background. Our application is built on ASP.NET MVC3 and .NET 4.0, and uses Linq-to-Sql (PLINQO) as its primary means of data access. Our web application is a multi-tenant/multi-client system where each client gets their own Sql Server database. Each Sql Server database up to now has had exactly the same schema.
Oftentimes, clients will ask us to track custom fields in their DB that other clients don't track. The way we've handled this is by reserving a number of custom fields in the main tables of the DB. For example, our Widget table may have CustomText1, CustomText2 .. CustomText10 and CustomDate1, CustomDate2 .. CustomDate10 fields. Again, all our schemas across clients are the same, so Linq-to-Sql handles these fields just as easily as any other field.
Now we are running into an issue where a client wants several hundred CustomBool fields but doesn't need the others. So, basically, we are researching ways to keep using Linq-to-Sql but have it work against potentially different schemas depending on the database it is connected to (although they differ in a very specific way).
Too much code has already been built on Linq-to-Sql and the Widget classes it generates for me to simply fall back to straight SQL.
I've seen answers here and on the web about ways for Linq to Sql to access different tables that have the same schema, but I have not found a good answer for the same table name existing across different databases with different columns.
Is this possible?
If the main objective is to store a few extra fields for existing domain objects, then why not create a generic table that can store key-value pairs? This is extremely flexible, since there is no need to change your schema if a customer requires a new property.
We do this frequently and normally have some helpers to correctly cast the properties, e.g.
Service.GetProperty<bool>("SomeCustomProperty")
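A rough sketch of what such a helper can look like (the CustomProperties table and its columns are invented for the example):
// Key/value table: EntityId, Key, Value (value stored as a string).
public T GetProperty<T>(int entityId, string key)
{
    // db is the Linq to SQL DataContext; CustomProperties maps the key/value table.
    string raw = db.CustomProperties
                   .Where(p => p.EntityId == entityId && p.Key == key)
                   .Select(p => p.Value)
                   .SingleOrDefault();

    return raw == null ? default(T) : (T)Convert.ChangeType(raw, typeof(T));
}

// bool flag = service.GetProperty<bool>(widget.Id, "SomeCustomProperty");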
If you are looking for a more "pluggable" domain model that can be completely different for each tenant, I think you will struggle if you are following a database-driven approach and using the L2S designer to generate your code.
To achieve this you really need to be generating your database from your code (domain-driven design), which will give you much more flexibility, i.e. you can load a tenant-specific configuration (set of classes, business rules, etc.) at runtime and use it to generate/validate your schema.
Update
It would be good if you could elaborate on exactly what design approach you have taken, i.e. are you using the Linq designer and generating your model from the database?
It's clear that a generic key value pair store is not going to meet your querying requirements.
It's hard to provide a solution without suggesting a different technology. Relational SQL databases aren't really suited to dynamic domain models. You may be better off with a document database such as MongoDb or RavenDb, where you are not tied to a specific schema. You could even make use of these just for your custom properties.
If that's not ideal, then another solution would be to use something like Dapper to construct your queries. Assuming you are developing against interfaces, you can have an implementation of your data service per tenant that makes use of their custom fields.
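For example, something along these lines (the interface, connection wiring and column names are hypothetical):
public interface IWidgetService
{
    IEnumerable<Widget> GetWidgets();
}

// Registered only for the tenant whose schema has the extra CustomBool columns.
public class AcmeWidgetService : IWidgetService
{
    private readonly IDbConnection _connection;
    public AcmeWidgetService(IDbConnection connection) { _connection = connection; }

    public IEnumerable<Widget> GetWidgets()
    {
        // Dapper maps the tenant-specific columns onto a tenant-specific Widget type.
        return _connection.Query<Widget>(
            "SELECT Id, Name, CustomBool1, CustomBool2 FROM Widget");
    }
}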
Ayende did a whole series of posts on multitenancy that covers tenant-specific domain models. It starts here and may be of some use to you.

Why does LinqPad create Fields instead of Properties?

I recently took on a project of creating a tool for LinqPad that would dump query results into CSV format, so that the tool could be used on massive databases for quick results. One thing I wanted from the tool was for it to work in both Visual Studio and LinqPad. Thus, whether I was using LINQ to SQL in VS2010 or LinqPad, I could dump results quickly to a CSV file and then open it in Excel to view them.
The biggest hiccup in the project came from how LinqPad builds its DataContexts vs. how Visual Studio builds them. The best information I could find on how LinqPad does it comes from here. Basically, what I found from my project was that VS2010 creates properties for its DataContexts, but LinqPad creates fields. Thus, when using reflection:
LinqPad:
dataContextType.GetProperties() //returns 0
dataContextType.GetFields() //returns the Fields from LinqPad created DataContext
VS 2010 LinqToSQL:
dataContextType.GetProperties() //returns the Properties from VS created DataContext
dataContextType.GetFields() //returns 0
So why does LinqPad use Fields instead of Properties in their DataContexts? Wouldn't it have been more feasible to copy the Visual Studio LinqToSQL pattern?
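For the CSV tool itself, the two shapes can be handled uniformly with plain reflection; a sketch of one way to do it (the helper below is illustrative, not LinqPad's or the tool's actual code):
// Returns the data members of a generated type whether they are exposed
// as properties (Visual Studio) or as fields (LinqPad).
static IEnumerable<MemberInfo> GetDataMembers(Type generatedType)
{
    var members = generatedType
        .GetProperties(BindingFlags.Public | BindingFlags.Instance)
        .Cast<MemberInfo>()
        .ToList();

    if (members.Count == 0)
        members = generatedType
            .GetFields(BindingFlags.Public | BindingFlags.Instance)
            .Cast<MemberInfo>()
            .ToList();

    return members;
}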
Update
Based on a comment I decided to ask the same question within the LinqPad forum as well.
This is a good question. The main reason LINQPad uses fields to map columns is performance when building the typed DataContext that backs database-connected queries.
We're not talking about the speed of executing the properties themselves (there's actually very little overhead in executing simple accessors, and the JIT may even inline them). The overhead is in building the typed DataContext via Reflection.Emit. A field is simply that: one item of metadata, whereas a property requires emitting a field definition, a property definition, and two methods for the accessors (each with IL to get/set the underlying field). Because users may point LINQPad to databases with upwards of 1000 tables and functions, this can add up in terms of the time taken to build the assembly, as well as its size (increasing HDD activity and working set).
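To make the difference concrete, here is roughly what emitting a single column looks like in each case (a simplified sketch using a TypeBuilder, not LINQPad's actual code):
// TypeBuilder tb obtained from a dynamic module in the usual way.
// As a field: a single definition.
tb.DefineField("CustomerID", typeof(int), FieldAttributes.Public);

// As a property: a backing field, a property definition, and two
// accessor methods, each with its own IL body.
FieldBuilder backing = tb.DefineField("_customerID", typeof(int), FieldAttributes.Private);
PropertyBuilder prop = tb.DefineProperty("CustomerID", PropertyAttributes.None, typeof(int), null);

MethodBuilder getter = tb.DefineMethod("get_CustomerID",
    MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig,
    typeof(int), Type.EmptyTypes);
ILGenerator gil = getter.GetILGenerator();
gil.Emit(OpCodes.Ldarg_0);
gil.Emit(OpCodes.Ldfld, backing);
gil.Emit(OpCodes.Ret);

MethodBuilder setter = tb.DefineMethod("set_CustomerID",
    MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig,
    null, new[] { typeof(int) });
ILGenerator sil = setter.GetILGenerator();
sil.Emit(OpCodes.Ldarg_0);
sil.Emit(OpCodes.Ldarg_1);
sil.Emit(OpCodes.Stfld, backing);
sil.Emit(OpCodes.Ret);

prop.SetGetMethod(getter);
prop.SetSetMethod(setter);
// Multiply the difference by every column of every table in a 1000-table database.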
You have raised an interesting issue in the lack of unification between PropertyInfo and FieldInfo in the reflection object model. It would be nice if there was an interface that unified fields and (non-indexed) properties.

Multiple Linq data models with the same table being mapped in each Re-use mapping

I've implemented the repository pattern on the data access layer of our current service layer.
We have an object model where the same class, "historical notes", is mapped to multiple objects (currently 6, but soon to be more!).
Part of the best practices for the use of Linq to Sql is not to have one DBML file covering every table in the DB, but instead to break it down; this way there isn't a huge performance hit when the context is created.
Unfortunately, the logical places to separate the objects leave the historical notes in 5 different DBML files. When the Linq generator creates the classes, it generates a different class in each namespace.
I have a historical note object in the domain model, but I don't want to re-map the domain object model onto the data model every time we use the historical notes.
One of the things I don't want to do is break the "reading" of the data into multiple queries.
Is there a way I can map the historical note into multiple data models but only write the mapping once?
Thanks
Pete
Solution
Thanks for the help. I think I'm going to move back to one data context for all the data tables.
The workarounds involved in setting up the multiple models aren't worth the extra complexity and potential fragility of the code. Having to write the same left-hand/right-hand mapping code for the historical notes is too much work and leaves too many places to keep the code in sync.
Thanks guys for the input
"Part of the best practices for the use of Linq to Sql is not to have one DBML file covering every table in the DB, but instead to break it down; this way there isn't a huge performance hit when the context is created."
Where did you hear that? I don't agree. The DataContext is generally a fairly lightweight object, regardless of the number of tables.
See here for an analysis of the issues involving multiple data contexts:
LINQ to SQL: Single Data Context or Multiple Data Contexts?
http://craftycodeblog.com/2010/07/19/linq-to-sql-single-data-context-or-multiple/
In my opinion, you should have one datacontext per database. This would also solve your mapping problems.
See also LINQ to SQL: Multiple / Single .dbml per project?
One option could be to put the historical notes in their own datacontext, and keep the relationships between this object and the rest of your model as 'ids' (so just foreign keys in the db). That's how I would do it anyway.
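In code that means navigating by key yourself; roughly like this (the context and property names are invented for the sketch):
Widget widget;
List<HistoricalNote> notes;

using (var widgetContext = new WidgetDataContext())
{
    widget = widgetContext.Widgets.Single(w => w.Id == widgetId);
}

using (var notesContext = new HistoricalNotesDataContext())
{
    // The relationship is kept as a plain foreign key rather than an association.
    notes = notesContext.HistoricalNotes
                        .Where(n => n.ParentId == widget.Id)
                        .ToList();
}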

Linq2Sql Best Practices

I'm currently migrating to Linq2Sql, and all my future projects will be using it. Having said that, I have researched a lot on how to properly plug Linq2Sql into an application design.
What to put at what layer?
How do you design your repositories and business layer services?
Should I use DTOs rather than Linq2Sql entities at the interaction layer?
What things should I be careful about?
What problems did you face?
I did not find any rock-solid material that really settled on a single approach, and everyone has their own opinion. I'm looking forward to your ideas on how to integrate/use Linq2Sql in projects. My priorities are maintainability (it should stay maintainable even when multiple people work on the same project) and scalability (it should have room to evolve).
Thanks.
I have been using Linq to Entities for the last two and a half years on production applications, and I can say it has been a really nice experience... but that doesn't mean you should do everything with Linq.
I am going to try to give you some answers to your questions.
I think you should first ask what kind of application you want to create; once the scenario is clear, you will have a rough idea of the number of transactions or queries you are going to perform against the database (or repository).
Linq can be extremely useful for abstracting data access, context and entity handling, but everything comes at a cost. Objects are created at a cost, and you really need to think about this.
If your application has 'nice, not-too-heavy' data access, Linq will be the perfect tool and will save you time.
If your application is entirely based on data extraction or processing, Linq will be great as well.
If your application handles huge blocks of data (check your application), you will need to do something else to avoid creating a bunch of objects that might be useless.
What does that mean? You need to know what smart data access means... and that is letting SQL do the work for you (in the case of SQL): if you are going to do lots of joins and cross-reference lots of information, create stored procedures that build the result set for you and then fetch it using Linq, SqlCommand, SqlDataAdapters, etc.
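For instance, with Linq to SQL you can still push the heavy lifting into a stored procedure and materialize the result with ExecuteQuery (the procedure and result class names are illustrative):
// dbo.GetCustomerSummary does the joins/aggregation server-side;
// CustomerSummary is a plain class whose properties match the result columns.
var summaries = db.ExecuteQuery<CustomerSummary>(
    "EXEC dbo.GetCustomerSummary {0}, {1}", regionId, year).ToList();
// Dragging the procedure onto the .dbml designer generates an equivalent typed method.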
What to put at what layer?
Since Linq gives you the data access abstraction, you can pretty much place your code where the business logic demands it. There are tons of good practices for structuring your code; Linq (like any other entity framework or data access library) will fit in the right spot.
Whenever possible, avoid direct Linq expressions in your controls (ASP.NET has lots of controls with Linq data sources); instead, wrap your 'query' in a service class that can be instantiated by your code or controls as an object data source.
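For example, instead of a LinqDataSource declared in the page, a small service class the control (or code-behind) can bind to; a sketch with made-up names:
public class WidgetService
{
    // The page's ObjectDataSource or code-behind calls this, so the Linq
    // expression stays out of the markup and can be tested on its own.
    public List<WidgetSummary> GetActiveWidgets()
    {
        using (var db = new AppDataContext())
        {
            return db.Widgets
                     .Where(w => w.IsActive)
                     .Select(w => new WidgetSummary { Id = w.Id, Name = w.Name })
                     .ToList();
        }
    }
}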
What have I found?
Pure Linq is not always possible in big applications or projects (so you will end up with many things in Linq and some in older, simpler ways of accessing your repositories), but it will help you save time.
Implementing stored procedures is a MUST if you want to deliver high-quality applications.
Hope this comment helps.
Cheers.

What is the best way to build a data layer across multiple databases?

First a bit about the environment:
We use a program called Clearview to manage service relationships with our customers, including call center and field service work. In order to better support clients and our field technicians, we also developed a website that provides access to the service records in Clearview, along with reporting. Over time, our need to customize the behavior and add new features led to more and more things being tied to this website and its database.
At this point we're dealing with things like a Company being defined partly in the Clearview database and partly in the website database. For good measure, we're also starting to tie the scripting for our phone system into the same website, which will require talking to the phone system's own database as well.
All of this is set up and working... BUT we don't have a good data layer to work with it all. We moved to Linq to SQL and now have two DBMLs we can use, along with some custom classes I wrote before I'd ever heard of Linq and some old-style ADO DataSets. So yeah, basically things are a mess.
What I want is a data layer that provides a single front end for our applications and, behind the scenes, routes everything to the correct database.
I had heard something about Entity Framework allowing classes to be built from multiple sources, but it turns out there can only be one database. So the question is, how should I proceed with this?
I'm currently thinking of getting the Linq to SQL classes all set up for each database and then manually writing Linq-compatible front ends that tie them together. That seems like a lot of work, and given Linq's limitations (such as not being able to refresh), I'm not sure it's a good idea.
Could I do something with Entity Framework that would turn out better? Should I look into another tool? Am I crazy?
The Entity Framework does give a certain measure of database independence, insofar as you can build an entity model from one database, and then connect it to a different database by using a different entity connect string. However, as you say, it's still just one database, and, moreover, it's limited to databases which support the Entity Framework. Many do, but not all of them. You could use multiple entity models within a single application in order to combine multiple databases using the Entity Framework. There is some information on this on the ADO.NET team blog. However, the Entity Framework support for doing this is, at best, in an early stage.
My approach to this problem is to abstract my use of the Entity Framework behind the Repository pattern. The most immediate benefit of this, for me, is to make unit testing very simple; instead of trying to mock my Entity model, I simply substitute a mock repository which returns IQueryables. But the same pattern is also really good for combining multiple data sources, or data sources for which there is no Entity Framework provider, such as a non-data-services-aware Web service.
So I'm not going to say, "Don't use the Entity Framework." I like it, and use it, myself. In view of recent news from Microsoft, I believe it is a better choice than LINQ to SQL. But it will not, by itself, solve the problem you describe. Use the Repository pattern.
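The repository abstraction I mean can be very small; a sketch (the interface shape below is illustrative, not from any particular framework):
public interface IRepository<T> where T : class
{
    IQueryable<T> Query();
    void Add(T entity);
    void Remove(T entity);
}

// The production implementation wraps a table/set from the entity model; the
// test double just wraps a List<T>, so services can be tested without a database.
public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> _items = new List<T>();
    public IQueryable<T> Query() { return _items.AsQueryable(); }
    public void Add(T entity) { _items.Add(entity); }
    public void Remove(T entity) { _items.Remove(entity); }
}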
If you want to use tools like Linq2Sql or EF and don't want to have to manage multiple DBMLs (or whatever it's called in EF or other tools), you could create views in your website database that reference back to the Clearview or phone system's DB.
This allows you to decouple your website from their database structure. I believe Linq2Sql and EF can use a view as the source for an entity. If they can't, look at NHibernate.
This will also let you have composite entities that are pulled from the various data sources. There are some limitations on updating views in SQL Server; however, you can define your own INSTEAD OF trigger(s) on the view, which can then do the actual insert/update/delete statements.
L2S works perfectly with views in my project. You only need one small trick:
1. Add the secondary DB's table to the current DB as a view.
2. In the designer, add a primary key attribute to an ID field on the view.
3. Only now, add an association to whatever other table you want in the original DB.
Now the view is available for navigation.
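If it helps, the resulting attribute mapping looks roughly like this (names and attribute values are illustrative):
[Table(Name = "dbo.vw_PhoneCalls")]   // the view over the other database's table
public class PhoneCall
{
    [Column(IsPrimaryKey = true)]     // step 2: the key added by hand in the designer
    public int CallId { get; set; }

    [Column]
    public int CustomerId { get; set; }

    private EntityRef<Customer> _customer;
    [Association(ThisKey = "CustomerId", OtherKey = "Id", Storage = "_customer")]
    public Customer Customer          // step 3: association to a table in the original DB
    {
        get { return _customer.Entity; }
        set { _customer.Entity = value; }
    }
}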