I recently took on a project to create a tool for LINQPad that dumps query results to CSV, so that I can get quick results out of massive databases. One thing I wanted from the tool was for it to work in both Visual Studio and LINQPad. That way, whether I was using LINQ to SQL in VS2010 or LINQPad, I could quickly dump results to a CSV file and then open it in Excel to view them.
The biggest hiccup in the project came from how LINQPad builds its DataContexts versus how Visual Studio builds them. The best information I could find on how LINQPad does it comes from here. Basically, what I found in my project was that VS2010 creates properties on its DataContexts, while LINQPad creates fields. Thus, when using reflection:
LINQPad:
dataContextType.GetProperties() // returns an empty array
dataContextType.GetFields()     // returns the fields of the LINQPad-created DataContext
VS 2010 LINQ to SQL:
dataContextType.GetProperties() // returns the properties of the VS-created DataContext
dataContextType.GetFields()     // returns an empty array
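In the CSV tool, one way to handle both shapes is to merge the two member lists. A minimal sketch (the helper class and method names are my own, not part of either product):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

static class DataContextMembers
{
    // Returns the table members of a DataContext type, whether they are
    // exposed as properties (Visual Studio) or as fields (LINQPad).
    public static IEnumerable<MemberInfo> GetTableMembers(Type dataContextType)
    {
        return dataContextType
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Cast<MemberInfo>()
            .Concat(dataContextType.GetFields(BindingFlags.Public | BindingFlags.Instance));
    }
}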
So why does LINQPad use fields instead of properties in its DataContexts? Wouldn't it have been more sensible to copy the Visual Studio LINQ to SQL pattern?
Update
Based on a comment I decided to ask the same question within the LinqPad forum as well.
This is a good question. The main reason for LINQPad using fields to map columns is for performance when building the typed DataContext that backs database-connected queries.
We're not talking about the speed of executing the properties themselves (there's actually very little overhead in executing simple accessors, and the JIT may even inline them). The overhead is in building the typed DataContext via Reflection.Emit. A field is simply that: one item of metadata, whereas a property requires emitting a field definition, a property definition, and two accessor methods (each with IL to get/set the underlying field). Because users may point LINQPad at databases with upwards of 1000 tables and functions, this can add up in the time taken to build the assembly, as well as in its size (increasing HDD activity and working set).
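To make the cost concrete, here is a rough Reflection.Emit sketch (my own illustration, not LINQPad's actual code) of what defining a single mapped column takes in each style:

using System;
using System.Reflection;
using System.Reflection.Emit;

class EmitCostDemo
{
    static void Main()
    {
        var assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
            new AssemblyName("Demo"), AssemblyBuilderAccess.Run);
        var module = assembly.DefineDynamicModule("Demo");
        var type = module.DefineType("Row", TypeAttributes.Public);

        // Mapping a column as a field: one metadata definition.
        type.DefineField("Name", typeof(string), FieldAttributes.Public);

        // Mapping a column as a property: a backing field, a property
        // definition, and accessor methods with IL bodies (setter omitted).
        var backing = type.DefineField("_title", typeof(string), FieldAttributes.Private);
        var property = type.DefineProperty("Title", PropertyAttributes.None, typeof(string), null);
        var getter = type.DefineMethod("get_Title",
            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig,
            typeof(string), Type.EmptyTypes);
        var il = getter.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldfld, backing);
        il.Emit(OpCodes.Ret);
        property.SetGetMethod(getter);

        type.CreateType();
        Console.WriteLine("One definition for the field; several for the property.");
    }
}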
You have raised an interesting issue in the lack of unification between PropertyInfo and FieldInfo in the reflection object model. It would be nice if there were an interface that unified fields and (non-indexed) properties.
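As a rough illustration of the unification being wished for here, a small adapter can hide the FieldInfo/PropertyInfo split from callers (the class and method names are my own):

using System;
using System.Reflection;

static class MemberReader
{
    // Reads a value from either a field or a non-indexed property.
    public static object GetValue(MemberInfo member, object instance)
    {
        var field = member as FieldInfo;
        if (field != null)
            return field.GetValue(instance);

        var property = member as PropertyInfo;
        if (property != null)
            return property.GetValue(instance, null);

        throw new ArgumentException("Only fields and non-indexed properties are supported.");
    }
}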
Related
I am currently reading Pro ASP.NET MVC, and the authors build all of their LINQ to SQL entity classes by hand, mapping them with the LINQ mapping attributes. However, everyone else I see (from Google searches) talking about LINQ to SQL seems to use the visual designer to build their entities. Which is the preferred way to build LINQ to SQL entities, and what are the advantages and disadvantages of each?
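For context, the hand-built approach looks roughly like this (the table and column names are placeholders of mine, not from the book):

using System.Data.Linq.Mapping;

[Table(Name = "dbo.Products")]
public class Product
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int Id { get; set; }

    [Column(CanBeNull = false)]
    public string Name { get; set; }
}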
The only difference I have noticed so far is that I can't seem to do inheritance mapping when using the visual designer, although MSDN says I should be able to, so I might just be missing it in VS2010's interface. However, I'm not so sure I should use inheritance anyway, as it could add additional joins when I don't need the sub-table data.
As a PS, L2S will not be doing any modification of my schema; I will be generating schema changes manually and then replicating them in LINQ to SQL.
Thanks,
We used the designer all the time. It does introduce an added step: every time you make a change to the schema you need to import the table into the designer again, but I think that effort pales in comparison to the amount of code you would need to write if you bypassed the designer.
Also note that the designer creates partial classes, so you can create an additional file for the partial class that holds extra implementation details. That way, when the table gets refreshed in the designer, your additional code is left alone. We do this to add a lot of helper functions to the classes, and also to provide strongly typed enumerated properties that overlay the primitive integer FK fields.
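For example, assuming a designer-generated Order class with an integer StatusId column (all names here are illustrative):

public enum OrderStatus { Pending = 0, Shipped = 1, Cancelled = 2 }

// Lives in a separate file, so a designer refresh never touches it.
public partial class Order
{
    // Strongly typed view over the primitive integer status column.
    public OrderStatus Status
    {
        get { return (OrderStatus)StatusId; }
        set { StatusId = (int)value; }
    }
}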
It's true that inheritance would be very difficult to accomplish well, but I think if you need that sort of data layer, L2S may not be the best solution. I prefer to keep my data layer clean and simple, using L2S just to get the data in and out, and putting the more complicated logic in the business layer. If we really needed things like object inheritance in our data layer, I would probably explore a more advanced and complicated technology like EF.
We've built our entire application framework backend using L2S, and I developed most of it. I started out with the DBML designer, but I quickly realized it was a royal pain: every schema change required a change to the table(s) in the designer. Plus, the entities created by the designer all get stuffed into a single class file, and they didn't have all the functionality I wanted, like support for M2M relationships. So it didn't take long before I realized I wanted a better way.
I ended up writing my own code generator that generates the L2S entities the way I want them, and it also generates a "lightweight" set of entities that are used in the application layer. These don't have any L2S plumbing. The code generator creates all these entities, and other code, directly from a target database. No more DBML!
This has worked very well for us and our entities are exactly the way we want them, and generated automatically each time our database schema changes.
I am using LINQ to SQL in my current project. I have quite a lot of tables, around 30. When I create my DBML file I change some of the column names, etc., for readability.
Recently, whenever I made a change to a table in the underlying database, I just deleted and re-added the table in the DBML file, but this is getting tedious. How can I mirror changes to the database in the DBML file (e.g. a new column, a dropped column, a new default constraint, etc.)?
Out of the box, Linq-to-SQL has no update feature - amazing, but unfortunately true.
There are two tools I know of that get around this:
PLINQO is a set of CodeSmith code generation templates which handle DBML generation and offer lots of extra features (like generating one file per db entity) - including updates!
The Huagati tools offer updates and enforcing naming conventions for DBML and Entity Framework
Marc
I'm not expecting this to be the correct answer, but I thought I'd chime in anyway.
My approach has been to use the drag-and-drop feature to create the initial DBML file. Then, any changes I make in my DB are also made, by hand, either in the designer or in the DBML file (as XML) itself. (You can right-click the DBML file, select Open With, and choose the XML editor.) Sometimes it is much easier/faster to work with the XML than to mess around in the designer.
I would only consider deleting and re-adding, as you have been doing, if the changes were significant. For adding a new column, however, I'd suggest working directly with the DBML's XML; it's probably faster.
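For instance, adding a new nullable column is roughly a one-line edit inside the table's Type element (the names and types below are placeholders):

<Table Name="dbo.Customers" Member="Customers">
  <Type Name="Customer">
    <!-- existing columns... -->
    <Column Name="Email" Type="System.String" DbType="NVarChar(255)" CanBeNull="true" />
  </Type>
</Table>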
Good luck!
Welcome to the world of tedious! Unless I missed something, you're doing it the right way.
SubSonic looks like an interesting alternative, and boasts
It will also create your database on the fly if you want, altering the schema as you change your object.
As far as free solutions, there are a couple of blunt instruments that mostly move you away from using the O/R Designer: SQLMetal and Damien Guard's T4 templates.
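For example, SQLMetal can regenerate the mapping from the live database in a single command (the server, database, and namespace below are placeholders):

sqlmetal /server:.\SQLEXPRESS /database:Northwind /dbml:Northwind.dbml /namespace:MyApp.Data /pluralize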
There are multiple commercial solutions available that offer a lot more features.
The question you have to ask yourself is: Am I using the right ORM? LinqToSql has quite a few significant drawbacks, database change handling being only one of them.
Do not use the Visual Studio 2008 LinqToSql O/R Designer
The drawbacks of adopting Linq To Sql
I have seen numerous methods and tricks around the net today. What I need is to convert my LINQ to SQL queries (IQueryable results) into a DataSet for reporting purposes. The reporting tool is XtraReports from DevExpress.
A promising solution I found in another post is ModelShredder. I am still concerned, though, about the whole object graph: what if I need all the hierarchical data for my report, including the related association data in EntitySet and EntityRef (e.g. Customers loaded with all their Orders and OrderDetails)? Is there something that supports converting this into an appropriate DataSet with the related DataTables and all the data I need to generate my report from values across numerous DataTables? I understand I could use the aforementioned tool to convert the related data, one set at a time, into DataTables inside the DataSet and hand that to the report as its DataSource.
[EDIT] DataContext.GetCommand(IQueryable) looks like another useful solution.
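A minimal sketch of that approach, assuming a SQL Server-backed DataContext called db with a Customers table (both placeholders):

using System.Data;
using System.Data.SqlClient;
using System.Linq;

var query = db.Customers.Where(c => c.Country == "UK");

// GetCommand returns the DbCommand that LINQ to SQL would execute; on
// SQL Server it is a SqlCommand, so an adapter can fill a DataTable.
using (var command = (SqlCommand)db.GetCommand(query))
using (var adapter = new SqlDataAdapter(command))
{
    var table = new DataTable("Customers");
    adapter.Fill(table);

    var dataSet = new DataSet();
    dataSet.Tables.Add(table); // hand dataSet to the report's DataSource
}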
I am looking for more views on the subject, and would like to hear from anyone who has used LINQ to SQL with XtraReports (or any reporting tool that asks for an IList, IBindingList, or ITypedList data source) to do the job.
The best solution is on the official DevExpress site.
I have a Client/Server application, where the Client and Server have some common tables (which are kept in synchronisation as part of the application).
We currently store these tables (i.e. FileDetails) in a Shared.dbml file. Until now, any stored proc that returns a result set of FileDetails has been placed in Shared.dbml (even if it is a server-only SP).
I realised that LINQ to SQL supports a Base Class property on the DBML, and I thought that perhaps I could have a Server.dbml that extends my Shared.dbml. In theory this would give me a ServerDataContext with all the shared tables and SPs, as well as the server-specific elements. Normally in the designer I would drag and drop the SP over the FileDetails table to show that this is what it returns; however, as the class is in a different DBML this is not possible, and in the XML I don't think the ElementType IdRef="1" approach will work (as the ref needs to point to another file).
I found I can get around that problem by editing the XML's return type manually (note the escaped angle brackets, which the XML requires):
<Function Name="dbo.SELECT_FTS_FILES" Method="SELECT_FTS_FILES">
  <Return Type="ISingleResult&lt;DataTypes.FileDetails&gt;" />
</Function>
My question is: does anyone have any experience with this kind of approach, and can you point me to further resources? Are there any obvious drawbacks to it (other than the manual XML updates)?
All feedback welcome
You could inherit from your DataContext. However, in the new DataContext you wouldn't be able to use the LINQ designer; you would have to code things out manually.
Is there any reason you don't want two DataContexts?
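A rough sketch of what the hand-coded route might look like (all names here are hypothetical):

using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "dbo.AuditEntries")]
public class AuditEntry
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }
}

// Derives from the designer-generated shared context; everything added
// here must be mapped by hand rather than in the designer.
public class ServerDataContext : SharedDataContext
{
    public ServerDataContext(string connection) : base(connection) { }

    // A server-only table, available alongside the inherited shared ones.
    public Table<AuditEntry> AuditEntries
    {
        get { return GetTable<AuditEntry>(); }
    }
}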
Inheritance and LinqToSql don't play nice together in general. If you have a deep need for it you should look into another ORM like NHibernate.
My team is developing a large java application which extensively queries a MySQL database (in different classes and modules).
I'd like to know if there is a pattern that allows me to be notified at compile time if there are queries that refer to a wrong table structure (for instance, if I remove or add a field on a table and a query string refers to it), in order to prevent runtime errors.
This should work also for JOIN queries.
Querydsl is similar to LiquidForm and supports both JPA / Hibernate and SQL based backends.
For the SQL based version we currently support MySQL (5.? tested), Oracle (10g tested) and HSQLDB.
In a nutshell, a query like this
select count(*) from test where name is null
would become
long count = query.from(test).where(test.name.isNull()).count();
Querydsl SQL uses code generation to reflect SQL schemas into Java classes.
There's an open-source tool called DODS (Data Object Design Studio) that could do what you want. The DODS tool was originally part of the Enhydra Java application server project, and since the company backing that project went kablooey in 2002, DODS has been hosted and maintained at ObjectWeb. Anyway, it's open-source (LGPL).
http://forge.objectweb.org/projects/dods
The concept is that you describe your schema in an XML file, and DODS generates Java POJO classes with which you can query and manipulate the database tables. Of course every time you change your schema, you need to run DODS again to re-generate the ORM classes, and recompile your app against them.
But the result is that if a table or column disappears, and your app is querying database metadata that no longer exists, you do get a compile-time error, because your code is now calling a corresponding class or method that no longer exists.
I would say that the simple answer is "no". The more complete answer is "yes, to some degree", depending on your willingness to jump through hoops.
Unless you have a Java representation of your database schema, you will never get compile-time notification that your queries are wrong (such classes can be generated). Also, you must use these classes to build your queries, so the method you use today (query strings) must be abandoned. To be able to build queries from the Java classes, you must also use tricks: LiquidForm uses the required tricks to build JPA queries, but I have not seen a similar library for constructing SQL queries (LiquidForm is new and quite brilliant), so you would have to build such a library yourself. As you see, getting compile-time warnings when constructing SQL is hard, but not impossible (only nearly impossible). And even if you build what I suggest, your Java representation of the schema must be updated immediately after every schema change, so the generation of the Java classes would have to be built into your IDE or build tool.
I would instead suggest having good unit tests that notice when your queries become illegal as a result of a schema change. This is the most common way to achieve what you want. Also, should you decide to "upgrade" to JPA, you could use LiquidForm to get what you want.
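For example, a minimal JUnit sketch (the connection details and query are placeholders): executing a zero-row variant of each query at test time makes the database itself validate the table and column names.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.junit.Test;

public class QuerySchemaTest {

    // The same query the application uses, with a predicate that
    // guarantees no rows so the test stays fast.
    private static final String QUERY =
        "SELECT id, name FROM customer WHERE 1 = 0";

    @Test
    public void queryStillMatchesSchema() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/testdb", "user", "password");
             PreparedStatement stmt = conn.prepareStatement(QUERY)) {
            stmt.executeQuery(); // fails if a table or column was dropped or renamed
        }
    }
}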