I'm planning to write a tool that imports a dbml file and spits out an MDL file for migration purposes and such. To do this, I obviously have to read the dbml file into some kind of semantic model in order to perform transformations.
I tried to use XmlMappingSource, but it first failed because it expects the XML namespace of the file to be http://schemas.microsoft.com/linqtosql/mapping/2007, whereas I had http://schemas.microsoft.com/linqtosql/dbml/2007. After I changed the namespace to read .../mapping/2007, XmlMappingSource started to complain about all kinds of unrecognized attributes.
Is there any object model to represent the structure of a dbml file?
Being no expert, but having had somewhat of the same problem...
When validating dbml files, XmlMappingSource uses (I think... at least close ;-) DbmlSchema.xsd, so perhaps either tamper with the existing one, or make your own mapping source?
This article may help (or may not...not quite sure I get your question)
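If all you need is to get the dbml into an object model, one lightweight option is to skip XmlMappingSource entirely and read the file with LINQ to XML under its own dbml namespace. A minimal sketch (the file name is a placeholder):

    // Walk the dbml's tables and columns with LINQ to XML.
    using System;
    using System.Xml.Linq;

    class DbmlWalker
    {
        static void Main()
        {
            // The dbml format uses its own namespace, not the mapping one.
            XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";
            XDocument dbml = XDocument.Load("Northwind.dbml");

            foreach (XElement table in dbml.Root.Elements(ns + "Table"))
            {
                Console.WriteLine("Table: " + (string)table.Attribute("Name"));
                XElement type = table.Element(ns + "Type");
                foreach (XElement column in type.Elements(ns + "Column"))
                {
                    Console.WriteLine("  Column: {0} ({1})",
                        (string)column.Attribute("Name"),
                        (string)column.Attribute("DbType"));
                }
            }
        }
    }

Alternatively, running xsd.exe /classes against DbmlSchema.xsd should give you a typed, serializable object model for the same elements.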
I have an XML file, with a schema defined in it.
The schema has several nested elements (e.g., Family (root) -> Family Members (list of sub-nodes)).
What would be the easiest way to break this down to a mysql database with multiple tables? Preferably an automated tool/GUI to handle this process. I am trying to avoid writing dedicated code to parse the file and extract the data, an approach that was common in other related questions.
I am using a mac, so windows tools are not relevant.
MySQL has a LOAD XML command, which is quite nice if your data can be formatted to match its specification. It's hard to tell whether that would work for your dataset without seeing more.
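For reference, a minimal sketch of the command; the file path, table name, and row element are placeholders for whatever your export actually looks like:

    LOAD XML LOCAL INFILE 'family.xml'
        INTO TABLE family_members
        ROWS IDENTIFIED BY '<member>';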
The first thing you would have to do is create a MySQL schema based on the XML schema. There are some projects that do this, but it's worth noting that not everything that can be described in XSD can be implemented in SQL.
You could use XSLT, regexes, or an editor to get what you want, then do an import. If you have to use a DOM parser to convert your XML to CSVs to load into MySQL, it's not too tough at all.
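For example, a quick C# sketch of the XML-to-CSV route, assuming the Family -> FamilyMember nesting from the question (element and attribute names are guesses):

    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    class XmlToCsv
    {
        static void Main()
        {
            XDocument doc = XDocument.Load("family.xml");
            var lines = doc.Descendants("FamilyMember")
                .Select(m => string.Join(",",
                    (string)m.Parent.Attribute("name"), // key back to the Family row
                    (string)m.Element("firstName"),
                    (string)m.Element("birthYear")));
            File.WriteAllLines("family_members.csv", lines);
        }
    }

The resulting CSV can then be loaded with LOAD DATA INFILE.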
You're essentially asking how to automate the process of (relational) normalization, and that's very difficult if you're only starting from an instance. For example, if your instance has
    <book>
        <author>Kay</author>
    </book>
there's no way of knowing whether a book can have multiple authors, which would affect the SQL table structure.
If you've got a schema then you can do better, but it's still not ideal because inferring the non-hierarchic relationships from an XSD is going to be pretty difficult. Apart from anything else, there are usually cross-document relationships which XSD can't describe - it's unusual to put all your data in one giant XML document.
To do this job properly, you really need to reverse-engineer the object model, and that requires a semantic understanding of the data, not just syntactic manipulation.
I have a task: I need to create a data access layer which can work with multiple data sources (JSON files, XML files, SQL Server). But I have no idea how it should be done.
I have tried creating my own context by inheriting the DbContext class (something like JsonContext), which contains paths to JSON files and does I/O operations, but now I think it looks kinda stupid :).
Maybe I can create a basic repository interface and implement it for each data source? Or maybe there are patterns or practices that can help me?
It's not a bad idea to take the DbContext that Entity Framework generates for you and use that as your common base class for all of the different data sources (JsonContext inherits from DbContext). However, the problem I see with this approach is that when you instantiate a JsonContext, it will call the constructors of the base class, DbContext, and try to connect to SQL Server, which is not what you want.
I don't know if there is an accepted pattern for doing what you're trying to do, so I think you're probably just going to have to invent your own common interface or base class that all the concrete data sources will have to implement.
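For instance, a minimal repository-style sketch; the interface shape and the Json.NET usage are just one way to cut it:

    using System.Collections.Generic;

    // A hypothetical common abstraction; each data source gets its own implementation.
    public interface IRepository<T>
    {
        IEnumerable<T> GetAll();
        void Add(T item);
    }

    // JSON-file-backed implementation (the file path is supplied by the caller;
    // Newtonsoft.Json is assumed for serialization).
    public class JsonRepository<T> : IRepository<T>
    {
        private readonly string _path;
        public JsonRepository(string path) { _path = path; }

        public IEnumerable<T> GetAll()
        {
            // Deserialize the whole file into a list of entities.
            string json = System.IO.File.ReadAllText(_path);
            return Newtonsoft.Json.JsonConvert.DeserializeObject<List<T>>(json)
                   ?? new List<T>();
        }

        public void Add(T item)
        {
            var items = new List<T>(GetAll()) { item };
            System.IO.File.WriteAllText(_path,
                Newtonsoft.Json.JsonConvert.SerializeObject(items));
        }
    }

A SqlRepository<T> could wrap your EF context the same way, so consuming code only ever sees IRepository<T>.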
I am developing a web application right now, where the user interacts with desktop software that exports files as XML.
I was curious if there was a way to take the data from the XML file and insert that data into a MySQL database using ColdFusion?
Of course you can; ColdFusion provides powerful tools for handling XML.
Typically you'll need to parse the XML file into an XML document object with XmlParse and search through it using XPath with XmlSearch. You can then easily use the fetched data for inserting into the database or any other manipulation.
Please note that there are more useful XML functions available; for example, you may be interested in validating the XML before parsing it.
If you need help with specific situations, please extend your question or ask another one.
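For example, a minimal sketch; the datasource, file path, XPath expression, and column names are all placeholders for your actual export format:

    <!--- Read and parse the exported file, then insert one row per matched node. --->
    <cffile action="read" file="#expandPath('export.xml')#" variable="xmlContent">
    <cfset xmlDoc = XmlParse(xmlContent)>
    <cfset records = XmlSearch(xmlDoc, "/export/record")>
    <cfloop array="#records#" index="record">
        <cfquery datasource="myDsn">
            INSERT INTO items (name, price)
            VALUES (
                <cfqueryparam value="#record.name.XmlText#" cfsqltype="cf_sql_varchar">,
                <cfqueryparam value="#record.price.XmlText#" cfsqltype="cf_sql_decimal">
            )
        </cfquery>
    </cfloop>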
If you are working with XML documents that fit into memory when parsed, @Sergii's answer is the right way to go. On the other hand, XML being as verbose as it is, and ColdFusion using a DOM XML parser, you can easily run into out-of-memory errors.
In that situation, given MySQL and ColdFusion, I see two alternative paths. One is exporting the data from the desktop application as CSV, if possible. Then use MySQL's LOAD DATA INFILE, which you can call from ColdFusion, to import the data. This probably gives the fastest performance.
If you cannot change the desktop application's export format, consider using a Java StAX parser instead. See my answer to another question for an example of how to do this with ColdFusion. A StAX parser only pulls part of the XML document into memory at any given time, so you will not get OOM errors, but it is somewhat more difficult to work with than a DOM parser.
Note that there is a third type of parser available from Java as well, SAX, which shares StAX's quality of not loading the whole document into memory. However, IMO it's a more difficult approach to work with, hence the StAX recommendation.
I see a number of posts talking about rolling your own LINQ to SQL XML mapping file. Why would you do this instead of using SQLMetal or the designer built into Visual Studio? Is it because you want to work with your own POCOs?
If you use the designer then you have no control over the generated classes. While this may be alright for many applications, it's not appropriate for all.
Probably the biggest single advantage to using an external mapping is that it breaks your model's dependency on Linq to SQL, so you could (for example) take the exact same model classes and use them with Entity Framework or NHibernate. Projects or assemblies which need to use the model don't pick up an unwanted dependency on the System.Data.Linq assembly.
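For example, here is a minimal sketch of wiring a plain POCO to a DataContext through an external mapping file; the file names and connection string are placeholders:

    using System;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.IO;

    // A plain POCO: no LINQ to SQL attributes; the mapping lives in an
    // external XML file instead.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // Load the hand-rolled (or SqlMetal-generated) mapping file.
            XmlMappingSource mapping =
                XmlMappingSource.FromXml(File.ReadAllText("customers.map"));
            using (var context = new DataContext("connection string here", mapping))
            {
                foreach (Customer c in context.GetTable<Customer>())
                    Console.WriteLine(c.Name);
            }
        }
    }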
Other things you might want to do are:
Include validation logic or other complex logic in property setters;
Use virtual properties (for proxying);
Decorate existing properties with other attributes (e.g. serialization);
etc.
None of these things are possible with generated code. You can extend via partial classes, but that only lets you add members, not change existing ones. You can change the designer-generated code, obviously, but your changes will just get overwritten.
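For example, with a hypothetical designer-generated Customer class (with Id and Name properties), a partial class only lets you bolt on new members:

    // Your own file; the designer owns the other half of the class.
    public partial class Customer
    {
        // Adding a brand-new member is fine...
        public string DisplayName
        {
            get { return Id + ": " + Name; }
        }

        // ...but you cannot make an existing generated property virtual,
        // change its setter, or add attributes to it from here.
    }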
As I mentioned, many projects don't need these things, but many projects do, and if yours is one that does then you'll outgrow the DBML designer and SqlMetal pretty quickly.
I am using LINQ2SQL in my current project. I have quite a lot of tables, ~30. When I create my DBML file I change some of the column names, etc., for readability.
Recently, when I made a change to a table in the underlying database, I just deleted and re-added the table in the DBML file, but this is getting tedious. How can I mirror changes to the database in the DBML file (e.g. new column, dropped column, new default constraint)?
Out of the box, Linq-to-SQL has no update feature - amazing, but unfortunately true.
There are two tools I know of that get around this:
PLINQO is a set of CodeSmith code generation templates which handle DBML generation and offer lots of extra features (like generating one file per db entity) - including updates!
The Huagati tools offer updates and enforcing naming conventions for DBML and Entity Framework
Marc
I'm not expecting this to be the correct answer, but I thought I'd chime in anyway.
My approach has been to use the drag-and-drop feature for creating the initial DBML file. Then, any changes I make in my DB are made, by hand, in either the designer or in the DBML file (as XML) itself. (You can right-click the DBML file, select Open With, and choose XML editor.) Sometimes it is much easier/faster to work with the XML directly instead of messing around in the designer.
I would only consider deleting and re-adding, as you have been doing, if the changes were significant. For adding a new column, however, I'd suggest working directly with the DBML's XML; it's probably faster.
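For example, adding a hypothetical CreatedOn column by hand is just one extra element inside the table's Type node:

    <Column Name="CreatedOn" Member="CreatedOn" Type="System.DateTime"
            DbType="DateTime NOT NULL" CanBeNull="false" />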
Good luck!
Welcome to the world of tedious! Unless I missed something, you're doing it the right way.
SubSonic looks like an interesting alternative, and boasts:
"It will also create your database on the fly if you want, altering the schema as you change your object."
As far as free solutions, there are a couple of blunt instruments that mostly move you away from using the O/R Designer: SQLMetal and Damien Guard's T4 templates.
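For example, regenerating the DBML from the database with SQLMetal is a one-liner (server and database names are placeholders), though note it regenerates from scratch, so it won't preserve renames made in the designer:

    sqlmetal /server:localhost /database:Northwind /dbml:Northwind.dbml /pluralize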
There are multiple commercial solutions available that offer a lot more features.
The question you have to ask yourself is: Am I using the right ORM? LinqToSql has quite a few significant drawbacks, database change handling being only one of them.
Do not use the Visual Studio 2008 LinqToSql O/R Designer
The drawbacks of adopting Linq To Sql