We have a bunch of XML files, following a schema which is essentially a serialised database form:
<table1>
<column1>value</column1>
<column2>value</column2>
</table1>
<table1>
<column1>another value</column1>
<column2>another value</column2>
</table1>
...
Is there a really easy way to turn that into an SQL database? Obviously I could manually construct the schema, identify all tables, fields, etc., and then write a script to import it. I just wonder if there are any tools that could automate some or all of that process?
For MySQL, see the LOAD XML syntax docs.
It should work without any additional XML transformation for the XML you've provided; just define the table in the database beforehand with matching column names:
LOAD XML LOCAL INFILE 'table1.xml'
INTO TABLE table1
ROWS IDENTIFIED BY '<table1>';
There is also a related question:
How to import XMl file into MySQL database table using XML_LOAD(); function
For PostgreSQL, I don't know of a built-in equivalent.
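If you end up scripting the import yourself (for PostgreSQL, or anything else without LOAD XML), here is a minimal Python sketch of the manual route mentioned in the question. The file name, table name, and row element are assumptions based on the sample above:

import xml.etree.ElementTree as ET

# Assumed input: a document whose <table1> elements each hold one row,
# exactly like the sample in the question.
tree = ET.parse('table1.xml')

for row in tree.getroot().iter('table1'):
    columns = [child.tag for child in row]
    values = [(child.text or '').replace("'", "''") for child in row]
    print("INSERT INTO table1 ({0}) VALUES ({1});".format(
        ', '.join(columns),
        ', '.join("'{0}'".format(v) for v in values)))

For anything beyond a one-off load, a real importer should use parameterized queries through a database driver rather than string escaping.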
There is a project on CodeProject that makes it simple to convert an XML file to SQL Script. It uses XSLT. You could probably modify it to generate the DDL too.
See also this question: Generating SQL using XML and XSLT.
A follow-up question: if one XML file contains rows for two different tables, will the following work?
LOAD XML LOCAL INFILE 'table1.xml' INTO TABLE table1
LOAD XML LOCAL INFILE 'table1.xml' INTO TABLE table2
Try this guide:
http://www.ehow.com/how_6613143_convert-xml-code-sql.html
The tool itself can be downloaded from:
http://www.xml-converter.com/
I am trying to make a web page that fetches information about books and stores that information in a database for later use. Any idea how to take the information from the Open Library website and store it in a database?
Here is the link to the API if needed:
https://openlibrary.org/developers/api
Thanks in advance.
If PostgreSQL and Python are a viable option, LibrariesHacked has a ready-made solution on GitHub for importing and searching Open Library data.
GitHub: LibrariesHacked / openlibrary-search
Using a PostgreSQL database, it should be possible to import the data directly into tables and then run complex searches with SQL.
Unfortunately the downloads provided are a bit messy. The Open Library file always errors, as the number of columns provided seems to vary. Cleaning it up is difficult, as the text file for editions alone is 25 GB.
That means another Python script to clean up the data. The file openlibrary-data-process.py simply reads in the CSV (Python is a little more forgiving about dodgy data) and writes it out again, keeping only rows that have exactly 5 columns.
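A minimal sketch of that cleaning step (the file names are assumptions, and the real openlibrary-data-process.py may differ; the Open Library dumps are tab-separated):

import csv
import sys

# The JSON column in the dump can be huge, so lift the default field limit.
csv.field_size_limit(sys.maxsize)

with open('ol_dump_editions.txt') as src, \
        open('ol_dump_editions_clean.txt', 'w') as dst:
    reader = csv.reader(src, delimiter='\t')
    writer = csv.writer(dst, delimiter='\t')
    for row in reader:
        # Keep only well-formed rows with exactly 5 columns.
        if len(row) == 5:
            writer.writerow(row)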
I have a JavaScript library (SincIt) that I would like to use to synchronise my WebApp with a MySQL database. However, SincIt only works with MongoDB at the moment.
I could probably write a MySQL adapter for SincIt, since the library is modular, but I wonder if there is an existing adapter that translates MongoDB instructions to SQL.
MySQL is a relational database whose SQL is grounded in relational algebra. MongoDB is a document database that doesn't support relations or joins.
It does allow for hierarchical nesting of documents, but it's simply an entirely different paradigm.
The most important distinction in this case, is that MongoDB is schema-less. With a mongo collection you never need to do the equivalent of a "CREATE TABLE" statement. Furthermore, mongo has no problem with you starting with one document with a specific json structure, and then adding additional documents to that collection with entirely different structures.
As Mongo works with JSON data, with a relational database you would also have the problem of needing to convert table data to JSON documents and vice versa, which really isn't possible in any generic sense.
With MySQL you of course have to have table structures that are maintained in the data dictionary, and if anything changes you need to ALTER the table. You could probably implement a generic table structure of rows where the entire data store would be in a blob, stored in the same JSON format that SincIt expects, but at that point you might as well just use Mongo.
With that said, if there's some business rule that necessitates it, the fastest way to get it working with MySQL would probably be to do what I just suggested and have a generic row structure with something like:
id
optype (set, update, delete?)
data (blob storing the json payload)
parent_id
Just from a quick perusal of the SincIt docs it appears you'd need something to support the "linked list" aspect of the system.
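A minimal sketch of that generic row structure, using SQLite here only for brevity (the column names follow the list above; everything else is an assumption, not SincIt's actual storage format):

import json
import sqlite3

conn = sqlite3.connect(':memory:')

# One generic table holds every document; 'data' carries the raw JSON
# payload, and 'parent_id' chains rows together (the "linked list" aspect).
conn.execute("""
    CREATE TABLE sync_ops (
        id        INTEGER PRIMARY KEY AUTOINCREMENT,
        optype    TEXT NOT NULL,   -- set, update, delete
        data      TEXT,            -- JSON payload stored as text
        parent_id INTEGER REFERENCES sync_ops(id)
    )
""")

# Documents with entirely different structures coexist, as they would in Mongo.
first = conn.execute(
    "INSERT INTO sync_ops (optype, data, parent_id) VALUES (?, ?, ?)",
    ('set', json.dumps({'name': 'Alice'}), None)).lastrowid
conn.execute(
    "INSERT INTO sync_ops (optype, data, parent_id) VALUES (?, ?, ?)",
    ('update', json.dumps({'name': 'Alice', 'age': 30}), first))

for row in conn.execute("SELECT id, optype, data, parent_id FROM sync_ops"):
    print(row)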
I am trying to convert a number of .json files to .csv's using Python 2.7
Is there any general way to convert a json file to a csv?
PS: I saw various similar solutions on stackoverflow.com, but they were very specific to a particular JSON tree and don't work if the tree structure changes. I am new to this site; sorry for my bad English and for reposting. Thank you.
The basic thing to understand is that JSON and CSV files are extremely different on a very fundamental level.
A CSV file is just a series of values separated by commas, which is useful for representing data like that in relational databases, where exactly the same fields repeat for a large number of objects.
A JSON file has structure to it, and there is no straightforward way to represent an arbitrary tree structure in a CSV. You can model various foreign-key relationships, but when it comes right down to it, trees don't make any sense in a CSV file.
My advice would be to reconsider using a CSV, or to post your specific example, because in the vast majority of cases there is no sensible way to convert a JSON document into a CSV.
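That said, for the one case where the conversion is mechanical, a JSON array of flat objects that all share the same keys, a minimal sketch (file names are assumptions):

import csv
import json

# Assumed input shape: [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}, ...]
with open('records.json') as f:
    records = json.load(f)

with open('records.csv', 'w') as f:
    writer = csv.DictWriter(f, fieldnames=sorted(records[0]))
    writer.writeheader()
    writer.writerows(records)

Anything with nesting or varying keys puts you back in the territory described above, where no generic mapping exists.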
I am trying to write a webapp where one piece of functionality is exchanging messages. I am trying to understand how to store these messages. I do not want to store them in a DB. If I have to store them in a file, how do I separate the messages?
Any links to relevant documentation would be greatly appreciated. I tried googling a lot but could not find any useful references.
You should think about storing the messages in XML format and using your webapp to load and parse those XML files into message objects. Why do you not want to store the messages in the database? There are serious drawbacks to storing in the file system rather than the database (or even system memory).
A file system is a database, just not a relational database.
It's often faster than a relational database, but it has significantly less flexibility for indexing on multiple fields.
Parsing XML is gonna suck whether the XML comes from a database or a file.
Instead, you should do page caching to the file system of HTML, or HTML fragments.
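If a flat file really is a hard requirement, one simple way to separate messages is one document per line rather than one XML tree you must re-parse and rewrite. A minimal sketch using JSON Lines instead of the XML suggested above (file name and field names are assumptions):

import json

def save_message(path, sender, text):
    # One JSON document per line: the newline itself separates messages.
    with open(path, 'a') as f:
        f.write(json.dumps({'from': sender, 'text': text}) + '\n')

def load_messages(path):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

save_message('messages.log', 'alice', 'Hello!')
save_message('messages.log', 'bob', 'Hi back.')
print(load_messages('messages.log'))

Appending a line is cheap and crash-tolerant compared with rewriting a whole XML document for every new message.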
I need to document a legacy database schema for a new employee and as there's no design document I'd like to generate one from the existing schema. As the tables are MyISAM the foreign key relationships won't produce a nice graph. I'm interested in producing a document showing the important tables, their columns, types and remarks.
Are there any tools available to produce a nice document (PDF, DOC, HTML or RTF, say) from the database schema's metadata? Or am I better off writing a utility to export this myself (I was thinking of dumping it to XML and then transforming it with XSLT)? The schema is running on MySQL 5.
After some research and looking at the options available, I've decided to use SchemaSpy, which does pretty much what I wanted.
It produced the results in a reasonable format, and also provided an XML dump of the metadata, which I was able to use to write an XSLT transformation matching what I wanted in the first place.
Tip came from answer to question 1869.
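For reference, the dump-it-to-XML idea from the question only takes a few lines against information_schema. A rough Python sketch (connection details and the output layout are assumptions, not what SchemaSpy emits):

import mysql.connector  # pip install mysql-connector-python
from xml.sax.saxutils import quoteattr

# Assumed connection details.
conn = mysql.connector.connect(user='root', password='secret', database='mydb')
cur = conn.cursor()
cur.execute("""
    SELECT table_name, column_name, column_type, column_comment
    FROM information_schema.columns
    WHERE table_schema = %s
    ORDER BY table_name, ordinal_position
""", ('mydb',))

print('<schema>')
for table, column, ctype, comment in cur:
    print('  <column table=%s name=%s type=%s remarks=%s/>' % (
        quoteattr(table), quoteattr(column),
        quoteattr(ctype), quoteattr(comment or '')))
print('</schema>')

The resulting XML can then be fed through the same kind of XSLT transformation described in the accepted answer.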
You can use MySQL Workbench or its ancestor DBDesigner 4 (open source):
Capture the whole database diagram graphically with the "reverse engineering" tool.
Adjust and comment anything you need.
Use the HTML report plugin included in the plugins menu.
Done!
DeZign for Databases can do that for you. Auto-layout is done after the import of your database. You can then rearrange the objects in the diagram and generate an HTML (or PDF) report of your database, including a clickable diagram.
If you're using PHPmyAdmin, you can flip over to the Designer tab to get a visual schema. You could print/screenshot that if you just want to show table relationships.
Sybase PowerDesigner is one of the best tools for reverse engineering a DB, producing nice diagrams and nice exports.