Library to convert CSV to XML, MySQL, HTML, RSS, JSON, etc.? - csv

It can be any language.
Is there a library, piece of software, or plugin that will convert CSV to these various formats?

I just want to add: you can use phpMyAdmin in a WAMP server if you need to import a well-formatted CSV file into a MySQL database.

I'd just use the dynamic language of my choice. Most of them have a CSV library and libraries to generate the output you want. Just a few lines and you have written it yourself.
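To illustrate how short this is, here is a CSV-to-JSON conversion using only Python's standard library (the function name and file paths are placeholders, not from any particular library):

```python
# Convert a CSV file with a header row into a JSON array of objects.
import csv
import json

def csv_to_json(csv_path, json_path):
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))  # one dict per row, keyed by header
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2)
```

Swapping the `json` step for an XML or HTML writer gives the other output formats in the same way.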

Here is a product that converts CSV to other formats.
You could also use sed and awk to do the same.
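As a sketch of the awk approach (assuming plain comma-separated input with no quoted or embedded commas), each row can be wrapped in XML elements like this; the `<row>`/`<cell>` tag names are made up for the example:

```shell
# Wrap each CSV row in <row>/<cell> XML elements.
# Handles only simple comma-separated data, not quoted fields.
printf 'name,age\nalice,30\n' | awk -F',' '{
  printf "<row>"
  for (i = 1; i <= NF; i++) printf "<cell>%s</cell>", $i
  print "</row>"
}'
```

This prints one `<row>` element per input line, with one `<cell>` per field.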

I agree with developing your own library, since there is always more custom work to be done. One approach would be to first create a process for importing a CSV file into a database. Once it's in the database, you can easily query and output the data using the database itself or a programming language.
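That import-then-query pipeline can be sketched with Python's standard library alone, with SQLite standing in for whichever database you use (the function, table, and file names here are made up for the example):

```python
# Load a CSV file (with a header row) into a SQLite table.
import csv
import sqlite3

def load_csv(conn, csv_path, table):
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        headers = next(reader)
        cols = ", ".join('"%s"' % h for h in headers)
        conn.execute('CREATE TABLE "%s" (%s)' % (table, cols))
        marks = ", ".join("?" for _ in headers)
        conn.executemany('INSERT INTO "%s" VALUES (%s)' % (table, marks), reader)
    conn.commit()
```

After `load_csv(sqlite3.connect(":memory:"), "data.csv", "data")`, any SELECT can feed whatever output format you need.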

There is a nice generic Perl library for doing conversions that treats a bunch of different formats (XML, CSV, HTML, etc.) as hashes. It even extends to direct treatment of table data via DBI.
AnyData
https://metacpan.org/pod/AnyData

Related

How can I batch load 1,000 XML files and convert to JSON with Talend?

How can I batch load 1,000 XML files and convert to JSON with Talend? Is it possible to connect to Neo4J with Talend or maybe connect to an RDF Graph Database like Fluree?
Use the tFileInputXML and tFileOutputJSON components.
I'll leave you a link to the documentation; look at the scenarios, which show how to configure each component.
I have worked with these components; reading the XML works, though you may have to read the file with several different inputs, depending on the structure of the XML.
You can also validate it using an XSD.
Regards.

Can I export all of my JSON documents of a collection to a CSV in Marklogic?

I have millions of documents in different collections in my database. I need to export them to a csv onto my local storage when I specify the collection name.
I tried mlcp export, but it didn't work. We cannot use CoRB for this because of some issues.
I want the csv to be in such a format that if I try a mlcp import then I should be able to restore all docs just the way they were.
My first thought would be to use MLCP archive feature, and to not export to a CSV at all.
If you really want CSV, Corb2 would be my first thought. It provides CSV export functionality out of the box. It might be worth digging into why that didn't work for you.
DMSDK might work too, but involves writing code that handles the writing of CSV, which sounds cumbersome to me.
Last option that comes to mind would be Apache NiFi for which there are various MarkLogic Processors. It allows orchestration of data flow very generically. It could be rather overkill for your purpose though.
HTH!
ml-gradle has support for exporting documents and referencing a transform, which can convert each document to CSV - https://github.com/marklogic-community/ml-gradle/wiki/Exporting-data#exporting-data-to-csv .
Unless all of your documents are flat, you likely need some custom code to determine how to map a hierarchical document into a flat row. So a REST transform is a reasonable solution there.
You can also use a TDE template to project your documents into rows, and the /v1/rows endpoint can return results as CSV. That of course requires creating and loading a TDE template, and then waiting for the matching documents to be re-indexed.
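The hierarchical-to-flat mapping that the REST-transform option needs can be sketched generically; this is not MarkLogic-specific code, just one common way to flatten a nested JSON document into a single CSV-friendly row:

```python
# Flatten a nested document into a flat dict whose keys join the
# nesting path with dots (list positions become numeric segments).
def flatten(doc, prefix=""):
    row = {}
    if isinstance(doc, dict):
        for key, value in doc.items():
            row.update(flatten(value, prefix + key + "."))
    elif isinstance(doc, list):
        for i, value in enumerate(doc):
            row.update(flatten(value, prefix + str(i) + "."))
    else:
        row[prefix.rstrip(".")] = doc
    return row
```

Each flattened dict can then be fed to `csv.DictWriter`, with the union of keys across documents as the header row.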

Can Parse be used in place of my MySQL database? (after converted to NoSQL)

Can I dump my SQL database and upload each table to Parse, so that it can serve as a multi-table (whatever the NoSQL terminology is) database for my project?
Parse can be used with PHP, JavaScript, and many other web languages, so in theory, yes. It also has an import feature, so you can import your data. I'm not sure how well this feature works, but it is definitely worth a try. Here is the documentation.

What is the simplest way to export CouchDB information to CSV?

What would be the simplest way to export a CouchDB database of documents (identical structure) to CSV?
I'm guessing it would involve writing a view and manually parsing each document serially using something like PHP, C# or Python.
But is there a simpler way or something already existing I can make use of?
You should be able to generate the CSV directly from CouchDB, i.e. without PHP/C#/Python, using a list function. See http://wiki.apache.org/couchdb/Formatting_with_Show_and_List and http://guide.couchdb.org/editions/1/en/transforming.html for more information.
I made this: https://gist.github.com/3026004 . It takes the first 100 documents as a sample for the headers, and it supports nested hashes and arrays.
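The header-sampling idea is only a few lines in Python. Given the documents from a view or `_all_docs?include_docs=true`, sample the first N for the header set and write everything with `csv.DictWriter`; unlike the gist, this sketch ignores nested values, and the function name is made up:

```python
import csv

def docs_to_csv(docs, out_path, sample=100):
    # The union of keys across the first `sample` documents
    # becomes the CSV header, in first-seen order.
    headers = []
    for doc in docs[:sample]:
        for key in doc:
            if key not in headers:
                headers.append(key)
    with open(out_path, "w", newline="") as f:
        # extrasaction="ignore" drops fields not seen in the sample;
        # missing fields default to empty cells.
        writer = csv.DictWriter(f, fieldnames=headers, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(docs)
```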

PST to CSV File Conversion

Does anyone know of a good tool that converts .pst to .csv files through command line?
Can you assume Outlook is installed on the computer? If so, I believe it can be scripted in the background using OLE or something similar. I've done file conversions through Excel using Ruby that way.
And here's a Perl example
A solution I just stumbled across is:
libpst
It obviously doesn't convert straight to CSV, but it converts into a more manageable format.
Importing into Outlook and then exporting as CSV is still probably the quickest solution, but libpst would certainly be useful if all you have is the PST file and no Outlook.
One time only, or programmatically?
If it's one time only, import into a mail program that handles mbox (e.g. Thunderbird), at which point you just have text files to manipulate as desired.
Otherwise, no idea; best of luck.
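Once the mail is in mbox form, turning it into CSV is short with Python's standard-library `mailbox` module (the function name, file paths, and chosen columns here are placeholders):

```python
# Export basic headers from an mbox file into a CSV.
import csv
import mailbox

def mbox_to_csv(mbox_path, csv_path):
    box = mailbox.mbox(mbox_path)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["from", "to", "date", "subject"])
        for msg in box:
            # Header lookup is case-insensitive; missing headers are None.
            writer.writerow([msg["from"], msg["to"], msg["date"], msg["subject"]])
```

Extracting the message bodies as extra columns is possible too, but multipart and encoding handling makes that a larger job.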
You can always write a .NET application using CDO, MAPI, OOM, or Redemption that does what you need.
I've written a complete Outlook exporter tool for my company, which you can view at http://www.tzunami.com