Importing Pandoc markdown into MediaWiki?

I'm looking to import content from Pandoc markdown into MediaWiki.
What is the normal way to import into MediaWiki? This page, https://www.mediawiki.org/wiki/Manual:Importing_XML_dumps#How_to_import.3F, talks about importing XML dumps.
Would the best way be to convert the markdown into XML in the right format and import that?
Or is it better to import directly into the database?
Are there any other extensions out there? I'm trying to ascertain the common methods for doing this and whether I need to write something myself.

Directly messing with the database is never a good idea, and the XML import is primarily meant for dump files exported from other wikis (although it is not terribly hard to construct such a file manually). For simple cases, the easiest way is to use the edit.php maintenance script.
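For example, here is a minimal sketch of that route, driven from Python. It assumes pandoc is installed and the wiki lives at a path like /var/www/mediawiki; the file name, page title, user and summary are placeholders, and you should check edit.php --help for the exact options in your MediaWiki version.

    import subprocess

    SOURCE = "article.md"              # hypothetical markdown file
    PAGE_TITLE = "Imported article"    # hypothetical page title
    WIKI_DIR = "/var/www/mediawiki"    # hypothetical MediaWiki install path

    # Pandoc can emit MediaWiki markup directly.
    wikitext = subprocess.run(
        ["pandoc", "-f", "markdown", "-t", "mediawiki", SOURCE],
        check=True, capture_output=True, text=True,
    ).stdout

    # edit.php reads the new page text from stdin; -u sets the user recorded
    # in the page history and -s the edit summary.
    subprocess.run(
        ["php", "maintenance/edit.php",
         "-u", "Admin", "-s", "Imported from markdown", PAGE_TITLE],
        cwd=WIKI_DIR, input=wikitext, check=True, text=True,
    )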

Related

Can I export all of my JSON documents in a collection to a CSV in MarkLogic?

I have millions of documents in different collections in my database. I need to export them to a CSV on my local storage when I specify the collection name.
I tried mlcp export, but it didn't work. We cannot use CoRB for this because of some issues.
I want the CSV to be in such a format that if I run an mlcp import, I can restore all the docs just the way they were.
My first thought would be to use the MLCP archive feature, and not export to CSV at all.
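For reference, an MLCP archive export of a single collection might look roughly like the sketch below, here shelled out from Python. Host, port, credentials, collection name and output path are placeholders, and the exact options should be checked against the MLCP documentation for your version.

    import subprocess

    # Export the collection as an MLCP archive (content plus metadata), which
    # can later be restored with "mlcp.sh import -input_file_type archive".
    subprocess.run(
        [
            "mlcp.sh", "export",
            "-host", "localhost",
            "-port", "8000",
            "-username", "admin",
            "-password", "admin",
            "-output_type", "archive",
            "-output_file_path", "/tmp/my-collection-archive",
            "-collection_filter", "my-collection",
        ],
        check=True,
    )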
If you really want CSV, Corb2 would be my first thought. It provides CSV export functionality out of the box. It might be worth digging into why that didn't work for you.
DMSDK might work too, but it involves writing the code that produces the CSV, which sounds cumbersome to me.
The last option that comes to mind would be Apache NiFi, for which there are various MarkLogic processors. It allows very generic orchestration of data flows. It could be rather overkill for your purpose, though.
HTH!
ml-gradle has support for exporting documents and referencing a transform, which can convert each document to CSV - https://github.com/marklogic-community/ml-gradle/wiki/Exporting-data#exporting-data-to-csv .
Unless all of your documents are flat, you likely need some custom code to determine how to map a hierarchical document into a flat row. So a REST transform is a reasonable solution there.
You can also use a TDE template to project your documents into rows, and the /v1/rows endpoint can return results as CSV. That of course requires creating and loading a TDE template, and then waiting for the matching documents to be re-indexed.
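If you go the TDE route, the CSV retrieval itself can be a single REST call. A hedged sketch, assuming a TDE view "docs" in schema "mydata" already exists, that the REST instance listens on port 8000 with digest auth, and that your MarkLogic version accepts the Optic Query DSL content type shown below:

    import requests
    from requests.auth import HTTPDigestAuth

    # Ask /v1/rows to evaluate an Optic plan over the TDE view and serialize
    # the result as CSV via the Accept header.
    resp = requests.post(
        "http://localhost:8000/v1/rows",
        auth=HTTPDigestAuth("admin", "admin"),
        headers={
            "Content-Type": "application/vnd.marklogic.querydsl+javascript",
            "Accept": "text/csv",
        },
        data="op.fromView('mydata', 'docs')",
    )
    resp.raise_for_status()

    with open("docs.csv", "w", encoding="utf-8") as f:
        f.write(resp.text)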

Can Parse be used in place of my MySQL database? (after converted to NoSQL)

Can I dump my SQL database and upload each table to Parse so that it can serve as a multi-table (whatever the NoSQL terminology is) database for my project?
Parse can be used with PHP, JS, and many other languages of the web, so in theory, yes. It also has an import feature so you can import data. I'm not sure how well this feature works, but it is definitely worth a try. Here is the documentation.

What is the simplest way to export CouchDB information to CSV?

What would be the simplest way to export a CouchDB database of documents (identical structure) to CSV?
I'm guessing it would involve writing a view and manually parsing each document serially using something like PHP, C# or Python.
But is there a simpler way or something already existing I can make use of?
You should be able to generate the CSV directly from CouchDB, i.e. without PHP/C#/Python, using a list function. See http://wiki.apache.org/couchdb/Formatting_with_Show_and_List and http://guide.couchdb.org/editions/1/en/transforming.html for more information.
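As a concrete illustration of the list-function approach, here is a rough, untested sketch driven from Python. It assumes CouchDB 1.x-3.x (where list functions are available), a database named "mydb", admin credentials on localhost:5984, and flat documents with the field names listed in FIELDS; all of those are placeholders.

    import requests

    COUCH = "http://admin:admin@localhost:5984"
    DB = "mydb"
    FIELDS = ["_id", "name", "value"]  # hypothetical flat document fields

    design_doc = {
        "views": {
            "all": {"map": "function (doc) { emit(doc._id, null); }"}
        },
        "lists": {
            # The list function runs inside CouchDB and streams CSV rows.
            "csv": """
    function (head, req) {
      start({ headers: { 'Content-Type': 'text/csv' } });
      var fields = %s;
      send(fields.join(',') + '\\n');
      var row;
      while ((row = getRow())) {
        var doc = row.doc || {};
        send(fields.map(function (f) {
          return JSON.stringify(doc[f] !== undefined ? doc[f] : '');
        }).join(',') + '\\n');
      }
    }
    """ % str(FIELDS).replace("'", '"'),
        },
    }

    # PUT returns 409 if the design doc already exists; delete it or pass its
    # current _rev to update.
    requests.put(f"{COUCH}/{DB}/_design/export", json=design_doc).raise_for_status()

    csv_text = requests.get(
        f"{COUCH}/{DB}/_design/export/_list/csv/all",
        params={"include_docs": "true"},
    ).text
    print(csv_text)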
I made this: https://gist.github.com/3026004 . It takes the first 100 documents as a sample for the headers, and it supports nested hashes and arrays.

filemaker pro export and import to mysql via php

Could anyone advise me or direct me to a site that explains the best way to go about this? I'm sure I could figure it out with a lot of time invested, but I'm just looking for a jump start. I don't want to use the migration tool either, as I just want to put FMP XML files on the server and have it create new MySQL databases based on the FMPXMLRESULT data provided.
Thanks.
Technically you can write an XSLT stylesheet to transform the XML files into SQL. It's pretty much straightforward for data (except data in container fields), but with some effort you can even transfer the schema from DDR reports (though I doubt it's worth it for a single project).
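The same transformation can also be sketched in Python rather than XSLT. A rough, untested outline that reads an FMPXMLRESULT export and prints CREATE TABLE / INSERT statements; the file name "export.xml" and table name "contacts" are placeholders, every column is naively typed as TEXT, and container fields are ignored.

    import xml.etree.ElementTree as ET

    NS = {"fmp": "http://www.filemaker.com/fmpxmlresult"}
    TABLE = "contacts"

    root = ET.parse("export.xml").getroot()
    fields = [f.attrib["NAME"] for f in root.findall("fmp:METADATA/fmp:FIELD", NS)]

    # Naive schema: every FileMaker field becomes a TEXT column.
    print("CREATE TABLE `%s` (%s);" % (TABLE, ", ".join("`%s` TEXT" % c for c in fields)))

    for row in root.findall("fmp:RESULTSET/fmp:ROW", NS):
        values = []
        for col in row.findall("fmp:COL", NS):
            data = col.find("fmp:DATA", NS)
            text = (data.text or "") if data is not None else ""
            values.append("'" + text.replace("'", "''") + "'")
        print("INSERT INTO `%s` (%s) VALUES (%s);" % (
            TABLE,
            ", ".join("`%s`" % c for c in fields),
            ", ".join(values),
        ))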
Which version of MySQL? v6 has LOAD XML which will make things easy for you.
If not v6, then you are dealing with stored procedures, which can be a pain. If you need v5, it might make sense to install MySQL 6, get the data in there using LOAD XML, and then do a mysqldump, which you can import into v5.
Here is a good link:
http://dev.mysql.com/tech-resources/articles/xml-in-mysql5.1-6.0.html

Library to convert CSV to XML, MySQL, HTML, RSS, JSON, etc.?

It can be any language.
Is there a library, piece of software, or plugin that will convert the CSV to the various formats?
I just want to add: you can use the phpMyAdmin bundled with WampServer if you need to import a well-formatted CSV file into a MySQL database.
I'd just use the dynamic language of my choice. Most of them have a CSV library and libraries to generate the output you want. Just a few lines and you have written it yourself.
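For instance, here is a tiny sketch of that do-it-yourself approach in Python, converting a CSV to JSON and XML. "input.csv" and its column names are placeholders, and the XML part assumes the column names are valid element names.

    import csv
    import json
    import xml.etree.ElementTree as ET

    with open("input.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # CSV -> JSON
    with open("output.json", "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)

    # CSV -> XML (one <row> element per record, one child element per column)
    root = ET.Element("rows")
    for row in rows:
        item = ET.SubElement(root, "row")
        for key, value in row.items():
            ET.SubElement(item, key).text = value
    ET.ElementTree(root).write("output.xml", encoding="utf-8", xml_declaration=True)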
Here is a product that converts CSV to other formats.
You could also just use sed and awk to do the same.
I agree with developing your own library, since there is always more/custom work to be done. One idea would be to first create a process that imports the CSV file into a database. Once it's in the database, you can easily query it and produce output using the database itself or a programming language.
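A minimal sketch of that approach using Python's built-in sqlite3 module; the file name "input.csv" is a placeholder, and every column is simply stored as TEXT.

    import csv
    import sqlite3

    with open("input.csv", newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)

    conn = sqlite3.connect("data.db")
    cols = ", ".join('"%s" TEXT' % c for c in header)
    conn.execute("CREATE TABLE IF NOT EXISTS data (%s)" % cols)
    conn.executemany(
        "INSERT INTO data VALUES (%s)" % ", ".join("?" for _ in header),
        rows,
    )
    conn.commit()

    # From here the data can be queried and re-emitted in any format you like.
    for row in conn.execute("SELECT * FROM data LIMIT 5"):
        print(row)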
There is a nice generic library in Perl for doing conversions and treating a bunch of different formats (XML, CSV, HTML, etc.) as hashes. It even extends to treating them directly (via DBI) as table data.
AnyData
https://metacpan.org/pod/AnyData