Export Parse Data to CSV, XLS, etc. - MySQL

I'm currently in the process of trying to export a Parse app into a MySQL database. The tables have a similar setup, so to make the process quicker I was wondering if anyone has figured out an easy way to export Parse data in a file format that phpMyAdmin will accept for importing data (CSV, XLS, etc.).
I know Parse exports to JSON, and I have found several posts about exporting to other file formats, but most are fairly old (a few years at least), so I was wondering if anyone has found a way to do this since.
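For what it's worth, Parse exports each class as a JSON file with the rows typically sitting under a top-level results key, so the conversion itself is short. A minimal sketch in Python, with placeholder class and file names:

import csv
import json

# Parse exports one JSON file per class; the rows typically sit under
# a top-level "results" key (adjust if your export differs).
with open("GameScore.json") as f:  # placeholder file name
    rows = json.load(f)["results"]

# Collect every column name that appears in any row.
fieldnames = sorted({key for row in rows for key in row})

with open("GameScore.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(rows)

phpMyAdmin can then import the resulting CSV directly.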

Related

Is there a way to export a collection in a Firestore database to a JSON or CSV file?

I have looked at the import/export documentation here: https://cloud.google.com/firestore/docs/manage-data/export-import but that approach only seems to export for use in other databases or in BigQuery. If I want to use the data in, say, Excel, I would need a CSV file.
There is nothing built into the Firestore UI or API for exporting to CSV or Excel, but you can of course use the API to read the data and write the CSV/XLS file yourself.
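For example, here is a minimal sketch using the google-cloud-firestore Python client (the collection name and output path are placeholders):

import csv
from google.cloud import firestore

db = firestore.Client()  # credentials come from GOOGLE_APPLICATION_CREDENTIALS

# "users" is a placeholder collection name.
rows = [doc.to_dict() for doc in db.collection("users").stream()]

# Firestore documents are schemaless, so take the union of all field names.
fieldnames = sorted({key for row in rows for key in row})

with open("users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(rows)

For very large collections you would want to paginate the query rather than read everything into memory at once.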
There are also some promising links in the results of searching for firestore export to csv, like this tutorial on Exporting Firestore Collection as CSV into Cloud Storage on Demand, the easy way, and this tutorial on CSV Exports from Firestore.

How to import CSV files into Firebase

I see we can import JSON files into Firebase.
What I would like to know is whether there is a way to import CSV files (I have files that could have about 50K or more records, with about 10 columns).
Does it even make sense to have such files in Firebase?
I can't answer whether it makes sense to have such files in Firebase; only you can answer that.
I also had to upload CSV files to Firebase, and I finally transformed my CSV into JSON and used firebase-import to add my JSON to Firebase.
There are a lot of CSV-to-JSON converters (even online ones). You can pick the one you like most (I personally used node-csvtojson).
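If you would rather not depend on a converter, the transformation is only a few lines with Python's standard library (file names are placeholders):

import csv
import json

# Read the CSV (one dict per row) and dump it as a JSON array,
# which firebase-import can then push into Firebase.
with open("records.csv", newline="") as f:
    rows = list(csv.DictReader(f))

with open("records.json", "w") as f:
    json.dump(rows, f, indent=2)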
I've uploaded many tab-separated files (40MB each) into Firebase.
Here are the steps:
I wrote Java code to translate the TSV files into JSON files (a sketch of this step follows the list).
I used firebase-import to upload them. To install it, just type in cmd:
npm install firebase-import
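A minimal sketch of the TSV-to-JSON step (shown here in Python rather than Java, with placeholder file names):

import csv
import json

# Stream a tab-separated file into a JSON object keyed by row number;
# Firebase keys must be strings.
records = {}
with open("data.tsv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f, delimiter="\t")):
        records[str(i)] = row

with open("data.json", "w") as f:
    json.dump(records, f)

The upload itself then looks something like this (flags per the firebase-import README; substitute your own database URL):
firebase-import --database_url https://<your-db>.firebaseio.com --path / --json data.json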
One trick I used on top of all the ones already mentioned is to synchronize a Google spreadsheet with Firebase.
You create a script that uploads directly to the Firebase DB based on rows/columns. It worked quite well and can be more visual for fine-tuning the raw data than working with CSV/JSON directly (a sketch follows the reference below).
Ref: https://www.sohamkamani.com/blog/2017/03/09/sync-data-between-google-sheets-and-firebase/
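The linked post uses Apps Script; the same one-way push can be sketched in Python with gspread and firebase-admin (every name below is hypothetical):

import firebase_admin
import gspread
from firebase_admin import credentials, db

# Hypothetical names throughout: adjust the key file, sheet title,
# and database URL to your own project.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://<your-db>.firebaseio.com"})

gc = gspread.service_account(filename="service-account.json")
worksheet = gc.open("raw-data").sheet1

# Each spreadsheet row becomes one record under /records.
db.reference("/records").set(worksheet.get_all_records())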
Here is the fastest way to import your CSV into Firestore:
Create an account in Jet Admin
Connect Firebase as a DataSource
Import CSV to Firestore
Ref: https://blog.jetadmin.io/how-to-import-csv-to-firestore-database-without-code/

Converting Tab Delimited txt to JSON

I have a 1.9GB tab-delimited file that is in the form of an XLSX file. I could write a script to convert it to CSV and then convert THAT to JSON, but I'm just curious whether there is a more direct way to do this. Thanks! :)
Not sure if this is an option, but you can import it into some database (for example MongoDB) and then export relatively easily from there.
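For instance, with MongoDB's command-line tools (the database, collection, and file names are placeholders):

mongoimport --db scratch --collection rows --type tsv --headerline --file big_file.tsv
mongoexport --db scratch --collection rows --jsonArray --out big_file.json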
I came across a similar problem recently, and what I found really easy to do was actually go directly from XLSX to JSON using MATLAB.
IMPORTING XLSX: https://www.mathworks.com/help/matlab/ref/xlsread.html
EXPORTING JSON: https://www.mathworks.com/help/matlab/ref/jsonencode.html
It might take a little bit of time for such a large file, but I did it on a file about 400MB in size with no problem.

Mind Mapping tool exporting data to Rally

I am trying to export data from a mind-mapping tool so that it can be imported into Rally. The goal is to create a mind map of the backlog that can be easily exported to a Rally-compatible format ("csv"). I tried different tools that export data to Rally-compatible CSV, but I ran into some issues, so I decided to get the data into XML format and then convert just the required fields from the XML to CSV, which can then be imported into Rally.
What I have now is a process that can convert the data to XML; I then use a tool to get it to CSV, but the hierarchical structure of the mind map is not maintained in the conversion. So when I import the CSV data into Rally, it doesn't keep the parent relationships (as seen in the MindMapper tool) in the backlog.
Can anyone help me with this? Thanks!
Have you tried the Rally Excel plugin, which allows importing into Rally from an Excel spreadsheet?
It does not have direct support for importing parent/child relationships between stories, but if a story designated to be a parent already exists in Rally or is imported first, then you may import another batch of stories with the Parent field specified, and that will link the newly imported stories to the parent. See this video.
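To illustrate the two-batch approach with CSV (the column names are assumptions; check them against the plugin's field mapping), first import the parents:

Name
Checkout epic

Then import the children with the Parent field pointing at them:

Name,Parent
Guest checkout story,Checkout epic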

Migrating from Lighthouse to Jira - Problems Importing Data

I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files and attachments. The total size is around 50MB. JIRA allows importing CSV data, so I was thinking of converting the JSON data to CSV, but all the converters I have seen online will only handle a single file, rather than recursing through an entire folder structure and producing the CSV equivalent, which could then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
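A minimal sketch of that first script (the Lighthouse field names and root key below are assumptions; check them against your export):

import csv
import glob
import json

# One denormalized row per issue: walk every exported ticket.json and
# emit the minimum columns the JIRA CSV importer needs.
with open("jira_import.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Summary", "Description", "Lighthouse ID"])
    for path in glob.glob("lighthouse_export/tickets/*/ticket.json"):
        with open(path) as f:
            ticket = json.load(f).get("ticket", {})  # assumed root key
        writer.writerow([ticket.get("title"), ticket.get("body"), ticket.get("number")])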
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do is, in your script, to start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all the tickets.
In Ruby (which is what we used), it would look something like this:
require "json"

Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket_file|
  data = JSON.parse(File.read(ticket_file))
  # access the ticket data in the parsed hash and add it to a CSV
end