Mind Mapping tool exporting data to Rally - csv

I am trying to export data from a mind mapping tool so that it can be imported into Rally. The goal is a mind map of the backlog that can be easily exported to a Rally-compatible format (CSV). I tried several tools that export directly to a Rally-compatible CSV, but ran into issues, so I decided to export the data to XML instead and then convert just the required fields from the XML to CSV, which can then be imported into Rally.
What I have now is a process that converts the data to XML; I then use another tool to get it to CSV, but the hierarchical structure of the mind map is not maintained in the conversion. So when I import the CSV into Rally, the parent relationships (as seen in the MindMapper tool) are not kept in the backlog.
Can anyone help me with this? Thanks!

Have you tried the Rally Excel plugin, which allows importing into Rally from an Excel spreadsheet?
It does not have direct support for importing parent/child relationships between stories, but if a story designated to be a parent already exists in Rally (or is imported first), then you may import another batch of stories with the Parent field specified, and that will link the newly imported stories to the parent. See this video.
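In other words, split the export into two batches and run two imports: parent stories first, then children. A rough sketch in Python, assuming your flattened export is a backlog.csv with Name and Parent columns (all file and column names here are placeholders for whatever your tool produces):

import csv

# Split a flattened backlog CSV into two batches for a two-pass import:
# parent stories first, then children whose Parent field references them.
with open("backlog.csv", newline="") as f:
    rows = list(csv.DictReader(f))

parents = [r for r in rows if not r.get("Parent")]
children = [r for r in rows if r.get("Parent")]

for name, batch in (("parents.csv", parents), ("children.csv", children)):
    with open(name, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(batch)

This covers a two-level hierarchy; for deeper trees you would repeat the pass once per level, importing each level only after its parents already exist in Rally.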

Is there a way to export a collection in a Firestore database to a JSON or CSV file?

I have looked at the import/export documentation here: https://cloud.google.com/firestore/docs/manage-data/export-import, but that only seems to cover exports for use in other databases or in BigQuery. If I want to use the data in, say, Excel, I would need a CSV file for that.
There is nothing built into the Firestore UI or API for exporting to a CSV or Excel file, but you can of course use the API to read the data and write the CSV/XLS file yourself.
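For example, here is a minimal sketch in Python using the google-cloud-firestore client; the collection name "users" is a placeholder, and credentials are assumed to come from the GOOGLE_APPLICATION_CREDENTIALS environment variable:

import csv
from google.cloud import firestore

db = firestore.Client()
docs = db.collection("users").stream()  # "users" is a placeholder name

# Collect every document, keeping the document id alongside its fields.
rows = [{"id": doc.id, **doc.to_dict()} for doc in docs]
fieldnames = sorted({key for row in rows for key in row})

with open("users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

Nested maps and arrays would still need to be flattened before Excel can make much sense of them.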
There are also some promising links in the results of searching for "firestore export to csv", like the tutorials "Exporting Firestore Collection as CSV into Cloud Storage on Demand, the easy way" and "CSV Exports from Firestore".

Upload from CSV file to DHTMLX GANTT

There is a custom implementation in dhtmlx gantt for import from MPP/XML, which goes to their servlet and renders the gantt. Has anyone tried to build a custom CSV upload, or is there any third-party tool available to load a CSV into the gantt?
https://dhtmlx.com/blog/export-import-ms-project-dhtmlx-gantt-chart/
There is no such solution from DHTMLX (FYI, I work for DHTMLX), and I'm not aware of any third-party service or ready-to-use solution that could be used for development.
At the code level, importing CSV into gantt breaks down into three steps:
1. parsing the CSV into an array of objects,
2. mapping the CSV columns to properties of those objects (the mandatory properties of gantt tasks: text/start_date/duration/parent),
3. inserting the result into the database.
The first step is trivial, as the sketch below shows. Mapping columns may require implementing some sort of UI so the user can specify which columns of the CSV mean what in gantt.
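In Python, for instance, the first step is just the standard library's csv module (the file name is a placeholder):

import csv

# Parse the uploaded CSV into a list of dicts keyed by the header row.
with open("tasks.csv", newline="") as f:
    rows = list(csv.DictReader(f))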
For inspiration, you can check how it's done in this app: https://app.ganttpro.com/. It requires registration, but you can create a free account using a Google or Facebook account; create a new project ("+ CREATE NEW" in the left-hand menu), select "Import from", and try uploading a CSV file to see how the UI looks.
As for the last step, inserting the parsed records into the database: you'll need to do some coding in order to insert tasks without losing the project hierarchy (the task.parent -> task.id relations, given that the database ids of your items will likely change after inserting), but overall it shouldn't be very difficult.
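Here is a rough sketch of that re-mapping in Python, assuming each parsed row is a dict with csv-local "id" and "parent" values, and that insert_task() is a placeholder for your actual database insert, returning the new database id:

def import_tasks(rows, insert_task):
    # Group rows by their csv-level parent id; 0 stands for the project root.
    by_parent = {}
    for row in rows:
        by_parent.setdefault(row.get("parent") or 0, []).append(row)

    id_map = {0: 0}  # csv id -> database id
    queue = [0]
    while queue:  # walk the tree top-down so every parent is inserted first
        old_parent = queue.pop(0)
        for row in by_parent.get(old_parent, []):
            new_id = insert_task({
                "text": row["text"],
                "start_date": row["start_date"],
                "duration": row["duration"],
                "parent": id_map[old_parent],  # remap to the new parent id
            })
            id_map[row["id"]] = new_id
            queue.append(row["id"])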
If you're looking for something more specific, please update your question.

Exporting Parse data to csv, xls, etc.

I'm currently in the process of trying to export a Parse app into a MySQL database. The tables have a similar setup, so to make the process quicker I was wondering if anyone has figured out an easy way to export Parse data in a file format that phpMyAdmin will accept for importing (csv, xls, etc.).
I know Parse exports to JSON, and I have found several posts about exporting to other file formats, but most are fairly old (a few years at least), so I was just wondering if anyone has found a way to do this since?

Tool for export to JSON from ArangoDB

To create a native backup and restore it, one has to use arangodump and arangorestore.
To import from JSON (and CSV, TSV), one has to use arangoimp.
What can I use to export to JSON from ArangoDB?
One possibility is to use the arangodump tool that is shipped with ArangoDB.
It can be used to dump an entire database or individual collections. It stores dumped data in JSON format on disk.
Maybe arangodump's output is already in a format that you can work with.
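For example (the database and collection names are placeholders):

arangodump --server.database mydb --collection mycollection --output-directory "dump"

Leaving out --collection dumps all (non-system) collections in the database.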

Migrating from Lighthouse to Jira - Problems Importing Data

I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files, and attachments; the total size is around 50 MB. JIRA allows importing CSV data, so I was thinking of converting the JSON data to CSV, but all the converters I have seen online only handle a single file rather than recursively parsing an entire folder structure and producing a CSV equivalent that can then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
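A minimal sketch of such a script; the Lighthouse field names used here (title, state, body, number) are assumptions, so verify them against your actual export:

import csv, glob, json

with open("issues.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Summary", "Status", "Description", "Lighthouse ID"])
    for path in glob.glob("lighthouse_export/tickets/*/ticket.json"):
        with open(path) as f:
            ticket = json.load(f)
        # the keys below are guesses at the export's structure
        writer.writerow([ticket.get("title", ""), ticket.get("state", ""),
                         ticket.get("body", ""), ticket.get("number", "")])

The last column keeps the Lighthouse ID so you can map it to the new JIRA issue key later.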
We just went through a Lighthouse-to-JIRA migration and ran into this. The best approach is for your script to start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file containing all tickets to import into JIRA.
In Ruby (which is what we used), it would look something like this:
require "json"

Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket|
  data = JSON.parse(File.read(ticket))  # one ticket's exported data
  # access ticket data here and append it to a CSV
end