I am testing APIs on a locally run service using Postman and getting a large nested JSON response. I am required to produce a CSV file with the data in tabular form, i.e., keys as column names and the values as rows.
An offline solution is needed, for privacy and confidentiality reasons.
First I came across this solution: the Postman saveResponseToFile template.
It is a Postman template that sends the requests to a Node server running locally, which in turn saves the JSON to a CSV.
But all this does is save each value into a separate cell laid out horizontally across the CSV, which is not of much use for readability.
Desired Output
The table I need is something like what this online tool produces:
ConvertCSV
It takes the keys as column names and creates a new row for each nested record.
Is there an open-source script that can accomplish this? Sorry if this is a duplicate; I haven't found a proper solution after trying several scripts similar to the first one.
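For reference, a minimal sketch of one way to do this in Python, flattening nested keys into dotted column names (the naming convention and the response.json file name are assumptions; array-to-row expansion like ConvertCSV's would need an extra step):

import csv
import json

def flatten(obj, prefix=""):
    """Flatten nested dicts/lists into a single dict with dotted keys."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, f"{name}."))
        elif isinstance(value, list):
            for i, item in enumerate(value):
                if isinstance(item, dict):
                    flat.update(flatten(item, f"{name}.{i}."))
                else:
                    flat[f"{name}.{i}"] = item
        else:
            flat[name] = value
    return flat

with open("response.json") as f:
    data = json.load(f)

# Accept either a single object or a top-level array of objects
rows = [flatten(r) for r in (data if isinstance(data, list) else [data])]
fieldnames = sorted({k for row in rows for k in row})

with open("response.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)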
Related
I'm looking for ideas for open-source ETL or data-processing software that can monitor a folder for CSV files, then open and parse each CSV.
For each CSV row, the software will transform the row into JSON and make an API call to start a Camunda BPM process, passing the cell data into the process as variables.
Looking for ideas,
Thanks
You can use a Java WatchService or Spring FileSystemWatcher as discussed here with examples:
How to monitor folder/directory in spring?
referencing also:
https://www.baeldung.com/java-nio2-watchservice
Once you have picked up the CSV you can use my example here as inspiration or extend it: https://github.com/rob2universe/csv-process-starter specifically
https://github.com/rob2universe/csv-process-starter/blob/main/src/main/java/com/camunda/example/service/CsvConverter.java#L48
The example starts a configurable process for every row in the CSV and includes the content of the row as JSON process data.
I wanted to limit the dependencies of this example, so the CSV parsing logic is very simple. Commas inside field values may break the example, and special characters may not be handled correctly. A more robust implementation could replace the simple Java String.split(",") with an existing CSV parser library such as OpenCSV.
The file watcher would actually be a nice extension to the example. I may add it when I get around to it, but would also accept a pull request in case you fork my project.
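If a scripting-language prototype helps, here is a rough sketch of the same flow in Python; the watchdog library, the folder path, and the process key are my assumptions (the linked example itself is Java), and the endpoint shown is the default Camunda 7 REST API:

import csv
import time

import requests
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

CAMUNDA_START_URL = "http://localhost:8080/engine-rest/process-definition/key/{key}/start"

class CsvHandler(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory or not event.src_path.endswith(".csv"):
            return
        with open(event.src_path, newline="") as f:
            # csv.DictReader handles quoted commas, unlike String.split(",")
            for row in csv.DictReader(f):
                variables = {k: {"value": v} for k, v in row.items()}
                requests.post(
                    CAMUNDA_START_URL.format(key="my-process"),  # hypothetical process key
                    json={"variables": variables},
                )

observer = Observer()
observer.schedule(CsvHandler(), path="watched-folder", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()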
I was using Behave and Selenium to test something that uses a large amount of data. The data tables were becoming too big and making the Gherkin documentation unreadable.
I would like to move most of the data from the data tables to an external file such as JSON, but I couldn't find any examples online.
I cannot offer a complete example at the moment, but I would create the JSON file as needed, reference it in the Given or Background step, and then capture the path in the corresponding decorated step method.
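A minimal sketch of that idea (the step wording, paths, and JSON layout are assumptions):

# features/steps/data_steps.py, matched by a Gherkin line such as:
#   Given the test data in "data/users.json"
import json

from behave import given

@given('the test data in "{path}"')
def step_load_test_data(context, path):
    # Load the external JSON once and keep it on the context
    # so that later When/Then steps can use it.
    with open(path) as f:
        context.test_data = json.load(f)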
I want to develop a small search website where the data will be stored in XML files. When someone searches, the matching data should be displayed in a table in HTML. How does one retrieve the data from the XML files?
Below is the basic approach for displaying data with only two columns, but I want to display the data dynamically:
HTML file: http://www.w3schools.com/xml/xml_applications.asp
This is the sample code for retrieving data from XML for only two columns.
Well, the first problem I see is that you have two functions in there that are never called, so nothing will happen programmatically. When you define a function you need to invoke it, e.g. with myFunction(). I would recommend reading up a little more on JavaScript instead of copying and pasting code and expecting it to just "work".
To further elaborate: you removed the function call from the example you copied when you took off the button. Also, what is your XML endpoint? It is not going to be the same as in the example unless you build it that way; in that example it is just an XML file hosted on the server at the same root as the HTML.
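If a server-side option is acceptable, here is a rough sketch in Python of the dynamic-columns idea (an alternative to the in-browser JavaScript in the linked example; the data.xml name and record structure are assumptions):

import xml.etree.ElementTree as ET

# Derive the column set from the child tags of the first record,
# so the table follows whatever fields the XML actually contains.
root = ET.parse("data.xml").getroot()
records = list(root)
columns = [child.tag for child in records[0]]

header = "".join(f"<th>{tag}</th>" for tag in columns)
rows = "".join(
    "<tr>" + "".join(f"<td>{rec.findtext(tag, '')}</td>" for tag in columns) + "</tr>"
    for rec in records
)
print(f"<table><tr>{header}</tr>{rows}</table>")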
My coding knowledge is very basic, so please bear that in mind. There is an encoding service called Vid.ly. We have hundreds of videos on there and would like to create an Excel spreadsheet with all of their information.
Vid.ly accepts API queries in XML and JSON. Below is a basic example of the query I want to make:
http://api.vid.ly/#GetMediaList
Is there a way to get Excel to send that query to the Vid.ly API, receive the XML/JSON response, and build a table from it? I have gotten it to work with manually generated XML, but I really want Excel to pull that information automatically.
Sure, you need to write VBA code in the Excel workbook. Refer to the following URLs:
https://msdn.microsoft.com/en-us/library/dd819156%28v=office.12%29.aspx
http://www.dotnetspider.com/resources/19-STEP-BY-STEP-Consuming-Web-Services-through-VBA-Excel-or-Word-Part-I.aspx
http://www.automateexcel.com/2004/11/14/excel_vba_consume_web_services/
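If the VBA route proves difficult, the fetch-and-flatten step can also be scripted outside Excel and the result opened as a CSV. A rough Python sketch follows; the query body, the form-field posting style, and the element names are placeholders, not Vid.ly's actual schema, so adapt them from the GetMediaList documentation:

import csv
import xml.etree.ElementTree as ET

import requests

# Placeholder query body; build the real one from the GetMediaList docs at
# http://api.vid.ly/#GetMediaList
query_xml = "<Query>...</Query>"

# Posting the XML as a form field is also an assumption; check the docs
resp = requests.post("http://api.vid.ly/", data={"xml": query_xml})
root = ET.fromstring(resp.text)

# Placeholder element names; inspect the actual response to pick the right tags
with open("vidly_media.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for media in root.iter("Media"):
        writer.writerow([media.findtext("Title", ""),
                         media.findtext("ShortLink", "")])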
I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files and attachments, around 50MB in total. JIRA allows importing CSV data, so I was thinking of converting the JSON data to CSV, but all converters I have seen online will only handle a single file rather than parsing recursively through an entire folder structure and creating the CSV equivalent, which could then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
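A minimal sketch of such a script (the export layout, the top-level "ticket" key, and the field names are assumptions to check against your actual export and the JIRA importer's field mapping):

import csv
import glob
import json

# One row per issue, denormalized the way the JIRA CSV importer expects.
with open("jira_import.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Summary", "Description", "Status", "Lighthouse ID"])
    for path in glob.glob("lighthouse_export/tickets/*/ticket.json"):
        with open(path) as f:
            ticket = json.load(f).get("ticket", {})
        writer.writerow([ticket.get("title"), ticket.get("body"),
                         ticket.get("state"), ticket.get("number")])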
We just went through a Lighthouse-to-JIRA migration and ran into this. The best approach is for your script to start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all the tickets.
In Ruby (which is what we used), it would look something like this:
require 'json'
require 'csv'

CSV.open("jira_import.csv", "w") do |csv|
  Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |path|
    data = JSON.parse(File.read(path)) # each file holds a single ticket
    # access ticket data and add it to a CSV row (field names depend on the export)
    csv << [data.dig("ticket", "number"), data.dig("ticket", "title")]
  end
end