How to convert a blueprint JSON file to a CSV file?

My goal is to export all configuration properties from the Ambari cluster to a CSV file.
Example – how to generate a fresh blueprint.json file from my Ambari cluster:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://10.23.4.122:8080/api/v1/clusters/HDP01?format=blueprint -o /tmp/HDP01_blueprint.json
Example of the expected result (all parameters from all config types in the JSON file should end up in the CSV file):
autopurge.purgeInterval,512
dataDir,/hadoop/zookeeper
autopurge.snapRetainCount,10
clientPort,2181
initLimit,11
tickTime,2000
syncLimit,5

You could write your own script to do this conversion.
For example, you could use PHP to read the JSON and create the CSV file exactly the way you want it.
Reading the JSON
$fileContent = file_get_contents('/tmp/HDP01_blueprint.json');
$parsedContent = json_decode($fileContent, true);
After this the content is stored in the $parsedContent variable as an associative array. With this array you can write the values you want to a CSV file.
You can even let the script fetch the JSON string for you if you want.
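If you prefer Python over PHP, here is a minimal sketch of the whole conversion. It assumes the exported blueprint keeps its settings under a top-level "configurations" list, in which each entry maps a config type (e.g. zoo.cfg) to a dictionary with a "properties" key, the usual layout of an Ambari blueprint export:
import csv
import json

# Read the blueprint exported by the curl command above
with open('/tmp/HDP01_blueprint.json') as f:
    blueprint = json.load(f)

# Write one "parameter,value" row per property, across all config types
with open('/tmp/HDP01_blueprint.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    for config in blueprint.get('configurations', []):
        for body in config.values():  # one "properties" block per config type
            for name, value in body.get('properties', {}).items():
                writer.writerow([name, value])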

Related

Facing an issue with the mongoexport JSON file "_id" column

I am exporting a mongo collection to JSON format and then loading that data into a BigQuery table using the bq load command.
mongoexport --uri mongo_uri --collection coll_1 --type json --fields id,createdAt,updatedAt --out data1.csv
A JSON row looks like this:
{"_id":{"$oid":"6234234345345234234sdfsf"},"id":1,"createdAt":"2021-05-11 04:15:15","updatedAt":null}
but when I run the bq load command in BigQuery it gives the error below:
Invalid field name "$oid". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 300 characters long.
I think if the mongoexport JSON contained {"_id": ObjectId(6234234345345234234sdfsf)}, my issue would be solved.
Is there any way to export JSON like this?
Or any other way to achieve this?
Note: I can't use CSV format because the mongo documents contain commas.
By default, _id holds an ObjectId value, so as you mentioned, storing it as {"_id": ObjectId(6234234345345234234sdfsf)} instead of as "_id":{"$oid":"6234234345345234234sdfsf"} would solve the problem.
Replace $oid with oid. I'm using Python, so the code below worked:
import fileinput

# Rewrite the export in place, renaming the illegal "$oid" key to "oid"
with fileinput.FileInput("mongoexport_json.txt", inplace=True, encoding="utf8") as file:
    for line in file:
        print(line.replace('"$oid":', '"oid":'), end='')
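A more structural variant (a sketch, not part of the original answer) parses each line, since mongoexport writes one JSON document per line, and flattens the _id field to a plain string, which also gives BigQuery a legal column name; the file names here are illustrative:
import json

# Flatten {"_id": {"$oid": "..."}} to {"_id": "..."} line by line
with open("mongoexport_json.txt") as src, open("bq_ready.json", "w") as dst:
    for line in src:
        doc = json.loads(line)
        oid = doc.get("_id")
        if isinstance(oid, dict) and "$oid" in oid:
            doc["_id"] = oid["$oid"]
        dst.write(json.dumps(doc) + "\n")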

Read JSON data to get key

The following is sample output data in JSON format when I create a JIRA issue from the command line using a curl command.
{"id":"123456","key":"ABCD-123","self":"http://abcd.com/rest/api/2/issue/123456"}
How can I read ABCD-123 into a variable? I have tried the JSON data parser jq but it didn't help.
Reference Link: https://developer.atlassian.com/server/jira/platform/jira-rest-api-examples/
With jq:
key=$(jq -r '.key' file)
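If jq is not available, the same field can be read with a few lines of Python (a sketch; it assumes the curl response was saved to a file, whose name here is hypothetical):
import json

# Load the saved JIRA response and pull out the issue key
with open("jira_response.json") as f:
    key = json.load(f)["key"]
print(key)  # prints: ABCD-123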

Need to run a collection for multiple iterations with a CSV file that holds data for the JSON body

I have an exported Postman collection. One request declares a variable inside the JSON body. I have a CSV file with data for that variable. Now I want to run the collection for multiple iterations with the CSV data from the Newman command line.
I tried this to run it multiple times:
newman run collection_exported.json -n 3
It runs multiple iterations (for this I did not include that variable).
The JSON body has the variable below:
"value":"{{medinfovalues}}"
and the CSV file has the values for it.
You would need to use the -d <source> or --iteration-data <source> CLI flag to tell Newman the location of the data file.
For example:
newman run collection_exported.json -d "filepath/myDataFile.csv" -n 3
More information about all the different CLI flags can be found here:
https://github.com/postmanlabs/newman#newman-run-collection-file-source-options
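For reference, Newman matches CSV column headers to variable names, so the data file must have a medinfovalues header row; a minimal example (the values are placeholders) would look like:
medinfovalues
value-for-iteration-1
value-for-iteration-2
value-for-iteration-3
Each row then supplies the value of {{medinfovalues}} for one iteration.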

Parse and store text from a JSON file in a Windows batch script

I have a JSON file with the following contents:
{"STATUS":"PASS","PASSRATE":96.95238}
I want to store the passrate so that I can use it to append to a filename.
Check jsonextractor.bat, which can extract information from a JSON file given a dot-notated path to the object:
jsonextractor.bat data.json PASSRATE
You could use Xidel for that:
xidel -s data.json -e "$json/PASSRATE"
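If Python happens to be installed on the machine, the value can also be extracted without external tools (a sketch; the printed value can then be captured by the batch script, e.g. with a for /f loop):
import json

# Print the pass rate so a calling script can capture it for a file name
with open("data.json") as f:
    print(json.load(f)["PASSRATE"])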

How can you get CSV instead of JSON from the HTTP API of InfluxDB?

I want to use an InfluxDB within the context of business intelligence:
ETL, joining data from other databases, and creating live dashboards.
Right now, we are using standard BI-Tools such as QLIK or Microsoft PowerBI.
According to the documentation, the HTTP API should be used for querying (https://docs.influxdata.com/influxdb/v1.2/guides/querying_data/).
My problem is that the output of the API seems to be JSON only. That means each analyst first has to figure out how to transform the JSON into table format before joining other data, etc.
Is it possible to tell the API to produce CSV-like table output?
Do you have recommendations for which tools to use to produce good dashboards? I tried Grafana, but it seemed to fall short when joining other data.
You can use -H "Accept: application/csv" in your curl to have a response in CSV. For instance:
$ curl -G 'http://localhost:8086/query' --data-urlencode "db=my_db" --data-urlencode "q=SELECT * FROM \"cpu\"" -H "Accept: application/csv"
name,tags,time,host,region,value
cpu,,1493031640435991638,serverA,us_west,0.64
You can use jq to convert the JSON output to CSV as follows, which also allows you to get RFC3339 formatted timestamps:
jq -r "(.results[0].series[0].columns), (.results[0].series[0].values[]) | #csv"
which gives the output
"time","ppm","T"
"2019-01-17T19:45:00Z",864.5,18.54
"2019-01-17T19:50:00Z",861.4,18.545
"2019-01-17T19:55:00Z",866.2,18.5
"2019-01-17T20:00:00Z",863.9,18.47
and works because:
(.results[0].series[0].columns) gets the column names as an array
, concatenates the two outputs
(.results[0].series[0].values[]) gets the data values as arrays
| @csv pipes everything through jq's built-in CSV formatter
-r is used to get raw output
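The same transformation also works in plain Python instead of jq (a sketch; it assumes the response shape shown above and reads it from a file named response.json, which is hypothetical):
import csv
import json
import sys

# Mirror the jq filter: header row from "columns", data rows from "values"
with open("response.json") as f:
    series = json.load(f)["results"][0]["series"][0]

writer = csv.writer(sys.stdout)
writer.writerow(series["columns"])
writer.writerows(series["values"])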
Further resources:
Use https://jqplay.org/ to build queries
Other examples: Convert JSON array into CSV using jq
https://unix.stackexchange.com/questions/429241/convert-json-to-csv
How to convert arbirtrary simple JSON to CSV using jq?