I am using an API to get data from NetFlow Analyzer. I get a JSON file formatted like this:
{"startTime":"2017-12-29 11:58","resultVector":[{"port":"*","app":"Unknown_App","dscpCode":"0","traffic":"4.77 MB","dscp":"Default","src":"20.xx.xx.2","dst":"10.xx.xx.1","dstport":"*","prot":"Unknown"}],"Type":"DestinationIN","devDetails":{"deviceID":"5000006","Total":"4.77 MB"},"TimeZone":"America/Chicago","endTime":"2018-01-05 11:58"}
I have been trying to use json2csv (https://github.com/jehiah/json2csv) from GitHub, and I had success with it for a different API and JSON output format. When I run
json2csv -k port,app,dscpCode,traffic,dscp,src,dst,dstport,prot -i filein.json -o fileout2.csv
I get a CSV file with nothing but ",,,,,". What I am trying to get are the traffic, source IP, and destination IP.
Running
json2csv -k startTime,resultVector -i filein.json -o fileout2.csv
gives me this output, which, while close, is not really CSV:
2017-12-29 11:58,[map[dscpCode:0 src:20.xx.xx.2 dst:10.xx.xx.1 prot:Unknown port:* app:Unknown_App dstport:* traffic:4.77 MB dscp:Default]]
I checked a few online sites, which report that this is valid RFC 4627 JSON. Is anyone else familiar with json2csv, or, failing that, is there another CLI tool for Linux that I can use in a script to convert?
This is a good job for the jq processor:
jq -r '.resultVector[] | [.traffic, .src, .dst] | @csv' filein.json > fileout2.csv
The final fileout2.csv contents:
"4.77 MB","20.xx.xx.2","10.xx.xx.1"
Typically I prefer CLI tools too, but if you want to quickly format some JSON into CSV you might also check out this:
https://json-csv.com/
It provides file upload as well as copy-pasting for quick results.
When typing the following cURL command into a command prompt, "curl https://jsonplaceholder.typicode.com/posts", it returns an array of JSON data (an online REST API).
I have been given a command, "curl -u recruiter:supersecret localhost:3000/raw". When typed in, the command should return JSON data.
Using json-server I was able to create a json file and host it locally. When typing the url it created, it displayed the JSON data.
How can I use that specific command to return JSON data?
Can anyone please provide some direction on how to go about doing this? Thanks.
It's not entirely clear what you're asking, but if you're curling /raw, the JSON file you're hosting with json-server should have a raw field like:
{ "raw": "some test data" }
Read the getting started section for json-server:
https://github.com/typicode/json-server
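As a minimal sketch of that setup (db.json and port 3000 are simply json-server's defaults; plain json-server does not check the recruiter:supersecret credentials itself, curl's -u merely sends an Authorization header):
# db.json
{ "raw": "some test data" }
# serve it locally (json-server listens on localhost:3000 by default)
json-server --watch db.json
# then the given command fetches the raw field
curl -u recruiter:supersecret localhost:3000/raw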
I am working on testing a Flutter web server. I am writing a simple bash script to fetch some JSON data with an API request. The API request returns the following information as a JSON response:
{
  "code_version": {
    "engine_name": "flutter_renderV1",
    "proxy": "10.1.1.1:1090",
    "test_rate": true,
    "test_density": "0.1",
    "mapping_eng": "flutter_default_mapper"
  },
  "developer_info": {
    "developerid": "30242",
    "context": true,
    "request_timestamp": "156122441"
  }
}
Once this is received, I save it into a local file named server_response{$id}.json. I need to collect the test_density value under the code_version object. I have tried several awk and sed commands to fetch the data, but unfortunately I cannot get the exact output in my terminal.
You need to install a powerful JSON query processor like jq; you can easily install it from here.
Once you have installed jq, try the following command to extract the value for that JSON key.
Suppose your file is named server_response_123.json:
jq '.code_version.test_density' server_response_123.json
The output will be:
"0.1"
I am trying to get multiple files from a URL that returns JSON, and save them as JSON files.
I have tried the code below:
for i in {23..24}
do
wget "https://some url/${i}" > "${i}".json;
done
However, it only saves, for example, a file named "23" containing the returned JSON as text, not "23.json".
Use the -O option instead to download the file with the required name. (By default, wget names the output file after the last part of the URL and writes its progress messages to stderr, so the > redirection in your loop captures nothing.)
for i in {23..24}
do
wget -O "${i}".json "https://some url/${i}" ;
done
Or use curl
for i in {23..24}
do
curl "https://some url/${i}" > "${i}".json;
done
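As an optional refinement (purely an assumption about what you might want, not something the question requires), curl's -f/--fail flag stops an HTTP error page from being saved as if it were JSON, and -o names the output file directly:
for i in {23..24}
do
curl -f -o "${i}".json "https://some url/${i}" || echo "request ${i} failed"
done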
Influx: how do I upload JSON with the REST API?
When I read data from Influx using, say, the REST query endpoint, it comes back in JSON format. Now, to upload, they are saying JSON is deprecated and we have to do it in a binary format. Really?
Reading with this gives me data in JSON format:
curl -G 'http://localhost:8086/query' --data-urlencode "db=my_db" --data-urlencode "q=select * from \"server1.rte.set\" limit 1" > test.txt
Writes have to be in this binary format:
curl -i -XPOST 'http://localhost:8086/write?db=my_db' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
Why would anybody do this? Keep both JSON, or keep both binary.
The current version of InfluxDB no longer supports the JSON write path, and there is no binary protocol either; curl's --data-binary flag just sends the plain-text line protocol body unaltered. The predominant reason JSON was deprecated is that decoding it was the largest performance bottleneck in the system.
For more information, see the GitHub issue comments 107043910 and 106968181.
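For what it's worth, several line-protocol points can be batched in one write request by separating them with newlines; a small sketch reusing the database and measurement from above (the second point's values are made up for illustration):
curl -i -XPOST 'http://localhost:8086/write?db=my_db' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000
cpu_load_short,host=server02,region=us-west value=0.55 1434055562000000010'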
I have a large dataset in JSON format that I would like to upload to IrisCouchDB.
I found the following instructions: http://kxepal.iriscouch.com/docs/1.3/api/database/common.html
But I am a newbie, and it also seems to be for a single JSON document. I'm afraid I will just create one huge entry instead of the multiple document entries I want.
I have NodeJS but I don't have Cradle. Will I need it in order to perform this function? Any help is greatly appreciated!
Oh, no! It's a bad idea to rely on the docs at my Iris host; that was a preview dev build. Please follow the official docs: http://docs.couchdb.org/en/latest/api/index.html
To update multiple documents, you need to use the /db/_bulk_docs resource:
curl -X POST http://localhost:5984/db/_bulk_docs \
-H "Content-Type:application/json" \
-d '{"docs":[{"name":"Yoda"}, {"name":"Han Solo"}, {"name":"Leia"}]}'
So you can see the format of the data you should send to CouchDB. Now it all depends on your JSON file's format: if the whole file is an array of objects, just wrap its content in a {"docs": ...} object; if not, you will have to write a small script to convert the file data into the required format.
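If jq is available, that wrapping can be done from the shell; a minimal sketch, assuming a hypothetical data.json that is a top-level array of document objects:
# wrap the array in a {"docs": ...} object
jq '{docs: .}' data.json > bulk.json
# then post it to the bulk endpoint
curl -X POST http://localhost:5984/db/_bulk_docs \
-H "Content-Type: application/json" \
-d @bulk.json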