Save JSON from API using a bash script

I am trying to fetch multiple files from a URL that returns JSON, and save each response as a JSON file.
I have tried the code below:
for i in {23..24}
do
wget "https://some url/${i}" > "${i}".json;
done
However, it only saves the response to a file named, for example, "23" (which contains the returned JSON as text), not to "23.json".

Use the -O option instead to save the download under the required name. wget names the output file after the URL and writes its messages to stderr rather than the response body to stdout, so your redirect captures nothing:
for i in {23..24}
do
wget -O "${i}".json "https://some url/${i}" ;
done
Or use curl, which writes the response body to stdout, so the redirect works:
for i in {23..24}
do
curl "https://some url/${i}" > "${i}".json;
done
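As an aside, if some of the URLs can fail, a slightly more defensive sketch of the curl loop (the URL is still a placeholder) avoids saving HTTP error pages as .json files:
for i in {23..24}
do
    # -f: fail on HTTP errors instead of saving the error body
    # -sS: hide the progress meter but keep error messages
    # -L: follow redirects
    curl -fsSL "https://some url/${i}" -o "${i}.json" || echo "download ${i} failed" >&2
done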

Related

cURL returns JSON data

When I type the following cURL command into the command prompt, "curl https://jsonplaceholder.typicode.com/posts", it returns an array of JSON data (an online REST API).
I have been given a command, "curl -u recruiter:supersecret localhost:3000/raw". When typed in, the command should return JSON data.
Using json-server I was able to create a JSON file and host it locally. When I typed in the URL it created, it displayed the JSON data.
How can I use that specific command to return JSON data?
Can anyone please provide some direction on how to go about doing this?
Thanks.
It's not clear what you're asking. If you're curling /raw, the JSON file you're hosting with json-server should have a raw field like:
{ "raw": "some test data" }
Read the getting started section for json-server:
https://github.com/typicode/json-server
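For concreteness, a minimal sketch of the whole flow (the file name and test data are assumptions; note that plain json-server does not check the basic-auth credentials, curl just sends them):
echo '{ "raw": "some test data" }' > db.json
# serve it locally; json-server listens on port 3000 by default
json-server --watch db.json
# in another terminal, the given command should now return the raw field
curl -u recruiter:supersecret localhost:3000/raw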

Converting JSON to CSV via CLI

I am using an API to get data from NetFlow Analyzer. I get a JSON file formatted like this:
{"startTime":"2017-12-29 11:58","resultVector":[{"port":"*","app":"Unknown_App","dscpCode":"0","traffic":"4.77 MB","dscp":"Default","src":"20.xx.xx.2","dst":"10.xx.xx.1","dstport":"*","prot":"Unknown"}],"Type":"DestinationIN","devDetails":{"deviceID":"5000006","Total":"4.77 MB"},"TimeZone":"America/Chicago","endTime":"2018-01-05 11:58"}
I have been trying to use json2csv (https://github.com/jehiah/json2csv), found on GitHub, and did have success using it with a different API and JSON output format. When I run:
json2csv -k port,app,dscpCode,traffic,dscp,src,dst,dstport,prot -i filein.json -o fileout2.csv
I get a CSV file with nothing but ",,,,,". What I am trying to get are the traffic, source IP, and destination IP.
Running:
json2csv -k startTime,resultVector -i filein.json -o fileout2.csv
gives me this output, which, while close, is not really CSV:
2017-12-29 11:58,[map[dscpCode:0 src:20.xx.xx.2 dst:10.xx.xx.1 prot:Unknown port:* app:Unknown_App dstport:* traffic:4.77 MB dscp:Default]]
I checked a few online sites that report this is valid RFC 4627 JSON. Is anyone else familiar with json2csv, or, failing that, with another CLI tool for Linux that I can use in a script to convert?
This is a good job for the jq processor:
jq -r '.resultVector[] | [.traffic, .src, .dst] | @csv' filein.json > fileout2.csv
The final fileout2.csv contents:
"4.77 MB","20.xx.xx.2","10.xx.xx.1"
Typically I prefer CLI tools too.
If you want to quickly format some JSON into CSV, you might also check out this:
https://json-csv.com/
It provides file upload as well as copy-and-paste for quick results.

Importing a JSON file to Elasticsearch

I have used
curl -XPOST "http://localhost:9200/<my_index_name>" -d @<absolute_path_to_my_json_file>
Then when I tried to get the data using
curl -XGET "http://localhost:9200/<my_index_name>"
it gives me data only for the first line of my JSON file (along with other stuff - settings, mappings, aliases, etc.).
But why is it not able to load the entire JSON file?
BTW, I am using ES 2.4.0. If I have to use bulk, what is the syntax ?
Try using this:
curl -XPUT "http://localhost:9200/<my_index_name>" -d @<absolute_path_to_my_json_file>
GET index does not actually search; it returns the index metadata.
You also have to run something like GET index/_search.
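Since you asked about bulk syntax, here is a minimal sketch (the index, type, and field names are made up). The bulk body is newline-delimited JSON, alternating an action line and a document line, and it must end with a newline. A bulk.json could look like:
{ "index" : { "_index" : "my_index_name", "_type" : "my_type" } }
{ "field1" : "value1" }
{ "index" : { "_index" : "my_index_name", "_type" : "my_type" } }
{ "field1" : "value2" }
Send it with --data-binary (not -d, which strips newlines):
curl -XPOST "http://localhost:9200/_bulk" --data-binary @bulk.json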

How to upload multiple documents with multiple JSON files to Cloudant DB via cURL?

Currently I am able to PUT a single JSON file to a document in Cloudant using this: curl -X PUT 'https://username.cloudant.com/dummydb/doc3' -H "Content-Type: application/json" -d @numbers.json. I have many JSON files to be uploaded as different documents in the same DB. How can that be done?
So you definitely want to use Cloudant's _bulk_docs API endpoint in this scenario. It's more efficient (and cost-effective) if you're doing a bunch of writes. You basically POST an array that contains all your JSON docs. Here's the documentation on it: https://docs.cloudant.com/document.html#bulk-operations
Going one step further, so long as you've structured your JSON file properly, you can just upload the file to _bulk_docs. In cURL, that would look something like this: curl -X POST -d @file.json <domain>/db/_bulk_docs ... (plus the content type and all that other verbose stuff).
One step up from that would be using the ccurl (CouchDB/Cloudant cURL) tool that wraps your cURL statements to Cloudant and makes them less verbose. See https://developer.ibm.com/clouddataservices/2015/10/19/command-line-tools-for-cloudant-and-couchdb/ from https://stackoverflow.com/users/4264864/glynn-bird for more.
Happy Couching!
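For what it's worth, a sketch of how such a file could be structured (the document contents are made up); the endpoint expects an object with a docs array rather than a bare array:
{
  "docs": [
    { "name": "first document" },
    { "name": "second document" }
  ]
}
curl -X POST -d @file.json "https://username.cloudant.com/dummydb/_bulk_docs" -H "Content-Type: application/json"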
You can use a for loop to create a document from each JSON file.
For example, in the command below I have 4 JSON files in my directory and I create 4 documents in my people database:
for file in *.json
> do
> curl -d @$file https://username:password@myinstance.cloudant.com/people/ -H "Content-Type:application/json"
> done
{"ok":true,"id":"763a28122dad6c96572e585d56c28ebd","rev":"1-08814eea6977b2e5f2afb9960d50862d"}
{"ok":true,"id":"763a28122dad6c96572e585d56c292da","rev":"1-5965ef49d3a7650c5d0013981c90c129"}
{"ok":true,"id":"763a28122dad6c96572e585d56c2b49c","rev":"1-fcb732999a4d99ab9dc5462593068bed"}
{"ok":true,"id":"e944282beaedf14418fb111b0ac1f537","rev":"1-b20bcc6cddcc8007ef1cfb8867c2de81"}

Where to specify the file path while running elasticsearch Bulk API

I am new to Elasticsearch, and I am querying it from the Chrome extension Postman.
I want to enter bulk data into it from JSON using the Bulk API.
I have seen the command:
curl -s -XPOST 'http://jfblouvmlxecs01:9200/_bulk' --data-binary @bulk.json
While using Postman, I do not know where to store the file bulk.json.
Currently I have stored it at C:\elasticsearch-1.5.2\bin\bulk.JSON
The command I am using is http://localhost:9200/_bulk --data-binary @bulk.JSON
This is throwing the following error:
"error": "InvalidIndexNameException[[_bulk --data-binary #bulk.JSON] Invalid index name [_bulk --data-binary #bulk.JSON], must not contain the following characters [\, /, *, ?, \", <, >, |, , ,]]",
"status": 400 }
Can someone please suggest where to store the JSON file?
Or am I doing something wrong here?
You can store your file anywhere and then pass the path like this:
curl -s -XPOST 'http://jfblouvmlxecs01:9200/_bulk' --data-binary @"/blah/blah/bulk.json"
If you want to do it with Postman, there is an answer here that explains how.
But if you want to do this for large data, I wouldn't recommend Postman or Sense. Use curl directly, or Logstash.
See also this: https://github.com/taskrabbit/elasticsearch-dump
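For completeness, a sketch of the elasticsearch-dump route (the index name and file path are assumptions):
# install once, then load a file of documents into an index
npm install -g elasticdump
elasticdump --input=/blah/blah/data.json --output=http://localhost:9200/my_index --type=data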