Batch Processing Curl API Requests in Bash?

Need to query an API endpoint for specific parameters, but there's a limit of 20 parameters per request.
The params are gathered into an array and stored in a JSON file, then referenced via a variable tacked onto the end of my curl command, which produces the full API request:
curl -s -g -X GET '/api/endpoint?parameters='"$myparams"
e.g.
curl -s -g -X GET '/api/endpoint?parameters=["1","2","3","etc"]'
This works fine when the params JSON is small and below the per-request limit. The only problem is that the params list fluctuates and is often many times larger than the limit.
My usual instinct would be to iterate through the param lines one at a time, but that would create many requests and probably get me blocked.
What would be a good approach to parse the parameter array JSON and generate curl API requests that respect the parameter limit with the minimum number of requests? Say it's 115 params right now; that should produce 5 API requests with 20 params tacked on and 1 with 15.

You can chunk the array with jq's undocumented _nwise function and build one query per chunk, e.g.:
<<JSON jq -r '_nwise(3) | "/api/endpoint?parameters=\(.)"'
["1","2","3","4","5","6","7","8"]
JSON
Output:
/api/endpoint?parameters=["1","2","3"]
/api/endpoint?parameters=["4","5","6"]
/api/endpoint?parameters=["7","8"]
This will generate the URLs for your curl calls, which you can then save in a file or consume directly:
<input.json jq -r ... | while read -r url; do curl -s -g -XGET "$url"; done
Or generate the query string only and use it in your curl call (pay attention to proper escaping/quoting):
<input.json jq -c '_nwise(3)' | while read -r qs; do curl -s -g -XGET "/api/endpoint?parameters=$qs"; done
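For the 20-item limit in the question, a minimal sketch (assuming the array lives in a file named params.json and keeping the placeholder /api/endpoint from above; with 115 params this issues 5 requests of 20 and 1 of 15):
# one compact JSON array of at most 20 params per line, one request per line
<params.json jq -c '_nwise(20)' | while read -r qs; do
  curl -s -g -X GET "/api/endpoint?parameters=$qs"
done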

Depending on your input format and requirements regarding robustness, you might not need jq at all; sed and paste can do the trick:
<<IN sed 's/\\/&&/g;s/"/\\"/g' | sed 's/^/"/;s/$/"/' | paste -sd ',,\n' | while read -r items; do curl -s -g -XGET "/api/endpoint?parameters=[$items]"; done
1
2
3
4
5
6
7
8
IN
This effectively runs the following curl commands:
curl -s -g -XGET /api/endpoint?parameters=["1","2","3"]
curl -s -g -XGET /api/endpoint?parameters=["4","5","6"]
curl -s -g -XGET /api/endpoint?parameters=["7","8"]
Explanation:
sed 's/\\/&&/g;s/"/\\"/g': replace \ with \\ and " with \".
sed 's/^/"/;s/$/"/': wrap each line/item in quotes
paste -sd ',,\n': take 3 lines and join them with commas (repeat the comma character as many times as your chunk size minus 1; see the sketch below for building that delimiter list automatically)
while read -r items; do curl -s -g -XGET "/api/endpoint?parameters=[$items]"; done: read each generated group of items, wrap it in brackets, and run curl
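If you need chunks of 20 rather than 3, typing out 19 commas is error-prone. A sketch that builds the delimiter list programmatically (this assumes GNU paste and one item per line in a file named items.txt, both of which are assumptions here):
# 19 commas followed by \n make paste join 20 lines per output line
delims="$(printf ',%.0s' $(seq 19))\n"
sed 's/\\/&&/g;s/"/\\"/g;s/^/"/;s/$/"/' items.txt \
| paste -sd "$delims" \
| while read -r items; do curl -s -g -XGET "/api/endpoint?parameters=[$items]"; done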

Related

Filtering array using curl (qBittorrent Web API)

Hoping someone can help me. I'm trying to understand the qBittorrent Web API. At the moment I'm listing all the paused torrents with:
curl -i http://localhost:8080/api/v2/torrents/info?category=test
The problem is that this lists the whole JSON array. My question is: can I display just the "name" or "hash" fields? This is all using curl through cmd, but I've also tried it in Git Bash & PowerShell:
[{"eta":8640000,"f_l_piece_prio":false,"force_start":false,"hash":"8419d48d86a14335c83fdf4930843438a2f75a6b","last_activity":1664863523,"magnet_uri":"","max_seeding_time":0,"**name**":"TestTorrentName","num_complete":12,"num_incomplete":1,"num_leechs":0,"num_seeds":0,"priority":0,"progress":1,"ratio":0,"ratio_limit":-2,"save_path":"F:\\Completed\\test\\","seeding_time":0,"seeding_time_limit":-2,"seen_complete":1664863523,"seq_dl":false,"size":217388295,"state":"pausedUP","super_seeding":false,"tags":"","time_active":569,"total_size":217388295,"tracker":"udp://open.stealth.si:80/announce","trackers_count":10,"up_limit":-1,"uploaded":0,"uploaded_session":0,"upspeed":0}]
I've tried the following, which should work according to https://jqplay.org/:
curl -i http://localhost:8080/api/v2/torrents/info?category=test | jq --raw-output '.[] | .name'
But unfortunately I'm getting the following error:
curl -i http://localhost:8080/api/v2/torrents/info?category=test | jq --raw-output '.[] | .name'
  % Total    % Received % Xferd  Average Speed   Time
'.name'' is not recognized as an internal or external command,
operable program or batch file.
curl -i http://localhost:8080/api/v2/torrents/info?category=test | jq --raw-output '.[] | .name'
The -i makes curl include the response headers in its output; that output is piped to jq, but jq can only parse JSON and therefore fails.
Remove the -i, and optionally replace it with -s to suppress the progress stats:
curl -s http://localhost:8080/api/v2/torrents/info?category=test | jq --raw-output '.[] | .name'
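If you want both fields per torrent, jq's string interpolation works on the same response (plain jq syntax, nothing qBittorrent-specific assumed):
curl -s http://localhost:8080/api/v2/torrents/info?category=test | jq -r '.[] | "\(.name) \(.hash)"'
Note that in cmd.exe you generally need double quotes around the jq filter; cmd does not treat single quotes as quoting characters.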

Search for artifacts 30d or older in JFrog Artifactory

I would like to get the list of artifacts which were created 30 days ago or earlier.
I have a script that works within a given time period, but I have to change the time (in milliseconds) every run. That is a bit tedious, so I'd like to list the artifacts created 30 or more days back without modifying my script every time.
This is what I am using now:
RESULTS=$(curl -s -X GET -u <username>:<password> \
"https://<domain>.artifactoryonline.com/<domain>/api/search/creation?from=$START_TIME&to=$END_TIME&repos=$REPO" \
| grep uri \
| awk '{print $3}' \
| sed 's/.$//' \
| sed 's/.$//' \
| sed -r 's/^.{1}//')
Your best option here is probably to use JFrog's AQL and query for artifacts with "created" older than X days. For example, you can use an AQL query like:
items.find({"created" : {"$before" : "30d"}})
You can read more about AQL in general, and about "Relative Time Operators" specifically, in the Artifactory documentation.
So, an example curl with a limit of 10 artifacts would look like:
curl -X POST -u <user>:<password> -H "content-type: text/plain" \
-d 'items.find({"created":{"$before":"30d"}}).sort({"$desc" : ["created"]}).limit(10)' \
'https://<your Artifactory server>:<port>/artifactory/api/search/aql'
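Since the AQL response is a JSON object with a results array, you can also pipe it to jq instead of the grep/awk/sed chain above. A sketch (the repo, path, and name fields are standard in AQL item results, but verify against your actual response):
curl -s -X POST -u <user>:<password> -H "content-type: text/plain" \
-d 'items.find({"created":{"$before":"30d"}})' \
'https://<your Artifactory server>:<port>/artifactory/api/search/aql' \
| jq -r '.results[] | "\(.repo)/\(.path)/\(.name)"'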

Why doesn't the GitHub API return all branches for a repository?

As the title says: why does this glibc mirror on GitHub show no output in the terminal with this command:
wget --no-check-cert -q -O - "http://api.github.com/repos/bminor/glibc/branches" | grep release
but on GitHub the release branches clearly exist.
The response is paginated. If you look at response headers:
Link: <https://api.github.com/repositories/13868694/branches?page=2>; rel="next", <https://api.github.com/repositories/13868694/branches?page=10>; rel="last"
you'll see that you're looking at page 1 of 10. You can increase the number of records per page, up to 100, using the per_page query string parameter:
curl -Lv 'http://api.github.com/repos/bminor/glibc/branches?per_page=100'
but you still have to get the other pages in your case; the desired page is selected using the page query string parameter, e.g.:
$ curl -sL 'http://api.github.com/repos/bminor/glibc/branches?page=2&per_page=100' \
| jq -r '.[] | select(.name | contains("release")).name'
hjl/release/2.20/master
hjl/x32/release/2.12
hjl/x32/release/2.15
Alternatively, using the GitHub CLI:
gh api --method GET repos/bminor/glibc/branches \
--raw-field page=2 --raw-field per_page=100 \
--jq '.[] | select(.name | contains("release")).name'
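If you just want every page in one go, gh api can also follow the pagination Link headers itself with its --paginate flag, e.g.:
gh api --paginate repos/bminor/glibc/branches \
--jq '.[] | select(.name | contains("release")).name'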

Parsing an HTTP response from a curl POST [duplicate]

This question already has answers here:
A better way to extract JSON value in bash script
(2 answers)
Closed 6 years ago.
I'm currently trying to grab the N_596164000673190002 value from a curl command's output and assign it to a variable.
This is the command:
curl -L -H 'X-Cisco-Meraki-API-Key: mykeygoeshere' -X POST -H'Content-Type: application/json' --data-binary '{"name":"'"$NETWORK_NAME"'", "type":"appliance", "timeZone":"'"$TIME_ZONE"'"}' 'https://dashboard.meraki.com/api/v0/organizations/foobar/networks'
This is the response:
{"id":"N_596164000673190002","organizationId":"foo","type":"appliance","name":"bar","timeZone":"America/Chicago","tags":""}
How do I successfully read and grab the value after id (without the double quotes), while simultaneously assigning it to a variable, $NETWORK_ID? I imagine this can all be done in one line.
If this is successful, echo $NETWORK_ID should return N_596164000673190002
To parse JSON in bash, people usually use jq, which is packaged for most Unix distributions.
Try the following :
NETWORK_ID=$(my_curl_command | jq -r '.id')
Here, '.id' is a filter indicating we want to retrieve the value for the key id, and the -r flag is used to remove double quotes from the output.
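Putting that together with the curl command from the question (only -s is added, to keep curl's progress meter out of the pipe):
NETWORK_ID=$(curl -sL -H 'X-Cisco-Meraki-API-Key: mykeygoeshere' -X POST \
-H 'Content-Type: application/json' \
--data-binary '{"name":"'"$NETWORK_NAME"'", "type":"appliance", "timeZone":"'"$TIME_ZONE"'"}' \
'https://dashboard.meraki.com/api/v0/organizations/foobar/networks' \
| jq -r '.id')
echo "$NETWORK_ID"   # should print something like N_596164000673190002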
Pipe the JSON output to Python's json module to grab the id value you need, and use bash command substitution to assign the result to your NETWORK_ID environment variable.
NETWORK_ID=$(curl -L -H 'X-Cisco-Meraki-API-Key: mykeygoeshere' -X POST \
-H 'Content-Type: application/json' \
--data-binary '{"name":"'"$NETWORK_NAME"'", "type":"appliance", "timeZone":"'"$TIME_ZONE"'"}' \
'https://dashboard.meraki.com/api/v0/organizations/foobar/networks' \
| python -c "import sys, json; print(json.load(sys.stdin)['id'])")

Converting a bash command output into JSON and serving it over http on the fly

I want to convert the output of the ifstat command into JSON and serve it over HTTP on the fly, to be used by a JavaScript graph app. Are there any lightweight command-line solutions (sed or awk, say) that I can use? I don't want to store the JSON output on disk, and ideally the web server would be a small, lightweight command-line tool that I can pipe the JSON into.
EDIT 1:
This is the live streaming chart library which will use the data. I'm not keen on a specific web server; any webserver that does the job would be fine.
This is what I have tried.
Terminal #1
ifstat -n | awk 'NR>2{print systime(),$0; fflush()}' | tee ifstat.log
Terminal #2
while :
do
{
echo -e "HTTP/1.1 200 OK"
echo -e "Content-Type: application/json\n"
tail -n1 ifstat.log | awk '{ printf("{\"time\":%s, \"in\":%s, \"out\":%s}\n", $1, $2, $3) }'
} | nc -l 8000   # some netcat variants need: nc -l -p 8000
done
In Firefox, opening http://localhost:8000 shows:
{"time":1332052321, "in":1.24, "out":2.62}
I know little about JSON, so the output above may be invalid; you may need to rewrite the awk command.
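A quick way to check that each generated line is in fact valid JSON is to round-trip it through a parser; jq exits non-zero on a parse error:
# prints "valid JSON" only if the generated line parses
tail -n1 ifstat.log | awk '{ printf("{\"time\":%s, \"in\":%s, \"out\":%s}\n", $1, $2, $3) }' | jq . >/dev/null && echo "valid JSON"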