In Bash, How To Sort Echo Output from JQ? - json

I want to make an automatic script that parses JSON for some values and echoes them somewhere (in this test case, the terminal). Here is my complicated code:
for ((i = 0 ; i < 34 ; i++)); do
  # ex. output: "DKI JAKARTA: 580584"
  echo "$(curl -s https://data.covid19.go.id/public/api/prov.json | jq -r ".list_data[$i].key"): \
$(curl -s https://data.covid19.go.id/public/api/prov.json | jq -r ".list_data[$i].jumlah_kasus")"
done
With this loop I got 34 lines:
DKI JAKARTA: 580584
JAWA BARAT: 402548
JAWA TENGAH: 265345
JAWA TIMUR: 178799
KALIMANTAN TIMUR: 79876
RIAU: 72361
SULAWESI SELATAN: 65330
DAERAH ISTIMEWA YOGYAKARTA: 65265
BANTEN: 59091
SUMATERA BARAT: 53125
BALI: 51499
SUMATERA UTARA: 36854
KALIMANTAN SELATAN: 36645
SUMATERA SELATAN: 29725
KEPULAUAN RIAU: 27854
KALIMANTAN TENGAH: 26719
LAMPUNG: 22731
KEPULAUAN BANGKA BELITUNG: 21976
PAPUA: 21150
NUSA TENGGARA TIMUR: 19977
ACEH: 19577
SULAWESI UTARA: 16633
KALIMANTAN BARAT: 15585
SULAWESI TENGAH: 13977
KALIMANTAN UTARA: 13466
JAMBI: 13332
NUSA TENGGARA BARAT: 13197
SULAWESI TENGGARA: 11948
PAPUA BARAT: 11682
BENGKULU: 10778
MALUKU: 9067
SULAWESI BARAT: 6043
GORONTALO: 5913
MALUKU UTARA: 5828
My question is: how can I sort the lines alphabetically while keeping each province name and its value together?
E.g. like this:
ACEH: 19577
BALI: 51499
# etc...
Any answer will be appreciated, thank you. Let me know if anything is unclear so I can explain further.

Is this what you expected:
curl -s https://data.covid19.go.id/public/api/prov.json |\
jq -r '.list_data|sort_by(.key)[]|"\(.key): \(.jumlah_kasus)"'

Output everything as one line with a unique separator, sort, then replace that separator with a newline.
for ...; do
echo "$(one)!$(two)"
done | sort | tr '!' '\n'
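In this particular case each name/value pair already ends up on a single line, so a plain sort after the loop is enough to keep name and value in sync. A minimal sketch (fetching the JSON once into a variable is my own addition, to avoid two curl calls per province):
json=$(curl -s https://data.covid19.go.id/public/api/prov.json)   # fetch once instead of 68 times
for ((i = 0; i < 34; i++)); do
  key=$(jq -r ".list_data[$i].key" <<<"$json")
  cases=$(jq -r ".list_data[$i].jumlah_kasus" <<<"$json")
  echo "$key: $cases"
done | sort   # one record per line, so sort keeps the pairs together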

Related

difficulty using bash to pass the contents of `top` into a json file

I want to use a bash script to capture the output of the top command and then write it to a JSON file, but I'm having difficulty writing the slashes/escape codes/line breaks into a file as a valid JSON object.
Here's what I tried:
#!/bin/bash
message1=$(top -n 1 -o %CPU)
message2=$(top -n 1 -o %CPU | jq -aRs .)
message3=$(top -n 1 -o %CPU | jq -Rs .)
message4=${message1//\\/\\\\/}
echo "{\"message\":\"${message2}\"}" > file.json
But when I look at file.json, it looks something like this:
{"message":""\u001b[?1h\u001b=\u001b[?25l\u001b[H\u001b[2J\u001b(B\u001b[mtop - 21:34:53 up 55 days, 5:14, 2 users, load average: 0.17, 0.09, 0.03\u001b(B\u001b[m\u001b[39;49m\u001b(B\u001b[m\u001b[39;49m\u001b[K\nTasks:\u001b(B\u001b[m\u001b[39;49m\u001b[1m 129 \u001b(B\u001b[m\u001b[39;49mtotal,\u001b(B\u001b[m\u001b[39;49m\u001b[1m 1 \u001b(B\u001b[m\u001b[39;49mrunning,\u001b(B\u001b[m\u001b[39;49m\u001b[1m 128 \u001b(B\u001b[m\u001b[39;49msleeping,\u001b(B\u001b[m
Each of the other attempts, message1 through message4, results in various JSON syntax issues.
Can anyone suggest what I should try next?
You don't need all the bells and whistles of echo and multiple jq invocations:
top -b -n 1 -o %CPU | jq -aRs '{"message": .}' >file.json
Or pass the output of the top command as an argument variable.
Using --arg to pass arguments to jq:
jq -an --arg msg "$(top -b -n 1 -o %CPU)" '{"message": $msg}' >file.json
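In both variants the -b (batch mode) flag is what keeps top's terminal escape sequences out of the capture. As a quick round-trip check (my own suggestion), you can pull the message back out and confirm the file parses as JSON:
jq -r '.message' file.json | head -n 3   # prints the first few lines of the captured top output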

How to find value of a key in a json response trace file using shell script

I have a response trace file containing the response below:
#RESPONSE BODY
#--------------------
{"totalItems":1,"member":[{"name":"name","title":"PatchedT","description":"My des_","id":"70EA96FB313349279EB089BA9DE2EC3B","type":"Product","modified":"2019 Jul 23 10:22:15","created":"2019 Jul 23 10:21:54",}]}
I need to fetch the value of the "id" key into a variable that I can use further on in my code.
Expected result:
echo $id should give me the value 70EA96FB313349279EB089BA9DE2EC3B
With valid JSON (remove the first two lines with sed, then parse with jq):
id=$(sed '1,2d' file | jq -r '.member[]|.id')
Output to variable id:
70EA96FB313349279EB089BA9DE2EC3B
I would strongly suggest using jq to parse JSON.
But given that JSON is mostly compatible with Python dictionaries and lists, this HACK would work too:
$ cat resp
#RESPONSE BODY
#--------------------
{"totalItems":1,"member":[{"name":"name","title":"PatchedT","description":"My des_","id":"70EA96FB313349279EB089BA9DE2EC3B","type":"Product","modified":"2019 Jul 23 10:22:15","created":"2019 Jul 23 10:21:54",}]}
$ awk 'NR==3{print "a="$0;print "print a[\"member\"][0][\"id\"]"}' resp | python
70EA96FB313349279EB089BA9DE2EC3B
$ sed -n '3s|.*|a=\0\nprint a["member"][0]["id"]|p' resp | python
70EA96FB313349279EB089BA9DE2EC3B
Note that this code is:
1. a dirty hack, needed only because your system does not have the right tool (jq), and it relies on Python 2's bare print statement;
2. susceptible to code injection, so use it ONLY IF you trust the response received from your service.
Quick and dirty (though really, don't use eval):
eval $(cat response_file | tail -1 | awk -F , '{ print $5 }' | sed -e 's/"//g' -e 's/:/=/')
It relies on the exact structure you gave, and hopes there is no "," in any value before "id".
Or assign it yourself:
id=$(cat response_file | tail -1 | awk -F , '{ print $5 }' | cut -d: -f2 | sed -e 's/"//g')
Note that you can't access the name field with this trick: being the first item of the member array, it gets "swallowed" into the same comma-separated field as "member":[{, so { print $2 } won't give you a clean value. You can use an even uglier hack to retrieve it, though:
id=$(cat response_file | tail -1 | sed -e 's/:\[/,/g' -e 's/}\]//g' | awk -F , '{ print $5 }' | cut -d: -f2 | sed -e 's/"//g')
But if you can, use jq; it is the right tool for this job rather than ugly hacks like these (even if they work...).
When you can't use jq, you can consider
id=$(grep -Eo "[0-9A-F]{32}" file)
This only works when the file looks like what I expect, so you might need to add extra checks, like:
id=$(grep "My des_" file | grep -Eo "[0-9A-F]{32}" | head -1)

jq to convert two text strings into separate json objects

How do I convert these two text strings into separate JSON objects?
Text strings:
start process: Mon May 15 03:14:09 UTC 2017
logfilename: log_download_2017
JSON output:
{
  "start process": "Mon May 15 03:14:09 UTC 2017"
}
{
  "logfilename": "log_download_2017"
}
Shell script:
logfilename="log_download_2017"
echo "start process: $(date -u)" | tee -a $logfilename.txt | jq -R split(:) >> $logfilename.json
echo "logfilename:" $logfilename | tee -a $logfilename.txt | jq -R split(:) >> $logfilename.json
One approach would be to use index/1, e.g. along these lines:
jq -R 'index(":") as $ix | {(.[:$ix]) : .[$ix+1:]}'
Or, if your jq supports regex, you might like to consider:
jq -R 'match( "([^:]*):(.*)" ) | .captures | {(.[0].string): .[1].string}'
or:
jq -R '[capture( "(?<key>[^:]*):(?<value>.*)" )] | from_entries'
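As a usage sketch, feeding the two lines through the first filter (the -c flag and the ltrimstr call are my own additions, to get compact output and to drop the space after the colon):
printf '%s\n' "start process: Mon May 15 03:14:09 UTC 2017" "logfilename: log_download_2017" |
jq -Rc 'index(":") as $ix | {(.[:$ix]): (.[$ix+1:] | ltrimstr(" "))}'
which prints one object per line:
{"start process":"Mon May 15 03:14:09 UTC 2017"}
{"logfilename":"log_download_2017"}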

jq to convert two text strings into a single json object

How do I convert these two text strings into a single JSON object?
Text strings:
start process: Mon May 15 03:14:09 UTC 2017
logfilename: log_download_2017
JSON output:
{
  "start process": "Mon May 15 03:14:09 UTC 2017",
  "logfilename": "log_download_2017"
}
Shell script:
logfilename="log_download_2017"
echo "start process: $(date -u)" | tee -a $logfilename.txt | jq -R . >> $logfilename.json
echo "logfilename:" $logfilename | tee -a $logfilename.txt | jq -R . >> $logfilename.json
As mentioned e.g. at Use jq to turn x=y pairs into key/value pairs, the basic task of converting a key:value string can be accomplished in a number of ways. For example, you could start with:
index(":") as $ix | {(.[:$ix]) : .[$ix+1:]}
You evidently want to trim some spaces, which can be done using sub/2.
To combine the objects, you could use add. To do this in a single pass, invoke jq with -R -s.
Putting it all together, you could do worse than:
def trim: sub("^ +";"") | sub(" +$";"");

def s2o:
  (index(":") // empty) as $ix
  | {(.[:$ix]): (.[$ix+1:] | trim)};

split("\n") | map(s2o) | add

Convert ps aux to json

I'm trying to convert the output of ps aux into JSON format without using Perl or Python! For this I have read about jq, but I have had no success converting the command-line output into JSON.
How do I convert a simple ps aux to JSON?
ps aux | awk '
  BEGIN { ORS = ""; print " [ " }
  {
    printf "%s{\"user\": \"%s\", \"pid\": \"%s\", \"cpu\": \"%s\"}",
           separator, $1, $2, $3
    separator = ", "
  }
  END { print " ] " }'
Just adjust the columns you need from the ps aux output.
jq can read non-JSON input. You'll want to pre-process the input with awk first:
ps aux |
awk -v OFS=, '{print $1, $2}' |
jq -R 'split(",") | {user: .[0], pid: .[1]}'
If you want an array instead of a sequence of objects, pipe the output through jq --slurp '.'. (I swear there's a way to do that without an extra call to jq, but it escapes me at the moment.)
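For what it's worth, one way to avoid the second jq call (a sketch using the -n flag together with the inputs builtin) could be:
ps aux |
awk -v OFS=, '{print $1, $2}' |
jq -Rn '[inputs | split(",") | {user: .[0], pid: .[1]}]'
Here -n stops jq from consuming the first line as its initial input, and inputs then streams every raw line into a single array.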
Here's an only-jq solution based on tokenization.
Tokenization can be done using:
def tokens:
  def trim: sub("^ +";"") | sub(" +$";"");
  trim | splits(" +");
For illustration and brevity, let's consider only the first nine tokens:
[tokens] | .[0:9]
Invocation:
$ ps aux | jq -c -R -f tokens.jq
Or as a one-liner, you could get away with:
$ ps aux | jq -cR '[splits(" +")] | .[0:9]'
First few lines of output:
["USER","PID","%CPU","%MEM","VSZ","RSS","TT","STAT","STARTED"]
["p","1595","55.9","0.4","2593756","32832","??","R","24Jan17"]
["p","12472","26.6","12.6","4951848","1058864","??","R","Sat01AM"]
["p","13239","10.9","1.5","4073756","128324","??","R","Sun12AM"]
["p","12482","7.8","1.2","3876628","101736","??","R","Sat01AM"]
["p","32039","7.7","1.4","4786968","118424","??","R","12Feb17"]
["_windowserver","425","7.6","0.8","3445536","65052","??","Ss","24Jan17"]
Using the headers as object keys
See e.g.
https://github.com/stedolan/jq/wiki/Cookbook#convert-a-csv-file-with-headers-to-json
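A rough sketch of that header-driven idea applied directly to ps aux (my own; it keeps only the first four columns so that the space-containing COMMAND column can be ignored):
ps aux |
jq -Rn '
  [inputs | [splits(" +")] | .[0:4]] as $rows   # tokenize every line, keep the first four columns
  | $rows[0] as $header                         # USER PID %CPU %MEM from the header line
  | $rows[1:][]                                 # one array of fields per process
  | [$header, .] | transpose                    # pair each header name with its field
  | map({(.[0]): .[1]}) | add                   # merge the pairs into one object per process
'
Each process then comes out as an object keyed by USER, PID, %CPU and %MEM.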
I have a gist that converts ps output to JSON. It uses jq under the covers, so you need to install that, but you do not need to know jq.
It dumps the fields specified by the -o flag as an array of per-PID objects:
ps ax -o "stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,wchan,sz,pcpu,command" \
| jq -sRr ' sub("\n$";"") | split("\n") | ([.[0]|splits(" +")]) as $header | .[1:] | [.[] | [. as $x | range($header|length) | {"key": $header[.], "value": (if .==($header|length-1) then ([$x|splits(" +")][.:]|join(" ")|tojson|.[1:length-1]) else ([$x|splits(" +")][.]) end) } ] | from_entries]'
This builds an array of the header fields, maps each output line to an array of {key, value} objects, and then uses the built-in from_entries filter to aggregate these into one object per process in the output array.