Shell script: get JSON values from a JSON string

I have a file which contains a JSON object as a string:
{"STATUS":[{"STATUS":"S","When":1530779438,"Code":70,"Msg":"CGMiner stats","Description":"cgminer 4.9.0"}],"STATS":[{"CGMiner":"4.9.0","Miner":"9.0.0.5","CompileTime":"Sat May 26 20:42:30 CST 2018","Type":"Antminer Z9-Mini"},{"STATS":0,"ID":"ZCASH0","Elapsed":179818,"Calls":0,"Wait":0.000000,"Max":0.000000,"Min":99999999.000000,"GHS 5s":"16.39","GHS av":16.27,"miner_count":3,"frequency":"750","fan_num":1,"fan1":5760,"fan2":0,"fan3":0,"fan4":0,"fan5":0,"fan6":0,"temp_num":3,"temp1":41,"temp2":40,"temp3":43,"temp2_1":56,"temp2_2":53,"temp2_3":56,"temp_max":43,"Device Hardware%":0.0000,"no_matching_work":0,"chain_acn1":4,"chain_acn2":4,"chain_acn3":4,"chain_acs1":" oooo","chain_acs2":" oooo","chain_acs3":" oooo","chain_hw1":0,"chain_hw2":0,"chain_hw3":0,"chain_rate1":"5.18","chain_rate2":"5.34","chain_rate3":"5.87"}],"id":1}
Now I want to get some values from keys in this object within an sh script.
The following command works, but not for all keys!?
This works (I get "750"):
grep -o '"frequency": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
But this doesn't (empty output):
grep -o '"fan_num": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
Same with this:
grep -o '"fan1": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
I'm working on a Xilinx OS which has no Python, so jq won't work, and its grep has no "-P" option. Does anyone have an idea how to work with that? :)
Thanks and best regards,
dave

When you want to do more than just g/re/p you should be using awk, not combinations of greps+pipes, etc. (Your last two greps find nothing because the values of fan_num and fan1 are unquoted numbers, so a pattern that insists on a double quote after the colon can never match.)
$ awk -v tag='frequency' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
750
$ awk -v tag='fan_num' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
1
$ awk -v tag='fan1' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
5760
The above will work with any awk in any shell on any UNIX box. If you have GNU awk, with its 3rd arg to match() and gensub(), you can write it a bit more briefly:
$ awk -v tag='frequency' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)",a) { print gensub(/^"/,"",1,a[1]) }' file
750
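If several keys are needed, the portable variant is worth wrapping in a small sh function. A sketch (`json_val` and `stats.txt` are names invented here; the regex is the one from the answer above):

```shell
# Extract the value for a given key from a single-line JSON file.
# Works with any POSIX awk; handles quoted strings and bare numbers.
json_val() {
    key=$1
    file=$2
    awk -v tag="$key" '
        match($0, "\"" tag "\": *(\"[^\"]*|[0-9.]+)") {
            val = substr($0, RSTART, RLENGTH)
            sub(/^"[^"]+": *"?/, "", val)
            print val
        }' "$file"
}

printf '%s\n' '{"frequency":"750","fan_num":1,"fan1":5760}' > stats.txt
json_val frequency stats.txt   # 750
json_val fan_num   stats.txt   # 1
json_val fan1      stats.txt   # 5760
```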

Related

Bash script: How to read values from a JSON file and use them as input

I am trying to parse values from a JSON file that takes the following form (config.json):
[{
    "gain": "90",
    "exposure": "850",
    "whitebalanceR": "120",
    "whitebalanceG": "185",
    "whitebalanceB": "100",
    "Snap": "True"
}]
I parse each individual string like this (app.sh):
grep -o '"gain": "[^"]*' config.json | grep -o '[^"]*$'
grep -o '"exposure": "[^"]*' config.json | grep -o '[^"]*$'
grep -o '"whitebalanceR": "[^"]*' config.json | grep -o '[^"]*$'
grep -o '"whitebalanceG": "[^"]*' config.json | grep -o '[^"]*$'
grep -o '"whitebalanceB": "[^"]*' config.json | grep -o '[^"]*$'
This outputs the value of each string in the terminal.
I would like to use the value for each string as input to some function like this:
./someFunc -w '${whitebalanceR} ${whitebalanceG} ${whitebalanceG}' -ax6 -s ${exposure} -g ${gain} -i 1 -o -a -6
But it fails to run. What am I doing wrong, and how do I actually use the JSON values as input to a function in bash?
Much appreciated
IMHO, using bash to parse JSON is a bad idea. But if you have no choice, use jq as Tranbi suggested:
gain=$(jq -r '.[] | .gain' config.json)
echo "$gain"
The -r option is required to print the value without quotes.
Hope that helps!
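The same pattern extends to every key, and it also shows why the original `./someFunc` call failed: the single quotes around `'${whitebalanceR} …'` prevent variable expansion. A sketch (echo stands in for `./someFunc`, which only exists in the asker's environment):

```shell
# Recreate the sample config from the question
cat > config.json <<'EOF'
[{
    "gain": "90",
    "exposure": "850",
    "whitebalanceR": "120",
    "whitebalanceG": "185",
    "whitebalanceB": "100",
    "Snap": "True"
}]
EOF

# Pull each value into a shell variable
gain=$(jq -r '.[0].gain' config.json)
exposure=$(jq -r '.[0].exposure' config.json)
whitebalanceR=$(jq -r '.[0].whitebalanceR' config.json)
whitebalanceG=$(jq -r '.[0].whitebalanceG' config.json)

# Double quotes (not single quotes) so the variables expand;
# echo stands in for the question's ./someFunc here.
echo ./someFunc -w "${whitebalanceR} ${whitebalanceG}" -s "$exposure" -g "$gain"
```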

Difficulty using bash to pass the contents of `top` into a JSON file

I want to use a bash script to capture the output of the top command and then write it to a JSON file, but I'm having difficulty with the escapes, encodings, and line breaks: I can't produce a file containing a valid JSON object.
Here's what I tried:
#!/bin/bash
message1=$(top -n 1 -o %CPU)
message2=$(top -n 1 -o %CPU | jq -aRs .)
message3=$(top -n 1 -o %CPU | jq -Rs .)
message4=${message1//\\/\\\\/}
echo "{\"message\":\"${message2}\"}" > file.json
But when I look at file.json, it looks something like this:
{"message":""\u001b[?1h\u001b=\u001b[?25l\u001b[H\u001b[2J\u001b(B\u001b[mtop - 21:34:53 up 55 days, 5:14, 2 users, load average: 0.17, 0.09, 0.03\u001b(B\u001b[m\u001b[39;49m\u001b(B\u001b[m\u001b[39;49m\u001b[K\nTasks:\u001b(B\u001b[m\u001b[39;49m\u001b[1m 129 \u001b(B\u001b[m\u001b[39;49mtotal,\u001b(B\u001b[m\u001b[39;49m\u001b[1m 1 \u001b(B\u001b[m\u001b[39;49mrunning,\u001b(B\u001b[m\u001b[39;49m\u001b[1m 128 \u001b(B\u001b[m\u001b[39;49msleeping,\u001b(B\u001b[m
Each of the other attempts, message1 through message4, results in various JSON syntax issues.
Can anyone suggest what I should try next?
You don't need all the bells and whistles of echo and multiple jq invocations. Note the added -b (batch mode) flag, which stops top from emitting the terminal escape sequences you saw:
top -b -n 1 -o %CPU | jq -aRs '{"message": .}' >file.json
Or pass the output of the top command as an argument instead, using jq's --arg to bind it to a variable:
jq -an --arg msg "$(top -b -n 1 -o %CPU)" '{"message": $msg}' >file.json
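Either way, the file holds a single JSON object with the whole top output in .message, and you can verify the round trip with jq. A sketch using a fixed multi-line string in place of top, so it runs anywhere:

```shell
# A stand-in for top's output: any multi-line text with no escaping done by us
sample='line one
line two'

# jq does all the JSON escaping; the file gets one valid object
jq -an --arg msg "$sample" '{"message": $msg}' > file.json

# Reading the field back returns the original text intact
jq -r '.message' file.json
```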

extract token from curl result by shell script

I wrote this script:
#!/bin/bash
# cm.sh
curl -i \
-H "Content-Type: application/json" \
-d '
{ "auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "admin",
"domain": { "id": "default" },
"password": "secret"
}
}
}
}
}' \
"http://localhost/identity/v3/auth/tokens" ; echo
echo $tokenizer1
echo $tokenizer2
But whatever I try (awk or sed), the result is the same:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 540 100 312 100 228 312 228 0:00:01 --:--:-- 0:00:01 5142
My goal is to put the token in a variable for later.
Thanks guys in advance.
Instead of using the direct result of curl, you could save the result to a file and run your grep command on that.
Something like this, maybe:
curl -i -o boulou.txt http://localhost/identity/v3/auth/tokens && grep "X-Subject-Token" boulou.txt | awk '{printf $2}'
Edit: if you just want the desired output, add --silent to the curl command:
curl -i -o boulou.txt http://localhost/identity/v3/auth/tokens --silent && grep "X-Subject-Token" boulou.txt | awk '{printf $2}'
Edit 2: If you want to export it and delete the file afterwards, you could use something like this:
export OS_TOKEN=$(curl -i -o billy.txt http://localhost/identity/v3/auth/tokens --silent && grep "X-Subject-Token" billy.txt | awk '{printf $2}') && rm billy.txt
"How do I use grep/awk to extract a header in a field from curl when I pass it a JSON document when the contents is stored in a variable?" is a very tricky and unique problem indeed.
However, if you gradually mock out every part of your code to narrow it down, you'll discover that this is the much easier question you could have researched or asked instead:
How do I use grep/awk on contents from a variable?
I have a variable containing HTTP headers, and I want to extract the value of one of them. Here's an example:
variable='Foo: 1
Bar: 2
Baz: 3'
This is what I've tried to get 2 from Bar:
# Just hangs
tokenizer1=$variable `grep "Bar" | awk '{printf $2}'`
# Is empty
tokenizer2=$variable | `grep "Bar" | awk '{printf $2}'`
The answer here is to use echo to pipe the contents so that grep can read it on stdin:
tokenizer3=$(echo "$variable" | grep "Bar" | awk '{printf $2}')
This is easily applied to your example:
tokenizer3=$(echo "$token" | grep "X-Subject-Token" | awk '{printf $2}')
echo "The value is $tokenizer3"
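A self-contained version you can paste into a terminal; printf is used instead of echo here as a minor hardening of the same approach, since printf behaves predictably even when the contents start with a dash or contain backslashes:

```shell
variable='Foo: 1
Bar: 2
Baz: 3'

# printf feeds the variable to grep's stdin; awk prints the second field
tokenizer3=$(printf '%s\n' "$variable" | grep "Bar" | awk '{printf $2}')
echo "The value is $tokenizer3"   # The value is 2
```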

grep JSON value of a key name (BusyBox without option -P)

I found tons of threads that discuss "how to grep JSON values".
But unfortunately they are useless for me and everyone else using grep from BusyBox (embedded Linux). This grep version doesn't have the "-P" (Perl regexp) option; only "-E" (extended regexp) is available.
BusyBox v1.20.2 () multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
I have a JSON example:
{
"one": "apple",
"two": "banana"
}
Now I want to extract the value, e.g. "apple", from the key "one".
grep -E '".*?"' file.json
Just an example of how it could look.
And by the way: how do I access groups from the regex?
I would be grateful for any help or alternatives.
With busybox awk:
busybox awk -F '[:,]' '/"one"/ {gsub("[[:blank:]]+", "", $2); print $2}'
-F '[:,]' sets the field separator as : or ,
/"one"/ {gsub("[[:blank:]]+", "", $2); print $2} matches if the line contains "one"; if so, it strips all horizontal whitespace from the second field and then prints the field
If you want to strip off the quotes too:
busybox awk -F '[:,]' '/"one"/ {gsub("[[:blank:]\"]+", "", $2); print $2}'
Example:
$ cat file.json
{
"one": "apple",
"two": "banana"
}
$ busybox awk -F '[:,]' '/"one"/ {gsub("[[:blank:]]+", "", $2); print $2}' file.json
"apple"
$ busybox awk -F '[:,]' '/"one"/ {gsub("[[:blank:]\"]+", "", $2); print $2}' file.json
apple
I like simple commands which enhance readability and are easy to understand. In your file we first have to remove the whitespace so the string can be matched; for that I usually prefer sed. After that we can use awk to find the match:
sed 's/[[:space:],]//g' file.json | awk -F: '$1=="\"one\"" {print $2}'
It will return:
"apple"
Note: I also removed the comma at the end of the line. If you want the comma in the output as well, use the command below.
sed 's/[[:space:]]//g' file.json | awk -F: '$1=="\"one\"" {print $2}'
It will return:
"apple",
The awk solutions won't work with single-line JSON.
Example:
{"one":"apple","two":"banana"}
The following sed will do:
busybox sed -n 's/.*"one":\([^}, ]*\).*/\1/p' file.json
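A quick check of that sed expression against a single-line file (the `busybox` prefix is dropped here so it runs with any sed; the expression itself is unchanged):

```shell
# Single-line JSON from the example above
printf '%s\n' '{"one":"apple","two":"banana"}' > oneline.json

# Capture everything after "one": up to the next comma, space, or brace
sed -n 's/.*"one":\([^}, ]*\).*/\1/p' oneline.json   # "apple"
```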

Shell Script CURL JSON value to variable

I was wondering how to parse the JSON output of curl from the server into variables.
Currently, I have -
curl -X POST -H "Content: agent-type: application/x-www-form-urlencoded" https://www.toontownrewritten.com/api/login?format=json -d username="$USERNAME" -d password="$PASSWORD" | python -m json.tool
But it only outputs the JSON from the server, pretty-printed, like so:
{
"eta": "0",
"position": "0",
"queueToken": "6bee9e85-343f-41c7-a4d3-156f901da615",
"success": "delayed"
}
But how do I put, for example, the success value returned from the server into a variable $SUCCESS holding delayed, and queueToken into a variable $queueToken holding 6bee9e85-343f-41c7-a4d3-156f901da615?
Then when I use
echo "$SUCCESS"
it shows this as the output:
delayed
And when I use
echo "$queueToken"
I get the output:
6bee9e85-343f-41c7-a4d3-156f901da615
Thanks!
Find and install jq (https://stedolan.github.io/jq/). jq is a JSON parser. JSON is not reliably parsed by line-oriented tools like sed because, like XML, JSON is not a line-oriented data format.
In terms of your question:
source <(
curl -X POST -H "$content_type" "$url" -d username="$USERNAME" -d password="$PASSWORD" |
jq -r '. as $h | keys | map(. + "=\"" + $h[.] + "\"") | .[]'
)
The jq syntax is a bit weird, I'm still working on it. It's basically a series of filters, each pipe taking the previous input and transforming it. In this case, the end result is some lines that look like variable="value"
This answer uses bash's "process substitution" to take the results of the jq command, treat it like a file, and source it into the current shell. The variables will then be available to use.
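A runnable sketch of the same idea with the server response inlined, so it can be tried without the curl call (the field values are the ones from the question):

```shell
# Stand-in for the curl response body
json='{"eta":"0","position":"0","queueToken":"6bee9e85-343f-41c7-a4d3-156f901da615","success":"delayed"}'

# Turn each key into a variable="value" line and source the result
source <(printf '%s' "$json" |
    jq -r '. as $h | keys | map(. + "=\"" + $h[.] + "\"") | .[]')

echo "$success"      # delayed
echo "$queueToken"   # 6bee9e85-343f-41c7-a4d3-156f901da615
```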
Here's an example of extracting a JSON value from a bash script:
#!/bin/bash
function jsonval {
    temp=$(echo "$json" | sed 's/\\\\\//\//g' | sed 's/[{}]//g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}' | sed 's/\"\:\"/\|/g' | sed 's/[\,]/ /g' | sed 's/\"//g' | grep -w "$prop")
    echo "${temp##*|}"
}

json=$(curl -s -X GET "http://twitter.com/users/show/$1.json")
prop='profile_image_url'
picurl=$(jsonval)
curl -s -X GET "$picurl" -o "$1.png"
A bash script which demonstrates parsing a JSON string to extract a
property value. The script contains a jsonval function which operates
on two variables, json and prop. When the script is passed the name of
a twitter user it attempts to download the user's profile picture.
You could use a Perl module on the command line.
First, ensure it is installed; on Debian-based systems:
sudo apt-get install libjson-xs-perl
For other OSes, you can install Perl modules via CPAN (the Comprehensive Perl Archive Network):
cpan App::cpanminus
cpan JSON::XS
Note: You may have to run this with superuser privileges.
then:
curlopts=(-X POST -H
"Content: agent-type: application/x-www-form-urlencoded"
-d username="$USERNAME" -d password="$PASSWORD")
curlurl=https://www.toontownrewritten.com/api/login?format=json
. <(
perl -MJSON::XS -e '
$/=undef;my $a=JSON::XS::decode_json <> ;
printf "declare -A Json=\047(%s)\047\n", join " ",map {
"[".$_."]=\"".$a->{$_}."\""
} qw|queueToken success eta position|;
' < <(
curl "${curlopts[@]}" "$curlurl"
)
)
The qw|...| list lets you specify which variables you want set. It could be replaced by keys %$a, but that might need debugging, as some characters are forbidden in associative array key names.
echo ${Json[queueToken]}
6bee9e85-343f-41c7-a4d3-156f901da615
echo ${Json[eta]}
0