How to parse a JSON response in a shell script?

I am working with a bash shell script. I need to request a URL from the script and then parse the JSON data coming back from it.
This is my URL - http://localhost:8080/test_beat - and the response after hitting the URL will be one of these two:
{"error": "error_message"}
{"success": "success_message"}
Below is my shell script, which fetches the URL using wget.
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/test_beat)
#grep $DATA for error and success key
Now I am not sure how to parse the JSON response in $DATA and see whether the key is success or error. If the key is success, I want to print "success" and the value of $DATA and exit with a zero status code; if the key is error, I want to print "error" and the value of $DATA and exit with a non-zero status code.
How can I parse the JSON response and extract the key from it in a shell script?
I don't want to install any library to do this, since my JSON response is fixed and will always look as shown above, so any simpler way is fine.
Update:
Below is my final shell script -
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/tester)
echo "$DATA"
#grep $DATA for error and success key
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
case "$KEY" in
    success)
        exit 0
        ;;
    error)
        exit 1
        ;;
esac
Does this look right?

If you are going to be working with any more complicated JSON from the shell, and you can install additional software, jq is going to be your friend.
So, for example, if you want to just extract the error message if present, then you can do this:
$ echo '{"error": "Some Error"}' | jq ".error"
"Some Error"
If you try this on the success case, you get:
$ echo '{"success": "Yay"}' | jq ".error"
null
The main advantage of the tool is simply that it fully understands JSON, so there is no need to worry about corner cases.
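For the exact case in the question, jq's --exit-status (-e) flag can drive the branch directly; -e makes jq exit 0 when its last output is neither false nor null. A minimal sketch, assuming jq is installed and reusing the question's URL:
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/test_beat)
if echo "$DATA" | jq -e 'has("success")' >/dev/null; then
    echo "success: $DATA"
    exit 0
else
    echo "error: $DATA"
    exit 1
fi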

#!/bin/bash
IFS= read -d '' DATA < temp.txt  ## Imitates your DATA=$(wget ...). Just replace it.
while IFS=\" read -ra LINE; do
    case "${LINE[1]}" in
        error)
            # ERROR_MSG=${LINE[3]}
            printf -v ERROR_MSG '%b' "${LINE[3]}"
            ;;
        success)
            # SUCCESS_MSG=${LINE[3]}
            printf -v SUCCESS_MSG '%b' "${LINE[3]}"
            ;;
    esac
done <<< "$DATA"
echo "$ERROR_MSG|$SUCCESS_MSG"  ## Shows: error_message|success_message
* %b expands backslash escape sequences in the corresponding argument.
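A quick standalone illustration of that (not part of the script above):
printf -v MSG '%b' 'first line\nsecond line'  # the literal \n becomes a real newline
echo "$MSG"                                   # prints two lines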
Update, as I didn't really get the question at first. It should simply be:
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
[[ $KEY == success ]] ## Gives $? = 0 if true or else 1 if false.
And you can examine it further:
case "$KEY" in
success)
echo "Success message: $MESSAGE"
exit 0
;;
error)
echo "Error message: $MESSAGE"
exit 1
;;
esac
Of course similar obvious tests can be done with it:
if [[ $KEY == success ]]; then
    echo "It was successful."
else
    echo "It wasn't."
fi
From your last comment, it can be done simply as:
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
echo "$DATA" ## Your really need to show $DATA and not $MESSAGE right?
[[ $KEY == success ]]
exit ## Exits with code based from current $?. Not necessary if you're on the last line of the script.
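A tiny standalone illustration of that last point:
#!/bin/bash
[[ success == success ]]  # the test succeeds, so $? is 0
exit                      # the script exits with status 0, the current $?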

You probably already have python installed, which has json parsing in the standard library. Python is not a great language for one-liners in shell scripts, but here is one way to use it:
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/test_beat)
if python -c '
import json, sys
exit(1 if "error" in json.loads(sys.stdin.read()) else 0)' <<<"$DATA"
then
    echo "SUCCESS: $DATA"
else
    echo "ERROR: $DATA"
    exit 1
fi

Given:
- that you don't want to use JSON libraries,
- and that the response you're parsing is simple and the only thing you care about is the presence of the substring "success",
I suggest the following simplification:
#!/bin/bash
wget -O - -q -t 1 http://localhost:8080/tester | grep -F -q '"success"'
exit $?
-F tells grep to search for a fixed (literal) string.
-q tells grep to produce no output and instead only reflect via its exit code whether a match was found or not.
exit $? simply exits with grep's exit code ($? is a special variable that reflects the most recently executed command's exit code).
Note that if all you care about is whether wget's output contains "success", the above pipeline will do - no need to capture wget's output in an auxiliary variable.
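If you do also want to echo the response, as in your original script, here is a sketch along the same lines that captures it first (reusing the endpoint from above):
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/tester)
echo "$DATA"
grep -F -q '"success"' <<< "$DATA"
exit $?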

Related

How to use regex to parse this json: { "success" : true }?

I am using a shell script to make an API call and I need to verify that the json response is this:
{ "success" : true }
I am able to echo the call response to see that it has that value, but I need to validate the response in an if statement so that the script can continue. I have tried to do this a number of ways with no success:
Regex - I have used regex to extract values from other JSON responses, but I have not found a regex pattern that can extract the value of "success" from this JSON.
String Comparison - I thought of simply using this condition to attempt to match the strings:
if [ "$callResponse" = '{ "success" : true }' ]
However, I quickly ran into issues with the script reading the JSON due to its special characters. I tried using sed to add a backslash before each special character, but sed could not read the JSON either.
Lastly I tried to pipe the response to python but got the error "ValueError: No JSON object could be decoded" when using this command:
status=${$callResponse | python -c "import sys, json; print(json.load(sys.stdin)['success'])"}
Does anyone know a regex pattern that could find that specific json string? Is there another simple solution to this issue?
(Note that it is not possible to download jq or any other utilities for this PC)
Since the caller knows that the response is { "success" : true }, I can't think of any reason to not use jq in this case. For instance, you can try something like this:
if echo '{ "success" : true }' | jq --exit-status '.success == true' >/dev/null; then
echo "success"
# Do something success is true in the response.
else
echo "not success" response.
# Do something else success is not true or absent in the
fi
If you want to make an API call and get the response, you can easily pass the JSON response directly from wget to jq instead of going the roundabout way of storing it in an intermediate variable by tweaking it like this:
if wget --timeout 10 -O - -q -t 1 https://your.api.com/endpoint | jq --exit-status '.success == true' >/dev/null; then
    echo "success"
else
    echo "not success"
fi
To match when the value of success is true in a flexible way:
"success"\s*:\s*"?true"?
This will match all of these:
{ "success" : true }
{ "success" : "true" }
{ "success":true}
To be strict and match the above, but not imbalanced quotes like { "success" : "true }, use this:
"success"\s*:\s*("?)true\1
I would highly recommend not doing it that way.
We used to do it this way long ago and got into trouble: a response code of "200 OK" arriving together with {"success": false} seemed to contradict itself.
A better approach is to use the HTTP response status codes instead.
Simply return 200 OK if success is true; otherwise return the appropriate error status code.
https://www.restapitutorial.com/httpstatuscodes.html
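For completeness, here is one way the caller side could test the status code instead of the body; a sketch using curl's -w '%{http_code}' option, reusing the placeholder endpoint from above:
status=$(curl -s -o /dev/null -w '%{http_code}' https://your.api.com/endpoint)
if [ "$status" -eq 200 ]; then
    echo "success"
else
    echo "failed with HTTP status $status"
fi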
EDIT:
Bash script to help:
COOKIE_FILE="cookies.txt"
SERVER_IP="172.1.2.3"
LOGFILE="logs/api-calls.log"
WGETLOGFILE="logs/last-api-call.log"

# Helper function
on_wget_err()
{
    EXITCODE=${1}
    case ${EXITCODE} in
        0) RESULT="OK";;
        *) cat ${WGETLOGFILE} >> ${LOGFILE}
           grep "HTTP/1.1" ${WGETLOGFILE} | gawk '{print substr($0,16)}'
           exit 0;;
    esac
}

if wget -O - -qT 4 -t 1 ${SERVER_IP} > /dev/null; then
    echo "Server is up"
    wget -S -O - --load-cookies ${COOKIE_FILE} "http://${SERVER_IP}${SERVER_ADDRESS}/my/api?param=$1" 2> ${WGETLOGFILE}
    on_wget_err ${?}
    echo "API was successful"
else
    echo "Server or network down"
    exit 1
fi
Your Python attempt was close. Here's a working one:
callResponse='{ "success" : true }'
status=$(echo "$callResponse" |
python -c "import sys, json; print(json.load(sys.stdin)['success'])")
echo "$status"
Or alternatively, rewritten to go straight into an if statement:
callResponse='{ "success" : true }'
if echo "$callResponse" | python -c "import sys, json; sys.exit(0 if json.load(sys.stdin)['success'] else 1)"
then
echo "Success"
fi

Periodically reading output from async background scripts

Context: I'm making my own i3-Bar script to read output from other (asynchronous) scripts running in the background, concatenate it, and then echo it to i3-Bar itself.
The way I'm passing output around is in plain files, and I guess (logically) the problem is that the files are sometimes read and written at the same time. The best way to reproduce this behavior is by suspending the computer and then waking it back up; I don't know the exact cause, I can only go on what I see in my debug log files.
Main Code: Added comments for clarity
#!/usr/bin/env bash
cd "${0%/*}";
trap "kill -- -$$" EXIT; #The bg. scripts are on a while [ 1 ] loop, have to kill them.
rm -r ../input/*;
mkdir ../input/; #Just in case.
for tFile in ./*; do
    #Run all of the available scripts in the current directory in the background.
    if [ $(basename $tFile) != "main.sh" ]; then ("$tFile" &); fi;
done;
echo -e '{ "version": 1 }\n['; #I3-Bar can use infinite array of JSON input.
while [ 1 ]; do
    input=../input/*; #All of the scripts put their output in this folder as separate text files
    input=$(sort -nr <(printf "%s\n" $input));
    output="";
    for tFile in $input; do
        #Read and add all of the files to one output string.
        if [ $tFile == "../input/*" ]; then break; fi;
        output+="$(cat $tFile),";
    done;
    if [ "$output" == "" ]; then
        echo -e "[{\"full_text\":\"ERR: No input files found\",\"color\":\"#ff0000\"}],\n";
    else
        echo -e "[${output::-1}],\n";
    fi;
    sleep 0.2s;
done;
Example Input Script:
#!/usr/bin/env bash
cd "${0%/*}";
while [ 1 ]; do
    echo -e "{" \
        "\"name\":\"clock\"," \
        "\"separator_block_width\":12," \
        "\"full_text\":\"$(date +"%H:%M:%S")\"}" > ../input/0_clock;
    sleep 1;
done;
The Problem
The problem isn't the script itself, but the fact that i3-Bar receives malformed JSON input (-> parse error) and terminates. I'll show such a log below.
Another problem is that the background scripts need to run asynchronously, because some need to update every second and some only every minute, etc. So the use of a FIFO isn't really an option, unless I create some ugly, inefficient, hacky stuff.
I know there is a need for IPC here, but I have no idea how to do this efficiently.
Script output from a random crash; the wake-up error looks the same:
[{ "separator_block_width":12, "color":"#BAF2F8", "full_text":"192.168.1.104 "},{ "separator_block_width":12, "color":"#BAF2F8", "full_text":"100%"}],
[{ "separator_block_width":12, "color":"#BAF2F8", "full_text":"192.168.1.104 "},,],
(Error is created by the second line)
As you see, the main script tries to read the file, doesn't get any output, but the comma is still there -> malformed JSON.
The immediate error is easy to fix: don't append an entry to output if the corresponding file is empty:
for tFile in $input; do
    [[ $tFile != "../input/*" ]] &&
        [[ -s $tFile ]] &&
        output+="$(<$tFile),"
done
There is a potential race condition here, though. Just because a particular input file exists doesn't mean that the data is fully written to it yet. I would change your input scripts to look something like
#!/usr/bin/env bash
cd "${0%/*}";
while true; do
    o=$(mktemp)
    # Quote the time value so the output stays valid JSON; the -1 argument
    # (current time) makes the %(...)T format's operand explicit.
    printf '{"name": "clock", "separator_block_width": 12, "full_text": "%(%H:%M:%S)T"}\n' -1 > "$o"
    mv "$o" ../input/0_clock  # mv is only atomic if both paths are on the same filesystem
    sleep 1
done
Also, ${output%,} is a safer way to trim a trailing comma when necessary.
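To see why, compare the two expansions on an empty string (a standalone illustration):
output='{"a":1},{"b":2},'
echo "[${output%,}]"   # prints [{"a":1},{"b":2}]
output=''
echo "[${output%,}]"   # prints []; ${output::-1} would raise a substring error here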

Bash: Check json response and write to a file if string exists

I curl an endpoint for a json response and write the response to a file.
So far I've got a script that:
1) does the curl if the file does not exist, and
2) otherwise sets a variable.
#!/bin/bash
instance="server1"
curl=$(curl -sk https://my-app-api.com | python -m json.tool)
json_response_file="/tmp/file"
if [ ! -f ${json_response_file} ] ; then
    ${curl} > ${json_response_file}
    instance_info=$(cat ${json_response_file})
else
    instance_info=$(cat ${json_response_file})
fi
The problem is that the file may exist containing a bad response, or it may be empty.
Possibly using bash's until, I'd like to:
1) check (using jq) that a field in the curl response contains $instance, and only then write the file;
2) retry the curl XX number of times until the response contains $instance;
3) write the file once the response contains $instance;
4) set the variable instance_info=$(cat ${json_response_file}) when the above is done correctly.
I started like this... then got stuck...
until [[ $(/usr/bin/jq --raw-output '.server' <<< ${curl}) = $instance ]]
do
One sane implementation might look something like this:
retries=10
instance=server1
response_file=filename

# Define a function, since you want to run this code multiple times;
# the old version only ran curl once and reused that result.
fetch() { curl -sk https://my-app-api.com; }

instance_info=
for (( retries_left=retries; retries_left > 0; retries_left-- )); do
    content=$(fetch)
    server=$(jq --raw-output '.server' <<<"$content")
    if [[ $server = "$instance" ]]; then
        # Writing isn't atomic, but renaming is; doing it this way makes sure that no
        # incomplete response will ever exist in response_file. If working in a directory
        # like /tmp where other users may have write access, use $(mktemp) to create a
        # tempfile with a random name to avoid a security risk.
        printf '%s\n' "$content" >"$response_file.tmp" \
            && mv "$response_file.tmp" "$response_file"
        instance_info=$content
        break
    fi
done

[[ $instance_info ]] || { echo "ERROR: Giving up after $retries retries" >&2; }
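One possible refinement, my addition rather than part of the answer above: pause between attempts so the endpoint isn't hammered. Reusing the definitions above, only the loop changes (the 2-second delay is an arbitrary choice):
for (( retries_left=retries; retries_left > 0; retries_left-- )); do
    content=$(fetch)
    server=$(jq --raw-output '.server' <<<"$content")
    if [[ $server = "$instance" ]]; then
        printf '%s\n' "$content" >"$response_file.tmp" \
            && mv "$response_file.tmp" "$response_file"
        instance_info=$content
        break
    fi
    sleep 2  # arbitrary backoff; tune to the API's behavior
done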

unix function return if any error occurs

I have a unix script in which I am calling functions.
I want the function to return immediately if any of the commands in it fails.
But I cannot check $? after every command. Is there any other way to do this?
Maybe by running the script from a file line by line (as long, of course, as each of your functions is one line long).
Maybe the following script can be a starting point:
#!/bin/sh
while read l
do
    eval "$l || break"
done <<EOF
echo test | grep e
echo test2 | grep r
echo test3 | grep 3
EOF
This is another idea, after my previous answer. It works in a bash script and requires your functions to be quite simple (pipes may cause some issues):
#!/bin/bash
set -o monitor

check() {
    [ $? -eq 0 ] && exit
}
trap check SIGCHLD

/bin/echo $(( 1+1 ))
/bin/echo $(( 1/0 ))
/bin/echo $(( 2+2 ))
Furthermore, the commands need to be external commands (this is why I use /bin/echo rather than echo), since the SIGCHLD trap only fires when a child process exits. Regards.
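A simpler, widely used alternative that neither answer mentions: chain the commands inside the function with &&, so the body stops at the first failure and the function's return status is that of the failing command. A sketch with hypothetical command names:
myfunc() {
    # step_one, step_two, step_three are placeholders for your real commands;
    # if any of them fails, the chain stops and myfunc returns its status.
    step_one &&
    step_two &&
    step_three
}
myfunc || echo "a step failed with status $?"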

Bourne shell function return variable always empty

The following Bourne shell script, given a path, is supposed to test each component of the path for existence; then set a variable comprising only those components that actually exist.
#! /bin/sh
set -x # for debugging

test_path() {
    path=""
    echo $1 | tr ':' '\012' | while read component
    do
        if [ -d "$component" ]
        then
            if [ -z "$path" ]
            then path="$component"
            else path="$path:$component"
            fi
        fi
    done
    echo "$path" # this prints nothing
}
paths=/usr/share/man:\
/usr/X11R6/man:\
/usr/local/man
MANPATH=`test_path $paths`
echo $MANPATH
When run, it always prints nothing. The trace using set -x is:
+ paths=/usr/share/man:/usr/X11R6/man:/usr/local/man
++ test_path /usr/share/man:/usr/X11R6/man:/usr/local/man
++ path=
++ echo /usr/share/man:/usr/X11R6/man:/usr/local/man
++ tr : '\012'
++ read component
++ '[' -d /usr/share/man ']'
++ '[' -z '' ']'
++ path=/usr/share/man
++ read component
++ '[' -d /usr/X11R6/man ']'
++ read component
++ '[' -d /usr/local/man ']'
++ '[' -z /usr/share/man ']'
++ path=/usr/share/man:/usr/local/man
++ read component
++ echo ''
+ MANPATH=
+ echo
Why is the final echo $path empty? The $path variable within the while loop was incrementally set for each iteration just fine.
The pipe runs all commands involved in sub-shells, including the entire while ... loop. Therefore, all changes to variables in that loop are confined to the sub-shell and invisible to the parent shell script.
One way to work around that is putting the while ... loop and the echo into a list that executes entirely in the sub-shell, so that the modified variable $path is visible to echo:
test_path()
{
    echo "$1" | tr ':' '\n' | {
        while read component
        do
            if [ -d "$component" ]
            then
                if [ -z "$path" ]
                then
                    path="$component"
                else
                    path="$path:$component"
                fi
            fi
        done
        echo "$path"
    }
}
However, I suggest using something like this:
test_path()
{
    echo "$1" | tr ':' '\n' |
    while read dir
    do
        [ -d "$dir" ] && printf "%s:" "$dir"
    done |
    sed 's/:$/\n/'
}
... but that's a matter of taste.
Edit: As others have said, the behaviour you are observing depends on the shell. The POSIX standard describes pipelined commands as run in sub-shells, but that is not a requirement:
Additionally, each command of a multi-command pipeline is in a subshell environment; as an extension, however, any or all commands in a pipeline may be executed in the current environment.
Bash runs them in sub-shells, but some shells run the last command in the context of the main script, when only the preceding commands in the pipeline are run in sub-shells.
This should work in a Bourne shell that understands functions (and would work in Bash and other shells too):
test_path() {
echo $1 | tr ':' '\012' |
{
path=""
while read component
do
if [ -d "$component" ]
then
if [ -z "$path" ]
then path="$component"
else path="$path:$component"
fi
fi
done
echo "$path" # this prints nothing
}
}
The inner set of braces groups the commands into a unit, so path is only set in the subshell but is echoed from the same subshell.
Why is the final echo $path empty?
Until recently, Bash would give all components of a pipeline their own process, separate from the shell process in which the pipeline is run.
Separate process == separate address space, and no variable sharing.
In ksh93 and in recent Bash (may need a shopt setting), the shell will run the last component of a pipeline in the calling shell, so any variables changed inside the loop are preserved when the loop exits.
Another way to accomplish what you want is to make sure that the echo $path is in the same process as the loop, using parentheses:
#! /bin/sh
set -x # for debugging

test_path() {
    path=""
    echo $1 | tr ':' '\012' | ( while read component
        do
            [ -d "$component" ] || continue
            path="${path:+$path:}$component"
        done
        echo "$path"
    )
}
Note: I simplified the inner if. There was no else, so the test can be replaced with a shortcut. Also, the two path assignments can be combined into one, using the ${var:+...} parameter substitution trick.
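A short illustration of that trick:
path=""
path="${path:+$path:}/usr/share/man"  # path is empty, so no leading colon is added
path="${path:+$path:}/usr/local/man"  # path is non-empty now, so a colon is inserted
echo "$path"                          # prints /usr/share/man:/usr/local/man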
Your script works just fine with no change under Solaris 11, and probably also under most commercial Unix systems like AIX and HP-UX, because under these OSes the underlying implementation of /bin/sh is provided by ksh. This would also be the case if /bin/sh were backed by zsh.
It likely doesn't work for you because your /bin/sh is implemented by one of bash, dash, mksh, or busybox sh, which all process each component of a pipeline in a subshell, while ksh and zsh both keep the last element of a pipeline in the current shell, saving an unnecessary fork.
It is possible to "fix" your script so that it works when sh is provided by bash by adding this line somewhere before the pipeline:
shopt -s lastpipe
or better, if you want to keep portability:
command -v shopt > /dev/null && shopt -s lastpipe
This will keep the script working for ksh and zsh, but it still won't work for dash, mksh, or the original Bourne shell.
Note that both bash and ksh behaviors are allowed by the POSIX standard.
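For completeness, another portable way to sidestep the issue is to feed the loop from a here-document instead of a pipeline, which keeps the while loop in the current shell even in bash and dash; a sketch along the lines of the original function (backquotes used for Bourne compatibility):
test_path() {
    path=""
    while read component
    do
        [ -d "$component" ] && path="${path:+$path:}$component"
    done <<EOF
`echo "$1" | tr ':' '\012'`
EOF
    echo "$path"
}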