Check if a string is valid JSON with jq

I need to catch an error when lifting a service. The response can be null, a string error message like
error services-migration/foobar: Not found: services-migration/foobar
or valid JSON when everything is fine. I was wondering if there is a way with jq to simply check whether the provided string is valid JSON. I could of course check the string for some keywords like error, but I'm looking for a more robust option, where e.g. I get a true/false or 1/0 from jq.
I was looking through the jq docs and some questions here on SO, but everything was about finding and picking out key-value pairs from JSON; nothing about simply validating a string.
UPDATE:
I've got this:
result=$(some command)
where the result is the string error services-migration/foobar: Not found: services-migration/foobar
And then the if statement:
if jq -e . >/dev/null 2>&1 <<<"$result"; then
echo "it catches it"
else
echo "it doesn't catch it"
fi
And it always ends up in the else clause.

From the manual:
-e / --exit-status:
Sets the exit status of jq to 0 if the last output value was neither false nor null, 1 if the last output value was either false or null, or 4 if no valid result was ever produced. Normally jq exits with 2 if there was any usage problem or system error, 3 if there was a jq program compile error, or 0 if the jq program ran.
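For instance, a quick illustration of those exit codes (a minimal sketch, assuming jq is installed):
jq -e . <<<'{"ok":true}' >/dev/null; echo $?   # prints 0
jq -e . <<<'null' >/dev/null; echo $?          # prints 1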
So you can use:
if jq -e . >/dev/null 2>&1 <<<"$json_string"; then
echo "Parsed JSON successfully and got something other than false/null"
else
echo "Failed to parse JSON, or got false/null"
fi
In fact, if you don't care about distinguishing between the different types of error, then you can just lose the -e switch. In this case, anything considered to be valid JSON (including false/null) will be parsed successfully by the filter . and the program will terminate successfully, so the if branch will be followed.
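A minimal sketch of that variant (note that false/null are accepted as valid here):
if jq . >/dev/null 2>&1 <<<'null'; then
  echo "valid JSON (false/null count as valid)"
else
  echo "invalid JSON"
fi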

This works for me (note that PIPESTATUS has to be read in a separate command, after the pipeline has finished):
echo "$json_string" | jq -e . >/dev/null 2>&1
echo "${PIPESTATUS[1]}"
which prints the return code:
0 - Success
1 - Failed (the last output was false or null)
4 - Invalid
You can then evaluate the return code in further code.
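For example, a sketch of such further code (the status variable name is illustrative, following the answer's mapping above):
echo "$json_string" | jq -e . >/dev/null 2>&1
status=${PIPESTATUS[1]}
case "$status" in
  0) echo "Success";;
  1) echo "Failed";;
  4) echo "Invalid";;
  *) echo "Other error ($status)";;
esac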

The commands and explanations below attempt to use jq in the least expensive way, just as a means to validate JSON. In my opinion, there are tools better suited for this (like JSONLint), but if the only thing you have at hand is jq, then this is my contribution.
To validate a JSON file and still get the errors on STDERR using jq, you can use the following bash one-liner:
jq -reM '""' <<<'<MAYBE_INVALID_JSON>' 1>/dev/null
Which reads as:
Execute jq with flags:
-r to output a raw value (no processing or formatting)
-e to exit with a code greater than 0 if there was an error
-M to not colorize the output
The first jq argument is '""', which outputs an empty string, effectively preventing jq from doing any processing of the JSON for printing
<<< followed by a string is called a "Here String"; it basically tells bash to treat the string as if it were a file and pass it to STDIN (as input to the command)
Note that you can replace <<<'<MAYBE_INVALID_JSON>' with the path of a file you want to validate and it will still work (see the sketch after this list)
Then redirect any STDOUT output (which normally would contain the JSON) to /dev/null, effectively discarding it
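For example, a sketch validating a file instead of a here-string (config.json is a hypothetical filename):
jq -reM '""' config.json 1>/dev/null && echo "valid" || echo "invalid; see errors above"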
You can also get fancier and output a custom message below the errors by using the following:
jq -reM '""' <<<'<MAYBE_INVALID_JSON>' 1>/dev/null || ( exit_code="$?"; echo 'ERROR: Invalid JSON file. See errors above' 1>&2; exit "$exit_code" )
Where the added portion reads as follows:
|| means: If the previous command failed with a non-zero exit code then run the following command
Open a sub-shell with (. This is actually needed since we want to exit with the original command's exit code
Set the exit_code shell variable to the exit code of the last command ($?)
Print custom error message using echo
exit the sub-shell with the exit code from the original command $exit_code
Close the sub-shell

For variable content, such as a single line, this has been working for me:
echo "$my_potential_json" | jq empty > /dev/null 2>&1
if [ $? -eq 0 ]; then
  echo "found valid json: $my_potential_json"
  do_whatever ...
fi

Related

Bash case statement doesn't work with JSON string value using jq

I'm working on a function which extracts a chosen track from a media container (mkv, mp4, etc.). One of its major features will be the "auto output file extension assigner".
The process is the following:
Step 1) When I give the script the number of the track I want to extract, it automatically inspects the source file with mediainfo and outputs the results in JSON format.
Step 2) With jq, I query the value of the "Format" key from the selected track and save it to the "mediaFormat" variable.
Step 3) Put this variable in a switch statement and compare it with a predefined list of cases. If there is a match, it initializes the "mediaExtension" variable
with the appropriate value, which will be used as the extension of the output file.
For now I just want to echo the "mediaExtension" variable to see if it works. And it DIDN'T WORK.
The problem is that steps 1-2 work as expected, but somehow the switch statement (step 3) doesn't. Only the (*) case gets executed, which means it doesn't recognize the "AVC" case.
#!/bin/bash
# INCLUDES
# mediainfo binary
PATH=/cygdrive/c/build_suite/local64/bin-video:$PATH;
# jq binary
PATH=/cygdrive/c/build_suite/local64/bin-global:$PATH;
# BASH SETTINGS
set -x;
# FUNCTION PARAMETER
function inspectExtension () {
mediaFormat=$(mediainfo "$1" --Output=JSON | jq ".media.track[$2].Format");
case $mediaFormat in
"AVC") mediaExtension="264";;
*) echo "ERROR";;
esac
set "$mediaExtension";
echo "$mediaExtension";
}
inspectExtension "test.mp4" "1";
read -p "Press enter to continue...";
And as you can see, in this script I activated tracing (set -x), and this is what I see in the console window (I use Cygwin on Windows 10):
+ inspectExtension test.mp4 1
++ mediainfo test.mp4 --Output=JSON
++ jq '.media.track[1].Format'
+ mediaFormat='"AVC"'
+ case $mediaFormat in
+ echo ERROR
ERROR
+ set ''
+ echo ''
+ read -p 'Press enter to continue...'
Press enter to continue...
Any ideas? Or is there something I'm missing here?
Thanks for the help!
Maybe the only thing you're missing is the --raw-output option of jq, like so:
mediaFormat=$(mediainfo "$1" --Output=JSON | jq --raw-output ".media.track[$2].Format");
Whenever you use jq to access string values, it is best to use the --raw-output option, because it gets rid of the enclosing quotes.
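To illustrate the difference, a minimal sketch:
echo '{"Format":"AVC"}' | jq '.Format'       # prints "AVC" (with quotes)
echo '{"Format":"AVC"}' | jq -r '.Format'    # prints AVC (raw)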
Assuming you want mediaFormat to be a JSON value (i.e., assuming the invocation of jq is the way you have it), "AVC" in the case statement should be quoted:
'"AVC"' ) ...
In addition, it would probably be safer to quote the argument of case.
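A sketch of that fix, keeping the jq invocation as it is:
case "$mediaFormat" in
  '"AVC"') mediaExtension="264";;
  *) echo "ERROR";;
esac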

Get key and value from a JSON string on a limited OpenWrt system

I'm working inside OpenWrt, which has very few shell commands, checking to see if it is possible to filter a JSON string to get the values.
For example
{"address":"192.168.2.2","user":"user1","groups":"permissions"}
I receive the string from curl and I need to separate the values to pass variables to other commands.
For now I'm checking some examples, but they don't work:
#!/bin/sh
. /usr/share/libubox/jshn.sh
json_init
json_load '$(cat $STRING)'
json_get_keys keys
for k in $keys; do
json_get_var v "$k"
echo "$k : $v"
done
But it produces the error "Failed to parse message data".
My problem is precisely that I can't use jq or python to pick out the data, so the only solution is to separate it first.
Suggestions?
I found another, cleaner form to do the same:
eval $(jsonfilter -s "$STRING" -e 'ADDRESS=@.address' -e 'USER=@.user')
echo "address=$ADDRESS user=$USER"
With this form I can extract every value as a variable, without jq or any python function.
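Put together with the example object from the question, a sketch (assumes OpenWrt's jsonfilter is available):
STRING='{"address":"192.168.2.2","user":"user1","groups":"permissions"}'
eval $(jsonfilter -s "$STRING" -e 'ADDRESS=@.address' -e 'USER=@.user')
echo "address=$ADDRESS user=$USER"   # address=192.168.2.2 user=user1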

bash script complains about file name too long

So I have a script that does this (jq is a command line JSON processor):
echo "Getting LB Node IDs"
echo $LB_STATUS | jq '.loadBalancer.nodes[] .id'
The output of the last line is:
1
2
3
But when I try and assign it to an array:
echo "Creating an Array"
nodeIdArray=($($LB_STATUS | jq '.loadBalancer.nodes[] .id'))
I get this error:
./myShellScript.sh: line 53: {"loadBalancer":{"name":"lbName","id":1,"protocol":"HTTP","port":80,"algorithm":"WEIGHTED_LEAST_CONNECTIONS","status":"ACTIVE","cluster":{"name":"ztm-n22.dfw1.lbaas.rackspace.net"},"nodes":[{"address":"1.2.3.4","id":1,"type":"PRIMARY","port":80,"status":"ONLINE","condition":"ENABLED","weight":1},{"address":"1.2.3.4","id":2,"type":"PRIMARY","port":80,"status":"ONLINE","condition":"ENABLED","weight":1},{"address":"1.2.3.4","id":3,"type":"PRIMARY","port":80,"status":"ONLINE","condition":"ENABLED","weight":1}],"timeout":30,"created":{"time":"2016-06-28T22:14:24Z"},"healthMonitor":{"type":"CONNECT","delay":10,"timeout":5,"attemptsBeforeDeactivation":2},"sslTermination":...<A BOAT LOAD MORE JSON I CUT OUT FOR BREVITY'S SAKE>: File name too long
So echo $LB_STATUS | jq '.loadBalancer.nodes[] .id' produces a few numbers, while trying to assign those numbers to an array doesn't work.
What Went Wrong
$variable | something doesn't pass the text in variable as input to something -- instead, it runs the contents of $variable as a command. Presumably you wanted echo "$variable" | something instead (but see below!)
Even if that were fixed, the array=( $(some-command) ) idiom is itself buggy. See BashPitfalls #50 describing why it shouldn't be used, and the various alternatives.
What To Do Instead
When feeding content from a variable as an input to a command, it's idiomatic to use a herestring: somecommand <<<"$variable". These aren't free (as they create temporary files), but they're cheaper than pipelines (which fork off subshells).
If you have bash 4.x or newer, you have readarray:
readarray -t nodeIdArray < <(jq -r '.loadBalancer.nodes[].id' <<<"$LB_STATUS")
If you need compatibility with bash 3.x, read -a can do the job:
IFS=$'\n' read -r -d '' -a nodeIdArray \
< <(jq -r '.loadBalancer.nodes[].id' <<<"$LB_STATUS" && printf '\0')
...which also has the advantage of causing read to return a nonzero exit status if the jq command fails.
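To check the result (illustrative):
printf '%s\n' "${nodeIdArray[@]}"   # prints one node ID per line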
You left out the echo.
Change
nodeIdArray=($($LB_STATUS | jq '.loadBalancer.nodes[] .id'))
to
nodeIdArray=($( echo $LB_STATUS | jq '.loadBalancer.nodes[] .id' ))

bash: return code from redirected input command

I have a bash script I am using to loop through and process the results of a SQL query, like so:
while read field1 field2 field3 field4
do
{...something here...}
done < <(mysql -h $HOST -u $USER -p"$PASS" $DB << EOF
{...multi-line select query here...}
EOF)
The problem I do not know how to solve: how do I refactor this so I can get the return code, error out, and skip the loop if something goes wrong while querying the database?
Edit:
I have tried using a named pipe with the following:
mkfifo /tmp/mypipe
mysql -h $HOST -u $USER -p"$PASS" $DB << EOF >> /tmp/mypipe
{...multi-line select query here...}
EOF
echo $?
{...loop here...}
This did not seem to work because the mysql command sits and waits for the pipe to be read before continuing. So unless something is reading the pipe, mysql does not exit and produce a return code.
I have tried storing the query results in a variable first with the following:
DATADUMP=$(mysql -h $HOST -u $USER -p"$PASS" $DB -e \
'select stuff from place \
join table 1 on record ..... \
')
The issue I ran into with this is that the read loop would only read the first four "words" from the DATADUMP variable and ignore the rest.
At this point, unless someone comes back with a great idea, I'm going to mktemp a temp file to hold the query results. I was hoping to avoid constantly reading from and writing to the disk, but my deadline is approaching very quickly.
There is no convenient way to do this, other than to capture the entire output of the mysql command before processing it. You could capture it into a variable, if you thought it wasn't going to be gigabytes of data, or into a temporary file. That's the only way to guarantee that no partial result is processed in the case of an error.
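For instance, a sketch of the temporary-file approach (variable names follow the question; the query itself stays elided):
tmpfile=$(mktemp) || exit 1
if mysql -h "$HOST" -u "$USER" -p"$PASS" "$DB" > "$tmpfile" <<EOF
{...multi-line select query here...}
EOF
then
  while read -r field1 field2 field3 field4; do
    : # ...something here...
  done < "$tmpfile"
else
  echo "mysql failed" >&2
fi
rm -f "$tmpfile"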
If you knew that all errors produced empty output, then it would be reasonably simple, since the while loop wouldn't execute at all. You wouldn't be able to distinguish between an error and an empty search, but you could work around that by producing an error indication:
got_data=0
while read field1 field2; do
  got_data=1
  # ...
done < <( mysql ... <<EOF || printf "%s %d" "ERROR" $?
# ...
EOF
)
if ((!got_data)); then
  if [[ $field1 == "ERROR" ]]; then
    : # error code is in $field2
  else
    : # query returned no result
  fi
fi
However, the premise is likely to be false. In particular, a network failure in the middle of the query return will probably produce both some output and an error indication. And in that case, the above code would process the partial return.
If you don't actually mind processing a partial result, as long as the error is eventually detected, then you could use a simple variation on the above code. The code guarantees that an unsuccessful exit of mysql will eventually produce a line which starts with the ERROR token. (I don't know if mysql guarantees output of a trailing newline if the network transmission is interrupted. You'd need to play some games to guarantee that the error line output by printf always appeared on a line by itself.)
There is a little trick in the above: The error line (ERROR 3, for example) is output with printf without a trailing newline. The absence of the trailing newline will cause read to exit with a failure code even though it will successfully split the line into the variables field1 and field2. This makes it unnecessary to try to distinguish the error indication from normal data inside the while loop, although you still need a clear indication for the test after the while loop.
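A tiny demonstration of that trick (illustrative):
printf '%s %d' "ERROR" 3 | { read -r field1 field2; echo "status=$? field1=$field1 field2=$field2"; }
# prints: status=1 field1=ERROR field2=3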

Creating a Select Menu in bash using JSON input

I am trying to create a menu using the select statement in bash. I am accessing an API which returns its output in JSON format. I will then process what the API returns into a select statement that a user can interact with.
Here is the API call and how I parse the output:
curl -H "Authorization:Bearer $ACCESS_TOKEN" https://api.runscope.com/buckets \
| python -mjson.tool > output.json
This sends the output from curl through Python's JSON formatting tool and finally into the output.json file.
I then create an array using this json blob. I had to set IFS to \n in order to parse the file properly:
IFS=$'\n'
BUCKETS=("$(jq '.data | .[].name' output.json)")
I then add an exit option to the array so that users have a way to quit the selection menu:
BUCKETS+=("Exit")
Finally, I create the menu:
select BUCKET in $BUCKETS;
do
case $BUCKET in
"Exit")
echo "Exiting..."
break;;
esac
echo "You picked: $BUCKET"
done
Unfortunately, this does not create the exit option. I am able to see a menu consisting of every other option I want, except the exit option. Every option in the menu and in the array has quotes around them. How do I get the Exit option to show up?
$BUCKETS is expanding to the first element of the BUCKETS array.
Which is then being word-split and used as your select entries.
This makes sense since you wrapped the jq subshell in double quotes which prevented word-splitting from happening there (and means the IFS change isn't doing anything I believe).
If your entries can contain spaces and you want them assigned to an array properly, the way to do that is by reading the output of jq with a while IFS= read -r entry; do loop:
BUCKETS=()
while IFS= read -r entry; do
BUCKETS+=("$entry")
done < <(jq '.data | .[].name' output.json)
Then appending your exit item to the array.
BUCKETS+=(Exit)
and then using
select BUCKET in "${BUCKETS[@]}"; do
(Either select a in l; do on one line, or select a in l with do on the next line, works; there's no need for both the ; and a newline there.)
That all being said unless you need output.json for something else you can avoid that too.
BUCKETS=()
while IFS= read -r entry; do
BUCKETS+=("$entry")
done < <(curl -H "Authorization:Bearer $ACCESS_TOKEN" https://api.runscope.com/buckets | python -mjson.tool | jq '.data | .[].name')
Instead of
jq '.data | .[].name' output.json
try
jq -r '.data | .[].name' output.json
(-r : raw data without quotes)
And the most important part:
select BUCKET in "${BUCKETS[@]}"
                 ^^^^^^^^^^^^^^^
ARRAY syntax