I'm trying to detect when I get all empty JSON values.
Here are two examples of empty JSON values.
json='{
"data": [],
"pagination": {
"cursor": ""
}
}'
json='{
"data": [],
"error": "",
"message": "",
"status": ""
}'
The closest I've gotten is this, which seems to work for the first example, but not the second.
echo "$json" | jq -e 'all( .[] ; . == "" or . == [] or . == {} )'
One approach that I thought would be easy is to get the string text from each key/value pair and then check whether I'm left with an empty string. But I haven't found a way that works for the pagination example.
This will give you false if all strings and arrays are "" or []:
jq -e 'all(.. | strings, arrays; IN("", [])) | not'
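For example, with the second sample document from the question, every string and array is empty, so the filter outputs false, and -e sets the exit status to 1 (jq -e exits 1 when the last output is false or null):
$ echo "$json" | jq -e 'all(.. | strings, arrays; IN("", [])) | not'
false
$ echo $?
1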
This might be a good use for gron
json='{
"data": [],
"pagination": {
"cursor": "",
"position":[2,3]
}
}'
gron transforms the input JSON into discrete paths:
$ gron <<< "$json"
json = {};
json.data = [];
json.pagination = {};
json.pagination.cursor = "";
json.pagination.position = [];
json.pagination.position[0] = 2;
json.pagination.position[1] = 3;
And to test for any non-empty elements:
$ gron <<< "$json" | grep -Evq '(\{\}|\[\]|"");$' && echo "not everything is empty"
not everything is empty
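Conversely, to branch both ways on grep's exit status, a minimal sketch:
if gron <<< "$json" | grep -Evq '(\{\}|\[\]|"");$'; then
  echo "not everything is empty"
else
  echo "everything is empty"
fi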
Could you please assist me on how I can merge two JSON variables in bash to get the desired output mentioned below (without manually looping over the .data[] array)? I tried echo "${firstJsonoObj} ${SecondJsonoObj}" | jq -s add, but it didn't parse through the array.
firstJsonoObj='{"data" :[{"id": "123"},{"id": "124"}]}'
SecondJsonoObj='{"etag" :" 234324"}'
Desired output:
{"data" :[{"id": "123", "etag" :" 234324"},{"id": "124", "etag" :" 234324"}]}
Thanks in advance!
You can append to each data element using +=:
#!/bin/bash
firstJsonoObj='{"data" :[{"id": "123"},{"id": "124"}]}'
SecondJsonoObj='{"etag" :" 234324"}'
jq -c ".data[] += $SecondJsonoObj" <<< "$firstJsonoObj"
Output:
{"data":[{"id":"123","etag":" 234324"},{"id":"124","etag":" 234324"}]}
Please don't use double quotes to inject data from shell into code. jq provides the --arg and --argjson options to do that safely:
#!/bin/bash
firstJsonoObj='{"data" :[{"id": "123"},{"id": "124"}]}'
SecondJsonoObj='{"etag" :" 234324"}'
jq --argjson x "$SecondJsonoObj" '.data[] += $x' <<< "$firstJsonoObj"
# or
jq --argjson a "$firstJsonoObj" --argjson b "$SecondJsonoObj" -n '$a | .data[] += $b'
{
"data": [
{
"id": "123",
"etag": " 234324"
},
{
"id": "124",
"etag": " 234324"
}
]
}
jq -s add will not work because you want to add the second document at a deeper level within the first. Use .data[] += input (without -s), with . accessing the first input and input accessing the second:
echo "${firstJsonoObj} ${SecondJsonoObj}" | jq '.data[] += input'
Or, as bash is tagged, use a here-string:
jq '.data[] += input' <<< "${firstJsonoObj} ${SecondJsonoObj}"
Output:
{
"data": [
{
"id": "123",
"etag": " 234324"
},
{
"id": "124",
"etag": " 234324"
}
]
}
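To see what input does here: . is bound to the first JSON document on stdin, and each call to input consumes the next one. A minimal illustration:
$ jq -c '[., input]' <<< '1 2'
[1,2]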
I have a json file with this data:
{
"data": [
{
"name": "table",
"values": [
"This is old data",
"that needs to be",
"replaced."
]
}
]
}
But my challenge here is that I need to replace that values array with words from a text or CSV file:
this
this
this
is
is
an
an
array
My output needs to look like this (although I could probably get away with the words all on one line...):
"values": [
"this this this",
"is is",
"an an",
"array"
],
Is this possible with only jq? Or would I have to get awk to help out?
I already started down the awk road with:
awk -F, 'BEGIN{ORS=" "; {print "["}} {print $2} END{{print "]"}}' filename
But I know there is still some work here...
And then I came across jq -Rn inputs. But I haven't figured out how or if I can get the desired result.
Thanks for any pointers.
Assuming you have a raw ASCII text file named file and an input JSON file, you could do
jq --rawfile txt file '.data[].values |= ( $txt | split("\n")[:-1] | group_by(.) | map(join(" ")) )' json
This produces:
{
"data": [
{
"name": "table",
"values": [
"an an",
"array",
"is is",
"this this this"
]
}
]
}
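Note that group_by(.) sorts the groups, which is why the words above appear in alphabetical rather than file order. If the original order of the runs matters, a reduce-based sketch (assuming, as in the sample, that duplicates are always adjacent):
jq --rawfile txt file '.data[].values |= ( $txt | split("\n")[:-1]
  | reduce .[] as $w ([]; if length > 0 and .[-1][0] == $w
      then .[:-1] + [.[-1] + [$w]]
      else . + [[$w]] end)
  | map(join(" ")) )' json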
You can use jq and awk.
Given:
$ cat file
{
"data": [
{
"name": "table",
"values": [
"This is old data",
"that needs to be",
"replaced."
]
}
]
}
$ cat replacement
this
this
this
is
is
an
an
array
First create a string for the replacement array (awk is easy to use here):
ins=$(awk '!s {s=last=$1; next}
$1==last{s=s " " $1; next}
{print s; s=last=$1}
END{print s}' replacement | tr '\n' '\t')
Then use jq to insert into the JSON:
jq --rawfile txt <(echo "$ins") '.data[].values |= ( $txt | split("\t")[:-1] )' file
{
"data": [
{
"name": "table",
"values": [
"this this this",
"is is",
"an an",
"array"
]
}
]
}
You can also use ruby to process both files:
ruby -r json -e '
BEGIN{ ar=File.readlines(ARGV[0])
.map{|l| l.rstrip}
.group_by{|e| e}
.values
.map{|v| v.join(" ")}
j=JSON.parse(File.read(ARGV[1]))
}
j["data"][0]["values"]=ar
puts JSON.pretty_generate(j)' replacement file
# same output...
Let's say this is my array:
[
{
"name": "Matias",
"age": "33"
}
]
I can do this:
echo "$response" | jq '[ .[] | select(.name | test("M.*"))] | . += [.[]]'
And it will output:
[
{
"name": "Matias",
"age": "33"
},
{
"name": "Matias",
"age": "33"
}
]
But I can't do this:
echo "$response" | jq '[ .[] | select(.name | test("M.*"))] | . += [.[] * 3]'
jq: error (at <stdin>:7): object ({"name":"Ma...) and number (3) cannot be multiplied
I need to extend the array to create a dummy array with 100 values, and I can't do it. Also, I would like to have a random age on each object (so later on I can filter the file to measure the performance of an app).
Currently jq does not have a built-in randomization function, but it's easy enough to generate random numbers that jq can use. The following solution uses awk but in a way that some other PRNG can easily be used.
#!/bin/bash
function template {
cat<<EOF
[
{
"name": "Matias",
"age": "33"
}
]
EOF
}
function randoms {
awk -v n=$1 'BEGIN { for(i=0;i<n;i++) {print int(100*rand())} }'
}
randoms 100 | jq -n --argfile template <(template) '
first($template[] | select(.name | test("M.*"))) as $t
| [ $t | .age = inputs]
'
Note on performance
Even though the above uses awk and jq together, this combination is about 10 times faster than the posted jtc solution using -eu:
jq+awk: u+s = 0.012s
jtc with -eu: u+s = 0.192s
Using jtc in conjunction with awk as above, however, gives u+s == 0.008s on the same machine.
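If randomness isn't required and you only want the 100 copies, jq alone suffices; a minimal sketch reusing the template function above:
template | jq '.[0] as $t | [range(100) | $t]'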
I am trying to open a file, look through it, change a value based on its current value, and write the result either to a file or to a variable.
Below is an example of the JSON:
{
"Par": [
{
"Key": "12345L",
"Value": "https://100.100.100.100:100",
"UseLastValue": true
},
{
"Key": "12345S",
"Value": "VAL2CHANGE",
"UseLastValue": true
},
{
"Key": "12345T",
"Value": "HAPPY-HELLO",
"UseLastValue": true
}
],
"CANCOPY": false,
"LOGFILE": ["HELPLOG"]
}
I have been using jq, and I have been successful in isolating the object group and changing the value.
cat jsonfile.json | jq '.Par | map(select(.Value=="VAL2CHANGE")) | .[] | .Value="VALHASBEENCHANGED"'
This gives:
{
"Key": "12345S",
"Value": "VALHASBEENCHANGED",
"UseLastValue": true
}
What I'd like to achieve is to retain the full JSON output with the changed value:
{
"Par": [
{
"Key": "12345L",
"Value": "https://100.100.100.100:100",
"UseLastValue": true
},
{
"Key": "12345S",
"Value": "VALHASBEENCHANGED",
"UseLastValue": true
},
{
"Key": "12345T",
"Value": "HAPPY-HELLO",
"UseLastValue": true
}
],
"CANCOPY": false,
"LOGFILE": ["HELPLOG"]
}
I.e.:
jq '.Par | map(select(.Value=="VAL2CHANGE")) | .[] | .Value="VALHASBEENCHANGED"' (NOW PUT IT BACK IN FILE)
OR
Open the file, find the value to be changed, change it, and output the result to a file or to the screen.
To add: the JSON file will only contain the value I'm looking for once, as I'm creating it. If any other values need changing, I will name them differently.
jq --arg match "VAL2CHANGE" \
--arg replace "VALHASBEENCHANGED" \
'.Par |= map(if .Value == $match then (.Value=$replace) else . end)' \
<in.json
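Equivalently, since the left-hand side of an assignment may be a path expression, the select can move to the left of = (a sketch):
jq --arg match "VAL2CHANGE" \
   --arg replace "VALHASBEENCHANGED" \
   '(.Par[] | select(.Value == $match)).Value = $replace' \
   <in.json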
To more comprehensively replace a string anywhere it may be in a nested data structure, you can use the walk function -- which will be in the standard library in jq 1.6, but can be manually pulled in in 1.5:
jq --arg match "VAL2CHANGE" \
--arg replace "VALHASBEENCHANGED" '
# taken from jq 1.6; will not be needed here after that version is released.
# Apply f to composite entities recursively, and to atoms
def walk(f):
. as $in
| if type == "object" then
reduce keys_unsorted[] as $key
( {}; . + { ($key): ($in[$key] | walk(f)) } ) | f
elif type == "array" then map( walk(f) ) | f
else f
end;
walk(if . == $match then $replace else . end)' <in.json
If you're just replacing based on the values, you could stream the file and replace the values as you rebuild the result.
$ jq --arg change 'VAL2CHANGE' --arg value 'VALHASBEENCHANGED' -n --stream '
fromstream(inputs | if length == 2 and .[1] == $change then .[1] = $value else . end)
' input.json
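To see why the filter tests length == 2: streamed events are two-element [path, value] pairs for scalar leaves and one-element [path] markers closing arrays and objects, so only the former carry values worth comparing. A quick illustration:
$ jq -cn --stream 'inputs' <<< '{"a": "x", "b": []}'
[["a"],"x"]
[["b"],[]]
[["b"]]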
I have JSON like this that I'm parsing with jq:
{
"data": [
{
"item": {
"name": "string 1"
},
"item": {
"name": "string 2"
},
"item": {
"name": "string 3"
}
}
]
}
...and I'm trying to get "string 1", "string 2", and "string 3" into a Bash array, but I can't find a solution that preserves the whitespace in them. Is there a method in jq that I'm missing, or perhaps an elegant solution in Bash for it?
Current method:
json_names=$(cat file.json | jq ".data[] .item .name")
read -a name_array <<< $json_names
The below assumes your JSON text is in a string named s, with the duplicate item keys made unique (duplicate keys within a single JSON object aren't meaningfully retrievable). That is:
s='{
"data": [
{
"item1": {
"name": "string 1"
},
"item2": {
"name": "string 2"
},
"item3": {
"name": "string 3"
}
}
]
}'
Unfortunately, both of the below will misbehave with strings containing literal newlines; since jq doesn't have support for NUL-delimited output, this is difficult to work around.
On bash 4 (with slightly sloppy error handling, but tersely):
readarray -t name_array < <(jq -r '.data[] | .[] | .name' <<<"$s")
...or on bash 3.x or newer (with very comprehensive error handling, but verbosely):
# -d '' tells read to process up to a NUL, and will exit with a nonzero exit status if that
# NUL is not seen; thus, this causes the read to pass through any error which occurred in
# jq.
IFS=$'\n' read -r -d '' -a name_array \
< <(jq -r '.data[] | .[] | .name' <<<"$s" && printf '\0')
This populates a bash array, contents of which can be displayed with:
declare -p name_array
Arrays are assigned in the form:
NAME=(VALUE1 VALUE2 ... )
where NAME is the name of the variable, and VALUE1, VALUE2, and the rest are fields separated with characters that are present in the $IFS (internal field separator) variable.
Since jq outputs the string values as lines (sequences separated by the newline character), you can override $IFS, e.g. (note that $IFS will remain changed afterwards unless you save and restore it):
# Disable globbing, remember current -f flag value
[[ "$-" == *f* ]] || globbing_disabled=1
set -f
IFS=$'\n' a=( $(jq --raw-output '.data[].item.name' file.json) )
# Restore globbing
test -n "$globbing_disabled" && set +f
The above will create an array of three items for the following file.json:
{
"data": [
{"item": {
"name": "string 1"
}},
{"item": {
"name": "string 2"
}},
{"item": {
"name": "string 3"
}}
]
}
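You can check the result with declare -p (exact quoting in the output varies slightly between bash versions):
$ declare -p a
declare -a a=([0]="string 1" [1]="string 2" [2]="string 3")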
The following shows how to create a bash array consisting of arbitrary JSON texts produced by a run of jq.
In the following, I'll assume input.json is a file with the following:
["string 1", "new\nline", {"x": 1}, ["one\ttab", 4]]
With this input, the jq filter .[] produces four JSON texts -- two JSON strings, a JSON object, and a JSON array.
The following bash script can then be used to set x to be a bash array of the JSON texts:
#!/bin/bash
x=()
while read -r value
do
x+=("$value")
done < <(jq -c '.[]' input.json)
For example, adding this bash expression to the script:
for a in "${x[@]}" ; do echo a="$a"; done
would yield:
a="string 1"
a="new\nline"
a={"x":1}
a=["one\ttab",4]
Notice how (encoded) newlines and (encoded) tabs are handled properly.
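On bash 4 and later, the read loop can be condensed to a single mapfile call producing the same array; a sketch:
mapfile -t x < <(jq -c '.[]' input.json)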