I'm trying to build a SQL layer on top of jq for JSON files, and I would like to implement SELECT. So far I have this:
function join() {
  # If no arguments, do nothing.
  # This avoids confusing errors in some shells.
  if [ $# -eq 0 ]; then
    return
  fi
  local joiner="$1"
  shift
  while [ $# -gt 1 ]; do
    printf "%s%s" "$1" "$joiner"
    shift
  done
  printf '%s\n' "$1"
}
function jselect {
  # Build a filter like: with_entries(select(.key | in({"success":1, "results":1})))
  keys=$(join "\":1, \"" "$@")
  jq "with_entries(select(.key | in({\"$keys\":1})))"
}
This allows me to do:
$ echo '{"success":true, "failure":false, "results":{"a": "...", "b": "...", "c": "..."}}' | jselect success results
>>> {
"success": true,
"results": {
"a": "...",
"b": "...",
"c": "..."
}
}
but I would also like to be able to select nested properties, something like this:
$ echo '{"success":true, "failure":false, "results":{"a": "...", "b": "...", "c": "..."}}' | jselect success results
>>> {
"success": true,
"results": {
"b": "..."
}
}
or
>>> {
"success": true,
"results.b": "..."
}
Any idea?
Unless I misunderstood what you're asking, I think there is no need for an extra shell script; you can use a plain jq filter:
echo '{"success":true, "failure":false, "results":{"a": "...", "b": "...", "c": "..."}}' | jq '{ success, results }'
The jq filter selects only the two keys you want. Note the shorthand form that uses just the key name (instead of "success": .success).
echo '{"success":true, "failure":false, "results":{"a": "...", "b": "...", "c": "..."}}' | jq '{ success, "results.b":.results.b }'
This filter is almost the same, except that the output key name is given explicitly.
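If you do still want a shell wrapper so that you can type jselect success results.b, here is a rough sketch built on the join function from the question; it simply turns each argument into one entry of a jq object construction. The quoting is illustrative only and not hardened against special characters in key names:
function jselect {
  # Turn each argument into an entry like  "results.b": .results.b
  local fields=() key
  for key in "$@"; do
    fields+=("\"$key\": .$key")
  done
  jq "{ $(join ", " "${fields[@]}") }"
}
With the sample input, jselect success results.b should then produce the flattened form {"success": true, "results.b": "..."}.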
I have a json file with this data:
{
"data": [
{
"name": "table",
"values": [
"This is old data",
"that needs to be",
"replaced."
]
}
]
}
But my challenge here is that I need to replace that values array with words from a text or CSV file:
this
this
this
is
is
an
an
array
My output needs to contain this (although I could probably get away with the words all on one line...):
"values": [
"this this this",
"is is",
"an an",
"array"
],
Is this possible with only jq? Or would I have to get awk to help out?
I already started down the awk road with:
awk -F, 'BEGIN{ORS=" "; {print "["}} {print $2} END{{print "]"}}' filename
But I know there is still some work here...
And then I came across jq -Rn inputs. But I haven't figured out how or if I can get the desired result.
Thanks for any pointers.
Assuming you have the raw word list in a text file named file and the input JSON in a file named json, you could do
jq --rawfile txt file '.data[].values |= ( $txt | split("\n")[:-1] | group_by(.) | map(join(" ")) )' json
produces
{
"data": [
{
"name": "table",
"values": [
"an an",
"array",
"is is",
"this this this"
]
}
]
}
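Note that group_by sorts its groups, which is why the values above come out in alphabetical order rather than in the order of the text file. If the original order matters, a variant along these lines should preserve first-appearance order (a sketch; it relies on jq keeping object keys in insertion order, which it does unless -S is used):
jq --rawfile txt file '.data[].values |= ( $txt | split("\n")[:-1]
  | reduce .[] as $w ({}; .[$w] += [$w])   # bucket repeated words, first-seen order
  | [.[] | join(" ")] )' json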
You can use jq and awk.
Given:
$ cat file
{
"data": [
{
"name": "table",
"values": [
"This is old data",
"that needs to be",
"replaced."
]
}
]
}
$ cat replacement
this
this
this
is
is
an
an
array
First create a string for the replacement array (awk is easy to use here):
ins=$(awk '!s {s=last=$1; next}
$1==last{s=s " " $1; next}
{print s; s=last=$1}
END{print s}' replacement | tr '\n' '\t')
Then use jq to insert into the JSON:
jq --rawfile txt <(echo "$ins") '.data[].values |= ( $txt | split("\t")[:-1] )' file
{
"data": [
{
"name": "table",
"values": [
"this this this",
"is is",
"an an",
"array"
]
}
]
}
You can also use ruby to process both files:
ruby -r json -e '
BEGIN{ ar=File.readlines(ARGV[0])
.map{|l| l.rstrip}
.group_by{|e| e}
.values
.map{|v| v.join(" ")}
j=JSON.parse(File.read(ARGV[1]))
}
j["data"][0]["values"]=ar
puts JSON.pretty_generate(j)' replacement file
# same output...
I have JSON as a string "Str":
"{
"A": {
"id": 4
},
"B": {//Something},
"C": {
"A": {
"id": 2
}
},
"E": {
"A": null
},
"F": {//Something}
}"
I want all non-null values of "A", which can appear anywhere in the JSON. I want the output to be all the contents of "A", like this:
{"id": 4}
{"id": 2}
Can you please help me with a Linux command to get this?
Instead of line-oriented tools, use a tool that is capable of parsing JSON syntactically. An example using jq:
$ json_value='{"A":{"id":4},"B":{"foo":0},"C":{"A":{"id":2}},"E":{"A":null},"F":{"foo":0}}'
$
$ jq -c '..|objects|.A//empty' <<< "$json_value"
{"id":4}
{"id":2}
..            # recurse through every node
| objects     # keep only the objects
| .A // empty # emit A's value if present and not null
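One caveat: the // operator also discards values that are false, not only missing or null ones. If "A" could legitimately hold false in your data (an assumption, not something shown in the question), testing for the key explicitly avoids that:
$ jq -c '.. | objects | select(has("A")) | .A | values' <<< "$json_value"
{"id":4}
{"id":2}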
We have a custom CD pipeline tool which unfortunately does not version the deployment parameters. So I put these in a Bitbucket repo as a JSON file and validate them against a REST API of this CD tool.
So I have two JSON arrays which are structurally the same, but may contain different objects or different values in those objects. I want to compare them to see whether they differ and what is different.
So far, I used the solution from here:
Using jq or alternative command line tools to diff JSON files
So I have put this in my code:
jq --argjson a "${bb_cfg}" --argjson b "${cd_tool_cfg}" -n 'def post_recurse(f): def r: (f | select(. != null) | r), .; r; def post_recurse: post_recurse(.[]?); ($a | (post_recurse | arrays) |= sort) as $a | ($b | (post_recurse | arrays) |= sort) as $b | $a == $b'
Now I get true if they are identical, or false if the two JSONs differ, but I do not know what is different.
When I get false back, I tried to find the differences with this:
diff --suppress-common-lines -y <(jq . -S <<< "${bb_cfg}") <(jq . -S <<< "${cd_tool_cfg}")
Input $bb_cfg:
[{
"key": "IGNORE_VALIDATION_ERROR",
"value": "true",
"tags": []
},
{
"key": "BB_CFG_REPO_NAME",
"value": "cd-tool-cfg",
"tags": []
}]
Input $cd_tool_cfg:
[{
"key": "IGNORE_VALIDATION_ERROR",
"value": "false",
"tags": []
},
{
"key": "BB_CFG_REPO_NAME",
"value": "cd-tool-cfg",
"tags": []
}]
This works only partly, because if just the value is different, the output is like this:
"value": "true" | "value": "false"
so I do not get the whole JSON object here, which I need to quickly find out which parameter is different.
What I eventually want is to get something like this:
{
"key": "IGNORE_VALIDATION_ERROR",
"value": "true",
"tags": []
}
{
"key": "IGNORE_VALIDATION_ERROR",
"value": "false",
"tags": []
}
so that I can store it in a variable in my bash script and transform it into an output I can use.
You could use jq's -c or --compact-output option:
diff <(jq -c '.[]' <<<"$bb_cfg") <(jq -c '.[]' <<<"$cd_tool_cfg")
1c1
< {"key":"IGNORE_VALIDATION_ERROR","value":"true","tags":[]}
---
> {"key":"IGNORE_VALIDATION_ERROR","value":"false","tags":[]}
The -c option simply outputs compact JSON, with each array member on a separate line, so diff compares whole objects.
The following command will give you something like you requested:
diff --old-line-format="%L" --unchanged-line-format="" --new-line-format="%L" <(jq -c '.[]' <<<"$bb_cfg") <(jq -c '.[]' <<<"$cd_tool_cfg") | jq .
which will output:
{
"key": "IGNORE_VALIDATION_ERROR",
"value": "true",
"tags": []
}
{
"key": "IGNORE_VALIDATION_ERROR",
"value": "false",
"tags": []
}
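If you then want the result in a variable, as mentioned at the end of the question, a minimal sketch (assuming $bb_cfg and $cd_tool_cfg hold the two arrays as above) would be:
diff_objects=$(diff --old-line-format="%L" --unchanged-line-format="" --new-line-format="%L" \
    <(jq -c '.[]' <<<"$bb_cfg") <(jq -c '.[]' <<<"$cd_tool_cfg"))
if [ -n "$diff_objects" ]; then
  echo "Deployment parameters differ:"
  jq . <<<"$diff_objects"   # pretty-print the stream of differing objects
fi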
Input:
{"success": true, "results": {"a": …, "b": …, "c": …}}
Desired output, given I want to keep b:
{"success": true, "results": {"b": …}}
Things I tried:
$ jq 'del(select(.results.b | not))'
{"success": true, "results": {"a": …, "b": …, "c": …}}
# removes nothing from "results"
$ jq 'with_entries(select(.key == "success" or .key == "results.b"))'
{"success": true}
# nested comparison not understood; returns only "success"
Thanks!
Here is one solution:
.results |= {b}
Sample Run
$ jq -M '.results |= {b}' <<< '{"success":true, "results":{"a": "…", "b": "…", "c": "…"}}'
{
"success": true,
"results": {
"b": "…"
}
}
Try it online at jqplay.org
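Here {b} is the same shorthand object construction as above, evaluated against .results, and |= assigns the result back in place. Keeping several nested keys works the same way, for example (illustrative):
$ jq -c '.results |= {a, b}' <<< '{"success":true, "results":{"a":1, "b":2, "c":3}}'
{"success":true,"results":{"a":1,"b":2}}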
Another way, using nodejs and shell:
Code:
$ node<<EOF
var obj = $(</tmp/file.json);
delete obj.results.a;
delete obj.results.c;
console.log(JSON.stringify(obj));
EOF
OUTPUT:
{"success":true,"results":{"b":"bbb"}}
I have a while loop with two variables I have to merge into a single piece of JSON, like so:
#!/bin/bash
while read -r from to
do
# BONUS: Solution would ideally require no quoting at this point
echo { \"From\": \"$from\", \"To\": \"$to\" }
done << EOF
foo bar
what ever
EOF
Which currently outputs invalid JSON:
{ "From": "foo", "To": "bar" }
{ "From": "what", "To": "ever" }
What's the simplest way I can create valid JSON like this:
[
{ "From": "foo", "To": "bar" },
{ "From": "what", "To": "ever" }
]
I looked at jq, but I couldn't figure out how to do it. Ideally I'm not looking to do it in shell, because I feel adding commas and such is a bit ugly.
With jq:
$ jq -nR '[inputs | split(" ") | {"From": .[0], "To": .[1]}]' <<EOF
foo bar
what ever
EOF
[
{
"From": "foo",
"To": "bar"
},
{
"From": "what",
"To": "ever"
}
]
-n tells jq to not read any input; -R is for raw input so it doesn't expect JSON.
The input is read with inputs, resulting in one string per input line:
$ jq -nR 'inputs' <<EOF
foo bar
what ever
EOF
"foo bar"
"what ever"
These are then split into arrays of words:
$ jq -nR 'inputs | split(" ")' <<EOF
foo bar
what ever
EOF
[
"foo",
"bar"
]
[
"what",
"ever"
]
From this, we construct the objects:
$ jq -nR 'inputs | split(" ") | {"From": .[0], "To": .[1]}' <<EOF
foo bar
what ever
EOF
{
"From": "foo",
"To": "bar"
}
{
"From": "what",
"To": "ever"
}
And finally, we wrap everything in [] to get the final output shown first.
The more intuitive approach of splitting input directly fails because wrapping everything in [] results in one array per input line:
$ jq -R '[split(" ") | { "From": .[0], "To": .[1] }]' <<EOF
foo bar
what ever
EOF
[
{
"From": "foo",
"To": "bar"
}
]
[
{
"From": "what",
"To": "ever"
}
]
Hence the somewhat cumbersome -n/inputs. Notice that inputs requires jq version 1.5.
Here's an all-jq solution that assumes the "From" and "To" values are presented exactly as in your example:
jq -R -n '[inputs | split(" ") | {From: .[0], To: .[1]}]'
You might want to handle additional spaces using gsub/2.
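For example, collapsing runs of spaces before splitting (a sketch; it does not deal with leading or trailing whitespace):
jq -R -n '[inputs | gsub(" +"; " ") | split(" ") | {From: .[0], To: .[1]}]'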
If your jq does not have inputs then you can use this incantation:
jq -R -s 'split("\n")
| map(select(length>1) | split(" ") | {From: .[0], To: .[1]})'
Or you could just pipe the output from your while-loop into jq -s.
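For instance, the loop from the question can stay exactly as it is; jq -s (slurp) then collects the stream of objects into one array:
while read -r from to
do
  echo { \"From\": \"$from\", \"To\": \"$to\" }
done << EOF | jq -s .
foo bar
what ever
EOF
This prints the desired array, pretty-printed.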