Create JSON file using jq

I'm trying to create a JSON file by executing the following command:
jq --arg greeting world '{"hello":"$greeting"}' > file.json
This command gets stuck waiting for input, while
jq -n --arg greeting world '{"hello":"$greeting"}' > file.json
doesn't produce the expected output; the variable isn't expanded. I'm just wondering whether it's really possible to create a JSON file this way.

Your code doesn't work because you included the variable inside double quotes, which makes it a literal string rather than a variable reference. That is why it is not working.
As Jeff Mercado pointed out, the solution is:
jq -n --arg greeting world '{"hello":$greeting}' > file.json
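which should leave the following in file.json:
{
"hello": "world"
}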
About the - in a name: this is actually possible, but as of now it is not available in a released version of jq; you need to compile the master branch of jq on your system. There is a new variable called $ARGS.named which can be used to access such arguments.
I just compiled it and checked the command below, and it works like a charm:
./jq -n --arg name-is tarun '{"name": $ARGS.named["name-is"]}'
{
"name": "tarun"
}

$ARGS provides access to named (--arg name value) and positional (--args one two three) arguments from the jq command line, and allows you to build up objects easily & safely.
Named arguments:
$ jq -n '{christmas: $ARGS.named}' \
--arg one 'partridge in a "pear" tree' \
--arg two 'turtle doves'
{
"christmas": {
"one": "partridge in a \"pear\" tree",
"two": "turtle doves"
}
}
Positional arguments:
$ jq -n '{numbers: $ARGS.positional}' --args 1 2 3
{
"numbers": [
"1",
"2",
"3"
]
}
Note you can access individual items of the positional array, and that the named arguments are directly available as variables:
jq -n '{first: {name: $one, count: $ARGS.positional[0]}, all: $ARGS}' \
--arg one 'partridge in a "pear" tree' \
--arg two 'turtle doves' \
--args 1 2 3
{
"first": {
"name": "partridge in a \"pear\" tree",
"count": "1"
},
"all": {
"positional": [
"1",
"2",
"3"
],
"named": {
"one": "partridge in a \"pear\" tree",
"two": "turtle doves"
}
}
}

To add to what Jeff and Tarun have already said, you might want to use the \() string interpolation syntax in your command, e.g.
jq -n --arg greeting world '{"hello":"\($greeting)"}'
For me this produces:
{
"hello": "world"
}
Regarding your reply to Jeff's comment, the argument name you choose has to be a valid jq variable name, so an arg like greeting-for-you won't work, but you could use underscores, so greeting_for_you would be OK. Or you could use the version Tarun described.
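For example, an underscore variant along these lines should work:
jq -n --arg greeting_for_you world '{"hello": $greeting_for_you}'
{
"hello": "world"
}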

Related

Compose dynamic JSON file with arguments

I found that using jq I can compose a dynamic JSON file with arguments.
For example, profile.jq:
{
"name": $name,
"age": $age,
"secretIdentity": $id,
"powers":{
"sid": $something
}
}
jq -n --arg name "FL" \
--arg age 60 \
--arg id 1234 \
--arg something "ss" \
-f profile.jq > out.json
so I get a JSON file out.json (note that --arg always passes values as strings):
{
"name": "FL",
"age": "60",
"secretIdentity": "1234",
"powers": {
"sid": "ss"
}
}
However, there are many args; is there any easier way than passing each one with --arg? For example, is it possible to give all args in a JSON file, something like:
somecommand --argfile arg-file.json --inputfile template-json.json
PS: I noticed that jq has an option --args, but I couldn't find any example of how to use it.
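One possible approach (a sketch, assuming jq 1.5+ and a hypothetical arg-file.json holding all the values as strings) is to pass the whole file with --argjson and reference its fields in the template:
# arg-file.json (hypothetical):
# {"name": "FL", "age": "60", "id": "1234", "something": "ss"}
jq -n --argjson a "$(cat arg-file.json)" \
'{name: $a.name, age: $a.age, secretIdentity: $a.id, powers: {sid: $a.something}}' \
> out.json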

jq to diff json in bash

I have two JSON objects coming from a REST API. I want to check whether they are the same object.
objectA:
{
"type": {
"S": "equal"
},
"preFilter": {
"BOOL": true
}
}
objectB:
{
"preFilter": {
"BOOL": true
},
"type": {
"S": "equal"
}
}
They are the same, but an md5sum will see them as different. I tried inserting them into two different files and comparing the files using something proposed
here, but I would like to know if it's possible to use jq on the fly to compare variables.
I've been trying to change
--argfile a a.json
to
--arg a "$a"
($a being a JSON string) with no luck. Any idea how to handle strings rather than files?
It would probably be simplest to use the --argjson command-line option, e.g.
jq -n --argjson a "$a" --argjson b "$b" '$a == $b'
Of course there are alternatives, e.g. using jq -s ...
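For instance (a sketch, assuming $a and $b each hold a single JSON text), -s slurps both into one array whose two elements can then be compared:
printf '%s\n%s\n' "$a" "$b" | jq -s '.[0] == .[1]'
true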

JQ Join JSON files by key

Looks like that approach isn't applicable to jq 1.4; could you suggest any other ways to join JSON files by key?
e.g.
{
"key": "874102296-1",
"que_lat": "40"
}
{
"key": "874102296-2",
"que_lat": "406790"
}
and
{
"key": "874102296-1",
"in_time": "1530874104733",
"latency": "12864258288242"
}
{
"key": "874102296-2",
"in_time": "1530874104746"
}
As a result, I'd like to have something like this:
{
"key": "874102296-1",
"in_time": "1530874104733",
"full_latency": "12864258288242",
"que_lat": "40"
}
{
"key": "874102296-2",
"in_time": "1530874104746",
"que_lat": "406790"
}
Thanks!
The problem can easily be solved using the def of hashJoin given on the SO page that you cite.
Solution using jq 1.5 or higher
If you have jq 1.5 or higher, you could use this invocation:
jq -n --slurpfile f1 file1.json --slurpfile f2 file2.json -f join.jq
where join.jq contains the second def of hashJoin, together with:
hashJoin($f1; $f2; .key)[]
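For reference, a minimal sketch of what such a def might look like (the authoritative version is on the cited SO page):
def hashJoin(a1; a2; field):
# index each input array by the value of the joining field
(reduce a1[] as $o ({}; . + { ($o | field): $o })) as $h1
| (reduce a2[] as $o ({}; . + { ($o | field): $o })) as $h2
# merge entries sharing a key; entries present only in a2 are dropped in this sketch
| reduce ($h1 | keys[]) as $k ([]; . + [ $h1[$k] + ($h2[$k] // {}) ]);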
Solution using jq 1.4
If you have jq 1.4, the tricky part is to read each of the two files separately as an array. Here's one approach that assumes bash:
jq -n --argfile f1 <(jq -s . file1.json) --argfile f2 <(jq -s . file2.json) -f join.jq
where join.jq is as above.
If you cannot use bash, then it might be simplest to create temporary files.
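For example (a sketch using temporary files, plain POSIX sh):
jq -s . file1.json > /tmp/f1.json
jq -s . file2.json > /tmp/f2.json
jq -n --argfile f1 /tmp/f1.json --argfile f2 /tmp/f2.json -f join.jq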

Getting output using jq

I have the following JSON output:
{
"environment": {
"reg": "abc"
},
"system": {
"svcs": {
"upsvcs": [
{
"name": "monitor",
"tags": [],
"vmnts": [],
"label": "upsvcs",
"credentials": {
"Date": "Feb152018",
"time": "1330"
}
},
{
"name": "application",
"tags": [],
"vmnts": [],
"label": "upsvcs",
"credentials": {
"lastViewed": "2018-02-07"
}
}
]
}
}
}
and I want to retrieve the Date value (from credentials). I have tried curl xxx | jq -r '. | select(.Date)',
which is not returning any value. Can someone please let me know the correct syntax, and any explanation on how to retrieve elements (or articles that do so)?
TIA
The (or at least a) short answer is:
.system.svcs.upsvcs[0].credentials.Date
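That is:
$ jq '.system.svcs.upsvcs[0].credentials.Date' input.json
"Feb152018"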
Without a schema, it is often a bit tricky to get the exact path correctly, so you might want to consider using the jq filter paths. In your case:
$ jq -c paths input.json | grep Date
["system","svcs","upsvcs",0,"credentials","Date"]
You could also use this path (i.e., the above array) directly:
$ jq 'getpath(["system","svcs","upsvcs",0,"credentials","Date"])' input.json
"Feb152018"
and thus you could use a "path-free" query:
$ jq --argjson p "$(jq -c paths input.json | grep -m 1 Date)" 'getpath($p)' input.json
"Feb152018"
Another approach would be just to retrieve all “truthy” .Date values, no matter where they appear:
$ jq -c '.. | .Date? // empty' input.json
"Feb152018"
select
Since you mentioned select, please note that:
$ jq -c '.. | select(.Date?)' input.json
{"Date":"Feb152018","time":"1330"}
Further information
For further information about jq in general and the topic of retrieval in particular, see the online tutorial, manual, and FAQ:
https://stedolan.github.io/jq/tutorial/
https://stedolan.github.io/jq/manual/v1.5/
https://github.com/stedolan/jq/wiki/FAQ

What is the best way to pipe JSON with whitespace into a bash array?

I have JSON like this that I'm parsing with jq:
{
"data": [
{
"item": {
"name": "string 1"
},
"item": {
"name": "string 2"
},
"item": {
"name": "string 3"
}
}
]
}
...and I'm trying to get "string 1", "string 2", and "string 3" into a Bash array, but I can't find a solution that doesn't split on the whitespace inside them. Is there a method in jq that I'm missing, or perhaps an elegant solution in Bash for it?
Current method:
json_names=$(cat file.json | jq ".data[] .item .name")
read -a name_array <<< $json_names
The below assumes your JSON text is in a string named s. That is:
s='{
"data": [
{
"item1": {
"name": "string 1"
},
"item2": {
"name": "string 2"
},
"item3": {
"name": "string 3"
}
}
]
}'
Unfortunately, both of the below will misbehave with strings containing literal newlines; since jq doesn't have support for NUL-delimited output, this is difficult to work around.
On bash 4 (with slightly sloppy error handling, but tersely):
readarray -t name_array < <(jq -r '.data[] | .[] | .name' <<<"$s")
...or on bash 3.x or newer (with very comprehensive error handling, but verbosely):
# -d '' tells read to process up to a NUL, and will exit with a nonzero exit status if that
# NUL is not seen; thus, this causes the read to pass through any error which occurred in
# jq.
IFS=$'\n' read -r -d '' -a name_array \
< <(jq -r '.data[] | .[] | .name' <<<"$s" && printf '\0')
This populates a bash array, contents of which can be displayed with:
declare -p name_array
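which should print something like:
declare -a name_array=([0]="string 1" [1]="string 2" [2]="string 3")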
Arrays are assigned in the form:
NAME=(VALUE1 VALUE2 ... )
where NAME is the name of the variable, and VALUE1, VALUE2, and the rest are fields separated by characters present in the $IFS (internal field separator) variable.
Since jq outputs the string values as lines (sequences separated by the newline character), you can temporarily override $IFS, e.g.:
# Disable globbing, remember current -f flag value
[[ "$-" == *f* ]] || globbing_disabled=1
set -f
# Note: the IFS assignment below persists beyond the array assignment,
# so save the old value and restore it afterwards
old_IFS=$IFS
IFS=$'\n' a=( $(jq --raw-output '.data[].item.name' file.json) )
IFS=$old_IFS
# Restore globbing
test -n "$globbing_disabled" && set +f
The above will create an array of three items for the following file.json:
{
"data": [
{"item": {
"name": "string 1"
}},
{"item": {
"name": "string 2"
}},
{"item": {
"name": "string 3"
}}
]
}
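To verify, printing one element per line should show the embedded whitespace preserved:
printf '<%s>\n' "${a[@]}"
<string 1>
<string 2>
<string 3>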
The following shows how to create a bash array consisting of arbitrary JSON texts produced by a run of jq.
In the following, I'll assume input.json is a file with the following:
["string 1", "new\nline", {"x": 1}, ["one\ttab", 4]]
With this input, the jq filter .[] produces four JSON texts -- two JSON strings, a JSON object, and a JSON array.
The following bash script can then be used to set x to be a bash array of the JSON texts:
#!/bin/bash
x=()
while read -r value
do
x+=("$value")
done < <(jq -c '.[]' input.json)
For example, adding this bash expression to the script:
for a in "${x[@]}" ; do echo a="$a"; done
would yield:
a="string 1"
a="new\nline"
a={"x":1}
a=["one\ttab",4]
Notice how (encoded) newlines and (encoded) tabs are handled properly.