How can I merge two JSON streams/lists into a single stream of JSON objects, like the following?
file 1:
{"key1": 1}
{"key1": 2}
{"key1": 3}
file 2:
{"key2": -1}
{"key2": -2}
{"key2": -3}
Expected output:
{"key1": 1, "key2": -1}
{"key1": 2, "key2": -2}
{"key1": 3, "key2": -3}
Here is a "streaming" solution that is more efficient (both memory-wise and otherwise) than one that requires that both files be read in their entirety before producing any output:
< file1.json jq -nc --slurpfile file2 file2.json '
  # For each item $s in the stream s,
  # emit [$s, $t] where $t is the corresponding item in the input array
  def zip(s):
    . as $in
    | foreach s as $s (-1; .+1; [$s, $in[.]]);
  $file2 | zip(inputs) | add
'
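With the sample files, this emits the expected stream (compactly, because of -c):
{"key1":1,"key2":-1}
{"key1":2,"key2":-2}
{"key1":3,"key2":-3}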
If your jq does not support the --slurpfile command-line option, use the older (now deprecated) --argfile option instead.
As a one-liner, inlining zip away:
< file1.json jq -nc --slurpfile file2 file2.json 'foreach inputs as $s (-1; .+1; $s + $file2[.])'
You can transpose and add:
$ jq -c -n --slurpfile a file1.json --slurpfile b file2.json '[$a, $b] | transpose[] | add'
{"key1":1,"key2":-1}
{"key1":2,"key2":-2}
{"key1":3,"key2":-3}
A JSON object can be created using the following command:
jq -n \
--arg v1 "Value1" \
--arg v2 "Value2" \
'{k1: $v1, k2: $v2}'
But when my keys are variable, how should I loop? For example, the script I execute is
test.sh k1=v1 k2=v2 k3=v3
test.sh is as follows
index=1
while ((index <= $#)); do
  data_i_key=$(echo "${!index}" | awk -F "=" '{print $1}')
  data_i_value=$(echo "${!index}" | awk -F "=" '{print $2}')
  let index++
  JSON_STRING=$(jq -n \
    --arg value "$data_i_value" \
    '{'"$data_i_key"': $value}')
done
echo $JSON_STRING
The above prints only
{ "K3": "V3" }
If I replace the assignment with
JSON_STRING+=$(jq -n \
  --arg val_value "$data_i_value" \
  '{'"$data_i_key"': $val_value}')
then the output is
{ "k1": "v1" }{ "k2": "v2" }{ "K3": "V3" }
Neither method achieves the goal. Do you have a good way to deal with this? My desired output is
{ "k1": "v1", "k2": "v2", "K3": "V3" }
I hope you can help me.
I'd suggest a totally different, but simpler, approach:
for kv; do
echo "$kv" | jq -R './"=" | {key:first,value:last}'
done | jq -s 'from_entries'
It builds {key: …, value: …} objects from your positional parameters (splitting them on the equal sign) and then slurps all those objects in a second jq process, converting them into a single object via from_entries.
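For example, if the loop above is saved as the question's test.sh:
$ ./test.sh k1=v1 k2=v2 k3=v3
{
  "k1": "v1",
  "k2": "v2",
  "k3": "v3"
}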
Alternatively, use -n (--null-input) with --args and access the parameters via $ARGS.positional. This avoids the loop and the second jq call altogether.
jq -n '$ARGS.positional | map(./"=" | {key:first,value:last}) | from_entries' --args "$@"
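For example, passing the pairs directly as arguments:
$ jq -n '$ARGS.positional | map(./"=" | {key:first,value:last}) | from_entries' --args k1=v1 k2=v2 k3=v3
{
  "k1": "v1",
  "k2": "v2",
  "k3": "v3"
}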
If your values can contain = themselves, you have to join all but the first value of the array:
jq -n '$ARGS.positional
| map(./"=" | {key:first,value:.[1:]|join("=")})
| from_entries' --args "$@"
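For instance, with a hypothetical argument whose value itself contains an equal sign:
$ jq -n '$ARGS.positional
  | map(./"=" | {key:first,value:.[1:]|join("=")})
  | from_entries' --args mode=a=b
{
  "mode": "a=b"
}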
Use parentheses around the key expression to have it evaluated:
jq -n --arg k1 "Key1" --arg v1 "Value1" '{($k1): $v1}'
{
"Key1": "Value1"
}
Perhaps this, which will handle an arbitrary number of key=value arguments:
#!/usr/bin/env bash
args=()
for arg; do
IFS="=" read -r k v <<<"$arg"
args+=("$k" "$v")
done
jq -n --args '
$ARGS.positional as $a
| reduce range(0; $a | length; 2) as $i ({}; .[$a[$i]] = $a[$i + 1])
' "${args[#]}"
Produces:
$ ./test.sh k1=v1 k2=v2 k3=v3
{
"k1": "v1",
"k2": "v2",
"k3": "v3"
}
I have a short TSV stream that outputs key/value pairs, and I would like to create a JSON object out of it with jq.
For now I have this code, which generates a stack of JSON objects:
printf '%s\t%s\n' key1 val1 key2 val2 |
jq -R 'rtrimstr("\n") | split("\t") | { (.[0]): .[1] }'
{
"key1": "val1"
}
{
"key2": "val2"
}
I can pipe it to jq -s 'add' to get my expected output:
{
"key1": "val1",
"key2": "val2"
}
But, is it possible to get this output with a single jq call?
Use -n and inputs to access the stream item by item.
For instance using reduce:
jq -Rn 'reduce (inputs/"\t") as [$k,$v] ({}; .[$k] = $v)'
Or using from_entries:
jq -Rn '[inputs/"\t" | {key: .[0], value: .[1]}] | from_entries'
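With the sample stream, either variant produces the desired single object:
{
  "key1": "val1",
  "key2": "val2"
}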
I'm a rookie with jq. I would like to merge two JSON files with jq, but only for the keys present in the first file.
First file (first.json)
{
"##locale": "en",
"foo": "bar1"
}
Second file (second.json)
{
"##locale": "en",
"foo": "bar2",
"oof": "rab"
}
I already tried:
jq -n '.[0] * .[1]' first.json second.json
jq -s '.[0] * .[1]' first.json second.json
But the returned result is wrong.
{
"##locale": "en",
"foo": "bar2",
"oof": "rab"
}
"oof" entry should not be present.
Expected merged.
{
"##locale": "en",
"foo": "bar2"
}
Best regards.
And here's a one-liner, which happens to be quite efficient:
jq --argfile first first.json '. as $in | $first | with_entries(.value = $in[.key] )' second.json
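With the sample files, this produces exactly the expected result:
{
  "##locale": "en",
  "foo": "bar2"
}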
Consider:
jq -n '.
| input as $first # read first input
| input as $second # read second input
| $first * $second # make the merger of the two the context item
| [ to_entries[] # ...then break it out into key/value pairs
| select($first[.key]) # ...and filter those for whether they exist in the first input
] | from_entries # ...before reassembling into a single object.
' first.json second.json
...which properly emits:
{
"##locale": "en",
"foo": "bar2"
}
I have two JSON inputs, and I want jq to build a new JSON array, copying the elements from the 2nd array to the corresponding positions in the 1st:
1st json:
[
{"foo": "foo1", "bar": "bar1"},
{"foo": "foo2", "bar": "bar2"},
{"foo": "foo3", "bar": "bar3"}
]
2nd json:
[[
"baz1",
"baz2",
"baz3"
]]
expected result:
[
{"foo": "foo1", "bar": "bar1", "baz": "baz1"},
{"foo": "foo2", "bar": "bar2", "baz": "baz2"},
{"foo": "foo3", "bar": "bar3", "baz": "baz3"}
]
I've tried this command line, but it doesn't seem to work:
jq -n --argfile o1 "1st.json" --argfile o2 "2nd.json" "[$o1 [] | .baz= $o2[][]]"
The following adopts a straightforward approach to the point of being a bit pedestrian:
jq -s -f merge.jq 1.json 2.json
assuming the file merge.jq contains:
.[1][0] as $two
| .[0]
| reduce range(0; length) as $i (.;
    .[$i].baz = $two[$i] )
Variation
If your jq supports the --argfile option, you can avoid the overhead of "slurping" by running:
jq --argfile two 2.json -f merge.jq 1.json
assuming merge.jq contains:
reduce range(0; length) as $i (.;
  .[$i].baz = $two[0][$i] )
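Either invocation yields the expected array; for instance, with -c added for compact output:
$ jq -c --argfile two 2.json -f merge.jq 1.json
[{"foo":"foo1","bar":"bar1","baz":"baz1"},{"foo":"foo2","bar":"bar2","baz":"baz2"},{"foo":"foo3","bar":"bar3","baz":"baz3"}]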
I have two json files, each containing one simple object, for example:
file1
{
"key1": "value1",
"key2": "value2"
}
file2
{
"key1": "valueA",
"key3": "valueB"
}
I need to combine these two using jq so that I end up with one object that contains all of the keys from both objects. For common keys, I need the values from the second object to be used.
I'm struggling to get the right expression to use. I thought that something as simple as
jq '. * .' file1 file2
should give me what I want; however, this just outputs each object unchanged, one after the other:
{
"key1": "value1",
"key2": "value2"
}
{
"key1": "valueA",
"key3": "valueB"
}
The same exact thing happens if I use jq '. + .' file1 file2.
How can I combine these two objects?
When you pass in multiple input files, their contents are streamed in one after another, and the filter runs on each input separately. You'd either have to slurp them in or combine the individual inputs.
$ jq -s 'add' file1 file2
or
$ jq -n 'reduce inputs as $i ({}; . + $i)' file1 file2
Or, if you want to merge (recursively) instead of add:
$ jq -n 'reduce inputs as $i ({}; . * $i)' file1 file2
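The difference between + and * only shows up with nested objects; a quick illustration with made-up values:
$ jq -n '{"a": {"b": 1}} + {"a": {"c": 2}}'
{
  "a": {
    "c": 2
  }
}
$ jq -n '{"a": {"b": 1}} * {"a": {"c": 2}}'
{
  "a": {
    "b": 1,
    "c": 2
  }
}
With +, the right-hand value of "a" replaces the left-hand one wholesale; with *, the two are merged recursively.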
Alternative way with jq --slurpfile option:
jq --slurpfile f2 file2 '. + $f2[0]' file1
The output:
{
"key1": "valueA",
"key2": "value2",
"key3": "valueB"
}
Here is another way (assumes sample data in file1.json and file2.json):
$ jq -Mn --argfile file1 file1.json --argfile file2 file2.json '$file1 + $file2'
{
"key1": "valueA",
"key2": "value2",
"key3": "valueB"
}