Let's say this is my array:
[
{
"name": "Matias",
"age": "33"
}
]
I can do this:
echo "$response" | jq '[ .[] | select(.name | test("M.*"))] | . += [.[]]'
And it will output:
[
{
"name": "Matias",
"age": "33"
},
{
"name": "Matias",
"age": "33"
}
]
But I can't do this:
echo "$response" | jq '[ .[] | select(.name | test("M.*"))] | . += [.[] * 3]'
jq: error (at <stdin>:7): object ({"name":"Ma...) and number (3) cannot be multiplied
I need to extend an array to create a dummy array with 100 values, and I can't do it. I would also like to have a random age on each object (so later on I can filter the file to measure the performance of an app).
Currently jq does not have a built-in randomization function, but it's easy enough to generate random numbers that jq can use. The following solution uses awk, but in a way that allows some other PRNG to be substituted easily.
#!/bin/bash
function template {
cat<<EOF
[
{
"name": "Matias",
"age": "33"
}
]
EOF
}
function randoms {
awk -v n=$1 'BEGIN { for(i=0;i<n;i++) {print int(100*rand())} }'
}
randoms 100 | jq -n --argfile template <(template) '
first($template[] | select(.name | test("M.*"))) as $t
| [ $t | .age = inputs]
'
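If the random ages are optional and you just need the 100 copies, a jq-only sketch (assuming jq 1.5 or later for first) would be:
echo "$response" | jq 'first(.[] | select(.name | test("M.*"))) as $t | [range(100) | $t]'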
Note on performance
Even though the above uses awk and jq together, this combination is about 10 times faster than the posted jtc solution using -eu:
jq+awk: u+s = 0.012s
jtc with -eu: u+s = 0.192s
Using jtc in conjunction with awk as above, however, gives u+s == 0.008s on the same machine.
Related
curl http://testhost.test.com:8080/application/app/version | jq '.version' | jq '.[]'
The above command outputs only the values as below:
"madireddy@test.com"
"2323"
"test"
"02-03-2014-13:41"
"application"
How can I get the key names instead like the below:
email
versionID
context
date
versionName
To get the keys in the order they appear in the original JSON use:
jq 'keys_unsorted' file.json
If you want the keys sorted alphanumerically, you can use:
jq 'keys' file.json
Complete example
$ cat file.json
{ "Created-By" : "Apache Maven", "Build-Number" : "", "Archiver-Version" : "Plexus Archiver", "Build-Id" : "", "Build-Tag" : "", "Built-By" : "cporter"}
$ jq 'keys_unsorted' file.json
[
"Created-By",
"Build-Number",
"Archiver-Version",
"Build-Id",
"Build-Tag",
"Built-By"
]
$ jq 'keys' file.json
[
"Archiver-Version",
"Build-Id",
"Build-Number",
"Build-Tag",
"Built-By",
"Created-By"
]
To get the keys on a deeper node in a JSON:
echo '{"data": "1", "user": { "name": 2, "phone": 3 } }' | jq '.user | keys[]'
"name"
"phone"
You need to use jq 'keys[]'. For example:
echo '{"example1" : 1, "example2" : 2, "example3" : 3}' | jq 'keys[]'
This will output a line-separated list:
"example1"
"example2"
"example3"
In combination with the above answer, you want to ask jq for raw output, so your last filter should be, e.g.:
cat input.json | jq -r 'keys'
From jq help:
-r output raw strings, not JSON texts;
To print keys on one line as csv:
echo '{"b":"2","a":"1"}' | jq -r 'keys | [ .[] | tostring ] | @csv'
Output:
"a","b"
For csv completeness ... to print values on one line as csv:
echo '{"b":"2","a":"1"}' | jq -rS . | jq -r '. | [ .[] | tostring ] | @csv'
Output:
"1","2"
If your input is an array of objects,
[
{
"a01" : { "name" : "A", "user" : "B" }
},
{
"a02" : { "name" : "C", "user" : "D" }
}
]
try with:
jq '.[] | keys[]'
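For the sample above, this should print:
"a01"
"a02"
(add -r if you want them without the quotes)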
Oddly enough, the accepted answer doesn't actually answer the question exactly, so for reference, here is a solution that does:
$ jq -r 'keys_unsorted[]' file.json
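Against the file.json from the complete example above, that should print the keys unquoted, one per line:
Created-By
Build-Number
Archiver-Version
Build-Id
Build-Tag
Built-By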
echo '{"ab": 1, "cd": 2}' | jq -r 'keys[]' prints all keys, one per line, without quotes:
ab
cd
Here's another way of getting a Bash array with the example JSON given by @anubhava in his answer:
arr=($(jq --raw-output 'keys_unsorted | @sh' file.json))
echo ${arr[0]} # 'Created-By'
echo ${arr[1]} # 'Build-Number'
echo ${arr[2]} # 'Archiver-Version'
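Note that with the unquoted $( ) each element keeps the literal single quotes added by @sh. A sketch of an alternative that avoids the quoting and word-splitting caveats (assuming bash 4+ for readarray):
readarray -t arr < <(jq -r 'keys_unsorted[]' file.json)
echo "${arr[0]}"   # Created-By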
I'm a rookie with jq.
I would like to merge 2 JSON files with jq, but only for the keys present in the first file.
First file (first.json)
{
"@@locale": "en",
"foo": "bar1"
}
Second file (second.json)
{
"@@locale": "en",
"foo": "bar2",
"oof": "rab"
}
I have already tried:
jq -n '.[0] * .[1]' first.json second.json
and, after an edit:
jq -s '.[0] * .[1]' first.json second.json
But the returned result is wrong.
{
"@@locale": "en",
"foo": "bar2",
"oof": "rab"
}
The "oof" entry should not be present.
Expected merged.
{
"@@locale": "en",
"foo": "bar2"
}
Best regards.
And here's a one-liner, which happens to be quite efficient:
jq --argfile first first.json '. as $in | $first | with_entries(.value = $in[.key] )' second.json
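Applied to first.json and second.json above, this should produce the expected merged object:
{
"@@locale": "en",
"foo": "bar2"
}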
Consider:
jq -n '.
| input as $first # read first input
| input as $second # read second input
| $first * $second # make the merger of the two the context item
| [ to_entries[] # ...then break it out into key/value pairs
| select($first[.key]) # ...and filter those for whether they exist in the first input
] | from_entries # ...before reassembling into a single object.
' first.json second.json
...which properly emits:
{
"@@locale": "en",
"foo": "bar2"
}
I have multiple JSON files that I'd like to merge into one.
Some have the same root element but different children. I don't want to overwrite the children but to extend them if they have the same parent element.
I've tried this answer, but it doesn't work:
jq: error (at file2.json:0): array ([{"title":"...) and array ([{"title":"...) cannot be multiplied
Sample files and desired result (Gist)
Thank you in advance.
Here is a recursive solution which uses group_by(.key) to decide
which objects to combine. This could be a little simpler if .children
were more uniform. Sometimes it's absent in the sample data and sometimes it's the unusual value [{}].
def merge:
def kids:
map(
.children
| if length<1 then empty else .[] end
)
| if length<1 then {} else {children:merge} end
;
def mergegroup:
{
title: .[0].title
, key: .[0].key
} + kids
;
if .==[{}] then .
else group_by(.key) | map(mergegroup)
end
;
[ .[] | .[] ] | merge
When run with the -s option as follows
jq -M -s -f filter.jq file1.json file2.json
It produces the following output.
[
{
"title": "Title1",
"key": "12345678",
"children": [
{
"title": "SubTitle2",
"key": "123456713",
"children": [
{}
]
},
{
"title": "SubTitle1",
"key": "12345679",
"children": [
{
"title": "SubSubTitle1",
"key": "12345610"
},
{
"title": "SubSubTitle2",
"key": "12345611"
},
{
"title": "DifferentSubSubTitle1",
"key": "12345612"
}
]
}
]
}
]
If the ordering of the objects within .children matters, then a sort_by can be added to the {children:merge} expression, e.g. {children: merge | sort_by(.key)}
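In context, the relevant line of kids would then read (a sketch, not run against the sample data):
| if length<1 then {} else {children: merge | sort_by(.key)} end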
Here is something that will reproduce your desired result. It's by no means automatic; it's really a proof of concept at this stage.
One liner:
jq -s '. as $in | ($in[0][].children[].children + $in[1][].children[0].children | unique) as $a1 | $in[1][].children[1] as $s1 | $in[0] | .[0].children[0].children = ($a1) | .[0].children += [$s1]' file1.json file2.json
Multi line breakdown (Copy/Paste):
jq -s '. as $in
| ($in[0][].children[].children + $in[1][].children[0].children
| unique) as $a1
| $in[1][].children[1] as $s1
| $in[0]
| .[0].children[0].children = ($a1)
| .[0].children += [$s1]' file1.json file2.json
Where:
$in : file1.json and file2.json combined input
$a1: merged "SubSubTitle" array
$s1: second subtitle object
I suspect the reason this didn't work is that your schema is different and has nested arrays.
I find it quite hypnotic to look at. It would be good if you could elaborate a bit on how fixed the structure is and what the requirements are.
I have the following type of json:
{
"foo": "hello",
"bar": [
{
"key": "k1",
"val": "v1"
},
{
"key": "k2",
"val": "v2"
},
{
"key": "k3",
"val": "v3"
}
]
}
I want to output the following:
"hello", 1, "k1", "v1"
"hello", 2, "k2", "v2"
"hello", 3, "k3", "v3"
I am using jq to transform this, and the answer should also use a jq transformation.
I am currently at:
echo '{"foo": "hello","bar": [{"key": "k1","val": "v1"},{"key": "k2","val": "v2"},{"key": "k3","val": "v3"} ]}' | jq -c -r '.bar[] as $b | [.foo, ($b | .key, .val)] | @csv'
Which gives me:
"hello","k1","v1"
"hello","k2","v2"
"hello","k3","v3"
How can I also get the index to show of the array element being parsed?
You could convert the array to entries to access the index and the value. Then you can build out the CSV rows.
$ jq -r '[.foo] + (.bar | to_entries[] | [.key+1,.value.key,.value.val]) | @csv' input.json
"hello",1,"k1","v1"
"hello",2,"k2","v2"
"hello",3,"k3","v3"
Assuming you have access to jq 1.5 and that the key/val keys are presented in that order:
jq -r '.foo as $foo
| foreach .bar[] as $i (0; .+1; [$foo, .] + [$i[]])
| @csv'
would produce:
"hello",1,"k1","v1"
"hello",2,"k2","v2"
"hello",3,"k3","v3"
The -r option is often used with @csv to convert the JSON string that would otherwise be produced by @csv into a comma-separated list of values.
If you really want to join with ", ", then it's a bit messier, but if you're not worried about the functionality that @csv provides, here's one way:
$ jq -r '"\"\(.foo)\"" as $foo
| foreach .bar[] as $i
(0; .+1; "\($foo), \(.), \($i | map("\"\(.)\"")|join(", "))")'
This produces:
"hello", 1, "k1", "v1"
"hello", 2, "k2", "v2"
"hello", 3, "k3", "v3"
If your jq does not have foreach then you could similarly use reduce, but it might be easier to upgrade.
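For reference, here is a rough, untested sketch of what the reduce variant might look like, keeping the same row shape as the foreach version above (treat it as an assumption, not a verified drop-in):
jq -r '.foo as $foo
| reduce .bar[] as $i ([]; . + [ [ $foo, length + 1, ($i | .key), ($i | .val) ] ])
| .[]
| @csv'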