Lookup filtering with jq - json

Given a JSON string like this:
[
  {
    "id": 1,
    "name": "Arthur",
    "age": "21"
  },
  {
    "id": 2,
    "name": "Richard",
    "age": "32"
  }
]
How do I filter by name and get the age?
E.g., given the name "Richard", jq should return "32". Thanks.

$ jq --arg name Richard '.[] | select(.name==$name) | .age' input.json
"32"
When using jq like this on Windows, the quoting has to be adapted to the Windows shell.
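For example, on Windows with cmd.exe the same command can be written with double quotes around the filter (a rough sketch; cmd.exe does not expand $name, so no extra escaping is needed here):
jq --arg name Richard ".[] | select(.name==$name) | .age" input.json
In PowerShell, single quotes pass the filter through literally, so the original invocation should generally work unchanged.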

Print key and value for different entries in an object

I need to print some results with jq from JSON input.
This is an example:
{
  "data": [
    {
      "time": 20201606,
      "event": {
        "ip": "127.0.1",
        "hostname": "srv1",
        "locations": [
          "UK",
          "site1"
        ],
        "num": 1
      }
    },
    {
      "time": 202016034,
      "event": {
        "ip": "127.0.2",
        "hostname": "srv2",
        "locations": [
          "UK",
          "site2"
        ],
        "num": 3
      }
    }
  ]
}
I'd like to generate this output ("num, ip, hostname, locations"):
1, srv1, 127.0.1, UK,site1
2, srv2, 127.0.2, HK,site2
3, srv3, 127.0.3, LO,site3
How can I print this via jq?
Join locations by a comma, and put the result into an array with other fields. Then join again by a comma followed by a space to get the desired output format. E.g.:
.data[].event | [
  .num,
  .hostname,
  .ip,
  (.locations | join(","))?
] | join(", ")
Use the --raw-output/-r option in the command-line invocation to get raw strings instead of JSON strings.
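Putting it all together (a sketch, assuming the input shown above is saved as input.json), the invocation would look roughly like:
$ jq -r '.data[].event | [.num, .hostname, .ip, (.locations | join(","))] | join(", ")' input.json
1, srv1, 127.0.1, UK,site1
3, srv2, 127.0.2, UK,site2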
At its core, you want to build an array consisting of the values you want:
$ jq '.data[].event | [.num, .hostname, .ip, .locations]' tmp.json
[
  1,
  "srv1",
  "127.0.1",
  [
    "UK",
    "site1"
  ]
]
[
  3,
  "srv2",
  "127.0.2",
  [
    "UK",
    "site2"
  ]
]
From there, it's a matter of formatting. First, let's turn the list of locations into a single string:
$ jq '.data[].event | [.num, .hostname, .ip, (.locations|join(","))]' tmp.json
[
  1,
  "srv1",
  "127.0.1",
  "UK,site1"
]
[
  3,
  "srv2",
  "127.0.2",
  "UK,site2"
]
Next, let's join the array elements into a single ", "-separated string.
$ jq '.data[].event | [.num, .hostname, .ip, (.locations|join(","))] | join(", ")' tmp.json
"1, srv1, 127.0.1, UK,site1"
"3, srv2, 127.0.2, UK,site2"
Finally, you can use the -r flag to output raw text rather than a JSON string value.
$ jq -r '.data[].event | [.num, .hostname, .ip, (.locations|join(","))] | join(", ")' tmp.json
1, srv1, 127.0.1, UK,site1
3, srv2, 127.0.2, UK,site2
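As an aside (not part of the original answer), jq string interpolation gives a roughly equivalent one-liner that skips the intermediate array:
$ jq -r '.data[].event | "\(.num), \(.hostname), \(.ip), \(.locations | join(","))"' tmp.json
1, srv1, 127.0.1, UK,site1
3, srv2, 127.0.2, UK,site2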

Retrieve data from JSON where list items have specific/known values using JQ

I have a JSON file with blocks like the one below:
{
  "id": 0000,
  "sGName": "SG1",
  "customProperties": [
    {
      "id": 100000,
      "name": "clustersIP",
      "value": "ABC"
    }
  ],
  "filters": [
    {
      "id": 74616,
      "attributeName": "serverName",
      "value": "Sever101"
    },
    {
      "id": 74617,
      "attributeName": "bigIPPool",
      "value": "server101v1"
    }
  ]
},
I am looking for a jq command in bash to retrieve
sGName where "attributeName"="serverName" and "value"="Sever101". Can someone please help with that?
I might have multiple "sGName" entries.
I believe you're looking for the following command:
jq --raw-output 'map( select(.filters | any(.attributeName == "serverName" and .value == "Sever101")) |.sGName) | join("\n")'
With the serverName criterion extracted as a parameter:
jq --raw-output --arg serverName "Sever101" 'map( select(.filters | any(.attributeName == "serverName" and .value == $serverName)) |.sGName) | join("\n")'
oguz ismail proposes a more streamlined solution:
jq --raw-output --arg serverName "Sever101" '.[] | select(any(.filters[]; .attributeName=="serverName" and .value==$serverName)).sGName'
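For example, assuming the blocks are collected in a top-level array in a file named input.json (a hypothetical name), this prints one sGName per matching block:
$ jq --raw-output --arg serverName "Sever101" '.[] | select(any(.filters[]; .attributeName=="serverName" and .value==$serverName)).sGName' input.json
SG1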

jq - merge two JSON files on a value

I have two JSON files structured like this:
file 1
[
  {
    "id": 25422,
    "location": "Hotel X",
    "suppliers": [
      12
    ]
  },
  {
    "id": 25423,
    "location": "Hotel Y",
    "suppliers": [
      13
    ]
  }
]
file 2
[
  {
    "id": 12,
    "vatNumber": "0000000000"
  },
  {
    "id": 14,
    "vatNumber": "0000000001"
  }
]
and I'd like a result like this:
[
  {
    "id": 25422,
    "location": "Hotel X",
    "suppliers": [
      12
    ],
    "vatNumber": "0000000000"
  },
  {
    "id": 25423,
    "location": "Hotel Y",
    "suppliers": [
      13
    ]
  }
]
The important thing to me is that the matching vatNumbers end up in the first file's objects. The supplier arrays are no longer needed after the merge, if dropping them simplifies the job.
Also, jq is not essential, but I need something I can run from the terminal to set up a script.
Thank you in advance.
Here's one of many possible solutions. If your jq does not have INDEX/2, then either upgrade your jq or include its def (available e.g. from https://github.com/stedolan/jq/blob/master/src/builtin.jq):
Invocation:
jq -n --argfile f1 file1.json --argfile f2 file2.json -f merge.jq
merge.jq:
INDEX($f2[] ; .id) as $dict
| $f1
| map( ($dict[.suppliers[0]|tostring]|.vatNumber) as $vn
| if $vn then .vatNumber = $vn else . end)
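Note that --argfile is deprecated in recent jq releases; a roughly equivalent invocation using --slurpfile (which wraps each file's contents in an array, hence the extra [0] indices) might look like:
jq -n --slurpfile f1 file1.json --slurpfile f2 file2.json '
  INDEX($f2[0][]; .id) as $dict
  | $f1[0]
  | map( ($dict[.suppliers[0]|tostring]|.vatNumber) as $vn
         | if $vn then .vatNumber = $vn else . end )
'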

Creating a CSV from json using jq, based on elements in array

I have the following JSON that I need to convert to CSV:
[{
  "name": "joe",
  "age": 21,
  "skills": [{
    "lang": "spanish",
    "grade": "47",
    "school": {
      "name": "my school",
      "url": "example.com/sp-school"
    }
  }, {
    "lang": "english",
    "grade": "87"
  }]
}, {
  "name": "sarah",
  "age": 34,
  "skills": [{
    "lang": "french",
    "grade": "47",
    "school": {
      "name": "my school",
      "url": "example.com/sp-school"
    }
  }, {
    "lang": "english",
    "grade": "87"
  }]
}, {
  "name": "jim",
  "age": 26,
  "skills": [{
    "lang": "spanish",
    "grade": "60"
  }, {
    "lang": "english",
    "grade": "66",
    "school": {
      "name": "eg school",
      "url": "eg-school.com"
    }
  }]
}]
to CSV like this:
name,age,grade,school,url,file,line_number
joe,21,47,"my school","example.com/sp-school",sample.json,1
jim,26,60,"","",sample.json,3
So: add the top-level fields, the object from the skills array where lang=spanish, and the school hash from that skills object if it exists.
I'd also like to add the file and line number it came from.
I would like to use jq for the job, but can't figure out the syntax. Can anyone help me out?
With your data in input.json, and the following jq program in tocsv.jq:
.[]
| [.name, .age] +
  ( .skills[]
    | select(.lang == "spanish")
    | [.grade, .school.name, .school.url, input_filename, input_line_number] )
| @csv
the invocation:
jq -r -f tocsv.jq input.json
yields:
"joe",21,"47","my school","example.com/sp-school","input.json",51
"jim",26,"60",,,"input.json",51
If you want the number-valued strings converted to numbers, you could use the "tonumber" filter. If you want the null-valued fields replaced by strings, use e.g. .school.name // ""
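For instance (a sketch, not part of the original answer), tocsv.jq with both tweaks applied could read:
.[]
| [.name, .age] +
  ( .skills[]
    | select(.lang == "spanish")
    | [(.grade | tonumber), (.school.name // ""), (.school.url // ""), input_filename, input_line_number] )
| @csv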
Of course this approach doesn't yield a very useful line number. One approach that would yield higher granularity would be to stream the individual objects into jq, but then you'd lose the filename. To recover the filename you could pass it in as an argument. So you would have a pipeline like so:
jq -c '.[]' input.json | jq -r --arg file input.json -f tocsv2.jq
where tocsv2.jq would be like tocsv.jq above but without the initial .[] |, and with $file instead of input_filename.
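A sketch of tocsv2.jq under those assumptions:
[.name, .age] +
( .skills[]
  | select(.lang == "spanish")
  | [.grade, .school.name, .school.url, $file, input_line_number] )
| @csv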
Finally, please also consider using the TSV format (@tsv) rather than the rather messy CSV format (@csv).

Select or exclude multiples object with an array of IDs

I have the following JSON:
[
  {
    "id": "1",
    "foo": "bar-a",
    "hello": "world-a"
  },
  {
    "id": "2",
    "foo": "bar-b",
    "hello": "world-b"
  },
  {
    "id": "10",
    "foo": "bar-c",
    "hello": "world-c"
  },
  {
    "id": "42",
    "foo": "bar-d",
    "hello": "world-d"
  }
]
And I have the following array stored in a variable: ["1", "2", "56", "1337"] (note that the IDs are strings and may contain any regular character).
So, thanks to this SO answer, I found a way to filter my original data. jq '[.[] | select(.id == ("1", "2", "56", "1337"))]' ./data.json (note the array is surrounded by parentheses and not brackets) produces:
[
  {
    "id": "1",
    "foo": "bar-a",
    "hello": "world-a"
  },
  {
    "id": "2",
    "foo": "bar-b",
    "hello": "world-b"
  }
]
But I would also like to do the opposite (basically excluding IDs instead of selecting them). Using select(.id != ("1", "2", "56", "1337")) doesn't work, and jq '[. - [.[] | select(.id == ("1", "2", "56", "1337"))]]' ./data.json seems very ugly and doesn't work with my actual data (the output of aws ec2 describe-instances).
So, does anyone have an idea how to do that? Thank you!
To include them, you need to verify that the id is any of the values in the keep set.
$ jq --argjson include '["1", "2", "56", "1337"]' 'map(select(.id == $include[]))' ...
To exclude them, you need to verify that all values are not in your excluded set. But it might just be easier to take the original set and remove the items that are in the excluded set.
$ jq --argjson exclude '["1", "2", "56", "1337"]' '. - map(select(.id == $exclude[]))' ...
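As an aside (not in the original answer), recent jq versions also ship an IN builtin, which expresses the same membership tests directly; with the data saved as data.json as in the question:
$ jq --argjson include '["1", "2", "56", "1337"]' 'map(select(.id | IN($include[])))' ./data.json
$ jq --argjson exclude '["1", "2", "56", "1337"]' 'map(select(.id | IN($exclude[]) | not))' ./data.json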
Here is a solution that uses inside. Assuming you run jq as
jq -M --argjson IDS '["1","2","56","1337"]' -f filter.jq data.json
This filter.jq
map( select([.id] | inside($IDS)) )
produces the objects from data.json whose ids are in the $IDS array:
[
  {
    "id": "1",
    "foo": "bar-a",
    "hello": "world-a"
  },
  {
    "id": "2",
    "foo": "bar-b",
    "hello": "world-b"
  }
]
and this filter.jq
map( select([.id] | inside($IDS) | not) )
produces the objects from data.json whose ids are not in the $IDS array:
[
  {
    "id": "10",
    "foo": "bar-c",
    "hello": "world-c"
  },
  {
    "id": "42",
    "foo": "bar-d",
    "hello": "world-d"
  }
]