How to group by a value in JSON using jq?

I have the following JSON:
[
{
"certname": "server1",
"environment": "production",
"name": "memorysize",
"value": "62.76 GiB"
},
{
"certname": "server1",
"environment": "production",
"name": "processorcount",
"value": 12
},
{
"certname": "server2",
"environment": "production",
"name": "memorysize",
"value": "62.76 GiB"
},
{
"certname": "server2",
"environment": "production",
"name": "processorcount",
"value": 10
}
]
And I want to convert it to the format below, grouped by certname. The challenge is that I need to use each entry's name as the key and its value as the value, as follows:
[
{
"certname": "server1",
"memorysize": "62.76 GiB",
"processorcount": 12
},
{
"certname": "server2",
"memorysize": "62.76 GiB",
"processorcount": 10
}
]
How do I do this using jq? I have tried to_entries but it doesn't help either.
Thanks

The following is a commented jq script. Feel free to use it as is, or strip out the newlines and comments and use it as a one-liner.
# First, we construct an object that maps each distinct certname to `{certname: <certname>}`. We name it $init.
(map({key:.certname, value: {certname}}) | unique | from_entries) as $init |
# Next, we take each object of the input in turn (naming it $attr) and assign its
# name/value pair into the matching per-certname object.
# $init is the dictionary built above.
# reduce passes the current dictionary as `.` on each step, and the assignment
# expression returns the updated dictionary.
reduce .[] as $attr ($init; .[$attr.certname][$attr.name] = $attr.value) |
# Our initial dictionary has now been expanded with attributes.
# Map it back to an array of objects. .[] is a stream of objects,
# we capture that in an outer array.
[.[]]
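Stripped of the comments and newlines, the script can be run as a one-liner (input.json is a placeholder for your file) and produces exactly the array shown in the question:
jq '(map({key:.certname, value: {certname}}) | unique | from_entries) as $init | reduce .[] as $attr ($init; .[$attr.certname][$attr.name] = $attr.value) | [.[]]' input.json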

Related

Create merged JSON array from multiple files using jq

I have multiple JSON files one.json, two.json, three.json in the format below, and I want to create a consolidated array from them using jq. From all the files I want to extract the Name and Value fields inside Parameters and build an array in which the id field is taken from Name and the value field is taken from Value.
input:
one.json:
{
"Parameters": [
{
"Name": "id1",
"Value": "one",
"Version": 2,
"LastModifiedDate": 1581663187.36
}
]
}
two.json:
{
"Parameters": [
{
"Name": "id2",
"Value": "xyz",
"Version": 2,
"LastModifiedDate": 1581663187.36
}
]
}
three.json:
{
"Parameters": [
{
"Name": "id3",
"Value": "xyz",
"Version": 2,
"LastModifiedDate": 1581663187.36
}
]
}
output:
[
{
"id": "id1",
"value": "one"
},
{
"id": "id2",
"value": "xyz"
},
{
"id": "id3",
"value": "xyz"
}
]
How can I achieve this using jq?
Instead of slurping the whole input into memory (-s), you can use a reduce expression that iterates over the file contents and appends the required fields one object at a time.
jq -n 'reduce inputs.Parameters[] as $d (.; . + [ { id: $d.Name, value: $d.Value } ])' one.json two.json three.json
The -n flag ensures that we construct the output JSON from scratch, with the file contents made available through the inputs builtin. Since reduce works iteratively, for each object in the input we append the desired key/value pair, building up the final array.
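If you prefer to avoid reduce, the same result can be obtained by collecting the stream directly into an array; a minimal equivalent sketch over the same three files:
jq -n '[inputs.Parameters[] | {id: .Name, value: .Value}]' one.json two.json three.json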

jq find the max in quoted values

Here is my JSON file test.json:
[
{
"name": "nodejs",
"version": "0.1.21",
"apiVersion": "v1"
},
{
"name": "nodejs",
"version": "0.1.20",
"apiVersion": "v1"
},
{
"name": "nodejs",
"version": "0.1.11",
"apiVersion": "v1"
},
{
"name": "nodejs",
"version": "0.1.9",
"apiVersion": "v1"
},
{
"name": "nodejs",
"version": "0.1.8",
"apiVersion": "v1"
}
]
When I use max_by, jq returns 0.1.9 instead of 0.1.21, probably because the values are quoted strings:
cat test.json | jq 'max_by(.version)'
{
"name": "nodejs",
"version": "0.1.9",
"apiVersion": "v1"
}
How can I get the element with version=0.1.21?
Semantic version comparison is not supported out of the box in jq. You need to split the string on "." and work with the resulting fields:
jq 'sort_by(.version | split(".") | map(tonumber))[-1]'
The split(".") takes the string from .version and creates an array of fields, i.e. 0.1.21 becomes ["0", "1", "21"], and map(tonumber) converts those string elements into numbers.
The sort_by() function then compares those arrays element by element and sorts in ascending order, leaving the object containing version 0.1.21 last. The [-1] index retrieves that last object from the sorted array.
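For reference, running that filter against the sample file returns the expected object:
jq 'sort_by(.version | split(".") | map(tonumber))[-1]' test.json
{
"name": "nodejs",
"version": "0.1.21",
"apiVersion": "v1"
}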
Here's an adaptation of the more general answer using jq at
How to sort Artifactory package search result by version number with JFrog CLI?
def parse:
[splits("[-.]")]
| map(tonumber? // .) ;
max_by(.version|parse)
As a less robust one-liner:
max_by(.version | [splits("[.]")] | map(tonumber))
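Pasting the defs inline, the adaptation can be checked against the same sample file and likewise selects the 0.1.21 object:
jq 'def parse: [splits("[-.]")] | map(tonumber? // .); max_by(.version | parse)' test.json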

Remove matching/non-matching elements of a nested array using jq

I need to split the results of a sonarqube analysis history into individual files. Assuming a starting input below,
{
"paging": {
"pageIndex": 1,
"pageSize": 100,
"total": 3
},
"measures": [
{
"metric": "coverage",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "100.0"
},
{
"date": "2018-11-21T12:22:39+0000",
"value": "100.0"
},
{
"date": "2018-11-21T13:09:02+0000",
"value": "100.0"
}
]
},
{
"metric": "bugs",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "0"
},
{
"date": "2018-11-21T12:22:39+0000",
"value": "0"
},
{
"date": "2018-11-21T13:09:02+0000",
"value": "0"
}
]
},
{
"metric": "vulnerabilities",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "0"
},
{
"date": "2018-11-21T12:22:39+0000",
"value": "0"
},
{
"date": "2018-11-21T13:09:02+0000",
"value": "0"
}
]
}
]
}
How do I use jq to clean the results so that each output file retains only the history entries for a single date? The desired output is something like this (output-20181118123808.json for the analysis done on "2018-11-18T12:37:08+0000"):
{
"paging": {
"pageIndex": 1,
"pageSize": 100,
"total": 3
},
"measures": [
{
"metric": "coverage",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "100.0"
}
]
},
{
"metric": "bugs",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "0"
}
]
},
{
"metric": "vulnerabilities",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "0"
}
]
}
]
}
I am lost on how to operate only on the sub-elements while leaving the parent structure intact. The naming of the JSON files is going to be handled outside of jq. The sample data provided will be split into 3 files; other inputs can have a variable number of entries, some with up to 10000. Thanks.
Here is a solution which uses awk to write the distinct files. The solution assumes that the dates for each measure are the same and in the same order, but imposes no limit on the number of distinct dates, or the number of distinct measures.
jq -c 'range(0; .measures[0].history|length) as $i
| (.measures[0].history[$i].date|gsub("[^0-9]";"")), # basis of filename
reduce range(0; .measures|length) as $j (.;
.measures[$j].history |= [.[$i]])' input.json |
awk -F\\t 'fn {print >> fn; fn="";next}{fn="output-" $1 ".json"}'
Comments
The choice of awk here is just for convenience.
The disadvantage of this approach is that if each file is to be neatly formatted, an additional run of a pretty-printer (such as jq) would be required for each file. Thus, if the output in each file is required to be neat, a case could be made for running jq once for each date, thus obviating the need for the post-processing (awk) step.
If the dates of the measures are not in lock-step, then the same approach as above could still be used, but of course the gathering of the dates and the corresponding measures would have to be done differently.
Output
The first two lines produced by the invocation of jq above are as follows:
"201811181237080000"
{"paging":{"pageIndex":1,"pageSize":100,"total":3},"measures":[{"metric":"coverage","history":[{"date":"2018-11-18T12:37:08+0000","value":"100.0"}]},{"metric":"bugs","history":[{"date":"2018-11-18T12:37:08+0000","value":"0"}]},{"metric":"vulnerabilities","history":[{"date":"2018-11-18T12:37:08+0000","value":"0"}]}]}
In the comments, the following addendum to the original question appeared:
is there a variation wherein the filtering is based on the date value and not the position? It is not guaranteed that the order will be the same or the number of elements in each metric is going to be the same (i.e. some dates may be missing "bugs", some might have additional metric such as "complexity").
The following will produce a stream of JSON objects, one per date. This stream can be annotated with the date as per my previous answer, which shows how to use these annotations to create the various files. For ease of understanding, we use two helper functions:
def dates:
INDEX(.measures[].history[].date; .)
| keys;
def gather($date): map(select(.date==$date));
dates[] as $date
| .measures |= map( .history |= gather($date) )
INDEX/2
If your jq does not have INDEX/2, now would be an excellent time to upgrade, but in case that's not feasible, here is its def:
def INDEX(stream; idx_expr):
reduce stream as $row ({};
.[$row|idx_expr|
if type != "string" then tojson
else .
end] |= $row);
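Putting the two answers together, here is a sketch of the full pipeline in the spirit of the earlier awk-based approach; it assumes your jq has INDEX/2 (otherwise paste the def above in front of the program) and reuses the output- filename convention:
jq -c 'def dates: INDEX(.measures[].history[].date; .) | keys;
def gather($date): map(select(.date==$date));
dates[] as $date
| ($date | gsub("[^0-9]";"")),
(.measures |= map(.history |= gather($date)))' input.json |
awk -F\\t 'fn {print >> fn; fn=""; next} {fn="output-" $1 ".json"}'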

jq - Find a JSON object based on one of its values and get another value from it

I've started using jq just very recently and I would like to know if something like this is even possible.
Example:
{
"name": "device",
"version": "1.0.0",
"address": [
{
"address": "10.1.2.3",
"interface": "wlan1_wifi"
},
{
"address": "10.1.2.5",
"interface": "wlan2_link"
},
{
"address": "10.1.2.4",
"interface": "ether1"
}
],
"wireless": [
{
"name": "wlan1_wifi",
"type": "5Ghz",
"ssid": "wifi"
},
{
"name": "wlan2_link",
"type": "2Ghz",
"ssid": "link"
}
]
}
First, let's transform the example into this JSON object:
cat json | jq '. | {"name": ."name", "version": ."version", "wireless": [."wireless"[] | {"name": ."name", "type": ."type", "ssid": ."ssid"}]}'
{
"name": "device",
"version": "1.0.0",
"wireless": [
{
"name": "wlan1_wifi",
"type": "5Ghz",
"ssid": "wifi"
},
{
"name": "wlan2_link",
"type": "2Ghz",
"ssid": "link"
}
]
}
Now there's a problem. I need to assign an address to the "wireless" array. The address is stored in "address" array.
So the question: for every object in the "wireless" array, is there a way to find the matching object in "address" (where its "interface" equals the wireless object's "name") and then assign that "address" to it?
The final result should look like this:
{
"name": "device",
"version": "1.0.0",
"wireless": [
{
"name": "wlan1_wifi",
"type": "5Ghz",
"ssid": "wifi",
"address": "10.1.2.3"
},
{
"name": "wlan2_link",
"type": "2Ghz",
"ssid": "link",
"address": "10.1.2.5"
}
]
}
Answer:
Here's my answer, based on the answer from @peak. Instead of copying the content of .wireless and then using map, I'm cherry-picking only the keys that I want to include. This also allows me to position "address" however I want.
(INDEX(.address[]; .interface)) as $dict
| {name: .name, version: .version,
wireless: [.wireless[] | {name, address: ($dict[.name]|.address), type, ssid}]}
The following produces the output as originally requested:
(.wireless[].name) as $name
| .address[]
| select(.interface == $name)
| { wireless: {name: $name, address}}
However the above filter could potentially produce more than one result, so you might want to make modifications accordingly.
Revised revised requirements
If your jq has INDEX/2 (which was only made available AFTER jq 1.5 was released), you can simply use it to create a lookup table:
(INDEX(.address[]; .interface)) as $dict
| {name,
version,
wireless: (.wireless
| map(. + {address: ($dict[.name]|.address) }) ) }
Or (depending perhaps on the exact requirements):
(INDEX(.address[]; .interface)) as $dict
| del(.address)
| .wireless |= map(. + {address: ($dict[.name]|.address) })
If your jq does not have INDEX/2, then you could easily adapt the above (using reduce), or even more easily snarf the def of INDEX/2 from https://github.com/stedolan/jq/blob/master/src/builtin.jq
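For instance, a reduce-based adaptation might look like this (a sketch of the same lookup-table idea, building $dict by hand):
(reduce .address[] as $a ({}; .[$a.interface] = $a)) as $dict
| del(.address)
| .wireless |= map(. + {address: ($dict[.name] | .address)})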

parsing JSON with jq to return value of element where another element has a certain value

I have some JSON output I am trying to parse with jq. I read some examples on filtering, but I don't really understand them, and my output is more complicated than the examples. I have no idea where to begin beyond jq '.[]', as I don't understand the syntax of jq past that, and the hierarchy and terminology are challenging as well. My JSON output is below. I want to return the value of Valid where the ItemName equals Item_2. How can I do this?
"1"
[
{
"GroupId": "1569",
"Title": "My_title",
"Logo": "logo.jpg",
"Tags": [
"tag1",
"tag2",
"tag3"
],
"Owner": [
{
"Name": "John Doe",
"Id": "53335"
}
],
"ItemId": "209766",
"Item": [
{
"Id": 47744,
"ItemName": "Item_1",
"Valid": false
},
{
"Id": 47872,
"ItemName": "Item_2",
"Valid": true
},
{
"Id": 47872,
"ItemName": "Item_3",
"Valid": false
}
]
}
]
"Browse"
"8fj9438jgge9hdfv0jj0en34ijnd9nnf"
"v9er84n9ogjuwheofn9gerinneorheoj"
Except for the initial and trailing JSON scalars, you'd simply write:
.[] | .Item[] | select( .ItemName == "Item_2" ) | .Valid
In your particular case, to ensure the top-level JSON scalars are ignored, you could prefix the above with:
arrays |
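Putting the prefix and the filter together (input.json stands in for the pasted output above), the combined command prints the requested value:
jq 'arrays | .[] | .Item[] | select(.ItemName == "Item_2") | .Valid' input.json
true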