I'm trying to transform the following JSON:
{ "application" : [
{ "name" : "app1",
"policies" : [
{ "name" : "pol_1",
"orderNumber" : "10"
},
{ "name" : "pol_2",
"orderNumber" : "20"
}
]
},
{ "name" : "app2",
"policies" : [
{ "name" : "pol_A",
"orderNumber" : "10"
},
{ "name" : "pol_B",
"orderNumber" : "20"
}
]
}
]
}
to the following:
{ "pol_1":"10", "pol_2":"20" }
Using
jq -r ".application[] | select(.name==\"app1\") | .policies[] | {(.name) : .orderNumber}"
I was able to get
{
"pol_1":"10"
}
{
"pol_2":"20"
}
Any idea how I can merge them? Am I missing something, or am I doing it the wrong way?
You were almost there. Use map to create a single array instead of two independent objects, then use add to merge its contents.
jq '.application[]
| select(.name == "app1")
| .policies
| map({ (.name) : .orderNumber } )
| add' file.json
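Applied to the sample input, this produces the single merged object you were after:
{
  "pol_1": "10",
  "pol_2": "20"
}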
Related
I would like to create a JSON array of nested objects, grouped by different fields.
This is the CSV, and I would like to group it by sid, year and quarter (the first three fields):
S4446B3,2020,202001,2,345.45
S4446B3,2020,202001,4,24.44
S4446B3,2021,202102,5,314.55
S6506LK,2020,202002,3,376.55
S6506LK,2020,202003,3,76.23
After splitting the CSV with the following, I get an object for each record:
split("\n")
| map(split(","))
| .[0:]
| map({"sid" : .[0], "year" : .[1], "quarter" : .[2], "customer_type" : .[3], "obj" : .[4]})
But for each sid I would like to get an array of objects nested like this:
[
{
"sid" : "S4446B3",
"years" : [
{
"year" : 2020,
"quarters" : [
{
"quarter" : 202001,
"customer_type" : [
{
"type" : 2,
"obj" : "345.45"
},
{
"type" : 4,
"obj" : "24.44"
}
]
}
]
},
{
"year" : 2021,
"quarters" : [
{
"quarter" : 202102,
"customer_type" : [
{
"type" : 5,
"obj" : "314.55"
}
]
}
]
}
]
},
{
"sid" : "S6506LK",
"years" : [
{
"year" : 2020,
"quarters" : [
{
"quarter" : 202002,
"customer_type" : [
{
"type" : 3,
"obj" : "376.55"
}
]
},
{
"quarter" : 202003,
"customer_type" : [
{
"type" : 3,
"obj" : "76.23"
}
]
}
]
}
]
}
]
It'd be more intuitive if sid, year, quarter, etc. were key names. With the -R/--raw-input and -n/--null-input options on the command line, this will do that:
reduce (inputs / ",")
as [$sid, $year, $quarter, $type, $obj]
(.; .[$sid][$year][$quarter] += [{$type, $obj}])
And to get your expected output, you can append these lines to the above program:
| .[][] |= (to_entries | map({quarter: .key, customer_type: .value}))
| .[] |= (to_entries | map({year: .key, quarters: .value}))
| . |= (to_entries | map({sid: .key, years: .value}))
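Assuming the reduce program together with the three appended lines is saved as program.jq and the CSV as input.csv (both file names are placeholders), the invocation would be:
jq -R -n -f program.jq input.csv
For reference, the intermediate structure built by the reduce step alone looks like this (every value stays a string, since it comes from raw CSV input; matching the numeric year, quarter and type of your expected output would require additional tonumber conversions):
{
  "S4446B3": {
    "2020": {
      "202001": [
        { "type": "2", "obj": "345.45" },
        { "type": "4", "obj": "24.44" }
      ]
    },
    "2021": {
      "202102": [
        { "type": "5", "obj": "314.55" }
      ]
    }
  },
  "S6506LK": {
    "2020": {
      "202002": [
        { "type": "3", "obj": "376.55" }
      ],
      "202003": [
        { "type": "3", "obj": "76.23" }
      ]
    }
  }
}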
I am working with a JSON file similar to the one below:
{ "Response" : {
"TimeUnit" : [ 1576126800000 ],
"metaData" : {
"errors" : [ ],
"notices" : [ "query served by:1"]
},
"stats" : {
"data" : [ {
"identifier" : {
"names" : [ "apiproxy", "response_status_code", "target_response_code", "target_ip" ],
"values" : [ "IO", "502", "502", "7.1.143.6" ]
},
"metric" : [ {
"env" : "dev",
"name" : "sum(message_count)",
"values" : [ 0.0]
} ]
} ]
} } }
My objective is to display a mapping of the identifier names and values, like:
apiproxy=IO
response_status_code=502
target_response_code=502
target_ip=7.1.143.6
I have been able to parse both names and values with these two filters:
.[].stats.data[] | .identifier.names[]
.[].stats.data[] | .identifier.values[]
but I need help with the jq way to map the values.
The whole thing can be done in jq using the -r command-line option:
.[].stats.data[]
| [.identifier.names, .identifier.values]
| transpose[]
| "\(.[0])=\(.[1])"
I'm trying to use jq to find the most recent artifact in a Nexus API query. Right now, my JSON output looks something like:
{
"items" : [ {
"downloadUrl" : "https://nexus.ama.org/repository/Snapshots/org/sso/browser-manager/1.0-SNAPSHOT/browser-manager-1.0-20180703.144121-1.jar",
"path" : "org/sso/browser-manager/1.0-SNAPSHOT/browser-manager-1.0-20180703.144121-1.jar",
"id" : "V0FEQS1TbmFwc2hvdHM6MzhjZDQ3NTQwMTBkNGJhOTY1N2JiOTEyMTM1ZGRjZWQ",
"repository" : "Snapshots",
"format" : "maven2",
"checksum" : {
"sha1" : "7ac324905fb1ff15ef6020f256fcb5c9f54113ca",
"md5" : "bb25c483a183001dfdc58c07a71a98ed"
}
}, {
"downloadUrl" : "https://nexus.ama.org/repository/Snapshots/org/sso/browser-manager/1.0-SNAPSHOT/browser-manager-1.0-20180703.204941-2.jar",
"path" : "org/sso/browser-manager/1.0-SNAPSHOT/browser-manager-1.0-20180703.204941-2.jar",
"id" : "V0FEQS1TbmFwc2hvdHM6MzhjZDQ3NTQwMTBkNGJhOWM4YjQ0NmRjYzFkODkxM2U",
"repository" : "Snapshots",
"format" : "maven2",
"checksum" : {
"sha1" : "b4ba2049ea828391c720f49b6668a66a8b0bca9c",
"md5" : "6757c55c0e6d933dc90e398204cca966"
}
} ],
"continuationToken" : null
}
I've managed to use jq to repackage the data as:
.items[] | { "id" : .id, "date" : (.path | scan("[0-9]{8}\\.[0-9-]*")) }
output:
{
"id": "V0FEQS1TbmFwc2hvdHM6MzhjZDQ3NTQwMTBkNGJhOTY1N2JiOTEyMTM1ZGRjZWQ",
"date": "20180703.144121-1"
}
{
"id": "V0FEQS1TbmFwc2hvdHM6MzhjZDQ3NTQwMTBkNGJhOWM4YjQ0NmRjYzFkODkxM2U",
"date": "20180703.204941-2"
}
Now I'm a little stuck trying to figure out which of the two JSON objects is the most recent. How can I sort by date and extract the id for that object?
Is there a better way to filter/sort this data? My example has only 2 items[] in the JSON response, but there may be a larger number of them.
The filter sort_by/1 will sort your timestamps in chronological order (the extracted date strings are fixed-width and zero-padded, so lexicographic order coincides with chronological order), but it requires an array as input, so you could write:
.items
| map({ "id" : .id, "date" : (.path | scan("[0-9]{8}\\.[0-9-]*")) })
| sort_by(.date)
| .[-1]
The trailing .[-1] selects the last item, so with your input the result would be:
{
"id": "V0FEQS1TbmFwc2hvdHM6MzhjZDQ3NTQwMTBkNGJhOWM4YjQ0NmRjYzFkODkxM2U",
"date": "20180703.204941-2"
}
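If you only need the id itself, append .id to the final path expression:
.items
| map({ "id" : .id, "date" : (.path | scan("[0-9]{8}\\.[0-9-]*")) })
| sort_by(.date)
| .[-1].id
Run with the -r option, this prints the bare id string rather than a quoted JSON value.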
I have a JSON structure and would like to replace strings in 2 fields with values from a separate text file.
Here is the JSON file with 2 records:
{
"events" : {
"-KKQQIUR7FAVxBOPOFhr" : {
"dateAdded" : 1487592568926,
"owner" : "62e6aaa0-a50c-4448-a381-f02efde2316d",
"type" : "boycott"
},
"-KKjjM-pAXvTuEjDjoj_" : {
"dateAdded" : 1487933370561,
"owner" : "62e6aaa0-a50c-4448-a381-f02efde2316d",
"type" : "boycott"
}
},
"geo" : {
"-KKQQIUR7FAVxBOPOFhr" : {
".priority" : "qw3yttz1k9",
"g" : "qw3yttz1k9",
"l" : [ 40.762632, -73.973837 ]
},
"-KKjjM-pAXvTuEjDjoj_" : {
".priority" : "qw3yttx6bv",
"g" : "qw3yttx6bv",
"l" : [ 41.889019, -87.626291 ]
}
},
"log" : "null",
"users" : {
"62e6aaa0-a50c-4448-a381-f02efde2316d" : {
"events" : {
"-KKQQIUR7FAVxBOPOFhr" : {
"type" : "boycott"
},
"-KKjjM-pAXvTuEjDjoj_" : {
"type" : "boycott"
}
}
}
}
}
And here is the text file that I want to substitute in:
49.287130, -123.124026
36.129770, -115.172811
There are lots more records but I kept this to 2 for brevity.
Any help would be appreciated. Thank you.
The problem description seems to assume that the ordering of the key-value pairs within a JSON object is fixed. Different JSON-oriented tools (and indeed different versions of jq) have different takes on this. In any case, the following assumes a version of jq that respects the ordering (e.g. jq 1.5); it also assumes that inputs is available, though that is inessential.
The key to the following solution is the helper function, map_nth_value/2, which modifies the value of the nth key in a JSON object:
def map_nth_value(n; filter):
to_entries
| (.[n] |= {"key": .key, "value": (.value | filter)} )
| from_entries ;
[inputs | select(length > 0) | split(",") | map(tonumber)] as $lists
| reduce range(0; $lists|length) as $i
( $object;
.geo |= map_nth_value($i; .l = $lists[$i] ) )
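Before wiring it into the full program, it may help to see map_nth_value in isolation. Here is a minimal standalone example (the sample object is made up) that applies a filter to the value of the second key:
jq -n 'def map_nth_value(n; filter): to_entries | (.[n] |= {"key": .key, "value": (.value | filter)}) | from_entries; {"a": 1, "b": 2} | map_nth_value(1; . * 10)'
which yields {"a": 1, "b": 20}.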
With the above jq program in a file (say program.jq), the text lines in a file (say input.txt), and the JSON object in a file (say object.json), the following invocation:
jq -R -n --argfile object object.json -f program.jq input.txt
produces:
{
"events": {
"-KKQQIUR7FAVxBOPOFhr": {
"dateAdded": 1487592568926,
"owner": "62e6aaa0-a50c-4448-a381-f02efde2316d",
"type": "boycott"
},
"-KKjjM-pAXvTuEjDjoj_": {
"dateAdded": 1487933370561,
"owner": "62e6aaa0-a50c-4448-a381-f02efde2316d",
"type": "boycott"
}
},
"geo": {
"-KKQQIUR7FAVxBOPOFhr": {
".priority": "qw3yttz1k9",
"g": "qw3yttz1k9",
"l": [
49.28713,
-123.124026
]
},
"-KKjjM-pAXvTuEjDjoj_": {
".priority": "qw3yttx6bv",
"g": "qw3yttx6bv",
"l": [
36.12977,
-115.172811
]
}
},
"log": "null",
"users": {
"62e6aaa0-a50c-4448-a381-f02efde2316d": {
"events": {
"-KKQQIUR7FAVxBOPOFhr": {
"type": "boycott"
},
"-KKjjM-pAXvTuEjDjoj_": {
"type": "boycott"
}
}
}
}
}
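Note that --argfile has since been deprecated; on newer jq releases you can use --slurpfile object object.json instead, in which case $object is bound to an array and the program would refer to $object[0].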
I have a MongoDB collection which contains data elements like this:
{
"_id" : "9878jr23geg",
"element" : {
"name" : "element7",
"Set" : [
{
"SubListA" : [
{
"name" : "AlbertEinstein",
"value" : "45"
},
{
"name" : "JohnDoe",
"value" : "34"
}
]
},
{
"MoreNames" : [
{
"name" : "TimMcGraw",
"value" : "39"
}
]
}
]
}
}
{
"_id" : "275678hfvd",
"element" : {
"name" : "element8",
"Set" : [
{
"SubListA" : [
{
"name" : "AlbertEinstein",
"value" : "45"
},
{
"name" : "JimmyKimmel",
"value" : "41"
}
]
}
]
}
}
I'm trying to count the occurrences of each unique name, grouped by the element of Set to which they belong. For example, both objects in my example above have an object with name: "AlbertEinstein" inside element.Set.SubListA; therefore I'd expect a return value something along the lines of:
element.Set.SubListA.AlbertEinstein | 2
Essentially, I'd like a count for each of the distinct names when the data is grouped by objects within element.Set.
Ideally, for the example given, I'd like all of:
element.Set.SubListA.AlbertEinstein | 2
element.Set.SubListA.JohnDoe | 1
element.Set.MoreNames.TimMcGraw | 1
element.Set.SubListA.JimmyKimmel | 1
I've tried several aggregate queries but none seems to achieve what I'm trying to do.
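A MongoDB aggregation pipeline is one route, but since the data is JSON, here is a jq sketch in the spirit of the answers above. It assumes the documents are exported as a plain JSON stream (say to elements.json; the file name is a placeholder) and that each member of element.Set holds a single key naming the sublist:
jq -rn '[inputs
         | .element.Set[]
         | to_entries[]
         | .key as $sublist
         | .value[]
         | "element.Set.\($sublist).\(.name)"]
        | group_by(.)
        | map("\(.[0]) | \(length)")
        | .[]' elements.json
With the two sample documents this prints:
element.Set.MoreNames.TimMcGraw | 1
element.Set.SubListA.AlbertEinstein | 2
element.Set.SubListA.JimmyKimmel | 1
element.Set.SubListA.JohnDoe | 1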