I have multiple JSON files, one.json, two.json, and three.json, in the format below, and I want to use jq to build a consolidated array from them. From each file I want to extract the Name and Value fields inside Parameters and build an array of objects whose id is taken from Name and whose value is taken from Value.
input:
one.json:
{
"Parameters": [
{
"Name": "id1",
"Value": "one",
"Version": 2,
"LastModifiedDate": 1581663187.36
}
]
}
two.json:
{
"Parameters": [
{
"Name": "id2",
"Value": "xyz",
"Version": 2,
"LastModifiedDate": 1581663187.36
}
]
}
three.json:
{
"Parameters": [
{
"Name": "id3",
"Value": "xyz",
"Version": 2,
"LastModifiedDate": 1581663187.36
}
]
}
output:
[
{
"id": "id1",
"value": "one"
},
{
"id": "id2",
"value": "xyz"
},
{
"id": "id3",
"value": "xyz"
}
]
How can I achieve this using jq?
You can use a reduce expression instead of slurping the whole file into memory (-s): iterate over the input file contents and append the required fields one object at a time.
jq -n 'reduce inputs.Parameters[] as $d (.; . + [ { id: $d.Name, value: $d.Value } ])' one.json two.json three.json
The -n flag ensures that we construct the output JSON from scratch, with the input file contents made available through the inputs builtin. Since reduce works iteratively, for each object in the input it appends one {id, value} pair to the final array.
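If you do prefer slurping, an equivalent one-liner (a sketch, not from the original answer) collects all inputs into one array with -s and maps over it:
jq -s 'map(.Parameters[] | {id: .Name, value: .Value})' one.json two.json three.json
Since map(f) collects every output of f, the inner .Parameters[] iteration flattens the parameters of all files into a single array.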
Related
How would one transform a JSON object into several derived JSON objects with jq?
Example input:
[
{
"id": 1,
"a": "value-in-a",
"b": "value-in-b"
},
{
"id": 2,
"c": "value-in-c"
}
]
Expected output:
[
{
"id": "1",
"value": "value-in-a"
},
{
"id": "1",
"value": "value-in-b"
},
{
"id": "2",
"value": "value-in-c"
}
]
Here the output is an array with 3 elements. The first two are derived from the first element of the input array; the third comes from the second element.
I assume achieving this will take several steps:
a) Construct two objects from a single JSON object input. I assume this can be done using variables: first assign the input object to a variable, then construct an object from value a and another from value b. I am not sure how to make jq return several constructed JSON objects.
b) Conditionals will be needed to avoid producing an object when a, b, or c is missing. This can probably be done using the 'alternative' operator (//) or if-then-else.
You can iterate over the other keys using del and keys_unsorted:
jq 'map({id, value: (del(.id) | .[keys_unsorted[]])})'
[
{
"id": 1,
"value": "value-in-a"
},
{
"id": 1,
"value": "value-in-b"
},
{
"id": 2,
"value": "value-in-c"
}
]
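An alternative along the same lines (a sketch, not part of the original answer) uses to_entries on the object minus its id; adding tostring also matches the string ids in the expected output:
jq 'map(. as $o | del(.id) | to_entries[] | {id: ($o.id | tostring), value: .value})'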
I have a JSON-formatted overview of backups, generated using pgbackrest. For simplicity I removed a lot of clutter so that only the main structures remain. The list can contain multiple backup structures; I reduced it here to just one for simplicity.
[
{
"backup": [
{
"archive": {
"start": "000000090000000200000075",
"stop": "000000090000000200000075"
},
"info": {
"size": 1200934840
},
"label": "20220103-122051F",
"type": "full"
},
{
"archive": {
"start": "00000009000000020000007D",
"stop": "00000009000000020000007D"
},
"info": {
"size": 1168586300
},
"label": "20220103-153304F_20220104-081304I",
"type": "incr"
}
],
"name": "dbname1"
}
]
Using jq I tried to generate a simpler format out of this, so far without any luck.
What I would like to see is backup.archive, backup.info, backup.label, backup.type, and name combined in one simple structure, without getting into a Cartesian product. I would be very happy to get the following output:
[
{
"backup": [
{
"archive": {
"start": "000000090000000200000075",
"stop": "000000090000000200000075"
},
"name": "dbname1",
"info": {
"size": 1200934840
},
"label": "20220103-122051F",
"type": "full"
},
{
"archive": {
"start": "00000009000000020000007D",
"stop": "00000009000000020000007D"
},
"name": "dbname1",
"info": {
"size": 1168586300
},
"label": "20220103-153304F_20220104-081304I",
"type": "incr"
}
]
}
]
where name is redundantly added to the list. How can I use jq to convert the shown input to the requested output? In the end I just want to generate a simple CSV from the data. Even with the simplified structure, using
'.[].backup[].name + ":" + .[].backup[].type'
I get a Cartesian product:
"dbname1:full"
"dbname1:full"
"dbname1:incr"
"dbname1:incr"
How can I solve that?
So, for each object in the top-level array, you want to pull .name into each element of its .backup array, right? Then try
jq 'map(.backup[] += {name} | del(.name))'
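If you prefer building the result explicitly rather than updating in place, an equivalent formulation (a sketch, not from the original answer) is:
jq '[.[] | {backup: [.backup[] + {name}]}]'
Both .backup[] and {name} are evaluated against the same outer object, so each backup element gets merged with its parent's name.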
Then, generating CSV output with jq is easy: there is a builtin called @csv which transforms an array into a string of its values, quoted (if they are stringy) and separated by commas. So all you need to do is compose your desired values into arrays, one per record. At this point, removing .name is no longer necessary, as we are piecing the array together for CSV output anyway. We also give jq the -r flag to make the output raw text rather than JSON.
jq -r '.[]
| .backup[] + {name}
| [(.archive | .start, .stop), .name, .info.size, .label, .type]
| @csv
'
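On the sample input, this should print something like the following (derived by hand from @csv's quoting rules: strings quoted, numbers bare):
"000000090000000200000075","000000090000000200000075","dbname1",1200934840,"20220103-122051F","full"
"00000009000000020000007D","00000009000000020000007D","dbname1",1168586300,"20220103-153304F_20220104-081304I","incr"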
First navigate to backup and only then “print” the stuff you're interested in.
.[].backup[] | .name + ":" + .type
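If you are starting from the original input instead, where name lives outside the backup array, binding it to a variable avoids the Cartesian product (a sketch against the input shown at the top of the question):
jq -r '.[] | .name as $n | .backup[] | $n + ":" + .type'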
I have the following file, "Pokemon.json"; it's a stripped-down list of Pokémon, listing their Pokédex ID, name, and an array of type objects.
[{
"name": "onix",
"id": 95,
"types": [{
"slot": 2,
"type": {
"name": "ground"
}
},
{
"slot": 1,
"type": {
"name": "rock"
}
}
]
}, {
"name": "drowzee",
"id": 96,
"types": [{
"slot": 1,
"type": {
"name": "psychic"
}
}]
}]
The output I'm trying to achieve extracts the name value from each type object and collects them into an array.
I can easily get a flat list of all the type names with
jq -r '.pokemon[].types[].type.name' pokemon.json
But I'm missing the key part needed to transform the name fields into their own array:
[ {
"name": "onix",
"id": 95,
"types": [ "rock", "ground" ]
}, {
"name": "drowzee",
"id": 96,
"types": [ "psychic" ]
} ]
Any help appreciated, thank you!
The manual states that you have the option to use map, which essentially walks over each result and returns something (in our case the same data, constructed differently). This means that for each row we create a new object and put some values inside it. Note that you need another iteration within, since we want one object per row: we simply need to map the values differently from the way they are constructed right now.
So the solution might look like this:
jq -r '.pokemon[]|{name:.name, id:.id, types:.types|map(.type.name)}' pokemon.json
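Note that map(.type.name) keeps the order of the input array, so onix would come out as ["ground", "rock"]. If the slot order from your expected output matters, sorting first should do it; a minimal sketch, assuming the slot numbers define the desired order:
jq '.pokemon[] | {name, id, types: (.types | sort_by(.slot) | map(.type.name))}' pokemon.json
This also uses the {name, id} shorthand for {name: .name, id: .id}.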
I need to split the results of a SonarQube analysis history into individual files. Assuming the starting input below,
{
"paging": {
"pageIndex": 1,
"pageSize": 100,
"total": 3
},
"measures": [
{
"metric": "coverage",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "100.0"
},
{
"date": "2018-11-21T12:22:39+0000",
"value": "100.0"
},
{
"date": "2018-11-21T13:09:02+0000",
"value": "100.0"
}
]
},
{
"metric": "bugs",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "0"
},
{
"date": "2018-11-21T12:22:39+0000",
"value": "0"
},
{
"date": "2018-11-21T13:09:02+0000",
"value": "0"
}
]
},
{
"metric": "vulnerabilities",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "0"
},
{
"date": "2018-11-21T12:22:39+0000",
"value": "0"
},
{
"date": "2018-11-21T13:09:02+0000",
"value": "0"
}
]
}
]
}
How do I use jq to clean the results so it only retains the history array entries for each element? The desired output is something like this (output-20181118123708.json for the analysis done on "2018-11-18T12:37:08+0000"):
{
"paging": {
"pageIndex": 1,
"pageSize": 100,
"total": 3
},
"measures": [
{
"metric": "coverage",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "100.0"
}
]
},
{
"metric": "bugs",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "0"
}
]
},
{
"metric": "vulnerabilities",
"history": [
{
"date": "2018-11-18T12:37:08+0000",
"value": "0"
}
]
}
]
}
I am lost on how to operate only on the sub-elements while leaving the parent structure intact. The naming of the JSON files is going to be handled outside of the jq utility. The sample data provided will be split into 3 files. Other inputs can have a variable number of entries; some may have up to 10000. Thanks.
Here is a solution which uses awk to write the distinct files. The solution assumes that the dates for each measure are the same and in the same order, but imposes no limit on the number of distinct dates, or the number of distinct measures.
jq -c 'range(0; .measures[0].history|length) as $i
| (.measures[0].history[$i].date|gsub("[^0-9]";"")), # basis of filename
reduce range(0; .measures|length) as $j (.;
.measures[$j].history |= [.[$i]])' input.json |
awk -F\\t 'fn {print >> fn; fn="";next}{fn="output-" $1 ".json"}'
Comments
The choice of awk here is just for convenience.
The disadvantage of this approach is that if each file is to be neatly formatted, an additional run of a pretty-printer (such as jq) would be required for each file. Thus, if the output in each file is required to be neat, a case could be made for running jq once for each date, thus obviating the need for the post-processing (awk) step.
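For the record, that per-date variant might look like the following (a hedged sketch, assuming bash and that the data lives in input.json; jq's default pretty-printing then makes each file neat without any post-processing):
for d in $(jq -r '.measures[0].history[].date' input.json); do
  jq --arg d "$d" '.measures[].history |= map(select(.date == $d))' input.json \
    > "output-$(tr -dc '0-9' <<< "$d").json"
done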
If the dates of the measures are not in lock-step, then the same approach as above could still be used, but of course the gathering of the dates and the corresponding measures would have to be done differently.
Output
The first two lines produced by the invocation of jq above are as follows:
"201811181237080000"
{"paging":{"pageIndex":1,"pageSize":100,"total":3},"measures":[{"metric":"coverage","history":[{"date":"2018-11-18T12:37:08+0000","value":"100.0"}]},{"metric":"bugs","history":[{"date":"2018-11-18T12:37:08+0000","value":"0"}]},{"metric":"vulnerabilities","history":[{"date":"2018-11-18T12:37:08+0000","value":"0"}]}]}
In the comments, the following addendum to the original question appeared:
is there a variation wherein the filtering is based on the date value and not the position? It is not guaranteed that the order will be the same or the number of elements in each metric is going to be the same (i.e. some dates may be missing "bugs", some might have additional metric such as "complexity").
The following will produce a stream of JSON objects, one per date. This stream can be annotated with the date as per my previous answer, which shows how to use these annotations to create the various files. For ease of understanding, we use two helper functions:
def dates:
INDEX(.measures[].history[].date; .)
| keys;
def gather($date): map(select(.date==$date));
dates[] as $date
| .measures |= map( .history |= gather($date) )
INDEX/2
If your jq does not have INDEX/2, now would be an excellent time to upgrade, but in case that's not feasible, here is its def:
def INDEX(stream; idx_expr):
reduce stream as $row ({};
.[$row|idx_expr|
if type != "string" then tojson
else .
end] |= $row);
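Putting the pieces together with the filename annotation from the first solution (a sketch; the resulting stream can be fed to the same awk script as above):
jq -c '
  def dates: INDEX(.measures[].history[].date; .) | keys;
  def gather($date): map(select(.date == $date));
  dates[] as $date
  | ($date | gsub("[^0-9]"; "")),
    (.measures |= map(.history |= gather($date)))
' input.json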
I have some JSON output I am trying to parse with jq. I read some examples on filtering, but I don't really understand them, and my output is more complicated than the examples. I have no idea where to begin beyond jq '.[]': I don't understand jq's syntax past that, and the hierarchy and terminology are challenging as well. My JSON output is below. I want to return the value of Valid where ItemName equals Item_2. How can I do this?
"1"
[
{
"GroupId": "1569",
"Title": "My_title",
"Logo": "logo.jpg",
"Tags": [
"tag1",
"tag2",
"tag3"
],
"Owner": [
{
"Name": "John Doe",
"Id": "53335"
}
],
"ItemId": "209766",
"Item": [
{
"Id": 47744,
"ItemName": "Item_1",
"Valid": false
},
{
"Id": 47872,
"ItemName": "Item_2",
"Valid": true
},
{
"Id": 47872,
"ItemName": "Item_3",
"Valid": false
}
]
}
]
"Browse"
"8fj9438jgge9hdfv0jj0en34ijnd9nnf"
"v9er84n9ogjuwheofn9gerinneorheoj"
Except for the initial and trailing JSON scalars, you'd simply write:
.[] | .Item[] | select( .ItemName == "Item_2" ) | .Valid
In your particular case, to ensure the top-level JSON scalars are ignored, you could prefix the above with:
arrays |
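As a complete invocation (a sketch, assuming the stream is saved in input.json):
jq 'arrays | .[] | .Item[] | select(.ItemName == "Item_2") | .Valid' input.json
The arrays builtin passes through only those top-level inputs that are arrays, so the surrounding scalars ("1", "Browse", and the two hash-like strings) are silently dropped, and the command prints true.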