Parsing JSON format with jq

I need to parse the output from lsblk. Since I am doing this from within a script I need the output in a standardized format. Therefore I chose the JSON format as output. Here is the command with some sample output:
# lsblk -o NAME,MOUNTPOINT -J
{
  "blockdevices": [
    {"name": "sda", "mountpoint": null,
      "children": [
        {"name": "sda1", "mountpoint": "/sda1/mountpoint"},
        {"name": "sda2", "mountpoint": null,
          "children": [
            {"name": "sda2_mapper", "mountpoint": "/sda2/mountpoint"}
          ]
        },
        {"name": "sda3", "mountpoint": null},
        {"name": "sda4", "mountpoint": null}
      ]
    },
    {"name": "sdb", "mountpoint": null,
      "children": [
        {"name": "sdb1", "mountpoint": "/sdb1/mountpoint"},
        {"name": "sdb2", "mountpoint": null}
      ]
    },
    {"name": "sdc", "mountpoint": null}
  ]
}
I want to extract the names of all innermost nodes, i.e., the names of all nodes that do not have children. The desired output for the above sample would be:
sda1
sda2_mapper
sda3
sda4
sdb1
sdb2
sdc
My tool of choice is jq, which I have only recently discovered. I have tried
# jq '.blockdevices[].children[]?.name?'
But this only filters the first level of names. I also tried with
# jq 'recurse(.name?)'
but this returns the entire file.
Is there a way to return only nodes that do not have children, no matter how deep they are nested?
PS: I am capable of implementing the requirement in bash or awk. I would, however, prefer a solution with a tool like jq, whose specific purpose is to parse JSON files.

I don't think this is the simplest way to do it, but it seems to work:
$ jq -r '.blockdevices[] | .. | objects | select(has("children")|not)| .name' tmp.json
sda1
sda2_mapper
sda3
sda4
sdb1
sdb2
sdc
It recursively outputs every value found in the JSON, first filtering out anything that is not an object, then any object that has a children key. Finally, it selects the name value from each remaining object.
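Since .. already recurses from the root, the leading .blockdevices[] can also be dropped; a sketch along the same lines (the trailing strings filter drops the null that the top-level object, which has no name field, would otherwise contribute):
$ jq -r '.. | objects | select(has("children") | not) | .name | strings' tmp.json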

With your JSON input, the following command:
jq '.. | scalars'
emits the "leaves", beginning:
"sda"
"sda1"
"/sda1/mountpoint"
Use the -r (raw output) to strip the quotation marks from strings.
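To keep only the non-null leaves, values (which selects non-null values) can be appended; a small sketch, reusing the tmp.json file from the previous answer:
$ jq -r '.. | scalars | values' tmp.json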

Related

Merging multiple JSON Lines files into a single JSON object

I'm trying to merge / reduce many JSON objects and somehow I'm not getting the expected result.
I'm only interested in getting all keys; the values and the number of items inside arrays are irrelevant.
file1.json:
{
  "customerId": "xx",
  "emails": [
    {
      "address": "james@zz.com",
      "customType": "",
      "type": "custom"
    },
    {
      "address": "sales@x.com",
      "primary": true
    },
    {
      "address": "info@x.com"
    }
  ]
}
{
  "id": "654",
  "emails": [
    {
      "address": "peter@x.com",
      "primary": true
    }
  ]
}
The desired output is a JSON object with all possible keys from all input objects. The values are irrelevant; any value from any input object is OK. But all keys from input objects must be present in the output object:
{
  "emails": [
    {
      "address": "james@zz.com",   <--- any existing value works
      "customType": "",            <--- any existing value works
      "type": "custom",            <--- any existing value works
      "primary": true              <--- any existing value works
    }
  ],
  "customerId": "xx",              <--- any existing value works
  "id": "654"                      <--- any existing value works
}
I tried reducing it, but it misses many of the keys in the array:
$ jq -s 'reduce .[] as $item ({}; . + $item)' file1.json
{
  "customerId": "xx",
  "emails": [
    {
      "address": "peter@x.com",
      "primary": true
    }
  ],
  "id": "654"
}
The structure of the objects contained in file1.json is unknown, so the solution must be agnostic of any keys/values and the solution must not assume any structure or depth.
Is it possible to fix this somehow considering how jq works? Or is it possible to solve this issue using another tool?
PS: For those of you that are curious, this is useful to infer a schema that can be created in a database. Given an arbitrary number of JSON objects with an arbitrary structure, it's easy to create a single JSON squished/merged/fused structure that will "accommodate" all JSON objects.
BigQuery is able to autodetect a schema, but only 500 lines are analyzed to come up with it. This presents problems if objects have different structures past that 500 line mark.
With this approach I can squish a JSON Lines file with millions of objects into one line that can then be imported into BigQuery with the autodetect schema flag, and it will work every time, since BigQuery only has one line to analyze and this line is the "super-schema" of all the objects. After extracting the autodetected schema I can manually fine-tune it to make sure types are correct and then recreate the table specifying my tuned schema:
$ ls -1 users*.json | wc --lines
3672
$ cat users*.json > users-all.json
$ cat users-all.json | wc --lines
146482633
$ jq 'squish' users-all.json > users-all-squished.json
$ cat users-all-squished.json | wc --lines
1
$ bq load --autodetect users users-all-squished.json
$ bq show schema --format=prettyjson users > users-schema.json
$ vi users-schema.json
$ bq rm --table users
$ bq mk --table users --schema=users-schema.json
$ bq load users users-all.json
[Some options are missing or changed for readability]
Here is a solution that produces the expected result for the sample example, and seems to meet all the stated requirements. It is similar to one proposed by @pmf on this page.
jq -n --stream '
def squish: map(if type == "number" then 0 else . end);
reduce (inputs | select(length==2)) as [$p, $v] ({}; setpath($p|squish; $v))
'
Output
For the example given in the Q, the output is:
{
  "customerId": "xx",
  "emails": [
    {
      "address": "peter@x.com",
      "customType": "",
      "type": "custom",
      "primary": true
    }
  ],
  "id": "654"
}
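For context on why select(length==2) is needed: with --stream, jq emits a [path, value] pair for each leaf plus closing [path] markers of length 1, and squish maps the numeric array indices in each path to 0 so that all array elements collapse onto index 0 when setpath writes them back. A quick way to inspect the events; a sketch with a made-up two-element input:
$ echo '{"a":[1,2]}' | jq -c --stream '.'
[["a",0],1]
[["a",1],2]
[["a",1]]
[["a"]]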
As @peak has pointed out, some aspects are underspecified. For instance, what should happen with .customerId and .id? Are they always the same across all files (as suggested by the sample files provided)? Do you want the items of the .emails array just thrown into one large array, or do you want to have them "merged" by some criteria (e.g. by a common value in their .address field)? Here are some stubs to start from:
Simply concatenate the .emails arrays and take all other parts from the first file:
jq 'reduce inputs as $in (.; .emails += $in.emails)' file*.json
# or simpler
jq '.emails += [inputs.emails[]]' file*.json
{
  "emails": [
    {
      "address": "cc@xx.com"
    },
    {
      "address": "james@zz.com",
      "customType": "",
      "type": "custom"
    },
    {
      "address": "james@x.com"
    },
    {
      "address": "sales@x.com",
      "primary": true
    },
    {
      "address": "info@x.com"
    },
    {
      "address": "james@x.com"
    },
    {
      "address": "sales@x.com",
      "primary": true
    },
    {
      "address": "info@x.com"
    }
  ],
  "customerId": "xx",
  "id": "654"
}
Merge the objects in the .emails array by a common value in their .address field, with latter values overwriting former values for other fields with colliding names, and discard all other parts from the files:
jq -n 'reduce inputs.emails[] as $e ({}; .[$e.address] += $e) | map(.)' file*.json
[
  {
    "address": "cc@xx.com"
  },
  {
    "address": "james@zz.com",
    "customType": "",
    "type": "custom"
  },
  {
    "address": "james@x.com"
  },
  {
    "address": "sales@x.com",
    "primary": true
  },
  {
    "address": "info@x.com"
  }
]
If you are only interested in a list of unique field names for a given address, regardless of the counts and values used, you can also go with:
jq -n '
reduce inputs.emails[] as $e ({}; .[$e.address][$e | keys_unsorted[]] = 1)
| map_values(keys)
'
{
  "cc@xx.com": [
    "address"
  ],
  "james@zz.com": [
    "address",
    "customType",
    "type"
  ],
  "james@x.com": [
    "address"
  ],
  "sales@x.com": [
    "address",
    "primary"
  ],
  "info@x.com": [
    "address"
  ]
}
The structure of the objects contained in file1.json is unknown, so the solution must be agnostic of any keys/values and the solution must not assume any structure or depth.
You can use the --stream flag to break down the structure into an array of paths and values, discard the values part and make the paths unique:
jq --stream -nc '[inputs[0]] | unique[]' file*.json
["customerId"]
["emails"]
["emails",0,"address"]
["emails",0,"customType"]
["emails",0,"primary"]
["emails",0,"type"]
["emails",1,"address"]
["emails",2]
["emails",2,"address"]
["emails",2,"primary"]
["emails",3]
["emails",3,"address"]
["id"]
Trying to build a representation of this, similar to any of the input files, comes with a lot of caveats. For instance, how would you represent it in a single structure if one file had .emails as an array of objects, and another had .emails as just an atomic value, say, a string? You would not be able to represent this plurality without introducing new, possibly ambiguous structures (e.g. putting all possibilities into an array).
Therefore, having a list of paths could be a fair compromise. Judging by your desired output, you want to focus more on the object structure, so you could further reduce complexity by discarding the array indices. Depending on your use case, you could replace them with a single value to retain the information of the presence of an array, or discard them entirely:
jq --stream -nc '[inputs[0] | map(numbers = 0)] | unique[]' file*.json
["customerId"]
["emails"]
["emails",0]
["emails",0,"address"]
["emails",0,"customType"]
["emails",0,"primary"]
["emails",0,"type"]
["id"]
jq --stream -nc '[inputs[0] | map(strings)] | unique[]' file*.json
["customerId"]
["emails"]
["emails","address"]
["emails","customType"]
["emails","primary"]
["emails","type"]
["id"]
The following program meets these two key requirements:
"all keys from input objects must be present in output object";
"the solution must be agnostic of any keys/values and the solution must not assume any structure or depth."
The approach is the same as one suggested by @pmf, and for the example given in the Q, it produces results that are very similar to the one that is shown:
jq -n --stream '
def squish: map(select(type == "string"));
reduce (inputs | select(length==2)) as [$p, $v] ({};
setpath($p|squish; $v))
'
With the given input, this produces:
{
  "customerId": "xx",
  "emails": {
    "address": "peter@x.com",
    "customType": "",
    "type": "custom",
    "primary": true
  },
  "id": "654"
}
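Either of the stream-based programs above is run over all inputs at once; a minimal invocation might look like this (a sketch, assuming the inputs are named file1.json, file2.json, and so on):
$ jq -n --stream 'def squish: map(select(type == "string"));
reduce (inputs | select(length==2)) as [$p, $v] ({}; setpath($p|squish; $v))' file*.json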

Merge and Sort JSON using JQ

I have a file containing the following structure and an unknown number of results:
{
  "results": [
    [
      {
        "field": "AccountID",
        "value": "5177497"
      },
      {
        "field": "Requests",
        "value": "50900"
      }
    ],
    [
      {
        "field": "AccountID",
        "value": "pro"
      },
      {
        "field": "Requests",
        "value": "251"
      }
    ]
  ],
  "statistics": {
    "Matched": 51498,
    "Scanned": 8673577,
    "ScannedByte": 2.72400814E10
  },
  "status": "HOLD"
}
{
  "results": [
    [
      {
        "field": "AccountID",
        "value": "5577497"
      },
      {
        "field": "Requests",
        "value": "51900"
      }
    ],
  "statistics": {
    "Matched": 51498,
    "Scanned": 8673577,
    "ScannedByte": 2.72400814E10
  },
  "status": "HOLD"
}
There are multiple such results, each indexed as an array within the results field. They are not separated by a comma.
I am trying to print just the "AccountID" sorted by "Requests" in ZSH using jq. I have tried flattening them and using:
jq -r '.results[][0] |.value ' filename
jq -r '.results[][1] |.value ' filename
To get the Account ID and Requests separately and then sort them. I don't think bash has a dictionary that can be used. The problem lies in the file: the field name and its value are not stored as a key-value pair, but as a pair of "field" and "value" entries. Therefore extracting them using the above two lines into separate arrays and sorting by the second array seems a bit too long. I was wondering if there is a way to combine both operations.
The other way is to combine it all into a string and sort it in ascending order. Python would probably have the best solution, but the code needs to be a zsh or bash script.
Solutions that use sed, jq or any other ZSH-supported tools are welcome. If there is a way to create a dictionary in bash, please do let me know.
The projected output requirement is just the Account ID vs the request number.
5577497 has 51900 requests
5177497 has 50900 requests
pro has 251 requests
If you don't mind learning a little jq, it will probably be best to write a small jq program to do what you want.
To get you started, consider the following jq program, which assumes your input is a stream of valid JSON objects with a "results" key similar to your sample:
[inputs | .results[] | map( { (.field) : .value} ) | add]
After making minor changes to your input so that it consists of valid JSON objects, an invocation of jq with the -n option produces an array of AccountID/Requests objects:
[
  {
    "AccountID": "5177497",
    "Requests": "50900"
  },
  {
    "AccountID": "pro",
    "Requests": "251"
  },
  {
    "AccountID": "5577497",
    "Requests": "51900"
  }
]
You could (for example) now use jq's group_by to group these objects by AccountID, and thereby produce the result you want.
jq -S '.results[] | map( { (.field) : .value} ) | add' query-results-aggregate \
| jq -s -c 'group_by(.Requests) | .[]'
This does the trick. Thanks to peak for the guidance.
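For reference, a single-pass variant that sorts numerically and prints the desired lines directly might look like this (a sketch, assuming the input has first been corrected to valid JSON objects, as noted above):
$ jq -rn '[inputs | .results[] | map({(.field): .value}) | add]
| sort_by(.Requests | tonumber) | reverse[]
| "\(.AccountID) has \(.Requests) requests"' filename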

Get parent value from json using jq

My JSON file looks like this:
{
  "RQBTYFE86MFC3oL": {
    "name": "Nightmode",
    "lights": [
      "1",
      "2",
      "3",
      "4",
      "5",
      "7",
      "8",
      "9",
      "10",
      "11"
    ],
    "owner": "kvovodUUfn2vlby9h9okdDhv8SrTzkBFjk6kPz2v",
    "recycle": false,
    "locked": false,
    "appdata": {
      "version": 1,
      "data": "QSXCj_r01_d99"
    },
    "picture": "",
    "lastupdated": "2018-08-08T03:21:39",
    "version": 2
  }
}
I want to get the 'RQBTYFE86MFC3oL' value by doing a query for 'Nightmode'. So far I came up with this;
jq '.[] | select(.name == "Nightmode")'
This will return the correct part of the JSON, but the 'RQBTYFE86MFC3oL' part is stripped. How do I get this part as well?
A simple way to determine the key name(s) corresponding to values satisfying a certain condition is to use to_entries, as explained in the jq manual.
Using this approach, the appropriate jq filter would be:
to_entries[] | select(.value.name == "Nightmode") | .key
with the result:
"RQBTYFE86MFC3oL"
If you want to get the key-value pair, you'd use with_entries as follows:
with_entries( select(.value.name == "Nightmode") )
If the input JSON is too large to fit comfortably in memory, then it would make sense to use jq's streaming parser (invoked with the --stream command-line option):
jq --stream '
select(.[1] == "Nightmode" and (first|length) == 2 and first[1] == "name")
| first | first'
This would produce the key name.
The key idea is that the streaming parser produces arrays including pairs of the form: [ARRAYPATH, VALUE] where VALUE is the value at ARRAYPATH.
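For this input, the first few streamed events would look like this (a sketch, assuming the document is saved as my.json):
$ jq -c --stream '.' my.json | head -3
[["RQBTYFE86MFC3oL","name"],"Nightmode"]
[["RQBTYFE86MFC3oL","lights",0],"1"]
[["RQBTYFE86MFC3oL","lights",1],"2"]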
You want to get the key. So use the keys filter to return 'RQBTYFE86MFC3oL': that is the key, and the rest is the value of that key.
jq 'keys'
Here is a snippet: https://jqplay.org/s/YvpCb2PH42
Reference: https://stedolan.github.io/jq/manual/

How to update a subitem in a json file using jq?

Using jq I tried to update this JSON document:
{
  "git_defaults": {
    "branch": "master",
    "email": "jenkins@host",
    "user": "Jenkins"
  },
  "git_namespaces": [
    {
      "name": "NamespaceX",
      "modules": [
        "moduleA",
        "moduleB",
        "moduleC",
        "moduleD"
      ]
    },
    {
      "name": "NamespaceY",
      "modules": [
        "moduleE"
      ]
    }
  ]
}
by adding moduleF to NamespaceY. I need to write the result back to the original source file.
I came close (but no cigar) with:
jq '. | .git_namespaces[] | select(.name=="namespaceY").modules |= (.+ ["moduleF"])' config.json
and
jq '. | select(.git_namespaces[].name=="namespaceY").modules |= (.+ ["moduleF"])' config.json
The following filter should perform the update you want:
(.git_namespaces[] | select(.name=="NamespaceY").modules) += ["moduleF"]
Note that the initial '.|' in your attempt is not needed; that "NamespaceY" is capitalized in config.json; that the parens as shown are the keys to success; and that += can be used here.
One way to write back to the original file would perhaps be to use 'sponge'; other possibilities are discussed on the jq FAQ https://github.com/stedolan/jq/wiki/FAQ
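For example, with sponge from the moreutils package installed (a sketch; sponge soaks up all of its input before writing, so the file is not truncated while jq is still reading it):
$ jq '(.git_namespaces[] | select(.name=="NamespaceY").modules) += ["moduleF"]' config.json | sponge config.json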

How to use `jq` to obtain the keys

My JSON looks like this:
{
  "20160522201409-jobsv1-1": {
    "vmStateDisplayName": "Ready",
    "servers": {
      "20160522201409 jobs_v1 1": {
        "serverStateDisplayName": "Ready",
        "creationDate": "2016-05-22T20:14:22.000+0000",
        "state": "READY",
        "provisionStatus": "PENDING",
        "serverRole": "ROLE",
        "serverType": "SERVER",
        "serverName": "20160522201409 jobs_v1 1",
        "serverId": 2902
      }
    },
    "isAdminNode": true,
    "creationDate": "2016-05-22T20:14:23.000+0000",
    "totalStorage": 15360,
    "shapeId": "ot1",
    "state": "READY",
    "vmId": 4353,
    "hostName": "20160522201409-jobsv1-1",
    "label": "20160522201409 jobs_v1 ADMIN_SERVER 1",
    "ipAddress": "10.252.159.39",
    "publicIpAddress": "10.252.159.39",
    "usageType": "ADMIN_SERVER",
    "role": "ADMIN_SERVER",
    "componentType": "jobs_v1"
  }
}
My key keeps changing from time to time. So, for example, 20160522201409-jobsv1-1 may be something else tomorrow. Also, I may have more than one such entry in the JSON payload.
I want to echo $KEYS and I am trying to do it using jq.
Things I have tried:
| jq .KEYS is the command I use frequently.
Is there a jq command to display all the primary keys in the json?
I only care about the hostName field, and I would like to extract that out. I know how to do it using grep but it is NOT a clean approach.
You can simply use keys:
% jq 'keys' my.json
[
  "20160522201409-jobsv1-1"
]
And to get the first:
% jq -r 'keys[0]' my.json
20160522201409-jobsv1-1
-r is for raw output:
--raw-output / -r: With this option, if the filter’s result is a string then it will be written directly to standard output rather than being formatted as a JSON string with quotes. This can be useful for making jq filters talk to non-JSON-based systems.
Source
If you want a known value below an unknown property, e.g. xxx.hostName:
% jq -r '.[].hostName' my.json
20160522201409-jobsv1-1
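To capture the keys (or the hostnames) into a shell variable as mentioned in the question, command substitution works; a sketch, assuming the same my.json:
% KEYS=$(jq -r 'keys[]' my.json)
% echo "$KEYS"
20160522201409-jobsv1-1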