Using jq to get JSON values

Input JSON:
{
  "food_group": "fruit",
  "glycemic_index": "low",
  "fruits": {
    "fruit_name": "apple",
    "size": "large",
    "color": "red"
  }
}
The two jq commands below work:
# jq -r 'keys_unsorted[] as $key | "\($key), \(.[$key])"' food.json
food_group, fruit
glycemic_index, low
fruits, {"fruit_name":"apple","size":"large","color":"red"}
# jq -r 'keys_unsorted[0:2] as $key | "\($key)"' food.json
["food_group","glycemic_index"]
How can I get the values for the first two keys using jq in the same manner? I tried the following:
# jq -r 'keys_unsorted[0:2] as $key | "\($key), \(.[$key])"' food.json
jq: error (at food.json:9): Cannot index object with array
Expected output:
food_group, fruit
glycemic_index, low

To iterate over an object, you can use to_entries, which transforms it into an array of key/value pairs.
You can then use select to filter the entries you want to keep.
jq -r 'to_entries[]| select( ( .value | type ) == "string" ) | "\(.key), \(.value)" '
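Run against the sample food.json above, this should print:
food_group, fruit
glycemic_index, low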

You can use to_entries
to_entries[] | select(.key=="food_group" or .key=="glycemic_index") | "\(.key), \(.value)"
Demo
https://jqplay.org/s/Aqvos4w7bo
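If you specifically want the first two keys by position, a minimal sketch (assuming the same food.json input) slices the array produced by to_entries before iterating:
jq -r 'to_entries[0:2][] | "\(.key), \(.value)"' food.json
to_entries[0:2] keeps the first two key/value pairs, and the string interpolation then prints them as
food_group, fruit
glycemic_index, low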


Windows version fails where jqplay.org works

I've been using jq to parse the output from AWS cli.
The output looks something like this..
{
  "Vpcs": [
    {
      "CidrBlock": "10.29.19.64/26",
      "State": "available",
      "VpcId": "vpc-0ba51bd29c41d41",
      "IsDefault": false,
      "Tags": [
        {
          "Key": "Name",
          "Value": "CloudEndure-Europe-Development"
        }
      ]
    }
  ]
}
and the script I am using looks like this..
.Vpcs[] | [.VpcId, .CidrBlock, (.Tags[]|select(.Key=="Name")|.Value)]
If I run it under Windows it fails like this.
jq: error: Name/0 is not defined at , line 1:
.Vpcs[] | [.VpcId, .CidrBlock, (.Tags[]|select(.Key==Name)|.Value)]
jq: 1 compile error
But it works fine in jqplay.org.
Any ideas? On Windows I'm using jq 1.6.
Thanks
Bruce.
The correct jq program is
.Vpcs[] | [.VpcId, .CidrBlock, ( .Tags[] | select( .Key == "Name" ) | .Value ) ]
You didn't show the command you used, but you provided the following to jq:
.Vpcs[] | [.VpcId, .CidrBlock, ( .Tags[] | select( .Key == Name ) | .Value ) ]
That's incorrect. (Notice the missing quotes.)
Not only did you not show the command you used, you also didn't specify whether it was being provided to the Windows API (CreateProcess), the Windows shell (cmd), or PowerShell.
I'm guessing cmd. In order to provide the above program to jq, you can use the following cmd command:
jq ".Vpcs[] | [.VpcId, .CidrBlock, ( .Tags[] | select( .Key == \"Name\" ) | .Value ) ]" file.json
I don't agree with ikegami about the CMD command they provided, because the character used for CMD escaping is ^, not \ as in Assembly/C/C++. I hope this will work (I don't want to test it on my potato machine):
jq .Vpcs[] | [.VpcId, .CidrBlock, ( .Tags[] | select( .Key == "Name" ) | .Value ) ] file.json
or this:
jq .Vpcs[] | [.VpcId, .CidrBlock, ( .Tags[] | select( .Key == ^"Name^" ) | .Value ) ] file.json
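A way to sidestep shell quoting on Windows entirely is to put the program in a file and load it with jq's -f/--from-file option; a minimal sketch, where vpcs.jq is just an example file name:
# contents of vpcs.jq (example file name)
.Vpcs[] | [ .VpcId, .CidrBlock, ( .Tags[] | select( .Key == "Name" ) | .Value ) ]
and then, from cmd or PowerShell:
jq -f vpcs.jq file.json
Since the program never passes through the shell, the inner quotes need no escaping.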

Is there a way to filter a JSON object using jq to only include those with a key matching a value from a known list?

I have a JSON array, and another text file that contains a list of values.
[
  {
    "key": "foo",
    "detail": "bar"
  },
  ...
]
I need to filter the array elements to only those that have a "key" value that is found in the list of values.
The list of values is a text file containing a single item per-line.
foo
baz
Is this possible to do using jq?
You can use the following:
jq --rawfile to_keep_file to_keep.txt '
( [ $to_keep_file | match(".+"; "g").string | { (.): true } ] | add ) as $to_keep_lkup |
map(select($to_keep_lkup[.key]))
' to_filter.json
or
(
jq -sR . to_keep.txt
cat to_filter.json
) | jq -n '
( [ input | match(".+"; "g").string | { (.): true } ] | add ) as $to_keep_lkup |
inputs | map(select($to_keep_lkup[.key]))
'
The former requires jq v1.6, the first version to provide --rawfile.
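A variant of the same idea that splits the raw file on newlines instead of using match; a sketch assuming the same file names and jq 1.6 for --rawfile:
jq --rawfile to_keep_file to_keep.txt '
( $to_keep_file | split("\n") | map(select(length > 0)) ) as $to_keep |
map(select(.key | IN($to_keep[])))
' to_filter.json
split("\n") turns the raw file into an array of lines (dropping the empty trailing one), and IN tests whether .key occurs in that array.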

How to get the first object after filtering an array in jq?

Given the following JSON
{
  "tags": [
    {
      "key": "env",
      "value": "foo"
    },
    {
      "key": "env",
      "value": "bar"
    }
  ]
}
I am trying to find the first tag where the key is env. I have this:
.tags[] | select (.key=="env") |.[0]
but that gives me an error Cannot index object with number
Use first(expr) to provide an expression that satisfies your use case.
first(.tags[]? | select(.key == "env") .value)
You could wrap the results of your query in an array and then pick the first one
[.tags[] | select(.key=="env")] | .[0]
jq -r 'first( .tags[] | select(.key=="env") ).value'
.tags[] flattens the array into a stream of values. You're applying .[0] to each of the values, not a filtered array. To filter an array, you'd use
.tags | map(select(...)) | .[0]
or
.tags | map(select(...)) | first
map(...) is a shorthand for [ .[] | ... ], so the above is equivalent to
.tags | [ .[] | select(...) ] | first
and
[ .tags[] | select(...) ] | first
Finally, [ ... ] | first can be written as first(...).
first( .tags[] | select(...) )
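limit(1; f) behaves the same way here, emitting at most one output of f; a quick equivalent of the answers above, assuming the input is in tags.json:
jq -r 'limit(1; .tags[] | select(.key == "env")) | .value' tags.json
# prints: foo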

Using jq to convert json to csv

I'm trying to come up with the correct jq syntax to convert json to csv.
Desired results:
<email>,<id>,<name>
e.g.
user1#whatever.nevermind.no,0,general
user2#whatever.nevermind.no,0,general
user1#whatever.nevermind.no,1,local
...
Note that I also need to ignore objects with an empty "agent_priorities".
Input
[
  {
    "id": 0,
    "name": "General",
    "agent_priorities": {
      "user1#whatever.nevermind.no": "normal",
      "user2#whatever.nevermind.no": "normal"
    }
  },
  {
    "id": 1,
    "name": "local",
    "agent_priorities": {
      "user1#whatever.nevermind.no": "normal"
    }
  },
  {
    "id": 2,
    "name": "Engineering"
  }
]
The following variant of the accepted answer checks for the existence of the "agent_priorities" key as per the requirements, and uses keys_unsorted to preserve the order of the keys:
jq -r '
.[]
| select(has("agent_priorities"))
| .id as $id
| .name as $name
| .agent_priorities
| keys_unsorted[]
| [., $id, $name ]
| @csv
' file.json
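For the sample input above, this should produce (with @csv quoting the string fields):
"user1#whatever.nevermind.no",0,"General"
"user2#whatever.nevermind.no",0,"General"
"user1#whatever.nevermind.no",1,"local"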
Store the id and name in variables, then iterate over the keys of agent_priorities:
jq -r '.[]
| .id as $id
| .name as $name
| .agent_priorities
| keys
| .[]
| [., $id, $name ]
| @csv
' file.json

How to convert nested JSON to CSV using only jq

I have the following JSON:
{
  "A": {
    "C": {
      "D": "T1",
      "E": 1
    },
    "F": {
      "D": "T2",
      "E": 2
    }
  },
  "B": {
    "C": {
      "D": "T3",
      "E": 3
    }
  }
}
I want to convert it into CSV as follows:
A,C,T1,1
A,F,T2,2
B,C,T3,3
Description of the output: the parent keys are printed until a leaf object is reached; once at the leaf, its values are printed.
I've tried the following without success,
cat my.json | jq -r '(map(keys) | add | unique) as $cols | map(. as $row | $cols | map($row[.])) as $rows | $rows[] | @csv'
and it throws an error.
I can't hardcode the parent keys, as the actual JSON has too many records, but its structure is the same. What am I missing?
Some of the requirements are unclear, but the following solves one interpretation of the problem:
paths as $path
| {path: $path, value: getpath($path)}
| select(.value|type == "object" )
| select( [.value[]][0] | type != "object")
| .path + ([.value[]])
| @csv
(This program could be optimized but the presentation here is intended to make the separate steps clear.)
Invocation:
jq -r -f leaves-to-csv.jq input.json
Output:
"A","C","T1",1
"A","F","T2",2
"B","C","T3",3
Unquoted strings
To avoid the quotation marks around strings, you could replace the last component of the pipeline above with:
join(",")
Here is a solution using tostream and group_by
[
tostream
| select(length == 2) # e.g. [["A","C","D"],"T1"]
| .[0][:-1] + [.[1]] # ["A","C","T1"]
]
| group_by(.[:-1]) # [[["A","C","T1"],["A","C",1]],...
| .[] # [["A","C","T1"],["A","C",1]]
| .[0][0:2] + map(.[-1]|tostring) # ["A","C","T1","1"]
| join(",") # "A,C,T1,1"
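Invocation is the same as above; for instance, with the program saved in a file (tostream-to-csv.jq is just an example name):
jq -r -f tostream-to-csv.jq my.json
which should again print
A,C,T1,1
A,F,T2,2
B,C,T3,3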