I'm trying to figure out how to print the key names together with specific nested values.
My JSON is:
{
"results": 3,
"rows": [
{
"hostname1": {
"tags": [
"owner:TEAM_A",
"friendlyname:myhost1",
"x:abc",
"y:jkl"
]
}
},
{
"hostname2": {
"tags": [
"friendlyname:myhost2",
"owner:TEAM_A",
"x:def",
"q:jkl"
]
}
},
{
"hostname3": {
"tags": [
"owner:TEAM_A",
"x:ghi",
"friendlyname:myhost3",
"q:jkl"
]
}
}
]
}
What I've already achieved is printing just the hostname keys:
jq -r '.rows[] | keys[]' example.json
hostname1
hostname2
hostname3
I know how to print key:value entries from the tags array:
jq -r '.rows[0].hostname1.tags[0,1]' example.json
owner:TEAM_A
friendlyname:myhost1
But I can't figure out how to print
hostname1
"owner:TEAM_A",
"friendlyname:myhost1",
hostname2
"owner:TEAM_A",
"friendlyname:myhost2",
hostname3
"owner:TEAM_A",
"friendlyname:myhost3",
Be aware that the keys in the tags array come in a different order for each host, so I cannot reach them through .rows[0].hostname1.tags[0,1]. I'm looking for something like .rows[0].all_keys.tags[owner,friendlyname]
My bash script was very close, but the varying key order breaks it:
hostnames=`jq -r '.rows[] | keys[]' example.json`
count=0
for i in $hostnames
do
jq -r .rows[$count].$i\.tags[0,1] example.json
echo $i
((count=count+1))
done
Turning tags into an object first would make it easier to retrieve tags in a particular order.
.rows[][].tags | INDEX(sub(":.*"; "")) | .owner, .friendlyname
And it seems like you don't need a shell loop for this task, JQ can do all that and even more on its own.
.rows[]
| keys_unsorted[] as $hostname
| .[$hostname].tags
| INDEX(sub(":.*"; ""))
| $hostname, "\t" + (.owner, .friendlyname)
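For readers without jq at hand, the same "index the tags by their prefix, then pick owner and friendlyname" idea can be sketched in plain Python (a comparison sketch, not part of the original answer; the sample data is the JSON from the question):

```python
import json

doc = json.loads("""
{"results": 3, "rows": [
  {"hostname1": {"tags": ["owner:TEAM_A", "friendlyname:myhost1", "x:abc", "y:jkl"]}},
  {"hostname2": {"tags": ["friendlyname:myhost2", "owner:TEAM_A", "x:def", "q:jkl"]}},
  {"hostname3": {"tags": ["owner:TEAM_A", "x:ghi", "friendlyname:myhost3", "q:jkl"]}}
]}
""")

lines = []
for row in doc["rows"]:
    for hostname, obj in row.items():
        # Mirror jq's INDEX(sub(":.*"; "")): key each tag by its name prefix,
        # so lookup no longer depends on the order of the tags array
        index = {tag.split(":", 1)[0]: tag for tag in obj["tags"]}
        lines.append(hostname)
        lines.append("\t" + index["owner"])
        lines.append("\t" + index["friendlyname"])

print("\n".join(lines))
```

Like the jq version, this works regardless of where owner and friendlyname sit inside each tags array.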
You can use to_entries to convert an object into an array of key-value pairs, then access .key and .value of its items to your liking. For instance:
jq -r '.rows[] | to_entries[] | [.key, .value.tags[0,1]] | join("\n ")'
hostname1
owner:TEAM_A
friendlyname:myhost1
hostname2
friendlyname:myhost2
owner:TEAM_A
hostname3
owner:TEAM_A
x:ghi
Another example:
jq -r '
.rows[] | to_entries[] | [.key, (
.value.tags[] | select(startswith("owner:", "friendlyname:"))
)] | join("\n ")
'
hostname1
owner:TEAM_A
friendlyname:myhost1
hostname2
friendlyname:myhost2
owner:TEAM_A
hostname3
owner:TEAM_A
friendlyname:myhost3
I'm trying to create a script that reads the return code from the JSON output of a curl command.
My curl command is:
curl -sk 'https://192.168.0.1/custom/getData.php?device=mydevice&object=http--HTTP_v6_Global Index&indicator=http_httpCode&plugin=xxx' | jq '.'
The json output is:
{
"device": "mydevice",
"object": "http--HTTP_v6_Global ",
"object_descr": "HTTP download of http://192.168.0.1",
"indicator": "http_httpCode",
"indicator_descr": null,
"plugin": "xxx",
"starttime": 1650468121,
"endtime": 1650468421,
"data": {
"1650468248": {
"http_httpCode#HTTP Code": 200
}
}
}
How can I read the value of "http_httpCode#HTTP Code" if 1650468248 is a dynamic value?
You could use to_entries so you can use .value to target the 'unknown' key:
jq '.data | to_entries | first | .value."http_httpCode#HTTP Code"'
Another approach by just 'looping over everything' either with .. or .[]:
jq '.data | .. | ."http_httpCode#HTTP Code"? // empty'
jq '.data | .[]."http_httpCode#HTTP Code"'
They all return 200
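The "take the first value instead of naming the unknown key" idea translates directly to Python, for comparison (a sketch using the JSON from the question):

```python
import json

data = json.loads("""
{"data": {"1650468248": {"http_httpCode#HTTP Code": 200}}}
""")

# The timestamp key is dynamic, so grab the first value instead of
# addressing the key by name (same idea as jq's `to_entries | first | .value`)
first_entry = next(iter(data["data"].values()))
code = first_entry["http_httpCode#HTTP Code"]
print(code)  # 200
```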
Thanks, I solved it.
This is my solution:
jq -r '.data | .[]."http_httpCode#HTTP Code"'
The JSON files look like:
{
"name": "My Collection",
"description": "This is a great collection.",
"date": 1639717379161,
"attributes": [
{
"trait_type": "Background",
"value": "Sand"
},
{
"trait_type": "Skin",
"value": "Dark Brown"
},
{
"trait_type": "Mouth",
"value": "Smile Basic"
},
{
"trait_type": "Eyes",
"value": "Confused"
}
]
}
I found a shell script that uses jq and has this code:
i=1
for eachFile in *.json; do
cat $i.json | jq -r '.[] | {column1: .name, column2: .description} | [.[] | tostring] | @csv' > extract-$i.csv
echo "converted $i of many json files..."
((i=i+1))
done
But its output is:
jq: error (at <stdin>:34): Cannot index string with string "name"
converted 1 of many json files...
Any suggestions on how I can make this work? Thank you!
Quick jq lesson
===========
jq filters are applied like this:
jq -r '.name_of_json_field_0 <optional filter>, .name_of_json_field_1 <optional filter>'
and so on and so forth. A single dot is the simplest filter; it leaves the data field untouched.
jq -r '.name_of_field | .'
You may also omit the filter entirely for the same effect.
In your case:
jq -r '.name, .description'
will extract the values of both those fields.
.[] will unwrap an array to have the next piped filter applied to each unwrapped value. Example:
jq -r '.attributes | .[]'
extracts each object from the attributes array.
You may sometime want to repackage objects in an array by surrounding the filter in brackets:
jq -r '[.name, .description, .date]'
You may sometime want to repackage data in an object by surrounding the filter in curly braces:
jq -r '{new_field_name: .name, super_new_field_name: .description}'
Playing around with these, I was able to get
jq -r '[.name, .description, .date, (.attributes | [.[] | .trait_type] | @csv | gsub(",";";") | gsub("\"";"")), (.attributes | [.[] | .value] | @csv | gsub(",";";") | gsub("\"";""))] | @csv'
to give us:
"My Collection","This is a great collection.",1639717379161,"Background;Skin;Mouth;Eyes","Sand;Dark Brown;Smile Basic;Confused"
Name, description, and date were left as is, so let's break down the weird parts, one step at a time.
.attributes | [.[] | .trait_type]
.[] extracts each element of the attributes array and pipes the result of that into the next filter, which says to simply extract trait_type, where they are re-packaged in an array.
.attributes | [.[] | .trait_type] | @csv
turns the array into a CSV-parsable format.
(.attributes | [.[] | .trait_type] | @csv | gsub(",";";") | gsub("\"";""))
Parens separate this from the rest of the evaluations, obviously.
The first gsub here replaces commas with semicolons so they don't get interpreted as a separate field, the second removes all extra double quotes.
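The whole pipeline, i.e. join the trait names and values with semicolons, then emit one quoted CSV row, can be sketched in Python for comparison (not part of the original answer; the sample data is the JSON from the question):

```python
import json

doc = json.loads("""
{"name": "My Collection", "description": "This is a great collection.",
 "date": 1639717379161,
 "attributes": [
   {"trait_type": "Background", "value": "Sand"},
   {"trait_type": "Skin", "value": "Dark Brown"},
   {"trait_type": "Mouth", "value": "Smile Basic"},
   {"trait_type": "Eyes", "value": "Confused"}]}
""")

# Semicolon-join so the inner lists don't collide with the CSV comma separator
traits = ";".join(a["trait_type"] for a in doc["attributes"])
values = ";".join(a["value"] for a in doc["attributes"])

row = [doc["name"], doc["description"], doc["date"], traits, values]
# Quote strings, leave numbers bare, like jq's @csv
csv_line = ",".join(f'"{f}"' if isinstance(f, str) else str(f) for f in row)
print(csv_line)
```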
This is best explained with expected input and output.
Given this input:
{
"27852380038": {
"compute_id": 34234234,
"to_compute": [
{
"asset_id": 304221854,
"new_scheme": "mynewscheme",
"original_host": "oldscheme1234.location.com"
},
{
"asset_id": 12123121,
"new_scheme": "myotherscheme",
"original_host": "olderscheme1234.location.com"
}
]
},
"31352333022": {
"compute_id": 43888877,
"to_compute": [
{
"asset_id": 404221555,
"new_scheme": "mynewscheme",
"original_host": "oldscheme1234.location.com"
},
{
"asset_id": 52123444,
"new_scheme": "myotherscheme",
"original_host": "olderscheme1234.location.com"
}
]
}
}
And the asset_id that I'm searching for, 12123121, the output should be:
27852380038
So I want the top level keys where any of the asset_ids in to_compute match my input asset_id.
I haven't seen any jq example so far that combines nested access with an any test / if else.
The task can be accomplished without using environment variables, e.g.
< input.json jq -r --argjson ASSET_ID 12123121 '
to_entries[]
| {key, asset_id: .value.to_compute[].asset_id}
| select(.asset_id==$ASSET_ID)
| .key'
or more efficiently, using the filter:
to_entries[]
| select( any( .value.to_compute[]; .asset_id==$ASSET_ID) )
| .key
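The same "keep the top-level keys where any nested asset_id matches" test reads naturally in Python too, for comparison (a sketch using the input from the question):

```python
import json

doc = json.loads("""
{"27852380038": {"compute_id": 34234234, "to_compute": [
    {"asset_id": 304221854, "new_scheme": "mynewscheme", "original_host": "oldscheme1234.location.com"},
    {"asset_id": 12123121, "new_scheme": "myotherscheme", "original_host": "olderscheme1234.location.com"}]},
 "31352333022": {"compute_id": 43888877, "to_compute": [
    {"asset_id": 404221555, "new_scheme": "mynewscheme", "original_host": "oldscheme1234.location.com"},
    {"asset_id": 52123444, "new_scheme": "myotherscheme", "original_host": "olderscheme1234.location.com"}]}}
""")

asset_id = 12123121
# Same shape as jq's `to_entries[] | select(any(.value.to_compute[]; ...)) | .key`
matches = [key for key, value in doc.items()
           if any(item["asset_id"] == asset_id for item in value["to_compute"])]
print(matches)  # ['27852380038']
```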
With some help from a coworker I was able to figure it out:
$ export ASSET_ID=12123121
$ cat input.json | jq -r "to_entries[] | .value.to_compute[] + {job: .key} | select(.asset_id==$ASSET_ID) | .job"
27852380038
input.json:-
{
"menu": {
"id": "file",
"value": "File",
"user": {
"address": "USA",
"email": "user@gmail.com"
}
}
}
Command:-
result=$(cat input.json | jq -r '.menu | keys[]')
Result:-
id
value
user
Loop through result:-
for type in "${result[@]}"
do
echo "--$type--"
done
Output:-
--id
value
user--
I want to process the key values in a loop, but when I do the above, the result comes out as a single string.
How can I loop over the JSON keys in a bash script?
The canonical way:
file='input.json'
cat "$file" | jq -r '.menu | keys[]' |
while IFS= read -r value; do
echo "$value"
done
bash faq #1
But you seem to want an array, so the syntax is (note the added parentheses):
file='input.json'
result=( $(cat "$file" | jq -r '.menu | keys[]') )
for type in "${result[@]}"; do
echo "--$type--"
done
Output:
--id--
--value--
--user--
Using bash just to print an object's keys from JSON data is redundant.
Jq is able to handle it by itself. Use the following simple jq solution:
jq -r '.menu | keys_unsorted[] | "--"+ . +"--"' input.json
The output:
--id--
--value--
--user--
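For comparison, the same key iteration in plain Python (a sketch using the question's input; note that Python dicts preserve insertion order, matching jq's keys_unsorted rather than the sorting keys):

```python
import json

doc = json.loads("""
{"menu": {"id": "file", "value": "File",
          "user": {"address": "USA", "email": "user@gmail.com"}}}
""")

menu_keys = list(doc["menu"])  # insertion order: id, value, user
for key in menu_keys:
    print(f"--{key}--")
```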
I'm trying to convert an object that looks like this:
{
"123" : "abc",
"231" : "dbh",
"452" : "xyz"
}
To csv that looks like this:
"123","abc"
"231","dbh"
"452","xyz"
I would prefer to use the command line tool jq but can't seem to figure out how to do the assignment. I managed to get the keys with jq '. | keys' test.json but couldn't figure out what to do next.
The problem is you can't convert a k:v object like this straight into CSV with @csv. It needs to be an array, so we need to convert it to an array first. If the keys were named it would be simple, but they're dynamic, so it's not so easy.
Try this filter:
to_entries[] | [.key, .value]
to_entries converts an object to an array of key/value objects. [] breaks up the array into its individual items,
then each item is converted to an array containing the key and value.
This produces the following output:
[
"123",
"abc"
],
[
"231",
"dbh"
],
[
"452",
"xyz"
]
Then you can use the @csv filter to convert the rows to CSV rows.
$ echo '{"123":"abc","231":"dbh","452":"xyz"}' | jq -r 'to_entries[] | [.key, .value] | @csv'
"123","abc"
"231","dbh"
"452","xyz"
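The to_entries-then-@csv pattern maps onto Python's csv module almost line for line, for comparison (a sketch using the object from the question):

```python
import csv
import io
import json

obj = json.loads('{"123": "abc", "231": "dbh", "452": "xyz"}')

buf = io.StringIO()
# QUOTE_ALL quotes every field, matching jq's @csv output
writer = csv.writer(buf, quoting=csv.QUOTE_ALL, lineterminator="\n")
for key, value in obj.items():  # same shape as jq's to_entries[]
    writer.writerow([key, value])

print(buf.getvalue(), end="")
```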
Jeff's answer is a good starting point; here is something closer to what you expect:
cat input.json | jq 'to_entries | map([.key, .value]|join(","))'
[
"123,abc",
"231,dbh",
"452,xyz"
]
But I did not find a way to join using newlines:
cat input.json | jq 'to_entries | map([.key, .value]|join(","))|join("\n")'
"123,abc\n231,dbh\n452,xyz"
Here's an example I ended up using this morning (processing PagerDuty alerts):
cat /tmp/summary.json | jq -r '
.incidents
| map({desc: .trigger_summary_data.description, id:.id})
| group_by(.desc)
| map(length as $len
| {desc:.[0].desc, length: $len})
| sort_by(.length)
| map([.desc, .length] | @csv)
| join("\n") '
This dumps a CSV document that looks something like:
"[Triggered] Something annoyingly frequent",31
"[Triggered] Even more frequent alert!",35
"[No data] Stats Server is probably acting up",55
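The group_by / length / sort_by steps of that pipeline are essentially a frequency count; a minimal Python sketch of the same idea (the incident descriptions here are made-up stand-ins for .trigger_summary_data.description):

```python
from collections import Counter

# Hypothetical incident descriptions standing in for the PagerDuty data
descriptions = [
    "[Triggered] Something annoyingly frequent",
    "[No data] Stats Server is probably acting up",
    "[Triggered] Something annoyingly frequent",
]

counts = Counter(descriptions)                        # group_by(.desc) + length
rows = sorted(counts.items(), key=lambda kv: kv[1])   # sort_by(.length)
for desc, n in rows:
    print(f'"{desc}",{n}')                            # map([...] | @csv)
```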
Try this; it gives the same output you want:
echo '{"123":"abc","231":"dbh","452":"xyz"}' | jq -r 'to_entries | .[] | "\"" + .key + "\",\"" + (.value | tostring)+ "\""'
onecol2txt () {
awk 'BEGIN { RS="_end_"; FS="\n"}
{ for (i=2; i <= NF; i++){
printf "%s ",$i
}
printf "\n"
}'
}
cat jsonfile | jq -r -c '....,"_end_"' | onecol2txt