I have the JSON output below from Ansible and I want to parse it using jq. I want to match the package name GeoIP and print its version number.
How can I use jq to get the version number for a matching package name? I'm looking for output like this:
"GeoIP","1.5.0"
Sample Ansible output:
{
"ansible_facts.packages": {
"GConf2": [
{
"arch": "x86_64",
"epoch": null,
"name": "GConf2",
"release": "8.el7",
"source": "rpm",
"version": "3.2.6"
}
],
"GeoIP": [
{
"arch": "x86_64",
"epoch": null,
"name": "GeoIP",
"release": "14.el7",
"source": "rpm",
"version": "1.5.0"
}
],
"ImageMagick": [
{
"arch": "x86_64",
"epoch": null,
"name": "ImageMagick",
"release": "18.el7",
"source": "rpm",
"version": "6.7.8.9"
}
],
}
}
First, make the source valid JSON by removing the comma in the third-to-last row.
With that done, we can start at the end and work back. The desired output can be produced using jq's @csv format string. That, in turn, requires that its input is an array. (See the section of the manual titled "Format strings and escaping".) So we need to get the data to look like this.
jq -nr '["GeoIP","1.5.0"] | @csv'
One way to do that is to put each item in its own array and add the arrays together. (See the section titled "Addition".)
jq -r '["GeoIP"] + [.[] | .GeoIP[].version] | #csv'
Since map(x) is defined as [.[] | x], you can say this instead.
jq -r '["GeoIP"] + map(.GeoIP[].version) | #csv'
You can use a variable to specify the package name you want like this.
jq -r --arg "package" "GeoIP" '[$package] + map(.[$package][].version) | #csv'
Update
My original solution has unnecessary steps. The array can be made like this.
jq -r '[ "GeoIP", .[].GeoIP[].version ] | #csv'
Or, using a variable
jq -r --arg "package" "GeoIP" '[$package,(.[] | .[$package][].version)]| #csv'
Q: "How can I use jq to get the version number for matching package name?"
A: Try the script below
$ cat print-version.sh
#!/bin/sh
cat output.json | jq ".ansible_facts.packages.$1 | .[].version"
For example
$ print-version.sh GeoIP
"1.5.0"
Improved version of the script
Quoting the comment: "Using the shell's string interpolation as done here is risky and generally regarded as an "anti-pattern". – peak"
The improved script below does the job, passing the package name as a jq variable with --arg instead of interpolating it into the filter:
$ cat print-version.sh
#!/bin/sh
jq --arg pkg "$1" '.ansible_facts.packages[$pkg][].version' output.json
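Passing the name with --arg keeps it out of the filter text entirely, so it is never interpreted as jq syntax. A sample run against the output.json below:
$ print-version.sh GeoIP
"1.5.0"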
output.json
{
"ansible_facts": {
"packages": {
"GConf2": [
{
"arch": "x86_64",
"epoch": null,
"name": "GConf2",
"release": "8.el7",
"source": "rpm",
"version": "3.2.6"
}
],
"GeoIP": [
{
"arch": "x86_64",
"epoch": null,
"name": "GeoIP",
"release": "14.el7",
"source": "rpm",
"version": "1.5.0"
}
],
"ImageMagick": [
{
"arch": "x86_64",
"epoch": null,
"name": "ImageMagick",
"release": "18.el7",
"source": "rpm",
"version": "6.7.8.9"
}
]
}
}
}
I assume you want to loop over all key-value attributes in ansible_facts.packages, so you can use a Bash loop to iterate over them. Also, the sample JSON you provided isn't valid; I removed the comma three rows from the end.
You need one jq call to get all the keys of ansible_facts.packages and then, in each iteration, another call to get the version for the given key (package).
To get all the keys:
[http_offline@greenhat-30 tmp]$ cat file.json | jq '."ansible_facts.packages" | keys[]'
"GConf2"
"GeoIP"
"ImageMagick"
[http_offline@greenhat-30 tmp]$
Loop:
for package in `cat file.json | jq '."ansible_facts.packages" | keys[]'`
do
VERSION=`cat file.json | jq ".\"ansible_facts.packages\".$package[0].version"`
echo "$package,$VERSION"
done
Sample run:
[http_offline@greenhat-30 tmp]$ for package in `cat file.json | jq '."ansible_facts.packages" | keys[]'`
> do
> VERSION=`cat file.json | jq ".\"ansible_facts.packages\".$package[0].version"`
> echo "$package,$VERSION"
> done
"GConf2","3.2.6"
"GeoIP","1.5.0"
"ImageMagick","6.7.8.9"
[http_offline@greenhat-30 tmp]$
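If you'd rather avoid the shell loop, a single jq invocation using to_entries and @csv produces the same lines (a sketch against the same file.json):
cat file.json | jq -r '."ansible_facts.packages" | to_entries[] | [.key, .value[0].version] | @csv'
"GConf2","3.2.6"
"GeoIP","1.5.0"
"ImageMagick","6.7.8.9"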
Related
I'm working on a bash script to extract a specific field from JSON output using jq.
USERNAME=$(echo "$OUTPUT" | jq -r '.[] | .name')
jq always fails with a parse error: Invalid numeric literal at line 1, column 2.
My REST API result has the output below.
[
{
"url": "#/systemadm/groups/uuid-d6e4e05",
"options": {},
"group_id": 313,
"owner": "abc-123-mec",
"owner_id": "ad1337884",
"id": "c258d7b330",
"name": "abc-group"
},
{
"options": {},
"id": "global%3Regmebers",
"name": "Udata-123"
},
{
"url": "#/systemadm/groups/uuid-38943000",
"options": {},
"group_id": 910,
"owner": "framework-abcc",
"owner_id": "78d4472b738bc",
"id": "38943000057a",
"name": "def-group"
},
........................
............................
......................................
So what's wrong with this jq code for getting "name"?
jq can only process valid JSON.
If the value of OUTPUT is literally "id": "c258d7b330","name": "abc-group", then you could enclose it in curly braces to make it valid JSON. No guarantees though; this depends on the exact format of your input.
OUTPUT='"id": "c258d7b330",
"name": "abc-group"'
USERNAME="$(printf '%s\n' "{$OUTPUT}" | jq -r '.name')"
printf '%s\n' "$USERNAME"; // abc-group
If it cannot be converted to valid JSON, maybe a simple solution using grep+cut or awk would suffice?
OUTPUT='"id": "c258d7b330",
"name": "abc-group"'
USERNAME="$(printf '%s\n' "$OUTPUT" | grep '^"name":' | cut -d'"' -f4)"
awk:
printf '%s\n' "$OUTPUT" | awk -F'"' '/^"name":/{print $4}'
Or even use jq in raw-input mode (-R) to read the input line by line and then filter for the line in which you are interested:
jq -Rr '(select(startswith("\"name\":")) / "\"")[3]'
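For example, fed the same OUTPUT value as above:
printf '%s\n' "$OUTPUT" | jq -Rr '(select(startswith("\"name\":")) / "\"")[3]'   # abc-group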
All of these options are really fragile, and I recommend fixing your input to be actual, valid JSON.
In the JSON below, I'm unable to get only the value that contains reporter.
The output should be jhoncena only, and it should be written to a file.
jq -r '.values' response.json | grep reporter
The output of this is:
"name": "reporter-jhoncena"
{
"size": 3,
"limit": 25,
"isLastPage": true,
"values": [
{
"name": "hello-world"
},
{
"name": "test-frame"
},
{
"name": "reporter-jhoncena"
}
],
"start": 0
}
You can use capture:
jq -r '.values[].name
| capture("^reporter-(?<name>.*)").name
' response.json
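Names that don't match the pattern (such as "hello-world") simply produce no output from capture, so with the sample response.json this prints only:
jhoncena
Append > outfile.txt to write it to a file, as requested.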
You can use split, such as:
jq -r '.values[2].name | split("-")[1]' response.json
Edit: Alternatively, you can use
jq -r '.values[].name | select(.|split("-")[0]=="reporter")|split("-")[1]' response.json > outfile.txt
without needing to know the position of the element within the array.
jq -r '.values[]
| select(.name|index("reporter"))
| .name
| sub("reporter-";"")' in.json > out.txt
Of course you might wish to use a different selection criterion, e.g. using startswith or test.
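For example, with startswith the selection might look like this (an equivalent sketch for this input):
jq -r '.values[]
| select(.name|startswith("reporter-"))
| .name
| sub("reporter-";"")' in.json > out.txt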
This is the JSON object:
{
"success": true,
"terms": "https://coinlayer.com/terms",
"privacy": "https://coinlayer.com/privacy",
"timestamp": 1620244806,
"target": "USD",
"rates": {
"611": 0.389165,
"ABC": 59.99,
"ACP": 0.014931,
"ACT": 0.021098,
"ACT*": 0.017178,
"ADA": 1.460965
}
}
I require this type of output:
611,0.389165
ABC,59.99
ACP,0.014931
ACT,0.021098
ACT*,0.017178
ADA,1.460965
Can somebody help me figure out how to do this, preferably with jq or a shell command?
You can use @csv to generate CSV output from arrays, and to_entries to break up the object's elements into said arrays:
$ jq -r '.rates | to_entries[] | [ .key, .value ] | @csv' input.json
"611",0.389165
"ABC",59.99
"ACP",0.014931
"ACT",0.021098
"ACT*",0.017178
"ADA",1.460965
With the following input file:
{
"events": [
{
"mydata": {
"id": "123456",
"account": "21234"
}
},
{
"mydata": {
"id": "123457",
"account": "21234"
}
}
]
}
When I run it through this JQ filter,
jq ".events[] | [.mydata.id, .mydata.account]" events.json
I get a set of arrays:
[
"123456",
"21234"
]
[
"123457",
"21234"
]
When I put this output through the @csv filter to create CSV output:
jq ".events[] | [.mydata.id, .mydata.account] | @csv" events.json
I get each row as a single JSON-encoded string:
"\"123456\",\"21234\""
"\"123457\",\"21234\""
I would like a CSV file with two fields per row, like this:
"123456","21234"
"123457","21234"
What am I doing wrong?
Use the -r flag.
Here is the explanation in the manual:
--raw-output / -r: With this option, if the filter's result is a string then it will be written directly to standard output rather than
being formatted as a JSON string with quotes.
jq -r '.events[] | [.mydata.id, .mydata.account] | @csv'
Yields
"123456","21234"
"123457","21234"
I have the following JSON output:
{
"environment": {
"reg": "abc"
},
"system": {
"svcs": {
"upsvcs": [
{
"name": "monitor",
"tags": [],
"vmnts": [],
"label": "upsvcs",
"credentials": {
"Date": "Feb152018",
"time": "1330"
}
},
{
"name": "application",
"tags": [],
"vmnts": [],
"label": "upsvcs",
"credentials": {
"lastViewed": "2018-02-07"
}
}
]
}
}
}
and I want to retrieve the Date value (from credentials). I have tried `curl xxx | jq -r '. | select (.Date)'`,
which is not returning any value. Can someone please let me know the correct syntax, with an explanation of how to retrieve elements (or point me to any articles that do so)?
TIA
The (or at least a) short answer is:
.system.svcs.upsvcs[0].credentials.Date
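In full, assuming the JSON above is saved as input.json (the file name used in the rest of this answer):
$ jq '.system.svcs.upsvcs[0].credentials.Date' input.json
"Feb152018"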
Without a schema, it is often a bit tricky to get the exact path correctly, so you might want to consider using the jq filter paths. In your case:
$ jq -c paths input.json | grep Date
["system","svcs","upsvcs",0,"credentials","Date"]
You could also use this path (i.e., the above array) directly:
$ jq 'getpath(["system","svcs","upsvcs",0,"credentials","Date"])' input.json
"Feb152018"
and thus you could use a "path-free" query:
$ jq --argjson p $(jq -c paths input.json | grep --max 1 Date) 'getpath($p)' input.json
"Feb152018"
Another approach would be just to retrieve all “truthy” .Date values, no matter where they appear:
$ jq -c '.. | .Date? // empty' input.json
"Feb152018"
select
Since you mentioned select, please note that:
$ jq -c '.. | select(.Date?)' input.json
{"Date":"Feb152018","time":"1330"}
Further information
For further information about jq in general and the topic of retrieval in particular, see the online tutorial, manual, and FAQ:
https://stedolan.github.io/jq/tutorial/
https://stedolan.github.io/jq/manual/v1.5/
https://github.com/stedolan/jq/wiki/FAQ