How to select multiple values in an array in JSON using jq

I am using jq to get multiple values from a JSON file, using the command below:
.components| to_entries[]| "\(.key)- \(.value.status)"
which gives me the output below:
Server2- UP
server1 - UP
Splunk- UP
Datameer - UP
Platfora - UP
diskSpace- Good
But I want to select only a few of them. I tried listing them in braces after to_entries[], but it didn't work.
Expected output:
Server1 - UP
Splunk - UP
Platfora - UP
Is there any way to pick only a few values?
Appreciate your help. Thank you.

With the -r command-line option, the following transforms the given input to the desired output, and is perhaps close to what you're looking for:
.components
| to_entries[]
| select(.key == ("server1", "Splunk", "Platfora"))
| "\(.key)- \(.value.status)"
If the list of components is available as a JSON list, then you could modify the selection criterion accordingly, e.g. using IN (uppercase) or index.
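For example, if the desired names were supplied as a JSON array, a minimal sketch using IN might look like this ($wanted is an illustrative variable name, not part of the original answer, and IN requires a sufficiently recent jq):
.components
| to_entries[]
| select(.key | IN($wanted[]))
| "\(.key)- \(.value.status)"
invoked along the lines of jq -r --argjson wanted '["server1","Splunk","Platfora"]' -f filter.jq input.json (the file names here are placeholders).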

Related

JSONPath syntax does not work when there is a dot in a key in more than 2 filter fields

apiVersion: v1
data:
  backupscript:
  service.properties: |
    agent.download.location = /home/bnr
    script.execution.time.in.minutes = 1
I need to fetch the value of "script.execution.time.in.minutes".
I am using ..-o "jsonpath={.data['service\.properties'].'script\.execution\.time\.in\.minutes'}"
It is giving me an empty result.
How do I apply the escaping to the final field's filter?
Sometimes I find it helpful, when working with jsonpath expressions, to start smaller and build up.
For example, does this work, and if so, what does it return?
-o "jsonpath={.data['service\.properties']"
Then add the next part on.
That being said, I think the problem you're going to run into is that these are not YAML properties; the whole block is a single string, as indicated by the |:
service.properties: |
  agent.download.location = /home/bnr
  script.execution.time.in.minutes = 1
So I don't think you're going to be able to use jsonpath alone to query the value of script.execution.time.in.minutes.
You can probably do what you want with a combination of jsonpath and awk+sed, something like this:
kubectl get pod foo -o jsonpath="{.data['service\.properties']}" | awk -F'=' '$1 ~ /script\.execution\.time\.in\.minutes/ {print $2}' | sed 's/ //'
This does the following:
Get the service properties using jsonpath
Use awk to extract the number from the line that has script.execution.time.in.minutes
Use sed to remove spaces
There might be a more elegant way to accomplish this, but hopefully this can at least help you get an idea of one way to do it.
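As one minor variation (an untested sketch), the sed step can be folded into awk by using gsub to strip the spaces:
kubectl get pod foo -o jsonpath="{.data['service\.properties']}" | awk -F'=' '$1 ~ /script\.execution\.time\.in\.minutes/ {gsub(/ /, "", $2); print $2}'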

Can jq check each element of a comma-separated array of values to check if the value exists in JSON?

I have a JSON file and I am extracting data from it using jq. One simple use case is pulling out any JSON object whose id matches a value provided as an argument.
I use the following simple script to do so:
[.[] | select(.id == $ID)]
The script is stored in a separate file (by_id.jq) which I pass in using the -f argument.
The full command looks something like this:
cat ./my_json_file.json | jq --arg ID "8df993c1-57d5-46b3-a8a3-d95066934e5b" -sf ./by_id.jq
Is there a way, using only jq, to pass a comma-separated list of values as an argument to the jq script, iterate through the ids, and check them against the value of .id in the JSON file, with the result being the objects that have those ids?
For example if I wanted to pull out three objects by their ids I would want to structure the command in this way:
cat ./my_json_file.json | jq --arg ID "8df993c1-57d5-46b3-a8a3-d95066934e5b,1d5441ca-5758-474d-a9fc-40d0f68aa538,23cc618a-8ad4-4141-bc1c-0251y0663963" -sf ./by_id.jq
Sure. Though you'll need to parse (split) that list of ids into something that jq can work with, such as an array of ids. Then your problem becomes: given an array of keys, select objects that have any of those ids, for which you could use the approaches found here.
$ jq --arg ID '8df993c1-57d5-46b3-a8a3-d95066934e5b,1d5441ca-5758-474d-a9fc-40d0f68aa538,23cc618a-8ad4-4141-bc1c-0251y0663963' '
select(.id | IN($ID|split(",")[]))
' ./my_json_file.json
I'm not sure what your input looks like but judging by your use of slurping then filtering the slurped input, it's a stream of objects. The slurping is not necessary here.
Here is an approach that focuses on efficiency.
Your Q indicates that in fact you have a stream of objects, so the first step towards efficiency is to avoid the -s option, and use -n with inputs instead.
The second step is to avoid splitting your comma-separated string of values more than once.
So your script might look like this:
INDEX($ids | splits(","); .) as $dict
| inputs
| select($dict[.id])
And the invocation would look like this:
jq -n --arg ids a,b,c -f by_id.jq ./my_json_file.json
This of course assumes that simply splitting the string of ids on "," will suffice. You might need to trim the values and take care of other potential anomalies.
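For instance, a sketch (not part of the original answer) that trims surrounding whitespace from each id before building the dictionary:
INDEX($ids | splits(",") | gsub("^\\s+|\\s+$"; ""); .) as $dict
| inputs
| select($dict[.id])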
For efficiency, it would be better to split $ID just once.
So if you have to use the -s option, you could use the following jq program:
INDEX($ID | splits(","); .) as $dict
| .[]
| select($dict[.id])
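An invocation of this -s variant might then look like this (a sketch; the id values are placeholders):
jq -s --arg ID "id1,id2,id3" -f by_id.jq ./my_json_file.json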

Use jq in Bash to sort object properties by descending length of their value

I have the following JSON snippet:
{
  "root_path": "/www",
  "core_path": "/www/wp",
  "content_path": "/www/content",
  "vendor_path": "/www/vendor"
}
I would like to use jq first to get the values sorted in descending order of length:
/www/content
/www/vendor
/www/wp
/www
I need these so I can match against a list of files to find which of the named paths the files exist in.
Then I would like to use jq again to swap properties for values (it can drop duplicate properties, that's okay):
{
  "/www": "root_path",
  "/www/wp": "core_path",
  "/www/content": "content_path",
  "/www/vendor": "vendor_path"
}
My use case for this 2nd query is to be able to look up a matched path value and find its path name, which I will then use with a second JSON snippet with an identical schema to get that named path's value.
The context is website deployment: I have a config file that contains file names as they will exist on the deployment server, and the files should be copied from the source server to the deploy server, but the servers may have different directory layouts.
I need to use Bash for this, but if there is a better way to do what I am looking to do, I am open to it. That said, I really do want to learn how to use jq better, so I would prefer jq solutions for these transforms.
I am using jq version 1.5
the values sorted in descending order of length:
[.[]] | sort_by(length) | reverse[]
swap properties for values
with_entries(.key as $k | .key=.value | .value=$k )
Combining the two requirements
A solution to the combined problem can be crafted by combining the above two solutions, because with_entries is a combination of to_entries and from_entries:
to_entries
| map(.key as $k | .key=.value | .value=$k )
| sort_by(.key|length)
| reverse
| from_entries
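Applied to the snippet above, a single invocation might look like this (a sketch; paths.json is a hypothetical file holding the input object):
jq 'to_entries
    | map(.key as $k | .key=.value | .value=$k )
    | sort_by(.key|length)
    | reverse
    | from_entries' paths.json
which should produce the swapped object with the longest paths first:
{
  "/www/content": "content_path",
  "/www/vendor": "vendor_path",
  "/www/wp": "core_path",
  "/www": "root_path"
}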

jq to remove one of the duplicated objects

I have a json file like this:
{"caller_id":"123321","cust_name":"abc"}
{"caller_id":"123443","cust_name":"def"}
{"caller_id":"123321","cust_name":"abc"}
{"caller_id":"234432","cust_name":"ghi"}
{"caller_id":"123321","cust_name":"abc"}
....
I tried:
jq -s 'unique_by(.field1)'
but this will remove all of the duplicated items; I'm looking to keep just one of each duplicated item, to get a file like this:
{"caller_id":"123321","cust_name":"abc"}
{"caller_id":"123443","cust_name":"def"}
{"caller_id":"234432","cust_name":"ghi"}
....
With field1, I doubt you are getting anything useful in the output, since there is no key/field with that name in your data. If you simply change your command to jq -s 'unique_by(.caller_id)', it will give you the desired result, containing unique, sorted objects based on the caller_id key. It ensures that the result has exactly one object for each caller_id. (To get one object per line again, as in your expected output, you could append [] and add the -c option: jq -sc 'unique_by(.caller_id)[]'.)
NOTE: This is the same as what @Jeff Mercado explained in the comments.
If the file consists of a sequence (stream) of JSON objects, then a very simple way to produce a stream of the distinct objects would be to use the invocation:
jq -s 'unique[]'
A similar alternative would be:
jq -n '[inputs] | unique[]'
For large files, however, the above will probably be too inefficient, both with respect to RAM and run-time. Note that both unique and unique_by entail a sort.
A far better alternative would be to take advantage of the fact that the input is a stream, and to avoid the built-in unique and unique_by filters. This can be done with the assistance of the following filters, which are not yet built-in but likely to become so:
# emit a dictionary
def set(s): reduce s as $x ({}; .[$x | (type[0:1] + tostring)] = $x);
# distinct entities in the stream s
def distinct(s): set(s)[];
We now have only to add:
distinct(inputs)
to achieve the objective, provided jq is invoked with the -n command-line option.
This approach will also preserve the original ordering.
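Putting the pieces together, a complete invocation might look like this (a sketch, assuming the stream of objects shown in the question is in input.json):
jq -cn '
  def set(s): reduce s as $x ({}; .[$x | (type[0:1] + tostring)] = $x);
  def distinct(s): set(s)[];
  distinct(inputs)
' input.json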
If the input is an array ...
If the input is an array, then using distinct as defined above still has the advantage of not requiring a sort. For arrays that are too large to fit comfortably in memory, it would be advisable to use jq's streaming parser to create a stream.
One possibility would be to proceed in two steps (jq --stream .... | jq -n ...), but it might be better to do everything in one step (jq -cn --stream ...), using the following "main" program:
distinct(fromstream(inputs
| (.[0] |= .[1:] )
| select(. != [[]])))
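A complete one-step invocation might accordingly look like this (a sketch; big_array.json is a hypothetical file holding a large top-level array):
jq -cn --stream '
  def set(s): reduce s as $x ({}; .[$x | (type[0:1] + tostring)] = $x);
  def distinct(s): set(s)[];
  distinct(fromstream(inputs | (.[0] |= .[1:]) | select(. != [[]])))
' big_array.json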

Use jq to recursively select key names of an object

I have a JSON document that looks like:
simple: 42
normal:
  description: "NORMAL"
combo:
  one:
    description: "ONE"
  two:
    description: "TWO"
arbitrary:
  foo: 42
I want to use a jq expression to generate the following:
["normal", "one", "two"]
The condition to select the key is that its corresponding value is an object type that has a key description. In this case, keys simple and arbitrary don't qualify.
I'm having a hard time crafting the filter. I looked into with_entries and recurse/2 but can't solve it myself.
TIA.
It's not clear to me whether the YAML that you gave is just a "view" of your JSON or whether you actually want to start with YAML. If your document really is YAML, then one approach would be to use a tool (such as yaml2json or yq) to convert the YAML to JSON and then run jq as shown below; another would be to use jq as a text processor, but in that case you could just as well use awk.
yaml2json input.yaml |
jq -c '[.. | objects | to_entries[]
| select(.value | has("description")?) | .key]'
Output
["normal","one","two"]
Streaming parser
This type of problem is also well-suited to jq's streaming parser, which is especially handy when dealing with very large JSON texts. Using jq -n --stream (so that inputs reads the stream of events), a suitable jq filter would be:
[inputs | select(length==2) | .[0] | select(.[-1] == "description") | .[-2]]
The ordering of the results will depend on the ordering of the keys produced by the YAML-to-JSON conversion tool.
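For completeness, a full streaming pipeline might look like this (a sketch, reusing input.yaml from above):
yaml2json input.yaml |
jq -cn --stream '[inputs | select(length==2) | .[0] | select(.[-1] == "description") | .[-2]]'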