JMESPath: filtering out by nested attributes - json

I am trying to apply a filter using the JMESPath jp utility (https://github.com/jmespath/jp).
My goal is to filter out only the flows whose state is 'ADDED' and that have a specific device id (e.g. 0000debf17cff54b).
I am trying something like this:
cat test | ./jp '[][?id=="of:00002259146f7743" && state=="ADDED"]'
but the result is []. The input file test contains:
[
  {
    "flow": [
      {
        "ethType": "0x86dd",
        "type": "ETH_TYPE"
      },
      {
        "protocol": 58,
        "type": "IP_PROTO"
      },
      {
        "icmpv6Type": 135,
        "type": "ICMPV6_TYPE"
      }
    ],
    "id": "of:00001aced404664b",
    "state": "ADDED"
  },
  {
    "flow": [
      {
        "ethType": "0x86dd",
        "type": "ETH_TYPE"
      },
      {
        "protocol": 58,
        "type": "IP_PROTO"
      },
      {
        "icmpv6Type": 136,
        "type": "ICMPV6_TYPE"
      }
    ],
    "id": "of:0000debf17cff54b",
    "state": "ADDED"
  }
]

No need to use the first []; [?id=='of:0000debf17cff54b' && state=='ADDED'] works fine.
The leading [] flattens the array and turns the expression into a projection, so the filter that follows is applied to each individual flow object rather than to the array itself; filtering a non-array yields null, which is why you end up with an empty result.
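For reference, a working invocation could look like the following. Note the single quotes inside the expression: in JMESPath, double-quoted tokens are identifiers, not string literals, so the raw string literals use single quotes and the whole expression is wrapped in shell double quotes to keep them intact.
cat test | ./jp "[?id=='of:0000debf17cff54b' && state=='ADDED']"
This returns an array containing just the matching flow object from the sample above.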

Related

How to operate with a JSON list in CMake?

I have the following JSON which I'm trying to read in CMake.
{
  "demo": [
    {
      "name": "foo0",
      "url": "url1",
      "verify_ssl": true
    },
    {
      "name": "foo1",
      "url": "url1",
      "verify_ssl": true
    },
    {
      "name": "foo2",
      "url": "url2",
      "verify_ssl": true
    }
  ]
}
I'm trying to access a member of the list above, for example demo[0].name, without success. What am I doing wrong?
file(READ "${CONAN_CACHE}/demo.json" MY_JSON_STRING)
string(JSON CUR_NAME GET ${MY_JSON_STRING} demo[0].name)
Pass the path components one at a time:
string(JSON CUR_NAME GET ${MY_JSON_STRING} demo 0 name)
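If you need to walk the whole demo list rather than a single element, you can combine LENGTH and GET in a loop. A minimal sketch, assuming CMake 3.19+ (where string(JSON) was introduced) and the same ${CONAN_CACHE}/demo.json file from the question:
file(READ "${CONAN_CACHE}/demo.json" MY_JSON_STRING)
# Number of entries in the "demo" array, then read each one index by index.
string(JSON DEMO_COUNT LENGTH "${MY_JSON_STRING}" demo)
math(EXPR LAST_INDEX "${DEMO_COUNT} - 1")
foreach(IDX RANGE 0 ${LAST_INDEX})
  string(JSON CUR_NAME GET "${MY_JSON_STRING}" demo ${IDX} name)
  string(JSON CUR_URL GET "${MY_JSON_STRING}" demo ${IDX} url)
  message(STATUS "demo[${IDX}]: name=${CUR_NAME}, url=${CUR_URL}")
endforeach()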

Filtering a nested JSON for a particular field and returning an adjacent member in Ruby

I have an API response with the following structure:
{
  "id": "123342-123412",
  "data": [
    {
      "id": "ace123",
      "name": "Tom",
      "files": [
        {
          "color": "yellow",
          "file_id": "245"
        },
        {
          "color": "red",
          "file_id": "233"
        }
      ]
    },
    {
      "id": "asd123",
      "name": "Jerry",
      "files": [
        {
          "color": "red",
          "file_id": "210"
        },
        {
          "color": "green",
          "file_id": "221"
        }
      ]
    },
    {
      "id": "acs123",
      "name": "Barbie",
      "files": [
        {
          "color": "green",
          "file_id": "201"
        }
      ]
    }
  ]
}
I am new to Ruby. I want to extract all the file ids with the color red. What's a better way of doing it than iterating through the whole JSON using
data.each do |object|
  # individual element search code
end
I am using Ruby version 2.6.
The single-line version that comes to my mind is:
json[:data].map {|d| d[:files] }.flatten.select {|f| f[:color] == 'red' }.map {|f| f[:file_id] }
=> ["233", "210"]
But this iterates multiple times (once for every method call), not to mention it looks kind of cryptic to me.
Personally I would prefer a more verbose version, where it's clear how the values are obtained:
file_ids = []
json[:data].each do |data|
  data[:files].each do |file|
    next if file[:color] != 'red'
    file_ids << file[:file_id]
  end
end
file_ids.uniq # In case you have duplicates
But it's up to you which one to use.
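A middle ground (my sketch, not part of the original answer) is flat_map, which avoids the intermediate array created by map + flatten, still works on Ruby 2.6, and reads reasonably clearly:
# `json` is assumed to be the parsed API response with symbol keys,
# e.g. json = JSON.parse(response_body, symbolize_names: true)
red_file_ids = json[:data]
                 .flat_map { |d| d[:files] }          # one flat array of file hashes
                 .select { |f| f[:color] == 'red' }   # keep only the red ones
                 .map { |f| f[:file_id] }
red_file_ids # => ["233", "210"]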

How to get a sub-document listing with pagination under a main document using a Couchbase (N1QL) query

Can anyone help me with getting the sub-document list with pagination?
Here is a sample example:
{
  "accessories": [
    {
      "data": {
        "name": "TEST",
        "updated_at": "2020-03-27T16:16:20.818Z"
      },
      "id": "56e83ea1-042e-47e0-85f8-186189c37426"
    }
  ],
  "calibration_reports": [
    {
      "data": {
        "deleted_at": "",
        "frm27_equipment": [
          "test_cat1"
        ],
        "frm27_link": [
          "yes"
        ],
        "frm27_submit": null,
        "updated_at": "2020-03-30T10:24:52.703Z"
      },
      "id": "e4c8b1b4-7f37-46db-a49d-bca74482b968"
    },
    {
      "data": {
        "deleted_at": "",
        "frm27_equipment": [
          "test_cat1"
        ],
        "frm27_link": [
          "no"
        ],
        "frm27_submit": null,
        "updated_at": "2020-03-30T10:34:37.615Z"
      },
      "id": "445854d6-66bf-4e33-b620-05a5053119a8"
    }
  ]
}
Here I want to get the calibration_reports list with pagination. Is it possible using a Couchbase (N1QL) query?
If anyone knows the process for getting the list of results with pagination, please help me.
One possible way to go about this is to use UNNEST.
For instance:
SELECT calreports.id
FROM utpal u
UNNEST u.calibration_reports calreports
This would return something like:
[
  { "id": "aaa" },
  { "id": "bbb" },
  { "id": "ccc" },
  ... etc ...
]
And then you can use normal LIMIT/OFFSET for pagination, like so:
SELECT calreports.id
FROM utpal u
UNNEST u.calibration_reports calreports
LIMIT 50
OFFSET 150;
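One caveat worth adding (not part of the original answer): LIMIT/OFFSET pagination is only stable when the rows have a deterministic order, so in practice you would normally add an ORDER BY on some field of the unnested documents, for example (keeping the sample keyspace utpal from above):
SELECT calreports.id, calreports.data.updated_at
FROM utpal u
UNNEST u.calibration_reports calreports
ORDER BY calreports.data.updated_at
LIMIT 50 OFFSET 150;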

Change subelement with jq

I have a structure that looks like so
[
  [
    {
      "ID": "grp1-001",
    },
    {
      "ID": "grp1-002",
    },
    {
      "ID": "grp1-003",
    },
    {
      "ID": "grp1-004",
    },
    {
      "ID": "grp1-005",
    },
    {
      "ID": "grp1-006",
    }
  ],
  [
    {
      "ID": "grp2-001",
    },
    {
      "ID": "grp2-002",
    },
    {
      "ID": "grp2-003",
    },
    {
      "ID": "grp2-004",
    },
    {
      "ID": "grp2-005",
    },
    {
      "ID": "grp2-006",
    }
.......
What I need to get as a result of the modification is this:
[
  [
    ["1", "grp1-001"],
    ["2", "grp1-002"],
    ["3", "grp1-003"],
    ["4", "grp1-004"],
    ["5", "grp1-005"],
    ["6", "grp1-006"],
  ],
  [
    ["1", "grp2-001"],
    ["2", "grp2-002"],
    ["3", "grp2-003"],
    ["4", "grp2-004"],
    ["5", "grp2-005"],
    ["6", "grp2-006"],
  ],
Which means I need to keep the external structure (the outer array and the internal grouping) but convert each inner dict to an array and replace the "ID" key with a value (which will come from an external source, e.g. via --argjson). I am not even sure how to start, so any ideas/resources are highly appreciated.
Assuming you're just taking the objects and transforming them to pairs of the index in the array and the ID value, you could do this:
map([to_entries[] | [.key + 1, .value.ID | tostring]])
https://jqplay.org/s/RBac7SPfdG
Using to_entries/0 on an array gives you an array of key/value (index/value) pairs. You could then shift the indices by 1 and convert to strings.
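Since the question mentions that the replacement values may come from an external source via --argjson, the same filter can index into a passed-in array instead of using the position directly. A sketch, where the labels array and input.json are placeholders for whatever you actually pass in:
jq --argjson labels '["1", "2", "3", "4", "5", "6"]' \
   'map([to_entries[] | [$labels[.key], .value.ID]])' input.json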

How to get specified values from Ansible facts

I'm a little bit confused.
I already tried JMESPath, but it didn't help.
I need to get the high-level object which contains a specified object.
In the example below we see an Ansible fact about HDDs. I need to get the disk name (sdf, sdg or dm-0) of the HDD which contains a specified partition, e.g. sdf1.
I've got this JMESPath query:
msg.*.[partitions.sdf1]
but it just shows me everything inside sdf1. Filters like [?partitions=="sdf1"] don't work here,
so the question is: how do I preserve the whole sdf object in my example?
Thanks in advance!
{
  "msg": {
    "sdf": {
      "partitions": {
        "sdf1": {
          "holders": [],
          "links": {
            "ids": [
              "17101686F123-part1",
              "wwn-0x123456-part1"
            ]
          },
          "sectors": "1875380224"
        }
      },
      "removable": "0",
      "rotational": "0"
    },
    "sdg": {
      "partitions": {
        "sdg1": {
          "holders": [],
          "links": {
            "ids": [
              "164414123CEB-part1",
              "wwn-0x1233451234831ceb-part1"
            ]
          },
          "uuid": "F301-FA7F"
        }
      },
      "removable": "0"
    },
    "dm-0": {
      "holders": [],
      "host": "",
      "links": {
        "ids": [],
        "uuids": []
      },
      "vendor": null,
      "virtual": 1
    }
  }
}
This should work:
msg.*.{value: @, condition: partitions.sdf1}[?condition].value
Explanation:
First, we create a temporary object {value, condition} for each HDD object. condition will be null if partitions.sdf1 doesn't exist on the corresponding HDD object (sdf, sdg, dm-0). If partitions.sdf1 exists, condition will contain that object, that is:
"condition": {
"holders": [],
"links": {
"ids": [
"17101686F123-part1",
"wwn-0x123456-part1"
]
},
"sectors": "1875380224"
}
Using [?condition] we filter out all the objects whose condition is null. Finally, we extract the actual value using .value.
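As a usage illustration (my sketch, not part of the answer above), the same JMESPath expression can be run through the community.general.json_query filter in a playbook; hdd_facts is a hypothetical variable holding the {"msg": {...}} structure shown in the question:
# Prints the whole disk object that contains partition sdf1.
- name: Disk object that contains partition sdf1
  ansible.builtin.debug:
    msg: "{{ hdd_facts.msg | community.general.json_query('*.{value: @, condition: partitions.sdf1}[?condition].value') }}"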