I'm trying to output displayName from JSON entries where the same secrule element has both
"source": "0.0.0.0/0" and
"tcpOptions": {"destinationPortRange": {"min": 80}}
The result should display only
rule-1
Example JSON:
[
{
"displayName": "rule-1",
"secrule": [
{
"source": "0.0.0.0/0",
"tcpOptions": {
"destinationPortRange": {
"min": 80,
"max": 80
}
}
},
{
"source": "0.0.0.0/0",
"tcpOptions": {
"destinationPortRange": {
"min": 443,
"max": 443
}
}
}
]
},
{
"displayName": "rule-2",
"secrule": [
{
"source": "0.0.0.0/0",
"tcpOptions": {
"destinationPortRange": {
"min": 443,
"max": 443
}
}
},
{
"source": "20.0.0.0/0",
"tcpOptions": {
"destinationPortRange": {
"min": 80,
"max": 80
}
}
}
]
}
]
I have tried:
jq -r '.[] | select(.secrule[].source == "0.0.0.0/0" and .secrule[].tcpOptions.destinationPortRange.min == 80) | .displayName' JSON | sort -u
But it displays both rules, which is incorrect:
rule-1
rule-2
You're expanding .secrule twice, so every combination of its elements gets checked. Use any instead:
.[] | select(any(.secrule[]; .source=="0.0.0.0/0" and .tcpOptions.destinationPortRange.min==80)).displayName
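For reference, the full command (assuming the file is saved as rules.json) would be:
jq -r '.[] | select(any(.secrule[]; .source=="0.0.0.0/0" and .tcpOptions.destinationPortRange.min==80)).displayName' rules.json
The sort -u also becomes unnecessary, since each object is now tested exactly once.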
I'm trying to implement facets with a date range aggregation in the current version of Amazon Elasticsearch Service (version 7.10). The key I want the article documents grouped by is publishedAt, which is a date. I want one bucket where publishedAt is in the past, meaning the article is published; one where it is in the future, meaning it is scheduled; and one for all articles without a publishedAt, which are drafts. published and scheduled work as they should. For drafts I can't use a filter or date range, as the values are null, so I want to make use of the "Missing Values" feature, which should treat documents with publishedAt = null as if they had the date given in the missing field. Unfortunately it has no effect on the results, even if I change the missing date so that it falls into published or scheduled.
My request:
GET https://es.amazonaws.com/articles/_search
{
"size": 10,
"aggs": {
"facet_bucket_all": {
"aggs": {
"channel": {
"terms": {
"field": "channel.keyword",
"size": 5
}
},
"brand": {
"terms": {
"field": "brand.keyword",
"size": 5
}
},
"articleStatus": {
"date_range": {
"field": "publishedAt",
"format": "dd-MM-yyyy",
"missing": "01-07-1886",
"ranges": [
{ "key": "published", "from": "now-99y/M", "to": "now/M" },
{ "key": "scheduled", "from": "now+1s/M", "to": "now+99y/M" },
{ "key": "drafts", "from": "01-01-1886", "to": "31-12-1886" }
]
}
}
},
"filter": {
"bool": {
"must": []
}
}
},
"facet_bucket_publishedAt": {
"aggs": {},
"filter": {
"bool": {
"must": []
}
}
},
"facet_bucket_author": {
"aggs": {
"author": {
"terms": {
"field": "author",
"size": 10
}
}
},
"filter": {
"bool": {
"must": []
}
}
}
},
"query": {
"bool": {
"filter": [
{
"range": {
"publishedAt": {
"lte": "2021-08-09T09:52:19.975Z"
}
}
}
]
}
},
"from": 0,
"sort": [
{
"_score": "desc"
}
]
}
And in the result, the drafts are empty:
"articleStatus": {
"buckets": [
{
"key": "published",
"from": -1.496448E12,
"from_as_string": "01-08-1922",
"to": 1.627776E12,
"to_as_string": "01-08-2021",
"doc_count": 47920
},
{
"key": "scheduled",
"from": 1.627776E12,
"from_as_string": "01-08-2021",
"to": 4.7519136E12,
"to_as_string": "01-08-2120",
"doc_count": 3
},
{
"key": "drafts",
"from": 1.67252256E13,
"from_as_string": "01-01-1886",
"to": 1.67566752E13,
"to_as_string": "31-12-1886",
"doc_count": 0
}
]
}
SearchKit added this part to the query:
"query": {
"bool": {
"filter": [
{
"range": {
"publishedAt": {
"lte": "2021-08-09T09:52:19.975Z"
}
}
}
]
}
}
This had to be removed, because it filters out the null values before the missing parameter can do its job.
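With that filter removed, the query section reduces to a plain match-all; a minimal sketch:
"query": {
"match_all": {}
}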
Now I get the correct result:
"articleStatus": {
"buckets": [
{
"key": "drafts",
"from": -2.650752E12,
"from_as_string": "01-01-1886",
"to": -2.6193024E12,
"to_as_string": "31-12-1886",
"doc_count": 7
},
{
"key": "published",
"from": -1.496448E12,
"from_as_string": "01-08-1922",
"to": 1.627776E12,
"to_as_string": "01-08-2021",
"doc_count": 47920
},
{
"key": "scheduled",
"from": 1.627776E12,
"from_as_string": "01-08-2021",
"to": 4.7519136E12,
"to_as_string": "01-08-2120",
"doc_count": 3
}
]
}
I have a JSON file with the content below. I am trying to replace the serviceName string ca-visual-node with ca-visual-node-canary when the host name matches test4.analytics.io, using the jq utility. How can I get this done so that the JSON file ends up with the replaced string?
{
"apiVersion": "extensions/v1beta1",
"kind": "Ingress",
"metadata": {
"annotations": {
"alb.ingress.kubernetes.io/actions.ssl-redirect": "{\"Type\": \"redirect\", \"RedirectConfig\": { \"Protocol\": \"HTTPS\", \"Port\": \"443\", \"StatusCode\": \"HTTP_301\"}}"
},
"finalizers": [
"ingress.k8s.aws/resources"
],
"name": "ca-visual",
"namespace": "cloud-anaytics",
},
"spec": {
"rules": [
{
"host": "test.analytics.io",
"http": {
"paths": [
{
"backend": {
"serviceName": "ca-visual-play",
"servicePort": 9443
},
"path": "/apii/*",
"pathType": "ImplementationSpecific"
},
{
"backend": {
"serviceName": "ssl-redirect",
"servicePort": "use-annotation"
},
"path": "/*",
"pathType": "ImplementationSpecific"
},
{
"backend": {
"serviceName": "ca-visual-node",
"servicePort": 443
},
"path": "/*",
"pathType": "ImplementationSpecific"
}
]
}
},
{
"host": "test4.analytics.io",
"http": {
"paths": [
{
"backend": {
"serviceName": "ca-visual-play",
"servicePort": 9443
},
"path": "/apii/*",
"pathType": "ImplementationSpecific"
},
{
"backend": {
"serviceName": "ssl-redirect",
"servicePort": "use-annotation"
},
"path": "/*",
"pathType": "ImplementationSpecific"
},
{
"backend": {
"serviceName": "ca-visual-node",
"servicePort": 443
},
"path": "/*",
"pathType": "ImplementationSpecific"
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{
"hostname": "16b3-cloudanaytics.us-xxxx-1.elb.amazonaws.com"
}
]
}
}
}
Just walk the path from the root and use the update operator |= with the whole filter wrapped in parentheses:
(
.spec.rules[] |
select(.host == "test4.analytics.io")? |
.http.paths[].backend |
select(.serviceName == "ca-visual-node")? |
.serviceName
) |= "ca-visual-node-canary"
jqplay - demo
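Since jq has no in-place edit option, a common pattern (assuming the file is named ingress.json) is to write to a temporary file and move it over the original:
jq '(.spec.rules[] | select(.host == "test4.analytics.io")? | .http.paths[].backend | select(.serviceName == "ca-visual-node")? | .serviceName) |= "ca-visual-node-canary"' ingress.json > ingress.json.tmp && mv ingress.json.tmp ingress.json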
There is a table customer with a jsonb column named report that holds JSON documents.
The existing JSON in the report field is as follows:
{
"report": {
"operations-utilization-rightsizing": {
"default_settings": [{
"type": "%",
"radio": "false",
"range": {
"max": 100,
"min": 0
}
}]
}
}
}
Now I need to append or merge the JSON below into the report field in the customer table.
{
"operations-battery-critical-events": {
"default_settings": [{
"type": "%",
"radio": "false",
"range": {
"max": 100,
"min": 0
}
}]
}
}
I tried the following update statement:
UPDATE customer
SET report = report || '{
"operations-battery-critical-events": {
"default_settings": [{
"type": "%",
"radio": "false",
"range": {
"max": 100,
"min": 0
}
}]
}
}' :: jsonb
WHERE report IS NOT NULL;
The output of the above SQL is:
{
"report": {
"operations-utilization-rightsizing": {
"default_settings": [{
"type": "%",
"radio": "false",
"range": {
"max": 100,
"min": 0
}
}]
}
},
"operations-battery-critical-events": {
"default_settings": [{
"type": "%",
"radio": "false",
"range": {
"max": 100,
"min": 0
}
}]
}
}
And the desired output should be as below:
{
"report": {
"operations-utilization-rightsizing": {
"default_settings": [{
"type": "%",
"radio": "false",
"range": {
"max": 100,
"min": 0
}
}]
},
"operations-battery-critical-events": {
"default_settings": [{
"type": "%",
"radio": "false",
"range": {
"max": 100,
"min": 0
}
}]
}
}
}
I'm new to JSON; please let me know if any further details are needed.
This should work; just adjust the path in jsonb_set if your top-level key is ever named something other than report.
UPDATE customer
SET report = jsonb_set(report, '{report,operations-battery-critical-events}',
'{
"default_settings": [{
"type": "%",
"radio": "false",
"range": {
"max": 100,
"min": 0
}
}]
}'::jsonb)
WHERE report IS NOT NULL;
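For comparison, the || concatenation from the question would also work if applied one level down, inside the nested report object; a sketch:
UPDATE customer
SET report = jsonb_set(report, '{report}',
    (report -> 'report') || '{"operations-battery-critical-events": {"default_settings": [{"type": "%", "radio": "false", "range": {"max": 100, "min": 0}}]}}'::jsonb)
WHERE report IS NOT NULL;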
I have a big file named new_file.json which has several JSON documents in it, like:
{ "ResourceRecordSets": [ { "Name": "XYZ.", "Type": "mms", "TTL": 172800, "ResourceRecords": [ { "Value": "mms-1219.buydmms-24.org." }, { "Value": "mms-1606.buydmms-08.co.uk." }, { "Value": "mms-516.buydmms-00.net." }, { "Value": "mms-458.buydmms-57.com." } ] }, { "Name": "XYZ.", "Type": "SOA", "TTL": 900, "ResourceRecords": [ { "Value": "ABC.COM. 1 7200 900 1209600 86400" } ] }, { "Name": "bb.XYZ.", "Type": "CNAME", "SetIdentifier": "fix", "GeoLocation": { "ContinentCode": "EU" }, "TTL": 300, "ResourceRecords": [ { "Value": "abx.xyz.com" } ] }, { "Name": "bb.XYZ.", "Type": "CNAME", "SetIdentifier": "route to xms staging svc", "GeoLocation": { "CountryCode": "*" }, "TTL": 60, "ResourceRecords": [ { "Value": "xms-staging-xmssvc-1241009625.eu-west-1.elb.amazonbuy.com" } ] } ] }
{ "ResourceRecordSets": [ { "Name": "xyz.com.", "Type": "mms", "TTL": 172800, "ResourceRecords": [ { "Value": "mms-877.buydmms-45.net." }, { "Value": "mms-1168.buydmms-18.org." }, { "Value": "mms-375.buydmms-46.com." }, { "Value": "mms-1835.buydmms-37.co.uk." } ] }, { "Name": "xyz.com.", "Type": "SOA", "TTL": 900, "ResourceRecords": [ { "Value": "mms-877.buydmms-45.net. buydmms-taste.hurdle.com. 1 7200 900 1209600 86400" } ] }, { "Name": "prod.xyz.com.", "Type": "CNAME", "SetIdentifier": "pointing to finclub", "Weight": 1, "TTL": 300, "ResourceRecords": [ { "Value": "indiv-finclub.elb.amazonbuy.com" } ] }, { "Name": "prod.xyz.com.", "Type": "CNAME", "SetIdentifier": "pointing to symentic", "Weight": 99, "TTL": 300, "ResourceRecords": [ { "Value": "some.com" } ] } ] }
{ "ResourceRecordSets": [ { "Name": "fun.org.", "Type": "mms", "TTL": 172800, "ResourceRecords": [ { "Value": "mms-352.buydmms-44.com." }, { "Value": "mms-1131.buydmms-13.org." }, { "Value": "mms-591.buydmms-09.net." }, { "Value": "mms-1997.buydmms-57.co.uk." } ] }, { "Name": "fun.org.", "Type": "SOA", "TTL": 900, "ResourceRecords": [ { "Value": "mms-352.buydmms-44.com. buydmms-taste.hurdle.com. 1 7200 900 1209600 86400" } ] }, { "Name": "portal-junior.fun.org.", "Type": "CNAME", "TTL": 300, "ResourceRecords": [ { "Value": "portal.expressplay.com" } ] } ] }
{ "ResourceRecordSets": [ { "Name": "junior.fun.org.", "Type": "mms", "TTL": 172800, "ResourceRecords": [ { "Value": "mms-518.buydmms-00.net." }, { "Value": "mms-1447.buydmms-52.org." }, { "Value": "mms-499.buydmms-62.com." }, { "Value": "mms-1879.buydmms-42.co.uk." } ] }, { "Name": "junior.fun.org.", "Type": "SOA", "TTL": 900, "ResourceRecords": [ { "Value": "mms-518.buydmms-00.net. buydmms-taste.hurdle.com. 1 7200 900 1209600 86400" } ] }, { "Name": "db.junior.fun.org.", "Type": "CNAME", "TTL": 300, "ResourceRecords": [ { "Value": "xms16-ap.crds.hurdle.com" } ] }, { "Name": "junior.junior.fun.org.", "Type": "CNAME", "ResourceRecords": [ { "Value": "This resource record set includes an attribute that is ummsupported on this Route 53 endpoint. Please commsider using a newer endpoint or a tool that does so." } ], "TrafficPolicyImmstanceId": "17b76444-85c2-4ec5-a16d-8611fa05ca82" } ] }
{ "ResourceRecordSets": [ { "Name": "junior.myjuniordmms.org.", "Type": "mms", "TTL": 172800, "ResourceRecords": [ { "Value": "mms-455.buydmms-56.com." }, { "Value": "mms-1381.buydmms-44.org." }, { "Value": "mms-741.buydmms-28.net." }, { "Value": "mms-1992.buydmms-57.co.uk." } ] }, { "Name": "junior.myjuniordmms.org.", "Type": "SOA", "TTL": 900, "ResourceRecords": [ { "Value": "mms-455.buydmms-56.com. buydmms-taste.hurdle.com. 1 7200 900 1209600 86400" } ] } ] }
I want to turn this file into one single valid JSON document. Can it be done using jq or some other method in shell/bash?
Yes, you can.
command:
cat new_file.json | jq -s '.[0] * .[1]'
output:
{
"ResourceRecordSets": [
{
"Name": "xyz.com.",
"Type": "mms",
"TTL": 172800,
"ResourceRecords": [
{
"Value": "mms-877.buydmms-45.net."
},
{
"Value": "mms-1168.buydmms-18.org."
},
{
"Value": "mms-375.buydmms-46.com."
},
{
"Value": "mms-1835.buydmms-37.co.uk."
}
]
},
{
"Name": "xyz.com.",
"Type": "SOA",
"TTL": 900,
"ResourceRecords": [
{
"Value": "mms-877.buydmms-45.net. buydmms-taste.hurdle.com. 1 7200 900 1209600 86400"
}
]
},
{
"Name": "prod.xyz.com.",
"Type": "CNAME",
"SetIdentifier": "pointing to finclub",
"Weight": 1,
"TTL": 300,
"ResourceRecords": [
{
"Value": "indiv-finclub.elb.amazonbuy.com"
}
]
},
{
"Name": "prod.xyz.com.",
"Type": "CNAME",
"SetIdentifier": "pointing to symentic",
"Weight": 99,
"TTL": 300,
"ResourceRecords": [
{
"Value": "some.com"
}
]
}
]
}
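Note that -s (slurp) reads all the documents into a single array, but .[0] * .[1] only combines the first two, and * replaces arrays rather than concatenating them, which is why only the second document's records survive above. Assuming the goal is one object holding every record set, a sketch like this should work:
jq -s '{ResourceRecordSets: (map(.ResourceRecordSets) | add)}' new_file.json
Alternatively, jq -s '.' new_file.json simply wraps all the documents in one valid JSON array.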
It's possible to achieve the same using Unix/Linux utilities only, sed and paste:
$ cat new_file.json | sed '/^ *$/d' | paste -s -d, - | sed -E 's/(.*)/[\1]/'
The first sed removes all the blank lines,
paste concatenates all the input lines with a comma delimiter,
and the last sed puts square brackets around the result.
The output will be a valid JSON array.
I'm trying to apply a filter to line visualisations in Kibana 4.5.1. I have an index (xscore) with two different types (sd and sma); here is a sample:
{
"_index": "xscore",
"_type": "xscore",
"_id": "AVgAejjHwGMH9TPDlF04",
"_score": 1,
"_source": {
"id": "AVgAejjHwGMH9TPDlF04",
"value": 0.019607843137254926,
"timestamp": 1477476480000,
"minutes": 1,
"type": "sma"
}
}
I am trying to show the sum only for sma and the average only for sd by adding a filter in the JSON input box. However, I always get a search_phase_execution_exception. This is the query that Kibana sends to Elasticsearch:
{"query": {
"filtered": {
"query": {
"query_string": {
"analyze_wildcard": true,
"query": "*"
}
},
"filter": {
"bool": {
"must": [
{
"range": {
"timestamp": {
"gte": 1477436400000,
"lte": 1477522799999,
"format": "epoch_millis"
}
}
}
],
"must_not": [
]
}
}
}},"size": 0,"aggs": {
"3": {
"date_histogram": {
"field": "timestamp",
"interval": "30m",
"time_zone": "Europe\/London",
"min_doc_count": 1,
"extended_bounds": {
"min": 1477436400000,
"max": 1477522799999
}
},
"aggs": {
"4": {
"terms": {
"field": "type",
"size": 5,
"order": {
"1": "desc"
}
},
"aggs": {
"1": {
"avg": {
"field": "value"
}
},
"2": {
"sum": {
"field": "value",
"filter": {
"term": {
"type": "sma"
}
}
}
}
}
}
}
}
}
}
The problem is in the last area, I think, but I can't figure out what exactly is wrong.
Running the same query in ES returns the following error:
"shard": 0,
"index": "xscore",
"node": "mszD3Y_4T-aGNEkVtt4BCg",
"reason": {
"type": "search_parse_exception",
"reason": "Unexpected token START_OBJECT in [2]."
I'm using ES 2.3 and Kibana 4.5 on macOS 10.10.
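For reference, the likely culprit is the filter object inside the sum metric: Elasticsearch metric aggregations such as sum only accept parameters like field and script, so the extra filter key is what triggers "Unexpected token START_OBJECT in [2]" (the [2] being the name of that aggregation). A sketch of the usual workaround, wrapping the sum in a filter sub-aggregation (the inner aggregation name sma_sum is illustrative):
"2": {
"filter": {
"term": {
"type": "sma"
}
},
"aggs": {
"sma_sum": {
"sum": {
"field": "value"
}
}
}
}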