I am using an aws ecs query to get the list of properties used by the currently running task.
Command:
cft="aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b"
I am storing this in an output variable
output=$(eval $cft)
Output:
"tasks": [
{
"attachments": [
{
"id": "da8a1312-8278-46d5-8e3b-6b6a1d96f820",
"type": "ElasticNetworkInterface",
"status": "ATTACHED",
"details": [
{
"name": "subnetId",
"value": "subnet-0a151f2eb959ad4"
},
{
"name": "networkInterfaceId",
"value": "eni-081948e3666253f"
},
{
"name": "macAddress",
"value": "02:2a:9i:5c:4a:77"
},
{
"name": "privateDnsName",
"value": "ip-172-56-17-177.us-west-2.compute.internal"
},
{
"name": "privateIPv4Address",
"value": "172.56.17.177"
}
]
}
],
"availabilityZone": "us-west-2a",
"clusterArn": "arn:aws:ecs:us-west-2:4984314772:cluster/secrets",
"containers": [
{
"taskArn": "arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b",
"name": "nginx",
"image": "nginx",
"lastStatus": "PENDING",
"networkInterfaces": [
{
"attachmentId": "da8a1312-8278-46d5-6b6a1d96f820",
"privateIpv4Address": "172.31.17.176"
}
],
"healthStatus": "UNKNOWN",
"cpu": "0"
}
],
"cpu": "256",
"createdAt": "2020-12-10T18:00:16.320000+05:30",
"desiredStatus": "RUNNING",
"group": "family:nginx",
"healthStatus": "UNKNOWN",
"lastStatus": "PENDING",
"launchType": "FARGATE",
"memory": "512",
"overrides": {
"containerOverrides": [
{
"name": "nginx"
}
],
"inferenceAcceleratorOverrides": []
},
"platformVersion": "1.4.0",
"tags": [],
"taskArn": "arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b",
"taskDefinitionArn": "arn:aws:ecs:us-west-2:4984314772:task-definition/nginx:17",
"version": 2
}
],
"failures": []
}
Now if I do an echo of $output.tasks[0].containers[0], nothing happens; it just prints the entire thing again. I want to store the result in the output variable and then reference individual parameters the way we do in JSON.
You will need to use a JSON parser such as jq, like so:
eval $cft | jq '.tasks[].containers[]'
To avoid using eval you could simply pipe the aws command into jq, like so:
aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b | jq '.tasks[].containers[]'
or:
cft=$(aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b)
echo "$cft" | jq '.tasks[].containers[]'
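Once the JSON is captured in a variable, you can pull out individual fields with jq expressions instead of shell dot-notation. A minimal sketch, using field names taken from the output above:
output=$(aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b)
# -r prints raw strings without the surrounding JSON quotes
echo "$output" | jq -r '.tasks[0].containers[0].name'   # nginx
echo "$output" | jq -r '.tasks[0].lastStatus'           # PENDING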
Info
I have a terraform state file (json) with some deprecated attributes.
I would like to remove these deprecated attributes.
I tried to use jq with select() and del(), but did not succeed in getting back my full JSON without the deprecated attribute timeouts.
Problem
How do I get my full JSON back without the timeouts attribute, but only for resources of type google_dns_record_set?
Data
{
"version": 4,
"terraform_version": "1.0.6",
"serial": 635,
"lineage": "6a9c2392-fdae-2b54-adcc-7366f262ffa4",
"outputs": {"test":"test1"},
"resources": [
{
"module": "module.resources",
"mode": "data",
"type": "google_client_config"
},
{
"module": "module.xxx.module.module1[\"cluster\"]",
"mode": "managed",
"type": "google_dns_record_set",
"name": "public_ip_ic_dns",
"provider": "module.xxx.provider[\"registry.terraform.io/hashicorp/google\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "projects/xxx-xxx/managedZones/xxx--public/rrsets/*.net1.cluster.xxx--public.net.com./A",
"managed_zone": "xxx--public",
"name": "*.net1.cluster.xxx--public.net.com.",
"project": "xxx-xxx",
"rrdatas": [
"11.22.33.44"
],
"timeouts": null,
"ttl": 300,
"type": "A"
},
"sensitive_attributes": [],
"private": "xxx",
"dependencies": [
"xxx"
]
}
]
}
]
}
Command
jq -r '.resources[] | select(.type=="google_dns_record_set").instances[].attributes | del(.timeouts)' data.json
Pull the del call up front so that the whole selection becomes its path argument:
del(.resources[] | select(.type=="google_dns_record_set").instances[].attributes.timeouts)
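Applied to the file from the question, the full invocation would look along these lines:
jq 'del(.resources[] | select(.type=="google_dns_record_set").instances[].attributes.timeouts)' data.json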
I want to filter out two types in the query expression
JSON file:
[
{
"name": "name0",
"tags": {
"env": "dev"
},
"type": "Microsoft.OperationsManagement/solutions"
},
{
"name": "name1",
"tags": {
"env": "dev"
},
"type": "Microsoft.Web/sites"
},
{
"name": "name2",
"tags": {
"env": "dev"
},
"type": "Microsoft.Web/serverFarms"
},
{
"name": "name4",
"tags": null,
"type": "Microsoft.Network/privateDnsZones/virtualNetworkLinks"
}
]
Expression
az resource list --resource-group MYRG --query '[? type != `"Microsoft.OperationsManagement/solutions"` && `"Microsoft.Network/privateDnsZones/virtualNetworkLinks"`]'
But the result still includes the second type:
[
{
"name": "name1",
"tags": {
"env": "dev"
},
"type": "Microsoft.Web/sites"
},
{
"name": "name2",
"tags": {
"env": "dev"
},
"type": "Microsoft.Web/serverFarms"
},
{
"name": "name4",
"tags": null,
"type": "Microsoft.Network/privateDnsZones/virtualNetworkLinks"
}
]
I have tried other expressions but no luck. Could someone tell me what exactly I am missing?
You can try adding another type != after &&:
az resource list --resource-group MYRG --query '[? type != `"Microsoft.OperationsManagement/solutions"` && type != `"Microsoft.Network/privateDnsZones/virtualNetworkLinks"`]'
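If you prefer a single condition, JMESPath's contains() function can test membership in a literal list. A sketch of that form (not part of the original answer, and untested against a live subscription):
az resource list --resource-group MYRG --query '[?!contains(`["Microsoft.OperationsManagement/solutions","Microsoft.Network/privateDnsZones/virtualNetworkLinks"]`, type)]'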
You can also use an array slice to select elements by position, which here happens to exclude "Microsoft.OperationsManagement/solutions" and "Microsoft.Network/privateDnsZones/virtualNetworkLinks" (note this relies on the order of the results rather than on their type):
az resource list --resource-group MYRG --query '[1:3].type'
You can also refer to How to query Azure resources using the Azure CLI and JMESPath queries.
I have a deeply nested JSON document. Sometimes I need to look up the JSON path for a key containing a certain word.
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2019-03-28T21:09:42Z",
"labels": {
"bu": "finance",
"env": "prod"
},
"name": "auth",
"namespace": "default",
"resourceVersion": "2786",
"selfLink": "/api/v1/namespaces/default/pods/auth",
"uid": "ce73565a-519d-11e9-bcb7-0242ac110009"
},
"spec": {
"containers": [
{
"command": [
"sleep",
"4800"
],
"image": "busybox",
"imagePullPolicy": "Always",
"name": "busybox",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-dbpcm",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"nodeName": "node01",
"priority": 0,
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"tolerationSeconds": 300
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"tolerationSeconds": 300
}
],
"volumes": [
{
"name": "default-token-dbpcm",
"secret": {
"defaultMode": 420,
"secretName": "default-token-dbpcm"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2019-03-28T21:09:42Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-03-28T21:09:50Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": null,
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-03-28T21:09:42Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://b5be8275555ad70939401d658bb4e504b52215b70618ad43c2d0d02c35e1ae27",
"image": "busybox:latest",
"imageID": "docker-pullable://busybox#sha256:061ca9704a714ee3e8b80523ec720c64f6209ad3f97c0ff7cb9ec7d19f15149f",
"lastState": {},
"name": "busybox",
"ready": true,
"restartCount": 0,
"state": {
"running": {
"startedAt": "2019-03-28T21:09:49Z"
}
}
}
],
"hostIP": "172.17.0.37",
"phase": "Running",
"podIP": "10.32.0.4",
"qosClass": "BestEffort",
"startTime": "2019-03-28T21:09:42Z"
}
}
Currently, if I need the podIP, I do it this way: find the object which has the search keyword, then build the path.
curl myson | jq "[paths]" | grep "IP" --context=10
Is there any nice shortcut to simplify this? What I really need is all the paths which could have the matching key:
spec.podIP
spec.hostIP
Select paths containing the keyword in their last element, and use join(".") to generate your desired output.
paths
| select(.[-1] | type == "string" and contains("keyword"))
| join(".")
.[-1] returns the last element of an array.
type == "string" is required because array indices are numbers, and numbers can't be tested for substring containment.
You may want to specify the -r option.
As @JeffMercado implicitly suggested, you can set the query from the command line without touching the script:
jq -r 'paths
| select(.[-1] | type == "string" and contains($q))
| join(".")' file.json --arg q 'keyword'
You can stream the input in, which provides paths and values. You could then inspect the paths and optionally output the values.
$ jq --stream --arg pattern 'IP' '
select(length == 2 and any(.[0][] | strings; test($pattern)))
| "\(.[0] | join(".")): \(.[1])"
' input.json
"status.hostIP: 172.17.0.37"
"status.podIP: 10.32.0.4"
Shameless plug:
https://github.com/TomConlin/json_to_paths
because sometimes you do not even know which component you want to filter for until you see what is there.
json2jqpath.jq file.json
.
.apiVersion
.kind
.metadata
.metadata|.creationTimestamp
.metadata|.labels
.metadata|.labels|.bu
.metadata|.labels|.env
.metadata|.name
.metadata|.namespace
.metadata|.resourceVersion
.metadata|.selfLink
.metadata|.uid
.spec
.spec|.containers
.spec|.containers|.[]
.spec|.containers|.[]|.command
.spec|.containers|.[]|.command|.[]
.spec|.containers|.[]|.image
.spec|.containers|.[]|.imagePullPolicy
.spec|.containers|.[]|.name
.spec|.containers|.[]|.resources
.spec|.containers|.[]|.terminationMessagePath
.spec|.containers|.[]|.terminationMessagePolicy
.spec|.containers|.[]|.volumeMounts
.spec|.containers|.[]|.volumeMounts|.[]
.spec|.containers|.[]|.volumeMounts|.[]|.mountPath
.spec|.containers|.[]|.volumeMounts|.[]|.name
.spec|.containers|.[]|.volumeMounts|.[]|.readOnly
.spec|.dnsPolicy
.spec|.nodeName
.spec|.priority
.spec|.restartPolicy
.spec|.schedulerName
.spec|.securityContext
.spec|.serviceAccount
.spec|.serviceAccountName
.spec|.terminationGracePeriodSeconds
.spec|.tolerations
.spec|.tolerations|.[]
.spec|.tolerations|.[]|.effect
.spec|.tolerations|.[]|.key
.spec|.tolerations|.[]|.operator
.spec|.tolerations|.[]|.tolerationSeconds
.spec|.volumes
.spec|.volumes|.[]
.spec|.volumes|.[]|.name
.spec|.volumes|.[]|.secret
.spec|.volumes|.[]|.secret|.defaultMode
.spec|.volumes|.[]|.secret|.secretName
.status
.status|.conditions
.status|.conditions|.[]
.status|.conditions|.[]|.lastProbeTime
.status|.conditions|.[]|.lastTransitionTime
.status|.conditions|.[]|.status
.status|.conditions|.[]|.type
.status|.containerStatuses
.status|.containerStatuses|.[]
.status|.containerStatuses|.[]|.containerID
.status|.containerStatuses|.[]|.image
.status|.containerStatuses|.[]|.imageID
.status|.containerStatuses|.[]|.lastState
.status|.containerStatuses|.[]|.name
.status|.containerStatuses|.[]|.ready
.status|.containerStatuses|.[]|.restartCount
.status|.containerStatuses|.[]|.state
.status|.containerStatuses|.[]|.state|.running
.status|.containerStatuses|.[]|.state|.running|.startedAt
.status|.hostIP
.status|.phase
.status|.podIP
.status|.qosClass
.status|.startTime
I have a ksh script that retrieves (using curl) a JSON file similar to the one below:
{
"Type1": {
"dev": {
"server": [
{ "group": "APP1", "name": "DAPP1002", "ip": "10.1.1.1" },
{ "group": "APP2", "name": "DAPP2001", "ip": "10.1.1.2" }
]
},
"qa": {
"server": [
{ "group": "APP1", "name": "QAPP1002", "ip": "10.1.2.1" },
{ "group": "APP2", "name": "QAPP2001", "ip": "10.1.2.2" }
]
},
"prod": {
"proxy": "type1.prod.proxy.mydomain.com",
"server": [
{ "group": "APP1", "name": "PAPP1001", "ip": "10.1.3.1" },
{ "group": "APP1", "name": "PAPP1002", "ip": "10.1.3.2" },
{ "group": "APP2", "name": "PAPP2001", "ip": "10.1.3.3" }
]
}
},
"Type2": {
"dev": {
"server": [
{ "group": "APP8", "name": "DAPP8002", "ip": "10.2.1.1" },
{ "group": "APP9", "name": "DAPP9001", "ip": "10.2.1.2" }
]
},
"qa": {
"server": [
{ "group": "APP8", "name": "QAPP8002", "ip": "10.2.2.1" },
{ "group": "APP9", "name": "QAPP9001", "ip": "10.2.2.2" }
]
},
"prod": {
"proxy": "type2.prod.proxy.mydomain.com",
"server": [
{ "group": "APP8", "name": "PAPP8001", "ip": "10.2.3.1" },
{ "group": "APP9", "name": "PAPP9001", "ip": "10.2.3.2" },
{ "group": "APP9", "name": "PAPP9002", "ip": "10.2.3.3" }
]
}
}
}
Based on a server name (field "name"), I need to collect the following info to pass to a function:
"Type", "name", "ip", "proxy"
(Note that the "proxy" info is optional)
I am new to JSON, and I am trying to filter this with jq, but so far I am out of luck.
What I have accomplished so far is the following jq query, when searching for "PAPP9001":
jq '.[] | .[] | select(.server[].name=="PAPP9001") | .proxy as $proxy | .server[] | {proxy: $proxy, name: .name, ip: .ip} | select(.name=="PAPP9001")' curlreturn.json
which returns:
{
"proxy": "type2.prod.proxy.mydomain.com",
"name": "PAPP9001",
"ip": "10.2.3.2"
}
But:
I could not get the "Type" info at the top level.
Considering the number of pipes and the two selects, I doubt this is the most efficient way.
One way to retrieve the key names programmatically is using to_entries. For example, given your input, this jq filter:
to_entries[]
| .key as $type
| .value[]
| .proxy as $proxy
| .server[]
| select(.name == "PAPP9001")
| { Type: $type, name, ip, proxy: $proxy }
yields:
{
"Type": "Type2",
"name": "PAPP9001",
"ip": "10.2.3.2",
"proxy": "type2.prod.proxy.mydomain.com"
}
Variations
If, for example, you wanted these four fields as a CSV row, then you could replace the last line of the filter above with:
| [$type, .name, .ip, $proxy] | @csv
Use the -r option to get raw CSV output; see the jq manual for details on @csv and string interpolation.
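If the server name varies, you could pass it in from the shell instead of hard-coding it. A sketch along the same lines as the filter above (the --arg variable name is arbitrary):
jq --arg name "PAPP9001" '
  to_entries[]
  | .key as $type
  | .value[]
  | .proxy as $proxy
  | .server[]
  | select(.name == $name)
  | { Type: $type, name, ip, proxy: $proxy }
' curlreturn.json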
I'm pretty new to jq and wanted to use it to update an AWS ECS task definition with a new value. The AWS CLI returns the following JSON response, and I would like to set the value of the object whose name property is CONFIG_URL to "this is atest".
{
"family": "contentpublishing-task",
"volumes": [],
"containerDefinitions": [
{
"environment": [
{
"name": "TEST_ENV",
"value": "TEST"
},
{
"name": "CONFIG_URL",
"value": "s3://stg-appcfg/config-20160729-1130.json"
}
],
"name": "contentpublishing",
"mountPoints": [],
"image": "contentpublishing:blah",
"cpu": 512,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 8081,
"hostPort": 8080
}
],
"memory": 256,
"essential": true,
"volumesFrom": []
}
]
}
I tried the following query:
cat test.json | jq 'select(.containerDefinitions[0].environment[].name=="CONFIG_URL").value|="this is atest"' 2>&1
But the following has been returned. As you can see, an additional value key has been added to the outermost JSON object.
{
"family": "contentpublishing-task",
"volumes": [],
"containerDefinitions": [
{
"environment": [
{
"name": "TEST_ENV",
"value": "TEST"
},
{
"name": "CONFIG_URL",
"value": "s3://stg-appcfg/config-20160729-1130.json"
}
],
"name": "contentpublishing",
"mountPoints": [],
"image": "contentpublishing:blah",
"cpu": 512,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 8081,
"hostPort": 8080
}
],
"memory": 256,
"essential": true,
"volumesFrom": []
}
],
"value": "this is atest"
}
You have to select the corresponding environment node first, before setting the value. Your query doesn't change the context, so it is still at the root item, and you end up adding the new value to the root.
$ jq --arg update_name "CONFIG_URL" --arg update_value "this is a test" \
'(.containerDefinitions[].environment[] | select(.name == $update_name)).value = $update_value' input.json
Here is a solution which uses jq's complex assignments:
(
.containerDefinitions[]
| .environment[]
| select(.name == "CONFIG_URL")
| .value
) |= "this is atest"