I'm pretty new to jq and want to use it to update an AWS ECS task definition with a new value. The AWS CLI returns the following JSON response, and I would like to modify the object whose name property is CONFIG_URL so that its value becomes "this is atest".
{
"family": "contentpublishing-task",
"volumes": [],
"containerDefinitions": [
{
"environment": [
{
"name": "TEST_ENV",
"value": "TEST"
},
{
"name": "CONFIG_URL",
"value": "s3://stg-appcfg/config-20160729-1130.json"
}
],
"name": "contentpublishing",
"mountPoints": [],
"image": "contentpublishing:blah",
"cpu": 512,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 8081,
"hostPort": 8080
}
],
"memory": 256,
"essential": true,
"volumesFrom": []
}
]
}
I tried the following query:
cat test.json | jq 'select(.containerDefinitions[0].environment[].name=="CONFIG_URL").value|="this is atest"' 2>&1
But the following was returned. As you can see, an additional value key has been added to the outermost JSON object.
{
"family": "contentpublishing-task",
"volumes": [],
"containerDefinitions": [
{
"environment": [
{
"name": "TEST_ENV",
"value": "TEST"
},
{
"name": "CONFIG_URL",
"value": "s3://stg-appcfg/config-20160729-1130.json"
}
],
"name": "contentpublishing",
"mountPoints": [],
"image": "contentpublishing:blah",
"cpu": 512,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 8081,
"hostPort": 8080
}
],
"memory": 256,
"essential": true,
"volumesFrom": []
}
],
"value": "this is atest"
}
You have to select the corresponding environment node before setting the value. Your query doesn't change the context, so it is still the root object, and you end up adding the new value key to the root.
$ jq --arg update_name "CONFIG_URL" --arg update_value "this is a test" \
'(.containerDefinitions[].environment[] | select(.name == $update_name)).value = $update_value' input.json
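If you're starting from the CLI response saved in test.json (as in the question), you can write the result to a new file and read the value back to confirm the update; a minimal sketch, where the output file name is my own choice:
jq --arg update_name "CONFIG_URL" --arg update_value "this is a test" \
   '(.containerDefinitions[].environment[] | select(.name == $update_name)).value = $update_value' \
   test.json > test.updated.json
jq -r '.containerDefinitions[].environment[] | select(.name == "CONFIG_URL").value' test.updated.json
# prints: this is a test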
Here is a solution which uses jq's complex assignments:
(
.containerDefinitions[]
| .environment[]
| select(.name == "CONFIG_URL")
| .value
) |= "this is atest"
Related
I'm trying to merge two JSON files. The main goal is to overwrite environment variables in the 1st file with the environment variables from the 2nd.
1st file:
{
"containerDefinitions": [
{
"name": "foo",
"image": "nginx:latest",
"cpu": 1024,
"memory": 4096,
"memoryReservation": 2048,
"portMappings": [
{
"containerPort": 8080,
"hostPort": 0,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "SERVER_PORT",
"value": "8080"
},
{
"name": "DB_NAME",
"value": "example_db"
}
],
"mountPoints": [],
"volumesFrom": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/dev/ecs/example",
"awslogs-region": "us-west-1",
"awslogs-stream-prefix": "ecs"
}
}
}
],
"family": "bar",
"taskRoleArn": "arn:aws:iam::111111111111:role/assume-ecs-role",
"executionRoleArn": "arn:aws:iam::111111111111:role/ecs-task-execution-role",
"networkMode": "bridge",
"volumes": [],
"placementConstraints": [],
"requiresCompatibilities": [
"EC2"
]
}
2nd file:
{
"containerDefinitions": [
{
"environment": [
{
"name": "SERVER_PORT",
"value": "8081"
}
]
}
]
}
The expected result is:
{
"containerDefinitions": [
{
"name": "foo",
"image": "nginx:latest",
"cpu": 1024,
"memory": 4096,
"memoryReservation": 2048,
"portMappings": [
{
"containerPort": 8080,
"hostPort": 0,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "SERVER_PORT",
"value": "8081"
},
{
"name": "DB_NAME",
"value": "example_db"
}
],
"mountPoints": [],
"volumesFrom": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/dev/ecs/example",
"awslogs-region": "us-west-1",
"awslogs-stream-prefix": "ecs"
}
}
}
],
"family": "bar",
"taskRoleArn": "arn:aws:iam::111111111111:role/assume-ecs-role",
"executionRoleArn": "arn:aws:iam::111111111111:role/ecs-task-execution-role",
"networkMode": "bridge",
"volumes": [],
"placementConstraints": [],
"requiresCompatibilities": [
"EC2"
]
}
I tried the following:
jq -s 'reduce .[] as $item ({}; reduce ($item | keys_unsorted[]) as $key (.; $item[$key] as $val | ($val | type) as $type | .[$key] = if ($type == "array") then (.[$key] + $val | unique) elif ($type == "object") then (.[$key] + $val) else $val end))' 1.json 2.json
But the result is:
{
"containerDefinitions": [
{
"name": "foo",
"image": "nginx:latest",
"cpu": 1024,
"memory": 4096,
"memoryReservation": 2048,
"portMappings": [
{
"containerPort": 8080,
"hostPort": 0,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "SERVER_PORT",
"value": "8080"
},
{
"name": "DB_NAME",
"value": "example_db"
}
],
"mountPoints": [],
"volumesFrom": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/dev/ecs/example",
"awslogs-region": "us-west-1",
"awslogs-stream-prefix": "ecs"
}
}
},
{
"environment": [
{
"name": "SERVER_PORT",
"value": "8081"
}
]
}
],
"family": "bar",
"taskRoleArn": "arn:aws:iam::111111111111:role/assume-ecs-role",
"executionRoleArn": "arn:aws:iam::111111111111:role/ecs-task-execution-role",
"networkMode": "bridge",
"volumes": [],
"placementConstraints": [],
"requiresCompatibilities": [
"EC2"
]
}
Could anyone help me figure out how to achieve the right result?
Something like this will do the trick:
(input | .containerDefinitions[0].environment | from_entries) as $new_env
| input | .containerDefinitions[].environment |= ((from_entries + $new_env) | to_entries)
In case it's unclear, the invocation should look like so:
jq -n '...' 2.json 1.json
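Spelled out in full, with the filter passed inline (2.json must come first, since the first input call reads it):
jq -n '(input | .containerDefinitions[0].environment | from_entries) as $new_env
       | input | .containerDefinitions[].environment |= ((from_entries + $new_env) | to_entries)' \
   2.json 1.json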
Here's a solution using tostream and has(1) to read the values from the second file, and setpath to set them in the first file:
jq 'reduce (input | tostream | select(has(1))) as $i (.; setpath($i[0]; $i[1]))' \
1.json 2.json
When providing the files in reversed order (2.json 1.json), the context . and input have to be swapped:
jq 'reduce (tostream | select(has(1))) as $i (input; setpath($i[0]; $i[1]))' \
2.json 1.json
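To see why select(has(1)) keeps exactly the leaf values, it helps to look at what tostream emits for a tiny input; this is a toy example of mine, not taken from the files above:
$ jq -nc '{"a":{"b":1}} | tostream'
[["a","b"],1]
[["a","b"]]
[["a"]]
Only the first event has a second element (the value), so has(1) selects it and drops the closing events.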
I am using an aws ecs query to get the list of properties used by the currently running task.
Command:
cft="aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b"
I am storing this in an output variable
output= $( eval $cft)
Output:
"tasks": [
{
"attachments": [
{
"id": "da8a1312-8278-46d5-8e3b-6b6a1d96f820",
"type": "ElasticNetworkInterface",
"status": "ATTACHED",
"details": [
{
"name": "subnetId",
"value": "subnet-0a151f2eb959ad4"
},
{
"name": "networkInterfaceId",
"value": "eni-081948e3666253f"
},
{
"name": "macAddress",
"value": "02:2a:9i:5c:4a:77"
},
{
"name": "privateDnsName",
"value": "ip-172-56-17-177.us-west-2.compute.internal"
},
{
"name": "privateIPv4Address",
"value": "172.56.17.177"
}
]
}
],
"availabilityZone": "us-west-2a",
"clusterArn": "arn:aws:ecs:us-west-2:4984314772:cluster/secrets",
"containers": [
{
"taskArn": "arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b",
"name": "nginx",
"image": "nginx",
"lastStatus": "PENDING",
"networkInterfaces": [
{
"attachmentId": "da8a1312-8278-46d5-6b6a1d96f820",
"privateIpv4Address": "172.31.17.176"
}
],
"healthStatus": "UNKNOWN",
"cpu": "0"
}
],
"cpu": "256",
"createdAt": "2020-12-10T18:00:16.320000+05:30",
"desiredStatus": "RUNNING",
"group": "family:nginx",
"healthStatus": "UNKNOWN",
"lastStatus": "PENDING",
"launchType": "FARGATE",
"memory": "512",
"overrides": {
"containerOverrides": [
{
"name": "nginx"
}
],
"inferenceAcceleratorOverrides": []
},
"platformVersion": "1.4.0",
"tags": [],
"taskArn": "arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b",
"taskDefinitionArn": "arn:aws:ecs:us-west-2:4984314772:task-definition/nginx:17",
"version": 2
}
],
"failures": []
}
Now if I do an echo of $output.tasks[0].containers[0], it doesn't work; it just prints the entire thing again. I want to store the result in the output variable and reference individual parameters the way we do with JSON.
You will need to use a JSON parser such as jq, like so:
eval $cft | jq '.tasks[].containers[]'
To avoid using eval you could simply pipe the aws command into jq, like so:
aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b | jq '.tasks[].containers[]'
or:
cft=$(aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b)
echo "$cft" | jq '.tasks[].containers[]'
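Once the variable holds the raw response, individual fields can be pulled out with jq -r (raw output); for example, using paths that exist in the response above:
echo "$cft" | jq -r '.tasks[0].containers[0].lastStatus'
echo "$cft" | jq -r '.tasks[0].containers[0].networkInterfaces[0].privateIpv4Address'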
I have the below JSON in a variable named TASK_DEFINTIION.
It has a \r character at the end of "image": "700707367057.dkr.ecr.us-east-1.amazonaws.com/php-demo:feature-feature01\r" under containerDefinitions.
I am using TASK_DEFINITION_AFTER=$(echo $TASK_DEFINTIION | sed "s/\\r//g") to remove the \r, but it seems it removes all the hidden carriage returns while not removing the one that is visible as a regular character.
Any help would be highly appreciated.
{
"memory": "1024",
"networkMode": "awsvpc",
"family": "ecs-php-demo",
"placementConstraints": [],
"cpu": "512",
"executionRoleArn": "arn:aws:iam::700707367057:role/ecsTaskExecutionRole",
"volumes": [],
"requiresCompatibilities": [
"FARGATE"
],
"taskRoleArn": "arn:aws:iam::700707367057:role/ecsTaskExecutionRole",
"containerDefinitions": [
{
"memoryReservation": 256,
"environment": [],
"name": "ecs-php-demo",
"mountPoints": [],
"image": "700707367057.dkr.ecr.us-east-1.amazonaws.com/php-demo:feature-feature01\r",
"cpu": 0,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 8080,
"hostPort": 8080
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs",
"awslogs-group": "/ecs/ecs-php-demo"
}
},
"essential": true,
"volumesFrom": []
}
]
}
Using jq's rtrimstr to stay conformant with the JSON syntax:
#!/usr/bin/bash
TASK_DEFINTIION="$(
jq '.containerDefinitions[].image|=rtrimstr("\r")' <<<"$TASK_DEFINTIION"
)"
echo "$TASK_DEFINTIION"
man jq:
rtrimstr(str)
Outputs its input with the given suffix string removed, if it ends with it.
jq '[.[]|rtrimstr("foo")]'
["fo", "foo", "barfoo", "foobar", "foob"]
=> ["fo","","bar","foobar","foob"]
In your command, the first \ escapes the second \, so sed sees only one \.
You need:
TASK_DEFINITION_AFTER="$(echo $TASK_DEFINTIION | sed "s/\\\\r//g")"
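To see the difference between the two escapings (assuming GNU sed, where \r in a regex matches a carriage return), compare the following; printf '...\\r\n' produces a literal backslash followed by r:
printf 'feature01\\r\n' | sed "s/\\r//g"      # sed sees s/\r//g: matches a real CR, so the literal \r survives
printf 'feature01\\r\n' | sed "s/\\\\r//g"    # sed sees s/\\r//g: matches backslash + r, which gets removed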
You can use jq to replace the key's value:
jq '.containerDefinitions[].image="700707367057.dkr.ecr.us-east-1.amazonaws.com/php-demo:feature-feature01"' file.json
But unfortunately, jq does not support in-place editing, so you must redirect to a temporary file first and then replace your original file with it, or use the sponge utility from the moreutils package, like this:
jq '.containerDefinitions[].image="700707367057.dkr.ecr.us-east-1.amazonaws.com/php-demo:feature-feature01"' file.json|sponge file.json
A pure jq solution that removes the \r with gsub:
jq '.containerDefinitions[].image|=gsub("[\r]"; "")' file.json|sponge file.json
Sample output:
{
"memory": "1024",
"networkMode": "awsvpc",
"family": "ecs-php-demo",
"placementConstraints": [],
"cpu": "512",
"executionRoleArn": "arn:aws:iam::700707367057:role/ecsTaskExecutionRole",
"volumes": [],
"requiresCompatibilities": [
"FARGATE"
],
"taskRoleArn": "arn:aws:iam::700707367057:role/ecsTaskExecutionRole",
"containerDefinitions": [
{
"memoryReservation": 256,
"environment": [],
"name": "ecs-php-demo",
"mountPoints": [],
"image": "700707367057.dkr.ecr.us-east-1.amazonaws.com/php-demo:feature-feature01",
"cpu": 0,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 8080,
"hostPort": 8080
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs",
"awslogs-group": "/ecs/ecs-php-demo"
}
},
"essential": true,
"volumesFrom": []
}
]
}
I have a deeply nested JSON. Sometimes I need to look up the JSON path for a key containing a certain word.
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2019-03-28T21:09:42Z",
"labels": {
"bu": "finance",
"env": "prod"
},
"name": "auth",
"namespace": "default",
"resourceVersion": "2786",
"selfLink": "/api/v1/namespaces/default/pods/auth",
"uid": "ce73565a-519d-11e9-bcb7-0242ac110009"
},
"spec": {
"containers": [
{
"command": [
"sleep",
"4800"
],
"image": "busybox",
"imagePullPolicy": "Always",
"name": "busybox",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-dbpcm",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"nodeName": "node01",
"priority": 0,
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"tolerationSeconds": 300
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"tolerationSeconds": 300
}
],
"volumes": [
{
"name": "default-token-dbpcm",
"secret": {
"defaultMode": 420,
"secretName": "default-token-dbpcm"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2019-03-28T21:09:42Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-03-28T21:09:50Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": null,
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-03-28T21:09:42Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://b5be8275555ad70939401d658bb4e504b52215b70618ad43c2d0d02c35e1ae27",
"image": "busybox:latest",
"imageID": "docker-pullable://busybox#sha256:061ca9704a714ee3e8b80523ec720c64f6209ad3f97c0ff7cb9ec7d19f15149f",
"lastState": {},
"name": "busybox",
"ready": true,
"restartCount": 0,
"state": {
"running": {
"startedAt": "2019-03-28T21:09:49Z"
}
}
}
],
"hostIP": "172.17.0.37",
"phase": "Running",
"podIP": "10.32.0.4",
"qosClass": "BestEffort",
"startTime": "2019-03-28T21:09:42Z"
}
}
Currently, if I need the podIP, I do it this way: find the object which has the search keyword, then build the path from it.
curl myson | jq "[paths]" | grep "IP" --context=10
Is there any nice shortcut to simplify this? What I really need is all the paths which could have the matching key:
spec.podIP
spec.hostIP
Select paths containing the keyword in their last element, and use join(".") to generate your desired output:
paths
| select(.[-1] | type == "string" and contains("keyword"))
| join(".")
.[-1] returns the last element of an array.
type == "string" is required because an array index is a number, and numbers can't be checked for string containment.
You may want to specify the -r option.
As @JeffMercado implicitly suggested, you can set the query from the command line without touching the script:
jq -r 'paths
| select(.[-1] | type == "string" and contains($q))
| join(".")' file.json --arg q 'keyword'
You can stream the input in, which provides paths and values. You could then inspect the paths and optionally output the values.
$ jq --stream --arg pattern 'IP' '
select(length == 2 and any(.[0][] | strings; test($pattern)))
| "\(.[0] | join(".")): \(.[1])"
' input.json
"status.hostIP: 172.17.0.37"
"status.podIP: 10.32.0.4"
Shameless plug:
https://github.com/TomConlin/json_to_paths
because sometimes you do not even know the component you want to filter for before you see what is there.
json2jqpath.jq file.json
.
.apiVersion
.kind
.metadata
.metadata|.creationTimestamp
.metadata|.labels
.metadata|.labels|.bu
.metadata|.labels|.env
.metadata|.name
.metadata|.namespace
.metadata|.resourceVersion
.metadata|.selfLink
.metadata|.uid
.spec
.spec|.containers
.spec|.containers|.[]
.spec|.containers|.[]|.command
.spec|.containers|.[]|.command|.[]
.spec|.containers|.[]|.image
.spec|.containers|.[]|.imagePullPolicy
.spec|.containers|.[]|.name
.spec|.containers|.[]|.resources
.spec|.containers|.[]|.terminationMessagePath
.spec|.containers|.[]|.terminationMessagePolicy
.spec|.containers|.[]|.volumeMounts
.spec|.containers|.[]|.volumeMounts|.[]
.spec|.containers|.[]|.volumeMounts|.[]|.mountPath
.spec|.containers|.[]|.volumeMounts|.[]|.name
.spec|.containers|.[]|.volumeMounts|.[]|.readOnly
.spec|.dnsPolicy
.spec|.nodeName
.spec|.priority
.spec|.restartPolicy
.spec|.schedulerName
.spec|.securityContext
.spec|.serviceAccount
.spec|.serviceAccountName
.spec|.terminationGracePeriodSeconds
.spec|.tolerations
.spec|.tolerations|.[]
.spec|.tolerations|.[]|.effect
.spec|.tolerations|.[]|.key
.spec|.tolerations|.[]|.operator
.spec|.tolerations|.[]|.tolerationSeconds
.spec|.volumes
.spec|.volumes|.[]
.spec|.volumes|.[]|.name
.spec|.volumes|.[]|.secret
.spec|.volumes|.[]|.secret|.defaultMode
.spec|.volumes|.[]|.secret|.secretName
.status
.status|.conditions
.status|.conditions|.[]
.status|.conditions|.[]|.lastProbeTime
.status|.conditions|.[]|.lastTransitionTime
.status|.conditions|.[]|.status
.status|.conditions|.[]|.type
.status|.containerStatuses
.status|.containerStatuses|.[]
.status|.containerStatuses|.[]|.containerID
.status|.containerStatuses|.[]|.image
.status|.containerStatuses|.[]|.imageID
.status|.containerStatuses|.[]|.lastState
.status|.containerStatuses|.[]|.name
.status|.containerStatuses|.[]|.ready
.status|.containerStatuses|.[]|.restartCount
.status|.containerStatuses|.[]|.state
.status|.containerStatuses|.[]|.state|.running
.status|.containerStatuses|.[]|.state|.running|.startedAt
.status|.hostIP
.status|.phase
.status|.podIP
.status|.qosClass
.status|.startTime
I am trying to parse and convert a JSON task definition with a new image.
I want to change the "image": "docker.org/alpha/alpha-app-newgen:12.2.3" value in the JSON to "image": "docker.org/alpha/alpha-app-newgen:12.5.0", or any other version, dynamically.
Below is my JSON task definition:
{
"taskDefinition": {
"family": "ing-stack",
"volumes": [
{
"host": {
"sourcePath": "/tmp/nginx/elb.conf"
},
"name": "volume-0"
}
],
"containerDefinitions": [
{
"dnsSearchDomains": [],
"environment": [
{
"name": "API_SECRET",
"value": "ING-SECRET"
},
{
"name": "API_KEY",
"value": "AVERA-CADA-VERA-KEY"
}
],
"readonlyRootFilesystem": false,
"name": "ing-stg",
"links": [],
"mountPoints": [],
"image": "docker.org/alpha/alpha-app-newgen:12.2.3",
"privileged": false,
"essential": true,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 19000,
"hostPort": 19000
}
],
"dockerLabels": {}
},
{
"dnsSearchDomains": [],
"environment": [
{
"name": "NG_PROXY",
"value": "ing"
}
],
"readonlyRootFilesystem": false,
"name": "web",
"links": [
"identity-ng"
],
"mountPoints": [
{
"sourceVolume": "volume-0",
"readOnly": false,
"containerPath": "/etc/nginx/conf.d/default.conf"
}
],
"image": "docker.org/alpha/alpha-ui:6.4.7",
"portMappings": [
{
"protocol": "tcp",
"containerPort": 443,
"hostPort": 443
},
{
"protocol": "tcp",
"containerPort": 80,
"hostPort": 80
}
],
"memory": 512,
"command": [
"sh",
"prep-run-nginx.sh"
],
"dockerLabels": {}
}
],
"revision": 136
}
}
I need to get the same structure back with the new value for the image.
I tried the following:
jq '. | select(.containerDefinitions[].image | contains("'$new_img_no_ver'") ) | .image |= "my new image"', but it's adding the value to the end of the JSON.
Can anybody tell me how to achieve this?
Here are two potential solutions. Other variants are of course possible.
walk/1
If you don't want to be bothered with the details of exactly where the relevant "image" tag is located, consider using walk/1:
Invocation:
jq --arg old "docker.org/alpha/alpha-app-newgen:12.2.3" --arg new "HELLO" -f update.jq input.json
update.jq:
walk(if type == "object" and .image == $old then .image=$new else . end)
If your jq does not have walk/1, consider upgrading, or get its jq definition by searching for: jq def walk
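If you need to paste a definition in, this is essentially the one older jq manuals suggested (treat the exact wording as an approximation):
def walk(f): . as $in
  | if type == "object" then
      reduce keys[] as $key
        ( {}; . + { ($key): ($in[$key] | walk(f)) } ) | f
  elif type == "array" then map( walk(f) ) | f
  else f
  end;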
Targeted update
Invocation: as above
update.jq:
.taskDefinition.containerDefinitions[].image |= (if . == $old then $new else . end)
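The targeted update can also be run inline, without a separate filter file; the new tag here is just an example value:
jq --arg old "docker.org/alpha/alpha-app-newgen:12.2.3" --arg new "docker.org/alpha/alpha-app-newgen:12.5.0" \
   '.taskDefinition.containerDefinitions[].image |= (if . == $old then $new else . end)' input.json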