AWS Step Functions: how to pass InputPath to OutputPath unchanged in a Fargate task

I have an AWS Step Functions state machine defined with the Serverless step functions plugin, with 3 steps (FirstStep -> Worker -> EndStep -> Done):
stepFunctions:
  stateMachines:
    MyStateMachine:
      name: "MyStateMachine"
      definition:
        StartAt: FirstStep
        States:
          FirstStep:
            Type: Task
            Resource:
              Fn::GetAtt: [ FirstStep, Arn ]
            InputPath: $
            OutputPath: $
            Next: Worker
          Worker:
            Type: Task
            Resource: arn:aws:states:::ecs:runTask.sync
            InputPath: $
            OutputPath: $
            Parameters:
              Cluster: "#{EcsCluster}"
              TaskDefinition: "#{EcsTaskDefinition}"
              LaunchType: FARGATE
              Overrides:
                ContainerOverrides:
                  - Name: container-worker
                    Environment:
                      - Name: ENV_VAR_1
                        'Value.$': $.ENV_VAR_1
                      - Name: ENV_VAR_2
                        'Value.$': $.ENV_VAR_2
            Next: EndStep
          EndStep:
            Type: Task
            Resource:
              Fn::GetAtt: [ EndStep, Arn ]
            InputPath: $
            OutputPath: $
            Next: Done
          Done:
            Type: Succeed
I would like to propagate the input unchanged from the Worker step (Fargate) to EndStep, but when I inspect the step input of EndStep in the AWS Management Console, I see that the data associated with the Fargate task is passed instead:
{
"Attachments": [...],
"Attributes": [],
"AvailabilityZone": "...",
"ClusterArn": "...",
"Connectivity": "CONNECTED",
"ConnectivityAt": 1619602512349,
"Containers": [...],
"Cpu": "1024",
"CreatedAt": 1619602508374,
"DesiredStatus": "STOPPED",
"ExecutionStoppedAt": 1619602543623,
"Group": "...",
"InferenceAccelerators": [],
"LastStatus": "STOPPED",
"LaunchType": "FARGATE",
"Memory": "3072",
"Overrides": {
"ContainerOverrides": [
{
"Command": [],
"Environment": [
{
"Name": "ENV_VAR_1",
"Value": "..."
},
{
"Name": "ENV_VAR_2",
"Value": "..."
}
],
"EnvironmentFiles": [],
"Name": "container-worker",
"ResourceRequirements": []
}
],
"InferenceAcceleratorOverrides": []
},
"PlatformVersion": "1.4.0",
"PullStartedAt": 1619602522806,
"PullStoppedAt": 1619602527294,
"StartedAt": 1619602527802,
"StartedBy": "AWS Step Functions",
"StopCode": "EssentialContainerExited",
"StoppedAt": 1619602567040,
"StoppedReason": "Essential container in task exited",
"StoppingAt": 1619602553655,
"Tags": [],
"TaskArn": "...",
"TaskDefinitionArn": "...",
"Version": 5
}
Basically, if the initial input is
{
  "ENV_VAR_1": "env1",
  "ENV_VAR_2": "env2",
  "otherStuff": {
    "k1": "v1",
    "k2": "v2"
  }
}
I want it to be passed as is to FirstStep, Worker and EndStep inputs without changes.
Is this possible?

Given that you invoke the step function with an object (let's call that A), then a task's...
...InputPath specifies what part of A is handed to your task
...ResultPath specifies where in A to put the result of the task
...OutputPath specifies what part of A to hand over to the next state
Source: https://docs.aws.amazon.com/step-functions/latest/dg/input-output-example.html
So you are currently overwriting all content in A with the result of your Worker state (implicitly). If you want to discard the result of your Worker state, you have to specify:
ResultPath: null
Source: https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultpath.html#input-output-resultpath-null
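Applied to the Serverless definition above, the Worker state would look roughly like this (a minimal sketch; only the ResultPath line is new, everything else is taken from the question, and it assumes the serverless-step-functions plugin passes a YAML null through as a JSON null):
Worker:
  Type: Task
  Resource: arn:aws:states:::ecs:runTask.sync
  InputPath: $
  OutputPath: $
  # Discard the ECS RunTask result so the state's input is passed through unchanged
  ResultPath: null
  Parameters:
    Cluster: "#{EcsCluster}"
    TaskDefinition: "#{EcsTaskDefinition}"
    LaunchType: FARGATE
    Overrides:
      ContainerOverrides:
        - Name: container-worker
          Environment:
            - Name: ENV_VAR_1
              'Value.$': $.ENV_VAR_1
            - Name: ENV_VAR_2
              'Value.$': $.ENV_VAR_2
  Next: EndStep
With ResultPath set to null, EndStep receives the original execution input rather than the ECS task description.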

Related

Reference a value from the state's input, using JSONPath syntax, in an SSM SendCommand API step through a parameter which expects an array

I have the following step defined on an AWS state machine for the API aws-sdk:ssm:sendCommand:
{
  "Type": "Task",
  "Parameters": {
    "DocumentName.$": "$.result.DocumentName",
    "InstanceIds.$": "$..Dimensions[?(#.Name=~/.*InstanceId.*/)].Value",
    "MaxErrors": "0",
    "MaxConcurrency": "100%",
    "CloudWatchOutputConfig": {
      "CloudWatchLogGroupName": "diskspace-log",
      "CloudWatchOutputEnabled": true
    },
    "Parameters": {
      "workingDirectory": [
        ""
      ],
      "executionTimeout": [
        "3600"
      ],
      "commands": [
        "echo -------------------Mounting volume without signals $..Dimensions[?(#.Name=~/.*device.*/)].Value---------------------",
        "echo",
        "mount $..Dimensions[?(#.Name=~/.*device.*/)].Value"
      ]
    }
  }
}
The section: "commands": [] expects an array.
"commands" should accept input reference as any other parameter in the schema, so in theory will be posible to use json path parameters (Example: "size.$": "$.product.details.size") for referencing needed parameters from input.
https://docs.aws.amazon.com/step-functions/latest/dg/input-output-inputpath-params.html
The following example works without input referencing:
"commands": [
"echo -------------------Mounting /dev/ebs---------------------",
"echo",
"mount /dev/ebs"
]
But I need to reference values from the input; hardcoded values won't work for me. I tried the following, but it doesn't work:
"commands": [
"echo -------------------Mounting volume without signals $..Dimensions[?(#.Name=~/.*device.*/)].Value---------------------",
"echo",
"mount $..Dimensions[?(#.Name=~/.*device.*/)].Value"
]
I also tried this, which doesn't work either:
"commands.$": "States.Array(States.Format('echo -------------------Mounting volume without signals {} ---------------------', $..Dimensions[?(#.Name=~/.*device.*/)].Value),'echo',States.Format('mount {}', $..Dimensions[?(#.Name=~/.*device.*/)].Value))"
I believe some of the provided intrinsic functions will help achieve the expected result, but I'm lost on how to set up the syntax properly.
https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-intrinsic-functions.html#asl-intrsc-func-arrays
The step calls a RunShellScript type of document command and executes the commands provided in the step's parameters.
In the output I get:
States.Format('echo -------------------Mounting volume without signals {} ---------------------', $..Dimensions[?(#.Name=~/.*device.*/)].Value)'
It's not detecting the input reference. I expect it to output:
-------------------Mounting volume without signals /dev/ebs ---------------------
and in the background execute:
mount /dev/ebs
I was able to send the commands through a Pass state; here is the definition:
{
  "Type": "Pass",
  "Next": "SendCommand",
  "ResultPath": "$.ForArgs",
  "Parameters": {
    "Params": {
      "Args": [
        {
          "Arg1": "ec2-metadata -i"
        },
        {
          "Arg2": "echo"
        },
        {
          "Arg3.$": "States.Format('echo -------------------Mounting volume without signals {} ---------------------', States.ArrayGetItem($..Dimensions[?(#.Name=~/.*device.*/)].Value, 0))"
        },
        {
          "Arg4": "echo"
        },
        {
          "Arg5.$": "States.Format('mount {}', States.ArrayGetItem($..Dimensions[?(#.Name=~/.*device.*/)].Value, 0))"
        },
        {
          "Arg6.$": "States.Format('echo Checking if device {} is mounted', States.ArrayGetItem($..Dimensions[?(#.Name=~/.*device.*/)].Value, 0))"
        },
        {
          "Arg7.$": "States.Format('if findmnt --source \"{}\" >/dev/null', States.ArrayGetItem($..Dimensions[?(#.Name=~/.*device.*/)].Value, 0))"
        },
        {
          "Arg8": "\tthen echo device is mounted"
        },
        {
          "Arg9": "\telse echo device is not mounted"
        },
        {
          "Arg10": "fi"
        }
      ]
    }
  }
}
Then, in the sendCommand API step:
"commands.$": "$.ForArgs.Params.Args[*][*]"

How to run AWS ECS Task with CloudFormation overriding container environment variables

I was searching for a way to run an ECS task. I already have a cluster and task definition settings; I just want to trigger a task using a CloudFormation template. I know that I can run a task from the console and it works fine, but for CloudFormation the approach needs to be defined properly.
I want to run that task using CloudFormation and pass container-override environment variables. With my current templates, it does not let me do the same as I can from the console. Using the console I just need to select the following options:
1. Launch type
2. Task Definition
Family
Revision
3. VPC and security groups
4. Environment variable overrides (the rest is selected automatically)
It works from the console, but how can I do the same with a CloudFormation template? Is it possible, or is there no such feature?
"taskdefinition": {
"Type" : "AWS::ECS::TaskDefinition",
"DependsOn": "DatabaseMaster",
"Properties" : {
"ContainerDefinitions" : [{
"Environment" : [
{
"Name" : "TARGET_DATABASE",
"Value" : {"Ref":"DBName"}
},
{
"Name" : "TARGET_HOST",
"Value" : {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]}
}
]
}],
"ExecutionRoleArn" : "arn:aws:iam::xxxxxxxxxx:role/ecsTaskExecutionRole",
"Family" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"TaskRoleArn" : "arn:aws:iam::xxxxxxxxxxxxxxx:role/xxxxxxxxxxxxxxx-XXXXXXXXX"
}
},
"EcsService": {
"Type" : "AWS::ECS::Service",
"Properties" : {
"Cluster" : "xxxxxxxxxxxxxxxxx",
"LaunchType" : "FARGATE",
"NetworkConfiguration" : {
"AwsvpcConfiguration" : {
"SecurityGroups" : ["sg-xxxxxxxxxxx"],
"Subnets" : ["subnet-xxxxxxxxxxxxxx"]
}
},
"TaskDefinition" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
There is no validation error in the code; I am asking about the approach. I added the image name and container name, but now it is asking for memory and CPU. It should not ask, since these are already defined; we just need to run a task.
Edited:
I want to run a task after creation of my database and pass the database values to the task so it can run and complete a job.
Here is a working example of what you can do if you want to pass variables and run a task. In my case, I wanted to run a task after creation of my database, with environment variables. AWS does not directly provide a feature to do so; this is the solution that can help trigger your ECS task.
"IAMRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Description": "Allow CloudWatch Events to trigger ECS task",
"Policies": [
{
"PolicyName": "Allow-ECS-Access",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:*",
"iam:PassRole",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
}
],
"RoleName": { "Fn::Join": [ "", ["CloudWatchTriggerECSRole-", { "Ref": "DBInstanceIdentifier" }]]}
}
},
"DummyParameter": {
"Type" : "AWS::SSM::Parameter",
"Properties" : {
"Name" : {"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"},
"Type" : "String",
"Value" : {"Fn::GetAtt": "DatabaseMaster.Endpoint.Address"}
},
"DependsOn": "TaskSchedule"
},
"TaskSchedule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "Trigger ECS task upon creation of DB instance",
"Name": { "Fn::Join": [ "", ["ECSTaskTrigger-", { "Ref": "DBName" }]]},
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EventPattern": {
"source": [ "aws.ssm" ],
"detail-type": ["Parameter Store Change"] ,
"resources": [{"Fn::Sub":"arn:aws:ssm:eu-west-1:XXXXXXX:parameter/${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"detail": {
"operation": ["Create"],
"name": [{"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"type": ["String"]
}
},
"State": "ENABLED",
"Targets": [
{
"Arn": "arn:aws:ecs:eu-west-1:xxxxxxxx:cluster/NameOf-demo",
"Id": "NameOf-demo",
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EcsParameters": {
"LaunchType": "FARGATE",
"NetworkConfiguration": {
"AwsVpcConfiguration": {
"SecurityGroups": {"Ref":"VPCSecurityGroups"},
"Subnets": {"Ref":"DBSubnetName"}
}
},
"PlatformVersion": "LATEST",
"TaskDefinitionArn": "arn:aws:ecs:eu-west-1:XXXXXXXX:task-definition/NameXXXXXXXXX:1"
},
"Input": {"Fn::Sub": [
"{\"containerOverrides\":[{\"name\":\"MyContainerName\",\"environment\":[{\"name\":\"VAR1\",\"value\":\"${TargetDatabase}\"},{\"name\":\"VAR2\",\"value\":\"${TargetHost}\"},{\"name\":\"VAR3\",\"value\":\"${TargetHostPassword}\"},{\"name\":\"VAR4\",\"value\":\"${TargetPort}\"},{\"name\":\"VAR5\",\"value\":\"${TargetUser}\"},{\"name\":\"VAR6\",\"value\":\"${TargetLocation}\"},{\"name\":\"VAR7\",\"value\":\"${TargetRegion}\"}]}]}",
{
"VAR1": {"Ref":"DBName"},
"VAR2": {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]},
"VAR3": {"Ref":"DBPassword"},
"VAR4": "5432",
"VAR5": {"Ref":"DBUser"},
"VAR6": "value6",
"VAR7": "eu-west-2"
}
]}
}
]
}
}
For a Fargate task, we need to specify CPU in the task definition, and memory or memory reservation in either the task or the container definition.
Environment variables should be defined per container in ContainerDefinitions and can be overridden when the task is run via ECS run-task from the console or CLI.
{
  "ContainerTaskdefinition": {
    "Type": "AWS::ECS::TaskDefinition",
    "Properties": {
      "Family": "SomeFamily",
      "ExecutionRoleArn": !Ref RoleArn,
      "TaskRoleArn": !Ref TaskRoleArn,
      "Cpu": "256",
      "Memory": "1GB",
      "NetworkMode": "awsvpc",
      "RequiresCompatibilities": [
        "EC2",
        "FARGATE"
      ],
      "ContainerDefinitions": [
        {
          "Name": "container name",
          "Cpu": 256,
          "Essential": "true",
          "Image": !Ref EcsImage,
          "Memory": "1024",
          "LogConfiguration": {
            "LogDriver": "awslogs",
            "Options": {
              "awslogs-group": null,
              "awslogs-region": null,
              "awslogs-stream-prefix": "ecs"
            }
          },
          "Environment": [
            {
              "Name": "ENV_ONE_KEY",
              "Value": "Valu1"
            },
            {
              "Name": "ENV_TWO_KEY",
              "Value": "Valu2"
            }
          ]
        }
      ]
    }
  }
}
EDIT (from discussion in comments):
ECS RunTask is not a CloudFormation resource; it can only be run from the console or the CLI.
But if we choose to run it from CloudFormation, it can be done using a CloudFormation custom resource. Once the task ends, though, we are left with a resource in CloudFormation without an actual resource behind it. So the custom resource needs to:
on create: run the task.
on delete: do nothing.
on update: re-run the task.
Force an update by changing an attribute or the logical id every time you need to run the task (a minimal handler sketch follows below).
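A minimal sketch of the Lambda handler behind such a custom resource, assuming the cluster, task definition, network settings and container name are passed in as resource properties (all names here are placeholders, not part of the original templates):
import boto3
import cfnresponse  # helper module available to Lambda-backed CloudFormation custom resources

ecs = boto3.client("ecs")

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            # Run (or re-run) the task; properties come from the custom resource definition.
            props = event["ResourceProperties"]
            ecs.run_task(
                cluster=props["Cluster"],
                taskDefinition=props["TaskDefinition"],
                launchType="FARGATE",
                networkConfiguration={
                    "awsvpcConfiguration": {
                        "subnets": props["Subnets"],
                        "securityGroups": props["SecurityGroups"],
                    }
                },
                overrides={
                    "containerOverrides": [
                        {
                            "name": props["ContainerName"],
                            "environment": props.get("Environment", []),
                        }
                    ]
                },
            )
        # On Delete there is nothing to clean up: the task has already finished.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})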

Invalid json format, please check. Reason: invalid character 'a' looking for beginning of object key string

Could you please assist me here? I have validated the JSON, but the issue still appears. The strange thing is that when I create the JSON file with the wizard, the issue does not appear. Thank you in advance.
Validated JSON:
{
"agent": {
"metrics_collection_interval": 60,
"run_as_user": "root"
},
"logs": {
"logs_collected": {
"files": {
"collect_list": [
{
"file_path": "/var/log/messages",
"log_group_name": "messages",
"log_stream_name": "{instance_id}"
}
]
}
}
},
"metrics": {
"append_dimensions": {
"AutoScalingGroupName": "${aws:AutoScalingGroupName}",
"ImageId": "${aws:ImageId}",
"InstanceId": "${aws:InstanceId}",
"InstanceType": "${aws:InstanceType}"
},
"metrics_collected": {
"collectd": {
"metrics_aggregation_interval": 60
},
"cpu": {
"measurement": [
"cpu_usage_idle",
"cpu_usage_iowait",
"cpu_usage_user",
"cpu_usage_system"
],
"metrics_collection_interval": 60,
"totalcpu": false
},
"disk": {
"measurement": [
"used_percent",
"inodes_free"
],
"metrics_collection_interval": 60,
"resources": [
"*"
]
},
"diskio": {
"measurement": [
"io_time",
"write_bytes",
"read_bytes",
"writes",
"reads"
],
"metrics_collection_interval": 60,
"resources": [
"*"
]
},
"mem": {
"measurement": [
"mem_used_percent"
],
"metrics_collection_interval": 60
},
"netstat": {
"measurement": [
"tcp_established",
"tcp_time_wait"
],
"metrics_collection_interval": 60
},
"statsd": {
"metrics_aggregation_interval": 60,
"metrics_collection_interval": 10,
"service_address": ":8125"
},
"swap": {
"measurement": [
"swap_used_percent"
],
"metrics_collection_interval": 60
}
}
}
}
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
/opt/aws/amazon-cloudwatch-agent/bin/config-downloader --output-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --download-source file:/opt/aws/amazon-cloudwatch-agent/bin/config.json --mode ec2 --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config append
Successfully fetched the config and saved in /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json.tmp
Start configuration validation...
/opt/aws/amazon-cloudwatch-agent/bin/config-translator --input /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json --input-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --output /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml --mode ec2 --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config append
2019/08/26 07:58:17 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json ...
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json does not exist or cannot read. Skipping it.
2019/08/26 07:58:17 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/default ...
2019/08/26 07:58:17 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json.tmp ...
2019/08/26 07:58:17 Invalid json format, please check. Reason: invalid character 'a' looking for beginning of object key string
2019/08/26 07:58:17 I! AmazonCloudWatchAgent Version 1.223987.0.
2019/08/26 07:58:17 Configuration validation first phase failed. Agent version: 1.223987.0. Verify the JSON input is only using features supported by this version.
I managed to fix the error by adding a \ in front of the $. Then there was a Terraform error message when I used terraform apply, so I added a second dollar sign to skip the interpolation.
"AutoScalingGroupName": "\$${aws:AutoScalingGroupName}",
"ImageId": "\$${aws:ImageId}",
"InstanceId": "\$${aws:InstanceId}",
"InstanceType": "\$${aws:InstanceType}"
Hope this will help somebody in the future.

Ansible - Register variable and then search the variable to set_fact (Cisco Aci)

I query the Cisco ACI to obtain the advanced VMM provider details for a specific EPG.
The result is successful.
I then register the result to a variable.
I try to search that variable and obtain/extract a single specific piece of information, such as 'dn' or 'encap', as this would allow me to use the information in other plays.
Unfortunately I'm unable to extract the information, as the result comes back in an unusual format. Looking at a debug of the registered variable, it appears to be a dictionary, but no matter what I try, the only item I'm able to access is 'current'.
All other items are not registered as dictionary items.
I have tried to change the variable to a list, but I'm still unable to obtain the information I require.
I've searched forums to see if there is a way to convert the variable from a JSON result or dictionary to a string and then grep for the information, but with no success.
Ideally I would like to extract the information without installing additional 'apps'.
I will be very grateful if someone can advise how to search for a specific value in an irregularly nested result that doesn't list the items in a proper dictionary format.
- name: Access VMM provider Information
  hosts: apics
  gather_facts: false
  connection: local
  #
  vars:
    ansible_python_interpreter: /usr/bin/python3
  #
  tasks:
    - name: Play 1 Obtain VMM Provider Information
      aci_epg_to_domain:
        hostname: "{{ apics.hostname }}"
        username: "{{ apics.username }}"
        password: "{{ apics.password }}"
        tenant: Tenant_A
        ap: AP_Test
        epg: EPG_Test
        domain: DVS_Dell
        domain_type: vmm
        vm_provider: vmware
        state: query
        validate_certs: no
      register: DVS_Result
      #
    - set_fact:
        aci_result1: "{{ DVS_Result.current }}"
    - set_fact:
        aci_result2: "{{ DVS_Result.fvRsDomAtt.attributes.dn }}"
      #
    - debug:
        msg: "{{ DVS_Result }}"
    - debug:
        var=aci_result1
    - debug:
        var=aci_result2
DVS_Result
ok: [apic1r] => {
"msg": {
"changed": false,
"current": [
{
"fvRsDomAtt": {
"attributes": {
"annotation": "",
"bindingType": "none",
"childAction": "",
"classPref": "encap",
"configIssues": "",
"delimiter": "",
"dn": "uni/tn-TN_prod/ap-AP_Test/epg-EPG_Test/rsdomAtt-[uni/vmmp-VMware/dom-DVS_Dell]",
"encap": "unknown",
"encapMode": "auto",
"epgCos": "Cos0",
"epgCosPref": "disabled",
"extMngdBy": "",
"forceResolve": "yes",
"instrImedcy": "lazy",
"lagPolicyName": "",
"lcOwn": "local",
"modTs": "2019-08-18T20:52:13.570+00:00",
"mode": "default",
"monPolDn": "uni/tn-common/monepg-default",
"netflowDir": "both",
"netflowPref": "disabled",
"numPorts": "0",
"portAllocation": "none",
"primaryEncap": "unknown",
"primaryEncapInner": "unknown",
"rType": "mo",
"resImedcy": "lazy",
"secondaryEncapInner": "unknown",
"state": "missing-target",
"stateQual": "none",
"status": "",
"switchingMode": "native",
"tCl": "infraDomP",
"tDn": "uni/vmmp-VMware/dom-DVS_Dell",
"tType": "mo",
"triggerSt": "triggerable",
"txId": "8646911284551354729",
"uid": "15374"
}
}
}
],
"failed": false
}
}
######################################
### aci_result1
ok: [apic1r] => {
"aci_result1": [
{
"fvRsDomAtt": {
"attributes": {
"annotation": "",
"bindingType": "none",
"childAction": "",
"classPref": "encap",
"configIssues": "",
"delimiter": "",
"dn": "uni/tn-TN_prod/ap-AP_Test/epg-EPG_Test/rsdomAtt-[uni/vmmp-VMware/dom-DVS_Dell]",
"encap": "unknown",
"encapMode": "auto",
"epgCos": "Cos0",
"epgCosPref": "disabled",
"extMngdBy": "",
"forceResolve": "yes",
"instrImedcy": "lazy",
"lagPolicyName": "",
"lcOwn": "local",
"modTs": "2019-08-18T20:52:13.570+00:00",
"mode": "default",
"monPolDn": "uni/tn-common/monepg-default",
"netflowDir": "both",
"netflowPref": "disabled",
"numPorts": "0",
"portAllocation": "none",
"primaryEncap": "unknown",
"primaryEncapInner": "unknown",
"rType": "mo",
"resImedcy": "lazy",
"secondaryEncapInner": "unknown",
"state": "missing-target",
"stateQual": "none",
"status": "",
"switchingMode": "native",
"tCl": "infraDomP",
"tDn": "uni/vmmp-VMware/dom-DVS_Dell",
"tType": "mo",
"triggerSt": "triggerable",
"txId": "8646911284551354729",
"uid": "15374"
}
}
}
]
}
############################################
### aci_result2
fatal: [apic1r]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'fvRsDomAtt'\n\nThe error appears to be in '/etc/ansible/playbooks/cisco/aci/create_bd_ap_epg3.yml': line 37, column 8, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - set_fact:\n ^ here\n"}
Use json_query. For example:
- debug:
    msg: "{{ DVS_Result.current |
             json_query('[].fvRsDomAtt.attributes.dn') }}"
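If you need the value as a single fact rather than a list, something along these lines should work (a sketch assuming the query returns exactly one element; note that json_query requires the jmespath Python package, whereas plain indexing such as DVS_Result.current[0].fvRsDomAtt.attributes.dn avoids that dependency):
- set_fact:
    epg_dn: "{{ DVS_Result.current | json_query('[].fvRsDomAtt.attributes.dn') | first }}"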

CannotStartContainerError while submitting a AWS Batch Job

In AWS Batch I have a job definition, a job queue and a compute environment in which to execute my AWS Batch jobs.
After submitting a job, I find it in the list of the failed ones with this error:
Status reason
Essential container in task exited
Container message
CannotStartContainerError: API error (404): oci runtime error: container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file= --key=.
and in the cloudwatch logs I have:
container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file=Toulouse.json --key=out\": stat /var/application/script.sh --file=Toulouse.json --key=out: no such file or directory"
I have specified a correct Docker image that has all the scripts (we already use it and it works), and I don't know where the error is coming from.
Any suggestions are very appreciated.
The Dockerfile is something like this:
# Pull base image.
FROM account-id.dkr.ecr.region.amazonaws.com/application-image.base-php7-image:latest
VOLUME /tmp
VOLUME /mount-point
RUN chown -R ubuntu:ubuntu /var/application
# Create the source directories
USER ubuntu
COPY application/ /var/application
# Register aws profile
COPY data/aws /home/ubuntu/.aws
WORKDIR /var/application/
ENV COMPOSER_CACHE_DIR /tmp
RUN composer update -o && \
rm -Rf /tmp/*
Here is the Job Definition:
{
"jobDefinitionName": "JobDefinition",
"jobDefinitionArn": "arn:aws:batch:region:accountid:job-definition/JobDefinition:25",
"revision": 21,
"status": "ACTIVE",
"type": "container",
"parameters": {},
"retryStrategy": {
"attempts": 1
},
"containerProperties": {
"image": "account-id.dkr.ecr.region.amazonaws.com/application-dev:latest",
"vcpus": 1,
"memory": 512,
"command": [
"/var/application/script.sh",
"--file=",
"Ref::file",
"--key=",
"Ref::key"
],
"volumes": [
{
"host": {
"sourcePath": "/mount-point"
},
"name": "logs"
},
{
"host": {
"sourcePath": "/var/log/php/errors.log"
},
"name": "php-errors-log"
},
{
"host": {
"sourcePath": "/tmp/"
},
"name": "tmp"
}
],
"environment": [
{
"name": "APP_ENV",
"value": "dev"
}
],
"mountPoints": [
{
"containerPath": "/tmp/",
"readOnly": false,
"sourceVolume": "tmp"
},
{
"containerPath": "/var/log/php/errors.log",
"readOnly": false,
"sourceVolume": "php-errors-log"
},
{
"containerPath": "/mount-point",
"readOnly": false,
"sourceVolume": "logs"
}
],
"ulimits": []
}
}
In Cloudwatch log stream /var/log/docker:
time="2017-06-09T12:23:21.014547063Z" level=error msg="Handler for GET /v1.17/containers/4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67/json returned error: No such container: 4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67"
This error occurred because the command was malformed. I was submitting the job from a Lambda function (Python 2.7) using boto3, and the syntax of the command should be something like this:
'command' : ['sudo','mkdir','directory']
Hope it helps somebody.
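For reference, a minimal sketch of such a submission from the Lambda function (job name, queue and definition are placeholders); the point is that every argument is its own list element rather than one long string:
import boto3

batch = boto3.client("batch")

batch.submit_job(
    jobName="run-script",                 # placeholder
    jobQueue="my-job-queue",              # placeholder
    jobDefinition="JobDefinition",        # placeholder
    containerOverrides={
        # Each argument is a separate element; a single concatenated string would be
        # treated as one (non-existent) executable path, as in the error above.
        "command": ["/var/application/script.sh", "--file=Toulouse.json", "--key=out"],
    },
)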