How to run AWS ECS Task with CloudFormation overriding container environment variables - json

I was searching for a way to run an ECS task. I already have a cluster and a task definition; I just want to trigger the task from a CloudFormation template. I know I can run the task by clicking through the console and it works fine, but I haven't found the proper approach for CloudFormation.
I want to run that task using CloudFormation and pass container-override environment variables. With my current templates I cannot do the same thing I can do through the console. In the console I just need to select the following options:
1. Launch type
2. Task definition (family and revision)
3. VPC and security groups
4. Environment variable overrides (the rest is selected automatically)
That works from the console, but how can I do the same with a CloudFormation template? Is it possible, or is there no such feature?
"taskdefinition": {
"Type" : "AWS::ECS::TaskDefinition",
"DependsOn": "DatabaseMaster",
"Properties" : {
"ContainerDefinitions" : [{
"Environment" : [
{
"Name" : "TARGET_DATABASE",
"Value" : {"Ref":"DBName"}
},
{
"Name" : "TARGET_HOST",
"Value" : {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]}
}
]
}],
"ExecutionRoleArn" : "arn:aws:iam::xxxxxxxxxx:role/ecsTaskExecutionRole",
"Family" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"TaskRoleArn" : "arn:aws:iam::xxxxxxxxxxxxxxx:role/xxxxxxxxxxxxxxx-XXXXXXXXX"
}
},
"EcsService": {
"Type" : "AWS::ECS::Service",
"Properties" : {
"Cluster" : "xxxxxxxxxxxxxxxxx",
"LaunchType" : "FARGATE",
"NetworkConfiguration" : {
"AwsvpcConfiguration" : {
"SecurityGroups" : ["sg-xxxxxxxxxxx"],
"Subnets" : ["subnet-xxxxxxxxxxxxxx"]
}
},
"TaskDefinition" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
There is no validation error in the template; my question is about the approach. After I added the image name and container name, it started asking for memory and CPU. It should not ask for those, since they are already defined in the task definition; we just need to run a task.
Edit: I want to run a task after my database is created and pass the database values to that task so it can run and complete its job.

Here is a working example of how to pass variables and run a task. In my case I wanted to run a task after the creation of my database, with environment variables. AWS does not provide a direct feature for this, so the following workaround triggers the ECS task.
"IAMRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Description": "Allow CloudWatch Events to trigger ECS task",
"Policies": [
{
"PolicyName": "Allow-ECS-Access",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:*",
"iam:PassRole",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
}
],
"RoleName": { "Fn::Join": [ "", ["CloudWatchTriggerECSRole-", { "Ref": "DBInstanceIdentifier" }]]}
}
},
"DummyParameter": {
"Type" : "AWS::SSM::Parameter",
"Properties" : {
"Name" : {"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"},
"Type" : "String",
"Value" : {"Fn::GetAtt": "DatabaseMaster.Endpoint.Address"}
},
"DependsOn": "TaskSchedule"
},
"TaskSchedule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "Trigger ECS task upon creation of DB instance",
"Name": { "Fn::Join": [ "", ["ECSTaskTrigger-", { "Ref": "DBName" }]]},
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EventPattern": {
"source": [ "aws.ssm" ],
"detail-type": ["Parameter Store Change"] ,
"resources": [{"Fn::Sub":"arn:aws:ssm:eu-west-1:XXXXXXX:parameter/${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"detail": {
"operation": ["Create"],
"name": [{"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"type": ["String"]
}
},
"State": "ENABLED",
"Targets": [
{
"Arn": "arn:aws:ecs:eu-west-1:xxxxxxxx:cluster/NameOf-demo",
"Id": "NameOf-demo",
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EcsParameters": {
"LaunchType": "FARGATE",
"NetworkConfiguration": {
"AwsVpcConfiguration": {
"SecurityGroups": {"Ref":"VPCSecurityGroups"},
"Subnets": {"Ref":"DBSubnetName"}
}
},
"PlatformVersion": "LATEST",
"TaskDefinitionArn": "arn:aws:ecs:eu-west-1:XXXXXXXX:task-definition/NameXXXXXXXXX:1"
},
"Input": {"Fn::Sub": [
"{\"containerOverrides\":[{\"name\":\"MyContainerName\",\"environment\":[{\"name\":\"VAR1\",\"value\":\"${TargetDatabase}\"},{\"name\":\"VAR2\",\"value\":\"${TargetHost}\"},{\"name\":\"VAR3\",\"value\":\"${TargetHostPassword}\"},{\"name\":\"VAR4\",\"value\":\"${TargetPort}\"},{\"name\":\"VAR5\",\"value\":\"${TargetUser}\"},{\"name\":\"VAR6\",\"value\":\"${TargetLocation}\"},{\"name\":\"VAR7\",\"value\":\"${TargetRegion}\"}]}]}",
{
"VAR1": {"Ref":"DBName"},
"VAR2": {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]},
"VAR3": {"Ref":"DBPassword"},
"VAR4": "5432",
"VAR5": {"Ref":"DBUser"},
"VAR6": "value6",
"VAR7": "eu-west-2"
}
]}
            }
        ]
    }
}
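For readability, the escaped Input string above expands to a containerOverrides payload like the following (the values shown are hypothetical examples of what the Fn::Sub substitutions would produce):
{
    "containerOverrides": [
        {
            "name": "MyContainerName",
            "environment": [
                {"name": "VAR1", "value": "mydatabase"},
                {"name": "VAR2", "value": "mydb.xxxxxx.eu-west-1.rds.amazonaws.com"},
                {"name": "VAR3", "value": "********"},
                {"name": "VAR4", "value": "5432"},
                {"name": "VAR5", "value": "dbuser"},
                {"name": "VAR6", "value": "value6"},
                {"name": "VAR7", "value": "eu-west-2"}
            ]
        }
    ]
}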

For a Fargate task, CPU must be specified at the task-definition level, and memory (or memory reservation) at either the task or the container level.
Environment variables should be declared per container under ContainerDefinitions; they can then be overridden when the task is run with run-task from the console or the CLI (see the override sketch after the template below).
{
    "ContainerTaskdefinition": {
        "Type": "AWS::ECS::TaskDefinition",
        "Properties": {
            "Family": "SomeFamily",
            "ExecutionRoleArn": {"Ref": "RoleArn"},
            "TaskRoleArn": {"Ref": "TaskRoleArn"},
            "Cpu": "256",
            "Memory": "1GB",
            "NetworkMode": "awsvpc",
            "RequiresCompatibilities": [
                "EC2",
                "FARGATE"
            ],
            "ContainerDefinitions": [
                {
                    "Name": "container name",
                    "Cpu": 256,
                    "Essential": true,
                    "Image": {"Ref": "EcsImage"},
                    "Memory": 1024,
                    "LogConfiguration": {
                        "LogDriver": "awslogs",
                        "Options": {
                            "awslogs-group": "xxxxxxxx",
                            "awslogs-region": {"Ref": "AWS::Region"},
                            "awslogs-stream-prefix": "ecs"
                        }
                    },
                    "Environment": [
                        {
                            "Name": "ENV_ONE_KEY",
                            "Value": "Valu1"
                        },
                        {
                            "Name": "ENV_TWO_KEY",
                            "Value": "Valu2"
                        }
                    ]
                }
            ]
        }
    }
}
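To override those environment variables at run time, you can hand an overrides document to run-task, e.g. aws ecs run-task ... --overrides file://overrides.json. A minimal sketch of that file, assuming the container name from the task definition above (the overridden value is just an example):
{
    "containerOverrides": [
        {
            "name": "container name",
            "environment": [
                {"name": "ENV_ONE_KEY", "value": "OverriddenValue1"}
            ]
        }
    ]
}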
EDIT (from discussion in comments):
Running an ECS task is not a CloudFormation resource; a task can only be started from the console, the CLI, or an SDK call.
But if we choose to run it from CloudFormation anyway, it can be done with a CloudFormation custom resource. Once the task ends, though, we are left with a resource in CloudFormation with no actual resource behind it. So the custom resource needs to:
on create: run the task.
on delete: do nothing.
on update: re-run the task.
Force an update by changing an attribute or the logical ID every time you need to run the task.
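As a minimal sketch of that wiring (all names here are hypothetical; ECSRunTaskFunction would be a Lambda function you write that calls RunTask on Create/Update and reports success back to CloudFormation), the custom resource could look like this:
"RunEcsTask": {
    "Type": "AWS::CloudFormation::CustomResource",
    "Properties": {
        "ServiceToken": {"Fn::GetAtt": ["ECSRunTaskFunction", "Arn"]},
        "TaskDefinition": {"Ref": "taskdefinition"},
        "myComment": "RunToken is an arbitrary property; change it on every deployment to force an update and re-run the task",
        "RunToken": "2021-10-20-1"
    }
}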

Related

Need to apply IAM PassRole to specific environment

I have a CloudFormation template. We have multiple environments (dev, qa, uat) and need to use the same template for all of them.
In the template, under "Action": ["iam:PassRole"], there are 4 resources, 3 of which belong to qa. When I deploy to the dev and uat environments, the qa resources are applied there as well, but I need those 3 qa resources to be created only for the qa environment. I tried some conditions but they aren't working. Is there an approach for this?
Please find the template code below.
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "CloudFormationRole": {
            "Type": "AWS::IAM::Role",
            "Description": "Service role in IAM for AWS CloudFormation",
            "Properties": {
                "RoleName": {
                    "Fn::Sub": "${Environment}-workflow-CloudFormationRole"
                },
                "Path": "/",
                "Policies": [
                    {
                        "PolicyName": "WorkerCloudFormationRolePolicy",
                        "PolicyDocument": {
                            "Statement": [
                                {
                                    "Action": [
                                        "lambda:AddPermission",
                                        "lambda:PutFunctionEventInvokeConfig",
                                        "lambda:UpdateFunctionEventInvokeConfig"
                                    ],
                                    "Resource": {
                                        "Fn::Sub": "arn:aws:lambda:function:orderser-${Environment}-workflow-*"
                                    },
                                    "Effect": "Allow"
                                },
                                {
                                    "Action": [
                                        "iam:PassRole"
                                    ],
                                    "Resource": [
                                        {"Fn::Sub": "arn:aws:iam::role/orderser-workflow-*"},
                                        {"Fn::Sub": "arn:aws:iam::role/orderserv-qa-workflowLambdaRole1"},
                                        {"Fn::Sub": "arn:aws:iam::role/orderserv-qa-workflowLambdaRole2"},
                                        {"Fn::Sub": "arn:aws:iam::role/orderserv-qa-workflowLmbdRole3"}
                                    ],
                                    "Effect": "Allow"
                                }
                            ]
                        }
                    }
                ]
            }
        }
    }
}
Parameterize the PassRole ARNs and keep a different set of parameter files for each environment.
And, since you have multiple values, use a CommaDelimitedList-type parameter, which can take multiple string values.
Ref here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
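A trimmed sketch of that idea (the parameter name PassRoleArns is illustrative; each environment's parameter file would supply its own list of ARNs). Ref on a CommaDelimitedList parameter returns the list itself, so it can be used directly as the Resource value:
{
    "Parameters": {
        "PassRoleArns": {
            "Type": "CommaDelimitedList",
            "Description": "Role ARNs that iam:PassRole is allowed on, set per environment"
        }
    },
    "Resources": {
        "CloudFormationRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "Policies": [
                    {
                        "PolicyName": "WorkerCloudFormationRolePolicy",
                        "PolicyDocument": {
                            "Statement": [
                                {
                                    "Action": ["iam:PassRole"],
                                    "Resource": {"Ref": "PassRoleArns"},
                                    "Effect": "Allow"
                                }
                            ]
                        }
                    }
                ]
            }
        }
    }
}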

Failed to retrieve function source code when deploying a cloud function from a repository on a different project

I am trying to deploy a Cloud Function from a Cloud Source Repository placed in a different project, but getting the following error: Failed to retrieve function source code (see full proto below).
Project-A contains the cloud function and service accounts listed below.
Project-B contains the source repository.
I have successfully deployed the function on Project-B.
I've tried giving the following service accounts the Source Repository Administrator role on the cloud source repository, but that did not help.
{project_A_number}@cloudservices.gserviceaccount.com
{project_A_number}-compute@developer.gserviceaccount.com
{project_A_number}@cloudbuild.gserviceaccount.com
Project-A@appspot.gserviceaccount.com
I have also tried disabling the Cloud Functions API on Project-A and turning it back on again.
I am not sure what is going wrong - if anyone has a clue as to where to further look, I would appreciate it - thanks in advance!
The deployment creates two entries in monitoring - a NOTICE followed by an ERROR:
The ERROR log:
{
    "protoPayload": {
        "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
        "status": {
            "code": 5,
            "message": "Failed to retrieve function source code"
        },
        "authenticationInfo": {
            "principalEmail": "***@***.**"
        },
        "serviceName": "cloudfunctions.googleapis.com",
        "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
        "resourceName": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs"
    },
    "insertId": "-vmfbt4cd54",
    "resource": {
        "type": "cloud_function",
        "labels": {
            "function_name": "pubsub-to-gcs",
            "region": "europe-west1",
            "project_id": "Project-A"
        }
    },
    "timestamp": "2021-10-20T12:21:45.352043Z",
    "severity": "ERROR",
    "logName": "projects/Project-A/logs/cloudaudit.googleapis.com%2Factivity",
    "operation": {
        "id": "operations/cm9ldHotbGlmZS1kYXRhLXRlc3QvZXVyb3BlLXdlc3QxL3B1YnN1Yi10by1nY3MvVEhFbUQtLTZITWM",
        "producer": "cloudfunctions.googleapis.com",
        "last": true
    },
    "receiveTimestamp": "2021-10-20T12:21:45.781856467Z"
}
The NOTICE log (logged right before the ERROR):
{
    "protoPayload": {
        "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
        "authenticationInfo": {
            "principalEmail": "***@****.**"
        },
        "requestMetadata": {
            "callerIp": "35.205.252.75",
            "callerSuppliedUserAgent": "google-cloud-sdk gcloud/360.0.0 command/gcloud.functions.deploy invocation-id/917d697431e84b91bfa2bd9f9cc4f302 environment/devshell environment-version/None interactive/True from-script/False python/3.7.3 term/screen (Linux 5.4.144+),gzip(gfe),gzip(gfe)",
            "requestAttributes": {
                "time": "2021-10-20T12:21:44.909430Z",
                "auth": {}
            },
            "destinationAttributes": {}
        },
        "serviceName": "cloudfunctions.googleapis.com",
        "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
        "authorizationInfo": [
            {
                "resource": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
                "permission": "cloudfunctions.functions.update",
                "granted": true,
                "resourceAttributes": {}
            }
        ],
        "resourceName": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
        "request": {
            "@type": "type.googleapis.com/google.cloud.functions.v1.UpdateFunctionRequest",
            "function": {
                "timeout": "60s",
                "status": "UNKNOWN",
                "serviceAccountEmail": "Project-A@appspot.gserviceaccount.com",
                "availableMemoryMb": 256,
                "name": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
                "runtime": "python39",
                "labels": {
                    "deployment-tool": "cli-gcloud"
                },
                "entryPoint": "pubsub-to-gcs",
                "updateTime": "2021-10-20T12:21:40.149Z",
                "sourceRepository": {
                    "url": "https://source.developers.google.com/projects/Project-B/repos/my-repo/moveable-aliases/master/paths/my-folder"
                },
                "httpsTrigger": {},
                "ingressSettings": "ALLOW_ALL",
                "versionId": "1"
            },
            "updateMask": "eventTrigger,httpsTrigger,runtime,sourceRepository"
        },
        "resourceLocation": {
            "currentLocations": [
                "europe-west1"
            ]
        }
    },
    "insertId": "1xdbim3e16pgu",
    "resource": {
        "type": "cloud_function",
        "labels": {
            "function_name": "pubsub-to-gcs",
            "region": "europe-west1",
            "project_id": "Project-A"
        }
    },
    "timestamp": "2021-10-20T12:21:44.650257Z",
    "severity": "NOTICE",
    "logName": "projects/Project-A/logs/cloudaudit.googleapis.com%2Factivity",
    "operation": {
        "id": "operations/cm9ldHotbGlmZS1kYXRhLXRlc3QvZXVyb3BlLXdlc3QxL3B1YnN1Yi10by1nY3MvVEhFbUQtLTZITWM",
        "producer": "cloudfunctions.googleapis.com",
        "first": true
    },
    "receiveTimestamp": "2021-10-20T12:21:45.832588036Z"
}
Turns out it wasn't an IAM issue: I had been trying to deploy the function from the UI, but that isn't possible when deploying from a source repository in a different project.
Deploying with gcloud functions deploy solved the issue.

CFT Template error: unresolved condition dependency UseDBSnapshot in Fn::If

Trying to create a CFT for RDS that can handle both scenarios:
1. creating a new RDS Aurora MySQL cluster, and
2. creating an RDS cluster from an existing DB cluster snapshot.
Here is what I tried. I have provided the Conditions section of the template below:
"UseDbSnapshot" : {
"Fn::Not" : [
{
"Fn::Equals":[
{"Ref": "DBSnapshotName"},
""
]
}
]
}
and referenced it in the Resources section as below:
"RDSCluster1": {
"Type": "AWS::RDS::DBCluster",
"Condition": "isResourceCreate",
"Properties": {
"Engine": "aurora",
"DBSubnetGroupName": {
"Ref": "DBSubnetGroup"
},
"DBClusterParameterGroupName": {
"Ref": "RDSDBClusterParameterGroup"
},
"DBSnapshotIdentifier" : {
"Fn::If" : [
"UseDBSnapshot",
{"Ref" : "DBSnapshotName"},
{"Ref" : "AWS::NoValue"}
]
},
"MasterUsername": {
"Ref": "DbUser"
},
"MasterUserPassword": {
"Ref": "MasterUserPassword"
},
"StorageEncrypted" : true,
"KmsKeyId" : {
"Ref": "KmsKeyId"
},
"VpcSecurityGroupIds": [
{
"Fn::GetAtt": [
"DBAccessSecurityGroup",
"GroupId"
]
}
],
"Port": "3306",
"BackupRetentionPeriod": "1"
},
"DeletionPolicy": "Snapshot"
}
The condition "isResourceCreate" is satisfied, but I am getting the error below:
Template error: unresolved condition dependency UseDBSnapshot in Fn::If
Could you please help me here?
I looked at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-sample-templates.html and created this CFT from it.
Let me know if you require any more details.
First, note the error itself: the condition is defined as UseDbSnapshot but referenced in Fn::If as UseDBSnapshot. Condition names are case-sensitive and must match exactly; that mismatch is what "unresolved condition dependency" points at.
Beyond that, if you are restoring a DB from a snapshot, you can't provide MasterUsername and MasterUserPassword. These values will be inherited from the snapshot, so you have to make them optional. From the documentation:
If you specify the SourceDBInstanceIdentifier or DBSnapshotIdentifier property, don't specify this property. The value is inherited from the source DB instance or snapshot.
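A sketch of making them optional, reusing the (correctly spelled) UseDbSnapshot condition from the question:
"MasterUsername": {
    "Fn::If": [
        "UseDbSnapshot",
        {"Ref": "AWS::NoValue"},
        {"Ref": "DbUser"}
    ]
},
"MasterUserPassword": {
    "Fn::If": [
        "UseDbSnapshot",
        {"Ref": "AWS::NoValue"},
        {"Ref": "MasterUserPassword"}
    ]
}
With AWS::NoValue, the property is omitted entirely when restoring from a snapshot, so the value is inherited from the snapshot.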

Specify shared/'common' values for configurations in CppProperties.json or CMakeSettings.json

When using the "Open Folder" functionality of Visual Studio, the IDE searches for project settings and configurations in a special json file. For CPP projects, this could be CppProperties.json. For CMake projects, this could be CMakeSettings.json.
This json file contains a collection of one or more "configurations," such as "Debug" or "Release". I will use a recent CMake project as an example:
"configurations": [
{
"name": "ARM-Debug",
"generator": "Ninja",
"configurationType": "Debug",
"inheritEnvironments": [
"gcc-arm"
],
"buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
"installRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\install\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v",
"ctestCommandArgs": "",
"intelliSenseMode": "linux-gcc-arm",
"variables": [
{
"name": "CMAKE_TOOLCHAIN_FILE",
"value": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
}
]
},
{
"name": "ARM-Release",
"generator": "Ninja",
"configurationType": "Release",
"inheritEnvironments": [
"gcc-arm"
],
"buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
"installRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\install\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v",
"ctestCommandArgs": "",
"intelliSenseMode": "linux-gcc-arm",
"variables": [
{
"name": "CMAKE_TOOLCHAIN_FILE",
"value": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
}
]
}
As you can see, I have two configurations with nearly identical properties.
My question: is it possible to define these common/shared properties once, in such a way as to allow the configurations to inherit them and avoid repeating myself?
The easier way is to define an environment at the global level (outside of any configuration), such as:
{
    "environments": [
        {
            "namespace": "env",
            "varName": "varValue"
        }
    ],
Then you can reuse that wherever you need to, e.g.:
"cmakeCommandArgs": "${env.varName}",
You can also have multiple environments, and reuse them, like this:
{
    "environments": [
        {
            "environment": "env1",
            "namespace": "env",
            "varName": "varValueEnv1"
        },
        {
            "environment": "env2",
            "namespace": "env",
            "varName": "varValueEnv2"
        }
    ],
    "configurations": [
        {
            "name": "x64-Release",
            "inheritEnvironments": [
                "msvc_x64_x64", "env2"
            ],
            "cmakeCommandArgs": "${env.varName}",
            .....
        }
    ]
}
The 'x64-Release' configuration will inherit the variables' values from the environment called "env2" (namespace 'env').
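Applied to the ARM configurations in the question, a sketch might hoist the repeated values into a global environment (the namespace 'shared' and its variable names are illustrative, not required names):
{
    "environments": [
        {
            "namespace": "shared",
            "buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
            "installRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\install\\${name}",
            "toolchainFile": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
        }
    ],
    "configurations": [
        {
            "name": "ARM-Debug",
            "generator": "Ninja",
            "configurationType": "Debug",
            "inheritEnvironments": ["gcc-arm"],
            "buildRoot": "${shared.buildRoot}",
            "installRoot": "${shared.installRoot}",
            "buildCommandArgs": "-v",
            "intelliSenseMode": "linux-gcc-arm",
            "variables": [
                {
                    "name": "CMAKE_TOOLCHAIN_FILE",
                    "value": "${shared.toolchainFile}"
                }
            ]
        }
    ]
}
ARM-Release would then repeat only the name and configurationType.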

AWS Data Pipeline - Set Hive site values during EMR Creation

We are upgrading our Data Pipeline EMR release from 3.3.2 to 5.8, so the bootstrap actions used on the old AMI release have changed to a setup using configuration objects, specified under a classification / property definition.
So my JSON looks like the below:
{
    "enableDebugging": "true",
    "taskInstanceBidPrice": "1",
    "terminateAfter": "2 Hours",
    "name": "ExportCluster",
    "taskInstanceType": "m1.xlarge",
    "schedule": {
        "ref": "Default"
    },
    "emrLogUri": "s3://emr-script-logs/",
    "coreInstanceType": "m1.xlarge",
    "coreInstanceCount": "1",
    "taskInstanceCount": "4",
    "masterInstanceType": "m3.xlarge",
    "keyPair": "XXXX",
    "applications": ["hadoop", "hive", "tez"],
    "subnetId": "XXXXX",
    "logUri": "s3://pipelinedata/XXX",
    "releaseLabel": "emr-5.8.0",
    "type": "EmrCluster",
    "id": "EmrClusterWithNewEMRVersion",
    "configuration": [
        {"ref": "configureEmrHiveSite"}
    ]
},
{
    "myComment": "This object configures hive-site xml.",
    "name": "HiveSite Configuration",
    "type": "HiveSiteConfiguration",
    "id": "configureEmrHiveSite",
    "classification": "hive-site",
    "property": [
        {"ref": "hive-exec-compress-output"}
    ]
},
{
    "myComment": "This object sets a hive-site configuration property value.",
    "name": "hive-exec-compress-output",
    "type": "Property",
    "id": "hive-exec-compress-output",
    "key": "hive.exec.compress.output",
    "value": "true"
}
],
"parameters": []
With the above JSON file, the pipeline loads into Data Pipeline but throws an error saying:
Object:HiveSite Configuration
ERROR: 'HiveSiteConfiguration'
Object:ExportCluster
ERROR: 'configuration' values must be of type 'null'. Found values of type 'null'
I am not sure what this really means. Could you please let me know whether I am specifying this correctly? I think I am, according to http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-configure-apps.html
The block below should use the type "EmrConfiguration" (here named "EMR Configuration"); only then is it recognized correctly by AWS Data Pipeline, and hive-site.xml is set accordingly.
{
    "myComment": "This object configures hive-site xml.",
    "name": "EMR Configuration",
    "type": "EmrConfiguration",
    "id": "configureEmrHiveSite",
    "classification": "hive-site",
    "property": [
        {"ref": "hive-exec-compress-output"}
    ]
},
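For reference, that EmrConfiguration/Property pair corresponds to this EMR application configuration (in the format of the emr-configure-apps page linked above):
[
    {
        "Classification": "hive-site",
        "Properties": {
            "hive.exec.compress.output": "true"
        }
    }
]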