I am trying to create a CloudFormation template (CFT) for RDS that can handle both scenarios:
creating a new RDS Aurora MySQL cluster, and
creating an RDS cluster from an existing DB cluster snapshot.
Here is what I tried. I have the below condition in the Conditions section of the template:
"UseDbSnapshot" : {
"Fn::Not" : [
{
"Fn::Equals":[
{"Ref": "DBSnapshotName"},
""
]
}
]
}
and referenced it in the Resources section as below:
"RDSCluster1": {
"Type": "AWS::RDS::DBCluster",
"Condition": "isResourceCreate",
"Properties": {
"Engine": "aurora",
"DBSubnetGroupName": {
"Ref": "DBSubnetGroup"
},
"DBClusterParameterGroupName": {
"Ref": "RDSDBClusterParameterGroup"
},
"DBSnapshotIdentifier" : {
"Fn::If" : [
"UseDBSnapshot",
{"Ref" : "DBSnapshotName"},
{"Ref" : "AWS::NoValue"}
]
},
"MasterUsername": {
"Ref": "DbUser"
},
"MasterUserPassword": {
"Ref": "MasterUserPassword"
},
"StorageEncrypted" : true,
"KmsKeyId" : {
"Ref": "KmsKeyId"
},
"VpcSecurityGroupIds": [
{
"Fn::GetAtt": [
"DBAccessSecurityGroup",
"GroupId"
]
}
],
"Port": "3306",
"BackupRetentionPeriod": "1"
},
"DeletionPolicy": "Snapshot"
}
The condition "isResourceCreate" is satisfied but I am getting below error
Template error: unresolved condition dependency UseDBSnapshot in Fn::If
Could you please help me here.
Have looked up online link https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-sample-templates.html
and created this CFT.
Let me know if you require any more details.
First, note that condition names are case-sensitive: the condition is defined as "UseDbSnapshot" but referenced as "UseDBSnapshot", which is exactly what the "unresolved condition dependency" error is pointing at.
Beyond that, if you are restoring a DB from a snapshot, you can't provide MasterUsername and MasterUserPassword. These values are inherited from the snapshot, so you have to make them optional. As the documentation puts it: "If you specify the SourceDBInstanceIdentifier or DBSnapshotIdentifier property, don't specify this property. The value is inherited from the source DB instance or snapshot."
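With the condition name made consistent, a minimal sketch of the snapshot-aware cluster properties could look like the below. Note this targets AWS::RDS::DBCluster, whose restore property is SnapshotIdentifier (DBSnapshotIdentifier belongs to AWS::RDS::DBInstance), and it supplies the username and password only when not restoring:
"RDSCluster1": {
    "Type": "AWS::RDS::DBCluster",
    "Properties": {
        "Engine": "aurora",
        "SnapshotIdentifier": {
            "Fn::If": ["UseDbSnapshot", { "Ref": "DBSnapshotName" }, { "Ref": "AWS::NoValue" }]
        },
        "MasterUsername": {
            "Fn::If": ["UseDbSnapshot", { "Ref": "AWS::NoValue" }, { "Ref": "DbUser" }]
        },
        "MasterUserPassword": {
            "Fn::If": ["UseDbSnapshot", { "Ref": "AWS::NoValue" }, { "Ref": "MasterUserPassword" }]
        }
    }
}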
I was searching for a way to run an ECS task. I already have a cluster and task definition settings; I just want to trigger a task using a CloudFormation template. I know that I can run a task by clicking through the console and it works fine, but for CloudFormation the approach needs to be defined properly.
I want to run that task using CloudFormation and pass container-override environment variables. My current template does not let me do the same thing I can do from the console, where I just need to select the following options:
1. Launch type
2. Task definition (family and revision)
3. VPC and security groups
4. Environment variable overrides
The rest is selected automatically. It works from the console, but how can we do the same with a CloudFormation template? Is it possible, or is there no such feature?
"taskdefinition": {
"Type" : "AWS::ECS::TaskDefinition",
"DependsOn": "DatabaseMaster",
"Properties" : {
"ContainerDefinitions" : [{
"Environment" : [
{
"Name" : "TARGET_DATABASE",
"Value" : {"Ref":"DBName"}
},
{
"Name" : "TARGET_HOST",
"Value" : {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]}
}
]
}],
"ExecutionRoleArn" : "arn:aws:iam::xxxxxxxxxx:role/ecsTaskExecutionRole",
"Family" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"TaskRoleArn" : "arn:aws:iam::xxxxxxxxxxxxxxx:role/xxxxxxxxxxxxxxx-XXXXXXXXX"
}
},
"EcsService": {
"Type" : "AWS::ECS::Service",
"Properties" : {
"Cluster" : "xxxxxxxxxxxxxxxxx",
"LaunchType" : "FARGATE",
"NetworkConfiguration" : {
"AwsvpcConfiguration" : {
"SecurityGroups" : ["sg-xxxxxxxxxxx"],
"Subnets" : ["subnet-xxxxxxxxxxxxxx"]
}
},
"TaskDefinition" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
There is no validity error in the code; however, I am asking about the approach. I added the image name and container name, but now it is asking for memory and CPU. It should not ask, since those are already defined in the task definition; we just need to run a task.
EDIT:
I want to run a task after creation of my database and pass the database values to that task so it can complete a job.
Here is a working example of what you can do if you want to pass variables and run a task. In my case, I wanted to run a task after creation of my database, but with environment variables; AWS does not directly provide a feature for this, so the following is a workaround that can trigger your ECS task.
"IAMRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Description": "Allow CloudWatch Events to trigger ECS task",
"Policies": [
{
"PolicyName": "Allow-ECS-Access",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:*",
"iam:PassRole",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
}
],
"RoleName": { "Fn::Join": [ "", ["CloudWatchTriggerECSRole-", { "Ref": "DBInstanceIdentifier" }]]}
}
},
"DummyParameter": {
"Type" : "AWS::SSM::Parameter",
"Properties" : {
"Name" : {"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"},
"Type" : "String",
"Value" : {"Fn::GetAtt": "DatabaseMaster.Endpoint.Address"}
},
"DependsOn": "TaskSchedule"
},
"TaskSchedule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "Trigger ECS task upon creation of DB instance",
"Name": { "Fn::Join": [ "", ["ECSTaskTrigger-", { "Ref": "DBName" }]]},
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EventPattern": {
"source": [ "aws.ssm" ],
"detail-type": ["Parameter Store Change"] ,
"resources": [{"Fn::Sub":"arn:aws:ssm:eu-west-1:XXXXXXX:parameter/${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"detail": {
"operation": ["Create"],
"name": [{"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"type": ["String"]
}
},
"State": "ENABLED",
"Targets": [
{
"Arn": "arn:aws:ecs:eu-west-1:xxxxxxxx:cluster/NameOf-demo",
"Id": "NameOf-demo",
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EcsParameters": {
"LaunchType": "FARGATE",
"NetworkConfiguration": {
"AwsVpcConfiguration": {
"SecurityGroups": {"Ref":"VPCSecurityGroups"},
"Subnets": {"Ref":"DBSubnetName"}
}
},
"PlatformVersion": "LATEST",
"TaskDefinitionArn": "arn:aws:ecs:eu-west-1:XXXXXXXX:task-definition/NameXXXXXXXXX:1"
},
"Input": {"Fn::Sub": [
"{\"containerOverrides\":[{\"name\":\"MyContainerName\",\"environment\":[{\"name\":\"VAR1\",\"value\":\"${TargetDatabase}\"},{\"name\":\"VAR2\",\"value\":\"${TargetHost}\"},{\"name\":\"VAR3\",\"value\":\"${TargetHostPassword}\"},{\"name\":\"VAR4\",\"value\":\"${TargetPort}\"},{\"name\":\"VAR5\",\"value\":\"${TargetUser}\"},{\"name\":\"VAR6\",\"value\":\"${TargetLocation}\"},{\"name\":\"VAR7\",\"value\":\"${TargetRegion}\"}]}]}",
{
"TargetDatabase": {"Ref":"DBName"},
"TargetHost": {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]},
"TargetHostPassword": {"Ref":"DBPassword"},
"TargetPort": "5432",
"TargetUser": {"Ref":"DBUser"},
"TargetLocation": "value6",
"TargetRegion": "eu-west-2"
}
]}
}
]
}
}
For a Fargate task, we need to specify CPU in the task definition, and memory or memory reservation in either the task definition or the container definition.
Environment variables should be passed to each container in ContainerDefinitions; they can then be overridden when the task is run from the console or the CLI.
{
"ContainerTaskdefinition": {
"Type": "AWS::ECS::TaskDefinition",
"Properties": {
"Family": "SomeFamily",
"ExecutionRoleArn": !Ref RoleArn,
"TaskRoleArn": !Ref TaskRoleArn,
"Cpu": "256",
"Memory": "1GB",
"NetworkMode": "awsvpc",
"RequiresCompatibilities": [
"EC2",
"FARGATE"
],
"ContainerDefinitions": [
{
"Name": "container name",
"Cpu": 256,
"Essential": "true",
"Image": !Ref EcsImage,
"Memory": "1024",
"LogConfiguration": {
"LogDriver": "awslogs",
"Options": {
"awslogs-group": null,
"awslogs-region": null,
"awslogs-stream-prefix": "ecs"
}
},
"Environment": [
{
"Name": "ENV_ONE_KEY",
"Value": "Valu1"
},
{
"Name": "ENV_TWO_KEY",
"Value": "Valu2"
}
]
}
]
}
}
}
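For reference, the override payload supplied at run time (for example through the --overrides option of the CLI's run-task command) uses the same shape as the Input document in the answer above, so per-run values replace the defaults baked into ContainerDefinitions:
{
    "containerOverrides": [
        {
            "name": "container name",
            "environment": [
                { "name": "ENV_ONE_KEY", "value": "overridden-value" }
            ]
        }
    ]
}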
EDIT (from discussion in comments):
ECS RunTask is not a CloudFormation resource; a task can only be run from the console, the CLI, or an SDK.
But if we choose to run one from CloudFormation, it can be done with a CloudFormation custom resource. Once the task ends, though, we are left with a resource in CloudFormation with no actual resource behind it. So the custom resource needs to:
on create: run the task.
on delete: do nothing.
on update: re-run the task.
Force an update by changing an attribute or the logical ID every time you need to run the task; a declaration sketch follows.
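As a rough sketch of that idea (hypothetical names throughout; it assumes a Lambda function RunTaskFunction that calls ecs:RunTask on create and update and signals success back to CloudFormation), the custom resource could be declared like this:
"RunEcsTask": {
    "Type": "Custom::EcsRunTask",
    "Properties": {
        "ServiceToken": { "Fn::GetAtt": ["RunTaskFunction", "Arn"] },
        "Trigger": { "Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"] }
    }
}
Changing the Trigger property value on a stack update is what forces the custom resource to update and re-run the task.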
When using the "Open Folder" functionality of Visual Studio, the IDE searches for project settings and configurations in a special json file. For CPP projects, this could be CppProperties.json. For CMake projects, this could be CMakeSettings.json.
This json file contains a collection of one or more "configurations," such as "Debug" or "Release". I will use a recent CMake project as an example:
"configurations": [
{
"name": "ARM-Debug",
"generator": "Ninja",
"configurationType": "Debug",
"inheritEnvironments": [
"gcc-arm"
],
"buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
"installRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\install\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v",
"ctestCommandArgs": "",
"intelliSenseMode": "linux-gcc-arm",
"variables": [
{
"name": "CMAKE_TOOLCHAIN_FILE",
"value": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
}
]
},
{
"name": "ARM-Release",
"generator": "Ninja",
"configurationType": "Release",
"inheritEnvironments": [
"gcc-arm"
],
"buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
"installRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\install\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v",
"ctestCommandArgs": "",
"intelliSenseMode": "linux-gcc-arm",
"variables": [
{
"name": "CMAKE_TOOLCHAIN_FILE",
"value": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
}
]
}
As you can see, I have two configurations with nearly identical properties.
My question: is it possible to define these common/shared properties once, in such a way as to allow the configurations to inherit them and avoid repeating myself?
The easiest way is to define an environment at the global level (outside of any configuration), such as:
{
"environments": [
{
"namespace" : "env",
"varName": "varValue"
}
],
Then you can reuse that wherever you need to, e.g.:
"cmakeCommandArgs": "${env.varName}",
You can also have multiple environments, and reuse them, like this:
{
"environments": [
{
"environment": "env1",
"namespace": "env",
"varName": "varValueEnv1"
},
{
"environment": "env2",
"namespace": "env",
"varName": "varValueEnv2"
}
],
"configurations": [
{
"name": "x64-Release",
"inheritEnvironments": [
"msvc_x64_x64", "env2"
],
"cmakeCommandArgs": "${env.varName}",
.....
}
]
The 'x64-Release' configuration will inherit the variables' values from the environment called "env2" (namespace 'env').
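Applied to the configurations in the question, the shared toolchain path could be hoisted into a global environment; toolchainFile is just an illustrative variable name, and this assumes macros such as ${workspaceRoot} expand inside environment definitions:
{
    "environments": [
        {
            "namespace": "env",
            "toolchainFile": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
        }
    ],
    "configurations": [
        {
            "name": "ARM-Debug",
            "variables": [
                {
                    "name": "CMAKE_TOOLCHAIN_FILE",
                    "value": "${env.toolchainFile}"
                }
            ]
        }
    ]
}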
We are upgrading the EMR release used by our Data Pipeline from 3.3.2 to 5.8, so the bootstrap actions used on the old AMI releases now have to be set up as configuration objects, specified under classification/property definitions.
So my JSON looks like the below:
{
"enableDebugging": "true",
"taskInstanceBidPrice": "1",
"terminateAfter": "2 Hours",
"name": "ExportCluster",
"taskInstanceType": "m1.xlarge",
"schedule": {
"ref": "Default"
},
"emrLogUri": "s3://emr-script-logs/",
"coreInstanceType": "m1.xlarge",
"coreInstanceCount": "1",
"taskInstanceCount": "4",
"masterInstanceType": "m3.xlarge",
"keyPair": "XXXX",
"applications": ["hadoop","hive", "tez"],
"subnetId": "XXXXX",
"logUri": "s3://pipelinedata/XXX",
"releaseLabel": "emr-5.8.0",
"type": "EmrCluster",
"id": "EmrClusterWithNewEMRVersion",
"configuration": [
{ "ref": "configureEmrHiveSite" }
]
},
{
"myComment": "This object configures hive-site xml.",
"name": "HiveSite Configuration",
"type": "HiveSiteConfiguration",
"id": "configureEmrHiveSite",
"classification": "hive-site",
"property": [
{"ref": "hive-exec-compress-output" }
]
},
{
"myComment": "This object sets a hive-site configuration
property value.",
"name":"hive-exec-compress-output",
"type": "Property",
"id": "hive-exec-compress-output",
"key": "hive.exec.compress.output",
"value": "true"
}
],
"parameters": []
The above JSON loads into Data Pipeline, but it throws an error saying:
Object:HiveSite Configuration
ERROR: 'HiveSiteConfiguration'
Object:ExportCluster
ERROR: 'configuration' values must be of type 'null'. Found values of type 'null'
I am not sure what this really means. Could you please let me know if I am specifying this correctly? I think I am, according to http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-configure-apps.html.
The block below should use the "EmrConfiguration" type (here named "EMR Configuration"); only then is it recognized correctly by AWS Data Pipeline and hive-site.xml is set accordingly.
{
"myComment": "This object configures hive-site xml.",
"name": "EMR Configuration",
"type": "EmrConfiguration",
"id": "configureEmrHiveSite",
"classification": "hive-site",
"property": [
{"ref": "hive-exec-compress-output" }
]
},
I create an EFS file system in a VPC via CloudFormation; this VPC contains a few EC2 instances.
I need to create mount targets for each subnet as well.
This CloudFormation template can be executed in different AWS accounts.
How do I know how many mount target resources to create?
I can have a VPC in account 1 with 3 subnets, another VPC in account 2 with 2 subnets...
How can I make the template generic so that it adapts to each account's environment?
"FileSystem" : {
"Type" : "AWS::EFS::FileSystem",
"Properties" : {
"FileSystemTags" : [
{
"Key" : "Name",
"Value" : "FileSystem"
}
]
}
},
"MountTarget1": {
"Type": "AWS::EFS::MountTarget",
"Properties": {
"FileSystemId": { "Ref": "FileSystem" },
"SubnetId": { "Ref": "Subnet1" },
"SecurityGroups": [ { "Ref": "MountTargetSecurityGroup" } ]
}
},
"MountTarget2": {
"Type": "AWS::EFS::MountTarget",
"Properties": {
"FileSystemId": { "Ref": "FileSystem" },
"SubnetId": { "Ref": "Subnet2" },
"SecurityGroups": [ { "Ref": "MountTargetSecurityGroup" } ]
}
},
As I understand it, you want to re-use this same template across accounts and create the appropriate number of MountTargets based on the number of subnets in each account.
There are many variables and conditions that would apply depending on the number of subnets (ACLs, route tables, etc.). You could perhaps accomplish it with a large number of conditions and parameters, as in the sketch below, but the template would get quite messy; it's not an elegant solution.
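For illustration, that conditions-based approach would look roughly like the below sketch, assuming hypothetical SubnetCount and Subnet3 parameters, with each optional mount target gated by its own condition:
"Parameters": {
    "SubnetCount": { "Type": "Number", "AllowedValues": ["1", "2", "3"], "Default": "2" }
},
"Conditions": {
    "HasThirdSubnet": { "Fn::Equals": [{ "Ref": "SubnetCount" }, "3"] }
},
"MountTarget3": {
    "Type": "AWS::EFS::MountTarget",
    "Condition": "HasThirdSubnet",
    "Properties": {
        "FileSystemId": { "Ref": "FileSystem" },
        "SubnetId": { "Ref": "Subnet3" },
        "SecurityGroups": [{ "Ref": "MountTargetSecurityGroup" }]
    }
}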
A better approach would be to generate your template using Troposphere. Here's an EFS example to get you started: https://github.com/cloudtools/troposphere/blob/master/examples/EFS.py
I am using Cygnus with the Mongo and STH sinks to retrieve historical data.
In the current implementation of the Cygnus Mongo sink, attribute metadata is not stored in the database, so I updated Cygnus to be able to store it.
But when I use STH-Comet to retrieve the history, the API apparently does not support retrieving attribute metadata.
Am I missing some kind of configuration, or does the API simply not support attribute metadata? The response that I am getting from STH-Comet is:
{
"contextResponses": [
{
"contextElement": {
"attributes": [
{
"name": "humidity",
"values": [
{
"recvTime": "2017-03-08T08:06:11.463Z",
"attrType": "Number",
"attrValue": "999"
},
{
"recvTime": "2017-03-08T08:10:54.199Z",
"attrType": "Number",
"attrValue": "3.06"
}
]
}
],
"id": "Room1",
"isPattern": false,
"type": "Room"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}
]
}
In the mongoDB data base I have this content:
{ "_id" : ObjectId("58bfbb7c973c5c22d258cffc"), "recvTime" : ISODate("2017-03-08T08:06:11.463Z"), "attrName" : "humidity", "attrType" : "Number", "attrValue" : "999", "attrMetadata" : [ ] }
{ "_id" : ObjectId("58bfbc93973c5c22d258cffd"), "recvTime" : ISODate("2017-03-08T08:10:54.199Z"), "attrName" : "humidity", "attrType" : "Number", "attrValue" : "3.06", "attrMetadata" : [ { "name" : "unit", "type" : "Text", "value" : "voltage" } ] }
In case the API does not support retrieving attribute metadata, can this feature be added?
Thanks & best regards.
STH and Cygnus are aligned with regard to the information stored in MongoDB, both raw and aggregated. Because Cygnus originally did not support attribute metadata in NGSIMongoSink (the sink in charge of storing the information in raw format), STH does not support attribute metadata in its raw API either.
Since you have extended the Cygnus functionality for this purpose, you'll have to extend the STH API as well.
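For illustration, an extended raw-API entry might expose the stored metadata alongside each value, mirroring the attrMetadata field already present in the MongoDB documents above (this response shape is purely hypothetical until the STH API is extended):
{
    "recvTime": "2017-03-08T08:10:54.199Z",
    "attrType": "Number",
    "attrValue": "3.06",
    "attrMetadata": [ { "name": "unit", "type": "Text", "value": "voltage" } ]
}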