I have a container that is deployed with Fargate and runs without any issues when I select "Run Task" in ECS. The container uses S3, SES and CloudWatch services (it contains a Python script). When a task is run, I receive an email with output files.
The next step is to trigger a task in ECS to run this container using Fargate on a schedule. For that, I am trying to use Amazon EventBridge. However, something is wrong, because the tasks fail to run.
The rule that I create has the following setup:
a cron expression, which I have confirmed is valid (the next 10 trigger dates appear in the console).
choose AWS Service -> ECS Task, then set the cluster, task definition and subnet ID.
I choose the task execution role (ecsTaskExecutionRole). This role has an Amazon_EventBridge_Invoke_ECS policy attached to it, left over from previous failed runs.
The rule was successfully attached to the task in ECS: if I go to the specified cluster, it appears under the Scheduled Tasks tab. I have tried multiple configurations and I keep getting FailedInvocations, which makes me think it is a problem with the role's policies.
I have created an additional target for the rule to log to CloudWatch, but the logs are not useful at all. I have also checked CloudTrail and looked for RunTask events. On some occasions, when I set a rule, no RunTask events appear in CloudTrail. Other times they appear but do not show any ErrorCode. I have also had instances where the RunTask calls failed with InvalidParameterException: "No Container Instances were found in your cluster." Any ideas about what may be wrong?
I'm not sure whether this is the problem in your case, but I was having a very similar issue and I fixed it by changing the role's policy to this:
{
  "Statement": [
    {
      "Action": [
        "iam:PassRole"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    },
    {
      "Action": [
        "ecs:RunTask"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
I have a feeling that you need to change your role to a new role with the policy above, instead of the one that you mentioned (ecsTaskExecutionRole), since that role has this policy instead:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
EDIT: Just to add: this is the role that the EventBridge rule should have, not the task definition within the cluster. The task definition's role should still be the one that you've mentioned (ecsTaskExecutionRole).
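One more thing worth checking, since it trips people up with EventBridge targets in general: the role attached to the rule must also trust events.amazonaws.com, or EventBridge cannot assume it (ecsTaskExecutionRole is normally trusted by ecs-tasks.amazonaws.com instead). The trust relationship should look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}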
I am trying to create a policy to restrict users to viewing only specific instances in the AWS EC2 console. I have tried the policy below, but it still shows me all my available instances, so I am wondering what I did wrong in my JSON policy. Thank you.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/UserName": "${aws:username}"
        }
      }
    },
    {
      "Effect": "Deny",
      "Action": "ec2:Describe*",
      "Resource": "arn:aws:ec2:*:*:DescribeInstances/instance-id"
    }
  ]
}
Looking at Actions, resources, and condition keys for Amazon EC2 - Service Authorization Reference, the DescribeInstances API call does not accept any Conditions to limit the results.
Therefore, users either have permission to make that API call (and hence view all instances), or you can Deny them from being able to make the API call. There is no ability to control which instances they can include in their request.
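For example, a minimal Deny statement would look like this (Resource must be "*" here, since the Describe calls do not support resource-level permissions):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}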
Agree with John.
A slightly different way to go about this is not with policies and restrictions, but with filtering via tags and filters on the console.
It's not exactly what you want, but if you only want people to see the instances they should, tag those instances and send them a link like:
https://ap-southeast-2.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-1#Instances:tag:YourTagName=AllYouCanSee
I am creating a secret in AWS Secrets Manager and I am trying to attach a policy to restrict access by IP.
I do it in the secret's console, under the [Resource Permissions] section.
I keep getting a syntax error, but not what the error is.
Here is the policy I am trying (it was created via the visual editor in the AWS console).
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Action": "secretsmanager:*",
    "Resource": "arn:aws:secretsmanager:us-east-2:722317156788:secret:dev/playAround/junju-MWTXvg",
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": "210.75.12.75/32"
      }
    }
  }]
}
It works after making two changes:
remove the leading space in front of the opening brace "{" on the first line of the policy
for resource-based policies, a Principal element is required (in certain circumstances)
Your updated policy would then look something like the following (the account-root Principal below is an assumption for illustration; scope it to the specific identities that should access the secret):
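{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::722317156788:root"
    },
    "Action": "secretsmanager:*",
    "Resource": "arn:aws:secretsmanager:us-east-2:722317156788:secret:dev/playAround/junju-MWTXvg",
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": "210.75.12.75/32"
      }
    }
  }]
}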
I am trying to deploy an ARM template through Azure DevOps. I've tried doing a test deployment (Test-AzResourceGroupDeployment) through PowerShell without any issues.
This issue has persisted for several weeks, and I've read some posts stating it disappeared after a few hours or after a day; however, this has not been the case for me.
In Azure DevOps my build is succeeding just fine. But when I try to create a release through my release pipeline using the "Azure resource group deployment" task, it fails with the error:
"Code": "Conflict",
"Message": "Cannot modify this site because another operation is in progress. Details: Id: 4f18af87-8848-4df5-82f0-ec6be47fb599, OperationName: Update, CreatedTime: 9/27/2019 8:55:26 AM, RequestId: 691b5183-aa8b-4a38-8891-36906a5e2d20, EntityType: 3"
Update
I have later noticed that the error surfaces when trying to deploy my hostNameBindings for the site.
I have 2 different hostNameBindings in my template which causes the failure.
It apparently fails because it tries to deploy both of them at the same time, though I am not aware of an obvious fix for this, so any help would still be appreciated!
I tried to use the copy function, but as far as I know that will make an exact copy of both hostNameBindings, which is not what I need; first of all, they have different names and properties. Anyone got a fix for this?
Make one hostNameBinding depend on the other host name binding. Then they will be executed one after another and you should not get the same error message.
"dependsOn": [
"[resourceId('Microsoft.Web/sites/', variables('websitename'))]",
"[resourceId('Microsoft.Web/sites/hostNameBindings/',variables('websitename'), variables('firstbindingame-aftertheslash-sowithoutthewebsitename'))]"
],
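To make that concrete, here is a minimal sketch of the two binding resources (the variable names primaryHostname and secondaryHostname and the apiVersion are placeholders of mine, not from the original template):
{
  "type": "Microsoft.Web/sites/hostNameBindings",
  "apiVersion": "2018-11-01",
  "name": "[concat(variables('websitename'), '/', variables('primaryHostname'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', variables('websitename'))]"
  ],
  "properties": {}
},
{
  "type": "Microsoft.Web/sites/hostNameBindings",
  "apiVersion": "2018-11-01",
  "name": "[concat(variables('websitename'), '/', variables('secondaryHostname'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites/hostNameBindings', variables('websitename'), variables('primaryHostname'))]"
  ],
  "properties": {}
}
The second resourceId reference is what serializes the deployment: ARM will not start the second binding until the first one has finished.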
It looks like people have already noticed this issue and are trying to fix it:
https://status.azure.com/
I had the same issue when using the Copy function in order to add multiple Custom Domains. Thanks to David Gnanasekaran's blog I was able to fix this issue.
By default the copy function will execute in parallel. By setting the mode to Serial and the batchSize to 1, I did not receive any "operation is in progress" errors.
Here is my piece of ARM template to set the custom domains.
"copy": {
"name": "hostNameBindingsCopy",
"count": "[length(parameters('customDomainNames'))]",
"mode": "Serial",
"batchSize": 1
},
"apiVersion": "[variables('webApiVersion')]",
"name": "[concat(variables('webAppName'), '/', parameters('customDomainNames')[copyIndex()])]",
"type": "Microsoft.Web/sites/hostNameBindings",
"kind": "string",
"location": "[resourceGroup().location]",
"condition": "[greater(length(parameters('customDomainNames')), 0)]",
"dependsOn": [
"[resourceId('Microsoft.Web/sites', variables('webAppName'))]"
],
"properties": {
"customHostNameDnsRecordType": "CName",
"hostNameType": "Verified",
"siteName": "parameters('webAppName')"
}
I have two buckets, mywebsite.com and www.mywebsite.com.
I have done the following -
Made the bucket mywebsite.com public with the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mywebsite.com/*"
    }
  ]
}
Set the index.html file as the index document
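If you prefer the CLI, that step can be done with aws s3 website (a sketch, assuming the bucket name above):
# enable static website hosting with index.html as the index document
aws s3 website s3://mywebsite.com/ --index-document index.html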
I can now see my website loading; however, this is only when I click the endpoint URL - http://mywebsite.com.s3-website.eu-west-2.amazonaws.com
Of course my actual website is simply https://mywebsite.com/, yet I do not see any of my files being rendered there.
Is there something I'm missing? It's all good having a working endpoint, but I need to see my files rendered on my actual domain.
Added a picture of my Route 53 settings below.
You need to create an alias record in your hosted zone for the domain "mywebsite.com" to point to the S3 bucket.
Remember though that there are some restrictions:
The S3 bucket must have the same name as your domain name.
The domain's DNS has to be hosted in a Route 53 hosted zone, since alias records are a Route 53 feature.
Of course, you need to own the domain name "mywebsite.com"; just having an S3 bucket doesn't mean you own a domain name.
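As a sketch of that alias record via the CLI (YOUR_ZONE_ID is a placeholder for your own hosted zone's ID; Z3GKZC51ZF0DB4 is what I believe to be the fixed zone ID of the eu-west-2 S3 website endpoint, so verify it for your region in the AWS endpoints documentation):
# create/update an A-record alias for mywebsite.com pointing at the S3 website endpoint
aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "mywebsite.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z3GKZC51ZF0DB4",
        "DNSName": "s3-website.eu-west-2.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'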
The problem is that even if I set condValues to PT10S, when I send the request to Context Broker it calls the reference URL right away, not after 10 seconds, and then it continues to send requests every 10 seconds.
My question: is there a way to avoid that first initial request?
Here is the body of the request that I send to the server where Context Broker is installed.
{
  "entities": [{
    "type": "Cycle",
    "isPattern": "false",
    "id": "someid"
  }],
  "attributes": [
    ...
  ],
  "reference": "someurl",
  "duration": "P1M",
  "notifyConditions": [{
    "type": "ONTIMEINTERVAL",
    "condValues": [
      "PT10S"
    ]
  }]
}
At the present moment (Orion 1.1) the initial notification cannot be avoided. However, being able to configure that behaviour would be an interesting feature to develop in the future and, consequently, a GitHub issue was created some time ago about it.
In addition, note that ONTIMEINTERVAL subscriptions are no longer supported, so you should avoid using them:
ONTIMEINTERVAL subscriptions have several problems (introduce state in CB, thus making horizontal scaling configuration much harder, and makes it difficult to introduce pagination/filtering). Actually, they aren't really needed, as any use case based on ONTIMEINTERVAL notification can be converted to an equivalent use case in which the receptor runs queryContext at the same frequency (and taking advantage of the features of queryContext, such as pagination or filtering).
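For reference, the equivalent polling approach would be to POST to /v1/queryContext every 10 seconds with a body like this (the attribute name someAttribute is a placeholder of mine, since your attribute list is elided above):
{
  "entities": [{
    "type": "Cycle",
    "isPattern": "false",
    "id": "someid"
  }],
  "attributes": [
    "someAttribute"
  ]
}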
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).
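If I recall the documentation correctly, the new behaviour is opt-in through a URI option at subscription creation time, along these lines (check the linked docs for the exact option name):
POST /v2/subscriptions?options=skipInitialNotification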