I am trying to create a policy that restricts users to viewing only specific instances in the AWS EC2 console. I have tried the policy below, but it still shows me all of my available instances, so I am wondering where I went wrong in my JSON policy. Thank you.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/UserName": "${aws:username}"
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": "ec2:Describe*",
            "Resource": "arn:aws:ec2:*:*:DescribeInstances/instance-id"
        }
    ]
}
Looking at Actions, resources, and condition keys for Amazon EC2 in the Service Authorization Reference, the DescribeInstances API call does not accept any Conditions to limit the results.
Therefore, users either have permission to make that API call (and hence view all instances), or you can Deny them from being able to make the API call. There is no ability to control which instances they can include in their request.
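For what it's worth, here is a minimal sketch of that split, using hypothetical user and policy names ("some-user", "TagScopedEc2"): tag-based conditions work for mutating actions such as ec2:StartInstances and ec2:StopInstances, while ec2:DescribeInstances can only be granted or denied as a whole.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # These actions do support the ec2:ResourceTag condition key.
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
            "Resource": "*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/UserName": "${aws:username}"}},
        },
        {
            # DescribeInstances accepts no resource-level or tag conditions:
            # either grant it (users see every instance) or omit/deny it.
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*",
        },
    ],
}

# "some-user" and "TagScopedEc2" are placeholders, not names from the question.
iam.put_user_policy(
    UserName="some-user",
    PolicyName="TagScopedEc2",
    PolicyDocument=json.dumps(policy),
)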
Agree with John.
A slightly different way to go about this is not with policies and restrictions, but with tags and filters in the console.
It is not exactly what you want, but if you only want people to see the instances they should, tag those instances and send each user a link like:
https://ap-southeast-2.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-1#Instances:tag:YourTagName=AllYouCanSee
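If it helps, here is a small sketch of how such a per-user console link could be generated; the region, tag name and tag value are assumptions, so substitute your own.
from urllib.parse import quote

def ec2_console_filter_link(region: str, tag_name: str, tag_value: str) -> str:
    # The EC2 console filters the Instances view via "#Instances:tag:<name>=<value>".
    return (
        f"https://{region}.console.aws.amazon.com/ec2/v2/home"
        f"?region={region}#Instances:tag:{quote(tag_name)}={quote(tag_value)}"
    )

# e.g. a link for everything tagged UserName=alice in ap-southeast-1
print(ec2_console_filter_link("ap-southeast-1", "UserName", "alice"))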
I have a container that is deployed with Fargate and runs without any issues when I select "Run Task" in ECS. The container uses the S3, SES and CloudWatch services (it contains a Python script). When a task is run, I receive an email with output files.
The next step is to trigger a task in ECS to run this container with Fargate on a schedule. For that, I am trying to use Amazon EventBridge. However, something is wrong, because the tasks fail to run.
The rule that I create has the following setup:
a cron expression, which I have confirmed is valid (the next 10 trigger dates appear in the console).
I choose AWS Service -> ECS Task and then set the cluster, task name and subnet ID.
I choose the task execution role (ecsTaskExecutionRole). This role has an Amazon_EventBridge_Invoke_ECS policy attached to it, which came from previous failed runs.
The rule was successfully attached to the task in ECS, because if I go to the specified cluster, it appears under the Scheduled Tasks tab. I have tried multiple configurations and I keep getting FailedInvocations, which makes me think it is a problem with the role's policies.
I have created an additional target for the rule to log to CloudWatch, but the logs are not useful at all. I have also checked CloudTrail and looked for RunTask events. On some occasions, when I set a rule, no RunTask events show up in CloudTrail. Other times they appear but do not show any ErrorCode. I have also had instances where the RunTask calls failed with InvalidParameterException: "No Container Instances were found in your cluster". Any ideas about what may be wrong?
I'm not sure whether this is the problem in your case.
I was having a VERY similar issue, and I fixed it by changing the role's policy to this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecs:RunTask"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
I have the feeling that you need to point your rule at a new role that has this policy, instead of the one that you mentioned (ecsTaskExecutionRole), since that role only has this policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
EDIT: Just to add: this would be the role that the EventBridge rule uses, not the one on the task definition within the cluster. The task definition should still use the role you've mentioned (ecsTaskExecutionRole).
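In case it helps, here is a rough sketch of how such a role could be created with boto3; the role name ("EventBridgeEcsRunTaskRole") is a placeholder. The important details are that the role trusts events.amazonaws.com and carries the ecs:RunTask / iam:PassRole policy shown above, and that its ARN is what you set as the role on the EventBridge rule target.
import json
import boto3

iam = boto3.client("iam")

# EventBridge must be able to assume the role it uses to invoke the target.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "events.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

role = iam.create_role(
    RoleName="EventBridgeEcsRunTaskRole",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

iam.put_role_policy(
    RoleName="EventBridgeEcsRunTaskRole",
    PolicyName="RunEcsTask",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "ecs:RunTask", "Resource": "*"},
            {"Effect": "Allow", "Action": "iam:PassRole", "Resource": "*"},
        ],
    }),
)

# This ARN goes on the EventBridge rule target; the task definition itself
# keeps using ecsTaskExecutionRole as its execution role.
print(role["Role"]["Arn"])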
I am creating a Secret in AWS Secrets Manager and I am trying to put in a policy to restrict access by IP.
I do this in the secret's console, in the [Resource Permissions] section.
I keep getting a syntax error, but it does not say what the error is.
Here is the policy I am trying (it was created via the visual editor in the AWS console).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "secretsmanager:*",
            "Resource": "arn:aws:secretsmanager:us-east-2:722317156788:secret:dev/playAround/junju-MWTXvg",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "210.75.12.75/32"
                }
            }
        }
    ]
}
It works after making two changes, as below:
remove the leading space in front of the opening brace "{" on the first line of the policy
for resource-based policies, a Principal is required (in certain circumstances)
A sketch of the updated policy along those lines follows below, which should resolve the issue.
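Here is a sketch of what the updated policy could look like, assuming the account root as the Principal (adjust it to the specific IAM users or roles you actually want to allow); the secret ARN and account ID are taken from the question.
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

secret_arn = "arn:aws:secretsmanager:us-east-2:722317156788:secret:dev/playAround/junju-MWTXvg"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            # Resource-based policies need a Principal; the account root is an assumption here.
            "Principal": {"AWS": "arn:aws:iam::722317156788:root"},
            "Action": "secretsmanager:*",
            "Resource": secret_arn,
            "Condition": {"IpAddress": {"aws:SourceIp": "210.75.12.75/32"}},
        }
    ],
}

# Attaching the policy programmatically is equivalent to pasting it into
# the [Resource Permissions] section of the console.
secretsmanager.put_resource_policy(
    SecretId=secret_arn,
    ResourcePolicy=json.dumps(policy),
)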
I have two buckets, mywebsite.com and www.mywebsite.com.
I have done the following -
Made the bucket mywebsite.com public with the following policy -
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mywebsite.com/*"
}
]
}
Set the index.html file as the index document
I can now see my website loading, however this is only when I visit the endpoint URL - http://mywebsite.com.s3-website.eu-west-2.amazonaws.com
Of course my actual website is simply https://mywebsite.com/ - yet I do not see any of my files being rendered there.
Is there something I'm missing? It's all good having a working endpoint, but I need to see my files rendered on my actual domain.
I have added a picture of my Route 53 settings below.
You need to create an alias record in your hosted zone for the domain "mywebsite.com" to point to the S3 bucket.
Remember though that there are some restrictions:
The S3 bucket must have the same name as your domain name.
The domain's DNS has to be hosted in Route 53 (i.e. there is a Route 53 hosted zone for the domain).
Of course, you need to own the domain name "mywebsite.com" - just having an S3 bucket doesn't mean you own a domain name.
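For reference, a sketch of creating that alias record with boto3; HOSTED_ZONE_ID and S3_WEBSITE_ZONE_ID are placeholders (the latter is the fixed hosted zone ID of the S3 website endpoint for eu-west-2, which you can look up in the AWS documentation).
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z_YOUR_HOSTED_ZONE_ID"     # placeholder: your Route 53 hosted zone for mywebsite.com
S3_WEBSITE_ZONE_ID = "Z_S3_WEBSITE_ZONE_ID"  # placeholder: S3 website endpoint zone ID for eu-west-2

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "mywebsite.com",
                    "Type": "A",
                    # Alias to the S3 website endpoint in eu-west-2, matching
                    # the endpoint URL from the question.
                    "AliasTarget": {
                        "HostedZoneId": S3_WEBSITE_ZONE_ID,
                        "DNSName": "s3-website.eu-west-2.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)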
This is what my Amazon S3 bucket policy looks like (generated in part, using the AWS Policy Generator):
{
    "Id": "Policy1350503700228",
    "Statement": [
        {
            "Sid": "Stmt1350503699292",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::files.example.com/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://example.com/*",
                        "http://www.example.com/*"
                    ]
                }
            },
            "Principal": {
                "AWS": [
                    "*"
                ]
            }
        }
    ]
}
What the bucket policy is supposed to do is throw a '403 Forbidden' error if any file in the bucket is accessed directly or from a referrer other than (www.)example.com.
It seems to work, except that Chrome appears to have issues with PDF files served in this manner (images, for instance, load just fine). Any PDF from files.example.com (with the referrer-based restrictions) seems to load forever in Chrome (latest version, on Ubuntu 12.04). Firefox, on the other hand, loads the same PDF, which is less than 100KB in size, in a snap.
Any idea as to what I am / could be doing wrong?
PS: If I right-click and select 'Save As...', Chrome is able to download the file. I don't understand why it's not displaying it.
I checked the developer tools in Chrome and found that the Chrome PDF plugin requests the PDF in multiple chunks. The first chunk has the correct Referer, but all subsequent chunks have https://s3.amazonaws.com/.... instead of http://mywebsite.com. Adding https://s3.amazonaws.com/* to the bucket policy's referer list solved the problem.
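A sketch of what that adjusted policy could look like (bucket and referer values as in the question's policy), applied with boto3:
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSiteAndChromePdfViewer",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::files.example.com/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://example.com/*",
                        "http://www.example.com/*",
                        "https://s3.amazonaws.com/*",  # added for Chrome's chunked PDF requests
                    ]
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="files.example.com", Policy=json.dumps(policy))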
Go into your bucket and double-check the MIME type specified on the file (metadata tab). It should be Content-Type: application/pdf
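If you prefer to check this from code rather than the console, a quick sketch (bucket and key are placeholders):
import boto3

s3 = boto3.client("s3")

# head_object returns the stored metadata, including the Content-Type,
# without downloading the file; it should print "application/pdf".
response = s3.head_object(Bucket="files.example.com", Key="docs/some-file.pdf")
print(response["ContentType"])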
You can set the response-content-disposition to "attachment" as described in this post: https://stackoverflow.com/a/9099933/568383
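Along the lines of that linked answer, a sketch of a presigned URL that forces the download via response-content-disposition (bucket and key are placeholders):
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "files.example.com",
        "Key": "docs/some-file.pdf",  # placeholder key
        # Overrides the Content-Disposition header on the response so the
        # browser downloads the PDF instead of rendering it inline.
        "ResponseContentDisposition": 'attachment; filename="some-file.pdf"',
    },
    ExpiresIn=3600,
)
print(url)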
Our current S3 policy reads as:
{
    "Version": "2008-10-17",
    "Id": "45103629-690a-4a93-97f8-1abe2f9bb68c",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::incredibad29/*"
        }
    ]
}
This just allows anyone to access the files within it.
We want to add a hotlinking statement, so users can ONLY access a file if referred from our site - i.e. from a domain starting with incredibad29.com or www.incredibad29.com.
I just can't figure out how to do this. Any help would be amazing, thank you!
If it is for images and other media types, there is a known hack that uses content type headers:
There’s a workaround that you may use to block hotlinking of selective images and files that you think are putting a major strain in your Amazon S3 budget.
When you upload a file to your Amazon S3 account, the service assigns a certain Content-Type to every file based on its extension. For instance, a .jpg file will have the Content-Type set as image/jpg while a .html file will have the Content-Type as text/html. A hidden feature in Amazon S3 is that you can manually assign any Content-Type to any file, irrespective of the file’s extension, and this is what you can use to prevent hotlinking.
From: http://www.labnol.org/internet/prevent-image-hotlinking-in-amazon-s3/13156/
I think this is pretty much the basic technique. However, if you Google `s3 hotlinking deny` and skim the 6350 results, you might find alternative ways :)
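In case it is useful, here is a sketch of the Content-Type trick quoted above applied with boto3; the object key is a placeholder, and whether a browser still renders a hotlinked file this way depends on the browser.
import boto3

s3 = boto3.client("s3")

bucket = "incredibad29"    # bucket from the question
key = "images/photo.jpg"   # placeholder key

# Copy the object onto itself, replacing its metadata so it is served with a
# Content-Type that browsers tend to download rather than display inline.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    MetadataDirective="REPLACE",
    ContentType="application/octet-stream",
)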