I have been trying to add a JSON policy inside a YAML file, but I have been unsuccessful so far:
custom:
  deploymentBucket:
    versioning: true
    policy: |
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowSSLRequestsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
              "arn:aws:s3:::deeperion-deployment-bucket",
              "arn:aws:s3:::deeperion-deployment-bucket/*"
            ],
            "Condition": {
              "Bool": {
                "aws:SecureTransport": "false"
              }
            }
          }
        ]
      }
The serverless-deployment-bucket plugin below allows you to add a bucket policy to the deployment bucket:
https://www.serverless.com/plugins/serverless-deployment-bucket
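For reference, a minimal serverless.yml sketch using that plugin might look like the following. The service name, runtime, and bucket name are placeholders, and it assumes the plugin's policy option accepts the raw JSON string shown above (the plugin docs describe adding a bucket policy to the deployment bucket):

service: my-service

plugins:
  - serverless-deployment-bucket

provider:
  name: aws
  runtime: nodejs18.x
  deploymentBucket:
    name: deeperion-deployment-bucket

custom:
  deploymentBucket:
    versioning: true
    policy: |
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowSSLRequestsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
              "arn:aws:s3:::deeperion-deployment-bucket",
              "arn:aws:s3:::deeperion-deployment-bucket/*"
            ],
            "Condition": { "Bool": { "aws:SecureTransport": "false" } }
          }
        ]
      }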
Related
I have a situation where I need to restrict an S3 bucket to deny all IPs except a provided list, while still allowing access for Snowflake. Since the list of possible IP addresses Snowflake can use in a region is large - https://ip-ranges.amazonaws.com/ip-ranges.json - I was trying to see if I could instead add an 'Allow' based on the IAM role created for the Snowflake S3 stage. The policy I tried looks like this:
{
  "Version": "2012-10-17",
  "Id": "SourceIP",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/snowflake-role"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::s3-bucket"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/snowflake-role"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::s3-bucket/*"
    },
    {
      "Sid": "SourceIP",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::s3-bucket",
        "arn:aws:s3:::s3-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "10.10.100.10",
            "10.10.100.11",
            "10.10.100.12",
            "10.10.100.13"
          ]
        }
      }
    }
  ]
}
This works perfectly for blocking other IP addresses, but Snowflake cannot access the bucket.
Since the explicit 'Deny' overrides the 'Allow' statements for Snowflake regardless of the IP, I tried allowing by IP address instead, as below:
{
  "Version": "2012-10-17",
  "Id": "SourceIP",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/snowflake-role"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::s3-bucket"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/snowflake-role"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::s3-bucket/*"
    },
    {
      "Sid": "SourceIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::s3-bucket",
        "arn:aws:s3:::s3-bucket/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "10.10.100.10",
            "10.10.100.11",
            "10.10.100.12",
            "10.10.100.13"
          ]
        }
      }
    }
  ]
}
Now Snowflake can access the bucket, but the IP restriction doesn't work: all IPs can access the bucket. Can someone help me with my scenario?
I think the most elegant solution would indeed be to create an IAM role and assign it to the Snowflake integrations you want to allow to access the S3 bucket. After that, block all access to the bucket with an explicit Deny for "Principal": "*". Finally, use the aws:userId or aws:PrincipalArn condition keys to exempt requests made through that role from the Deny, so only principals using the role can access the bucket.
Have a look at this article for more details: https://levelup.gitconnected.com/how-i-locked-the-whole-company-out-of-an-amazon-s3-bucket-1781de51e4be
Best, Stefan
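To make that concrete, here is a rough sketch of such a Deny statement. The bucket name, IP list, and role ARN are the placeholders from the question, and it uses aws:PrincipalArn (one of the two keys mentioned above); verify the exact ARN against your account before relying on it. It is meant to replace the third statement of the original policy, with the two Allow statements for the Snowflake role kept as they are:

{
  "Sid": "DenyUnlessAllowedIpOrSnowflakeRole",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::s3-bucket",
    "arn:aws:s3:::s3-bucket/*"
  ],
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": [
        "10.10.100.10",
        "10.10.100.11",
        "10.10.100.12",
        "10.10.100.13"
      ]
    },
    "ArnNotLike": {
      "aws:PrincipalArn": "arn:aws:iam::111111111111:role/snowflake-role"
    }
  }
}

Because condition operators in a single Condition block are ANDed, the Deny only fires when the request comes from an IP outside the list and is not made through the Snowflake role.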
I am defining attribute-based access control (ABAC) for AWS IAM within my Terraform file. A sample policy is:
resource "aws_iam_role_policy" "testS3" {
name = "testS3"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::dev-${aws:PrincipalTag/team}*"
}
]
}
EOF
}
How do I keep that ${...} block literal within Terraform? Terraform interprets it as one of its own interpolations.
It worked with an extra $ in the string:
resource "aws_iam_role_policy" "testS3" {
name = "testS3"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::dev-$${aws:PrincipalTag/team}*"
}
]
}
EOF
}
I also tried a variables.tf file and referenced the variable in the JSON here.
variables.tf
variable "principaltag" {
default = "$${aws:PrincipalTag/tedteam}"
}
policy.tf
resource "aws_iam_role_policy" "testS3" {
name = "testS3"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::dev-${var.principaltag}*"
}
]
}
EOF
}
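As a side note, and not something from the original post, the same policy can also be built with jsonencode instead of a heredoc. The $$ escape is still required because quoted strings are templates in HCL too, and the role reference below is a hypothetical placeholder:

resource "aws_iam_role_policy" "testS3" {
  name = "testS3"
  role = aws_iam_role.team_role.id # hypothetical role resource, not in the original post
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["s3:ListBucket"]
        # $${...} keeps the IAM policy variable literal instead of being
        # treated as Terraform interpolation
        Resource = "arn:aws:s3:::dev-$${aws:PrincipalTag/team}*"
      }
    ]
  })
}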
I have S3 buckets named per team. For example, the policy below works if I want to grant Get and List permissions by using a PrincipalTag in the Condition operator, but I would have to define a similar policy with a different S3 ARN for every team.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::companyName-TeamName*",
        "arn:aws:s3:::companyName-TeamName*/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/teamname": "${aws:PrincipalTag/teamname}"
        }
      }
    }
  ]
}
What if I want to define the resource ARN using the PrincipalTag, like below?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::companyName-${aws:PrincipalTag/teamname}*",
        "arn:aws:s3:::companyName-${aws:PrincipalTag/teamname}*/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/teamname": "${aws:PrincipalTag/teamname}"
        }
      }
    }
  ]
}
All teams assume their own roles, which carry a tag 'teamname': 'Their Team Name'.
Can I define a policy like this? It would reduce the redundancy of policies; I do not want to list every S3 ARN in the Resource section, since that would be a long list of teams and their buckets.
Does anyone know how to create a policy using CloudFormation and then have another CloudFormation template that assigns that policy to a role?
I'm looking at http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html and it doesn't answer my question.
The link between a policy and a role is declared in the AWS::IAM::Policy resource. So, for instance, you can have one stack export the role and another stack import it using the intrinsic function Fn::ImportValue and link it to a policy resource.
Exporting stack:
Resources:
  myRole:
    Type: "AWS::IAM::Role"
    Properties:
      ...
Outputs:
  exportedRole:
    Value: !Ref myRole
    Export:
      Name: "myExportedRole"
Importing stack:
Resources:
  myPolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      Roles:
        - !ImportValue myExportedRole
      ...
You can create the role and the policy at the same time. Here is an example:
"LambdaFunctionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "AlexaSkillCloudWatchLogsAccess",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowLogging",
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"*"
]
}
]
}
}
]
}
}
This resource creates a role for a Lambda function with an inline policy included. You can then reference the ARN of the role from a Lambda function in the same template with "Fn::GetAtt".
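For example, a minimal sketch of how that role might be referenced from a function in the same template (the handler, runtime, and code location are placeholders):

"LambdaFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Handler": "index.handler",
    "Runtime": "nodejs18.x",
    "Code": { "S3Bucket": "my-code-bucket", "S3Key": "function.zip" },
    "Role": { "Fn::GetAtt": ["LambdaFunctionRole", "Arn"] }
  }
}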
I tried preventing hotlinking of media files on Amazon S3 with this bucket policy:
{
  "Version": "2008-10-17",
  "Id": "my-id",
  "Statement": [
    {
      "Sid": "Allow get requests to specific referrers",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucketname/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://sitename.com/"
        }
      }
    },
    {
      "Sid": "Allow CloudFront get requests",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::amazonaccountid:root"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}
The ACL is set to private, and I am still unable to access the files I am trying to serve.
I tried many different policies that I found here, but none of them seem to have any effect.
The files I am trying to protect from hotlinking are .swf files.
When I use the direct (bucketname.s3.amazonaws.com) link without CloudFront, it works.
Here is the bucket policy I used that got it to work.
{
  "Version": "2008-10-17",
  "Id": "http referer policy",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.mysite.com and mysite.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucketname/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://www.mysite.com/*"
        }
      }
    }
  ]
}
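If requests referred from the bare domain should be allowed as well (the Sid mentions both variants, but the condition above only matches the www form), aws:Referer accepts a list of patterns. A sketch of the condition block would be:

"Condition": {
  "StringLike": {
    "aws:Referer": [
      "http://www.mysite.com/*",
      "http://mysite.com/*"
    ]
  }
}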