Deny non-SDK and "NotIPAddress" in AWS S3 bucket policy - json

Hi everyone. I have a situation where I want a web application to add/update files in an S3 bucket, and then have only specified IPs be able to read those files. My web application uses the AWS SDK to access the S3 bucket and upload files. In other words, I want to allow SDK access and the specified IPs, and deny everything else.
I tried doing this through the S3 bucket policy but was unable to make it work. My latest attempt at the policy is as follows:
{
"Version": "2012-10-17",
"Id": "MyPolicy",
"Statement": [
{
"Sid": "MyPolicy",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
],
"Condition": {
"StringNotLike": {
"aws:SourceIp": [
"11.11.11.111/32",
"22.22.22.222/32",
"333.333.33.333/32",
"444.444.44.444/32"
],
"aws:SourceAccount": [
"123456789012"
]
}
}
}
]
}
This did not work, however, as it blocked even the bucket owner / root account from accessing files. Any help is appreciated, thanks in advance!
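One likely issue: aws:SourceIp is normally matched with the IpAddress / NotIpAddress condition operators rather than StringNotLike, and aws:SourceAccount is generally only present on requests that AWS services make on your behalf, so a negated condition on it can match every direct user request, which may be what locked the owner out. The CIDR matching a NotIpAddress-style Deny performs can be sketched locally in Python (the CIDRs below are placeholders taken from the policy above; this is illustrative, not the IAM engine):

```python
from ipaddress import ip_address, ip_network

# Placeholder CIDRs, as in the policy above.
ALLOWED_CIDRS = ["11.11.11.111/32", "22.22.22.222/32"]

def denied_by_ip(source_ip):
    """Mimic a NotIpAddress condition: the Deny matches whenever the
    caller's IP falls outside every allowed CIDR block."""
    ip = ip_address(source_ip)
    return not any(ip in ip_network(cidr) for cidr in ALLOWED_CIDRS)
```

Note that in a real Deny statement this IP condition would be ANDed with any other condition keys in the same block, so the Deny only fires when all of them match.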

Related

Json policy script for denying object upload to aws s3 if the object doesn't use aws s3 encryption or aws:kms encryption

Hi, I want to create an S3 policy for my bucket that denies uploading an object that doesn't use AWS S3 encryption or AWS KMS encryption (it must use one of the two). Here is the link for the policy generator: https://awspolicygen.s3.amazonaws.com/policygen.html
I have generated this policy.
{
"Id": "Policy1631518070654",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1631518063107",
"Action": [
"s3:PutObject"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::webserver7/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "AES256"
},
"StringNotLike": {
"s3:x-amz-server-side-encryption-aws-kms-key-id": "aws:kms"
}
},
"Principal": "*"
}
]
}
We can interpret the policy as follows: "If the object doesn't use AWS S3 encryption AND doesn't use aws:kms encryption, then deny the upload."
But an object can't use the two encryptions at the same time. So I want the policy to read:
"If the object doesn't use AWS S3 encryption OR doesn't use aws:kms encryption, then deny the object upload."
If you want OR, you need to have two statements:
{
"Id": "Policy1631518070654",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1631518063107",
"Action": [
"s3:PutObject"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::webserver7/*",
"Condition": {
"StringNotLike": {
"s3:x-amz-server-side-encryption-aws-kms-key-id": "aws:kms"
}
},
"Principal": "*"
},
{
"Sid": "Stmt16315180631072",
"Action": [
"s3:PutObject"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::webserver7/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "AES256"
}
},
"Principal": "*"
}
]
}
It is not possible to add this kind of policy at the moment. Try using a policy for a single encryption type only.
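To see why, the condition logic can be modelled locally. Condition keys inside a single statement are ANDed, while separate statements are evaluated independently, so two Deny statements act as an OR. Since an object carries only one x-amz-server-side-encryption value, the OR version ends up denying every upload. A quick Python sketch with hypothetical header values:

```python
def denied_single_statement(sse_header):
    # One statement, two conditions ANDed together: the Deny only
    # fires when the header is neither AES256 nor aws:kms.
    return sse_header != "AES256" and sse_header != "aws:kms"

def denied_two_statements(sse_header):
    # Two independent Deny statements (logical OR): any match denies.
    return sse_header != "aws:kms" or sse_header != "AES256"
```

Any single header value fails at least one of the two statements, so the two-statement version denies everything, which matches the conclusion that this kind of policy isn't workable.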

Permission Boundary IAM role denying attaching administrator policy

Can anyone point me to how to accomplish the instruction below? I have been experimenting with roles and policies, but I can't find a granular approach that denies attaching the administrator policy while maintaining the other IAM rights.
Set an IAM permission boundary on the development IAM role that explicitly denies attaching the administrator policy
You need to combine multiple conditions to achieve this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "IAMPermissions",
"Effect": "Allow",
"Action": [
"iam:*"
],
"Resource": "*"
},
{
"Sid": "DenyAttachAdministratorPolicy",
"Effect": "Deny",
"Action": [
"iam:AttachRolePolicy"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:PermissionsBoundary": "arn:aws:iam::012345678912:policy/MyPermissionBoundary"
},
"ArnEquals": {
"iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"
}
}
}
]
}
(You would need to update the permissions boundary ARN mentioned in the policy.)
Note that this might not handle other edge cases where a malicious user could potentially attach AdministratorAccess to something else and escalate their privileges (e.g. via a Lambda function or container maybe?).
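As a sanity check on how the Deny combines its conditions: the two condition blocks in the statement must both match for the Deny to apply (they are ANDed). A minimal Python sketch, using the placeholder ARNs from the policy above:

```python
BOUNDARY_ARN = "arn:aws:iam::012345678912:policy/MyPermissionBoundary"
ADMIN_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

def attach_denied(policy_arn, boundary_arn):
    """Both condition blocks must match (logical AND): the target role
    carries the permissions boundary AND the policy being attached is
    AdministratorAccess."""
    return boundary_arn == BOUNDARY_ARN and policy_arn == ADMIN_ARN
```

So attaching AdministratorAccess is blocked only on roles that carry the boundary; other attachments, and other policies, pass through to the Allow statement.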

How do you dynamically create an AWS IAM policy document with a variable number of resource blocks using terraform?

In my current terraform configuration I am using a static JSON file and importing into terraform using the file function to create an AWS IAM policy.
Terraform code:
resource "aws_iam_policy" "example" {
policy = "${file("policy.json")}"
}
AWS IAM Policy definition in JSON file (policy.json):
{
"Version": "2012-10-17",
"Id": "key-consolepolicy-2",
"Statement": [
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333:root"
},
"Action": "kms:*",
"Resource": "*"
},
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::777788889999:root"
]
},
"Action": [
"kms:Decrypt"
],
"Resource": "*"
},
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::444455556666:root"
]
},
"Action": [
"kms:Decrypt"
],
"Resource": "*"
}
]
}
My goal is to use a list of account numbers stored in a terraform variable and use that to dynamically build the aws_iam_policy resource in terraform. My first idea was to try the terraform jsonencode function. However, it looks like there might be a way to implement this using Terraform 0.12's for expressions.
The sticking point seems to be appending a variable number of resource blocks in the IAM policy.
Pseudo code below:
account_number_list = ["123","456","789"]
policy = {"Statement": []}
for each account_number in account_number_list:
    policy["Statement"].append(<statement block referencing account_number>)
Any help is appreciated.
Best,
Andrew
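The pseudo code above maps directly onto building the statement list programmatically and serializing it. A Python sketch (hypothetical helper name, placeholder account IDs) of the same jsonencode-style approach:

```python
import json

def build_key_policy(account_ids):
    """Build one kms:Decrypt statement per account, mirroring the
    repeated 'Allow use of the key' blocks in the static policy.json."""
    statements = [{
        "Sid": "Allow use of the key",
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::%s:root" % acct]},
        "Action": ["kms:Decrypt"],
        "Resource": "*",
    } for acct in account_ids]
    return json.dumps(
        {"Version": "2012-10-17", "Id": "key-consolepolicy-2",
         "Statement": statements}, indent=2)
```

Terraform's jsonencode over a for expression does the same thing natively, as do the answers below.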
The aws_iam_policy_document data source from aws gives you a way to create json policies all in terraform, without needing to import raw json from a file or from a multiline string.
Because you define your policy statements all in terraform, it has the benefit of letting you use looping/filtering on your principals array.
In your example, you could do something like:
data "aws_iam_policy_document" "example_doc" {
statement {
sid = "Enable IAM User Permissions"
effect = "Allow"
actions = [
"kms:*"
]
resources = [
"*"
]
principals {
type = "AWS"
identifiers = [
for account_id in var.account_number_list :
account_id
]
}
}
statement {
...other statements...
}
}
resource "aws_iam_policy" "example" {
// For terraform >=0.12 (on <0.12, use "${data.aws_iam_policy_document.example_doc.json}")
policy = data.aws_iam_policy_document.example_doc.json
}
1st option:
If you don't want to rebuild the policy in aws_iam_policy_document, you can use templatefile (see https://www.terraform.io/docs/language/functions/templatefile.html):
resource "aws_iam_policy" "example" {
policy = templatefile("policy.json",{account_number_list = ["123","456","789"]})
}
...
%{ for account in account_number_list ~}
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::${account}:root"
},
"Action": "kms:*",
"Resource": "*"
},
%{ endfor ~}
...
2nd option:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html#policy-vars-infotouse
AWS's IAM policy document syntax allows for replacement of policy
variables within a statement using ${...}-style notation, which
conflicts with Terraform's interpolation syntax. In order to use AWS
policy variables with this data source, use &{...} notation for
interpolations that should be processed by AWS rather than by
Terraform.
...
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::&{aws:userid}:root"
},
"Action": "kms:*",
"Resource": "*"
},
Like in: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document
This was great and is a good pattern to be able to hold onto. Unfortunately, I ran into an issue with it going up against the quota limit:
Assume Role Policy: LimitExceeded: Cannot exceed quota for ACLSizePerRole: 2048
You can request an increase on this quota, but supposedly the max is 4096. The assume-role policy I am attempting to create is needed for every AWS account we have, so we will eventually hit that limit as well.
It's unfortunate that you can't use wildcards within the ARNs of an assume-role policy, though you can use "*", which I would argue is much, much riskier.

Amazon S3 bucket policy allow access to ONLY specific http

I'm trying to restrict access to objects (media files) in an Amazon S3 bucket to a specific referring domain, privatewebsite.com, with a bucket policy, but I keep getting Access Denied no matter the referring domain.
I have the following settings for Block Public Access
Block public access to buckets and objects granted through new access control lists (ACLs) - On
Block public access to buckets and objects granted through any access control lists (ACLs) - On
Block public access to buckets and objects granted through new public bucket policies - Off
Block public and cross-account access to buckets and objects through any public bucket policies - Off
I've added the following policy, trying the referer URL with and without http:// and https:// (privatewebsite.com, https://privatewebsite.com, http://privatewebsite.com), yet I still get Access Denied.
{
"Version": "2012-10-17",
"Id": "Policy8675309",
"Statement": [
{
"Sid": "Stmt8675309",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-media-bucket/*",
"Condition": {
"StringLike": {
"aws:Referer": "https://privatewebsite.com"
}
}
},
{
"Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-media-bucket/*",
"Condition": {
"StringNotLike": {
"aws:Referer": [
"https://privatewebsite.com/*",
"http://privatewebsite.com/*"
]
}
}
}
]
}
Can anyone see any obvious errors in my bucket policy?
I expect this policy to ALLOW any request, when coming from a page on privatewebsite.com, while DENY-ing all other requests, but at the moment ALL requests are denied.
From Bucket Policy Examples - Restricting Access to a Specific HTTP Referrer:
{
"Version": "2012-10-17",
"Id": "http referer policy example",
"Statement": [
{
"Sid": "Allow get requests originating from www.example.com and example.com.",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"http://www.example.com/*",
"http://example.com/*"
]
}
}
}
]
}
This method only grants Allow access for the given Referer. There is no need to use a Deny policy with it because access is denied by default. Thus, only the Allow permissions are granted.
Try this for your StringLike section (the Allow statement):
"StringLike": {
"aws:Referer": [
"https://privatewebsite.com/*",
"http://privatewebsite.com/*"
]
}
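The likely culprit in the original Allow statement is the missing trailing /*: StringLike does shell-style wildcard matching against the full Referer header, so a bare origin pattern matches only itself, never an actual page URL. A quick Python approximation using fnmatchcase (hypothetical header values):

```python
from fnmatch import fnmatchcase

def referer_allowed(referer, patterns):
    """Approximate a StringLike condition: shell-style wildcard match
    against the full Referer header value."""
    return any(fnmatchcase(referer, p) for p in patterns)
```

A real browser sends the full page URL as the Referer, so patterns without the wildcard never match.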

Create a single IAM user to access only specific S3 bucket

I have many S3 buckets in my AWS account. I have now created an IAM user and a new S3 bucket, and I would like to give this user the ability to access the new S3 bucket using a client like CyberDuck.
I have tried creating many policies, but the user keeps getting permission to list all my other buckets as well. How can I give listing and writing access to a single S3 bucket only?
First you create a Policy to allow access to a single S3 bucket (IAM -> Policies -> Create Policy). You can use AWS Policy Generator (http://awspolicygen.s3.amazonaws.com/policygen.html), it should look something like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1528735049406",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:HeadBucket",
"s3:ListBucket",
"s3:ListObjects",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::YOURBUCKETNAME",
"arn:aws:s3:::YOURBUCKETNAME/*"
]
}
]
}
Save the policy and note the name you gave to it, then go to IAM -> Users and select the desired user. In the permissions tab, click 'Add permissions', then select 'Attach existing policies directly' near the top. Find your policy by its name, tick its checkbox and complete the process.
Per this ( https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/ )
they’ll need to be able to at least list all the buckets. But other than that, this also provides an example policy, which I just used last night for my own account, so I can confirm that it works.
Update
Okay, I've tested and confirmed using CyberDuck that the following policy (customized to your environment of course) will prevent users from viewing all root buckets, and only allow them access to the bucket you specify:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAllInBucket",
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket-for-single-user"
}
]
}
Just make sure that when you specify the path in CyberDuck, that you enter it as: bucket-for-single-user.s3.amazonaws.com.
Also, only START unrestricted like that, just to make sure it's working for you (since access appears to be an issue). After that, apply restrictions, you know...least privilege and all.
According to Cyberduck Help / Howto / Amazon S3, it supports directly entering the Bucket name, as <bucketname>.s3.amazonaws.com. If this is possible with the client you are using, you don't need s3:ListAllMyBuckets permissions.
Actions should be grouped by the Resources they apply to
(Conditions are also potentially different per Action).
This IAM policy will allow full control of all the content (everything in the bucket)
without control of the S3 bucket subresources (properties of the bucket itself):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BucketOperations",
"Effect": "Allow",
"Action": "s3:ListBucket*",
"Resource": "arn:aws:s3:::<bucketname>"
},
{
"Sid": "ObjectOperations",
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:ListMultipartUploads",
"s3:DeleteObject*",
"s3:GetObject*",
"s3:PutObject*"
],
"Resource": "arn:aws:s3:::<bucketname>/*"
},
{
"Sid": "DenyAllOthers",
"Effect": "Deny",
"Action": "s3:*",
"NotResource": [
"arn:aws:s3:::<bucketname>",
"arn:aws:s3:::<bucketname>/*"
]
}
]
}
If you aren't specifically trying to lock the IAM user out of every
possible public S3 bucket, you can leave the "DenyAllOthers" Sid off,
without granting additional permissions to the users.
FYI, the AWS ReadOnlyAccess managed policy automatically grants read access to every S3 bucket
for any principal it's attached to. I recommend ViewOnlyAccess instead (which will
unfortunately still grant s3:ListAllMyBuckets without the DenyAllOthers).
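The default-deny behaviour these policies rely on can be sketched with a toy evaluator (plain Python, illustrative only, not the real IAM engine): an explicit Deny always wins, any matching Allow grants, and everything else is implicitly denied.

```python
from fnmatch import fnmatchcase

def is_allowed(action, resource, statements):
    """Toy IAM evaluation order: explicit Deny beats Allow;
    no matching statement at all means an implicit deny."""
    def matches(stmt):
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        return (any(fnmatchcase(action, a) for a in actions)
                and fnmatchcase(resource, stmt["Resource"]))
    decisions = [s["Effect"] for s in statements if matches(s)]
    if "Deny" in decisions:
        return False
    return "Allow" in decisions
```

This is why the DenyAllOthers statement is optional when the user has no other S3 permissions: buckets you never mention fall through to the implicit deny anyway.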
I created my own policy and it works for me. The IAM user can still list all buckets, but can't do anything in the other buckets. The user only gets access to the specific bucket, with read, write, and delete privileges.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "<EXAMPLE_SID>",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<MYBUCKET>"
},
{
"Sid": "<EXAMPLE_SID>",
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
}, {
"Sid": "<EXAMPLE_SID>",
"Effect": "Deny",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<MYotherBUCKET>"
}, {
"Sid": "<EXAMPLE_SID>",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::<MYBUCKET>/*"
}
]
}
Then also add this policy to the user. It will restrict all types of operations on the other S3 bucket listed.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "<EXAMPLE_SID>",
"Effect": "Deny",
"Action": [
"s3:PutAnalyticsConfiguration",
"s3:GetObjectVersionTagging",
"s3:CreateBucket",
"s3:ReplicateObject",
"s3:GetObjectAcl",
"s3:DeleteBucketWebsite",
"s3:PutLifecycleConfiguration",
"s3:GetObjectVersionAcl",
"s3:PutBucketAcl",
"s3:PutObjectTagging",
"s3:DeleteObject",
"s3:GetIpConfiguration",
"s3:DeleteObjectTagging",
"s3:GetBucketWebsite",
"s3:PutReplicationConfiguration",
"s3:DeleteObjectVersionTagging",
"s3:GetBucketNotification",
"s3:PutBucketCORS",
"s3:DeleteBucketPolicy",
"s3:GetReplicationConfiguration",
"s3:ListMultipartUploadParts",
"s3:PutObject",
"s3:GetObject",
"s3:PutBucketNotification",
"s3:PutBucketLogging",
"s3:PutObjectVersionAcl",
"s3:GetAnalyticsConfiguration",
"s3:GetObjectVersionForReplication",
"s3:GetLifecycleConfiguration",
"s3:ListBucketByTags",
"s3:GetInventoryConfiguration",
"s3:GetBucketTagging",
"s3:PutAccelerateConfiguration",
"s3:DeleteObjectVersion",
"s3:GetBucketLogging",
"s3:ListBucketVersions",
"s3:ReplicateTags",
"s3:RestoreObject",
"s3:GetAccelerateConfiguration",
"s3:GetBucketPolicy",
"s3:PutEncryptionConfiguration",
"s3:GetEncryptionConfiguration",
"s3:GetObjectVersionTorrent",
"s3:AbortMultipartUpload",
"s3:PutBucketTagging",
"s3:GetBucketRequestPayment",
"s3:GetObjectTagging",
"s3:GetMetricsConfiguration",
"s3:DeleteBucket",
"s3:PutBucketVersioning",
"s3:PutObjectAcl",
"s3:ListBucketMultipartUploads",
"s3:PutMetricsConfiguration",
"s3:PutObjectVersionTagging",
"s3:GetBucketVersioning",
"s3:GetBucketAcl",
"s3:PutInventoryConfiguration",
"s3:PutIpConfiguration",
"s3:GetObjectTorrent",
"s3:ObjectOwnerOverrideToBucketOwner",
"s3:PutBucketWebsite",
"s3:PutBucketRequestPayment",
"s3:GetBucketCORS",
"s3:PutBucketPolicy",
"s3:GetBucketLocation",
"s3:ReplicateDelete",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::<MYotherBUCKET>/*",
"arn:aws:s3:::<MYotherBUCKET>"
]
}
]
}
I was recently able to get this to work using Amazon's documentation. The key for me was to point the IAM User to the specific bucket NOT the S3 console. Per the documentation, "Warning: After you change these permissions, the user gets an Access Denied error when they access the main Amazon S3 console. The main console link is similar to the following:
https://s3.console.aws.amazon.com/s3/home
Instead, the user must access the bucket using a direct console link to the bucket, similar to the following:
https://s3.console.aws.amazon.com/s3/buckets/awsexamplebucket/"
My policy is below:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1589486662000",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::AWSEXAMPLEBUCKET",
"arn:aws:s3:::AWSEXAMPLEBUCKET/*"
]
}
]
}