AWS S3 bucket changes are not reflected on my domain - html

I have two buckets, mywebsite.com and www.mywebsite.com.
I have done the following -
Made the bucket mywebsite.com public with the following bucket policy -
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mywebsite.com/*"
}
]
}
Set the index.html file as the index document
I can now see my website loading, however this is only when I click the endpoint url - http://mywebsite.com.s3-website.eu-west-2.amazonaws.com
Of course my actual website is simply https://mywebsite.com/ - yet I do not see any of my files being rendered there.
Is there something I'm missing? It's all well and good having a working endpoint, but I need to see my files rendered on my actual domain.
I've added a picture of my Route 53 settings below.

You need to create an alias record in your hosted zone for the domain "mywebsite.com" to point to the S3 bucket.
Remember though that there are some restrictions:
The S3 bucket must have the same name as your domain name.
The domain's DNS must be hosted in a Route 53 hosted zone, since alias records are a Route 53 feature (the domain itself doesn't have to be registered through Route 53).
Of course, you need to own the domain name "mywebsite.com". Just having an S3 bucket doesn't mean you own the domain name.
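Since the picture of the Route 53 settings isn't reproduced here, here is a minimal sketch of what the alias record could look like as a change batch (the kind of JSON you would pass to aws route53 change-resource-record-sets, or enter equivalently in the console). It assumes the hosted zone is for mywebsite.com and the bucket lives in eu-west-2; the AliasTarget hosted zone ID below is the fixed value AWS publishes for the eu-west-2 S3 website endpoint, so double-check it against the current documentation:
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mywebsite.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3GKZC51ZF0DB4",
          "DNSName": "s3-website.eu-west-2.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
Note that the alias points at the regional website endpoint, not at a bucket-specific URL; Route 53 resolves the bucket by name, which is why the bucket must be named exactly mywebsite.com.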

Related

How to describe only a specific instance in a Policy

I am trying to create a policy to restrict users to viewing only a specific instance in the AWS EC2 console. I have tried the policy below, but it still shows me all my available instances, so I am wondering where I went wrong in my JSON policy. Thank you
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/UserName": "${aws:username}"
        }
      }
    },
    {
      "Effect": "Deny",
      "Action": "ec2:Describe*",
      "Resource": "arn:aws:ec2:*:*:DescribeInstances/instance-id"
    }
  ]
}
In looking at Actions, resources, and condition keys for Amazon EC2 - Service Authorization Reference, the DescribeInstances API call does not accept any Conditions to limit the results.
Therefore, users either have permission to make that API call (and hence view all instances), or you can Deny them from being able to make the API call. There is no ability to control which instances they can include in their request.
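For what it's worth, the pattern usually used instead is to allow the Describe* calls unconditionally (accepting that users can see every instance) and scope only the mutating actions to the instances tagged for that user. A rough sketch, reusing the UserName tag key from the question; the action list and ARNs are illustrative and should be adjusted to your case:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDescribeAll",
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Sid": "AllowActionsOnTaggedInstancesOnly",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/UserName": "${aws:username}"
        }
      }
    }
  ]
}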
Agree with John.
A slightly different way to go about this is not with policies and restrictions but with tags and filters on the console.
It's not exactly what you want, but if you only want people to see the instances they should, tag them and send them a link like
https://ap-southeast-2.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-1#Instances:tag:YourTagName=AllYouCanSee

What is the right syntax for an IAM policy to add to AWS Secret Manager to restrict access by IP

I am creating a Secret in AWS Secrets Manager and I am trying to put in a policy to restrict access by IP.
I do it in the Secrets Manager console, in the [Resource Permissions] section.
I keep getting a syntax error, but it doesn't say what the error is.
Here is the policy I am trying (it was created via the visual editor in the AWS console).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "secretsmanager:*",
      "Resource": "arn:aws:secretsmanager:us-east-2:722317156788:secret:dev/playAround/junju-MWTXvg",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "210.75.12.75/32"
        }
      }
    }
  ]
}
It works after making two changes, as below:
remove the leading space in front of the opening brace "{" on the first line of the policy
for resource-based policies, a Principal is required (in certain circumstances)
Please refer to the attached picture of your updated policy to resolve the issue.
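Since the picture isn't reproduced here, a sketch of what the corrected policy might look like: it is the same statement with a Principal added. Using the account root as the principal (derived from the account ID in the question's ARN) delegates access to IAM within that account; in practice you would normally list the specific users or roles you want to allow:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::722317156788:root"
      },
      "Action": "secretsmanager:*",
      "Resource": "arn:aws:secretsmanager:us-east-2:722317156788:secret:dev/playAround/junju-MWTXvg",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "210.75.12.75/32"
        }
      }
    }
  ]
}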

AWS CloudFront Issue for Custom Error File: AccessDenied Message

This is my first post on Stack Overflow, and I have tried to search for the answer to a problem I am currently having with CloudFront serving up a static S3 website page - to be precise, a custom 404 error page. Hope you can help me out :=))
I am not using any code, simply using the AWS console as a POC. Here is the scenario:
a) I have created two buckets. The names are (as an example): mybucket.com and www.mybucket.com.
b) I have placed my static site (a very simple one) inside the mybucket.com and redirect www.mybucket.com to it.
c) The content bucket (mybucket.com) has an index.html file, an image file. I have created a folder under the bucket (called error) and placed a custom error message file called 404error.html in it.
d) The index.html file also calls a simple JavaScript code that loads the contents of another file (welcome.html) from another bucket (resource.mybucket.com). I have ensured that bucket is enabled for CORS and it is working.
e) The bucket has a bucket policy that allows everyone access to the bucket and its contents. The bucket policy is shown below:
{
  "Id": "Policy1402570669260",
  "Statement": [
    {
      "Sid": "Stmt1402570667997",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket.com/*",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
f) I have ensured that www.mybucket.com and resource.mybucket.com also have the same policy.
g) mybucket.com has been configured for static website hosting, and the error file for mybucket.com has been configured to be error/404error.html.
h) If I access the site using the S3 website URL (mybucket.com.s3-website-.amazonaws.com) and try to access a non-existent file (say myfile.html), it correctly shows the custom 404 error page.
The problem arises when I try to access the page using the CloudFront distribution. I created a CloudFront distribution on the S3 bucket (mybucket.com) and here are the properties I set:
a) Error Page:
i) HTTP Error Code: 404-Not Found
ii) Error Caching TTL: 300
iii) Customize Error Response: Yes
iv) Response Page Path: /error/404error.html
v) HTTP Response Code: OK
b) A separate cache behaviour was set as well:
i) Path Pattern: /error/*
ii) Restrict Viewer Access: No
I am keeping it very simple and standard. I am not forwarding cookies or query strings or using any signed URLs etc.
Once the distribution is created, when I try to access the site with the CloudFront URL, the main page works fine. If I try to test with a non-existent page, however, I am not served the custom 404 error page that I configured. Instead, I get the following XML file in the browser (Chrome/Firefox - latest):
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>EB4CA7EC05512320</RequestId>
<HostId>some-really-big-alphanumeric-id</HostId>
</Error>
No clue is shown in the Console when I try to inspect elements in the browser.
Now I know this AccessDenied error has been reported and discussed before, and I have tried what most suggest: giving full access to the bucket. I have ensured that (as you can see from the bucket policy above) access is open to anybody. I have also tried to ensure the Origin Access Identity has been given GetObject permission. I have also dropped and recreated the CloudFront distribution, and deleted/re-uploaded the error folder and the 404error.html file within the folder. The error file is manually accessible from the CloudFront URL:
http://xxxxxxxx.cloudfront.net/error/404error.html
But it does not work if I try to access an arbitrary non-existent file:
http://xxxxxxxx.cloudfront.net/myfile.html
Is there something I am missing here?
I really appreciate your help.
Regards
Here is a rudimentary policy for making your S3 bucket work with CloudFront custom error pages.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<yourBucket>",
      "Principal": "*"
    },
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<yourBucket>/*",
      "Principal": "*"
    }
  ]
}
As Ben W already pointed out, the trick is to give the ListBucket permission. Without that you will get an access denied error.
It may also be worth mentioning that 5xx error pages only make sense if you serve them from a bucket other than the one your website is on.
Also, a 404 error should respond with a 404 status code, even on your custom error page, and not just suddenly turn into a 200. The same goes for the other error codes, of course.
If you set up a bucket (the-bucket) quickly, you may set it up without List permissions for Everyone. This will stop CloudFront from correctly determining whether an asset is a 404.
Your uploader tool may be uploading with read permissions on each object, so you will not notice this missing bucket-level permission.
So if you request <URL>/non-existent.html, CloudFront tries to read the bucket e.g. http://the-bucket.s3.amazonaws.com/non-existent.html
if list permissions are granted to everyone, a 404 is returned, and CloudFront can remap the request as a 200 or a custom 404
if list permissions are not granted to everyone, a 403 is returned, and CloudFront returns the 403 to the end user (which is what you are seeing in the log).
It makes perfect sense, but is quite confusing!
I got these hints from http://blog.celingest.com/en/2013/12/12/cloudfront-configuring-custom-error-pages/ and your question might also be related to https://serverfault.com/questions/642511/how-to-store-a-cloudfront-custom-error-page-in-s3/
You need another S3 bucket to host your error pages
You need to add another CloudFront origin pointing to the bucket where your error pages are
The cache behaviour of the newly-created origin should have a Path Pattern pointing to the folder (in the error page bucket) where the error pages reside
You can then use that path in the Response Page Path when you create the Custom Error Response config
S3 permissions changed in 2019. If you're reading this in 2019 or later, you can't follow any of the above advice! I made a tutorial to follow on YouTube:
https://www.youtube.com/watch?v=gBkysF8_-Es
I ran into this when setting up a single-page app where I wanted every missing path to render /index.html.
If you set up the CloudFront "Error pages" handling to redirect from HTTP error code 403 to the path /index.html with response code 200, it should just work.
If you set it to handle error code 404, you'll get AccessDenied unless you give everyone ListBucket permissions, like some other answers describe. But you don't need that if you handle 403 instead.
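For completeness, if the distribution is managed as code, the same 403-to-index.html mapping can be expressed in a CloudFormation DistributionConfig fragment; a minimal sketch (the caching TTL value is an arbitrary choice, and the rest of the distribution config is omitted):
"CustomErrorResponses": [
  {
    "ErrorCode": 403,
    "ResponseCode": 200,
    "ResponsePagePath": "/index.html",
    "ErrorCachingMinTTL": 300
  }
]
With this in place, any request the origin refuses (including missing keys, which surface as 403 when ListBucket isn't granted) comes back as /index.html with a 200, which is what a single-page app router needs.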

Amazon S3 Restrict Access To File By Referer - Chrome Has Issues

This is what my Amazon S3 bucket policy looks like (generated in part, using the AWS Policy Generator):
{
  "Id": "Policy1350503700228",
  "Statement": [
    {
      "Sid": "Stmt1350503699292",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::files.example.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://example.com/*",
            "http://www.example.com/*"
          ]
        }
      },
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
What the bucket policy is supposed to do is throw a '403 Forbidden' error if any file in the bucket is accessed directly or from a referrer other than (www.)example.com.
It seems to work, except that Chrome seems to have issues with PDF files served in this manner (for instance, images load just fine). So, any PDF from files.example.com (with referrer based restrictions) seems to be loading forever in Chrome (latest version, on Ubuntu 12.04). Firefox on the other hand loads the PDF, which is less than 100KB in size, in a snap.
Any idea as to what I am / could be doing wrong?
PS: If I right-click and select 'Save As..' Chrome is able to download the file. I don't understand why it's not showing it.
I checked developer tools in Chrome and found that the Chrome PDF plugin requests the PDF in multiple chunks. The first chunk has the correct referer, but all subsequent chunks have https://s3.amazonaws.com/.... as the referer instead of http://mywebsite.com. Adding https://s3.amazonaws.com/* to the referer list in the bucket policy solved the problem.
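For reference, the updated Condition block from the policy in the question would then look something like this (keeping the original two referers and adding the S3 one):
"Condition": {
  "StringLike": {
    "aws:Referer": [
      "http://example.com/*",
      "http://www.example.com/*",
      "https://s3.amazonaws.com/*"
    ]
  }
},
Note that this loosens the restriction somewhat, since any request whose referer is an s3.amazonaws.com URL will now also be allowed.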
Go into your bucket and double-check the MIME type specified on the file (metadata tab). It should be Content-Type: application/pdf
You can set the response-content-disposition to "attachment" as described in this post: https://stackoverflow.com/a/9099933/568383

S3 policy to stop hotlinking?

Our current S3 policy reads as:
{
  "Version": "2008-10-17",
  "Id": "45103629-690a-4a93-97f8-1abe2f9bb68c",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::incredibad29/*"
    }
  ]
}
This just allows anyone to access the files within the bucket.
We want to add a hotlinking statement, so users can ONLY access a file if referred from our site - so from a domain starting with incredibad29.com or www.incredibad.com.
I just can't figure out how to do this. Any help would be amazing, thank you!
If it is for images and other media types, there is a known hack that uses content type headers:
There’s a workaround that you may use to block hotlinking of selective images and files that you think are putting a major strain in your Amazon S3 budget.
When you upload a file to your Amazon S3 account, the service assigns a certain Content-Type to every file based on its extension. For instance, a .jpg file will have the Content-Type set as image/jpg while a .html file will have the Content-Type as text/html. A hidden feature in Amazon S3 is that you can manually assign any Content-Type to any file, irrespective of the file’s extension, and this is what you can use to prevent hotlinking.
From: http://www.labnol.org/internet/prevent-image-hotlinking-in-amazon-s3/13156/
I think this is pretty much the basic technique. However, if you skim the roughly 6350 Google results for `s3 hotlinking deny` you might find alternative ways :)
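That said, the more direct answer to the question is a Referer-based bucket policy, the same approach used in the files.example.com question above. A sketch, assuming the site is served from incredibad29.com and www.incredibad29.com (and keeping in mind that Referer headers can be stripped or spoofed, so this is a deterrent rather than real security):
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowFromOurSiteOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::incredibad29/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://incredibad29.com/*",
            "http://www.incredibad29.com/*"
          ]
        }
      }
    }
  ]
}
This statement would replace the unconditional AddPerm statement; with the condition in place, requests without a matching Referer receive a 403.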