I am creating a virtual reality 360-degree video website using the krpano HTML5 player.
This was going great until I tested on Safari and realised it didn't work. The reason is that Safari does not support CORS (cross-origin resource sharing) for videos used through WebGL.
To clarify, if my videos were on the same server as my application files it would work, but because my files are hosted on Amazon S3, they are cross-origin. Now I'm unsure what to do, because I have built my application on DigitalOcean, which connects to my Amazon S3 bucket, but I cannot afford to upgrade my droplet just to get the storage I need (around 100 GB to start, increasing to terabytes as my video collection gets bigger).
So does anyone know a way to make it seem like the video is not coming from a different origin, or alternatively anything else I can do to get past this obstacle?
Is there any way I could set up Amazon S3 and Amazon EC2 so that they don't see each other as cross-origin?
EDIT:
I load my videos like this:
<script>
  function showVideo() {
    embedpano({
      swf: "/krpano/krpano.swf",
      xml: "/krpano/videopano.xml",
      target: "pano",
      html5: "only",
    });
  }
</script>
This then loads my XML file, which references the video files:
<krpano>
  <!-- add the video sources and play the video -->
  <action name="add_video_sources">
    videointerface_addsource('medium', 'https://s3-eu-west-1.amazonaws.com/myamazonbucket/Shoots/2016/06/the-first-video/videos/high.mp4|https://s3-eu-west-1.amazonaws.com/myama…ideos/high.webm');
    videointerface_play('medium');
  </action>
</krpano>
I don't know exactly how the krpano core works; I assume the JavaScript gets the URLs from the XML file and then makes a request to pull them in.
#datasage mentions in the comments that CloudFront is a common solution. I don't know if this is what he was thinking of, but it certainly will work.
I described using this solution to solve a different problem, in detail, on Server Fault. In that case, the question was about integrating the main site and "/blog/*" from a different server under a single domain name, making a unified web site.
This is exactly the same thing you need, for a different reason.
Create a CloudFront distribution, setting the alternate domain name to your site's name.
Create two (or more) origins, pointing to your dynamic and static content servers.
Use one of them as default, initially handling all possible path patterns (*, the default cache behavior) and then carve out appropriate paths to point to the other origin (e.g. /asset/* might point to the bucket, while the default behavior points to the application itself).
In this case, CloudFront is not being used for its primary purpose as a CDN. Instead, we're leveraging a secondary capability: using it as a reverse proxy that can selectively route requests to multiple back-ends based on the path of the request, without the browser being aware that there are in fact multiple origins, because everything sits behind the single hostname that points to CloudFront (which, obviously, you'll need to point at CloudFront in DNS).
The caching features can be disabled if you don't yet want, need, or fully understand them. On requests to the application itself, disabling caching is easily done by selecting the option to forward all request headers to the origin in any cache behavior that sends requests to the application. For your objects in S3, be sure you've set appropriate Cache-Control headers on the objects when you uploaded them, or add them after uploading using the S3 console.
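If you prefer to script the setup rather than click through the console, here is a minimal boto3 sketch of the two-origin arrangement described above. Every name in it (domain names, origin IDs, the /assets/* path pattern) is a placeholder, and it uses the legacy ForwardedValues settings to forward all headers to the application origin; treat it as a starting point, not a finished configuration.

# Sketch: one CloudFront distribution, default behavior -> application server,
# /assets/* carved out to the S3 bucket. All names are placeholders.
import time
import boto3

cloudfront = boto3.client("cloudfront")

forward_everything = {  # legacy settings that effectively disable caching for the app
    "QueryString": True,
    "Cookies": {"Forward": "all"},
    "Headers": {"Quantity": 1, "Items": ["*"]},
}

config = {
    "CallerReference": str(time.time()),
    "Comment": "Single hostname fronting the app and the S3 bucket",
    "Enabled": True,
    # Add "Aliases" plus a "ViewerCertificate" (ACM cert) here for your real domain name.
    "Origins": {
        "Quantity": 2,
        "Items": [
            {
                "Id": "app-origin",
                "DomainName": "app.example.com",  # your droplet / application server
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            },
            {
                "Id": "s3-origin",
                "DomainName": "myamazonbucket.s3.eu-west-1.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            },
        ],
    },
    "DefaultCacheBehavior": {  # everything not matched below goes to the application
        "TargetOriginId": "app-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": forward_everything,
        "MinTTL": 0,
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
    },
    "CacheBehaviors": {
        "Quantity": 1,
        "Items": [
            {
                "PathPattern": "/assets/*",  # carve-out served from the bucket
                "TargetOriginId": "s3-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
                "MinTTL": 0,
                "TrustedSigners": {"Enabled": False, "Quantity": 0},
            }
        ],
    },
}

cloudfront.create_distribution(DistributionConfig=config)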
As a side bonus, using CloudFront allows you to easily enable SSL for the entire site, with a free SSL certificate from AWS Certificate Manager (ACM). The certificate needs to be created in the us-east-1 region of ACM, regardless of where your bucket is, because that is the region CloudFront uses when fetching the cert from ACM. This is a provisioning detail only, and has no performance implications if your bucket is in another region.
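If you want to script that step too, here is a minimal sketch of requesting the certificate in us-east-1 with boto3; the domain name is a placeholder, and DNS validation still has to be completed before CloudFront can use the certificate.

# Sketch: the certificate must live in us-east-1 for CloudFront to use it.
# Domain name is a placeholder.
import boto3

acm = boto3.client("acm", region_name="us-east-1")
response = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)
print(response["CertificateArn"])  # attach this ARN to the distribution's ViewerCertificate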
You need to allow your host in the CORS configuration of your AWS S3 bucket.
Refer to Add CORS Configuration in Editing Bucket Permissions.
After that, every request you make to the S3 bucket files will have CORS headers set.
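For reference, here is a minimal sketch of applying such a CORS configuration with boto3; the bucket name and allowed origin are placeholders for your own values.

# Sketch: allow the site at www.example.com to make cross-origin GET requests
# to objects in the bucket. Bucket name and origin are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="myamazonbucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)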
In case you need to serve the content via the AWS CDN, CloudFront, follow these steps; ignore them if you serve content directly from S3:
Go to the AWS CloudFront Console.
Select your CloudFront Distribution.
Go to the Behaviors Tab.
Create a Behavior (for the files which need to be served with CORS headers).
Enter the Path Pattern, select Protocol & Methods.
Select All in the Forward Headers option.
Save the behavior.
If needed, invalidate the CloudFront edge caches by running an invalidation request for the files you just allowed for CORS.
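If you'd rather script that last step, here is a minimal boto3 sketch of the invalidation call; the distribution ID and path pattern are placeholders.

# Sketch: invalidate cached copies of the files that should now carry CORS headers.
# Distribution ID and path pattern are placeholders.
import time
import boto3

cloudfront = boto3.client("cloudfront")
cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/videos/*"]},
        "CallerReference": str(time.time()),  # any unique string
    },
)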
I'm trying to utilize IPFS to load static content, such as images and JavaScript libraries, on a dynamic site served over the HTTP protocol.
For example, https://www.example.com/ is a normal web 2.0 page, with an image referenced at https://www.example.com/images/myimage.jpg
When the request is made for myimage.jpg, the following header is served:
x-ipfs-path: /ipfs/QmXXXXXXXXXXXXXXXXX/images/myimage.jpg
Which then gets translated by the IPFS Companion browser plugin as:
https://127.0.0.1:8081/ipfs/QmXXXXXXXXXXXXXXXXX/images/myimage.jpg
The problem is that it has redirected to an SSL page on the local IP, which won't load due to a protocol error. (Changing the above from https to http works.)
Now, if I were to request https://www.example.com/images/myimage.jpg directly from the address bar, it loads the following:
http://localhost:8081/ipfs/QmYcJvDhjQJrMRFLsuWAJRDJigP38fiz2GiHoFrUQ53eNi/images/myimage.jpg
And then a 301 to:
http://(some other hash).ipfs.localhost:8081/images/myimage.jpg
Resulting in the image loading successfully.
I'm assuming that because the initial page is served over SSL, it wants to serve the static content over SSL as well. I also assume that's why it uses the local IP over https, rather than localhost as in the other route.
My question is, how do I get this to work?
Is there a header which lets IPFS Companion force it to load over http? If so, I'm assuming this would cause browser security warnings due to mixed content. I have tried adding this header without luck: X-Forwarded-Proto: http
Do I need to do something to enable SSL over 127.0.0.1, connecting this up with my local node? If so, this doesn't seem to be the default setup for clients, and I worry that all the content will show as broken images for visitors who haven't followed some extra steps.
Is it even possible to serve static content over IPFS from non-IPFS pages?
Any hints appreciated!
Edit: This appears to affect both the Chrome engine and Firefox.
Looks like a configuration error on your end.
Using IPFS Companion with default settings on a clean browser profile works as expected.
Opening https://example.com/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi redirects fine to http://localhost:8080/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi which then redirects to a unique origin based on the root CID: http://bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi.ipfs.localhost:8080/
You use a custom port (8081), which means you changed the Gateway address in the ipfs-companion Preferences at some point.
Potential fix: go there and make sure your "Local gateway" is set to http://localhost:8081 (instead of https://).
If you already have http:// there, then see if some other extension or browser setting is forcing https:// (check whether this behavior occurs on a clean browser profile, then add extensions/settings one by one to identify the source of the problem).
I'm hosting my own server for my website, and I would like to store all the resources (images, scripts and the like) on my ISP's FTP space instead, to reduce unnecessary strain on my server and because my ISP's network speed ought to be quicker than the connection they provide me. Now the fonts and the JavaScript files work fine, but when I try the following in my CSS:
background-image:url("-url-");
The image does not display on my website, and in Chrome I get this:
XMLHttpRequest cannot load -url-. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://my-domain-name/' is therefore not allowed access.
What does this mean?
-edit-
Actually it does seem to display, but I don't think it is supposed to. The images load in a weird manner, so I think I might just host the files on my own server after all.
I don't know the format of your URL (I assume you are using http, not ftp, to link the image) or the nature of your ISP's server, but the error suggests it does not allow hotlinking of images (i.e. cross-domain access to the files is not allowed by the server).
What service are you using to host the images? If it isn't intended for this purpose and you want to reduce strain on your own server, it would make more sense to use a proper CDN service like AWS (they have a free tier, I think).
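If you want to confirm what the image host actually returns, a quick check along these lines (a Python sketch using the requests library; the URL and the Origin value are placeholders) will show whether any CORS header is present:

# Sketch: request the image while pretending to come from your site's origin,
# then inspect whether an Access-Control-Allow-Origin header comes back.
# The URL and the Origin value are placeholders.
import requests

resp = requests.get(
    "http://isp-hosting.example.com/images/background.jpg",
    headers={"Origin": "http://my-domain-name"},
)
print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))  # None means no CORS header was sent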
I have a web app that stores videos. I am using a Java servlet (over https) which verifies a username and password. Once the details are verified, I redirect the user to a video stored in AWS S3. For those who don't know how S3 works, it's just a web service that stores objects (basically, think of it as storing files). It also uses https. Now, obviously, to make this work the S3 object (file) is public. I've given it a random name full of numbers and letters.
So the servlet basically looks like this:
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
    if (authenticateUser(request.getParameter("Username"), request.getParameter("Password"))) {
        // redirect the authenticated user straight to the (public) S3 object
        response.sendRedirect("https://s3.amazonaws.com/myBucket/xyz1234567.mp4");
    }
}
This is obviously simplified, but it gets the point across. Are there any very obvious security flaws here? The video tag will obviously have a source of something like https://www.mysite.com/getVideo?Username="Me"&Password="randomletters". At first blush it seems like it should be as secure as anything else, assuming I give the files sitting in AWS S3 sufficiently random names.
The obvious security flaw is that anybody could detect which URL the authentication servlet redirects to and share this URL with all their friends, thus allowing anyone to access the resource directly, without going through the authentication servlet.
Unfortunately, I don't know S3 at all, so I can't recommend any fix to the security problem.
All this mechanism does is provide very limited obfuscation: using developer tools in most modern browsers (or a proxy such as Fiddler), a user will be able to watch the URL of the video that's being loaded and, if it's in a public S3 bucket, simply share the link.
With S3, your only real solution is to secure the bucket and then either require that the user log in or use temporary signed URLs for access [http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html], though this does complicate your solution.
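As a rough illustration of the signed-URL approach: the question's servlet is Java, where the AWS SDK offers GeneratePresignedUrlRequest for the same purpose, but here is a minimal boto3 sketch with placeholder bucket, key, and expiry values.

# Sketch: keep the bucket private and hand the authenticated user a short-lived
# signed URL instead of redirecting to a public object. Bucket/key are placeholders.
import boto3

s3 = boto3.client("s3")
signed_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "myBucket", "Key": "xyz1234567.mp4"},
    ExpiresIn=300,  # the URL stops working after 5 minutes
)
# The servlet would then sendRedirect() to signed_url instead of the fixed public URL.
print(signed_url)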
I would also mention that including the username and password in plaintext in the link to the video asset (as in the question above) is very insecure.
I am using CloudFront to serve images, css and js files for my website using the custom origin option with subdomains CNAMEd to my account. It works pretty well.
Main site: www.mainsite.com
static1.mainsite.com
static2.mainsite.com
Sample page: www.mainsite.com/summary/page1.htm
This page calls an image from static1.mainsite.com/images/image1.jpg
If CloudFront has not already cached the image, it gets the image from www.mainsite.com/images/image1.jpg
This all works fine.
The problem is that Google Alerts has reported the page as being found at both:
http://www.mainsite.com/summary/page1.htm
http://static1.mainsite.com/summary/page1.htm
The page should only be accessible from the www. site. Pages should not be accessible from the CNAME domains.
I have tried putting a mod_rewrite rule in the .htaccess file, and I have also tried putting an exit() in the main script file.
But when CloudFront does not find the static1 version of the file in its cache, it requests it from the main site and then caches it.
Questions then are:
1. What am I missing here?
2. How do I prevent my site from serving pages instead of just static components to CloudFront?
3. How do I delete the pages from CloudFront? Just let them expire?
Thanks for your help.
Joe
[I know this thread is old, but I'm answering it for people like me who see it months later.]
From what I've read and seen, CloudFront does not consistently identify itself in requests. But you can get around this problem by overriding robots.txt at the CloudFront distribution.
1) Create a new S3 bucket that only contains one file: robots.txt. That will be the robots.txt for your CloudFront domain.
2) Go to your distribution settings in the AWS Console and click Create Origin. Add the bucket.
3) Go to Behaviors and click Create Behavior:
Path Pattern: robots.txt
Origin: (your new bucket)
4) Set the robots.txt behavior at a higher precedence (lower number).
5) Go to invalidations and invalidate /robots.txt.
Now abc123.cloudfront.net/robots.txt will be served from the bucket and everything else will be served from your domain. You can choose to allow/disallow crawling at either level independently.
Another domain/subdomain will also work in place of a bucket, but why go to the trouble?
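For step 1, here is a minimal boto3 sketch of uploading that single robots.txt to the new bucket; the bucket name is a placeholder, and the rules shown simply block all crawling of the CloudFront hostname.

# Sketch: upload a crawl-blocking robots.txt to the bucket that backs the
# robots.txt behavior on the CloudFront distribution. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-cloudfront-robots-bucket",
    Key="robots.txt",
    Body=b"User-agent: *\nDisallow: /\n",  # block everything on the CloudFront hostname
    ContentType="text/plain",
)
# Step 5's invalidation of /robots.txt can be done in the console, or with
# cloudfront.create_invalidation() as sketched earlier in this thread.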
You need to add a robots.txt file and tell crawlers not to index content under static1.mainsite.com.
In CloudFront you can control the hostname with which CloudFront accesses your server. I suggest giving CloudFront a specific hostname which is different from your regular website hostname. That way you can detect a request to that hostname and serve a robots.txt which disallows everything (unlike your regular website's robots.txt).
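To make the idea concrete: the site in the question is presumably PHP behind Apache, but purely as an illustration (all hostnames hypothetical), a small Python/Flask sketch of "look at the Host header and serve a blocking robots.txt on the CloudFront-only hostname" might look like this.

# Illustration only, with hypothetical names: serve a crawl-blocking robots.txt
# when the request arrived via the hostname that only CloudFront knows about.
from flask import Flask, Response, request

app = Flask(__name__)

CLOUDFRONT_ORIGIN_HOST = "cdn-origin.mainsite.com"  # hostname given only to CloudFront

@app.route("/robots.txt")
def robots():
    if request.host == CLOUDFRONT_ORIGIN_HOST:
        body = "User-agent: *\nDisallow: /\n"  # block crawling via the CDN hostname
    else:
        body = "User-agent: *\nDisallow:\n"    # normal site: allow crawling
    return Response(body, mimetype="text/plain")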
Not 100% sure if this is the right SE site to ask this, so feel free to move/warn me.
If I have a site www.mysite.com with a form on it and define its action as "http://www.mysite.com/handlepost" instead of "/handlepost", does it still get parsed as a local address by Apache? That is, will Apache figure out that I'm trying to send my form data to the same server the form resides on and do an automatic local post, or will the data be forced to make a round trip: going online, looking up the domain, and actually being sent as an outside request?
Apache does not look at this information. It's your browser which does this job.
On the Apache side, the job is only outputting content (HTML in this case); Apache does not care how you write your URLs in that content.
On the browser side, the page is analysed and GET requests (for images, etc.) are sent automatically to all collected URLs. The browser SHOULD know that a relative URL like /foo is in fact http://currentsite/foo (or it's a really dumb browser); that is its job. It is then also its job to push each request to the right server (and to know whether it should make a new DNS request, build a new HTTP connection, reuse an existing open connection, build several connections -- usually a maximum of about 3 connections per hostname -- and so on). Apache does nothing in this part of the job.
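To illustrate that resolution step, here is a tiny Python sketch (the hostname and paths are placeholders) of how a relative action resolves against the page's URL, which is essentially the normalisation the browser performs before it ever talks to Apache:

# Sketch: resolving a relative form action against the page's URL,
# the same normalisation the browser performs. Names are placeholders.
from urllib.parse import urljoin

page_url = "http://www.mysite.com/contact/form.html"
print(urljoin(page_url, "/handlepost"))
# -> http://www.mysite.com/handlepost  (identical to writing the absolute URL by hand)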
So why are absolute URLs bad? Not because of the work the browser has to do to handle them (which is in fact nothing; its job is transforming relative URLs into absolute ones). It's because if your web application uses only relative URLs, the admin of the web server will have far more options for proxying your application. For example:
he will be able to serve your web application on several different DNS domains
(and thereby make the browser think it's talking to several servers, parallelizing static file downloads)
he could also use this multi-domain setup to serve the application to different customers
he could provide HTTPS access for the external network and plain HTTP (without the S) access on a local name for the local network
And if your application builds absolute URLs itself, these tasks become much harder.
Don't use absolute URLs. I feel it will do a round trip in your case, since you have used an absolute URL for the action part, so it is better to use relative URLs.