context.getImageData() on https? - html

I have the same problem as in the post context.getImageData() on localhost?, but instead of localhost I am working on an https site, so the same canvas problem occurs. Is there a solution for this case?

These problems arise when you are trying to get data from images loaded across different domains.
One way to solve this (if you are the one in control of serving the images) is by enabling CORS (Cross-Origin Resource Sharing). What this does is basically add an Access-Control-Allow-Origin header to the served image.
You can read all about it in http://www.w3.org/TR/cors/. Your use case is described specifically in http://www.w3.org/TR/cors/#use-cases, section "Not tainting the canvas element".
There is a great resource for understanding how to enable CORS at http://enable-cors.org/. If you're running an Apache instance, the easiest way is to use a .htaccess file to enable the headers.
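For example, a minimal .htaccess sketch along those lines (assuming mod_headers is enabled; the wildcard origin is just a placeholder) might be:
<IfModule mod_headers.c>
    # Allow other sites to read these images cross-origin; replace * with a specific origin to lock it down
    Header set Access-Control-Allow-Origin "*"
</IfModule>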
However, if you are not in control of the served images, then you may need to ask permission to use them and probably copy them to your own server.

Related

Refusing to display LOCALHOST in a frame because 'X-Frame-Options' set to 'sameorigin'

This question specifically regards localhost. I am trying to embed a localhost web page in another localhost web page; however, the browser states that this cannot be done. This was the message in Chrome developer tools:
Refused to display 'http://127.0.0.1:4040/jobs/' in a frame because it set 'X-Frame-Options' to 'sameorigin'.
I have tried both Firefox and Chrome. This is the error message from Firefox:
Load denied by X-Frame-Options: “SAMEORIGIN” from “http://127.0.0.1:4040/jobs/”, site does not permit cross-origin framing from “http://localhost:8888/lab”.
Why is localhost not considered to be the same origin?
How can I remove this restriction on my localhost?
Thank you in advance.
N.B. I would prefer to use iframes over AJAX requests, unless AJAX can pull in the web page in the same way that iframes can.
I don't know if there are any additional rules about localhost, but in general, the port number is part of the origin. So for it to count as same-origin, you need to use the same port number, i.e. a single web server.
If you absolutely need to use multiple servers for some technical reason you can have one server act as a "reverse proxy" for the other, or use a single reverse proxy server talking to both.
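For illustration only (nginx is not mentioned in the question; it is just a convenient example of a reverse proxy), a single server fronting both ports could look roughly like this:
server {
    listen 80;
    server_name localhost;

    # both apps now appear under one origin, http://localhost
    location /jobs/ {
        proxy_pass http://127.0.0.1:4040/jobs/;
    }
    location /lab {
        proxy_pass http://localhost:8888/lab;
    }
}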
Kevin Reid's answer is correct and his proxy server solution is also correct. But I'll add a bit more detail.
From Mozilla Developer Network
Two URLs have the same origin if the protocol, port (if specified), and host are the same for both. You may see this referenced as the "scheme/host/port tuple", or just "tuple". (A "tuple" is a set of items that together comprise a whole — a generic form for double/triple/quadruple/quintuple/etc.)
http://127.0.0.1:4040 and http://localhost:8888 are not of the same origin. In fact, even http://127.0.0.1 and http://localhost are not of the same origin by this definition, because the hosts differ.
How to fix it? This solution is specific to IIS 10 on Windows Server 2016, but the idea, as Kevin said, is to set up a reverse proxy.
Install Application Request Routing.
Go to Application Request Routing Cache -> Server Proxy Settings and check Enable Proxy.
Go to URL Rewrite and create an Inbound rule. You will need two inbound rules (one for port 4040 and one for 8888). The objective is to route localhost/jobs to localhost:4040/jobs and localhost/labs to localhost:8888/labs. In effect, this makes both web servers share the same origin, http://localhost.
Your pattern will likely be something like jobs(/)?(.*) with the rewrite URL http://localhost:4040/jobs/{R:2} for the jobs web server (a web.config sketch of these rules follows after the steps). Test the pattern to get the {R:x} references right.
Test using your browser if you can access http://localhost/jobs and http://localhost/labs.
The iframe should work now.
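For reference, the inbound rules created by the URL Rewrite UI end up in the site's web.config; a rough sketch (the rule names are made up, and the second rule mirrors the labs path from the steps above) looks like this:
<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <!-- route http://localhost/jobs/... to the server on port 4040 -->
                <rule name="jobs proxy" stopProcessing="true">
                    <match url="jobs(/)?(.*)" />
                    <action type="Rewrite" url="http://localhost:4040/jobs/{R:2}" />
                </rule>
                <!-- route http://localhost/labs/... to the server on port 8888 -->
                <rule name="labs proxy" stopProcessing="true">
                    <match url="labs(/)?(.*)" />
                    <action type="Rewrite" url="http://localhost:8888/labs/{R:2}" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>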

get around cross-origin resource sharing on Amazon Aws

I am creating a Virtual reality 360-degree video website using the krpano html5 player.
This was going great until I tested on Safari and realised it didn't work. The reason is that Safari does not support CORS (cross-origin resource sharing) for videos going through WebGL.
To clarify, if my videos were on the same server as my application files it would work, but because my files are hosted on Amazon S3, the requests are cross-origin. I'm unsure what to do, because I have built my application on DigitalOcean, which connects to my Amazon S3 bucket, but I cannot afford to upgrade my droplet just to get the storage I need (which is around 100 GB to start and will grow to terabytes as my video collection gets bigger).
So does anyone know a way I can get around this to make it seem like the video is not coming from a different origin, or alternatively anything I can do to get past this obstacle?
Is there any way that I could set up Amazon S3 and Amazon EC2 so that they don't see each other as cross-origin?
EDIT:
I load my videos like this:
<script>
    function showVideo() {
        embedpano({
            swf: "/krpano/krpano.swf",
            xml: "/krpano/videopano.xml",
            target: "pano",
            html5: "only",
        });
    }
</script>
This then calls my XML file, which calls the video files:
<krpano>
    <!-- add the video sources and play the video -->
    <action name="add_video_sources">
        videointerface_addsource('medium', 'https://s3-eu-west-1.amazonaws.com/myamazonbucket/Shoots/2016/06/the-first-video/videos/high.mp4|https://s3-eu-west-1.amazonaws.com/myama…ideos/high.webm');
        videointerface_play('medium');
    </action>
</krpano>
I don't know exactly how the krpano core works; I assume the JavaScript gets the URLs from the XML file and then makes a request to pull them in.
#datasage mentions in comments that CloudFront is a common solution. I don't know if this is what he was thinking of but it certainly will work.
I described using this solution to solve a different problem, in detail, on Server Fault. In that case, the question was about integrating the main site and "/blog/*" from a different server under a single domain name, making a unified web site.
This is exactly the same thing you need, for a different reason.
Create a CloudFront distribution, setting the alternate domain name to your site's name.
Create two (or more) origins, pointing to your dynamic and static content servers.
Use one of them as default, initially handling all possible path patterns (*, the default cache behavior) and then carve out appropriate paths to point to the other origin (e.g. /asset/* might point to the bucket, while the default behavior points to the application itself).
In this case, CloudFront is being used for something other than its primary purpose as a CDN; instead, we're leveraging a secondary capability, using it as a reverse proxy that can selectively route requests to multiple back-ends based on the path of the request, without the browser being aware that there are in fact multiple origins, because everything sits behind the single hostname that points to CloudFront (which, of course, you'll need to point to CloudFront in DNS).
The caching features can be disabled if you don't yet want, need, or fully understand them. In particular, for any cache behavior that sends requests to the application itself, disabling caching is easily done by selecting the option to forward all request headers to the origin. For your objects in S3, be sure you've set appropriate Cache-Control headers on the objects when you uploaded them, or add them after uploading, using the S3 console.
As a side bonus, using CloudFront allows you to easily enable SSL for the entire site, with a free SSL certificate from AWS Certificate Manager (ACM). The certificate needs to be created in the us-east-1 region of ACM, regardless of where your bucket is, because that is the region CloudFront uses when fetching the cert from ACM. This is a provisioning detail only and has no performance implications if your bucket is in another region.
You need to allow your host in the CORS configuration of your AWS S3 bucket.
Refer to Add CORS Configuration in Editing Bucket Permissions.
After that, every request you make to the S3 bucket files will have the CORS headers set.
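As a sketch, the bucket's CORS configuration could look roughly like this (the XML form shown here is what the older S3 console accepts; the allowed origin is a placeholder for your own site):
<CORSConfiguration>
    <CORSRule>
        <!-- replace with the origin your pages are served from -->
        <AllowedOrigin>https://www.example.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
</CORSConfiguration>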
In case you need to serve the content via the AWS CDN, CloudFront, then follow these steps; ignore them if you serve content directly from S3:
Go to AWS CloudFront Console.
Select your CloudFront Distribution.
Go to Behaviors Tab.
Create a Behavior (for the files which need to be served with the CORS headers).
Enter Path Pattern, Select Protocol & Methods.
Select All in the Forward Headers option.
Save the behavior.
If needed, invalidate the CloudFront edge caches by running an invalidation request for the files you just allowed for CORS.

background-image full-path url to my isp ftp?

I'm hosting my own server for my website, and I would like to store all the resources (images, scripts and the like) on my ISP's FTP space instead, to prevent unnecessary strain on my server, and because my ISP's network speed ought to be quicker than the connection they provide me. The fonts and the JavaScript files work fine, but when I try the following in my CSS:
background-image:url("-url-");
The image does not display on my website, and in Chrome I get this:
XMLHttpRequest cannot load -url-. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://my-domain-name/' is therefore not allowed access.
What does this mean?
-edit-
Actually, it does seem to display, but I don't think it is supposed to. The images load in a weird manner, so I think I might just host the files on my own server after all.
I don't know the format of your URL (I assume you are using http, not ftp, to link the image) or the nature of your ISP's server, but the error suggests it does not allow hotlinking of images (i.e. cross-domain access to the files is not allowed by the server).
What is the service you are using to host the images? If it isn't intended for this purpose and you want to reduce strain on your own server, it would make more sense to use a proper CDN service like AWS (they have a free tier, I think).

canvas.toDataURL() causing a security error

I'm using the HTML5 canvas and its .toDataURL() function through KineticJS's .toDataURL() method. The canvas uses images that my users uploaded to the site, which are stored on a different machine under the subdomain farm1.domain.com.
Problem: When .toDataURL() is called, I get the error
SECURITY_ERR: DOM Exception 18
Is there a way around this? I also get the same problem if the user accesses the page via domain.com and the image is hosted at www.domain.com.
Attempt:
I added the following lines to httpd.conf within the VirtualHost block and restarted the Apache service.
Header add Access-Control-Allow-Origin "http://www.domain.com"
Header add Access-Control-Allow-Origin "http://domain.com"
Header add Access-Control-Allow-Origin "http://farm1.domain.com"
I still get the same error when accessing an image hosted on www.domain.com from a page on domain.com! Is there a way around this in KineticJS?
You will need to add the Access-Control-Allow-Origin headers to the images you are loading, not to the page which is loading them. For details on this header, and on CORS in general, you may want to read "CORS isn't just for XHR", which specifically discusses this issue.
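As a minimal sketch of the client side (plain canvas API here, since I don't know the KineticJS internals; the image URL is hypothetical), the image also has to be requested with the crossOrigin attribute, otherwise the canvas stays tainted even when the header is present:
var img = new Image();
img.crossOrigin = "anonymous";  // ask the browser to make a CORS request for the image
img.onload = function () {
    var canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    canvas.getContext("2d").drawImage(img, 0, 0);
    var dataUrl = canvas.toDataURL();  // should no longer throw SECURITY_ERR if the CORS header was sent
    console.log(dataUrl.slice(0, 30));
};
img.src = "http://farm1.domain.com/uploads/example.jpg";  // hypothetical image URL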
Without CORS, there is no way around this error: images loaded into a canvas from a different domain taint it, as currently implemented by every browser. In your case, if the images were stored on the same domain you would not be getting these exceptions.

Cloudfront Custom Origin Is Causing Duplicate Content Issues

I am using CloudFront to serve images, css and js files for my website using the custom origin option with subdomains CNAMEd to my account. It works pretty well.
Main site: www.mainsite.com
static1.mainsite.com
static2.mainsite.com
Sample page: www.mainsite.com/summary/page1.htm
This page calls an image from static1.mainsite.com/images/image1.jpg
If CloudFront has not already cached the image, it gets the image from www.mainsite.com/images/image1.jpg.
This all works fine.
The problem is that Google Alerts has reported the page as being found at both:
http://www.mainsite.com/summary/page1.htm
http://static1.mainsite.com/summary/page1.htm
The page should only be accessible from the www. site. Pages should not be accessible from the CNAME domains.
I have tried to put a mod_rewrite rule in the .htaccess file, and I have also tried to put an exit() in the main script file.
But when CloudFront does not find the static1 version of the file in its cache, it requests it from the main site and then caches it.
My questions then are:
1. What am I missing here?
2. How do I prevent my site from serving pages, rather than just static components, to CloudFront?
3. How do I delete the pages from CloudFront? Just let them expire?
Thanks for your help.
Joe
[I know this thread is old, but I'm answering it for people like me who see it months later.]
From what I've read and seen, CloudFront does not consistently identify itself in requests. But you can get around this problem by overriding robots.txt at the CloudFront distribution.
1) Create a new S3 bucket that only contains one file: robots.txt. That will be the robots.txt for your CloudFront domain.
2) Go to your distribution settings in the AWS Console and click Create Origin. Add the bucket.
3) Go to Behaviors and click Create Behavior:
Path Pattern: robots.txt
Origin: (your new bucket)
4) Set the robots.txt behavior at a higher precedence (lower number).
5) Go to invalidations and invalidate /robots.txt.
Now abc123.cloudfront.net/robots.txt will be served from the bucket and everything else will be served from your domain. You can choose to allow/disallow crawling at either level independently.
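The robots.txt you upload to that bucket can simply disallow all crawling of the CloudFront hostname, e.g.:
User-agent: *
Disallow: /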
Another domain or subdomain will also work in place of a bucket, but why go to the trouble?
You need to add a robots.txt file and tell crawlers not to index content under static1.mainsite.com.
In CloudFront you can control the hostname with which CloudFront will access your server. I suggest giving CloudFront a specific hostname that is different from your regular website hostname. That way you can detect a request to that hostname and serve a robots.txt which disallows everything (unlike your regular website's robots.txt).
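A minimal .htaccess sketch of that idea (the hostname cdn-origin.mainsite.com and the file robots_cloudfront.txt are hypothetical names for illustration; requires mod_rewrite):
RewriteEngine On
# When the request comes in on the hostname you gave to CloudFront,
# serve a different robots.txt that disallows everything.
RewriteCond %{HTTP_HOST} ^cdn-origin\.mainsite\.com$ [NC]
RewriteRule ^robots\.txt$ robots_cloudfront.txt [L]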