HTML5 Canvas getImageData and Same Origin Policy

I have a site running at pixie.strd6.com and images hosted through Amazon S3 with a CNAME for images.pixie.strd6.com.
I would like to be able to draw these images to an HTML5 canvas and call the getImageData method, but it throws Error: SECURITY_ERR: DOM Exception 18.
I have tried setting document.domain = "pixie.strd6.com", but that has no effect.
Additionally, $.get("http://dev.pixie.strd6.com/sprites/8516/thumb.png?1293830982", function(data) {console.log(data)}) also throws an error: XMLHttpRequest cannot load http://dev.pixie.strd6.com/sprites/8516/thumb.png?1293830982. Origin http://pixie.strd6.com is not allowed by Access-Control-Allow-Origin.
Ideally HTML5 canvas wouldn't block calling getImageData from subdomains. I've looked into setting an Access-Control-Allow-Origin header in S3, but haven't succeeded.
Any help or workarounds are greatly appreciated.

Amazon recently announced CORS support
We're delighted to announce support for Cross-Origin Resource Sharing (CORS) in Amazon S3. You can now easily build web applications that use JavaScript and HTML5 to interact with resources in Amazon S3, enabling you to implement HTML5 drag and drop uploads to Amazon S3, show upload progress, or update content. Until now, you needed to run a custom proxy server between your web application and Amazon S3 to support these capabilities.
How to enable CORS
To configure your bucket to allow cross-origin requests, you create a CORS configuration: an XML document with rules that identify the origins you will allow to access your bucket, the operations (HTTP methods) you will support for each origin, and other operation-specific information. You can add up to 100 rules to the configuration. You add the XML document as the cors subresource to the bucket.

One possible solution is to use nginx as a proxy. Here is how to configure URLs going to http://pixie.strd6.com/s3/ to pass through to S3 while the browser still believes they are same-origin:
location /s3/ {
  proxy_pass http://images.pixie.strd6.com/;
}
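With that in place, the page can load the image through the proxy path and read pixels back; a minimal sketch (the sprite path is just an example):
var img = new Image();
img.onload = function() {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  // The browser saw a same-origin URL, so this no longer throws
  var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
};
img.src = '/s3/sprites/8516/thumb.png'; // proxied by nginx to images.pixie.strd6.com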

If you are using PHP, you can do something like:
function fileExists($path){
  // @ suppresses the warning if the remote file cannot be opened
  return (@fopen($path, "r") !== false);
}
$url = 'https://cgdev-originals.s3.amazonaws.com/fp9emn.jpg';
$ext = explode('.', $url);
if (fileExists($url)) {
  header('Content-type: image/'.end($ext));
  echo file_get_contents($url);
}
And access the image by using that PHP file. For example, if the file is called generateImage.php, you can do <img src="http://GENERATEPHPLOCATION/generateImage.php"/>, and the external image URL can be passed as a GET parameter to the file.

Recently, I came across $.getImageData, by Max Novakovic. The page includes a couple of neat demos of fetching and operating on Flickr photos, along with some code examples.
It allows you to fetch an image in JavaScript-manipulable form from an arbitrary site. It works by appending a script to the page. The script requests the image from a Google App Engine server, which fetches the requested image and relays it, converted to base64, back to the script. When the script receives the base64 data, it passes it to a callback, which can then draw it onto a canvas and begin messing with it.
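From memory, the API looked roughly like this; the exact option names are assumptions, so check the project page before relying on them:
$.getImageData({
  url: 'http://example.com/photo.jpg', // any remote image
  success: function(image) {
    // image arrives CORS-clean, so its pixels can be read after drawing
    var canvas = document.createElement('canvas');
    canvas.width = image.width;
    canvas.height = image.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0);
    var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
  },
  error: function(xhr, textStatus) {
    console.log('Failed to fetch image: ' + textStatus);
  }
});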

In the past, Amazon S3 didn't allow you to modify or add the Access-Control-Allow-Origin and Access-Control-Allow-Credentials HTTP headers, so it may have been better to switch to a different service, such as Rackspace Cloud Files, that does.
Add or modify the HTTP headers like this:
access-control-allow-origin: [your site]
access-control-allow-credentials: true
See http://www.w3.org/TR/cors/#use-cases for more information.
Using a service that allows you to modify the HTTP headers entirely solves the same origin problem.

People who do not use S3 can try building an image proxy that encodes the image file and wraps it in a JSON object.
You can then use JSONP, which supports cross-domain requests, to fetch the JSON object and assign the image data to img.src.
I wrote sample code for an image proxy server on Google App Engine:
https://github.com/flyakite/gae-image-proxy
The JSON object is returned in a format like this:
{
  "height": 50,
  "width": 50,
  "data": "data:image/jpeg;base64,QWRarjgk4546asd...QWAsdf"
}
The "data" field is the image data in base64 format. Assign it to an image:
img.src = result.data;
The image is now "clean" for your canvas.
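A minimal sketch of the full round trip (the proxy hostname, query parameters, and callback name here are assumptions, not part of the project above):
function handleImage(result) {
  var img = new Image();
  img.onload = function() {
    var canvas = document.createElement('canvas');
    canvas.width = result.width;
    canvas.height = result.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    // No SECURITY_ERR: a data URI counts as same-origin
    var pixels = ctx.getImageData(0, 0, result.width, result.height);
  };
  img.src = result.data; // the base64 data URI from the proxy
}

// JSONP: append a script tag that calls handleImage with the JSON object
var script = document.createElement('script');
script.src = 'http://your-proxy.appspot.com/image?url=' +
  encodeURIComponent('http://images.pixie.strd6.com/example.png') +
  '&callback=handleImage';
document.body.appendChild(script);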

To edit your S3 bucket permissions:
1) Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/
2) In the Buckets list, open the bucket whose properties you want to view and click "add CORS configuration"
3) Write the rules you want to add between the <CORSConfiguration> tags:
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
You can learn more about rules at: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
4) Specify crossorigin='anonymous' on the image you'll use in your canvas.
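A minimal sketch of step 4 (the bucket URL is a placeholder):
var img = new Image();
img.crossOrigin = 'anonymous'; // request the image with CORS
img.onload = function() {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  // With the CORS rule above served by S3, the canvas stays clean
  var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
};
img.src = 'https://yourbucket.s3.amazonaws.com/image.png';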

This behavior is by design. Per the HTML5 spec, as soon as you draw a cross-origin image to a canvas, the canvas becomes dirty and you can no longer read its pixels. Origin matching compares the scheme, the fully-qualified host, and, in non-IE browsers, the port.

Just bumped into the same problem. I found out about CORS, which might be helpful:
http://html5-demos.appspot.com/static/html5-whats-new/template/index.html#14
It didn't work for me since I'm trying to manipulate an image from Flickr. So, I'm still looking for the solution.

Related

Getting the URL for a bucket or an object using oci-java-sdk

I already have code to retrieve the objects in the bucket using oci-java-sdk, and this is working as expected. I would like to retrieve the URL of a file which was uploaded to the bucket in object storage, so that when I use this URL, it redirects to the actual location without asking for any credentials.
I saw pre-authenticated requests, but then I need to create one more request. I don't want to send one more request and want to get the URL from the existing GetObjectResponse.
Any suggestions?
Thanks,
js
The URL of an object is not returned from the API, but it can be built using information you know (see the update below!). The pattern is:
https://{api_endpoint}/n/{namespace_name}/b/{bucket_name}/o/{object_name}
For example, with hypothetical values: https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/mybucket/o/myfile.png
Accessing that URL will (generally, see below) require authentication. Our authentication mechanism is described at:
https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/signingrequests.htm
Authentication is NOT required if you configure the bucket as a Public Bucket.
https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/managingbuckets.htm?TocPath=Services%7CObject%20Storage%7C_____2#publicbuckets
As you mentioned, Pre-authenticated Requests (PARs) are an option. They are generally used in this situation, and they work well.
https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm
Strictly speaking, it is also possible to use our Amazon S3 Compatible API...
https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm
...and S3's presigned URLs to generate (without involving the API) a URL that will work without additional authentication.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
Update: A teammate pointed out that the OCI SDK for Java now includes a getEndpoint method that can be used to get the hostname needed when querying the Object Storage API. https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.25.3/com/oracle/bmc/objectstorage/ObjectStorage.html#getEndpoint--

Google Drive's save to drive with Laravel?

I'm trying to use the "Save to drive" button that Google provides to make Drive uploads even easier; it looks like this:
<script src="https://apis.google.com/js/platform.js" async defer></script>
<div class="g-savetodrive"
data-src="//example.com/path/to/myfile.pdf"
data-filename="My Statement.pdf"
data-sitename="My Company Name">
</div>
My question is, since I am using Laravel and the php artisan serve command to serve my project, how am I supposed to write the path to my file? It's located at 'Project name'/storage/app/docs/, I've tried //storage/app/docs/{{ $file->path }} but it doesn't work, and using storage_path() didn't change anything. What am I missing here?
EDIT:
I tried using another file, one that was hosted somewhere else. So I enabled CORS on my project and, using Postman, I tested to see the headers I was using:
Access-Control-Allow-Headers: Content-Type, X-Auth-Token, Origin, Range
Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Cache-Control, Content-Encoding, Content-Range
According to the Google documentation, it should be working now, yet it's not.
This is the error that I'm getting in the console:
Response to preflight request doesn't pass access control check:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:8000' is therefore not allowed access.
The response had HTTP status code 400.
And I'm officially out of ideas.
As stated in the document - Troubleshooting,
If you get an XHR error when downloading your data-src URL, verify that the resource actually exists, and that you do not have a CORS issue.
If the Save to Drive button works with all browsers except Internet Explorer 9, you may need to configure your browser to enable CORS, which is disabled by default.
If large files are truncated to 2MB, it is likely that your server is not exposing Content-Range, likely a CORS issue.
Take note of the answer on the related SO question - Save To Drive Button Doesn't Work - and the documentation, which state that:
The data-src URL can be served from another domain, but the responses from the HTTP server need to support HTTP OPTIONS requests and include the following special HTTP headers:
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Range
Access-Control-Expose-Headers: Cache-Control, Content-Encoding, Content-Range
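A quick way to test this from the browser console is a cross-origin fetch with a Range header, which is not CORS-safelisted and therefore forces the same OPTIONS preflight the button performs (the URL is a placeholder):
fetch('https://example.com/path/to/myfile.pdf', {
  headers: { 'Range': 'bytes=0-99' } // non-safelisted header forces a preflight
}).then(function(res) {
  // Content-Range is only readable if Access-Control-Expose-Headers lists it
  console.log(res.status, res.headers.get('Content-Range'));
}).catch(function(err) {
  console.log('CORS/preflight failed: ' + err);
});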

Can I access a blob URL in an external page? [duplicate]

I'm trying to write an extension that caches some large media files used on my website, so that those files are cached locally when the extension is installed:
I pass the URLs via chrome.runtime.sendMessage to the extension (works)
fetch the media file via XMLHttpRequest in the background page (works)
store the file using FileSystem API (works)
get a File object and convert it to a URL using URL.createObjectURL (works)
return the URL to the webpage (error)
Unfortunately the URL can not be used on the webpage. I get the following error:
Not allowed to load local resource: blob:chrome-extension%3A//hlcoamoijhlmhjjxxxbl/e66a4ebc-1787-47e9-aaaa-f4236b710bda
What is the best way to pass a large file object from an extension to the webpage?
You're almost there.
After creating the blob:-URL on the background page and passing it to the content script, don't forward it to the web page. Instead, retrieve the blob using XMLHttpRequest, create a new blob:-URL, then send it to the web page.
// assuming that you've got a valid blob:chrome-extension-URL...
var blobchromeextensionurlhere = 'blob:chrome-extension....';
var x = new XMLHttpRequest();
x.open('GET', blobchromeextensionurlhere);
x.responseType = 'blob';
x.onload = function() {
var url = URL.createObjectURL(x.response);
// Example: blob:http%3A//example.com/17e9d36c-f5cd-48e6-b6b9-589890de1d23
// Now pass url to the page, e.g. using postMessage
};
x.send();
If your current setup does not use content scripts, but e.g. the webRequest API to redirect requests to the cached result, then another option is to use data-URIs (a File or Blob can be converted to a data-URI using <FileReader>.readAsDataURL). Data-URIs cannot be read using XMLHttpRequest, but this will be possible in future versions of Chrome (http://crbug.com/308768).
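For reference, converting a Blob to a data-URI is a few lines (a generic sketch, not specific to the extension above):
var reader = new FileReader();
reader.onload = function() {
  var dataUri = reader.result; // "data:<mime>;base64,..."
  // use dataUri wherever a URL is expected, e.g. img.src = dataUri;
};
reader.readAsDataURL(blob); // blob is the File/Blob from the cache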
Two possibilities I can think of.
1) Employ externally_connectable.
This method is described in the docs here.
The essence of it: you can declare that such and such webpage can pass messages to your extension, and then chrome.runtime.connect and chrome.runtime.sendMessage will be exposed to the webpage.
You can then probably make the webpage open a port to your extension and use it for data. Note that only the webpage can initiate the connection.
2) Use window.postMessage.
The method is mentioned in the docs (note the obsolete mention of window.webkitPostMessage) and described in more detail here.
As far as I can tell from the documentation of the method (from various places), you can pass any object with it, including blobs.
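A rough sketch of option 2, assuming a content script runs in the page (the message shape and element id are made up):
// In the content script: hand the regenerated blob: URL to the page
window.postMessage({ type: 'CACHED_MEDIA_URL', url: url }, '*');

// In the page: pick the URL up and use it
window.addEventListener('message', function(event) {
  if (event.source !== window) return; // only accept messages from this window
  if (event.data && event.data.type === 'CACHED_MEDIA_URL') {
    document.getElementById('player').src = event.data.url;
  }
});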

CORS problems with Amazon S3 on the latest Chromium and Google Canary

Our website is having problems loading CSS and JS resources from an Amazon S3 bucket with the very latest version of Chromium (Version 33.0.1722.0 - 237596) and Chrome Canary.
It works well with any of the other browsers including the current Chrome (31.0.1650.57).
The error is:
Script from origin 'https://mybucket.s3.amazonaws.com' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://app.example.com' is therefore not allowed access.
Our S3 CORS configuration on the resource bucket is:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>300000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Is it a bug with Chromium?
Did something change on the latest CORS spec?
Add any query parameter, such as ?cacheblock=true, to the URL, like so:
Instead of: https://somebucket.s3.amazonaws.com/someresource.pdf
do: https://somebucket.s3.amazonaws.com/someresource.pdf?cacheblock=true
I don't have the technical explanation entirely down, but it is something like the following:
Including a query parameter prevents the 'misbehaving' caching behavior in Chrome, causing Chrome to send out a fresh request for both the preflight request and the actual request. This allows the proper headers to be present on both requests, and S3 to respond properly. Approximately.
Amazon released a fix for this a few months back. We were seeing the errors in current versions of Chrome & Safari (did not check Firefox). For anyone still running into this problem, try the following configuration:
S3 bucket CORS policy:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
CloudFront distribution settings (Behavior tab):
Allowed HTTP Methods: GET, HEAD, OPTIONS
Forward headers: Whitelist
Whitelist headers: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
We are hosting CSS and static JavaScript files via CloudFront with an S3 origin. We reference our JavaScript files via
<script crossorigin="anonymous" src="http://assets.domain.com/app.js">.
EDIT
We began seeing this issue again with Safari 10.1.2. It turns out that we were accessing the JavaScript file in two ways...
On page A via <script crossorigin="anonymous" src="http://assets.domain.com/app.js">.
On page B via $.ajax() (so that it was lazy loaded).
If you went to page A -> page B -> page A, we would get a cross origin denied error. We took out the lazy loading approach and it solved our issue (again).
In all likelihood, you're running into a very well-known problem with S3/CloudFront/CORS. The best solution I've been able to find is to have an app that proxies between S3 and CloudFront, always adding the appropriate CORS headers to the objects as they come back.
S3 + CloudFront are broken when it comes to serving CORS assets to different web browsers. The issue is two-fold.
Not all browsers require CORS for web fonts and other static assets. If one of these browsers makes the request, S3 won't send the CORS headers, and CloudFront will cache the (unhelpful) response.
CloudFront doesn't support the Vary: Origin header, so it has issues with using * for the AllowedOrigin value, and will only cache the first of multiple AllowedOrigin values.
In the end, these two issues make S3 + CloudFront an untenable solution for using CORS with a (fast) CDN solution — at least, out of the box. The bulletproof solution is to create a simple app that proxies the requests between S3 and CloudFront, always adding the requisite CORS headers so that CloudFront always caches them.
Request against a “Cold” cache
← Browser requests a static asset from CloudFront.
← CloudFront misses, and hits its origin server (a Proxy App).
← The Proxy App passes the request to S3.
→ S3 responds back to the Proxy App.
→ The Proxy App adds the correct CORS headers (whether S3 had sent them or not). The Proxy App responds back to CloudFront.
→ CloudFront caches the result and responds back to the browser.
Request against a “Warm” cache
← Browser requests a static asset from CloudFront.
→ CloudFront hits, and responds back to the browser.
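A minimal sketch of such a proxy app in Node.js (the bucket hostname is a placeholder, and a real deployment would also forward request headers and handle errors):
var http = require('http');
var https = require('https');

http.createServer(function(req, res) {
  // Fetch the object from S3 regardless of which browser asked
  https.get('https://mybucket.s3.amazonaws.com' + req.url, function(s3res) {
    var headers = s3res.headers;
    // Always attach CORS headers so CloudFront caches a usable response
    headers['access-control-allow-origin'] = '*';
    headers['access-control-expose-headers'] =
      'Cache-Control, Content-Encoding, Content-Range';
    res.writeHead(s3res.statusCode, headers);
    s3res.pipe(res);
  });
}).listen(8080);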
Yes, this is a well-known, widespread issue:
https://forums.aws.amazon.com/message.jspa?messageID=445325
https://forums.aws.amazon.com/thread.jspa?messageID=404768
https://forums.aws.amazon.com/message.jspa?messageID=346287
https://forums.aws.amazon.com/message.jspa?messageID=278230
https://forums.aws.amazon.com/thread.jspa?messageID=388132
https://twitter.com/kindofwater/status/350630880651395072
Amazon S3 CORS (Cross-Origin Resource Sharing) and Firefox cross-domain font loading
https://coderwall.com/p/ub8zug
http://timwhitlock.info/blog/2012/09/web-fonts-on-amazon-s3/
http://www.yodi.sg/solve-load-font-face-cloudfront-amazon-s3-and-firefox-ie-caused-by-cors-access-control-allow-origin/
And many more!
I can say that our S3 and CloudFront teams are well aware of the issues discussed here. By writing a simple app that acts as a proxy between S3 and CloudFront, you can manually inject all of the correct CORS response headers before CloudFront caches them.
If you always work in Firefox, then you likely won't notice the issue — CloudFront will always be caching your CORS-enabled responses. If you work primarily in Safari or Chrome, you'll see it much more often when you switch back to a browser which requires these headers (Firefox and IE). Also, if you have separate development/staging/production environments, you're likely to run into the multi-origin issues more often.
Wanted to chime in with an alternate theory to this old question: Chrome has a bug/"feature" that's been present since at least Aug 2014 that causes a cross-origin request to fail if the resource was first loaded via a normal fetch, apparently because Chrome caches the CORS-less resource headers and then refuses to give the cached resource to the cross-origin request.
To make matters worse, in our testing of a complex scenario, it isn't even necessarily consistent between refreshes (perhaps because of the order of resource loading?), and other browsers don't appear to share the behavior.
It was a fun bug hunt! It seems that simply adding crossorigin='anonymous' to any tags loading the resource forces Chrome to pull the CORS headers in, fixing the subsequent cross-origin requests.

Accessing redirected-to URL when making an HTTP request

When making an HTTP request (using URLLoader, for example) that results in a redirect, is it possible to access any of the URLs in the redirect chain?
For example, let's say that the following happens:
We make a request to example.com/a.gif
example.com redirects to example2.com/b.gif
example2.com redirects to example3.com/c.gif
I've stared at the documentation for URLLoader and its various events for a while, and it doesn't seem like there's a way to either:
Instruct URLLoader to not follow redirects
Access any of the URLs involved after the initial request
Does anyone know if there's a way to do this? I'm not attached to using URLLoader, so if there's another class that supports this functionality, I'd be fine with using it.
Can anyone point me in the right direction? Thanks in advance!
Edit - I should clarify: I know how to detect the redirects outside of AS3 using a DOM debugger. I'm specifically interested in accessing the redirect chain within AS3. It would appear that it's possible using the AIR player via the HttpStatusEvent, but the relevant properties aren't available when using Flash Player.
Edit 2 - I've also tried using an HTTP client lib (as3httpclientlib, to be specific). This works except for the fact that it loads cross-domain policies from port 843 rather than by making an HTTP request to /crossdomain.xml. The context I'm working in requires the latter, so using something with Socket underlying it won't work unless there's a way to force Socket to load cross-domain policies from HTTP instead of port 843.
The redirects are generally in place because the original URL shouldn't be used anymore. The file doesn't exist at example.com/a.gif, so in theory you don't need to know about it. Why do you need the intermediate request path?
I'm not aware of an ActionScript way of finding the redirect chain for any request, but if you want to do it for a specific chain you can use HttpFox for Firefox, or hit F12 in Google Chrome and look at the Network tab when making a request to the URL that redirects. This will only work if the client is redirected by the server to the new address (an HTTP 302 response or similar). If the server chooses to return the contents of example3.com/c.gif when someone's browser asks for example.com/a.gif, there is nothing you can do.