Can I access a blob URL in an external page? [duplicate]

I'm trying to write an extension that caches some large media files used on my website, so those files are available locally once the extension is installed:
I pass the URLs via chrome.runtime.sendMessage to the extension (works)
fetch the media file via XMLHttpRequest in the background page (works)
store the file using FileSystem API (works)
get a File object and convert it to a URL using URL.createObjectURL (works)
return the URL to the webpage (error)
Unfortunately the URL cannot be used on the webpage. I get the following error:
Not allowed to load local resource: blob:chrome-extension%3A//hlcoamoijhlmhjjxxxbl/e66a4ebc-1787-47e9-aaaa-f4236b710bda
What is the best way to pass a large file object from an extension to the webpage?

You're almost there.
After creating the blob:-URL on the background page and passing it to the content script, don't forward it to the web page. Instead, retrieve the blob using XMLHttpRequest, create a new blob:-URL, then send it to the web page.
// assuming that you've got a valid blob:chrome-extension-URL...
var blobchromeextensionurlhere = 'blob:chrome-extension....';
var x = new XMLHttpRequest();
x.open('GET', blobchromeextensionurlhere);
x.responseType = 'blob';
x.onload = function() {
    var url = URL.createObjectURL(x.response);
    // Example: blob:http%3A//example.com/17e9d36c-f5cd-48e6-b6b9-589890de1d23
    // Now pass url to the page, e.g. using postMessage
};
x.send();
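To complete the last comment in the snippet, a minimal sketch of the hand-off; the message type and the target origin are placeholder assumptions, not part of the original answer:

// Content script side: runs inside x.onload from the snippet above,
// where `url` is the freshly created page-origin blob: URL.
// 'https://example.com' is a placeholder for your site's origin.
window.postMessage({ type: 'cached-media-url', url: url }, 'https://example.com');

// Web page side: pick the URL up and use it as a media source.
window.addEventListener('message', function (event) {
    if (event.data && event.data.type === 'cached-media-url') {
        document.querySelector('video').src = event.data.url;
    }
});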
If your current setup does not use content scripts, but e.g. the webRequest API to redirect requests to the cached result, then another option is to use data-URIs (a File or Blob can be converted to a data-URI using <FileReader>.readAsDataURL). Data-URIs cannot be read using XMLHttpRequest, but this will be possible in future versions of Chrome (http://crbug.com/308768).
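For reference, a minimal sketch of that conversion, assuming `blob` already holds the cached file:

var reader = new FileReader();
reader.onload = function () {
    // reader.result is a data-URI string like 'data:video/webm;base64,GkXf...'
    var dataUri = reader.result;
};
reader.readAsDataURL(blob);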

Two possibilities I can think of.
1) Employ externally_connectable.
This method is described in the docs here.
The essence of it: you can declare that such and such webpage can pass messages to your extension, and then chrome.runtime.connect and chrome.runtime.sendMessage will be exposed to the webpage.
You can then probably make the webpage open a port to your extension and use it for data. Note that only the webpage can initiate the connection.
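A minimal sketch of that wiring; the site origin and the 32-character extension ID are placeholders:

// manifest.json (extension side): declare which pages may connect:
// "externally_connectable": { "matches": ["https://example.com/*"] }

// Web page side: only the page can initiate the connection.
var port = chrome.runtime.connect('abcdefghijklmnopabcdefghijklmnop');
port.onMessage.addListener(function (msg) {
    // data cached by the extension arrives here
});
port.postMessage({ want: 'media', url: 'https://example.com/big-file.webm' });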
2) Use window.postMessage.
The method is mentioned in the docs (note the obsolete mention of window.webkitPostMessage) and described in more detail here.
As far as I can tell from documentation of the method (from various places), you can pass any object with it, including blobs.
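A minimal sketch of passing the Blob itself rather than a URL; the message type and origin are placeholder assumptions:

// Content script: structured cloning lets postMessage carry the Blob object.
window.postMessage({ type: 'cached-media-blob', blob: blob }, 'https://example.com');

// Web page: turn the received Blob into a same-origin blob: URL.
window.addEventListener('message', function (event) {
    if (event.data && event.data.type === 'cached-media-blob') {
        var url = URL.createObjectURL(event.data.blob);
    }
});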

Related

HTML junk returned when JSON is expected

The following code used to work, but not anymore; I'm now seeing junk HTML returned with a success code of 200.
from urllib.request import urlopen
import json

response = urlopen('https://www.tipranks.com/api/stocks/stockAnalysisOverview/?tickers=' + symbol)
data = json.load(response)
If you open the page in Chrome you will see the JSON. But when opened in Python I'm now getting:
f1xx.v1xx=v1xx;f1xx[374148]=window;f1xx[647467]=e8NN(f1xx[374148]);f1xx[125983]=n3EE(f1xx[374148]);f1xx[210876]=(function(){var
P6=2;for(;P6 !== 1;){switch(P6){case 2:return {w3:(function(v3){var
v6=2;for(;v6 !== 10;){switch(v6){case 2:var O3=function(W3){var
u6=2;for(;u6 !== 13;){switch(u6){case 2:var o3=[];u6=1;break;case
14:return E3;break;case 8:U3=o3.H8NN(function(){var Z6=2;for(;Z6 !==
1;){switch(Z6){case 2:return 0.5 - B8NN.P8NN();break;}}.....
What should I be doing to adapt to the new backend change so that I can parse the JSON again?
It is bot protection, meant to prevent people from doing exactly what you are doing. This API endpoint is supposed to be used only by the website itself, not by some Python script!
If you delete your site data and then freshly access the page in the browser, you'll see that it first loads the HTML page you are seeing, which loads some JavaScript, which then executes a POST to another URL with some data. Somewhere in the process a number of cookies get set, and finally the code refreshes the page, which then loads the JSON data. At that point visiting the URL directly returns the data, because the correct cookies are already set.
If you look at those requests, you'll see the server returns a header server: rhino-core-shield. If you google that, you can see that it's part of the Reblaze DDoS Protection Platform.
You may have luck with a headless browser like ghost.py or pyppeteer, but I'm not sure how effective it will be; you'll have to try. The proper way to do this would be to find an official (probably paid) API for getting the information you need instead of relying on non-public endpoints.
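If you want to experiment with the headless-browser route, here is a minimal sketch using puppeteer, the Node.js counterpart of the pyppeteer library mentioned above; whether the protection tolerates a headless browser is untested, and the ticker is just an example:

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    // The first load lets the protection script run and set its cookies;
    // once they are set, the same URL should return the JSON payload.
    await page.goto('https://www.tipranks.com/api/stocks/stockAnalysisOverview/?tickers=AAPL',
                    { waitUntil: 'networkidle0' });
    const body = await page.evaluate(() => document.body.innerText);
    console.log(JSON.parse(body)); // throws if the response is still the HTML shim
    await browser.close();
})();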

MEAN.js $http.get() return index html content instead of json file

I'm building a web app based on the original MEAN.js framework. When I request a local JSON test file using the $http.get() method in my AngularJS code, it returns my index HTML content instead. Is it a routing problem? I did not change the original MEAN.js routing code (https://github.com/meanjs/mean), just added a $http.get() call in the home.client.controller.js file. Can anyone help me with this? Thanks!
That is most likely happening because you didn't define an endpoint for that particular GET request in your app.
Every time you make a request to your server (for example a GET request to /my-request), Node.js/Express in MEAN.js is configured so that the server tries to find an endpoint for that request; if it does not find one, the request is handled by this particular code block (specified in /modules/core/server/routes/core.server.routes.js):
// Define application route
app.route('/*').get(core.renderIndex);
Which will basically render the index view.
I'm not sure if you're using a custom module or not; either way, if you want that request to be handled differently in MEAN.js, you can specify your endpoint in your custom module's routes file (or in core.server.routes.js) like so:
// Define application route
app.route('/my-request').get(core.sendMyJSON);
Be careful, because this route must be placed before the one I mentioned earlier, otherwise your request will still be handled the same way and the index view will be rendered and served again.
Then you will have to create the controller that should be called to handle that request:
exports.sendMyJSON = function (req, res) {
    // logic to serve the JSON file
};
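For instance, a minimal sketch that serves a static JSON file; the path is an assumption, so point it at wherever your test file actually lives:

var path = require('path');

exports.sendMyJSON = function (req, res) {
    // sendFile sets the Content-Type header based on the file extension
    res.sendFile(path.resolve('./public/data/test.json'));
};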
This way you should be able to get it done with a few adjustments.
Side note:
I'm not entirely sure but I think if you place your JSON file in the public directory of your app you should be able to directly access it without the need for the extra logic.

How do chromiumapp.org extension redirects work for Google Chrome?

When you create a Chrome extension and want to use OAuth 2.0, you can use a https://<app-id>.chromiumapp.org/* URL and therefore be able to have remote servers hit your browser instance directly (answered before - for example https://stackoverflow.com/a/30613603/61239). Does anyone know, or is able to theorize, how this works? And are you able to target any request at your browser, or does this only work for OAuth 2.0?
This is handled by the WebAuthFlow class, whose purpose is the following:
Given a provider URL, load the URL and perform usual web navigation until it results in redirection to a valid extension redirect URL.
The provider can show any UI to the user if needed before redirecting to an appropriate URL.
When the server instructs the browser to redirect to a valid extension redirect URL, that URL is instead passed to the callback function provided to chrome.identity.launchWebAuthFlow.
The 'appropriate' URLs are hardcoded in web_auth_flow.cc:
static const char kChromeExtensionSchemeUrlPattern[] =
    "chrome-extension://%s/";
static const char kChromiumDomainRedirectUrlPattern[] =
    "https://%s.chromiumapp.org/";
So the special URL https://<app-id>.chromiumapp.org/* only works in the context of a WebAuthFlow of the chrome.identity API. Note that the mechanism is totally internal to Chrome. The URL is never requested.
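For illustration, a minimal sketch of the flow from the extension side; the provider URL and client ID are placeholders, not a real endpoint:

var redirectUrl = chrome.identity.getRedirectURL(); // https://<app-id>.chromiumapp.org/

chrome.identity.launchWebAuthFlow({
    url: 'https://provider.example/oauth2/auth' +
         '?response_type=token' +
         '&client_id=YOUR_CLIENT_ID' +
         '&redirect_uri=' + encodeURIComponent(redirectUrl),
    interactive: true
}, function (responseUrl) {
    // Chrome intercepts the navigation to the chromiumapp.org URL and hands
    // it to this callback instead of requesting it over the network.
    if (responseUrl) {
        var match = responseUrl.match(/access_token=([^&]+)/);
        var token = match && match[1];
    }
});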

Data array from Couchdb documents into D3

I am having a problem integrating CouchDB and D3. D3 is a JavaScript library that performs document-driven data visualization. CouchDB is a document database. They were made for each other.
D3 binds an array of data to DOM elements of a web page. In most of the examples I have seen on the web or in books, people work on a static data set. Generally, examples show an array written into the JavaScript or a text .csv file loaded into the page.
I would like to take data directly from database documents and load it into D3, but I'm uncertain how to do it. I have seen one example on the web where a person loaded all of their data as an array into one CouchDB document and then brought the data into index.html with a couchdb.jquery call:
// This function replaces the d3.csv function.
$.couch.db("d3apps3").openDoc("sp500", {
    success: function (doc) {
        var data = doc.data;
        data.forEach(function (d) {
            d.date = formatDate.parse(d.date);
            d.price = +d.price;
        });
    }
});
I tried something similar with db.allDocs:
<script type="text/javascript">
$dbname = "dataset2";
$appname = "dataset2";
$db = $.couch.db("dataset2");
$db.allDocs({
success: function (data) {
console.log(data)
}
});
</script>
I could get the data to render in console.log, but could not get it into D3 and index.html. I also realized that the datastream resulting from db.allDocs is limited to the _id and _rev of each document.
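(For what it's worth, that limitation can be lifted with the standard include_docs option; a minimal sketch, with the D3 binding only as an illustration:

$db.allDocs({
    include_docs: true,
    success: function (data) {
        // each row now carries the full document under row.doc
        var docs = data.rows.map(function (row) { return row.doc; });
        // docs is a plain array you can bind with D3, e.g. selection.data(docs)
    }
});
)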
I also tried to GET the data from a Couchdb view with a d3.json call. That wouldn't work because d3.json is looking for an existing .json file.
It's funny, I can call the view with cURL using a GET command and see the datastream, but can't seem to bind it with D3.
~$ curl -X GET http://anywhere.com:5984/dataset2/_design/list_view/_view/arnold
{"total_rows":25,"offset":0,"rows":[
{"id":"dataset.csv1","key":"0","value":null},
{"id":"dataset.csv2","key":"1","value":null},
{"id":"dataset.csv11","key":"10","value":null},
{"id":"dataset.csv12","key":"11","value":null},
Any ideas would be appreciated.
Part four of https://gist.github.com/anonymous/9275891 has an example that I think you'd appreciate. You don't need to rely on the jquery.couchdb library at all - d3 knows enough about HTTP and JSON to work right out of the box. The relevant piece of code is:
d3.json("_view/pricetimeseries", function(viewdata) {
// We just want rows from the view in the visualisation
data = viewdata["rows"];
data.forEach(function(d) {
// the key holds the date, in seconds
d.date = new Date(d.key);
d.price = +d.value;
});
// rest of the visalisation code
HTH
If the page in which your D3 code is embedded is not served from the same domain (and port) as CouchDB, you will have to enable Cross-Origin Resource Sharing.
Assume your page is at http://example.com/data.html and contains JavaScript D3 code that accesses data from http://db.example.com/ or http://example.com:5984/. In that case your browser (which is executing the JavaScript) will by default deny such (cross-origin) requests unless the requested domain explicitly allows it.
There are basically two solutions to this:
Serve both the data and the page from the same domain, either by
putting a reverse proxy in between that maps resources to upstream servers (eg /couch to your CouchDB server and everything else to your web server)
serving your static files directly from CouchDB
or by allowing Cross-Origin Resource Sharing, which is available in CouchDB since version 1.3. You can find a list of relevant settings in the CouchDB docs on CORS.
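For the CORS route, a minimal sketch of the relevant local.ini settings; the origins value is a placeholder for wherever your page is actually served from:

[httpd]
enable_cors = true

[cors]
origins = http://example.com
credentials = true
methods = GET, POST, PUT, DELETE
headers = accept, authorization, content-type, origin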

HTML5 Canvas getImageData and Same Origin Policy

I have a site running at pixie.strd6.com and images hosted through Amazon S3 with a CNAME for images.pixie.strd6.com.
I would like to be able to draw these images to an HTML5 canvas and call the getImageData method but it throws Error: SECURITY_ERR: DOM Exception 18
I have tried setting window.domain = "pixie.strd6.com", but that has no effect.
Additionally, $.get("http://dev.pixie.strd6.com/sprites/8516/thumb.png?1293830982", function(data) {console.log(data)}) also throws an error: XMLHttpRequest cannot load http://dev.pixie.strd6.com/sprites/8516/thumb.png?1293830982. Origin http://pixie.strd6.com is not allowed by Access-Control-Allow-Origin.
Ideally HTML5 canvas wouldn't block calling getImageData from subdomains. I've looked into setting an Access-Control-Allow-Origin header in S3, but haven't succeeded.
Any help or workarounds are greatly appreciated.
Amazon recently announced CORS support
We're delighted to announce support for Cross-Origin Resource Sharing (CORS) in Amazon S3. You can now easily build web applications that use JavaScript and HTML5 to interact with resources in Amazon S3, enabling you to implement HTML5 drag and drop uploads to Amazon S3, show upload progress, or update content. Until now, you needed to run a custom proxy server between your web application and Amazon S3 to support these capabilities.
How to enable CORS
To configure your bucket to allow cross-origin requests, you create a CORS configuration, an XML document with rules that identify the origins that you will allow to access your bucket, the operations (HTTP methods) you will support for each origin, and other operation-specific information. You can add up to 100 rules to the configuration. You add the XML document as the cors subresource to the bucket.
One possible solution is to use nginx as a proxy. Here is how to configure URLs going to http://pixie.strd6.com/s3/ to pass through to S3, while the browser still believes they are not cross-domain:
location /s3/ {
    proxy_pass http://images.pixie.strd6.com/;
}
If you are using PHP, you can do something like:
function fileExists($path){
return (#fopen($path,"r")==true);
}
$ext = explode('.','https://cgdev-originals.s3.amazonaws.com/fp9emn.jpg');
if(fileExists('https://cgdev-originals.s3.amazonaws.com/fp9emn.jpg')){
$contents = file_get_contents('https://cgdev-originals.s3.amazonaws.com/fp9emn.jpg');
header('Content-type: image/'.end($ext));
echo $contents;
}
Then access the image via that PHP file: if the file is called generateImage.php you can do <img src="http://GENERATEPHPLOCATION/generateImage.php"/>, and the external image URL can be passed as a GET parameter to the file.
Recently, I came across $.getImageData, by Max Novakovic. The page includes a couple of neat demos of fetching and operating on Flickr photos, along with some code examples.
It allows you to fetch an image in JavaScript-manipulable form from an arbitrary site. It works by appending a script tag to the page; the script requests the image from a Google App Engine server, which fetches the requested image and relays it back to the script as base64. When the script receives the base64 data, it passes it to a callback, which can then draw it onto a canvas and begin messing with it.
In the past Amazon S3 didn't allow you to modify or add the access-control-allow-origin and access-control-allow-credentials HTTP headers, so it may have been better to switch to a different service that does, like Rackspace Cloud Files.
Add or modify the HTTP headers like this:
access-control-allow-origin: [your site]
access-control-allow-credentials: true
See http://www.w3.org/TR/cors/#use-cases for more information.
Using a service that allows you to modify the HTTP headers entirely solves the same origin problem.
For people who do not use S3, one option is to build an image proxy that encodes the image file and wraps it in a JSON object.
Then you can use JSONP, which supports cross-domain requests, to fetch the JSON object and assign the image data to img.src.
I wrote a sample code of the image proxy server with Google App Engine.
https://github.com/flyakite/gae-image-proxy
The JSON object is returned in a format like this:
{
    'height': 50,
    'width': 50,
    'data': 'data:image/jpeg;base64,QWRarjgk4546asd...QWAsdf'
}
The 'data' is the image data in base64 format. Assign it to an image:
img.src = result.data;
The image is now "clean" for your canvas.
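Putting it together, a minimal sketch of the consuming side; the proxy URL and its query interface are assumptions about your own deployment, not the gist's actual API:

// JSONP request via jQuery; "callback=?" makes $.getJSON use JSONP.
$.getJSON('https://your-proxy.appspot.com/?url=' +
        encodeURIComponent('http://example.com/photo.jpg') + '&callback=?',
    function (result) {
        var img = new Image();
        img.onload = function () {
            var canvas = document.createElement('canvas');
            canvas.width = result.width;
            canvas.height = result.height;
            var ctx = canvas.getContext('2d');
            ctx.drawImage(img, 0, 0);
            // a data: URI image doesn't taint the canvas, so this works
            var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
        };
        img.src = result.data; // the base64 data-URI from the proxy
    });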
To edit your S3 bucket permissions:
1) Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/
2) In the Buckets list, open the bucket whose properties you want to view and click "add CORS configuration"
3) Write the rules you want to add between the <CORSConfiguration> tags:
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
You can learn more about rules at: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
4) Specify crossorigin='anonymous' on the image you'll use in your canvas
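Once the bucket serves the CORS headers, a minimal sketch of the canvas side; the URL reuses the asker's domain purely as an example:

var img = new Image();
img.crossOrigin = 'anonymous'; // ask the browser to fetch the image with CORS
img.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    // the canvas is not tainted, so this no longer throws DOM Exception 18
    var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
};
img.src = 'http://images.pixie.strd6.com/sprites/8516/thumb.png';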
This behavior is by design. Per the HTML5 spec, as soon as you draw a cross-origin image to a canvas, it becomes dirty and you can no longer read its pixels. Origin matching compares the scheme, fully-qualified host, and, in non-IE browsers, the port.
Just bumped into the same problem. I found out about CORS that might be helpful.
http://html5-demos.appspot.com/static/html5-whats-new/template/index.html#14
It didn't work for me since I'm trying to manipulate an image from Flickr. So, I'm still looking for the solution.