Purpose of the crossorigin attribute...?

In both image and script tags.
My understanding was that you can access both scripts and images on other domains. So when does one use this attribute?
Is this when you want to restrict the ability of others to access your scripts and images?
Images:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-crossorigin
Scripts:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script

The answer can be found in the specification.
For img:
The crossorigin attribute is a CORS settings attribute. Its purpose is to allow images from third-party sites that allow cross-origin access to be used with canvas.
and for script:
The crossorigin attribute is a CORS settings attribute. It controls, for scripts that are obtained from other origins, whether error information will be exposed.

This is how we have successfully used the crossorigin attribute in a script tag:
The problem we had: we were trying to log JS errors on the server using window.onerror.
Almost all of the errors we were logging had the message "Script error." and gave us very little information about how to solve them.
It turns out that Chrome's native error-reporting implementation
if (securityOrigin()->canRequest(targetUrl)) {
    message = errorMessage;
    line = lineNumber;
    sourceName = sourceURL;
} else {
    message = "Script error.";
    sourceName = String();
    line = 0;
}
will send the message as "Script error." if the requested static asset violates the browser's same-origin policy.
In our case we were serving the static asset from a CDN.
We solved it by adding the crossorigin attribute to the script tag.
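A minimal sketch of the logging setup described above (the report shape and the logging endpoint are our own illustration, not the original code):

```javascript
// Sketch of the window.onerror logging described above. Without
// crossorigin="anonymous" on the CDN script tag (plus an
// Access-Control-Allow-Origin header on the CDN response), cross-origin
// failures arrive here as just the opaque "Script error." message.
function buildErrorReport(message, source, lineno, colno, error) {
  return {
    message: message,
    source: source,
    line: lineno,
    column: colno,
    stack: error && error.stack, // only populated for same-origin/CORS-enabled scripts
  };
}

if (typeof window !== 'undefined') {
  window.onerror = function (message, source, lineno, colno, error) {
    const report = buildErrorReport(message, source, lineno, colno, error);
    // e.g. navigator.sendBeacon('/log', JSON.stringify(report)); // hypothetical endpoint
    return false; // let the default handler run as well
  };
}
```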
P.S. Got all the info from this answer

CORS-enabled images can be reused in the <canvas> element without being tainted. (The allowed values of the attribute are "anonymous" and "use-credentials".)
The page already answers your question.
If you have a cross-origin image you can copy it into a canvas but this "taints" the canvas which prevents you from reading it (so you cannot "steal" images e.g. from an intranet where the site itself doesn't have access to). However, by using CORS the server where the image is stored can tell the browser that cross-origin access is permitted and thus you can access the image data through a canvas.
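The comparison behind this tainting rule can be sketched with the standard URL API (an illustration only, not the engine's actual check):

```javascript
// Illustration only: two URLs share an origin when scheme, host and port
// all match. A cross-origin image drawn to a canvas without CORS taints it,
// and getImageData() on a tainted canvas throws a SecurityError.
function isSameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  // URL.host includes the port when one is present
  return ua.protocol === ub.protocol && ua.host === ub.host;
}
```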
MDN also has a page about just this thing: https://developer.mozilla.org/en-US/docs/HTML/CORS_Enabled_Image
Is this when you want to restrict the ability of others to access your scripts and images?
No.

If you're developing a quick piece of code locally and you're using Chrome, there's a problem. If your page loads using a URL of the form "file://xxxx", then trying to use getImageData() on the canvas will fail and throw the cross-origin security error, even if your image is being fetched from the same directory on your local machine as the HTML page rendering the canvas. So if your HTML page is fetched from, say:
file://D:/wwwroot/mydir/mytestpage.html
and your JavaScript file and images are being fetched from, say:
file://D:/wwwroot/mydir/mycode.js
file://D:/wwwroot/mydir/myImage.png
then despite the fact that these secondary entities are being fetched from the same origin, the security error is still thrown.
For some reason, instead of setting the origin properly, Chrome sets the origin attribute of the requisite entities to "null", rendering it impossible to test code involving getImageData() simply by opening the HTML page in your browser and debugging locally.
Also, setting the crossOrigin property of the image to "anonymous" doesn't work, for the same reason.
I'm still trying to find a workaround for this, but once again, it seems that local debugging is being rendered as painful as possible by browser implementors.
I just tried running my code in Firefox, and Firefox gets it right, recognising that my image comes from the same origin as the HTML and JS files. So I'd welcome hints on how to get round the problem in Chrome: at the moment, while Firefox works, its debugger is painfully slow, to the point of being one step removed from a denial-of-service attack.

I've found out how to persuade Google Chrome to allow file:// references without throwing a cross-origin error.
Step 1: Create a shortcut (Windows) or the equivalent in other operating systems;
Step 2: Set the target to something like the following:
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files
That special command line argument, --allow-file-access-from-files, tells Chrome to let you use file:// references to web pages, images etc., without throwing cross-origin errors every time you try to transfer those images to an HTML canvas, for example. It works on my Windows 7 setup, but it's worth checking to see if it will work on Windows 8/10 and various Linux distros too. If it does, problem solved - offline development resumes as normal.
Now I can reference images and JSON data from file:// URIs, without Chrome throwing cross-origin errors if I attempt to transfer image data to a canvas, or transfer JSON data to a form element.

Related

Could someone explain why my code is giving me the following error in safari?

I have recently been building an overlay system for a display in my lobby. I am using the embed tag as follows:
<embed src="flatly.theme/index.html" style="width:237px;height:344px;left:0px;top:0px;z-index:1;position:absolute;">
this is the error I have been getting:
[Error] XMLHttpRequest cannot load http://xml.weather.yahoo.com/forecastrss/CAXX0504.xml. Origin null is not allowed by Access-Control-Allow-Origin. (index.html, line 0)
If someone could explain to me why this is happening and how I can make it run fluently again, it would be great.
Thanks in advance
Brett
The .html file you are trying to use in the <embed> (effectively an <iframe>) is not supposed to make that request.
Access-Control-Allow-Origin is part of Cross-Origin Resource Sharing (CORS),
a mechanism that allows restricted resources (e.g. fonts,
JavaScript, etc.) on a web page to be requested from another domain
outside the domain from which the resource originated.
And your domain is not among those allowed by xml.weather.yahoo.com.
I believe this can also happen for local HTML pages (file:///*),
while working as intended on any HTTP server, even on http://localhost.
It is explained in a lot of similar questions, for example:
JavaScript - XMLHttpRequest, Access-Control-Allow-Origin errors
Your browser detected the cross-domain request inside your "index.html" and blocked it.
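In spirit, the check the browser applies looks like this (a toy illustration; the real algorithm in the CORS spec also handles credentials and header lists):

```javascript
// What the browser enforces, in spirit, before handing the XHR response to
// the page: the Access-Control-Allow-Origin response header must be "*" or
// match the page's origin exactly. A file:// page has the serialized origin
// "null", which is why the error message reads "Origin null is not allowed".
function corsAllowed(pageOrigin, allowOriginHeader) {
  return allowOriginHeader === '*' || allowOriginHeader === pageOrigin;
}
```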

Navigating to a Data URI in IE

I have this extremely simple HTML:
<a download="red.png"
href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==">
Static
</a>
In Chrome or Firefox, it downloads red.png as expected.
In IE, it navigates to an error page. See it on JSFiddle.
Now, I know the download attribute is not supported in IE, and that's fine. I'd still expect it to navigate to the "file", allowing the user to save it. Instead, it's navigating to an error page.
Is there a way to get around this problem? The Data URI is generated client-side; creating the file on the server is not an option.
Edit: MSDN says:
For security reasons, data URIs are restricted to downloaded
resources. Data URIs cannot be used for navigation, for scripting, or
to populate frame or iframe elements.
...which I read as "Even though every other browser supports this, we don't know how to do it". So, still looking for a workaround to download a file generated on the client.
Since IE does not support either navigating to a data URI, nor the download attribute, the solution is to use navigator.msSaveBlob to generate the file and prompt the user to save it.
Credit goes to this answer.
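A minimal sketch of that approach: decode the data URI into raw bytes, wrap them in a Blob, and hand it to msSaveBlob (dataURIToBytes is our own helper name; msSaveBlob is the IE-only API the answer refers to):

```javascript
// Decode the base64 payload of a data URI into raw bytes.
function dataURIToBytes(dataURI) {
  const base64 = dataURI.split(',')[1];
  const binary = atob(base64); // atob is available in browsers (and Node >= 16)
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// IE-only usage (msSaveBlob is not implemented in other browsers):
// const bytes = dataURIToBytes(anchor.href);
// navigator.msSaveBlob(new Blob([bytes], { type: 'image/png' }), 'red.png');
```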

SVG stacking, anchor elements, and HTTP fetching

I have a series of overlapping questions, the intersection of which can best be asked as:
Under what circumstances does a # character (a fragment identifier) in a URL trigger an HTTP fetch, in the context of either an <a href> or an <img src>?
Normally, should:
http://foo.com/bar.html#1
and
http://foo.com/bar.html#2
require two different HTTP fetches? I would think the answer should definitely be NO.
More specific details:
The situation that prompted this question was my first attempt to experiment with SVG stacking - a technique where multiple icons can be embedded within a single svg file, so that only a single HTTP request is necessary. Essentially, the idea is that you place multiple SVG icons within a single file, and use CSS to hide all of them, except the one that is selected using a CSS :target selector.
You can then select an individual icon using the # character in the URL when you write the img element in the HTML:
<img
src="stacked-icons.svg#icon3"
width="80"
height="60"
alt="SVG Stacked Image"
/>
When I try this out on Chrome it works perfectly. A single HTTP request is made, and multiple icons can be displayed via the same svg url, using different anchors/targets.
However, when I try this with Firefox (28), I see via the Console that multiple HTTP requests are made - one for each svg URL! So what I see is something like:
GET http://myhost.com/img/stacked-icons.svg#icon1
GET http://myhost.com/img/stacked-icons.svg#icon2
GET http://myhost.com/img/stacked-icons.svg#icon3
GET http://myhost.com/img/stacked-icons.svg#icon4
GET http://myhost.com/img/stacked-icons.svg#icon5
...which of course defeats the purpose of using SVG stacking in the first place.
Is there some reason Firefox is making a separate HTTP request for each URL instead of simply fetching img/stacked-icons.svg once like Chrome does?
This leads into the broader question of - what rules determine whether an # character in the URL should trigger an HTTP request?
Here's a demo in Plunker to help sort out some of these issues
Plunker
stack.svg#circ
stack.svg#rect
A URI has a couple basic components:
Protocol - determines how to send the request
Domain - where to send the request
Location - file path within the domain
Fragment - which part of the document to look in
Media Fragment URI
A fragment is simply identifying a portion of the entire file.
Depending on the implementation of the Media Fragment URI spec, it actually might be totally fair game for the browser to send along the Fragment Identifier. Think about a streaming video, some of which has been cached on the client. If the request is for /video.ogv#t=10,20 then the server can save space by only sending back the relevant portion/segment/fragment. Moreover, if the applicable portion of the media is already cached on the client, then the browser can prevent a round trip.
Think Round Trips, Not Requests
When a browser issues a GET request, it does not necessarily mean that it needs to grab a fresh copy of the file all the way from the server. If it already has a cached version, the request can be answered immediately.
HTTP Status Codes
200 - OK - the resource was fetched and returned by the server
304 - Not Modified - the cached copy is still valid, so the server returns no body and the browser reuses its cache
Caching Disabled
Various things can affect whether or not a client will perform caching: The way the request was issued (F5 - soft refresh; Ctrl + R - hard refresh), the settings on the browser, any development tools that add attributes to requests, and the way the server handles those attributes. Often, when a browser's developer tools are open, it will automatically disable caching so you can easily test changes to files. If you're trying to observe caching behavior specifically, make sure you don't have any developer settings that are interfering with this.
When comparing requests across browsers, to help mitigate the differences between Developer Tool UI's, you should use a tool like fiddler to inspect the actual HTTP requests and responses that go out over the wire. I'll show you both for the simple plunker example. When the page loads it should request two different ids in the same stacked svg file.
Here are side by side requests of the same test page in Chrome 39, FF 34, and IE 11:
Caching Enabled
But we want to test what would happen on a normal client where caching is enabled. To do this, update your dev tools for each browser, or go to fiddler and Rules > Performance > and uncheck Disable Caching.
Now our request should look like this:
Now all the files are returned from the local cache, regardless of fragment ID
The developer tools for a particular browser might try to display the fragment id for your own benefit, but fiddler should always display the most accurate address that is actually requested. Every browser I tested omitted the fragment part of the address from the request:
Bundling Requests
Interestingly, Chrome seems to be smart enough to prevent a second request for the same resource, but FF and IE fail to do so when a fragment is part of the address. This is equally true for SVGs and PNGs. Consequently, the first time a page is requested, the browser will load one file for each time the SVG stack is actually used. Thereafter, it's happy to take the version from the cache, but this will hurt performance for new viewers.
Final Score
CON: First Trip - SVG Stacks are not fully supported in all browsers. One request made per instance.
PRO: Second Trip - SVG resources will be cached appropriately
Additional Resources
simurai.com is widely credited as the first application of SVG stacks, and the article explains some browser limitations for which fixes are currently in progress
Sven Hofmann has a great article about SVG Stacks that goes over some implementation models.
Here's the source code in Github if you'd like to fork it or play around with samples
Here's the official spec for Fragment Identifiers within SVG images
Quick look at Browser Support for SVG Fragments
Bugs
Typically, identical resources that are requested for a page will be bundled into a single request that will satisfy all requesting elements. Chrome appears to have already fixed this, but I've opened bugs for FF and IE to one day fix this.
Chrome - This very old bug which highlighted this issue appears to have been resolved for webkit.
Firefox - Bug Added Here
Internet Explorer - Bug Added Here


Failed to load resource under Chrome

There is a bunch of images in a web page.
Other browsers do download them correctly, but Chrome doesn't.
In the developer's console, it shows the following message for each image:
Failed to load resource
As mentioned before, the problem appears only in Chrome.
What is it?
I recently ran into this problem and discovered that it was caused by the "Adblock" extension (my best guess is that it's because I had the words "banner" and "ad" in the filename).
As a quick test to see if that's your problem, start Chrome in incognito mode with extensions disabled (ctrl+shift+n) and see if your page works now. Note that by default all extensions will be already disabled in incognito mode unless you've specifically set them to run (via chrome://extensions).
Check the network tab to see if Chrome failed to download any resource file.
In case it helps anyone, I had this exact same problem and discovered that it was caused by the "Do Not Track Plus" Chrome Extension (version 2.0.8). When I disabled that extension, the image loaded without error.
There is also the option of turning off the cache for network resources. This might be best for developing environments.
Right-click the page in Chrome
Go to 'Inspect element'
Look for the 'Network' tab somewhere at the top. Click it.
Check the 'Disable cache' checkbox.
Kabir's solution is correct. My image URL was
/images/ads/homepage/small-banners01.png,
and this was tripping up AdBlock. This wasn't a cross-domain issue for me, and it failed on both localhost and on the web.
I was using Chrome's network tab to debug and finding very confusing results for these specific images that failed to load. The first request would return no response (Status "(pending)"). Later down the line, there was a second request that listed the original URL and then "Redirect" as the Initiator. The redirect request headers were all for this identical short line of base64-encoded data, and each returned no response, although the status was "Successful":
GET data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAACklEQVR4nGMAAQAABQABDQottAAAAABJRU5ErkJggg== HTTP/1.1
Later I noticed that these inline styles were added to all the image elements:
display: none !important;
visibility: hidden !important;
opacity: 0 !important;
Finally, I did not receive any "failed to load resource" messages in the console, but rather this:
Port error: Could not establish connection. Receiving end does not exist.
If any of these things is happening to you, it probably has something to do with AdBlock. Turn it off and/or rename your image files.
Also, because of the inline CSS created by AdBlock, the layout of my promotions slider was being thrown off. While I was able to fix the layout issues with CSS before finding Kabir's solution, the CSS was somewhat unnecessary and affected the flexibility of the slider to handle images of multiple sizes.
I guess the lesson is: Be careful what you name your images. These images weren't malicious or annoying as much as they were alerting visitors to current promotions and specials in an unobtrusive way.
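To get a feel for why such filenames trip filters, here is a toy version of the kind of pattern matching filter lists perform (not AdBlock's actual rules):

```javascript
// Toy illustration of a filter-list match: flag paths that contain
// "ad"/"ads"/"banner(s)" as a delimited path or filename segment.
function looksLikeAd(path) {
  return /(^|[-_\/.])(ads?|banners?)([-_\/.0-9]|$)/i.test(path);
}
```

Both "/ads/" in the directory and "-banners01" in the filename above would match a rule like this, which is why renaming the files (or disabling the extension) makes the images load again.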
If the images are generated via an ASP Response.Write(), make sure you don't call Response.Close(). Chrome doesn't like it.
I was getting this error, only in Chrome (version 24.0.1312.57 m), and only if the image was larger than the HTML img element. I was using a PHP script to output the image like this:
header('Content-Length: '.strlen($data));
header("Content-type: image/{$ext}");
echo base64_decode($data);
Note that strlen($data) here measures the base64 string, not the decoded bytes that are actually echoed, so the declared length never matches the body. I resolved it by adding 1 to the length of the image:
header('Content-Length: '.(strlen($data) + 1)); // parentheses needed so the addition happens before concatenation
header("Content-type: image/{$ext}");
echo base64_decode($data);
It appears that Chrome is stricter than other browsers when the declared byte count doesn't match what is actually sent; the cleaner fix is to declare strlen(base64_decode($data)).
Tested with success in Chrome and IE 9. Hope this helps.
Facts:
I have disabled all plugins, and the problem still remains.
There are some sites where the problem does not occur.
It's a known issue. See Issue 424599: Failed to load resource: net::ERR_CACHE_MISS error when opening DevTools on PHP pages and Stack Overflow question Bizarre Error in Chrome Developer Console - Failed to load resource: net::ERR_CACHE_MISS.
There is a temporary work around in Reenable (temporary) showModalDialog support in Chrome (for Windows) 37+.
Basically, create a new string in the registry at
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\EnableDeprecatedWebPlatformFeatures
Under the EnableDeprecatedWebPlatformFeatures key, create a string value with the name 1 and a value of ShowModalDialog_EffectiveUntil20150430. To verify that the policy is enabled, visit chrome://policy URL.
FYI - I had this issue as well and it turned out that my html listed the .jpg file with a .JPG in caps, but the file itself was lowercase .jpg. That rendered fine locally and using Codekit, but when it was pushed to the web it wouldn't load. Simply changing the file names to have a lowercase .jpg extension to match the html did the trick.
In Chrome (Canary) I unchecked "Appspector" extension. That cleared the error.
I updated my Chrome browser to the latest version and the issue was fixed.
This can be caused by an ad blocker. When project file names contain words like 'ad', ad blockers may block those files from loading.
The best solution is to avoid such words in file names altogether.
Removing the leading / from the path helped me solve this problem.
See more at:
Failed to load resource: net::ERR_FILE_NOT_FOUND loading json.js