In previous versions of Chrome, on a webpage with the following:
<script>
document.write('<plaintext>');
</script>
<img src="http://example.com/image.jpg">
the image would not be downloaded. At some point a Chrome update changed this behavior. Now when I look at the network tab, I see the image is downloaded. (fiddle here: https://jsfiddle.net/doojunqx/)
I have a script on a page, and I would like to use it to stop the browser from downloading (and using up network bandwidth on) images and other unwanted assets that appear below my script tag.
Mobify does something similar here:
http://cdn.mobify.com/mobifyjs/examples/capturing-grumpycat/index.html
As they say on the page, "Open your web inspector and note the original imgs did not load." However, when I open Chrome's developer tools and look at the network tab, I see the original images ARE now loading. I'm not sure which version of Chrome changed this, but I think it is recent, within the last month or two.
Is there any way to force chrome back to the old behavior? Or any other way to stop these unwanted assets from loading?
Thanks,
Great question, and you're correct that it is a recent change in Chromium that affected the plaintext tag behaviour. In versions up to and including 42.*, the HTML document parser would not spawn an asynchronous parsing thread until an external resource was found in the original HTML document. Once such a resource was found, an asynchronous thread would be spawned that would aggressively download all resources referenced within the HTML.
The recent change simplified the parsing behaviour by moving all document parsing to the asynchronous thread, which now kicks off automatically. Whereas before, the plaintext tag would ensure that no resources were loaded as long as it was inserted before the first external resource, the plaintext tag is now racy: resources will download up to the moment the plaintext tag is executed in the main HTML document. As there is a time delay before the script executes, an unknown number of resources will be retrieved.
There is as yet no solution to this new behaviour, nor is there a way to disable the preload scanner as you would like. You will need to rely on workarounds such as polyfills to control your resource downloads. This new behaviour is present in all versions of Chrome >= 43.* and has not been implemented in Safari, Firefox, or other browsers.
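For reference, here is a minimal sketch of the capturing idea that Mobify's library is built around (the data-src convention and the regex are illustrative assumptions, not Mobify's actual API). Note that, per the above, the preload scanner may still fetch some resources before this script executes:

<script>
// Must be the first script in the document. Swallow the rest of the
// markup as inert text so the parser never creates real <img> elements.
document.write('<plaintext style="display:none">');

document.addEventListener('DOMContentLoaded', function () {
  // Read the captured markup back out of the plaintext element.
  var captured = document.getElementsByTagName('plaintext')[0];
  var html = captured.textContent;

  // Neuter every img src; swap data-src back to src later, only for
  // the images you actually want to load.
  html = html.replace(/<img([^>]*?)\ssrc=/gi, '<img$1 data-src=');

  // Rewrite the document with the filtered markup.
  document.open();
  document.write(html);
  document.close();
});
</script>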
<a href="view-source:http://stackoverflow.com">Click Me</a>
This used to work as a valid href attribute, but as of the past few months it now shows an error in the console (I'm using Chrome):
Not allowed to load local resource: view-source:http://stackoverflow.com
I found some links from 2013 indicating this was once a bug in Chrome, but it was said to be fixed.
Could someone point me to an authoritative source that explains why this no longer works? I assume this is security enforced by the browser and not an Angular issue (since view-source is whitelisted and used to work).
Looks like Chrome and Firefox (at least) disabled this within the past year or so.
I found this thread and these release notes, which explain why and provide a timeline for when the change took place.
Related StackOverflow question: File URL "Not allowed to load local resource" in the Internet Browser
Chrome responds with "Not allowed to load local resource:" as a security measure. I'm not sure why this used to work but doesn't now, though there is no real way around it unless web security is disabled. Other browsers may behave differently, but ultimately you are correct in thinking that it's Chrome's security.
The reason is that Chrome tries to preload URLs in the background to speed up your browsing experience.
If you open the DevTools after loading the page, the content of the items listed on the Resources tab may not be populated; the same is true of network requests on the Network tab. To see fully populated resources on the Resources tab, open the DevTools first, then refresh the page or navigate to the desired page with the DevTools open. Then select the HTML resource and it should be populated.
Edited to clarify the underlying question.
I am trying to debug a simple HTML5 webpage containing one image and one video. Everything displays fine. The video plays correctly. But when I try to refresh the page, everything is downloaded except the video file. I am using the Firefox developer tools, but I can't understand what is going on.
On the network tab I see the .html file being downloaded, then the image.jpg file. But I never see the video.mp4 file downloaded. The video plays OK, but it is not the current version on the server. It seems to be a previous version that has been cached.
I'm mystified why this should be. The cache is disabled in developer tools. I'm refreshing the page with Ctrl+F5. It's as if the video is being served from some secret local cache that I don't know about. I'm using Firefox 47.0.1. The same thing also happens when I test with Firebug.
Edit. I have now tried Developer Tools in Chrome and it's exactly the same. The very first time I access the page, I can see video.mp4 being downloaded. On subsequent reloads, I see the .html and .jpg files normally, but not the video.mp4 file. It must be cached somewhere because it plays. I disabled the cache in Chrome Dev Tools. I cleared the cache explicitly and tried an incognito window. Apart from the very first time, I never see any indication of the video file being downloaded.
I must be missing something obvious. Can anyone else reproduce this?
Here is my HTML.
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<p>Test page.</p>
<img src="media/image.jpg">
<video src="media/video.mp4" controls="">
Display this if the browser can't play video.
</video>
</body>
</html>
Information moved from comments on an answer to the question:
1:
Thanks @nakji. Clearing the cache and private browsing made no difference at all, but closing the browser did. I reopened the browser after clearing the cache. On my very first access to the page I could see two GETs for video.mp4 with responses of 206 (Partial Content). But after that it was back to the original problem. I will download Chrome and try that.
2:
@ManoDestro: I tried everything possible to force a fresh download of video.mp4, but it's not happening. I reloaded the page with Ctrl+F5. I turned off caching in Dev Tools settings. I cleared the cache manually. I tried a private browsing window. I can't think of anything else. It's like the video is served from a secret cache that doesn't obey the normal caching rules. I have used multiple tools to confirm that the file is not coming down the wire: FF Dev Tools, Firebug, and now Wireshark. Can someone please test with a similar setup?
After a whole day's Googling I can now answer my own question. It turns out that Firefox has a special "media cache" for HTML5 video and audio content which is completely separate from the regular cache that everyone knows about. It is optimised for the high bandwidth and huge files associated with media content. One of the devs, Robert O'Callahan, explains it all here.
The dumb thing is that this media cache doesn't seem to get cleared when you would expect it to. In fact it never seems to get cleared. Ever. The result is that Firefox keeps serving up stale content from the cache when you really want it to fetch the media file again from the server. This was the problem I was trying to debug originally. Firefox kept playing the wrong video after I changed the file on the server. I couldn't get it to download the new version.
All the things you normally do to force a page reload don't work with the media cache. The following have no effect.
The user selects 'Clear recent history' and deletes everything.
The user turns off caching in Developer tools.
The user forces a complete page reload with Ctrl+F5.
The only thing that does work is closing the browser and starting again. I'm still finding my way around this complex area. If anyone knows any more about it, please comment.
I reported this as a bug to Firefox here.
I have a test.html which uses multiple stylesheets. When I opened the file from the local drive (not going through a webserver), the styles were applied differently compared to loading the files from the webserver. Can this happen? How can I prevent it?
Attached is a comparison image of the styles listing from the Chrome browser developer tools.
As it was mentioned in comments, this looks like a browser caching issue.
A web cache stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met.
http://en.wikipedia.org/wiki/Web_cache
Try hitting F5 to refresh the page, or disable the cache. Usually this is done through the developer tools in the browser (F12).
I'm having a very strange problem with a site in Google Chrome:
When I click on a link (from a list view to a detail page), the page hangs and Chrome throws up a dialogue asking me to kill the page. The page is never displayed.
But if I navigate directly to the page, it loads in Chrome without any problems. Both actions (clicking on a link or navigating to the page) work fine in Safari and Firefox.
Disabling "Predict network actions to improve page load performance" in Chrome's settings seems to fix the problem, but this is not a viable solution as I don't have any control of my user's browser settings.
Some more detail about the situation:
The link is just a regular <a href>. I'm not doing any JavaScript click() handling or anything else. I'm not using any 'prefetch' or 'prerender' <link> elements.
The pages all validate using the W3 HTML5 validator.
The page I'm navigating to loads a lot of JS, uses Knockout.js for rendering and loads a video file over HTTP.
On the occasions that the page does load (after a very long wait), Chrome appears to have rendered the entire page in the background and loaded all external resources. If I navigate directly to the page it doesn't preload anything though (I'm using Knockout to show a 'please wait' message while the external resources load).
When I log the network requests using Charles, it appears that Chrome loads the HTML for the page instantly, but the requests for the external resources seem to take forever.
If I look at the CPU usage in Activity Monitor, 'Google Chrome Renderer' uses 100% CPU when loading from the href, but only 30% when loading directly from the page.
I'm using the latest version of Chrome (22.0.1229.94)
So, my question:
Is there a way to programmatically disable "Predict network actions to improve page load performance"?
Or is there some other solution to this problem?
Just going through high-voted unanswered questions I came across this one. I once got into a similar situation for entirely different reasons (Chrome was preloading a huge file I couldn't afford to load for every user). The fairly simple solution I applied back then was to open the link through JavaScript rather than a simple href, which worked wonders. Either way, your problem might already be solved, but seeing the number of views I thought I could at least share this small insight.
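For anyone landing here later, here is a minimal sketch of that workaround (the data-href attribute and the /detail/123 URL are placeholders I made up, not part of the original site). Because the real URL never appears in an href, Chrome's prediction feature has nothing to prerender:

<a href="#" data-href="/detail/123">View details</a>
<script>
// Navigate from script instead of a plain href so the prerenderer
// never sees the destination URL in the markup.
document.querySelector('a[data-href]').addEventListener('click', function (e) {
  e.preventDefault();
  window.location.href = this.getAttribute('data-href');
});
</script>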
I've been using HTML5 Offline caching on my website for a while and for some reasons I am considering turning it off. To my surprise it doesn't work.
This is how I've implemented HTML5 Offline caching.
In my index.html I give the path to the manifest file:
<html manifest="app.manifest">
In the app.manifest file I list all the js/css/png files that I would like the browser to cache for offline usage. Every time I deploy updates, I update the app.manifest file, which causes the browser to fetch the latest version of all the files listed in it.
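For context, a typical app.manifest looks something like this (the file names below are placeholders; the version comment is the usual way to force a refetch, since any byte change to the manifest triggers one):

CACHE MANIFEST
# v2 2016-08-01 - bump this comment to force clients to refetch

CACHE:
js/app.js
css/style.css
img/logo.png

NETWORK:
*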
In order to turn off the offline caching, I changed my index.html's opening tag to
<html>
I made a dummy change to app.manifest file, so that browser (which has already cached my website), will detect the change and download latest version of all the files (including index.html).
What I noticed is that the browser indeed gets the latest version of all the files. I see the new <html> tag in the updated version, without the manifest declaration; however, the browser's behavior for future changes does not change. That is, I now expect the browser to immediately fetch the new version of index.html when it's changed on the server, but that doesn't happen. The browser doesn't download the updated index.html until I make a change to the manifest file.
Thus it appears to me that the browser has permanently associated the app.manifest file with my website's URL and won't get rid of it even when I no longer mention it in the <html> tag.
I have tested this on both Google Chrome and Firefox, same results. I also tried restarting Chrome, but it won't forget that my site ever had app.manifest defined for it. I haven't found any discussion on this aspect of offline caching on the web.
Update: I managed to get rid of the behavior in Chrome by clearing all the browsing data (by going to settings). But that's not something I can tell the users to do.
Make the manifest URL return a 404 to indicate you don't want offline web applications anymore. According to Step 5 of HTML5 §5.6.4, this marks the cache as obsolete and will remove it.
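As a sketch of what that might look like with a Node/Express server (Express is just an example here; any server that answers the manifest URL with a 404 or 410 works):

// Answer the old manifest URL with 404 so browsers mark the
// application cache group as obsolete and delete it.
const express = require('express');
const app = express();

app.get('/app.manifest', function (req, res) {
  res.status(404).end(); // 410 Gone also works per the spec
});

app.use(express.static('public')); // serve everything else as before

app.listen(8080);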
You can also manually delete the offline web application in Chrome by going to about:appcache-internals.