Firefox - can't refresh cached video file - html

Edited to clarify the underlying question.
I am trying to debug a simple HTML5 webpage containing one image and one video. Everything displays fine. The video plays correctly. But, when I try to refresh the page, everything is downloaded except the video file. I am using the Firefox developer tools but I can't understand what is going on.
On the network tab I see the .html file being downloaded, then the image.jpg file. But I never see the video.mp4 file downloaded. The video plays OK, but it is not the current version on the server. It seems to be a previous version that has been cached.
I'm mystified why this should be. The cache is disabled in developer tools. I'm refreshing the page with Ctrl+F5. It's as if the video is being served from some secret local cache that I don't know about. I'm using Firefox 47.0.1. The same thing also happens when I test with Firebug.
Edit. I have now tried Developer Tools in Chrome and it's exactly the same. The very first time I access the page, I can see video.mp4 being downloaded. On subsequent reloads, I see the .html and .jpg files normally, but not the video.mp4 file. It must be cached somewhere because it plays. I disabled the cache in Chrome Dev Tools. I cleared the cache explicitly and tried an incognito window. Apart from the very first time, I never see any indication of the video file being downloaded.
I must be missing something obvious. Can anyone else reproduce this?
Here is my HTML.
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<p>Test page.</p>
<img src="media/image.jpg">
<video src="media/video.mp4" controls="">
Display this if the browser can't play video.
</video>
</body>
</html>
Information moved from comments on an answer to the question:
1:
Thanks @nakji. Clearing the cache and private browsing made no difference at all. But closing the browser did. I reopened the browser after clearing the cache. On my very first access to the page I could see two GETs for video.mp4 with responses 206 (Partial Content). But after that it was back to the original problem. I will download Chrome and try that.
2:
@ManoDestro: I tried everything possible to force a fresh download of video.mp4, but it's not happening. I reloaded the page with Ctrl+F5. I turned off caching in Dev Tools settings. I cleared the cache manually. I tried a private browsing window. I can't think of anything else. It's like the video is served from a secret cache that doesn't obey the normal caching rules. I have used multiple tools to confirm that the file is not coming down the wire - FF Dev Tools, Firebug, and now Wireshark. Can someone please test with a similar setup?

After a whole day's Googling I can now answer my own question. It turns out that Firefox has a special "media cache" for HTML5 video and audio content which is completely separate from the regular cache that everyone knows about. It is optimised for the high bandwidth and huge files associated with media content. One of the devs, Robert O'Callahan, explains it all here.
The dumb thing is that this media cache doesn't seem to get cleared when you would expect it to. In fact it never seems to get cleared. Ever. The result is that Firefox keeps serving up stale content from the cache when you really want it to fetch the media file again from the server. This was the problem I was trying to debug originally. Firefox kept playing the wrong video after I changed the file on the server. I couldn't get it to download the new version.
All the things you normally do to force a page reload don't work with the media cache. The following have no effect.
The user selects 'Clear recent history' and deletes everything.
The user turns off caching in Developer tools.
The user forces a complete page reload with Ctrl+F5.
The only thing that does work is closing the browser and starting again. I'm still finding my way around this complex area. If anyone knows any more about it, please comment.
I reported this as a bug to Firefox here.
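One workaround I'll note as an assumption rather than something I've verified in this setup: since a different URL gets its own cache entry, appending a version query string to the video source should force a fresh download whenever you bump it, e.g.:
<video src="media/video.mp4?v=2" controls>
Display this if the browser can't play video.
</video>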

Related

view-source in href shows error in console

<a href="view-source:http://stackoverflow.com">Click Me</a>
This used to work as a valid href attribute but it seems in the past few months it now shows an error in the console (I'm using Chrome):
Not allowed to load local resource: view-source:http://stackoverflow.com
I found some links from 2013 where this was once a bug in Chrome but said it was fixed.
Could someone point me to an authoritative source that can explain why this no longer works? I assume that this is security by the browser and not an Angular issue (since view-source is whitelisted and used to work).
Looks like Chrome and Firefox (at least) disabled this within the past year or so.
I found this thread, and these release notes, which explain why and provide a timeline for when the change took place.
Related StackOverflow question: File URL "Not allowed to load local resource" in the Internet Browser
Chrome responds with "Not allowed to load local resource:" as a security measure. I'm not sure why this used to work but no longer does; there is no real way around it unless web security is disabled. There may be a different outcome in other browsers, but ultimately you are correct in thinking that it's Chrome's security.
The reason is that Chrome tries to preload URLs in the background, to speed up your browsing experience.
If you open the DevTools after loading the page, the content of the items listed on the Resources tab may not be populated. This is also true of network requests on the Network tab. To see the fully populated resources on the Resources tab, first open the DevTools, then refresh the page, or navigate to the desired page with the DevTools open. Now select the html resource and it should be populated.

How to stop chrome from downloading unwanted assets?

In previous versions of Chrome, on a webpage with the following:
<script>
document.write('<plaintext>');
</script>
<img src="http://example.com/image.jpg">
the image would not be downloaded. At some point a Chrome update changed this behavior. Now when I look at the network tab, I see the image is downloaded. (fiddle here: https://jsfiddle.net/doojunqx/)
I have a script on a page, and I would like to use this script to stop the browser from downloading (and using up network bandwidth on) images and other assets that are unwanted and below my script tag.
Mobify does something similar here:
http://cdn.mobify.com/mobifyjs/examples/capturing-grumpycat/index.html
As they say on the page "Open your web inspector and note the original imgs did not load." However, when I open chrome developer tools and look at the network tab, I see the original images ARE now loading. I'm not sure what version of chrome changed this, but I think it is recent, within the last month or two.
Is there any way to force chrome back to the old behavior? Or any other way to stop these unwanted assets from loading?
Thanks,
Great question, and you're correct that it is a recent change in Chromium that affected the plaintext tag behaviour. In versions up to and including 42.*, the HTML document parser would not spawn an asynchronous parsing thread until an external resource was found in the original HTML document. Once such a resource was found, an asynchronous thread would be spawned that would aggressively download all resource references within the HTML.
The recent change simplified the parsing behaviour by moving all document parsing to the asynchronous thread which now kicks off automatically. Whereas before, using the plaintext tag would ensure that no resources would be loaded if it was inserted before the first external resource, the plaintext tag is now racy as resources will download up to the moment the plaintext tag is executed in the main HTML document. As there is a time delay for the script to execute, an unknown number of resources will be retrieved.
There is as yet no solution to this new behaviour, nor is there a way to disable the preload scanner as you would like. You will need to rely on workarounds such as polyfills to control your resource downloads. This new behaviour is present in all versions of Chrome >= 43.* and has not been implemented in Safari, Firefox, or other browsers.
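As a rough illustration of the kind of polyfill workaround meant above (a sketch only, and unlike the capturing approach it assumes you can change the markup that is served): ship assets with a placeholder attribute such as data-src so the preload scanner never sees a real URL, then have a small script opt back in only the assets you want.
<!-- The preload scanner only fetches real src/href URLs, so a data-src placeholder is ignored. -->
<img data-src="http://example.com/image.jpg" alt="">
<script>
// Later, load only the assets you actually want by copying data-src into src.
var imgs = document.querySelectorAll('img[data-src]');
for (var i = 0; i < imgs.length; i++) {
  imgs[i].src = imgs[i].getAttribute('data-src');
}
</script>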

Chrome keeps loading an old cache of my website

I am experiencing this weird issue where my Chrome browser keeps loading an old version of my website whose code doesn't even exist on my server any more. I assume it's a typical cache issue.
I tried cleaning the browser cache, using incognito mode, and clearing the DNS cache. The old cached page is still being loaded.
This issue seems to have been discussed on this Google group for three years, but there are still no solutions. https://productforums.google.com/forum/#!topic/chrome/xR-6YAkcASQ
Using Firefox or any other web browser works perfectly.
It doesn't just happen to me. All my coworkers experience the same issue on my website.
<?php header("Cache-Control: max-age=3000, must-revalidate"); ?>
You can add this PHP statement as the very first line of your index file; it sets an HTTP header of the kind typically issued by web servers. You can also rename the resource that is considered "stale". This tutorial will give you more details: https://www.mnot.net/cache_docs/
I'm not sure if I understand your problem correctly, but I was experiencing something similar and instead of clearing the cache I disabled it by doing this:
Open chrome and then go to your website
Press Command + Option + C (Mac)
Now that you've opened chrome's DevTools, go to the main menu where it says: Elements Console Sources ...
Click on the menu element that says Network
Make sure that the "Disable Cache" checkbox is checked
Then reload the page without closing the DevTools
This worked for me.
Let me know if it worked for you :)
A short term fix to view the new version of your site would normally be to clear out the cache and reload, for some reason this doesn't always work on Chrome. This short term solution is not going to fix the problem for every user that's on your site though, it will just allow you to see the new version of your site.
Adding version numbers to CSS and JS files will allow you, and every other user, to see the most recent version of your site. A version number will force any browser not to load the files from the user's local cache and instead load the actual files from the server, whenever the version number differs from the one in the cache.
So if you have these files on your server:
ExJS.js
ExCSS.css
and reference them as:
ExJS.js?v=1.01
ExCSS.css?v=1.01
the new version of these files will load in any browser.
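Concretely (assuming these files are included from your HTML), the versioned references would look something like this; bump the query string on each deploy so browsers treat them as new URLs:
<link rel="stylesheet" href="ExCSS.css?v=1.01">
<script src="ExJS.js?v=1.01"></script>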
Normally, a browser will always load the HTML file from the server, but there are some HTML meta tags you can use to make sure that the most recent HTML version will load for any user:
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
There are also ways to make sure that files in other languages always load the most recent version as well, which is discussed on this post:
How to add version number to HTML file (not only to css and js files)
You can open Inspect (DevTools), go to the Network tab, and check Disable cache.
Change the names of the images and update the corresponding image references in the HTML file. I found this quick fix worked for my website.
I ran into the same issue, and I also tried to disable caching on my JSP pages
<% response.setHeader("Cache-Control","no-cache");
response.setHeader("Pragma","no-cache");
response.setDateHeader ("Expires", 0); %>
But it didn't help.
This is a known issue with Google Chrome and Chromium-based browsers that persists even after you clear the cache and cookies.
However, it may or may not happen for every user.
It has also remained unresolved for the last 9-10 years.
Hence, for testing purposes I would highly recommend using Mozilla Firefox or Opera.
That said, it does sound as if your application works best only in certain browsers, which may not be convincing to business/end users.
But having said that, this caching issue may or may not affect most of us.
You should be able to clear the problem by resetting Chrome. This is the only way I found to clear this condition - after tearing my hair out for half a day.
Prior to finding this, I tried clearing the cache, deleting the contents of the various cache directories etc. in vain.
[As of today, May 3 2021] You can do this by going to 'Settings > Advanced > Reset and clean up > Restore settings to their original defaults'. Note that this will not remove any bookmarks but will log you out of all accounts you are signed into.
Adding a CNAME record will also help: if you always run the site without www, trying www.example.com will work.
I came across this issue developing locally, and tried the following things:
Clearing Cache + generally ALL files in Chrome
Setting the Cache-Control Header like Eli Duhon mentioned.
Setting the Cache Control Header in multiple other ways.
And the only thing that fixed the problem for me was to basically re-start my docker containers on which the app was running.
so I did this:
docker-compose down
And then
docker-compose up
and everything was updated after that.
HOWEVER, if you make changes again, they are still not updated...
So this is certainly not a fix for the problem, as I don't even know what causes this behaviour in the first place. I assume it has something to do with hot reloading and/or Docker, but restarting the containers was the only thing that did the trick for me, so I thought I would mention it here...
I had this problem moving a Wordpress site to new hosting where the URL redirects to .../wp, which hadn't been the case before.
Chrome was helpfully presenting a directory listing showing the file dates from the old server, despite the DNS having updated fully a week ago. So it was obviously demonstrating the problem discussed here.
I added an index.html file with just the following in it:
<meta http-equiv="refresh" content="0; URL='http://my-wp-site.com/wp'"/>
which fixed the problem straight away, including on Chrome browsers that had not had their cache cleared and that had no knowledge of any Google account of mine.
I don't know why this worked, however, given all the problems people have listed above.
You have two options:
a) Consider fingerprinting the stale resources, like:
<script src="js/app-4829382839238882882bb3442bbbbdhh3kh3.js" type="text/javascript"></script>
b) Add cache-control headers such as Cache-Control and Expires on your web server.
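For illustration only (the values are examples, mirroring the meta tags shown earlier, not a recommendation for every resource), the response headers for a resource you never want cached might look like:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0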
This is a good read on browser caching

Page changes not displaying on my computer only

I have come across a strange issue with my company's site, and it seems to only affect my computer. Changes that I have made to the raw HTML are not reflected in my browser (Firefox).
I have taken the following steps to resolving this issue, without any luck:
Ensured that the page was uploaded successfully to the correct directory (downloading the file from the server shows that it is the same file as the one just uploaded).
Cleared my browser history and cache, refreshed the page.
Opened the page in other browsers (IE, and Chrome).
RDP'd into our server and opened multiple browsers that way.
A colleague of mine sits directly across from me and he has opened the page in the same version of Firefox that I typically use and he can see the changes. He and I both work on the site regularly.
The strangest part is that I have made changes to this page before, and they showed on my screen instantly. These changes are still in place and visible, yet some HTML elements that existed before I made those changes do not show on my screen currently (despite still existing in the HTML).
Has anyone else ever experienced such a phenomenon? Is there anything else that I can try in order to resolve this issue?
Have you tried forcing a cache refresh? Try it by pressing Ctrl + F5.

How to turn off HTML5 Offline caching

I've been using HTML5 offline caching on my website for a while and, for a few reasons, I am considering turning it off. To my surprise, turning it off doesn't work.
This is how I've implemented HTML5 Offline caching.
In my index.html I give path to the manifest file
<html manifest="app.manifest">
In the app.manifest file I list all the js/css/png files that I would like to be cached by the browser for offline usage. Every time I deploy updates, I update the app.manifest file, which causes the browser to fetch the latest version of all the files listed in the manifest file.
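(For reference, the manifest is just a plain-text list shaped roughly like this; the file names below are placeholders, and bumping the version comment is what triggers the re-fetch.)
CACHE MANIFEST
# v2 - bump this comment on every deploy to force a re-download
js/app.js
css/style.css
img/logo.png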
In order to turn off the offline caching, I changed my index.html's opening tag to
<html>
I made a dummy change to the app.manifest file, so that the browser (which has already cached my website) will detect the change and download the latest version of all the files (including index.html).
What I noticed is that the browser indeed gets the latest version of all the files. I see the new <html> tag without the manifest declaration in the updated version; however, the browser's behavior for future changes does not change. That is, I now expect the browser to immediately fetch the new version of index.html when it changes on the server. However, that doesn't happen. The browser doesn't download the updated index.html until I make a change to the manifest file.
Thus it appears to me that the browser has permanently associated app.manifest file with my website URL and it won't get rid of it even when I don't mention it in <html> tag.
I have tested this on both Google Chrome and Firefox, same results. I also tried restarting Chrome, but it won't forget that my site ever had app.manifest defined for it. I haven't found any discussion on this aspect of offline caching on the web.
Update: I managed to get rid of the behavior in Chrome by clearing all the browsing data (by going to settings). But that's not something I can tell the users to do.
Make the manifest URL return a 404 to indicate you don't want offline web applications anymore. According to Step 5 of HTML5 §5.6.4, this marks the cache as obsolete, and will remove it.
You can also manually delete the offline web application in Chrome by going to about:appcache-internals.
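If you want to confirm from script that the 404 actually obsoleted the cache, a sketch along these lines (using the applicationCache API) should log when it happens:
<script>
// The 'obsolete' event fires when the manifest request returns 404/410
// and the application cache is marked obsolete.
window.applicationCache.addEventListener('obsolete', function () {
  console.log('Application cache is obsolete and will be removed.');
});
</script>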