Chrome not updating CSS file for offline-enabled HTML5 website

I have an offline-enabled website that uses a cache manifest. I'm finding that Chrome serves an older version of my stylesheet, even if I do an "Empty Cache and Hard Reload".
If I append ?foo=bar to the URL of the page or the CSS, the new version of the CSS is delivered.
My manifest is dynamically generated at /Manifest/Index.
If I open the page in Chrome and check out Fiddler, I see a single request is made to the web server, as expected:
# Result Protocol Host URL Body Caching Content-Type Process Comments Custom
6 200 HTTP 10.6.4.67 /Manifest/Index 2,476 no-cache Expires: -1 text/cache-manifest; charset=utf-8 chrome:5484
Here is the header detail for /Manifest/Index
GET /Manifest/Index HTTP/1.1
Host: 10.6.4.67
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
HTTP/1.1 200 OK
Date: Thu, 10 Jan 2013 17:59:42 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 4.0.30319
X-AspNetMvc-Version: 4.0
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: text/cache-manifest; charset=utf-8
Content-Length: 2476
Can anyone tell me why on earth a CSS file referenced in this cache manifest isn't updating unless I append a cache-busting query-string variable to the CSS, especially when I've emptied Chrome's cache?
More info:
If I update the cache-manifest, I can open up Chrome's console and see the App Cache events fire:
Document was loaded from Application Cache with manifest /Manifest/Index
Application Cache Checking event
Application Cache Downloading event
Application Cache Progress event (0 of 61) http://x.x.x.x/Content/themes/base/jquery.ui.progressbar.css
Application Cache Progress event (1 of 61) http://x.x.x.x/Content/themes/base/jquery.ui.accordion.css
Snip
Application Cache Progress event (54 of 61) http://x.x.x.x/Content/Site.css
I do notice that some of the items in this list, like Site.css, are underlined. Why is that?
Thanks,
Chris

Clear your appcache in Chrome using: chrome://appcache-internals/ and remove it there.
Also, you need to rebuild your manifest file each time you change the files listed in it, so that the new copies are downloaded.
This can be done by putting a random number in your manifest and regenerating it whenever the files are edited.
For example, in Node.js:
function generateCacheManifest(/* ... file list, etc. */) {
  // The manifest must start with this exact line.
  var manifest = 'CACHE MANIFEST\n';
  // A version comment that changes whenever files are edited forces clients to re-download them.
  manifest += '#version ' + Math.random() + '\n';
  // ... append the CACHE/NETWORK entries here
  return manifest;
}
Yes, the random number can be in a comment. The point is that Chrome checks the cache manifest, and when it sees that nothing has changed it will not fetch the updated files.
Change a file, change your manifest; it's that simple.
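As a complementary sketch (assuming a Node.js/Express server, which is not what the question uses, so treat it purely as an illustration of the headers involved), the dynamically generated manifest would be served with the cache-manifest MIME type and no-cache headers like this:
var express = require('express');
var app = express();

app.get('/Manifest/Index', function (req, res) {
  // Serve the manifest with the cache-manifest MIME type and disable HTTP
  // caching so the browser always revalidates the manifest itself.
  res.set('Content-Type', 'text/cache-manifest; charset=utf-8');
  res.set('Cache-Control', 'no-cache');
  res.set('Expires', '-1');
  res.send(generateCacheManifest());
});

app.listen(8080);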

Related

Workbox pre-cached audio with range requests router fails to play in Chrome when served from Firebase

Background
I have created a PWA test project to find out how to get audio caching working with Workbox,
including scrub/seek using the range requests plugin.
I want the app to precache all audio and for this audio to be playable offline, including scrub/seek.
Pre-caching the audio can be done in one of two ways:
Using Workbox injectManifest.
By manually adding audio files to a cache using cache.add(URL)
But audio files cached with the first method (injectManifest) will not scrub/seek because the Workbox pre-cache does not
support range request headers. So you need to put a range request enabled router in front of the
pre-cache for audio files if you want to be able to scrub through/seek within a cached audio file.
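For reference, such a route looks roughly like this in the service worker. This is only a sketch against the Workbox v4 CDN namespaces mentioned in the question; the URL pattern is an illustrative assumption, and the cache name is taken from the logs in the Problem section below:
workbox.routing.registerRoute(
  /\/media\/audio\/.*\.mp3$/,  // assumed pattern for the pre-cached audio files
  new workbox.strategies.CacheOnly({
    cacheName: 'act-auto-pre-cache-wbv4.3.1-actv0.0.1',
    plugins: [
      // Slices full cached responses so 206 (partial content) range responses
      // can be returned, which is what makes scrub/seek work offline.
      new workbox.rangeRequests.Plugin(),
    ],
    matchOptions: { ignoreSearch: true },
  })
);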
Problem
Pre-cached audio with a range requests enabled router will play and scrub/seek fine in Chrome and Firefox when app is served
from localhost but fails to play in Chrome when served from Firebase.
I see the same error for all audio files that are pre-cached with a range requests router in front of them:
Router is responding to: /media/audio/auto-pre-cached.mp3
Using CacheOnly to respond to '/media/audio/auto-pre-cached.mp3'
No response found in the 'act-auto-pre-cache-wbv4.3.1-actv0.0.1' cache.
The FetchEvent for "https://daffinm-test.firebaseapp.com/media/audio/auto-pre-cached.mp3" resulted in a network error response: the promise was rejected.
CacheOnly.mjs:115 Uncaught (in promise) no-response: The strategy could not generate a response for 'https://daffinm-test.firebaseapp.com/media/audio/auto-pre-cached.mp3'.
at CacheOnly.makeRequest (https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-strategies.dev.js:343:15)
Chrome versions tried:
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36
The files are present in the Workbox caches. The only difference I can see between localhost and Firebase is in the cached response headers:
Localhost
cache-control: max-age=3600
content-length: 3770956
content-type: audio/mpeg; charset=utf-8
Date: Mon, 07 Oct 2019 09:37:03 GMT
etag: "12456134-3770956-"2019-09-29T20:05:00.314Z""
last-modified: Sun, 29 Sep 2019 20:05:00 GMT
server: ecstatic-2.2.2
Firebase
accept-ranges: bytes
cache-control: max-age=3600
content-encoding: gzip
content-length: 3686565
content-type: audio/mpeg
date: Mon, 07 Oct 2019 11:47:43 GMT
etag: 267d9ec42517198c01e2cad893f1b14662a2d91904bc517aeda244c30358457c
last-modified: Mon, 07 Oct 2019 03:48:25 PDT
status: 200
strict-transport-security: max-age=31556926; includeSubDomains; preload
vary: x-fh-requested-host, accept-encoding
x-cache: MISS
x-cache-hits: 0
x-served-by: cache-lhr7363-LHR
x-timer: S1570448862.315027,VS0,VE1472
Firefox works fine in both cases.
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:69.0) Gecko/20100101 Firefox/69.0
Code
You can find the code for the test app here including a full description of the test setup, expectations and results:
https://github.com/daffinm/audio-cache-test
And the app is currently deployed on Firebase here if you want to take a look:
https://daffinm-test.firebaseapp.com/
Question
Does anyone have any idea what's going on here and why pre-cached audio with a range requests router in front of it is failing to play in Chrome? Is this a Chrome bug, a Workbox misconfiguration, a Firebase configuration issue, or something completely different? (I have contacted Firebase support; they are being very helpful but are currently unable to enlighten me.)
The presence of the Vary header in the Firebase responses sounds like the culprit. By default, the Cache Storage API will use the Vary header when determining whether or not there's a cache match. You can override this default behavior by passing in {ignoreVary: true} when querying the Cache Storage API. Workbox supports this as an option you can provide when creating your strategy, via the matchOptions parameter.
It looks like you're already passing in ignoreSearch: true, so you can just add ignoreVary: true alongside that.
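In terms of the sketch above, the strategy would then be constructed with both match options (again assuming the Workbox v4 CDN namespaces; the cache name comes from the question's logs):
new workbox.strategies.CacheOnly({
  cacheName: 'act-auto-pre-cache-wbv4.3.1-actv0.0.1',
  plugins: [new workbox.rangeRequests.Plugin()],
  matchOptions: {
    ignoreSearch: true,
    // Ignore the Vary header Firebase adds, so lookups in the Cache Storage API still match.
    ignoreVary: true,
  },
});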

Chrome over-caching .json files served from nginx

I'm using nginx and Dojo to build an embedded UI driven by a set of JSON files. Our primary target browser is Chrome, but it should work with all modern browsers.
Changing the JSON files can change the UI drastically, and I use this to give different presentations to different users. See my previous question for the details (Configure nginx to return different files to different authenticated users with the same URI), but basically my nginx configuration is such that the same URI with different users can yield different content.
This all works very well, except when someone switches to a different user. Some browsers will grab those JSON files from their own internal cache without even checking with the server, which leaves the UI displaying the previous user's presentation. Reloading the page fixes it, but boy! would I rather the right thing happened automatically.
The obvious solution is to use the various cache headers, but they don't appear to help. I'm using the following nginx directives:
expires epoch;
etag off;
if_modified_since off;
add_header Last-Modified "";
... which yields the following response headers:
HTTP/1.1 200 OK
Server: nginx/1.4.1
Date: Wed, 24 Sep 2014 16:58:32 GMT
Content-Type: application/octet-stream
Content-Length: 1116
Connection: keep-alive
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
Accept-Ranges: bytes
This looks pretty conclusive to me, but the problem still occurs with Chrome 36 for OS X and Opera 24 for OS X (although Firefox 29 and 32 do the right thing). Chrome is content to grab files from its cache without even referring to the server.
Here's a detailed example, with headers pulled from Chrome's Network debug panel. The first time Chrome fetches /app/resources/states.json, Chrome reports
Remote Address:75.144.159.89:8765
Request URL:http://XXXXXXXXXXXXXXX/app/resources/screens.json
Request Method:GET
Status Code:200 OK
with request headers:
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Authorization:Basic dm9sdGFpcndlYjp2b2x0YWly
Cache-Control:max-age=0
Connection:keep-alive
Content-Type:application/x-www-form-urlencoded
DNT:1
Host:suitable.dyndns.org:8765
Referer:http://XXXXXXXXXXXXXXXXXXXXXX/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
X-Requested-With:XMLHttpRequest
and response headers:
Accept-Ranges:bytes
Cache-Control:no-cache
Connection:keep-alive
Content-Length:2369
Content-Type:application/octet-stream
Date:Wed, 24 Sep 2014 17:19:46 GMT
Expires:Thu, 01 Jan 1970 00:00:01 GMT
Server:nginx/1.4.1
Again, all fine and good. But, when I change the user (by restarting Chrome and then reloading the parent page), I get the following Chrome report:
Remote Address:75.144.159.89:8765
Request URL:http://suitable.dyndns.org:8765/app/resources/states.json
Request Method:GET
Status Code:200 OK (from cache)
with no apparent contact to the server.
This doesn't seem to happen with all files. A few .js files are cached, most are not; none of the .css files seem to be cached; all the .html files are cached, and all of the .json files are cached.
How can I tell the browser (I'm looking at you, Chrome!) that these files are good at the moment it requests them, but will never again be good? Is this a Chrome bug? (If so, it's strange that Opera also shows the problem.)
I believe I've found the problem. Apparently "Cache-Control: no-cache" is insufficient to tell the browser to, um, not cache the data. I added "no-store":
Cache-Control:no-store, no-cache
and that did the trick. No more caching by Chrome or Opera.
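In nginx terms, building on the directives already shown in the question (the JSON-specific location block here is an illustrative assumption), that amounts to something like:
location ~ \.json$ {
    # no-store is what actually stops Chrome/Opera from reusing the response;
    # no-cache on its own only governs revalidation.
    add_header Cache-Control "no-store, no-cache";
    expires epoch;
    etag off;
    if_modified_since off;
    add_header Last-Modified "";
}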
I had the same problem, with json being cached...
If you control the client application-code, a possible workaround is to just add a random-value query-parameter at the end of the URL.
So instead of calling:
http://XXXXXXXXXXXXXXX/app/resources/screens.json
you call, for example:
http://XXXXXXXXXXXXXXX/app/resources/screens.json?rand=rrrrrrrrrr
where rrrrrrrrrr is some random-value that is different in each call.
Then, the browser will not be able to reuse any cached values.
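For example, in plain client-side JavaScript (a sketch only; the question's app uses Dojo, but the same idea applies to any XHR wrapper, and the helper name here is made up):
function fetchFreshJson(url, callback) {
  // Append a unique value so the browser treats each request as a new URL
  // and cannot satisfy it from its cache.
  var bust = (url.indexOf('?') === -1 ? '?' : '&') + 'rand=' + Date.now();
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url + bust, true);
  xhr.onload = function () {
    callback(JSON.parse(xhr.responseText));
  };
  xhr.send();
}

fetchFreshJson('/app/resources/screens.json', function (screens) {
  // Use the freshly loaded configuration.
  console.log(screens);
});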

WinJS webpage inside webview returns 0x80070005 on ajax call that uses cors

We have a WinJS app that shows a web site inside a WebView control. The web page loads fine; however, every AJAX call fails with an 0x80070005 "Access is denied" error. We tried adding the "Internet (Client)", "Internet (Client/Server)" and "Private Networks" capabilities to the app without success.
The calls use CORS to allow calling multiple domains, and the site works fine in every desktop browser, even in IE in the Modern environment. However, when the site runs inside the WebView control, only the preflight request is sent; despite the server responding with status code 200, the real request is never made.
We can see this using Fiddler, here is the preflight request:
OPTIONS /Queries/QueryContentAreas/GetAvaliableContentAreas HTTP/1.1
Accept: */*
Origin: https://myapp.demo.es
Access-Control-Request-Method: GET
Access-Control-Request-Headers: accept, authorization
UA-CPU: AMD64
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64; Trident/7.0; MSAppHost/2.0; rv:11.0) like Gecko
Host: myappapi.demo.es
Content-Length: 0
Connection: Keep-Alive
Cache-Control: no-cache
And here is the server's response:
HTTP/1.1 200 OK
Server: Microsoft-IIS/8.0
Access-Control-Allow-Origin: https://myapp.demo.es
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: authorization
X-Powered-By: ASP.NET
Date: Thu, 29 May 2014 16:32:56 GMT
Content-Length: 0
In Visual Studio this access denied error appears:
SCRIPT7002: XMLHttpRequest: Error de red 0x80070005, Acceso denegado. (Network error 0x80070005, Access denied.)
File: demo
Not sure if you ever figured this out or not, but I'm running into something similar.
Based on Features and restrictions by context (HTML), it seems that, since the app is in the web context (because it's in a WebView), cross-domain XHR requests are not allowed.

Why does the browser still send a request for Cache-Control: public with max-age?

I have Amazon S3 objects, and for each object, I have set
Cache-Control: public, max-age=3600000
That is roughly 41 days.
And I have Amazon CloudFront Distribution set with Minimum TTL also with 3600000.
This is the first request after clearing cache.
GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
And Response is
HTTP/1.1 200 OK
Content-Type: application/x-javascript
Content-Length: 226802
Connection: keep-alive
Date: Wed, 28 Aug 2013 10:37:38 GMT
Cache-Control: public, max-age=3600000
Last-Modified: Wed, 28 Aug 2013 10:36:42 GMT
ETag: "124752e0d85461a16e76fbdef2e84fb9"
Accept-Ranges: bytes
Server: AmazonS3
Age: 342557
Via: 1.0 6eb330235ca3971f6142a5f789cbc988.cloudfront.net (CloudFront)
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: 92Q2uDA4KizhPk4TludKpwP6Q6uEaKRV0ls9P_TIr11c8GQpTuSfhw==
Even though Amazon clearly sends Cache-Control, Chrome still makes a second request instead of reading from its cache.
GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
If-None-Match: "124752e0d85461a16e76fbdef2e84fb9"
If-Modified-Since: Wed, 28 Aug 2013 10:36:42 GMT
Question:
Why does Chrome make a second request?
Expires
This behavior changes when I put an explicit Expires header in the response. The browser does not send subsequent requests when an Expires header is present, but with Cache-Control: public it does. All my S3 objects are immutable and will never change; when we change a file, we upload it as a new object with a new URL.
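For what it's worth, both headers can be set when the object is uploaded. Here is a minimal sketch using the AWS SDK for JavaScript; the bucket name and local file path are assumptions, as is uploading via the SDK at all:
var AWS = require('aws-sdk');
var fs = require('fs');
var s3 = new AWS.S3();

s3.putObject({
  Bucket: 'my-bucket',                      // assumed bucket name
  Key: '1.0.8/web-atoms.js',
  Body: fs.readFileSync('web-atoms.js'),    // assumed local path
  ContentType: 'application/x-javascript',
  CacheControl: 'public, max-age=3600000',
  // An explicit far-future Expires header, in addition to Cache-Control.
  Expires: new Date(Date.now() + 3600000 * 1000)
}, function (err) {
  if (err) { console.error(err); }
});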
In-Page Script Reference
Chrome makes subsequent requests only sometimes. I did the test above by typing the URL directly into the browser. When the script is referenced by an HTML page, Chrome loads the cached script for a few subsequent requests, but every once in a while it sends a request to the server again. There is no disk-size issue here; Chrome has sufficient cache space.
The problem is that we get charged for every request. I want the S3 objects to be cached forever, loaded from the cache, and never fetched from the server again.
When you press F5 in Chrome, it will always send requests to the server. These will be made with the Cache-Control:max-age=0 header. The server will usually respond with a 304 (Not Changed) status code.
When you press Ctrl+F5 or Shift+F5, the same requests are performed, but with the Cache-Control:no-cache header, thus forcing the server to send an uncached version, usually with a 200 (OK) status code.
If you want to make sure that you're utilizing the local browser cache, simply press Enter in the address bar.
If the HTTP response contains an ETag header, the conditional request will always be made. The ETag is a cache validator; the client sends it back to the server to check whether the resource has been modified.
If Chrome Developer Tools are open (F12), Chrome usually disables caching.
It is controllable in the Developer Tools settings - the Gear icon to the right of the dev-tools top bar.
If you load a page or resource by hitting the refresh button, the If-Modified-Since request header is sent every time. If you instead request the page/resource as a separate request in a new tab, or via a link in a script or HTML page, it is loaded from the browser cache itself.
This is what happened in my case; it may be the general case. I am not completely sure, but this is what I gathered from my digging.
Chrome adds a Cache-Control: max-age=0 header when you use a self-signed certificate. Switching from HTTPS to HTTP will remove this header.
Firefox doesn't add this header.

Cross Origin request blocked by HTML5 cache manifest on Firefox

I am having problems doing some cross-origin requests with Firefox and the application cache.
The error handler of my XHR request gets called, and the status of the XHR request is 0.
When I look at the network logs in Firebug, I see an OPTIONS request that looks fine:
OPTIONS /foo.bar HTTP/1.1
Host: localhost:1337
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:19.0) Gecko/20100101 Firefox/19.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Origin: http://localhost:8080
Access-Control-Request-Method: GET
Access-Control-Request-Headers: content-type
Connection: keep-alive
To which the server responds with something that looks OK:
HTTP/1.1 200 OK
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Access-Control-Allow-Origin: http://localhost:8080
Access-Control-Allow-Methods: GET, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: content-type
Date: Thu, 14 Mar 2013 17:55:22 GMT
Connection: keep-alive
Transfer-Encoding: chunked
Then the GET itself gets no response:
GET /foo.bar HTTP/1.1
Host: localhost:1337
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:19.0) Gecko/20100101 Firefox/19.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Origin: http://localhost:8080
Connection: keep-alive
(When looking at the server logs, the server never receives the request)
I am using the HTML5 application cache mechanism, and here is the NETWORK section of my manifest:
NETWORK:
default.php
resource.php
http://localhost:1337/
Here is what I tried:
Replace http://localhost:1337/ with * in the manifest file: it works, but I don't like it; I find blocking non-explicit network requests handy for detecting missing CACHE entries.
Replace the GET method with a POST method: it works, but I don't like it, as it is semantically wrong (I am trying to get a resource, not to post data).
Replace the GET method with a custom-but-semantically-correct READ method: it doesn't work, but it was fun.
It is my understanding that what I am trying to do falls under step 3 of the "Changes to the networking model" section of the W3C spec and should work as is.
So, after all this, my questions are:
What am I doing wrong?
Is this a bug in Firefox? (I forgot to mention that my site works like a charm in Chrome and IE10 (yes, IE10, as in Microsoft Internet Explorer version 10).)
If I have to use a workaround to make it work with Firefox, which one should I choose? Is there a better solution than the two bad ones I found?
Although the spec says that http://localhost:1337 in the NETWORK section of your cache manifest should be sufficient, it might be worth trying the full URL (http://localhost:1337/foo.bar) to see if there's a bug in Firefox's implementation.
If that doesn't do the trick and all else fails, I would just go with putting * in your NETWORK section, at least until you figure out what's causing the problem. Value code that works for your users over code that works for you. Besides, there are other ways to find missing entries in the manifest.
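In other words, the NETWORK section from the question would become something like the following, where the full URL is the only change (whether Firefox honours it is exactly what is being tested):
NETWORK:
default.php
resource.php
http://localhost:1337/foo.bar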
That problem was mentioned in A List Apart: Application Cache is a Douchebag. See Gotcha #9.
You have to listen to each response and then filter for response or error on your own.
$.ajax( url ).always( function(response) {
  // Exit if this request was deliberately aborted
  if (response.statusText === 'abort') { return; }
  // Does this smell like an error?
  if (response.responseText !== undefined) {
    if (response.responseText && response.status < 400) {
      // Not a real error: recover the content from response.responseText
    } else {
      // This is a proper error, deal with it
      return;
    }
  }
  // do something with 'response'
});
There is an open defect in Firefox (see also the linked duplicate) that any cross domain resource referenced in the manifest gets blocked on subsequent refreshes. Not much you can do at this point except vote and wait.
Note that this issue should be resolved in Firefox 33 onwards.