More Odd Firefox Cache Manifest Behavior: Redirect to Outside Domain Results in 404 Failure

I have an HTML5/Javascript (PHP/MySQL on the server) app with a cache manifest and it runs fine on almost all mobile and desktop browsers except for Firefox.
When I remove the cache manifest, it works fine in Firefox. Therefore, it's something odd with the cache manifest in Firefox that I can't figure out. It's loading a file from the cache, even though the file sends a Cache-Control: no-store, no-cache header.
The file handles the OAuth dance for getting a LinkedIn access token and follows these steps:
The app calls the file via Javascript using window.location.replace('file.php')
file.php loads and is redirected to file.php?param=initiate
file.php?param=initiate loads, gets a request token from LinkedIn, then redirects to the LinkedIn authorization page, then gets redirected to file.php?param=initiate&otherparameters
file.php?param=initiate&otherparameters loads, otherparameters is used to get an access token from LinkedIn, then reloads the app because now it has access.
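For clarity, the first two hops of that dance look roughly like this on the wire (the LinkedIn URL is abbreviated and illustrative):
GET /file.php HTTP/1.1

HTTP/1.1 302 Found
Location: /file.php?param=initiate
Cache-Control: no-store, no-cache

GET /file.php?param=initiate HTTP/1.1

HTTP/1.1 302 Found
Location: https://www.linkedin.com/uas/oauth/authenticate?oauth_token=...
Cache-Control: no-store, no-cache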
However, on Firefox (16.0.2 on Windows 7), I get the following:
The app calls the file via Javascript using window.location.replace('file.php')
file.php loads and is redirected to file.php?param=initiate
(Firebug shows Status 302 Found and the Response Headers show the Location /file.php?param=initiate)
file.php?param=initiate loads and gets a request token from LinkedIn, but does NOT redirect to the LinkedIn authorization page: it shows the 404 page. (Firebug shows Status 302 Found, with the Response Headers showing the Location as the https LinkedIn authentication link, but Firefox does not go to the LinkedIn page; instead it makes another GET request for file.php?param=initiate, loads it from the cache with Status 200 OK (BF Cache), and shows the 404 page.)
file.php is NOT in the cache manifest.
Basically, Firefox does not follow the Location in the response header from step 3, which should take it to the LinkedIn authorization page, and I can't figure out why not.
Any ideas on how to fix this?
If you want to reproduce this problem, here's a link to a test event. Try to send a LinkedIn connection request and watch Firebug. All the LinkedIn profiles for this event (except mine) are dummy profiles, so don't worry about sending a LinkedIn connection request to a random stranger. You have to register first with your e-mail to get an activation link, but you can use a disposable e-mail address if you want to.
Some things I've tried:
No cache manifest: this fixes it, but I need offline functionality
Sending headers with various permutations of no-store, no-cache, must-revalidate, an Expires date in the past, etc.
Reducing the number of entries in the cache manifest
Various combinations of SETTINGS: prefer-online, NETWORK: *, NETWORK: https://*, etc. (see the manifest sketch below)
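For reference, one variant of the manifest I tried looked roughly like this (the cached file names are placeholders):
CACHE MANIFEST
# version placeholder

CACHE:
index.html
app.js
app.css

NETWORK:
*

SETTINGS:
prefer-online
Since file.php is never listed under CACHE: and NETWORK: * whitelists everything else, it should always have been fetched from the network.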

I solved this problem by re-writing my LinkedIn/OAuth library so that it does everything via ajax instead of sending the Location header via PHP.
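In outline, the ajax approach looks something like this (a minimal sketch; the JSON response shape and the authUrl field are illustrative, not the actual library code):
// file.php?param=initiate still gets the request token, but instead of
// answering with a 302 Location header it returns the LinkedIn
// authorization URL in the response body; JavaScript then performs the
// navigation itself, so the application cache never has to follow a
// server-issued redirect to an outside domain.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'file.php?param=initiate', true);
xhr.onload = function () {
    var data = JSON.parse(xhr.responseText);
    window.location.replace(data.authUrl); // illustrative field name
};
xhr.send(null);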
After much frustration I figured out why this problem was happening, so hopefully this will help others who face a similar predicament.
It turns out the cache manifest does not allow redirects to outside domains (this is probably documented somewhere, but it didn't show up for me when I searched for a solution).
Part of the problem was that I didn't know the redirect was causing the problem. I thought it was just the weirdness of the cache manifest. After all, it worked fine on Chrome and Safari and I didn't get any useful debugging info from Firebug.
Here's a useful article on the gotchas of cache manifest (this issue is listed as Gotcha #8).
Here's a link to the HTML Offline Spec (this issue appears to be listed in section 6.7.6(4), but the document is so opaque I can't even tell whether that's really what it's referring to).

Related

Chrome Https request canceled and gets retried on http

Starting with Chrome 93, we see the following behavior:
Browse to a specific page
Only when you click on the URL in the navigation panel and press Enter (the reload button, for example, works fine):
the page reloads and after a few seconds the request gets cancelled
A new request starts, but now with http instead of https.
At first we thought the problem was in the web server, but the server log does not show any request, most likely because the cancellation is a client-side Chrome action.
It works fine in other browsers.
There are no service workers installed
We tested it on multiple machines in private mode, to avoid any interference from third-party plugins.
When the request, including cookies and session, is copied to Postman, it loads normally.
During a navigation request: After waiting 3 seconds on https, a new attempt is started on http.

Errors on requests for resources, http instead of https and some net::ERR_FAILED

I manage a site, and if I open the browser console with the cache disabled, there are sometimes errors in requests for some resources. Among other things, the errors are GET https://www.domainname.topdomain/wp-content/uploads/2018/04/AN-IMAGE-150x150.jpg net::ERR_FAILED. These errors do not always occur, and not on the same resources. One time there was an http request for the emoji script instead of https, and I then saw in the HTML code that the src address for this script had the protocol http instead of https. How can that be? It is a WordPress site, and https is set in the admin, so it should be https everywhere. The error with http in a script src attribute in the HTML should not happen (I have only seen it once, but still...). Maybe this is more related to WordPress, I do not know.
For the net::ERR_FAILED errors, I don't see any response headers in the browser.
I don't see any net::ERR_FAILED errors in Firefox; I see them in Chrome and Edge.

Why doesn't Chrome or Firefox show request headers for pending requests?

If you are building a website and put a breakpoint in your server code so that a page cannot be returned until you move past the breakpoint, and you then (for instance) reload the page in Chrome or Firefox (I haven't tested others), you can't see any information about the request.
While debugging, sometimes it's easier to view information about the HTTP request in the browser's dev tools than it is to find that information in the server code. Why am I not able to see HTTP request information until a response is returned by the server?
From: https://bugs.chromium.org/p/chromium/issues/detail?id=294891:
Headers displayed for pending requests are provisional. They represent what request was sent from Blink to Chromium.
We do not update headers until server responds to avoid additional notification used only by DevTools.
To watch real network activity you can use chrome://net-internals
It's not clear what that means, but that's the cited reason.

Since v38, Chrome extension cannot load from HTTP URLs anymore, workaround?

The users of our website run our Chrome plugin which, amongst other things, performs cross-origin requests via XMLHttpRequest as described on the Chrome extension development pages. This has been running just fine for a few years now. However, ever since our users upgraded to the latest version of Chrome (v38), these requests have failed. Our site runs on HTTPS and some of the URLs loaded via our content script are on HTTP. The message is:
[blocked] The page at 'https://www.ourpage.com/' was loaded over HTTPS, but ran insecure content from 'http://www.externalpage.com': this content should also be loaded over HTTPS.
The reported line where the error occurred is in the content script where I'm issuing the HTTP call:
xhr.send(null);
I have no control over the external page and I would rather not remove SSL from our own page. Question: Is this a bug or is there a workaround that I am not aware of?
(Note: The permissions in the manifest were always set to <all_urls> which had worked for a long time. Setting it to http://*/ and https://*/ did not help.)
If possible, use the https version of that external page.
If that is not possible, use the background page to handle the AJAX request (example).
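The linked example is not reproduced here, but the usual pattern looks roughly like this (a sketch with a hypothetical message format; the manifest still needs the http://*/ or <all_urls> permission):
// content script: ask the background page to fetch the HTTP resource,
// since the background page is not subject to the page's mixed-content rules
chrome.runtime.sendMessage(
    { action: 'fetch', url: 'http://www.externalpage.com/' },
    function (response) {
        // use response.text here
    }
);

// background page: perform the cross-origin XHR on the content script's behalf
chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
    if (msg.action !== 'fetch') return;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', msg.url, true);
    xhr.onload = function () {
        sendResponse({ text: xhr.responseText });
    };
    xhr.send(null);
    return true; // keep the message channel open for the async sendResponse
});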

Why does Google Chrome NOT use cached pages when I define the HTTP "Expires" header

I am sending validly formatted HTTP response "Expires" headers (e.g. "Wed, 04 May 2011 09:29:09 GMT") with a page served through https://[host]:{port}/ (with [host] being localhost) from a J2EE application, using response.setDateHeader("Expires", {milliseconds a few seconds in the future}).
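In other words, the responses carry headers along these lines (values illustrative, matching the example date above):
HTTP/1.1 200 OK
Date: Wed, 04 May 2011 09:29:04 GMT
Expires: Wed, 04 May 2011 09:29:09 GMT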
On my pages I have a link to the same page. When I click this link in Firefox (4) or IE (8), the page is reloaded from cache until the Expires time is reached. Once the Expires time has passed, clicking the same link results in the page being loaded from the server with fresh data. If I hit F5 in either of the mentioned browsers, the page is reloaded with new data from the server (Firebug shows me that Cache-Control: max-age=0 is being sent with the request).
With Google Chrome, both F5 and clicking on the link have the same effect. The page is ALWAYS reloaded from the server with new data.
I was unable to find any well documented explanation of this effect.
Does anyone know why, in my case, Google Chrome is not respecting the "Expires" headers the server is sending with the page responses, and thus ALWAYS requests the data from the server?
The way Chrome works in this respect can cause extreme confusion. It seems that pressing F5 or "reload this page" simply prevents Chrome from serving the request from the cache. Compare this with pressing Enter in the URL bar, where it will use the cache, even though in both cases the request (which, when served from cache, doesn't actually get sent anywhere) has Cache-Control: max-age=0.
If you press Ctrl+F5 you get Cache-Control: no-cache. I think the difference between F5 and Ctrl+F5 is that both result in a request being sent to the server, but in the Ctrl+F5 case the server should know not to respond with a 304 Not Modified.
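So the request headers for the two reloads differ roughly like this:
F5 ("reload this page"):  Cache-Control: max-age=0   (revalidate; the server may answer 304 Not Modified)
Ctrl+F5 (hard reload):    Cache-Control: no-cache    (the server should send a full, fresh 200 response)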