Chrome HTTPS request canceled and gets retried on HTTP - google-chrome

Since Chrome 93 we have started to see the following behavior.
Browse to a specific page
Only when you click the URL in the address bar and press Enter (the reload button, for example, works fine):
the page reloads and after a few seconds the requests get cancelled.
A new request starts, but over http instead of https.
At first we thought the problem was in the web server, but the server log shows no request at all, most likely because the cancellation is a client-side Chrome action.
It works fine in other browsers.
There are no service workers installed
We tested it on multiple machines in private mode, to rule out interference from third-party extensions.
When the request, including cookies and session, is copied to Postman, it loads normally.

During a navigation request, after waiting 3 seconds for a response over https, Chrome cancels the request and starts a new attempt over http.
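If you want to watch the fallback happen, a server that stalls past that window makes it easy to observe (a sketch; key.pem and cert.pem stand in for any self-signed certificate, and the 3-second figure is simply the delay observed above):

// HTTPS server that stalls for 5 seconds, longer than the fallback window described above
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('key.pem'),   // hypothetical self-signed key/cert pair
  cert: fs.readFileSync('cert.pem'),
};

https.createServer(options, (req, res) => {
  // Delay the response past the fallback window, then answer normally
  setTimeout(() => res.end('finally responded'), 5000);
}).listen(443);

Typing the bare hostname into the address bar and pressing Enter should then show the https request being cancelled and retried over http, while the reload button (which keeps the current scheme) does not trigger the fallback.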

Related

Google Chrome DevTools appears to be caching settings for blocked requests? Cannot unblock a previously blocked request

I blocked a websocket request to add error handling functionality via the Chrome dev tools, by right-clicking on the request and selecting "Block Request URL." This worked fine. When it came time to unblock the request, however, it seems that although the menu is now updated (it's back to "Block Request URL"), the actual request shows a state of "(pending)" for an indefinite amount of time.
I have a feeling DevTools has stored the decision somewhere and I need to manually delete that setting to reset it. How do I reset the Chrome settings to allow me to do so?
Things I've tried:
Deleting my user profile
Clearing browser cache and cookies
Clicking block/unblock request for that particular URL numerous times.

Why doesn't Chrome or Firefox show request headers for pending requests?

If you are building a website and put a breakpoint in your server code so that a page cannot be returned until you move past the breakpoint, and you (for instance) reload the page in Chrome or Firefox (I haven't tested others), you can't see any information about the request.
While debugging, sometimes it's easier to view information about the HTTP request in the browser's dev tools than it is to find that information in the server code. Why am I not able to see HTTP request information until a response is returned by the server?
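The situation is easy to reproduce without a debugger: a handler that never responds leaves the request pending for as long as you like (a minimal sketch in Node):

// A server that accepts requests but never writes a response,
// simulating code paused at a server-side breakpoint
const http = require('http');

http.createServer((req, res) => {
  console.log('request received for', req.url);
  // intentionally no res.end(): the browser shows the request as pending
}).listen(8080);

Visit http://localhost:8080 with DevTools open: the request stays "(pending)" and only provisional request headers are shown.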
From: https://bugs.chromium.org/p/chromium/issues/detail?id=294891:
Headers displayed for pending requests are provisional. They represent what request was sent from Blink to Chromium. We do not update headers until server responds to avoid additional notification used only by DevTools.
To watch real network activity you can use chrome://net-internals
It's not clear what that means, but that's the cited reason.

URLs that 302 Redirect to the Play Store with "market://details" do NOT show in the network tab in Chrome

We are using Chrome Puppeteer to verify that links are redirecting to the play store properly. However, we are seeing weird behavior on both Puppeteer and Chrome Desktop where links that 302 redirect to the play store do not show in the network tab--at all (well, for Puppeteer, it just gets stuck on the previous URL). As if the requests are never made.
To reproduce, you'll need a URL whose server responds with a 302 to a market link, such as market://details?id=com.kabam.marvelbattle. The network tab shows no activity when visiting this URL. Is this intentional? Is there a flag that can be used to show ALL network requests, no matter the response?
EDIT:
Example URL: http://appclk.me/store.php. With the network tab open, visit this URL. You will see that nothing happens and nothing shows in the network tab. Firefox DOES show this request.
I didn't figure out how to show all the requests in Chrome, but because my specific use case was Puppeteer, I'll post how it was solved with that:
Use the page.setRequestInterception(true) method to get all requests to fire in the request event. Then listen for the request event like so:
// Enable interception first (inside an async function)
await page.setRequestInterception(true);
// Set a listener for new requests
page.on('request', request => {
  console.log(request.url()); // url() is a method, not a property, in current Puppeteer
  request.continue();
});
The 302s and market link request events are fired.
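For completeness, a self-contained version of that approach might look like this (a sketch assuming a recent Puppeteer; the goto error is swallowed because navigation can abort, e.g. with net::ERR_ABORTED, once Chrome hands the market: URL off to the OS):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Interception must be enabled before navigating
  await page.setRequestInterception(true);
  page.on('request', request => {
    console.log(request.method(), request.url());
    request.continue();
  });

  // The example URL from above; navigation may reject when the redirect leaves http(s)
  await page.goto('http://appclk.me/store.php').catch(() => {});
  await browser.close();
})();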

More Odd Firefox Cache Manifest Behavior: Redirect to Outside Domain Results in 404 Failure

I have an HTML5/Javascript (PHP/MySQL on the server) app with a cache manifest and it runs fine on almost all mobile and desktop browsers except for Firefox.
When I remove the cache manifest, it works fine in Firefox. Therefore, it's something odd with the cache manifest in Firefox that I can't figure out. It's loading a file from the cache, even though the file sends a Cache-Control: no-store, no-cache header.
The file handles the OAuth dance for getting a LinkedIn access token and follows these steps:
The app calls the file via Javascript using window.location.replace('file.php')
file.php loads and is redirected to file.php?param=initiate
file.php?param=initiate loads, gets a request token from LinkedIn, then redirects to the LinkedIn authorization page, then gets redirected to file.php?param=initiate&otherparameters
file.php?param=initiate&otherparameters loads, otherparameters is used to get an access token from LinkedIn, then reloads the app because now it has access.
However, on Firefox (16.0.2 on Windows 7), I get the following:
The app calls the file via Javascript using window.location.replace('file.php')
file.php loads and is redirected to file.php?param=initiate
(FireBug shows Status 302 Found and the Response Headers show the location /file.php?param=initiate)
file.php?param=initiate loads, gets a request token from LinkedIn, but does NOT redirect to the LinkedIn authorization page: it shows the 404 page. (FireBug shows Status 302 Found and the Response Headers show the Location https://linkedin.com/authentication link, but Firefox does not go to the LinkedIn page; it makes another GET request for file.php?param=initiate and loads it from the cache: Status 200 OK (BF Cache), showing the 404 page.)
file.php is NOT in the cache manifest.
Basically it does not go to the Location in the response header from step 3 that should take it to the LinkedIn authorization page, but I can't figure out why not.
Any ideas on how to fix this?
If you want to reproduce this problem, here's a link to a test event. Try to send a LinkedIn connection request and watch Firebug. All the LinkedIn profiles for this event (except mine) are dummy profiles, so don't worry about sending a LinkedIn connection request to a random stranger. You have to register first with your e-mail to get an activation link, but you can use a disposable e-mail address if you want to.
Some things I've tried:
No cache manifest: this fixes it, but I need offline functionality
Sending headers with various permutations of no-store, no-cache, must-revalidate, past Expires date, etc.
Reducing the number of entries in the cache manifest
Various combinations of SETTINGS: prefer-online, NETWORK: *, NETWORK: https://*, etc. (a minimal manifest along these lines is sketched below)
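For illustration, the manifest variants looked roughly like this (the CACHE entries are placeholders, not the app's real file list; file.php is deliberately absent, as noted above):

CACHE MANIFEST
# v1

CACHE:
index.html
app.js

NETWORK:
*
https://*

SETTINGS:
prefer-online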
I solved this problem by re-writing my LinkedIn/OAuth library so that it does everything via ajax instead of sending the Location header via PHP.
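In outline, the ajax version looks something like this (a sketch in modern fetch syntax; the JSON field name is hypothetical, and the PHP side has to return the authorization URL in the response body instead of a Location header):

// Ask the server for the authorization URL instead of letting PHP send a 302
fetch('file.php?param=initiate')
  .then(response => response.json())
  .then(data => {
    // Navigating from JavaScript avoids the cross-domain redirect
    // that the cache manifest refuses to follow
    window.location.href = data.authorizationUrl;
  });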
After much frustration I figured out why this problem was happening, so hopefully this will help others who face a similar predicament.
It turns out the cache manifest does not allow redirects to outside domains (this is probably documented somewhere, but it didn't show up for me when I searched for a solution).
Part of the problem was that I didn't know the redirect was causing the problem. I thought it was just the weirdness of the cache manifest. After all, it worked fine on Chrome and Safari and I didn't get any useful debugging info from Firebug.
Here's a useful article on the gotchas of cache manifest (this issue is listed as Gotcha #8).
Here's a link to the HTML Offline Spec (this issue appears to be listed in section 6.7.6(4), but the document is so opaque I can't even tell whether that's really what it's referring to).

Why does Google Chrome NOT use cached pages when I define the HTTP "Expires" header

I am sending validly formatted HTTP "Expires" response headers (e.g. "Wed, 04 May 2011 09:29:09 GMT") with a page served through https://[host]:{port}/ (with [host] being localhost) from a J2EE application, using response.setDateHeader("Expires", {milliseconds a few seconds in the future}).
On my pages I have a link to the same page. When I click this link from within Firefox (4) or IE (8), the page is reloaded from cache until the Expires time is reached. Once the Expires time has passed, clicking on the same link results in the page being loaded from the server with fresh data. If I hit F5 on either of the mentioned browsers, the page is reloaded with new data from the server (Firebug shows me that Cache-Control: max-age=0 is being sent with the request).
With Google Chrome, both F5 and clicking on the link have the same effect. The page is ALWAYS reloaded from the server with new data.
I was unable to find any well documented explanation of this effect.
Does anyone know why, in my case, Google Chrome is not respecting the "Expires" headers the server is sending with the page responses, and thus ALWAYS requests the data from the server?
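For anyone who wants to reproduce this outside J2EE, here is a minimal Node stand-in for the servlet above (a sketch; the port and page body are arbitrary):

// Serves a page, linking to itself, whose Expires header is 10 seconds in the future
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Expires', new Date(Date.now() + 10000).toUTCString());
  res.setHeader('Content-Type', 'text/html');
  res.end('<a href="/">same page</a>');
}).listen(8080);

Clicking the link, pressing F5, and pressing Enter in the address bar can then be compared across browsers within the 10-second freshness window.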
The way Chrome works in this respect can cause extreme confusion. It seems that pressing F5 or "reload this page" simply prevents Chrome from serving the request from the cache. Compare this with pressing Enter in the URL bar, where it will use the cache, even though in both cases the request headers shown (which are never actually sent anywhere) include Cache-Control: max-age=0.
If you press Ctrl+F5 you get Cache-Control: no-cache. I think the difference between F5 and Ctrl+F5 is that both result in a request being sent to the server, but in the Ctrl+F5 case the server knows not to respond with a 304 Not Modified.