Chrome is caching even with HTTP no-cache headers

I am trying to serve the output of a PHP file with HTTP headers configured so that the content will NOT be served from cache in Chrome.
If I go to DevTools in Chrome and check the "Disable cache" option, it works.
But I don't want to depend on that; I hope I can set up HTTP headers in a way that forces Chrome to reload the page every time.
Here's a screenshot of my current attempt; please note the red marks.
Could you please point me to good documentation, or tell me which headers I must declare for this?
Thanks in advance.
Edit
So I found this other reply too: Chrome caching like a mad browser, but since I recall being told that Chrome needs special headers for Cache-Control, I will keep this question.

Web browsers can cache AJAX requests that use the same request parameters. To work around this problem, use a query string that changes on every request; one example is to use the current date in seconds as a query parameter.
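For the header-based approach the question asks about, here is a minimal PHP sketch of headers that generally stop Chrome from reusing a cached copy (treat it as a starting point, not a guaranteed fix; the /api/data.php endpoint in the last line is a hypothetical example of the query-string workaround):

<?php
// Send these before any output. Together they cover HTTP/1.1
// (Cache-Control), HTTP/1.0 (Pragma), and legacy proxies (Expires).
header('Cache-Control: no-store, no-cache, must-revalidate, max-age=0');
header('Pragma: no-cache');
header('Expires: Thu, 01 Jan 1970 00:00:00 GMT'); // a date in the past

// Query-string variant from the quoted answer: append a value that
// changes on every request when building the URL (hypothetical endpoint).
$url = '/api/data.php?ts=' . time();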

Related

What is causing Chrome to show error "The request's credentials mode prohibits modifying cookies and other local data."?

We have a React front-end that communicates with an ASP.NET Core API.
Sometimes we detect that something is wrong with the front-end (service workers, local cache, and that kind of thing), so we want to tell the client to clean it up.
I've implemented the Clear-Site-Data response header (documented on MDN and by the W3C) as: Clear-Site-Data: "cache", "cookies", "storage", "executionContexts"
When testing this out in Firefox, it works, and in the console I'm seeing:
Clear-Site-Data header found. Unknown value “"executionContexts"”. SignIn
Clear-Site-Data header forced the clean up of “cache” data. SignIn
Clear-Site-Data header forced the clean up of “cookies” data. SignIn
Clear-Site-Data header forced the clean up of “storage” data.
When doing the same in Chrome, it's not working, and I'm seeing the message
The request's credentials mode prohibits modifying cookies and other local data.
I'm trying to figure out what to do to fix it, but there are barely any references: just 7 search results, mostly from browser integration-test logs.
All the documentation says this should be implemented and working in Chrome... Any idea what the problem is?
Try manually reloading the page after the Clear-Site-Data header has been received (so that the local data / cache is cleared and the response no longer contains Clear-Site-Data).
Neither Firefox nor Chrome appears to support executionContexts, which tells the browser to reload the original response.
If the header contains executionContexts, the browser should ignore that value (as you see in the Firefox console). However, you can try the wildcard mapping (*), which will also cover properties added in the future:
Response.AppendHeader("Clear-Site-Data", "\"*\"");
Also, Google reuses parts of the Chrome source code in its open-source project Chromium. If you look at the Chromium source code (https://doss-gitlab.eidos.ic.i.u-tokyo.ac.jp/sneeze/chromium/blob/9b22da4739ec7bf54fb8e730662e2ab7996532e0/content/browser/browsing_data/clear_site_data_handler.cc, line 308), you can see that it raises the same exception, "The request's credentials mode prohibits modifying cookies," when a LOAD_DO_NOT_SAVE_COOKIES flag is somehow being set. The console error may be caused by an additional response header, or there is a small chance of a bug in Chrome.
My advice right now would be: do not implement the Clear-Site-Data header at this time.
Despite the spec having been widely available for some years, vendor support is still hit-and-miss and is now actually going into reverse.
As per the W3C GitHub repository for the spec, there are a number of open issues regarding executionContexts. The wildcard ('*') mentioned by Greg in their answer is not supported by Chrome, and Mozilla is about to remove the cache value as well.
All this points to a flawed standard which is likely to be removed at some point in the future.

Ajax requests and offline cache

I am developing an HTML5 application. I cache all the files using the .manifest (application cache) solution, so the app can be used when you don't have access to the Internet.
I want to synchronise some data to the server whenever possible. Since the navigator.onLine property does not seem reliable enough, I make an AJAX request (using jQuery.get()) and check whether I get anything in response, meaning the user is online.
The problem is that, as soon as the whole application is cached, every AJAX request to a file on the Internet fails for no apparent reason. I tried Chrome, Firefox, and Safari (on an iPad), with the same result.
When I use jQuery.get() to fetch the content of files that are part of the application (and therefore cached as well), those requests work flawlessly.
I first thought of a same-origin problem, so I tried making a request to https://graph.facebook.com, just to see if I would get anything in return. In the Chrome console, the request status is "Pending", with 0 bytes received.
When I deactivate the manifest and empty the cache, there is no problem at all.
Do you have any idea or hint on why this is happening?
Thank you :)
PS : English isn't my mother tongue, so please excuse (and feel free to correct) any mistakes I might have made. :)
Check that you have a NETWORK section. The NETWORK section of a cache manifest file specifies resources for which the web application requires online access; * is a wildcard that matches everything.
CACHE MANIFEST
# 2013-11-21:v1

CACHE:
# Explicitly cached for offline use
/Content/Site.css
/Scripts/jquery-1.8.2.js

NETWORK:
# Online whitelist: anything not cached must be fetched over the network
*

FALLBACK:
# Serve /Home/Offline when a page cannot be reached
/ /Home/Offline

Open application/hal+json response inline in browser instead of downloading it

I would like to open an "application/hal+json" response inline in my Chrome browser. The problem is that Chrome doesn't recognize the HAL response and downloads it instead. I used to rely on the JSONView extension for Chrome to check my JSON responses, but since switching to HAL, Chrome immediately downloads the response, so I can no longer review it.
For Chrome:
I just ran into a nice solution myself. I hope answering here will help some other people who run into the same problem...
Installing the 'application/...+json|+xml as inline' Chrome extension solved it nicely. I am now able to review my server responses again as normal.
For FireFox:
Install the extension called JSONView. After installing, go to the extensions page (default: Ctrl+Shift+A), find the JSONView extension, and open its options. There you can add "Alternate JSON content types" that should be opened by the extension. Simply add application/hal+json to the input field and it will work:
REST developer add-ons: Another solution can be to install a REST developer add-on. The advantage is that you can also change the HTTP request verb (POST, GET, PATCH, PUT, DELETE) and set custom request headers. A great REST plugin for Chrome is Postman, and a nice one for Firefox is RESTClient. But there are several others available.
This doesn't exactly answer your question, but installing the HAL Browser on your web server will render HAL as HTML, letting you view your application/hal+json responses inline. It's particularly nice because it makes the links navigable and also links to the relation documentation (if the relation name is a URI).
Oddly enough, my Chrome sends an Accept header of */*;q=0.8, so it falls back to anything the server sends it. I wonder if you're running a different version of Chrome than I am. I'm on v30.0.1599.101.
Came here looking for a solution for Firefox and application/hal+json content type.
I ended up installing JSONView, which has an option in its preferences menu to add extra content types.

What to do with chrome sending extra requests?

Google Chrome sends multiple requests to fetch a page, and that is, apparently, not a bug but a feature. And we as developers just have to deal with it.
As far as I could dig out in five minutes, Chrome does this just to make surfing faster, so that if one connection is lost, the second can take over.
I guess that if a website is well developed, its functionality won't be broken by this, because handling multiple requests is nothing new.
But I'm just not sure if I have accounted for all the situations this feature can produce.
Would there be any special situations? Any best practices to deal with them?
Update 1: Now I see why my bank's page throws an error when I open it with Chrome! It says: "Only one window of the browser should be open." Is that their solution to security threats?!
Your best bet is to follow standard web development best practices: don't change application state as a result of a GET call.
If you're worried, I recommend updating your data-layer unit tests so that GET calls are duplicated, and asserting that they return the same data each time.
(I'm not seeing this behaviour with Chrome 8.0.552.224, by the way; is yours a very new version?)
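To illustrate the idempotent-GET advice, a minimal PHP sketch (render_page() and process_payment() are hypothetical placeholders, not functions from the question):

<?php
// State changes happen only on an explicit POST; a duplicated GET
// (whatever the browser's reason for sending it) can then do no harm.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    process_payment();   // hypothetical state-changing action
} else {
    render_page();       // hypothetical read-only render, safe to repeat
}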
I saw the behavior in question while writing a server application, and found that the earlier answers are probably not accurate.
Chrome splits a page load into multiple HTTP requests in order to fetch resources in parallel. In this case, the extra request was for an image, which it fetches as a separate HTTP GET.
I have attached a screenshot of a packet capture from Wireshark.
It is for a simple GET request to port 8080, for which the server returns a hello message.
Chrome sends the second GET request to obtain the favicon that you see at the top of every open tab. It is NOT a second GET to cover a timeout or anything of the sort.
This should be considered another element that differs across browsers. However, issuing multiple HTTP requests in parallel is standard behavior for browsers as of 2018.
Here are references that I found later:
Chrome sends two requests SO
Chrome issue on google code
It can also be caused by link tags with empty href attributes, at least in Chromium (v41). For example, each of the following lines will generate an additional request for the page:
<link rel="shortcut icon" href="" />
<link rel="icon" type="image/x-icon" href="" />
<link rel="icon" type="image/png" href="" />
It seems that looking for empty attributes in the page, either href or src, is a good starting point.
This behavior can also be caused by SRC='' or SRC='#' in an IMG or (as in my case) IFRAME tag. Replacing '#' with 'about:blank' fixed the problem.
According to http://forums.mozillazine.org/viewtopic.php?f=7&t=1816755, SCRIPT tags can be the issue as well.
I observe this behavior (bug/feature/whatever) when I am typing a URL and autocomplete lands on a match while I am still typing.
Chrome takes that match and fetches the page, I assume for the caching benefit you would get when you then load the page yourself...
I have just implemented a single-use GUID token (ASP.NET / T-SQL) which is generated when the first form in a series of two (plus a confirmation page) is rendered. The token is recorded as "pending" in the DB when it is generated, accompanies the posts as a hidden field, and is finally marked as closed when the user operation is completed (payment). This mechanism does work, and prevents any of the forms from being resubmitted after the payment is made.

However, I see 2 or 3 (!?) additional tokens generated by additional requests arriving quickly one after the other. The first request is what ends up in front of the user (localhost, so i.e. me); where the generated content for the other two requests ends up, I have no idea. I initially wondered why Page_Load handlers were firing multiple times for one page impression, so I tried a flag in HttpContext.Current, but found to my dismay that the subsequent requests come in on the same URL but with no post data and empty HttpContext.Current collections, i.e. completely separate (for practical purposes) HTTP requests.

How to handle this? Some sort of token and logic to refuse subsequent page-body requests while the first is still processing? I guess this could live in a global context?
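A hedged sketch of that single-use-token idea in PHP/SQL terms (the tokens table, the $pdo connection, and the column names are assumptions, not the poster's actual schema); the point is that the status update is atomic, so a duplicate request cannot pass the check twice:

<?php
// Assumed: $pdo is an open PDO connection, and a `tokens` table exists
// with columns id (the GUID) and status ('pending' or 'closed').
$token = $_POST['token'] ?? '';

// Atomically close the token; only one request can win this UPDATE.
$stmt = $pdo->prepare(
    "UPDATE tokens SET status = 'closed'
     WHERE id = ? AND status = 'pending'"
);
$stmt->execute([$token]);

if ($stmt->rowCount() === 0) {
    http_response_code(409); // token missing, unknown, or already used
    exit;
}
// ...safe to process the payment exactly once here...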
This only happens for me when I enable the "webug" extension (a FirePHP replacement for Chrome). If I disable the extension, the server gets only one request.
I just want to add an update on this one. I encountered the same problem, but with a CSS style.
I looked at all my src, href, and script tags, and none of them had an empty string. The offending entry was this:
<div class="Picture" style="background-image: url('');"> </div>
Make sure you also check your styles for empty url() strings.
I was having this problem, but none of the solutions here were the issue. For me, it was caused by the APNG extension in Chrome (support for animated PNGs). Once I disabled that extension, I no longer saw double requests for images in the browser. I should note that regardless of whether the page was outputting a PNG image, disabling this extension fixed the issue (i.e., APNG seems to cause the issue for images regardless of image type, they don't have to be PNG).
I had many other extensions as well (such as "Web Developer" which many have suggested is the issue), and those were not the problem. Disabling them did not fix the issue. I'm also running in Developer Mode and that didn't make a difference for me at all.
In my case, it was Chrome (v65) making a second GET /favicon.ico, even though the response was text/plain and thus clearly contained no <link> referring to an icon. It stopped doing that after I replied with a 404.
Firefox (v59) was sending two requests for the favicon; again, it stopped doing this after the 404.
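A minimal sketch of that 404 approach, assuming a PHP front controller that sees every request (the route check is illustrative, not from the answer above):

<?php
// Refuse favicon probes outright so the browser stops re-requesting them.
if ($_SERVER['REQUEST_URI'] === '/favicon.ico') {
    http_response_code(404);
    exit;
}
// ...normal request handling continues below...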
I'm having the same bug, and as in a previous answer, the issue was that I had installed the Validator Chrome extension.
Once I disabled the extension, everything worked normally.
In my case, I was requesting endpoint (JSON) data from a different server, and the browser first makes an empty request (Request Method: OPTIONS) to check whether the endpoint accepts requests from my origin (the same-origin policy). Also good to know: it is an Angular 1 app.
In short, I was making requests from localhost to an online fake JSON data source.
I had an empty TCP packet sent by Chrome to my simple server before the normal HTML GET request, and a /favicon request after it. The favicon wasn't a problem, but the empty TCP packet was, since my server waited either for data or for the connection to be closed. It got no data, and the connection wasn't released for 2 minutes, so a thread hung for 2 minutes.
Jrummell's link in a comment on the original post helped me. It says empty TCP packets can be caused by the "Predict network actions to improve page load performance" setting. I tried turning off the prediction settings one by one, and it worked. In Chrome version 73.0.3683.86 (Official Build) (64-bit), this behavior was caused by the setting "Use a prediction service to load pages more quickly" being turned on.
So in Chrome ~73 you can try going to Settings -> Advanced -> Privacy and security -> "Use a prediction service to load pages more quickly" and turning it OFF.
It could also be the situation where Chrome first sends a request with the OPTIONS method, and only the second request is the real one with the GET method. Usually our code deals only with GET (or POST/PUT/DELETE...), not with OPTIONS. Check whether the first request uses the OPTIONS method, as sketched below.
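A hedged sketch of handling that preflight in PHP (the allowed origin and the header list are made-up examples, not values from the question):

<?php
// Answer the CORS preflight (OPTIONS) before the real request handling.
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    header('Access-Control-Allow-Origin: https://app.example.com'); // assumed origin
    header('Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS');
    header('Access-Control-Allow-Headers: Content-Type');
    exit; // the preflight needs headers only, no body
}
// ...handle GET/POST/etc. below...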

Identify Webserver & Script of a website

I have got two simple questions
How can I tell what server a website is running on? I remember I used to read an HTTP header to identify the type of server. Is there a tool to do it?
2a. A lot of websites have pages with the .html extension, and yet you just know they are not plain HTML. How can I tell what programming language is behind them?
2b. For ASPX, I think IIS can map the extension, so it will show .html instead of .aspx, right?
Cheers
1.
Yes, you can check the HTTP response header "Server". Example responses:
-Microsoft-IIS/6.0
-GFE/1.3
-Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.2 with Suhosin-Patch
You can also check "X-Powered-By" on some servers, example:
-PHP/5.2.6-3ubuntu4.2
-ASP.NET
You can do this in Firefox/Firebug, for example: go to the Net tab, pick a request, select Headers, and look under Response Headers. You can also do this in Fiddler or any other HTTP sniffer.
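You can also check programmatically; a small PHP sketch using the built-in get_headers() function (https://example.com is a placeholder URL):

<?php
// Fetch only the response headers and print the identifying fields.
$headers = get_headers('https://example.com', 1); // 1 = associative array
echo ($headers['Server'] ?? '(Server header hidden)'), PHP_EOL;
echo ($headers['X-Powered-By'] ?? '(X-Powered-By not sent)'), PHP_EOL;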
2a)
See my first answer
2b)
Yes, you can map .html (or anything else) as an ASP.NET extension, meaning that the extension will be handled by the web application. A common approach is an HTTP handler that catches that extension, registered in web.config, as sketched below.
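A hedged sketch of such a registration (classic ASP.NET syntax; the exact section differs by IIS version, and IIS itself must also route the .html extension to ASP.NET):

<!-- web.config fragment: let the ASP.NET page handler serve *.html -->
<system.web>
  <httpHandlers>
    <add verb="*" path="*.html" type="System.Web.UI.PageHandlerFactory" />
  </httpHandlers>
</system.web>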
I'm not sure what the end goal of these questions is, or rather what the purpose is; maybe we could answer better if we knew.
Look at the HTTP headers. This works as long as the server admin hasn't disabled them (which they usually haven't).
Try http://kalender-365.de/ip/get-http-header.php
2a. This actually works with all servers and all extensions. Some interpreters, such as PHP, send a special identifying HTTP header (which can, however, be disabled).
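For instance, in PHP that header is X-Powered-By, and a script can drop it (a sketch; setting expose_php = Off in php.ini disables it globally):

<?php
// Suppress PHP's identifying header for this response.
header_remove('X-Powered-By');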