Chrome is not sending If-None-Match

I'm making requests to my REST API. I have no problems with Firefox, but in Chrome I can't get caching to work: I always get 200 OK, because no If-None-Match (or similar) header is sent to the server.
With Firefox I get 304 perfectly.
I think I'm missing something. I tried Cache-Control: max-age=10 as a test, but nothing changed.

One reason Chrome may not send If-None-Match is when the response includes an "HTTP/1.0" instead of an "HTTP/1.1" status line. Some servers, such as Django's development server, send the older status line (probably because they do not support keep-alive), and when they do, ETags don't work in Chrome.
In the "Response Headers" section, click "view source" instead of the parsed version. The first line will probably read something like HTTP/1.1 200 OK; if it says HTTP/1.0 200 OK, Chrome seems to ignore any ETag header and won't use it on the next load of this resource.
There may be other reasons too (e.g. make sure your ETag header value is sent inside quotes), but in my case I eliminated all other variables and this is the one that mattered.
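If you'd rather verify this outside the browser, here is a quick check with Python's http.client (the host and port are hypothetical; adjust them to your dev server) showing which protocol version the server actually answers with:

# Ask the dev server for a page and inspect the protocol version of the reply:
# resp.version is 10 for HTTP/1.0 and 11 for HTTP/1.1.
import http.client

conn = http.client.HTTPConnection("localhost", 8000)
conn.request("GET", "/")
resp = conn.getresponse()
print("HTTP version:", resp.version)    # 10 means Chrome may ignore ETags
print("ETag:", resp.getheader("ETag"))  # should be a quoted value
conn.close()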
UPDATE: looking at your screenshots, it seems this is exactly the case (HTTP/1.0 server from Python) for you too!
Assuming you are using Django, put the following hack in your local settings file; otherwise you'll have to add an actual HTTP/1.1 proxy between you and the ./manage.py runserver daemon. This workaround monkey-patches the key WSGI class used internally by Django to make it send a more useful status line:
# HACK: without HTTP/1.1, Chrome ignores certain cache headers during development!
# see https://stackoverflow.com/a/28033770/179583 for a bit more discussion.
from wsgiref import simple_server
simple_server.ServerHandler.http_version = "1.1"

Also check that caching is not disabled in the browser, as is often done when developing a web site so you always see the latest content.

I had a similar problem in Chrome: I was using http://localhost:9000 for development (and Chrome wasn't sending If-None-Match).
By switching to http://127.0.0.1:9000, Chrome¹ automatically started sending the If-None-Match header in requests again.
Additionally, ensure DevTools > Network > "Disable cache" is unchecked.
¹ I can't find this documented anywhere; I'm assuming Chrome was responsible for this logic.

Chrome is not sending the appropriate headers (If-Modified-Since and If-None-Match) because no cache control is set, so the default behaviour applies (which is what you're experiencing). Read more about the cache options here: https://developer.mozilla.org/en-US/docs/Web/API/Request/cache.
You can get the desired behaviour on the server by setting the Cache-Control: no-cache response header, or on the browser/client through the cache: 'no-cache' option on the Request.
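As a minimal server-side sketch (assuming a Django view; the view name and payload are made up), setting Cache-Control: no-cache together with an ETag makes the browser revalidate with If-None-Match before each reuse:

# Hypothetical Django view: the browser may cache the response, but must
# revalidate it (conditional request) before each reuse.
from django.http import JsonResponse

def api_data(request):
    response = JsonResponse({"status": "ok"})
    response["Cache-Control"] = "no-cache"
    response["ETag"] = '"v1"'  # note: ETag values belong in quotes
    return response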

Chrome was not sending the 'If-None-Match' header for me either. I didn't have any cache-control headers. I closed the browser, opened it again, and it started sending the 'If-None-Match' header as expected. So restarting your browser is one more thing to try if you run into this kind of problem.

Related

How to force re-validation of cached resource?

I have a page /data.txt, which is cached in the client's browser. Based on data which might be known only to the server, I now know that this page is out of date and should be refreshed. However, since it is cached, they will not re-request it for a long time (until the cache expires).
The client is now requesting a different page /foo.html. How can I make the client's browser re-request /data.txt and update its cache?
This should be done using HTTP or HTML (not all clients have JS).
(I specifically want to avoid the "cache-busting" pattern of appending version numbers to the /data.txt URL, like /data.txt?v=2. This fills the cache with useless entries rather than replacing expired ones.)
Edit for clarity: I specifically want to cache /data.txt for a long time, so telling the client not to cache it is unfortunately not what I'm looking for (for this question). I want /data.txt to be cached forever until the server chooses to invalidate it. But since the user never re-requests /data.txt, I need to invalidate it as a side effect of another request (for /foo.html).
To expand my comment:
You can use If-Modified-Since and ETag. To invalidate a resource that has already been downloaded, take a look at the different approaches suggested in "Clear the cache in JavaScript" and "fetch(), how do you make a non-cached request?"; most of the suggestions there involve fetching the resource from JavaScript with a no-cache option, e.g. fetch(url, {cache: "no-store"}).
Or try sending a Clear-Site-Data header, if your clients' browsers support it.
Or maybe give up, this one time, and use the cache-busting solution. If it's possible for you, rename the file to something else rather than adding a query string, as suggested in "Revving Filenames: don’t use querystring".
Update after clarification:
If you are not maintaining a legacy implementation with users that already have /data.txt cached, the use of ETag and If-Modified-Since headers should help.
And for the users with cached versions, you may redirect them from /foo.html to /newFile.txt or /data.txt?v=1. The new requests will carry the newly added headers.
The first step is to fix your cache headers on the data.txt resource so it uses your desired cache policy (perhaps Cache-Control: no-cache in conjunction with an ETag for conditional validation). Otherwise you're just going to have this problem over and over again.
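As a sketch of that first step (a hypothetical Django view for /data.txt; the names and content are illustrative, and real ETag comparison should also handle weak validators and lists), conditional validation looks like this:

# Hypothetical Django view for /data.txt: cacheable, but every reuse must be
# revalidated, so an updated file is picked up on the next conditional request.
import hashlib
from django.http import HttpResponse, HttpResponseNotModified

DATA = b"current contents of data.txt"  # stand-in for the real data source

def data_txt(request):
    etag = '"%s"' % hashlib.md5(DATA).hexdigest()
    if request.headers.get("If-None-Match") == etag:
        return HttpResponseNotModified()  # 304: the client's copy is current
    response = HttpResponse(DATA, content_type="text/plain")
    response["ETag"] = etag
    response["Cache-Control"] = "no-cache"
    return response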
The next step is to get clients who have it in their cache already to re-request it. In general there's no automatic way to achieve this, but if you know they're accessing foo.html then it should be possible. On that page you can make an AJAX request to data.txt with the Cache-Control: no-cache request header. That should force the browser to bypass the cache and get a fresh version, and the cache should then be repopulated with the new version.
(At least, that's how it's supposed to work. I've never tried this, and I've seen reports here that browsers don't handle Cache-Control request headers properly.)

Cache JSON for 1 second with Google CDN to reduce server load

I'm looking for a solution to reduce the load on my API server. I thought it would be a good idea to cache the content for 1 second, so the server gets only 1 request each second instead of many. However, I wasn't able to make it work:
In Use origin settings based on Cache-Control headers mode: the Google CDN ignores the 'Cache-Control', 'public, max_age=1' header. It looks like I can't cache JSON in this mode, probably because of the missing Content-Range or Transfer-Encoding: chunked header?
In Force cache all content mode: it actually works, but the minimum TTL is 1 minute, which is just too much for me. I'd like to update the content every second.
I'm quite surprised no one uses short-lived caches for APIs, since they can theoretically reduce load and maybe save costs with external APIs (e.g. you don't need to pay twice if you request the same thing from an expensive API endpoint). Any ideas?
In order to cache an object, even if you are using the honor-origin cache mode, you need to have a valid Content-Range header. Using the force-cache-all mode will decorate the object with the cache-control directive Cloud CDN needs in order to cache it, even if the origin doesn't send the directive: FORCE_CACHE_ALL basically takes whatever the origin pushes out and overwrites the cache-control directive.
If you are using the UI, you are correct that the console doesn't allow you to set a lower TTL value; however, you can set a lower TTL via gcloud commands:
gcloud compute backend-services update BACKEND_SERVICE_NAME --client-ttl=1 --default-ttl=1 --global
When using the "Use origin settings based on Cache-Control headers" cache mode, you can instruct Google Cloud CDN to cache a response for 1 second by including a Cache-Control: public, max-age=1 response header.
The problem with your earlier attempt appears to be a typo: the correct directive is max-age (with a hyphen), not max_age (with an underscore).
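For example (a hypothetical Python view; any backend works the same way), the corrected header would be written:

# Hyphen, not underscore: Cloud CDN honors max-age but ignores max_age.
from django.http import JsonResponse

def api(request):
    response = JsonResponse({"data": []})
    response["Cache-Control"] = "public, max-age=1"  # cacheable for 1 second
    return response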

CORB OPTIONS Requests Blocked in Chrome 73

It appears that in a recent Chrome release (or at least recently when making calls to my API; I hadn't seen it until today), Chrome is throwing warnings about cross-origin responses being blocked (CORB).
Cross-Origin Read Blocking (CORB) blocked cross-origin response [domain] with MIME type text/plain. See https://www.chromestatus.com/feature/5629709824032768 for more details.
I have determined that the requests to my API are succeeding, and that it's the pre-flight OPTIONS request that is triggering the warning in console.
The application calling the API is not explicitly making the OPTIONS request; rather, I have come to understand that the browser sends it automatically as a preflight when making a cross-origin request.
I can confirm that the OPTIONS response does not have a mime-type defined. However, I am a little confused, as it is my understanding that an OPTIONS response consists only of headers and does not contain a body. I do not understand why such a response would require a mime-type to be defined.
Moreover, the console warning says the request was blocked; yet the various POST and GET requests, are succeeding. So it looks as though the OPTIONS request isn't actually being blocked?
This is a three-part question:
Why does the response to an OPTIONS request require a mime-type to be defined, when it has no body?
What should the mime-type be for an OPTIONS response, if text/plain is not appropriate? Would it be correct to assume application/json?
How do I configure my Apache2 server to include a mime-type on the responses to all pre-flight OPTIONS requests?
I have gotten to the bottom of these CORB warnings.
The issue is related, in part, to my use of the X-Content-Type-Options: nosniff header. I set this header in order to stop the browser from trying to sniff the content-type itself, thereby removing mime-type trickery, notably with user-uploaded files, as an attack vector.
The other part of this is related to the content-type being returned: application/json;charset=utf-8. Per Google's documentation, it notes:
A response served with a "X-Content-Type-Options: nosniff" response header and an incorrect "Content-Type" response header, may be blocked.
Based on this, I set out to double-check IANA's registry of acceptable media types. To my surprise, I discovered that no charset parameter was ever actually defined in any RFC for the application/json type; the registration further notes:
No "charset" parameter is defined for this registration. Adding one really has no effect on compliant recipients.
Based on this, I removed the charset from the content-type: application/json and can confirm the CORB warnings stopped in Chrome.
In conclusion, it would appear that per a recent Chrome release, Google has opted to start treating the mime-type more strictly than it has in the past.
Lastly, as a side note, the reason all of our application requests still succeed is because it appears Cross-Origin Read Blocking isn't actually enforced in Chrome:
In most cases, the blocked response should not affect the web page's behavior and the CORB error message can be safely ignored.
Was having the same issue.
The problem I had was due to the fact that the API was answering the preflight with 200 OK, but with an empty response body and without the Content-Length header set.
So, either changing the preflight response status to 204 No Content or simply setting the Content-Length: 0 header solved the issue.
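As a sketch of that fix (a hypothetical Django view that answers its own preflight; the origin and header values are illustrative), the OPTIONS branch returns 204 with an explicit Content-Length:

from django.http import HttpResponse, JsonResponse

def api_endpoint(request):
    if request.method == "OPTIONS":
        # Preflight has no body; 204 plus Content-Length: 0 makes that
        # explicit, so Chrome has nothing to "read-block".
        response = HttpResponse(status=204)
        response["Content-Length"] = "0"
        response["Access-Control-Allow-Origin"] = "https://app.example.com"
        response["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
        response["Access-Control-Allow-Headers"] = "Content-Type"
        return response
    # JsonResponse sends content-type: application/json with no charset
    # parameter, matching the fix described above.
    return JsonResponse({"ok": True})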

Why does Chrome ignore Set-Cookie header?

Chrome has a long history of ignoring Set-Cookie header. Some of these reasons have been termed bugs and fixed, others are persistent. None of them are easy to find in documentation.
Set-Cookie not allowed in 302 redirects
Set-Cookie not allowed if host is localhost
Set-Cookie not allowed if Expires is out of acceptable range
I am currently struggling to get Chrome to accept a simple session cookie. Firefox and Safari seem to accept almost any RFC-compliant string for Set-Cookie. Chrome stubbornly refuses to acknowledge that a Set-Cookie directive was even sent on the response (it does not show up in Developer Tools (Network)). curl looks fine.
So does anyone have either 1) modern best practices for cross-browser Set-Cookie formatting or 2) more information regarding what can cause Chrome to bork here?
Thanks.
One thing that has bitten me and is not on your list: if you are trying to set a secure cookie through HTTP on localhost, Chrome will reject it because you are not using HTTPS.
This kind of makes sense, but is annoying for local development. (Firefox apparently makes an exception for this case and allows setting secure cookies over HTTP on localhost.)
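To illustrate (a hypothetical Django login view; the cookie name and value are made up), a cookie set like this will be silently dropped by Chrome when served over plain http://localhost:

from django.http import HttpResponse

def login(request):
    response = HttpResponse("logged in")
    # secure=True means HTTPS-only: Chrome drops it over plain HTTP, even on
    # localhost, while Firefox makes a localhost exception. For local
    # development, either drop secure=True or serve over HTTPS.
    response.set_cookie("sessionid", "abc123", secure=True, httponly=True)
    return response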

CSRF using CORS

I'm studying HTML5's security problems. I saw all the presentations made by Shreeraj Shah. I tried to simulate a basic CSRF attack with my own servers, using withCredentials set to true (so the victim's cookies should be replayed with the request) and setting Content-Type to text/plain in the request (to bypass the preflight call).
When I tried to launch the attack, the browser told me that the XMLHttpRequest could not be completed because of the Access-Control-Allow-Origin header. So I put a * in that header on the victim's web page, and the browser told me that I can't use the * character when I send a request with withCredentials set to true.
I tried the same thing with both web apps on the same domain, and all was fine (I suppose because the browser doesn't perform these checks when the request comes from the same domain).
So I'm asking: is this a new feature that modern browsers introduced recently to avoid this kind of problem?
Because in Shreeraj's videos, the request went across different domains and it worked...
Thank you all, and sorry for my English :-)
EDIT:
I think I found the reason why the CSRF attack doesn't work as it does in Shreeraj's presentations.
I read the earlier CORS document, published in 2010, and found that it made no recommendation about the credentials flag being set to true while Access-Control-Allow-Origin is set to *. But if we look at the last two publications about CORS (2012 and 2013), in section 6.1, one of the notes is that a request with the credentials flag set to true cannot be used when Access-Control-Allow-Origin is set to *.
Here are the links:
The previous one (2010): http://www.w3.org/TR/2010/WD-cors-20100727/
The last two (2012, 2013): http://www.w3.org/TR/2012/WD-cors-20120403/ --- http://www.w3.org/TR/cors/
Here is the section I'm talking about: http://www.w3.org/TR/cors/#supports-credentials
If we look at the previous document, we cannot find this note, because it isn't there.
I think this is the reason why the simple CSRF attack Shreeraj Shah demonstrated in 2012 doesn't work today (at least in modern browsers that follow the W3C's recommendations). Could that be it?
The request will still be made despite the browser error (if there's no pre-flight).
The Access-Control-Allow-Origin header simply allows access to the response from a different domain; it does not affect the actual HTTP request.
e.g. it would still be possible for evil.com to make a POST request to example.com/transferMoney using AJAX, even though example.com sets no CORS headers.
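Tying this back to the section 6.1 rule found above: a server that wants to allow credentialed cross-origin reads must name a specific, vetted origin rather than use *. A sketch in Django style (the allow-list and endpoint are made up for illustration):

from django.http import JsonResponse

ALLOWED_ORIGINS = {"https://app.example.com"}  # illustrative allow-list

def transfer_money(request):
    response = JsonResponse({"result": "ok"})
    origin = request.headers.get("Origin", "")
    if origin in ALLOWED_ORIGINS:
        # With credentials, the wildcard "*" is forbidden; the server must
        # echo back the specific origin it trusts.
        response["Access-Control-Allow-Origin"] = origin
        response["Access-Control-Allow-Credentials"] = "true"
    return response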