Why do Firefox and Chrome react differently to these HTTP headers?

I'm not sure whether my question stems from a lack of understanding, or from Google Chrome behaving incorrectly.
My server sends the following HTTP headers:
ETag: "1031384541"
Expires: Mon, 03 Nov 2014 00:01:46 GMT
On a reload, Firefox will NOT ask the server but serve the file from its cache with a "200 OK" status code (that's how it should be).
But Google Chrome insists on asking the server and then receives a "304".
Is there anything I did wrong? What should I change?
Btw.: interestingly enough, these are the default headers sent by the GoGrid CDN, which I assume should be correct. But I'm also using the same approach on my own machine.
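For reference, Chrome's behavior on reload amounts to a conditional GET. A minimal sketch with python-requests (the URL is a placeholder) showing the revalidation round trip that produces the 304:

import requests

url = "https://example.com/static/style.css"  # placeholder for the cached resource

# First request: the server answers 200 and hands out ETag and Expires.
first = requests.get(url)
etag = first.headers.get("ETag")

# What Chrome does on reload: revalidate with a conditional request.
# An unexpired cache entry would let the browser skip this round trip entirely.
second = requests.get(url, headers={"If-None-Match": etag})
print(second.status_code)  # 304 if the resource is unchanged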

Related

Why is my request for an ASP resource not routed as expected by Azure API Management?

For my use case, I would like to use Azure APIM as a proxy.
(Edit: I'm using the "Consumption" tier, and the answer given here works with the standard tiers. I will update this if I find a solution with MS support for the Consumption tier.)
So that a
GET https://my-awesome-api.azure-api.net/default.css
fetches and returns whatever sits at:
GET https://my-backend.my-domain.com/default.css
It works fine, except for ASP files. If my resource is /default.asp, I get a 404 generated directly by the APIM (not my backend, which is not called at all). The problem is reproduced at every level (I can get /foo/default.css, but 404 on /foo/default.asp).
I've not been able to find in the documentation anything related to special handling of ASP files by default (or any other for that matter). The fact that other types of resources work fine is even more puzzling.
GET /default.css -> works
GET /default.asp -> gets the Azure 404
GET /i-dont-exist.css -> gets the backend 404
GET /i-dont-exist.asp -> gets Azure 404
Azure's 404:
HTTP/1.1 404 Not Found
content-length: 103
content-type: text/html
date: Fri, 05 Apr 2019 15:35:34 GMT
vary: Origin
x-powered-by: ASP.NET
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
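For anyone reproducing this, here is a quick probe (hypothetical host name, using python-requests) that exercises the same four paths and guesses which side produced each 404 from the tell-tale APIM body text:

import requests

BASE = "https://my-awesome-api.azure-api.net"  # hypothetical APIM front end

for path in ("/default.css", "/default.asp",
             "/i-dont-exist.css", "/i-dont-exist.asp"):
    r = requests.get(BASE + path)
    # The APIM-generated 404 carries the "has been removed" boilerplate;
    # anything else came back from the backend.
    origin = "APIM" if "has been removed" in r.text else "backend"
    print(f"{path}: {r.status_code} ({origin})")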
Most likely your API is misconfigured. It seems you want to pass all traffic through, so you need to create an API with Web service URL set to "https://my-backend.my-domain.com" and Path suffix set to "/".
Underneath it, create an operation for each HTTP method you want to proxy, with URL template set to /*.

Why does Chrome ignore Set-Cookie header?

Chrome has a long history of ignoring the Set-Cookie header. Some of these reasons have been classified as bugs and fixed; others are persistent. None of them are easy to find in documentation.
Set-Cookie not allowed in 302 redirects
Set-Cookie not allowed if host is localhost
Set-Cookie not allowed if Expires is out of acceptable range
I am currently struggling to get Chrome to accept a simple session cookie. Firefox and Safari seem to accept almost any RFC-compliant string for Set-Cookie. Chrome stubbornly refuses to acknowledge that a Set-Cookie directive was even sent on the response (it does not show up in Developer Tools (Network)). curl looks fine.
So does anyone have either 1) modern best practices for cross-browser Set-Cookie formatting or 2) more information regarding what can cause Chrome to bork here?
Thanks.
One thing that has bitten me and is not on your list: if you are trying to set a Secure cookie over HTTP on localhost, Chrome will reject it because you are not using HTTPS.
This kind of makes sense, but is annoying for local development. (Firefox apparently makes an exception for this case and allows setting Secure cookies over HTTP on localhost.)
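As for a conservative cross-browser format, here is a minimal sketch using Python's built-in http.server (cookie name and value are placeholders). It sets a plain session cookie with none of the pitfalls above: no Secure flag over plain HTTP, no Expires, and it is served from 127.0.0.1 rather than localhost.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Session cookie: no Expires/Max-Age, no Secure flag (plain HTTP),
        # explicit Path, SameSite=Lax to match Chrome's current defaults.
        self.send_header("Set-Cookie",
                         "sessionid=abc123; Path=/; HttpOnly; SameSite=Lax")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"cookie set\n")

# Bind to 127.0.0.1 to sidestep Chrome's localhost quirks.
HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()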

Chrome is not sending if-none-match

I'm trying to make requests to my REST API. I have no problems with Firefox, but in Chrome I can't get caching to work: it always returns 200 OK, because no If-None-Match (or similar) header is sent to the server.
With Firefox I get 304 perfectly.
I think I'm missing something. I tried Cache-Control: max-age=10 to test, but nothing changed.
One reason Chrome may not send If-None-Match is when the response includes an "HTTP/1.0" instead of an "HTTP/1.1" status line. Some servers, such as Django's development server, send the older status line (probably because they do not support keep-alive), and when they do, ETags don't work in Chrome.
In the "Response Headers" section, click "view source" instead of the parsed version. The first line will probably read something like HTTP/1.1 200 OK; if it says HTTP/1.0 200 OK, Chrome seems to ignore any ETag header and won't use it on the next load of this resource.
There may be other reasons too (e.g. make sure your ETag header value is sent inside quotes), but in my case I eliminated all other variables and this is the one that mattered.
UPDATE: looking at your screenshots, it seems this is exactly the case (HTTP/1.0 server from Python) for you too!
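If you'd rather check from a script than from DevTools, python-requests should expose the protocol version of the underlying response object (10 for HTTP/1.0, 11 for HTTP/1.1):

import requests

r = requests.get("http://localhost:8000/")  # your development server
# urllib3 reports the status line's protocol version as an integer.
print(r.raw.version)  # 10 means HTTP/1.0 (ETag ignored by Chrome), 11 means HTTP/1.1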
Assuming you are using Django, put the following hack in your local settings file, otherwise you'll have to add an actual HTTP/1.1 proxy in between you and the ./manage.py runserver daemon. This workaround monkey patches the key WSGI class used internally by Django to make it send a more useful status line:
# HACK: without HTTP/1.1, Chrome ignores certain cache headers during development!
# see https://stackoverflow.com/a/28033770/179583 for a bit more discussion.
from wsgiref import simple_server
simple_server.ServerHandler.http_version = "1.1"
Also check that caching is not disabled in the browser, as is often done when developing a web site so you always see the latest content.
I had a similar problem in Chrome while using http://localhost:9000 for development (requests didn't include If-None-Match).
By switching to http://127.0.0.1:9000, Chrome¹ automatically started sending the If-None-Match header in requests again.
Additionally, ensure DevTools > Network > Disable cache is unchecked.
¹ I can't find this documented anywhere; I'm assuming Chrome was responsible for this logic.
Chrome is not sending the appropriate headers (If-Modified-Since and If-None-Match) because no cache control is set, so the default applies (which is what you're experiencing). Read more about the cache options here: https://developer.mozilla.org/en-US/docs/Web/API/Request/cache.
You can get the desired behaviour on the server by setting the Cache-Control: no-cache header, or on the browser/client through the fetch API's Request.cache = 'no-cache' option.
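To illustrate the server side, a minimal sketch (Python standard library, placeholder content) that sends Cache-Control: no-cache together with an ETag and answers conditional requests with 304. no-cache allows caching but forces revalidation, which is exactly what makes the browser send If-None-Match:

from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"hello, cached world\n"
ETAG = '"v1"'  # quoted, as the spec requires

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # see the HTTP/1.0 caveat in the answer above

    def do_GET(self):
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)  # client's cached copy is still valid
            self.send_header("ETag", ETAG)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("ETag", ETAG)
        # no-cache means "store it, but revalidate before each reuse".
        self.send_header("Cache-Control", "no-cache")
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()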
Chrome was not sending the 'If-None-Match' header for me either, and I didn't have any cache-control headers. I closed the browser, opened it again, and it started sending the 'If-None-Match' header as expected. So restarting your browser is one more thing to check if you have this kind of problem.

Issue with downloading PDF from S3 on Chrome

I'm facing an issue on downloading PDF files from Amazon S3 using Chrome.
When I click a link, my controller redirects the request to the file's URL on S3.
It works perfectly with Firefox, but nothing happens with Chrome.
Yet, a right click -> Save link as... will download the file ...
And even copy-pasting the S3 URL into Chrome leads to a blank screen ...
Here is some information returned by curl:
Date: Wed, 01 Feb 2012 15:34:09 GMT
Last-Modified: Wed, 01 Feb 2012 04:45:24 GMT
Accept-Ranges: bytes
Content-Type: application/x-pdf
Content-Length: 50024
Server: AmazonS3
My guess is that it's an issue with the content type, but nothing I tried has worked.
The canonical Internet media type for a PDF document is actually application/pdf, as defined in RFC 3778 (The application/pdf Media Type). Please note that application/x-pdf, while commonly encountered and listed as a media type for the Portable Document Format as well, is notably absent from the official Application Media Types listed by the Internet Assigned Numbers Authority (IANA).
I'm not aware of why and when application/x-pdf came to life, but apparently the Chrome PDF Plugin does not open application/x-pdf documents as of today.
Consequently you should be able to trigger a different behavior in Chrome by changing the media type of the stored objects accordingly.
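Since S3 object metadata can't be edited in place, fixing the stored Content-Type means copying the object over itself. A hedged sketch with boto3 (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# Copy the object onto itself with REPLACE semantics so the new
# Content-Type is written into the stored metadata.
s3.copy_object(
    Bucket="my-bucket",
    Key="docs/report.pdf",
    CopySource={"Bucket": "my-bucket", "Key": "docs/report.pdf"},
    ContentType="application/pdf",
    MetadataDirective="REPLACE",
)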
Alternative (for authenticated requests)
Another approach would be to force the PDF to download instead of letting Chrome attempt to open it, which can be done by sending a Content-Disposition: attachment header with your GET request. Please see the S3 documentation for GET Object on how to achieve this via the response-content-disposition request parameter, specifically response-content-disposition=attachment as demonstrated there in the section Sample Request with Parameters Altering Response Header Values.
This is only available for authenticated requests though, see section Request Parameters:
Note
You must sign the request, either using an Authorization header
or a Pre-signed URL, when using these parameters. They can not be used
with an unsigned (anonymous) request.
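With boto3, that signed request looks roughly like this (bucket and key are placeholders); the pre-signed URL carries the response-content-disposition override, which satisfies the authentication requirement quoted above:

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "my-bucket",
        "Key": "docs/report.pdf",
        # Overrides the stored response headers for this one download.
        "ResponseContentDisposition": 'attachment; filename="report.pdf"',
        "ResponseContentType": "application/pdf",
    },
    ExpiresIn=3600,  # URL validity in seconds
)
print(url)  # Chrome will download instead of trying to render inline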
There is an HTML-based solution to this. Since Chrome is up to date with HTML5, we can use the shiny new download attribute: a plain <a href="file.pdf">Broken</a> link shows the blank screen, while <a href="file.pdf" download>Works</a> downloads the file.

How to get around the "Content-encoding gzip deflate" header sent by Chrome?

We have a simple HTML login form on our embedded device's web server. The web server is custom coded because of severe memory limitations. Regardless of these limitations, we like Chrome and would like to support it.
All browsers post an HTTP Request to our login form containing the expected "username=myname&password=mypass" string, but not Chrome. Instead we receive from Chrome a "Content-encoding gzip deflate" request. BTW, by "all browsers", I mean this was tested to work fine on Internet Explorer versions 9 beta, 8, 7, 6 ; Firefox versions 4 beta, 3, 2 ; Opera 10, 9 ; Safari 5, 4, 3 ; and SeaMonkey 2.
Referring to section "14.2 Accept Charset" of the w3.org's http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html, we tried sending back an HTTP 406 code to indicate that this server does not support that encoding, in the hope that Chrome would try again and post the expected strings the standard way. The 406 code returned by the web server is clearly displayed in Chrome's "Inspect Element" window, but it seems to be treated by Chrome as an error code, and no further requests are sent to the web server. "Login failed." We also tried HTTP return codes 405 and 200, with the same result.
Is there a way to get around this behavior either with client-side JavaScript that will prevent Chrome from sending the "Content-encoding gzip deflate" request, or with a server-side response that will explain nicely to Chrome we don't do gzip, just send it to us the regular way?
We tried posting to the Google Chrome Troubleshooting forum with no response.
Any help would be greatly appreciated!
Best regards,
Bert
You're looking in the wrong section for the error code: Section 14.11 of RFC 2616 specifies that you send a 415 (Unsupported Media Type) if you can't deal with the Content-Encoding.
It sounds like when Chrome does a POST to a server for the first time, it defaults to using gzip encoding. Pretty strange.
The easy way out is to just pass your username/password as GET parameters; as long as you don't send gzip content encoding in the response, Chrome should start using non-gzipped POSTs from that point on. Hope that works?
I tested this out a bit with a simple Python script that printed to stdout. I thought I was getting the same problem, but then I realized that I was just forgetting to flush stdout. It seems that Chrome always sends the request up to the end of the headers before sending the request content, and you have to use a second recv call to get the POST data. In contrast, the entire Firefox request is returned in a single recv call.
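To make that concrete, a minimal sketch of a socket-level request reader that copes with both behaviors: read until the blank line ending the headers, then keep reading until Content-Length bytes of body have arrived, instead of assuming a single recv returns the whole request:

import socket

def read_request(conn: socket.socket) -> tuple[bytes, bytes]:
    """Read one HTTP request from a connected socket; return (headers, body)."""
    data = b""
    # Chrome may deliver the headers and the POST body in separate packets,
    # so keep reading until the header terminator appears.
    while b"\r\n\r\n" not in data:
        chunk = conn.recv(4096)
        if not chunk:
            break
        data += chunk
    headers, _, body = data.partition(b"\r\n\r\n")

    # Parse Content-Length out of the raw headers to know how much body to expect.
    length = 0
    for line in headers.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1])

    # A second (or third...) recv fetches the rest of the POST data.
    while len(body) < length:
        chunk = conn.recv(4096)
        if not chunk:
            break
        body += chunk
    return headers, body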