What is the "Upgrade-Insecure-Requests" HTTP header? - google-chrome

I made a POST request to an HTTP (non-HTTPS) site, inspected the request in Chrome's Developer Tools, and found that Chrome had added its own header before sending it to the server:
Upgrade-Insecure-Requests: 1
After doing a search on Upgrade-Insecure-Requests, I can only find information about the server sending a related header:
Content-Security-Policy: upgrade-insecure-requests
This seems related, but still very different since in my case, the CLIENT is sending the header in the Request, whereas all the information I've found concerns the SERVER sending the related header in a Response.
So why is Chrome (44.0.2403.130 m) adding Upgrade-Insecure-Requests to my request and what does it do?
Update 2016-08-24:
The spec defining this header has since become a W3C Candidate Recommendation, and the header is now officially recognized.
For those who just came across this question and are confused, the excellent answer by Simon East explains it well.
The Upgrade-Insecure-Requests: 1 header used to be HTTPS: 1 in the previous W3C Working Draft and was renamed quietly by Chrome before the change became officially accepted.
(This question was asked during that transition, when there was no official documentation on the header and Chrome was the only browser sending it.)

Short answer: it's closely related to the Content-Security-Policy: upgrade-insecure-requests response header, indicating that the browser supports it (and in fact prefers it).
It took me 30 minutes of Googling, but I finally found it buried in the W3 spec.
The confusion arises because the header in the spec was originally HTTPS: 1, and this is how Chromium implemented it. But after this broke lots of poorly coded websites (particularly WordPress and WooCommerce installations), the Chromium team apologized:
"I apologize for the breakage; I apparently underestimated the impact based on the feedback during dev and beta."
— Mike West, in Chrome Issue 501842
Their fix was to rename it to Upgrade-Insecure-Requests: 1, and the spec has since been updated to match.
Anyway, here is the explanation from the W3 spec (as it appeared at the time)...
The HTTPS HTTP request header field sends a signal to the server expressing the client’s preference for an encrypted and authenticated response, and that it can successfully handle the upgrade-insecure-requests directive in order to make that preference as seamless as possible to provide.
...
When a server encounters this preference in an HTTP request’s headers, it SHOULD redirect the user to a potentially secure representation of the resource being requested.
When a server encounters this preference in an HTTPS request’s headers, it SHOULD include a Strict-Transport-Security header in the response if the request’s host is HSTS-safe or conditionally HSTS-safe [RFC6797].
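In practice, a server can honor that preference with a redirect. Here is a minimal sketch in Node/Express (the framework, port, and 307 status are my assumptions for illustration, not mandated by the spec):

// Hypothetical Express middleware: upgrade insecure requests from
// browsers that advertise support via Upgrade-Insecure-Requests: 1.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  if (!req.secure && req.headers['upgrade-insecure-requests'] === '1') {
    res.set('Vary', 'Upgrade-Insecure-Requests'); // keep caches correct
    // 307 preserves the method, so even a POST survives the upgrade
    return res.redirect(307, 'https://' + req.headers.host + req.originalUrl);
  }
  next();
});

app.listen(8080);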

This explains the whole thing:
The HTTP Content-Security-Policy (CSP) upgrade-insecure-requests
directive instructs user agents to treat all of a site's insecure URLs
(those served over HTTP) as though they have been replaced with secure
URLs (those served over HTTPS). This directive is intended for web
sites with large numbers of insecure legacy URLs that need to be
rewritten.
The upgrade-insecure-requests directive is evaluated before
block-all-mixed-content and if it is set, the latter is effectively a
no-op. It is recommended to set one directive or the other, but not
both.
The upgrade-insecure-requests directive will not ensure that users
visiting your site via links on third-party sites will be upgraded to
HTTPS for the top-level navigation and thus does not replace the
Strict-Transport-Security (HSTS) header, which should still be set
with an appropriate max-age to ensure that users are not subject to
SSL stripping attacks.
Source: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/upgrade-insecure-requests
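For completeness, here is a sketch of how a server might send the response-side directive together with HSTS, per the MDN note above (Express is my choice for illustration; the header values are examples):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Ask the browser to rewrite this page's http:// subresource URLs to https://
  res.set('Content-Security-Policy', 'upgrade-insecure-requests');
  // Still set HSTS so top-level navigations are protected too (see above)
  res.set('Strict-Transport-Security', 'max-age=31536000');
  next();
});

app.listen(8080);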

Related

CORB OPTIONS Requests Blocked in Chrome 73

It appears that in a recent Chrome release (or at least recently when making calls to my API --- I hadn't seen it until today), Google is throwing warnings about cross-origin responses being blocked by CORB.
Cross-Origin Read Blocking (CORB) blocked cross-origin response [domain] with MIME type text/plain. See https://www.chromestatus.com/feature/5629709824032768 for more details.
I have determined that the requests to my API are succeeding, and that it's the pre-flight OPTIONS request that is triggering the warning in the console.
The application calling the API is not explicitly making the OPTIONS request; rather, I have come to understand that the browser sends it automatically as a preflight when making a cross-origin request.
I can confirm that the OPTIONS response does not have a mime-type defined. However, I am a little confused, as it is my understanding that an OPTIONS response consists only of headers and does not contain a body. I do not understand why such a response would require a mime-type to be defined.
Moreover, the console warning says the request was blocked; yet the various POST and GET requests are succeeding. So it looks as though the OPTIONS request isn't actually being blocked?
This is a three-part question:
Why does an OPTIONS response require a mime-type to be defined, when it has no body?
What should the mime-type be for an OPTIONS response, if text/plain is not appropriate? Would application/json be correct?
How do I configure my Apache2 server to include a mime-type for all pre-flight OPTIONS requests?
I have gotten to the bottom of these CORB warnings.
The issue is related, in part, to my use of the X-Content-Type-Options: nosniff header. I set this header in order to stop the browser from trying to sniff the content-type itself, thereby removing mime-type trickery, namely with user-uploaded files, as an attack vector.
The other part of this is related to the content-type being returned as application/json;charset=utf-8. Google's documentation notes:
A response served with a "X-Content-Type-Options: nosniff" response header and an incorrect "Content-Type" response header, may be blocked.
Based on this, I set out to double-check IANA's registry of media types. To my surprise, I discovered that no charset parameter was ever actually defined in any RFC for the application/json type; the registration further notes:
No "charset" parameter is defined for this registration. Adding one really has no effect on compliant recipients.
Based on this, I removed the charset parameter from Content-Type: application/json and can confirm the CORB warnings stopped in Chrome.
In conclusion, it would appear that per a recent Chrome release, Google has opted to start treating the mime-type more strictly than it has in the past.
Lastly, as a side note, the reason all of our application requests still succeed appears to be that Cross-Origin Read Blocking isn't strictly enforced in Chrome:
In most cases, the blocked response should not affect the web page's behavior and the CORB error message can be safely ignored.
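To illustrate the fix described above (removing the charset parameter), here is a hedged sketch in plain Node (the port and payload are made up; the point is the Content-Type without a charset):

const http = require('http');

http.createServer((req, res) => {
  res.setHeader('X-Content-Type-Options', 'nosniff'); // keep sniffing disabled
  // No charset parameter: none is defined for application/json
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);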
Was having the same issue.
The problem I had was that the API was answering the preflight with 200 OK, but with an empty body and without the Content-Length header set.
So either changing the preflight response status to 204 No Content, or simply setting the Content-Length: 0 header, solved the issue.
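A sketch of that fix in plain Node (the allowed origin and headers are illustrative; adjust them to your app):

const http = require('http');

http.createServer((req, res) => {
  if (req.method === 'OPTIONS') {
    res.setHeader('Access-Control-Allow-Origin', 'https://app.example.com');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    // 204 No Content: no body at all, so there is nothing to sniff.
    // (Equivalently: res.writeHead(200, { 'Content-Length': '0' }).)
    res.writeHead(204);
    return res.end();
  }
  // ... handle GET/POST here ...
  res.end();
}).listen(8080);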

Why does Chrome ignore Set-Cookie header?

Chrome has a long history of ignoring the Set-Cookie header. Some of the causes have been deemed bugs and fixed; others persist. None of them are easy to find in documentation.
Set-Cookie not allowed in 302 redirects
Set-Cookie not allowed if host is localhost
Set-Cookie not allowed if Expires is out of acceptable range
I am currently struggling to get Chrome to accept a simple session cookie. Firefox and Safari seem to accept almost any RFC-compliant string for Set-Cookie. Chrome stubbornly refuses to acknowledge that a Set-Cookie header was even sent in the response (it does not show up in Developer Tools (Network)). curl looks fine.
So does anyone have either 1) modern best practices for cross-browser Set-Cookie formatting or 2) more information regarding what can cause Chrome to bork here?
Thanks.
One thing that has bitten me and is not on your list: if you are trying to set a secure cookie through HTTP on localhost, Chrome will reject it because you are not using HTTPS.
This kind of makes sense, but is annoying for local development. (Firefox apparently makes an exception for this case and allows setting secure cookies over HTTP on localhost.)
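A sketch of what that means in practice, using Express (the cookie name, value, and attributes are illustrative, not a recommendation from the answer above):

const express = require('express');
const app = express();

app.get('/login', (req, res) => {
  const token = 'opaque-session-id'; // placeholder value
  if (req.secure) {
    // Over HTTPS the Secure attribute works as expected:
    res.cookie('session', token, { httpOnly: true, secure: true, sameSite: 'lax' });
  } else {
    // Over plain HTTP (e.g. localhost dev), Chrome silently drops a
    // Secure cookie, so omit the attribute or serve localhost via HTTPS:
    res.cookie('session', token, { httpOnly: true, sameSite: 'lax' });
  }
  res.send('ok');
});

app.listen(8080);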

EventSource Modifying Protocol

I work in a corporate environment and found an issue I can't resolve.
It has to do with EventSource changing the URL's scheme from HTTP to HTTPS.
const url = 'http://localhost:8080'; // <-- using HTTP not HTTPS
new window.EventSource(url);
Which results in the browser throwing this error:
GET https://localhost:8080 net::ERR_TUNNEL_CONNECTION_FAILED
I am developing on a website served over HTTPS, so maybe it is by design that EventSource uses the same protocol. Has anyone experienced this issue, or does anyone know how to resolve it?
--- Update ---
Looks like it is by design. When attempting this on another HTTPS site I got this:
Mixed Content: The page at 'https://...' was loaded over HTTPS, but requested an insecure EventSource endpoint 'http://localhost...'. This request has been blocked; the content must be served over HTTPS.
The question still remains: how do I get around this?
EventSource won't change between http and https on its own. Are you using the HTTPS Everywhere plugin for Chrome, or something like that?
I think you are being hit by the same-origin policy. What this means is that the SSE connection must be to the same origin, which basically means the same hostname and domain, same scheme (i.e. either both http or both https) and same port.
You can use CORS to get around this. At the top of your SSE script you need to send back this header:
Access-Control-Allow-Origin: *
This says anyone, from anywhere, is allowed to connect and get the data stream. It has to be done on the server script, there is no way to do it from the client. (By design: the whole point of same-origin policy is to stop people using other people's content and making it look like their own, without permission.)
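A minimal server-side sketch in plain Node (the port and payload are illustrative):

const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Access-Control-Allow-Origin': '*', // let any origin consume the stream
  });
  // Push one event per second until the client disconnects.
  const timer = setInterval(() => res.write('data: ' + Date.now() + '\n\n'), 1000);
  req.on('close', () => clearInterval(timer));
}).listen(8080);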
Shameless plug: see chapter 9 of my book (Data Push Apps with HTML5 SSE, O'Reilly) for finer control of allow-origin, and how it interacts with cookies and basic auth.
BTW, in the book I mention that Chrome won't work with self-signed HTTPS certificates. To be honest, I'm not sure if that is still the case, but it might also be something to watch out for when using HTTPS and localhost.

CSRF using CORS

I'm studying HTML5's security problems. I watched all the presentations made by Shreeraj Shah. I tried to simulate a basic CSRF attack against my own servers, with the withCredentials flag set to true (so the cookies are replayed with the request) and with Content-Type set to text/plain in the request (to bypass the preflight call).
When I tried to launch the attack, the browser told me that the XMLHttpRequest could not be completed because of the Access-Control-Allow-Origin header. So I put a * in the header of the victim's web page, and the browser told me that I can't use the * wildcard when I send a request with withCredentials set to true.
I tried the same thing with the web apps hosted on the same domain, and everything worked (I suppose because the browser doesn't perform these checks when the request comes from the same domain).
So I'm asking: is this a new feature that modern browsers introduced recently to avoid this kind of problem?
Because in Shreeraj's videos, the request went across different domains and it worked...
Thank you all and sorry for my english :-)
EDIT:
I think I found the reason why the CSRF attack doesn't work as it did in Shreeraj's presentations.
I read the earlier CORS document, published in 2010, and found that there wasn't any recommendation about the withCredentials flag being set to true while Access-Control-Allow-Origin is set to *. But if we look at the last two publications about CORS (2012 and 2013), section 6.1 notes that a request with the withCredentials flag set to true cannot be used when Access-Control-Allow-Origin is set to *.
Here are the links:
The previous one (2010): http://www.w3.org/TR/2010/WD-cors-20100727/
The last two (2012, 2013): http://www.w3.org/TR/2012/WD-cors-20120403/ --- http://www.w3.org/TR/cors/
Here is the section I'm talking about: http://www.w3.org/TR/cors/#supports-credentials
If we look at the earlier document, we cannot find that restriction, because it isn't there.
I think this is the reason why the simple CSRF attack Shreeraj Shah demonstrated in 2012 doesn't work today (in modern browsers that follow the W3C's recommendations, of course). Could that be it?
The request will still be made despite the browser error (if there's no pre-flight).
Access-Control-Allow-Origin simply allows JavaScript on a different domain to read the response; it does not affect whether the HTTP request itself is sent.
e.g. it would still be possible for evil.com to make an AJAX POST request to example.com/transferMoney even though example.com sets no CORS headers.
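To make that concrete, here is a sketch of such a request from the attacker's page (the domains are the examples above; this is the kind of "simple" request that skips the preflight):

// Runs on evil.com. The browser still SENDS the POST, with the victim's
// example.com cookies attached; only READING the response is blocked
// unless example.com returns permissive CORS headers.
const xhr = new XMLHttpRequest();
xhr.open('POST', 'https://example.com/transferMoney');
xhr.withCredentials = true; // include the victim's cookies
xhr.setRequestHeader('Content-Type', 'text/plain'); // stays a "simple" request
xhr.send('to=attacker&amount=1000');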

How to: Custom Prevent Caching?

After symcbean's answer I decided to change my question to:
What's the correct way to cache only images/CSS/JS, so that HTML is never cached in any web browser?
Go read some good books on the topic - or the specs. You're currently very badly informed.
A normal "trick" is to use:
Normal for whom? Setting Pragma: no-cache has nothing to do with what the browser caches. Setting Expires to -1 should prevent the current document from being cached - but it's an HTTP/1.0 ONLY attribute - HTTP/1.1 has been in widespread use for the past 8 years.
However this is a very expensive decision. The cost is retrieving all the images, CSS, and JavaScript files on every request.
No - the example you've given is an HTML tag - which can only occur in an HTML file. By default (i.e. in the absence of any specific caching directions) browsers "MAY" use a cached file - in my experience, it's only some mobile devices which cache so aggressively - but none of them implement the requirement to warn the user about this (see RFC 2616, section 13.1.5).
Caching instructions (and indeed all meta-data) should be sent in the HTTP headers - the META tags provide a surrogate mechanism in some cases.
Have a google for Mark Nottingham's caching tutorial - it's a good starting point - but only a starting point.
Configure your server to send the Pragma: no-cache and Expires: ... headers with HTML content. It's trivial to do with Apache in a .htaccess: just add a <Files> section with a pattern that matches any .html file and set the headers there using mod_headers, or better yet mod_expires, as sketched below.
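A sketch of that .htaccess approach (mod_expires and mod_headers must be enabled; the patterns and lifetimes are illustrative, not a recommendation):

# Cache static assets for a month
<FilesMatch "\.(png|jpe?g|gif|css|js)$">
  ExpiresActive On
  ExpiresDefault "access plus 1 month"
</FilesMatch>

# Never cache HTML
<FilesMatch "\.html?$">
  ExpiresActive On
  ExpiresDefault "access plus 0 seconds"
  Header set Cache-Control "no-store, no-cache, must-revalidate"
</FilesMatch>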