Why do most modern browsers require TLS for HTTP/2?
Is there a technical reason behind this, or is it simply to make the web more secure?
http://caniuse.com/#feat=http2
It is partly about pushing more of the web to HTTPS and encouraging users and servers to adopt it. Both Firefox and Chrome developers have stated that this is generally a good thing, for the sake of users' security and privacy.
It is also about broken "middle boxes" deployed on the Internet that assume any TCP traffic on port 80 is HTTP/1.1 and then interfere in order to "improve" or filter the traffic in some way. Sending clear-text HTTP/2 across such networks has a much worse success rate. Insisting on encryption means those middle boxes never get a chance to mess with the traffic.
Further, a certain percentage of deployed HTTP/1.1 servers will return an error response to an Upgrade: header with an unknown protocol (such as "h2c", which is HTTP/2 in clear text), which would also complicate an implementation in a widely used browser. Doing the negotiation over HTTPS is much less error prone, since "not supporting it" simply means falling back to the safe old HTTP/1.1 approach.
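To make the Upgrade: negotiation concrete, here is a minimal sketch of what an h2c attempt looks like on the wire, and why anything other than "101 Switching Protocols" simply means staying on HTTP/1.1. The host and the empty HTTP2-Settings value are placeholders of mine, not something taken from the answer above.

```python
# Minimal h2c upgrade probe (illustrative only; many servers will just
# answer over HTTP/1.1 or return an error instead of switching protocols).
import socket

HOST = "example.com"  # placeholder host

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: Upgrade, HTTP2-Settings\r\n"
    "Upgrade: h2c\r\n"
    "HTTP2-Settings: \r\n"   # base64url-encoded SETTINGS payload; empty here
    "\r\n"
)

with socket.create_connection((HOST, 80), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    status_line = sock.recv(1024).split(b"\r\n", 1)[0].decode("ascii", "replace")
    # "HTTP/1.1 101 Switching Protocols" would mean the server accepts h2c;
    # any other status means the client falls back to plain HTTP/1.1.
    print(status_line)
```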
Related
Several days ago I saw that Google.com was using HTTP/2, but yesterday I became aware that Google.com had switched to SPDY (HTTP/2+QUIC/35).
Two questions:
As you know, HTTP/2 extends SPDY, so why did Google.com roll back to SPDY?
What's the difference between SPDY and SPDY (HTTP/2+QUIC/35)?
HTTP/2+QUIC/35 is not SPDY; it is a new communication protocol named QUIC, based on UDP instead of TCP.
Let's quote https://www.chromium.org/quic :
Key advantages of QUIC over TCP+TLS+HTTP2 include:
Connection establishment latency
Improved congestion control
Multiplexing without head-of-line blocking
Forward error correction
Connection migration
A good presentation is available in this blog article.
In fact, the whole QUIC project is a way to bypass the slow-moving TCP standardization process and evolve the transport more reactively. Google has been experimenting with QUIC for years, transparently, in the Chrome browsers of billions of users, and has now switched to it by default when it works (with a fallback to "classical" HTTP/2 over TCP).
From the developer's point of view, QUIC exposes an HTTP/2 interface, with all of its features.
To my knowledge, the only servers supporting QUIC outside of Google are LiteSpeed - not the OpenLiteSpeed version yet, sadly - and the Go-based Caddy server.
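As an aside (not part of the answer above): the switch-with-fallback behaviour relies on the Alt-Svc response header, which QUIC-capable servers send over ordinary HTTPS and which clients that don't speak QUIC simply ignore. A quick way to look at it; note that the exact token has changed over the years (quic/35, h3-29, h3, ...):

```python
# Print the Alt-Svc advertisement that a QUIC-capable server sends over
# plain HTTPS; clients that don't understand it just ignore this header.
import urllib.request

with urllib.request.urlopen("https://www.google.com/") as resp:
    print(resp.headers.get("Alt-Svc"))
```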
Are you sure they did? Or is the tool you are using to display this info (this extension perhaps?) choosing to display it as such? Check the Network tab in Chrome's developer tools to see what protocol Chrome really thinks it is speaking.
HTTP/2 is the standard version of SPDY so saying something is "SPDY-enabled (HTTP/2)" doesn't make sense. Unless it means it can talk SPDY ("SPDY-enabled") but has chosen in this case to talk HTTP/2 as that's better?
Finally, QUIC is a new protocol Google is experimenting with, which replaces the TCP transport layer that SPDY and HTTP/2 are built on top of. So both can use QUIC instead of TCP, and it's usually faster than TCP (hence the name, which sounds like "quick" and is an acronym of "Quick UDP Internet Connections").
Many, if not all, modern browsers do not use pipelined HTTP requests. In theory, pipelining should speed up requests by reducing the number of round trips required to fetch a website.
According to the HTTP standard, all servers must handle pipelined requests, so the problem should not be a lack of support on the server side.
I have seen some security concerns, such as a layer 7 DoS attack if a client pushes as many pipelined requests as possible to a URL that's performance-intensive for the server, ignoring any answers that might be received.
That would be a reason to turn pipelining support off on the server (violating the standard), but I cannot find any reason to turn it off on the clients.
It is however turned on by default on Android browsers and Chrome mobile.
Why are Chrome, Firefox, IE, Opera and Safari not using pipelined HTTP requests in their desktop (and sometimes mobile) version? What is their reasoning behind turning it off?
Pipelining is disabled for the following reasons:
Firefox:
The bigger issue has frankly been head of line blocking and its impact on performance and robustness. Naïve pipelines simply make performance worse.
Chrome:
The option to enable pipelining has been removed from Chrome, as there are known crashing bugs and known front-of-queue blocking issues. There are also a large number of servers and middleboxes that behave badly and inconsistently when pipelining is enabled. Until these are resolved, it's recommended nobody uses pipelining. Doing so currently requires a custom build of Chromium.
In general:
Buggy proxies are still common and these lead to strange and erratic behaviors that Web developers cannot foresee and diagnose easily.
Pipelining is complex to implement correctly: the size of the resource being transferred, the effective RTT that will be used, as well as the effective bandwidth, have a direct incidence on the improvement provided by the pipeline. Without knowing these, important messages may be delayed behind unimportant ones. The notion of important even evolves during page layout! HTTP pipelining therefore brings a marginal improvement in most cases only.
Pipelining is subject to the head-of-line (HOL) blocking problem.
HTTP/2 offers an alternative:
With HTTP/1.x, the browser has limited ability to leverage above priority data: the protocol does not support multiplexing, and there is no way to communicate request priority to the server. Instead, it must rely on the use of parallel connections, which enables limited parallelism of up to six requests per origin. As a result, requests are queued on the client until a connection is available, which adds unnecessary network latency. In theory, HTTP Pipelining tried to partially address this problem, but in practice it has failed to gain adoption.
HTTP/2 resolves these inefficiencies: request queuing and head-of-line blocking is eliminated because the browser can dispatch all requests the moment they are discovered, and the browser can communicate its stream prioritization preference via stream dependencies and weights, allowing the server to further optimize response delivery.
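To see the multiplexing described above from a client's point of view, here is a small sketch using httpx with its optional HTTP/2 support (pip install "httpx[http2]"); the library choice and the URLs are my own, not something the quoted text prescribes. All the requests can be in flight at once on a single connection instead of queuing behind six parallel HTTP/1.1 connections.

```python
import asyncio
import httpx

async def main():
    urls = ["https://example.com/"] * 6          # placeholder same-origin URLs
    async with httpx.AsyncClient(http2=True) as client:
        # With HTTP/2 these requests are multiplexed as streams on one connection.
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code)

asyncio.run(main())
```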
A proxy can be used as well:
You can try something I did to speed up Konqueror in KDE3. I was dissatisfied that Konqueror did not have HTTP pipelining, so after some searching, I installed Polipo as a local HTTP/HTTPS/FTP proxy and set Konqueror to use it (localhost on port 8123 if I remember correctly). In addition to HTTP pipelining, Polipo also provided improved caching, and since it was a proxy, I could set every browser to use it and the caching would be shared between the browsers. (This also means that it is a good idea to disable each browser's independent caching.)
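For reference, pointing any HTTP client at such a local proxy is just a configuration detail. A hedged sketch follows; it assumes a Polipo-style proxy is actually listening on localhost:8123, as described above.

```python
# Route requests through a local caching proxy (e.g. Polipo on port 8123).
# This only works if such a proxy is actually running on that port.
import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:8123",
    "https": "http://127.0.0.1:8123",
})
opener = urllib.request.build_opener(proxy)

with opener.open("http://example.com/") as resp:   # placeholder URL
    print(resp.status)
```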
Salesforce uses the following process:
Salesforce has a powerful and field-tested approach for mitigating HOLB at the TCP layer: we decouple the relation between an HTTP request and a TCP connection. Think about your transport as composed of multiple TCP connections (as many as the network context would need). Any part of the HTTP request can go over any TCP connection. So if you hit the HOLB in one connection, it not only helps in mitigating affected requests, it also minimizes impact to other application requests using healthy connections. The result is an ability to enjoy the benefits of multiplexing and pipelining at the HTTP layer while minimizing risks of HOLB.
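A rough sketch of that idea (my own illustration, not Salesforce's implementation): spread requests over a small pool of independent connections so that a stall on one of them does not hold up work queued on the others.

```python
# Decouple requests from a single TCP connection by round-robining them
# over several sessions, each with its own connection pool.
import concurrent.futures
import requests

urls = ["https://example.com/"] * 8                      # placeholder workload
sessions = [requests.Session() for _ in range(4)]        # 4 independent pools
jobs = [(url, sessions[i % len(sessions)]) for i, url in enumerate(urls)]

def fetch(url, session):
    # A slow or stalled connection only affects the jobs mapped to it.
    return session.get(url, timeout=10).status_code

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    print(list(pool.map(lambda job: fetch(*job), jobs)))
```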
References
Mozilla Bug 264354 – Enable HTTP pipelining by default
HTTP Pipelining - The Chromium Projects
Chromium Issue 364557: Remove pipelining code from Chrome
Understanding Connection Limits and New Proxy Connection Limits in WinInet and Internet Explorer – Http Client Protocol Issues (and other fun stuff I support)
HTTPS and Keep-Alive Connections – IEInternals
Changes in WinHttp on Windows 7 and onwards wrt HTTP/1.0 – HTTPContext
Content-Length and Transfer-Encoding Validation in the IE10 Download Manager – IEInternals
Use Sensible Long-Lived Cache headers – IEInternals
Web Performance : 2015 : March | Akamai Community
WebSockets, caution required!
HTTP: HTTP/2 - High Performance Browser Networking (O'Reilly)
HTTP Pipelining - Not So Fast...(Nor Slow!) – Guy's Pod
Persistent Connection Behavior of Popular Browsers
Connection management in HTTP/1.x - HTTP | MDN
Download Resumption in Internet Explorer – IEInternals
Networking Improvements in IE10 and Windows 8 – IEInternals
Konqueror very slowly (KDE4) • KDE Community Forums
HTTP Optimization: Multiple TCP Connections and Pipelining
SpeedGuide :: Internet Explorer, Chrome, Firefox Web Browser Tweaks
The Full Picture on HTTP/2 and HOL Blocking – Salesforce Engineering
The accepted answer may be somewhat out of date. Today I've seen desktop Chrome pipeline 10 requests in a single HTTPS connection against our server, which is where I got the pipeline counts.
My site uses HTTP authentication. I've learned that it isn't very secure, that it causes a lot of problems for many browsers, and that not all browsers may support it, so I want to use an alternative that is secure and more widely supported. What are some alternatives?
Is it possible to lock all directories using an HTML login page?
My site uses HTTP authentication and I've learned it isn't very secure
That's false... unless you're referring to something like basic auth over an insecure channel. In that case, anything over the insecure channel has potential issues. (Even if you did some client-side encryption hackery, you still have the problem that the remote host is not verified without the TLS or SSL layer.)
Basic auth is fine in some cases, and not for others. It depends on what you're trying to do.
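For example, basic auth over HTTPS is one line in most clients; the TLS layer is what protects the credentials in transit. A minimal sketch (the URL and credentials are placeholders, not from the question):

```python
import requests

resp = requests.get(
    "https://example.com/protected/",   # hypothetical protected resource
    auth=("alice", "s3cret"),           # sent as a base64 Authorization header
    timeout=10,
)
print(resp.status_code)
```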
it causes a lot of problems for many browsers, and not all browsers may support it
Completely false. I've never seen a browser that didn't support basic auth and digest auth.
what are some alternatives?
This isn't possible to answer without a better understanding of your requirements. Two-factor auth with a DNA sample and a brainwave scan might be more secure but chances are that's not what you're looking for. Besides, you can't forget about the rest of your system and you've told us nothing about that.
Is it possible to lock all directories using an HTML login page?
Yes. How you do this depends on what you're running server-side, but yes it's completely possible and often done.
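As a hedged sketch of that approach (framework and names chosen arbitrarily, not prescribed by the question): a server-side session check can gate every path behind an HTML login form, so protection does not depend on the browser's HTTP auth dialog at all.

```python
# Minimal form-based login gate in Flask; every path except /login
# requires a server-side session.
from flask import Flask, request, session, redirect, render_template_string

app = Flask(__name__)
app.secret_key = "change-me"           # placeholder; use a real secret

LOGIN_FORM = """<form method="post">
  <input name="user"> <input name="password" type="password">
  <button type="submit">Log in</button></form>"""

@app.before_request
def require_login():
    # Allow the login page itself; everything else needs a logged-in session.
    if request.path != "/login" and not session.get("user"):
        return redirect("/login")

@app.route("/login", methods=["GET", "POST"])
def login():
    # Hypothetical credential check; replace with a real user store.
    if request.method == "POST" and request.form.get("password") == "s3cret":
        session["user"] = request.form.get("user", "anonymous")
        return redirect("/")
    return render_template_string(LOGIN_FORM)

@app.route("/")
def home():
    return f"Hello, {session['user']}"
```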
What this means has been an oft-discussed question on Stack Overflow:
<script src="//cdn.example.com/somewhere/something.js"></script>
This gives the advantage that if you're accessing the page over HTTPS, the script is fetched over HTTPS automatically, instead of triggering that scary "Insecure elements on this page" warning.
But why use protocol-relative URLs at all? Why not simply use HTTPS always in CDN URLs? After all, an HTTP page has no reason to complain if you decide to load some parts of it over HTTPS.
(This is more specifically about CDNs; almost all CDNs have HTTPS capability, whereas your own server may not necessarily have HTTPS.)
As of December 2014, Paul Irish's blog on protocol-relative URLs says:
2014.12.17: Now that SSL is encouraged for everyone and doesn’t have performance concerns, this technique is now an anti-pattern. If the asset you need is available on SSL, then always use the https:// asset.
Unless you have specific performance concerns (such as the slow mobile network mentioned in Zakjan's answer) you should use https:// to protect your users.
Because of performance. Establishing an HTTPS connection takes much longer than HTTP; the TLS handshake adds up to 2 RTTs of latency. You can notice it on mobile networks. So it is better not to use HTTPS asset URLs if you don't need them.
There are a number of potential reasons, though none of them are particularly crucial:
How about the next time every business with an agenda pushes a new protocol? Are we going to have to swap out thousands of strings again then? No thanks.
HTTPS is slower than HTTP of the same version
If any of the notes listed at caniuse.com for HTTP/2 are a problem
Conceptually, if the server enforces the protocol, there is no reason to be specific about it in the first place. Agnosticism is what it is. It's covering all your bases.
One thing to note, if you are using CSP's upgrade-insecure-requests, you can safely use protocol-agnostic URLs (//example.com).
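For illustration, the CSP mentioned above is just one extra response header; the browser then upgrades http:// subresource URLs (including protocol-relative ones that would otherwise resolve to http://) to https:// before fetching them. A small standard-library sketch, with the port and markup as placeholders:

```python
# Serve a page with Content-Security-Policy: upgrade-insecure-requests so
# the browser upgrades insecure subresource URLs on its own.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'<script src="//cdn.example.com/somewhere/something.js"></script>'
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Security-Policy", "upgrade-insecure-requests")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```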
Protocol-relative URLs sometimes break JS code that tries to detect location.protocol. They are also not understood by extremely old browsers. If you are developing web services that require maximum backward compatibility (e.g. serving crucial emergency information that can be received/sent on slow connections and/or old devices), do not use protocol-relative URLs.
Note: There are existing questions that look like duplicates (linked below), but most of them are from a few years ago. I'd like to get a clear and definitive answer that proves things either way.
Is making an entire website run in HTTPS not an issue today from a best practice and performance / SEO perspective?
UPDATE: I am looking for more information with sources, especially around the impact on SEO. Bounty added.
Context:
The conversation came up when we wanted to introduce some buttons that spawn lightboxes with forms in them that collect personal information (some of them even allow users to log in). This is on pages that make up a big portion of the site. Since the forms need to collect and submit information securely and they are not on pages of their own, the easiest way we could see to make this possible was to make the pages themselves HTTPS.
What I would like is for an answer that covers issues with switching a long running popular site to HTTPS such as the ones listed below:
Would a handshake be negotiated on every request?
Will all assets need to be encrypted?
Would browsers not cache HTTPS content, including assets?
Are downstream transparent proxies not caching HTTPS content, including assets (CSS, JS, etc.), still an issue?
Would all external assets (tracking pixels, videos, etc) need to have HTTPS version?
HTTPS and gzip might not be happy together?
Backlinks and organic links will always be HTTP, so you will be 301'ing all the time; does this impact SEO / performance? Is there any other SEO impact of changing this sitewide?
There's a move with some of the big players to always run HTTPS, see Always on SSL, is this setting a precedent / best practice?
Duplicate / related questions:
Good practice or bad practice to force entire site to HTTPS?
Using SSL Across Entire Site
SSL on entire site or just part of it?
Not sure I can answer all points in one go with references, but here goes. Please edit as appropriate:
Would a handshake be negotiated on every request?
No, SSL connections are typically reused for a number of consecutive requests. The overhead once associated with SSL is mostly gone these days. Computers have also gotten a lot faster.
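As a small illustration of that reuse (the URL is a placeholder): with a persistent session, consecutive HTTPS requests to the same host ride on one kept-alive connection, so the handshake is not paid again per request.

```python
import requests

with requests.Session() as s:              # one connection pool, kept alive
    for _ in range(3):
        r = s.get("https://example.com/", timeout=10)
        print(r.status_code)               # no new TLS handshake per request
```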
Will all assets need to be encrypted?
Yes, otherwise the browser will not consider the entire site secure.
Would browsers not cache HTTPS content, including assets?
I do not think so, caching should work just fine.
Are downstream transparent proxies not caching HTTPS content, including assets (CSS, JS, etc.), still an issue?
For the proxy to cache SSL encrypted connections/assets, the proxy would need to decrypt the connection. That largely negates the advantage of SSL. So yes, proxies would not cache content.
It is possible for a proxy to be an SSL endpoint to both client and server, so it has separate SSL sessions with each and can see the plaintext being transmitted. One SSL connection would be between the proxy and the server, the proxy and the client would have a separate SSL connection signed with the certificate of the proxy. That requires that the client trusts the certificate of the proxy and that the proxy trusts the server certificate. This may be set up this way in corporate environments.
Would all external assets (tracking pixels, videos, etc) need to have HTTPS version?
Yes.
HTTPS and gzip might not be happy together?
Being at different protocol levels, they should work fine together. gzip is negotiated after the SSL layer is put over the TCP stream. For reasonably well-behaved servers and clients there should be no problems.
Backlinks and organic links will always be HTTP so you will be 301'ing all the time, does this impact SEO?
Why would backlinks always be HTTP? That's not necessarily a given. How it impacts SEO very much depends on the search engine in question. An intelligent search engine can recognize that you're simply switching protocols and not punish you for it.
1- Would a handshake be negotiated on every request?
There are two issues here:
Most browsers don't need to re-establish a new connection between requests to the same site, even with plain HTTP. HTTP connections can be kept alive, so, no, you don't need to close the connection after each HTTP request/response: you can re-use a single connection for multiple requests.
You can also avoid performing multiple full handshakes when parallel or subsequent SSL/TLS connections are required. There are multiple techniques explained in ImperialViolet - Overclocking SSL (definitely relevant for this question), written by Google engineers, in particular session resumption and false start. As far as I know, most modern browsers support at least session resumption.
These techniques don't get rid of new handshakes completely, but reduce their cost. Apart from session-reuse, OCSP-stapling (to check the certificate revocation status) and elliptic curves cipher suites can be used to reduce the key exchange overhead during the handshake, when perfect forward-secrecy is required. These techniques also depend on browser support.
There will still be an overhead, and if you need massive web-farms, this could still be a problem, but such a deployment is possible nowadays (and some large companies do it), whereas it would have been considered inconceivable a few years ago.
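To make session resumption a bit more tangible, here is a hedged standard-library sketch (the host is a placeholder; the pin to TLS 1.2 is only there because with TLS 1.3 the session ticket arrives after the handshake and may not be captured this simply):

```python
# Resume a TLS session: the second connection presents the session from the
# first one and, if the server cooperates, skips the full handshake.
import socket
import ssl

HOST = "example.com"                           # placeholder host
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2   # see note in the lead-in

def connect(session=None):
    sock = socket.create_connection((HOST, 443), timeout=5)
    return ctx.wrap_socket(sock, server_hostname=HOST, session=session)

first = connect()
saved = first.session                     # session ID/ticket from the server
first.close()

second = connect(session=saved)
print("resumed:", second.session_reused)  # True if the server resumed it
second.close()
```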
2- Will all assets need to be encrypted?
Yes, as always. If you serve a page over HTTPS, all the resources it uses (iframe, scripts, stylesheets, images, any AJAX request) need to be using HTTPS. This is mainly because there is no way to show the user which part of the page can be trusted and which can't.
3- Would browsers not cache HTTPS content, including assets?
Yes, they will; you can either set Cache-Control: public explicitly on your assets, or assume that the browser will cache them anyway. (In fact, you should prevent caching for sensitive resources.)
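As a small illustration of that distinction (the paths and values are examples of mine, not a universal recommendation): long-lived public caching for static assets, no caching at all for sensitive responses.

```python
# Illustrative Cache-Control choices for HTTPS responses.
ASSET_HEADERS = {"Cache-Control": "public, max-age=31536000, immutable"}
SENSITIVE_HEADERS = {"Cache-Control": "no-store"}

def headers_for(path: str) -> dict:
    # Hypothetical rule: anything under /static/ is a cacheable asset.
    return ASSET_HEADERS if path.startswith("/static/") else SENSITIVE_HEADERS

print(headers_for("/static/app.js"))        # long-lived public caching
print(headers_for("/account/settings"))     # never cached
```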
4- Are downstream transparent proxies not caching HTTPS content, including assets (CSS, JS, etc.), still an issue?
HTTP proxy servers merely relay the SSL/TLS connection without looking into it, so they cannot cache the content. However, some CDNs also provide HTTPS access (all the links on Google Libraries API are available via https://), which, combined with in-browser caching, allows for better performance.
5- Would all external assets (tracking pixels, videos, etc) need to have HTTPS version?
Yes, this goes with point #3. The fact that YouTube supports HTTPS access helps.
6- HTTPS and gzip might not be happy together?
They're independent. HTTPS is HTTP over TLS; the gzip compression happens at the HTTP level. Note that you can compress the SSL/TLS connection directly, but this is rarely used: you might as well use gzip compression at the HTTP level if you need it (there's little point in compressing twice).
7- Backlinks and organic links will always be HTTP so you will be 301'ing all the time, does this impact SEO?
I'm not sure why these links would have to use http://. URL shortening services are, generally speaking, a problem for SEO, if that's what you're referring to.
I think we'll see more and more usage of HTTP Strict Transport Security, so more https:// URLs by default.
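A hedged sketch of that combination (host and port are placeholders): the plain-HTTP listener answers everything with a 301 to the https:// URL, and the HTTPS responses carry a Strict-Transport-Security header so returning browsers skip the redirect entirely.

```python
# Redirect all plain-HTTP traffic to HTTPS; the HTTPS side would add HSTS.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "https://example.com" + self.path)
        self.end_headers()

# Header the HTTPS responses would also include (values illustrative):
HSTS = ("Strict-Transport-Security", "max-age=31536000; includeSubDomains")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), RedirectToHTTPS).serve_forever()
```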