Confusing HTTP/2 protocol information in Chrome debugger Network tab - google-chrome

I see some of them show "h2" and some "http/2+quic/43" but never "h2+quic/43". What's the difference between h2 and http/2 in this case? And what's the "43" in "quic/43"? Protocol version or port number?

Well, basically QUIC is still being worked on and is not standardised. Google, as the inventors, have their own implementation (sometimes called gQUIC) which is only available in Chromium-based browsers and in a few server implementations. It is based on HTTP/2 (well, actually it was based on SPDY, which then got standardised into HTTP/2).
It doesn't really use HTTP/2 any more but a modified version of it. So whether you call it h2 or http/2 doesn't really matter - it's neither. But at a high level, h2 and http/2 can be treated the same in this context.
When QUIC is formally standardised later this year (or possibly even next year) by the IETF, it will use HTTP/3 to reflect the divergence from HTTP/2, and so the label should change to h3. That is currently being worked on, but no browser supports it yet. It is known as iQUIC for now, but I imagine it will just become QUIC once it becomes a formal standard and Google migrates to it and stops using gQUIC (in a similar way that they deprecated SPDY once HTTP/2 was formalised). gQUIC and iQUIC are already quite different.
The number 43 is a version number, not a port. Google used to iterate QUIC quite quickly, as they were in charge of both ends (browser and server), though this seems to have slowed down now (hopefully reflecting its maturity and the fact that fewer changes are needed). There used to be a change log in the Chromium source code showing what changed in each version, but I can't find it now...
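If you want to inspect the negotiated protocol from page script rather than the Network tab, the Resource Timing API exposes the same string as `nextHopProtocol`. A small sketch; the `describeProtocol` helper and its wording are my own illustration, not an official mapping:

```javascript
// Classify the protocol string the browser reports in
// PerformanceResourceTiming.nextHopProtocol ("h2", "http/2+quic/43", ...).
function describeProtocol(alpn) {
  if (alpn === 'h2') return 'HTTP/2 over TCP';
  const quic = alpn.match(/quic\/(\d+)/); // trailing number = QUIC version
  if (quic) return `Google QUIC, version ${quic[1]}`;
  return alpn; // anything else (http/1.1, later h3, ...) passes through
}

// In a browser console you would feed it real timing entries:
// for (const e of performance.getEntriesByType('resource')) {
//   console.log(e.name, '->', describeProtocol(e.nextHopProtocol));
// }
```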

Related

Why does Google Chrome generate 2 types of client hello message

I'm analyzing Google Chrome TLS Client Hello messages and I found that Chrome generates two types of fingerprints: one with GREASE extensions and one without. I want to know why Chrome generates two types of Client Hello messages, and what determines which type it sends each time.
Please, I need help.
In the past, TLS implementations in servers and middleboxes often relied too much on behavior they'd seen instead of behavior as standardized. This resulted in broken TLS handshakes when new ciphers or extensions were added. Clients then worked around this by downgrading to older "known good" behavior: downgrading the TLS protocol version, not offering newer ciphers, etc. Given that the typical client successfully worked around broken systems, there was no actual incentive to fix those systems.
Based on the market penetration and control Google has with its Chrome browser, they've decided to tackle this problem in a more active way: on the one side they've nagged vendors to fix their implementations, and on the other side they added some unpredictable "grease" to their clients so that system designers could no longer rely on a specific behavior but were forced to actually read and implement the standards instead.
It looks like this grease is not applied to all requests but only to a subset, and different grease values are used each time, so that peer implementations cannot rely on deterministic behavior. This matches what is described in RFC 8701, Applying Generate Random Extensions And Sustain Extensibility (GREASE) to TLS Extensibility.
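For concreteness, RFC 8701 reserves sixteen two-byte values of the form 0xNaNa for GREASE. A sketch of how a client could generate and pick one at random per handshake (function names are mine):

```javascript
// The 16 reserved GREASE code points from RFC 8701. Each two-byte value
// has identical bytes with low nibble 0xA: 0x0a0a, 0x1a1a, ..., 0xfafa.
function greaseValues() {
  const values = [];
  for (let n = 0; n < 16; n++) {
    const byte = (n << 4) | 0x0a;    // 0x0a, 0x1a, ..., 0xfa
    values.push((byte << 8) | byte); // 0x0a0a, 0x1a1a, ..., 0xfafa
  }
  return values;
}

// A client injects one of these, chosen at random, among its real cipher
// suites / extensions so peers can't hardcode any single observed value.
function randomGrease() {
  const v = greaseValues();
  return v[Math.floor(Math.random() * v.length)];
}
```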

How does GQUIC affect the WebRTC process?

I am making a simple WebRTC application for myself in order to understand the WebRTC process.
I am using the RTCPeerConnection object to generate an SDP and display it in my logs so I could see exactly what the SDP contains.
This was working fine on all popular browsers until the more recent Chrome update, which no longer displays the SDP.
I used wireshark to examine the packets and I can see that Chrome is using the GQUIC protocol, where other browsers use DNS and STUN protocols.
From this my questions are:
Is GQUIC preventing the SDP from being generated or from being displayed?
How, if at all, can I get the SDP to appear again in Chrome?
No
It appears GQUIC is not the reason the update prevented the SDP from being created. GQUIC is a protocol built on UDP that improves latency while allowing for the reliability of TCP, and I could find no reason for it to affect the SDP; it was simply a coincidence that I noticed it for the first time when the other problem occurred.
Quick fix: set "WebRTC: Use Unified Plan SDP Semantics by default" to Disabled in chrome://flags
The reason the SDP stopped working for me is that the new Chrome version has enabled "WebRTC: Use Unified Plan SDP Semantics by default", since they appear to be moving from Plan B to Unified Plan, which alters how the SDP is structured. I am still trying to work out the exact difference this makes to the SDP, but in the meantime I was able to at least see the site working again when I changed the flag, so I know that was the cause.
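For completeness, Chrome also accepted a non-standard per-connection override during this transition, so you didn't have to flip the global flag. A sketch, assuming a Chrome version from that era (the STUN server URL is just an example):

```javascript
// Non-standard, Chrome-only at the time: pin the SDP semantics per
// connection instead of relying on the chrome://flags default.
const config = {
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // example server
  sdpSemantics: 'plan-b' // or 'unified-plan' to opt in to the new format
};

// RTCPeerConnection only exists in the browser; guard so the snippet can
// be loaded elsewhere without throwing.
if (typeof RTCPeerConnection !== 'undefined') {
  const pc = new RTCPeerConnection(config);
  pc.createOffer({ offerToReceiveAudio: true })
    .then(offer => console.log(offer.sdp)); // inspect the generated SDP
}
```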

Is Service Worker intended to replace or coexist with Appcache?

Is ServiceWorker intended to replace Appcache, or is the intention that the two will coexist? Phrased another way, is appcache about to become deprecated?
Blink's Service Worker team is keen on deprecating AppCache (We will follow our usual intent to deprecate process). We believe that Service Worker is a much better solution. Also, it should be pretty easy to offer a drop-in replacement for AppCache built on top of SW. We'll start by collecting usage metrics and do some outreach.
AppCache and Service Worker should coexist without any issue since offering offline support via AppCache for browsers that don't support Service Workers is a valid use case.
#flo850 If it's not working, please let us know by filing a bug.
I must say that Service Worker is not only a replacement for AppCache, but far more capable. An AppCache can't be partially updated, the byte-by-byte manifest comparison used to trigger updates is odd, and there are several use cases that lead to security and terrible usability problems.
Chrome and Firefox are even planning to stop supporting AppCache in the near future, now that service workers are supported by Chrome, Opera, and Firefox. The noises coming from Microsoft and Safari have also been positive: implementation is under consideration.
As a cache tool, it will coexist with AppCache for now, since AppCache works on virtually every browser.
But service workers are a solid foundation that will permit new usage like push (even when the browser is in the background), geofencing, or background synchronization.
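A minimal sketch of that cache use case, roughly the Service Worker equivalent of an AppCache manifest (cache name and asset list are illustrative):

```javascript
// sw.js - pre-cache a fixed asset list on install (what an AppCache
// manifest declared) and serve cache-first with network fallback.
const CACHE = 'app-v1';
const ASSETS = ['/', '/styles.css', '/app.js'];

// The listeners only make sense inside a real service worker scope;
// the guard lets the file be read or loaded elsewhere without errors.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', event => {
    event.waitUntil(caches.open(CACHE).then(c => c.addAll(ASSETS)));
  });
  self.addEventListener('fetch', event => {
    // Unlike AppCache, the fallback logic is ordinary script you control.
    event.respondWith(
      caches.match(event.request).then(hit => hit || fetch(event.request))
    );
  });
}
```

The page registers it with `navigator.serviceWorker.register('/sw.js')`; updating the cache is then a matter of bumping `CACHE` and deleting old entries in an `activate` handler, rather than a full byte-by-byte manifest comparison.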

HTML 5 Websockets will replace Comet?

It looks like Websockets in HTML 5 will become a new standard for server push.
Does that mean the server push hack called Comet will be obsolete?
Is there a reason why I should learn how to implement comet when Websockets soon (1-2 years) will be available in all major browsers?
Then I could just use Beaconpush or Pusher instead till then right?
There are 2 pieces to this puzzle:
Q: Will the client-side portion of "comet" be necessary?
A: Yes. Even in the next 2 years, you're not going to see full support for WebSockets in the "major" browsers. IE8, for example, doesn't have support for it, nor does the current version of Firefox. Given that IE6 was released in 2001 and it's still around today, I don't see WebSockets replacing comet completely anytime soon.
Q: Will the server-side portion of "comet" be necessary?
A: Yes. Comet servers are designed to handle long-lived HTTP connections, where "typical" web servers do not. Even if the client side supports WebSockets, the server side will still need to be designed to handle the load.
In addition, as "gustavogb" mentioned, at least right now WebSockets aren't properly supported in a lot of HTTP Proxies, so until those all get updated as well, we'll still need some sort of fallback mechanism.
In short: comet, as it exists today, is not going away any time soon.
As an added note: the versions of WebSockets that currently ARE implemented in Chrome and Safari are two different drafts, and work on the "current" draft is still under very heavy development, so I don't even believe it is realistic to say that WebSockets support is functional at the moment. As a curiosity or for learning, sure, but not as a real spec, at least not yet.
[Update, 2/23/11]
The currently shipping version of Safari has a broken implementation (it doesn't send the right header), Firefox 4 has just deprecated WebSockets, so it won't ship enabled, and IE9 isn't looking good either. Looks like Chrome is the only one with a working, enabled version of a draft spec, so WebSockets has a long way to go yet.
Does that mean the server push hack called Comet will be obsolete?
WebSockets are capable of replacing Comet, AJAX, Long Polling, and all the hacks to workaround the problem when web browsers could not open a simple socket for bi-directional communications with the server.
Is there a reason why I should learn how to implement comet when WebSockets soon will be available in all major browsers?
It depends what "soon" means to you. No version of Internet Explorer (pre IE 9) supports the WebSockets API yet, for example.
UPDATE:
This was not intended to be an exhaustive answer. Check out the other answers, and #jvenema's in particular, for further insight into this topic.
Consider using a web socket library/framework that falls back to comet in the absence of browser support.
Check out Orbited and Hookbox.
In the medium term, websockets won't replace comet solutions, not only because of the lack of browser support but also because of HTTP proxies. Until most HTTP proxies are updated to support websocket connections, web developers will have to implement alternative solutions based on comet techniques to work in networks "protected" by this kind of proxy.
In the short/medium term, websockets will just be an optimization to use when available, but you will still need to implement long polling (comet) to fall back on when websockets are not available, if you need to make your website accessible to a lot of customers with networks/browsers not under your control.
Hopefully this will be abstracted by javascript frameworks and will be transparent for web developers.
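The fallback pattern those frameworks implement can be sketched in a few lines (endpoint URLs are placeholders, and I'm using today's `fetch` for brevity; the libraries of the time used `XMLHttpRequest`):

```javascript
// Use WebSocket when the browser (and intervening proxies) support it,
// otherwise degrade to comet-style long polling.
function supportsWebSocket() {
  return typeof WebSocket !== 'undefined';
}

function connect(onMessage) {
  if (supportsWebSocket()) {
    const ws = new WebSocket('wss://example.com/push');
    ws.onmessage = e => onMessage(e.data);
    return ws;
  }
  // Long poll: the server holds each request open until data is ready,
  // and the client immediately re-polls after every response.
  (function poll() {
    fetch('https://example.com/poll')
      .then(r => r.text())
      .then(data => { onMessage(data); poll(); })
      .catch(() => setTimeout(poll, 5000)); // back off on network error
  })();
  return null;
}
```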
Yes, because "soon" is a very slippery term. Many web shops still have to support IE6.
No, because a rash of comet frameworks and servers has emerged in recent times that will soon make it largely unnecessary for you to get your hands dirty in the basement.
Yes, because "soon" is a very slippery term...
Charter for the IETF working group tasked with websockets, BiDirectional or Server-Initiated HTTP (hybi):
Description of Working Group
HTTP has most often been used as a request/response protocol, leading
to clients polling for new data, or users hitting the refresh button in
their browsers. Recent web applications are finding ways to communicate
with web servers in realtime, pushing data from the server-side to the
client as soon as it is available. However, these applications at
present can only use a variety of HTTP mechanisms (e.g. long polling
requests) to communicate with web servers bidirectionally.
The Hypertext-Bidirectional (HyBi) working group will seek
standardization of one approach to maintain bidirectional
communications between the HTTP client, server and intermediate
entities, which will provide more efficiency compared to the current
use of hanging requests.
HTTP still has a role to play; it's a flexible, message-oriented system. Websockets was developed to provide bidirectionality and avoid the long-polling issue altogether, and it does this well, but it's simpler than HTTP, and there are a lot of things that are useful about HTTP. There will certainly be continued progress enriching HTTP's bidirectional communication, be it comet or other push technologies. My own humble attempt is http://github.com/rektide/pipe-layer.

Browser feature detection: spell checking?

All the decent browsers (Chrome, Firefox, etc.) now support built-in spell checking.
However the popular but rubbish axis of IE doesn't (not even IE8 - pointless 'accelerators': yes, much needed dictionary support: no). Unless you download an excellent free plugin, but you can't expect corp users to do that.
Our clients want spell checking in the enterprise web app that we supply, so we bought in a 3rd party spell checking component to keep them happy. It works, but isn't terribly good - especially when compared to anything built in to the browser.
It also looks like the spell check dialog in Word 2000 (probably current back when it was developed). Not such a problem for our clients, half of whom are stuck on Office 2000 and IE6.
I want to only enable this component when the user doesn't have built in spell checking.
Does anyone know a way to detect this?
You already know which browsers have built-in support and which browsers don't so you may use some form of browser sniffing to decide whether you enable the spell-checking component or not.
You may also try to ask your users if they already have some spell-checking enabled and let them answer Yes/No/Don't know. If they don't know, fall back to automatic detection. This is better than using sniffing only because sniffing is known to be unreliable in some circumstances.
Detecting things that are part of a browser's UI is hard, if it's possible at all. Due to browsers' security policies, a web site can't access most of the APIs that could expose something useful for feature detection. And even if security were not a problem, you would probably still face a distinct API for every browser, since internal browser mechanics are not standardized.
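The closest thing to a feature test I know of is checking whether the browser understands the HTML5 `spellcheck` attribute at all. Note this only proves the DOM property exists, not that a dictionary is actually installed, so treat it as a hint and keep the Yes/No/Don't know user override suggested above (the function name is mine):

```javascript
// Returns true if the given document's textareas expose the HTML5
// spellcheck property - a hint, not proof that a dictionary is available.
function hasSpellcheckAttribute(doc) {
  return 'spellcheck' in doc.createElement('textarea');
}

// In the browser:
//   if (!hasSpellcheckAttribute(document)) {
//     // enable the third-party spell-checking component instead
//   }
```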
Not sure if this is possible even with something like browsercap or Microsoft Browser Definition File Schema, as mentioned above it is kind of outside the allowed scope.
Have you considered just going with a server side spell checker? So they can use the client if they like or click the spell check button like in GMail. This also means that you can control any updates to the dictionary.