According to the WebSocket Draft-76 spec, WebSocket.close is supposed to do the following:
"To close the connection cleanly, a frame consisting of just a 0xFF byte followed by a 0×00 byte is sent from one peer to ask that the other peer close the connection."
But, after a few tests, I don't think that Chrome is doing anything when close() is called. I'm curious whether I'm doing something wrong or if it's a known bug.
I haven't noticed any issues when testing with Chrome. I haven't inspected the frames either though.
I know this topic is really old, but I noticed that Chrome is the only browser that doesn't send 0xFF 0x00 on the socket close command; instead it just closes its socket connection on the browser side. So I only notice that a Chrome user is offline when I fail to receive data from that socket. Just my two cents :)
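For reference, this is roughly what the server side of that closing handshake looks like. A minimal sketch in Node/TypeScript using the built-in net module, assuming the Draft-76 opening handshake has already happened elsewhere and that the closing frame arrives in its own chunk (a real implementation would parse the 0x00 ... 0xFF data framing properly):

```typescript
import * as net from "net";

// Minimal sketch of the server side of the Draft-76 closing handshake.
// The opening handshake and data-frame parsing are omitted; this assumes the
// closing frame arrives in its own TCP chunk.
function watchForClose(socket: net.Socket): void {
  socket.on("data", (chunk: Buffer) => {
    if (chunk.length === 2 && chunk[0] === 0xff && chunk[1] === 0x00) {
      // Peer asked to close cleanly: echo the closing frame and end the socket.
      socket.end(Buffer.from([0xff, 0x00]));
      console.log("clean close (0xFF 0x00 received)");
    }
  });

  socket.on("close", (hadError) => {
    // Browsers that skip the closing frame (as described for Chrome above)
    // only ever show up here, as a bare TCP close.
    console.log("socket closed", hadError ? "with error" : "without closing frame");
  });
}
```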
I added a self-signed cert to an app. When I access it in the regular Chrome window, it does not offer a "Proceed to..." link under Advanced. However, if I access it in Incognito, I do get a "Proceed to" link. Can you help me understand why?
Most likely, the very first time you hit that URL in Chrome you were asked that question, and once you answered yes ("I'll trust this certificate"), Chrome no longer asked you again.
When you start up in Incognito mode, the browser doesn't use any previously saved information, so you are asked to proceed to the link every time you run in Incognito mode.
I hope that explains it.
Similarly, as a web dev, you'll run into this often. You'll find AJAX calls fail because you're hitting a website that has an untrusted certificate (but the error is cryptic). It's the most confusing thing until you realize what is happening. The fix is to open the website's URL directly, answer yes to proceed to the website, and now your AJAX calls will work again. Until Chrome decides that it doesn't trust the website again and you repeat the process.
For me it can be weeks before Chrome decides to untrust a website again, so by then I've forgotten the solution. After about 50 times, you catch on and realize the pattern. At least I did. :-)
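For what it's worth, this is roughly what that failure looks like from the calling code. A sketch only: the URL is a placeholder for any site with an untrusted (e.g. self-signed) certificate, and the point is that the certificate problem surfaces as a bare network error with no detail:

```typescript
// Sketch: https://internal.example.test is a placeholder for a site whose
// certificate the browser does not trust.
async function loadData(): Promise<void> {
  try {
    const res = await fetch("https://internal.example.test/api/data");
    console.log("status:", res.status);
  } catch (err) {
    // An untrusted certificate ends up here as a generic network error
    // (Chrome reports "TypeError: Failed to fetch") with no certificate
    // detail. Opening the URL in a tab and clicking "Proceed" is what makes
    // this call start working again.
    console.error("AJAX call failed:", err);
  }
}
```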
I have a site that is served via both HTTPS and HTTP.
The issue only happens when accessing it via HTTPS. The steps are:
1. Open Chrome and load this single site over HTTPS for the first time; Chrome stalls the initial connection for about 1 second.
2. However, any immediate subsequent refresh of the same page stalls for only zero to a couple of milliseconds, which can be ignored.
3. Put the page aside (don't interact with it for a short while, maybe a few minutes), then come back and refresh it: it repeats from step 1 (a stall of about 1 second, followed by almost zero stalling on any immediate refresh).
This only happens when accessing the site over HTTPS; there is no such issue over HTTP (always almost zero stalling).
The issue is only seen in Chrome, not in Safari or Firefox (there is almost no stall time); all tested on macOS.
Could anyone offer some ideas? Why does the first load introduce a 1-second stall, and how can I reduce that stall time?
screenshot
Sorry, this issue is really hard to explain.
I think I found the cause now. I was using a self-signed certificate for the HTTPS connection, and although I added it to the browser's exception list to trust it, it looks like Chrome is stricter about this than other browsers. After switching to a certificate signed by a trusted CA, the Chrome stall time dropped to just a few milliseconds for all requests. I'm happy to close this question now.
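For anyone hitting something similar, a quick way to confirm that the stall is in the TLS handshake rather than in the request itself is to read the page's Navigation Timing entry; a small sketch (not from the original post) you can run in the DevTools console:

```typescript
// Sketch: split the initial page load into DNS / TCP / TLS / request phases
// using the Navigation Timing API, to see where the ~1 second stall sits.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

console.table({
  dns: nav.domainLookupEnd - nav.domainLookupStart,
  tcp: nav.connectEnd - nav.connectStart,
  // secureConnectionStart is 0 over plain HTTP or on a reused connection,
  // which is why the immediate refreshes show almost no stall.
  tls: nav.secureConnectionStart ? nav.connectEnd - nav.secureConnectionStart : 0,
  request: nav.responseStart - nav.requestStart,
});
```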
I'm trying to debug the difference between HTTP/1.1 and HTTP/2.
Is there any way to disable HTTP/2 in Chrome or Chromium?
I couldn't find such an option flag in Chrome 56. I have tried Chromium 58 with the flag --disable-http2:
./Chromium.app/Contents/MacOS/Chromium --disable-http2
But content is still delivered over the HTTP/2 protocol after using this flag.
For what it is worth, the flag works.
The issue is that you need to quit EVERYTHING Chrome-related for it to take effect, including plugin shims, other Chrome tabs, and so on.
It is not enough just to add the command line switch.
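Once everything Chrome has been restarted with the switch, a quick sanity check for whether HTTP/2 is actually off (besides the Protocol column in DevTools) is the Resource Timing API; a small sketch (not from the original answer) you can paste into the console of the page under test:

```typescript
// Sketch: log the negotiated protocol ("http/1.1", "h2", ...) for the main
// document and every sub-resource on the current page.
const entries = performance.getEntries() as (PerformanceEntry & {
  nextHopProtocol?: string;
})[];

for (const e of entries) {
  if (e.nextHopProtocol) {
    console.log(e.nextHopProtocol.padEnd(8), e.name);
  }
}
```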
An easier way to achieve something broadly equivalent is to use an HTTP proxy like https://www.telerik.com/fiddler. This adds negligible additional time to your requests, and (as far as I know) doesn't support HTTP/2 at all (yet); even if it did, I'm pretty sure it would be much easier and more practical to switch the behavior there than to restart all your Chrome windows.
The advantage of this approach is that it takes effect immediately: disabling and re-enabling HTTP/2 becomes as easy as starting and stopping the proxy, without messing with the (if you're anything like me) dozens of Chrome tabs you have open, to StackOverflow and elsewhere :)
What happens when you try doing the same thing in WebPageTest (select Chrome as the test agent and add the command line switch in the Chrome tab under advanced settings)?
Here's a test I did for my personal site just now, and the flag appears to work OK (if you look at the response headers you'll see HTTP/1.1):
https://www.webpagetest.org/result/170322_1B_ab8656afcfb8bcc4103e9872ff56c28b/1/details/#waterfall_view_step1
I have seen the same problem created by a firewall running in proxy mode vs flow mode.
The firewall would buffer the entire file so it could scan it and then pass it along, versus scanning the individual packets.
https://docs.fortinet.com/document/fortigate/6.4.4/administration-guide/721410/inspection-modes
The problem would only happen when using HTTP/2, and might have something to do with HTTP request priority not being handled properly, or with the proxy forcing everything to be single-threaded.
We would have a video request start with a low priority, stall, and then start causing other file downloads to be delayed. There was also an API poller in the background coming in with high-priority requests. After a few high-priority requests were blocked, Chrome would cancel the low-priority video.
It would happen in other cases too, but the video made it very reproducible for us.
https://medium.com/dev-channel/javascript-loading-priorities-in-chrome-57c54cfa6672
https://blog.cloudflare.com/better-http-2-prioritization-for-a-faster-web/
https://blog.cloudflare.com/http-2-prioritization-with-nginx/
https://calendar.perfplanet.com/2018/http2-prioritization/
We set it back to flow mode on the firewall and the problem went away.
Afterwards the downloads all happened in parallel with no blocking or stalling in the chrome network waterfall.
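As an illustration of the pattern described above (not the original application code), this is roughly how you could reproduce the mix of one low-priority download plus periodic high-priority polls and then watch the network waterfall; both URLs are placeholders, and the fetch() priority option is Chrome's Priority Hints feature:

```typescript
// Sketch: one long low-priority download (the "video") competing with a
// periodic high-priority API poll, mirroring the scenario described above.
fetch("/media/large-video.mp4", { priority: "low" })
  .then((res) => res.arrayBuffer())
  .then(() => console.log("video finished"))
  .catch((err) => console.log("video request failed or was cancelled:", err));

setInterval(() => {
  fetch("/api/poll", { priority: "high" })
    .then((res) => console.log("poll status:", res.status))
    .catch((err) => console.log("poll failed:", err));
}, 1000);
```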
I have a weird problem with WebSockets and Chrome (22.0.1229.79m) (I haven't coded authentication for other browsers yet, so I can't test them). It seems like if I reload Chrome 3 times, there will be a huge delay in connecting to my WebSocket server. The server is not delaying the connection; I tested this by connecting to it with another PC while Chrome was delaying, and it connected perfectly.
Is there any way to fix this? This is a problem when I am switching between servers receiving data: it will halt and delay. This is really bad for user experience. I would assume this is strictly related to the Chrome browser not closing the socket...
I have also seen this delay when creating multiple WebSocket connections from the same browser tab in Chrome within a short period of time. I believe this is to address a potential security issue with WebSockets which would allow a browser to be hijacked to do port scanning inside a network. By limiting the number of WebSocket connections that can happen within a given amount of time, you greatly limit the utility of a browser as a remote port scanner. In addition, the amount of information that is returned by onclose and onerror is intentionally limited for the same reasons.
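If that throttling is indeed what you're hitting, a common client-side mitigation is to reuse one connection where possible and, when you do have to reconnect (e.g. when switching servers), back off between attempts rather than opening new sockets in a tight loop. A rough sketch, with the URL and delay values as placeholders:

```typescript
// Sketch: reconnect with exponential backoff so rapid repeated WebSocket
// connection attempts don't trip Chrome's connection throttling.
function connectWithBackoff(url: string, attempt = 0): void {
  const ws = new WebSocket(url);

  ws.onopen = () => {
    console.log("connected");
    attempt = 0; // reset the backoff once a connection succeeds
  };

  ws.onclose = () => {
    // Wait 1s, 2s, 4s, ... up to 30s before the next attempt.
    const delay = Math.min(1000 * 2 ** attempt, 30000);
    console.log(`reconnecting in ${delay} ms`);
    setTimeout(() => connectWithBackoff(url, attempt + 1), delay);
  };
}

connectWithBackoff("wss://example.test/socket"); // placeholder URL
```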
test site http://socket.trailsandtribulations.net
Firefox: v15 works fine (however, with lots of traffic and a slow network, Firefox will frequently fail as well, quietly).
Chrome: previously worked, but v21 gets "Error during WebSocket handshake: 'Connection' header value is not 'Upgrade'".
However, if Chrome is running locally, it works fine! It breaks with my client in Thailand and the server in Germany. Again, Firefox works correctly all the time, as did earlier versions of Chrome.
I'm using haproxy to split traffic between WebSockets (via node.js) and HTML (via nginx).
Has something changed that makes this solution not work?
haproxy.cfg now displays the test site link - that way it's always current.
It looks from your message that the error is reported by Chrome while initially I understood it was reported by the server when accessed by Chrome. I think that as a workaround, if you replace "option httpclose" with "option http-server-close", it could make the issue disappear. You also need to remove all "option forceclose". If there is only "option http-server-close", haproxy will not touch the Connection header in the response path, which should make the browser happy. However, you must keep in mind that there is still a bug where the error is displayed and that it should be reported to the software authors.
BTW, your timeouts are far too large; you'll end up with many dead connections at the end of the day, which does not make sense. If you use a recent enough haproxy, you can use "timeout tunnel" to set the WS timeout without having to deal with a large HTTP timeout. But even then, one day is far too long for TCP connections. Some of your users will be using smartphones, where a TCP connection cannot live more than a few minutes before a handover happens.
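Putting that advice together, the relevant fragment of haproxy.cfg might look something like the sketch below; the section shown and the exact timeout values are illustrative, not taken from the poster's actual config:

```
defaults
    mode http
    # use http-server-close instead of "option httpclose" / "option forceclose"
    # so haproxy leaves the Connection header alone on the response path
    option http-server-close
    timeout client  30s
    timeout server  30s
    # "timeout tunnel" (recent haproxy) governs established WebSocket
    # connections separately, so the plain HTTP timeouts can stay small
    timeout tunnel  1h
```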
Firefox uses no-cache when requesting the WebSocket upgrade; this version of Chrome does not. See http://code.google.com/p/chromium/issues/detail?id=148908&thanks=148908&ts=1347523876
For some proxies, evidently, this is necessary.
Also, see https://github.com/sockjs/sockjs-node/pull/88 for a related issue.