Chrome websocket connection delay

I have a weird problem with websockets and Chrome (22.0.1229.79m) (I haven't coded authentication for other browsers yet, so I can't test them). It seems that if I reload Chrome 3 times, there will be a huge delay in connecting to my websocket server. The server is not delaying the connection; I tested this by connecting to it with another PC while Chrome was delaying, and it connected perfectly.
Is there any way to fix this? This is a problem when I am switching between servers receiving data: the connection halts and delays, which is really bad for the user experience. I would assume this is strictly related to the Chrome browser not closing the socket...

I have also seen this delay when creating multiple WebSocket connections from the same browser tab in Chrome within a short period of time. I believe this is to address a potential security issue with WebSockets which would allow a browser to be hijacked to do port scanning inside a network. By limiting the number of WebSocket connections that can happen within a given amount of time, you greatly limit the utility of a browser as a remote port scanner. In addition, the amount of information that is returned by onclose and onerror is intentionally limited for the same reasons.
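If your page reconnects from script, one way to stay clear of this throttling is to back off between attempts. A minimal sketch in plain JavaScript (the URL and delay values here are illustrative, not from the original question):

    // Sketch: reconnect with exponential backoff so rapid reconnects are
    // less likely to trip the browser's WebSocket throttling. The URL and
    // delay constants are placeholders.
    function connectWithBackoff(url, attempt = 0) {
      const ws = new WebSocket(url);

      ws.onopen = () => {
        attempt = 0; // reset the backoff once a connection succeeds
      };

      ws.onclose = () => {
        // wait 1s, 2s, 4s, ... capped at 30s before trying again
        const delay = Math.min(1000 * 2 ** attempt, 30000);
        setTimeout(() => connectWithBackoff(url, attempt + 1), delay);
      };

      return ws;
    }

    const socket = connectWithBackoff('wss://example.com/socket');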

Related

Loading resources via URL is slow in Chrome - ONLY via https, ONLY from one specific website

I have this 1 MB file for testing purposes (though the issue applies to any file hosted at lptoronto.com): https://lptoronto.com/sandbox/test.png
It always loads instantly in Internet Explorer (both via http and https).
In Google Chrome (currently Version 63.0.3239.132 (Official Build) (64-bit) on Windows 10) the situation is as follows.
When fetched via http, this file loads instantly without an issue.
When fetched via https, the same file takes about 40 (!) seconds to load (with rare and irregular exceptions, when it loads fast via https on occasion).
Chrome's network monitor shows that for all of those 40 seconds the image is being slowly but steadily downloaded at low speed, i.e. there is nothing like waiting for a server response.
Here's the screencast showing IE and Chrome side-by-side loading the same image:
https://www.youtube.com/watch?v=M4cUuhG1YuM
From time to time the issue disappears for a few minutes or an hour, but then re-appears, without me doing anything on my side.
Same behavior is observed on at least one other computer - the one of my colleague (different ISP, different location).
Needless to say, I'm testing in a clean environment: cache cleared; extensions, firewall, and antivirus disabled; connection verified and measured; etc.
No Chrome issue whatsoever with any other website, be it http or https.
The hosting provider is as yet unable to troubleshoot on their end, but they're still trying to help (it takes some time). They tried disabling mod_deflate, re-installing SSL, and disabling caching rules, but to no avail.
The same issue was observed once before, about 2 months ago. That time I asked the hosting provider to disable SSL completely, just to be able to work on my website content. When they re-enabled SSL in less than a day, the issue was gone; but now it has re-appeared, and there is no clue as to what is going on.
The bottom line:
the issue appears only in Chrome, only with this one site, and only via https
changing only the browser solves the issue
changing only the protocol (https to http) solves the issue
I honestly tried to google anything similar, but failed.
I would appreciate it if you tried the link above in an incognito Chrome window and reported the load/refresh time; any ideas are, of course, more than welcome.

websocket receive buffer in Chrome

I have an application in which I open a websocket from a browser (Chrome, in my case) to a server, then I start sending messages from the server side to the browser. What I am finding is that when I send messages too quickly from the server, messages start getting buffered up on the browser side. This means that the browser "falls behind" and ends up processing messages sent long ago by the server, which in my application is undesirable.
I have eliminated the following possible candidates for where this buffering is happening:
The server. I can kill the server process entirely and see that messages continue to be received by my JavaScript code for several minutes, so the buffering is not happening inside the server process.
The network. I can reproduce the same issue when running the server on the same machine as my web browser, and the amount of data that I am sending is far below the bandwidth constraints for a TCP connection to localhost.
This leaves the browser. Is there any way I can (a) determine the size of the buffer Chrome is maintaining for my websocket, or (b) reduce the size of this buffer and cause Chrome to drop frames?
(a) Chrome buffers around 128KB per WebSocket connection. The amount of buffered data is not visible to the application.
(b) Chrome will never intentionally drop frames (this would violate the standard).
When the processing done by JavaScript is trivial, Chrome can handle over 50 MB per second over a WebSocket, so it sounds like the processing you are doing is non-trivial. You can drop messages that are too old in the onmessage handler (but please bear in mind that the clock on the client may be out of sync with the clock on the server).
If the main thread of the browser is always busy, even dropping messages may not be enough to keep up. I recommend the "Performance" tab in Chrome Devtools as a good way to see where your application is spending its time.
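As a sketch of the drop-old-messages approach suggested above, assuming the server stamps each message with a sendTime field in epoch milliseconds (the field name, the processMessage helper, and the 500 ms threshold are made up for illustration):

    // Sketch: skip messages that sat in the buffer too long. "ws" is an
    // already-open WebSocket and processMessage is the application's
    // (non-trivial) handler; both are assumed. Client and server clocks
    // may disagree, so treat the comparison as approximate.
    const MAX_AGE_MS = 500;

    ws.onmessage = (event) => {
      const msg = JSON.parse(event.data);
      if (Date.now() - msg.sendTime > MAX_AGE_MS) {
        return; // stale: skip the expensive processing entirely
      }
      processMessage(msg);
    };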

How to solve Chrome's 6 connection limit when using xhr polling

I recently found out that Chrome seems to have a connection limit of 6 (see "Chrome hangs after certain amount of data transfered - waiting for available socket"). Unfortunately, I found this out the hard way, by getting a "waiting for available sockets" message after loading up too many tabs (7).
I know it is Chrome, since another Chrome user (i.e. another browser session) loads the web page perfectly fine on the same computer at the same time (I have multiple Chrome users open on my computer). So it is not the server in any way.
I believe this is because, in socket.io (which I am using for notifications), I am xhr-polling, which forces Chrome to wait until it can grab a socket from one of those connections before it can process the page.
What is the solution to this?
I have thought of a couple of solutions:
Make the xhr-polling window smaller. This increases the number of connections in the browser and node.js, but means the page won't stall.
Use websockets. I am unsure whether websockets are immune to this problem.
Make connections inactive on tabs that are not focused. Though it seems other sites don't have to do that...
Use some kind of connection sharing (see the sketch below). Considering that Chrome isolates websockets and xhr requests per tab, I find it difficult to understand how that would work.
As an added point: the reason I have not gone with websockets from the start is that I use Cloudflare. But if this is the way to solve it, then so be it.
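For reference, here is a minimal sketch of what I understand connection sharing to look like: a SharedWorker owns the single WebSocket and relays its messages to each tab. The URL and file name are placeholders, and this assumes Chrome's SharedWorker support:

    // shared-socket.js: one WebSocket owned by a SharedWorker, shared by
    // every tab of the same origin. The URL is a placeholder.
    const ws = new WebSocket('wss://example.com/socket');
    const ports = [];

    onconnect = (e) => {
      const port = e.ports[0];
      ports.push(port);
      port.start();
    };

    ws.onmessage = (event) => {
      // fan each message out to every connected tab
      ports.forEach((port) => port.postMessage(event.data));
    };

    // In each tab:
    //   const worker = new SharedWorker('shared-socket.js');
    //   worker.port.onmessage = (e) => console.log('message:', e.data);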
Use a real WebSocket rather than XHR polling. WebSocket connections do not count toward the HTTP connection limit to the same origin.
There is a separate global limit on how many WebSocket connections can be created, but it is a high number (200 in Firefox; I'm not sure what it is exactly in Chrome).
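If you stay with socket.io, a minimal sketch of forcing the client onto the websocket transport, so it never falls back to xhr-polling (this assumes a socket.io version that accepts a transports option; the URL and event name are placeholders):

    // Sketch: restrict the socket.io client to the websocket transport so
    // notifications stop consuming Chrome's per-origin HTTP connection pool.
    const socket = io('https://example.com', {
      transports: ['websocket'] // skip the xhr-polling fallback entirely
    });

    socket.on('notification', (data) => {
      console.log('got notification:', data);
    });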
Here are some references on this topic:
Max parallel http connections in a browser?
Maximum concurrent connections to the same domain for browsers
HTTP simultaneous connections per host limit… are per tab, browser instance or global?

Application Pool slow to start with Google Chrome

This is one that is confusing me completely.
This issue doesn't happen with IE, Firefox, Safari ONLY with Google Chrome. (I haven't tested other browsers).
Basically, I run my own web server, IIS 7.5, with a number of development websites on it which will be published and used in production from the same server. As there are a number of websites, I must use dynamic idle times for application pools, since resources are restricted.
Usually this wouldn't be an issue, and it is seemingly the way to do things based on Microsoft's best practices; however, there seems to be a problem with Chrome loading pages once the application pool has timed out/gone idle.
Now, I understand that it takes time for an application pool to restart, which they normally do within seconds, serving content not long after; but with Chrome the application pool takes close to a minute to start.
This doesn't happen on the first load of the website, however; it only happens with subsequent loads within the same browser/session.
As I said, this does not happen with IE, Firefox, or Safari (the other browsers I have tested); with those, the application pool restarts almost immediately.
I had thought that maybe this was a server-side issue, but since the other browsers work fine I can only figure that Chrome is at fault. Yet I still want to make sure it isn't actually a server-side issue.
Any one have any ideas?
I've just realized I posted this on Stack Overflow when it should be on Server Fault.
Sorry.
Anyway, something I wrote in the question prompted me to investigate further, and I found that this doesn't seem to be an application pool issue (although it could be) but rather a PHP-CGI issue. It might even be localized to my own machine.

newer chrome fails on websocket using haproxy server

test site http://socket.trailsandtribulations.net
Firefox: v15 works fine. (However, with lots of traffic and a slow net, Firefox will frequently fail as well, quietly.)
Chrome: previously worked, but v21 gets: Error during WebSocket handshake: 'Connection' header value is not 'Upgrade'
However, if Chrome is running locally, it works fine! It breaks with my client in Thailand and my server in Germany. Again, Firefox works correctly all the time, as did earlier versions of Chrome.
I am using haproxy to split traffic between websockets served via node.js and HTML served via nginx.
Has something changed that makes this solution not work?
The haproxy.cfg is now displayed at the test site link - that way it's always current.
It looks from your message that the error is reported by Chrome, while initially I understood it was reported by the server when accessed by Chrome. As a workaround, I think that if you replace "option httpclose" with "option http-server-close", the issue may disappear. You also need to remove all "option forceclose" statements. If there is only "option http-server-close", haproxy will not touch the Connection header in the response path, which should make the browser happy. However, keep in mind that there is still a bug causing the error to be displayed, and it should be reported to the software authors.
BTW, your timeouts are far too large; you'll end up with many dead connections at the end of the day, which does not make sense. If you use a recent enough haproxy, you can use "timeout tunnel" to set the WS timeout without having to deal with a large HTTP timeout. But even then, 1 day is far too long for TCP connections. Some of your users will be on smartphones, where a TCP connection cannot live more than a few minutes before a handover happens.
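As a sketch, the relevant fragment of haproxy.cfg after those changes might look like this (the timeout values are illustrative, and "timeout tunnel" needs a recent enough haproxy):

    # Sketch of the suggested haproxy.cfg changes; the timeout values are
    # illustrative, not recommendations from the original answer.
    defaults
        mode http
        option http-server-close   # replaces "option httpclose" so haproxy
                                   # leaves the Connection header alone on
                                   # the response path
        # make sure no "option forceclose" remains anywhere
        timeout client  30s
        timeout server  30s
        timeout tunnel  1h         # governs established WebSocket traffic
                                   # without inflating the HTTP timeouts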
Firefox uses no-cache when requesting the websocket upgrade; this version of Chrome does not. See http://code.google.com/p/chromium/issues/detail?id=148908&thanks=148908&ts=1347523876
Evidently this is necessary for some proxies.
Also see https://github.com/sockjs/sockjs-node/pull/88 for a related issue.