websocket receive buffer in Chrome

I have an application in which I open a websocket from a browser (Chrome, in my case) to a server, then I start sending messages from the server side to the browser. What I am finding is that when I send messages too quickly from the server, messages start getting buffered up on the browser side. This means that the browser "falls behind" and ends up processing messages sent long ago by the server, which in my application is undesirable.
I have eliminated the following possible candidates for where this buffering is happening:
The server. I can kill the server process entirely and I see that messages continue to be received by my javascript code for several minutes, so the buffering is not happening inside the server process.
The network. I can reproduce the same issue when running the server on the same machine as my web browser, and the amount of data that I am sending is far below the bandwidth constraints for a TCP connection to localhost.
This leaves the browser. Is there any way I can (a) determine the size of the buffer chrome is maintaining for my websocket, or (b) reduce the size of this buffer and cause chrome to drop frames?

(a) Chrome buffers around 128KB per WebSocket connection. The amount of buffered data is not visible to the application.
(b) Chrome will never intentionally drop frames (this would violate the standard).
When the processing done by Javascript is trivial, Chrome can handle over 50 MB per second over a WebSocket. So it sounds like the processing you are doing is non-trivial. You can drop messages that are too old in the onmessage handler (but please bear in mind that the clock on the client may be out-of-sync with the clock on the server).
If the main thread of the browser is always busy, even dropping messages may not be enough to keep up. I recommend the "Performance" tab in Chrome Devtools as a good way to see where your application is spending its time.
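
By way of illustration, here is a minimal sketch of the drop-stale-messages approach suggested above. It assumes the server stamps each JSON message with a sentAt field in milliseconds since the epoch; the field name, threshold and URL are placeholders, and clock skew between client and server is ignored here (exactly the caveat noted above).

    // Minimal sketch: discard messages older than MAX_AGE_MS instead of
    // doing expensive processing on them. Assumes each JSON message
    // carries a server-side "sentAt" timestamp (ms since epoch); the
    // field name, threshold and URL are placeholders.
    const MAX_AGE_MS = 500;
    const ws = new WebSocket('ws://example.com/feed');

    ws.onmessage = (event) => {
      const msg = JSON.parse(event.data);
      if (Date.now() - msg.sentAt > MAX_AGE_MS) {
        return; // too old - drop it
      }
      handleMessage(msg); // the (non-trivial) application work
    };

    function handleMessage(msg) {
      // ... expensive processing goes here ...
    }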

Related

Chrome ignores delayed HTTP response

There is a web page where, after a POST request is submitted, many database transactions take place, lasting a significant amount of time.
When the user clicks to post the data, a JavaScript spinning wheel is shown.
Users (of Chrome) complain that the response never appears and that the wheel keeps spinning forever.
I would expect a connection timeout to occur after 300 seconds, but wouldn't that produce an error page in Chrome instead of just keeping the previous view?
Additionally, if this is indeed a connection timeout, is there any way from the server side to keep Chrome waiting beyond 300 seconds?
Thank you for your time!

Chrome Queueing Requests

Chrome Timing View
The image above shows that Chrome spends most of the time queueing the request. I am trying to figure out why that is happening so I can minimize it.
According to the Chrome developer documentation:
A request being queued indicates that:
1. The request was postponed by the rendering engine because it's considered lower priority than critical resources (such as scripts/styles). This often happens with images.
2. The request was put on hold to wait for an unavailable TCP socket that's about to free up.
3. The request was put on hold because the browser only allows six TCP connections per origin on HTTP 1.
4. Time spent making disk cache entries (typically very quick).
Number 3 seems the most likely problem according to the Chrome developer documentation, but I know that only one request is going out at a time, so that can't be it. I don't think it is number 1 either, because the performance monitor doesn't show any lag from rendering. Maybe it is 2 or 4, but I don't know how to test that.
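
One way to get concrete numbers out of the page itself, rather than eyeballing the DevTools waterfall, is the Resource Timing API. The sketch below is only an approximation: Chrome's "Queueing"/"Stalled" bars do not map one-to-one onto these fields, but a large gap between fetchStart and requestStart that is not explained by DNS or connection setup usually corresponds to stalled/queueing time.

    // Rough sketch: break down the time each resource spent before its
    // request was actually sent. Note that cross-origin resources report
    // zeros for most of these fields unless the server sends a
    // Timing-Allow-Origin header.
    for (const e of performance.getEntriesByType('resource')) {
      const dns = e.domainLookupEnd - e.domainLookupStart;
      const connect = e.connectEnd - e.connectStart;
      const stalled = e.requestStart - e.fetchStart - dns - connect;
      const ttfb = e.responseStart - e.requestStart;
      console.log(e.name, {
        dnsMs: dns.toFixed(1),
        connectMs: connect.toFixed(1),
        stalledMs: Math.max(0, stalled).toFixed(1), // approx. queueing/stall time
        ttfbMs: ttfb.toFixed(1),
      });
    }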
Chrome Performance Monitor
I've included a picture of the performance monitor that shows these long tasks where something is happening in the system. These are also a mystery to me and seem related.
Any help is greatly appreciated!
Edit: You can disable the disk cache while DevTools is open; I tried that and it didn't fix the problem.

How to solve Chrome's 6 connection limit when using xhr polling

I recently found out that Chrome seems to have a connection limit of 6 (Chrome hangs after certain amount of data transfered - waiting for available socket). Unfortunately I found this out the hard way, by getting a "waiting for available sockets" message after loading up too many tabs (7).
I know it is Chrome, since another Chrome user (i.e. another browser session) loads the web page perfectly fine on the same computer at the same time (I have multiple Chrome users open on my computer). So it is not the server in any way.
I believe this is because, in socket.io (which I am using for notifications), I am using xhr-polling, which forces Chrome to wait until it can grab a socket from one of those connections before it can load the page.
What is the solution to this?
I have thought of a couple of solutions:
Make the xhr-polling window smaller; this increases the number of connections hitting the browser and node.js, but should mean the page won't stall.
Use websockets. I am unsure whether websockets are immune to this problem.
Make connections inactive on tabs that are not focused, though it seems other sites don't have to do that...
Use some kind of connection sharing, although given that Chrome isolates websockets and xhr requests per tab, I find it hard to see how that would work.
As an added point: the reason I have not gone with websockets from the start is because I use cloudflare. But if this is the way to solve it then: so be it.
Use a real WebSocket rather than XHR polling. WebSocket connections do not count toward the HTTP connection limit to the same origin.
There is a separate global limit on how many WebSocket connections can be created, but it is a high number (200 in Firefox; I'm not sure exactly what it is in Chrome).
Here are some references on this topic:
Max parallel http connections in a browser?
Maximum concurrent connections to the same domain for browsers
HTTP simultaneous connections per host limit… are per tab, browser instance or global?
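
For what it's worth, forcing the WebSocket transport on the socket.io client is roughly this simple. Treat it as a sketch: the transports option below is the one used by recent socket.io clients, and older 0.9.x releases configure transports differently (on the server).

    // Sketch: force socket.io to use a single WebSocket instead of falling
    // back to xhr-polling, so notifications don't tie up one of the six
    // per-origin HTTP connections. URL and event names are placeholders.
    const socket = io('https://example.com', {
      transports: ['websocket'],
    });

    socket.on('connect', () => {
      console.log('connected over', socket.io.engine.transport.name);
    });

    socket.on('notification', (data) => {
      // handle the pushed notification
      console.log('notification:', data);
    });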

Chromium: is communicating with the page faster than communicating with a worker?

Suppose I've got the following parts in my system: Storage (S) and a number of Clients (C). The clients are separate Web Workers and I'm actually trying to emulate something like shared memory for them.
Right now I've got just one Client and it's communicating with the Storage pretty intensively. For the sake of testing it is spinning in a for-loop, requesting some information from the Storage and processing it (processing is really cheap).
It turns out that this is slow. I've checked the process list and noticed chrome --type=renderer eating lots of CPU, so I thought it might be redrawing the page or doing some kind of DOM processing after each message, since the Storage is running in the page context. So I decided to move the Storage to a separate Worker so that the page is totally idle now, and ended up with even worse performance: exactly twice as slow (I've tried a Shared Worker and a Dedicated Worker with explicit MessageChannels, with the same results).
So, here is my question: why is sending a message from one Worker to another exactly twice as slow as sending a message from a Worker to the page? Are the messages routed through the page? Is it "by design" or a bug? I was going to check the source code, but I'm afraid it's a bit too complex and, probably, someone is already familiar with this part of Chromium internals…
P.S. I'm testing in Chrome 27.0.1453.93 on Linux and Chrome 28.0.1500.20 on Windows.
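
For reference, the kind of round-trip test described above can be reproduced with a small ping-pong benchmark like the sketch below. The Blob-based worker and the iteration count are just placeholders to keep the example self-contained; it is not the original test code. Pointing the same worker body at another Worker through a MessageChannel gives the worker-to-worker variant of the measurement.

    // Sketch of a ping-pong round-trip benchmark between the page and a
    // dedicated Worker acting as the "Storage".
    const workerSrc = `
      self.onmessage = (e) => {
        // "Storage": reply immediately with a cheap payload
        self.postMessage(e.data + 1);
      };
    `;
    const worker = new Worker(
      URL.createObjectURL(new Blob([workerSrc], { type: 'text/javascript' }))
    );

    const ITERATIONS = 100000;
    let count = 0;
    const start = performance.now();

    worker.onmessage = () => {
      if (++count < ITERATIONS) {
        worker.postMessage(count);
      } else {
        const elapsed = performance.now() - start;
        console.log(ITERATIONS + ' round trips in ' + elapsed.toFixed(0) +
                    ' ms (' + (elapsed / ITERATIONS * 1000).toFixed(1) + ' µs each)');
      }
    };

    worker.postMessage(0); // kick off the loop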

Chrome websocket connection delay

I have a weird problem with websockets and Chrome (22.0.1229.79m) (I haven't coded authentication for other browsers yet, so I can't test them). It seems that if I reload Chrome three times, there is a huge delay in connecting to my websocket server. The server is not delaying the connection; I tested this by connecting to it with another PC while Chrome was delaying, and it connected perfectly.
Is there any way to fix this? It is a problem when I am switching between servers that the page receives data from: the connection halts and is delayed, which is really bad for the user experience. I would assume this is strictly related to the Chrome browser not closing the socket...
I have also seen this delay when creating multiple WebSocket connections from the same browser tab in Chrome within a short period of time. I believe this is to address a potential security issue with WebSockets which would allow a browser to be hijacked to do port scanning inside a network. By limiting the number of WebSocket connections that can happen within a given amount of time, you greatly limit the utility of a browser as a remote port scanner. In addition, the amount of information that is returned by onclose and onerror is intentionally limited for the same reasons.
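
If the delay really is Chrome throttling rapid successive connections, the usual mitigation on the application side is to reuse a single connection where possible and to space out reconnects when you do have to switch servers, rather than opening many sockets in quick succession. A rough sketch, with all URLs and delays as placeholders:

    // Sketch: reconnect with exponential backoff instead of opening many
    // sockets in quick succession, which is what can trigger Chrome's
    // connection throttling.
    let socket = null;

    function connect(url, attempt = 0) {
      socket = new WebSocket(url);
      socket.onopen = () => console.log('connected to', url);
      socket.onclose = () => {
        // back off: 250 ms, 500 ms, 1 s, ... capped at 10 s
        const delay = Math.min(250 * 2 ** attempt, 10000);
        setTimeout(() => connect(url, attempt + 1), delay);
      };
    }

    connect('ws://server-a.example.com/feed');

    // When switching servers: detach the auto-reconnect handler, close
    // the old socket, then open the new one.
    function switchServer(newUrl) {
      socket.onclose = null;
      socket.close();
      connect(newUrl);
    }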