Maybe I have found a bug in socket behaviour. If mobile data is ON and Wi-Fi is connected, then after 10 minutes the SocketActivityTrigger background task fires with reason SocketClosed. This only happens when both connection types are ON. When only Wi-Fi is connected, KeepAliveExpired is the reason for the SocketActivityTrigger, and the same holds when only mobile data is ON. So my question is: is it a bug or normal behaviour that SocketClosed is the reason when both connection types are ON?
Thank you
I work on a team that is developing a browser-based video application. We have been experiencing an occasional severe degradation in the quality of our video, starting around November 2022. It happens during a call and continues until the call (the connection) is ended, and appears to be a decoding issue: the screen looks pixelated/discolored, but you retain some idea of the 'shape' of objects on the video feed.
We use Pexip, which acts as an MCU between participants, and have validated from Pexip's outbound packets that it is sending a clear video stream; however, in the browser we see the issue nonetheless. It only appears on a single participant's video stream.
The issue has only been seen in Chrome and Edge, which default to VP8, and has not been reproduced in Firefox (which defaults to H.264). This leads me to believe it is a Chromium issue.
I have struggled to find any logs that relate to the start of the issue, so I am asking whether you have any suggestions on where to look in Chrome's logs to find any indication that the issue has started, or to understand more about why it is happening.
We have tested in the Chrome/Edge/Firefox browsers, and the issue only happens in Chrome and Edge (Chromium-based). We have tested on both wireless and wired connections, and the issue shows up in both cases. We have tested on Windows and macOS, and it happens on both. We have also tested on a freshly rebooted computer with no other applications running, and the issue still appears. There is nothing interesting happening in CPU/memory/network utilization when the issue begins.
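In case it helps anyone digging into the same thing: chrome://webrtc-internals shows per-connection decode statistics while a call is running, and roughly the same counters can be polled from the page itself via RTCPeerConnection.getStats(). Below is a minimal sketch, assuming you can get hold of the application's RTCPeerConnection (called pc here; the helper name is just an illustration), that logs the inbound video decode counters so you can see whether values like framesDropped or pliCount jump at the moment the picture falls apart:

    // Hypothetical helper: poll inbound video stats so a sudden change is visible in the console.
    // "pc" is assumed to be the application's RTCPeerConnection.
    async function logInboundVideoStats(pc: RTCPeerConnection): Promise<void> {
      const report = await pc.getStats();
      report.forEach((stats: any) => {
        if (stats.type === 'inbound-rtp' && stats.kind === 'video') {
          console.log(
            `framesDecoded=${stats.framesDecoded} framesDropped=${stats.framesDropped}`,
            `keyFramesDecoded=${stats.keyFramesDecoded} pliCount=${stats.pliCount} nackCount=${stats.nackCount}`,
            `packetsLost=${stats.packetsLost} jitter=${stats.jitter}`
          );
        }
      });
    }

    // e.g. setInterval(() => logInboundVideoStats(pc), 5000);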
I have a complex web app which is working fine in desktop browsers, as well as in the Android native browser (which is part of why I got so far into this project before noticing this problem). The server setup is using the Typesafe Stack (Play/Akka/Scala), but I suspect that's not relevant to the question. Suffice it to say, it uses bog-standard transient session cookies to keep you logged in.
The problem is, in Chrome and Safari, that transient session appears to be too fragile, and very unpredictably so. In both cases, so long as I am working actively in the browser, everything is fine. But if I switch away from the browser for a while and return to it, it often loses the session cookie, forcing a re-login. Sometimes it takes an hour or two, sometimes just a few minutes -- I haven't yet been able to figure out a pattern.
Note that this doesn't involve closing the tab with my app in it, or manually closing the browser process. I would expect to be able to switch away from Chrome and come back to it using the app switcher and still have my session there; for some reason, though, it seems to be frequently and quickly losing the session cookie. This is a killer problem: users shouldn't be forced to re-login too often.
Any ideas or pointers to why these browsers might be losing their session cookies so easily? I've done lots of web development, but this is my first time seriously targeting mobile browsers, and I'm clearly missing something...
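One thing worth checking: mobile browsers are aggressive about killing background processes, and a purely transient session cookie (one with no Expires/Max-Age) may not survive that, whereas a cookie with an explicit lifetime usually does. A minimal browser-side sketch of the difference is below; "demo_session" is just an illustrative name, and in Play the equivalent change would be giving the session cookie an explicit max age in configuration, if I remember correctly.

    // Illustration only: "demo_session" is a hypothetical cookie name.
    // A transient session cookie: no Expires/Max-Age, so it lives only as long as
    // the browser considers the "session" alive, which a mobile browser may end
    // whenever it kills the process in the background.
    document.cookie = 'demo_session=abc123; path=/';

    // A persistent cookie with an explicit lifetime (here 8 hours) survives the
    // browser process being killed and restarted.
    const eightHours = 8 * 60 * 60;
    document.cookie = `demo_session=abc123; path=/; max-age=${eightHours}`;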
This is one that is confusing me completely.
The issue doesn't happen with IE, Firefox, or Safari, ONLY with Google Chrome. (I haven't tested other browsers.)
Basically, I run my own web server, IIS 7.5, and have a number of development websites on it which will be published and used in production from the same server. As there are a number of websites and resources are restricted, I have to use dynamic idle times for the application pools.
Usually this wouldn't be an issue, and it seems to be the recommended approach based on Microsoft's best practices; however, there appears to be a problem with Chrome loading pages once the application pool has timed out/gone idle.
Now, I understand that it takes time for an application pool to restart; normally it does so within seconds and serves content not long after, but with Chrome the application pool takes close to a minute to start.
This doesn't happen on the first load of the website, however; it only happens on subsequent loads within the same browser session.
As I said, this does not happen with IE, Firefox, or Safari (the other browsers I have tested); with them, the application pool restarts almost immediately.
I had thought that maybe this was a server-side issue, but since the other browsers work fine I can only conclude that Chrome is at fault. Still, I want to make sure it isn't actually a server-side issue.
Anyone have any ideas?
I've just realized I posted this on Stack Overflow when it should be on Server Fault.
Sorry.
Anyway, something I wrote in the question prompted me to investigate further, and I found that this doesn't seem to be an application pool issue (although it could be) but rather a PHP-CGI issue. It might even be localized to my own machine.
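For anyone comparing notes: one way to tell a browser-side stall apart from a genuine application pool cold start is to time the request from the page and compare the breakdown with the IIS logs. A rough sketch using fetch and the Resource Timing API follows; "probe" and the URL are just placeholders. If most of the time is spent before the request goes out, the browser is stalling; if it is spent waiting for the first byte, the server is slow.

    // Hypothetical probe: times a request and splits the delay into
    // "before the request was sent" vs. "waiting for the server to answer".
    async function probe(url: string): Promise<void> {
      const started = performance.now();
      const response = await fetch(url, { cache: 'no-store' });
      await response.text();
      const total = performance.now() - started;

      const entries = performance.getEntriesByName(response.url) as PerformanceResourceTiming[];
      const last = entries[entries.length - 1];
      const stalled = last ? last.requestStart - last.startTime : NaN;      // time before the request went out
      const serverWait = last ? last.responseStart - last.requestStart : NaN; // time the server took to answer
      console.log(
        `total ${total.toFixed(0)} ms, stalled ${stalled.toFixed(0)} ms, server wait ${serverWait.toFixed(0)} ms`
      );
    }

    // e.g. probe('/some-page-on-the-idle-app-pool');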
I have a weird problem with WebSockets and Chrome (22.0.1229.79m) (I haven't coded authentication for other browsers yet, so I can't test them). It seems that if I reload Chrome three times, there will be a huge delay in connecting to my WebSocket server. The server is not delaying the connection; I tested this by connecting to it from another PC while Chrome was delaying, and it connected perfectly.
Is there any way to fix this? It is a problem when I am switching between servers while receiving data: everything halts and is delayed, which is really bad for the user experience. I would assume this is strictly related to the Chrome browser not closing the socket...
I have also seen this delay when creating multiple WebSocket connections from the same browser tab in Chrome within a short period of time. I believe this is to address a potential security issue with WebSockets which would allow a browser to be hijacked to do port scanning inside a network. By limiting the number of WebSocket connections that can happen within a given amount of time, you greatly limit the utility of a browser as a remote port scanner. In addition, the amount of information that is returned by onclose and onerror is intentionally limited for the same reasons.
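You can see how little the browser exposes by logging what the close and error handlers actually receive; a small sketch (the URL is just a placeholder):

    const ws = new WebSocket('wss://example.test/stream');

    ws.onclose = (event: CloseEvent) => {
      // Only a numeric code, an often-empty reason string, and wasClean are exposed;
      // the browser deliberately hides lower-level failure details.
      console.log('closed', event.code, event.reason || '(no reason)', 'clean:', event.wasClean);
    };

    ws.onerror = () => {
      // The error event carries no diagnostic detail at all.
      console.log('error (no further information available)');
    };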
According to the WebSocket Draft-76 spec, WebSocket.close is supposed to do the following:
"To close the connection cleanly, a frame consisting of just a 0xFF byte followed by a 0×00 byte is sent from one peer to ask that the other peer close the connection."
But, after a few tests, I don't think that Chrome is doing anything when close is called. I'm curious if I'm doing something wrong or if it's a known bug.
I haven't noticed any issues when testing with Chrome. I haven't inspected the frames either though.
I know this topic is really old, but I noticed that Chrome is the only browser which doesn't send 0xFF 0x00 on the socket close command... instead it just closes its socket connection on the browser side, so I only notice that a Chrome user is offline when I fail to receive data from that socket. Just my two cents :)
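For anyone handling this on the server side, here is a rough sketch (Node.js, with the handshake and frame parsing omitted) of treating either the Draft-76 closing frame or a plain TCP close as "this client is gone", which matches the workaround described above. The function name and callback are just illustrations.

    import * as net from 'net';

    // Rough sketch only: "socket" is assumed to be the already-handshaked TCP
    // connection for one client on a Draft-76 style server. Either a clean
    // 0xFF 0x00 closing frame or an abrupt TCP close/error counts as "gone".
    function watchClient(socket: net.Socket, onGone: (why: string) => void): void {
      let gone = false;
      const markGone = (why: string) => {
        if (!gone) {
          gone = true;
          onGone(why);
        }
      };

      socket.on('data', (chunk: Buffer) => {
        // Look for the Draft-76 closing frame: a 0xFF byte followed by 0x00.
        for (let i = 0; i + 1 < chunk.length; i++) {
          if (chunk[i] === 0xff && chunk[i + 1] === 0x00) {
            markGone('clean Draft-76 close frame received');
            return;
          }
        }
      });

      // Some browsers (as noted above for Chrome at the time) simply drop the TCP
      // connection instead of sending the close frame, so these events are the fallback.
      socket.on('close', () => markGone('TCP connection closed without a close frame'));
      socket.on('error', () => markGone('socket error on the connection'));
    }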