Memory leakage in Chrome using Shared Worker?

I have a web page that starts an HTML5 SharedWorker script. Chrome's memory usage increases every time this page is reloaded (hitting F5).
The worker script is very simple. Every second (using setInterval) a message is sent to the connected port.
It seems like the worker process is terminated and restarted each time I hit F5. This is what I'd expect, since the worker isn't actually shared by more than one 'document'. However, I cannot figure out why memory usage increases on every refresh.
Does anybody know why this is happening?
The fact that memory increases on every reload makes me think I cannot use shared workers in Chrome at all. Has anyone been able to do so without running into memory problems?
UPDATE
This is the hosting HTML:
<div id="output"></div>
<script type="text/javascript" src="/scripts/jquery-1.4.4.js"></script>
<script type="text/javascript">
$(function () {
    var worker = new SharedWorker("/scripts/worker.js");
    worker.port.onmessage = function (e) {
        $("#output").append($("<div></div>").text(e.data));
    };
    worker.port.start();
});
</script>
...and this is worker.js:
var list = [];

setInterval(function () {
    for (var i = 0; i < list.length; ++i) {
        list[i].postMessage("#connections = " + list.length);
    }
}, 1000);

onconnect = function (event) {
    list.push(event.ports[0]);
};
The hosting page starts/connects to a shared worker and outputs whatever is received from it.
The worker code keeps a list of connected ports and sends a message to them all once a second.
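As an aside: because SharedWorker ports have no disconnect event, a long-lived worker that serves several pages will keep dead ports in that list forever (here the list resets anyway, since the worker is restarted on each reload). A common workaround, sketched below with a hypothetical "bye" message, is to have each page tell the worker it is going away:
// In the hosting page, inside the same $(function () { ... }) block as above:
window.addEventListener("beforeunload", function () {
    worker.port.postMessage("bye");
});

// In worker.js, drop the port again when that message arrives:
onconnect = function (event) {
    var port = event.ports[0];
    list.push(port);
    port.onmessage = function (e) { // setting onmessage also starts this end of the port
        if (e.data === "bye") {
            var i = list.indexOf(port);
            if (i !== -1) {
                list.splice(i, 1);
            }
        }
    };
};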
This is simple stuff, yet each time the hosting page is reloaded in Chrome, the memory footprint for that tab increases.
Chrome's memory usage after just a couple of refreshes is already noticeably higher, and after refreshing some more I'm reaching 250 MB...
I'm running out of ideas, thinking this must be a bug in Chrome. Can anyone give me some sort of pointer?
UPDATE 2
Disabling my AdBlock extension seemed to fix the problem.
So I was happy for a little while, but it turned out that memory is still being leaked. With AdBlock disabled it just leaks quite a bit less per page refresh.

It seems the Chromium team has resolved this problem. I cannot reproduce it anymore.

Related

WebSocket does not connect to a specific host until Chromium is restarted

We are trying to debug an issue with Chromium (it happens in Chrome, Edge, Brave) where it sometimes gets into a state where it does not open a WebSocket to a specific host.
We can see in the console logs that it is trying to open the socket, but the connection never opens and it fails with a 1006 error. The same happens in new tabs and in new windows. The behaviour disappears after the browser is restarted or when an incognito tab is used.
There are no HTTP upgrade requests on the server and also the connection does not show up as WebSocket in chrome dev tools. We do not have much else to go on. Any suggestions on what we could try to debug the problem?
I tried to test the WebSocket with MS Edge (Chromium) Version 83.0.478.58 and Google Chrome Version 83.0.4103.116.
In my test, both Chromium-based browsers work fine without the 1006 error.
Here is the test code:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>WebSocket Test</title>
<script type="text/javascript">
    var wsUri = "wss://echo.websocket.org/";
    var output;
    var websocket;

    function init() {
        output = document.getElementById("output");
        testWebSocket();
    }

    function testWebSocket() {
        websocket = new WebSocket(wsUri);
        websocket.onopen = function (evt) { onOpen(evt); };
        websocket.onclose = function (evt) { onClose(evt); };
        websocket.onmessage = function (evt) { onMessage(evt); };
        websocket.onerror = function (evt) { onError(evt); };
    }

    function onOpen(evt) {
        writeToScreen("CONNECTED");
        doSend("WebSocket rocks");
    }

    function onClose(evt) {
        writeToScreen("DISCONNECTED");
    }

    function onMessage(evt) {
        writeToScreen('<span style="color: blue;">RESPONSE: ' + evt.data + '</span>');
        websocket.close();
    }

    function onError(evt) {
        writeToScreen('<span style="color: red;">ERROR:</span> ' + evt.data);
    }

    function doSend(message) {
        writeToScreen("SENT: " + message);
        websocket.send(message);
    }

    function writeToScreen(message) {
        var pre = document.createElement("p");
        pre.style.wordWrap = "break-word";
        pre.innerHTML = message;
        output.appendChild(pre);
    }

    window.addEventListener("load", init, false);
</script>
</head>
<body>
<h2>WebSocket Test</h2>
<div id="output"></div>
</body>
</html>
Reference:
Web socket echo test
Try to check the security settings of the browsers and also confirm that you are trying to connect using a secure connection.
I found that 1006 is a special code that means the connection was closed abnormally (locally) by the browser implementation.
I suggest checking WebSocket.onerror(evt) to get more details about the error.
Helpful thread link:
getting the reason why WebSockets closed with close code 1006
If there is more information you can provide, it may help to narrow down the issue.
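As a small, hedged addition to the test page above: the close event (rather than the error event) is where the browser reports the numeric code, so logging its fields in onClose usually tells you more than onerror does:
function onClose(evt) {
    // CloseEvent carries the code (e.g. 1006), an optional reason string and wasClean
    writeToScreen("DISCONNECTED: code=" + evt.code +
        ", reason=" + (evt.reason || "(none)") +
        ", wasClean=" + evt.wasClean);
}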
I observed exactly the same symptom (not sure about the error code) in Brave (but not in Chrome) during 2020. It was a constant issue, but it has hardly happened at all since January 2021, except that last week (April 2021) it happened again (in Brave).
Did anyone else notice the issue still being present from time to time? Or maybe it's a new bug, similar but rarer.
Exactly the same behaviour here: the socket doesn't reconnect except in incognito or after a browser restart.
We have been facing this exact issue, where WebSocket upgrade requests never reach the server even though the network inspect tab shows that the request has been fired. No amount of refreshing or new tabs helps until we switch over to Chrome. The issue is intermittent and has become impossible to debug, but nonetheless our users keep reporting infinite loading bars due to the unconnected WebSocket.
I went through capturing network logs using chrome://net-export and viewing them in the net-export viewer, and the only time I could capture logs for the issue I noticed the browser trying only an IPv6 address and never the IPv4 address (when our servers don't even have an IPv6 address).
Would it be prudent to engage Brave or Chromium team in this? Anyone ever found concrete repro steps for this?

Detecting rendering events / layout changes (or any way to know when the page has stopped "changing")

I'm using Puppeteer (PuppeteerSharp actually, but the API is the same) to take a screenshot of a web page from my application.
The problem is that the page does several layout changes via JavaScript after the page has loaded, so a few seconds pass before seeing the "final" rendered version of the page.
At the moment I'm just waiting a "safe" amount of seconds before taking the screenshot, but this is obviously not a good approach, since a temporary performance slowdown on the machine can result in an incomplete rendering.
Since Puppeteer uses Chromium in the background, is there a way to intercept Chromium's layout/rendering events (like you can do in the DevTools console in Chrome)? Or, really, ANY other way to know when the page has stopped "changing" (visually, I mean)?
EDIT, some more info: The content is dynamic, so I don't know beforehand what it will draw and how. Basically, it's a framework that draws different charts/tables/images/etc. (not open source, unfortunately). By testing with the "performance" tool in the Chrome DevTools, however, I noticed that after the page has finished rendering all activity in the timeline stops, so if I could access that information it would be great. Unfortunately, the only way to do that in Puppeteer (that I can see) is the "Tracing" feature, but that doesn't operate in real time: it dumps the trace to a file, and the buffer is way too big to be of any use (the file is still 0 bytes after my page has already finished rendering; it only flushes to disk when I call "stopTracing"). What I would need is access to the Tracing feature of Puppeteer in real time, for example via events or an in-memory stream, but that doesn't seem to be supported by the API. Any way around this?
You should use page.waitForSelector() to wait for the dynamic elements to finish rendering.
There must be a pattern that can be identified in terms of the content being generated.
Keep in mind that you can use flexible CSS Selectors to match elements or attributes without knowing their exact values.
await page.goto('https://example.com/', { waitUntil: 'networkidle0' });

await Promise.all([
    page.waitForSelector('[class^="chart-"]'),   // Class begins with 'chart-'
    page.waitForSelector('[name$="-image"]'),    // Name ends with '-image'
    page.waitForSelector('table:nth-of-type(5)') // Fifth table
]);
This can be useful when waiting for a certain pattern to exist in the DOM.
If page.waitForSelector() is not powerful enough to meet your needs, you can use page.waitForXPath():
await page.waitForXPath( '//div[contains(text(), "complete")]' ); // Div contains 'complete'
Alternatively, you can plug the MutationObserver interface into page.evaluate() to watch for changes being made to the DOM tree. When the changes have stopped over a period of time, you can resume your program.
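For example, a minimal sketch of that idea (the quiet/maximum wait times are arbitrary values chosen for illustration, not anything required by Puppeteer):
await page.evaluate(() => new Promise((resolve) => {
    const quietMs = 1000;    // how long the DOM must stay unchanged before we consider it settled
    const maxWaitMs = 15000; // hard upper bound so the promise always resolves

    let quietTimer = setTimeout(finish, quietMs);
    const hardStop = setTimeout(finish, maxWaitMs);

    const observer = new MutationObserver(() => {
        clearTimeout(quietTimer);                 // activity detected: restart the quiet timer
        quietTimer = setTimeout(finish, quietMs);
    });
    observer.observe(document.documentElement, { childList: true, subtree: true, attributes: true });

    function finish() {
        observer.disconnect();
        clearTimeout(hardStop);
        clearTimeout(quietTimer);
        resolve();
    }
}));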
After some trial and error, I settled for this solution:
string traceFile = IOHelper.GetTemporaryFile("txt");
long lastSize = 0;
int cyclesWithoutTraceActivity = 0;
int totalCycles = 0;

while (cyclesWithoutTraceActivity < 4 && totalCycles < 25)
{
    File.Create(traceFile).Close();
    await page.Tracing.StartAsync(new TracingOptions()
    {
        Categories = new List<string>() { "devtools.timeline" },
        Path = traceFile,
    });
    Thread.Sleep(500);
    await page.Tracing.StopAsync();

    long curSize = new FileInfo(traceFile).Length;
    if (Math.Abs(lastSize - curSize) > 5)
    {
        logger.Debug("Trace activity detected, waiting...");
        cyclesWithoutTraceActivity = 0;
    }
    else
    {
        logger.Debug("No trace activity detected, increasing idle counter...");
        cyclesWithoutTraceActivity++;
    }
    lastSize = curSize;
    totalCycles++;
}
File.Delete(traceFile);

if (totalCycles == 25)
{
    logger.Warn($"WARNING: page did not stabilize within allotted time limit (15 seconds). Rendering page in current state, might be incomplete");
}
Basically what I do here is this: I run Chromium's tracing at 500 ms intervals, and each time I compare the size of the previous trace file to the size of the current one. Any significant change in size is interpreted as activity on the timeline and resets the idle counter. If enough time passes without significant changes, I assume the page has finished rendering.

Note that the trace file always starts with some debugging info (even if the timeline itself has no activity to report), which is why I don't do an exact size comparison but instead check whether the file lengths are more than 5 bytes apart: since the initial debug info contains some counters and IDs that vary over time, I allow for a little variance to account for this.

MS Access too many client tasks

I was getting this error earlier today on my phone when I was trying to access a website I have been developing over the past year or so. I wasn't able to save the exact error message, but it wasn't returning any query results and would give me an error stating there were 'too many client tasks'.
Google searching doesn't help much to resolve the issue... am I supposed to be closing client connections to my database? I thought Access did that on its own. There's no way there are ever more than 4-5 people on the site at once, so I'm not sure what would be causing this.
I do have one sneaking suspicion... there is an auto sign-out feature that closes the tab after 10 minutes. The code looks like this:
var idleTime = 0;

$(document).ready(function () {
    // Increment the idle time counter every minute.
    var idleInterval = setInterval(timerIncrement, 60000); // 1 minute

    // Zero the idle timer on mouse movement.
    $(this).mousemove(function (e) {
        idleTime = 0;
    });
    $(this).keypress(function (e) {
        idleTime = 0;
    });
});

function timerIncrement() {
    idleTime = idleTime + 1;
    if (idleTime > 9) { // 10 minutes
        document.getElementById('logoutbutton').click();
        window.open('', '_self', ''); // bug fix
        window.close();
    }
}
Could this be the culprit?
Any help would be great. I'm drawing a blank on this one.
Access is a desktop database and not well suited as a web-oriented database. (Many would state that it is a very poor choice.)
Anyway, a quick Google search reveals this page, which suggests that you need to explicitly close the database connection, and release any resources, as soon as you can. The page refers to ASP, but you haven't told us which server-side technology you are using; the same principle applies regardless.
Many more web-capable databases, such as MySQL, will implicitly close connections and release resources when they are no longer needed, or when the (server-side) script ends.
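To illustrate what "explicitly close and release" looks like, here is a rough sketch for classic ASP using JScript (VBScript would look much the same); the provider string, file path and query are placeholders, since the question doesn't say which server-side technology is in use:
<%@ Language="JScript" %>
<%
// Open, use and explicitly release the ADO objects within the same request
var conn = Server.CreateObject("ADODB.Connection");
conn.Open("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + Server.MapPath("data/site.accdb"));

var rs = conn.Execute("SELECT * FROM SomeTable"); // placeholder query
// ... build the page from rs ...

rs.Close();    // close the recordset as soon as you are done with it
conn.Close();  // and explicitly close the connection instead of leaving it open
rs = null;
conn = null;
%>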

Chrome extension memory leak in chrome.extension.sendMessage()?

I'm seeing fairly massive memory leaks in long-lived pages using Chrome's chrome.extension.sendMessage().
After sending ~200k events from the Content-Script to the Background-Page as a test, chrome.Event's retained size is ~80% of the retained memory in a ~50 MB heap snapshot.
I've been trying to track down any mistakes I might be making (closing over some variable and preventing it from being GC'd), but it seems to be related to the implementation of Chrome's eventing system.
Has anyone run into anything like this, or seen memory leaks with extremely long-lived extensions with Content-Scripts that chatter a lot with a bg page?
The code on my Content-Script side:
csToBg = function (message) {
    var csToBgResponseHandler = function (response) {
        console.log("Got a response from bg");
    };
    var result = chrome.extension.sendMessage(null, message, csToBgResponseHandler);
};
And on the Background-Page side, a simple ACK function (to superstitiously avoid https://code.google.com/p/chromium/issues/detail?id=114738):
var handleIncomingCSMessage = function (message, sender, sendResponse) {
    var response = message;
    response.acked = "ACK";
    window.console.log("Got a message, ACKing to CS");
    sendResponse(response);
};
After sending ~200k messages in Chrome 23.0.1271.97 this way, the heap snapshot looks as described above.
The memory never seems to get reclaimed for the life of the page, and I'm stumped about how to fix it.
EDIT: This is a standard background page, and is not an event page.
This is probably fixed in Chrome 32.
Finally!
See http://code.google.com/p/chromium/issues/detail?id=311665 for details.
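For anyone maintaining similar code today: chrome.extension.sendMessage has since been deprecated in favour of chrome.runtime.sendMessage; a quick sketch of the equivalent pair (same handler shape, unrelated to the original leak):
// Content-Script side
chrome.runtime.sendMessage(message, function (response) {
    console.log("Got a response from bg");
});

// Background-Page side
chrome.runtime.onMessage.addListener(function (message, sender, sendResponse) {
    message.acked = "ACK";
    sendResponse(message);
});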

Run Chrome extension in the background

I'm currently creating my first Chrome extension, so far so good.
It's just a little test where I run multiple timers.
But obviously all my timers reset when I open and close the extension.
So to keep all my timers running, I would have to save them somehow when I close the extension and make them run in the background page.
When I open the extension again, those timers should be sent back to the open page.
How would you handle this?
I already have an array of all my timers; what would be the best option for me?
A background page runs at all times when the extension is enabled. You cannot see it, but it can modify other aspects of the extension, like setting the browser action badge.
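For reference, a background page is declared in the extension's manifest; a minimal manifest version 2 sketch (file names are placeholders):
{
    "manifest_version": 2,
    "name": "Timer extension",
    "version": "1.0",
    "background": { "scripts": ["background.js"], "persistent": true },
    "browser_action": { "default_popup": "popup.html" }
}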
For example, the following would set the icon badge to the number of unread items in a hypothetical service:
function getUnreadItems(callback) {
    $.ajax(..., function (data) {
        process(data);
        callback(data);
    });
}

function updateBadge() {
    getUnreadItems(function (data) {
        chrome.browserAction.setBadgeText({ text: data.unreadItems });
    });
}
Then you can make a request and schedule it so the data is retrieved and processed regularly; you can also stop the polling at any time.
var pollInterval = 1000 * 60; // 1 minute
var timerId;

function startRequest() {
    updateBadge();
    timerId = window.setTimeout(startRequest, pollInterval);
}

function stopRequest() {
    window.clearTimeout(timerId);
}
Now just start it when the background page loads. (Inline handlers such as onload='startRequest()' are blocked by the extension content security policy in newer manifest versions, so kick it off from the script instead.)
window.addEventListener('load', startRequest);
Also, HTML5 offline storage (localStorage) is useful for storing data and keeping it updated:
var data = "blah";
localStorage.myTextData = data;
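To get the timers back into the popup when it is reopened (as asked above), one sketch is to keep the array alive in the background page and read it from the popup via chrome.extension.getBackgroundPage(); the "timers" name here is just an assumed global in the background page:
// In the popup script
var bg = chrome.extension.getBackgroundPage();
console.log(bg.timers);                  // the live array kept by the background page
bg.timers.push({ started: Date.now() }); // the popup can also update it directly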