I'm seeing fairly massive memory leaks in long-lived pages using Chrome's chrome.extension.sendMessage().
After sending ~200k events from the Content-Script to the Background-Page as a test, chrome.Event's retained size is ~80% of the retained memory in a ~50MB heap snapshot.
I've been trying to track down any mistakes I might be making (closing over some variable and preventing it from being GC'd), but it seems to be related to the implementation of Chrome's eventing system.
Has anyone run into anything like this, or seen memory leaks in extremely long-lived extensions whose Content-Scripts chatter a lot with a bg page?
The code on my Content-Script side:
csToBg = function(message) {
    var csToBgResponseHandler = function(response) {
        console.log("Got a response from bg");
    };
    var result = chrome.extension.sendMessage(null, message, csToBgResponseHandler);
};
And on the Background-Page side, a simple ACK function (to superstitiously avoid https://code.google.com/p/chromium/issues/detail?id=114738):
var handleIncomingCSMessage = function(message, sender, sendResponse) {
    var response = message;
    response.acked = "ACK";
    window.console.log("Got a message, ACKing to CS");
    sendResponse(response);
};
After sending ~200k messages in Chrome 23.0.1271.97 this way, the heap snapshot shows the same picture, with chrome.Event dominating the retained memory. The memory never seems to get reclaimed for the life of the page, and I'm stumped about how to fix it.
EDIT: This is a standard background page, and is not an event page.
This is probably fixed in Chrome 32.
Finally!
See http://code.google.com/p/chromium/issues/detail?id=311665 for details.
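If you're stuck on an older build, one possible mitigation (an untested sketch on my part, using the long-lived connection API; on Chrome 23 the equivalent calls live under chrome.extension rather than chrome.runtime) is to route all traffic over a single port, so no per-message response callback is created:
// Content-Script: open one long-lived port instead of calling sendMessage per event
var port = chrome.runtime.connect({name: "cs-to-bg"});
port.onMessage.addListener(function(response) {
    console.log("Got a response from bg");
});
var csToBg = function(message) {
    port.postMessage(message);
};

// Background-Page: ACK over the same port
chrome.runtime.onConnect.addListener(function(port) {
    port.onMessage.addListener(function(message) {
        message.acked = "ACK";
        port.postMessage(message);
    });
});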
I am trying to understand heap snapshots in Chrome. I am debugging memory leaks in my JavaScript application and was able to find out that most of the retained memory is used by Internal Node -> Pending activities -> C++ roots. What does that mean?
My application uses MediaRecorder for recording a canvas, and even though it is more complex, the recording can be simplified like this:
const canvas = document.querySelector('canvas')
const stream = canvas.captureStream(1)
const mediaRecorder = new MediaRecorder(stream)
mediaRecorder.addEventListener('dataavailable', processEvent) // pass the handler itself, not its return value
mediaRecorder.start()
// later in the code
mediaRecorder.stop()
mediaRecorder.stream.getTracks().forEach((track) => track.stop())
I believe I am working with the MediaRecorder API correctly and closing the recorder and even the stream properly. Does anyone know what could cause these objects to still be kept in memory?
Similar question:
What is "Pending Activities" in Chrome?
Might be related to this Chrome bug:
https://bugs.chromium.org/p/chromium/issues/detail?id=899722
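If the recorder and its listener are what show up under Pending activities, it may also be worth ruling out lingering references on your side before blaming the browser. A hedged cleanup sketch (processEvent is the handler from the question; keeping a reference to it lets you detach it later):
const canvas = document.querySelector('canvas')
const stream = canvas.captureStream(1)
let mediaRecorder = new MediaRecorder(stream)

// Keep a reference to the handler so it can be removed again
const onData = (event) => processEvent(event)
mediaRecorder.addEventListener('dataavailable', onData)
mediaRecorder.start()

// later in the code
mediaRecorder.stop()
mediaRecorder.stream.getTracks().forEach((track) => track.stop())
mediaRecorder.removeEventListener('dataavailable', onData)
mediaRecorder = null // drop the last reference so nothing on our side pins the recorder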
I have read nearly 20 other posts about this particular error, but most seem to be issues with the code calling Response.Close or similar, which is not our case. I understand that this error typically means a user browsed away from the web page or cancelled the request midway, but in our case we are getting it without cancelling anything. I can observe the error after just a few seconds: the download simply fails in the browser (both Chrome and IE, so it's not browser-specific).
We have a web api controller that serves a file download.
[HttpGet]
public HttpResponseMessage Download()
{
    // Enumerates a directory and returns a read-only FileStream of the download
    var stream = dataProvider.GetServerVersionAssemblyStream(configuration.DownloadDirectory, configuration.ServerVersion);
    if (stream == null)
    {
        return new HttpResponseMessage(HttpStatusCode.NotFound);
    }
    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(stream)
    };
    response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    response.Content.Headers.ContentDisposition.FileName = $"{configuration.ServerVersion}.exe";
    response.Content.Headers.ContentType = new MediaTypeHeaderValue(MediaTypeNames.Application.Octet);
    response.Content.Headers.ContentLength = stream.Length;
    return response;
}
Is there something incorrect we are doing in our Download method, or is there something we need to tweak in IIS?
This happens sporadically; I can't observe a pattern. It works sometimes, and other times it fails repeatedly.
- The file download is about 150MB
- The download is initiated from a hyperlink on our web page; there is no special calling code
- The download is over HTTPS (HTTP is disabled)
- The Web Api is hosted on Azure
- It doesn't appear to be timing out; it can happen after just a second or two, so it's not hitting the default 30-second timeout values
I also noticed I can't seem to initiate multiple simultaneous file downloads from the server, which is concerning: this needs to serve 150+ businesses with multiple simultaneous downloads, so there may be something we need to tweak in IIS or the Web Api.
I was finally able to fix our problem. For us it turned out to be a combination of two things: 1) we had several memory leaks and CPU-intensive code in our Web Api that were impacting concurrent downloads, and 2) we ultimately resolved the error by changing MinBytesPerSecond (see https://blogs.msdn.microsoft.com/benjaminperkins/2013/02/01/its-not-iis/) to a lower value, or 0 to disable it. We have not had an issue since.
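For reference, a sketch of where that setting lives on full IIS (in applicationHost.config; on Azure App Service you would need an applicationHost.xdt transform instead, so treat the exact placement as an assumption to verify):
<!-- applicationHost.config: 0 disables the minimum-throughput check that kills slow transfers -->
<system.applicationHost>
    <webLimits minBytesPerSecond="0" />
</system.applicationHost>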
I'm trying to access local storage from content scripts, but even though the message passing is working, the output isn't as expected.
CONTENT SCRIPT
var proxy = "proxystring";
chrome.runtime.sendMessage({message:"hey"},
function(response) {
proxy = response.proxy;
console.log(response.proxy);
}
);
console.log(proxy);
BACKGROUND PAGE (for message passing)
var varproxy = localStorage.getItem('proxy'); // gets data from options page saved to local storage
chrome.runtime.onMessage.addListener(
    function(request, sender, sendResponse) {
        if (request.message == "hey") {
            sendResponse({proxy: varproxy});
            console.log('response sent');
        } else {
            sendResponse({});
        }
    });
The console logs the proxy as the value of varproxy, and also logs "response sent", but the final
console.log(proxy);
still logs proxy as "proxystring".
Why isn't the value of proxy getting changed? How do I change it as required?
Message sending, like many Chrome API functions, is asynchronous. The interpreter doesn't wait for the response but jumps to the next line, so console.log(proxy) can easily be evaluated first, since communicating with the background page takes some time. Only once the response is received does the value of proxy change.
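Concretely, anything that depends on the response has to live inside the callback. A minimal rearrangement of the content script above:
chrome.runtime.sendMessage({message: "hey"},
    function(response) {
        var proxy = response.proxy;
        // Only here is the response guaranteed to have arrived
        console.log(proxy);
        // ...continue with whatever needs proxy...
    }
);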
Might I recommend you try out another implementation? What about Chrome Storage?
Then you don't need any message passing at all, because you can access chrome storage within content scripts.
For example, this is something I do in my extensions' content script to grab several values from Chrome storage:
chrome.storage.sync.get(
    {HFF_toolbar: 'yes', HFF_logging: 'yes', HFF_timer: '1 Minute'},
    function (obj) {
        toolbar_option = obj.HFF_toolbar;
        logging_option = obj.HFF_logging;
        timer_option = obj.HFF_timer;
        /* the rest of my content script, using those options */
    });
I personally found this approach much easier, for my purposes anyway, than message passing implementations.
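For completeness, the writing side could look like this (a sketch of an options page storing the same keys; the key names and values are taken from the snippet above):
// Options page: persist the settings the content script will read
chrome.storage.sync.set(
    {HFF_toolbar: 'yes', HFF_logging: 'yes', HFF_timer: '1 Minute'},
    function () {
        console.log('Options saved');
    }
);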
I have an Mvx-based iOS project which is having problems with image downloads.
I have a couple of screens which contain UICollectionViews and the UICollectionViewCells use MvxDynamicImageHelpers to set the Image of their UIImageViews to images hosted on the internet (Azure blob storage via Azure CDN in actual fact). I have noticed that the images sometimes do not appear and that this is more common on a slow connection and if I scroll through the whole UICollectionView while the images are loading - presumably as it initiates a large number of simultaneous downloads. Restarting the app causes some, but not all, of the images to be shown.
Looking in the Caches/Pictures.MvvmCross folder I see there are a number of files with .tmp extensions and some without .tmp extensions but a 0 byte file size. I presume that the .tmp files are the ones that are re-downloaded following an app restart and that an invalid in-memory cache entry is causing them not to be re-downloaded until this happens.
I have implemented my own versions of MvxDownloadRequest and MvxHttpFileDownloader and registered my IMvxHttpFileDownloader. The only modification in MvxHttpFileDownloader is to use my MvxDownloadRequest instead of the standard Mvx one.
As far as I can see, there are no exceptions being thrown in MvxDownloadRequest.Start or MvxDownloadRequest.ProcessResponse and MvxDownloadRequest.FileDownloadFailed is not being called. Having replaced MvxDownloadRequest.Start with the following, all images are always downloaded and displayed successfully:
try
{
    ThreadPool.QueueUserWorkItem((state) => {
        try
        {
            var fileService = this.GetService<IMvxSimpleFileStoreService>();
            var tempFilePath = DownloadPath + ".tmp";
            // Fetch synchronously via the iOS networking stack instead of HttpWebRequest
            var imageData = NSData.FromUrl(NSUrl.FromString(Url));
            var image = UIImage.LoadFromData(imageData);
            NSError nsError;
            image.AsPNG().Save(tempFilePath, true, out nsError);
            // Move the finished file into place only once the download is complete
            fileService.TryMove(tempFilePath, DownloadPath, true);
        }
        catch (Exception exception)
        {
            FireDownloadFailed(exception);
            return;
        }
        FireDownloadComplete();
    });
}
catch (Exception e)
{
    FireDownloadFailed(e);
}
So, what could be causing the problems with the standard WebRequest that is not affecting the above version? I'm guessing it's something to do with GC and will do further debugging when I get time, but this won't be for a while unfortunately. It would be very much appreciated if someone could answer this or provide pointers for when I do look at it.
Thanks,
J
From the description of your investigations so far, it sounds like you have isolated the problem down to the level that HttpWebRequest sometimes fails, but that the NSData methods are 100% reliable.
If this is the case, then it would suggest that the problem is somewhere in the Xamarin.iOS network stack or in the way it is used.
It might be worth checking the Xamarin Bugzilla repository and also asking their support team whether they are aware of any issues in this area. I believe they did make some announcements about changes to iOS networking at Evolve (see the CFNetworkHandler part late in the video and slides at http://xamarin.com/evolve/2013#session-b3mx6e6rmb), and there are worrying questions on here like iPhone app gets into a state where network requests never complete.
Beyond that, I'd guess the first step in any debugging would be to isolate the issue in a simple test app, e.g. one which just downloads one image at a time and demonstrates a simple pass/fail for each technique. If you can replicate the issue in a small test app, then it'll be much quicker to work out what the issue is.
I have a web page that starts an HTML5 SharedWorker script. Chrome memory usage increases every time this page is reloaded (hitting F5).
The worker script is very simple. Every second (using setInterval) a message is sent to the connected port.
It seems like the worker process is terminated and restarted each time I hit F5. This is what I'd expect since the worker isn't actually shared by more than one 'document'. However, I cannot figure out why memory usage increases on every refresh.
Does anybody know why this is happening?
The fact that memory increases each time the page is reloaded makes me think that I cannot use shared workers at all in Chrome. Has anyone been able to do so without having memory problems?
UPDATE
This is the hosting HTML:
<div id="output"></div>
<script type="text/javascript" src="/scripts/jquery-1.4.4.js"></script>
<script type="text/javascript">
$(function () {
var worker = new SharedWorker("/scripts/worker.js");
worker.port.onmessage = function(e) {
$("#output").append($("<div></div>").text(e.data));
};
worker.port.start();
});
</script>
...and this is worker.js:
var list = [];
setInterval(function () {
for (var i = 0; i < list.length; ++i) {
list[i].postMessage("#connections = " + list.length);
}
}, 1000);
onconnect = function (event) {
list.push(event.ports[0]);
};
The hosting page starts/connects to a shared worker and outputs whatever is received from it.
The worker code keeps a list of connected ports and sends a message to them all once a second.
This is simple stuff. Yet each time the hosting page is reloaded in Chrome, the memory footprint of that tab increases.
Chrome's memory usage climbs after just a couple of refreshes; after refreshing some more I'm reaching 250 MB.
I'm running out of ideas, thinking this must be a bug in Chrome. Can anyone give me some sort of pointer?
UPDATE 2
Disabling my AdBlock extension seemed to fix the problem.
So I was happy for a little while, but it turned out that memory was still being leaked; with AdBlock disabled it just leaks quite a bit less per page refresh.
It seems the Chromium team has resolved this problem. I cannot reproduce it anymore.
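Independent of that Chrome fix, note that worker.js as written never removes ports: a SharedWorker gets no built-in notification when a document unloads, so if the worker does survive a reload, list grows without bound. A defensive sketch (not a confirmed fix for the leak described above) is to have each page announce its departure so the worker can prune:
// In the hosting page, inside the block where worker is created:
window.addEventListener("unload", function () {
    worker.port.postMessage("disconnect");
});

// In worker.js, drop ports that said goodbye:
onconnect = function (event) {
    var port = event.ports[0];
    list.push(port);
    port.onmessage = function (e) {
        if (e.data === "disconnect") {
            list.splice(list.indexOf(port), 1);
        }
    };
};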