Understanding heap snapshots in Chrome: Pending activities and Internal node

I am trying to understand heap snapshots in Chrome. I am debugging memory leaks in my JavaScript application and found that most of the retained memory is held by Internal node -> Pending activities -> C++ roots. What does this mean?
My application uses MediaRecorder to record a canvas. Although the real code is more complex, the recording can be simplified like this:
const canvas = document.querySelector('canvas')
const stream = canvas.captureStream(1)
const mediaRecorder = new MediaRecorder(stream)
mediaRecorder.addEventListener('dataavailable', processEvent) // pass the handler itself, not its return value
mediaRecorder.start()
// later in the code
mediaRecorder.stop()
mediaRecorder.stream.getTracks().forEach((track) => track.stop())
I believe I am using the MediaRecorder API correctly, and I close the recorder and even the stream properly. Does anyone know what could cause these objects to still be kept in memory?
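From what I gather, "Pending activities" means the browser itself keeps a C++ object alive because it still has work scheduled (an active recorder, a live track, a registered listener), even when no JavaScript variable references it. A more thorough teardown sketch (my assumption, not a confirmed fix; it also assumes mediaRecorder and stream are declared with let rather than const so they can be nulled, and that the same processEvent reference was passed to addEventListener):
mediaRecorder.stop()
// Detach the handler; a registered listener can count as pending activity.
mediaRecorder.removeEventListener('dataavailable', processEvent)
mediaRecorder.stream.getTracks().forEach((track) => track.stop())
// Drop the remaining references so the wrappers become collectable.
mediaRecorder = null
stream = null
If the objects are still retained after a teardown like this, the Chrome bug linked below may be the culprit rather than application code.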
Similar question:
What is "Pending Activities" in Chrome?
It might be related to this Chrome bug:
https://bugs.chromium.org/p/chromium/issues/detail?id=899722

Related

Forge Viewer crashes after loading a specific model

I have been trying to use the Forge Viewer to load a somewhat large model, but the viewer crashes after a few seconds (3-5) of usage, with the typical "Aw, Snap!" page.
I've had no trouble with other models, but this happens with this specific model on Windows 10 in Chrome.
I've tested loading it on OS X, where it seems to work, although somewhat slowly.
My current best guess is that this is happening due to memory exhaustion in Chrome, but this is not yet certain, because the viewer crashes before I can log the heap usage.
Is there any option that I can use for efficient model loading?
Also, is there a debug mode that allows memory tracking?
If you need the model urn, please let me know.
Thanks!
To modify the viewer's memory environment (for example, to mimic a low-memory device like an iPhone), set memory-limit values in the options parameter, as documented here (refer to the Default Memory Management section):
https://developer.autodesk.com/en/docs/viewer/v2/overview/changelog/2.17/
In particular, you can force memory management like this:
var config3d = {
    memory: {
        limit: 400, // in MB
        debug: {
            force: true
        }
    }
};
var viewer = new av.Viewer3D(container, config3d);
viewer.loadModel(modelUrl, {}, onSuccess, onError);
For debugging memory, try the following:
var memInfo = viewer.getMemoryInfo();
console.log(memInfo.limit); // == 400 MB
console.log(memInfo.effectiveLimit); // >= 400 MB
console.log(memInfo.loaded);
Lastly, you can open the Memory Manager panel extension from the Chrome debug console with this command:
NOP_VIEWER.loadExtension("Autodesk.Viewing.MemoryManager")
Click the memory-chip icon to bring up the panel.
In the memory tab, you can see many parameters relating to the paged memory used to render and network-load large numbers of meshes (mesh-pack (.pf) zips, sorting by closest or largest mesh AABB, ignoring meshes that cover too few pixels on screen, etc.).
Another quick way to activate the viewer's low-memory mode is to trick your desktop Chrome browser into thinking it's a mobile device by activating mobile emulation. You can use this to test mobile-related memory issues.
Follow this guide: Chrome debug - Mobile mode
Hope this helps!

Consistent Empty Data using MediaRecorderAPI, intermittently

I have a simple setup for desktop capturing using HTML5 libraries.
This includes a simple webpage and a Chrome extension. The flow is as follows (a sketch of the pipeline appears after the list):
1. I use the extension to get the sourceId.
2. Using the sourceId, I call navigator.mediaDevices.getUserMedia to get the MediaStream.
3. This MediaStream is then fed into an instance of MediaRecorder for recording.
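For reference, a minimal sketch of steps 2 and 3 (hedged: the constraint shape follows Chrome's documented desktop-capture pattern, but my real code differs in the details; sourceId is whatever the extension returned in step 1):
// sourceId comes from the extension (step 1, not shown).
navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: sourceId
        }
    }
}).then(function(stream) {
    // Step 3: feed the MediaStream into a MediaRecorder.
    var recorder = new MediaRecorder(stream);
    recorder.addEventListener('dataavailable', function(event) {
        console.log('blob size:', event.data.size); // 0 in the bad state
    });
    recorder.start();
});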
This setup works most of the time, but occasionally requestData() on the MediaRecorder instance consistently returns blobs with empty data. I am clueless as to what can cause a working setup to start misbehaving sometimes.
Some weird behaviour that I noticed in the bad state:
When I try to close or refresh the window, it doesn't respond.
The MediaStreamTrack object from step 2 above is 'live', but as soon as I get to step 3 it becomes 'muted'.
There's no pattern to it; sometimes it even happens the very first time I request the MediaStreams (which rules out the possibility that some dangling resources are eating up the contexts).
Is there anything that I am doing wrong and am unaware of? Any help/pointers would be highly appreciated!

WriteableBitmap or PNG writer memory leak?

I am building a small Windows Phone 8 app (a Christian Orthodox calendar) with a background agent that should update the live tile. The app requires access to the contacts on the phone, so I opted out of internet access, which means backend tile generation is, at least for now, out of the question. I personally would not trust an app that has access to my contacts AND to the internet.
Recently my scheduled agent (which generates three PNGs) started throwing OutOfMemoryException on me. Consistently. I've used DeviceStatus to query and debug its behavior.
It's hard to call this a memory leak, since if I call GC.Collect between the three tile generations it won't throw OutOfMemoryException. If it were a true memory leak, some (large and/or many) objects would remain referenced by live/root objects and no amount of GC.Collect would help. In my case GC.Collect WILL help. I can continue using GC.Collect, but I feel dirty doing so.
As I'm developing the app free and open-source, you can view all the code of the project, at its current state of development, at http://orthodoxcalendar.codeplex.com
The tile generation consists of taking a background and overlaying two other images on it. Basically, for each of the three generated PNGs I do:
var bytes1 = (byte[])resourceManager.GetObject(resourceName1);
var stream1 = new MemoryStream(bytes1);
var bytes2 = (byte[])resourceManager.GetObject(resourceName2);
var stream2 = new MemoryStream(bytes2);
var bytes3 = (byte[])resourceManager.GetObject(resourceName3);
var stream3 = new MemoryStream(bytes3);
var writeableBitmap1 = BitmapFactory.New(size.Width, size.Height).FromStream(stream1); // background
var writeableBitmap2 = BitmapFactory.New(size.Width, size.Height).FromStream(stream2); // first overlay
var writeableBitmap3 = BitmapFactory.New(size.Width, size.Height).FromStream(stream3); // second overlay
writeableBitmap1.Blit(new Point(0, 0), writeableBitmap2, new Rect(0, 0, width2, height2), Colors.White, BlendMode.Alpha);
writeableBitmap1.Blit(new Point(0, 0), writeableBitmap3, new Rect(0, 0, width3, height3), Colors.White, BlendMode.Alpha);
writeableBitmap1.DrawText("Some text", new Point(5, 139), Color.Black, 17);
writeableBitmap1.Invalidate(); // flatten things
using (var outputStream = new WhateverStream())
{
    PNGWriter.Write(writeableBitmap1, outputStream);
}
writeableBitmap1.SetSource(new MemoryStream(MiscData.MinimumPng)); // set the writeable bitmap to a 1x1 transparent PNG to, hopefully, force it to release unmanaged memory or other stuff
writeableBitmap2.SetSource(new MemoryStream(MiscData.MinimumPng));
writeableBitmap3.SetSource(new MemoryStream(MiscData.MinimumPng));
stream1.Dispose();
stream2.Dispose();
stream3.Dispose();
The code, if you check out the project, is not exactly like the above, since I've wrapped almost all dependencies in adapters and extracted interfaces, across many assemblies. The above is a simplified version that just shows what I consider to be the relevant lines.
A few explanations for the code above:
all this code runs in the background agent inside a Dispatcher.BeginInvoke, since a WriteableBitmap can't seem to be manipulated on any thread other than the UI thread
The PNG data is stored in another assembly as a resx. I know this fattens the assembly, but I need it this way to reuse it across platforms, as the assembly is a PCL
Creating the WriteableBitmap directly from a byte array seems to fail in a mysterious way, so wrapping it in a MemoryStream somehow makes it work
The PNG writer is taken from ToolStack.
It's not feasible to pre-generate the images, since there are multiple versions of the "first overlay", the "second overlay" and, above all, the "Some text". It would mean tens of thousands of images, at least.
The heart of the question: Am I doing something awfully wrong that I'm not aware of? The only thing that pops into my mind is that JPEGs are generated faster and with less memory consumption, but they lack the transparency I need. Can this actually be called a memory leak?
LATER EDIT: After some more debugging, the behavior seems to have changed from the one above to a true memory leak. I switched from PNG generation to JPEG generation and the memory use is lower now. The input images are still PNGs, but a JPEG comes out at the other end. The memory footprint went several megabytes below the previous threshold(s).
SECOND EDIT: I put the logic in a 10,000-iteration loop behind a button and there doesn't seem to be much memory consumption. I am beginning to think there isn't really a memory leak, just higher memory consumption during generation, and that this is enough to bring the fragile agent down.
In doing a similar thing, I've had to explicitly set the WriteableBitmaps to null (even though it should be unnecessary) before calling GC.Collect.
Additionally, it may be better to create and destroy (and collect) each of the images in turn, rather than creating them all and then destroying them all; this keeps the peak memory use lower at any one point.
Also note that when tracking memory use in the debugger, the debugger adds about 3 MB of overhead that you won't see live.

Web Audio node connected to two gain nodes, connected to destination, doubles speed / pitch

As the title says, if I have an audio node that emits sound and I connect it to two separate GainNodes, which in turn are connected to the AudioContext destination, the sound plays at double speed / double pitch (as if half the samples were sent to one gain node and half to the other, and the duration halved as well).
I have created a handy jsfiddle here; just drag your sound files into the black rectangle canvas and listen.
// audioContext: Web Audio context
// decoded: decoded audioBuffer
// gainNode1, gainNode2: gain nodes
var bSrc = audioContext.createBufferSource();
bSrc.connect(gainNode1);
bSrc.connect(gainNode2);
gainNode1.connect(audioContext.destination);
gainNode2.connect(audioContext.destination);
bSrc.buffer = decoded;
bSrc.loop = false;
// You'll hear two double-speed buffers playing in unison
bSrc.start(0);
Is that by design? What I would like is to exactly "duplicate" the sound (it will be sent down two different routes; the fiddle is just a proof of concept for a bigger project).
Edit:
I tested this on Chrome Version 24.0.1312.56 / Ubuntu 12.10 and the behaviour is present.
The behaviour is also present on Chrome Version 24.0.1312.68 / Ubuntu 12.10.
On Chrome Version 24.0.1312.57 / Mac OS X, the Audio API works well and this behaviour is not present.
Could it be a Linux-only issue?
Sounds like a Linux implementation issue. It works for me in Chrome on OS X.
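If you have to run on the affected Linux builds anyway, a hedged workaround sketch (an assumption, untested against this exact bug) is to give the buffer source a single outgoing connection and do the fan-out from a pass-through GainNode instead:
var passthrough = audioContext.createGain();
passthrough.gain.value = 1; // unity gain, used purely as a split point
bSrc.connect(passthrough);
passthrough.connect(gainNode1);
passthrough.connect(gainNode2);
gainNode1.connect(audioContext.destination);
gainNode2.connect(audioContext.destination);
Fan-out from one node is explicitly supported by the Web Audio API (each outgoing connection carries a copy of the signal), so if this still doubles the pitch, that is further evidence of an implementation bug rather than misuse of the API.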

Chrome extension memory leak in chrome.extension.sendMessage()?

I'm seeing fairly massive memory leaks in long-lived pages using Chrome's chrome.extension.sendMessage().
After sending ~200k events from the content script to the background page as a test, chrome.Event's retained size is ~80% of the retained memory in a ~50 MB heap snapshot.
I've been trying to track down any mistakes I might be making (closing over some variable and preventing it from being GC'd), but it seems to be related to the implementation of Chrome's eventing system.
Has anyone run into anything like this, or seen memory leaks with extremely long-lived extensions whose content scripts chatter a lot with a background page?
The code on my content-script side:
var csToBg = function(message) {
    var csToBgResponseHandler = function(response) {
        console.log("Got a response from bg");
    };
    var result = chrome.extension.sendMessage(null, message, csToBgResponseHandler);
};
And on the background-page side, a simple ACK function (to superstitiously avoid https://code.google.com/p/chromium/issues/detail?id=114738):
var handleIncomingCSMessage = function(message, sender, sendResponse) {
    var response = message;
    response.acked = "ACK";
    window.console.log("Got a message, ACKing to CS");
    sendResponse(response);
};
After sending ~200k messages in Chrome 23.0.1271.97 this way, the heap snapshot shows chrome.Event dominating the retained memory.
The memory never seems to get reclaimed for the life of the page, and I'm stumped about how to fix it.
EDIT: This is a standard background page, not an event page.
This is probably fixed in Chrome 32. Finally!
See http://code.google.com/p/chromium/issues/detail?id=311665 for details.
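For anyone stuck on a version between the regression and the fix, a hedged mitigation sketch (an assumption, not a verified fix for this exact leak): on Chrome versions that support chrome.runtime.connect, keep one long-lived Port open instead of registering a fresh callback for every sendMessage call:
// Content-script side: one persistent port instead of per-message callbacks.
var port = chrome.runtime.connect({ name: 'cs-to-bg' });
port.onMessage.addListener(function(response) {
    console.log('Got a response from bg');
});
var csToBg = function(message) {
    port.postMessage(message);
};
The background page would then listen via chrome.runtime.onConnect and reply with port.postMessage rather than sendResponse.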