I have been trying to use the Forge Viewer to load a somewhat large model, but the viewer crashes after a few seconds (3-5) of usage (with the typical "Aw, Snap!" page).
I've had no trouble with other models, but this happens with this specific model on Windows 10 in Chrome.
I've tested loading it on OS X, where it seems to work, although it is somewhat slow.
My current best guess is that this is happening due to memory exhaustion in Chrome, but I'm not certain yet, because the viewer crashes before I can log the heap usage.
Is there any option that I can use for efficient model loading?
Also, is there a debug mode that allows memory tracking?
If you need the model urn, please let me know.
Thanks!
To modify the memory environment for the viewer (as on memory-constrained devices like the iPhone), set the memory-limit values in the options parameters documented here:
(refer to Default Memory Management section)
https://developer.autodesk.com/en/docs/viewer/v2/overview/changelog/2.17/
In particular, you can force memory management like this:
var av = Autodesk.Viewing; // namespace alias used below
var config3d = {
    memory: {
        limit: 400, // in MB
        debug: {
            force: true
        }
    }
};
var viewer = new av.Viewer3D(container, config3d);
viewer.loadModel(modelUrl, {}, onSuccess, onError);
For debugging memory, try the following:
var memInfo = viewer.getMemoryInfo();
console.log(memInfo.limit); // == 400 MB
console.log(memInfo.effectiveLimit); // >= 400 MB
console.log(memInfo.loaded);
Lastly, you can open the Memory Manager panel extension from the Chrome debug console with this command...
NOP_VIEWER.loadExtension("Autodesk.Viewing.MemoryManager")
Click on the memory-chip icon to bring up the panel.
In the Memory tab, you can see many parameters relating to paged memory, used to render and network-load many meshes (mesh-pack (pf) zips, sorting by closest or largest mesh AABB, ignoring meshes that cover too few pixels on screen, etc.).
Another quick way to activate the Viewer's low-memory mode is to trick your desktop Chrome browser into thinking it's a mobile device by activating mobile emulation. You can use this to test out mobile-related memory issues.
Follow this guide: Chrome debug - Mobile mode
Hope this helps!
I am trying to understand heap snapshots in Chrome. I am debugging memory leaks in my JavaScript application and was able to find out that most of the retained memory is held by Internal Node -> Pending activities -> C++ roots. What does that mean?
My application uses MediaRecorder to record a canvas, and even though it is more complex, the recording can be simplified like this:
const canvas = document.querySelector('canvas')
const stream = canvas.captureStream(1)
const mediaRecorder = new MediaRecorder(stream)
mediaRecorder.addEventListener('dataavailable', processEvent) // pass the handler reference, don't invoke it
mediaRecorder.start()
// later in the code
mediaRecorder.stop()
mediaRecorder.stream.getTracks().forEach((track) => track.stop())
I believe I am working with the MediaRecorder API correctly, closing the recorder and even the stream properly. Does anyone know what could cause these objects to still be kept in memory?
Similar question:
What is "Pending Activities" in Chrome?
Might be related to this chrome bug:
https://bugs.chromium.org/p/chromium/issues/detail?id=899722
For starters, I used navigator.hid.requestDevice without any filters so I could see which devices are available before adding a custom filter, but my device doesn't show up in the browser's HID device picker.
However, in the Chrome device log (chrome://device-log/) I can see connection/disconnection events for the device with both HID and USB labels.
I don't think this device is on the blocklist, so I'm not really sure why it's not showing up as an option when requesting HID devices. It shows up in Windows Device Manager under the HID category as well.
If I use navigator.usb, the device does show up when requested, but on opening it I get a SecurityError, which possibly means it needs the WinUSB driver. It's an HID USB device and works with libraries outside of WebHID and WebUSB.
Any reasons it's not showing up?
Edit 1:
My device showed up in chrome://usb-internals/, where I can see that it says HID is blocked by WebUSB. Not sure how to solve this yet.
Edit 2:
Using Chrome Canary and the DevTools console provided a debug message when using the HID device picker: Chooser dialog is not displaying a device blocked by the HID blocklist: vendorId=7992, productId=258, name='TEST', serial=''
Looking at the HID blocklist (https://github.com/WICG/webhid/blob/main/blocklist.txt), I still don't see an issue with the vendor ID or product ID. The usage page and usage don't match either, but the debug message doesn't mention those, so it's hard to say the exact reason.
Edit 3:
With Chrome Canary 103.0.5034.0, the new output gives the reason this device is blocked:
Chooser dialog is not displaying a device blocked by the HID blocklist: vendorId=7992, productId=258, name='TEST', serial='', numberOfCollections=1, numberOfProtectedInputReports=0, numberOfProtectedOutputReports=0, numberOfProtectedFeatureReports=0
If you're seeing it in the browser picker when you don't define any filters, it means the device is indeed not blocklisted.
I'd recommend grabbing information such as vendorId and productId from https://nondebug.github.io/webhid-explorer/, for instance. After you connect your device, check the vendorId and productId info and use them as filters:
const filters = [{ vendorId: 0x1234, productId: 0x5678 }];
const [device] = await navigator.hid.requestDevice({ filters });
More generally, https://web.dev/hid/ is a great resource to get started with WebHID.
Edit 1
If you're not seeing your device in the browser picker when you don't have any filters, but see "HID device added" in about:device-log, it means the browser picker is hiding it, either because it has a top-level collection with a FIDO usage or because it is blocklisted (https://github.com/WICG/webhid/blob/main/blocklist.txt). See the Chromium source code at chrome/browser/ui/hid/hid_chooser_controller.cc:
bool HidChooserController::DisplayDevice(
    const device::mojom::HidDeviceInfo& device) const {
  // Check if `device` has a top-level collection with a FIDO usage. FIDO
  // devices may be displayed if the origin is privileged or the blocklist is
  // disabled.
  const bool has_fido_collection =
      base::Contains(device.collections, device::mojom::kPageFido,
                     [](const auto& c) { return c->usage->usage_page; });
  if (has_fido_collection) {
    if (base::CommandLine::ForCurrentProcess()->HasSwitch(
            switches::kDisableHidBlocklist) ||
        (chooser_context_ &&
         chooser_context_->IsFidoAllowedForOrigin(origin_))) {
      return FilterMatchesAny(device) && !IsExcluded(device);
    }
    VLOG(1) << "Not displaying a FIDO HID device.";
    return false;
  }

  if (!device::HidBlocklist::IsDeviceExcluded(device))
    return FilterMatchesAny(device) && !IsExcluded(device);

  VLOG(1) << "Not displaying a device blocked by the HID blocklist.";
  return false;
}
Edit 2
Note that it is possible your device is blocklisted if it doesn't have any collection. See HidBlocklist::IsDeviceExcluded() source code. Is that the case?
By the way, it is possible to disable the HID blocklist by running Chrome with a special flag:
$ chrome --disable-hid-blocklist
See the "Run Chromium with flags" page.
I want to give extra thanks to François Beaufort for pointing me in the right direction and adding debug logs to Chrome Canary specifically for WebHID testing.
The issue was that WebHID saw the numbers of input and output reports both as 0 and blocked the device from appearing in the HID device picker. Again, the device I was using works natively on Windows, so this issue was hidden until I worked with WebHID.
The exact issue was in the input and output sections of the HID descriptor in the device firmware. While I can't share the whole descriptor, the input and output sections look as follows after the fix:
0x81, 0x00, // INPUT (Data, Array, Abs)
...
...
...
0x91, 0x01, // OUTPUT (Const, Array, Abs)
The first values (0x81 and 0x91) were already correct, but the second values (the item flag bytes) needed to be changed to the values shown above. Once the firmware was modified, the device immediately appeared in the WebHID device picker. Communication works fine with WebHID now, and the device still works natively on Windows as well.
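As a closing note for anyone debugging a similar descriptor problem: WebHID exposes the report descriptor as Chrome parsed it, so you can verify what the browser actually sees. A small sketch using the standard HIDDevice API (run from a page with WebHID access, after a user gesture; the logging layout is just illustrative):

// Inspect what Chrome parsed from the HID report descriptor.
const [device] = await navigator.hid.requestDevice({ filters: [] });
for (const collection of device.collections) {
    console.log('usagePage:', collection.usagePage, 'usage:', collection.usage);
    console.log('input reports:', collection.inputReports.length);
    console.log('output reports:', collection.outputReports.length);
    console.log('feature reports:', collection.featureReports.length);
}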
On almost all of my Revit models, GEOMETRY_LOADED_EVENT is triggered only once, when I can see the scene as complete. This is what I expect, and when this event fires, I can perform other actions on the model/view (like moving the complete model to other coordinates).
But on one Revit model, GEOMETRY_LOADED_EVENT is triggered several times: during loading, and on model move or zoom in/out.
I can check this by registering a simple
NOP_VIEWER.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT,
(e) => {console.log(e)});
How many times should I expect GEOMETRY_LOADED_EVENT to be triggered?
Note: On the Revit file with multiple triggers, I see "onDemandLoad: true" set in the event. It might be the cause. Is there a way to disable this?
Thank you,
When a model is too large, Forge Viewer may evict certain parts of its geometry from memory and later load (download) them again when they come back into view. In that case, the GEOMETRY_LOADED_EVENT is triggered again.
You can disable this memory management by passing 0 as the memory limit in the viewer config. In that case, however, your webpage may run out of memory and get killed by the browser:
const config = {
memory: {
limit: 0
}
};
let viewer = new Autodesk.Viewing.Viewer3D(document.getElementById('viewer'), config);
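If you'd rather keep memory management enabled and simply react once, one option is to detach the listener after the first event. A minimal sketch (the handler body is a placeholder):

function onGeometryLoaded(e) {
    // Detach so later evictions/reloads don't re-trigger our logic.
    viewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, onGeometryLoaded);
    // Safe to move the model / act on the view here.
    console.log('first full geometry load', e);
}
viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, onGeometryLoaded);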
I'm using Puppeteer (PuppeteerSharp actually, but the API is the same) to take a screenshot of a web page from my application.
The problem is that the page does several layout changes via JavaScript after the page has loaded, so a few seconds pass before seeing the "final" rendered version of the page.
At the moment I'm just waiting a "safe" number of seconds before taking the screenshot, but this is obviously not a good approach, since a temporary performance slowdown on the machine can result in an incomplete rendering.
Since Puppeteer uses Chromium in the background, is there a way to intercept Chromium's layout/rendering events (like you can do in the DevTools console in Chrome)? Or, really, ANY other way to know when the page has stopped "changing" (visually, I mean)?
EDIT, some more info: The content is dynamic, so I don't know beforehand what it will draw and how. Basically, it's a framework that draws different charts/tables/images/etc. (not open-source, unfortunately). By testing with the "Performance" tool in the Chrome DevTools, however, I noticed that after the page has finished rendering, all activity in the timeline stops, so if I could access that information it would be great.

Unfortunately, the only way to do that in Puppeteer (that I can see) is using the "Tracing" feature, but that doesn't operate in real time. Instead, it dumps the trace to file, and the buffer is way too big to be of any use (the file is still 0 bytes after my page has already finished rendering; it only flushes to disk when I call "stopTracing"). What I would need is to access the Tracing feature of Puppeteer in real time, for example via events or an in-memory stream, but that doesn't seem to be supported by the API. Any way around this?
You should use page.waitForSelector() to wait for the dynamic elements to finish rendering.
There must be a pattern that can be identified in terms of the content being generated.
Keep in mind that you can use flexible CSS Selectors to match elements or attributes without knowing their exact values.
await page.goto( 'https://example.com/', { 'waitUntil' : 'networkidle0' } );
await Promise.all([
page.waitForSelector( '[class^="chart-"]' ), // Class begins with 'chart-'
page.waitForSelector( '[name$="-image"]' ), // Name ends with '-image'
page.waitForSelector( 'table:nth-of-type(5)' ) // Fifth table
]);
This can be useful when waiting for a certain pattern to exist in the DOM.
If page.waitForSelector() is not powerful enough to meet your needs, you can use page.waitForXPath():
await page.waitForXPath( '//div[contains(text(), "complete")]' ); // Div contains 'complete'
Alternatively, you can plug the MutationObserver interface into page.evaluate() to watch for changes being made to the DOM tree. When the changes have stopped for a period of time, you can resume your program.
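A minimal sketch of that approach (the 1000 ms quiet period is an arbitrary choice, not something Puppeteer prescribes):

// Resolve once the DOM has been quiet for 1 second.
await page.evaluate(() => new Promise((resolve) => {
    let timer = setTimeout(finish, 1000);
    const observer = new MutationObserver(() => {
        clearTimeout(timer);
        timer = setTimeout(finish, 1000);
    });
    function finish() {
        observer.disconnect();
        resolve();
    }
    observer.observe(document.body, { childList: true, subtree: true, attributes: true });
}));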
After some trial and error, I settled on this solution:
string traceFile = IOHelper.GetTemporaryFile("txt");
long lastSize = 0;
int cyclesWithoutTraceActivity = 0;
int totalCycles = 0;

while (cyclesWithoutTraceActivity < 4 && totalCycles < 25)
{
    File.Create(traceFile).Close();
    await page.Tracing.StartAsync(new TracingOptions()
    {
        Categories = new List<string>() { "devtools.timeline" },
        Path = traceFile,
    });
    Thread.Sleep(500);
    await page.Tracing.StopAsync();

    long curSize = new FileInfo(traceFile).Length;
    if (Math.Abs(lastSize - curSize) > 5)
    {
        logger.Debug("Trace activity detected, waiting...");
        cyclesWithoutTraceActivity = 0;
    }
    else
    {
        logger.Debug("No trace activity detected, increasing idle counter...");
        cyclesWithoutTraceActivity++;
    }
    lastSize = curSize;
    totalCycles++;
}

File.Delete(traceFile);

if (totalCycles == 25)
{
    logger.Warn($"WARNING: page did not stabilize within allotted time limit (15 seconds). Rendering page in current state, might be incomplete");
}
Basically what I do here is this: I run Chromium's tracing at 500 ms intervals, and each time I compare the size of the last trace file to the size of the current one. Any significant change in size is interpreted as activity on the timeline and resets the idle counter. If enough time passes without significant changes, I assume the page has finished rendering.

Note that the trace file always starts with some debugging info (even if the timeline itself has no activity to report); this is the reason why I don't do an exact size comparison but instead check whether the file lengths are more than 5 bytes apart. Since the initial debug info contains some counters and IDs that vary over time, I allow for a little variance to account for this.
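As a side note, in the Node.js flavor of Puppeteer a similar idle check can be done without the temp-file dance, by opening a raw CDP session and asking the Tracing domain to report events as they are collected. A hedged sketch (I haven't verified this against PuppeteerSharp, and the 2 s / 15 s thresholds are illustrative; run inside an async function):

const client = await page.target().createCDPSession();
let lastActivity = Date.now();
// With 'ReportEvents', trace chunks arrive as CDP events instead of a file.
client.on('Tracing.dataCollected', () => { lastActivity = Date.now(); });
await client.send('Tracing.start', {
    transferMode: 'ReportEvents',
    categories: 'devtools.timeline'
});
const started = Date.now();
// Wait until no trace activity for 2 s, or 15 s total.
while (Date.now() - lastActivity < 2000 && Date.now() - started < 15000) {
    await new Promise((r) => setTimeout(r, 250));
}
await client.send('Tracing.end');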
I have an Mvx-based iOS project which is having problems with image downloads.
I have a couple of screens which contain UICollectionViews, and the UICollectionViewCells use MvxDynamicImageHelpers to set the Image of their UIImageViews to images hosted on the internet (Azure blob storage via the Azure CDN, in actual fact). I have noticed that the images sometimes do not appear, and that this is more common on a slow connection and when I scroll through the whole UICollectionView while the images are loading, presumably as it initiates a large number of simultaneous downloads. Restarting the app causes some, but not all, of the images to be shown.
Looking in the Caches/Pictures.MvvmCross folder, I see a number of files with .tmp extensions and some without .tmp extensions but with a 0-byte file size. I presume that the .tmp files are the ones that are re-downloaded following an app restart, and that an invalid in-memory cache entry is causing them not to be re-downloaded until this happens.
I have implemented my versions of MvxDownloadRequest and MvxHttpFileDownloader and registered my IMvxHttpFileDownloader. The only modification in MvxHttpFileDownloader is to use my MvxDownloadRequest instead of the standard Mvx one.
As far as I can see, there are no exceptions being thrown in MvxDownloadRequest.Start or MvxDownloadRequest.ProcessResponse, and MvxDownloadRequest.FileDownloadFailed is not being called. After replacing MvxDownloadRequest.Start with the following, all images are always downloaded and displayed successfully:
try
{
    ThreadPool.QueueUserWorkItem((state) =>
    {
        try
        {
            var fileService = this.GetService<IMvxSimpleFileStoreService>();
            var tempFilePath = DownloadPath + ".tmp";
            var imageData = NSData.FromUrl(NSUrl.FromString(Url));
            var image = UIImage.LoadFromData(imageData);
            NSError nsError;
            image.AsPNG().Save(tempFilePath, true, out nsError);
            fileService.TryMove(tempFilePath, DownloadPath, true);
        }
        catch (Exception exception)
        {
            FireDownloadFailed(exception);
            return;
        }
        FireDownloadComplete();
    });
}
catch (Exception e)
{
    FireDownloadFailed(e);
}
So, what could be causing the problems with the standard WebRequest that is not affecting the above version? I'm guessing it's something to do with GC, and I will do further debugging when I get time, but this won't be for a while, unfortunately. It would be very much appreciated if someone could answer this or provide pointers for when I do look at it.
Thanks,
J
From the description of your investigations so far, it sounds like you have isolated the problem down to the level that HttpWebRequest sometimes fails, but that the NSData methods are 100% reliable.
If this is the case, then it would suggest that the problem is somewhere in the Xamarin.iOS network stack or in the use of it.
It might be worth checking the Xamarin Bugzilla repository and also asking their support team whether they are aware of any issues in this area. I believe they did make some announcements about changes to iOS networking at Evolve (see the CFNetworkHandler part late in the video and slides at http://xamarin.com/evolve/2013#session-b3mx6e6rmb), and there are worrying questions on here like "iPhone app gets into a state where network requests never complete".
Beyond that, I'd guess the first step in any debugging would be to isolate the issue in a simple test app, e.g. a simple app which just downloads one image at a time and demonstrates a simple pass/fail for each technique. If you can replicate the issue in a small test app, it'll be much quicker to work out what the issue is.