On almost all of my Revit models, GEOMETRY_LOADED_EVENT is triggered only once, when the scene appears complete. This is what I expect, and once this event fires I can perform other actions on the model/view (like moving the complete model to other coordinates).
But on one Revit model, GEOMETRY_LOADED_EVENT is triggered several times: during loading, and again on model move or zoom in/out.
I can verify this by registering a simple listener:
NOP_VIEWER.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT,
    (e) => { console.log(e); });
How many times should I expect GEOMETRY_LOADED_EVENT to be triggered?
Note: on the Revit file with multiple triggers, the event payload has "onDemandLoad: true" set. It might be the cause. Is there a way to disable this?
Thank you,
When a model is too large, Forge Viewer may evict parts of its geometry from memory and later load (download) them again when they come back into view. In that case, GEOMETRY_LOADED_EVENT is triggered again.
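If you only want to react to the first full load, a simple workaround (a minimal sketch, not an official API) is to guard your listener:

let geometryHandled = false;
NOP_VIEWER.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, (e) => {
    if (geometryHandled) return; // ignore re-loads caused by memory eviction
    geometryHandled = true;
    // ...run your one-time actions here (e.g. move the model)
});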
You can disable this memory management by passing 0 as the memory limit in the viewer config. In that case, however, your webpage may run out of memory and be killed by the browser:
const config = {
memory: {
limit: 0 // 0 disables memory management entirely
}
};
let viewer = new Autodesk.Viewing.Viewer3D(document.getElementById('viewer'), config);
Related
With Model Coordination we experience a slowdown in Navisworks when accessing properties.
Our app searches for properties and creates search sets automatically to save time. We use the Navisworks API to do so:
ModelItemCollection searchResults = s.FindAll(Autodesk.Navisworks.Api.Application.ActiveDocument, true);
We redefine "s" and "searchResults" for each needed search set and save each search as a SavedItem (SearchSet). Because we create a lot of SearchSets this way, the slowdown is quite noticeable.
This action on a model from BIM 360 Glue (Classic) takes up to 10 seconds, whereas the same model in Model Coordination (next gen) takes more than 30 minutes.
The slowdown occurs regardless of our app: it also appears when clicking on properties in the selection tree or in SearchSets.
I'm using Puppeteer (PuppeteerSharp actually, but the API is the same) to take a screenshot of a web page from my application.
The problem is that the page does several layout changes via JavaScript after the page has loaded, so a few seconds pass before seeing the "final" rendered version of the page.
At the moment I'm just waiting a "safe" number of seconds before taking the screenshot, but this is obviously not a good approach, since a temporary performance slowdown on the machine can result in an incomplete rendering.
Since Puppeteer uses Chromium in the background, is there a way to intercept Chromium's layout/rendering events (like you can do in the DevTools console in Chrome)? Or, really, ANY other way to know when the page has stopped "changing" (visually, I mean)?
EDIT, some more info: the content is dynamic, so I don't know beforehand what it will draw and how. Basically, it's a framework that draws different charts/tables/images/etc. (not open source, unfortunately). By testing with the Performance tool in the Chrome DevTools, however, I noticed that after the page has finished rendering, all activity in the timeline stops, so if I could access that information it would be great.
Unfortunately, the only way to do that in Puppeteer (that I can see) is the "Tracing" feature, but that doesn't operate in real time. Instead, it dumps the trace to file, and the buffer is way too big to be of any use (the file is still 0 bytes after my page has already finished rendering; it only flushes to disk when I call "stopTracing"). What I would need is access to the tracing feature in real time, for example via events or an in-memory stream, but that doesn't seem to be supported by the API. Any way around this?
You should use page.waitForSelector() to wait for the dynamic elements to finish rendering.
There must be a pattern that can be identified in terms of the content being generated.
Keep in mind that you can use flexible CSS Selectors to match elements or attributes without knowing their exact values.
await page.goto( 'https://example.com/', { 'waitUntil' : 'networkidle0' } );
await Promise.all([
page.waitForSelector( '[class^="chart-"]' ), // Class begins with 'chart-'
page.waitForSelector( '[name$="-image"]' ), // Name ends with '-image'
page.waitForSelector( 'table:nth-of-type(5)' ) // Fifth table
]);
This can be useful when waiting for a certain pattern to exist in the DOM.
If page.waitForSelector() is not powerful enough to meet your needs, you can use page.waitForXPath():
await page.waitForXPath( '//div[contains(text(), "complete")]' ); // Div contains 'complete'
Alternatively, you can plug the MutationObserver interface into page.evaluate() to watch for changes being made to the DOM tree. When the changes have stopped for a period of time, you can resume your program.
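A minimal sketch of that idea (the 500 ms quiet window and 10 s failsafe are arbitrary values to tune for your page):

await page.evaluate(() => new Promise((resolve) => {
    const quietMs = 500, timeoutMs = 10000;
    // Restart the quiet timer on every DOM mutation.
    let timer = setTimeout(done, quietMs);
    const observer = new MutationObserver(() => {
        clearTimeout(timer);
        timer = setTimeout(done, quietMs);
    });
    observer.observe(document.body, { childList: true, subtree: true, attributes: true });
    // Failsafe: resolve anyway if the page never goes quiet.
    const failsafe = setTimeout(done, timeoutMs);
    function done() {
        clearTimeout(failsafe);
        observer.disconnect();
        resolve();
    }
}));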
After some trial and error, I settled on this solution:
string traceFile = IOHelper.GetTemporaryFile("txt");
long lastSize = 0;
int cyclesWithoutTraceActivity = 0;
int totalCycles = 0;
while (cyclesWithoutTraceActivity < 4 && totalCycles < 25)
{
    // Truncate the trace file, then record a short slice of timeline activity.
    File.Create(traceFile).Close();
    await page.Tracing.StartAsync(new TracingOptions()
    {
        Categories = new List<string>() { "devtools.timeline" },
        Path = traceFile,
    });
    await Task.Delay(500); // don't block the thread with Thread.Sleep in an async method
    await page.Tracing.StopAsync();
    long curSize = new FileInfo(traceFile).Length;
    if (Math.Abs(lastSize - curSize) > 5)
    {
        logger.Debug("Trace activity detected, waiting...");
        cyclesWithoutTraceActivity = 0;
    }
    else
    {
        logger.Debug("No trace activity detected, increasing idle counter...");
        cyclesWithoutTraceActivity++;
    }
    lastSize = curSize;
    totalCycles++;
}
File.Delete(traceFile);
if (totalCycles == 25)
{
    logger.Warn("WARNING: page did not stabilize within allotted time limit (15 seconds). Rendering page in current state, might be incomplete");
}
Basically, what I do here is run Chromium's tracing in 500 ms slices, each time comparing the size of the previous trace file to the size of the current one. Any significant change in size is interpreted as activity on the timeline and resets the idle counter; if enough time passes without significant changes, I assume the page has finished rendering. Note that the trace file always starts with some debugging info (even if the timeline itself has no activity to report); this is why I don't do an exact size comparison but instead check whether the file lengths are more than 5 bytes apart: since the initial debug info contains some counters and IDs that vary over time, I allow for a little variance to account for this.
I have been trying to use the Forge Viewer to load a somewhat large model, but the viewer crashes after a few seconds (3-5) of usage (with the typical "Aw, Snap!" page).
I've had no trouble with other models, but this happens with this specific model on Windows 10 in Chrome.
I've tested loading it on OS X, where it seems to work, although somewhat slowly.
My current best guess is that this is happening due to memory overflow in Chrome, but this is not yet certain, because the viewer crashes before I can log the heap usage.
Is there any option that I can use for efficient model loading?
Also, is there a debug mode that allows memory tracking?
If you need the model urn, please let me know.
Thanks!
To modify the viewer's memory environment (e.g. to mimic a memory-constrained device like an iPhone), change the options parameters using the memory-limit values documented here (refer to the "Default Memory Management" section):
https://developer.autodesk.com/en/docs/viewer/v2/overview/changelog/2.17/
In particular, you can force memory management like this:
var av = Autodesk.Viewing; // alias used below
var config3d = {
    memory: {
        limit: 400, // in MB
        debug: {
            force: true // force memory management even for small models
        }
    }
};
var viewer = new av.Viewer3D(container, config3d);
viewer.loadModel( modelUrl, {}, onSuccess, onError );
For debugging memory, try the following:
var memInfo = viewer.getMemoryInfo();
console.log(memInfo.limit); // == 400 MB
console.log(memInfo.effectiveLimit); // >= 400 MB
console.log(memInfo.loaded);
Lastly, you can open the Memory Manager panel extension from the Chrome debug console with this command:
NOP_VIEWER.loadExtension("Autodesk.Viewing.MemoryManager")
Click on the memory-chip icon to bring up the panel.
In the Memory tab, you can see many parameters relating to how geometry is paged for rendering and network loading: mesh-pack (.pf) zips, sorting by closest or largest mesh AABB, ignoring meshes that cover too few pixels on screen, etc.
Another quick way to activate the viewer's low-memory mode is to trick your desktop Chrome browser into thinking it's a mobile device by activating mobile debugging. You can use this to test out mobile-related memory issues.
Follow this guide: Chrome debug - Mobile mode
Hope this helps!
I've got an application that downloads several large binary files and saves them to disk. On some machines it works fine; on other machines, every once in a while, a download will proceed to 99.9% complete and the URLStream object will not fire Event.COMPLETE.
This is almost identical to the issue that appears here:
Why does URLStream complete event get dispatched when the file is not finished loading?
I've tried using the 'Cache Bust' method described in one of the answers but still no dice.
Any help would be appreciated.
Here is some sample code to help illustrate what I am trying to do:
var contentURL:String = "http://some-large-binary-file-in-amazon-s3.tar";
var stream:URLStream = new URLStream();
stream.addEventListener(Event.COMPLETE, function(e:Event):void{
//This should fire when the file is done downloading
//On some systems this fails to fire once in a while
//On other systems it never fails to fire
});
stream.addEventListener(ProgressEvent.PROGRESS, function(pe:ProgressEvent):void{
//Write the bytes available in the stream and save them to disk
//Note that a download will reach 100% complete in terms of total progress but the 'complete' event might still not fire.
});
var urlRequest:URLRequest = new URLRequest(contentURL);
//Here we might add some headers to the URL Request for resuming a file
//but they don't matter; 'Event.COMPLETE' will fail to fire with or without
//these headers
addCustomHeaders( urlRequest );
stream.load( urlRequest );
IMO this is code destined to fail: you purposely give up any control over what's going on and just assume everything will work by itself. I never had any problems whatsoever with the URLStream class, but here's basically what I never do:
I never skip registering all the different error events available (you don't register any).
I never use anonymous listeners. Even though they are supposedly not garbage-collected until the download is complete, this is IMO an unnecessary, unsafe bet, especially since it's not rare for URLStream to idle a little while loading the last bits. I would not be surprised if removing those anonymous listeners actually fixed the problem. A sketch of both points follows.
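For illustration, a minimal sketch with named handlers and the error events URLStream dispatches (the handler names and bodies are mine):

var stream:URLStream = new URLStream();
stream.addEventListener(Event.COMPLETE, onComplete);
stream.addEventListener(ProgressEvent.PROGRESS, onProgress);
stream.addEventListener(IOErrorEvent.IO_ERROR, onIOError);
stream.addEventListener(SecurityErrorEvent.SECURITY_ERROR, onSecurityError);
stream.addEventListener(HTTPStatusEvent.HTTP_STATUS, onHttpStatus);
stream.load(new URLRequest(contentURL));

function onComplete(e:Event):void { /* flush the last bytes, close the stream */ }
function onProgress(e:ProgressEvent):void { /* read stream.bytesAvailable to disk */ }
function onIOError(e:IOErrorEvent):void { trace("IO error: " + e.text); }
function onSecurityError(e:SecurityErrorEvent):void { trace("Security error: " + e.text); }
function onHttpStatus(e:HTTPStatusEvent):void { trace("HTTP status: " + e.status); }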
I am quite new to Windows Phone 8 and I don't know all the life-cycle methods and when each is called.
My problem is the following: I have a page that loads some data from disk, and when the user exits (or suspends) the program, the data should be saved. As far as I can tell, Page doesn't have an OnSuspending method, only some OnNavigatingFrom methods, but those are not called when you just exit the program. So I read that I should use OnSuspending in my App.xaml.cs, but that class doesn't have the data and shouldn't have it, except perhaps for OnSuspending. I don't know how to get the data from my page in the OnSuspending method.
The OnSuspending event is quite fragile, and you cannot expect it to run long enough to save state that takes a long time to write (though it depends on how long your save takes). It doesn't even get triggered when you hit the home key while closing the app. If you really want an easy way, just register a background task: while your app is in the background the state can be saved, and when you open the app again everything is in place.
There are certain constraints with background tasks as well (you can't do heavy lifting, etc.). Here's a link you could use, and a registration sketch follows below.
https://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh977056.aspx
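For reference, registration looks roughly like this (a minimal sketch; the task name and entry point are placeholders, and a TimeTrigger is just one possible trigger):

using System.Threading.Tasks;
using Windows.ApplicationModel.Background;

private async Task RegisterSaveTaskAsync()
{
    // Required before registering background tasks.
    await BackgroundExecutionManager.RequestAccessAsync();

    var builder = new BackgroundTaskBuilder
    {
        Name = "SaveStateTask",                      // placeholder name
        TaskEntryPoint = "MyApp.Tasks.SaveStateTask" // placeholder IBackgroundTask class
    };
    builder.SetTrigger(new TimeTrigger(15, false));  // 15 minutes is the minimum interval
    builder.Register();
}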
Implement an observer pattern (i.e. pub/sub) for your view-models to subscribe to, for when your app is being suspended.
Your app handles the Suspending event; within your app's handler for it, publish a message for your view-models to respond to.
You can use an EventAggregator or MessageBus (that I wrote).
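A minimal sketch of that approach; the MessageBus below is a hypothetical bare-bones implementation (any event aggregator works the same way):

using System;
using System.Collections.Generic;
using Windows.ApplicationModel;

public class AppSuspendingMessage { } // marker message published on suspension

public static class MessageBus
{
    private static readonly Dictionary<Type, List<Action<object>>> subscribers =
        new Dictionary<Type, List<Action<object>>>();

    public static void Subscribe<T>(Action<T> handler)
    {
        List<Action<object>> list;
        if (!subscribers.TryGetValue(typeof(T), out list))
            subscribers[typeof(T)] = list = new List<Action<object>>();
        list.Add(o => handler((T)o));
    }

    public static void Publish<T>(T message)
    {
        List<Action<object>> list;
        if (subscribers.TryGetValue(typeof(T), out list))
            foreach (var handler in list) handler(message);
    }
}

// In App.xaml.cs (wired up with: this.Suspending += OnSuspending;)
private void OnSuspending(object sender, SuspendingEventArgs e)
{
    var deferral = e.SuspendingOperation.GetDeferral();
    MessageBus.Publish(new AppSuspendingMessage()); // view-models save their data now
    deferral.Complete();
}

// In the page's view-model: subscribe once, save when notified.
// MessageBus.Subscribe<AppSuspendingMessage>(m => SaveMyPageData());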