This time I have two questions.
1. During development I was working with some free Revit files smaller than 200 MB, and they were taking more than 20 seconds to load. How can I optimize this process so that the viewer works smoothly with large files (like 2-5 GB)?
My second question is a bit like a movie story.
2. Let's say there is a hotel model with some vaults, and I want to hide this vault area (which includes AC ducts, electrical wiring and some other utilities) from everyone except a certain team.
How can I achieve this type of functionality inside the Forge Viewer?
You can load only selected elements with this approach (see the related answer below). It can be a two-barrel shot: it optimizes model loading and hides certain elements from the current view.
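A minimal sketch of that idea, using the ids option of viewer.loadDocumentNode that the related answer below links to. The /api/allowed-dbids endpoint and the way the permitted dbIds are derived (e.g. everything except the vault area for non-privileged users) are hypothetical and would live in your own backend; the snippet assumes Autodesk.Viewing.Initializer has already run and viewer is a started GuiViewer3D:

// Hypothetical backend call: returns the dbIds this user is allowed to see,
// e.g. everything except the vault area (AC ducts, wiring, ...) for regular users.
const allowedIds = await fetch('/api/allowed-dbids?user=' + encodeURIComponent(userId))
    .then((res) => res.json());

Autodesk.Viewing.Document.load('urn:' + modelUrn, (doc) => {
    const viewable = doc.getRoot().getDefaultGeometry();
    // The `ids` option makes the viewer load geometry only for the listed dbIds,
    // so excluded elements are never even downloaded: faster loading and "hidden" at once.
    viewer.loadDocumentNode(doc, viewable, { ids: allowedIds });
}, (code, msg) => console.error('Could not load document:', code, msg));

Note that this only filters what the client loads; if the hidden data is truly confidential, the allowed dbId list (or a separate, stripped-down model) should be enforced on the server side.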
Related
I've enabled two section planes in an NWD file so that it only shows one floor level in the view. However, when I load the model in the Forge viewer, the whole building is displayed - the section planes in NWD did not carry over to the translated model in the Forge viewer. Is this not possible to do?
My hope is to minimize the model loading time by cutting the whole building to individual floor levels using the section planes in the NWD file. The Forge viewer can then load geometry for one floor only. Can this be done?
I'm afraid that Navisworks section planes are not currently supported by the Forge Model Derivative service.
To improve the loading times, I would suggest one of the following:
use the new, heavily optimized output format SVF2 (a request sketch follows after this list): https://forge.autodesk.com/blog/call-private-beta-testers-svf-performance-enhancements
only load a subset of the objects using the ids option in viewer.loadDocumentNode: https://forge.autodesk.com/blog/minimizing-viewer-workloads-loading-models-partially-selected-components-and-features-only
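For the first suggestion, here is a rough sketch of requesting SVF2 output when submitting the Model Derivative translation job. The payload reflects the current 'svf2' output type; during the private beta linked above the exact request and viewer initialization options may differ, so treat this as an assumption to verify against the documentation:

// Assumes you already have a 2-legged access token and the base64-encoded URN of the uploaded file.
const response = await fetch('https://developer.api.autodesk.com/modelderivative/v2/designdata/job', {
    method: 'POST',
    headers: {
        'Authorization': 'Bearer ' + accessToken,
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        input: { urn: modelUrn },
        output: { formats: [{ type: 'svf2', views: ['2d', '3d'] }] }   // 'svf' would be the classic format
    })
});
console.log('Translation job submitted:', await response.json());
// When loading the result, the viewer is typically initialized with
// { env: 'AutodeskProduction2', api: 'streamingV2' } to consume SVF2.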
I'm looking for best practices and performance-guided recommendations for recomputing a model's volume when it is missing from the source file. This is in the context of a web application I am building that enables:
Uploading 3D models in a variety of file formats
Interacting with these models using the Autodesk Viewer
Displaying mass properties, e.g. volume and surface area, alongside the viewer (the subject of this post)
Background
Some file formats have very reliable volume information that is computed and written to the file by the authoring application. For these files, we can access volume as a property via the Autodesk Viewer.
Other formats, however, do not carry volume information - at least not in a manner that is openly accessible using tools other than the authoring application (prime example here is SolidWorks). This leaves us with a giant gap to fill - we need to recompute the model's volume using what's in the file.
Known Workarounds and Options
Autodesk published a blog post detailing an approach for approximating model volume using the triangles of the model inside the viewer. I think it's an ideal solution for use cases that can afford to trade accuracy for a bump in performance, and it centers everything in the viewer, making development and subsequent maintenance simpler. This application, however, cannot rely on such approximations. I'm left reviewing options for leveraging the Autodesk Design Automation API to:
Spin up an instance of Inventor
Load the model file
Rely on iLogic to trigger a re-computation of the model's part properties (perhaps like this?)
Push that data back to my web application
Where I Need Help
My understanding is that an AppBundle and Activity are defined ahead of time and then every uploaded model would be submitted as a work item.
I am hoping for guidance in:
whether this is the only approach or whether there are other options worth considering
how best to orchestrate the end-to-end process from an order of operations/workflow standpoint to maximize performance
Current Thinking
For example, I'm thinking that my first step after the source file is uploaded is to immediately initialize two parallel processes: the first to translate the source file for the viewer, the second to spin up Inventor and trigger the related downstream process to get volume.
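A rough sketch of that first step; the two helper functions are hypothetical wrappers around the Model Derivative job endpoint and the Design Automation work item endpoint:

// Hypothetical orchestration: start both jobs in parallel right after the upload completes.
const [translationJob, volumeWorkItem] = await Promise.all([
    submitTranslationJob(objectUrn),   // wraps POST /modelderivative/v2/designdata/job
    submitVolumeWorkItem(objectUrn)    // wraps POST /da/us-east/v3/workitems (see the answer below)
]);
// Each job is then polled independently; the viewer can open the model as soon as the
// translation finishes, and the volume display fills in once the work item completes.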
The other option I'm considering is handling all of the work in Inventor and pushing out an SVF file to the viewer that is enriched with volume data. The advantage of this approach is that my frontend will have only one source for volume data (it will be in the enriched SVF whether or not it was supplied in the original file).
In an ideal world I'd be able to invoke the Design Automation API only when volume data is missing from the source file, but I'd only know that after translating the file and bringing it back to the viewer. Given that many of our files are created in SolidWorks and other high-end proprietary CAD platforms, my working hypothesis is that we'll need to fill in volume gaps more often than not.
Your understanding is correct:
an appbundle is simply a collection of files (binaries, data) encapsulating a specific Inventor/Revit/3ds Max/AutoCAD plugin
an activity is a kind of job template specifying which application should be invoked, which appbundle should be loaded into the application, what inputs will be provided to the job, and what outputs will be generated
a work item is then a specific instance of a job, binding the activity inputs and outputs to specific URLs
There is currently no other way to access the Design Automation functionality than through these three types of entities.
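As an illustration of the third entity, here is a rough sketch of submitting a work item from a Node.js backend. The activity id 'MyNickname.ComputeVolume+prod' and the argument names 'inputFile' and 'result' are hypothetical; they must match whatever you defined when creating your own activity:

const response = await fetch('https://developer.api.autodesk.com/da/us-east/v3/workitems', {
    method: 'POST',
    headers: {
        'Authorization': 'Bearer ' + accessToken,
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        activityId: 'MyNickname.ComputeVolume+prod',        // hypothetical: owner.activity+alias
        arguments: {
            inputFile: { url: signedInputUrl },              // signed URL to the uploaded model
            result: { url: signedOutputUrl, verb: 'put' }    // where the engine should upload the computed volumes
        }
    })
});
const workItem = await response.json();
// Poll GET /da/us-east/v3/workitems/{id} until the status reports success or failure.
console.log('Work item submitted:', workItem.id, workItem.status);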
I would suggest the following:
wherever possible, use Design Automation for Inventor to compute the precise areas/volumes
for file formats that cannot be imported into Inventor or any other Design Automation engine, you could use tools like https://github.com/petrbroz/forge-convert-utils to parse the SVF and compute (a very rough estimate of) the surface area/volume from the triangle meshes; however, this will be quite computationally expensive and imprecise
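For the second option, the core of the computation is a signed-tetrahedron sum over the triangles. The sketch below assumes you have already extracted flat position and index arrays per mesh (for example with forge-convert-utils or from the viewer; that extraction code is omitted), and that each mesh is closed and consistently wound. Tessellated curved surfaces will only ever yield an approximation:

// positions: Float32Array of [x0, y0, z0, x1, y1, z1, ...]
// indices:   Uint16Array/Uint32Array of triangle vertex indices
function meshVolume(positions, indices) {
    let volume = 0;
    for (let i = 0; i < indices.length; i += 3) {
        const a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
        const ax = positions[a], ay = positions[a + 1], az = positions[a + 2];
        const bx = positions[b], by = positions[b + 1], bz = positions[b + 2];
        const cx = positions[c], cy = positions[c + 1], cz = positions[c + 2];
        // Signed volume of the tetrahedron formed by the origin and triangle (A, B, C).
        volume += (ax * (by * cz - bz * cy) + ay * (bz * cx - bx * cz) + az * (bx * cy - by * cx)) / 6;
    }
    return Math.abs(volume); // in model units; sum per-mesh results for the whole model
}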
I am working on a WYSIWYG editor for CesiumJS content.
The user will be able to create many points, lines and other graphics, connect them according to definable relations, and group them into separate groups.
Now I am wondering what the best practices are in terms of performance.
At the moment I create one PointPrimitiveCollection for each Group
and then add points:
group.points = scene.primitives.add(new Cesium.PointPrimitiveCollection());
and then
group.points.add({
    position : cartesian,
    ...
});
for each new point.
Polygons are created using:
network.hull_polygon = viewer.entities.add({
    name : 'xxx',
    polygon : {
        hierarchy : Cesium.Cartesian3.fromDegreesArray(points_array),
        material : color,
        ...
    }
});
polylines similarly.
Now, since the objects can also be dragged around / animated, I was wondering where Cesium's Entity logic would come in?
Thanks for any help!
Cesium's Entity logic is useful primarily for objects that move along a known path over time, for example the flight plan of an aircraft in the future, or a GPS recording of the route taken by a vehicle in the past. Such routes can be loaded into the Entity system (often via CZML), and the user can run the simulation time forwards and backwards at arbitrary speeds, to review the routes of all the vehicles. The Entity system owns the logic for updating graphics primitive positions based on simulation time changes.
Entities are also often used as a quick way to make some disparate graphics primitives associate with each other. For example, a polygon, a point, and a label can all be created as a single Entity even if they are three separate graphics primitives at the same location. This saves a bit of effort on the part of the application developer, and doesn't hurt performance too much since the properties involved are all marked as constants, so the Entity layer knows not to update them with simulation time.
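For example, a minimal illustration of that pattern (positions and styling are arbitrary):

// One Entity carrying both a point and a label at the same fixed position.
const marker = viewer.entities.add({
    position: Cesium.Cartesian3.fromDegrees(-75.59777, 40.03883),
    point: { pixelSize: 10, color: Cesium.Color.YELLOW },
    label: { text: 'Station 1', pixelOffset: new Cesium.Cartesian2(0, -20) }
});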
But, it sounds like you may have a case where paths are not known in advance. For things like user interactive edits or real-time telemetry being received, the Entity system can't know what's coming up next, so its whole system for updating positions from simulation times is not doing you any good. In that case it may be better to skip the Entities, and deal exclusively with graphics primitives for this. This would mean you need to write your own update function to alter graphics positions as new information is being received, similar to the Entity layer's update functions, but based on your own live inputs instead of recorded paths.
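Here is a minimal sketch of such an update path for dragging a point primitive, with no Entities involved. It assumes points were added to a PointPrimitiveCollection as in the question and that picking against the ellipsoid is good enough for your use case (with terrain or 3D Tiles you would use scene.pickPosition instead):

const handler = new Cesium.ScreenSpaceEventHandler(viewer.scene.canvas);
let draggedPoint = null;

// Start dragging when a point primitive is picked.
handler.setInputAction((movement) => {
    const picked = viewer.scene.pick(movement.position);
    if (Cesium.defined(picked) && picked.primitive instanceof Cesium.PointPrimitive) {
        draggedPoint = picked.primitive;
        viewer.scene.screenSpaceCameraController.enableRotate = false; // keep the camera still while dragging
    }
}, Cesium.ScreenSpaceEventType.LEFT_DOWN);

// Update the graphics primitive directly as the mouse moves.
handler.setInputAction((movement) => {
    if (draggedPoint) {
        const cartesian = viewer.camera.pickEllipsoid(movement.endPosition, viewer.scene.globe.ellipsoid);
        if (Cesium.defined(cartesian)) {
            draggedPoint.position = cartesian;
        }
    }
}, Cesium.ScreenSpaceEventType.MOUSE_MOVE);

// Stop dragging and restore camera controls.
handler.setInputAction(() => {
    draggedPoint = null;
    viewer.scene.screenSpaceCameraController.enableRotate = true;
}, Cesium.ScreenSpaceEventType.LEFT_UP);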
Note that the public "Sandcastle" demos only include Entity demos. But, if you download and build the source for Cesium and run Sandcastle locally from a dev build, a separate tab appears in the Sandcastle Gallery called Development that shows a whole set of demos based on graphics primitives as opposed to Entities. This can be useful for seeing examples of how to control things at this layer.
Hopefully this is helpful in understanding how the different layers of Cesium interact.
Is there a way to reduce mesh polygons?
As an example project I use the TGA model provided by Autodesk. (https://knowledge.autodesk.com/support/revit-products/getting-started/caas/CloudHelp/cloudhelp/2019/ENU/Revit-GetStarted/files/GUID-61EF2F22-3A1F-4317-B925-1E85F138BE88-htm.html rme_advanced_sample_project.rvt)
If you add all instances to the scene you get a polygon count of about 1.3M.
For a desktop computer this is no problem at all. The model is downloaded in about 1 minute and displayed completely.
For my iPhone (iPhone 8) this is clearly too much.
As soon as I start the AR scene and download the model, memory usage rises to over 1.2 GB (from 0.15 GB before) and the app crashes.
Even if you exclude some instances (walls, ceilings, etc.) before processing the scene to display only the technical building equipment, the model is still too big for the iPhone.
Are there possibilities to reduce the mesh with the AR-VR-Toolkit API, or do I have to do this manually in Revit?
Edit: 27.06.18
Here is the model I want to display in AR (tris: 2.8M, verts: 2.4M).
Steps:
1) Uploaded the original .rvt file (70 MB) to my bucket.
2) Translated the file via Forge.
3) Created a scene with the AR-VR-Toolkit API.
4) Processed the scene with the AR-VR-Toolkit API.
5) Downloaded the scene to Unity.
6) Created a prefab.
The meshes are way too detailed. The graphics would not change much if I reduced the vertex count to 10-15%.
In Unity I can use assets like Mesh Simplify (https://assetstore.unity.com/packages/tools/modeling/mesh-simplify-43658) to reduce the count.
Another way is to export the model to e.g. 3ds Max or Maya and reduce the count there.
But I want to do this automatically.
My question is: is there a way to do this with Forge?
My colleagues who are experts in this area are on vacation now, so let me try to answer your question first; they may add more information later.
Unfortunately, the answer is no, as far as I know. For the Forge AR|VR Toolkit service, I remember that some mesh reduction is done automatically on the server side if it detects that the client device is a HoloLens or DAQRI; you can see that in https://github.com/wallabyway/ARVRToolkit/blob/master/unity-src/ARVRToolkit/Assets/Forge/ARKit/RequestQueue.cs#L155. But that's it: the toolkit does not provide any API to reduce the mesh, and there is no other API within Forge that can do it either.
As you already know, you may need to do the mesh reduction in some other product, like 3ds Max; that's the only way I can think of at the moment.
My colleagues may have more comments on this when they are back.
I'm using Autodesk Forge to integrate with our remodeling tool. In particular, I need to count objects of different families and types and determine which room they actually belong to. I use the Model Derivative API for this purpose. To keep the room/area information I convert .rvt files to .nwc files, as suggested here. However, when I retrieve data with GET /modelderivative/v2/designdata/{urn}/metadata/{guid}/properties I face the following problems from time to time:
Room information sometimes disappears from objects for some reason
Objects disappear from the result data for some reason (but they seem to exist when I browse them in A360)
I have no idea what the reason for this could be.
I have no explanation for the disappearance of room data or objects for you.
If you can provide a reproducible case demonstrating that, I will gladly pass it on to the development team for analysis.
If you are interested in an immediate, reliable solution and full control, which I assume is the case, I would suggest following the second bullet item in the advice provided by Eason in the previous answer that you refer to above:
Extract all the room information and object relationships you are interested in via the Revit API, store that data somewhere yourself, and use it later on wherever you like to your heart's content.
Then you will be completely safe and independent of all other components and their unpredictable behaviour.
If the only information that you need is the room containing each family instance, I can even implement a suitable Revit add-in for you.
Another suggestion that might help, if that is indeed the data you require: determine that information in a Revit add-in and attach it to each family instance in your own personal shared parameter. That will ensure that it remains intact through the translation process. AFAIK, all shared parameter data is retained, independently of other behaviour.
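If you go that route, reading the value back on the web side is then a standard property lookup. A small sketch, assuming a shared parameter hypothetically named 'RoomName' (use whatever name your add-in defines), using the viewer's getProperties call; the same value would also show up in the Model Derivative properties endpoint you already query:

// dbId: the viewer id of a family instance; 'RoomName' is a hypothetical shared parameter name.
viewer.getProperties(dbId, (result) => {
    const prop = result.properties.find((p) => p.displayName === 'RoomName');
    console.log('Room for element', dbId, ':', prop ? prop.displayValue : '(not set)');
}, (err) => console.error(err));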