I'm using Autodesk Forge to integrate with our remodeling tool. In particular, I need to count objects of different families and types and determine which room they actually belong to. I use the Model Derivative API for this purpose. To keep the room/area information I convert .rvt files to .nwc files, as suggested here. However, when I retrieve data with GET /modelderivative/v2/designdata/{urn}/metadata/{guid}/properties I face the following problems from time to time:
Room information sometimes disappears from Objects for some reason
Objects disappear from result data for some reason (but they seem to exist when I browse them in A360)
I have no idea what could be the reason for this.
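For reference, this is roughly how I retrieve the properties (a minimal sketch; the URN, GUID, and token are placeholders, and the exact property group containing the room may differ per model):

```typescript
// Minimal sketch of the properties request; urn, guid and token are placeholders.
const urn = '<base64-encoded-urn>';
const guid = '<metadata-guid>';
const token = '<access-token>';

const res = await fetch(
  `https://developer.api.autodesk.com/modelderivative/v2/designdata/${urn}/metadata/${guid}/properties`,
  { headers: { Authorization: `Bearer ${token}` } }
);
const body = await res.json();

// Count instances and inspect room data; where the room property ends up
// (e.g. under a "Constraints" group) is an assumption and varies per model.
for (const obj of body.data?.collection ?? []) {
  console.log(obj.objectid, obj.name, obj.properties?.Constraints?.Room);
}
```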
I'm afraid I have no explanation for you for the disappearance of room data or objects.
If you can provide a reproducible case demonstrating that, I will gladly pass it on to the development team for analysis.
If you are interested in an immediate reliable solution and full control, which I assume is the case, I would suggest following the second bullet item in the advice provided by Eason in the previous answer that you refer to above:
Extract all the room information and object relationships you are interested in via the Revit API, store that data somewhere yourself, and use it later on wherever you like to your heart's content.
Then you will be completely safe and independent of all other components and their unpredictable behaviour.
If the only information that you need is the room containing each family instance, I can even implement a suitable Revit add-in for you.
Another suggestion that might help, if that is indeed the data you require: determine that information in a Revit add-in and attach it to each family instance in your own personal shared parameter. That will ensure it remains intact through the translation process. As far as I know, all shared parameter data is retained, independent of other behaviour.
We are working with a DDD framework in our company. We are changing a lot of core things in our API because we are still growing and are still in our infancy when it comes to designing a good API.
The problem is that there are already a lot of flows in the same API which are not compatible with each other.
We have an order service and a product service.
Normally when the product model changes radically, there is a major impact on the order model.
Now I'm here listing all kinds of red flags that should never happen, but I simply don't have control over how it needs to be done. That is pretty much management pushing for a fast solution, and it leads to bad shortcuts...
The way it has been decided to deal with the fact that Order needs to adapt constantly is a property on the order line called productConfiguration. This is in the contract of the service and is translated directly, as-is, into the DB tables. It contains the product model that can change, in JSON format.
To me it's very clear that this is very dangerous, because in the end you need to turn this JSON into an actual object. So you just move the restrictions from the service contract into code logic, which makes it worse because it will only cause an issue at run time...
Are there other major issues I don't know about yet, so I can bring them to the table to avoid this way of working?
Using strings that are converted directly into DB tables is a bad design not just in your opinion. It's an opinion shared by a lot of us.
What do you do when an object changes? For example, the new one requires an attribute that the old one didn't have. How do you manage this situation? I suppose you have to change everything, including the objects stored before, or build a kind of transformation layer where you translate objects from the old design to the new one. A lot of extra work.
Anyway, given that the two domains are separated, what is the information that changes so much that it requires such a design? For most things, you could know at the beginning what you need for your part of the domain. For the rest, I would prefer to have a kind of service that, given an ID, gives you the information from the other domain. You can change this service (the payload could also be a JSON object, if nothing more than display is required) and adapt it to your/their needs. But it's just a solution that comes from my limited knowledge of your processes.
Other ways are also possible, as long as you can always understand which version of the design you are using.
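For example, here is a minimal sketch of what I mean by always knowing which version you are dealing with (TypeScript; the productConfiguration fields are invented for illustration):

```typescript
// Hypothetical sketch: tag the stored JSON with a schema version and "upcast"
// old payloads to the current shape when deserializing them.
interface ProductConfigurationV1 { schemaVersion: 1; color: string; }
interface ProductConfigurationV2 { schemaVersion: 2; color: string; material: string; }

type StoredProductConfiguration = ProductConfigurationV1 | ProductConfigurationV2;

function upcast(rawJson: string): ProductConfigurationV2 {
  const parsed = JSON.parse(rawJson) as StoredProductConfiguration;
  switch (parsed.schemaVersion) {
    case 1:
      // Old orders did not know about "material"; choose an explicit default.
      return { schemaVersion: 2, color: parsed.color, material: 'unspecified' };
    case 2:
      return parsed;
    default:
      throw new Error('Unknown productConfiguration schema version');
  }
}
```

This way, old order lines stay readable without rewriting the stored data, and a failure to upcast is at least an explicit error rather than a silent mismatch at run time.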
Background
We currently use Autodesk Forge to display 2D models of floor plans. Users can upload new floor plans (uploaded to OSS, then translated) to replace existing ones; the new plans may include new objects in a room or slightly different positioning of existing objects. Currently, we grab certain objects' dbIds via viewer.getSelection() to "bind" (using the term loosely here) some external data to the objects and perform certain interactions within our web app. We are using .DWG files.
Issue
When uploading a new floor plan that removes an object, the dbIds of the other objects shift, and our external binding then becomes inaccurate.
Question
• Are there any IDs that persist between the uploads/translations?
*We do not maintain control of the .dwg files prior to their upload, so adding attributes to the drawing before it's translated likely won't be viable for our particular case - but if that's the best (or only) approach, I would like to know so I can propose it as a possible solution to my team.
Example
Let's say there's a simple square room with 5 chairs, and it's rendered and visible in the viewer since it's been uploaded as room1 (object key). We identify 3 chairs by their dbIds and save them, so a user can jump right to the object in question, and we put a label on it. Then someone comes along, removes one of the chairs, and uploads/translates the document again with the same object key. Now the dbIds have changed and are assigned to different chairs, and our binding is off as a result. The question is: is there something other than dbId that persists between the different renderings? Or is there something I haven't considered that would be a better approach to keeping the binding accurate between uploads?
EDIT: The same scenario happens with element IDs. An interesting finding is that the element ID and dbId are tightly coupled (meaning if Object A's dbId is 3 and its element ID is 6E, then after a deletion happens, a new object will get 3 and 6E respectively). Also, I believe the designers creating the AutoCAD files are drawing these as polylines, if that makes a difference.
Contingency Plan
If there isn't an ID of some sort that persists, I'm considering storing the coordinates of the object we want to bind to, and later finding the object closest to those x/y/z coords. Is that a possibility, to find an object close to or overlapping given x/y/z coordinates?
Use external id.
Useful blog: https://forge.autodesk.com/blog/get-dbid-externalid
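A minimal sketch of the idea (Viewer v7; persisting the externalId is left to your own code):

```typescript
// Persist externalIds instead of dbIds, and map them back to dbIds after each
// new translation of the floor plan is loaded.

// When the user selects an object, store its externalId rather than its dbId.
function storeSelection(viewer: Autodesk.Viewing.GuiViewer3D, save: (externalId: string) => void) {
  const [dbId] = viewer.getSelection();
  viewer.model.getProperties(dbId, (props) => save(props.externalId));
}

// After loading a re-translated model, resolve a stored externalId to the new dbId.
function resolveDbId(viewer: Autodesk.Viewing.GuiViewer3D, externalId: string, done: (dbId: number) => void) {
  viewer.model.getExternalIdMapping((mapping) => done(mapping[externalId]));
}
```

As far as I know, for DWG-based models the externalId corresponds to the entity handle in the drawing, so it should stay stable across re-uploads as long as the entity itself is not deleted and recreated.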
When converting IFC files to SVF with the Autodesk Model Derivative API, I would like to start using the "modern" converter instead of the "legacy" one. However, when using Forge Viewer, the models that were created with the modern conversion method end up with a different position and/or orientation than the legacy ones.
I have tried to get to the bottom of what the difference consists of, and it seems to at least have something to do with the TrueNorth property of the IfcRepresentationContext in the IfcProject. Also, the ObjectPlacement of the IfcSite is probably part of it. But I haven't found a combination of properties to reliably compensate for the difference for all my IFC files. The IFCs typically come from Revit, but may have various origins.
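For context, this is roughly the kind of compensation I've been experimenting with on the viewer side (a sketch only; doc, viewable and trueNorthRadians are placeholders, and whether a single rotation about Z is sufficient is exactly what I can't confirm):

```typescript
// Sketch: rotate the model about Z by the TrueNorth angle at load time.
// trueNorthRadians would be derived from the IfcRepresentationContext's
// TrueNorth direction; doc/viewable come from the usual Document.load flow.
const placementTransform = new THREE.Matrix4().makeRotationZ(trueNorthRadians);

viewer.loadDocumentNode(doc, viewable, {
  placementTransform,       // applied on top of the translated geometry
  keepCurrentModels: true,  // so the result can be compared with already-loaded models
});
```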
Many of our customers have projects with several existing models, so ideally, any new models should align with the existing ones even if the conversion method has changed.
I submitted an issue, NWLMV-164, on your behalf to ask our engineering team to allocate time to investigate it. It can now be traced back to NWLMV-88. As a result, it will take some time to explore the problem and find a solution. Please take note of both NWLMV-prefixed IDs. You're welcome to track updates in the future by emailing forge[DOT]help[AT]autodesk[DOT]com and quoting the NWLMV-prefixed IDs. We appreciate your understanding and patience.
I'm looking for best practices and performance-guided recommendations for recomputing a model's volume when it's missing from the source file. This is in the context of a web application I am working to build that enables:
Uploading 3D models in a variety of file formats
Interacting with these models using the AutoDesk Viewer
Displaying mass properties, e.g. volume and surface area, alongside the viewer (the subject of this post)
Background
Some file formats have very reliable volume information that is computed and written to the file by the authoring application. For these files, we can access volume as a property via AutoDesk Viewer.
Other formats, however, do not carry volume information - at least not in a manner that is openly accessible using tools other than the authoring application (prime example here is SolidWorks). This leaves us with a giant gap to fill - we need to recompute the model's volume using what's in the file.
Known Workarounds and Options
Autodesk published a blog post detailing an approach for approximating model volume using the triangles of the model inside the viewer. I think it's an ideal solution for use cases that can afford to trade accuracy for a bump in performance, and it centers everything in the viewer, making development and subsequent maintenance simpler. This application, however, cannot rely on such approximations. I'm left reviewing options for leveraging the Autodesk Design Automation API to:
Spin up an instance of Inventor
Load the model file
Rely on iLogic to trigger a re-computation of the model's part properties (perhaps like this?)
Push that data back to my web application
Where I Need Help
My understanding is that an AppBundle and Activity are defined ahead of time and then every uploaded model would be submitted as a work item.
I am hoping for guidance in:
whether this is the only approach or whether there are other options worth considering
how best to orchestrate the end-to-end process from an order of operations/workflow standpoint to maximize performance
Current Thinking
For example, I'm thinking that my first step after the source file is uploaded is to immediately initialize two parallel processes: the first to translate the source file for the viewer, the second to spin up Inventor and trigger the related downstream process to get volume.
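Roughly what I have in mind for that first step (a sketch; translateForViewer, computeVolumeWithDesignAutomation and saveResults are placeholder names for the two pipelines and our persistence layer):

```typescript
// Sketch of the parallel kickoff: start the viewer translation and the
// Design Automation volume job as soon as the upload finishes.
// translateForViewer() would POST a Model Derivative translation job;
// computeVolumeWithDesignAutomation() would submit a Design Automation work item.
async function onSourceFileUploaded(objectUrn: string): Promise<void> {
  const [translation, volume] = await Promise.all([
    translateForViewer(objectUrn),
    computeVolumeWithDesignAutomation(objectUrn),
  ]);
  // Persist the volume alongside the derivative URN so the frontend can show
  // both once the viewer manifest is ready.
  await saveResults(objectUrn, translation, volume);
}
```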
The other option I'm considering is handling all of the work in Inventor and pushing out an SVF file to the viewer that's enriched with volume data. The advantage of this approach is that my frontend will have only one source for volume data (it will be in the enriched SVF whether or not it was supplied in the original file).
In an ideal world I'd be able to invoke the Design Automation API only when volume data is missing from the source file - but I'd only know that after translating the file and bringing it back to the viewer. Given that many of our files are created in SolidWorks and other high-end proprietary CAD platforms, my working hypothesis is that we'll need to fill in volume gaps more often than not.
Your understanding is correct:
appbundle is simply a collection of files (binaries, data) encapsulating a specific Inventor/Revit/3dsMax/AutoCad plugin
activity is a kind of job template specifying which application should be invoked, which appbundle should be loaded into the application, what inputs will be provided to the job, and what outputs will be generated
work item is then a specific instance of a job, binding the activity inputs and outputs to specific URLs
There is currently no other way to access the Design Automation functionality than through these 3 types of entities.
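As an illustration, submitting a work item against an already-defined activity is a single REST call (a sketch; the activity ID and the argument names are placeholders that must match the parameters defined on your own activity):

```typescript
// Sketch: submit a work item for a predefined activity (Design Automation v3).
// "inputModel" and "outputJson" are placeholder parameter names.
const response = await fetch('https://developer.api.autodesk.com/da/us-east/v3/workitems', {
  method: 'POST',
  headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    activityId: 'YourNickname.ComputeVolumeActivity+prod',
    arguments: {
      inputModel: { url: 'https://.../signed-url-to-the-uploaded-model' },
      outputJson: { verb: 'put', url: 'https://.../signed-url-for-the-results' },
    },
  }),
});
const { id } = await response.json();
// Poll GET /da/us-east/v3/workitems/{id} until the status is no longer "pending"/"inprogress".
```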
I would suggest the following:
wherever possible, use Design Automation for Inventor to compute the precise areas/volumes
for file formats that cannot be imported into Inventor or any other Design Automation engine, you could use tools like https://github.com/petrbroz/forge-convert-utils to parse the SVF and compute (a very rough estimate of) the volume/surface area from the triangular meshes; however, this will be quite computationally expensive and imprecise
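If you do go the mesh route, the volume estimate itself is just a sum of signed tetrahedron volumes over the triangles; a sketch is below (extracting the positions/indices from the parsed SVF, e.g. with forge-convert-utils, is left out):

```typescript
// Signed-volume estimate for a triangle mesh.
// positions: flat [x0, y0, z0, x1, y1, z1, ...]; indices: triangle vertex indices.
// Only meaningful for reasonably watertight meshes, and the SVF tessellation
// makes it an approximation in any case.
function meshVolume(positions: Float32Array, indices: Uint32Array): number {
  let volume = 0;
  for (let i = 0; i < indices.length; i += 3) {
    const [a, b, c] = [indices[i] * 3, indices[i + 1] * 3, indices[i + 2] * 3];
    const [ax, ay, az] = [positions[a], positions[a + 1], positions[a + 2]];
    const [bx, by, bz] = [positions[b], positions[b + 1], positions[b + 2]];
    const [cx, cy, cz] = [positions[c], positions[c + 1], positions[c + 2]];
    // Signed volume of the tetrahedron (origin, A, B, C) = det(A, B, C) / 6.
    volume += (ax * (by * cz - bz * cy) - ay * (bx * cz - bz * cx) + az * (bx * cy - by * cx)) / 6;
  }
  return Math.abs(volume);
}
```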
I am building a prototype with a single application providing models for the Forge Viewer.
Each user should have their own model and not be able to see the models of others.
For this I would consider "separation in application architecture" as described here: https://forge.autodesk.com/blog/accounts-apps-keys-and-ids (manual app creation per client is not an option).
If bucket keys are generated in a way that cannot be guessed, we could consider separation per bucket sufficient (even if not fully secure against a "lucky" hit).
I tried to use the scope viewables:read, which, according to the documentation, will "only be able to read the end user's viewable data".
I cannot list buckets (GET buckets), as expected, but I can access a bucket by name (GET buckets/:bucketKey/details).
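For reference, this is roughly how I obtain the token I'm testing with (a sketch; the client credentials are placeholders):

```typescript
// Sketch: request a 2-legged token restricted to viewables:read.
const params = new URLSearchParams({
  client_id: '<client-id>',
  client_secret: '<client-secret>',
  grant_type: 'client_credentials',
  scope: 'viewables:read',
});

const res = await fetch('https://developer.api.autodesk.com/authentication/v1/authenticate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: params,
});
const { access_token } = await res.json();

// With this token, GET /oss/v2/buckets is rejected as expected,
// but GET /oss/v2/buckets/:bucketKey/details still returns the bucket details.
```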
Is viewables:read safe enough for what I described, or should I expect some other endpoints to be readable that I wouldn't expect, like this one?
Is viewables:read the option discussed in this article: https://forge.autodesk.com/blog/securing-your-forge-viewer-token-behind-proxy ? Or does the Forge roadmap include some finer-grained access control?
Thank you,