How to fetch the Layer States from 2D model (dxf) in ForgeViewer - autodesk-forge

I have a DXF file that I have uploaded to OSS and translated to SVF.
How can I fetch the layer states in my 2D model using the Forge Viewer?
In AutoCAD, I have these layer states (screenshot of the AutoCAD layer states):
Namely:
F1 Component Plan
F2 Electrical Plan
F3 Bracket Plan
But in the Forge Viewer, I can't find those layer states (grouping).

I'm afraid this type of information may not be available. Generally, the Forge Model Derivative service always tries to extract a "balanced" amount of information from design files (enough for previewing purposes, but not too much, to make sure the output keeps a reasonable size).
When you load your DXF file into the viewer, you can try querying its metadata using viewer.model.getData(), and see if you find the layer state there. I did try that with one of my testing DXFs, and didn't see this information there.
Finally, if you really do need the layer state data, there's another option - you could use Design Automation for AutoCAD, with a custom AutoCAD plugin that would extract all the information you need directly from the source file.

Related

Retrieve object data with Forge Viewer (nested Families)

I am trying to use the Forge Viewer with dashboards to analyze the data within the model. For that, I am using the getAllLeafComponent() method described in the Forge tutorials: https://learnforge.autodesk.io/#/viewer/extensions/panel?id=enumerate-leaf-nodes.
Nevertheless, I am having some trouble with this method, because it will not recognize objects that have children (i.e. Revit Families with nested items).
Element with nested item (space of operation)
In the attached image, the green tetrahedron represents the transformer's space of operation, which is a nested item inside the transformer. With the getAllLeafComponent() method I am unable to retrieve the transformer data, which is the important one, because the method does not recognize the transformer as a leaf but as a parent element; it is indeed a parent, but it is also a model object, not a category or a family symbol.
Has anyone come up against the same problem and/or found a way to solve it?
It is of the utmost importance for my Forge application; otherwise I would not have reliable model information to analyze.
Best Regards,
The Model Derivative service uses a specific, "reasonable" logic for each individual input file format to decide how granular it should go when building the logical hierarchy for the viewer. In case of Revit designs, the processing stops at the instance level, in other words, family instances are always output as leaf nodes, even if their families have some nested elements. For example, doors are always output as the smallest selectable elements, and you cannot select just the door knob. I'm afraid the same applies to your space of operation nested within the transformer family.
If you need to extract information that the Model Derivative service does not provide, you could consider using the Design Automation service instead. This service lets you execute your custom Revit (or AutoCAD, or Inventor, or 3ds Max) plugin on our servers, creating, modifying, or analyzing designs in any way you need, remotely.

Is it possible to force Inventor Design Automation to update all views in a drawing before outputting to pdf?

I've run into an issue with a Forge configurator that we are developing, whereby the pdf output does not reflect the configured model.
The general process it follows is: open the assembly, set the level of detail of various sub-assemblies to match the user-configured options, save the assembly, open a drawing that references it, and generate a PDF output from there.
The problem is that the views do not show the levels of detail that have been set on the model components, so I'm wondering if there is a way to force an update before outputting the PDF.
I have found a workaround which works for small models (e.g. 57 components, total size 26 MB): suppress and then un-suppress every drawing view on every sheet, then call sheet.Update(). Unfortunately, this does not work for the large models (e.g. 514 components, total size 287 MB) that this system is designed to work with. It does work when run locally; it just doesn't work on Forge, where it appears the drawing views haven't had time to reappear before the PDF is created, as they are all blank.
Tried in Inventor 2020, 2021, and 2022 with the same results.
Thanks in advance for any help that you can give.
Based on your dataset, we found that your filenames use diacritics and your ZIP file doesn't use UTF-8 encoding. This produces bad characters in the filenames when the Forge server unzips it, so Inventor Server reports an 'Unresolved Files' status when a drawing/assembly with such filenames is loaded. How to solve this is described in the Forge Design Automation documentation: navigate to the 'Troubleshooting' page and look at the 'Non-English filenames in ZIP files' paragraph.
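As a minimal sketch of the packaging side (my own illustration, not part of the original answer, and the filename is hypothetical): Python's zipfile module encodes non-ASCII filenames as UTF-8 and sets the corresponding flag bit, so names with diacritics survive unzipping.

```python
import io
import zipfile

def build_utf8_zip(files):
    """files: dict of filename -> bytes. Returns the ZIP archive as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            # Non-ASCII names automatically get the UTF-8 filename flag (0x800)
            zf.writestr(name, data)
    return buf.getvalue()

# Round-trip check: a filename with diacritics is preserved.
payload = build_utf8_zip({"Výkres-sestava.idw": b"drawing data"})
names = zipfile.ZipFile(io.BytesIO(payload)).namelist()
print(names)
```

If your ZIPs are produced by another tool, the key point is the same: make sure the archiver stores filenames as UTF-8 rather than a legacy code page.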

Is it possible to pull custom fields from SolidWorks files on Forge?

I have translated SolidWorks files on Autodesk Forge; however, the Forge metadata / objects / properties call for these files only provides the objectid and name. I know I've got several custom fields in the files; I'm just wondering if I have to wire up some strange way to pull them out before sending them, figuring it may not be supported through the Forge API. Thanks!
The Model Derivative service usually does a pretty good job of extracting the metadata from your designs. Note, however, that the metadata might be available on a different level of the logical hierarchy.
Here's the metadata I see in one of my sample SolidWorks files when I simply click on one of the parts:
And this is the metadata I see when I select its parent element:

Ray RLlib: Export policy for external use

I have a PPO policy based model that I train with RLLib using the Ray Tune API on some standard gym environments (with no fancy preprocessing). I have model checkpoints saved which I can load from and restore for further training.
Now, I want to export my model for production onto a system that should ideally have no dependencies on Ray or RLLib. Is there a simple way to do this?
I know that there is an export_model interface in the rllib.policy.tf_policy class, but it doesn't seem particularly easy to use. For instance, after calling export_model('savedir') in my training script, and then loading it in another context via model = tf.saved_model.load('savedir'), the resulting model object is troublesome to feed the correct inputs into for evaluation (something like model.signatures['serving_default'](gym_observation) doesn't work). I'm ideally looking for a method that allows easy, out-of-the-box model loading and evaluation on observation objects.
Once you have restored from a checkpoint with agent.restore(checkpoint_path), you can use agent.export_policy_model(output_dir) to export the model as a .pb file and a variables folder.

What is the format for the training/testing data for a Computer Vision model

I am trying to build a CV model for detecting objects in videos. I have about 6 videos that have the content I need to train my model. These are things like lanes, other vehicles, etc. that I’m trying to detect.
I'm curious about the format of the dataset I need to train my model with. I can turn each frame of each video into an image and build a large repository of training images, or I can use the videos directly. Which way do you think is better?
I apologize if this isn't directly a programming question. I'm trying to assemble my data and I couldn't make up my mind about this.
YOLO version 3 is a good starting point. The trained model will have a .weights file and a .cfg file, which can be used to detect objects from a webcam, from a video on a computer, or on Android with OpenCV.
In OpenCV's Python bindings, cv.dnn.readNetFromDarknet("yolov3_tiny.cfg", "CarDetector.weights") can be used to load the trained model.
In Android, the equivalent Java code is:
String tinyYoloCfg = getPath("yolov3_tiny.cfg", this);
String tinyYoloWeights = getPath("CarDetector.weights", this);
Net tinyYolo = Dnn.readNetFromDarknet(tinyYoloCfg, tinyYoloWeights);
The function reference can be found here:
https://docs.opencv.org/4.2.0/d6/d0f/group__dnn.html
Your video frames need to be annotated with a tool that generates bounding boxes in YOLO format; there are quite a few available. To train a custom model, this repository contains all the necessary information:
https://github.com/AlexeyAB/darknet
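For reference, the YOLO annotation format the tools above produce is one plain-text line per object: a class index followed by the box centre and size, normalized by the image dimensions. A small sketch of the conversion from a pixel-space box (my own illustration; the class id and coordinates are made-up examples):

```python
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space bounding box to a YOLO annotation line:
    'class x_center y_center width height', all coordinates in [0, 1]."""
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x50-pixel box centred in a 1920x1080 frame, class 0 (e.g. "car"):
print(to_yolo_line(0, 910, 515, 1010, 565, 1920, 1080))
# → 0 0.500000 0.500000 0.052083 0.046296
```

Each image gets a .txt file with one such line per labelled object, which is exactly what the darknet training pipeline expects.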