Detecting features like holes in components - autodesk-forge

Is it possible to get data on hole features on a component within a model - quantity and diameter? The specific use case we are looking at is to determine the size and quantity of bolts required for a component/assembly - bolts, nuts and washers are often not modelled by our customers. Example - if we can determine that a component has 4 x 13mm holes, we can programmatically add 4 x M12 bolts to the BOM.

Only a geometric approach is possible: the viewer loads simple (tessellated) geometry, it is not a parametric CAD modeler. You could pre-process the CAD seed file to add custom properties (in Inventor, for example) and then access those in the component properties in the viewer.
To get access to the geometry (vertices & faces) in the viewer take a look at this post: http://adndevblog.typepad.com/cloud_and_mobile/2015/07/accessing-mesh-information-with-the-view-data-api.html
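For illustration, here is a rough sketch of walking fragment meshes in the viewer. It assumes a current viewer version; viewer.impl.getRenderProxy is widely used in samples but is an internal, not formally documented, call. Detecting circular hole loops in the raw triangles would then be up to your own code:

const tree = viewer.model.getInstanceTree();
tree.enumNodeFragments(dbId, function (fragId) {
    // proxy is a THREE.Mesh-like object for this fragment
    const proxy = viewer.impl.getRenderProxy(viewer.model, fragId);
    const vb = proxy.geometry.vb;           // interleaved vertex buffer
    const stride = proxy.geometry.vbstride; // floats per vertex
    for (let i = 0; i < vb.length; i += stride) {
        const x = vb[i], y = vb[i + 1], z = vb[i + 2];
        // ...analyze vertices here, e.g. fit circles to candidate edge loops...
    }
}, true);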
Hope that helps

Related

Autodesk Forge - Do any IDs persist when translating a new source file with minor changes?

Background
We currently use Autodesk Forge to display 2D models of floor plans. Users have the ability to upload new floor plans (which uploads to OSS, then translates the file) to replace existing ones that may include new objects in the room, or slightly different positioning of existing objects. Currently, we grab certain object dbIds via viewer.getSelection() to "bind" (using the term loosely here) some external data to the object and perform certain interactions within our web app. We are using .DWG files.
Issue
When uploading a new floor plan that removes an object, the dbIds of other objects shift, and our external binding is then inaccurate.
Question
• Are there any IDs that persist between the uploads/translations?
We do not maintain control of the .dwg files prior to their upload, so adding attributes on the drawing before it's translated likely won't be viable for our particular case - but if that's the best (or only) approach, I would like to know so I can propose it as a possible solution to my team.
Example
Let's say there's a simple square room with 5 chairs, and it's rendered and visible in the viewer since it's been uploaded as room1 (object key). We identify 3 chairs by their dbIds and save that, so a user can jump right to the object in question, and we put a label on it. Then someone comes along, removes one of the chairs, and uploads/translates the document again with the same object key. Now the dbIds are changed and assigned to different chairs, and as a result our saved references point at the wrong objects. The question is: is there something other than dbId that persists between the different renderings? Or is there something I haven't considered that would be a better approach to keeping the binding accurate between uploads?
EDIT: The same scenario happens with element IDs. An interesting finding is that the element ID and dbId are tightly coupled (meaning if Object A's dbId is 3 and its element ID is 6E, then after a deletion a new object will be given 3 and 6E respectively). Also, I believe the designers creating the AutoCAD files are drawing these as polylines, if that makes a difference.
Contingency Plan
If there's no ID of some sort that persists, I'm considering storing the coordinates of the object we want to bind to, and later finding the closest object to those x/y/z coords. Is that a possibility - finding an object close to or overlapping given xyz coordinates?
Use external id.
Useful blog: https://forge.autodesk.com/blog/get-dbid-externalid
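As a sketch of how that looks with the standard viewer calls (saveBinding and savedExternalId are hypothetical placeholders for your own persistence layer):

// When the user selects an object, store its externalId instead of its dbId.
// For DWG-derived models the externalId should come from the source file, so
// it survives re-translation as long as the object itself survives.
viewer.model.getProperties(dbId, function (result) {
    saveBinding(result.externalId);
});

// After a new upload/translation, map the saved externalIds back to fresh dbIds:
viewer.model.getExternalIdMapping(function (mapping) {
    const newDbId = mapping[savedExternalId];
    // re-attach your label / external data to newDbId here
});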

How to recognize and count objects with Firebase / ML Kit

I'd like to recognize and count objects in a picture, e.g. count the number of houses in a picture of a neighbourhood. What's the best way to do this with ML Kit?
Do I need to use the Object Detection API? Or is it possible to get multiple "house" tags using a straightforward image labeler?
The ML Kit Object Detection API (note that it is now offered as a standalone SDK) can count objects in an image / video stream, but it is limited to the 5 largest objects. Also, you should evaluate whether object detection works for your use case: it is a very general localizer and works for most objects, but when objects are close together or overlapping it may not distinguish between them.
If you need to detect more than 5 objects, I would recommend looking directly at TensorFlow Lite with some of the pre-trained models available on TF Hub, or training one yourself using AutoML Vision Edge if the general models don't fit your use case.
Fwiw, Image Labeling assigns labels that describe the scene of an image; it does not count the objects in it, so you would typically get a single "house" label rather than one per house.

CesiumJS - Entity / Graphics Hierarchy in relation to performance

I am working on a WYSIWYG editor for CesiumJS content.
The user will be able to create many points, lines and other graphics, connect them according to definable relations and group them in separate Groups.
Now I am wondering what the best practices are in terms of performance.
At the moment I create one PointPrimitiveCollection for each Group and then add points:

group.points = scene.primitives.add(new Cesium.PointPrimitiveCollection());

and then

group.points.add({
    position : cartesian,
    ...
});

for each new point.
Polygons are created using:

network.hull_polygon = viewer.entities.add({
    name : 'xxx',
    polygon : {
        hierarchy : Cesium.Cartesian3.fromDegreesArray(points_array),
        material : color,
        ...
    }
});
polylines similarly.
Now since the objects can also be dragged around / animated, I was wondering where Cesium's Entity logic would come in?
Thanks for any help!
Cesium's Entity logic is useful primarily for objects that move along a known path over time, for example the flight plan of an aircraft in the future, or a GPS recording of the route taken by a vehicle in the past. Such routes can be loaded into the Entity system (often via CZML), and the user can run the simulation time forwards and backwards at arbitrary speeds, to review the routes of all the vehicles. The Entity system owns the logic for updating graphics primitive positions based on simulation time changes.
Entities are also often used as a quick way to make some disparate graphics primitives associate with each other. For example, a polygon, a point, and a label can all be created as a single Entity even if they are three separate graphics primitives at the same location. This saves a bit of effort on the part of the application developer, and doesn't hurt performance too much since the properties involved are all marked as constants, so the Entity layer knows not to update them with simulation time.
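As a minimal illustration of that convenience (the coordinates and label text here are placeholders):

viewer.entities.add({
    position : Cesium.Cartesian3.fromDegrees(-75.6, 40.04),
    point : { pixelSize : 8, color : Cesium.Color.YELLOW },
    label : { text : 'Station 1', font : '14px sans-serif' }
});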
But, it sounds like you may have a case where paths are not known in advance. For things like user interactive edits or real-time telemetry being received, the Entity system can't know what's coming up next, so its whole system for updating positions from simulation times is not doing you any good. In that case it may be better to skip the Entities, and deal exclusively with graphics primitives for this. This would mean you need to write your own update function to alter graphics positions as new information is being received, similar to the Entity layer's update functions, but based on your own live inputs instead of recorded paths.
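A rough sketch of that primitive-level approach, reusing the collection from the question (onDrag and newCartesian stand in for whatever your editor's input handling provides):

const point = group.points.add({
    position : cartesian,
    pixelSize : 10
});

function onDrag(newCartesian) {
    // point primitives accept direct position updates; no time-based
    // property system is involved - this is your own "update function"
    point.position = newCartesian;
}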
Note that the public "Sandcastle" demos only include Entity demos. But, if you download and build the source for Cesium and run Sandcastle locally from a dev build, a separate tab appears in the Sandcastle Gallery called Development that shows a whole set of demos based on graphics primitives as opposed to Entities. This can be useful for seeing examples of how to control things at this layer.
Hopefully this is helpful in understanding how the different layers of Cesium interact.

AR / VR Toolkit Reduce Model Mesh to display in AR

Is there a way to reduce mesh polygons?
As an example project I use the TGA (technical building equipment) model provided by Autodesk. (https://knowledge.autodesk.com/support/revit-products/getting-started/caas/CloudHelp/cloudhelp/2019/ENU/Revit-GetStarted/files/GUID-61EF2F22-3A1F-4317-B925-1E85F138BE88-htm.html rme_advanced_sample_project.rvt)
If you add all instances to the scene you get a polygon count of about 1.3M.
For a desktop computer this is no problem at all - the model downloads in about 1 minute and is displayed completely.
For my iPhone (an iPhone 8) this is clearly too big.
As soon as I start the AR scene and download the model, the memory requirement rises to over 1.2 GB (from 0.15 GB before) and the app crashes.
Even if you exclude some instances (walls, ceilings, etc.) before processing the scene, to display only the technical building equipment, the model is still too big for the iPhone.
Are there possibilities to reduce the mesh with the AR/VR Toolkit API, or do I have to do this manually in Revit?
Edit: 27.06.18
Here is the model I want to display in AR (tris: 2.8M, verts: 2.4M).
Steps:
1) Uploaded the original .rvt file (70 MB) to my bucket.
2) Translated the file via Forge.
3) Created a scene with the AR/VR Toolkit API.
4) Processed the scene with the AR/VR Toolkit API.
5) Downloaded the scene to Unity.
6) Created a prefab.
The meshes are way too detailed - the graphics would not change much if I reduced the vertex count to 10-15%.
In Unity I can use assets like Mesh Simplify (https://assetstore.unity.com/packages/tools/modeling/mesh-simplify-43658) to reduce the count.
Another way is to export the model to e.g. 3ds Max or Maya and reduce the count there.
But I want to do this automatically.
My question is: is there a way to do this with Forge?
My colleagues who are experts in this area are on vacation now, so let me try to answer your question first; they may add more information later.
Unfortunately, the answer is no, AFAIK. For the Forge AR|VR Toolkit service, I remember there is some mesh reduction done automatically on the server side if it detects that the client device is a HoloLens or DAQRI headset - you can see that in https://github.com/wallabyway/ARVRToolkit/blob/master/unity-src/ARVRToolkit/Assets/Forge/ARKit/RequestQueue.cs#L155. But that's it: we do not provide any API to help reduce the mesh, and there is no other API within Forge that can do it either.
As you already know, you may need to do the mesh reduction in some other product, like 3ds Max - that's the only way I can think of at the moment.
My colleagues may have more comments on this when they come back.

GIS with Bing Silverlight and SQL 2008?

I have data from which I want to create a GIS-type application that will have the typical features of adding and removing layers of different types. What is the best architectural approach?
The data consists of property locations in Eastings and Northings.
I also have ordnance survey data in GML and Shapefiles.
I know this is a very broad question but the subject area also seems very broad to me and I am unsure which direction to go.
I was thinking of using SQL Server 2008 spatial and the Bing Silverlight control to visualise the maps. To do this, would I have to convert the eastings and northings to the WGS84 geography datatype? But then, if I converted the shapefiles to GML and imported all the GML files into SQL Server using GeomFromGML, they would be geometry datatypes. Wouldn't the two types be incompatible?
Also, should the ESRI ArcGIS API for Silverlight feature in the equation? Is it a good environment for creating maps that I can point at SQL Server 2008 as the datasource (using a WCF service if necessary)?
Any advice greatly appreciated!
This is something I've done several times, using OS data from SQL Server in both Bing Maps AJAX and Silverlight controls. Some general comments below (in no particular order!):
• Don't expect to implement full-blown GIS functionality using Bing Maps. Simple querying, retrieval and display of the data is all fine (plus some simple editing), but after that you'll be struggling with what can be achieved in the browser.
• All vector shapes supplied to Bing Maps need to be in (geography) WGS84 coordinates, EPSG:4326. However, all data will be projected and displayed using the (projected) Spherical Mercator system, EPSG:3857.
• In terms of vector shapes, you can expect to achieve a similar level of performance as you get in the SSMS spatial results tab - that is, (with careful architecture) you can plot up to about 5,000 features on the map at once, zoom / pan around them, click on them to bring up various properties and attributes, etc. After that, you'll find the UI becomes fairly unresponsive (which I imagine is the reason why the spatial results tab itself limits you to displaying 5,000 records at once).
• If you want to display more features than this, one approach is to rasterize them: project them into the EPSG:3857 projection, create a .PNG/.JPG image file of the features, and cut that image into tiles according to the Bing Maps quadkey tile numbering system explained at http://msdn.microsoft.com/en-us/library/bb259689.aspx, then display them as a tile layer (see the sketch after this list). Tile layers are significantly faster than displaying the equivalent vector shapes, although it means the data is static.
• If you do create raster tiles, you can either render them dynamically or pre-render them to improve performance - i.e. you could set up a job to render and update the tileset for slowly-changing data every night / every month, etc.
• If you're talking about OS MasterMap data, the sheer level of detail involved means that you need to think more carefully about what features you want to display, and how you want to display them. Take Greater London, for example, which covers an area of about 50km x 40km. To create raster tiles (each of which is 256px x 256px) at zoom level 19 covering this area, you'd need to render and store 1.3 million separate tiles. If each of those is generated from a database query that takes, say, 200ms to run, it's going to take a very long time to prepare all your data. Also, once the files have been generated, you might want to think about storing them in a DB rather than on a file system.
• As for loading the OS data into SQL Server in the first place - there are several tools that will import either from GML or shapefile into SQL Server, and handle the projection from EPSG:27700 (Ordnance Survey National Grid) to WGS84 along the way. Try GDAL/OGR or Safe FME for starters.
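For reference, the quadkey numbering from that MSDN page is simple to implement yourself - a minimal JavaScript version of the algorithm it describes:

function tileXYToQuadKey(tileX, tileY, levelOfDetail) {
    // one base-4 digit per zoom level, interleaving the bits of tileX and tileY
    let quadKey = '';
    for (let i = levelOfDetail; i > 0; i--) {
        let digit = 0;
        const mask = 1 << (i - 1);
        if ((tileX & mask) !== 0) digit += 1;
        if ((tileY & mask) !== 0) digit += 2;
        quadKey += digit;
    }
    return quadKey;
}

For example, tile (3, 5) at level 3 yields the quadkey "213".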
I have a blog at http://alastaira.wordpress.com with several posts you may find useful describing various aspects of integrating Bing Maps and SQL Server. In particular, you might want to look at:
http://alastaira.wordpress.com/2011/02/16/loading-ordnance-survey-open-data-into-sql-server-2008/
http://alastaira.wordpress.com/2011/01/23/the-google-maps-bing-maps-spherical-mercator-projection/
http://alastaira.wordpress.com/2011/02/21/using-ogr2ogr-to-convert-reproject-and-load-spatial-data-to-sql-server/