autodesk-forge: BIM 360 to Unity3D - does the create scene job have a size limit?

Is there a size limit for the POST scene job? Models up to 100 MB load in Unity, but above 100 MB the manifest shows "success" and the model is not loaded in Unity3D.
https://app.swaggerhub.com/apis/cyrillef/forge-ar_kit/2.0.0#/ARKit/assets_create
I expected the POST scene job to work for larger models (>100 MB) as well.

I'm not sure if there's a hard-coded size limit (I don't think there is) but due to the way the service is designed, what really matters here is the complexity of the design, not necessarily its file size. For example, you could have a design that's less than 100MB in size but it could contain so many individual design elements that trying to load it into Unity would take forever.
Btw if you're interested in bringing Forge designs to Unity, I would suggest that you look at an alternative approach - using the forge-convert-utils library to convert Forge models into glTF, optimize the glTF using something like gltfpack, and load it into Unity using something like glTFast. There's also a lightning talk video I recorded for Autodesk University 2022 where I explain this process in a bit more detail: https://www.youtube.com/watch?v=qBNF42Ykajo.
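In case it helps, below is a rough Node.js/TypeScript sketch of that pipeline, loosely following the forge-convert-utils README; the URN, credentials and output folder are placeholders, and you would still run gltfpack on the result before loading it into Unity with glTFast.

```typescript
// Minimal sketch, assuming the model was already translated by the Model Derivative
// service and that FORGE_CLIENT_ID / FORGE_CLIENT_SECRET are set in the environment.
// URN and OUTPUT_DIR are placeholders.
import * as path from 'path';
import { ModelDerivativeClient, ManifestHelper } from 'forge-server-utils';
import { SvfReader, GltfWriter } from 'forge-convert-utils';

const { FORGE_CLIENT_ID, FORGE_CLIENT_SECRET } = process.env;
const URN = '<base64-encoded-model-urn>';
const OUTPUT_DIR = './gltf-output';

async function convert(): Promise<void> {
    const auth = { client_id: FORGE_CLIENT_ID!, client_secret: FORGE_CLIENT_SECRET! };
    // Find all SVF viewables listed in the model's manifest
    const derivativeClient = new ModelDerivativeClient(auth);
    const manifest = new ManifestHelper(await derivativeClient.getManifest(URN));
    const derivatives = manifest.search({ type: 'resource', role: 'graphics' });
    // Convert each viewable to glTF; the output can then be optimized with gltfpack
    // and loaded in Unity with glTFast
    const writer = new GltfWriter({ deduplicate: true, skipUnusedUvs: true });
    for (const derivative of derivatives.filter((d: any) => d.mime === 'application/autodesk-svf')) {
        const reader = await SvfReader.FromDerivativeService(URN, derivative.guid, auth);
        const svf = await reader.read();
        await writer.write(svf, path.join(OUTPUT_DIR, derivative.guid));
    }
}

convert().catch(err => console.error(err));
```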

Related

Is it possible to upload large Revit models (1-2 GB) and render them faster in the Autodesk Forge viewer?

I am currently developing a web application integrated with the Autodesk Forge platform. The application is hosted on AWS. Basically, users upload their Revit files, the model is translated and rendered in the viewer, and the metadata is extracted for some visualization. Small models (up to 200 MB) are uploaded and rendered in the viewer within 60 seconds. But when I upload a large (1-2 GB) Revit file, it takes more than 5 minutes (which is not a good user experience) to translate and render in the viewer. Is there a way to make this upload and render process faster? What factors does this translation speed depend on? Is this something to be addressed by optimizing my code? I looked everywhere for a solution but couldn't find any. Please advise.
Thank you!
To make the upload faster, we can use resumable upload to upload the big model in chunks in parallel: https://stackoverflow.com/a/70034186/7745569
Note: we're migrating to the direct-to-S3 approach for uploading/downloading files to the Forge OSS service (see the sketch after the migration links below), so here are the migration references:
https://forge.autodesk.com/blog/data-management-oss-object-storage-service-migrating-direct-s3-approach
https://forge.autodesk.com/blog/upload-large-file-chunks-s3-signed-url-opennetwork-revit-design-automation
https://forge.autodesk.com/blog/direct-s3-nodejs-samples
https://forge.autodesk.com/blog/direct-s3-net-samples
https://forge.autodesk.com/blog/design-automation-api-using-aws-s3
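For reference, here is a rough TypeScript sketch of the signed-URL flow those posts describe; the endpoints and field names are as I understand them from the posts above, and the bucket key, object key, token and chunk size are placeholders, so double-check against the current documentation.

```typescript
// Sketch of the direct-to-S3 upload flow: request signed part URLs from OSS,
// PUT the chunks, then ask OSS to assemble them. Assumes a 2-legged token with
// the data:write scope; error handling and retries are omitted for brevity.
import axios from 'axios';
import * as fs from 'fs';

const HOST = 'https://developer.api.autodesk.com';
const CHUNK_SIZE = 10 * 1024 * 1024; // 10 MB parts

async function uploadLargeObject(bucketKey: string, objectKey: string, filePath: string, token: string): Promise<void> {
    const fileSize = fs.statSync(filePath).size;
    const parts = Math.ceil(fileSize / CHUNK_SIZE);
    const headers = { Authorization: `Bearer ${token}` };
    const objectUrl = `${HOST}/oss/v2/buckets/${bucketKey}/objects/${encodeURIComponent(objectKey)}`;

    // 1. Ask OSS for one signed S3 URL per part
    const { data } = await axios.get(`${objectUrl}/signeds3upload?parts=${parts}`, { headers });

    // 2. PUT the chunks to the signed URLs (these could also be sent in parallel)
    const fd = fs.openSync(filePath, 'r');
    for (let i = 0; i < parts; i++) {
        const length = Math.min(CHUNK_SIZE, fileSize - i * CHUNK_SIZE);
        const buffer = Buffer.alloc(length);
        fs.readSync(fd, buffer, 0, length, i * CHUNK_SIZE);
        await axios.put(data.urls[i], buffer);
    }
    fs.closeSync(fd);

    // 3. Tell OSS the upload is complete so it stitches the parts together
    await axios.post(`${objectUrl}/signeds3upload`, { uploadKey: data.uploadKey }, { headers });
}
```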
For viewing performance, I would advise you to check out the SVF2 format; it helps resolve large-model performance issues (a translation-job sketch follows the links below).
https://forge.autodesk.com/blog/update-svf2-ga-new-streaming-web-format-forge-viewer-now-production-ready
https://forge.autodesk.com/blog/model-derivative-svf2-enhancements-part-1-viewer
https://forge.autodesk.com/blog/model-derivative-svf2-enhancements-part-2-metadata
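And in case it is useful, a minimal sketch of requesting SVF2 output from the Model Derivative translation job; the URN and token handling are placeholders.

```typescript
// Sketch of a translation job asking for SVF2 output; x-ads-force re-translates
// a model that already has derivatives. Poll the manifest afterwards.
import axios from 'axios';

async function translateToSvf2(urn: string, token: string): Promise<void> {
    await axios.post(
        'https://developer.api.autodesk.com/modelderivative/v2/designdata/job',
        {
            input: { urn },
            output: { formats: [{ type: 'svf2', views: ['2d', '3d'] }] }
        },
        { headers: { Authorization: `Bearer ${token}`, 'x-ads-force': 'true' } }
    );
    // Then poll GET /modelderivative/v2/designdata/{urn}/manifest until status is "success"
}
```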

What do I need for a Design Automation project? My project is very slow

After showing the house in the Forge Viewer using a Revit file, the user wants to modify the contents of the viewer and receive it as a Revit file again. What function should I use to implement this?
https://learnforge.autodesk.io/#/
I referred to it.
My project: I create input data, send it to Design Automation, get the output file, and translate that file for the viewer (this takes about 2 minutes).
This process is very slow, and I want something better than this.
https://www.autodesk.com/autodesk-university/class/Its-Not-Too-Late-Automate-Using-Forge-Design-Automation-Inventor-2021#video
The workflow in this video looks very fast. How can I get a workflow as fast as the one in the video?
Thank you
The Forge Viewer is a viewer, not an editor.
The Forge viewer displays the translated version of a seed CAD model, in this case, a Revit RVT BIM.
You cannot edit or modify the CAD seed file in the viewer.
To achieve such a modification, you have to either use the original modelling software, in this case, Revit on the Windows desktop, or you can use the Forge Design Automation API for Revit.
That is what was used to create the Inventor sample you refer to.
Oops... re-reading your question, I see that you are already using design automation yourself as well. Congratulations on that.
However, there is no guarantee on the turn-around time for this process. The video may very well have been edited to eliminate a waiting period, or the user creating the video may just have been very lucky and achieved a faster turn-around time.
I am checking with the Forge team for you whether they used any additional tricks to speed things up in the video, or whether it really is real-time processing. They confirm:
For Revit, a basic DA job should take up to 30-40 secs for the processing time alone. The derivative job for translation to viewer format could take another minute. So, 2 mins is expected. The sketch it demo video has a timer on the side to indicate real time.
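If you want to see where your two minutes actually go, here is a hypothetical sketch that submits a Design Automation v3 workitem and logs elapsed time while polling its status; the activity id and the argument names ("rvtFile", "result") depend entirely on your own activity definition, so treat them as placeholders.

```typescript
// Rough timing sketch: submit a workitem and log elapsed seconds while polling.
// Anything on top of this (file upload, Model Derivative translation) is separate.
import axios from 'axios';

const DA = 'https://developer.api.autodesk.com/da/us-east/v3';

async function runAndTime(token: string): Promise<void> {
    const headers = { Authorization: `Bearer ${token}` };
    const started = Date.now();

    // Argument names must match the parameters of your own activity
    const { data: workitem } = await axios.post(`${DA}/workitems`, {
        activityId: '<owner>.<activity>+<alias>',
        arguments: {
            rvtFile: { url: '<signed-url-to-input-rvt>' },
            result: { verb: 'put', url: '<signed-url-for-output>' }
        }
    }, { headers });

    // Poll until the workitem leaves the pending/inprogress states
    let status: string = workitem.status;
    while (status === 'pending' || status === 'inprogress') {
        await new Promise(resolve => setTimeout(resolve, 5000));
        const { data } = await axios.get(`${DA}/workitems/${workitem.id}`, { headers });
        status = data.status;
        console.log(`${Math.round((Date.now() - started) / 1000)}s - ${status}`);
    }
}
```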

Needed Download Speed

First of all, sorry for my English.
I need some kind of assistance with Forge. I need to display the download speed of the BIM model in the Unity UI, be it the .RVT or whatever is downloaded from BIM 360. Is it possible to know exactly where it is downloading, so I can count downloaded bytes there?
Another question. Currently the download time from Autodesk servers is approximately 110 seconds.
We understand that the download is done in mesh packages. Is there a way to speed up this download? Our client needs this download to be faster.
The size (in MB) depends on the model, so it is hard to tell you how large an RVT-derived scene could be. It also depends on the asset quality and on what you are actually requesting to view, so I am afraid I cannot really answer that question. However, if you are interested in creating a progress bar in your UI, you can get the number of triangles, material definitions, and textures from the model manifest and calculate the download percentage from there, depending on which technology you are using to access the data. There is an example in the AR|VR Toolkit doing this, but again it depends on the technology you are using.
Assuming you are using the AR|VR Toolkit, mesh requests are done in parallel, so the speed of downloading and instantiating meshes depends on your internet bandwidth and on how fast your device runs Unity. In the toolkit you can accelerate the download, but you have to accept losing control of the UI during that period. It is a compromise to make due to Unity's limitation of running single-threaded.
The toolkit can also convert models to glTF, and you could use the glTFast Unity plugin to get better performance, but you lose the metadata associated with objects. Another compromise, due to the nature of glTF at this time.

Autodesk Viewer Performance

I have a problem with a large model in the Forge Viewer. Although I translated it to SVF2, it takes very long to load, and every interaction triggers a re-render (I know that is part of the process). Are there any solutions like the Proxy in 3ds Max, or showing a low LOD when the camera is far from the model and a higher LOD when zooming in? What can I do to speed up the model? Appreciate any solutions.
UPDATE:
Would you be able to confirm that you are really using SVF2 in the viewer? For example, do you see the viewer communicating via WebSockets in the Network tab?
So far we've seen major performance improvements across all projects switching over to this new file format, but it's possible that your model is so large/complex that even SVF2 isn't helping. In that case I'm afraid we won't have other solutions, other than perhaps splitting your design into multiple models, and loading only those that you really need. For example, Navisworks designs are often split by area and/or discipline, and the models are then loaded selectively by specific users. Check out this demo (specifically the checkbox matrix in the sidebar): https://forge-industrial-construction.autodesk.io/facility/montreal.
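To illustrate the selective-loading idea, here is a small sketch that loads several models into one viewer instance using keepCurrentModels; the URNs are placeholders for your per-area or per-discipline models, and the viewer is assumed to be initialized already.

```typescript
// Sketch of selective multi-model loading in the Forge Viewer. The Autodesk
// global comes from the viewer3D script already included on the page.
declare const Autodesk: any;

function loadModel(viewer: any, urn: string): void {
    Autodesk.Viewing.Document.load(
        `urn:${urn}`,
        (doc: any) => {
            const geometry = doc.getRoot().getDefaultGeometry();
            // keepCurrentModels lets several models coexist in the same scene
            viewer.loadDocumentNode(doc, geometry, { keepCurrentModels: true });
        },
        (code: number, message: string) => console.error('Load failed:', code, message)
    );
}

// Load only the pieces the current user actually needs, e.g.:
// ['<urn-architecture>', '<urn-mep>'].forEach(urn => loadModel(viewer, urn));
```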

Speeding up a very large Bing Maps polyline based layer

I am writing a Bing Map based Universal App for Windows Phone and Windows 8 that shows some quite large map layers.
Writing the initial app was no problem (the tutorial I followed is at http://blogs.msdn.com/b/rbrundritt/archive/2014/06/24/how-to-make-use-of-maps-in-universal-apps.aspx), however I now am experiencing major problems rendering a layer that contains thousands of polylines, with tens of thousands of co-ordinates.
The data is just too big - on Windows 8.1, the map crashes the application, while on Windows Phone 8.1, the layer takes a very long time to render.
According to http://blogs.msdn.com/b/bingdevcenter/archive/2014/04/23/visualize-large-complex-data-with-local-tile-layers-in-bing-maps-windows-store-apps-c.aspx, I should speed it up by converting it to a local tile layer, however, the program mentioned in the article (MapCruncher) requires a PNG as input. The question is, how do I convert my map data to a PNG? I can have the data as a shapefile, KML file, or a CSV file. Is there another way I should be doing this? I know I can do this via Geoserver, however my app has to have offline support and so cannot download from the web server the appropriate files as needed.
If anyone has any other ways I could approach this speed issue with large layers, that would be greatly appreciated. I am aware that I can speed up rendering of a layer in Bing Maps via quadtrees, however most of what I have found is theoretical. If anyone has some code I can plug into this, that would be very helpful.
Local tile layers are fine if you only have data in a small area, or only want to show the data for a few zoom levels. Otherwise the number of tiles grows drastically and will make your app huge. If your data changes regularly, or you want to support all zoom levels of the map, you should store your data on a server and expose it as a dynamic tile layer. A dynamic tile layer is a web service that generates a tile on demand from your data. You can add caching to the tiles for performance. This is the best way to handle large data sets and one I have used a lot. In fact I have a demo here: http://onsbingmapsdemo.cloudapp.net/ - this data set consists of 175,000 complex polygons that equate to about 2 GB of data.
I have an old blog post on how to do this here: http://rbrundritt.wordpress.com/2009/11/26/dynamic-tile-layers-in-the-bing-maps-silverlight-control/
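If you go the dynamic tile layer route, here is a rough Node.js/TypeScript sketch using Express and node-canvas with a naive in-memory cache; the tile route, the bounding-box/projection math and the data source are placeholders you would replace with your own polyline store.

```typescript
// Sketch of a dynamic tile service: generate a 256x256 PNG per z/x/y request
// and cache it. The actual polyline rendering is stubbed out with a placeholder.
import express from 'express';
import { createCanvas } from 'canvas';

const app = express();
const TILE_SIZE = 256;
const cache = new Map<string, Buffer>(); // naive in-memory tile cache

app.get('/tiles/:z/:x/:y', (req, res) => {
    const { z, x, y } = req.params;
    const key = `${z}/${x}/${y}`;

    let tile = cache.get(key);
    if (!tile) {
        const canvas = createCanvas(TILE_SIZE, TILE_SIZE);
        const ctx = canvas.getContext('2d');
        // TODO: query only the polylines intersecting this tile's bounding box
        // and project their coordinates into tile pixel space. As a placeholder,
        // draw a diagonal line so the pipeline can be tested end to end.
        ctx.strokeStyle = '#cc0000';
        ctx.lineWidth = 2;
        ctx.beginPath();
        ctx.moveTo(0, 0);
        ctx.lineTo(TILE_SIZE, TILE_SIZE);
        ctx.stroke();
        tile = canvas.toBuffer('image/png');
        cache.set(key, tile);
    }

    res.type('image/png').send(tile);
});

app.listen(3000, () => console.log('Tile server listening on http://localhost:3000'));
```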
If you prefer working with MVC you might find these projects useful:
https://ajaxmapdataconnector.codeplex.com/
https://dataconnector.codeplex.com/