We have a large (1.6 GB) .nwd model that we are uploading directly to Forge. I have hidden some elements in the viewer so that only a subset of the elements is translated, to make the view lighter. I am still having issues with the time it takes SVF and SVF2 to fully load the model on Forge. However, I did a test uploading to our hub on BIM360 and noticed that the upload was significantly faster, and the viewer also renders the model a lot faster.
My question: Is it better to upload directly to OSS on Forge, or are there benefits in terms of load and rendering times to taking the files directly from BIM360 via the plugin integration workflow? Note: I am only considering upload and rendering times in this question, not other factors that could benefit the end user.
BIM360 is built on top of Forge and uses the exact same stack to store, translate, and preview your designs, so there should be almost no difference in loading times or performance.
If you do see a significant difference in rendering performance, try the following:
open the model in your custom Forge Viewer app, and run the following command in the browser console:
NOP_VIEWER.model.isSVF2()
This should tell you whether your model really uses the SVF2 format.
open the model in another sample app, for example, https://github.com/petrbroz/forge-simple-viewer-nodejs (there's a branch called test/svf2 that is configured to load your models in SVF2), and see if the performance is the same
This should rule out any potential issues in your app's own code that could be affecting the performance.
I am currently developing a web application integrated with the Autodesk Forge platform. The application is hosted on AWS. Basically, users upload their Revit files, the model is translated and rendered in the viewer, and the metadata is extracted to drive some visualizations. Small models (up to 200 MB) are uploaded and rendered in the viewer within 60 seconds, but when I upload a large (1-2 GB) Revit file, it takes more than 5 minutes (which is not a good user experience) to translate and render in the viewer. Is there a way to make this upload-and-render process faster? What factors does the translation speed depend on? Is this something to be addressed by optimizing my code? I have looked everywhere for a solution but couldn't find any. Please advise.
Thank you!
To make the upload faster, we can make use of resumable uploads to upload a big model in chunks, in parallel: https://stackoverflow.com/a/70034186/7745569
Note: we're migrating to the direct-to-S3 approach for uploading/downloading files to the Forge OSS service, so here are the migration references (a minimal sketch of that flow follows the links):
https://forge.autodesk.com/blog/data-management-oss-object-storage-service-migrating-direct-s3-approach
https://forge.autodesk.com/blog/upload-large-file-chunks-s3-signed-url-opennetwork-revit-design-automation
https://forge.autodesk.com/blog/direct-s3-nodejs-samples
https://forge.autodesk.com/blog/direct-s3-net-samples
https://forge.autodesk.com/blog/design-automation-api-using-aws-s3
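Under the new approach, the flow is: request signed S3 URLs from OSS, PUT the chunks straight to S3, then tell OSS the upload is complete. Here is a rough Node.js sketch of that flow, assuming node-fetch and a valid 2-legged access token (reading the whole file into memory is for brevity only; a streamed read would be better for multi-GB files):

// A sketch, not production code: upload a file in parallel chunks via OSS signed S3 URLs.
const fs = require('fs');
const fetch = require('node-fetch');

const CHUNK_SIZE = 5 * 1024 * 1024; // S3 multipart parts must be at least 5 MB

async function uploadDirectToS3(bucketKey, objectKey, filePath, accessToken) {
    const buffer = fs.readFileSync(filePath);
    const parts = Math.ceil(buffer.length / CHUNK_SIZE);
    const base = `https://developer.api.autodesk.com/oss/v2/buckets/${bucketKey}/objects/${encodeURIComponent(objectKey)}/signeds3upload`;

    // Step 1: ask OSS for one signed S3 URL per chunk
    let resp = await fetch(`${base}?parts=${parts}`, {
        headers: { Authorization: `Bearer ${accessToken}` }
    });
    const { uploadKey, urls } = await resp.json();

    // Step 2: PUT all chunks straight to S3, in parallel
    await Promise.all(urls.map((url, i) => fetch(url, {
        method: 'PUT',
        body: buffer.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE)
    })));

    // Step 3: tell OSS the multipart upload is complete
    resp = await fetch(base, {
        method: 'POST',
        headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': 'application/json' },
        body: JSON.stringify({ uploadKey })
    });
    return resp.json();
}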
For viewing performance, I would advise you to check out the SVF2 format; it helps resolve performance issues with large models (a sketch of the translation request follows the links):
https://forge.autodesk.com/blog/update-svf2-ga-new-streaming-web-format-forge-viewer-now-production-ready
https://forge.autodesk.com/blog/model-derivative-svf2-enhancements-part-1-viewer
https://forge.autodesk.com/blog/model-derivative-svf2-enhancements-part-2-metadata
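Once the file is uploaded, requesting SVF2 derivatives is a single Model Derivative job call. A minimal sketch, assuming fetch is available and urn is the base64-encoded object ID:

// Kick off an SVF2 translation for an uploaded design.
async function translateToSvf2(urn, accessToken) {
    const resp = await fetch('https://developer.api.autodesk.com/modelderivative/v2/designdata/job', {
        method: 'POST',
        headers: {
            Authorization: `Bearer ${accessToken}`,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            input: { urn }, // base64-encoded objectId of the uploaded file
            output: { formats: [{ type: 'svf2', views: ['2d', '3d'] }] }
        })
    });
    return resp.json(); // then poll GET .../designdata/{urn}/manifest until "success"
}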
Using the Autodesk Forge Model Derivative API, we've observed that for a few of our customers' models, the translation from RVT to IFC fails after one or two hours of conversion.
Unfortunately, these models are confidential, so they cannot be shared, but they are all above 250-300 MB (in Revit format). Is there a limit on the size of models that can be converted? I doubt it's related to the upload itself or to the file being corrupted, because we have no problem with smaller files (~100 MB), and all of these models can be opened in Revit without any problem. (I've also looked for open-source Revit sample models to use as test models, but they are all below 120 MB.)
Thanks for any recommendation on the subject.
Thanks to @EasonKang's help (and Forge support in general), the problem has been solved since the new release of the Model Derivative API (supporting Autodesk Revit 2022.1).
Is there any way to retrieve data about what has changed between versions (elements removed, added, and modified)?
Via the Forge Model Derivative API, I am now able to get all the metadata of any Revit file, but only in total, so I am not sure which elements were added or .....
The problem is that we have a lot of files, and it's really hard to run a test comparing models for each object.
thank you :)
I am not aware of any built-in BIM360 or Forge functionality for obtaining that information.
I would suggest that you very clearly define exactly what information you wish to keep track of and determine how that can be obtained from a model, e.g., as you suggest, via the Forge Model Derivative API.
Then, you can create a snapshot of that data yourself and implement the functionality to track changes in it as you wish.
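To illustrate the idea (this is a sketch of the approach, not an existing Forge feature): pull the property collection of each version via the Model Derivative GET :urn/metadata/:guid/properties endpoint, key the elements by their externalId (which should remain stable across versions of the same Revit file), and diff the two snapshots. The collection shape assumed below is the "collection" array from that endpoint's response:

// Build a snapshot from the "collection" array of a properties response,
// keyed by externalId so elements can be matched across versions.
function snapshot(collection) {
    const map = new Map();
    for (const el of collection) {
        map.set(el.externalId, JSON.stringify(el.properties));
    }
    return map;
}

// Compare two snapshots and report added/removed/modified element ids.
function diffSnapshots(oldSnap, newSnap) {
    const added = [], removed = [], modified = [];
    for (const [id, props] of newSnap) {
        if (!oldSnap.has(id)) added.push(id);
        else if (oldSnap.get(id) !== props) modified.push(id);
    }
    for (const id of oldSnap.keys()) {
        if (!newSnap.has(id)) removed.push(id);
    }
    return { added, removed, modified };
}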
The Building Coder discusses and shows how to solve the exact same task for Revit BIMs using the Revit API on the Windows desktop:
Tracking Element Modification
Implementing the TrackChangesCloud External Event
Those articles provide ideas and guidelines on some aspects to take into consideration when addressing the same task in Forge.
For storing data offline, a web app can use:
session storage, an "advanced version of cookies"
key/value-based Web Storage (AKA local/global/offline/DOM storage)
SQL-based Web SQL Database (deprecated) and the Indexed Database API
the FileReader and FileWriter APIs (which require the user to select files each time the application loads)
But apparently there is no file storage. Of course, there is manifest-based caching, but it's just a cache and is not supposed to be used as user data storage.
Does this mean that the user of a web app is forced to use some sort of cloud file storage?
Is there any way to save large files on the user's local machine? Or maybe some way to select a local folder the web application can use to store user data?
Edit: security. HTML5 already has the ability to write large portions of data to the user's local machine. I don't see any security issues if a browser provides another, file-based abstraction to store data. It could be a virtual machine, a virtual filesystem, whatever.
Hm, I think it would be possible to write a JS filesystem and store it as a blob in SQL...
Update: Hm... recently I've found this and this. Maybe it's what I'm looking for... Yes, it is! See the answer below.
At last, I've found it! Here's the answer:
"I'll have the DOMFileSystem with a side of read/write access please" wrote:
Eric Uhrhane of Google has been working on the working draft of the File API: Directories and System specification, which defines a set of APIs to create a sandboxed filesystem where a web app can read and write data to.
Wow! I'm so excited!
Why not use localStorage while the user is editing a document and the FileWriter API when they want to save it to disk? Most people are used to seeing a save dialog pop up when saving a document.
The only scenario I can think of that warrants access to the FileWriter API without user interaction is an autosave feature, but autosaving to localStorage can be just as good.
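For example, a minimal autosave sketch (assuming a plain textarea with id "editor" on the page):

const editor = document.getElementById('editor');
// Restore any previous draft on page load
editor.value = localStorage.getItem('draft') || '';
// Autosave two seconds after the last keystroke
let timer;
editor.addEventListener('input', () => {
    clearTimeout(timer);
    timer = setTimeout(() => localStorage.setItem('draft', editor.value), 2000);
});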
There is a way to save relatively large files to a user's hard drive if you are willing to use Flash. Look into Downloadify:
http://www.bitrepository.com/downloadify-client-side-file-generation.html
Downloadify allows you to send data to a SWF and have that SWF create a file on the user's machine. My recommendation would be to store the data via one of the methods you listed (Web Storage, a SQLite database, etc.). Put all your assets, including the SWF, in the manifest file so everything is cached locally by the browser. You can then pull information from your DB or Web Storage and use the SWF to create the files you need.
I'm not sure if you will be able to read these files back into your web application.
Another option to save data is to use link tags with the data URI scheme. However, I'm not sure it is supported in all the major browsers at the moment.
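Something along these lines (a sketch; note that the download attribute, which suggests a filename, is not universally supported):

const text = 'Hello world!';
const link = document.createElement('a');
// Embed the file contents directly in the link's href as a data URI
link.href = 'data:text/plain;charset=utf-8,' + encodeURIComponent(text);
link.download = 'hello.txt'; // filename hint; ignored by browsers that lack support
link.textContent = 'Save file';
document.body.appendChild(link); // clicking the link downloads the file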
For security reasons you can't write files to a user's local filesystem in case it gets used for nefarious purposes by evil people.
That's not likely to change, and that's a good thing.
The HTML5 FileSystem API started landing in Chrome 8 and is fairly complete as of Chrome 11.
There's a nice tutorial on it here: http://www.html5rocks.com/tutorials/file/filesystem/
http://fsojs.com wraps the FileSystem API effectively, if you want an easy solution
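For a flavor of the API covered in that tutorial, here's a minimal sketch (Chrome-only, using the webkit-prefixed entry point; persistent storage additionally requires a quota request, so temporary storage is used here):

// Request a 5 MB sandboxed filesystem and write a small text file into it.
window.webkitRequestFileSystem(window.TEMPORARY, 5 * 1024 * 1024, (fs) => {
    // Create (or open) a file in the sandbox
    fs.root.getFile('log.txt', { create: true }, (fileEntry) => {
        fileEntry.createWriter((writer) => {
            writer.onwriteend = () => console.log('Write completed.');
            writer.onerror = (e) => console.error('Write failed:', e);
            writer.write(new Blob(['Hello from the sandbox!'], { type: 'text/plain' }));
        });
    });
}, (err) => console.error('Could not open filesystem:', err));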
As mentioned by others here, the FileWriter and FileSystem APIs can be used to store files on a client's machine from the context of a browser tab/window.
However, there are several things pertaining to these APIs which you should be aware of:
Implementations of the APIs currently exist only in Chromium-based browsers (Chrome & Opera)
Both of the APIs were taken off of the W3C standards track on April 24, 2014, and as of now are proprietary
Removal of the (now proprietary) APIs from implementing browsers in the future is a possibility
A sandbox (a location on disk outside of which files can produce no effect) is used to store the files created with the APIs
A virtual file system (a directory structure which does not necessarily exist on disk in the same form that it does when accessed from within the browser) is used to represent the files created with the APIs
IsolatedStorage, which hasn't been mentioned yet, also allows for file I/O from a tab/window context, but it is made available solely through Silverlight and requires the use of managed code to access. It, like FileSystem, also exists in a sandbox and makes use of a virtual file system.
Given the high market penetration of both Chromium-based browsers and Silverlight (support for which, interestingly enough, has been dropped by those same browsers), you may find a solution which uses the first of the above approaches available on a client machine satisfactory.
BakedGoods, a JavaScript library that establishes a uniform interface for conducting common storage operations across all native (including FileSystem) and some non-native (including IsolatedStorage) storage facilities, is an example of such a solution:
//Write file to first of either FileSystem or IsolatedStorage
bakedGoods.set({
    // One item: a plain-text file named "testFile"
    data: [{key: "testFile", value: "Hello world!", dataFormat: "text/plain"}],
    // Try the HTML5 FileSystem API first, then fall back to Silverlight's IsolatedStorage
    storageTypes: ["fileSystem", "silverlight"],
    // Request persistent (rather than temporary) sandboxed storage for the FileSystem API
    options: {fileSystem: {storageType: Window.PERSISTENT}},
    // Called when all operations settle, with per-storage-type results and errors
    complete: function(byStorageTypeStoredItemRangeDataObj, byStorageTypeErrorObj){}
});
Just for the sake of complete transparency, BakedGoods is maintained by this guy right here :)