I want to make an automatic conversion (using a CLI or API) of 3D models (DWG/DXF) into STEP files.
I have tried to read the documentation for Autodesk Forge to see if that could help me, but I have a hard time understanding whether it can.
Can I do this with Autodesk Forge?
Is there some other way to do it?
Is there a better way to do it?
How about starting with a working sample? Below are a few good starting points for your requirements:
https://github.com/Autodesk-Forge/bucket.manager-csharp-sample.tool
https://github.com/Autodesk-Forge/forge.commandline-curl
https://github.com/Autodesk-Forge/forge.commandline-nodejs
And feel free to find more samples here:
https://github.com/Autodesk-Forge
https://forge-rcdb.autodesk.io
https://forge.autodesk.com/blog
https://autodesk-forge-showroom.herokuapp.com
Basically, you can build scripts or an app in any language of your choice (the beauty of a RESTful API is that it is language neutral) to automate the workflow. For your requirements it breaks down to:
Read the model files from your own persistence layer (entirely your domain) -> Upload the model to Forge OSS (doc here) -> Post the conversion job (doc here, and supported formats) -> Poll the job status and retrieve the manifest for download (here) -> Download and persist the output (derivatives) (here)
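As a minimal sketch of that flow in Node.js (18+, global fetch), using plain REST calls: the endpoint paths follow the OSS and Model Derivative v2 documentation, but the bucket name, scopes, polling interval and the output format (SVF here; STEP is not available from the translation service, see the edit below) are assumptions to adapt to your own setup.

```javascript
const fs = require('fs/promises');

const BASE = 'https://developer.api.autodesk.com';

// 2-legged token (client credentials) for OSS + Model Derivative.
async function getToken(clientId, clientSecret) {
  const res = await fetch(`${BASE}/authentication/v1/authenticate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      client_id: clientId,
      client_secret: clientSecret,
      grant_type: 'client_credentials',
      scope: 'data:read data:write data:create bucket:create bucket:read',
    }),
  });
  return (await res.json()).access_token;
}

async function convert(token, bucketKey, objectName, localPath) {
  // 1. Upload the model to Forge OSS (the bucket is assumed to exist already,
  //    created once via POST /oss/v2/buckets).
  const data = await fs.readFile(localPath);
  const upload = await fetch(`${BASE}/oss/v2/buckets/${bucketKey}/objects/${objectName}`, {
    method: 'PUT',
    headers: { Authorization: `Bearer ${token}` },
    body: data,
  });
  const objectId = (await upload.json()).objectId;
  const urn = Buffer.from(objectId).toString('base64url'); // unpadded, URL-safe urn

  // 2. Post the conversion job (SVF here; swap the output format as needed).
  await fetch(`${BASE}/modelderivative/v2/designdata/job`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      input: { urn },
      output: { formats: [{ type: 'svf', views: ['2d', '3d'] }] },
    }),
  });

  // 3. Poll the manifest until the job completes.
  let manifest;
  do {
    await new Promise((r) => setTimeout(r, 5000));
    const res = await fetch(`${BASE}/modelderivative/v2/designdata/${urn}/manifest`, {
      headers: { Authorization: `Bearer ${token}` },
    });
    manifest = await res.json();
  } while (manifest.status === 'pending' || manifest.status === 'inprogress');

  // 4. Walk manifest.derivatives and GET each derivative urn to download the output.
  return manifest;
}
```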
Edit:
To convert DWG to STP (which is not yet supported by the Translation Service), use the Design Automation for AutoCAD service to automate the process. Basically you will need to create a .NET plug-in that exports DWG to STP, submit the module as an AppPackage, and invoke the automation activity via the service endpoints. See here for details.
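Once the plug-in and activity are published, invoking them is just another REST call. In the sketch below the activity id ('MyNickname.ExportToStep+prod') and the argument names ('InputDwg', 'OutputStp') are hypothetical and must match whatever you define; only the workitem endpoint and payload shape follow the Design Automation v3 documentation.

```javascript
// Submit a workitem that runs a custom DWG-to-STP activity (names are placeholders).
async function exportDwgToStep(token, inputSignedUrl, outputSignedUrl) {
  const res = await fetch('https://developer.api.autodesk.com/da/us-east/v3/workitems', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      activityId: 'MyNickname.ExportToStep+prod',          // hypothetical custom activity
      arguments: {
        InputDwg: { url: inputSignedUrl },                 // download: your DWG
        OutputStp: { url: outputSignedUrl, verb: 'put' },  // upload: the resulting STP
      },
    }),
  });
  return (await res.json()).id; // workitem id, to poll via GET /workitems/{id}
}
```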
I have created a web application for viewing models using the Autodesk Forge Viewer, and I want to be able to add this to a website. I used this tutorial: https://learnforge.autodesk.io/#/tutorials/viewmodels (using node.js for the language option).
The goal is to have the user access the viewer application from the website. I have been using VS code live server for testing. However, when I link the page that has the viewer into my own website, the viewer does not load the buckets or allow for creation of new buckets. It is just stuck on a loading symbol like below:
[Loading screen][1]
Could I please have the following questions answered:
What is the proper way to embed this application onto a website in the manner I have described above?
What part of the code controls where the buckets are loaded in?
Thank You.
[1]: https://i.stack.imgur.com/4Xlfv.png
The LearnForge tutorial is an example of how to work with the Forge API. As a web app, it depends on how the developer (you) designs the user interface, workflow, and data management.
For example, you can remove the panel with the bucket and object lists, keeping only the viewer in the UI, but you will then need to decide how to provide the object id (urn) that gets loaded in the viewer. Normally you would set up your own user management, login process, etc., and define your own user permissions. The user logs in, the web app lists all files (objects) he has permission to view, and when one file is selected, the app gets its urn and loads the model in the Forge Viewer, as sketched below.
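A minimal sketch of that last step, assuming your backend exposes a token route (the '/api/forge/oauth/token' path and the 'forgeViewer' container id are assumptions) and hands the page the urn the user is allowed to see; the viewer calls themselves follow the standard Forge Viewer API:

```javascript
const options = {
  env: 'AutodeskProduction',
  getAccessToken: (done) => {
    // Ask your own backend for a viewables:read token (endpoint name is an assumption).
    fetch('/api/forge/oauth/token')
      .then((res) => res.json())
      .then((data) => done(data.access_token, data.expires_in));
  },
};

Autodesk.Viewing.Initializer(options, () => {
  const viewer = new Autodesk.Viewing.GuiViewer3D(document.getElementById('forgeViewer'));
  viewer.start();
  // urn of the object the logged-in user selected and is permitted to view
  Autodesk.Viewing.Document.load(`urn:${urn}`, (doc) => {
    viewer.loadDocumentNode(doc, doc.getRoot().getDefaultGeometry());
  }, (err) => console.error('Failed to load document', err));
});
```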
If the end users of your app are BIM 360 users, you could take advantage of the BIM 360 data management workflow, which follows the same permissions specified in BIM 360. In that case, the other tutorial will be a good start:
https://learnforge.autodesk.io/#/tutorials/viewhubmodels
In any case, the workflow and UI are defined by you. I hope this explains it. If you have any further questions that need a meeting, please feel free to check our team's calendar:
https://calendly.com/autodeskforge
I'm following the Design Automation API v3 tutorial for Revit.
When posting a workitem I'm a little unclear about the "rvtFile" and "result" arguments. Can the rvtFile URL be in an AWS bucket? Also, what are the restrictions for the result website? It states that it needs to be a signed URL, but can this just be another AWS bucket? Or do I need to create a website? (Note: I've never done any web development. Everything I know I learned from this tutorial.)
Since Design Automation for Revit runs in the cloud (and not on your local machine), it needs a way to download your input files. You may put your files on any storage service provider (say Amazon S3) and provide direct download links to them. For Design Automation to have access, you either need to make those files publicly accessible URLs or keep them private and generate a signed URL for each. When DA4R runs your workitem, the direct download URLs provided in the workitem payload are called to download your files to the worker machine.
Design Automation also does not store any of your result files. So you will have to generate a signed URL for uploading them to an appropriate cloud location (say a location in an Amazon S3 bucket).
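As a rough sketch (the bucket, keys, and activity id below are placeholders, and the 'rvtFile'/'result' argument names are assumed to match the tutorial's activity), the S3 presigned URLs can be generated with the AWS SDK and dropped straight into the workitem payload:

```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Presigned GET for the input RVT (valid long enough for the workitem to run)...
const rvtUrl = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket', Key: 'input/house.rvt', Expires: 3600,
});
// ...and presigned PUT for the result, so Design Automation can upload the output for you.
const resultUrl = s3.getSignedUrl('putObject', {
  Bucket: 'my-bucket', Key: 'output/result.json', Expires: 3600,
});

const workitem = {
  activityId: 'MyNickname.MyRevitActivity+prod',  // placeholder activity id
  arguments: {
    rvtFile: { url: rvtUrl },                  // DA4R downloads the input from here
    result: { url: resultUrl, verb: 'put' },   // DA4R uploads the output here
  },
};
// POST the workitem to https://developer.api.autodesk.com/da/us-east/v3/workitems with your Forge token.
```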
Amazon S3 is just an example; there are several other storage providers. I also recommend reading about Autodesk Forge's Data Management APIs:
https://forge.autodesk.com/api/data-management-cover-page/
EDIT:
Useful links
Tutorials: https://learnforge.autodesk.io/
AU Class: https://www.autodesk.com/autodesk-university/class/Revit-Data-Forge-How-Can-Design-Automation-Revit-API-Help-Me-2018
We have integrated the Autodesk Forge Viewer. We send a request to the Forge APIs for conversion (using the Model Derivative API). After closing the Viewer, if we need to show the same file again, we currently post the DWG file again for conversion in order to view it.
Instead, is there a way to save the SVF file on my local system so that I need not call the Forge web service twice for the same file?
According to the pricing, every simple conversion job costs 0.2 credits.
Please suggest how I can avoid running the same conversion a second (or nth) time.
Thank you,
Shiva Kumar
Unless the DWG file has changed, you do not need to upload the DWG file again and/or POST a translation again. If you do, you will effectively consume 0.2 cloud credits each time. Instead, just reference the URN you received after the upload/translation when starting the viewer. The 'bubble' (SVF) persists on the backend depending on the storage policy of the bucket you chose: with a transient bucket the file and bubble persist for 24 hours, with a temporary bucket for a month, and with a persistent bucket forever.
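One way to make that concrete (a sketch, assuming Node 18+ with global fetch): check whether the Model Derivative manifest for the stored URN is still available before deciding to re-upload and re-translate.

```javascript
// Returns true if the derivatives for this urn still exist and can simply be
// loaded in the viewer; false means the bubble expired (or was never translated)
// and a new upload/translation is needed.
async function isTranslationStillAvailable(token, urn) {
  const res = await fetch(
    `https://developer.api.autodesk.com/modelderivative/v2/designdata/${urn}/manifest`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  if (res.status === 404) return false;
  const manifest = await res.json();
  return manifest.status === 'success';
}
```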
I have found a reference showing how to iterate through a bucket and see the files it contains.
You can log in to the live demo below using your Forge credentials.
Live Demo
Source Code
Thanks.
I'm trying to figure out the best workflow to generate a PDF from a file in the Vault. I first tried referencing the URL in the address bar when logged into the Thin Client, but that didn't work.
Is this the best workflow to accomplish this?
Download the file from Vault
Upload to cloud storage
Process the file in cloud storage with Forge API
Download resulting PDF
Check PDF in to Vault
Delete file from cloud storage
Before going with Forge (this scenario is supported via Design Automation), I would suggest reviewing Vault's built-in feature to generate PDFs; see the following links:
Publishing and Manage PDF Files
New Automated PDF Creation for Document Control
There are plenty of Vault APIs for checking files in and out of Vault as well as performing gets. We currently do not have Vault-to-Forge APIs, but you can use the Forge APIs today. You will need to build a custom application to handle the communication and file transfer between the two locations.
I have a requirement to build an application with the following features:
Statistical and Source data is presented on simple HTML pages
Some missing Source data can be added from that HTML page (data will be both exact numerical values and descriptive text)
Some new Source data can be added from those pages
Confirmed and verified data will NOT be editable via the HTML interface
Data is stored and made continuously available via the HTML interface
Periodically the data added/changed from the interface needs to be pulled back into the source data - but in a VERY controlled way. All data changes and submissions will need verification and checking - and some will trigger re-runs of models (some of which take hours to run).
In terms of overview architecture I have:
Large DB that stores and manages the data - this is designed for import processes and analysis. It is not ideal for web presentation or an interface
Code servers that manipulate the data for imports and analysis
Frontend server that works as a proxy to add a layer of security to S3
Collection of generated HTML files on S3 presenting the required data
Before reading about the Google Drive Realtime API my rough plan was to simply serialize data from the HTML interface and post to S3. The import server scripts would then check for new information, grab it, check it, log it and process it into the main data set.
That basic process however would mean that once changes were submitted from the web page - they would be lost from the users view until they had been processed by the backend.
With the Google Drive Realtime API it would appear I could get the best of both worlds.
However for the above to work I would need to be able to access the Collaboration Document in code from the code servers and export the data.
The Realtime API gives JavaScript access to export and hand off to a function; however, in my use case I want to automate the export from the Collaboration Document.
The Google Drive SDK does not, as far as I can see, give any hints on downloading/exporting a file of type "Collaboration File".
What "non-browser-user" triggered methods are there for interfacing with the Collaboration Documents and exporting them?
David
Server-side export is not supported right now. What you could do is save the realtime model to a regular Drive file, and read from that using the standard Drive API. See https://developers.google.com/drive/realtime/models-files for some discussion on different ways to set up interactions between realtime models and Drive files.
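For the server-side half: once your client code has written the model's data into a regular Drive file (for example as JSON), any backend can read it back with the standard Drive API. A sketch using the googleapis Node.js client; the service-account credentials file and the file id are assumptions.

```javascript
const { google } = require('googleapis');

// Fetch the content of the regular Drive file that the realtime model was saved into.
async function exportSavedModel(fileId) {
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json',   // credentials with access to the file (assumed)
    scopes: ['https://www.googleapis.com/auth/drive.readonly'],
  });
  const drive = google.drive({ version: 'v3', auth });
  // alt: 'media' downloads the file's content rather than its metadata.
  const res = await drive.files.get({ fileId, alt: 'media' });
  return res.data;
}
```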