Forge Design Automation Revit Workitem Arguments

I'm following the Design Automation API v3 tutorial for Revit.
When doing a workitem POST, I'm a little unclear about the "rvtFile" and "result" arguments. Can the rvtFile URL be in an AWS bucket? Also, what are the restrictions on the result website? It states that it needs to be a signed URL, but can this just be another AWS bucket? Or do I need to create a website? (Note: I've never done any web development. Everything I know I learned from this tutorial.)

Since Design Automation for Revit runs in the cloud (and not on your local machine), it needs a way to download your input files. You may put your files on any storage service provider (say, Amazon S3) and provide direct download links to them. For Design Automation to have access to them, you will either need to make those files publicly accessible URLs or keep them private and generate signed URLs for them. When DA4R runs your workitem, the direct download URLs provided in the workitem payload are used to download your files to the worker machine.
Design Automation also does not store any of your result files. So you will have to generate a signed URL for uploading them to an appropriate cloud location (say, a location in an Amazon S3 bucket).
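As a rough sketch of how the two signed URLs fit together, here is one way to presign both and submit the workitem, assuming the AWS SDK for JavaScript v3, a placeholder bucket, and an activity (here a hypothetical "MyNickname.MyActivity+prod") whose parameters are named rvtFile and result:

```typescript
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

async function submitWorkitem(accessToken: string): Promise<void> {
  // Signed GET URL so DA4R can download the input model...
  const rvtUrl = await getSignedUrl(
    s3, new GetObjectCommand({ Bucket: "my-bucket", Key: "input.rvt" }),
    { expiresIn: 3600 });
  // ...and a signed PUT URL so it can upload the finished result.
  const resultUrl = await getSignedUrl(
    s3, new PutObjectCommand({ Bucket: "my-bucket", Key: "result.rvt" }),
    { expiresIn: 3600 });

  // Argument names must match the parameter names declared on the activity.
  const workitem = {
    activityId: "MyNickname.MyActivity+prod", // hypothetical activity id
    arguments: {
      rvtFile: { url: rvtUrl },
      result: { verb: "put", url: resultUrl },
    },
  };

  const resp = await fetch("https://developer.api.autodesk.com/da/us-east/v3/workitems", {
    method: "POST",
    headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
    body: JSON.stringify(workitem),
  });
  console.log(resp.status, await resp.json());
}
```

So no, you do not need a website: the "result" argument is simply any URL that Design Automation can PUT the output file to, and a signed S3 upload URL works fine.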
While Amazon S3 is just an example, there are several other storage providers. I also recommend reading about Autodesk Forge's Data Management APIs:
https://forge.autodesk.com/api/data-management-cover-page/
EDIT:
Useful links
Tutorials: https://learnforge.autodesk.io/
AU Class: https://www.autodesk.com/autodesk-university/class/Revit-Data-Forge-How-Can-Design-Automation-Revit-API-Help-Me-2018

Related

Autodesk Forge Design Automation API and Vault

I'm trying to figure out the best workflow to generate a PDF from a file in the Vault. I first tried referencing the URL shown in the address bar when logged into the Thin Client, but that didn't work.
Is this the best workflow to accomplish this:
Download the file from Vault
Upload to cloud storage
Process the file in cloud storage with Forge API
Download resulting PDF
Check PDF in to Vault
Delete file from cloud storage
Before going with Forge (which is supported via Design Automation), I would suggest reviewing Vault's built-in features for generating PDFs; see the following links:
Publishing and Manage PDF Files
New Automated PDF Creation for Document Control
There are plenty of Vault APIs for checking files in and out of Vault, as well as for performing gets. We currently do not have Vault-to-Forge APIs, but you can use the Forge APIs today. You will need to build a custom application to perform the communication and file transfer between the two locations; a rough sketch follows.
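Purely as an illustration of the round trip listed in the question, here is a skeleton of such a custom application. Every helper here is hypothetical; each one would wrap the corresponding Vault web-service call or Forge REST request in your own code:

```typescript
// Hypothetical helpers -- each wraps a real Vault or Forge call in your app.
declare function vaultGet(fileId: string): Promise<Buffer>;               // 1. download from Vault
declare function uploadToStorage(data: Buffer): Promise<string>;          // 2. upload, returns a signed URL
declare function runForgeJob(inputUrl: string): Promise<string>;          // 3. Design Automation, returns PDF URL
declare function download(url: string): Promise<Buffer>;                  // 4. fetch the resulting PDF
declare function vaultCheckIn(name: string, data: Buffer): Promise<void>; // 5. check in to Vault
declare function deleteFromStorage(url: string): Promise<void>;           // 6. clean up cloud storage

async function vaultFileToPdf(fileId: string): Promise<void> {
  const source = await vaultGet(fileId);          // step 1
  const inputUrl = await uploadToStorage(source); // step 2
  try {
    const pdfUrl = await runForgeJob(inputUrl);   // step 3
    const pdf = await download(pdfUrl);           // step 4
    await vaultCheckIn(fileId + ".pdf", pdf);     // step 5
  } finally {
    await deleteFromStorage(inputUrl);            // step 6, even on failure
  }
}
```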

hosting a JSON file for a 3rd party app/service to use

We currently use Jive Cloud N which can use the Rest API and allows the use of Custom Apps. Our UI devs have created an app which uses a JS GET to pull data from a JSON file for our "Birthdays and Anniversaries" tile.
At the moment, the JSON file is hosted on our UI dev's Google Cloud Apps account, but we wish to host it internally so we don't have to keep contacting them for changes.
I uploaded the file to our OneDrive for Business storage and created a public URL with full read permissions, but the Jive platform throws an error when trying to load the custom app.
The error is that the file "has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present".
Our dev said that to get it working on his Google Cloud App storage, he had to specify the Access-Control-Allow-Origin header in the server's app.yaml file. I don't know what this is, or whether there is an equivalent for ODfB/SharePoint.
To get to my question: How can I host this JSON file on ODfB or even somewhere on our Azure tenancy so that it can be used? Or am I better off trying to set up a Google Cloud App storage location and replicating our dev's setup? FYI - I'd prefer the former because we're already using M$ for a number of cloud-hosted services.
Thanks in advance
Per my understanding, you could leverage Azure Blob Storage to store your JSON file, and you could use Microsoft Azure Storage Explorer to easily manage/share your files.
Moreover, you could manage anonymous read access to your containers and blobs; refer to this tutorial for more details. Also, you could leverage SAS (shared access signatures) to grant limited access to your storage account for other clients; you could follow this tutorial for getting started with SAS.
As a simple approach, you could create your storage account and use Microsoft Azure Storage Explorer to manage and share your file.
For cross-domain access, you need to configure the CORS settings on the Blob service.
For sharing your file (blob), you could set the container's public access level or use a SAS to grant limited access for other clients.
Right-click your container and select "Set Public Access Level":
Sample file for share: https://brucechen.blob.core.windows.net/brucechen/index.json
Alternatively, you could right-click your JSON file and click "Get Shared Access Signature":
Sample file for share: https://brucechen.blob.core.windows.net/brucechen/index.json?st=2017-02-28T08%3A04%3A00Z&se=2017-09-01T08%3A04%3A00Z&sp=r&sv=2015-12-11&sr=b&sig=rVkorHeNOd4j2YhkmmxZ6DfXVLf1FoN2smY6mNRIoWs%3D
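If you would rather produce the CORS setting and the SAS in code instead of through Storage Explorer, here is a rough sketch using the @azure/storage-blob package. The account name, container, blob name, and allowed origin are all placeholder assumptions:

```typescript
import {
  BlobSASPermissions,
  BlobServiceClient,
  generateBlobSASQueryParameters,
  StorageSharedKeyCredential,
} from "@azure/storage-blob";

async function main(): Promise<void> {
  const account = "youraccount"; // placeholder storage account name
  const cred = new StorageSharedKeyCredential(account, process.env.AZURE_STORAGE_KEY!);
  const service = new BlobServiceClient(`https://${account}.blob.core.windows.net`, cred);

  // CORS: let the Jive page's origin fetch the blob from the browser.
  await service.setProperties({
    cors: [{
      allowedOrigins: "https://your-jive-instance.example.com", // placeholder origin
      allowedMethods: "GET",
      allowedHeaders: "*",
      exposedHeaders: "*",
      maxAgeInSeconds: 3600,
    }],
  });

  // Read-only SAS for the JSON blob, valid for one year.
  const sas = generateBlobSASQueryParameters({
    containerName: "data",
    blobName: "index.json",
    permissions: BlobSASPermissions.parse("r"),
    expiresOn: new Date(Date.now() + 365 * 24 * 3600 * 1000),
  }, cred).toString();

  console.log(`https://${account}.blob.core.windows.net/data/index.json?${sas}`);
}

main().catch(console.error);
```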

What API to use for Offline Chrome App

I want to develop an offline Chrome application.
Since SQL is not available in an offline app, what API can serve the following purposes?
=>Large Storage
=>Efficient method to set and get values
=>Fast
=>Secured (user cannot tamper with the data)
I'm confused between IndexedDB and the File System API.
I have knowledge of web languages and of how online apps can store data on a server, but I don't know much about how to save data offline.
It all depends on your needs.
Chrome apps have a couple of limitations. Because they must be very fast, some web APIs are disabled. You can't use localStorage or WebSQL, for example.
However, in apps you have a different set of storage options:
chrome.storage.local - an equivalent of localStorage, but asynchronous; you can also save/read many objects at once
chrome.storage.sync - the same as above, but the data is shared between different instances of the app (on other browser profiles or machines)
web filesystem API - the well-known web filesystem API, which can keep any kind of file in protected browser storage. Users do not have direct access to these files; only the app does
an extension to the above, chrome.syncFileSystem - works similarly, but files saved using this API are synced between the app's instances (e.g. on different machines) using Google Drive as a back-end. However, users can't see the synced files in the Drive UI because they are hidden
chrome.fileSystem API - another extension to the web filesystem API; it gives you access to the user's sandboxed local filesystem. You can read from and write to locations selected by the user
IndexedDB - quoting the docs: IndexedDB is an API for client-side storage of significant amounts of structured data, which also enables high performance searches of this data using indexes.
other custom solutions: saving data on a server and syncing changes across all instances
You can choose any of the above. From what you describe, you'll probably want to use the IndexedDB API. It is not SQL; it is a different approach to saving data, so if you have never used it before, try a sample app first. It's fast and efficient, and combined with the unlimitedStorage permission it can also store large amounts of data. A minimal sketch follows.
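Here is that sketch - one object store keyed by id, one put, and one get. The names "app-db" and "records" are arbitrary examples:

```typescript
// Open (or create) the database, creating the object store on first run.
function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("app-db", 1);
    req.onupgradeneeded = () =>
      req.result.createObjectStore("records", { keyPath: "id" });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Write one record in a readwrite transaction.
async function putRecord(record: { id: string; value: unknown }): Promise<void> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction("records", "readwrite");
    tx.objectStore("records").put(record);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

// Read one record back by its key.
async function getRecord(id: string): Promise<unknown> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction("records").objectStore("records").get(id);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}
```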
I also suggest reading the Offline First page in the Chrome Apps documentation, which gives examples of approaches for making an app work offline.

What are some Online Cloud Storage Services that can download files directly from the cloud?

What are some cloud storage services that can download files directly from the web?
For example, I want to download a file -> www.example.com/god.avi
Now, what are some cloud storage services that will allow me to save that file directly to my account?
Google Drive and Dropbox are cloud services, but they don't have this facility.
I assume, since you're asking this question on Stack Overflow, that you're asking in the context of programming. In that case, Dropbox has the Saver, which lets a user save a file from a URL directly to Dropbox. (Dropbox transfers the file from the URL server-side.)
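A minimal sketch of wiring up the Saver, assuming the Dropbox drop-ins script is already loaded on the page with your app key:

```typescript
// Assumes this script tag is on the page:
// <script src="https://www.dropbox.com/static/api/2/dropins.js"
//         id="dropboxjs" data-app-key="YOUR_APP_KEY"></script>
declare const Dropbox: {
  save(options: {
    files: { url: string; filename: string }[];
    success?: () => void;
    error?: (message: string) => void;
  }): void;
};

// Dropbox fetches the URL server-side and stores the file in the
// user's account; it never passes through the user's machine.
Dropbox.save({
  files: [{ url: "http://www.example.com/god.avi", filename: "god.avi" }],
  success: () => console.log("Saved to Dropbox"),
  error: (message) => console.error(message),
});
```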
You can download/upload files programmatically to Dropbox and also to Google Drive; both have APIs for it.
However, if you are really looking for a complete service where you can scale your data storage up as far as you want, you should take a look at Amazon S3. In fact, as far as I know, Dropbox is built on top of Amazon S3 to provide its service.
If you would like to get an idea of how to upload/download a file to Amazon S3, you can take a look at this application example; a small sketch is also shown below. If you want the same thing for Dropbox or Google Drive, there are a lot of examples on the Internet. However, on those two providers you need a user token to upload files, which I don't like. I prefer the way it works with Amazon S3 (just for programming purposes - the GUI is better on Google Drive or Dropbox).
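For instance, a minimal direct upload with the AWS SDK for JavaScript v3 might look like this. The bucket and key are placeholders, and credentials are assumed to come from the environment rather than a per-user token:

```typescript
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

async function upload(): Promise<void> {
  const s3 = new S3Client({ region: "us-east-1" });
  const body = await readFile("god.avi");
  // Uploads with your own AWS credentials -- no per-user token needed.
  await s3.send(new PutObjectCommand({ Bucket: "my-bucket", Key: "god.avi", Body: body }));
}

upload().catch(console.error);
```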

Google Drive Live API: Server Export of Collaboration Document

I have a requirement to build an application with the following features:
Statistical and Source data is presented on simple HTML pages
Some missing Source data can be added from that HTML page (the data will be both exact numerical values and descriptive text)
Some new Source data can be added from those pages
Confirmed and verified data will NOT be editable via the HTML interface
Data is stored and made continuously available via the HTML interface
Periodically, the data added/changed from the interface needs to be pulled back into the source data - but in a VERY controlled way. All data changes and submissions will need verification and checking, and some will trigger re-runs of models (some of which take hours to run).
In terms of overview architecture I have:
Large DB that stores and manages the data - this is designed for import processes and analysis. It is not ideal for web presentation or an interface
Code servers that manipulate the data for imports and analysis
Frontend server that works as a proxy to add a layer of security in front of S3
Collection of generated HTML files on S3 presenting the required data
Before reading about the Google Drive Realtime API, my rough plan was to simply serialize data from the HTML interface and post it to S3. The import server scripts would then check for new information, grab it, check it, log it, and process it into the main data set.
That basic process, however, would mean that once changes were submitted from the web page, they would be lost from the user's view until they had been processed by the backend.
With the Google Drive Realtime API it would appear I could get the best of both worlds.
However for the above to work I would need to be able to access the Collaboration Document in code from the code servers and export the data.
The Realtime API gives JavaScript access to export and hand off to a function; however, in my use case I want to automate the export from the Collaboration Document.
The Google Drive SDK does not, as far as I can see, give any hints on downloading/exporting a file of type "Collaboration File".
What "non-browser-user" triggered methods are there for interfacing with the Collaboration Documents and exporting them?
Server-side export is not supported right now. What you could do is save the realtime model to a regular Drive file and read from that using the standard Drive API. See https://developers.google.com/drive/realtime/models-files for some discussion of different ways to set up interactions between realtime models and Drive files.
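Once the model has been saved into a regular Drive file, the server-side read is an ordinary Drive download. A rough sketch with the googleapis Node client follows; the file ID is a placeholder, and credentials are assumed to be available to GoogleAuth (e.g. a service account with read access to the file):

```typescript
import { createWriteStream } from "node:fs";
import { google } from "googleapis";

async function exportSavedModel(fileId: string): Promise<void> {
  // Picks up credentials from the environment, e.g. GOOGLE_APPLICATION_CREDENTIALS.
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/drive.readonly"],
  });
  const drive = google.drive({ version: "v3", auth });

  // Download the regular Drive file that the realtime model was saved into.
  const res = await drive.files.get(
    { fileId, alt: "media" },
    { responseType: "stream" },
  );
  res.data.pipe(createWriteStream("model-export.json"));
}

exportSavedModel("YOUR_FILE_ID").catch(console.error);
```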