Autodesk-Forge bucket system: New versioning

I am wondering what the best practice is for handling new versions of the same model in the Data Management API bucket system.
Currently, I have one bucket per user, and files with the same name overwrite the existing model when doing an SVF/SVF2 conversion.
To handle model versioning in the best manner, should I:
1) create one bucket per converted file, or
2) continue with one bucket per user?
If 1): is there a limit on the number of buckets that can be created?
If 2): how do I get the translation to accept a bucketKey different from the file name? (As it is now, the uploaded object needs to keep the file name for the translation to work.)
Cheers in advance for the assistance.

In order to translate a file, you do not have to keep the original file name, but you do need to keep the file extension (e.g. *.rvt) so that the Model Derivative service knows which translator to use. So you could just create objects with different names: perhaps add a suffix like "_v1", or generate random names and keep track in a database of which file is which version of which model. Up to you.
There is no limit on the number of buckets, but it might be overkill to have a separate one for each file.
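
As a minimal sketch of that approach (Python with the requests library; the bucket key, file names, and token are hypothetical, and the classic OSS direct-upload endpoint is assumed), uploading the same design under a versioned object name and then requesting a translation could look like this:

import base64
import requests

TOKEN = "..."              # hypothetical two-legged OAuth token (data:read data:write)
BUCKET = "my-user-bucket"  # hypothetical bucket key
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Upload under a versioned object name; the extension (.rvt) must be kept
object_name = "house_v2.rvt"
with open("house.rvt", "rb") as f:
    res = requests.put(
        f"https://developer.api.autodesk.com/oss/v2/buckets/{BUCKET}/objects/{object_name}",
        headers=HEADERS, data=f)
object_id = res.json()["objectId"]

# The Model Derivative job expects the object ID base64url-encoded, without padding
urn = base64.urlsafe_b64encode(object_id.encode()).decode().rstrip("=")
requests.post(
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/job",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"input": {"urn": urn},
          "output": {"formats": [{"type": "svf", "views": ["2d", "3d"]}]}})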

Is it possible to get URNs of models which are translated as references via ZIP translation?

When I upload and translate a ZIP file with one rootFile and some models which act as references to Autodesk-Forge, I could only find one model URN afterwards. Are all models uploaded separately under the hood, and is there a possibility to get the URN of each model?
One use case would be to open a model from the package other than the predefined root, in order to view the 2D sheets from that model.
Another use case would be to save data related to elements/referenced models via their dbId/GUID and URN.
I was expecting to get each model's URN by selecting parts from different models and running this.viewer.getAggregateSelection().lastItem.model, as that would do the trick if I had translated them separately and aggregated the view. But this way there is just one URN for all elements.
I also tried inspecting the buckets and objects via the awesome "Autodesk Forge Tools" extension for VSCode, but couldn't get any deeper than the .zip file as an object in the bucket.
Is the only possibility to upload/translate the same ZIP package again for every model I want to open, with a newly defined rootFilename? Is this still the only way, as stated in an answer from 2016? (https://stackoverflow.com/a/38720162/19956654)
Appreciate any help with this one, thanks in advance!
Unfortunately, one ZIP will have one URN only. So you will need to have the ZIP uploaded under different names and request translations with different rootFilenames separately.
However, you don't really need to upload the same file several times. Just call PUT buckets/:bucketKey/objects/:objectKey/copyto/:newObjectKey to duplicate the uploaded ZIP under different names.
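
A minimal sketch of that call (Python/requests again; bucket and object names are hypothetical):

import requests

TOKEN = "..."         # hypothetical token with data:write scope
BUCKET = "my-bucket"  # hypothetical bucket key

# Duplicate the uploaded ZIP server-side; each copy gets its own object ID,
# so each can be translated with a different rootFilename and its own URN.
res = requests.put(
    f"https://developer.api.autodesk.com/oss/v2/buckets/{BUCKET}"
    "/objects/models.zip/copyto/models_copy.zip",
    headers={"Authorization": f"Bearer {TOKEN}"})
print(res.json()["objectId"])  # base64url-encode this as the new translation URN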

Rename Bucket or transfer all models

I would like to know if it is possible to rename a Bucket.
If not, I would like to know if I can move all my models on the bucket I want to rename to a new bucket without translating each model again.
Thanks.
Unfortunately, it is not possible to rename a bucket, but it is possible to copy files (objects) across buckets with this API.
For the viewables, it is a different story - they are not stored in OSS buckets, but on the Model Derivative server. This means you either need to translate them again if you want to use the new URN, or leave them where they are and map the old URNs to the new ones. Viewables are destroyed only when you delete their manifest.
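
One straightforward way to move objects between buckets - a sketch assuming Python/requests, hypothetical bucket names, and the classic OSS download/upload endpoints rather than any dedicated copy API - is to download each object and re-upload it into the new bucket:

import requests

TOKEN = "..."  # hypothetical token with data:read and data:write scopes
BASE = "https://developer.api.autodesk.com/oss/v2/buckets"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def copy_object(src_bucket, dst_bucket, object_key):
    # Download from the source bucket...
    data = requests.get(f"{BASE}/{src_bucket}/objects/{object_key}",
                        headers=HEADERS).content
    # ...and re-upload into the destination bucket under the same key.
    requests.put(f"{BASE}/{dst_bucket}/objects/{object_key}",
                 headers=HEADERS, data=data)

copy_object("old-bucket", "new-bucket", "model.rvt")  # hypothetical names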

Autodesk Forge - Post Jobs - Must files be in buckets and proper URN

I am working on a POST job and I am confused about where files need to be for the job to run, and about the proper URN.
The examples all use a file that the user uploads to a bucket. I am trying to run the POST job on a file that a user has created in Fusion 360 and selected through a GUI I created. The URN in question is obtained by letting the user select the hub, project, folder(s), and file. I then use this file URN in the POST job.
I keep getting back the response:
Failed to download the design description for the input design.
My questions are:
Is it possible to do this from a user's hub, or do all items have to be in buckets?
Where are those translated files stored once created? If I want to get data like volume and mass without storing the translated file, is that possible?
I took the "urn:" off the front of the URN and got a different error, which I believe means it couldn't find any file:
Invalid 'design' parameter.
So it looks like the URN I am using is finding a file, but there is an issue somewhere that is preventing that file from being accessed or translated.
I keep getting back the response of: Failed to download the design description for the input design.
For Fusion 360 files, make sure the extension name of the object is f2d/f3d. BTW, Forge Viewer supports these two formats directly, so you don't have to translate to SVF for the Viewer to visualize them.
Is it possible to do this from a user's hub or do all items have to be in buckets?
For hub project items, use the Data Management API to obtain the object ID, and be sure to include the version parameter in your URN: see GET projects/:project_id/folders/:folder_id/contents and use the id of the item (with its version parameter) as your URN. See also the tutorial here to understand how project folder items work.
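
As a sketch (Python/requests; the three-legged token, project ID, and folder ID are hypothetical placeholders), getting a translatable URN from a folder item's tip version might look like this:

import base64
import requests

TOKEN = "..."  # hypothetical three-legged token (user context)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

project_id = "b.1234"      # hypothetical project ID
folder_id = "urn:adsk..."  # hypothetical folder ID

# Tip versions of the folder's items come back in the "included" array
res = requests.get(
    f"https://developer.api.autodesk.com/data/v1/projects/{project_id}"
    f"/folders/{folder_id}/contents",
    headers=HEADERS).json()
version_id = res["included"][0]["id"]
# e.g. "urn:adsk.wipprod:fs.file:vf.AbC123?version=2" - note ?version=...

# base64url-encode the full *version* id, padding stripped, for the POST job
urn = base64.urlsafe_b64encode(version_id.encode()).decode().rstrip("=")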
Where are those translated files stored once created? If I want to get data like volume and mass without storing the translated file, is that possible?
The translated derivatives are stored separately, and you can access them through the derivative manifest. Use GET :urn/metadata/:guid/properties to query derivative properties; you will need to translate the model first (any format will do) in order to extract properties - see the tutorial here.
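
A sketch of that property query (same Python/requests assumptions; TOKEN and URN are placeholders, with URN being the base64url-encoded design URN of a finished translation):

import requests

TOKEN = "..."  # hypothetical token, as above
URN = "..."    # base64url-encoded design URN of a translated model
BASE = "https://developer.api.autodesk.com/modelderivative/v2/designdata"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# First list the model view GUIDs of the translated design...
meta = requests.get(f"{BASE}/{URN}/metadata", headers=HEADERS).json()
guid = meta["data"]["metadata"][0]["guid"]

# ...then pull the element properties; for Fusion designs, values such as
# volume and mass appear among these properties.
props = requests.get(f"{BASE}/{URN}/metadata/{guid}/properties",
                     headers=HEADERS).json()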

Watson Conversation dialog for large number of entities?

I currently have a chat bot that has an entity for each stock symbol. There are over 3,000. For my dialog I want to be able to detect questions like #get #price #stockSymbol. Is there a way to deal with a large number of entities without writing an if statement for each one?
You are only allowed to have 100 entities in a single workspace. However, those entities can have 100,000 values.
So you could create an entity called @StockSymbol, and each value would be a stock identifier (e.g. IBM).
That way you only need one IF statement to determine that it is a stock, and you can pass the entity information back to your calling application to take action on the value.
To put this in programmatically, if it is a one-time thing, you can create a CSV file like the following:
StockSymbol,IBM
StockSymbol,MSFT
StockSymbol,AAPL
and so on. Then import that entity file. Alternatively, you can use the workspace API to update an already deployed workspace.
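
For the API route, a sketch against the Conversation v1 workspace API of that era (Python/requests; the service credentials and workspace ID are hypothetical):

import requests

URL = ("https://gateway.watsonplatform.net/conversation/api/v1/"
       "workspaces/YOUR_WORKSPACE_ID/entities?version=2017-05-26")
symbols = ["IBM", "MSFT", "AAPL"]  # in practice, read the full list from a file

# Create the @StockSymbol entity with one value per ticker
requests.post(URL,
              auth=("username", "password"),  # hypothetical service credentials
              json={"entity": "StockSymbol",
                    "values": [{"value": s} for s in symbols]})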
I am sorry to say there is no automatic dialog creation method within the Conversation Service UI. In cases like this, many teams create an external script that reads a file with your entities in it and then creates a workspace JSON file with the required dialog nodes. The workspace JSON file is a relatively simple format, and I have found you can easily merge any new JSON file into an already created workspace. In fact, with the new APIs it is even possible to load the new elements into a running workspace (although if you are new to this, create a duplicate workspace and merge into that one, or download and merge via a good editor).

Multiple or 1 JSON File

I'm creating a questionnaire application in Qt, where surveys are created, and users log on and complete these surveys. I am saving these as JSON.
Each survey could have 60+ questions and is completed multiple times by different people.
Is it more appropriate to save everything as one JSON file, or one file per survey?
I would use a database rather than a JSON file. You can use JSON to serialize data and transfer it between processes and computers or servers, but you don't want to save big data to a JSON file.
Anyway, if that's what you want to do, I would save each survey in a different JSON file. Keep them in order by assigning a unique identifier to each file (as the file name) so that you can find and search for them easily.
A single file would be a single point of failure, and reading and writing it would cause concurrency problems. One file per survey should mitigate the problem.
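
As a minimal sketch of the one-file-per-survey idea (shown in Python for brevity, even though the application itself is Qt; the directory name and survey shape are made up):

import json
import uuid
from pathlib import Path

SURVEY_DIR = Path("surveys")  # hypothetical storage directory
SURVEY_DIR.mkdir(exist_ok=True)

def save_survey(survey: dict) -> Path:
    # One file per survey, named by a unique id, so writes to one survey
    # never contend with another survey's file.
    path = SURVEY_DIR / f"{uuid.uuid4()}.json"
    path.write_text(json.dumps(survey, indent=2))
    return path

save_survey({"title": "Customer feedback", "questions": ["Q1", "Q2"]})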