Is it possible to delete or segment a bucket in the forge API - autodesk-forge

I'm building an app where users will add collections of CAD files to an engineering project.
My plan was to have one transient, temporary bucket for the whole app to use for temp storage, then create a persistent bucket for each project to hold that project's CAD files for the life of the project.
I've written the functions to create a new bucket for each project as it is created. I started to write the function to delete the bucket if the project is deleted and realised there is no API function to delete a bucket!
Now I'm wondering if I'm thinking about it wrong.
Rather than creating/deleting buckets with projects, would it be better to have one persistent bucket, segmented in some way to hold each project's files, and delete just that segment with the project?
How would I go about this? Or should I do something else altogether?

Yes it is. It is simply not documented yet.
The API works like this when using OSS v2:
DELETE https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey
It requires the 'bucket:delete' scope, and the action cannot be undone: it deletes the bucket and all the files in it, but viewables will be preserved.
You can test it using the sample here. Check out the bucketDelete command.
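Just to illustrate, here is roughly what that call looks like from Python with the requests library (the bucket key and the 2-legged token are placeholders, and the token needs to have been requested with the 'bucket:delete' scope):
import requests

BUCKET_KEY = "my-project-bucket"           # placeholder
ACCESS_TOKEN = "<2-legged access token>"   # placeholder, 'bucket:delete' scope

resp = requests.delete(
    f"https://developer.api.autodesk.com/oss/v2/buckets/{BUCKET_KEY}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)

# A success status means the bucket and all the objects in it are gone;
# remember this cannot be undone.
print(resp.status_code, resp.text)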

There is an API to delete buckets, but I'm not sure it's exposed to public API keys. It uses the DELETE verb and requires the 'bucket:delete' scope.
On the other hand, as you mentioned, there is not really a need for a per-project bucket; it's really up to you how you create your buckets and place the files in them. To give you an example, the Autodesk A360 cloud infrastructure uses a single bucket to hold the files of all its customers!
You could get away with just three buckets (one of each retention type) and manage the project/file relationship using a third-party database or a prefix-naming mechanism, as sketched below.
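Purely as an illustration (a hypothetical naming scheme, not something Autodesk prescribes): prefix every object key with the project ID, and use the beginsWith filter on the OSS v2 objects listing to enumerate or clean up a single project's files.
import requests

ACCESS_TOKEN = "<2-legged access token>"   # placeholder
BUCKET_KEY = "my-app-persistent-bucket"    # one shared persistent bucket (placeholder)

def object_key(project_id, filename):
    # Hypothetical scheme: "<project id>/<original file name>"
    return f"{project_id}/{filename}"

def list_project_objects(project_id):
    # The OSS v2 objects listing accepts a 'beginsWith' filter, which makes
    # per-project prefixes easy to enumerate (e.g. when deleting a project).
    resp = requests.get(
        f"https://developer.api.autodesk.com/oss/v2/buckets/{BUCKET_KEY}/objects",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"beginsWith": f"{project_id}/"},
    )
    resp.raise_for_status()
    return [item["objectKey"] for item in resp.json().get("items", [])]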

Related

How do I quickly list all Google Cloud projects in an organization?

I would like to quickly list all Google Cloud projects in an organization, without the Apps Script folders.
gcloud projects list can be very slow. This documentation is about speeding it up, but does not show how to retrieve the Apps Script folder which is used for filtering. Can that be done from the command line?
Also, gcloud projects list does not have a way to filter by organization. It seems that that is impossible as projects are not linked to their organization except through a tree of folders.
The documentation shows a way of walking the tree, apparently with Resource Manager API, which might do the job, but only pseudocode is shown. How can this be done with gcloud -- or else with Python or another language?
And if there is no way to accelerate this: How do I page through results using gcloud projects list? The documentation shows that page-size can be set, but does not show how to step through page by page (presumably by sending a page number with each command).
See also below for a reference to code I wrote that is the imperfect but best solution I could find.
Unfortunately there isn't a native Apps Script service available to work with the Cloud Resource Manager API.
It is, however, possible to make an HTTP call directly to the Resource Manager API projects.list() endpoint with the help of the UrlFetchApp service.
Alternatively, using Python as mentioned, the recommended Google APIs client library for Python supports calls to the Resource Manager API. You can find the specific projects.list() method documentation here.
As an additional note, if you happen to use a Cloud project to generate credentials and authenticate the API call, you may want to enable the Cloud Resource Manager API on your project by following this URL.
I’d also recommend submitting a new Feature Request using this template.
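For reference, the raw HTTP call behind projects.list() (whether it comes from UrlFetchApp or any other client) is just a GET to the v1 endpoint. A rough sketch in Python with requests, assuming you already have an OAuth access token with the cloud-platform scope:
import requests

ACCESS_TOKEN = "<OAuth 2.0 access token>"  # placeholder

resp = requests.get(
    "https://cloudresourcemanager.googleapis.com/v1/projects",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"pageSize": 500},
)
resp.raise_for_status()
for project in resp.json().get("projects", []):
    # 'parent' tells you whether the project hangs off an organization or a folder.
    print(project["projectId"], project.get("parent"))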
Here is some code that lists projects in an organization as quickly as possible. It is in Clojure, but it uses Java APIs and you can translate it easily.
Key steps
Query all accessible projects using CloudResourceManager projects(), using setQuery to accelerate the query by filtering out, for example, the hundreds of sys- projects often generated by Apps Script. The query uses paging.
From the results:
Accept those that are children of the desired org.
Reject those that are children of another org.
For those that are children of a folder, do this (concurrently, for speed): use gcloud projects get-ancestors $PROJECT_ID to find out whether the project belongs to your organization. (I don't see a way to do that in Java, and so I call the CLI.)
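A rough Python translation of those steps (a sketch only, assuming google-api-python-client with application-default credentials; the organization ID and the filter expression are placeholders you would adjust):
from googleapiclient import discovery

ORG_ID = "123456789012"  # placeholder: your organization's numeric ID

crm = discovery.build("cloudresourcemanager", "v1")

def in_org(project_id):
    # Equivalent of `gcloud projects get-ancestors PROJECT_ID`: walk the
    # ancestry and check whether it ends at the desired organization.
    ancestry = crm.projects().getAncestry(projectId=project_id, body={}).execute()
    return any(
        a["resourceId"]["type"] == "organization" and a["resourceId"]["id"] == ORG_ID
        for a in ancestry.get("ancestor", [])
    )

projects = []
# Filter out the Apps Script-generated sys-* projects to speed up the listing.
request = crm.projects().list(filter="NOT id:sys-*", pageSize=500)
while request is not None:
    response = request.execute()
    for p in response.get("projects", []):
        parent = p.get("parent", {})
        if parent.get("type") == "organization":
            if parent.get("id") == ORG_ID:
                projects.append(p["projectId"])  # direct child of the desired org
            # direct child of another org: reject
        elif parent.get("type") == "folder":
            if in_org(p["projectId"]):  # could be run concurrently for speed
                projects.append(p["projectId"])
    request = crm.projects().list_next(request, response)

print(projects)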

Is there a simple way to host a JSON document you can read and update in Google Cloud Platform?

What I'm trying to do is host a JSON document that will then, essentially, serve as a hosted version of json-server. I'm aware I can do something similar with My JSON Server, but I plan to move my entire architecture to GCP so want to get more familiar with it.
At first I looked into the Storage JSON API, but it seems like that's just for getting data about buckets rather than the items in the buckets themselves. I created a bucket called test-json-api and added a test-data.json, but there's seemingly no way to access the data in the json file via this API.
I'm trying to keep it as simple as possible for testing purposes. In time, I'll probably use a firestore allocation, but for now I'd like to avoid all that complexity, and instead have a simple GET and a PUT/PATCH to a json file.
The Storage JSON API you are talking about is only for getting and updating the metadata, not the data inside the object. Objects in a Google Cloud Storage bucket are immutable, so one way to update one is to download the object's data in your code, modify it, and then upload it again to the bucket.
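As an illustration of that read-modify-write pattern (a sketch assuming the google-cloud-storage Python client; the bucket and object names are the ones from the question):
from google.cloud import storage
import json

client = storage.Client()
blob = client.bucket("test-json-api").blob("test-data.json")

# "GET": download the object and parse it as JSON.
data = json.loads(blob.download_as_text())

# Modify it in code (the stored object itself is immutable).
data["updated"] = True

# "PUT": overwrite the object by uploading a new version of its content.
blob.upload_from_string(json.dumps(data), content_type="application/json")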
As you want to deal with JSON files you may explore using Cloud Datastore or Cloud Firestore. Also if you wish to use Firebase then you may explore Firebase Realtime Database.
A very quick and dirty way to make it easy to read some of the information in the json doc is to use that information in the blob name. For example the key information in
doc = {'id':3, 'name':'my name', ... }
could be stored in an object called "doc_3_my name", so that it can be read while browsing the bucket. You can then download the right doc if you need to see all the original data. An object name can be up to 1024 bytes of UTF-8 (with some exclusions), which is normally sufficient for surfacing basic information.
Note that you should never store PII like this.
https://cloud.google.com/storage/docs/objects
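A small sketch of that naming trick with the Python client (the document fields and bucket name are hypothetical, and again, keep PII out of object names):
from google.cloud import storage
import json

doc = {"id": 3, "name": "my name", "status": "draft"}

# Surface the key fields in the object name so they can be read while
# simply browsing the bucket, without downloading anything.
blob_name = f"doc_{doc['id']}_{doc['name']}"

client = storage.Client()
client.bucket("test-json-api").blob(blob_name).upload_from_string(
    json.dumps(doc), content_type="application/json"
)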

How to get a non-expired urn when uploading and translating a model with Autodesk Forge

I'm new to Autodesk Forge. I created my own web application to upload and translate models and show them via the viewer. But the URN I got expires in 24 hours. I mean, the objects in the bucket I created expire, but the URN can still be used to show the model in the viewer. How long until the URN expires, and how could I get a non-expiring URN? Currently I'm still on the trial. After subscribing, will translation return a URN that doesn't expire?
I tried searching but couldn't find the right answer. Please help.
You need to set the policy of your buckets to one of the following depending on how long you want to keep the URN valid. When creating buckets, it is required that applications set a retention policy for objects stored in the bucket. This cannot be changed at a later time. The retention policy on the bucket applies to all objects stored within. When creating a bucket, specifically set the policyKey to
transient (24 hours), which is what you have right now
temporary (30 days)
persistent (permanent until you decide to delete the bucket)
Check this for more information - https://forge.autodesk.com/en/docs/data/v2/developers_guide/retention-policy/
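For example, creating a persistent bucket could look like this (a sketch with Python requests; the bucket key and token are placeholders, and the token needs the 'bucket:create' scope):
import requests

ACCESS_TOKEN = "<2-legged access token>"  # placeholder

resp = requests.post(
    "https://developer.api.autodesk.com/oss/v2/buckets",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "bucketKey": "my-app-persistent-bucket",  # must be globally unique, lowercase
        "policyKey": "persistent",                # or "transient" / "temporary"
    },
)
print(resp.status_code, resp.json())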
I think you are interested in the lifetime of the generated bubbles (viewables).
Actually, after a design model from your bucket has been translated, the generated output (SVF/SVF2) is not tied to the original design model in your bucket. That means that even if the design model expires or is deleted, the generated output will still be there unless you explicitly delete it via https://forge.autodesk.com/en/docs/model-derivative/v2/reference/http/urn-manifest-DELETE, so you can consider it non-expiring as long as you keep the original URN.

AWS WorkDocs SDK - How to Search a Folder?

Given the ID for a folder in AWS WorkDocs, how can I search that folder for a file or sub-folder that has a specific name, using the SDK? And can such a search be recursively deep vs shallow?
Is there a better way besides fetching the metadata for all of the items and stopping once there's a match? It appears not, from this quote on a page that provides a Python example:
Note that this code snippet searches for the file only in the current folder in the user’s MyDocs. For files in subfolders, the code needs to iterate over the subfolders by calling describe_folder_contents() on them and performing the lookup.
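That iteration could look roughly like this (a sketch with boto3; the folder ID and target name are placeholders, and authentication-token handling and throttling are omitted for brevity):
import boto3

workdocs = boto3.client("workdocs")

def find_in_folder(folder_id, target_name):
    # Depth-first search of a WorkDocs folder for a document or sub-folder
    # with a specific name. Returns the matching metadata dict, or None.
    marker = None
    while True:
        kwargs = {"FolderId": folder_id, "Type": "ALL"}
        if marker:
            kwargs["Marker"] = marker
        page = workdocs.describe_folder_contents(**kwargs)

        for folder in page.get("Folders", []):
            if folder["Name"] == target_name:
                return folder
        for doc in page.get("Documents", []):
            if doc.get("LatestVersionMetadata", {}).get("Name") == target_name:
                return doc

        # Recurse into sub-folders (this is the "deep" part of the search).
        for folder in page.get("Folders", []):
            match = find_in_folder(folder["Id"], target_name)
            if match:
                return match

        marker = page.get("Marker")
        if not marker:
            return None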
I see that the pricing schedule mentions search ...
• $58/10K calls for SEARCH requests ($0.0058/call)
... but neither the API reference nor the FAQ mentions search in the answer for "What specific actions can be taken on Amazon WorkDocs content programmatically using the Amazon WorkDocs SDK?" -- The FAQ says:
The Amazon WorkDocs SDK allows you to perform create, read, update, and delete (CRUD) actions on WorkDocs’ users, folders, files, and permissions. You can access and modify file attributes, tag files, and manage comments associated with files.
In addition to API actions, you can also subscribe to notifications that Amazon WorkDocs sends with Amazon SNS. The detailed information, including syntax, responses and data types for the above actions, is available in the WorkDocs API Reference Documentation.
The labelling API might be the answer...
The labelling API allows you to tag files and folders so that you can better organize them, and to use tags when searching for files programmatically.
... but I'm having trouble finding an example, or even which classes comprise the "labelling API". Are they referring to the package software.amazon.awssdk.services.resourcegroupstaggingapi?
Description
Resource Groups Tagging API
A tag is a label that you assign to an AWS resource. A tag consists of a key and a value, both of which you define. For example, if you have two Amazon EC2 instances, you might assign both a tag key of "Stack", but the value of "Stack" might be "Testing" for one and "Production" for the other.
Tagging can help you organize your resources and enables you to simplify resource management, access management and cost allocation.
You can use the resource groups tagging API operations to complete the following tasks:
Tag and untag supported resources located in the specified Region for the AWS account.
Use tag-based filters to search for resources located in the specified Region for the AWS account.
List all existing tag keys in the specified Region for the AWS account.
List all existing values for the specified key in the specified Region for the AWS account.
In the list of supported resources on that page, it lists S3 (buckets only) and WorkSpaces, but there's no mention of WorkDocs. Is this what I'm looking for?

Google storage write only (no delete)

I would like to use Google Storage for backing up my database. However, for security reasons, I would like to use a "service account" with a write-only role.
But it seems like this role can also delete objects! So my question is: can we make a bucket truly "write only, no deletion"? And if so, how?
This is now possible with the Google Cloud Storage Object Creator role (roles/storage.objectCreator).
https://cloud.google.com/iam/docs/understanding-roles#storage.objectCreator
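For example, granting that role to a backup service account on a single bucket could look like this (a sketch with the google-cloud-storage Python client; the bucket and service account names are placeholders):
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-backup-bucket")  # placeholder bucket name

# Grant the service account objectCreator on this bucket only: it can
# insert new objects but cannot read, overwrite or delete existing ones.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectCreator",
    "members": {"serviceAccount:backup@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)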
You cannot do this, unfortunately. There is currently no way to grant permission to insert new objects while denying the permission to delete or overwrite existing objects.
You could perhaps implement this using two systems, the first being the backup service which wrote to a temporary bucket, and the second being an administrative service that exclusively had write permission into the final backup bucket and whose sole job was to copy in objects if and only if there are no existing objects at that location. Basically you would trust this second job as an administrator.