AWS WorkDocs SDK - How to Search a Folder?

Given the ID for a folder in AWS WorkDocs, how can I use the SDK to search that folder for a file or sub-folder with a specific name? And can such a search be deep (recursive) rather than shallow?
Is there a better way besides fetching the metadata for all of the items and stopping once there's a match? It appears not, from this quote on a page that provides a Python example:
Note that this code snippet searches for the file only in the current folder in the user’s MyDocs. For files in subfolders, the code needs to iterate over the subfolders by calling describe_folder_contents() on them and performing the lookup.
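For what it's worth, that iteration is simple to write. Below is a minimal sketch in Python with boto3, assuming credentials are configured; the helper name find_item is my own, and error handling and throttling back-off are omitted:

import boto3

workdocs = boto3.client('workdocs')

def find_item(folder_id, name):
    """Recursively search a WorkDocs folder for a document or
    sub-folder named `name`; return its metadata, or None."""
    marker = None
    while True:
        kwargs = {'FolderId': folder_id}
        if marker:
            kwargs['Marker'] = marker  # continue a paginated listing
        page = workdocs.describe_folder_contents(**kwargs)
        for doc in page.get('Documents', []):
            if doc['LatestVersionMetadata']['Name'] == name:
                return doc
        for sub in page.get('Folders', []):
            if sub['Name'] == name:
                return sub
            hit = find_item(sub['Id'], name)  # deep (recursive) search
            if hit:
                return hit
        marker = page.get('Marker')
        if marker is None:
            return None

Dropping the recursive call turns this into a shallow search of just the one folder.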
I see that the pricing schedule mentions search ...
• $58 per 10,000 calls for SEARCH requests ($0.0058/call)
... but neither the API reference nor the FAQ mentions search in the answer for "What specific actions can be taken on Amazon WorkDocs content programmatically using the Amazon WorkDocs SDK?" -- The FAQ says:
The Amazon WorkDocs SDK allows you to perform create, read, update, and delete (CRUD) actions on WorkDocs’ users, folders, files, and permissions. You can access and modify file attributes, tag files, and manage comments associated with files.
In addition to API actions, you can also subscribe to notifications that Amazon WorkDocs sends with Amazon SNS. The detailed information, including syntax, responses and data types for the above actions, is available in the WorkDocs API Reference Documentation.
The labelling API might be the answer...
The labelling API allows you to tag files and folders so that you can better organize them, and to use tags when searching for files programmatically.
... but I'm having trouble finding an example, or even which classes comprise the "labelling API". Are they referring to Package software.amazon.awssdk.services.resourcegroupstaggingapi ?
Description
Resource Groups Tagging API
A tag is a label that you assign to an AWS resource. A tag consists of a key and a value, both of which you define. For example, if you have two Amazon EC2 instances, you might assign both a tag key of "Stack." But the value of "Stack" might be "Testing" for one and "Production" for the other.
Tagging can help you organize your resources and enables you to simplify resource management, access management and cost allocation.
You can use the resource groups tagging API operations to complete the following tasks:
• Tag and untag supported resources located in the specified Region for the AWS account.
• Use tag-based filters to search for resources located in the specified Region for the AWS account.
• List all existing tag keys in the specified Region for the AWS account.
• List all existing values for the specified key in the specified Region for the AWS account.
In the list of supported resources on that page, it lists S3 (buckets only) and WorkSpaces, but there's no mention of WorkDocs. Is this what I'm looking for?
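For what it's worth, the labelling calls the FAQ alludes to live in the WorkDocs SDKs themselves rather than in the Resource Groups Tagging API: in boto3 they are create_labels and delete_labels, and labels come back on a resource's metadata. A minimal sketch, where DOCUMENT_ID stands in for a real document ID; note that I don't see a public call that searches by label, so filtering still has to happen client-side:

import boto3

workdocs = boto3.client('workdocs')

# Attach labels to an existing document.
workdocs.create_labels(
    ResourceId='DOCUMENT_ID',
    Labels=['project-alpha', 'tutorial'],
)

# Labels are returned as part of the document's metadata.
doc = workdocs.get_document(DocumentId='DOCUMENT_ID')
print(doc['Metadata'].get('Labels'))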

Related

How do I quickly list all Google Cloud projects in an organization?

I would like to quickly list all Google Cloud projects in an organization, without the Apps Script folders.
gcloud projects list can be very slow. This documentation is about speeding it up, but does not show how to retrieve the Apps Script folder that is used for filtering. Can that be done from the command line?
Also, gcloud projects list does not have a way to filter by organization. It seems that this is impossible, as projects are not linked to their organization except through a tree of folders.
The documentation shows a way of walking the tree, apparently with the Resource Manager API, which might do the job, but only pseudocode is shown. How can this be done with gcloud -- or else with Python or another language?
And if there is no way to accelerate this: how do I page through results using gcloud projects list? The documentation shows that page-size can be set, but does not show how to step through page by page (presumably by sending a page number with each command).
See also below for a reference to code I wrote that is the imperfect but best solution I could find.
Unfortunately, there isn't a native Apps Script resource available to work with the Cloud Resource Manager API.
However, it is possible to make an HTTP call directly to the Resource Manager API projects.list() endpoint with the help of the UrlFetchApp service.
Alternatively, using Python as mentioned, the recommended Google APIs client library for Python supports calls to the Resource Manager API. You can find the specific projects.list() method documentation here.
As an additional note, if you happen to use a Cloud project to generate credentials and authenticate the API call, you may want to enable the Cloud Resource Manager API on your project by following this URL.
I’d also recommend submitting a new Feature Request using this template.
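On the paging question specifically: with the Python client, stepping through pages uses a page token rather than a page number (and as far as I can tell gcloud exposes no page-number flag either; with --page-size set it simply fetches page after page). A minimal sketch, assuming Application Default Credentials are configured:

from googleapiclient import discovery

# v1 of the Cloud Resource Manager API.
service = discovery.build('cloudresourcemanager', 'v1')

request = service.projects().list(pageSize=500)
while request is not None:
    response = request.execute()
    for project in response.get('projects', []):
        print(project['projectId'])
    # list_next() builds the next request from the previous response's
    # nextPageToken, and returns None when there are no more pages.
    request = service.projects().list_next(request, response)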
Here is some code that lists projects in an organization as quickly as possible. It is in Clojure, but it uses Java APIs and you can translate it easily.
Key steps
• Query all accessible projects using CloudResourceManager projects(), using setQuery to accelerate the query by filtering out, for example, the hundreds of sys- projects often generated by Apps Script. The query uses paging.
• From the results:
  • Accept those that are a child of the desired org.
  • Reject those that are a child of another org.
  • For those that are a child of a folder, do this (concurrently, for speed): use gcloud projects get-ancestors $PROJECT_ID to check whether the project ultimately sits under your organization (see the sketch after this list; I don't see a way to do that in Java, and so I call the CLI).
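A loose Python rendering of those steps, under the same assumptions as the paging sketch above. It runs the get-ancestors calls sequentially, whereas the Clojure version issues them concurrently, and the organization ID is a placeholder:

import subprocess
from googleapiclient import discovery

ORG_ID = '123456789012'  # placeholder: your numeric organization ID

service = discovery.build('cloudresourcemanager', 'v1')

def ancestry_reaches_org(project_id):
    # Shell out to gcloud, as the Clojure code does for folder-parented
    # projects; a line pairing 'organization' with ORG_ID means a match.
    out = subprocess.run(
        ['gcloud', 'projects', 'get-ancestors', project_id],
        capture_output=True, text=True, check=True).stdout
    return any('organization' in line and ORG_ID in line
               for line in out.splitlines())

def projects_in_org():
    request = service.projects().list(pageSize=500)
    while request is not None:
        response = request.execute()
        for p in response.get('projects', []):
            if p['projectId'].startswith('sys-'):
                continue  # skip Apps Script-generated projects
            parent = p.get('parent', {})
            if parent.get('type') == 'organization':
                if parent.get('id') == ORG_ID:
                    yield p  # direct child of the desired org
            elif parent.get('type') == 'folder':
                if ancestry_reaches_org(p['projectId']):
                    yield p  # folder child whose ancestry reaches the org
        request = service.projects().list_next(request, response)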

Google Drive API Key which is read only

Our server is customer-deployed and uses a Google Drive API key to obtain a tutorial file listing via
https://www.googleapis.com/drive/v3/files?q=%27FILE_ID%27+in+parents+and+trashed=false&pageSize=1000&key=API_KEY&fields=files(name,webViewLink,id,kind,mimeType)
and file contents via
https://www.googleapis.com/drive/v3/files/FILE_ID/export?key=API_KEY
It is unclear, though, how we can set that API key to be read-only.
I do not see anything on these pages for example,
https://developers.google.com/drive/api/guides/about-auth
https://cloud.google.com/docs/authentication/api-keys
The restrictions you can set on an API key can be found here; making a key read-only is not among them, so it is not possible to do it this way. The way to achieve what you are trying to do would be to set up the project with the correct OAuth scopes and use read-only scopes, but that can limit your implementation, as sometimes the API needs more scopes.
For example, if you were trying to list users using the Directory API, you can see the list of scopes needed here. If you check the list, you will see that there is a read-only scope listed there.
So, in your project you could just use this specific scope for your implementation, but again, some actions require more than just this scope to work, so you could be limited by that as well, depending on what your implementation is doing.
The same applies to the Drive API in your case. The list of scopes for the Files: list method is here, and read-only scopes are available there as well.
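To make the scope-based approach concrete, here is a minimal sketch with the Python client: a service account limited to the drive.readonly scope can list and export files but not modify them. The key file name is a placeholder:

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Read-only scope: credentials built from it cannot perform writes.
SCOPES = ['https://www.googleapis.com/auth/drive.readonly']
creds = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES)  # placeholder key file
drive = build('drive', 'v3', credentials=creds)

files = drive.files().list(
    q="'FILE_ID' in parents and trashed=false",
    fields='files(name,webViewLink,id,kind,mimeType)',
).execute()

# Any write call made with these credentials (files().update(),
# files().delete(), ...) fails with an insufficient-scopes error.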

Check if a URL belongs to a resource within the same document domain

I'm developing a script that checks every link (URL) in a Presentation against the following criterion:
If the link leads to a document within the same domain as the Presentation that I'm currently editing, then remove the link.
The question is: is there a property, or any method in Google Apps Script, that allows me to check that information? I've reviewed the whole API and I cannot find anything like that.
There is no direct way to get the domain that a file belongs to, but you might be able to get the Presentation owner by using getOwner (a method of the Class File from the Drive Service) or the equivalent in the Drive Advanced Service.
Once you get the owner's email address, you can extract the domain by using JavaScript string methods such as String.prototype.split, regular expressions, etc.

How to create proxies with wildcard paths in Azure API Manager?

Good Afternoon,
I have a situation where three Swagger files will have different resources but belong to the same domain. I can't merge them into a single Swagger file, as we have many such scenarios, and managing a single Swagger file and a single API proxy would be a big overhead.
For example:
I have 3 APIs with the following paths and resources:
/supermarket/v1/aisles/{aisleId}/itemcategories
/supermarket/v1/aisles/{aisleId}/itemcategories/{itemcategoryId}/seasonedvegetabletypes
/supermarket/v1/aisles/{aisleId}/itemcategories/{itemcategoryId}/seasonedvegetabletypes/{vegetablestypeId}/apples
All the above 3 should be in 3 different swagger files, so I need to create 3 api proxies for the above.
Since the path suffix is the same for all of them ("/supermarket"), Azure API Manager will not allow creating another API proxy with the same path suffix, as it MUST be unique.
To achieve this in Apigee Edge (Google's API management product), I would have the base paths as below:
/supermarket/v1
/supermarket/v1/aisles/*/itemcategories/*
/supermarket/v1/aisles/*/itemcategories/*/seasonedvegetabletypes
so that I can avoid the unique-path constraint and still create 3 API proxies.
But Azure API Manager does not accept wildcard entries in the API Path Suffix field when creating the API proxy.
Note:
You may suggest that combining the 3 APIs into a single Swagger file might solve the issue, but the example I gave above is only 30% of the Swagger, and we have many such paths that fall into a single business domain, so we must keep them in different Swagger files and different API proxies.
We need to be able to deploy different API proxies with the same path suffix, by allowing wildcards or regex in the API Path Suffix.
Your help to resolve this is highly appreciated. Thanks.
At this point this is a limitation that can't be worked around. The only way to make APIM serve those URIs is to put all of them under a single API, which is not what you want, unfortunately.

Is it possible to delete or segment a bucket in the Forge API

I'm building an app where users will add collections of CAD files to an engineering project.
My plan was to have one transient and temporary bucket for the whole app to use for temp storage, then create a persistent bucket for each project to hold that project's CAD files for the life of the project.
I've written the functions to create the new buckets for each project as they are created. I started to write the function to delete the bucket when a project is deleted, and realised there is no API function to delete a bucket!
Now I'm wondering if I'm thinking about it wrong.
Rather than creating/deleting buckets with projects, would it be better to have one persistent bucket, segmented in some way to hold each project's files in its own segment, and to delete that segment with the project?
How would I go about this? Or should I do something else altogether?
Yes it is. It is simply not documented yet.
The API works like this when using OSS v2:
DELETE https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey
• requires the 'bucket:delete' scope
• the action cannot be undone
It deletes the bucket and all files in it, but viewables will be preserved.
You can test it using the sample here. Check out the bucketDelete command.
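For illustration, that undocumented call in Python would look something like this (using the requests library; the access token must carry the bucket:delete scope):

import requests

def delete_bucket(bucket_key, access_token):
    """Delete an OSS v2 bucket and everything in it (irreversible)."""
    resp = requests.delete(
        f'https://developer.api.autodesk.com/oss/v2/buckets/{bucket_key}',
        headers={'Authorization': f'Bearer {access_token}'},
    )
    resp.raise_for_status()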
There is an API to delete buckets, but I'm not sure it's exposed to public API keys. It uses the DELETE verb and requires the 'bucket:delete' scope.
On the other hand, as you mentioned, there is not really a need for a per-project bucket; it's really up to you how you create your buckets and place the files in them. To give you an example, the Autodesk A360 cloud infrastructure uses a single bucket to hold the files of all its customers!
You could get away with just 3 buckets (one of each retention type) and manage the project/file relationship using a third-party database or a prefix-naming mechanism.
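A minimal sketch of the prefix-naming idea (Python with requests; the key convention and the beginsWith filtering are my own choices, and paging of the object listing is omitted):

import requests

BASE = 'https://developer.api.autodesk.com/oss/v2'

def object_key(project_id, filename):
    # 'Segment' a single bucket by prefixing every object key with its
    # project ID; mind URL-encoding of the key when building request paths.
    return f'{project_id}/{filename}'

def delete_project_objects(bucket_key, project_id, access_token):
    """Emulate deleting a project's segment: list the bucket's objects
    filtered on the project prefix, then delete each one."""
    headers = {'Authorization': f'Bearer {access_token}'}
    listing = requests.get(
        f'{BASE}/buckets/{bucket_key}/objects',
        headers=headers,
        params={'beginsWith': f'{project_id}/'})
    listing.raise_for_status()
    for item in listing.json().get('items', []):
        requests.delete(
            f"{BASE}/buckets/{bucket_key}/objects/{item['objectKey']}",
            headers=headers).raise_for_status()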