Azure ARM JSON template deployment logic clarification

i have a simple question about ARM templates deployment logic.
I have 2 storage accounts (A and B) in my template and I can successfully deploy them to a single resource group.
Now, I remove storage account B from the template and deploy the template again to the same resource group.
What actually happens? Nothing? Or should I expect ARM to delete storage account B, keeping only A?
Thanks,
F

There are 2 deployment modes in the ARM paradigm: complete and incremental.
Complete will delete all the resources from your resource group that are absent from the template, so if you only have 1 storage account in your template, all the resources except this storage account will get removed.
Incremental will just create/update the resources you are declaring in the ARM template. It won't delete anything.
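
For illustration, here is a minimal sketch of selecting the mode with the azure-mgmt-resource Python library; the subscription ID, resource group, deployment name, and template file name are placeholders:

import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("template.json") as f:
    template = json.load(f)

deployment = Deployment(
    properties=DeploymentProperties(
        # "Incremental" leaves resources missing from the template alone;
        # "Complete" deletes them from the resource group.
        mode="Incremental",
        template=template,
    )
)
client.deployments.begin_create_or_update(
    "my-resource-group", "storage-deployment", deployment
).result()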

You should expect the ARM template deployment to remove storage account B (as long as nothing depends on it that would prevent deletion) if you are doing a complete deployment. If an incremental deployment is used, storage account B will not be removed.


How do I quickly list all Google Cloud projects in an organization?

I would like to quickly list all Google Cloud projects in an organization, without the Apps Script folders.
gcloud projects list can be very slow. This documentation is about speeding it up, but does not show how to retrieve the Apps Script folder that is used for filtering. Can that be done from the command line?
Also, gcloud projects list does not have a way to filter by organization. It seems this is impossible, as projects are not linked to their organization except through a tree of folders.
The documentation shows a way of walking the tree, apparently with Resource Manager API, which might do the job, but only pseudocode is shown. How can this be done with gcloud -- or else with Python or another language?
And if there is no way to accelerate this: How do I page through results using gcloud projects list? The documentation shows that page-size can be set, but does not show how to step through page by page (presumably by sending a page number with each command).
See also below for a reference to code I wrote that is the imperfect but best solution I could find.
Unfortunately there isn't a native Apps Script service available to work with the Cloud Resource Manager API.
However, it is possible to make an HTTP call directly to the Resource Manager API projects.list() endpoint with the help of the UrlFetchApp service.
Alternatively, using Python as mentioned, the recommended Google APIs client library for Python supports calls to the Resource Manager API. You can find the specific projects.list() method documentation here.
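For example, a minimal paging loop with the google-api-python-client library could look like this (a sketch; it assumes application-default credentials are configured, and the page size is arbitrary):

from googleapiclient import discovery

service = discovery.build("cloudresourcemanager", "v1")
request = service.projects().list(pageSize=200)
while request is not None:
    response = request.execute()
    for project in response.get("projects", []):
        print(project["projectId"], project.get("parent"))
    # list_next() follows nextPageToken for us; it returns None after the last page.
    request = service.projects().list_next(request, response)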
As an additional note, if you happen to use a Cloud project to generate credentials and authenticate the API call, you may want to enable the Cloud Resource Manager API on that project by following this URL.
I’d also recommend submitting a new Feature Request using this template.
Here is some code that lists projects in an organization as quickly as possible. It is in Clojure, but it uses Java APIs and you can translate it easily.
Key steps
Query all accessible projects using CloudResourceManager projects(), using setQuery to accelerate the query by filtering out, for example, the hundreds of sys- projects often generated by Apps Script. The query uses paging.
From the results:
Accept those that are a child of the desired org.
Reject those that are a child of another org.
For those that are a child of a folder, do this (concurrently, for speed): use gcloud projects get-ancestors $PROJECT_ID to check whether the project belongs to your organization. (I don't see a way to do that in Java, so I call the CLI.) A Python rendering of these steps follows below.
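
A rough Python sketch of those steps (the org ID is a placeholder; it assumes google-api-python-client and an authenticated gcloud CLI):

import subprocess
from concurrent.futures import ThreadPoolExecutor

from googleapiclient import discovery

ORG_ID = "123456789012"  # placeholder organization ID

def ancestors_contain_org(project_id):
    # Walk the project's ancestry with the CLI, as described above.
    out = subprocess.run(
        ["gcloud", "projects", "get-ancestors", project_id,
         "--format=value(type,id)"],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(line.split() == ["organization", ORG_ID]
               for line in out.splitlines())

service = discovery.build("cloudresourcemanager", "v1")
accepted, via_folder = [], []
request = service.projects().list(pageSize=200)
while request is not None:
    response = request.execute()
    for p in response.get("projects", []):
        parent = p.get("parent", {})
        if parent.get("type") == "organization":
            if parent.get("id") == ORG_ID:   # accept children of the desired org
                accepted.append(p["projectId"])
        elif parent.get("type") == "folder": # ancestry must be resolved
            via_folder.append(p["projectId"])
    request = service.projects().list_next(request, response)

# Resolve folder-parented projects concurrently, for speed.
with ThreadPoolExecutor(max_workers=8) as pool:
    for pid, ok in zip(via_folder, pool.map(ancestors_contain_org, via_folder)):
        if ok:
            accepted.append(pid)

print("\n".join(sorted(accepted)))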

Retrieve GitHub Actions metadata of GITHUB_TOKEN through the API

I am trying to make sure we have a secure way to integrate our cloud and GitHub Actions.
We have multiple accounts in our cloud to reduce the blast radius if there is an issue. For this we need to make sure we can assume the correct role to deploy to the correct sub-account. We were planning to build a discovery capability based on extracting the metadata of the GITHUB_TOKEN generated at runtime.
Is there a way to obtain the repo name or action that generated the GITHUB_TOKEN?

AWS WorkDocs SDK - How to Search a Folder?

Given the ID for a folder in AWS WorkDocs, how can I search that folder for a file or sub-folder that has a specific name, using the SDK? And can such a search be deep (recursive) rather than shallow?
Is there a better way besides fetching the metadata for all of the items and stopping once there's a match? It appears not, from this quote on a page that provides a Python example:
Note that this code snippet searches for the file only in the current folder in the user’s MyDocs. For files in subfolders, the code needs to iterate over the subfolders by calling describe_folder_contents() on them and performing the lookup.
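For reference, a hedged boto3 sketch of that iterate-and-match approach (the region and folder ID are placeholders):

import boto3

workdocs = boto3.client("workdocs", region_name="us-east-1")

def find_by_name(folder_id, name):
    # Scan one folder page by page; recurse into subfolders for a deep search.
    paginator = workdocs.get_paginator("describe_folder_contents")
    for page in paginator.paginate(FolderId=folder_id):
        for doc in page.get("Documents", []):
            if doc["LatestVersionMetadata"]["Name"] == name:
                return doc
        for sub in page.get("Folders", []):
            if sub["Name"] == name:
                return sub
            match = find_by_name(sub["Id"], name)
            if match:
                return match
    return None

print(find_by_name("<folder-id>", "design-spec.docx"))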
I see that the pricing schedule mentions search ...
• $58/10K calls for SEARCH requests ($0.0058/call)
... but neither the API reference nor the FAQ mentions search in the answer for "What specific actions can be taken on Amazon WorkDocs content programmatically using the Amazon WorkDocs SDK?" -- The FAQ says:
The Amazon WorkDocs SDK allows you to perform create, read, update, and delete (CRUD) actions on WorkDocs’ users, folders, files, and permissions. You can access and modify file attributes, tag files, and manage comments associated with files.
In addition to API actions, you can also subscribe to notifications that Amazon WorkDocs sends with Amazon SNS. The detailed information, including syntax, responses and data types for the above actions, is available in the WorkDocs API Reference Documentation.
The labelling API might be the answer...
The labelling API allows you to tag files and folders so that you can better organize them, and to use tags when searching for files programmatically.
... but I'm having trouble finding an example, or even which classes comprise the "labelling API". Are they referring to the package software.amazon.awssdk.services.resourcegroupstaggingapi?
Description
Resource Groups Tagging API
A tag is a label that you assign to an AWS resource. A tag consists of a key and a value, both of which you define. For example, if you have two Amazon EC2 instances, you might assign both a tag key of "Stack," but the value of "Stack" might be "Testing" for one and "Production" for the other.
Tagging can help you organize your resources and enables you to simplify resource management, access management, and cost allocation.
You can use the Resource Groups Tagging API operations to complete the following tasks:
• Tag and untag supported resources located in the specified Region for the AWS account.
• Use tag-based filters to search for resources located in the specified Region for the AWS account.
• List all existing tag keys in the specified Region for the AWS account.
• List all existing values for the specified key in the specified Region for the AWS account.
In the list of supported resources on that page, it lists S3 (buckets only) and WorkSpaces, but there's no mention of WorkDocs. Is this what I'm looking for?
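If it helps, the label calls do not appear to live in the Resource Groups Tagging API at all; in boto3 they show up as methods on the WorkDocs client itself (for example create_labels). A minimal sketch, with a placeholder resource ID:

import boto3

workdocs = boto3.client("workdocs", region_name="us-east-1")

# Attach labels to a document or folder so it can be organized and looked up later.
workdocs.create_labels(
    ResourceId="<document-or-folder-id>",
    Labels=["project-alpha", "approved"],
)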

Oracle Cloud Infrastructure - Replicate Vault Across Regions

I have created a vault/key under a compartment.
As the Vault service is a regional service, it is only available in the region where I created it.
Even if the tenancy subscribes to multiple regions, the compartment shows up, but the vault is still not available in those regions. Is there a way to replicate the vault/keys/secrets when the tenancy subscribes to multiple regions?
I have not done this myself, but you could try this approach and see if the following steps will work for you:
Step 1. Use the BackupKey/BackupVault API (from the Vault service) in the SOURCE region to create the relevant encrypted key/vault backup file(s).
Step 2. Use the CopyObject API (from the Object Storage service) to copy the file(s) created in Step 1 from your SOURCE region to all DESTINATION regions.
Step 3. Use the RestoreKey/RestoreVault API (from the Vault service) to restore the key/vault in the DESTINATION regions.
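
I have not tried this either, but as a sketch, Step 2 might look like the following with the OCI Python SDK (namespace, bucket, object, and region names are placeholders):

import oci

# Run against the SOURCE region; Object Storage copies the object server-side.
config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)

details = oci.object_storage.models.CopyObjectDetails(
    source_object_name="vault-backup.bin",
    destination_region="us-ashburn-1",
    destination_namespace="mynamespace",
    destination_bucket="vault-backups",
    destination_object_name="vault-backup.bin",
)
object_storage.copy_object(
    namespace_name="mynamespace",
    bucket_name="vault-backups",
    copy_object_details=details,
)
# Steps 1 and 3 use the Vault service's BackupVault/BackupKey and
# RestoreVault/RestoreKey operations in the source and destination regions.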

Is it possible to delete or segment a bucket in the Forge API?

I'm building an app where users will add collections of CAD files to an engineering project.
My plan was to have one transient and temporary bucket for the whole app to use for temp storage, then create a persistent bucket for each project to hold that project's CAD files for the life of the project.
I've written the functions to create the new buckets for each project as they are created. I started to write the function to delete the bucket when a project is deleted and realised there is no API function to delete a bucket!
Now I'm wondering if I'm thinking about it wrong.
Rather than creating/deleting buckets with projects, would it be better to have one persistent bucket, segmented in some way so each project's files live in their own segment, and delete that segment with the project?
How would I go about this? Or should I do something else altogether?
Yes it is. It is simply not documented yet.
The API works like this when using OSS v2:
DELETE
https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey
requires 'bucket:delete' scope
action cannot be undone
It deletes the bucket and all files in it, but viewables will be preserved.
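
As a sketch, the call can be made with plain HTTP, for example with Python's requests library (the bucket key and the 2-legged OAuth token are placeholders):

import requests

BUCKET_KEY = "my-project-bucket"
TOKEN = "<2-legged token with bucket:delete scope>"

resp = requests.delete(
    f"https://developer.api.autodesk.com/oss/v2/buckets/{BUCKET_KEY}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()  # remember: this action cannot be undone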
You can test it using the sample here. Check out the bucketDelete command.
There is an API to delete buckets, but I'm not sure it's exposed to public API keys. It uses the DELETE verb and requires the 'bucket:delete' scope.
On the other hand, as you mentioned, there is not really a need for a per-project bucket; it's really up to you to manage how you create your buckets and place the files in them. To give you an example, the Autodesk A360 cloud infrastructure uses a single bucket to hold the files of all its customers!
You could get away with just 3 buckets (one of each retention type) and manage the project/file relationship using a third-party database or a prefix naming mechanism (sketched below).
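
As an illustration of the prefix idea, object keys in a single shared bucket could carry the project ID. This is a sketch; the bucket key and token are placeholders, and the plain PUT upload endpoint should be verified against the current OSS documentation:

import requests

BUCKET_KEY = "app-persistent-bucket"
TOKEN = "<2-legged token with data:write scope>"

def upload_project_file(project_id, filename, payload):
    object_name = f"{project_id}/{filename}"  # the prefix segments the bucket
    resp = requests.put(
        f"https://developer.api.autodesk.com/oss/v2/buckets/{BUCKET_KEY}/objects/{object_name}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data=payload,
    )
    resp.raise_for_status()
    return resp.json()

# Deleting a project then means listing the objects under its prefix
# (GET .../objects?beginsWith=<project_id>/) and deleting them one by one.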