Is there any systematic way to find the minimum access right or role required for each Azure CLI command? - azure-cli

I am working on a project in which I need to define the exact minimum security role for each operation.
Is there any systematic way or documentation to find the minimum access right or role required for each Azure CLI command?

Well, there is no systematic way or doc to find it directly; it takes some experience and testing. The approach below applies in most situations.
Azure CLI commands essentially call the Azure REST API. Add the --debug parameter to a CLI command and you can see which API the command calls.
For example, use az vm list to list all the VMs in a resource group:
az vm list -g <group-name> --debug
In the debug output you will find that it calls the Virtual Machines - List API. You can then search for the resource provider and resource type, i.e. Microsoft.Compute/virtualMachines, in this doc, where you can easily find Microsoft.Compute/virtualMachines/read. This step takes some experience, but in my view that action permission is the correct one.
Then create a custom role with only this action and test it, adjusting the permissions depending on the result. In most situations, if you don't have enough permissions, the error message will name the exact action permission you are missing.
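As a concrete illustration, here is a minimal sketch of that test step; the role name, file name, and scope are placeholders I introduced, not part of the original answer. It writes a custom role definition and registers it with az role definition create:

import json
import subprocess

# Hypothetical minimal role containing only the action discovered via --debug.
role = {
    "Name": "Minimal VM Reader",  # placeholder name
    "Description": "Grants only Microsoft.Compute/virtualMachines/read.",
    "Actions": ["Microsoft.Compute/virtualMachines/read"],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],  # placeholder scope
}

with open("vm-reader.json", "w") as f:
    json.dump(role, f, indent=2)

# 'az role definition create' accepts the role definition as a JSON file.
subprocess.run(
    ["az", "role", "definition", "create", "--role-definition", "@vm-reader.json"],
    check=True,
)

Assign the role to a test identity, rerun az vm list, and widen the Actions list only when the error message asks for more.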

Related

How do I quickly list all Google Cloud projects in an organization?

I would like to quickly list all Google Cloud projects in an organization, without Apps Script folders.
gcloud projects list can be very slow. This documentation is about speeding it up, but does not show how to retrieve the Apps Script folder, which is used for filtering. Can that be done from the command line?
Also, gcloud projects list does not have a way to filter by organization. It seems that this is impossible, as projects are not linked to their organization except through a tree of folders.
The documentation shows a way of walking the tree, apparently with the Resource Manager API, which might do the job, but only pseudocode is shown. How can this be done with gcloud, or else with Python or another language?
And if there is no way to accelerate this: How do I page through results using gcloud projects list? The documentation shows that page-size can be set, but does not show how to step through page by page (presumably by sending a page number with each command).
See also below for a reference to code I wrote that is the imperfect but best solution I could find.
Unfortunately there isn't a native Apps Script resource available to work with the Cloud Resource Manager API.
However, it is possible to make an HTTP call directly to the Resource Manager API projects.list() endpoint with the help of the UrlFetchApp service.
Alternatively, using Python as mentioned, the recommended Google APIs client library for Python supports calls to the Resource Manager API. You can find the specific projects.list() method documentation here.
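As a rough sketch of what that looks like with the Python client (the sys-* filter expression is my assumption for excluding Apps Script projects; verify the filter syntax for your case):

from googleapiclient import discovery

# Build a client for the Cloud Resource Manager v1 API
# (uses application-default credentials).
service = discovery.build("cloudresourcemanager", "v1")

# Assumed filter: skip the auto-generated Apps Script projects whose IDs
# start with "sys-".
request = service.projects().list(filter="NOT id:sys-*")
while request is not None:
    response = request.execute()
    for project in response.get("projects", []):
        print(project["projectId"])
    # list_next follows the pageToken for you.
    request = service.projects().list_next(request, response)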
On an additional note, if you happen to use a Cloud project to generate credentials and authenticate the API call, you may want to enable the Cloud Resource Manager API on your project by following this URL.
I’d also recommend submitting a new Feature Request using this template.
Here is some code that lists projects in an organization as quickly as possible. It is in Clojure, but it uses Java APIs and you can translate it easily.
Key steps
Query all accessible projects using CloudResourceManager projects(), using setQuery to accelerate the query by filtering out, for example, the hundreds of sys- projects often generated by Apps Script. The query uses paging.
From the results:
Accept those that are the child of the desired org.
Reject those that are the child of another org.
For those that are the child of a folder, do this (concurrently, for speed): use gcloud projects get-ancestors $PROJECT_ID to find out whether each project is in your organization. (I don't see a way to do that in Java, so I call the CLI.)
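Since the original code is in Clojure, here is a hedged Python rendering of the same steps; the organization ID and the filter expression are placeholders of mine, and the concurrency is a plain thread pool rather than whatever the original uses:

import json
import subprocess
from concurrent.futures import ThreadPoolExecutor

from googleapiclient import discovery

ORG_ID = "<org-id>"  # placeholder

def in_org_via_cli(project_id):
    # Shell out to gcloud to walk the ancestry, as the answer describes.
    out = subprocess.run(
        ["gcloud", "projects", "get-ancestors", project_id, "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(a["type"] == "organization" and a["id"] == ORG_ID
               for a in json.loads(out))

service = discovery.build("cloudresourcemanager", "v1")
accepted, folder_children = [], []
request = service.projects().list(filter="NOT id:sys-*")  # assumed filter
while request is not None:
    response = request.execute()
    for p in response.get("projects", []):
        parent = p.get("parent", {})
        if parent.get("type") == "organization":
            # Accept children of the desired org, reject other orgs.
            if parent.get("id") == ORG_ID:
                accepted.append(p["projectId"])
        elif parent.get("type") == "folder":
            # Resolve folder-parented projects concurrently below.
            folder_children.append(p["projectId"])
    request = service.projects().list_next(request, response)

with ThreadPoolExecutor(max_workers=8) as pool:
    for pid, ok in zip(folder_children, pool.map(in_org_via_cli, folder_children)):
        if ok:
            accepted.append(pid)

print("\n".join(accepted))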

Permission denied when running scheduling Vertex Pipelines

I wish to schedule a Vertex Pipeline and deploy it from my local machine for now.
I have defined my pipeline, which runs well when I deploy it once using create_run_from_job_spec on AIPlatformClient.
When I try to schedule it with create_schedule_from_job_spec, the Cloud Scheduler object is created correctly, with an HTTP endpoint pointing to a Cloud Function. But when the scheduler runs, it fails with a Permission denied error. I have tried several service accounts with Owner permissions on the project.
Do you know what could have gone wrong?
Since AIPlatformClient from Kubeflow Pipelines raises a deprecation warning, I would also like to use PipelineJob from google.cloud.aiplatform, but I can't see any direct way to schedule the pipeline execution.
I've spent about 3 hours banging my head on this too. In my case, what seemed to fix it was either:
disabling and re-enabling the Cloud Scheduler API. Why did I do this? There is supposed to be a service account called service-[project-number]@gcp-sa-cloudscheduler.iam.gserviceaccount.com. If it is missing, re-enabling the API might fix it
for older projects there is an additional step: https://cloud.google.com/scheduler/docs/http-target-auth#add
Simpler explanations include missing one of the following steps:
creating a service account for the scheduler job, granting it Cloud Functions Invoker during creation
using this service account (see the create_schedule_from_job_spec call below)
finding the (sneaky) Cloud Function that was created for you (it will be called something like 'templated_http_request-v1') and adding your service account as a Cloud Functions Invoker
# AIPlatformClient comes from the Kubeflow Pipelines SDK.
from kfp.v2.google.client import AIPlatformClient

client = AIPlatformClient(project_id="<project_id>", region="<region>")
pipeline_spec = "pipeline.json"  # path to your compiled pipeline spec

response = client.create_schedule_from_job_spec(
    job_spec_path=pipeline_spec,
    schedule="*/15 * * * *",
    time_zone="Europe/London",
    parameter_values={},
    cloud_scheduler_service_account="<your-service-account>@<project_id>.iam.gserviceaccount.com"
)
If you are still stuck, it is also useful to run gcloud scheduler jobs describe <pipeline-name>, as it really helps to understand what the scheduler is doing. You'll see the Cloud Function URL and the POST payload, which is base64-encoded and contains the pipeline YAML, and you'll see that it uses OIDC with a service account for security. It is also useful to view the code of the 'templated_http_request-v1' Cloud Function (sneakily created!). I was able to invoke the Cloud Function from Postman using the payload obtained from the scheduler job.

Retrieve Github Action metadata of GITHUB_TOKEN through API

I am trying to make sure we have a secure way to integrate our cloud and GitHub Actions.
We have multiple accounts in our cloud to reduce the blast radius if there is an issue. For this we need to make sure we can assume the correct role to deploy to the correct sub-account. We were planning to build a discovery capability based on extracting the metadata of the GITHUB_TOKEN generated at runtime.
Is there a way to obtain the repo name or action that generated the GITHUB_TOKEN?
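One hedged possibility: since GITHUB_TOKEN is a GitHub App installation token, the endpoint that lists the repositories an installation token can access should reveal the repo it was generated for. This is an assumption to verify against your setup, not something stated above:

import json
import os
import urllib.request

# Ask the GitHub API which repositories this installation token can see;
# for a GITHUB_TOKEN that should be the repository that generated it.
req = urllib.request.Request(
    "https://api.github.com/installation/repositories",
    headers={
        "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
)
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

for repo in data.get("repositories", []):
    print(repo["full_name"])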

How to create a ServiceNow Change Request as a step in TFS 2018 Release

I am trying to create a ServiceNow Change Request as one of the steps in my Release. I was trying an Agentless Phase step (Invoke REST API: POST).
I found one article online that suggested creating a Generic Endpoint for ServiceNow. I tried that but the step failed; I'm sure I don't have it set up correctly.
2019-11-12T12:55:28.8833838Z POST https://xyzhelpdesk.service-now.com/api/now/table/change_request
Response Code: 0
Response: An error was encountered while processing request.
Exception: {"error":{"message":"Exception while reading request","detail":"Cannot decode: java.io.StringReader#90f857"},"status":"failure"}
Exception Message: The remote server returned an error: (400) Bad Request. (type WebException)
The endpoint has a username and password defined, but I think in the setup for the step I may need more information in the Header section.
I can create the CR via a PowerShell script; I guess I could just use that, but I'm not sure of the correct way to go.
Basically I want to create a ServiceNow CR as part of my deployment process. Then there is a TFS plugin gated step that will check the status of the CR, and when it's approved the process moves forward.
Does anyone have examples?
Thanks
Actually there is a built-in extension, ServiceNow Change Management, provided by Microsoft, that almost fits your needs.
It includes:
A release gate to hold the pipeline till the change management process signals implementation for a change request. You can create a new change request for every deployment or use an existing change request.
An agentless task to update a change request during the deployment process. It is typically used as the last task in the stage.
However, this extension works only with Azure DevOps Services and Azure DevOps Server 2019 Update 1 onwards; it is not available on TFS 2018. You could consider upgrading your TFS to the latest Azure DevOps version.
With TFS 2018, I suggest you use a PowerShell script to handle this, calling the ServiceNow and Azure DevOps REST APIs. You could also take a look at these great articles (similar for TFS):
Integrating VSTS Release Management with ServiceNow using Deployment Gate for Change Management
Implement an Azure DevOps Release Gate to ServiceNow
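To give a feel for the scripted route, here is a minimal sketch of the underlying ServiceNow Table API call, written in Python for brevity (the same POST translates directly to PowerShell's Invoke-RestMethod); the credentials and field values are placeholders. Note the explicit Content-Type and Accept headers: the 400 'Cannot decode' error in the question is typically caused by a missing JSON header or a non-JSON body:

import base64
import json
import urllib.request

instance = "https://xyzhelpdesk.service-now.com"  # from the question's log
user, password = "<user>", "<password>"          # placeholders

auth = base64.b64encode(f"{user}:{password}".encode()).decode()
body = json.dumps({"short_description": "Deployment change request"}).encode()

req = urllib.request.Request(
    f"{instance}/api/now/table/change_request",
    data=body,
    headers={
        "Authorization": f"Basic {auth}",
        "Content-Type": "application/json",  # tells ServiceNow the body is JSON
        "Accept": "application/json",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["result"]["sys_id"])  # sys_id of the new CR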

How to get the number of PCF instances running in Java code?

I have an app that uses Spring REST and is deployed on PCF. Now, inside the code, I have to get the number of PCF instances currently running. Can anyone help?
Before I answer this - why do you want to know? It's an anti-pattern for cloud-native apps to know about their peers; they should each be working in total isolation.
You can discover this by looking up application details by GUID in the CloudController. You can get your current app's GUID in the VCAP_APPLICATION environment variable.
https://apidocs.cloudfoundry.org/245/apps/get_app_summary.html
In order to hit the CloudController, your app will need to know the system domain of your Cloud Foundry (e.g. api.mycf.com) and credentials that allow it to make that request.
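As a rough sketch of that flow (in Python for brevity; in a Spring app the same two steps translate to RestTemplate calls, and the OAuth token is an assumption you must supply from your own credentials):

import json
import os
import urllib.request

# VCAP_APPLICATION is injected by Cloud Foundry and includes the app GUID
# and the CloudController API endpoint.
vcap = json.loads(os.environ["VCAP_APPLICATION"])
guid = vcap["application_id"]
api = vcap["cf_api"]  # e.g. https://api.mycf.com

token = "<oauth-token>"  # placeholder: obtain from UAA with your credentials

# The app summary endpoint reports desired and running instance counts.
req = urllib.request.Request(
    f"{api}/v2/apps/{guid}/summary",
    headers={"Authorization": f"bearer {token}"},
)
with urllib.request.urlopen(req) as resp:
    summary = json.load(resp)

print(summary["instances"], summary["running_instances"])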