Azure CLI: az cdn endpoint purge not working with a Service Principal

My Service Principal has the following two roles on the whole Resource Group Playground
CDN Endpoint Contributor
CDN Profile Contributor
I am trying to run the following commands
az login --service-principal --username="ca85199a-7e86-40eb-b6c8-a774a9edc010" --password="<pwd>" --tenant="<tenant-id>"
az cdn endpoint purge -n mopar --profile-name mopar-poc --content-paths "/*" --resource-group Playgroud --no-wait
I am getting the following error.
AuthorizationFailed: The client 'acd5dfea-f69a-4178-812c-4204963c6959' with object id 'acd5dfea-f69a-4178-812c-4204963c6959' does not have authorization to perform action 'Microsoft.Cdn/profiles/endpoints/purge/action' over scope '/subscriptions/b19669be-bfa2-4e86-b7d4-f1b4d98dd2a5/resourceGroups/Playgroud/providers/Microsoft.Cdn/profiles/mopar-poc/endpoints/mopar' or the scope is invalid. If access was recently granted, please refresh your credentials.
What am I missing here?

The two roles are enough; the command works fine on my side. Please follow the steps below to troubleshoot (a quick CLI sketch of these checks follows the list).
1. Double-check the RBAC role assignments in the Azure portal; make sure the correct service principal has the correct role at the correct scope.
2. Run az account clear first, then log in again to make sure you are using the correct service principal.
3. Make sure you are logged in to the correct subscription; use az account set --subscription <subscription-id> after login to set it.
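For example, something like the following (a sketch; substitute your own app ID, tenant and subscription) shows exactly which role assignments the service principal has on the resource group:
az account clear
az login --service-principal --username="<app-id>" --password="<pwd>" --tenant="<tenant-id>"
az account set --subscription "<subscription-id>"
az role assignment list --assignee "<app-id>" --resource-group Playground --output table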

The error was misleading.
Turns out I had misspelled Playground in the command below.
az cdn endpoint purge -n mopar --profile-name mopar-poc --content-paths "/*" --resource-group Playgroud --no-wait

Related

HTTP request inside Azure CLI GitHub action fails with SSL expired error

We are using the AZ CLI GitHub Action azure/CLI (https://github.com/marketplace/actions/azure-cli-action)
The script that this workflow calls makes an HTTP request to an external API. This cURL call fails with the following:
curl: (60) SSL certificate problem: certificate has expired
More details here: curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
However I can confirm that the same request works locally.
The problem workflow step looks like this:
- name: Run script
  uses: azure/CLI@1.0.4
  with:
    azcliversion: 2.0.72
    inlineScript: |
      $GITHUB_WORKSPACE/github/scripts/script.sh
Why does cURL think that the SSL cert for the external API domain is expired, when I can make the same call to the same API domain successfully on my own machine?
It seems the problem was that the azcliversion field pointed to a version of the AZ CLI that has outdated certificates.
The problem was solved by removing the azcliversion field altogether, as the default version is latest, as specified in the docs for the action:
azcliversion – Optional Example: 2.0.72, Default: latest
So the step now looks like this:
- name: Run script
  uses: azure/CLI@1.0.4
  with:
    inlineScript: |
      $GITHUB_WORKSPACE/github/scripts/script.sh
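If you would rather pin a version for reproducibility, pinning to any CLI release recent enough to ship an updated CA bundle should also work; the version below is only an illustrative guess, not a verified minimum:
- name: Run script
  uses: azure/CLI@1.0.4
  with:
    azcliversion: 2.30.0
    inlineScript: |
      $GITHUB_WORKSPACE/github/scripts/script.sh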
Probably related to this: https://twitter.com/letsencrypt/status/1443621997288767491
Our cross-signed DST Root CA X3 expired today. If you are hitting an error, check out fixes in our community forum. We're seeing higher than normal renewals, so you may experience a slowdown in getting your certificates.

Hyperledger Composer CLI Ping to a Business Network returns AccessException

I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology; I mean there are few tutorials and few solutions to a lot of questions. The tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the composer channel in their community chat (it looks like it runs on Discord or something) and asked the same question without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card and imported it. Then, to test that everything is OK, I have to run the command composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice@0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave everyone permission to everything so there would not be any restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again; I also deleted my .bna and my network card files so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work either; the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the Playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that does not require me to create another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched names, but I'm totally aware of it)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed the default permission rules to my own
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again, shutting down my Fabric, starting it again, and creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but that failed too. I'm running out of options.
I hope this description is not too long to be ignored. Thanks in advance.
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice; make sure the names match once you do get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
The procedure for an upgrade would normally be as follows:
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name) you would:
Make the changes to your permissions.acl to include the System ACLs this time, e.g.:
/**
 * Sample access control list.
 */
rule SystemACL {
    description: "System ACL to permit all access"
    participant: "org.hyperledger.composer.system.Participant"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}
rule NetworkAdminUser {
    description: "Grant business network administrators full access to user resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "**"
    action: ALLOW
}
rule NetworkAdminSystem {
    description: "Grant business network administrators full access to system resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (i.e. increment it to the next version, e.g. update the version property from 0.0.1 to 0.0.2).
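For example, the relevant part of package.json would then look roughly like this (assuming the business network name is a3-policy-network, to match the commands below; only the version line actually changes):
{
  "name": "a3-policy-network",
  "version": "0.0.2"
}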
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code first:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again with your admin card to see that the ACL changes are now in effect:
composer network ping -c admin@a3-policy-network

ERROR: (gcloud.beta.functions.deploy) ... message=[The caller does not have permission]

I am trying to deploy code from this repo:
https://github.com/anishkny/puppeteer-on-cloud-functions
in Google Cloud Build. My cloudbuild.yaml file contents are:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
I have given the following roles to my Cloud Build Service account (****@cloudbuild.gserviceaccount.com):
Cloud Build Service Account
Cloud Functions Developer
Yet, in my Cloud Build log I see the following error:
starting build "1f04522c-fe60-4a25-a4a8-d70e496e2821"
FETCHSOURCE
Fetching storage object: gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047
Copying gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047...
/ [0 files][ 0.0 B/ 835.0 B]
/ [1 files][ 835.0 B/ 835.0 B]
Operation completed over 1 objects/835.0 B.
tar: Substituting `.' for empty member name
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: (gcloud.beta.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[The caller does not have permission]
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1
What am I missing?
It would appear that the permissions changed when (perhaps) Cloud Functions went GA. Another customer raised this issue today and I recalled your question.
The Cloud Build robot (${NUM}@cloudbuild.gserviceaccount.com) additionally needs to be a serviceAccountUser of the ${PROJECT}@appspot.gserviceaccount.com account:
NB While the Cloud Build robot's local part is the project number (${NUM}), the appspot robot's local part is the project ID (${PROJECT}).
Please try:
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@appspot.gserviceaccount.com \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser \
  --project=${PROJECT}
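To confirm the binding took effect, something like this should list it (same variables as above; just a verification sketch):
gcloud iam service-accounts get-iam-policy ${PROJECT}@appspot.gserviceaccount.com --project=${PROJECT}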
Let me know!
I struggled with this too after reading quite a bit of documentation. A combination of the above answers got me on the right track. Basically, something like the following is needed:
PROJECT=[PROJECT-NAME]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@appspot.gserviceaccount.com \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser \
  --project=${PROJECT}
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@[INSERT_YOUR_IAM_OWNER_SERVICE_ACCOUNT_NAME].iam.gserviceaccount.com \
  --member=serviceAccount:service-${NUM}@gcf-admin-robot.iam.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser
Also, I added the "Cloud Functions Developer" role to my @cloudbuild.gserviceaccount.com account via the IAM Console.
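If you prefer doing that last grant from the CLI instead of the console, a rough equivalent (reusing the ${PROJECT} and ${NUM} variables from above) is:
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/cloudfunctions.developer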
According to Cloud Build documentation, for Cloud Functions you have to grant the "Project Editor" role to your service account.
But, Cloud Functions documentation states that alternatively to using the Project Editor role, you can use "the Cloud Functions Developer role [but you have to] ensure that you have granted the Service Account User role". Regarding Service Accounts, it indicates to have "the CloudFunctions.ServiceAgent role on your project" and to "have permissions for trigger sources, such as Pub/Sub or the Cloud Storage bucket triggering your function".
Due to those considerations, my understanding is that the documentation omitted specifying all the roles your service account would need, and instead went directly to recommending the Project Editor role.
You have to update the Service Account permissions on the Cloud Build settings page.
Instructions are here: https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-cloud-run#fully-managed
You just have to set the status of the Cloud Run Admin role to ENABLED on that page.
Start your Cloud Build with an explicit auth step:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['auth', 'activate-service-account', 'xoxox@xoxo-dev.iam.gserviceaccount.com', '--key-file=account.json', '--project=rabbito-dev']
and then simply add your Cloud Functions deployment step:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
Please add the 'Cloud Functions Service Agent' role to your service account alongside 'Cloud Functions Developer'.

GCE Service Account with Compute Instance Admin permissions

I have set up a compute instance to run cronjobs on Google Compute Engine, using a service account with the following roles:
Custom Compute Image User + Deletion rights
Compute Admin
Compute Instance Admin (beta)
Kubernetes Engine Developer
Logs Writer
Logs Viewer
Pub/Sub Editor
Source Repository Reader
Storage Admin
Unfortunately, when I ssh into this cronjob runner instance and then run:
sudo gcloud compute --project {REDACTED} instances create e-latest \
--zone {REDACTED} --machine-type n1-highmem-8 --subnet default \
--maintenance-policy TERMINATE \
--scopes https://www.googleapis.com/auth/cloud-platform \
--boot-disk-size 200 \
--boot-disk-type pd-standard --boot-disk-device-name e-latest \
--image {REDACTED} --image-project {REDACTED} \
--service-account NAME_OF_SERVICE_ACCOUNT \
--accelerator type=nvidia-tesla-p100,count=1 --min-cpu-platform Automatic
I get the following error:
The user does not have access to service account {NAME_OF_SERVICE_ACCOUNT}. User: {NAME_OF_SERVICE_ACCOUNT} . Ask a project owner to grant you the iam.serviceAccountUser role on the service account.
Is there some other privilege besides Compute Instance Admin that I need in order to be able to create instances from my instance?
Further notes: (1) When I do not specify --service-account, the error is the same except that the service account my user doesn't have access to is the default '51958873628-compute@developer.gserviceaccount.com'.
(2) Adding/removing sudo doesn't change anything.
Creating an instance that uses a service account requires both the compute.instances.setServiceAccount permission and permission to act as that service account. To make this work, grant the iam.serviceAccountUser role to your service account (either on the entire project or on the specific service account you want to be able to create instances with); a gcloud sketch follows below.
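In gcloud terms, that grant looks roughly like this (a sketch; NAME_OF_SERVICE_ACCOUNT is the full service account email, and the member is the identity that actually creates the instances, assumed here to be the same service account):
gcloud iam service-accounts add-iam-policy-binding NAME_OF_SERVICE_ACCOUNT \
  --member=serviceAccount:NAME_OF_SERVICE_ACCOUNT \
  --role=roles/iam.serviceAccountUser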
Find out who you are first:
If you are using the Web UI: what email address did you use to log in?
If you are using local gcloud or Terraform: find the JSON file that contains your credentials for gcloud (often named similarly to myproject*.json) and see if it contains the email: grep client_email myproject*.json
Then make the IAM change in GCP:
Go to https://console.cloud.google.com
Go to IAM
Find your email address
Member -> Edit -> Add Another Role -> type in the role name Service Account User -> Add
(You can narrow it down with a Condition, but let's keep it simple for now.) A CLI sketch of the identity lookup follows below.
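From the command line, a quick way to see which identity gcloud is currently using (just a sketch) is:
gcloud auth list
gcloud config get-value account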
Make sure that NAME_OF_SERVICE_ACCOUNT is a service account from the current project.
If you change the project ID but don't change NAME_OF_SERVICE_ACCOUNT, you will encounter this error.
This can be checked in the Google Console -> IAM & Admin -> IAM.
Then look for the service account named ....-compute@developer.gserviceaccount.com and check whether the numbers at the beginning are correct; each project has different numbers in this service account name. A CLI sketch of this check follows below.
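For example, the following (a sketch; substitute your own project ID) lets you compare the project number with the service accounts that actually exist in the project:
gcloud projects describe YOUR_PROJECT_ID --format='value(projectNumber)'
gcloud iam service-accounts list --project YOUR_PROJECT_ID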

Deleted Compute Engine default service account

I cannot create virtual machines in GCE. While creating one, it shows an error message (I have attached a screenshot of it). I will briefly explain what I have done:
--> I deleted my Compute Engine default service account from my service account list. Later I created a new service account.
--> While creating virtual machines I selected the newly created service account; VM creation failed, but the error says the deleted service account ID is not found under service accounts.
--> While creating VMs it is still referring to my deleted service account ID.
Now what do I need to do? Is there any solution to reactivate my Compute Engine default service account?
I am completely stuck now; I cannot create new VMs or Kubernetes clusters.
To restore your Compute Engine default service account, run the following gcloud command within your project:
gcloud services enable compute.googleapis.com
In previous gcloud versions the command was:
gcloud service-management enable compute.googleapis.com
As stated in this issue: https://issuetracker.google.com/issues/69612457
You can now "undelete" service accounts by doing a curl request as below:
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-length: 0" "https://iam.googleapis.com/v1/projects/-/serviceAccounts/SERVICE_ACCOUNT_ID:undelete"
SERVICE_ACCOUNT_ID is the id of the account you want to recover
You can get a list of service accounts by running:
gcloud logging read "resource.type=service_account" --freshness=10y
Reference:
https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
There are two default service accounts and I am not sure which one you are referring to:
Google APIs service account, in your case called 933144605699@cloudservices.gserviceaccount.com. It is a special service account. It is always created but never listed in gcloud or the web console. It is intended to be used by some internal Google processes on the user's behalf. GKE may be one of the services that uses this account (I am not sure).
It is impossible to delete this account; the only thing you can do is remove it from any roles on the project. By default it is an Editor. You can add it back any time (a gcloud sketch follows below).
Default service account: 933144605699-compute@developer.gserviceaccount.com. This is a normal service account, which you may delete.
In the error message you pasted there is a different service account name, is it the new one you created? If this is the case, you might only need to go to IAM settings on the web console and add your user to service account actor. Take a look at this manual page: https://cloud.google.com/compute/docs/access/iam#the_serviceaccountactor_role
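If you did remove the Google APIs service account from its roles, re-adding it as an Editor can be done roughly like this (a sketch, using the project number from the example above; substitute your own project ID and number):
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member=serviceAccount:933144605699@cloudservices.gserviceaccount.com \
  --role=roles/editor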
First you need to find the removed SERVICE_ACCOUNT_ID. Using Logging advanced queries, the filter is:
resource.type = "service_account"
protoPayload.authorizationInfo.permission = "iam.serviceAccounts.delete"
In the matching log entry, the unique_id value is the SERVICE_ACCOUNT_ID.
Then use the API call provided by @sherief-el-feky:
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-length: 0" "https://iam.googleapis.com/v1/projects/-/serviceAccounts/SERVICE_ACCOUNT_ID:undelete"
Logging advanced queries: https://cloud.google.com/logging/docs/view/advanced-queries
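The same lookup can also be done from the CLI; a rough sketch (the exact output fields may vary):
gcloud logging read 'resource.type="service_account" AND protoPayload.authorizationInfo.permission="iam.serviceAccounts.delete"' --freshness=10y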
As of Feb 2022, use
gcloud beta iam service-accounts undelete <ACCOUNT ID>
ACCOUNT ID is the 21-digit unique id (uid), which is the last part of the deleted service account entry.
For example,
deleted:serviceAccount:abc-project@kubeflow-ml.iam.gserviceaccount.com?uid=123451234512345123451
uid is the last part of the above service account.
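So for that example, the command would be:
gcloud beta iam service-accounts undelete 123451234512345123451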