Different results between Azure portal and shell - azure-cli

If I run the CLI command
az storage account list
from https://shell.azure.com, I get the result [], i.e. no storage accounts found.
However, if I switch to https://portal.azure.com, I can see 22 storage accounts listed.
Can anyone explain why I am getting different results for the same authenticated account?

If your storage accounts are ARM (Azure Resource Manager) model, the two results should be the same.
1. Make sure you are checking them in the same subscription; in Cloud Shell, you can use az account show to see which subscription is active (see the example below).
2. az storage account list only covers ARM storage accounts; if a storage account is classic, it will not be returned.
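For reference, a quick way to check and switch the active subscription in Cloud Shell (the subscription name/ID is a placeholder):
az account show --query "{name:name, id:id}" --output table
az account list --output table
az account set --subscription "<subscription-name-or-id>"
az storage account list --output table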

Related

Triggering a cloud function from a cloud storage bucket in a different project

I have a requirement where I need to trigger a cloud function, which in turn triggers a Dataflow job, once a file is placed in a Google Cloud Storage bucket of another project. The Google documentation says it's not possible; please see here: https://cloud.google.com/functions/docs/calling/storage
However, I tried doing this and my cloud function failed to deploy with an error. It looks like this is a permission issue, and if the required permissions are given, this should work.
Do I need to grant the Owner permission to the @appspot.gserviceaccount.com account of the project (project A) from which I am trying to access the bucket of the other project (project B)?
If the above is true, in my project B IAM page, I will see two entries as below:
@appspot.gserviceaccount.com OWNER
@appspot.gserviceaccount.com EDITOR
Any input on this is much appreciated.
It's not possible to catch events from other projects for now, but there is a workaround:
1. In the project with the bucket, create a PubSub notification on Cloud Storage (see the sketch just after these steps).
2. On the topic that you created, create a push subscription. Use the Cloud Functions URL, and secure the PubSub call (you can get inspiration from there; if you are stuck, let me know and I will take more time to describe this part).
3. On the Cloud Function, grant the PubSub service account the cloudfunctions.invoker role.
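For the first step, a minimal sketch using gsutil (bucket and topic names are placeholders); this creates the Cloud Storage notification in project B and also creates the topic if it doesn't already exist:
# In project B: publish OBJECT_FINALIZE events from the bucket to the topic as JSON
gsutil notification create -t <GCSNotifTopic> -f json -e OBJECT_FINALIZE gs://<bucketInProjectB>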
EDIT 1
The security part isn't so easy at the beginning. In your project B (where you have your Cloud Storage), you have created a PubSub topic. On this topic you can create a push subscription that authenticates with a service account created in project B. Take care to fill in the audience correctly.
Then you need to grant this "project B" service account the roles/cloudfunctions.invoker role on the Cloud Function of project A:
# Create the service account
gcloud iam service-accounts create pubsub-push --project=<projectB>
# Create the push subscription (the subscription needs a name; <subscriptionName> is a placeholder)
gcloud pubsub subscriptions create <subscriptionName> \
 --push-endpoint=https://<region>-<projectA>.cloudfunctions.net/<functionName> \
 --push-auth-service-account=pubsub-push@<projectB>.iam.gserviceaccount.com \
 --push-auth-token-audience=https://<region>-<projectA>.cloudfunctions.net/<functionName> \
 --topic=<GCSNotifTopic> --project=<projectB>
# Grant the service account the invoker role on the function in project A
gcloud functions add-iam-policy-binding <functionName> \
 --member=serviceAccount:pubsub-push@<projectB>.iam.gserviceaccount.com \
 --role=roles/cloudfunctions.invoker --project=<projectA>
Last traps:
The Cloud Function in project A doesn't have the same signature when it's an HTTP function (callable by a PubSub push subscription) as when it's a background function (callable by events, such as a Cloud Storage event). You need to update it according to the documentation.
The PubSub message sent to the Cloud Function is slightly different from the Cloud Storage event. Take care to update the input parameters accordingly; a rough sketch of the push payload follows.
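For illustration only (all values are placeholders): a background function triggered directly by Cloud Storage receives the object metadata as its event argument, whereas an HTTP function behind a push subscription receives the PubSub push envelope, with the same object metadata base64-encoded inside message.data:
{
  "message": {
    "attributes": {
      "eventType": "OBJECT_FINALIZE",
      "bucketId": "<bucketInProjectB>",
      "objectId": "<fileName>"
    },
    "data": "<base64-encoded Cloud Storage object metadata JSON>",
    "messageId": "<id>"
  },
  "subscription": "projects/<projectB>/subscriptions/<subscriptionName>"
}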
The Google documentation says it's not possible
This is all you need to know. It is not possible. There are no workarounds, regardless of what sort of error messages you might see.

Is there any systematic way to find the minimum access right or role required for each Azure CLI command?

I am working on a project in which I need to define the exact minimum security role for each operation.
Is there any systematic way or documentation to find the minimum access right or role required for each Azure CLI command?
Well, there is no systematic way or doc to find it directly; it takes some experience and testing. You can refer to the approach below, which applies to most situations.
Azure CLI commands essentially call the Azure REST API. If you use the --debug parameter with a CLI command, you can find the API the command calls.
For example, I use the az vm list to list all the VMs in a resource group.
az vm list -g <group-name> --debug
Then you will find it calls the Virtual Machines - List API. Next, search for the resource provider and resource type, i.e. Microsoft.Compute/virtualMachines, in this doc; you can easily find Microsoft.Compute/virtualMachines/read. Here you need some experience, but in my view this action permission should be the correct one.
Then you can create a custom role with this action and test it, adjusting the permissions depending on the result (see the sketch below). In most situations, if you don't have enough permissions to do an operation, the error message will include the action permission you need.
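A minimal sketch of such a test, assuming a role definition file role.json and a test user; all names and the subscription ID are placeholders:
# role.json - custom role containing only the suspected minimum action
{
  "Name": "VM Lister (test)",
  "Description": "Test role for az vm list",
  "Actions": [ "Microsoft.Compute/virtualMachines/read" ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
# Create the role, assign it on the resource group, then re-run the command as the test user
az role definition create --role-definition role.json
az role assignment create --assignee <test-user-or-sp> --role "VM Lister (test)" --resource-group <group-name>
az vm list -g <group-name>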

Google Drive SA accounts can't access objects on shared disks created by other users

I would like to have access to objects on shared disks created by other users, using SA (service) accounts.
I discovered that by making a call to https://www.googleapis.com/drive/v3/files with the following query:
q=mimeType!='application/vnd.google-apps.folder' and 'GOOGLE_DRIVE_FOLDER_ID' in parents and trashed=false&supportsTeamDrives=true&teamDriveId=GOOGLE_TEAM_DRIVE_ID&fields=files(id ,name ,webViewLink ,webContentLink)
I get different results depending on the account. If I use an access token generated for a service account, I get a different result than if I use an access token generated for a user account.
The service account "sees" only files that were created by that particular service account, whereas regular users "see" all the files created by other users as well.
Has anyone had a similar issue and knows of any solution or workaround?
I get different results depending on the account. If I use an access token generated for a service account, I get a different result than if I use an access token generated for a user account.
What you need to understand is that you can only see the files that you have permission to see. If you are logged in on a normal user account, you will only be able to see the files that you own or have access to. The same goes for a service account; think of a service account as a dummy user. The service account can only see the files it has been granted access to.
Assuming the shared disks you are talking about are on G Suite, you can have the G Suite admin set up domain-wide delegation on the service account and grant it access to the files on the domain.
If you don't have G Suite, or don't want to give the service account full access to the domain, you might also want to try having the owner of the drive call permissions.create and add the service account.
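As a rough sketch of that last option (the file or shared-drive ID, the access token and the service account address are placeholders), the owner could grant the service account reader access with a single Drive API call:
curl -X POST \
 -H "Authorization: Bearer <OWNER_ACCESS_TOKEN>" \
 -H "Content-Type: application/json" \
 -d '{"type": "user", "role": "reader", "emailAddress": "<sa-name>@<project>.iam.gserviceaccount.com"}' \
 "https://www.googleapis.com/drive/v3/files/<FILE_OR_SHARED_DRIVE_ID>/permissions?supportsAllDrives=true"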

How to assign multiple service account credentials to Google Cloud Functions?

I have three service accounts:
App engine default service account
Datastore service account
Alert Center API service account
My cloud function uses Firestore in Datastore mode for bookkeeping and invokes the Alert Center API.
One can assign only one service account while deploying cloud functions.
Is there a way, similar to AWS, where one can create multiple inline policies and assign them to the default service account?
P.S. I tried creating a custom service account, but Datastore roles are not supported. Also, I do not want to store credentials in environment variables or upload a credentials file with the source code.
You're looking at service accounts a bit backwards.
Granted, I see how the naming can lead you in this direction. "Service" in this case doesn't refer to the service being offered, but rather to the non-human entities (i.e. apps, machines, etc - called services in this case) trying to access that offered service. From Understanding service accounts:
A service account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved.
So you shouldn't be looking at service accounts from the offered service perspective - i.e. Datastore or Alert Center API, but rather from their "users" perspective - your CF in this case.
That single service account assigned to a particular CF is simply identifying that CF (as opposed to some other CF, app, machine, user, etc) when accessing a certain service.
If you want that CF to be able to access a certain Google service, you need to give that CF's service account the proper role(s) and/or permissions to do that.
For accessing the Datastore you'd be looking at these Permissions and Roles (see the sketch below). If the datastore that your CFs need to access is in the same GCP project, the default CF service account - which is the same as the GAE app's one from that project - already has access to the Datastore (of course, only if you're OK with using the default service account).
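For instance, a minimal sketch of granting a Datastore role to the CF's runtime service account (the project ID is a placeholder, and roles/datastore.user is just one commonly used role; pick the role that fits your case):
gcloud projects add-iam-policy-binding <project-id> \
 --member=serviceAccount:<project-id>@appspot.gserviceaccount.com \
 --role=roles/datastore.user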
I didn't use the Alert Center API, but apparently it uses OAuth 2.0, so you probably should go through Service accounts.

Google cloud instance instantiation - Authorized GAE

My task is to create a MySQL database inside Google Cloud SQL. Following the instructions, I am trying to set up an instance, without luck. The problem is the message
"Authorized GAE applications must be in the same region as the database instance"
even though I have checked the region setting for both the instance and the application and they match. I also don't know what I should put in the "authorized networks" box. Thanks in advance.
That message means you chose a region (EU for example) for your Cloud SQL that is different from the region of your App Engine application (US for example) where you created the Cloud SQL instance.
From the documentation
Note: An App Engine application must be in the same region (either European Union or United States) as a Google Cloud SQL instance to be authorized to access that Google Cloud SQL instance.
As the GAE location can't be changed, you would have to change the region of the Cloud SQL instance; but since the region of an existing instance can't be changed either, you'd need to create a new instance in the exact region of your app.
Authorized networks is exactly what Paul said: the IPs or subnetworks you want to whitelist to access your instance. It only matters if you plan to access your instance with a MySQL client.
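For completeness, a minimal sketch of creating a replacement instance in the app's region with gcloud (instance name, tier and region are placeholders, and this assumes a second-generation MySQL instance):
gcloud sql instances create <new-instance-name> \
 --database-version=MYSQL_5_7 \
 --region=<gae-app-region> \
 --tier=db-n1-standard-1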