Why does gsutil restore a file from a bucket encrypted with KMS, using a service account without Decrypt permission? - google-cloud-kms

I am working with GCP KMS, and it seems that when I send a file to a GCP bucket (using gsutil cp) it is encrypted.
However, I have a question about the permission to restore that file from the same bucket using a different service account. The service account I use to restore the file doesn't have the Decrypt privilege, and yet gsutil cp works.
My question is whether this is normal behavior, or if I'm missing something.
Let me describe my question:
First of all, I confirm that the default encryption for the bucket is the KEY that I set up previously:
$ gsutil kms encryption gs://my-bucket
Default encryption key for gs://my-bucket:
projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY
Next, with gcloud config, I set a service account which has the "Storage Object Creator" and "Cloud KMS CryptoKey Encrypter" roles:
$ gcloud config set account my-service-account-with-Encrypter-and-object-creator-permissions
Updated property [core/account].
I send a local file to the bucket:
$ gsutil cp my-file gs://my-bucket
Copying file://my-file [Content-Type=application/vnd.openxmlformats-officedocument.presentationml.presentation]...
| [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
After sending the file to the bucket, I confirm that the file is encrypted using the KMS key I created before:
$ gsutil ls -L gs://my-bucket
gs://my-bucket/my-file:
Creation time: Mon, 25 Mar 2019 06:41:02 GMT
Update time: Mon, 25 Mar 2019 06:41:02 GMT
Storage class: REGIONAL
KMS key: projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY/cryptoKeyVersions/1
Content-Language: en
Content-Length: 616959
Content-Type: application/vnd.openxmlformats-officedocument.presentationml.presentation
Hash (crc32c): 8VXRTU==
Hash (md5): fhfhfhfhfhfhfhf==
ETag: xvxvxvxvxvxvxvxvx=
Generation: 876868686868686
Metageneration: 1
ACL: []
Next, I set another service account, but this time WITHOUT the Decrypt permission and with the Object Viewer role (so that it is able to read files from the bucket):
$ gcloud config set account my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions
Updated property [core/account].
After setting up the new service account (WITHOUT the Decrypt permission), the gsutil command to restore the file from the bucket works smoothly...
gsutil cp gs://my-bucket/my-file .
Copying gs://my-bucket/my-file...
\ [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
My question is: is this normal behavior? Or, since the new service account doesn't have the Decrypt permission, shouldn't the gsutil cp that restores the file fail? I mean, isn't the idea of KMS encryption that the second gsutil cp command should fail with a "403 Permission denied" error or something similar?
If I revoke the "Storage Object Viewer" role from the 2nd service account, the gsutil command does fail, but only because the account no longer has permission to read the file:
$ gsutil cp gs://my-bucket/my-file .
AccessDeniedException: 403 my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions does not have storage.objects.list access to my-bucket.
I'd appreciate it if someone could give me a hand and clarify the question. Specifically, I'm not sure whether the command gsutil cp gs://my-bucket/my-file . should work or not.
I think it shouldn't work (because the service account doesn't have the Decrypt permission), or should it?

This is working correctly. When you use Cloud KMS with Cloud Storage, the data is encrypted and decrypted under the authority of the Cloud Storage service, not under the authority of the entity requesting access to the object. This is why you have to add the Cloud Storage service account to the ACL for your key in order for CMEK to work.
When an encrypted GCS object is accessed, the accessor's own KMS decrypt permission is never used, so whether or not it is present is irrelevant.
If you don't want the second service account to be able to access the file, remove its read access.
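For reference, a sketch of how that Cloud Storage service account is usually added to the key (the key name is the one from the question; the bucket project ID my-bucket-project is a hypothetical name):

gsutil kms authorize -p my-bucket-project \
  -k projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY

This grants the bucket project's GCS service agent the Encrypter/Decrypter role on the key, which is what lets Cloud Storage decrypt objects on behalf of any caller that has storage read access.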

By default, Cloud Storage encrypts all object data using Google-managed encryption keys. You can instead provide your own keys. There are two types:
CSEK (customer-supplied encryption keys), which you must supply yourself with every request
CMEK (customer-managed encryption keys), which you also control, but which are stored and managed by the Cloud KMS service (this is the one you are using).
When you use gsutil cp, the encryption happens behind the scenes. So, as stated in the documentation for Using Encryption Keys:
While decrypting a CSEK-encrypted object requires supplying the CSEK
in one of the decryption_key attributes, this is not necessary for
decrypting CMEK-encrypted objects because the name of the CMEK used to
encrypt the object is stored in the object's metadata.
As you can see, supplying the key is not necessary because its name is already stored in the object's metadata, which is what gsutil uses. The same documentation also says:
If encryption_key is not supplied, gsutil ensures that all data it
writes or copies instead uses the destination bucket's default
encryption type - if the bucket has a default KMS key set, that CMEK
is used for encryption; if not, Google-managed encryption is used.
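For contrast, a rough sketch of the difference in practice (the CSEK bucket name and the base64 key below are hypothetical placeholders):

# CMEK object: no key needed on the command line; gsutil finds the key name
# in the object's metadata and Cloud Storage decrypts on the caller's behalf.
gsutil cp gs://my-bucket/my-file .

# CSEK object: the AES-256 key itself must be supplied, e.g. via a boto option.
gsutil -o 'GSUtil:decryption_key1=aHlwb3RoZXRpY2FsLWtleS1wbGFjZWhvbGRlcg==' \
  cp gs://my-csek-bucket/my-file .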

Related

Google Cloud Functions using service account file not found

I am trying to list the buckets located in another GCP project, essentially using the code from here: https://cloud.google.com/docs/authentication/production#auth-cloud-explicit-python
To do this I need to authenticate with a JSON key file.
Unfortunately I can't resolve the error with linking my JSON file to the function. This is the code I use:
def explicit(argument):
    from google.cloud import storage

    # Explicitly use service account credentials by specifying the private key
    # file.
    storage_client = storage.Client.from_service_account_json(
        'gs://PROJECT/PATH/service_account.json')

    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    print(buckets)
The error I am getting:
Traceback (most recent call last):
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1518, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1516, in full_dispatch_request
    rv = self.dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1502, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/__init__.py", line 99, in view_func
    return function(request._get_current_object())
  File "/workspace/main.py", line 6, in explicit
    storage_client = storage.Client.from_service_account_json(
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/google/cloud/client.py", line 106, in from_service_account_json
    with io.open(json_credentials_path, "r", encoding="utf-8") as json_fi:
FileNotFoundError: [Errno 2] No such file or directory: 'gs://PROJECT/PATH/service_account.json'
How can I properly reference the file to execute the function?
Google's client libraries all support Application Default Credentials (ADC), which very helpfully includes finding credentials automatically. You should use this.
When your code runs on the Cloud Functions service, it runs as a Service Account identity. You're probably running the Cloud Functions using the default Cloud Functions Service Account. You can specify a user-defined Service Account when you deploy|update the Cloud Function.
NOTE It's better to run your Cloud Functions using user-defined, non-default Service Accounts.
When the Cloud Function tries to create a Cloud Storage client, it does so using its identity (the Service Account).
A solution is to:
Utilize ADC and have the Cloud Storage client be created with it, i.e. storage_client = storage.Client()
Adjust the Cloud Function's Service Account to have the correct IAM roles|permissions for Cloud Storage.
NOTE You should probably adjust the Bucket's IAM Policy. Because you're using a Cloud Storage Bucket in a different project, if you want to adjust the Project's IAM Policy instead, ensure you use the Bucket's (!) Project's IAM Policy and not the Cloud Function's Project's IAM Policy.
See IAM roles for Cloud Storage: predefined roles; roles/storage.objectViewer may be what you want, as it includes the storage.objects.list permission that you need.
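As a sketch of those two adjustments (all names here are hypothetical):

# Deploy (or redeploy) the function with a user-defined runtime service account:
gcloud functions deploy my-function \
  --runtime=python39 --trigger-http \
  --service-account=func-runner@my-project.iam.gserviceaccount.com

# Grant that account read access on the bucket in the other project:
gsutil iam ch \
  serviceAccount:func-runner@my-project.iam.gserviceaccount.com:roles/storage.objectViewer \
  gs://bucket-in-other-project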
You're trying to list buckets located in another GCP project while authorizing your client with a Service Account (SA) key file that you also pull from Google Cloud Storage (GCS).
I'd recommend a different security pattern to resolve this. Essentially, use a single SA that has permission to invoke your Cloud Function and permission to list the contents of your GCS bucket. It circumvents the need to pull a key file from GCS while still maintaining security, since a bad actor would need access to your GCP account. The following steps show you how to do so.
Create a Service Account (SA) with the role Cloud Functions Invoker in your first Cloud Functions project
Grant your user account or user group the Service Account User role on this new SA
Change your Cloud Function to use this newly created SA (this is the runtime SA, which you can set when deploying or editing the function)
Grant the Cloud Function runtime SA a GCS role in the second project or GCS bucket
In this pattern, you will "actAs" the Cloud Function runtime SA allowing you to invoke the Cloud Function. Since the Cloud Function runtime SA has adequate permissions to your GCS bucket in your other project there's no need for an SA key since the runtime SA already has adequate permissions.
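A sketch of those steps with gcloud (all names hypothetical; changing the function's runtime SA itself is the --service-account deploy flag shown in the previous answer's sketch):

# Create the SA and let it invoke functions in the first project:
gcloud iam service-accounts create func-runner --project=project-one
gcloud projects add-iam-policy-binding project-one \
  --member=serviceAccount:func-runner@project-one.iam.gserviceaccount.com \
  --role=roles/cloudfunctions.invoker

# Let your user account act as the new SA:
gcloud iam service-accounts add-iam-policy-binding \
  func-runner@project-one.iam.gserviceaccount.com \
  --member=user:you@example.com --role=roles/iam.serviceAccountUser

# Grant the runtime SA read access on the bucket in the second project:
gsutil iam ch \
  serviceAccount:func-runner@project-one.iam.gserviceaccount.com:roles/storage.objectViewer \
  gs://bucket-in-project-two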

Need Azure Files shares to be mounted using SAS signatures

Friends, any idea how to mount an Azure file share in a container using a SAS signature?
I was able to mount the Azure file share using the storage account name and storage account key, but I wasn't able to do it with a SAS token.
If you have come across this kind of requirement, please feel free to share your suggestions.
Tried the below command to create the secret:
kubectl create secret generic dev-fileshare-sas --from-literal=accountname=######### --from-literal=sasToken="########" --type="azure/blobfuse"
volumes mount conf in container:
- name: azurefileshare
  flexVolume:
    driver: "azure/blobfuse"
    readOnly: false
    secretRef:
      name: dev-fileshare-sas
    options:
      container: test-file-share
      mountoptions: "--file-cache-timeout-in-seconds=120"
Thanks.
To mount an Azure file share, you must use SMB. SMB supports mounting the file share using identity-based authentication (AD DS and Azure AD DS) or the storage account key, but not SAS. A SAS token can only be used when accessing the file share over REST (for example, with Storage Explorer).
This is covered in the FAQ: Frequently asked questions (FAQ) for Azure Files | Microsoft Docs
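If the account-key approach is acceptable, a minimal sketch of the SMB route (names and the key placeholder are hypothetical; this uses the in-tree azureFile volume type, which expects exactly these two secret keys):

kubectl create secret generic azure-files-secret \
  --from-literal=azurestorageaccountname=mystorageaccount \
  --from-literal=azurestorageaccountkey='<account-key>'

- name: azurefileshare
  azureFile:
    secretName: azure-files-secret
    shareName: test-file-share
    readOnly: false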

GCE Service Account with Compute Instance Admin permissions

I have set up a compute instance to run cronjobs on Google Compute Engine, using a service account with the following roles:
Custom Compute Image User + Deletion rights
Compute Admin
Compute Instance Admin (beta)
Kubernetes Engine Developer
Logs Writer
Logs Viewer
Pub/Sub Editor
Source Repository Reader
Storage Admin
Unfortunately, when I ssh into this cronjob runner instance and then run:
sudo gcloud compute --project {REDACTED} instances create e-latest \
--zone {REDACTED} --machine-type n1-highmem-8 --subnet default \
--maintenance-policy TERMINATE \
--scopes https://www.googleapis.com/auth/cloud-platform \
--boot-disk-size 200 \
--boot-disk-type pd-standard --boot-disk-device-name e-latest \
--image {REDACTED} --image-project {REDACTED} \
--service-account NAME_OF_SERVICE_ACCOUNT \
--accelerator type=nvidia-tesla-p100,count=1 --min-cpu-platform Automatic
I get the following error:
The user does not have access to service account {NAME_OF_SERVICE_ACCOUNT}. User: {NAME_OF_SERVICE_ACCOUNT} . Ask a project owner to grant you the iam.serviceAccountUser role on the service account.
Is there some other privilege besides Compute Instance Admin that I need in order to create instances from my instance?
Further notes: (1) when I try not specifying --service-account, the error is the same, except that the service account my user doesn't have access to is the default '51958873628-compute@developer.gserviceaccount.com'.
(2) adding/removing sudo doesn't change anything.
Creating an instance that uses a service account requires that you have the compute.instances.setServiceAccount permission on that service account. To make this work, grant the iam.serviceAccountUser role to the account that is creating the instances, either on the entire project or on the specific service account you want the new instances to run as.
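A sketch of that grant on the specific service account (the caller's email, cronjob-runner@my-project.iam.gserviceaccount.com, is a hypothetical stand-in for whichever identity runs the gcloud command):

gcloud iam service-accounts add-iam-policy-binding NAME_OF_SERVICE_ACCOUNT \
  --member=serviceAccount:cronjob-runner@my-project.iam.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser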
Find out who you are first:
if you are using the Web UI: what email address did you use to log in?
if you are using local gcloud or terraform: find the JSON file that contains your credentials for gcloud (often named similarly to myproject*.json) and see if it contains the email: grep client_email myproject*.json
GCP IAM change
Go to https://console.cloud.google.com
Go to IAM
Find your email address
Member -> Edit -> Add Another Role -> type in the role name Service Account User -> Add
(You can narrow it down with a Condition, but let's keep it simple for a while.)
Make sure that NAME_OF_SERVICE_ACCOUNT is a service account from the current project.
If you change the project ID but don't change NAME_OF_SERVICE_ACCOUNT, you will encounter this error.
This can be checked on Google Console -> IAM & Admin -> IAM.
Then look for the service account named ....-compute@developer.gserviceaccount.com and check that the numbers at the beginning are correct. Each project has different numbers in this service account name.
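A quick way to do the same check from the shell (the project ID is hypothetical):

gcloud iam service-accounts list --project=my-project

This lists the service accounts that actually exist in that project, including its ....-compute@developer.gserviceaccount.com default account with the project's own number.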

Can't get Google Cloud Platform to recognize JSON service account key file. Error: PyOpenSSL is not available. Suggests JSON but I'm using a JSON key

I'm an utter newbie, so forgive what may be a stupid question, but when I try to pass the location of my service account key file to Google Cloud Platform, I receive the message:
WARNING: .p12 service account keys are not recommended unless it is necessary for
backwards compatibility. Please switch to a newer .json service account key for
this account.
ERROR: (gcloud.auth.activate-service-account) PyOpenSSL is not available. If you
have already installed PyOpenSSL, you will need to enable site packages by setting
the environment variable CLOUDSDK_PYTHON_SITEPACKAGES to 1. If that does not
work, see https://developers.google.com/cloud/sdk/crypto for details or consider using .json private key instead.
However I selected and downloaded a JSON key. Can anyone tell me what is happening and how to get around this? Not sure if I'm providing enough info so please ask if you need details. Thanks!
The error indicates that you're probably using a deprecated .p12-format service account key file (and that the crypto libraries required to read keys in that format are unavailable) instead of the JSON format.
You might want to double-check that the key file you downloaded is indeed JSON. A quick way to verify this is by opening the file in a text editor, or if you're on *nix or OS X you can just use cat. I've shown an example JSON service account key file below:
$ cat my-service-account-key.json
{
  "type": "service_account",
  "project_id": "PROJECT_NAME",
  "private_key_id": "YOUR_PRIVATE_KEY_ID",
  "private_key": "-----BEGIN PRIVATE KEY-----\nYOUR_PRIVATE_KEY\n-----END PRIVATE KEY-----\n",
  "client_email": "SERVICE_ACCOUNT_NAME@PROJECT_NAME.iam.gserviceaccount.com",
  "client_id": "CLIENT_ID",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "URL"
}
To activate the service account you will have to run the gcloud auth activate-service-account command:
gcloud auth activate-service-account --key-file=/path/to/service-account-key.json
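If activation succeeds, you can confirm which credentials are now active with:

gcloud auth list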
If you're using the key with the setup-gcloud GitHub Action (linked below), it must additionally be base64-encoded, which you can do with the following bash command:
$ cat key_file.json | base64
Please take a look at gcloud setup instructions at:
https://github.com/GoogleCloudPlatform/github-actions/tree/master/setup-gcloud#inputs

Deleted Compute Engine default service account

I cannot create virtual machines in GCE. While creating one, it shows an error message (I have attached a screenshot of it). I will briefly explain what I have done:
--> I deleted my Compute Engine default service account from my service account list, and later I created a new service account.
--> While creating virtual machines I selected the newly created service account. VM creation failed, but the error says the deleted service account ID was not found under service accounts.
--> While creating VMs it still refers to my deleted service account ID.
What should I do now? Is there any way to reactivate my Compute Engine default service account?
I am completely stuck; I cannot create new VMs or Kubernetes clusters.
To restore your Compute Engine default service account, run the following gcloud command within your project:
gcloud services enable compute.googleapis.com
In previous versions the command was:
gcloud service-management enable compute.googleapis.com
As stated in this issue: https://issuetracker.google.com/issues/69612457
You can now "undelete" service accounts by making a curl request as below:
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-length: 0" "https://iam.googleapis.com/v1/projects/-/serviceAccounts/SERVICE_ACCOUNT_ID:undelete"
SERVICE_ACCOUNT_ID is the ID of the account you want to recover.
You can get a list of service account audit log entries (which include the IDs of deleted accounts) by running:
gcloud logging read "resource.type=service_account" --freshness=10y
Reference:
https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
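Putting the two together, a sketch that extracts just the numeric IDs of deleted accounts (the methodName filter and the unique_id label are assumptions based on the IAM audit log format):

gcloud logging read \
  'resource.type="service_account" AND protoPayload.methodName="google.iam.admin.v1.DeleteServiceAccount"' \
  --freshness=10y --format='value(resource.labels.unique_id)'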
There are two default service accounts and I am not sure which one you are referring to:
Google API service account, in your case it is called: 933144605699@cloudservices.gserviceaccount.com. It is a special service account. It is always created but never listed in gcloud or the web console. It is intended to be used by some of the internal Google processes on the user's behalf. GKE may be one of the services that uses this account (I am not sure).
It is impossible to delete this account, the only thing you could do is to remove it from any roles on the project. By default it is an Editor. You can add it back any time.
Default service account: 933144605699-compute@developer.gserviceaccount.com. This is a normal service account, which you may delete.
In the error message you pasted there is a different service account name; is it the new one you created? If so, you might only need to go to the IAM settings in the web console and grant your user the service account actor role. Take a look at this manual page: https://cloud.google.com/compute/docs/access/iam#the_serviceaccountactor_role
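If you stripped the Google APIs service account of its roles and want it back as Editor, a sketch (the project ID my-project is hypothetical; the project number is the one from this example):

gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:933144605699@cloudservices.gserviceaccount.com \
  --role=roles/editor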
First you need to find the removed SERVICE_ACCOUNT_ID, using a Logging advanced query such as:
resource.type = "service_account"
protoPayload.authorizationInfo.permission = "iam.serviceAccounts.delete"
In the matching log entry, the unique_id value is the SERVICE_ACCOUNT_ID.
Then use the API provided by @sherief-el-feky:
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-length: 0" "https://iam.googleapis.com/v1/projects/-/serviceAccounts/SERVICE_ACCOUNT_ID:undelete"
Logging advanced queries: https://cloud.google.com/logging/docs/view/advanced-queries
As of Feb 2022, use
gcloud beta iam service-accounts undelete ACCOUNT_ID
ACCOUNT_ID is the 21-digit unique ID (uid), which is the last part of the deleted service account reference.
For example, in
deleted:serviceAccount:abc-project@kubeflow-ml.iam.gserviceaccount.com?uid=123451234512345123451
the uid is the last part of the above service account.
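For instance, using the hypothetical uid above and then verifying the account is back:

gcloud beta iam service-accounts undelete 123451234512345123451
gcloud iam service-accounts list --project=kubeflow-ml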