How do I prevent insiders from accessing data or secrets encrypted with Cloud KMS?

Is encrypting data with Cloud KMS sufficient to prevent my organization’s employees from accessing encrypted data? What best practices are there to avoid unnecessary exposure?

Resources in Cloud KMS are Google Cloud Platform resources whose access can be managed with IAM and audited with Cloud Audit Logging. You should set permissions that limit the use of encryption keys to only those individuals who need access.
You can apply the principle of separation of duties: the individual who manages encryption keys should not be the same individual who accesses what those keys protect, such as secrets. In practice, give one person key administration rights, such as key rotation (IAM role: Cloud KMS Admin), and a different person key use rights, i.e. the ability to encrypt and decrypt data (IAM role: Cloud KMS CryptoKey Encrypter/Decrypter).
For further discussion of separation of duties in Cloud KMS, see https://cloud.google.com/kms/docs/separation-of-duties
To give a user the ability to manage a key (role Cloud KMS Admin), run the following with gcloud:
gcloud beta kms cryptokeys add-iam-policy-binding \
CRYPTOKEY_NAME --location LOCATION --keyring KEYRING_NAME \
--member user:MY-USER@gmail.com \
--role roles/cloudkms.admin
To give a service account the ability to encrypt and decrypt with a key (role Cloud KMS CryptoKey Encrypter/Decrypter), run the following with gcloud:
gcloud beta kms cryptokeys add-iam-policy-binding \
CRYPTOKEY_NAME --location LOCATION --keyring KEYRING_NAME \
--member serviceAccount:MY-SERVICE_ACCOUNT@MY-PROJECT.iam.gserviceaccount.com \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter
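To verify that the bindings are in place, you can read back the key's IAM policy. This is a minimal sketch using the same placeholder names as above:
gcloud beta kms cryptokeys get-iam-policy \
CRYPTOKEY_NAME --location LOCATION --keyring KEYRING_NAME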

Related

Microsoft Cognitive Services Speech Container - API Key

For the username and API key that connect the container back to Azure, is there a way to store them in a local Azure Key Vault? How can they be kept from being exposed in the code for the Docker container?
If you mean retrieving secrets from Key Vault locally or inside the Docker container, you can implement it with either the REST API or the SDK.
For the REST API, you could refer to this tutorial, which uses Python to make the REST requests: Use Azure Key Vault with a Windows virtual machine in Python.
It describes how to assign an identity to the VM and grant that identity permissions on the vault.
For the SDK, here is a sample of using the Python SDK to access Key Vault: Azure Key Vault libraries for Python.
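As a minimal sketch of the same idea with the Azure CLI (the vault and secret names below are placeholder assumptions), the container could fetch the key at start-up instead of baking it into the image:
# Sign in with a service principal whose credentials are injected at runtime,
# then read the Speech API key out of Key Vault.
az login --service-principal -u $SP_APP_ID -p $SP_PASSWORD --tenant $TENANT_ID
az keyvault secret show --vault-name MY_VAULT --name SPEECH_API_KEY --query value -o tsv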

Where to keep the Initial Trust credentials of a Secrets Management tool?

For our product we have decided to implement a secrets management tool (AWS Secrets Manager) that will securely store and manage all our secrets, such as DB credentials, passwords, and API keys.
This way the secrets are not stored in code, in the database, or anywhere in the application. We have to provide the AWS credentials, an access key ID and a secret access key, to programmatically access the Secrets Manager APIs.
Now the biggest question that arises is: where do we keep this initial trust, the credentials used to authenticate to AWS Secrets Manager? This is a bootstrapping problem. Again, we have to maintain something outside of the secret store, in a configuration file or somewhere similar. I feel that if this is compromised, there is no real point in storing everything in a secrets management tool.
I read the AWS SDK developer guide and understand that there are some standard ways to store AWS credentials, such as storing them in environment variables, in a credentials file with different profiles, or by using IAM roles for Amazon EC2 instances.
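For reference, the first two options look roughly like this (all values below are placeholders):
# Option 1: environment variables picked up by the AWS SDK/CLI
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=EXAMPLESECRET
# Option 2: a named profile in ~/.aws/credentials
[secrets-reader]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = EXAMPLESECRET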
We don’t run/host our application in the Amazon cloud; we just want to use the AWS Secrets Manager service from the AWS cloud. Hence, configuring IAM roles might not be the solution for us.
Are there any best practices for (or a best place for) keeping the initial trust credentials?
If you're accessing secrets from an EC2 instance, an ECS Docker container, or a Lambda function, you can use IAM roles with a policy that allows access to Secrets Manager.
If an IAM role is not an option, you can use federated login to get temporary credentials (an IAM role) with a policy that allows access to Secrets Manager, as sketched below.
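A minimal sketch of the federated path, assuming the role ARN and secret name are placeholders:
# Exchange a federated identity for short-lived credentials by assuming a role
# whose policy allows secretsmanager:GetSecretValue.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/secrets-reader \
  --role-session-name bootstrap
# Export the AccessKeyId, SecretAccessKey, and SessionToken from the JSON output,
# then read the secret:
aws secretsmanager get-secret-value --secret-id prod/db-credentials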
As @Tomasz Breś said, you can use federation if you are already using an on-premises auth system like Active Directory or Kerberos.
If you do not have any type of credentials already on the box, you are left with two choices: store your creds in a file and use file system permissions to protect them, or use hardware like an HSM or TPM to encrypt or store your creds.
In any case, when you store creds on the box (even AD/Kerberos), you should ensure only the application owner has access to that box (in the case of a stand-alone app, not a shared CLI). You should also harden the box by turning off all unnecessary software and access methods.
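If you go with the file-based option, locking the file down to the application owner is straightforward. A minimal sketch, assuming a dedicated appuser account and a hypothetical path:
# Only the application owner can read the bootstrap credentials.
chown appuser:appuser /etc/myapp/aws-credentials
chmod 600 /etc/myapp/aws-credentials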

OpenShift secret token expiry

We would like to create a service user to manage the CI/CD workflow for the different teams. Secret tokens can be generated for the service account to perform API operations.
oc create sa sample
oc policy add-role-to-user developer system:serviceaccount:sampleproject:sample
oc describe sa sample
oc describe secret sample-token-5s5kl
The describe command above gives us the secret token, which we hand over to the different teams for their API operations. But the problem we are currently facing is that the secret token expires in 4 hrs or so. Is there a way to create never-expiring secret tokens?
If I am not wrong, they don't expire. Also, I quote from the OpenShift documentation: "The generated API token and registry credentials do not expire, but they can be revoked by deleting the secret. When the secret is deleted, a new one is automatically generated to take its place." Please refer to this page for more info.
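To extract the token so it can be handed to a team, you can decode it from the generated secret. A minimal sketch, using the secret name from the question:
# Print the decoded service account token stored in the secret.
oc get secret sample-token-5s5kl -n sampleproject -o jsonpath='{.data.token}' | base64 -d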

Create Google Compute Instance with a service account from another Google Project

I would like to know whether it is possible to attach a service account created in my-project-a to a Google Compute Engine instance in, say, my-project-b?
The following command:
gcloud beta compute instances create my-instance \
--service-account=my-service-account@my-project-a.iam.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--project=my-project-b
gives me the following error:
(gcloud.beta.compute.instances.create) Could not fetch resource:
- The user does not have access to service account 'my-service-account@my-project-a.iam.gserviceaccount.com'. User: 'me@mysite.com'. Ask a project owner to grant you the iam.serviceAccountUser role on the service account.
me@mysite.com is my account, and I'm the owner of the org.
Not sure whether this is related, but looking at the UI (in my-project-b) there is no option to add a service account from any other project. I was hoping to be able to add the account my-service-account@my-project-a.iam.gserviceaccount.com.
You could follow these steps to authenticate with a service account from my-project-a on an instance in my-project-b:
1. Create a service account in my-project-a with the proper role for Compute Engine.
2. Download the JSON key file.
3. Copy the new my-project-a service account's email address.
4. On my-project-b, add a team member using the email copied in the previous step.
5. Connect via SSH to your instance in my-project-b.
6. Copy the JSON file from step 2 onto your my-project-b instance.
7. Run the following command to activate the service account:
gcloud auth activate-service-account --key-file=YOUR_JSON_FILE
8. Verify by using the following command:
gcloud auth list
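Alternatively, the error message itself points at a fix: have a project owner grant your user the iam.serviceAccountUser role on the service account in my-project-a. A sketch using the placeholder addresses from the question:
gcloud iam service-accounts add-iam-policy-binding \
  my-service-account@my-project-a.iam.gserviceaccount.com \
  --member user:me@mysite.com \
  --role roles/iam.serviceAccountUser
Note that whether the instance can then attach a service account from another project may also depend on organization policy.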

Using service accounts on Compute Engine instances

I'm trying to do gcloud init on my fresh GCE instance using a service account that I've created in the Developers Console. In the Developers Console, I see a few service accounts under Permissions, which I can't generate private key files for; I also see a service account that I made under Service accounts which I can get private keys for.
When I do gcloud init on the GCE instance, under "Pick credentials to use", I only see the service accounts in the Permissions tab (for which I don't have private keys). I'd like to use the service account that I have private keys for.
I can log in with my personal account for now, but this isn't scalable. Any advice?
You can use the gcloud auth activate-service-account command to get credentials via the private key for a service account. For more information and an example, please visit this link.
Elaborating on @Kamaran's answer after further discussion.
The basic solution is to enable the service account on the GCE instance.
First use gcloud compute copy-files <private json key file> <instance name>:remote/path/to/key to copy the file to the remote instance. Then run the gcloud auth activate-service-account <service account address> --key-file remote/path/to/key command on the remote instance. The new service account will then be available in the gcloud init menu.
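Put together, the sequence might look like this (the instance and account names are placeholders):
# On your workstation: copy the key up, then SSH in.
gcloud compute copy-files key.json my-instance:~/key.json
gcloud compute ssh my-instance
# On the instance: activate the service account and confirm it appears.
gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com --key-file ~/key.json
gcloud auth list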