Compute Engine accessing Datastore gets Invalid Credentials (code: 401) - google-compute-engine

I am following the tutorial on
https://cloud.google.com/datastore/docs/getstarted/start_nodejs/
trying to use Datastore from my Compute Engine project.
Step 2 in the tutorial mentions that I do not need to create new service account credentials when running from Compute Engine.
I run the sample with:
node test.js abc-test-123
where abc-test-123 is my project ID, and that project has all Cloud API access enabled, including the Datastore API.
After uploading the code and executing the sample, I got the following error:
Adams: { 'rpc error': { [Error: Invalid Credentials] code: 401,
errors: [ [Object] ] } }
Update:
As a workaround, I changed the default sample code to use JWT credentials (with a generated .json key file), and things are working now.
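Roughly, that workaround was of the following shape; the key can be generated in the console or, with a current gcloud, along these lines (the service account name is a placeholder, and the exact wiring into the sample depends on the client library version):
# create and download a JSON key for a (placeholder) service account
gcloud iam service-accounts keys create datastore-key.json \
--iam-account=my-datastore-sa@abc-test-123.iam.gserviceaccount.com
# point the sample at the key and run it again
export GOOGLE_APPLICATION_CREDENTIALS=$PWD/datastore-key.json
node test.js abc-test-123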
Update 2:
This is the scope config I get when I run:
gcloud compute instances describe abc-test-123
And the result:
serviceAccounts:
scopes:
- https://www.googleapis.com/auth/cloud-platform
According to the doc:
You can set scopes only when you create a new instance, and cannot
change or expand the list of scopes for existing instances. For
simplicity, you can choose to enable full access to all Google Cloud
Platform APIs with the https://www.googleapis.com/auth/cloud-platform
scope.
I still welcome any answer about why the original code did not work in my case.
Thanks for reading.

This most likely means that when you created the instance, you didn't specify the right scopes (datastore and userinfo-email according to the tutorial). You can check that by executing the following command:
gcloud compute instances describe <instance>
Look for serviceAccounts/scopes in the output.
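If the output is long, narrowing it down to the service-account section is roughly (the zone flag may be needed depending on your configuration):
gcloud compute instances describe <instance> --zone <zone> --format="yaml(serviceAccounts)"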

There are two ways to create an instance with the right credentials:
gcloud compute instances create $INSTANCE_NAME --scopes datastore,userinfo-email
Using the web console: under Access & Settings, enable User Info & Datastore.

ERROR: (gcloud.beta.functions.deploy) ... message=[The caller does not have permission]

I am trying to deploy code from this repo:
https://github.com/anishkny/puppeteer-on-cloud-functions
in Google Cloud Build. My cloudbuild.yaml file contents are:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
I have given the following roles to my Cloud Build Service account (****@cloudbuild.gserviceaccount.com):
Cloud Build Service Account
Cloud Functions Developer
Yet, in my Cloud Build log I see the following error:
starting build "1f04522c-fe60-4a25-a4a8-d70e496e2821"
FETCHSOURCE
Fetching storage object: gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047
Copying gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047...
/ [0 files][ 0.0 B/ 835.0 B]
/ [1 files][ 835.0 B/ 835.0 B]
Operation completed over 1 objects/835.0 B.
tar: Substituting `.' for empty member name
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: (gcloud.beta.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[The caller does not have permission]
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1
What am I missing?
It would appear that the permissions changed when (perhaps) Cloud Functions went GA. Another customer raised this issue today and I recalled your question.
The Cloud Build robot (${NUM}@cloudbuild.gserviceaccount.com) additionally needs to be a serviceAccountUser of the ${PROJECT}@appspot.gserviceaccount.com account:
NB: While the Cloud Build robot's local part is the project number (${NUM}), the appspot robot's local part is the project ID (${PROJECT}).
Please try:
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT}@appspot.gserviceaccount.com \
--member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
--role=roles/iam.serviceAccountUser \
--project=${PROJECT}
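To confirm the binding landed, something along these lines (same placeholders) should then list the Cloud Build robot with roles/iam.serviceAccountUser:
gcloud iam service-accounts get-iam-policy ${PROJECT}@appspot.gserviceaccount.com --project=${PROJECT}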
Let me know!
I struggled with this too after reading quite a bit of documentation. A combination of the above answers got me on the right track. Basically, something like the following is needed:
PROJECT=[PROJECT-NAME]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT}@appspot.gserviceaccount.com \
--member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
--role=roles/iam.serviceAccountUser \
--project=${PROJECT}
gcloud iam service-accounts add-iam-policy-binding \
[INSERT_YOUR_IAM_OWNER_SERVICE_ACCOUNT_NAME]@${PROJECT}.iam.gserviceaccount.com \
--member="serviceAccount:service-${NUM}@gcf-admin-robot.iam.gserviceaccount.com" \
--role='roles/iam.serviceAccountUser'
Also, I added the "Cloud Functions Developer" role to my @cloudbuild.gserviceaccount.com account via the IAM Console.
According to the Cloud Build documentation, for Cloud Functions you have to grant the "Project Editor" role to your service account.
But the Cloud Functions documentation states that, as an alternative to the Project Editor role, you can use "the Cloud Functions Developer role [but you have to] ensure that you have granted the Service Account User role". Regarding service accounts, it indicates you should have "the CloudFunctions.ServiceAgent role on your project" and "permissions for trigger sources, such as Pub/Sub or the Cloud Storage bucket triggering your function".
Given those considerations, my understanding is that the documentation omits the full list of roles your service account would need and instead goes straight to recommending the Project Editor role.
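As a rough CLI sketch of the narrower combination (project ID and number below are placeholders), the Cloud Functions Developer grant would look something like this, combined with the serviceAccountUser binding shown in the answers above:
PROJECT=my-project-id
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud projects add-iam-policy-binding $PROJECT \
--member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
--role=roles/cloudfunctions.developer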
You have to update the Service Account permissions on the Cloud Build settings page.
Instructions are here: https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-cloud-run#fully-managed
You just have to set the status of the Cloud Run Admin role to ENABLED on that page.
Start your Cloud Build with auth:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['auth', 'activate-service-account', 'xoxox@xoxo-dev.iam.gserviceaccount.com', '--key-file=account.json', '--project=rabbito-dev']
and then simply add your Cloud Function deployment step:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
Please add the 'Cloud Functions Service Agent' role to your service account alongside 'Cloud Functions Developer'.

ERROR: Failed to configure trigger GCS Bucket

While issuing the command gcloud beta functions deploy, I get the following error message in the terminal: ERROR: (gcloud.beta.functions.deploy) OperationError: code=13, message=Failed to configure trigger GCS Bucket: 18_bucket
I even tried to make a simple function in the Web UI (https://console.cloud.google.com/) and I get the same error (screenshot attached):
Trigger type: Cloud Storage bucket
Event Type: Finalize/Create
I believe I'm doing everything correctly; I even tried the same steps in another Cloud project, and only in this project (a new project created a few hours back) do I get an error! Any idea what the issue could be?
NOTE:
Already tried disabling & re-enabling the Cloud Functions API
Both the GCS bucket & the Cloud Function are in the same project
Thanks a lot for any clue.
UPDATE: OK, created a fresh new project > enabled billing > enabled the Cloud Functions API > created a GCS bucket > created a new function (google.storage.object.finalize) > it keeps giving the same error: Deployment failure: Failed to configure trigger GCS Bucket: {{BUCKET NAME}}. HTTP triggers work fine though.
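For reference, the kind of deploy being attempted is roughly of this form (the function name and runtime below are placeholders; the bucket is the one from the error):
gcloud beta functions deploy my-function \
--runtime nodejs8 \
--trigger-resource 18_bucket \
--trigger-event google.storage.object.finalize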
The problem has been solved by Google and was not related to our code:
https://status.firebase.google.com/incident/Functions/18024#5700609697120256

Adding permissions to a project

I am trying to follow this tutorial https://tensorflow.github.io/serving/serving_inception
But I see this
$ gcloud container clusters create inception-serving-cluster --num-nodes 5
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/tensorflow-serving".
I did not see an option to add permissions to the project anywhere. How do I do this using the CLI or the UI?
EDIT:
I do have the project created.
EDIT:
Just saw that it works fine from Cloud Shell.
Update: Your project's name is tensorflow-serving-1360, so you should be running gcloud container clusters create inception-serving-cluster --num-nodes 5 --project=tensorflow-serving-1360.
The project tensorflow-serving is not owned by you. It is the example project name used in the linked tutorial, but you need to replace it with the name of your own project as described in the line at the beginning of Part 2:
Here we assume you have created and logged in a gcloud project named
tensorflow-serving
(Tested on 2019.04.07)
Firstly, check the list of auth accounts:
gcloud auth list
Next set the active account:
gcloud config set account <email_address_from_above_output>
Then, specify the parameters for the create cluster command:
gcloud container clusters create <cluster_name> --num-nodes=2 --project=<PROJECT_ID>
e.g.
gcloud container clusters create prod-myapp-cluster --num-nodes=2 --project=myapp-20394823094
Expected output:
kubeconfig entry generated for prod-myapp-cluster.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
prod-myapp-cluster asia-south1-a 1.11.7-gke.12 35.5xx.2xx.1xx n1-standard-1 1.11.7-gke.12 2 RUNNING
Get your project name, or create a project if you have not created one already, at console.cloud.google.com
Enable the Kubernetes Engine API in the console
Run this command in your command prompt:
gcloud container clusters create bd-serving-cluster --num-nodes 5 --project=tensorflow-serving-264611 \
--zone=us-central1-f
Replace 'bd' with the name of your serving cluster and 'tensorflow-serving-264611' with the project name you created in step 1; you can choose your preferred zone or use the default 'us-central1-f'.

Creating GCE Kube cluster v1.2 via API fails

I tried creating a new Kubernetes cluster via googleapis with OAuth authentication, but I am getting the error:
"HTTP Load Balancing requires the 'https://www.googleapis.com/auth/compute' scope.".
I learned that Google updated the Kubernetes version to 1.2 in the console the previous night (until then I was able to create a cluster using the same method on v1.0).
I tried creating one via the API Explorer using Google's OAuth, but it failed with the same error.
I think the auth scope has been updated, but I could not find the new scope in either the Google Cloud Platform Container Engine docs or the latest Kubernetes release docs. Can someone please help me identify the new auth scope?
That error message was due to an error on our part while rolling out support for Kubernetes 1.2 in Google Container Engine. We've fixed the issues, and you can now create a container cluster using the api explorer. Sorry for the trouble.
That error message is referring to the scopes provided in the NodeConfig of the CreateCluster request. In 1.2, the "compute" scope is required to run the HTTP Load Balancer addon:
"nodeConfig": {
"oauthScopes": [
"https://www.googleapis.com/auth/compute"
]
}
If you don't want to add the https://www.googleapis.com/auth/compute scope to your nodes, you can also disable HTTP Load Balancing by passing in an AddonsConfig that disables it:
"addonsConfig": {
"httpLoadBalancing": {
"disabled": true
}
}
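For completeness, when creating the cluster with gcloud rather than the raw API, the extra scope can be passed along these lines (the cluster name is a placeholder; note that an explicit --scopes list replaces the defaults, so include any other scopes you need):
gcloud container clusters create my-cluster \
--scopes=https://www.googleapis.com/auth/compute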

Google Compute Engine: how to delete access config with whitespace in name ("External NAT")?

I'm trying to delete the access config for one of my Google Compute Engine instances, and as described in some of the documentation, the access config for my instance is named "External NAT" rather than the default "external-nat". When I try to run:
gcloud compute instances delete-access-config my-instance-name --access-config-name="External NAT"
I get the following error:
ERROR: (gcloud.compute.instances.delete-access-config) unrecognized arguments: NAT
I'm assuming the error is due to the space in "External NAT". It seems like this should be a simple fix, but I can't figure it out. Any help would be much appreciated!
You added "=" when in fact it is not needed. It worked as follows :
$ gcloud compute instances delete-access-config test-instance --access-config-name "External NAT"
Output:
Updated [https://www.googleapis.com/compute/v1/projects/test-project/zones/europe-west1-c/instances/test-instance].
Alternatively, the following also works, keeping the "=" but quoting the value and specifying the network interface and zone explicitly:
gcloud compute instances delete-access-config test-instance --access-config-name="External NAT" --network-interface="nic0" --zone="us-east1-b"
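If you are unsure of the exact access-config name on the interface, a describe along these lines (the zone here matches the example above) will show it:
gcloud compute instances describe test-instance --zone=us-east1-b --format="yaml(networkInterfaces)"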