I have a Linux VM on Google Compute Engine that I am accessing via SSH. It works just fine, but when I go to the Cloud Console, it asks me if I want to create a new VM as if I have none. I know I'm on the right account because it shows my billing balance has gone down. Where did my server go?
It is weird, but it is important to make a distinction that is not obvious when you start using Google Cloud Platform: the credentials you use to access the platform (your email or a service account), the project, which is the entity that every resource must be attached to, and the billing account, which is the payment profile that can have several projects associated with it.
In that case you could be in a different project that is associated with the same billing account.
To check which project your machine is in, run the following in the shell:
gcloud compute instances list
Here you will see the instances in your current project. If nothing appears, reset the gcloud configuration with:
gcloud init
and switch to the correct project.
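If you would rather not reset the whole configuration, you can also point gcloud at another project directly; a minimal sketch, assuming a hypothetical project ID:
gcloud projects list
gcloud config set project my-other-project
gcloud compute instances list
The first command shows every project your account can see, and once the right project is active the last command should list the missing VM.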
Once you have a VPN tunnel up and running, there does not appear to be a way to view all the details of the tunnel from either the Google Cloud Console or the gcloud command line. Specifically, the route policies that were configured when the tunnel was initially set up are missing from the describe output.
Is there a method to see this information?
This is a known behavior. The Developers Console doesn't set the remoteTrafficSelector when creating the tunnels through it.
The Developers Console creates the necessary routes and shows the "Remote ranges" based on them.
The workaround is to create the VPN tunnels using the Compute API or Cloud SDK with the following command:
gcloud compute vpn-tunnels create NAME \
    --region=REGION \
    --peer-address=PEER_ADDRESS \
    --shared-secret=SHARED_SECRET \
    --target-vpn-gateway=TARGET_VPN_GATEWAY \
    --local-traffic-selector=CIDR \
    --remote-traffic-selector=CIDR
You can click on the star icon in the Public Issue Tracker to get updates when there is any progress on it.
Note: This doesn't have any impact on the VPN tunnel functionality.
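For example, with placeholder names, addresses and CIDR ranges (adjust these to your own environment), the tunnel could be created and then inspected like this:
gcloud compute vpn-tunnels create my-tunnel \
    --region=us-central1 \
    --peer-address=203.0.113.10 \
    --shared-secret=MY_SHARED_SECRET \
    --target-vpn-gateway=my-gateway \
    --local-traffic-selector=10.128.0.0/20 \
    --remote-traffic-selector=192.168.0.0/24
gcloud compute vpn-tunnels describe my-tunnel --region=us-central1
The describe output should then include both localTrafficSelector and remoteTrafficSelector.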
I need to take snapshots of all my servers from a script, across all projects in GCP.
Project count: 10
Servers per project: 5
The script is written on a server: Automation-server
The script contains:
gcloud compute --project ${PROJECT_ID} disks snapshot ${DEVICE_NAME} --snapshot-names gcp-${DEVICE_NAME}-${DATE} --zone ${INSTANCE_ZONE}
As of now I have configured my e-mail ID on the Server X with gcloud auth.
(My e-mail ID has access to all of my projects, so I am able to take snapshots of all the servers.)
So I am able to do the same via scripting.
I do not wish to do this via user authentication (i.e. with my e-mail ID).
Is there any possibility of doing the above via an application or an API key, etc.?
That is, by granting access to all the projects to an application or API key, and using that to take the snapshots from the script.
This will be used in the following case:
If a user X has access to 5 projects, and a user Y has access to another set of 5 projects, I need to take snapshots of all 10 projects using the script.
In that situation the gcloud auth would have to be done via an application or API key, etc.
Is this possible, or is there any other way to handle the above case?
This is possible:
Create a service account in your cloud project.
Go to each of your 10 projects and grant the service account either the "Editor", "compute.instanceAdmin" or "compute.storageAdmin" IAM role.
Use gcloud auth activate-service-account in your script to have the script run as the service account.
You could also use multiple service accounts for different projects, and switch between them.
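As a minimal sketch of how the pieces fit together (the key file path and project IDs are hypothetical, and every disk in each project gets snapshotted):
# authenticate as the service account instead of a user account
gcloud auth activate-service-account --key-file=/path/to/key.json
DATE=$(date +%Y%m%d)
for PROJECT_ID in project-1 project-2; do
  # list each disk in the project together with its zone, then snapshot it
  gcloud compute disks list --project "${PROJECT_ID}" \
      --format="csv[no-heading](name,zone.basename())" |
  while IFS=, read -r DISK ZONE; do
    gcloud compute disks snapshot "${DISK}" \
        --project "${PROJECT_ID}" \
        --zone "${ZONE}" \
        --snapshot-names "gcp-${DISK}-${DATE}"
  done
done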
From my workstation I can fire templated Dataflow jobs with the gcloud dataflow jobs command. The required authorization to insert a new job comes from my workstation, where I'm logged in.
On the Compute Engine instance I rely on its service account, the one with (number)-compute#. Within the IAM section I enabled Dataflow/Dataflow Admin, Dataflow/Dataflow Developer and Dataflow/Dataflow Worker for this service account, to be safe.
I even added Cloud Dataflow Service Agent when I came across that one.
Then I try to start a Dataflow job from the command line, but I get an error about insufficient authentication scopes: ERROR: (gcloud.dataflow.jobs.run) PERMISSION_DENIED: Request had insufficient authentication scopes.
If I do a gcloud auth login with my personal account, of course, it works.
Somehow I'm missing the proper permissions to set on the service account in use.
Is there a guideline I missed? Can somebody please point me into the right direction?
The error message indicates that the instance does not have its access scopes set up properly. To launch a job from a GCE VM, the VM must have the compute.read-only, compute, or cloud-platform scope for the project.
The way to verify this is to run the command "gcloud compute instances describe [INSTANCE] --zone=[ZONE]" and look for "scopes".
This document and this existing question may provide useful guidelines for you.
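As a rough illustration (the instance name and zone below are placeholders), you can inspect the current scopes and, with the instance stopped, change them:
gcloud compute instances describe my-instance --zone=us-central1-a --format="yaml(serviceAccounts)"
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-service-account my-instance --zone=us-central1-a --scopes=cloud-platform
gcloud compute instances start my-instance --zone=us-central1-a
If the instance should keep a non-default service account, pass --service-account=... to the set-service-account command as well.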
I'm trying to set up read/write access to a Cloud Storage bucket from a GCE instance, using a service account, but I can't get the permissions to work. I have done the following:
Created service account, let's say 'my-sa'
Created a bucket, let's say 'my-bucket'
In the IAM console for my project, assigned the role 'Cloud Storage admin' to the service account (roughly the command sketched just after this list)
Created a new GCE instance via the console, assigned to service account 'my-sa'. Access scope is then automatically set to cloud-platform
Connect to instance using gcloud compute ssh as my user (project owner)
Run gsutil ls gs://my-bucket
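(For reference, the role assignment in step 3 is roughly equivalent to the following command; the project and service account names are the placeholders used above.)
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.admin"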
Expected behaviour: get list of items in bucket
Observed behaviour:
gsutil takes about 5 seconds to think, then gives:
AccessDeniedException: 403 my-sa@my-project.iam.gserviceaccount.com does not have storage.objects.list access to bucket my-bucket.
Things I've tried:
gcloud auth list on the instance does show the service account, and shows it as being active
I've added more permissions to the service account (up to project owner), doesn't make a difference
I also can't use other permissions from the instance. When I give Compute Engine Admin role to the service account, I can't run gcloud compute instances list from the instance
I've removed the .gsutil dir to make sure the cache is cleared
With the default Compute Engine service account, I can list the buckets, but not write (as expected). When I add the Cloud Storage read/write access scope from the console, I can also write
I really don't have a clue how to debug this anymore, so any help would be much appreciated.
I am being billed for an unused IP address. I can't find the item that's charging me.
I've gone through the project using console.cloud.google.com looking in Compute Engine and Networking settings, but I can't find any IP addresses.
I'm only using the project for Cloud Storage of 1 text file, and a git repository. I run these commands on the terminal, and I am getting 0 items.
$ gcloud --project=PROJECTNAME compute addresses list
The above command listed 0 items.
$ gcloud --project=PROJECTNAME compute forwarding-rules list
The above command listed 0 items.
Is there a way of telling where this static IP address is, or how I can disable it? I can't find it anywhere. I'd rather not delete the entire project because some of the services are being used by my production app.
I know that it's a global IP address because I can see it listed in my Compute Engine quota. To be able to use a command line option to delete the address, I think that I need the name of the address, but I can't find that listed anywhere.
I'm thinking this could be related to me having one of these two things enabled for the project in the past:
I was running an App Engine project, but have since terminated it.
For the App Engine project, I registered a custom domain to point to it.
I had used App Engine Flexible (aef). The unused IP was from my stopped version. This blocks the release of the static IP, so the advice was to first delete this version before trying to release the IP address again.
You cannot delete your previous version if it's the only one you have, as you need to have at least one version for the default module.
To fix this, you could deploy a new version, say a Flexible VM (deployed to another region) or a Standard VM. Then, as a workaround, if you do not have any app to replace it right now, you can deploy an empty app instead. You would need to create an app.yaml that serves only static files and does not have any script to execute, so you would not be charged for any instance.
For a more detailed guide to this workaround, you may check this documentation [1].
[1] http://stackoverflow.com/questions/37679552/cannot-delete-version
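As a rough sketch of that workaround (the runtime, file names and version ID below are illustrative assumptions, not a prescribed setup), a static-only app.yaml could look like this, with an empty index.html next to it:
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /
  static_files: index.html
  upload: index.html
Deploy it, then remove the stopped Flexible version that is holding the IP:
gcloud app deploy app.yaml
gcloud app versions list
gcloud app versions delete OLD_VERSION_ID
Here OLD_VERSION_ID is a placeholder for whatever version ID the list command shows for the stopped Flexible deployment.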