gcloud compute instances create command fails when creating an instance - google-compute-engine

Creating an instance using gcloud does not seem to work:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-14-10 --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Unable to fetch a list of zones. Specifying [--zone] may fix this issue:
- Project marked for deletion.
Adding the zone name fails differently:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-14-10 --zone us-central1-a --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Failed to find image for alias [ubuntu-14-10] in public image project [ubuntu-os-cloud].
- Project marked for deletion.
Providing a different image name fails too:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-1410-utopic --zone us-central1-a --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Could not fetch image resource:
- Project marked for deletion.
What is the exact command to create an instance using gcloud?

Did you authenticate before and set the default project?
gcloud auth login
gcloud config set project PROJECT
The basic setup of gcloud is covered in the Google Cloud documentation.
Or did you delete your project?
Project marked for deletion.
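If it was deleted recently, you can check the project's state and try to restore it. A minimal sketch, assuming your project ID is my-project:
gcloud projects describe my-project
gcloud projects undelete my-project
A lifecycleState of DELETE_REQUESTED in the describe output confirms the project is pending deletion.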

You have several things going on here, one of which is not reading the docs:
https://cloud.google.com/compute/docs/gcloud-compute/#creating
Your syntax should be:
gcloud compute instances create minecraftinstance \
--image ubuntu-14-10 \
--zone [SOME-ZONE-ID] \
--machine-type [SOME-MACHINE-TYPE]
Where SOME-ZONE-ID is a geographic zone to create the instance in, found by running:
gcloud compute zones list
SOME-MACHINE-TYPE is the machine type to create. Valid types are found by running:
gcloud compute machine-types list
But specifically, you seem to be creating an instance in a Project that has been deleted:
- Project marked for deletion.
Also, you need to authenticate and set a default project:
gcloud auth login
and
gcloud config set project [ID]
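Putting it all together, a complete invocation might look like this (a sketch: the zone and machine type are examples taken from the list commands above, and the ubuntu-14-10 alias must resolve in your SDK version):
gcloud compute instances create minecraft-instance \
--image ubuntu-14-10 \
--zone us-central1-a \
--machine-type n1-standard-1 \
--tags minecraft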

Billable resources cannot be created in projects that have been flagged for deletion. Before a project can be deleted, billing must be disabled on it, and with billing disabled, instances cannot be created. As for the error messages, gcloud does not seem to handle this situation correctly and returns misleading errors instead.
The only compulsory arguments to gcloud compute instances create are the name, the zone, and the project. A valid working project must be set either by passing the --project PROJECT flag to gcloud commands or by running gcloud config set project PROJECT beforehand. Similarly, to choose the zone you can either use the --zone ZONE flag or run gcloud config set compute/zone ZONE first.
Undeleting your current project and re-enabling billing on it will work too. To figure out which project and zone the gcloud command runs in by default, use this:
gcloud config list
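The output looks roughly like this (the values here are placeholders, not recommendations):
[compute]
zone = us-central1-a
[core]
account = you@example.com
project = my-project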

In my case I had to specify --image-project, which got me going:
gcloud compute instances create core --image ubuntu-1604-xenial-v20180126 --machine-type f1-micro --zone us-east4-a --image-project ubuntu-os-cloud
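If you would rather not pin a dated image name, the same idea works with --image-family, which resolves to the newest image in the family. A sketch, assuming Ubuntu 16.04's public image family:
gcloud compute instances create core \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--machine-type f1-micro \
--zone us-east4-a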

In my case, I was creating a managed instance group from an instance template:
gcloud compute instance-groups managed create nginx-group \
--base-instance-name nginx \
--size 2 \
--template nginx-template \
--target-pool nginx-pool \
--zone us-central1-c
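For completeness, the nginx-pool and nginx-template referenced above would have been created beforehand, along these lines (a sketch: the machine type and image family/project are assumptions, not taken from the question):
gcloud compute target-pools create nginx-pool --region us-central1
gcloud compute instance-templates create nginx-template \
--machine-type f1-micro \
--image-family debian-11 \
--image-project debian-cloud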

You have to specify both --image-project and --image-family.
See https://cloud.google.com/compute/docs/images#os-compute-support.
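For example, a minimal sketch (the instance name, zone, and Debian image family are illustrative assumptions):
gcloud compute instances create my-vm \
--zone us-central1-a \
--image-family debian-11 \
--image-project debian-cloud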

Related

How to create google compute engine template with a custom disk and an external ip

How do I create a google compute engine template, named my-template, with a custom disk named my-disk, an external ip, that's preemptible, and with the tags necessary to open the http server ports?
Can I use the a managed template group to automatically restart these preemptible instances?
Something like the following command will work. Note that I set it up to use a highmem machine with 8 cores.
gcloud compute instance-templates create my-template \
--disk=boot=yes,auto-delete=no,name=my-disk \
--machine-type=n1-highmem-8 \
--preemptible \
--network-interface=address=35.238.XXX.YYY \
--tags=http-server,https-server
As of Nov 2018, the following link is where you can set up your external IP:
https://console.cloud.google.com/networking/addresses/list
Yes, you'll be able to use a managed instance group to automatically restart the preemptible instance once compute resources are available.
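For the second question, a managed instance group built from the template is what restarts the preemptible instance; roughly (the group name and zone are assumptions):
gcloud compute instance-groups managed create my-group \
--template my-template \
--size 1 \
--zone us-central1-b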

gcloud: how to get ip addresses of group of managed instances

My problem is to create 5k instances and retrieve their public IP addresses.
Specifically, for zone us-west1-a, I can create a group of 50 instances with the following:
gcloud compute instance-groups managed create test --base-instance-name morning --size 50 --template benchmark-template-micro --zone us-west1-a
Questions:
How do I specify the startup-script to run on each created instance? I can't find it here.
How do I get the public IP addresses of those created instances?
The startup-script can be assigned to the instance template in use; see here.
You can obtain information about the group with gcloud compute instance-groups managed describe.
However, there are no public IP addresses unless you assign external IP addresses to the instances.
As mentioned by Martin, the startup-script is configured in the instance template.
Unfortunately, there is no API that lists the IP addresses of the instances in the group. There are, however, APIs (and gcloud commands) to get the list of instances and the IP addresses of individual instances. Here is an example that fetches this information from the command line:
gcloud compute instance-groups list-instances $INSTANCE_GROUP --uri \
| xargs -I '{}' gcloud compute instances describe '{}' \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'
To speed this up, you may want to use the -P flag of xargs to parallelize the instance describe requests.
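For example, the same pipeline with up to ten describe calls in flight (untested sketch):
gcloud compute instance-groups list-instances $INSTANCE_GROUP --uri \
| xargs -P 10 -I '{}' gcloud compute instances describe '{}' \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'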
Since all instances in the group share the same name prefix, you can also just list instances filtered by that prefix. Note that this may pull in other instances that happen to use the same prefix even though they are not part of the instance group:
gcloud compute instances list --filter="name ~ ^${PREFIX}" \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'

GCE Service Account with Compute Instance Admin permissions

I have set up a compute instance to run cronjobs on Google Compute Engine, using a service account with the following roles:
Custom Compute Image User + Deletion rights
Compute Admin
Compute Instance Admin (beta)
Kubernetes Engine Developer
Logs Writer
Logs Viewer
Pub/Sub Editor
Source Repository Reader
Storage Admin
Unfortunately, when I ssh into this cronjob runner instance and then run:
sudo gcloud compute --project {REDACTED} instances create e-latest \
--zone {REDACTED} --machine-type n1-highmem-8 --subnet default \
--maintenance-policy TERMINATE \
--scopes https://www.googleapis.com/auth/cloud-platform \
--boot-disk-size 200 \
--boot-disk-type pd-standard --boot-disk-device-name e-latest \
--image {REDACTED} --image-project {REDACTED} \
--service-account NAME_OF_SERVICE_ACCOUNT \
--accelerator type=nvidia-tesla-p100,count=1 --min-cpu-platform Automatic
I get the following error:
The user does not have access to service account {NAME_OF_SERVICE_ACCOUNT}. User: {NAME_OF_SERVICE_ACCOUNT} . Ask a project owner to grant you the iam.serviceAccountUser role on the service account.
Is there some other privilege besides compute instance admin that I need to be able to create instances with my instance?
Further notes: (1) when I try not specifying --service-account, the error is the same except that the service account my user doesn't have access to is the default '51958873628-compute@developer.gserviceaccount.com'.
(2) adding/removing sudo doesn't change anything
Creating an instance that uses a service account requires that you have the compute.instances.setServiceAccount permission on that service account. To make this work, grant the iam.serviceAccountUser role to the account doing the creating, either on the entire project or on the specific service account you want your instances to run as.
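From the command line, granting the role on one specific service account looks roughly like this (a sketch: both email addresses are placeholders for your target service account and the identity doing the creating):
gcloud iam service-accounts add-iam-policy-binding \
NAME_OF_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com \
--member='serviceAccount:CREATING_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com' \
--role='roles/iam.serviceAccountUser'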
Find out who you are first
if you are using Web UI: what email address did you use to login?
if you are using local gcloud or terraform: find the json file that contains your credentials for gcloud (often named similarly to myproject*.json) and see if it contains the email: grep client_email myproject*.json
GCP IAM change
Go to https://console.cloud.google.com
Go to IAM
Find your email address
Member -> Edit -> Add Another Role -> type in the role name Service Account User -> Add
(You can narrow it down with a Condition, but let's keep it simple for now.)
Make sure that NAME_OF_SERVICE_ACCOUNT is a service account from the current project.
If you change the project ID but don't change NAME_OF_SERVICE_ACCOUNT, you will encounter this error.
This can be checked on Google Console -> IAM & Admin -> IAM.
Then look for the service account named ....-compute@developer.gserviceaccount.com and check whether the numbers at the beginning are correct. Each project has different numbers in this name.
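You can also cross-check the numbers by listing the service accounts that actually exist in the current project:
gcloud iam service-accounts list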

Cannot create dataproc cluster due to SSD label error

I've been creating dataproc clusters successfully over the past couple of weeks using the following gcloud command:
gcloud dataproc --region us-east1 clusters create test1 \
--subnet default --zone us-east1-c \
--master-machine-type n1-standard-4 --master-boot-disk-size 250 \
--num-workers 10 --worker-machine-type n1-standard-4 \
--worker-boot-disk-size 200 --num-worker-local-ssds 1 \
--image-version 1.2 \
--scopes 'https://www.googleapis.com/auth/cloud-platform' \
--project MyProject \
--initialization-actions gs://MyBucket/MyScript.sh
But today I'm getting the following error when I try to create dataproc cluster from either gcloud cli or the GCP web console:
ERROR: (gcloud.dataproc.clusters.create) Operation
[projects/MyProject/regions/us-east1/operations/SOMELONGIDHERE] failed:
Invalid value for field 'resource.disks[1].initializeParams.labels': ''.
Cannot specify initializeParams.labels for local SSD..
I tried changing the cluster name and the zone (not region), without any success.
Thanks in advance
There was an issue on Google's end that was corrected.
It should be working now.

Enable autoscaling on GKE cluster creation

I'm trying to create an autoscaled container cluster on GKE.
When I use the --enable-autoscaling option (as the documentation indicates here: https://cloud.google.com/container-engine/docs/clusters/operations#create_a_cluster_with_autoscaling):
$ gcloud container clusters create mycluster --zone $GOOGLE_ZONE --num-nodes=3 --enable-autoscaling --min-nodes=2 --max-nodes=5
but the MIG (Managed Instance Group) is not displayed as 'autoscaled', as shown by both the web interface and the result of the following command:
$ gcloud compute instance-groups managed list
NAME SIZE TARGET_SIZE AUTOSCALED
gke-mycluster... 3 3 no
Why?
Then I tried the other way indicated in the Kubernetes docs (http://kubernetes.io/docs/admin/cluster-management/#cluster-autoscaling) but got an error, apparently caused by the '=true':
$ gcloud container clusters create mytestcluster --zone=$GOOGLE_ZONE --enable-autoscaling=true --min-nodes=2 --max-nodes=5 --num-nodes=3
usage: gcloud container clusters update NAME [optional flags]
ERROR: (gcloud.container.clusters.update) argument --enable-autoscaling: ignored explicit argument 'true'
Is the doc wrong on this?
Here is my gcloud version results :
$ gcloud version
Google Cloud SDK 120.0.0
beta 2016.01.12
bq 2.0.24
bq-nix 2.0.24
core 2016.07.29
core-nix 2016.03.28
gcloud
gsutil 4.20
gsutil-nix 4.18
kubectl
kubectl-linux-x86_64 1.3.3
One last detail: the autoscaler seems 'on' in the description of the cluster:
$ gcloud container clusters describe mycluster | grep auto -A 3
- autoscaling:
enabled: true
maxNodeCount: 5
minNodeCount: 2
Any idea what might explain this behaviour?
Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes. The code is in the autoscaler repo if you want more info.
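If you want to confirm the cluster autoscaler is active, one option (depending on your cluster version, and assuming the status ConfigMap the open-source autoscaler publishes by default) is:
kubectl -n kube-system get configmap cluster-autoscaler-status -o yaml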
I've also sent out a PR to fix the invalid flag usage in the autoscaling docs. Thanks for catching that!