Run ssh commands across all gce instances - google-compute-engine

I have a bunch of GCE instances and I want to run the same shell command on all of them. Is it possible to do something like gcloud compute ssh --command="ls -al" my-instance1 my-instance2 my-instance3?

You can use gcloud compute instances list --format='value[separator=","](name,zone)' to get a list like:
my-instance1,my-zone1
my-instance2,my-zone2
my-instance3,my-zone3
Then you can use bash Substring Removal to extract the parts before and after the comma.
var="before,after"
before="${var%,*}"
after="${var#*,}"
Put it all in a loop and add trailing '&' to run things in the background:
for instance in $(gcloud compute instances list --format='value[separator=","](name,zone)'); do
  name="${instance%,*}"
  zone="${instance#*,}"
  gcloud compute ssh "$name" --zone="$zone" --command="ls -al" &
done
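If you want the script to exit only after every remote command has finished, you can follow the loop with wait, a Bash builtin that blocks until all background jobs complete (a minimal sketch reusing the same loop):
for instance in $(gcloud compute instances list --format='value[separator=","](name,zone)'); do
  gcloud compute ssh "${instance%,*}" --zone="${instance#*,}" --command="ls -al" &
done
wait   # returns once every background gcloud ssh invocation has completed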

To add to Mike's answer, here is how to do the same without Substring Removal:
gcloud compute instances list --format='value(name,zone)' | while read -r i z; do
  gcloud compute ssh "$i" --zone="$z" --command="ls -lah"
done

Related

How to deploy multiple functions using gcloud command line?

I want to deploy multiple cloud functions. Here is my index.js:
const { batchMultipleMessage } = require('./gcf-1');
const { batchMultipleMessage2 } = require('./gcf-2');
module.exports = {
batchMultipleMessage,
batchMultipleMessage2
};
How can I use gcloud beta functions deploy xxx to deploy these two functions at once?
Option 1:
For now, I write a deploy.sh to deploy these two cloud functions at one time.
TOPIC=batch-multiple-messages
FUNCTION_NAME_1=batchMultipleMessage
FUNCTION_NAME_2=batchMultipleMessage2
echo "start to deploy cloud functions\n"
gcloud beta functions deploy ${FUNCTION_NAME_1} --trigger-resource ${TOPIC} --trigger-event google.pubsub.topic.publish
gcloud beta functions deploy ${FUNCTION_NAME_2} --trigger-resource ${TOPIC} --trigger-event google.pubsub.topic.publish
It works, but it would be better if the gcloud command line supported deploying multiple cloud functions at once.
Option 2:
https://serverless.com/
If anyone is looking for a better/cleaner/parallel solution, this is what I do:
# deploy.sh
# store deployment command into a string with character % where function name should be
deploy="gcloud functions deploy % --trigger-http"
# find all functions in index.js (looking at exports.<function_name>) using sed
# then pipe the function names to xargs
# then instruct that % should be replaced by each function name
# then open 20 processes where each one runs one deployment command
sed -n 's/exports\.\([a-zA-Z0-9\-_#]*\).*/\1/p' index.js | xargs -I % -P 20 sh -c "$deploy;"
You can also change the number of parallel processes via the -P flag; I chose 20 arbitrarily.
This was super easy and saves a lot of time. Hopefully it will help someone!

gcloud: how to get ip addresses of group of managed instances

My problem is to create 5k instances and retrieve their public IP addresses.
Specifically for zone us-west1-a I can create a group of 50 instances by the following:
gcloud compute instance-groups managed create test --base-instance-name morning --size 50 --template benchmark-template-micro --zone us-west1-a
Questions:
How do I specify the startup script to run on each created instance? I can't find the option here.
How to get the public IP addresses of those created instances?
The startup-script can be assigned to the instance template used by the group; see here and the sketch below.
You can obtain information about the group with gcloud compute instance-groups managed describe.
However, there are no public IP addresses unless you assign external IP addresses to the instances.
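A minimal sketch of setting a startup script on the template (illustrative only; startup.sh is a hypothetical local file, and the template name matches the one used in the question):
gcloud compute instance-templates create benchmark-template-micro \
--machine-type f1-micro \
--metadata-from-file startup-script=startup.sh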
As mentioned by Martin, the startup-script is configured in the instance template.
Unfortunately, there is no API that lists the ip addresses of the instances in the group. There are however APIs (and gcloud commands) to get the list of instances and the ip addresses of instances. Here is an example to fetch this information from the command line:
gcloud compute instance-groups list-instances $INSTANCE_GROUP --uri \
| xargs -I '{}' gcloud compute instances describe '{}' \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'
To speed this up, you may want to use the -P flag of xargs to parallelize the describe requests, as sketched below.
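For example, a variation of the command above that runs up to 10 describe calls in parallel (the value passed to -P is arbitrary):
gcloud compute instance-groups list-instances $INSTANCE_GROUP --uri \
| xargs -P 10 -I '{}' gcloud compute instances describe '{}' \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'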
Since all instances in the group share the same prefix, you can also just do a list search by prefix. This may, however, pull in other instances that use the same prefix even if they are not part of the instance group:
gcloud compute instances list --filter="name ~ ${PREFIX}" \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'

Google Compute Engine: "attach-disk" command doesn't mount disk on the machine

I want to attach a disk to an instance on Google Compute Engine using the commands below.
gcloud compute instances create pg-disk-formatter --image ubuntu-1604-lts --custom-cpu 1 --custom-memory 1
gcloud compute disks create pg-data-disk --size 50GB
gcloud compute instances attach-disk pg-disk-formatter --disk pg-data-disk
However, even after I log into the machine and look in /dev/disk/by-id/, the disk doesn't show up in the listing.
mkouhei0910@pg-data-disk:~$ cd /dev/disk/by-id/
google-persistent-disk-0 scsi-0Google_PersistentDisk_persistent-disk-0
google-persistent-disk-0-part1 scsi-0Google_PersistentDisk_persistent-disk-0-part1
google-pg-data-disk2 scsi-0Google_PersistentDisk_pg-data-disk2
I noticed that it shows up after I attach a new disk from the Google Cloud Platform Console, but how can I achieve this purely with the gcloud command line?
Your first command is not correct. It should be:
gcloud compute instances create pg-disk-formatter --image-project ubuntu-os-cloud --image-family ubuntu-1604-lts --custom-cpu 1 --custom-memory 1
The second and third commands are good. They will create a disk and attach it to the VM instance. The additional disk is listed in the output of the ls command that you provided:
google-pg-data-disk2
If you want the guest operating system to see a different name for the attached disk, you can use the --device-name flag with the command.
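For example, a sketch of attaching the same disk with an explicit device name (the name pg-data is purely illustrative):
gcloud compute instances attach-disk pg-disk-formatter --disk pg-data-disk --device-name pg-data
The disk should then show up as google-pg-data under /dev/disk/by-id/.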

Enable autoscaling on GKE cluster creation

I'm trying to create an autoscaled container cluster on GKE.
When I use the "--enable-autoscaling" option (as the documentation indicates here: https://cloud.google.com/container-engine/docs/clusters/operations#create_a_cluster_with_autoscaling):
$ gcloud container clusters create mycluster --zone $GOOGLE_ZONE --num-nodes=3 --enable-autoscaling --min-nodes=2 --max-nodes=5
but the MIG (Managed Instance Group) is not displayed as 'autoscaled', as shown by both the web interface and the result of the following command:
$ gcloud compute instance-groups managed list
NAME SIZE TARGET_SIZE AUTOSCALED
gke-mycluster... 3 3 no
Why?
Then I tried the other way indicated in the kubernetes docs (http://kubernetes.io/docs/admin/cluster-management/#cluster-autoscaling), but got an error apparently caused by the '=true':
$ gcloud container clusters create mytestcluster --zone=$GOOGLE_ZONE --enable-autoscaling=true --min-nodes=2 --max-nodes=5 --num-nodes=3
usage: gcloud container clusters update NAME [optional flags]
ERROR: (gcloud.container.clusters.update) argument --enable-autoscaling: ignored explicit argument 'true'
Is the doc wrong on this?
Here is my gcloud version results :
$ gcloud version
Google Cloud SDK 120.0.0
beta 2016.01.12
bq 2.0.24
bq-nix 2.0.24
core 2016.07.29
core-nix 2016.03.28
gcloud
gsutil 4.20
gsutil-nix 4.18
kubectl
kubectl-linux-x86_64 1.3.3
One last detail: the autoscaler seems to be 'on' in the description of the cluster:
$ gcloud container clusters describe mycluster | grep auto -A 3
- autoscaling:
enabled: true
maxNodeCount: 5
minNodeCount: 2
Any idea what explains this behaviour?
Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes. The code is in the autoscaler repo if you want more info.
I've also sent out a PR to fix the invalid flag usage in the autoscaling docs. Thanks for catching that!

gcloud compute instances create command fails when creating an instance

Creating an instance using gcloud does not seem to work:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-14-10 --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Unable to fetch a list of zones. Specifying [--zone] may fix this issue:
- Project marked for deletion.
Adding the zone name fails differently:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-14-10 --zone us-central1-a --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Failed to find image for alias [ubuntu-14-10] in public image project [ubuntu-os-cloud].
- Project marked for deletion.
Providing a different image name fails too:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-1410-utopic --zone us-central1-a --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Could not fetch image resource:
- Project marked for deletion.
What is the exact command to create an instance using gcloud?
Did you authenticate before and set the default project?
gcloud auth login
gcloud config set project PROJECT
The base setup of gcloud is in the Google Cloud documentation.
Or did you delete your project?
Project marked for deletion.
You have several things going on here. First, read the docs:
https://cloud.google.com/compute/docs/gcloud-compute/#creating
Your syntax should be:
gcloud compute instances create minecraftinstance \
--image ubuntu-14-10 \
--zone [SOME-ZONE-ID] \
--machine-type [SOME-MACHINE-TYPE]
Where SOME-ZONE-ID is a geographic zone to create the instance in, found by running:
gcloud compute zones list
SOME-MACHINE-TYPE is the machine type to create. Valid types are found by running:
gcloud compute machine-types list
But specifically, you seem to be creating an instance in a Project that has been deleted:
- Project marked for deletion.
Also, you need to authenticate and set a default project:
gcloud auth login
and
gcloud config set project [ID]
Billable resources cannot be created for projects that have been flagged for deletion. For a project to be deletable, billing must be disabled first, and so instances cannot be created. As for the error messages, it seems the gcloud command is not handling this situation correctly and is replying with bogus error codes instead.
The only compulsory arguments to gcloud compute instances create are the name, the zone, and the project. A valid, working project must be set either by passing the --project PROJECT flag to gcloud commands, or by running gcloud config set project PROJECT beforehand. Similarly, to choose the zone you can either use the --zone ZONE flag or run gcloud config set compute/zone ZONE beforehand.
Enabling billing on your current project and undeleting it will work too. To figure out which project and zone the gcloud command is using by default, run:
gcloud config list
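For example, a sketch of setting those defaults explicitly (the project ID and zone are placeholders):
gcloud config set project my-project-id
gcloud config set compute/zone us-central1-a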
In my case I had to specify --image-project, which got me going:
gcloud compute instances create core --image ubuntu-1604-xenial-v20180126 --machine-type f1-micro --zone us-east4-a --image-project ubuntu-os-cloud
In my case, I created a managed instance group using an instance template:
gcloud compute instance-groups managed create nginx-group \
--base-instance-name nginx \
--size 2 \
--template nginx-template \
--target-pool nginx-pool \
--zone us-central1-c
You have to specify --image-project and --image-family.
Refer to https://cloud.google.com/compute/docs/images#os-compute-support.
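For example, a sketch using the Ubuntu 16.04 LTS image family (the instance name and zone are placeholders):
gcloud compute instances create my-instance \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--zone us-central1-a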