gcloud compute config-ssh produces no Host entries - google-compute-engine

I'm trying to use gcloud compute config-ssh to set up Host entries in my $HOME/.ssh/config. The command runs with no errors, but when I look in $HOME/.ssh/config it has only added a comment block with no Host entries, like this:
# Google Compute Engine Section
#
# The following has been auto-generated by "gcloud compute config-ssh"
# to make accessing your Google Compute Engine virtual machines easier.
#
# To remove this blob, run:
#
# gcloud compute config-ssh --remove
#
# You can also manually remove this blob by deleting everything from
# here until the comment that contains the string "End of Google Compute
# Engine Section".
#
# You should not hand-edit this section, unless you are deleting it.
#
# End of Google Compute Engine Section
When I run gcloud compute instances list I see all my instances, so I know I have gcloud set up properly.
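One thing worth checking (not stated in the thread): config-ssh only writes Host entries for instances it can actually reach over SSH, so stopped instances or instances without an external IP are skipped. A quick sketch to inspect both fields:

```shell
# Show each instance's status and external IP; config-ssh skips
# instances that are not RUNNING or have no external address.
gcloud compute instances list \
  --format="table(name, status, networkInterfaces[0].accessConfigs[0].natIP)"
```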

Related

How to force-detach a non-boot disk from all VMs it's attached to

I have a shared read-only persistent disk that gets an update every month.
How can I force all VMs attached to this shared disk to detach without passing the list of instances to the command?
How can I detach a Read-Only disk from all instances?
A solution is to use gcloud compute instances detach-disk INSTANCE_NAME --disk DISK, but I don't want to enter each attached instance name one by one.
'gcloud compute instances detach-disk' doesn't have a 'force' option.
Bear in mind that detaching a disk without first unmounting it may result in errors for applications that are using the data. To unmount a persistent disk on a Linux-based image, ssh into the instance and run:
sudo umount /dev/disk/by-id/google-DEVICE_NAME
Once the disk is unmounted, you can use this sample script to run the 'gcloud compute instances detach-disk' command on every attached instance:
#!/bin/bash
zone="ZONE"
disk="DEVICE_NAME"
# 'disks describe' lists the attached instances under 'users:' as full
# resource URLs; keep only the instance name after the last '/'.
for i in $(gcloud compute disks describe $disk --zone $zone | grep "^-" | rev | cut -d "/" -f1 | rev)
do
  gcloud compute instances detach-disk $i --disk=$disk --zone=$zone
done
You can refer to this documentation[1] to get more information about the parameters of the command.
[1] https://cloud.google.com/sdk/gcloud/reference/compute/instances/detach-disk
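Building on the answer above, here is a sketch that also unmounts the disk on each instance before detaching it (ZONE and DEVICE_NAME are placeholders; this assumes Linux VMs reachable over ssh):

```shell
#!/bin/bash
zone="ZONE"
disk="DEVICE_NAME"

# Instance names are recovered from the resource URLs under 'users:'
# in the describe output, as in the script above.
for i in $(gcloud compute disks describe $disk --zone $zone | grep "^-" | rev | cut -d "/" -f1 | rev)
do
  # Unmount first so applications don't see the data disappear mid-use.
  gcloud compute ssh $i --zone=$zone \
    --command="sudo umount /dev/disk/by-id/google-$disk"
  gcloud compute instances detach-disk $i --disk=$disk --zone=$zone
done
```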

Get a list of quota usage/limit of my project using gcloud command line

Can anyone show me how to get a list of quota usage per project in GCE?
I can only get this list from the console: console.cloud.google.com/iam-admin/quotas?project=my-project&location=us-east1, but I don't know how we can list it using the gcloud command line.
Run the following command to check project-wide quotas. Replace myproject with your own project ID:
gcloud compute project-info describe --project myproject
Official reference here
To verify the capacity used vs available quota, you can run the following command.
$ gcloud compute project-info describe --project myproject
Alternatively, you can use the Compute Engine API to list some quotas and their limits. However, quotas such as Persistent Disk or Local SSD… are regional, and the project-level command-line or API output doesn't list per-region quotas. To retrieve regional quota information, you have to run:
$ gcloud compute regions describe example-region
I managed to get all quotas per region with this command:
gcloud compute regions list --project=production --format=json
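If you need usage vs. limit at a glance rather than raw JSON, the output can be post-processed. A sketch that flags quotas in one region above 80% utilization (the region name is an example; python3 is assumed to be available):

```shell
gcloud compute regions describe us-east1 --format=json | python3 -c '
import json, sys

region = json.load(sys.stdin)
# Each quota entry carries "metric", "usage", and "limit" fields.
for q in region.get("quotas", []):
    if q["limit"] > 0 and q["usage"] / q["limit"] > 0.8:
        print("{}: {}/{}".format(q["metric"], q["usage"], q["limit"]))
'
```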

Running task in the background?

If we submit a task to a Compute Engine instance over ssh from a host machine and then shut down the host machine, is there a way to get hold of the submitted task's output later, once we switch the host machine back on?
From the Linux point of view, 'ssh' and 'gcloud compute ssh' are commands like any others, so it is possible to redirect their output to a file while the command runs, for example using >> to redirect and append stdout to a file, or 2>> to store stderr.
For example if you run from the first instance 'name1':
$ gcloud compute ssh name2 --command='watch hostname' --zone=XXXX >> output.out
where 'name2' is the second instance, and at some point you shut down 'name1', you will find in output.out the output produced by the command up to the moment the shutdown occurred.
Note that you can also create shutdown scripts, which in this scenario could be useful for uploading output.out to a bucket or performing any kind of clean-up operation.
In order to do so, you can run the following command:
$ gcloud compute instances add-metadata example-instance --metadata-from-file shutdown-script=path/to/script_file
where the content of the script could be something like:
#! /bin/bash
gsutil cp path/output.out gs://yourbucketname
Always keep in mind that Compute Engine only executes shutdown scripts on a best-effort basis and does not guarantee that the shutdown script will be run in all cases.
More documentation about shutdown scripts is available if needed.
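An alternative worth sketching (the task name is illustrative): instead of redirecting on the host, detach the task on the VM itself with nohup, so it keeps running and logging even while the host machine is off:

```shell
# Start the task on the remote VM, detached from the ssh session,
# with its output captured in a log file on the VM.
gcloud compute ssh name2 --zone=XXXX \
  --command='nohup long_task > /tmp/task.log 2>&1 &'

# Later, after switching the host machine back on, retrieve the output.
gcloud compute ssh name2 --zone=XXXX --command='cat /tmp/task.log'
```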

Setting up fully qualified domain name in Google Compute Instances

Many infrastructure configurations require a fully qualified domain name to set up.
How do we set an FQDN on google compute instances?
FQDN that can be used internally within the VPC and / or FQDN that can be used externally
Here is a document that explains how the FQDN works in a Compute Engine instance.
Maybe this solution fits your needs:
# edit the hosts file and add your FQDN
$ sudo vi /etc/hosts
# prevent the file from being overwritten (make it immutable)
$ sudo chattr +i /etc/hosts
You can also use Google Cloud DNS as your internal DNS by editing the resolv.conf file.
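For reference, the /etc/hosts entry could look like this (the IP and names are examples only):

```
# Map the VM's internal IP to the desired FQDN, then a short alias.
10.128.0.2  myvm.example.internal myvm
```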

exporting files from GCE to my local machine

Is there a reverse command for gcutil push? Basically, I want a copy of my Python files on my local machine, so I'm looking for a way to export them from my Google Compute Engine instance to my local machine without using Git or any other source control tool.
Yep, there is gcutil pull. Here is the help file:
Local:~ mark$ gcutil help pull
Command line tool for interacting with Google Compute Engine.
Please refer to http://developers.google.com/compute/docs/gcutil/tips for more
information about gcutil usage.
USAGE: gcutil [--global_flags] <command> [--command_flags] [args]

pull    Pull one or more files from a VM instance.

Usage: gcutil [--global_flags] pull
       [--command_flags] <instance-name> <file-1> ...
       <file-n> <destination>

Flags for pull:

gcutil_lib.instance_cmds:
  --ssh_arg: Additional arguments to pass to ssh;
    repeat this option to specify a list of values
    (default: '[]')
  --ssh_key_push_wait_time: Number of seconds to wait for updates to
    project-wide ssh keys to cascade to the instances within the project
    (default: '120')
    (an integer)
  --ssh_port: TCP port to connect to
    (default: '22')
    (an integer)
  --zone: [Required] The zone for this request.

gflags:
  --flagfile: Insert flag definitions from the given file into the command line.
    (default: '')
  --undefok: comma-separated list of flag names that it is okay to specify on
    the command line even if the program does not define a flag with that name.
    IMPORTANT: flags in this list that have arguments MUST use the --flag=value
    format.
    (default: '')

Run 'gcutil --help' to get help for global flags.
Run 'gcutil help' to see the list of available commands.
The syntax for uploading files from your local machine to GCE is as follows:
gcutil push --zone=us-central1-a \
  my_instance \
  ~/local/file1 \
  ~/local/file2 \
  /remote/path
For example, on a Mac:
gcutil push --zone=us-central1-a your-instance ~/Desktop/Gcloud /home/munish/
The syntax for downloading files from GCE to your local machine is as follows:
gcutil pull --zone=us-central1-a \
  my_instance \
  /remote/file1 \
  /remote/file2 \
  ~/local/path
For example, on a Mac:
gcutil pull --zone=us-central1-a your-instance /home/munish/source-folder ~/Desktop/destination-folder
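Note that gcutil has since been deprecated; with the current gcloud CLI, the equivalent download is gcloud compute scp (the paths and names below mirror the example above):

```shell
# Recursively copy a remote folder from the instance to the local machine.
gcloud compute scp --recurse --zone=us-central1-a \
  your-instance:/home/munish/source-folder ~/Desktop/destination-folder
```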