Is there a reverse command for gcutil push? Basically, what I want is a copy of my Python files on my local machine, so I'm looking for a way to export them from my Google Compute Engine instance and import them onto my local machine without using Git or any other source-control tool.
Yep, there is gcutil pull. Here is the help file:
Local:~ mark$ gcutil help pull
Command line tool for interacting with Google Compute Engine.
Please refer to http://developers.google.com/compute/docs/gcutil/tips for more
information about gcutil usage.
USAGE: gcutil [--global_flags] <command> [--command_flags] [args]
pull Pull one or more files from a VM instance.
Usage: gcutil [--global_flags] pull
[--command_flags] <instance-name> <file-1> ...
<file-n> <destination>
Flags for pull:
gcutil_lib.instance_cmds:
--ssh_arg: Additional arguments to pass to ssh;
repeat this option to specify a list of values
(default: '[]')
--ssh_key_push_wait_time: Number of seconds to wait for updates to
project-wide ssh keys to cascade to the instances within the project
(default: '120')
(an integer)
--ssh_port: TCP port to connect to
(default: '22')
(an integer)
--zone: [Required] The zone for this request.
gflags:
--flagfile: Insert flag definitions from the given file into the command line.
(default: '')
--undefok: comma-separated list of flag names that it is okay to specify on
the command line even if the program does not define a flag with that name.
IMPORTANT: flags in this list that have arguments MUST use the --flag=value
format.
(default: '')
Run 'gcutil --help' to get help for global flags.
Run 'gcutil help' to see the list of available commands.
The syntax for uploading files from your local machine to GCE is as follows:
gcutil push --zone=us-central1-a \
    my_instance \
    ~/local/file1 \
    ~/local/file2 \
    /remote/destination
For example, on a Mac:
gcutil push --zone=us-central1-a your-instance ~/Desktop/Gcloud /home/munish/
The syntax for downloading files from GCE to your local machine is as follows:
gcutil pull --zone=us-central1-a \
    my_instance \
    /remote/file1 \
    /remote/file2 \
    ~/local/path
For example, on a Mac:
gcutil pull --zone=us-central1-a your-instance /home/munish/source-folder ~/Desktop/destination-folder
If we submit a task to a Compute Engine instance through ssh from a host machine and then shut down the host machine, is there a way to get hold of the output of the submitted task later on, when we switch the host machine back on?
From the Linux point of view, ssh and gcloud compute ssh are commands like any other, so it is possible to redirect their output to a file while the command runs, using for example >> to redirect and append stdout to a file, or 2>> to store stderr.
For example if you run from the first instance 'name1':
$ gcloud compute ssh name2 --command='watch hostname' --zone=XXXX >> output.out
where 'name2' is the second instance. If at some point you shut down 'name1', you will find the output produced by the command up to the shutdown stored in output.out.
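To also capture stderr, as mentioned above, the same command can be extended along these lines:
$ gcloud compute ssh name2 --command='watch hostname' --zone=XXXX >> output.out 2>> error.out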
Note that there is also the possibility to create shutdown scripts, which in this scenario could be useful to upload output.out to a bucket or to perform any kind of clean-up operation.
In order to do so you can run the following command
$ gcloud compute instances add-metadata example-instance --metadata-from-file shutdown-script=path/to/script_file
Where the content of the script could be something like
#! /bin/bash
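# copy the captured output to a Cloud Storage bucket before the VM shuts down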
gsutil cp path/output.out gs://yourbucketname
Always keep in mind that Compute Engine only executes shutdown scripts on a best-effort basis and does not guarantee that the shutdown script will be run in all cases.
More documentation about shutdown scripts is available if needed.
With scp I can add the -r flag to download directories to my local machine via ssh.
When using:
gcloud compute scp -r
it says that '-r' is not an available option.
Without -r I get an error saying that my source path is a directory. (Implying I can only download single files.)
Is there an equivalent to -r flag for gcloud compute scp command?
Found it!
GCE offers an equivalent and it is --recurse.
My final command looks like this:
gcloud compute scp --recurse username@instance_name:./* "local_dir"
For some reason I also needed the * after the source folder to avoid a security issue.
Your gcloud already has the right credentials, so simply do
gcloud compute scp --recurse [the_instance_name]:[the_path_on_gcp_instance_folder] [the_path_on_your_machine]
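For example, a concrete invocation could look like this (instance name, zone and paths are placeholders):
gcloud compute scp --recurse --zone=us-central1-a my-instance:/home/me/project ~/Downloads/
The same flag works in the other direction, from your machine to the instance:
gcloud compute scp --recurse --zone=us-central1-a ~/Downloads/project my-instance:/home/me/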
I ran into some issues when dealing with Google VM instances.
I used this command to create an instance at the beginning:
gcloud compute instances create <instance name> \
--scopes storage-rw,bigquery,compute-rw \
--image-family container-vm \
--image-project google-containers \
--zone us-central1-b \
--machine-type n1-standard-1
The instance is working well, and I can do a lot of things on the instance.
However, there are several other users in the project, and every user in the same project can SSH into my instance and see my work. They also have sudo permission, which means they can change my settings, documents and so on. It is not secure.
Is there a method to set up the instance to be personal instead of public to the project? In this case, everyone in the project can have his/her own VM, and no one else can access it except himself/herself.
If you want to allow access to a VM only for particular users and not for other project SSH keys, you must block the propagation of project-level keys to the instance. That way only the specific keys defined on the instance will have access to it. The details are explained in this article.
The command to deploy such an instance at the creation time would look something like this
gcloud compute --project "myproject" instances create "myinstance" --zone "us-central1-f" \
    --machine-type "n1-standard-1" --network "default" \
    --metadata "block-project-ssh-keys=true,ssh-keys=MYPUBLICKEYVALUE" \
    --maintenance-policy "MIGRATE" \
    --scopes default="https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly" \
    --image "/debian-cloud/debian-8-jessie-v20160923" \
    --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "myinstance"
Now if you have defined a project member as Owner/Editor, their key will still get automatically transferred to the instance when they SSH using gcloud or the Developer Console. This behaviour makes sense, since the permissions at the project level allow them even to delete the VM.
If your instance is already created, you must change the block-project-ssh-keys metadata value to TRUE and delete any undesired keys from the VM, as explained in the same article.
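For an already-running instance, something like the following should work (instance name and zone are placeholders):
gcloud compute instances add-metadata myinstance --zone us-central1-f --metadata block-project-ssh-keys=TRUE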
I have a GCE instance that I have customised and uploaded various applications to (such as PHP apps running under Apache). I now want to duplicate this instance - i.e. everything on it.
I originally thought clone might do this but I had a play around with it and it only seems to clone the instance config and not anything customised on it.
I've been googling it and it looks like what I need to do is create an image and use this image on a new instance or clone?
Is that correct?
If so, are there any good step-by-step guides out there on how to do this?
I had a look at the Google page on images and it talks about having to terminate the instance to do this. I'm a bit wary of this. Maybe it's just the language used in the docs, but I don't want to lose my existing instance.
Also, will everything be stored on the image?
So, for example, will the following all make it onto the image?
MySQL - config & databases schemas & data?
Apache - All installed apps under /var/www/html
PHP - php.ini, etc...
All other server configs/modifications?
You can create a snapshot of the source instance, then create a new instance selecting the source snapshot as its disk. It will replicate the server very quickly. For other attached disks, you have to create a new disk and copy the files over the network (scp, rsync, etc.).
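If you prefer to script it, the snapshot itself can be created from the CLI with something like this (disk name, snapshot name and zone are placeholders):
gcloud compute disks snapshot my-instance --zone "us-east1-d" --snapshot-names "my-snapshot-name"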
In the Web Console, create a snapshot, then click on the snapshot and on the CREATE INSTANCE button. You can customize the settings and then click where it says:
Equivalent REST or command line
and copy the command line; this will be your template.
From this, you can create a BASH script (clone_instance.sh). I did something like this:
#!/bin/bash -e
snapshot="my-snapshot-name"
gcloud_account="ACCOUNTNUMBER-compute@developer.gserviceaccount.com"
# clone 10 machines
for machine in 01 02 03 04 05 06 07 08 09 10
do
  # create a boot disk for this clone from the snapshot
  gcloud compute --project "myProject" disks create "webscrape-${machine}" \
    --size "220" --zone "us-east1-d" --source-snapshot "${snapshot}" \
    --type "pd-standard"
  # create the instance using that disk as its boot disk
  gcloud compute --project "myProject" instances create "webscrape-${machine}" \
    --zone "us-east1-d" --machine-type "n1-highmem-4" --network "default" \
    --maintenance-policy "MIGRATE" \
    --service-account "${gcloud_account}" \
    --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --tags "http-server","https-server" \
    --disk "name=webscrape-${machine},device-name=webscrape-${machine},mode=rw,boot=yes,auto-delete=yes"
done
Now, in your terminal, you can execute your script:
bash clone_instance.sh
In case you have other disks attached, the best way to handle them without actually unmounting anything is to change how they are referenced in /etc/fstab.
If you use the UUID in fstab and use the same disks created from the snapshots (which will have the same UUIDs), then you can do the cloning without unmounting anything.
Just change each disk entry in fstab to use its UUID, like this:
UUID=[UUID_VALUE] [MNT_DIR] ext4 discard,defaults,[NOFAIL] 0 2
You can get the UUID with:
sudo blkid /dev/[DEVICE_ID]
If you're unsure about your DEVICE_ID, you can use:
sudo lsblk
to get the list of device IDs used by your system.
It's 2021 and this is now very simple:
Click the VM Instance you want to clone
Click "Create Machine Image" at the top
From Machine Images on the left, open your new image and click "Create VM Instance"
This will clone the machine specs and data.
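If you prefer the command line, the gcloud equivalent should look roughly like this (machine-image and instance names, and the zone, are placeholders):
gcloud compute machine-images create my-machine-image --source-instance=my-instance --source-instance-zone=us-central1-a
gcloud compute instances create my-clone --zone=us-central1-a --source-machine-image=my-machine-image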
As was mentioned, if the source instance has a secondary disk attached, it is not possible to ssh into the new instance.
I had to take a snapshot of a production instance, so I couldn't unmount the secondary disk without causing disruption.
I was able to fix the problem by creating a disk from the snapshot, mounting the disk on another instance, removing any reference to the secondary disk, i.e., removing the entry from /etc/fstab.
Once I had done that, I was able to use the disk as boot disk in a new instance, and ssh to it.
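A rough sketch of that workaround with gcloud (snapshot, disk and instance names are placeholders):
gcloud compute disks create fixed-boot-disk --source-snapshot=prod-snapshot --zone=us-central1-a
gcloud compute instances attach-disk helper-instance --disk=fixed-boot-disk --zone=us-central1-a
# on helper-instance: mount the disk, delete the secondary-disk entry from its /etc/fstab, unmount
gcloud compute instances detach-disk helper-instance --disk=fixed-boot-disk --zone=us-central1-a
# fixed-boot-disk can now be used as the boot disk of a new instance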
You can use the GCP Import VM option to import this machine back to the project.
I'm launching instances using the following command:
gcutil addinstance \
--image=debian-7 \
--persistent_boot_disk \
--zone=us-central1-a \
--machine_type=n1-standard-1 \
--metadata_from_file=startup-script:install.sh \
instance-name
How can I detect when this instance has completed its install script? I'd like to be able to place this launch command in a larger provisioning script that then goes on to issue commands to the server that depend on the install script having completed successfully.
There are a number of ways: sending yourself an email, uploading to Cloud Storage, sending a Jabber message, ...
One simple, observable way IMHO is to add a logger entry at the end of your install.sh script (I also tweak the beginning for symmetry). Something like:
#!/bin/bash
/usr/bin/logger "== Startup script START =="
#
# Your code goes here
#
/usr/bin/logger "== Startup script END =="
You can check then if the script started or ended in two ways:
From your Developer's Console, select "Projects" > "Compute" > "VM Instances" > your instance > "Serial console" > "View Output".
From the CLI, by issuing gcutil getserialportoutput instance-name.
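Building on the markers above, a minimal polling sketch from the provisioning machine might look like this (the instance name is a placeholder):
until gcutil getserialportoutput instance-name | grep -q "Startup script END"; do
    sleep 10
done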
I don't know of a way to do all of this within gcutil addinstance.
I'd suggest:
Adding the instance via gcutil addinstance, making sure to use the --wait_until_running flag to ensure that the instance is running before you continue
Copying your script over to the instance via something like gcutil push
Using gcutil ssh <instance-name> </path-to-script/script-to-run> to run your script manually.
This way, you can write your script in such a way that it blocks until it's finished, and the ssh command will not return until your script on the remote machine is done executing.
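A rough sketch of that sequence, reusing the flags from the question (paths and names are placeholders, and install.sh is assumed to block until the installation is done):
gcutil addinstance \
    --image=debian-7 \
    --persistent_boot_disk \
    --zone=us-central1-a \
    --machine_type=n1-standard-1 \
    --wait_until_running \
    instance-name
gcutil push --zone=us-central1-a instance-name install.sh /tmp/
# gcutil ssh returns only when the remote command exits
gcutil ssh --zone=us-central1-a instance-name bash /tmp/install.sh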
There really are a lot of ways to accomplish this goal. One that tickles my fancy is to use the metadata server associated with the instance. Have the startup script set a piece of metadata to "FINISHED" when the script is done. You can query the metadata server with a hanging GET that will only return when the metadata updates. Just use gcutil setmetadata
from within the script as the last command.
I like this method because the hanging GET just gives you one command to run, rather than a poll to run in a loop, and it doesn't involve any services besides Compute Engine.
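A minimal sketch of the hanging-GET side, assuming the startup script's last command sets an instance metadata key named status to FINISHED (the key name is an assumption for illustration):
# run from inside the VM (the metadata server is only reachable from the instance),
# e.g. via gcutil ssh; this call blocks until the 'status' attribute changes
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/status?wait_for_change=true"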
One more hacky way:
startup_script_finished=false
while [[ "$startup_script_finished" = false ]]; do
    # check whether the startup-script runner is still executing on the instance
    pid=$(gcloud compute ssh $GCLOUD_USER@$GCLOUD_INSTANCE -- pgrep -f "\"/usr/bin/python /usr/bin/google_metadata_script_runner --script-type startup\"")
    if [[ -z $pid ]]; then
        startup_script_finished=true
    else
        sleep 2
    fi
done
One possible solution would be to have your install script create a text file in a Cloud Storage bucket as the last thing it does, using the hostname as the filename.
Your main script that ran the original gcutil addinstance command could then periodically poll the contents of the bucket (using gsutil ls) until it sees a file with a matching name, at which point it knows the install has completed on that instance.
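A minimal sketch of that polling loop, assuming the install script ends with something like echo done | gsutil cp - gs://my-provisioning-bucket/$(hostname) (the bucket name is an assumption for illustration):
until gsutil ls gs://my-provisioning-bucket/instance-name > /dev/null 2>&1; do
    sleep 10
done
echo "Install finished on instance-name"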