Google Compute Engine - Clone Instance

I have a GCE instance that I have customised and uploaded various applications to (such as PHP apps running under Apache). I now want to duplicate this instance - i.e. everything on it.
I originally thought clone might do this but I had a play around with it and it only seems to clone the instance config and not anything customised on it.
I've been googling it and it looks like what I need to do is create an image and use this image on a new instance or clone?
Is that correct?
If so, are there any good step-by-step guides out there on how to do this?
I had a look at the Google page on images and it talks about having to terminate the instance to do this. I'm a bit wary of this. Maybe it's just the language used in the docs, but I don't want to lose my existing instance.
Also, will everything be stored on the image?
So, for example, will the following all make it onto the image?
MySQL - config & databases schemas & data?
Apache - All installed apps under /var/www/html
PHP - php.ini, etc...
All other server configs/modifications?

You can create a snapshot of the source instance, then create a new instance and select that snapshot as the boot disk. It replicates the server very quickly. For other attached disks, you have to create a new disk and copy the files over the network (scp, rsync, etc.).
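A minimal command-line sketch of that approach (the instance, snapshot, zone and disk names below are placeholders):
# snapshot the source instance's boot disk (the disk is usually named after the instance)
gcloud compute disks snapshot my-instance --snapshot-names my-snapshot --zone us-east1-d
# create a new disk from the snapshot
gcloud compute disks create my-clone-disk --source-snapshot my-snapshot --zone us-east1-d
# boot a new instance from that disk
gcloud compute instances create my-clone --zone us-east1-d --disk "name=my-clone-disk,boot=yes,auto-delete=yes"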

In the Web Console, create a snapshot, then click on the snapshot and on the CREATE INSTANCE button. Customize the settings, then click where it says:
Equivalent REST or command line
and copy the command line; this will be your template.
From this, you can create a BASH script (clone_instance.sh). I did something like this:
#!/bin/bash -e
snapshot="my-snapshot-name"
gcloud_account="ACCOUNTNUMBER-compute@developer.gserviceaccount.com"
# clone 10 machines
for machine in 01 02 03 04 05 06 07 08 09 10
do
  # create a disk for each clone from the snapshot
  gcloud compute --project "myProject" disks create "webscrape-${machine}" \
    --size "220" --zone "us-east1-d" --source-snapshot "${snapshot}" \
    --type "pd-standard"
  # boot an instance from that disk
  gcloud compute --project "myProject" instances create "webscrape-${machine}" \
    --zone "us-east1-d" --machine-type "n1-highmem-4" --network "default" \
    --maintenance-policy "MIGRATE" \
    --service-account "${gcloud_account}" \
    --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --tags "http-server","https-server" \
    --disk "name=webscrape-${machine},device-name=webscrape-${machine},mode=rw,boot=yes,auto-delete=yes"
done
Now, in your terminal, you can execute your script:
bash clone_instance.sh

In case you have other disks attached, the best way to clone them without actually unmounting them is to change how they are referenced in /etc/fstab.
If you use the UUID in fstab and use the same disks created from the snapshots (which keep the same UUIDs), then you can do the cloning without unmounting anything.
Just change each disk entry in fstab to use a UUID, like this:
UUID=[UUID_VALUE] [MNT_DIR] ext4 discard,defaults,[NOFAIL] 0 2
you can get the UUID from
sudo blkid /dev/[DEVICE_ID]
if you're unsure about your DEVICE_ID you can use
sudo lsblk
to get the list of device ids used by your system.
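For example, assuming the disk is /dev/sdb and is mounted at /mnt/disks/data (both placeholders, and the UUID below is made up):
sudo blkid /dev/sdb
# example output: /dev/sdb: UUID="0aadb8b1-9972-42f3-8fb6-2e2a7a2f41fe" TYPE="ext4"
# matching /etc/fstab entry:
UUID=0aadb8b1-9972-42f3-8fb6-2e2a7a2f41fe /mnt/disks/data ext4 discard,defaults,nofail 0 2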

It's 2021 and this is now very simple:
1. Click the VM instance you want to clone.
2. Click "Create Machine Image" at the top.
3. From Machine Images on the left, open your new image and click "Create VM Instance".
This will clone the machine specs and data.
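If you prefer the command line, a rough equivalent (instance, image and zone names are placeholders) is:
gcloud compute machine-images create my-image --source-instance my-instance --source-instance-zone us-central1-a
gcloud compute instances create my-clone --zone us-central1-a --source-machine-image my-image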

As was mentioned, if the source instance has a secondary disk attached, it is not possible to ssh into the new instance.
I had to take a snapshot of a production instance, so I couldn't unmount the secondary disk without causing disruption.
I was able to fix the problem by creating a disk from the snapshot, mounting the disk on another instance, removing any reference to the secondary disk, i.e., removing the entry from /etc/fstab.
Once I had done that, I was able to use the disk as boot disk in a new instance, and ssh to it.
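A rough sketch of that repair flow (disk, snapshot, instance and device names are placeholders):
# create a disk from the snapshot and attach it to a helper instance
gcloud compute disks create repair-disk --source-snapshot my-snapshot --zone us-east1-d
gcloud compute instances attach-disk helper-instance --disk repair-disk --zone us-east1-d
# on the helper instance: mount it and remove the secondary-disk entry from its fstab
sudo mount /dev/sdb1 /mnt
sudo nano /mnt/etc/fstab   # delete or comment out the line for the secondary disk
sudo umount /mnt
# detach it and use it as the boot disk of a new instance
gcloud compute instances detach-disk helper-instance --disk repair-disk --zone us-east1-d
gcloud compute instances create my-new-instance --zone us-east1-d --disk "name=repair-disk,boot=yes"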

You can use the GCP Import VM option to import this machine back into the project.

Related

Create a new google cloud instance using shut-down script

I am trying to use a shutdown-script to create a new instance from within the instance that is shutting down.
The script has four tasks:
1. creates an empty file
2. gets the name of the new instance to be created
3. generates a name for the next new instance to be spawned
4. creates a new instance from within this instance with the generated name
Here is the script:
#!/bin/bash
touch /home/ubuntu/newfile.txt
new_instance_name=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/next_instance_name -H "Metadata-Flavor: Google")
# force base 10 so suffixes like 0009 are not parsed as octal
next_instance_name="instance-"$(printf "%04d" $((10#${new_instance_name: -4}+1)))
gcloud beta compute --project=xxxxxxxxx instances create $new_instance_name \
  --zone=us-central1-c --machine-type=f1-micro --subnet=default --network-tier=PREMIUM \
  --metadata=next_instance_name=$next_instance_name --maintenance-policy=MIGRATE \
  --service-account=XXXXXXXX-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform \
  --image=image-1 --image-project=xxxxxxxx --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=$new_instance_name
This script is made executable using chmod +x, and the file name of the script is /home/ubuntu/shtudown_script.sh. The metadata shutdown-script for this instance is also set to /home/ubuntu/shtudown_script.sh.
All parts of the script run fine when I run it manually from within the instance, so a new file is created and a new instance is also created when the current instance shuts down.
But when the script is invoked by stopping the instance through the API, it only creates the file from the touch command; no new instance is created.
Am I doing something wrong here?
So I was able to reproduce the behavior you described. I ran a bash script similar to the one you have provided as a shutdown script, and it would only create the empty file called "newfile.txt".
I then decided to capture the output of the gcloud command to see what was happening. I had to tweak the bash script to fit my project. Here is the bash script I ran, redirecting the output to a file:
#!/bin/bash
touch /home/ubuntu/newfile.txt
gcloud beta compute --project=xxx instances create instance-6 \
  --zone=us-central1-c --machine-type=f1-micro --subnet=default --maintenance-policy=MIGRATE \
  --service-account=xxxx-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform \
  --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=instance-6 > /var/output.txt 2>&1
The output I received was the following:
ERROR: (gcloud.beta.compute.instances.create) Could not fetch resource: - Insufficient Permission
This means that my default service account did not have the appropriate scopes to create the VM instance.
I then stopped my VM instance and edited the scopes to give the service account full access as described here. Once I changed the scopes, I started the VM instance back up and then stopped it again. At this point, it successfully created the VM instance called "instance-6". I would not suggest giving the default service account full access; I would suggest specifying which scopes it should have, but make sure it has full access to Compute Engine if you want the shutdown script to work.
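As a rough sketch (instance name, zone and scope are placeholders), the scopes can be changed from the command line while the instance is stopped:
gcloud compute instances stop my-instance --zone us-central1-c
gcloud compute instances set-service-account my-instance --zone us-central1-c --scopes https://www.googleapis.com/auth/compute
gcloud compute instances start my-instance --zone us-central1-c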
If the shutdown script works when you stop the VM instance using the command:
$ sudo shutdown -h now
And does not work when stopping the VM instance from the Cloud Console by pressing the “Stop” button, then I suspect this behavior is to be expected.
A shutdown script has a limited period of time to run when you stop a VM instance; however, this limit does not apply if you request the shutdown using the “sudo shutdown” command. You can read more about this behavior here.
If you would like to know more about the shutdown period, you can read about it here.
I had already given my instance the proper scope by giving the service account full access (which is bad practice).
But the actual problem was solved when I reinstalled google-cloud-sdk using
sudo apt-get install google-cloud-sdk
Before reinstalling, when I ran those scripts manually after sshing into the instance, they used the gcloud command from the preinstalled snap directory /snap/bin/gcloud. But when the script runs as a startup or shutdown script, for some reason it cannot access the preinstalled /snap/bin/ directory. After reinstalling the Cloud SDK with apt-get, the gcloud command is at /usr/bin/gcloud, which I think is accessible to startup and shutdown scripts.
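If the real issue is just that /snap/bin is not on the PATH in the script's environment, it may also be enough to call gcloud by its absolute path inside the shutdown script; a quick check looks something like:
# see which binary your interactive shell resolves
command -v gcloud          # e.g. /snap/bin/gcloud or /usr/bin/gcloud
# then invoke that absolute path from the script instead of the bare "gcloud"
/usr/bin/gcloud compute instances list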

Running task in the background?

If we submit a task to a Compute Engine instance through ssh from a host machine, and then shut down the host machine, is there a way to get hold of the output of the submitted task later, when we switch the host machine back on?
From the Linux point of view, 'ssh' and 'gcloud compute ssh' are commands like any other, so it is possible to redirect their output to a file while the command runs, for example using >> to redirect and append stdout to a file, or 2>> to store stderr.
For example if you run from the first instance 'name1':
$ gcloud compute ssh name2 --command='watch hostname' --zone=XXXX >> output.out
where 'name2' is the second instance. If at some point you shut down 'name1', you will find in output.out the output produced by the command up until the shutdown occurred.
Note that there is also the possibility to create shutdown scripts, which in this scenario could be useful to upload output.out to a bucket or to perform any kind of clean-up operation.
In order to do so you can run the following command
$ gcloud compute instances add-metadata example-instance --metadata-from-file shutdown-script=path/to/script_file
Where the content of the script could be something like
#! /bin/bash
gsutil cp path/output.out gs://yourbucketname
Always keep in mind that Compute Engine only executes shutdown scripts on a best-effort basis and does not guarantee that the shutdown script will be run in all cases.
There is more documentation about shutdown scripts if needed.

Hide/obfuscate environmental parameters in docker

I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in docker is not visible in docker ps; however, it is visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b98afde2fab7 mysql:5.7.7 "/entrypoint.sh mysq 6 seconds ago Up 5 seconds 3306/tcp mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
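For instance, the same values can be pulled out directly with a format string:
docker inspect -f '{{.Config.Env}}' mysql-5.7.7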
Is there a way to hide/obfuscate environment parameters passed when launching containers. Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret in a volume e.g:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
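A rough sketch of that source-a-file pattern (the file name, mount point and app path are all made up):
# my-secret-file on the host contains e.g.:  export MYSQL_ROOT_PASSWORD=12345
docker run -v $(pwd)/my-secret-file:/secret-file:ro my-image
# with the image's CMD along the lines of:
#   CMD ["sh", "-c", ". /secret-file && exec /run-my-app"]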
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
You ask, "Alternatively, is it possible to pass sensitive parameters by reference to a file?" From the docs at http://docs.docker.com/reference/commandline/run/: --env-file=[] reads in a file of environment variables.
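For example (file name made up; note the values still show up in docker inspect, so this mainly keeps them out of your shell history):
# my-env.env contains:  MYSQL_ROOT_PASSWORD=12345
docker run --name mysql-5.7.7 --env-file ./my-env.env -d mysql:5.7.7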

Add a new persistent disk to a GCE instance created with a VirtualBox custom image

I followed the tutorial Google has up on YouTube for creating a custom image for Compute Engine using VirtualBox, at the following link:
https://www.youtube.com/watch?v=YlcR6ZLebTM
I have succeeded in creating a custom image and importing it into Google Compute Engine.
But when I try to follow this document to attach a new persistent disk:
https://cloud.google.com/compute/docs/disks/persistent-disks#attachdiskcreation
The document mentions a command line tool :
/usr/share/google/safe_format_and_mount
but the folder /usr/share/google does not exist in my custom image.
How can I install it? Or is there another way to mount a new persistent disk on a GCE instance?
The /usr/share/google/safe_format_and_mount command comes with the Google Compute Engine image packages. You can see the source code here.
You can either install the packages or run these commands:
1- Determine the device location of your new persistent disk: ls -l /dev/disk/by-id/google-*. Let's suppose it's /dev/sdb
2- sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -F /dev/sdb
3- sudo mount -o discard,defaults /dev/sdb <destination_folder>
Run df -h or mount to check whether your disk is now mounted in the destination folder.
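If you also want the disk mounted again after a reboot, one common approach (same placeholder device and folder as above) is to add an fstab entry by UUID:
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb) <destination_folder> ext4 discard,defaults,nofail 0 2" | sudo tee -a /etc/fstab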

Simply uploading a file to google compute

I want to upload a file to the disk attached to my google compute vm from my local machine.
abhigenie92_gmail_com@instance-1:~$ pwd
/home/abhigenie92_gmail_com
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: ./
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: /home/abhigenie92_gmail_com
ERROR: (gcloud.compute.copy-files) All sources must be
edit2: Get the following error now:
RE: edit2
Since gcloud's copy-files is a custom implementation of scp, you need to specify the complete path on your VM where you want to copy the files to. The general form is:
LOCAL-FILE-PATH> gcloud compute copy-files [FILENAMES] [VM-NAME]:[FULL-REMOTE-PATH]
In your specific example:
C:\Users\sony\Desktop> gcloud compute copy-files copy.nlogo instance-1:/home/abhigenie92_gmail_com/
This command will then place the file(s) into your user's home directory. Just make sure the remote path exists and that your user has write rights to the destination.
From the looks of what you posted, you're trying to copy things from your local machine to a cloud instance from inside the instance. I'm afraid you can't do that.
I take it you have already installed the gcloud compute tool? If not, install it on your local machine (follow the link), open up the Windows command line, and type gcloud auth login to authenticate; then you should be able to do what you want with the following command:
gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28\ Runge\ kutta\ 2nd\ order\ try.nlogo <VM Name>:~/
Note that I have escaped the spaces in your filename - it's a good idea to get out of the habit of spaces in filenames - and made a couple of assumptions:
Your VM is running linux
You are okay with copying up to your home directory on the VM
If any of these assumptions is incorrect, you may have problems. To copy somewhere else, change the path in the <VM Name>:~/ part
Edit: I mangled a file extension in the original, fixed now!