How can I download a Google Compute Engine image?

How can I download a google compute engine image that was created from a snapshot of a persistent disk? There doesn't seem to be a direct way to do this through the console.

There isn't a direct way to download an image or snapshot from GCE, but there is a way to save an image and store it in Google Cloud Storage (GCS), where it can be downloaded. You can use the standard gcimagebundle tool to do this.
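For reference, a gcimagebundle invocation looked roughly like this, per the old GCE docs (run on the instance itself; the output directory and log path below are just placeholders):
sudo gcimagebundle -d /dev/sda -o /tmp/ --log_file=/tmp/bundle.log
This should produce an image bundle (a tar.gz) under /tmp/ that you can then upload with gsutil.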
You can also create this image using the dd command. On a temporary disk that’s bigger than the one you want to image, run this:
dd if=/dev/disk/by-id/google-diskname of=disk.img bs=5M
You can then run this command to copy it over to GCS:
gsutil cp disk.img gs://bucket/image.img
And later, you can:
gsutil cat gs://bucket/image.img | dd of=/dev/disk/by-id/google-newdisk bs=5M
This will allow you to make an image of your disk and then send it to GCS where you can download it using either the web interface or gsutil.
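Downloading the stored image to your local machine is just the reverse copy:
gsutil cp gs://bucket/image.img image.img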

As an addition to the current answer, you can also directly download a file using SSH/SCP by adding your public key to the "SSH Keys" in the instance metadata. Then, using your own terminal:
sheryl:~ sangprabo$ scp prabowo.murti@123.456.789.012:/var/www/my-file.tar.gz .
Enter passphrase for key '/Users/sangprabo/.ssh/id_rsa':
I prefer that way so I don't need to create a bucket first. CMIIW.

Related

How to resume a download using gsutil

I have been downloading the file using gsutil, and the process crashed.
The documentation on gsutil is located at:
https://cloud.google.com/storage/docs/gsutil_install#redhat
The file location is described on : https://genebass.org/downloads
How can I resume the file download instead of starting from scratch?
I have been looking for answers to similar questions, but the ones I found address different scenarios. For example:
GSutil resume download using tracker files
As mentioned in the GCP docs on the gsutil cp command:
gsutil automatically performs a resumable upload whenever you use the cp command to upload an object that is larger than 8 MiB. You do not need to specify any special command line options to make this happen. [. . .] Similarly, gsutil automatically performs resumable downloads (using standard HTTP Range GET operations) whenever you use the cp command, unless the destination is a stream. In this case, a partially downloaded temporary file will be visible in the destination directory. Upon completion, the original file is deleted and overwritten with the downloaded contents.
If you're also using gsutil in large production tasks, you may find useful information on Scripting Production Transfers.
Alternatively, you can achieve resumable download from Google Cloud Storage using the Range header (just take note of the HTTP specification threshold).
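As a rough sketch of that manual approach (assuming the object is publicly readable; the bucket, object name, and byte offset here are placeholders), you can request only the missing byte range and append it to the partial file:
# Hypothetical: the first 100 MiB were already downloaded, so fetch only the rest
curl -H "Range: bytes=104857600-" \
  "https://storage.googleapis.com/my-bucket/my-object" >> my-object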
I'm not sure which command you're using (cp or rsync), but either way gsutil will fortunately take care of resuming downloads for you.
From the docs for gsutil cp:
gsutil automatically resumes interrupted downloads and interrupted resumable uploads, except when performing streaming transfers.
So, if you're using gsutil cp, it will automatically resume the partially downloaded files without starting them over. However, resuming with cp will also re-download the files that were already completed. To avoid this, use the -n flag so the files you've already downloaded are skipped, something like:
gsutil -m cp -n -r gs://ukbb-exome-public/300k/results/variant_results.mt .
If instead you're using gsutil rsync, then it will simply resume downloading.
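For example, mirroring the cp command above (gsutil rsync skips files that already match at the destination, so completed files are not re-fetched):
# The local destination directory must exist before rsync runs
mkdir -p variant_results.mt
gsutil -m rsync -r gs://ukbb-exome-public/300k/results/variant_results.mt ./variant_results.mt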

Add a new persistent disk to a GCE instance created from a VirtualBox custom image

I followed the tutorial Google has up on YouTube for creating a custom Compute Engine image using VirtualBox, at the following link:
https://www.youtube.com/watch?v=YlcR6ZLebTM
I succeeded in creating a custom image and importing it into Google Compute Engine.
But when I try to follow this document to attach a new persistent disk:
https://cloud.google.com/compute/docs/disks/persistent-disks#attachdiskcreation
The document mentions a command line tool :
/usr/share/google/safe_format_and_mount
but the folder /usr/share/google does not exist in my custom image.
How can I install it? Or is there another way to mount a new persistent disk on a GCE instance?
The /usr/share/google/safe_format_and_mount command comes with the Google Compute Engine image packages. You can see the source code here.
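For reference, its usage looked roughly like this, per the old GCE docs (the device and mount point below are placeholders):
sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /mnt/my-disk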
You can either install the packages or run these commands:
1- Determine the device location of your new persistent disk: ls -l /dev/disk/by-id/google-*. Let's suppose it's /dev/sdb
2- sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -F /dev/sdb
3- sudo mount -o discard,defaults /dev/sdb <destination_folder>
Run df -h or mount to check whether your disk is mounted in the destination folder.
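Note that a disk mounted this way won't come back after a reboot. If you want it to persist, one way (a sketch, assuming the disk is /dev/sdb and the mount point already exists) is to append an fstab entry keyed on the disk's UUID:
# Append an fstab entry; the UUID is stable across reboots and device renames
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb) /mnt/my-disk ext4 discard,defaults,nofail 0 2" | sudo tee -a /etc/fstab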

Simply uploading a file to Google Compute Engine

I want to upload a file to the disk attached to my Google Compute Engine VM from my local machine.
abhigenie92_gmail_com@instance-1:~$ pwd
/home/abhigenie92_gmail_com
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: ./
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: /home/abhigenie92_gmail_com
ERROR: (gcloud.compute.copy-files) All sources must be
edit2: Get the following error now:
RE: edit2
Since gcloud's copy-files is a custom implementation of scp, you need to specify the complete path on your VM where you want to copy the files to. In your specific case:
LOCAL-FILE-PATH> gcloud compute copy-files [FILENAMES] [VM-NAME]:[FULL-REMOTE-PATH]
In your specific example:
C:\Users\sony\Desktop> gcloud compute copy-files copy.nlogo instance-1:/home/abhigenie92_gmail_com/
This command will then place the file(s) into your user's home directory root. Just make sure the remote path exists and that your user has write permissions on the destination.
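The same command also works in the other direction, if you later need to pull files back down from the VM (results.csv here is just a hypothetical file on the instance):
C:\Users\sony\Desktop> gcloud compute copy-files instance-1:/home/abhigenie92_gmail_com/results.csv .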
From the looks of what you posted, you're trying to copy things from your local machine to a cloud instance from inside the instance. I'm afraid you can't do that.
I take it you have already installed the gcloud compute tool? If not, install that on your local machine (follow the link) and open up the windows command line, type gcloud auth login to authenticate, then you should be able to do what you want to with the following command:
gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28\ Runge\ kutta\ 2nd\ order\ try.nlogo <VM Name>:~/
Note that I have escaped the spaces in your filename - it's a good idea to get out of the habit of spaces in filenames - and made a couple of assumptions:
Your VM is running Linux
You are okay with copying up to your home directory on the VM
If any of these assumptions is incorrect, you may have problems. To copy somewhere else, change the path in the <VM Name>:~/ part
Edit: I mangled a file extension in the original, fixed now!

Google Compute Engine - Clone Instance

I have a GCE instance that I have customised and uploaded various applications to (such as PHP apps running under Apache). I now want to duplicate this instance - i.e. everything on it.
I originally thought clone might do this but I had a play around with it and it only seems to clone the instance config and not anything customised on it.
I've been googling it and it looks like what I need to do is create an image and use this image on a new instance or clone?
Is that correct?
If so, are there any good step-by-step guides out there for doing this?
I had a look at the Google page on images and it talks about having to terminate the instance to do this. I'm a bit wary of this. Maybe it's just the language used in the docs, but I don't want to lose my existing instance.
Also, will everything be stored on the image?
So, for example, will the following all make it onto the image?
MySQL - config & database schemas & data?
Apache - All installed apps under /var/www/html
PHP - php.ini, etc...
All other server configs/modifications?
You can create a snapshot of the source instance, then create a new instance selecting the source snapshot as the disk. It will replicate the server very quickly. For other attached disks, you have to create a new disk and copy the files over the network (scp, rsync, etc.).
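If you prefer the command line, creating the snapshot itself looks something like this (the disk name, zone, and snapshot name are placeholders):
gcloud compute disks snapshot my-boot-disk --zone "us-east1-d" --snapshot-names "my-snapshot-name"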
In the web console, create a snapshot, then click on the snapshot and on the CREATE INSTANCE button; you can customize the settings and then click where it says:
Equivalent REST or command line
and copy the command line; this will be your template.
From this, you can create a Bash script (clone_instance.sh). I did something like this:
#!/bin/bash -e
snapshot="my-snapshot-name"
gcloud_account="ACCOUNTNUMBER-compute@developer.gserviceaccount.com"
# Clone 10 machines
for machine in 01 02 03 04 05 06 07 08 09 10
do
  # Create a disk from the snapshot for each clone
  gcloud compute --project "myProject" disks create "webscrape-${machine}" \
    --size "220" --zone "us-east1-d" --source-snapshot "${snapshot}" \
    --type "pd-standard"
  # Create the instance, booting from the disk created above
  gcloud compute --project "myProject" instances create "webscrape-${machine}" \
    --zone "us-east1-d" --machine-type "n1-highmem-4" --network "default" \
    --maintenance-policy "MIGRATE" \
    --service-account "${gcloud_account}" \
    --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --tags "http-server","https-server" \
    --disk "name=webscrape-${machine},device-name=webscrape-${machine},mode=rw,boot=yes,auto-delete=yes"
done
Now, in your terminal, you can execute your script
bash clone_instance.sh
In case you have other disks attached, the best way to clone without actually unmounting them is to change how they are identified in /etc/fstab.
If you use the UUID in fstab and use the same disks from snapshots (which will have the same UUIDs) then you can do the cloning without unmounting anything.
Just change each disk entry in fstab to use a UUID, like this:
UUID=[UUID_VALUE] [MNT_DIR] ext4 discard,defaults,[NOFAIL] 0 2
You can get the UUID from:
sudo blkid /dev/[DEVICE_ID]
If you're unsure about your DEVICE_ID, you can use:
sudo lsblk
to get the list of device IDs used by your system.
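Putting that together, a filled-in fstab line might look like this (the UUID and mount point are made up for illustration):
UUID=3f8c5a02-6a11-4b2b-9f3e-0d7c5a1e9b42 /mnt/disks/data ext4 discard,defaults,nofail 0 2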
It's 2021 and this is now very simple:
Click the VM Instance you want to clone
Click "Create Machine Image" at the top
From Machine Images on the left, open your new image and click "Create VM Instance"
This will clone the machine specs and data.
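The same flow is available from the command line with a reasonably recent gcloud; a sketch (the instance, image, and zone names are placeholders):
# Create a machine image from the running instance
gcloud compute machine-images create my-image --source-instance my-instance --source-instance-zone us-east1-d
# Create the clone from the machine image
gcloud compute instances create my-clone --source-machine-image my-image --zone us-east1-d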
As was mentioned, if the source instance has a secondary disk attached (and still referenced in /etc/fstab), it is not possible to ssh into the new instance.
I had to take a snapshot of a production instance, so I couldn't unmount the secondary disk without causing disruption.
I was able to fix the problem by creating a disk from the snapshot, mounting the disk on another instance, removing any reference to the secondary disk, i.e., removing the entry from /etc/fstab.
Once I had done that, I was able to use the disk as boot disk in a new instance, and ssh to it.
You can use the GCP Import VM option to import this machine back into the project.

Copy PDF file from Google Drive to remote server

I've built a nice browsing window which shows all of the PDF files on my (or any user's) Google Drive for management purposes.
What I'm looking to do is simple: I want to take a PDF file from my Google Drive (I have all the info related to this file: "downloadUrl", "webContentLink", etc.) and just copy it to my remote server.
Any thoughts?
I guess I'm pretty late here, but this may help other people too.
You could try using Grive. Here's a straightforward tutorial: http://xmodulo.com/2013/05/how-to-sync-google-drive-from-the-command-line-on-linux.html
Even if you don't have root access on the server, you can simply build from source, and:
$ mkdir ~/google_drive
$ cd ~/google_drive
$ grive -a
You'll receive an auth URL which you need to paste on your browser and click on "Allow Access" and you're done. Go to the google_drive dir on your server and run grive to sync between your local dir and your GDrive.
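If you want the sync to run automatically, a crontab entry along these lines would do it (the path and schedule are just examples):
# Hypothetical crontab entry: sync with Google Drive every 5 minutes
*/5 * * * * cd $HOME/google_drive && grive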