Download Directory from Google Cloud Compute Engine - google-compute-engine

I am trying to download a full recursive directory from Google Cloud Platform using the trial edition of the platform. I assumed that the "Download File" option under the SSH dropdown settings would work, but it does not, showing only a "Failed" message in the window.
When I tried to look up the answer, I found people describing how to download files from storage buckets and such - that is not what I am doing, and to my knowledge I don't have access to buckets on the trial edition of GCP. I have a Compute Engine instance running that I can SSH into, and I am looking to download a full recursive directory from it.
Thank you for any advice that you can offer me!

If you already have SSH access, you can use the scp command to copy files (assuming it is available on the system to which you want to copy the files).
scp -r username@server:/path/to/your/directory /local/destination
Another option is to use SFTP if scp is not available. Various clients are available for this for various operating systems.
Either of these options will transfer the files over SSH without any additional configuration required on the server (the compute instance, in your case).
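If you end up using SFTP from the command line instead, OpenSSH's sftp client can also pull a directory recursively. A rough interactive session would look like the following (the hostname and paths are placeholders for your own):
sftp username@server
get -r /path/to/your/directory /local/destination
quit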

Related

How to start Fedora Atomic VM?

I downloaded a qcow2 image from the official Atomic site, but I am really frustrated with the steps needed to start this qcow2 image, and Google turned up no clear, helpful tips.
Can anyone give me some clear hints on how to start the qcow2 VM? Thanks.
The image name is: Fedora-Atomic-25-20170131.0.x86_64.qcow2
The Fedora Atomic Host (FAH) qcow is a cloud image, so it expects a Metadata source. Metadata is all the configuration bits a generic cloud image uses to get configured. Specifically, it requires something that the cloud-init package recognizes. You can read more about cloud-init here. If you just want to fire off some one-off VMs for testing, a tool you can use is testcloud.
Using testcloud to launch the VM, you'll be able to log in with the user 'fedora' (which is the default in Fedora based cloud images) and the password 'passw0rd' (you can change this default in the config).
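If you want to provide the metadata yourself rather than use testcloud, one common approach is a NoCloud seed ISO built with cloud-localds (from the cloud-utils package) and attached to the VM as a second disk. The sketch below is only an illustration: the cloud-config contents, VM name, and resource sizes are assumptions you would adjust.
# user-data: set a password for the default 'fedora' user (example values only)
cat > user-data <<'EOF'
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
# build the NoCloud seed image, then boot the qcow2 with the seed attached as a CD-ROM
cloud-localds seed.iso user-data
virt-install --name atomic-test --memory 2048 --vcpus 2 \
  --disk path=Fedora-Atomic-25-20170131.0.x86_64.qcow2 \
  --disk path=seed.iso,device=cdrom \
  --import --network network=default --noautoconsole
Once it boots, you can log in on the console or over SSH as 'fedora' with the password from the user-data.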
Another option is to download the installer ISO, install into a fresh VM, and not have to worry about metadata at all. You can find it here, under "Other Downloads" on the right-hand side.
The #fedora-cloud channel on Freenode is a good place to check if you have any other questions.

Google Compute Engine Client FTP Access

I have a client whose site is hosted on Google Compute Engine. They insist on having FTP access to the site. I am somewhat new to Google Compute Engine. I currently use FTP following Google's instructions. However, my client will not know how to install the Google Cloud Console, download and convert the security keys, log in, and move any files they upload into their folder using the shell.
What I would like to do is install some kind of FTP server (proftpd or similar) plus some kind of GUI they can log into to create users in the future and upload files using Filezilla. However, when I install proftpd on Compute Engine (Debian Wheezy) and create a user to test with before installing a GUI, the server keeps denying access.
Anyone with experience setting up FTP on Compute Engine for a client that can help?
You should be able to set this up, but as far as I remember you need to specifically set up firewall rules to open the FTP port on your instances.
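For example, with the gcloud CLI you could open the FTP control port plus a passive-mode port range. The rule name and port range below are only illustrative; the passive range must match whatever you configure in proftpd (its PassivePorts directive):
gcloud compute firewall-rules create allow-ftp \
  --allow tcp:21,tcp:50000-50100 \
  --source-ranges 0.0.0.0/0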

Customizing a GCE Ubuntu VM image

I have a Google Cloud Platform account that I access from a VirtualBox VM. I am using the Google Compute Engine for a project that I am currently working on, and I had to create a custom image based on the Ubuntu 14.04 image that's available there.
I made changes to the Ubuntu image by ssh'ing into an Ubuntu 14.04 instance (from my VBox VM terminal), installing the Matlab compiler runtime, and downloading some other files that I needed. I created the custom image by following the steps in the documentation.
However, now the changes I made are only available to me when I SSH from my VBox VM terminal. I need to be able to run a certain Matlab program via startup scripts. How can I make it so that all users of this image have access to the customizations I made? Is there a way to do this without having to make the edits by ssh'ing from the Developers Console and redoing all the changes?
EDIT: I don't think I was very clear, so I'll give an example. Say my Google account is alexanderlang. When I ssh into an instance created from my custom image from the Developers Console, the bash prompt looks like:
alexanderlang@myinstance $
My VBox username is alex, and when I ssh into the same instance from my VBox terminal, the bash prompt looks like:
alex@myinstance $
alex@myinstance can run Matlab programs, but alexanderlang@myinstance cannot. I'm talking about the same instance, created from the same image. I think this might have something to do with the SSH keys for my custom image, but I don't know how to change or remove those keys.
When you connect to your VM instance via ssh by using either Developers Console or gcloud, the user account is dynamically created (if it doesn't already exist) by setting metadata on the VM. The question is: how does each tool choose your username?
When you use Google Developers Console, the only information it knows about you is your Google Account name, so it uses that, e.g., <first-name>_<last-name> or similar.
When you connect to your instance via gcloud, it knows the value of $USER so it uses that instead.
Note that in either case, your account has passwordless sudo access, so if you want to switch from one account to the other, you can run:
sudo su alex
while logged in as alexanderlang and then you have access to all the programs that alex does.
Similarly, you can run:
sudo su alexanderlang
while logged in as alex to do the reverse.
Startup scripts run as root. To run commands as another user, you need to do two things:
change to that username
run commands as that user
sudo su alex will create a new shell, and the rest of the startup script will not run until you manually exit that shell, which is not what you want.
You can use sudo su alex -c 'command to run' but since what you want to run is a complex script, you need to first save the script to a file, and then run it.
Your options are:
pre-create the shell script to run
dynamically generate it from the startup script
Doing (1) is easy if the script never changes. For frequently-changing scripts (and it sounds like you have many dynamically created VMs), you want to use option (2).
Here's how to do this in a startup script:
# Write the helper script to a file, make it executable, then run it as the target user.
cat > /tmp/startup-script-helper.sh <<EOF
# ... put the script contents here ...
EOF
chmod +x /tmp/startup-script-helper.sh
sudo su alex -c '/tmp/startup-script-helper.sh'
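To wire this into an instance, you can attach the script as startup-script metadata with gcloud; the instance name and file name below are placeholders:
gcloud compute instances add-metadata my-instance \
  --metadata-from-file startup-script=startup.sh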
You can use Packer to create a derived image from a stock GCE VM image. Packer will let you do the following very easily:
boot a GCE VM using an image you specify
run some customization step, e.g., shell script, or Chef/Puppet/etc.
save the resulting image in your Google Cloud Platform project
Then, you can boot any number of new VMs using your newly-created image.
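A minimal Packer template for its googlecompute builder might look like the following sketch; the project ID, zone, image names, and provisioning script are placeholders, and credentials configuration is omitted:
{
  "builders": [{
    "type": "googlecompute",
    "project_id": "my-project",
    "source_image_family": "ubuntu-1404-lts",
    "zone": "us-central1-a",
    "ssh_username": "packer",
    "image_name": "my-custom-image"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "install-matlab-runtime.sh"
  }]
}
You would then run packer build on this template and boot new VMs from the resulting image.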
Note that since your VM image will be stored on Google Cloud Storage, you will be charged for the space it uses. Current pricing for Google Cloud Storage standard class is USD $0.026 / GB / month. A typical VM image should be less than 1GB.
You can see a complete example of how I used Packer to build VMs and pre-install Ambari on them in my GitHub repo.

How do I setup my SFTP connection to my Google Compute Engine instance so that I can read and write files using Filezilla?

I created a LAMP stack instance on Google Compute Engine and followed the instructions for setting up FTP as described here.
Most of this worked; I can view files and FTP files FROM the instance to my local workstation. The problem is I can't FTP files TO the instance. Whenever I try to do so, Filezilla gives me a permission denied error.
I tried right clicking on the "www" folder in Filezilla to set the permissions but that didn't work.
I'm guessing that write permissions have to be set by SSH-ing to the server and executing some sort of command but I'm not sure how to do that.
Any ideas as to how to go about doing this would be appreciated.
By default the /var/www directory is owned by 'www-data' on the Debian instance. You should add your user to the 'www-data' group and give the directory group read and write (g+rw) permissions.
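As a rough sketch (substitute your own username), the commands would be something like:
sudo usermod -a -G www-data your_username
sudo chgrp -R www-data /var/www
sudo chmod -R g+rw /var/www
You'll need to log out and back in (or reconnect the SFTP session) for the new group membership to take effect.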

MySQLdump to directory with WinSCP or similar

On my CentOS VPS server I currently backup all my hosted website files via an automated SFTP session using a script. I use WinSCP for this. Unfortunately, this does not include a backup of the MySQL databases which I have about 20 of.
Is it best to run a scheduled dump of the databases into a folder and then ftp this over, or can I use WinSCP to dump them, individually, on-the-fly into a folder during a session? I would prefer the latter option.
If so, how do I achieve this?
I want to end up with a working backup of my databases, on my local Windows PC, that can be reinstated if required.
Thanks in advance
You can use the WinSCP call command to run mysqldump before you start the download (i.e. before the get or synchronize command).
For examples see:
https://winscp.net/eng/docs/scriptcommand_call
You may need to increase the session timeout (15 seconds by default) to allow the call to finish in time.
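A rough WinSCP script illustrating the idea; the host key, credentials, database name, and paths are placeholders, and you would repeat the dump and download for each of your databases:
option batch abort
open sftp://backupuser@your-vps/ -hostkey="ssh-rsa 2048 xx:xx:..." -timeout=120
call mysqldump --user=dbuser --password=secret mydatabase > /tmp/mydatabase.sql
get /tmp/mydatabase.sql C:\backups\
rm /tmp/mydatabase.sql
exit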
Alternatively, you can run mysqldump using a more appropriate tool, like plink (from the PuTTY suite), before you start the WinSCP script:
https://the.earth.li/~sgtatham/putty/latest/htmldoc/Chapter7.html
scp (and WinSCP, the Windows client variant you're using) is best for file copying. You could use ssh to run a command to dump each of your databases and then copy them.
But you are probably better off setting up a scheduled dump of each database that operates locally to the CentOS server.
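For example, a nightly cron entry on the server could dump everything into a local folder that your existing SFTP script already downloads. The schedule, path, and use of --all-databases below are assumptions; storing credentials in root's ~/.my.cnf is safer than putting them in the crontab:
# /etc/cron.d/mysql-backup -- dump all databases at 02:30 every night
# (the /var/backups/mysql directory must already exist)
30 2 * * * root mysqldump --all-databases --single-transaction | gzip > /var/backups/mysql/all-databases-$(date +\%F).sql.gz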
If you happen to be using one of the well-known content management systems for your website (like WordPress, etc) you should investigate the various excellent and free (free as in speech, free as in beer) backup plugins available for those systems.