I'm thinking about upgrading my company's integration server by putting the repos on a separate disk that would be shared with a backup server. Like so:
[Main Integration Server] ---R/W--- [Repo Vdisk] ---R/O--- [Backup Integration Server]
My problem is that according to the GCE docs, if I attach the same Vdisk to more than one instance, all instances must access the disk in read-only mode. What I'm looking to do is have one instance access it in read-write mode, and one in read-only mode.
Is this at all possible without powering up a third instance to act as a sort of "storage server"?
As you quoted from the docs and as mentioned in my earlier answer, if you attach a single persistent disk to multiple instances, they must all mount it in read-only mode.
Since you're looking for a fully-managed storage alternative so you don't have to run and manage another VM yourself, consider using Google Cloud Storage and mounting your bucket with gcsfuse, which will make it look like a regular mounted filesystem.
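For example, a minimal sketch, assuming a bucket named my-repos already exists and gcsfuse is installed on both instances:

    # On the main integration server (read-write access):
    mkdir -p /mnt/repos
    gcsfuse my-repos /mnt/repos

    # On the backup integration server (read-only, via the standard
    # FUSE mount option):
    mkdir -p /mnt/repos
    gcsfuse -o ro my-repos /mnt/repos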
I'm trying to set up a staging VM for a production site that I have just inherited. The site is running WordPress/WooCommerce and has not been updated in a while, and the VM it's hosted on is running an old version of PHP. Obviously, this all needs to be fixed up, but I'm unfamiliar with GCP Compute Engine. Also, any attempt to run backup/clone plugins crashes the site and requires a restore from the daily snapshot, which is very annoying.
Is it possible to clone the VM/disk to a new instance, point that at a temporary domain, and test/update the site? I have been trying to do this for a while now without much luck; any suggestions would be much appreciated. Thanks.
Creating a clone of an existing VM is possible and quite easy.
Create a snapshot of the VM. If possible, stop the VM before doing this to ensure 100% accuracy; this way you will have an exact snapshot of the drive without any errors. You can also do it while the VM is running, if stopping it is out of the question.
Create a VM from the snapshot: select the snapshot you've just created as the boot disk. Remember to assign a static public IP to this VM, unless you don't mind it changing after a VM restart (and since you're going to do some configuration, a restart is likely). You can change the VM's specs at this time too; nothing stops you from adding/removing CPUs, RAM, etc. It may well be that your VM is underutilised and you can use something smaller to save costs. Or the opposite.
Start the machine. Now you can modify your WP configuration to point to a new domain. Depending on the SSL certificate, you can either use an external one or the one provided by GCP (the most convenient solution).
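As a rough sketch of the first two steps with the gcloud CLI, where the VM name, snapshot name, disk name and zone are placeholders:

    # 1. Snapshot the boot disk (ideally with the VM stopped):
    gcloud compute instances stop prod-site --zone europe-west1-b
    gcloud compute disks snapshot prod-site --zone europe-west1-b \
        --snapshot-names prod-site-snap
    gcloud compute instances start prod-site --zone europe-west1-b

    # 2. Create a disk from the snapshot and boot a new VM from it:
    gcloud compute disks create staging-disk --zone europe-west1-b \
        --source-snapshot prod-site-snap
    gcloud compute instances create staging-site --zone europe-west1-b \
        --disk name=staging-disk,boot=yes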
If you already own a domain you want to use for staging, you can host it in Cloud DNS or at some other provider; just point it to the external IP you just reserved.
If you will be hosting your domain in Cloud DNS, you will find the necessary information in the documentation about managed zones (domains).
You can also consider creating a new VM and using it as a template for a group of VMs (a managed, autoscaled group), with an external HTTPS load balancer in front of it. But this adds complexity, so it's just an idea in case you need to handle a lot more traffic.
I was wondering if custom metadata for Google Compute Engine VM instances is an appropriate place to store sensitive information for configuring apps that run on the instance.
So we use container-optimised OS images to run microservices. We configure the containers with environment variables for things like creds for db connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment, and the best I have come up with so far is to create an instance template whose custom metadata is loaded from a file I keep on my local machine; that metadata is then made available to a systemd unit when the VM starts up (cloud-config).
The essence of this is that environment variable values (some containing creds) are uploaded by me (they don't change very much) and are then pulled from the VM instance metadata server when a new VM is fired up. So I'm just wondering if there are any significant security concerns with this approach...
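For concreteness, the deploy step looks roughly like the sketch below; the template name, file name and image family are illustrative rather than my exact commands:

    # Create an instance template whose custom metadata carries the
    # cloud-config (including the environment variable values).
    gcloud compute instance-templates create my-service-template \
        --image-family cos-stable --image-project cos-cloud \
        --metadata-from-file user-data=cloud-init.yaml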
Many thanks for your help
According to the Compute Engine documentation:
"Is metadata information secure? When you make a request to get information from the metadata server, your request and the subsequent metadata response never leaves the physical host running the virtual machine instance."
Since the request and response do not leave the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
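For example, from a shell on the VM, custom metadata can be read with a plain HTTP request (db-password is a hypothetical metadata key here):

    # Query the metadata server for a custom attribute; no credentials
    # are required beyond shell access to the VM itself.
    curl -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/attributes/db-password"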
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system with customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.
Is it reasonable to use Kubernetes for a clustered database such as MySQL in a production environment?
There are example configurations, such as the mysql galera example. However, most examples do not make use of persistent volumes. As far as I've understood, persistent volumes must reside on some shared file system, as defined here: Kubernetes types of persistent volumes. A shared file system will not guarantee that the database files of the pod will be local to the machine hosting the pod; they will be accessed over the network, which is rather slow. Moreover, there are issues with MySQL and NFS, for example.
This might be acceptable for a test environment. However, what should I do in a production environment? Is it better to run the database cluster outside Kubernetes and run only application servers with Kubernetes?
The Kubernetes project introduced PetSets, a new pod management abstraction intended for running stateful applications. It is an alpha feature at present (as of version 1.4) and moving rapidly. The various issues on the way to beta are listed here. Quoting from the section on when to use PetSets:
A PetSet ensures that a specified number of "pets" with unique identities are running at any given time. The identity of a Pet is comprised of:
a stable hostname, available in DNS
an ordinal index
stable storage: linked to the ordinal & hostname
In addition to the above, it can be coupled with several other features which help one deploy clustered stateful applications and manage them. Coupled with dynamic volume provisioning for example, it can be used to provision storage automatically.
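For illustration, here is a minimal PetSet sketch using the alpha API as of 1.4; the names, image and sizes are placeholders rather than a production configuration, and a matching headless Service named mysql is also required (omitted here for brevity):

    # Create a PetSet whose pets each get their own dynamically
    # provisioned persistent volume via volumeClaimTemplates.
    kubectl create -f - <<EOF
    apiVersion: apps/v1alpha1
    kind: PetSet
    metadata:
      name: mysql
    spec:
      serviceName: mysql
      replicas: 2
      template:
        metadata:
          labels:
            app: mysql
          annotations:
            pod.alpha.kubernetes.io/initialized: "true"
        spec:
          containers:
          - name: mysql
            image: mysql:5.6
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD  # demo only, not for production
              value: "1"
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumeClaimTemplates:
      - metadata:
          name: data
          annotations:
            volume.alpha.kubernetes.io/storage-class: anything
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    EOF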
There are several YAML configuration files available (such as the ones you referenced) using ReplicaSets and Deployments for MySQL and other databases, which may be run in production and probably are being run that way as well. However, PetSets are expected to make it a lot easier to run these types of workloads, while supporting upgrades, maintenance, scaling and so on.
You can find some examples of distributed databases with PetSets here.
The advantage of provisioning persistent volumes which are networked and non-local (such as GlusterFS) is realized at scale. However, for relatively small clusters, there is a proposal to allow for local storage persistent volumes in the future.
I have a Compute Engine instance on Google Cloud which is running fine. The user base is increasing and I wish to upgrade to a bigger machine in terms of CPU and memory.
What is the most easy way to do such migration?
What are the snapshot, image, and persistent disk features in Google Compute Engine? Are they in any way useful for my task?
I figured it out. Lennert's answer is good; I will add a few more things to complete it. You can always stop a VM, edit the CPU/memory and restart it, but this action may change the external IP address and cause a lot of issues. You can handle it, but it may mean further downtime: you may have to update the new IP address in DNS and inside the code. One way to avoid this hassle is to reserve a static IP address [in the console, go to NETWORKING > EXTERNAL IP ADDRESSES > RESERVE A STATIC IP ADDRESS]. If you do this, your IP address will not change when you restart the VM.
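That reservation can also be done from the command line; a sketch with gcloud, where the address shown and the region are placeholders for your VM's current ephemeral IP and its region:

    # Promote the VM's current ephemeral IP to a reserved static one.
    gcloud compute addresses create my-static-ip \
        --addresses 203.0.113.10 --region us-central1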
An image is essentially the operating system. While creating a VM, you are asked to choose a boot disk, the disk your VM boots from. You can select from predefined images.
A snapshot is a copy of a disk. If it is a boot disk, it contains the operating system image too. We can create a snapshot of an existing disk and use it as the boot disk while creating a new VM.
A persistent disk is a disk that persists even if you delete the VM [provided you deselected the option to delete it while deleting the VM]. We can delete a VM and use its persistent disk to create new ones. We can even pay for the persistent disk only, without having any VM.
The easiest way is to stop the machine, change the machine type from the console and start the machine again. No need to create backups (snapshots), new VMs, etc.
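The same flow with the gcloud CLI; the VM name, zone and machine type below are placeholders:

    # Stop, resize, and restart the VM.
    gcloud compute instances stop my-vm --zone us-central1-a
    gcloud compute instances set-machine-type my-vm \
        --zone us-central1-a --machine-type n1-standard-4
    gcloud compute instances start my-vm --zone us-central1-a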
I am new to GCE. I was able to create a new instance using the gcutil tool and the GCE console. There are a few questions unclear to me, and I need help:
1) Does GCE provide a persistent disk when a new instance is created? I think it's 10 GB by default, but I'm not sure. What is the right way to stop the instance without losing data saved on it, and what will be the charge (US zone) if, say, I need 20 GB of disk space for that?
2) If I need SSL to enable HTTPS, are there any extra steps I should take? I think I will need to add a firewall rule as per the gcutil addfirewall command and create a certificate (or install one from a third party)?
1) Persistent disk is definitely the way to go if you want a root drive on which data retention is independent of the life cycle of any virtual machine. When you create a Compute Engine instance via the Google Cloud Console, the “Boot Source” pull-down menu presents the following options for your boot device:
New persistent disk from image
New persistent disk from snapshot
Existing persistent disk
Scratch disk from image (not recommended)
The default option is the first one ("New persistent disk from image"), which creates a new 10 GB PD, named after your instance name with a 'boot-' prefix. You could also separately create a persistent disk and then select the "Existing persistent disk" option (along with the name of your existing disk) to use an existing PD as a boot device. In that case, your PD needs to have been pre-loaded with an image.
Re: your question about the cost of a 20 GB PD, here are the PD pricing details.
Read more about Compute Engine persistent disks.
2) You can serve SSL/HTTPS traffic from a GCE instance. As you noted, you'll need to configure a firewall to allow your incoming SSL traffic (typically port 443) and you'll need to configure https service on your web server and install your desired certificate(s).
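For example, with the current gcloud CLI (rather than the legacy gcutil), a rule along these lines opens port 443; the rule name is arbitrary:

    # Allow inbound HTTPS traffic to instances in the default network.
    gcloud compute firewall-rules create allow-https --allow tcp:443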
Read more about Compute Engine networking and firewalls.
As an alternative approach, I would suggest deploying VMs using Bitnami. There are many stacks you can choose from, which will save you time when deploying the VM. I would also suggest going with SSD disks, as the pricing is close between magnetic disks and SSDs but the performance boost is huge.
As for serving the content over SSL, you need to figure out how the requests will be processed. You can use NGINX or Apache servers. In either case you would need to configure the virtual hosts for the default ports: 80 for non-encrypted and 443 for SSL traffic.
The easiest way to serve SSL traffic from your VM is to generate SSL certificates using the Let's Encrypt service.
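For example, a minimal sketch on a Debian/Ubuntu VM running NGINX; package names vary by distribution, and example.com is a placeholder for your domain:

    # Install certbot with the NGINX plugin, then request and install
    # a certificate (certbot edits the NGINX config for you).
    sudo apt-get install -y certbot python3-certbot-nginx
    sudo certbot --nginx -d example.com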