Google Compute Engine as an alternative to Amazon Web Services (EC2, ELB, etc.)

I am trying to evaluate Google Compute Engine (GCE) for a cloud project in our company. We have some experience working with Amazon Web Services but would like to know whether GCE is a better alternative for our project.
I have the following questions. Our choice for the project will be based on the answers, so please help me with these queries.
Is there an equivalent of AWS Route 53 and Elastic Load Balancer on Google Cloud? If not, how do we load balance GCE instances?
Is there a concept of regions (such as us-east-coast-1, us-west-coast-1, etc.)? This is helpful for making sure the service is not affected during natural calamities.
Is there an equivalent of CloudWatch to help us auto-scale Compute Engine instances based on load?
Can we set up a private cloud on Google Cloud Platform?
Can we get persistent public IP addresses for GCE instances?
Are there any advantages (in terms of tighter integration OR pricing) when using Google services such as Google Analytics, YouTube, DoubleClick, etc?

Load Balancing
Google Cloud Platform's Compute Engine (GCE) recently added a load-balancing feature. It's lower-level than ELB (it only supports TCP/UDP, not HTTP(S)).
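For reference, here is a minimal sketch of setting up GCE network load balancing with the gcloud CLI. The pool, rule, and instance names are placeholders, and flag names may differ between Cloud SDK versions:

    # Create a target pool in a region and add existing instances to it
    gcloud compute target-pools create www-pool --region us-central1
    gcloud compute target-pools add-instances www-pool --instances www-1,www-2 --instances-zone us-central1-a
    # Create a forwarding rule that sends TCP traffic on port 80 to the pool
    gcloud compute forwarding-rules create www-rule --region us-central1 --port-range 80 --target-pool www-pool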
Regions
GCE has feature parity here: AWS Regions correspond to GCE regions, and AWS Availability Zones correspond to GCE zones.
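Assuming the Cloud SDK is installed, you can list the regions and zones available to your project directly from the CLI:

    gcloud compute regions list
    gcloud compute zones list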
Autoscaling (CloudWatch)
Google Compute Engine does not have autoscaling, but Google App Engine does. Third-party tools such as Scalr or RightScale are, however, compatible with Google Compute Engine.
Disclaimer: I do work at Scalr.
Private Cloud
Did you mean dedicated instances? Those are not available in GCE.
If you meant VPC, then you can use GCE networks to achieve isolation. You'll also want to disable ephemeral external IP addresses for the instances you want to isolate.
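A minimal sketch of that isolation setup with gcloud; the network name, firewall rule, and IP range below are examples rather than a prescribed configuration:

    # Create a separate network for the isolated instances
    gcloud compute networks create private-net
    # Allow only internal traffic from the network's own range (example range)
    gcloud compute firewall-rules create private-allow-internal --network private-net --allow tcp,udp,icmp --source-ranges 10.240.0.0/16
    # Launch an instance on that network with no ephemeral external IP
    gcloud compute instances create isolated-vm --zone us-central1-a --network private-net --no-address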
Persistent IPs
GCE has persistent IPs; they are called "Reserved Addresses".
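For example, you can reserve an address and attach it when creating an instance (names here are placeholders; --address also accepts a literal IP):

    # Reserve a static external IP in a region
    gcloud compute addresses create my-static-ip --region us-central1
    # Use the reserved address for a new instance
    gcloud compute instances create web-1 --zone us-central1-a --address my-static-ip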
Integration with other services
You will likely get better latency to Google services you use in your backend (I recall a couple of presentations at Google I/O about Google App Engine + BigQuery).
For frontend services (Google Analytics), you'll likely see no benefit, since this depends on your users, not your servers.

Related

Monitoring unhealthy hosts on google cloud

I am using an external monitoring service (not Stackdriver).
I wish to monitor the number of unhealthy hosts on my load balancer.
It seems like the Google Cloud API doesn't expose this metric,
so I implemented a custom script that gets the load balancer's instance groups, fetches the instances' data (DNS), and performs the health checks itself.
Pretty cumbersome. Is there a simpler way to do it?
You can use the command 'gcloud compute backend-services get-health' to get the status of each instance in your backend service. It reports the current status of each instance, HEALTHY or UNHEALTHY.
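For example, a quick way to count unhealthy backends from a monitoring script (the backend service name is a placeholder, and depending on your gcloud version you may also need a --global or --region flag):

    gcloud compute backend-services get-health my-backend-service | grep -c UNHEALTHY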

DNS & Routing for Google Compute Engine VMs from Kubernetes on Container Engine

My kubernetes pods are all able to resolve hostnames and ping servers that are on the wider Internet, but they can't do either for our VMs running in the same zone & region on Google Compute Engine.
How does one tell kubernetes / docker to allow outbound traffic to the Google Compute Engine environment (our subnet is 10.240.0.0) and to resolve hostnames for that subnet using 10.240.0.1?
Very silly mistake on my part.
Our Google Container Cluster was configured to use a custom network in the Google Developer Console, while our Google Compute Engine VMs were all configured to use the default network.
That explains it. Make sure the machines are all on the same network, and everything works as you'd hope.
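If you want to verify this up front, you can check which network each side uses (cluster and instance names below are placeholders):

    # Network used by the container cluster
    gcloud container clusters describe my-cluster --zone us-central1-a --format="value(network)"
    # Network used by a Compute Engine VM
    gcloud compute instances describe my-vm --zone us-central1-a --format="value(networkInterfaces[0].network)"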

Google Container Engine Architecture

I was exploring the architecture of Google's IaaS/PaaS offerings, and I am confused as to how GKE (Google Container Engine) runs in Google's data centers. From this article (http://www.wired.com/2012/07/google-compute-engine/) and also from some of the Google I/O 2012 sessions, I gathered that GCE (Google Compute Engine) runs the provisioned VMs using KVM (Kernel-based Virtual Machine); these VMs run inside Google's cgroups-based containers (this allows Google to schedule user VMs the same way they schedule their existing container-based workloads, probably using Borg/Omega). Now how does Kubernetes figure into this, given that it makes you run Docker containers on GCE-provisioned VMs, and not on bare metal? If my understanding is correct, then Kubernetes-scheduled Docker containers run inside KVM VMs which themselves run inside Google cgroups containers scheduled by Borg/Omega...
Also, how does Kubernetes networking fit into Google's existing GCE Andromeda software-defined networking?
I understand that this is a very low-level architectural question, but I feel that understanding the internals will improve my understanding of how user workloads eventually run on bare metal. Also, I'm curious whether the whole arrangement of running containers on VMs inside containers is necessary from a performance point of view. E.g., doesn't networking performance degrade with multiple layers? Google mentions in its Borg paper (http://research.google.com/pubs/archive/43438.pdf) that they run their container-based workloads without a VM (they don't want to pay the "cost of virtualization"); I understand the logic of running public external workloads in VMs (better isolation, a more familiar model, heterogeneous workloads, etc.), but with Kubernetes, can our workloads not be scheduled directly on bare metal, just like Google's own workloads?
It is possible to run Kubernetes on both virtual and physical machines; see this link. Google's Cloud Platform only offers virtual machines as a service, and that is why Google Container Engine is built on top of virtual machines.
In Borg, containers allow arbitrary sizes, and they don't pay any resource penalties for odd-sized tasks.

GCP Cloud Trace for GCE

I have GCE instances and serve our own web APIs from the VMs. I really want to use GCP Cloud Trace to check the latency. Can I use it on GCE? Some say that it works on GAE. As far as I've read the docs, it does not seem that I can.

Load balancing mix of GCE and non-GCE nodes

I am planning a deployment to Google Compute Engine, and I have a customer requirement which forces me to keep some servers outside of Google Cloud.
Is it possible to load balance traffic to these instances along with GCE instances?
Thanks in advance.
The short answer is: no, you can't add external instances to a Google Cloud load balancer.
You could consider creating an instance that proxies traffic to your external server, but that is not a good, fast, or reliable solution, and I assume your customer wants load balancing precisely to get a reliable and fast configuration.
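If you do experiment with the proxy workaround anyway, a very rough sketch is to run a TCP forwarder such as socat on a GCE instance that sits in the load-balanced pool (the hostname and port below are placeholders):

    # On a GCE instance in the load-balanced pool:
    # forward incoming TCP connections on port 80 to the external server
    socat TCP-LISTEN:80,fork,reuseaddr TCP:external-backend.example.com:80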
Regards.
Paolo