Google Cloud Stackdriver metrics to scale a regional managed instance group - google-compute-engine

What Stackdriver metrics can we use to autoscale regional managed instance groups? When I check the docs, it says: "Regional managed instance groups do not support filtering for per-instance metrics. Regional managed instance groups do not support autoscaling using per-group metrics."
Does that mean I cannot use any Stackdriver metric other than CPU?

Based on the documentation, you're right that regional managed instance groups have some limitations:
You cannot autoscale based on Cloud Monitoring logs-based metrics.
Regional managed instance groups do not support filtering for per-instance metrics.
Regional managed instance groups do not support autoscaling using per-group metrics.
However, you can still use the autoscaler through its standard policies:
Scaling based on CPU utilization
Scaling based on load balancing serving capacity
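To give a feel for what the CPU-utilization policy does, the autoscaler keeps the group's average utilization near the target by growing the group roughly in proportion to observed load. Here is a minimal sketch of that proportional calculation; the function name and the rounding guard are assumptions for illustration, not the autoscaler's actual algorithm:

```python
import math

def recommended_size(current_size: int, observed_avg_cpu: float,
                     target_cpu: float) -> int:
    """Simplified proportional model: resize the group so average CPU
    utilization moves back toward the target."""
    if current_size <= 0 or observed_avg_cpu <= 0:
        return 1
    # Round before ceil to dodge float noise at exact boundaries.
    ratio = round(current_size * observed_avg_cpu / target_cpu, 6)
    return max(1, math.ceil(ratio))

# 4 instances averaging 90% CPU against a 0.6 target -> 6 instances
print(recommended_size(4, 0.90, 0.60))
```

The key point is that this policy needs only per-instance CPU measurements, which is why it remains available to regional groups despite the per-group metric limitation.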

Related

Isn't Google App Engine supposed to be more expensive than Google Kubernetes Engine?

I had my app in App Engine (Flex), but it's costing a lot with no traffic yet!
I decided to move that to Kubernetes Engine which utilizes the compute engine.
Another reason I moved to Kubernetes is that I wanted to run Docker container services like Memcached, which come with an added cost in App Engine Flex.
If you are tempted to ask why I am not using App Engine Standard, which is more economical, that's because I couldn't find an easy way, if there is one at all, to run services like GDAL and Memcached.
I thought Kubernetes should be a cheaper option, but what I am seeing is the opposite.
I have even had to change the machine type to g1-small from N1...
Am I missing something?
Any ideas on how to reduce cost in Kubernetes / compute engine instances?
Please have a look at the documentation GKE Pricing and App Engine Pricing:
GKE clusters accrue a management fee of $0.10 per cluster per hour,
irrespective of cluster size or topology. One zonal (single-zone or
multi-zonal) cluster per billing account is free.
GKE uses Compute Engine instances for worker nodes in the cluster. You
are billed for each of those instances according to Compute Engine's
pricing, until the nodes are deleted. Compute Engine resources are
billed on a per-second basis with a one-minute minimum usage cost.
and
Apps running in the flexible environment are deployed to virtual
machine types that you specify. These virtual machine resources are
billed on a per-second basis with a 1 minute minimum usage cost.
Billing for the memory resource includes the memory your app uses plus
the memory that the runtime itself needs to run your app. This means
your memory usage and costs can be higher than the maximum memory you
request for your app.
So, both GAE Flex instances and GKE worker nodes are "billed on a per-second basis with a 1 minute minimum usage cost".
To estimate usage cost in advance you can use the Google Cloud Pricing Calculator; you can also use it to estimate how changing your cluster's parameters can reduce cost and which solution is more cost-effective.
In addition, please have a look at the documentation Best practices for running cost-optimized Kubernetes applications on GKE.

Are GCP CloudSQL instances billed by usage?

I'm starting a project where a Cloud SQL instance would be a great fit; however, I've noticed they are twice the price of a same-specification VM on GCP.
I've been told by several devops guys I work with that they are billed by usage only, which would be perfect for me. However, their pricing page states "Instance pricing for MySQL is charged for every second that the instance is running".
https://cloud.google.com/sql/pricing#2nd-gen-pricing
I also see several people around the web saying they are usage only.
Cloud SQL or VM Instance to host MySQL Database
Am I interpreting Google's pricing pages incorrectly?
Am I going to be billed for the instance being on or for its usage?
Billed by usage
It all depends on what you mean by usage. When you run a Cloud SQL instance, it's like a server (Compute Engine): until you stop it, you pay for it. It's not pay-per-request pricing, as you can have with BigQuery.
With Cloud SQL, you also pay for the storage that you use, and the storage can grow automatically with usage. Be careful: the storage can't be reduced, even if you delete data from the database!
The price is twice that of a similar Compute Engine instance
True! A Compute Engine n1-standard-1 is about $20 per month, and the same config on Cloud SQL is about $45.
BUT, what about the price of the management of your own SQL instance?
You have to update/patch the OS
You have to update/patch the DB engine (MySQL or Postgres)
You have to manage the security/network access
You have to perform snapshots, ensure that the restoration works
You have to ensure the High Availability (people on call in case of server issue)
You have to tune the Database parameters
You have to watch your storage and increase it when needed
You have to set up your replicas manually
Is it worth twice the price? For me, yes, but it all depends on your skills and your opinion.
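One way to frame the "is it worth twice the price" question is to compare the managed premium against the value of the operations work it replaces. A back-of-the-envelope sketch, using the $20/$45 figures from this answer and an assumed (hypothetical) hourly rate for ops time:

```python
def managed_premium_break_even(self_managed_monthly: float,
                               managed_monthly: float,
                               ops_hourly_rate: float) -> float:
    """Monthly hours of ops work (patching, backups, HA, tuning, ...)
    at which the managed service pays for itself."""
    return (managed_monthly - self_managed_monthly) / ops_hourly_rate

# $45 Cloud SQL vs a $20 self-managed VM, at an assumed $50/h ops rate:
print(managed_premium_break_even(20, 45, 50))  # -> 0.5 hours per month
```

By this rough model, if running your own MySQL costs you more than about half an hour of work per month, the managed premium is already covered.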
There are a lot of hidden configuration options that, when modified, can quickly halve your costs.
Practically speaking, GCP's SQL product only works by running 24/7; there is no time-based "by usage" option, short of manually stopping and restarting the instance.
There are a lot of tricks you can follow to lower costs; you can read many of them here: https://medium.com/@the-bumbling-developer/can-you-use-google-cloud-platform-gcp-cheaply-and-safely-86284e04b332

Why does Google Compute Engine autoscaling create too many instances when multi-zone is selected?

I've been using autoscaling based on CPU usage. We used to set it up using a single zone, but to ensure instance availability we are now creating it with multi-zone enabled.
Now it seems to create many more instances than required according to CPU usage. I believe it has to do with the fact that instances are created across different zones, and the total usage calculation is somehow not taking that into consideration.
According to the documentation, the regional autoscaler needs at least 3 instances, located in 3 different zones, even if your utilisation is lower and could be served from an instance in a single zone. This is to provide resiliency, because a region is less likely to go down than a single zone.
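A simplified model of why multi-zone groups overshoot (this is an illustration of even spreading with a per-zone minimum, not the autoscaler's exact placement algorithm):

```python
import math

def regional_group_size(required: int, num_zones: int = 3) -> int:
    """Sketch: a regional group keeps at least one instance per zone
    and spreads instances evenly, so the actual size is the required
    size rounded up to a multiple of the zone count."""
    per_zone = max(1, math.ceil(required / num_zones))
    return per_zone * num_zones

print(regional_group_size(1))  # -> 3: one instance in each of 3 zones
print(regional_group_size(4))  # -> 6: two per zone across 3 zones
```

Under this model, a CPU load that would justify 1 or 4 instances in a single zone yields 3 or 6 in a regional group, matching the "more instances than required" observation.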

Compute Engine Autoscaler and the Monitoring Agent

When using Compute Engine Autoscaler, is it possible to autoscale based on a cloud metric obtained by the Monitoring Agent (Stackdriver)?
For example, I want to use the number of requests per second serviced by Apache as a metric.
Yes. In the Developers Console, select "Monitoring metric" as the value of the "Autoscale based on" drop-down list and fill out "Target monitoring metric" with your desired Cloud Monitoring metric.
You can accomplish this with the gcloud command as well.

How does Google Cloud SQL Performance Tiers compare with Amazon AWS RDS for MySQL?

There are only 7 performance tiers in Google Cloud SQL (D0, D1, D2, D4, D8, D16, D32), and RAM maxes out at 16 GB (D32), as they are based on Google Compute Engine (GCE) machine types. See screenshot below (1)
By comparison, Amazon has 13 performance tiers, with db.r3.8xlarge's RAM maxing out at 244 GB. (2)
So my question is, what is the rough equivalent performance tier in AWS RDS for MySQL for a Google Cloud SQL's D32 tier?
Disclaimer: I am new to Google Cloud SQL. I only started using Cloud SQL because I started a new job that's 100% Google Cloud. I have been an AWS user since the early days.
The D0-D32 Cloud SQL tiers are not based on GCE VMs, so a direct comparison is not straightforward. Note that the storage for D0-D32 is replicated geographically, and that makes writes a lot slower. The ASYNC mode improves the performance of small commits. The upside is that the instances can be relocated quickly between locations that are far apart.
The connectivity for Cloud SQL is also different from RDS. RDS can be accessed using IPs, and the latency is comparable to VMs talking over local IPs. Cloud SQL uses only external IPs, which makes the latency from GCE higher (~1.25 ms), but it provides a slightly better experience for connections coming from outside Google Cloud because the TCP connections are terminated closer to the clients.
That being said, from a memory point of view, the db.m3.xlarge from RDS is the closest match for the D32 from Cloud SQL. If the working set fits in the memory the performance for some queries will be similar.
Currently in alpha, there is a new feature of Cloud SQL that offers performance comparable to GCE machine types.
A Google Cloud SQL Performance Class instance is a new Google Cloud
SQL instance type that provides better performance and more storage
capacity, and is optimized for heavy workloads running on Google
Compute Engine. The new instance type is an always-on, long-lived
database as compared to a Standard Class instance type. Performance
Class instances are based on tiers that correspond to Google Compute
Engine (GCE) standard or highmem machine types.
Link: https://cloud.google.com/sql/docs/getting-started-performance-class
Anyway, very good question. Comparing prices with AWS, I found a huge difference in resources between the smallest instances at roughly the same price:
Cloud SQL, D0 = $0.025 per hour (0.128 GB RAM + "an appropriate amount of CPU")
AWS, db.t2.micro = $0.02 per hour (1 GB RAM + 1 vCPU)
For 1 GB of RAM on Google's side, one would have to pay about $0.19 per hour. Unfortunately, Google does not specify anything about SSD storage, something very important for performance comparison.
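The per-GB gap can be made explicit by normalizing both prices to RAM. A quick sketch using the 2014-era list prices quoted above (these are the thread's figures, not current pricing):

```python
def cost_per_gb_hour(hourly_price: float, ram_gb: float) -> float:
    """Normalize an instance price to dollars per GB of RAM per hour."""
    return hourly_price / ram_gb

d0 = cost_per_gb_hour(0.025, 0.128)  # Cloud SQL D0
t2 = cost_per_gb_hour(0.02, 1.0)     # AWS RDS db.t2.micro
print(round(d0, 3), round(t2, 3))    # roughly 0.195 vs 0.02 per GB-hour
```

On a per-GB-of-RAM basis, the smallest Cloud SQL tier works out to nearly ten times the RDS price, which is where the "$0.19 per hour for 1 GB" figure above comes from.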