Google Cloud MySQL 2nd Generation vs Compute Instance with MySQL

The new Google Cloud MySQL 2nd Generation spins up its own VM instance to run the MySQL server.
What is the difference between using a 2nd Generation instance and using my own Compute Engine VM instance with a manually installed MySQL server? Are there any advantages when it comes to high availability, security, or performance?

Adding to the answer that Terry posted, and answering your question in the comment:
You can create a highly available Cloud SQL Second Generation by doing the following:
Set up your master instance correctly, including sizing it appropriately and enabling binary logging. The master instance must have at least one backup taken after binary logging was enabled. You should place your master instance in a zone that's close to your other services. See preparing the master instance.
Create one failover replica in a different zone than the master. See creating a failover replica.
Optionally, create one or more read replicas. Note that a master instance with a failover replica is sufficient for creating a highly available configuration.
Optionally, test failover. Keep in mind that testing the failover moves the master to a new zone.
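For anyone who prefers to script these steps, the same failover replica can also be created through the Cloud SQL Admin API. The snippet below is only a minimal sketch of that call, assuming the google-api-python-client library and placeholder project, instance, region, zone, and tier values; check the current API reference before relying on it.

```python
# Minimal sketch (not the documented console flow): create a failover
# replica for an existing Cloud SQL Second Generation master via the
# Cloud SQL Admin API (v1beta4). Assumes google-api-python-client is
# installed and Application Default Credentials are configured; the
# project, instance, region, zone, and tier below are placeholders.
from googleapiclient import discovery

project = "my-project"             # hypothetical project ID
master = "master-1"                # existing master with binary logging + backup
replica = "master-1-failover"      # name for the new failover replica

sqladmin = discovery.build("sqladmin", "v1beta4")

body = {
    "name": replica,
    "masterInstanceName": master,
    "region": "us-central1",                        # must match the master's region
    "replicaConfiguration": {"failoverTarget": True},
    "settings": {
        "tier": "db-n1-standard-1",                 # size it like the master
        "locationPreference": {"zone": "us-central1-b"},  # a different zone
    },
}

operation = sqladmin.instances().insert(project=project, body=body).execute()
print(operation["name"])   # long-running operation ID you can poll
```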
To answer your question "So what happens if the VM instance they create fails?"
A master instance falls out of high availability mode when the failover replica becomes unavailable. This can happen, for example, if the network connection between the master instance and the failover replica is interrupted, or if the failover replica is down due to a failure in its own zone. During this time the master instance is not in high availability mode, and you will not be able to fail over to the replica because it is not safe to do so. The failover replica resumes replication on reconnection, and high availability mode is re-enabled once the failover replica finishes catching up.

The major difference is that Cloud SQL v2 does not have to be managed: Google Cloud handles management, replication, and snapshots for you. Additionally, Cloud SQL v2 works with the App Engine standard and flexible runtimes through the Cloud SQL Proxy, which also allows flexible but secure connections from other clients.
In return, you do not get any access to the underlying system.
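To illustrate the proxy-based connection model: the application talks to the proxy's local endpoint instead of the instance's public IP, and the proxy handles the encrypted tunnel and authorization. A minimal sketch, assuming the proxy is already running locally on 127.0.0.1:3306 and PyMySQL is installed; the user, password, and database names are placeholders.

```python
# Minimal sketch: connect through a locally running Cloud SQL Proxy.
# The proxy listens on 127.0.0.1 and forwards traffic to the instance
# over an encrypted tunnel, so the client only needs local settings.
# Assumes the proxy is already running on port 3306; user, password,
# and database are placeholders.
import pymysql

conn = pymysql.connect(
    host="127.0.0.1",      # the proxy's local endpoint, not the instance IP
    port=3306,
    user="app_user",       # hypothetical MySQL user
    password="secret",     # placeholder
    database="app_db",     # placeholder
)

with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())

conn.close()
```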

Related

Google Cloud SQL Failover, how to set it up?

I am trying to set up a new failover to my SQL instance.
This is my second instance. I made the first one a year ago, so I don't exactly remember the procedure to create a Failover instance.
When I am creating the main instance, under
"Backups, recovery, and high availability" > "Availability"
I select:
• High availability (regional)
Automatic failover to another zone within your selected region. Recommended for production instances. Increases cost.
Is this enough to ensure I have a failover?
I am asking because after I created the instance I see there is no failover.
While under my first instance - the old one I created a year ago - I see a MySQL failover. Like this:
Instance ID                  Type
- sql-old-instance           MySQL 5.6
- sql-old-instance-failover  MySQL Failover
- sql-new-instance           MySQL 5.7
Why is there no failover under the new one? Is there a different way of creating it?
Thank you
Failover replicas are now called standby replicas, and they no longer show up as separate instances in the instance list; the standby only becomes active when a failover event occurs.
The new failover strategy works like the old strategy, but now the data is written synchronously to the master instance disk and the standby instance disk, thus reducing the downtime in a failover event.
If you selected the High availability (regional) option, a standby replica is created for your Cloud SQL instance.
You can test high availability by performing a manual failover: this starts the standby instance in another zone, which then serves all queries.
To return traffic to the original primary, you perform a second manual failover, known as a failback.
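As a concrete way to run that test, the sketch below triggers a manual failover with the gcloud CLI from Python. The project and instance names are placeholders; running the same command a second time, once the standby is serving traffic, performs the failback described above.

```python
# Minimal sketch: trigger a manual failover of an HA Cloud SQL instance
# by shelling out to the gcloud CLI. Running the same command again,
# once the standby is serving traffic, performs the failback.
# Assumes gcloud is installed and authenticated; names are placeholders.
import subprocess

project = "my-project"         # hypothetical project ID
instance = "sql-new-instance"  # the HA instance from the question

subprocess.run(
    ["gcloud", "sql", "instances", "failover", instance,
     "--project", project, "--quiet"],
    check=True,
)
```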

What happens when Amazon is backing up an RDS instance?

I'm using RDS (MySQL) with one of my Laravel projects, but one question keeps nagging me: what happens to the project while Amazon is creating a backup of the RDS instance? Does it:
Freeze
Throw an exception
Keep working normally
For a single-instance RDS deployment, database I/O may be suspended for a few seconds while the snapshot is created. During this period all requests to the database are paused, but they resume once the snapshot has been taken.
So if you have a web app, requests received during the I/O suspension period will be served more slowly than usual.
You can mitigate this with a Multi-AZ RDS deployment, because in that case the snapshot is taken from the standby instance, so there is no I/O suspension on the primary instance.
Relevant documentation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow
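If that brief pause matters to your application, a small retry wrapper around the query path turns it into extra latency instead of an error. This is a generic sketch rather than anything Laravel-specific, assuming PyMySQL and placeholder endpoint/credentials:

```python
# Minimal sketch: retry a query with a short backoff so a brief I/O
# suspension during a single-AZ RDS snapshot shows up as extra latency
# rather than as an error. Endpoint and credentials are placeholders.
import time
import pymysql

def query_with_retry(sql, retries=5, delay=2.0):
    last_error = None
    for _ in range(retries):
        try:
            conn = pymysql.connect(
                host="mydb.abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
                user="app_user", password="secret", database="app_db",
                connect_timeout=5,
            )
            try:
                with conn.cursor() as cur:
                    cur.execute(sql)
                    return cur.fetchall()
            finally:
                conn.close()
        except pymysql.err.OperationalError as exc:
            last_error = exc
            time.sleep(delay)        # wait out the snapshot pause
    raise last_error

print(query_with_retry("SELECT NOW()"))
```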
Your application will continue to work normally during backups. Since AWS RDS uses volume snapshots, the MySQL service keeps running without any interruption. This is also how manual snapshots and point-in-time recovery work.

Persistently replicating an RDS MySQL database to an external slave

AWS now allows you to replicate data from an RDS instance to an external MySQL database.
However, according to the docs:
Replication to an instance of MySQL running external to Amazon RDS is only supported during the time it takes to export a database from a MySQL DB instance. The replication should be terminated when the data has been exported and applications can start accessing the external instance.
Is there a reason for this? Can I choose to ignore this if I want the replication to be persistent and permanent? Or does AWS enforce this somehow? If so, are there any work-arounds?
It doesn't look like Amazon explicitly states why they don't support ongoing replication other than the statement you quoted. In my experience, if AWS doesn't explicitly document a reason for why they do something then you're not likely to find out unless they decide to document it at a later time.
My guess would be that it has to do with the dynamic nature of Amazon instances and how they operate within RDS. RDS instances can have their IP address change suddenly and without warning. We've encountered that on more than one occasion with the RDS instances that we run. According to the RDS Best Practices guide:
If your client application is caching the DNS data of your DB instances, set a TTL of less than 30 seconds. Because the underlying IP address of a DB instance can change after a failover, caching the DNS data for an extended time can lead to connection failures if your application tries to connect to an IP address that no longer is in service.
Given that RDS instances can and do change their IP address from time to time, my guess is that they simply want to avoid having to support people who set up external replication only to have it suddenly break if/when an RDS instance gets assigned a new IP address. Unless you set the replication user and any firewalls protecting your external MySQL server to be pretty wide open, replication could suddenly stop if the RDS master reboots for any reason (maintenance, hardware failure, etc.). From a security point of view, opening up your replication user and firewall port like that is not a good idea.
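For anyone who decides to run persistent replication anyway, the moving parts look roughly like this: extend binlog retention on the RDS master, load a dump onto the external server, then point the external replica at the RDS endpoint. The sketch below is only an outline, assuming PyMySQL, a replication user that already exists on RDS, and binlog coordinates recorded when the dump was taken; every hostname, credential, and coordinate is a placeholder.

```python
# Rough outline of wiring an external MySQL replica to an RDS master.
# Assumes a replication user already exists on RDS, a dump has been
# loaded onto the external server, and the binlog file/position below
# were recorded when the dump was taken. All hosts, credentials, and
# coordinates are placeholders.
import pymysql

RDS_ENDPOINT = "mydb.abc123.us-east-1.rds.amazonaws.com"  # placeholder

# 1) On the RDS master: keep binlogs long enough for the replica to
#    catch up (RDS exposes this through a stored procedure).
rds = pymysql.connect(host=RDS_ENDPOINT, user="admin",
                      password="secret", autocommit=True)
with rds.cursor() as cur:
    cur.execute("CALL mysql.rds_set_configuration('binlog retention hours', 168)")
rds.close()

# 2) On the external replica: point it at the RDS endpoint and start
#    replicating from the recorded coordinates.
ext = pymysql.connect(host="replica.example.com", user="root",
                      password="secret", autocommit=True)
with ext.cursor() as cur:
    cur.execute("""
        CHANGE MASTER TO
            MASTER_HOST='{}',
            MASTER_USER='repl_user',
            MASTER_PASSWORD='repl_password',
            MASTER_LOG_FILE='mysql-bin-changelog.000123',
            MASTER_LOG_POS=4
    """.format(RDS_ENDPOINT))
    cur.execute("START SLAVE")
ext.close()
```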

Can Google Cloud SQL perform automatic instance failover across zones?

According to documentation, data of Cloud SQL is replicated across multiple zones. But this can only prevent data loss in the event of zone outage. In order to ensure high availability of the service, does Cloud SQL offer cross-zone DB instance failover capability similar to Amazon RDS?
All Cloud SQL data is replicated in multiple zones. If there is a zone outage, the instance fails over to another available zone automatically.
See https://developers.google.com/cloud-sql/faq#replication
Failover is automatic and mostly transparent to the client; all that database clients need to do is reconnect when the connection is lost.
https://cloud.google.com/sql/docs/mysql/high-availability
This is not what many 'enterprise' databases would consider high availability. Products such as Oracle RAC with hot/hot or master/master setups fail over transparently: clients do not notice that an instance has died, and the application keeps running without any outage.
With Cloud SQL your failover instance is cold and only gets started after Google notices that your primary instance has stopped responding for about a minute, so there is still an outage of a few minutes. The main advantage is the replication: if a disaster takes out the whole zone, you can be up and running in another zone within a few minutes.
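In practice, "just reconnect" can be as simple as pinging the connection before use and re-opening it if the server has gone away. A minimal sketch with PyMySQL, where the host and credentials are placeholders (connect through the proxy or the instance IP, whichever fits your setup):

```python
# Minimal sketch: keep a long-lived connection usable across a Cloud SQL
# failover by pinging it before use and reconnecting if the server has
# gone away. Host and credentials are placeholders.
import pymysql

class ReconnectingConnection:
    def __init__(self, **params):
        self._params = params
        self._conn = pymysql.connect(**params)

    def get(self):
        try:
            # ping(reconnect=True) re-establishes the session if the
            # server went away, e.g. during a zone failover.
            self._conn.ping(reconnect=True)
        except pymysql.err.Error:
            self._conn = pymysql.connect(**self._params)
        return self._conn

db = ReconnectingConnection(
    host="127.0.0.1", port=3306,    # e.g. via the Cloud SQL Proxy
    user="app_user", password="secret", database="app_db",
)

with db.get().cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
```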

Amazon RDS: can databases be set up in replication mode?

I am studying the new Amazon RDS product and it seems it can only be scaled vertically (i.e. by moving to a stronger server).
Did anyone see a possibility to configure multiple instances so that one is master and the other/s is/are replication slaves?
Same question asked (and answered) here http://developer.amazonwebservices.com/connect/thread.jspa?threadID=37823
Looks like there are plans for Master-Master HA or similar, but that's not the same as a replicated scale-out offering.
According to the FAQ it is possible now (see http://aws.amazon.com/rds/faqs/#86):
Q: What types of replication does Amazon RDS support and when should I use each?
Amazon RDS provides two distinct replication options to serve different purposes.
If you are looking to use replication to increase database availability while protecting your latest database updates against unplanned outages, consider running your DB Instance as a Multi-AZ deployment. When you create or modify your DB Instance to run as a Multi-AZ deployment, Amazon RDS will automatically provision and manage a “standby” replica in a different Availability Zone (independent infrastructure in a physically separate location). In the event of planned database maintenance, DB Instance failure, or an Availability Zone failure, Amazon RDS will automatically failover to the standby so that database operations can resume quickly without administrative intervention. Multi-AZ deployments utilize synchronous replication, making database writes concurrently on both the primary and standby so that the standby will be up-to-date in the event a failover occurs. While our technological implementation for Multi-AZ DB Instances maximizes data durability in failure scenarios, it precludes the standby from being accessed directly or used for read operations. The fault tolerance offered by Multi-AZ deployments makes them a natural fit for production environments; to learn more about Multi-AZ deployments, please visit this FAQ section.
If you are looking to take advantage of MySQL 5.1’s built-in replication to scale beyond the capacity constraints of a single DB Instance for read-heavy database workloads, Amazon RDS makes it easier with Read Replicas. You can create a Read Replica of a given “source” DB Instance using the AWS Management Console or CreateDBInstanceReadReplica API. Once the Read Replica is created, database updates on the source DB Instance will be propagated to the Read Replica. You can create multiple Read Replicas for a given source DB Instance and distribute your application’s read traffic amongst them. Unlike Multi-AZ deployments, Read Replicas use MySQL 5.1’s built-in replication and are subject to its strengths and limitations. In particular, updates are applied to your Read Replica(s) after they occur on the source DB Instance (“asynchronous” replication), and replication lag can vary significantly. This means recent database updates made to a standard (non Multi-AZ) source DB Instance may not be present on associated Read Replicas in the event of an unplanned outage on the source DB Instance. As such, Read Replicas do not offer the same data durability benefits as Multi-AZ deployments. While Read Replicas can provide some read availability benefits, they are not designed to improve write availability.
With Amazon RDS, you can use Multi-AZ deployments and Read Replicas in conjunction to enjoy the complementary benefits of each. You can simply specify that a given Multi-AZ deployment is the source DB Instance for your Read Replica(s). That way you gain both the data durability and availability benefits of Multi-AZ deployments and the read scaling benefits of Read Replicas.
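To make the "use them in conjunction" part concrete, here is a minimal sketch using today's boto3 API (rather than the 2009-era console flow the FAQ describes) that enables Multi-AZ on an existing instance and then creates a Read Replica from it. The instance identifiers and instance class are placeholders, and both calls return immediately while RDS does the work asynchronously.

```python
# Minimal sketch: combine Multi-AZ (availability/durability) with a
# Read Replica (read scaling) on an existing RDS MySQL instance.
# Identifiers and instance class are placeholders; both calls return
# immediately and RDS does the actual work asynchronously.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Turn the source instance into a Multi-AZ deployment.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",      # hypothetical source instance
    MultiAZ=True,
    ApplyImmediately=True,
)

# Create a Read Replica with the Multi-AZ instance as its source.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",
    SourceDBInstanceIdentifier="mydb",
    DBInstanceClass="db.t3.medium",
)
```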