Downgrade AWS RDS Instance - mysql

I am using a t2.large RDS instance and I want to downgrade to a t2.micro to fit my current business. I have a few questions to ask:
- How can I downgrade the RDS instance without losing data and without downtime?
Thanks,

You can't really do it without downtime, but you could minimize the downtime.
The easiest option is to Modify the DB instance. This will result in downtime because a new database will be provisioned, the data will be relocated and the DNS name will be changed to point to the new instance.
Seeing that you believe a t2.micro will be sufficient for your database, it would be fair to assume that there would be times when your database is not in use so that you can perform the Modify operation. It should only take a few minutes.
Officially, the best way to modify a database without downtime is to use Multi-AZ, which can update one node while traffic is still being served by another node. However, your goal seems to be to reduce cost, rather than spending more to ensure uptime.
By the way, a t2.micro is quite limited in terms of CPU and network bandwidth. You are trying to save 21c per day, at the potential cost of having a poorly-responding database.
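If you do go the Modify route, here is a minimal sketch of what the call might look like with boto3 (the instance identifier and region are assumptions, not from your setup):

```python
# Minimal boto3 sketch of the Modify operation (identifier and region are assumed).
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Request the smaller instance class; deferring to the maintenance window
# lets you control when the brief downtime happens.
rds.modify_db_instance(
    DBInstanceIdentifier="my-database",   # placeholder instance name
    DBInstanceClass="db.t2.micro",
    ApplyImmediately=False,               # apply during the next maintenance window
)
```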

You can consider creating a read replica (t2.micro) of the master instance (t2.large). Once the read replica is in sync with the master instance, you can promote the read replica and then point the application towards the new master instance (which is the promoted read replica).
For reference, see:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html
https://aws.amazon.com/blogs/aws/amazon-rds-for-mysql-promote-read-replica/
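A rough boto3 sketch of that replica-then-promote flow might look like this (instance identifiers and region are placeholders, and you should confirm the replica lag is near zero before promoting):

```python
# Rough boto3 sketch of the replica-then-promote approach (names are placeholders).
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1. Create a smaller read replica of the existing t2.large master.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica",        # placeholder name
    SourceDBInstanceIdentifier="mydb-master",   # placeholder name
    DBInstanceClass="db.t2.micro",
)

# 2. Wait until the replica is available; also check its replica lag
#    (e.g. in CloudWatch) before cutting over.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydb-replica")

# 3. Promote the replica to a standalone instance, then update the
#    application's connection string to point at it.
rds.promote_read_replica(DBInstanceIdentifier="mydb-replica")
```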

Related

Creating a multi-region active-passive DR plan for Aurora MySQL?

I'm trying to create a disaster recovery plan for Aurora MySQL that is cost efficient, maintainable, and has little downtime.
I want two read/write databases in two different regions; they can be separate databases called primary-us-east-1 and backup-us-east-2. I also want bidirectional replication between primary-us-east-1 and backup-us-east-2. Only one database will be connected to at any given time, so collisions are not a concern. In the event that region us-east-1 goes down, all I have to do is trigger a DNS switch to point to us-east-2, since backup-us-east-2 is already up to date.
I've looked into Aurora Global Databases, but that requires promoting a read replica in a secondary region to a master and then updating the DNS to recover from a region outage. I like the zero effort needed for data replication across several regions, but I don't like losing maintainability in the process: the newly created resources (clusters/replicas) won't be maintainable in CDK if they are created through a Lambda or by hand.
Is what I'm asking for possible? If yes, does anyone know of a replication solution so data can be copied between primary-us-east-1 and backup-us-east-2?
UPDATE 1:
A potential solution is standing up the Aurora MySQL resources primary-us-east-1 and backup-us-east-2 using CDK, keeping them in sync with AWS Database Migration Service for continuous replication, and using a Lambda to detect a region outage and then perform the DNS switch to point to backup-us-east-2. The only follow-up task would be bringing primary-us-east-1 back in sync with backup-us-east-2.
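For illustration, the DNS-switch piece of that idea could be a Lambda roughly like the sketch below (the hosted zone ID, record name, and endpoint are made-up placeholders, and the outage-detection logic is not shown):

```python
# Hypothetical Lambda handler for the DNS switch; zone ID, record name and
# endpoint are placeholders, and the outage-detection logic is omitted.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z123EXAMPLE"
RECORD_NAME = "db.example.com."
BACKUP_ENDPOINT = "backup-us-east-2.cluster-xyz.us-east-2.rds.amazonaws.com"

def handler(event, context):
    # Repoint the application's database CNAME at the backup region's cluster.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "Failover to backup-us-east-2",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": BACKUP_ENDPOINT}],
                },
            }],
        },
    )
```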
Whole-region outages are very rare (see https://awsmaniac.com/aws-outages/). I would be cautious about how much effort you invest in trying to automate detection and failover for such cases. It's a lot of work, if it's possible at all: it's extremely hard to do right, hard to test, and hard to keep working. There is a lot of potential for false-positive failover events or out-of-control flip-flopping. Whole companies have started up and failed trying to create fully automated failover solutions. I would bet that even the FAANG companies don't achieve it, and instead rely on site reliability engineers to respond to outages.
IMO, it's more cost-effective to develop a nicely written runbook for manual cutover to the other region, and then make sure your staff practice region failover periodically. This ensures the docs are kept up to date, the tools work, and the team is familiar with the steps.
DNS updates are slow. What I would recommend instead is some sort of proxy server, so your apps can use a single endpoint, and the proxy can switch which database on the back-end to use dynamically. This is basically what MySQL Router is for, and I've also done a proof of concept with Envoy Proxy (sorry I don't have access to that code anymore), and I suppose you could do the same thing with ProxySQL.
My opinion is that AWS still has potential for improvement with respect to failover for RDS and Aurora. It works, but it can cause long downtimes on the order of several minutes. So it's hardly an improvement over manual failover. That is, some oncall engineer gets paged, checks out some dashboards to confirm that it's a legitimate outage, and then executes the runbook to do a manual failover.

How to check if a MySQL database backup is stale

I am working on an application that needs a disaster recovery plan. We currently use RDS to host the DB and have backups running every 2 hours (we do not use Aurora but have plans to upgrade in the future).
If the database somehow got deleted, we want to make sure the backup we will be recovering from is current, and we therefore need some way of telling how current it is.
One way is to save a heartbeat in the DB at regular intervals; then I can check that against what is expected.
I was wondering if anyone may have any other ways of solving this issue?
Assuming this MySQL database is connected to your web application, you could have a server-side thread in your application which periodically writes a heartbeat record to the database. You may create a special heartbeat table to store the heartbeats. Then you can easily examine any backup and know roughly the last time the database was "alive."
I am not an expert in AWS, and there may be another way of doing this which is easier than what I described above.
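A minimal sketch of such a heartbeat writer, assuming a mysql-connector-python client and placeholder connection details:

```python
# Sketch of a heartbeat writer (endpoint, credentials and table are placeholders);
# assumes the mysql-connector-python package.
import time
import mysql.connector

conn = mysql.connector.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="app", password="secret", database="appdb",
)
cur = conn.cursor()

# Single-row table that always holds the time of the most recent heartbeat.
cur.execute("""
    CREATE TABLE IF NOT EXISTS heartbeat (
        id TINYINT PRIMARY KEY,
        beat_at DATETIME NOT NULL
    )
""")

while True:
    # Touch the row every minute; after a restore, SELECT beat_at to see
    # roughly when the backup was taken.
    cur.execute("REPLACE INTO heartbeat (id, beat_at) VALUES (1, UTC_TIMESTAMP())")
    conn.commit()
    time.sleep(60)
```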

Would this work to distribute traffic to my RDS Read Replicas?

I am using Amazon RDS for my database services and want to use the read replica feature to distribute traffic amongst my read replicas. I currently store the connection information for my database in a single config file. So my idea is that I could create a function that randomly picks from a list of my read-replica endpoints/addresses in my config file any time my application performs a read.
Is there a problem with this idea as long as I don't perform it on a write?
My guess is that if you have a service with enough traffic to need multiple RDS read replicas to balance load across, then you also have multiple application servers in front of it operating behind a load balancer.
As such, you are probably better off having certain clusters of app server instances each pointing at a specific read replica. Perhaps you do this by availability zone.
The thought here is that your load balancer will then serve as the mechanism for properly distributing the incoming requests that ultimately lead to database reads. If you randomized the DB reads across different replicas, you could have unexpected spikes where too much traffic happens to be directed to one replica, causing latency spikes on your service.
The biggest challenge is that there is no guarantee that the read replicas will be up-to-date with the master or with each other when updates are made. If you pick a different read-replica each time you do a read you could see some strangeness if one of the read replicas is behind: one out of N reads would get stale data, giving an inconsistent view of the system.
Choosing a random read replica per transaction or session might be easier to deal with from the consistency perspective.
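As a rough sketch of that per-session approach (the endpoints and credentials are placeholders, not anything from your actual config):

```python
# Sketch of picking one replica per session rather than per query; the
# endpoints and credentials are placeholders for whatever is in your config.
import random
import mysql.connector

READ_REPLICAS = [
    "replica-1.xxxxxxxx.us-east-1.rds.amazonaws.com",
    "replica-2.xxxxxxxx.us-east-1.rds.amazonaws.com",
]
MASTER = "master.xxxxxxxx.us-east-1.rds.amazonaws.com"

def open_read_connection():
    # Choose a replica once and reuse that connection for the whole session,
    # so a single user sees a consistent (if slightly stale) view of the data.
    host = random.choice(READ_REPLICAS)
    return mysql.connector.connect(host=host, user="app",
                                   password="secret", database="appdb")

def open_write_connection():
    # All writes still go to the master.
    return mysql.connector.connect(host=MASTER, user="app",
                                   password="secret", database="appdb")
```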

Performance effects of moving mysql db to another Amazon EC2 instance

We have an EC2 instance running both Apache and MySQL at the moment. I am wondering if moving MySQL to another EC2 instance will increase or decrease the performance of the site. I am mostly worried about network speed between the two instances.
EC2 instances in the same availability zone are connected via a 10 Gbps network - that's faster than a good solid-state drive on a SATA-3 interface (6 Gbps).
You won't see any performance drop by moving the database to another server; in fact, you'll probably see a performance increase because each server gets its own memory and CPU cores.
If your worry is network latency, then forget about it - it's not a problem on AWS within the same availability zone.
Another consideration is that you're probably storing your website & DB files on an EBS-mounted volume. That EBS volume is stored off-instance, so you're already going to a storage array over the same super-fast 10 Gbps network.
So what I'm saying is... with EBS, your website and database are already talking across the network to get their data; putting them on separate instances won't really change anything in that respect - besides giving more resources to both servers. More resources means more data cached locally in memory and more performance.
The answer depends largely on what resources Apache and MySQL are using. They can happily co-habit if demands on your website are low and each is configured with enough memory that it doesn't spill over into swap. In that case, they are best kept together.
As traffic grows, or your application grows, you will benefit from splitting them out because they can then both run inside dedicated memory. Provided that the instances are in the same region, you should see fast performance between them. I have even run a web application in Europe with the DB in the USA and performance wasn't noticeably bad! I wouldn't recommend that, though!
Because AWS is easy and cheap, your best bet is to set it up and benchmark it!

Homemade cheap and cheerful clustering with MySQL+EC2?

I've got a Java web service backed by MySQL + EC2 + EBS. For data integrity I've looked into DRBD, MySQL Cluster, etc., but wonder if there isn't a simpler solution. I don't need high availability (I can handle downtime).
There are only a few operations whose data I need to preserve -- creating an account, changing password, purchase receipt. The majority of the data I can afford to recover from a stale backup.
What I am thinking is that I could pipe selected INSERT/UPDATE commands to storage (S3 or SimpleDB, for instance) and, when required (i.e. when the DB blows up), replay those commands from the point of the last backup. And wouldn't it be neat if this functionality were implemented in the JDBC driver itself?
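To make the idea concrete, something along these lines is what I have in mind (the bucket name and the journal_statement helper are made up, and a real version would need ordering guarantees and proper SQL escaping):

```python
# Rough sketch of journaling critical statements to S3 (bucket name and the
# journal_statement helper are made up; a real version needs proper escaping
# and ordering guarantees).
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "my-critical-sql-journal"   # placeholder bucket

def journal_statement(sql, params):
    # Write each critical INSERT/UPDATE to S3 before executing it against
    # MySQL, keyed by timestamp so the statements can be replayed in order.
    key = "journal/{:.6f}.sql".format(time.time())
    body = sql % tuple(repr(p) for p in params)   # naive rendering for replay
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))

# Example: journal an account creation before running it on the database.
journal_statement(
    "INSERT INTO accounts (email, password_hash) VALUES (%s, %s)",
    ("user@example.com", "bcrypt-hash-here"),
)
```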
Is this too silly to work, or am I missing another obvious and robust solution?
Have you looked into moving your MySQL into Amazon Web Services as well? You can use Amazon Relational Database Service (RDS). Also see MySQL Enterprise Support.
You always have a window where total loss of a server and associated file storage will result in some amount of lost data.
When I ran a modestly busy SaaS solution in AWS, I had a MySQL Master running on a large instance and a MySQL Slave running on a small instance in a different availability zone. The replication lag was typically no more than 2 seconds, though a surge in traffic could take that up to a minute or two.
If you can't afford losing 5 minutes of data, I would suggest running a Master/Slave setup over rolling your own recovery mechanism. If you do roll your own, ensure the "stale" backups and the logged/journaled critical data are in a different availability zone. AWS has lost entire zones before.
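If you do run a Master/Slave setup, here is a small sketch of how you might watch the replication lag I mentioned (host and credentials are placeholders):

```python
# Small sketch of checking replica lag on a MySQL slave (host and credentials
# are placeholders); uses the mysql-connector-python package.
import mysql.connector

conn = mysql.connector.connect(host="mysql-slave.example.internal",
                               user="monitor", password="secret")
cur = conn.cursor(dictionary=True)

# SHOW SLAVE STATUS reports Seconds_Behind_Master - the lag figure mentioned
# above (NULL/None means replication is not running).
cur.execute("SHOW SLAVE STATUS")
status = cur.fetchone()
lag = status["Seconds_Behind_Master"] if status else None
print("Replica lag: {} seconds".format(lag))
```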