AWS RDS read replica failure issue - MySQL

We created a read replica on our MySQL RDS instance, and our master instance has Multi-AZ enabled. When we forced a failover for testing, our read replica's IO thread stopped and we got Error 1236 (a fatal error): our binary logs were corrupted.
To avoid this replica failure it is recommended to set innodb_flush_log_at_trx_commit=1 and sync_binlog=1, but when we set these variables as recommended, our write performance degrades by 50%-60%.
Is there any way to avoid this replication error without setting the recommended values? And if the recommended settings really are necessary, please suggest how we can improve our write performance.

This Answer applies to MySQL replication in general, not just AWS.
If you are that close to exceeding the capacity of the system, you need to do some serious research into what is going on.
The short answer is to combine (where practicable) multiple transactions into one. innodb_flush_log_at_trx_commit=1 involves an 'extra' fsync at the end of each transaction. So, fewer transactions --> less I/O --> less contention.
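For example, a minimal sketch (the orders table and values are made up): wrapping a batch of inserts in one transaction means the fsync forced by innodb_flush_log_at_trx_commit=1 happens once per batch instead of once per row.

START TRANSACTION;
INSERT INTO orders (customer_id, total) VALUES (101, 19.99);
INSERT INTO orders (customer_id, total) VALUES (102, 4.50);
INSERT INTO orders (customer_id, total) VALUES (103, 72.00);
COMMIT;   -- one log flush covers the whole batch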
Before I understood what was going on, I ran with sync_binlog=0. When something did crash, the binlogs were not actually "corrupt", but the Slaves would be pointing to an "impossible position". This is because the position information had been sent to the Slave before it was actually written to disk on the Master. The solution was simple: move the pointer (on the Slave) to the beginning (Pos=0 or 4) of the next binlog (on the Master).
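A rough sketch of that fix (the binlog file name is a placeholder, and this assumes classic file/position replication rather than GTIDs):

STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.000124',   -- the *next* binlog on the Master
    MASTER_LOG_POS  = 4;                    -- binlog events start at position 4
START SLAVE;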
I suspect (without any real evidence) that innodb_flush_log_at_trx_commit has more impact on performance than sync_binlog.
Now for some AWS-specifics. If "multi-AZ" means that every disk write is writing to machines in two different datacenters, then the issue goes beyond just the two settings you brought up. If, instead, it means that the Slave is remote from the Master, then it is acting like ordinary MySQL Replication (for this Q&A).

Related

Is there any concept of load balancing in MySQL master-master architecture?

I am running a MySQL 5.5 Master-Slave setup. To avoid too many hits on my master server, I am thinking of having one or maybe more additional MySQL servers; incoming requests will first hit HAProxy, which forwards them using round robin or whatever scheduling algorithm is defined in HAProxy. So the setup will be like:
APP -> API Gateway/Server -> HAProxy -> Master Server1/Master Server2
What are the pros and cons of this setup?
Replication in MySQL is asynchronous by default, so you can't always assume that the replicas are in sync with their source.
If you intend to use your load-balancer to split writes over the two master instances, you could get into trouble with that because of MySQL's asynchronous replication.
Say you commit a row on master1 to a table that has a unique key. Then you commit a row with the same unique value to the same table on master2, before the change on master1 has been applied through replication. Both servers allowed the row to be committed, because as far as they knew, it did not violate the unique constraint. But then as replication tries to apply the change on each server, those changes do conflict with the row committed. This is called split-brain, and it's incredibly difficult to recover from.
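As an illustration of that conflict (the schema is hypothetical):

-- On master1:
INSERT INTO users (id, email) VALUES (42, 'alice@example.com');
-- On master2, before master1's row has replicated:
INSERT INTO users (id, email) VALUES (42, 'bob@example.com');
-- Both commits succeed locally, but each server's replication thread then
-- fails when it tries to apply the other's insert:
-- ERROR 1062 (23000): Duplicate entry '42' for key 'PRIMARY'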
If your load-balancer randomly sends some read queries to another instance, they might not return data that you just committed on the other instance. This is called replication lag.
This may or may not be a problem for your app, but it's likely that in your app, at least some of the queries require strong consistency, i.e. reading outdated results is not permitted. Other cases even with the same app may be more tolerant of some replication lag.
I wrote a presentation some years ago about splitting queries between source and replica MySQL instances: https://www.percona.com/sites/default/files/presentations/Read%20Write%20Split.pdf. The presentation goes into more details about the different types of tolerance for replication lag.
MySQL 8.0 has introduced a more sophisticated solution for all of these problems. It's called Group Replication, and it does its best to ensure that all instances are in sync all the time, so you don't have the risk of reading stale data or creating write conflicts. The downside of Group Replication is that to ensure no replication lag occurs, it may need to constrain your transaction throughput. In other words, COMMITs may be blocked until the other instances in the replication cluster respond.
Read more about Group Replication here: https://dev.mysql.com/doc/refman/8.0/en/group-replication.html
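As a rough idea of what setup involves, here is a minimal single-member bootstrap sketch (the group name and addresses are placeholders, and it assumes prerequisites such as gtid_mode=ON, enforce_gtid_consistency=ON, and binlog_format=ROW are already in place):

INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL group_replication_group_name      = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';
SET GLOBAL group_replication_local_address   = 'db1.example.com:33061';
SET GLOBAL group_replication_group_seeds     = 'db1.example.com:33061,db2.example.com:33061';
SET GLOBAL group_replication_bootstrap_group = ON;    -- only on the first member
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;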
P.S.: Whichever solution you decide to pursue, I recommend you do upgrade your version of MySQL. MySQL 5.5 passed its end-of-life in 2018, so it will no longer get updates even for security flaws.

MySQL/MariaDB read preference from slave with max staleness

I am using MySQL/MariaDB (version 10.x) with the InnoDB storage engine.
I want to set up a cluster with a master-slave configuration. There is an option to read data from a slave using --innodb-read-only or --read-only.
However, in addition to the above, the client needs to read data from a slave if and only if the slave lag is less than x seconds.
Slaves can lag behind the primary due to network congestion, low disk throughput, long-running operations, etc. A read preference with a max allowed staleness option should let the application specify a maximum replication lag, or "staleness", for reads from slaves. When a slave's estimated staleness exceeds that maximum, the client stops using it for reads and starts reading from the master.
Is there such an option in MySQL/InnoDB?
There's no automatic option for switching the query to the master. This is handled by application logic.
You can run a query SHOW SLAVE STATUS and one of the fields returned is Seconds_Behind_Master. You would have to write application code to check this, and if the lag is greater than your threshold, query the master instead.
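A minimal sketch of that check (the 10-second threshold is arbitrary):

SHOW SLAVE STATUS\G
-- Inspect the Seconds_Behind_Master field in the result. If it is NULL
-- (replication is stopped) or greater than ~10, have the application send
-- the query to the master instead of this slave.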
You might find some type of proxy that can do this logic for you. See https://mydbops.wordpress.com/2018/02/19/proxysql-series-mysql-replication-read-write-split-up/
It's not always the best option to treat a replica with X seconds of lag as unusable. Some queries are perfectly okay regardless of the lag. I wrote a presentation about this some years ago, and it includes some example queries. Read / Write Splitting with MySQL and PHP (Percona webinar 2013)
There are many Proxy products that may have code for such.
If you automatically switch to the Master, then it may get overwhelmed, thereby leading to worse system problems.
If you try to switch to another Slave, it is too easy to get into a flapping situation.
Galera has a way to deal with "critical read", if you wanted to go to a Cluster setup instead of Master + Slaves.
If part of the problem is the distance between Master and Slave, and if you switch to the Master, where is the Client? If it is near the Slave, won't the added time to reach the master cancel out some of the benefit?
Avoid long-running queries, beef up the Slave to avoid slow disks, speed up queries that are hitting the disk a lot, look into network improvements.
In summary, I don't like the idea of attempting to move a query to the Master; I would work on dealing with the underlying problem.
MariaDB MaxScale has multiple ways of dealing with replication lag.
The simplest method is to limit the maximum allowed replication lag with the max_slave_replication_lag parameter. This works exactly the way you described: if a slave is too many seconds behind the master, other slaves are used and, as a last resort, the master. This is the most common method of dealing with replication lag in MaxScale.
Another option is to use the causal_reads feature, which leverages MASTER_GTID_WAIT and other features found in MariaDB 10.2 and newer versions. This allows read consistency without adding load on the master. It does come at the cost of latency: if the server is lagging several seconds behind, the read can take longer. This option is used when data consistency is critical but request latency is not as important.
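Roughly, what causal_reads builds on looks like this (the GTID value and the final query are placeholders):

-- After a write on the master, capture the GTID of that transaction:
SELECT @@last_gtid;                          -- e.g. '0-1-12345'
-- On the slave, before the dependent read, wait (up to 2 seconds) until
-- that transaction has been applied:
SELECT MASTER_GTID_WAIT('0-1-12345', 2);     -- 0 = applied, -1 = timed out
SELECT balance FROM accounts WHERE id = 7;   -- this read now sees the write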
The third option is to use the CCRFilter to force reads to the master after a write happens. This is a simpler approach compared to causal_reads but it provides data consistency at the cost of increased load on the master.

Why use GTIDs in MySQL replication?

When it comes to database replication, what is the use of global transaction identifiers? Why do we need it to prevent concurrency across the servers? How is that prevention achieved exactly?
I tried to read the documentation at
http://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html but still could not understand it clearly. This may sound very basic but I would really appreciate it if someone could explain the concepts to me.
The reason for the Global Transaction ID is to allow a MySQL slave to know if it has applied a given transaction or not, to keep things in sync between Master and Slave. It can also be used for restarting a slave if a connection goes down, again to know the point in time. Without using GTIDs, replication must be controlled based on the position in a given binary transaction log file (bin log). This is much harder to manage than the GTID method.
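To make that concrete (the host name and binlog file/offset are placeholders): without GTIDs the slave must be pointed at an exact file and position, whereas with GTIDs it simply asks the master for whatever transactions it has not yet applied.

-- Either: classic file/position replication
CHANGE MASTER TO
    MASTER_HOST     = 'master.example.com',
    MASTER_LOG_FILE = 'mysql-bin.000042',
    MASTER_LOG_POS  = 107;
-- Or: GTID-based replication (requires gtid_mode=ON on both servers)
CHANGE MASTER TO
    MASTER_HOST          = 'master.example.com',
    MASTER_AUTO_POSITION = 1;
START SLAVE;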
A master is the only server that is typically written to, so that slaves merely rebuild a copy of the master by applying each transaction in sequence.
It is also important to understand that MySQL replication can run in one of 3 modes:
Statement-based: Each SQL statement is logged to the binlog and replicated as a statement to the slave. In some cases this can be ambiguous on the slave, causing the data to not match exactly. (Most of the time it is fine for common uses.)
Row-based: In this mode MySQL replicates the actual data changes to each table, with a "before" and "after" picture of each row, which is fully accurate. This can result in a much larger binlog, for example if you have a bulk update query, like: UPDATE t1 SET c1 = 'a' WHERE c2 = 'b'.
Mixed: In this mode, MySQL will use a mix of statement-based and row-based logging in the binlog.
I only mention the modes of replication, because it is mentioned in the doc you referenced that Row-based is the recommended option if you are using GTIDs.
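For reference, the mode is controlled by the binlog_format variable on the master; a quick sketch (in practice it is usually set in my.cnf rather than changed at runtime):

SELECT @@GLOBAL.binlog_format;        -- STATEMENT, ROW, or MIXED
SET GLOBAL binlog_format = 'ROW';     -- affects new sessions only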
There is another option called Master-Master replication, where you can write to two masters (each acting as a slave for the other), but this requires a special configuration to ensure that the data written to each master is unique. It is much trickier to manage than a typical Master/Slave setup.
Therefore, the prevention of writes to a Slave is something that you must ensure from your application for a typical replication process to function correctly. It is fine to read from a Slave, but you should not write to it. Note that the Slave can be behind the Master if you are using it for reads, so it is best to perform queries for things that can be behind the Master (like reports that are not critical up to the second or millisecond). You can ensure no writes to the Slave by making your common application user a read-only user for the Slave server, and a read-write user for the Master.
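A sketch of those two safeguards (the user, password, and database names are made up):

-- On the Slave: reject writes from accounts that lack the SUPER privilege
SET GLOBAL read_only = ON;
-- Give the application a read-only account on the Slave, while its account
-- on the Master keeps read-write privileges
CREATE USER 'app_ro'@'%' IDENTIFIED BY 'change_me';
GRANT SELECT ON appdb.* TO 'app_ro'@'%';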
Why do we need to prevent concurrency across the servers?
If I understood the question correctly, you are talking about consistency. If so, the answer is that you need to keep a consistent state in a distributed system. For example, if my bank account information is replicated across several different servers, it is fundamental that they have exactly the same € balance. Now imagine that I perform multiple money transactions (deposits/withdrawals) and each time I am connected to a different server: concurrency problems would cause my account balance to differ from server to server, which is unacceptable.
How is that prevention achieved exactly?
Using a master/slave approach. Amongst the servers, you have one server (the master) that is responsible for handling every writing operation, meaning that modifications to the database must be handled only by this server. The database of this master server is replicated to all other servers (the slaves), which are not allowed to modify the database but can be used to read the database (e.g. SELECT operations). Knowing that there is only one server allowed to modify the database, you do not have consistency issues.
what is the use of global transaction identifiers?
Communication between servers is asynchronous and a slave server is not required to be connected to the master at all times. Therefore, once a slave server reconnects with the master server, it may find that the master's database has been modified in the meanwhile, so it must update its own database. The problem now is knowing, among all the modifications performed by the master server, which ones the slave server has already applied previously and which ones it has not yet performed.
GTIDs address this issue: they uniquely identify each transaction performed by the master server. Now, the slave server can identify amongst all the transactions performed by the master server, which are the ones that were not seen before.

What is a good way to show the effect of replication in MySQL?

We have to demonstrate a difference that shows the advantages of using replication. We have two computers, linked by TeamViewer, so we can show our class exactly what we are doing.
Is it possible to show a difference in performance? (How long it takes to execute certain queries?)
What sort of queries should we test? (In other words, where is the difference between using/not using replication the biggest?)
How should we fill our database? How much data should be there?
Thanks a lot!
I guess the answer to the above questions depends on factors such as which storage engine you are using, size of the database, as well as your chosen replication architecture.
I don't think replication will have much of an impact on query execution for a simple master->slave architecture. If, however, you have an architecture with two masters, one handling writes and replicating to another master that exclusively handles reads, which in turn replicates to a slave that handles backups, then you are far more likely to be able to present some of the more positive scenarios. Have a read up on locks and storage engines, as this might influence your choices.
One simple way to show how replication can be positive is to demonstrate a simple backup strategy. E.g., taking hourly backups on the master itself can bring the underlying application to a complete halt for the duration of the backup (taking backups using mysqldump locks the tables, blocking writes while the backup runs), whereas replicating to a slave and then taking backups from there negates this effect.
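One way to sketch that slave-side backup pattern (the backup command itself is omitted; STOP SLAVE SQL_THREAD pauses applying changes while reads keep working):

STOP SLAVE SQL_THREAD;    -- data on the slave is now frozen at a consistent point
-- ... take the backup (e.g. with mysqldump) from another session ...
START SLAVE SQL_THREAD;   -- the slave then catches up from its relay log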
If you want to show detailed statistics, it's probably better to look into some benchmarking/profiling tools (sysbench, mysqlslap, sql-bench, to name a few). This can become quite complex, though.
Also might be worth looking at the Percona Toolkit and the Percona monitoring plugins here: http://www.percona.com/software/
Replication has several advantages:
Robustness is increased with a master/slave setup. In the event of problems with the master, you can switch to the slave as a backup
Better response time for clients can be achieved by splitting the load for processing client queries between the master and slave servers
Another benefit of using replication is that you can perform database backups using a slave server without disturbing the master.
Using replication is always a good practice: you should be replicating your production server so that, in case of failure, a copy is available.
You can show the seconds_behind_master value while demonstrating replication performance; it indicates how "late" the slave is. This value should not be more than 600-800 seconds, though network latency does matter here.
Make sure that the Master and Slave servers are configured correctly.
Then you can stop the slave, let the Master receive some updates/inserts (bulk inserts), and start the slave again: you will see a large seconds_behind_master value that keeps decreasing until it reaches 0.
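A possible way to script that demo (the demo_load table and the row count are made up for illustration):

-- On the slave:
STOP SLAVE;
-- On the master: generate a burst of writes, e.g.
INSERT INTO demo_load (payload)
SELECT REPEAT('x', 1024) FROM information_schema.columns LIMIT 10000;
-- Back on the slave:
START SLAVE;
SHOW SLAVE STATUS\G       -- watch Seconds_Behind_Master shrink toward 0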
There is a tool called MONyog - MySQL Monitor and Advisor which shows Replication status in real-time.
Which kind of replication to use, statement-based or row-based, is explained here:
http://dev.mysql.com/doc/refman/5.1/en/replication-sbr-rbr.html

Binlog MySQL Replication is a "Bag of Hurt". Are there any good alternatives?

I've honestly tried this left and right and still find that my mirror server, set up as a replication slave, lags behind. My app's user base keeps growing and now I've reached the point where I can't keep "shutting down" to "resync" databases (not even on weekends).
Anyways, my question: are there any plausible, affordable alternatives to binlog replication? I have two servers, so I wouldn't consider buying a third for load balancing just yet, unless it's the only option.
Cheers,
/mp
Your master executes in parallel and your slave executes in serial. If your master can process 1.5 hours of inserts/updates/executes in 1 real hour, your slave will fall behind.
If you can't find ways to improve the write performance on your slave (more memory, faster disks, removing unnecessary indexes), you've hit a limitation in your application's architecture. Eventually you will hit a point where you can't execute the changes in real time as fast as your master can execute them in parallel.
A lot of big sites shard their databases: consider splitting your master+slave into multiple master+slave clusters. Then split your customer base across these clusters. When a slave starts falling behind, it's time to add another cluster.
It's not cheap, but unless you can find a way to make binlog replication execute statements in parallel you probably won't find a better way of doing it.
Update (2017): MySQL now supports parallel slave worker threads. There are still many variables that can cause a slave to fall behind, but slaves no longer need to apply changes in serial order. Preserving the commit order of parallel slave threads is an important option to look at if the exact state of the slave at any point in time is critical.
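The relevant knobs, as a sketch (MySQL 5.7 variable names; the worker count is arbitrary, and slave_preserve_commit_order has extra prerequisites such as binary logging enabled on the slave):

STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_type         = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers      = 4;
SET GLOBAL slave_preserve_commit_order = ON;   -- keep commits in master order
START SLAVE SQL_THREAD;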
Have you tried:
1) SET GLOBAL innodb_flush_log_at_trx_commit = 0;
2) SET GLOBAL sync_binlog = 0;
Both will help speed up your Slave, with a small level of added risk if you have a server failure.
Adding memory to the slave would probably help. We went from 32 to 128 megs and the lagging more or less went away. But it's neither cheap nor will it be enough in all situations.
Buying a third server will probably not help that much, though; you will most likely just get another lagging slave.