I have an Aurora MySQL database cluster A in AWS RDS.
I created a blue/green deployment, which generated a database cluster A-green. When I choose "Switch" in the AWS console, the switchover fails and the logs show:
Switchover from DB cluster A to A-green was canceled due to external replication on A. Stop replication from an external database to A before you switch over.
Where can I find this replication that the error talks about? I don't see anything in the console.
Also, SHOW SLAVE STATUS returns an empty set.
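For completeness, these are the statements I would expect to reveal and clear an external (binlog) source on the cluster; the mysql.rds_* calls are the RDS-provided procedures, and whether they are what the switchover check is complaining about is an assumption on my part:

    -- On the writer instance of cluster A:
    SHOW REPLICA STATUS;                    -- MySQL 8.0 name; SHOW SLAVE STATUS on older engines
    -- If an external source is configured and no longer needed, stop and clear it:
    CALL mysql.rds_stop_replication;
    CALL mysql.rds_reset_external_master;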
Cheers
I have an RDS Aurora MySQL 8.0.23 cluster running in production. The database is unencrypted, and I need to enable encryption for it. As far as I understand, this is not possible to do directly. The procedure I am evaluating is:
1. Create a read replica on the current cluster.
2. Stop replication on the replica and note the binlog file name and position.
3. Promote the read replica to a new encrypted cluster (it may require taking a snapshot first).
4. Set up replication back from the original cluster using the binlog file and position noted earlier (see the sketch after this list).
5. Wait until the replication lag is zero.
6. Redirect production traffic to the new cluster.
7. Stop replication.
8. [Optional] Delete the old cluster.
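A sketch of how I would expect step 4 to look, using the RDS-provided procedures (the endpoint, credentials, and binlog coordinates below are placeholders, and whether these exact calls apply to my Aurora version is my assumption):

    -- On the writer of the new (encrypted) cluster, point it back at the old cluster:
    CALL mysql.rds_set_external_master(
        'old-cluster.cluster-xxxx.eu-west-1.rds.amazonaws.com',  -- placeholder endpoint
        3306,
        'repl_user',                     -- placeholder replication user
        'repl_password',                 -- placeholder password
        'mysql-bin-changelog.000042',    -- binlog file noted in step 2
        154,                             -- binlog position noted in step 2
        0);                              -- 0 = no SSL
    CALL mysql.rds_start_replication;
    -- Later, for steps 5 and 7:
    SHOW SLAVE STATUS;                   -- watch Seconds_Behind_Master until it reaches 0
    CALL mysql.rds_stop_replication;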
I have two issues with the above procedure:
1. Once the replica is created, commands like SHOW SLAVE STATUS or SHOW REPLICA STATUS return an empty set, so I can't note the binlog file and position. Please note that binary logging is enabled on the original cluster (binlog_format is set to ROW). (See the sketch after these two points.)
2. It seems I can't promote the Aurora read replica to a new cluster; the option is missing from the available actions, but according to the documentation it should be possible.
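Regarding the first point, a sketch of what I would try instead of reading coordinates from the Aurora replica, on the assumption that in-cluster Aurora replicas don't replicate through the binlog and therefore never report a position:

    -- On the writer of the original cluster:
    CALL mysql.rds_set_configuration('binlog retention hours', 144);  -- keep binlogs long enough
    -- With application writes paused, read the current coordinates:
    SHOW MASTER STATUS;   -- the File and Position columns feed step 4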
Does anyone have feedback about the issues above? What is the current, up-to-date procedure to encrypt an Aurora MySQL cluster with minimum downtime and no data loss?
I'm stuck trying to connect my Aurora serverless MySQL server to a Master MySQL server in the same VPC.
I checked everything to make it work, and even extended the security groups of both the master and the Aurora server to accept all connections from the VPC. Still, I get a 2003 error on the slave (Aurora):
error connecting to master 'user_repl@vpc_ip1:3306' - retry-time: 60 retries: 1
I even tried with the local name ip-{local-ip-vpc}.eu-west-3.compute.internal without any luck.
Trying to connect from another EC2 instance in the same VPC to that master with the "user_repl" user works fine, so it's not a problem of bind-address, the security group on the master, the password, or anything like that.
I wonder whether Aurora Serverless can replicate from a master server and act as a slave at all, but if that weren't the case, I would expect a different error than just "error connecting".
What is causing this issue?
Thank you in advance.
It turns out that AWS Aurora Serverless MySQL cannot do replication the way we do it on a standard MySQL server.
In order to enable replication, you'll need to use AWS Database Migration Service (DMS), where you set up a source endpoint, a target endpoint, and a replication instance, and then enable replication between the source and the target.
I just tested it on my end and it works fine. It's a bit more work, and different from standard replication, but it works exactly the same in the end.
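For what it's worth, a quick way to see which of the RDS helper procedures a given engine actually exposes is a plain information_schema query (nothing here is Serverless-specific):

    SELECT routine_name
    FROM information_schema.routines
    WHERE routine_schema = 'mysql'
      AND routine_name LIKE 'rds_%';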
RDS snapshots don't seem to work as I would expect when set up with replication. I'd like to get some guidance on whether I'm making incorrect assumptions or just doing something wrong.
Here's what happened:
I set up an RDS instance as a slave to an external MySQL instance (outside of AWS).
I let the instance catch up; replication ran successfully for a few days, and I was taking nightly snapshots of the slave on RDS.
Some queries were accidentally run on the slave, causing replication errors and causing the databases to get completely out of sync.
I restored the slave from a snapshot.
What I expected:
After the snapshot restored, replication on the new slave database would be able to catch back up to the position of the master.
What actually happened:
After the snapshot was restored, the data was there, but the replication settings were not. SHOW SLAVE STATUS returned nothing.
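What I assume I would have to do instead, based on the RDS external-master procedures (a sketch with placeholder values):

    -- Before restoring, record these from SHOW SLAVE STATUS on the old slave:
    --   Relay_Master_Log_File and Exec_Master_Log_Pos
    -- After the restore, re-point the new instance at the external master:
    CALL mysql.rds_set_external_master(
        'master.example.com', 3306,      -- placeholder external master
        'repl_user', 'repl_password',    -- placeholder credentials
        'mysql-bin.000123', 4567,        -- the recorded file and position
        0);
    CALL mysql.rds_start_replication;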
TL;DR: The AWS documentation states that RDS snapshots back up the entire database instance, so I would expect all of its settings to be backed up as well, including the settings for an external master, but that doesn't seem to be the case. What are the limitations of RDS's snapshot capabilities, and how should replication with an external master be handled if the slave gets too far out of sync?
Thanks!
If the replication errors that you mentioned in your question stopped replication for an extended period, Amazon RDS stops replication entirely. This is done to prevent excessive storage requirements on the source side. When the RDS replica is restored from a snapshot, the new replica will never catch up in that case, because the binary logs have also been purged from the source by then. This is mentioned in the AWS documentation, although it also states that for this to happen the replication error has to persist for about a month.
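If it helps, the source-side setting that controls how long those binary logs are kept on a self-managed master can be checked like this (the variable name differs between MySQL versions, so treat this as a sketch):

    -- On the external (non-RDS) master:
    SHOW VARIABLES LIKE 'expire_logs_days';            -- MySQL 5.7 and earlier
    SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- MySQL 8.0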
I am trying to set up replication between MySQL running on an EC2 instance and an AWS RDS MySQL instance.
I am following this guide.
My master MySQL DB (running on EC2) has GTID mode turned on. My intended slave (AWS RDS MySQL) has GTID mode off, and apparently there is no way to turn it on.
Because of this, when I start replication, I get the following error on the slave:
The slave IO thread stops because the master has @@GLOBAL.GTID_MODE ON and this server has @@GLOBAL.GTID_MODE OFF
I can't turn off my master's GTID mode. How can I make this replication work?
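For completeness, this is how the mismatch shows up when checked on both sides (just the variable check, nothing RDS-specific):

    SHOW VARIABLES LIKE 'gtid_mode';   -- ON on the EC2 master, OFF (and not changeable) on RDS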
You can't enable the gtid_mode=on DB parameter on AWS RDS at the moment. Please find the Amazon forum reference below.
Ref:
https://forums.aws.amazon.com/thread.jspa?messageID=474345
I suggest you follow the reference documentation from AWS below to achieve this.
Ref:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MariaDB.Procedural.Replication.GTID.html
You can use the built-in RDS procedure named 'mysql.rds_set_external_master_gtid'.
Alternatively, you can use the binary log method for replication. You will find the binlog parameters in the DB parameter group.
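A sketch of the master-side preparation for the binary log method (the credentials are placeholders; the file and position reported here are what you would then pass to mysql.rds_set_external_master on the RDS side):

    -- On the EC2 MySQL master:
    CREATE USER 'repl_user'@'%' IDENTIFIED BY 'repl_password';   -- placeholder credentials
    GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';
    SHOW MASTER STATUS;   -- note the File and Position columns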
The master instance of my Google Cloud SQL deployment keeps failing randomly, and when it fails over, I get a "The MySQL server is running with the --read-only" error. I changed the failover (replica) MySQL instance to read-only: false. This removed the read-only error, but now the master and the replica are not in sync.
Also, connections keep getting pointed at the replica randomly, suggesting that the master is down.
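For reference, the flag involved is just the standard MySQL read_only variable, which can be checked on either instance:

    SHOW VARIABLES LIKE 'read_only';   -- ON on a healthy replica, OFF on the writable master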
How do I get the master and replica in sync again?
Why does the master keep failing?
Thank you team!