I've successfully copied data, over a period of four hours, from an external Percona MySQL database to an AWS Aurora cluster. Is it possible to configure the AWS Aurora database as a slave to avoid having to set up a fresh slave instance?
Yes, you can do that; details on how are available here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Replication.MySQL.html
From that page:
Retrieve the binlog file and binlog position that are the starting place for replication. You retrieved these values from the SHOW SLAVE STATUS command when you created the snapshot of your replication master. If your database was populated from the output of the mysqldump command with the --master-data=2 option, then the binlog file and binlog position are included in the output.
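For example, with --master-data=2, mysqldump writes the coordinates near the top of the dump as a comment (the file name and position here are placeholders):
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=154;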
Connect to the Aurora endpoint and issue CALL mysql.rds_set_external_master using the binary log information:
CALL mysql.rds_set_external_master (
host_name
, host_port
, replication_user_name
, replication_user_password
, mysql_binary_log_file_name
, mysql_binary_log_file_location
, ssl_encryption
);
Then you should issue CALL mysql.rds_start_replication;
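Putting it together, a call with placeholder values might look like this (the host, credentials, and binlog coordinates are all hypothetical):
CALL mysql.rds_set_external_master (
  'ext-mysql.example.com'   -- host_name of the external Percona master
, 3306                      -- host_port
, 'repl_user'               -- replication_user_name
, 'repl_password'           -- replication_user_password
, 'mysql-bin.000123'        -- binlog file noted earlier
, 154                       -- binlog position noted earlier
, 0                         -- ssl_encryption: 0 = off, 1 = on
);
CALL mysql.rds_start_replication;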
You will need a replication user on the external MySQL instance, and you should also take the necessary precautions to secure that instance via security groups.
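On the external master, the replication user can be created like this (the user name and password are hypothetical):
CREATE USER 'repl_user'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';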
I have an RDS Aurora MySQL 8.0.23 cluster running in production. The database is unencrypted, and I need to enable encryption for it. As far as I understand, this is not possible to do directly. The procedure I am evaluating is:
Create a read replica on the current cluster.
Stop replication on the replica and note the binlog file name and position.
Promote the read replica to a new encrypted cluster (it may require taking a snapshot first).
Set up replication back from the original cluster using the binlog file and position noted before (see the sketch after this list).
Wait until the replication lag is zero.
Redirect production traffic to the new cluster.
Stop replication.
[Optional] Delete the old cluster.
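On Aurora, the replication-setup step would presumably use the stored procedures for external replication, called against the new cluster (the endpoint, user, password, and coordinates below are placeholders):
CALL mysql.rds_set_external_master ('old-cluster.example.com', 3306, 'repl_user', 'repl_password', 'mysql-bin-changelog.000042', 154, 0);
CALL mysql.rds_start_replication;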
I have two issues with the above procedure:
Once the replica is created, running commands like SHOW SLAVE STATUS or SHOW REPLICA STATUS returns an empty set, so I can't note the binlog file and position. Please note that replication is enabled on the original cluster (binlog_format is set to ROW).
It seems I can't promote the Aurora read replica to a new cluster; the option is missing from the available actions. But according to the documentation, it should be possible.
Does anyone have feedback about the issues above? What is the current, up-to-date procedure to encrypt an Aurora MySQL cluster with minimum downtime and no data loss?
We have a common database on MySQL 5.6 and many services are using it. One of the services wants to migrate some tables from the common database to a new MySQL 5.7 server.
The old MySQL server is continuously used by the other services. The total data size is around 400 GB.
Is there any recommended procedure?
Two Approaches
Approach 1
Create a slave with MySQL version 5.7 and replicate only the common database using the replicate-do-db option (a sketch follows).
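In MySQL 5.7 this filter can be set in my.cnf or dynamically; a minimal sketch of the dynamic form, assuming the common database is named common_db (a hypothetical name):
STOP SLAVE SQL_THREAD;
CHANGE REPLICATION FILTER REPLICATE_DO_DB = (common_db);
START SLAVE SQL_THREAD;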
At a point when no writes are happening on the master and there is no lag on the slave, use this as the new server by stopping the slave and disconnecting it from the master.
On the slave:
STOP SLAVE;
To use RESET SLAVE, the slave replication threads must be stopped:
RESET SLAVE;
On the master:
Remove the replication user (see the example below).
FLUSH LOGS;
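For example, if the replication user was repl_user (a hypothetical name):
DROP USER 'repl_user'@'%';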
Approach 2
Try the backup method.
Since the DB size is 400 GB, mysqldump won't be practical.
Try a partial backup using xtrabackup:
xtrabackup --backup --tables-file=/tmp/tables.txt
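Here /tmp/tables.txt lists one fully qualified table per line, in databasename.tablename format, for example (hypothetical names):
common_db.orders
common_db.customers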
Once the backup has completed, verify it and restore it to the new 5.7 server.
Reference:
https://www.percona.com/doc/percona-xtrabackup/2.4/xtrabackup_bin/xbk_option_reference.html#cmdoption-xtrabackup-tables-file
NB: With both approaches, make sure to check table/MySQL version compatibility (5.6 vs. 5.7).
We have a working MySQL master-slave replication setup in our data center. We need to configure a read replica in AWS RDS from my slave server. How can I achieve this? I need a configuration like the following, where the read replica is configured from the slave server:
Master --> Slave --> Read Replica [in RDS]
This document section seems just right for your requirement:
Replication with a MySQL or MariaDB Instance Running External to Amazon RDS
What's required is some downtime for your DB. The steps consist of:
Make your DB read-only (see the sketch after this list)
Dump the data to the RDS instance
Enable writing on your DB again
Create a user with replication privileges on your DB
Start replicating your DB from RDS using the created user
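A minimal sketch of the read-only toggle in the first and third steps, run on your slave:
FLUSH TABLES WITH READ LOCK;
SET GLOBAL read_only = ON;
-- keep this session open, dump and load the data, then:
SET GLOBAL read_only = OFF;
UNLOCK TABLES;
Note that for the Master --> Slave --> Read Replica chain to work, the intermediate slave needs binary logging and log_slave_updates enabled, so that changes it applies from the master are written to its own binary log.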
I am having a difficult time setting up a master-slave configuration.
The master database runs on Ubuntu (an Amazon AWS instance), and master replication was set up successfully.
I have localhost as the slave server (a Windows machine).
[Screenshots: snapshot of the master database and its records, binary log information, the process list on the master, and master replication status]
I debugged the master replication, which works okay, I guess.
On the slave side:
[Screenshot: status on the slave side]
Even though MASTER_LOG_FILE and MASTER_LOG_POS are in sync, the data isn't.
Currently, I have 0 tables on the slave side and 34 tables on the master side.
[Screenshot: tables on the slave side]
I am open to any suggestions or references you may have.
I spent an entire day trying to find what I did wrong.
I want to sync my local database with a database hosted on a remote server.
Update: Things I did to debug the master-slave replication
Checked that the master database is up and running.
Checked the master status and the connected slaves (which includes a unique id for each server).
Checked that the slave database is up and running (including that the slave IO thread and SQL thread are running).
These three steps ensure that Master-Slave replication is up and running without any problem.
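Concretely, those checks map to commands like the following:
SHOW MASTER STATUS;   -- on the master: current binlog file and position
SHOW SLAVE HOSTS;     -- on the master: connected slaves and their server ids
SHOW SLAVE STATUS\G   -- on the slave: Slave_IO_Running and Slave_SQL_Running should both say Yes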
Handling the Data Sync Problem
Created/updated/deleted data in the master database to check whether the data is synced to the slave or not.
Checked the binary log (specifically, the file size: if I enter data, the file size keeps increasing).
Thanks in advance.
We had a similar problem. Read more about the gotchas in binlog-do-db, replicate-do-db, and related parameters; there is a big problem with cross-database references. In the end, we had to remove these settings limiting replication.
Why MySQL’s binlog-do-db option is dangerous
Gotchas in MySQL replication
As your SHOW SLAVE STATUS output shows, you enabled Replicate_Do_DB for the DB "Arihantpos" and at the same time set Binlog_Do_DB for the same DB.
Try removing binlog-do-db from the config file, then restart MySQL and start replication again (see below).
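After removing the option and restarting the master, replication can be restarted on the slave with:
STOP SLAVE;
START SLAVE;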
How can I copy the contents of a specific table (or the table as-is) from my local database to a database instance hosted in the cloud, let's say Amazon RDS?
Note: it has to be done once every hour.
EDIT:
Other I/O operations on the local database should not be suspended (e.g., no read locks).
You can set your local database server to be a master to the Amazon RDS instance, which means the Amazon RDS instance becomes a slave in this setup. This is possible, as described in the AWS documentation here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.External.Repl.html
You can also configure the slave to replicate only a specific table of the database, and even to apply changes after a specified delay (delayed replication); a sketch follows.
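A minimal sketch on a self-managed slave (the table name is hypothetical; on RDS, the equivalent filter is set through the replicate-do-table parameter in a parameter group):
STOP SLAVE;
CHANGE REPLICATION FILTER REPLICATE_DO_TABLE = (mydb.mytable);
CHANGE MASTER TO MASTER_DELAY = 3600;  -- optional: apply changes one hour behind the master
START SLAVE;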