We are setting up NDB Cluster to NDB Cluster replication. From the MySQL documentation I found the following:
https://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-replication-issues.html
Using --binlog-ignore-db=mysql means that no changes to tables in the mysql database are written to the binary log. In this case, you should also use --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is replicated.
But when I set --binlog-ignore-db=mysql on the master mysqld node and --replicate-do-table=mysql.ndb_apply_status on the slave mysqld node, the application database updates are not replicated; only mysql.ndb_apply_status is replicated from the source. If I remove --replicate-do-table=mysql.ndb_apply_status, then both mysql.ndb_apply_status and the application database are replicated. I am not sure why the MySQL documentation says to use --replicate-do-table=mysql.ndb_apply_status on the slave node, and I am not sure whether anything breaks if I use --binlog-ignore-db=mysql without setting --replicate-do-table=mysql.ndb_apply_status on the slave. Any help?
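For reference, a minimal sketch of the option placement described above; the section headers and comments are mine, only the two options themselves come from the question:

# master (source) mysqld node, my.cnf
[mysqld]
binlog-ignore-db=mysql

# slave (replica) mysqld node, my.cnf
[mysqld]
replicate-do-table=mysql.ndb_apply_status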
I have an RDS Aurora MySQL 8.0.23 cluster running in production. The database is unencrypted and I need to enable encryption for it. As far as I understand, this is not possible to do directly. The procedure I am evaluating is:
1. Create a read replica on the current cluster.
2. Stop replication on the replica and note the binlog filename and position.
3. Promote the read replica to a new encrypted cluster (it may require taking a snapshot first).
4. Set up replication from the original cluster again, using the binlog file and position noted before (see the sketch after this list).
5. Wait until replication lag is zero.
6. Redirect production traffic to the new cluster.
7. Stop replication.
8. [Optional] Delete the old cluster.
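For step 4, a rough sketch of what that could look like, assuming the RDS/Aurora replication stored procedures are available on the new (promoted) cluster; the endpoint, credentials and binlog coordinates are placeholders:

-- run on the new, encrypted cluster, using the file/position noted in step 2
CALL mysql.rds_set_external_master(
    'old-cluster-endpoint', 3306,
    'repl_user', 'repl_password',
    'mysql-bin-changelog.000123', 4,
    0);                        -- 0 = no SSL
CALL mysql.rds_start_replication;

-- step 7, once production traffic has moved to the new cluster
CALL mysql.rds_stop_replication;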
I have two issues with the above procedure:
1. Once the replica has been created, running commands like SHOW SLAVE STATUS or SHOW REPLICA STATUS returns an empty set, so I can't note the binlog file and position. Please note that replication is enabled on the original cluster (binlog_format is set to ROW).
2. It seems I can't promote the Aurora read replica to a new cluster; the option is missing from the available actions, although according to the documentation it should be possible.
Does anyone have feedback about the issues above? What is the current, up-to-date procedure to encrypt an Aurora MySQL cluster with minimum downtime and no data loss?
I am trying to replicate a Cloud SQL MySQL database to a GCE VM, and I am following this guide:
https://cloud.google.com/sql/docs/mysql/replication/configure-external-replica
The problem I face is that once I restore the dump and start my slave, the slave tries to execute the DDL commands that are already in the dump. In other words, the GTID-based replication starts from 0.
What I expect is that it starts from the point where the dump was taken.
What am I doing wrong here?
I can see that I am getting the latest GTID set from the master (left side is the slave and right side is the master).
So I have found the issue.
The issue was that my dump file did not contain information about GTIDs from the source server. Because of this, the destination had no idea which GTIDs had already been executed at the source.
So I must not use --set-gtid-purged=OFF when creating the mysqldump (leave it at AUTO, or set it to ON, so the dump carries the source's GTID set).
This sets gtid_purged (and therefore gtid_executed) at the destination when the dump is restored, and ensures that there is no re-execution.
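A minimal sketch of such a dump command, with host, user and database names as placeholders (the exact flags in the Cloud SQL guide may differ):

mysqldump --host=SOURCE_HOST --user=root --password \
    --databases mydb \
    --single-transaction --set-gtid-purged=ON > dump.sql

The resulting dump then contains a SET @@GLOBAL.gtid_purged statement, so the replica knows which transactions were already applied at the source and does not re-execute them.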
We have a common database on MySQL 5.6 and many services are using it. One of the services wants to migrate some tables from the common database to a new MySQL 5.7 server.
The old MySQL server is continuously being used by another service. The total data size is around 400 GB.
Is there any recommended procedure?
Two Approaches
Approach 1:
Create a slave running MySQL 5.7 and replicate only the common database using the replicate-do-db option, as sketched below.
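A minimal sketch of the slave-side settings, assuming the common database is literally named common and using an arbitrary server-id (both placeholders):

[mysqld]
server-id       = 2
replicate-do-db = common    # replicate only the common database

The slave is then pointed at the 5.6 master with CHANGE MASTER TO and started with START SLAVE as usual.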
At a point when no writes are happening on the master and there is no lag on the slave, use the slave as the new server by stopping replication and disconnecting it from the master.
On the slave:
STOP SLAVE;
(To use RESET SLAVE, the slave replication threads must be stopped first.)
RESET SLAVE;
On the master:
Remove the replication user, then:
FLUSH LOGS;
Approach 2:
Try the backup method
Since the database size is around 400 GB, a plain mysqldump won't be practical.
Try a partial backup using xtrabackup:
xtrabackup --backup --tables-file=/tmp/tables.txt
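The tables-file lists one fully qualified table per line; a hypothetical example (database and table names are made up):

common.orders
common.customers
common.invoices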
Once the backup has completed, verify it and restore it into the new 5.7 server.
Reference:
https://www.percona.com/doc/percona-xtrabackup/2.4/xtrabackup_bin/xbk_option_reference.html#cmdoption-xtrabackup-tables-file
Note: with both approaches, make sure to check table/MySQL version compatibility (5.6 vs 5.7).
I am trying to configure MySQL databases using master-slave replication. Before I realized that I had to set up my environment with replication, I already had 2 separate servers running their own MySQL databases. Each of these servers is configured exactly the same, and each database contains hundreds of tables.
Is there a way that I can set up master-slave replication using the already-configured databases? Or will I have to start from scratch, configure the replication first, and then load in all the tables?
You can delete all data from one of the servers; the remaining one with the data will be your master. Then use mysqldump to back up all the data and load it into the slave.
Take a look at the detailed instructions on the page below:
https://livecaller.io/blog/how-to-set-up-mysql-master-slave-replication/
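As a rough sketch of the dump-and-load step (host names and the dump file name are placeholders), --master-data records the master's binlog coordinates in the dump so the slave can later be pointed at the right position:

mysqldump --host=MASTER_HOST --user=root --password \
    --all-databases --single-transaction --master-data=2 > master_dump.sql
mysql --host=SLAVE_HOST --user=root --password < master_dump.sql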
If the data is exactly the same in both MySQL databases, then you can start master-slave replication directly, but you need to be sure the data really is identical. MySQL will not check that, and if there is some discrepancy in a primary key it will throw an error as soon as the next DML statement touches it.
To be on the safe side, drop the database from one server and restore it from a MySQL dump of the other server. That guarantees the database is the same on both servers.
See the link below for establishing replication between two MySQL servers.
https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-in-mysql
I have an existing MySQL replication setup (Windows 2008 to Ubuntu 9.04) and have created several new tables in the master database. These are not showing up in the slave database.
Do new tables automatically get copied to the slave DB, or do I need to set up replication again?
Thanks!
I'm going to assume that other data is successfully replicating.
Replication in MySQL is per-server, so the most likely problems are that either you aren't binlogging the events, or the slave is ignoring them.
For the binlogs, verify you aren't turning sql_log_bin off for the connection (which would require SUPER) and that the various binary-log options are set correctly. You can verify this by running mysqlbinlog on the server's binlogs.
On the slave side, check the replication options.
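A few illustrative checks along those lines (a sketch, not an exhaustive diagnosis; column names vary slightly between MySQL versions):

-- on the master: is binary logging on, and is this session writing to it?
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'sql_log_bin';
SHOW MASTER STATUS;
-- on the slave: are any replication filters ignoring the new tables?
SHOW SLAVE STATUS\G    -- check Replicate_Do_DB, Replicate_Ignore_DB, Replicate_Wild_Do_Table, Replicate_Wild_Ignore_Table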