I am trying to configure master-master replication, but I am getting an error. My configuration is below.
Server A
server-id = 1
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 1
master-host = Kooler-PC
master-user = replicacao
master-password = replicacao
master-connect-retry = 60
replicate-do-db = gestao_quadra
log-bin = C:\mysql\log\log-bin.log
binlog-do-db = gestao_quadra
CHANGE MASTER TO MASTER_HOST='Kooler-PC', MASTER_USER='replicacao', MASTER_PASSWORD='replicacao', MASTER_LOG_FILE='log-bin.log ', MASTER_LOG_POS=0;
I have done the same steps on the other server, changing the server-id and host, and created the log file in that path.
I get this error:
130218 18:03:02 [Note] Slave I/O thread: connected to master 'replicacao@Kooler-PC:3306', replication started in log 'log-bin.log ' at position 4
130218 18:03:02 [ERROR] Error reading packet from server: Binary log is not open ( server_errno=1236)
130218 18:03:02 [ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Binary log is not open', Error_code: 1236
130218 18:03:02 [Note] Slave I/O thread exiting, read up to log 'log-bin.log ', position 4
I am using MySQL 5.5
If you read the MySQL manual on replication and binary logging, it will tell you that this line:
log-bin = C:\mysql\log\log-bin.log
does not create a log file with exactly that name; it specifies the base name. The log files that actually get created are named like:
C:\mysql\log\log-bin.log.000001
That is to say, the actual logs have a sequence number appended to the name you specified. To see the actual log names, use the commands:
SHOW MASTER STATUS;
SHOW BINARY LOGS;
This part of your change master statement is not valid:
MASTER_LOG_FILE='log-bin.log ', MASTER_LOG_POS=0;
There's no part of any replication-related instructions I've ever read that would lead you to use position 0. You have to use the master's binary log file name and position that correspond to the snapshot of the data with which you initialized the slave.
See the manual for more info. Start with basic master->slave replication first before you attempt more complex replication structures. http://dev.mysql.com/doc/refman/5.5/en/replication.html
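As an example, here is a minimal sketch of the corrected workflow (the log file name and position below are placeholders; use whatever SHOW MASTER STATUS actually reports on your master):

-- On the master: note the current binary log file and position.
SHOW MASTER STATUS;
-- Suppose it reports File: log-bin.000001, Position: 107 (example values only).

-- On the slave: point replication at those exact coordinates.
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='Kooler-PC',
  MASTER_USER='replicacao',
  MASTER_PASSWORD='replicacao',
  MASTER_LOG_FILE='log-bin.000001',  -- the real log name, not the base name from my.cnf
  MASTER_LOG_POS=107;                -- copy the position from SHOW MASTER STATUS, never 0
START SLAVE;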
Related
In my MySQL 5.6 environment, I have an A -> B -> C replication setup. On the C slave, no MySQL binlog has been generated since I set it up as a slave, yet replication is running according to SHOW SLAVE STATUS: the Slave_SQL_Running_State shows "Slave has read all relay log; waiting for the slave I/O thread to update it". SHOW MASTER STATUS on the C slave shows it is at "mysql-bin.000003" position 120, and that file is dated Nov 29, not today. I have checked the C slave's my.cnf and I do have the binlog configured:
log-bin=/mysql-binlog/mysql-bin
binlog_format=mixed
max_binlog_size = 200M
Relay logs are generated on the C slave and are kept current.
Why does the C slave produce no binlog?
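One thing worth checking (a guess, since the my.cnf excerpt above does not show it) is whether the slave is allowed to write the changes it replicates from B into its own binary log; by default a slave does not do this:

-- On the C slave: replicated changes only go into C's own binary log
-- when log_slave_updates is ON.
SHOW VARIABLES LIKE 'log_slave_updates';
SHOW VARIABLES LIKE 'log_bin';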
I am using MySQL 8.0.12 to set up a master-slave replication cluster, but the slave always gets the following errors. Does anyone know how to fix this?
2018-11-01T04:17:58.327576Z 19 [ERROR] [MY-010834] [Server] next log
error: -1 offset: 50 log: ./mysql-relay-bin.000002 included: 1,
2018-11-01T04:17:58.327675Z 19 [ERROR] [MY-010596] [Repl] Error
reading relay log event for channel '': Error purging processed logs,
2018-11-01T04:17:58.327932Z 19 [ERROR] [MY-013121] [Repl] Slave SQL
for channel '': Relay log read failure: Could not parse relay log
event entry. The possible reasons are: the master's binary log is
corrupted (you can check this by running 'mysqlbinlog' on the binary
log), the slave's relay log is corrupted (you can check this by
running 'mysqlbinlog' on the relay log), a network problem, or a bug
in the master's or slave's MySQL code. If you want to check the
master's binary log or slave's relay log, you will be able to know
their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code:
MY-013121,
2018-11-01T04:17:58.327982Z 19 [ERROR] [MY-010586] [Repl] Error
running query, slave SQL thread aborted. Fix the problem, and restart
the slave SQL thread with "SLAVE START". We stopped at log
'mysql-bin.000003' position 805
Check the disk space on the slave.
I faced the same issue once.
During replication, if the slave server's disk is full and there is no space left, the MySQL replication thread waits for space to be freed; the wait time is 60 seconds. If the server is restarted during that window, the relay log cannot be recovered and the slave can no longer read it.
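Once space has been freed, a common way to recover from a corrupted relay log is to discard the relay logs and re-point the slave at the last master position its SQL thread actually executed. This is only a sketch for a position-based (non-GTID) setup; the coordinates below are taken from the error log above, so verify them against SHOW SLAVE STATUS on your own slave first:

-- On the slave: find the last master coordinates the SQL thread applied.
SHOW SLAVE STATUS\G
-- Note Relay_Master_Log_File and Exec_Master_Log_Pos in the output.

STOP SLAVE;
RESET SLAVE;                           -- discards the damaged relay logs
CHANGE MASTER TO
  MASTER_LOG_FILE='mysql-bin.000003',  -- your Relay_Master_Log_File value
  MASTER_LOG_POS=805;                  -- your Exec_Master_Log_Pos value
START SLAVE;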
I am trying to set up MySQL master-master replication as described in
https://www.digitalocean.com/community/tutorials/how-to-set-up-mysql-master-master-replication
My two server configs:
Server C:
Ubuntu, MySQL config in /etc/mysql/my.cnf:
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = example
auto-increment-increment = 2
Server D:
CentOS, /etc/my.cnf:
server-id = 2
log_bin = mysql-bin
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = example
auto-increment-increment = 2
auto-increment-offset = 2
I configured this and followed the post linked above.
I can only replicate from server C to D.
D to C is not working.
Server D
stop slave;
CHANGE MASTER TO MASTER_HOST = '192.168.0.203', MASTER_USER = 'hopereplicate', MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000002', MASTER_LOG_POS = 107;
start slave;
SERVER C
stop slave;
CHANGE MASTER TO MASTER_HOST = '192.168.0.205', MASTER_USER = 'hopereplicate', MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000008', MASTER_LOG_POS = 311;
start slave;
Any help, please?
Error log on Server D.
2016-12-14T07:19:36.888867Z 0 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--r$
2016-12-14T07:19:36.980635Z 1 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PA$
2016-12-14T07:19:36.980976Z 1 [Note] Slave I/O thread for channel '': connected to master 'hopereplicate@192.168.0.203:3306',replication started in log 'mysql-bin.000231' at position 1720
2016-12-14T07:19:36.981192Z 1 [Warning] Slave I/O for channel '': Notifying master by SET @master_binlog_checksum= @@global.binlog_checksum failed with error: Unknown system variable 'binlog_checksum', E$
2016-12-14T07:19:36.981248Z 1 [Warning] Slave I/O for channel '': Unknown system variable 'SERVER_UUID' on master. A probable cause is that the variable is not supported on the master (version: 5.5.53-0u$
2016-12-14T07:19:36.983493Z 0 [Note] Event Scheduler: Loaded 0 events
2016-12-14T07:19:36.983572Z 0 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.7.16-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL)
2016-12-14T07:19:36.988889Z 2 [Warning] Slave SQL for channel '': If a crash happens this configuration does not guarantee that the relay log info will be consistent, Error_code: 0
2016-12-14T07:19:37.159916Z 2 [Note] Slave SQL thread for channel '' initialized, starting replication in log 'mysql-bin.000231' at position 1720, relay log './server05-relay-bin.000002' position:$
2016-12-14T07:19:55.186781Z 2 [Note] Error reading relay log event for channel '': slave SQL thread was killed
2016-12-14T07:19:55.208979Z 1 [Note] Slave I/O thread killed while reading event for channel ''
2016-12-14T07:19:55.208999Z 1 [Note] Slave I/O thread exiting for channel '', read up to log 'mysql-bin.000231', position 1720
2016-12-14T07:24:10.481707Z 5 [Warning] IP address '192.168.0.203' could not be resolved: Name or service not known
2016-12-14T07:24:10.484472Z 5 [Note] Start binlog_dump to master_thread_id(5) slave_server(1), pos(mysql-bin.000007, 154)
2016-12-14T07:24:10.484491Z 5 [Warning] Master is configured to log replication events with checksum, but will not send such events to slaves that cannot process them
2016-12-14T07:25:25.669710Z 4 [Note] 'CHANGE MASTER TO FOR CHANNEL '' executed'. Previous state master_host='192.168.0.203', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New $
2016-12-14T07:25:34.553928Z 6 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PA$
2016-12-14T07:25:34.563892Z 7 [Warning] Slave SQL for channel '': If a crash happens this configuration does not guarantee that the relay log info will be consistent, Error_code: 0
2016-12-14T07:25:34.563921Z 7 [Note] Slave SQL thread for channel '' initialized, starting replication in log 'mysql-bin.000002' at position 107, relay log './server05-relay-bin.000001' position: 4
2016-12-14T07:25:34.749727Z 6 [Note] Slave I/O thread for channel '': connected to master 'hopereplicate@192.168.0.203:3306',replication started in log 'mysql-bin.000002' at position 107
2016-12-14T07:25:34.749962Z 6 [Warning] Slave I/O for channel '': Notifying master by SET @master_binlog_checksum= @@global.binlog_checksum failed with error: Unknown system variable 'binlog_checksum', E$
2016-12-14T07:25:34.750009Z 6 [Warning] Slave I/O for channel '': Unknown system variable 'SERVER_UUID' on master. A probable cause is that the variable is not supported on the master (version: 5.5.53-0u$
2016-12-14T07:26:26.348302Z 8 [Note] Start binlog_dump to master_thread_id(8) slave_server(1), pos(mysql-bin.000008, 311)
2016-12-14T07:26:26.348326Z 8 [Warning] Master is configured to log replication events with checksum, but will not send such events to slaves that cannot process them
2016-12-14T07:47:15.388940Z 0 [Note] Giving 2 client threads a chance to die gracefully
2016-12-14T07:47:15.388957Z 0 [Note] Shutting down slave threads
2016-12-14T07:47:15.388973Z 7 [Note] Error reading relay log event for channel '': slave SQL thread was killed
2016-12-14T07:47:15.437794Z 6 [Note] Slave I/O thread killed while reading event for channel ''
2016-12-14T07:47:15.437812Z 6 [Note] Slave I/O thread exiting for channel '', read up to log 'mysql-bin.000002', position 327
2016-12-14T07:47:15.446619Z 0 [Note] Forcefully disconnecting 0 remaining clients
The problem is that the two MySQL server versions are not the same.
This creates an incompatibility, which is identified in the logs:
[Warning] Slave I/O for channel '': Unknown system variable
'SERVER_UUID' on master. A probable cause is that the variable is not
supported on the master (version: 5.5.53-0u$
Updating both servers to the same version should sort this issue out.
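As a quick sanity check, you can compare the versions on both C and D:

-- Run on each server and compare the results.
SELECT VERSION();
SHOW VARIABLES LIKE 'version';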
I want to create a replica of my Percona Server with GTID enabled, but I get this error when I check the slave status:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
Normally, I would stop the slave, reset it, run RESET MASTER on the slave, and set a new GTID_PURGED value taken from the master. But this time around, the master has very unusual values and I am not sure how to determine which one to use:
mysql> show master status\G
*************************** 1. row ***************************
File: mysqld-bin.000283
Position: 316137263
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546512667
1 row in set (0.00 sec)
From the slave with the new backup copy, I get this:
root@ubuntu:/var/lib/mysql# cat xtrabackup_binlog_info
mysqld-bin.000283 294922064 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960
One more thing: I purged the binary logs on the master just before I made the backup, and automatic binlog purging is set to 7 days, so I know the problem is not that the binlog has been purged, as the error suggests.
I am running Ubuntu 14.04 and Percona Server version 5.6.31-77.
How can I resolve this issue? What is the correct GTID_PURGED value to take from the master?
MySQL 5.6 GTID replication errors and fixes
What is a GTID?
A GTID has the form source_id:transaction_id, e.g. 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:12345.
The source_id part is the originating server's 128-bit identification number (SERVER_UUID); it identifies where the transaction originated. Every server has its own SERVER_UUID.
What problems does GTID solve?
A transaction can be identified uniquely across all servers in the replication topology. This makes automating the failover process much easier: there is no need to do calculations or inspect the binary log; just use MASTER_AUTO_POSITION=1.
At the application level it is easier to do a WRITE/READ split: after a write on the MASTER you have a GTID, so you just check whether that GTID has been executed on the SLAVE you use for reads (a small check is sketched below).
Developing new automation tools is no longer a pain.
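For instance, a minimal sketch of such a read/write-split check (the GTID below is a placeholder built from the master UUID used later in this answer; in practice you would capture the GTID on the master right after the write):

-- On the read slave: has this specific transaction been applied yet?
-- GTID_SUBSET() returns 1 when the GTID is contained in the slave's executed set.
SELECT GTID_SUBSET('4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:83345127',
                   @@GLOBAL.gtid_executed) AS already_applied;

-- Or block (here up to 5 seconds) until the slave has applied it.
SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS(
         '4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:83345127', 5);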
How can I implement it?
The following variables are needed on ALL servers of the replication chain (a minimal my.cnf sketch follows the reference link below):
gtid_mode: can be ON or OFF (not 1 or 0). It enables GTIDs on the server.
log_bin: enables the binary log. Mandatory for a replication environment.
log-slave-updates: slave servers must log the changes that come from the master in their own binary log.
enforce-gtid-consistency: statements that can't be logged in a transactionally safe manner are rejected by the server.
ref: http://dev.mysql.com/doc/refman/5.6/en/replication-gtids-howto.html
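A minimal my.cnf sketch with those settings (server-id and the log name are placeholders; adjust them per server):

[mysqld]
server-id                = 1            # must be unique on every server
log-bin                  = mysql-bin    # binary logging is mandatory
log-slave-updates        = ON           # slaves also log what they replicate
gtid-mode                = ON           # ON or OFF, not 1 or 0
enforce-gtid-consistency = ON           # reject transactionally unsafe statements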
Replication errors and fixes:
"'Got fatal error 1236 from master when reading data from binary log: "The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires." slave_io thread stop running.
Resolution: Considering following are the master – slave UUID's
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
Steps:
slave> STOP SLAVE;
slave> FLUSH TABLES WITH READ LOCK;
slave> SHOW MASTER STATUS;
'4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-13030:13032-13317:13322-13325:13328-653183:653185-654126:654128-1400817:1400820-3423394:3423401-5779965'
(Here 83345127 is the last GTID executed on the master, and 5779965 is the last slave-originated GTID executed on the master.)
slave> RESET MASTER;
slave> SET GLOBAL GTID_PURGED='4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-5779965';
slave> START SLAVE;
slave> UNLOCK TABLES;
slave> SHOW SLAVE STATUS;
NOTE: After this, restart the other chained slaves if they stop replicating.
ERROR: 'Error "Table … 'doesn"t exist" on query. Default database: …Query: "INSERT INTO OR Last_SQL_Error: ….Error 'Duplicate entry' SKIP Transaction on slave (slave_sql Thread stop running) NOTE:
SQL_SLAVE_SKIP_COUNTER doesn't work anymore with GTID.
We need to find what transaction is causing the replication to fail.
– From binary log
– From SHOW SLAVE STATUS (retrieved vs executed)
Type of error: check the Last_SQL_Error field in SHOW SLAVE STATUS.
Resolution: assume the following are the master and slave UUIDs:
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
slave> SHOW SLAVE STATUS;
Copy the Executed_Gtid_Set value: '4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-659731804,5b37def1-6189-11e3-bee0-e89a8f22a444:1-70734947-80436012:80436021-80437839'
It seems that transaction '80437840' originating on the slave (UUID 5b37def1-6189-11e3-bee0-e89a8f22a444) is causing the problem here.
slave> STOP SLAVE;
slave> SET GTID_NEXT="5b37def1-6189-11e3-bee0-e89a8f22a444:80437840"; (last_executed_slave_gtid_on_master + 1)
slave> BEGIN; COMMIT;
slave> SET GTID_NEXT="AUTOMATIC";
slave> START SLAVE;
slave> SHOW SLAVE STATUS;
and it's ALL SET !!!
# If xtrabackup was used to take the backup on the master instance:
cat xtrabackup_info | grep binlog_pos
# Using the information you gave as an example:
binlog_pos = filename 'mysqld-bin.000283', position '294922064', GTID of the last change '1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960'
# Copy the 'GTID of the last change' value into gtid_purged:
slave> STOP SLAVE;
slave> RESET MASTER;
slave> SET GLOBAL gtid_purged='1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960';
slave> CHANGE MASTER TO MASTER_HOST='master ip', MASTER_PORT=master port, MASTER_USER='your replication username', MASTER_PASSWORD='your replication password', MASTER_AUTO_POSITION=1;
slave> START SLAVE;
In short: my binary logs aren't being created even though log-bin is set and specified, and I'm not sure how to fix it.
I have a MariaDB instance running as a service on Windows that I am attempting to replicate to a MariaDB instance on an Ubuntu machine. I am using MySQL Workbench 6.0 as much as I can to manage everything, and following the instructions from Oracle for setting up master-slave replication: http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html
I have made it to the fourth chapter, where I supposedly have both the master and slave configured, and I am about to read-lock the master tables for an initial data dump to the slave before starting replication. So I flushed the tables with a read lock and checked the master status:
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
That last line didn't return any binary log information. Checking further, I ran:
SHOW BINARY LOGS;
and an error message confirmed that:
Error Code: 1381. You are not using binary logging
Master Config is like this:
[mysqld]
datadir = "C:/mysql/data"
port=3306
sql_mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
default_storage_engine=innodb
innodb_buffer_pool_size=1535M
innodb_log_file_size=50M
feedback=ON
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
log-bin-index = "C:/mysql/logs/log-bin.index"
log-bin=mysql-bin
server-id=1
innodb_flush_log_at_trx_commit=1
[client]
port=3306
How do I make sure the binary logs are rolling so I can continue with this?
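A quick way to check whether the running server actually picked up the log-bin setting (a diagnostic sketch only; run it on the Windows MariaDB instance):

-- Is binary logging enabled in the running instance?
SHOW VARIABLES LIKE 'log_bin';
-- Where would it write? (may be empty or absent on older MariaDB versions)
SHOW VARIABLES LIKE 'log_bin_basename';
-- The error log often explains why log-bin was ignored (bad path, permissions, wrong config file).
SHOW VARIABLES LIKE 'log_error';

If log_bin reports OFF, the running service most likely did not read the [mysqld] section shown above; verify which configuration file the Windows service actually loads.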