I want to create a replica of my Percona Server with GTID enabled, but I get this error in SHOW SLAVE STATUS:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
Normally, I would stop the slave, reset it, run RESET MASTER on the slave, and get the new GTID_PURGED value from the master. But this time the master has very unusual values and I am not sure how to determine which one to use:
mysql> show master status\G
*************************** 1. row ***************************
File: mysqld-bin.000283
Position: 316137263
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546512667
1 row in set (0.00 sec)
From the slave, with the new backup copy in place, I get this:
root@ubuntu:/var/lib/mysql# cat xtrabackup_binlog_info
mysqld-bin.000283 294922064 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960
One more thing: I purged the binary logs on the master just before I made the backup, and automatic binlog purging is set to 7 days, so I know the problem is not that the binary logs have been purged, as the error suggests.
I am running Ubuntu 14.04 and Percona Server 5.6.31-77.
How can I resolve this issue? What is the correct value for the master's GTID_PURGED?
MySQL 5.6 GTID replication errors and fixes
What is GTID? 
4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
This is the server's 128-bit identification number (SERVER_UUID). It identifies where a transaction originated; every server has its own SERVER_UUID. A GTID is this UUID plus a transaction sequence number, e.g. 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:23.
What problems does GTID solve?
It makes it possible to identify a transaction uniquely across the replication servers, which makes automating the failover process much easier: there is no need to do calculations or inspect the binary log, just use MASTER_AUTO_POSITION=1.
At the application level it is easier to do a WRITE/READ split: after a write on the MASTER you have a GTID, so just check whether that GTID has been executed on the SLAVE you use for reads (see the sketch after this list).
Developing new automation tools is no longer a pain.
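As a sketch of that read/write-split check on MySQL 5.6 (the GTID value here is purely illustrative):
master> SHOW MASTER STATUS;   -- note the Executed_Gtid_Set after your write
slave> SELECT GTID_SUBSET('4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:83345127', @@GLOBAL.gtid_executed) AS applied;
-- returns 1 once the slave has executed that transaction; to block until it does:
slave> SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:83345127', 5);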
How can I implement it?
The following settings are needed on ALL servers of the replication chain (a sample my.cnf sketch follows below):
gtid_mode: can be ON or OFF (not 1 or 0); it enables GTID on the server.
log_bin: enables the binary log, which is mandatory for a replication environment.
log-slave-updates: slave servers must log the changes that come from the master in their own binary log.
enforce-gtid-consistency: statements that can't be logged in a transactionally safe manner are denied by the server.
ref: http://dev.mysql.com/doc/refman/5.6/en/replication-gtids-howto.html
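A minimal my.cnf sketch with those settings might look like this (the server-id is just an example and must be unique on every server):
[mysqld]
server-id                = 1        # unique per server in the chain
log_bin                  = mysqld-bin
log-slave-updates        = 1
gtid_mode                = ON
enforce-gtid-consistency = 1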
Replication errors and fixes:
ERROR: 'Got fatal error 1236 from master when reading data from binary log: "The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires."' The slave_io thread stops running.
Resolution: Assume the following are the master and slave UUIDs:
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
Steps:
slave> STOP SLAVE;
slave> FLUSH TABLES WITH READ LOCK;
slave> SHOW MASTER STATUS;
'4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-13030:13032-13317:13322-13325:13328-653183:653185-654126:654128-1400817:1400820-3423394:3423401-5779965'
(Here 83345127 is the last transaction executed with the master's UUID, and 5779965 is the last transaction executed with the slave's UUID.)
slave> RESET MASTER;
slave> SET GLOBAL GTID_PURGED='4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-5779965';
slave> START SLAVE;
slave> UNLOCK TABLES;
slave> SHOW SLAVE STATUS;
NOTE: After this, restart the other chained slaves if they stop replicating.
ERROR: Error 'Table … doesn't exist' on query (Default database: …, Query: 'INSERT INTO …'), or Last_SQL_Error: … Error 'Duplicate entry …'. The fix is to skip the transaction on the slave (the slave_sql thread stops running). NOTE:
SQL_SLAVE_SKIP_COUNTER doesn't work anymore with GTID.
We need to find what transaction is causing the replication to fail.
– From the binary log
– From SHOW SLAVE STATUS (Retrieved_Gtid_Set vs Executed_Gtid_Set), as sketched below
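For example, a sketch of the retrieved-vs-executed comparison (the GTID values are illustrative):
slave> SHOW SLAVE STATUS\G
Retrieved_Gtid_Set: 5b37def1-6189-11e3-bee0-e89a8f22a444:1-80437845
Executed_Gtid_Set: 5b37def1-6189-11e3-bee0-e89a8f22a444:1-80437839
The first retrieved GTID that is missing from the executed set (here 80437840) is the transaction the SQL thread is failing on.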
Types of errors: (check Last_SQL_Error in SHOW SLAVE STATUS)
Resolution: Assume the following are the master and slave UUIDs:
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
slave> SHOW SLAVE STATUS;
Copy the Executed_Gtid_Set value: '4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-659731804,5b37def1-6189-11e3-bee0-e89a8f22a444:1-70734947-80436012:80436021-80437839'
It seems that transaction '80437840' from the slave (UUID 5b37def1-6189-11e3-bee0-e89a8f22a444) is the one causing the problem here.
slave> STOP SLAVE;
slave> SET GTID_NEXT="5b37def1-6189-11e3-bee0-e89a8f22a444:80437840"; (last_executed_slave_gtid_on_master + 1)
slave> BEGIN; COMMIT;
slave> SET GTID_NEXT="AUTOMATIC";
slave> START SLAVE;
slave> show slave status;
and it's ALL SET !!!
# If the backup was made with xtrabackup from the main instance:
cat xtrabackup_info | grep binlog_pos
# Using the information you gave as an example:
binlog_pos = filename 'mysqld-bin.000283', position '294922064', GTID of the last change '1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960'
# Copy the 'GTID of the last change' value into gtid_purged:
slave>STOP SLAVE;
slave>RESET MASTER;
slave>SET GLOBAL gtid_purged='1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960';
slave>CHANGE MASTER TO MASTER_HOST='<master ip>', MASTER_PORT=<master port>, MASTER_USER='<your replication username>', MASTER_PASSWORD='<your replication password>', MASTER_AUTO_POSITION=1;
slave>START SLAVE;
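After START SLAVE, a quick sanity check might look like this (a sketch; the output is trimmed):
slave>SHOW SLAVE STATUS\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
The slave's Executed_Gtid_Set should now grow toward the Executed_Gtid_Set reported by SHOW MASTER STATUS on the master.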
Related
I had MySQL master-slave replication configured. I accidentally ran RESET SLAVE on the slave instance. I did not make a note of the last binlog position of the master that the slave had completed. SHOW SLAVE STATUS returns an empty result because I have reset the slave.
Is there any way I can recover the last binlog position that the slave had finished syncing? Or is there any other way I can fix the replication without setting it up fresh?
Master DB: MySQL on Server 2012
Slave DB: MySQL on Win7 (XAMPP)
DB size: 500 MB
Table count: 42
I had set up the replication successfully; however, it stopped last week and my slave was showing Slave_SQL_Running: No. I realised it was looking at an incorrect log file (00004 whereas it should have been 00006).
I have since sorted this by:
At the MASTER:
SHOW MASTER STATUS;
Copied the values of MASTER_LOG_FILE and MASTER_LOG_POS.
At the SLAVE:
STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98; (<- example values)
START SLAVE;
SHOW SLAVE STATUS\G
On my master I tested the replication by editing the members table: I changed one of the row values (from 85 to 86), and this successfully replicated to my slave. However, I notice that my master's members table has 70652 members while my slave's has only 70056.
I added two new members to my master's members table and the total increased by 2 in both tables. However, there still seem to be about 600 rows missing.
What could be the problem? Replication seems to be working, but the totals don't match. New members are added to the members table each day, but they aren't being added to my slave's members table.
The results of my slave status (from phpMyAdmin) are:
Slave_IO_State Waiting for master to send event
Master_Host xxx.xxx.xxx.xxx
Master_User repl
Master_Port 3306
Connect_Retry 60
Master_Log_File mysql-bin.000006
Read_Master_Log_Pos 787956776
Relay_Log_File mysql-relay-bin.000004
Relay_Log_Pos 624412
Relay_Master_Log_File mysql-bin.000006
Slave_IO_Running Yes
Slave_SQL_Running Yes
Replicate_Do_DB
Replicate_Ignore_DB
Replicate_Do_Table
Replicate_Ignore_Table
Replicate_Wild_Do_Table
Replicate_Wild_Ignore_Table
Last_Errno 0
Last_Error
Skip_Counter 0
Exec_Master_Log_Pos 787956776
Relay_Log_Space 788197
Until_Condition None
Until_Log_File
Until_Log_Pos 0
Master_SSL_Allowed No
Master_SSL_CA_File
Master_SSL_CA_Path
Master_SSL_Cert
Master_SSL_Cipher
Master_SSL_Key
Seconds_Behind_Master 0
Is there something else that I could check or test?
Yes: while replication was stopped, some rows were changed or inserted on the master. After that you RESET the slave and set MASTER_LOG_POS, so replication can never pick up those old changes.
You have two options:
First:
Stop replication.
Dump the master DB (with the master position).
Restore it on the slave.
Set the position (or verify it).
Start the slave.
Second:
Stop the slave.
Sync the master DB to the slave DB with Percona Toolkit's pt-table-sync (see the sketch below).
Start the slave.
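For the second option, a sketch of the Percona Toolkit commands (host, user, database and table names are placeholders):
# on the master: find which tables differ between master and slave
pt-table-checksum --databases=members_db h=master_host,u=repl,p=secret
# then make the slave match its master for the affected table(s)
pt-table-sync --execute --sync-to-master h=slave_host,D=members_db,t=members,u=repl,p=secret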
I have a problem with MySQL replication.
I configured two virtual hosts.
Server 1: Apache + mysql Ver 15.1 Distrib 5.5.41-MariaDB
Master and SLAVE of Server 2
Server 2: mysql Ver 14.14 Distrib 5.5.42
Master and SLAVE of Server 1
Topology: MASTER + MASTER
When I restart the slaves everything works well: short latency and fast updates. But after a few minutes the replication stops working. If I update a row or do an insert or delete, the slave does not pick up the changes.
The logs don't show any errors, but the master log position differs between master and slave.
If I restart the slaves everything works again: the database is updated and replication works well.
I don't know what is happening; it seems the threads sleep or die.
Thanks for any ideas to fix the problem.
In both cases the processes seem OK.
SERVER1
Kill 168 system user None Connect 1146 Waiting for master to send event ---
Kill 169 system user None Connect 945 Slave has read all relay log; waiting for the slave I/O thread to update it ---
Kill 170 master XXXXXXX:59273 None Binlog Dump 1145 Master has sent all binlog to slave; waiting for binlog to be updated ---
SERVER2
Kill 73 root XXXXXX:55089 None Binlog Dump 1137 Master has sent all binlog to slave; waiting for binlog to be updated ---
Kill 76 system user None Connect 1137 Waiting for master to send event ---
Kill 77 system user None Connect 985 Slave has read all relay log; waiting for the slave I/O thread to update it ---
The problem is latency.
My solution: create a cron job that runs every minute to stop and start the slave.
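A sketch of such a crontab entry (the client path and credentials are placeholders):
* * * * * /usr/bin/mysql -u root -pSECRET -e 'STOP SLAVE; START SLAVE;' >/dev/null 2>&1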
Now all works.
Cristian
Run SHOW SLAVE STATUS;
on each server. That is likely to tell you what is wrong.
You do understand the potential problems with AUTO_INCREMENT and UNIQUE keys when you are writing to both heads of a dual-Master topology?
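If you do write to both masters, a common precaution is to interleave AUTO_INCREMENT values in my.cnf (a sketch assuming exactly two masters):
# Server 1
auto_increment_increment = 2
auto_increment_offset    = 1
# Server 2
auto_increment_increment = 2
auto_increment_offset    = 2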
A quick question about MySQL master-slave-slave setups:
I currently have a master-slave setup and I would like to add another slave. Would it be possible to clone the server running the slave, spin up a new server from that image, and have it pick up right where it left off? So whatever the binlog position was at the time of the copy, it would just run from there until it catches up with the master?
Ideally, I'm trying to start another slave that connects to the master without shutting down the master for a backup. Any advice or guidance would be great. Thanks!
Yes, you can shut down the slave instance and copy all of its data to another slave (including logs).
Don't forget to edit my.cnf on the second slave (you should change server-id).
Then start both slave servers.
Yes, this is possible. The best way would probably be to temporarily pause replication on the slave, determine the master binary log position information, then make your dump from the replica while replication is still paused (and no other data is changing on the replica). After the dump is complete you can restart the replica.
On the new server, just load the dump, set the binlog coordinates and start up replication. A word of caution, though: make sure your settings for purging the binary logs on the master will retain the binary logs long enough for you to complete this setup and get the new slave caught up before they are purged.
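A sketch of those steps against the existing slave and the new one (host names, credentials and the noted coordinates are placeholders):
slave> STOP SLAVE;
slave> SHOW SLAVE STATUS\G    -- note Relay_Master_Log_File and Exec_Master_Log_Pos
shell> mysqldump --all-databases --single-transaction --routines > slave_dump.sql
slave> START SLAVE;
newslave> SOURCE slave_dump.sql
newslave> CHANGE MASTER TO MASTER_HOST='<master ip>', MASTER_USER='repl', MASTER_PASSWORD='<password>', MASTER_LOG_FILE='<Relay_Master_Log_File>', MASTER_LOG_POS=<Exec_Master_Log_Pos>;
newslave> START SLAVE;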
Here's a good tutorial on how to set up multiple replication slaves for a master server:
http://arcib.dowling.edu/cgi-bin/info2html?%28mysql%29replication-howto
It doesn't explain your scenario, but gives important hints: you must assign a unique server-id to your second slave.
Regarding your problem: if your master's binary logs are kept long enough, you should not get into trouble. Just shut down your slave for a moment, clone it, and write down the slave's MASTER_LOG_FILE and MASTER_LOG_POS; then restart the original slave and set up the second slave correctly, i.e. with that MASTER_LOG_FILE and MASTER_LOG_POS set and a unique server-id in my.cnf.
Then start up your second slave. Use "START SLAVE" to start the replication and then have a look at "SHOW SLAVE STATUS;"
Regards,
Stefan
PS: I cannot promise this will work, but I'm quite sure it should.
You can use an existing MySQL slave to create a new one; just do the following steps:
Stop replication on the existing slave.
Execute SHOW SLAVE STATUS; and note these values: Master_Log_File: master-bin.000002 and
Read_Master_Log_Pos: 1307
Take a mysqldump and restore it on the new MySQL slave server; you can copy the my.cnf file from the existing slave and just change server-id.
Execute the CHANGE MASTER TO command on the new slave, providing the details of the MySQL master server and the log file name and log position obtained from the existing slave.
Execute START SLAVE; on the existing slave (and on the new one).
To verify the slave status, run SHOW SLAVE STATUS.
That's it, you have a new MySQL slave server!
Good luck!
I am trying to set up master-slave replication for MySQL. When I type the following commands:
CHANGE MASTER TO MASTER_HOST='10.1.100.1', MASTER_USER='slave_user', MASTER_PASSWORD='slave_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=451228;
mysql> START SLAVE;
it throws the following error:
ERROR 1201 (HY000): Could not initialize master info structure; more error messages can be found in the MySQL error log
Any help would be greatly appreciated.
Try resetting it, it does magic! On the slave, at the MySQL command line, type:
RESET SLAVE;
Then try again:
CHANGE MASTER TO MASTER_HOST='10.1.100.1', MASTER_USER='slave_user', MASTER_PASSWORD='slave_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=451228;
mysql> START SLAVE;
Please check several things:
1) Make sure the Master's /etc/my.cnf has server_id actually set
Here is why: Replication relies on the server_id. Whenever a query is executed and recorded in the master's binary log, the master's server_id is recorded with it. By default, if a server_id is not defined in /etc/my.cnf, it defaults to 1. However, the rules of MySQL Replication demand that a server_id be explicitly defined in the master's /etc/my.cnf. In addition, for any given slave, mysqld checks the server_id of the SQL statement as it reads it from the relay log and makes sure it is different from the slave's own server_id. That is how MySQL Replication knows it is safe to execute that SQL statement. This rule is necessary in case circular (Master-Master / Multi-Master) replication is implemented.
Use SELECT @@server_id; at the SQL command line to check what is actually configured on the server.
2) Make sure the Slave's /etc/my.cnf has server_id actually set
Here is why: Same reason as in #1
3) Make sure the server_id in the Master's /etc/my.cnf is different from the server_id in the Slave's /etc/my.cnf
Here is why: Same reason as in #1
As a side note: if you set up multiple slaves, please make sure each slave has a server_id different from its master and its sibling slaves.
Here is why, by example:
A master with 2 slaves
MASTER has server_id 1
SLAVE1 has server_id 2
SLAVE2 has server_id 2
Replication will become aggressively sluggish on SLAVE2 because a sibling slave has the same server_id. In fact, it will steadily fall behind, catch a break and process a few SQL statements, then fall behind again. This is the master's fault for having one or more slaves with identical server_ids. This is a gotcha that is not really documented anywhere.
I've seen this dozens of times in my life time.
I had something very close to that and got the same error messages.
Replication ran fine, then after a mariadb restart: "cannot open relay log".
The solution from Neo helped at first.
But the root cause, it seems, was too-small open file limits.
Try lsof | wc -l and increase DefaultLimitNOFILE to 65535 in /etc/systemd/system.conf and /etc/systemd/user.conf (see the sketch below).
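A sketch of that change (the service name and exact limit are assumptions; adjust to your setup):
# count files currently open by mysqld
lsof -p $(pidof mysqld) | wc -l
# add to /etc/systemd/system.conf and /etc/systemd/user.conf
DefaultLimitNOFILE=65535
# reload systemd and restart the database service
systemctl daemon-reexec
systemctl restart mariadb    # or mysql, depending on the service name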
If nothing else helps and you are convinced everything is set correctly, you will have to remove this file:
/var/lib/mysql/<relay_logname>-<connection>.info
After that, perform the CHANGE MASTER command as stated above.