In my MySQL 5.6 environment, I have an A -> B -> C replication setup. On the C slave, no binlog has been generated since I set it up as a slave, yet replication is running according to SHOW SLAVE STATUS. Slave_SQL_Running_State shows "Slave has read all relay log; waiting for the slave I/O thread to update it". SHOW MASTER STATUS on C shows it is on "mysql-bin.000003" at position 120, and that file is dated Nov 29, not today. I have checked C's my.cnf, and the binlog is configured:
log-bin=/mysql-binlog/mysql-bin
binlog_format=mixed
max_binlog_size = 200M
Relay logs are being generated on the C slave and are kept current.
Why is the C slave writing nothing to its binlog?
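A likely explanation, based on the log-slave-updates variable discussed further down this page: a slave writes replicated transactions into its own binary log only when log-slave-updates is enabled (in addition to log-bin). With it off, only writes executed directly on C are logged, which matches a binlog that has not moved since setup. A minimal my.cnf sketch for C, reusing the path from the question:

log-bin=/mysql-binlog/mysql-bin
log-slave-updates

Restart mysqld on C afterwards; replicated changes should then appear in C's binlog.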
I want to create a replica of my Percona Server with GTID enabled, but I got this error in SHOW SLAVE STATUS:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
Normally, I would stop my slave, reset it, reset the master (on the slave), and get a new GTID_PURGED value from the master. But this time around, the master has very unusual values and I am not sure how to determine which one to use:
mysql> show master status\G
*************************** 1. row ***************************
File: mysqld-bin.000283
Position: 316137263
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546512667
1 row in set (0.00 sec)
From the slave with the new backup copy, I get this:
root@ubuntu:/var/lib/mysql# cat xtrabackup_binlog_info
mysqld-bin.000283 294922064 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960
One more thing: I purged the binary logs on the master just before I made the backup, and automatic binlog purging is set to 7 days. So I know the problem is not that the binlog has been purged, as the error suggests.
I am running Ubuntu 14.04 and Percona Server 5.6.31-77.
How can I resolve this issue? What is the correct value for the master's GTID_PURGED?
MySQL 5.6 GTID replication errors and fixes
What is GTID? 
A GTID has the form SERVER_UUID:transaction_id, e.g. 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1234.
The first part is the server's 128-bit identification number (SERVER_UUID). It identifies where the transaction originated; every server has its own SERVER_UUID.
What problems does GTID solve?
A transaction can be identified uniquely across all the replication servers. This makes automating the failover process much easier: there is no need to do calculations or inspect the binary log; just use MASTER_AUTO_POSITION=1.
At the application level, it is easier to do a WRITE/READ split. After a write on the MASTER you have a GTID, so just check whether that GTID has been executed on the SLAVE you use for reads (a sketch follows below).
Development of new automation tools is no longer a pain.
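A minimal sketch of that read-after-write check, run on the read slave (the GTID value is illustrative):

-- Returns 1 once the write's GTID is contained in the slave's executed set
SELECT GTID_SUBSET('4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1234', @@GLOBAL.GTID_EXECUTED);
-- Or block for up to 1 second until the slave SQL thread has applied it (MySQL 5.6)
SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1234', 1);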
How can I implement it?
Four variables are needed on ALL servers of the replication chain (a sample my.cnf sketch follows the reference link below):
gtid_mode: It can be ON or OFF (not 1 or 0). It enables GTIDs on the server.
log_bin: Enable binary logs. Mandatory to create a replication environment.
log-slave-updates: Slave servers must log the changes that come from the master in its own binary log.
enforce-gtid-consistency: Statements that can't be logged in a transactionally safe manner are denied by the server.
ref: http://dev.mysql.com/doc/refman/5.6/en/replication-gtids-howto.html
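A minimal my.cnf sketch covering these four variables on a MySQL 5.6 server (the server-id value is illustrative and must be unique per server):

[mysqld]
server-id                = 2
log-bin                  = mysql-bin
log-slave-updates
gtid-mode                = ON
enforce-gtid-consistency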
Replication errors and fixes:
"'Got fatal error 1236 from master when reading data from binary log: "The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires." slave_io thread stop running.
Resolution: Assume the following are the master and slave UUIDs:
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
Steps:
slave> STOP SLAVE;
slave> FLUSH TABLES WITH READ LOCK;
slave> SHOW MASTER STATUS;
'4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-13030:13032-13317:13322-13325:13328-653183:653185-654126:654128-1400817:1400820-3423394:3423401-5779965'
(Here 83345127 is the last transaction executed under the master's UUID, and 5779965 is the last slave transaction executed on the master.)
slave> RESET MASTER;
slave> SET GLOBAL GTID_PURGED='4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-5779965';
slave> START SLAVE;
slave> UNLOCK TABLES;
slave> SHOW SLAVE STATUS;
NOTE: After this, restart any chained slaves further down the topology if they stop replicating.
ERROR: 'Error "Table … 'doesn"t exist" on query. Default database: …Query: "INSERT INTO OR Last_SQL_Error: ….Error 'Duplicate entry' SKIP Transaction on slave (slave_sql Thread stop running) NOTE:
SQL_SLAVE_SKIP_COUNTER doesn't work anymore with GTID.
We need to find what transaction is causing the replication to fail.
– From the binary log
– From SHOW SLAVE STATUS (retrieved vs. executed GTID sets)
Types of errors: (check Last_SQL_Error in SHOW SLAVE STATUS)
Resolution: Considering following are the master – slave UUID's
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
slave>show slave status;
copy the 'Executed_Gtid_Set' value. '4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-659731804,5b37def1-6189-11e3-bee0-e89a8f22a444:1-70734947-80436012:80436021-80437839'
It seems that slave transaction '80437840' (under slave UUID 5b37def1-6189-11e3-bee0-e89a8f22a444) is causing the problem here.
slave> STOP SLAVE;
slave> SET GTID_NEXT="5b37def1-6189-11e3-bee0-e89a8f22a444:80437840"; (last slave GTID executed on the master + 1)
slave> BEGIN; COMMIT; (commits an empty transaction under that GTID, so the slave skips the failing event)
slave> SET GTID_NEXT="AUTOMATIC";
slave> START SLAVE;
slave> show slave status;
and it's ALL SET !!!
# If you used xtrabackup to take the backup on the master instance:
cat xtrabackup_info | grep binlog_pos
# Using the information you gave as an example:
binlog_pos = filename 'mysqld-bin.000283', position '294922064', GTID of the last change '1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960'
# Copy the 'GTID of the last change' value into gtid_purged:
slave> STOP SLAVE;
slave> RESET MASTER;
slave> SET GLOBAL gtid_purged='1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960';
slave> CHANGE MASTER TO MASTER_HOST='<master ip>', MASTER_PORT=<master port>, MASTER_USER='<replication user>', MASTER_PASSWORD='<replication password>', MASTER_AUTO_POSITION=1;
slave> START SLAVE;
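Before starting the slave, a quick sanity check you can run on the master: auto-positioning can only succeed if everything the master has purged is already contained in the backup's GTID set, i.e. the following must return 1 (the set is the one from xtrabackup_binlog_info above):

master> SELECT GTID_SUBSET(@@GLOBAL.GTID_PURGED, '1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960');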
Master DB: MySQL on Windows Server 2012
Slave DB: MySQL on Windows 7 (XAMPP)
DB size: 500 MB
Table count: 42
I had set up replication successfully, but it stopped last week and my slave showed Slave_SQL_Running: No. I realised it was reading from an incorrect log file (00004, whereas it should have been 00006).
I have since sorted this by:
At the MASTER;
SHOW MASTER STATUS;
Copied the values of MASTER_LOG_FILE and MASTER_LOG_POS.
At the SLAVE;
STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98; (<- example values)
START SLAVE;
SHOW SLAVE STATUS\G
On my master I tested replication by editing the members table - I changed one row value (from 85 to 86), and this successfully replicated to my slave. However, I noticed that the members table on my master has 70652 rows while my slave has only 70056.
I added two new members to the master's members table and the total increased by 2 in both tables. But those ~600 rows still seem to be missing.
What could be the problem? Replication seems to be working, but the totals don't match. New members are added to the members table each day, but they aren't being added to my slave's members table.
The results of my slave status table (from phpmyadmin) are;
Slave_IO_State Waiting for master to send event
Master_Host xxx.xxx.xxx.xxx
Master_User repl
Master_Port 3306
Connect_Retry 60
Master_Log_File mysql-bin.000006
Read_Master_Log_Pos 787956776
Relay_Log_File mysql-relay-bin.000004
Relay_Log_Pos 624412
Relay_Master_Log_File mysql-bin.000006
Slave_IO_Running Yes
Slave_SQL_Running Yes
Replicate_Do_DB
Replicate_Ignore_DB
Replicate_Do_Table
Replicate_Ignore_Table
Replicate_Wild_Do_Table
Replicate_Wild_Ignore_Table
Last_Errno 0
Last_Error
Skip_Counter 0
Exec_Master_Log_Pos 787956776
Relay_Log_Space 788197
Until_Condition None
Until_Log_File
Until_Log_Pos 0
Master_SSL_Allowed No
Master_SSL_CA_File
Master_SSL_CA_Path
Master_SSL_Cert
Master_SSL_Cipher
Master_SSL_Key
Seconds_Behind_Master 0
Is there something else that I could check or test?
Yes - while replication was stopped, some rows were changed or inserted on the master. You then RESET the slave and set MASTER_LOG_POS to the master's current position, so replication can never pick up those older changes.
You have 2 options:
First:
Stop replication
Dump the master DB (with the master position)
Restore it on the slave
Set the position (or verify it)
Start the slave
Second (a pt-table-sync sketch follows this list):
Stop the slave
Sync the master DB to the slave DB with Percona Toolkit's pt-table-sync
Start the slave
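A sketch of the pt-table-sync route, run from any host that can reach the slave (the host, database, and table names are illustrative):

# Dry run: print the statements pt-table-sync would execute against the slave
pt-table-sync --print --sync-to-master h=slave_host,D=your_db,t=members
# Apply them once the output looks sane
pt-table-sync --execute --sync-to-master h=slave_host,D=your_db,t=members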
I'm experiencing some trouble setting up MySQL replication between a master and a slave.
I did the setup successfully, but data doesn't update.
Master : show master status;
[File]: mysql-bin.000033
[Position]: 1757196
[Binlog_Do_DB]: ciel
Master : show processlist;
[User]: slave
[Host]: 92.222.177.xxx:57578 ( right slave ip )
[db]:
[Command]: Binlog Dump
[Time]: 1231
[State]: Has sent all binlog to slave; waiting for binlog to be updated
Slave : show slave status;
[Slave_IO_State]: Waiting for master to send event
[Master_Host]: 46.105.122.xxx
[Master_User]: slave
[Master_Port]: 3306
[Connect_Retry]: 60
[Master_Log_File]: mysql-bin.000033
[Read_Master_Log_Pos]: 1757196
[Relay_Log_File]: mysqld-relay-bin.000006
[Relay_Log_Pos]: 252
[Relay_Master_Log_File]: mysql-bin.000033
[Slave_IO_Running]: Yes
[Slave_SQL_Running]: Yes
[Replicate_Do_DB]: ciel
[Exec_Master_Log_Pos]: 1757196
[Relay_Log_Space]: 409
[Until_Condition]: None
[Master_SSL_Allowed]: No
[Master_SSL_Verify_Server_Cert]: No
[Master_Server_Id]: 1
Slave : show processlist;
[User]: system user
[Host]:
[db]:
[Command]: Connect
[Time]: 1231
[State]: Waiting for master to send event
[Info]:
[Id]: 2
[User]: system user
[Host]:
[db]:
[Command]: Connect
[Time]: 1231
[State]: Slave has read all relay log; waiting for the slave I/O thread to update it
Then, selecting the same row on the master and the slave:
master: lastmod: 2014-10-26 17:14:55
slave: lastmod: 2014-10-26 15:45:45
I'm feeling lost; after 8 hours I still haven't found how to set this up correctly.
Mysql Server1 is running as MASTER.
Mysql Server2 is running as SLAVE.
Now DB replication is happening from MASTER to SLAVE.
Server2 was removed from the network and reconnected after one day. Since then, the databases on the master and the slave have been out of sync.
How do I re-sync them? Restoring a dump taken from the master onto the slave doesn't solve the problem on its own.
This is the full step-by-step procedure to resync a master-slave replication from scratch:
At the master:
RESET MASTER;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
And copy the values of the result of the last command somewhere.
Without closing the connection to the client (because it would release the read lock) issue the command to get a dump of the master:
mysqldump -u root -p --all-databases > /a/path/mysqldump.sql
Now you can release the lock, even if the dump hasn't ended yet. To do it, perform the following command in the MySQL client:
UNLOCK TABLES;
Now copy the dump file to the slave using scp or your preferred tool.
At the slave:
Open a connection to mysql and type:
STOP SLAVE;
Load master's data dump with this console command:
mysql -uroot -p < mysqldump.sql
Sync slave and master logs:
RESET SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
Where the values of the above fields are the ones you copied before.
Finally, type:
START SLAVE;
To check that everything is working again, after typing:
SHOW SLAVE STATUS;
you should see:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
That's it!
The documentation for this at the MySQL site is woefully out of date and riddled with foot-guns (such as interactive_timeout). Issuing FLUSH TABLES WITH READ LOCK as part of your export of the master generally only makes sense when coordinated with a storage/filesystem snapshot such as LVM or zfs.
If you are going to use mysqldump, you should rely instead on the --master-data option to guard against human error and release the locks on the master as quickly as possible.
Assume the master is 192.168.100.50 and the slave is 192.168.100.51. Each server has a distinct server-id configured, the master has binary logging on, and the slave has read-only=1 in my.cnf.
To stage the slave to be able to start replication just after importing the dump, issue a CHANGE MASTER command but omit the log file name and position:
slaveserver> CHANGE MASTER TO MASTER_HOST='192.168.100.50', MASTER_USER='replica', MASTER_PASSWORD='asdmk3qwdq1';
Issue the GRANT on the master for the slave to use:
masterserver> GRANT REPLICATION SLAVE ON *.* TO 'replica'@'192.168.100.51' IDENTIFIED BY 'asdmk3qwdq1';
Export the master (in screen) using compression and automatically capturing the correct binary log coordinates:
mysqldump --master-data --all-databases --flush-privileges | gzip -1 > replication.sql.gz
Copy the replication.sql.gz file to the slave and then import it with zcat to the instance of MySQL running on the slave:
zcat replication.sql.gz | mysql
Start replication by issuing the command to the slave:
slaveserver> START SLAVE;
Optionally update the /root/.my.cnf on the slave to store the same root password as the master.
If you are on 5.1+, it is best to first set the master's binlog_format to MIXED or ROW. Beware that row logged events are slow for tables which lack a primary key. This is usually better than the alternative (and default) configuration of binlog_format=statement (on master), since it is less likely to produce the wrong data on the slave.
If you must (but probably shouldn't) filter replication, do so with slave options replicate-wild-do-table=dbname.% or replicate-wild-ignore-table=badDB.% and use only binlog_format=row
This process will hold a global lock on the master for the duration of the mysqldump command but will not otherwise impact the master.
If you are tempted to use mysqldump --master-data --all-databases --single-transaction (because you are only using InnoDB tables), you are perhaps better served using MySQL Enterprise Backup or the open-source implementation called xtrabackup (courtesy of Percona), sketched below.
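For reference, a rough sketch of taking such a backup with Percona XtraBackup 2.x's innobackupex wrapper (the target directory and TIMESTAMP placeholder are illustrative):

innobackupex /backups/                           # writes a timestamped backup directory
innobackupex --apply-log /backups/TIMESTAMP/     # prepare it for restore
# xtrabackup_binlog_info inside the backup directory holds the binlog coordinates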
Unless you are writing directly to the slave (Server2) the only problem should be that Server2 is missing any updates that have happened since it was disconnected. Simply restarting the slave with "START SLAVE;" should get everything back up to speed.
I am very late to this question, however I did encounter this problem and, after much searching, I found this information from Bryan Kennedy: http://plusbryan.com/mysql-replication-without-downtime
On Master take a backup like this:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A > ~/dump.sql
Now, examine the head of the file and jot down the values for MASTER_LOG_FILE and MASTER_LOG_POS. You will need them later:
head dump.sql -n80 | grep "MASTER_LOG"
Copy the "dump.sql" file over to Slave and restore it:
mysql -u mysql-user -p < ~/dump.sql
Connect to Slave mysql and run a command like this:
CHANGE MASTER TO MASTER_HOST='master-server-ip', MASTER_USER='replication-user', MASTER_PASSWORD='replication-user-password', MASTER_LOG_FILE='value from above', MASTER_LOG_POS=value from above; START SLAVE;
To check the progress of Slave:
SHOW SLAVE STATUS;
If all is well, Last_Error will be blank, and Slave_IO_State will report “Waiting for master to send event”.
Look for Seconds_Behind_Master which indicates how far behind it is.
YMMV. :)
I think the Maatkit utilities can help you! You can use mk-table-sync. Please see this link: http://www.maatkit.org/doc/mk-table-sync.html
Here is what I typically do when a mysql slave gets out of sync. I have looked at mk-table-sync but thought the Risks section was scary looking.
On Master:
SHOW MASTER STATUS
The outputted columns (File, Position) will be of use to us in a bit.
On Slave:
STOP SLAVE
Then dump the master db and import it to the slave db.
Then run the following:
CHANGE MASTER TO
MASTER_LOG_FILE='[File]',
MASTER_LOG_POS=[Position];
START SLAVE;
Where [File] and [Position] are the values outputted from the "SHOW MASTER STATUS" ran above.
Hope this helps!
Following up on David's answer...
Using SHOW SLAVE STATUS\G will give human-readable output.
Master:
mysqldump -u root -p --all-databases --master-data | gzip > /tmp/dump.sql.gz
scp master:/tmp/dump.sql.gz slave:/tmp/   # move the dump file to the slave server
Slave:
STOP SLAVE;
zcat /tmp/dump.sql.gz | mysql -u root -p
START SLAVE;
SHOW SLAVE STATUS;
NOTE:
On master you can run SET GLOBAL expire_logs_days = 3 to keep binlogs for 3 days in case of slave issues.
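To make that setting survive a restart, it can also go in my.cnf (a sketch):

[mysqld]
expire_logs_days = 3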
Here is a complete answer that will hopefully help others...
I wanted to set up MySQL replication using a master and a slave, and since the only thing I knew was that it uses log files to synchronize, I figured that if the slave goes offline and gets out of sync, in theory it should only need to connect back to its master and keep reading the log file from where it left off, as user malonso mentioned.
So here are the test results after configuring the master and slave as described in http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html ...
Provided you use the recommended master/slave configuration and don't write to the slave, he and I were right (as far as mysql-server 5.x is concerned). I didn't even need to use "START SLAVE;"; the slave just caught up with its master. But by default the slave retries roughly every 60 seconds, up to about 86,400 times (the default master-retry-count), so if you exhaust that you might have to start or restart the slave. Anyway, for those like me who wanted to know whether a slave going offline and coming back up requires manual intervention: no, it doesn't.
Maybe the original poster had corruption in the log file(s)? But most probably not from a server just going offline for a day.
Pulled from /usr/share/doc/mysql-server-5.1/README.Debian.gz, which probably makes sense for non-Debian servers as well:
* FURTHER NOTES ON REPLICATION
===============================
If the MySQL server is acting as a replication slave, you should not
set --tmpdir to point to a directory on a memory-based filesystem or to
a directory that is cleared when the server host restarts. A replication
slave needs some of its temporary files to survive a machine restart so
that it can replicate temporary tables or LOAD DATA INFILE operations. If
files in the temporary file directory are lost when the server restarts,
replication fails.
You can run SHOW VARIABLES LIKE 'tmpdir'; to find out.
Adding to the popular answer to include this error:
"ERROR 1200 (HY000): The server is not configured as slave; fix in config file or with CHANGE MASTER TO",
Setting up replication from the slave in one shot:
In one terminal window:
mysql -h <Master_IP_Address> -uroot -p
After connecting,
RESET MASTER;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
The status appears as below (note that the position number varies!):
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 | 98 | your_DB | |
+------------------+----------+--------------+------------------+
Export the dump as he described, "using another terminal"!
Exit and connect to your own DB (which is the slave):
mysql -u root -p
Then type the commands below:
STOP SLAVE;
Import the Dump as mentioned (in another terminal, of course!) and type the below commands:
RESET SLAVE;
CHANGE MASTER TO
MASTER_HOST = 'Master_IP_Address',
MASTER_USER = 'your_Master_user', -- usually the "root" user
MASTER_PASSWORD = 'Your_MasterDB_Password',
MASTER_PORT = 3306,
MASTER_LOG_FILE = 'mysql-bin.000001',
MASTER_LOG_POS = 98; -- in this case
Once logged in, set the server_id parameter (usually, for new / non-replicated DBs, this is not set by default):
set global server_id=4000;
Now, start the slave.
START SLAVE;
SHOW SLAVE STATUS\G
The output should be the same as he described.
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Note: once replicated, the master and slave share the same root password, since the mysql system database was imported along with everything else!
Rebuilding the slave using LVM
Here is the method we use to rebuild MySQL slaves using Linux LVM. This guarantees a consistent snapshot while requiring very minimal downtime on your master.
Set innodb_max_dirty_pages_pct to zero on the master MySQL server. This forces MySQL to write all dirty pages to disk, which significantly speeds up the restart.
set global innodb_max_dirty_pages_pct = 0;
To monitor the number of dirty pages run the command
mysqladmin ext -i10 | grep dirty
Once the number stops decreasing, you have reached the point where you can continue. Next, reset the master to clear the old binlogs/relay logs:
RESET MASTER;
Execute lvdisplay to get LV Path
lvdisplay
Output will look like this
--- Logical volume ---
LV Path /dev/vg_mysql/lv_data
LV Name lv_data
VG Name vg_mysql
Shutdown the master database with command
service mysql stop
Next take a snapshot; mysql_snapshot will be the new logical volume name. If the binlogs are kept on the OS drive, they need to be snapshotted as well.
lvcreate --size 10G --snapshot --name mysql_snapshot /dev/vg_mysql/lv_data
Start master again with command
service mysql start
Restore dirty pages setting to the default
set global innodb_max_dirty_pages_pct = 75;
Run lvdisplay again to make sure the snapshot is there and visible
lvdisplay
Output:
--- Logical volume ---
LV Path /dev/vg_mysql/mysql_snapshot
LV Name mysql_snapshot
VG Name vg_mysql
Mount the snapshot
mkdir /mnt/mysql_snapshot
mount /dev/vg_mysql/mysql_snapshot /mnt/mysql_snapshot
If you have an existing MySQL slave running you need to stop it
service mysql stop
Next you need to clear the MySQL data folder
cd /var/lib/mysql
rm -fr *
Back on the master, rsync the snapshot to the MySQL slave
rsync --progress -harz /mnt/mysql_snapshot/ targethostname:/var/lib/mysql/
Once rsync has completed you may unmount and remove the snapshot
umount /mnt/mysql_snapshot
lvremove -f /dev/vg_mysql/mysql_snapshot
Create a replication user on the master if the old replication user doesn't exist or the password is unknown:
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'[SLAVE IP]' IDENTIFIED BY 'YourPass';
Verify that the /var/lib/mysql data files are owned by the mysql user; if so, you can omit the following command:
chown -R mysql:mysql /var/lib/mysql
Next record the binlog position
ls -laF | grep mysql-bin
You will see something like
..
-rw-rw---- 1 mysql mysql 1073750329 Aug 28 03:33 mysql-bin.000017
-rw-rw---- 1 mysql mysql 1073741932 Aug 28 08:32 mysql-bin.000018
-rw-rw---- 1 mysql mysql 963333441 Aug 28 15:37 mysql-bin.000019
-rw-rw---- 1 mysql mysql 65657162 Aug 28 16:44 mysql-bin.000020
Here the master log file is the highest file number in the sequence, and the binlog position is that file's size. Record these values:
master_log_file=mysql-bin.000020
master_log_pos=65657162
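A shell sketch of grabbing those two values automatically (the datadir path is illustrative):

f=$(ls /var/lib/mysql/mysql-bin.[0-9]* | sort | tail -1)   # highest-numbered binlog
stat -c '%n %s' "$f"                                       # name and size (= position)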
Next start the slave MySQL
service mysql start
Execute the CHANGE MASTER command on the slave:
CHANGE MASTER TO
master_host="10.0.0.12",
master_user="replication",
master_password="YourPass",
master_log_file="mysql-bin.000020",
master_log_pos=65657162;
Finally start the slave
START SLAVE;
Check slave status:
SHOW SLAVE STATUS;
Make sure Slave IO is running and there are no connection errors. Good luck!
I recently wrote this up on my blog, which is found here; there are a few more details there, but the story is the same.
http://www.juhavehnia.com/2015/05/rebuilding-mysql-slave-using-linux-lvm.html
I created a GitHub repo with a script to solve this problem quickly. Just change a couple of variables and run it (first, the script creates a backup of your database).
I hope this helps you (and other people too).
How to Reset (Re-Sync) MySQL Master-Slave Replication
Sometimes you just need to give the slave a kick, too.
try
stop slave;
reset slave;
start slave;
show slave status;
Quite often slaves just get stuck, guys :)
We are using MySQL's master-master replication technique, and if one MySQL server (say server 1) is removed from the network, it reconnects itself once the connection is restored, and all the records that were committed on server 2 (which stayed on the network) are transferred to server 1.
The slave thread in MySQL retries connecting to its master every 60 seconds by default. This can be changed with the MASTER_CONNECT_RETRY option, e.g. MASTER_CONNECT_RETRY=5, where 5 is in seconds, meaning we want a retry every 5 seconds (see the sketch below).
But you need to make sure that the server which lost the connection does not make any commits to the database while disconnected, or you will get a duplicate-key error (error code 1062).
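A minimal sketch of changing that retry interval (run on the reconnecting server; the slave must be stopped first):

STOP SLAVE;
CHANGE MASTER TO MASTER_CONNECT_RETRY = 5;  -- retry every 5 seconds instead of 60
START SLAVE;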