I am using
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
to run MySQL replication; the MySQL master and slave were running fine.
Last week I changed the replicas of the MySQL StatefulSet to 1 to stop the MySQL slave. Now I want to start the slave again, but when I change the StatefulSet replicas back to 2, I get
'Could not find first log file name in binary log index file'.
So I wanted to do a fresh clone from the master, so I deleted the PV on the slave and recreated the slave pod. It created a new PV with no data, and xtrabackup cloned the files from the MySQL master.
But when starting, I am getting the error below:
[ERROR] InnoDB: Page [page id: space=5868, page number=1181] log sequence number 4431188670431 is in the future! Current system log sequence number 4431180574947.
Even though the StatefulSet runs
"xtrabackup --prepare /var/lib/mysql"
the slave throws this error and fails to start. Any ideas?
Related
The master and slave are MySQL 5.5.60 with binlog_format=MIXED. It has now been found that the slave loses some data, but the error log is empty and SHOW SLAVE STATUS looks normal.
Parsing the master binlog and the slave relay log, I find that the lost data is still there, but the sql_thread does not apply these events. These events are in row-based format. What is causing this?
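For reference, a minimal sketch of how such row events can be inspected and compared with mysqlbinlog; the log file names are placeholders for your own files:
# Decode the row-based events in the master binlog
mysqlbinlog --verbose --base64-output=DECODE-ROWS mysql-bin.000007 > master_events.sql
# Do the same for the slave relay log, then diff the two outputs around the missing rows
mysqlbinlog --verbose --base64-output=DECODE-ROWS relay-bin.000002 > relay_events.sql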
I had MySQL master-slave replication configured. I accidentally ran RESET SLAVE on the slave instance. I do not have a note of the last binlog position of the master that the slave had completed. The SHOW SLAVE STATUS command returns an empty result as I have reset the slave.
Is there any way in which I can recover the last binlog position that the slave had finished syncing? Or is there any other way in which I can fix the replication without setting it up fresh?
I want to create a replica of my Percona Server with GTID enabled, but I got this error when I ran SHOW SLAVE STATUS:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
Normally, I would stop my slave, reset it, reset the master (on the slave), and get a new GTID_PURGED value from the master. But this time around, the master has a very unusual value(s) and I am not sure how to determine which one to use:
mysql> show master status\G
*************************** 1. row ***************************
File: mysqld-bin.000283
Position: 316137263
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546512667
1 row in set (0.00 sec)
From the slave with the new backup copy, I get this:
root@ubuntu:/var/lib/mysql# cat xtrabackup_binlog_info
mysqld-bin.000283 294922064 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960
One more thing: I just purged the binary logs on the master before I made the backup, and automatic binlog purging is set to 7 days. So I know it's not because the binlog has been purged, as the error suggests.
I am running Ubuntu 14.04 and Percona Server 5.6.31-77.
How can I resolve this issue? What is the correct value for the master's GTID_PURGED?
MySQL 5.6 GTID replication errors and fixes
What is GTID?
4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
This is the server's 128-bit identification number (SERVER_UUID). It identifies where the transaction originated; every server has its own SERVER_UUID. A GTID is this UUID followed by a transaction sequence number, e.g. 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:23.
What problems does GTID solve?
It is possible to identify a transaction uniquely across the replication servers. It makes automating the failover process much easier: there is no need to do calculations or inspect the binary log; just use MASTER_AUTO_POSITION=1.
At the application level, it is easier to do a WRITE/READ split: after a write on the MASTER you have a GTID, so just check whether that GTID has been executed on the SLAVE that you use for reads (see the sketch below).
Development of new automation tools isn't a pain now.
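A minimal sketch of that read-after-write check, assuming MySQL 5.6 or later; the GTID value is only a placeholder:
master> SELECT @@GLOBAL.GTID_EXECUTED;   -- right after the write, note the GTID of the transaction
slave> SELECT GTID_SUBSET('4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:83345127', @@GLOBAL.GTID_EXECUTED);   -- 1 = already applied on this slave, 0 = not yet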
How can I implement it?
The following variables are needed on ALL servers of the replication chain (a minimal my.cnf sketch follows the reference link below):
gtid_mode: It can be ON or OFF (not 1 or 0). It enables GTIDs on the server.
log_bin: Enables binary logging. Mandatory to create a replication environment.
log-slave-updates: Slave servers must log the changes that come from the master in their own binary log.
enforce-gtid-consistency: Statements that can't be logged in a transactionally safe manner are rejected by the server.
ref: http://dev.mysql.com/doc/refman/5.6/en/replication-gtids-howto.html
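A minimal my.cnf sketch of those settings, assuming MySQL 5.6; the server-id value is arbitrary and must be unique per server:
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
log-slave-updates        = 1
gtid-mode                = ON
enforce-gtid-consistency = 1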
Replication errors and fixes:
"'Got fatal error 1236 from master when reading data from binary log: "The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires." slave_io thread stop running.
Resolution: Considering following are the master – slave UUID's
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
Steps:
slave>stop slave;
slave> FLUSH TABLES WITH READ LOCK;
slave>show master status;
'4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-13030:13032-13317:13322-13325:13328-653183:653185-654126:654128-1400817:1400820-3423394:3423401-5779965'
(Here 83345127 is the last transaction executed for the master's UUID, and 5779965 is the last of the slave's transactions that was executed on the master.)
slave> reset master;
slave>set global GTID_PURGED='4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-5779965';
slave>start slave;
slave>unlock tables;
slave>show slave status;
NOTE: After this, restart the other chained slaves if they stop replicating.
ERROR: Error 'Table … doesn't exist' on query. Default database: … Query: 'INSERT INTO …' OR Last_SQL_Error: … Error 'Duplicate entry'. In these cases you need to SKIP the offending transaction on the slave (the slave_sql thread stops running).
NOTE: SQL_SLAVE_SKIP_COUNTER doesn't work anymore with GTID.
We need to find which transaction is causing the replication to fail (a quick way to check is sketched after the list below):
– From binary log
– From SHOW SLAVE STATUS (retrieved vs executed)
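A sketch of the SHOW SLAVE STATUS route: subtracting the executed set from the retrieved set yields the transactions that were fetched but not yet applied; the two set values below are placeholders for the fields copied from the status output:
slave> SHOW SLAVE STATUS\G
slave> SELECT GTID_SUBTRACT('<Retrieved_Gtid_Set>', '<Executed_Gtid_Set>') AS not_yet_applied;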
Types of errors: (check Last_SQL_Error in SHOW SLAVE STATUS)
Resolution: Consider that the following are the master and slave UUIDs:
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
slave>show slave status;
Copy the 'Executed_Gtid_Set' value: '4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-659731804,5b37def1-6189-11e3-bee0-e89a8f22a444:1-70734947-80436012:80436021-80437839'
It seems that the slave's (UUID 5b37def1-6189-11e3-bee0-e89a8f22a444) transaction '80437840' is causing the problem here.
slave> STOP SLAVE;
slave> SET GTID_NEXT="5b37def1-6189-11e3-bee0-e89a8f22a444:80437840"; (last_executed_slave_gtid_on_master + 1)
slave> BEGIN; COMMIT; (this injects an empty transaction for that GTID)
slave> SET GTID_NEXT="AUTOMATIC";
slave> START SLAVE;
slave> show slave status;
and it's ALL SET !!!
# If you used xtrabackup to take the backup on the master instance:
cat xtrabackup_info | grep binlog_pos
#Use the information you gave as an example:
binlog_pos = filename 'mysqld-bin.000283', position '294922064', GTID of the last change '1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960'
# Copy the 'GTID of the last change' value into gtid_purged:
slave>STOP SLAVE;
slave>RESET MASTER;
slave>SET GLOBAL gtid_purged='1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960';
slave>change master to master_host='<master ip>', master_port=<master port>, MASTER_USER='<your replication user>', MASTER_PASSWORD='<your replication password>', master_auto_position=1;
slave>START SLAVE;
For school I have to use master slave replication with MySQL on the same computer.
Since you can't run multiple instances of the same MySQL version on your computer, I'm using MySQL 5.6 for the master (port 3306) and MySQL 5.5 for the slave (port 3307).
After performing the following query:
stop slave;
CHANGE MASTER TO
MASTER_HOST='localhost',
MASTER_PORT=3306,
MASTER_USER='MySQL_SLAVE',
MASTER_PASSWORD='mypasswordgoeshere',
MASTER_LOG_FILE='mysql-bin.000007',
MASTER_LOG_POS=1571;
start slave;
show slave status;
I see that both Slave_IO_Running and Slave_SQL_Running show 'Yes'.
However, after inserting data in the master database, the Slave_SQL_Running value switches from 'Yes' to 'No'.
The Last_Error column gives this:
1594 - Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
Using the mysqlbinlog command on the binary logs of my master and slave I see no errors.
Since I run these two instances on one computer, I'm pretty sure my problem isn't caused by a network issue. And since I just imported the master's data into the slave, I'm pretty sure it also isn't caused by a bug in the MySQL code.
Any thoughts?
Thanks for your time!
I solved the problem by changing binlog_format from 'ROW' to 'MIXED' on the master.
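For reference, a sketch of how that change can be made on the master, assuming a dynamic change plus a my.cnf entry so it survives a restart:
master> SET GLOBAL binlog_format = 'MIXED';    -- applies to new sessions
master> SHOW VARIABLES LIKE 'binlog_format';   -- verify
# and in my.cnf under [mysqld]:
# binlog_format = MIXED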
Quick questions about MySQL Master-Slave-Slave set-ups:
I currently have a Master-Slave setup and I would like to add another slave. Would it be possible to clone the server running the slave, then spin up a new server with the image from the slave, and have it pick up right where it left off? So from whatever the binlog position was at the time of the copy, it would just run until it catches up with the master?
Ideally, I'm trying to start another slave that connects to the master without shutting down the master for a backup. Any advice or guidance would be great. Thanks!
Yes, you can shut down the slave instance and copy all of its data to another slave (including logs).
Don't forget to edit my.cnf on the second slave (you should change server-id; see the sketch below).
Then start both slave servers.
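A minimal my.cnf sketch of that change; the id values are arbitrary examples and only need to be unique across the whole topology:
# my.cnf on the original slave
[mysqld]
server-id = 2
# my.cnf on the cloned slave (must differ from the master and every other slave)
[mysqld]
server-id = 3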
Yes, this is possible. The best way would probably be to temporarily pause replication on the slave, determine the master binary log position information, then make your dump from the replica while replication is still paused (and no other data is changing on the replica). After the dump is complete you can restart the replica, as sketched below.
On the new server, just load the dump, set the binlog coordinates and start up the replication. A word of caution, though: make sure your settings for purging the binary logs on the master retain them long enough for you to complete this setup and get the new slave caught up before the binlogs are purged.
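A sketch of that sequence; host names, credentials and coordinates are placeholders, and the coordinate fields used (Relay_Master_Log_File / Exec_Master_Log_Pos from the paused replica) are one common choice:
slave> STOP SLAVE;
slave> SHOW SLAVE STATUS\G   -- note Relay_Master_Log_File and Exec_Master_Log_Pos
# dump the paused replica (shell)
mysqldump --all-databases --single-transaction > replica_dump.sql
slave> START SLAVE;
-- on the new server, after loading replica_dump.sql:
newslave> CHANGE MASTER TO MASTER_HOST='<master host>', MASTER_USER='<repl user>', MASTER_PASSWORD='<repl password>', MASTER_LOG_FILE='<Relay_Master_Log_File>', MASTER_LOG_POS=<Exec_Master_Log_Pos>;
newslave> START SLAVE;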
Here's a good tutorial on how to setup multiple replication slaves for a master server:
http://arcib.dowling.edu/cgi-bin/info2html?%28mysql%29replication-howto
It doesn't explain your scenario, but it gives an important hint: you must assign a unique server-id to your second slave.
Regarding your problem: if your master's binary log is kept long enough, you should not get into trouble. Just shut down your slave for a moment, clone it, and write down the slave's MASTER_LOG_FILE and MASTER_LOG_POS; then restart the original slave and set up the second slave correctly, that is, with those MASTER_LOG_FILE and MASTER_LOG_POS values and a unique server-id in my.cnf.
Then start up your second slave. Use "START SLAVE" to start the replication and then have a look at "SHOW SLAVE STATUS;".
Regards,
Stefan
PS: I cannot promise this will work, but I'm quite sure it should.
You can use an existing MySQL slave to make a new one; just do the following steps (a worked command sequence follows the list):
Stop replication on the existing slave.
Execute show slave status; and note these values: Master_Log_File: master-bin.000002 and
Read_Master_Log_Pos: 1307
Take a mysqldump and restore it on the new MySQL slave server; you can copy the my.cnf file from the existing slave and just change server-id.
Execute the change master to command on the new slave server, providing the details of the MySQL master server and the log file name and log position obtained from the existing slave.
Execute start slave; on the existing MySQL slave to resume its replication (and on the new slave once it is configured).
To verify the slave status, run show slave status.
That's it, you have a new MySQL slave server!!
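A walk-through of those steps using the example values above; the master host and replication credentials are placeholders:
slave> STOP SLAVE;
slave> SHOW SLAVE STATUS\G   -- Master_Log_File: master-bin.000002, Read_Master_Log_Pos: 1307
# dump the existing slave and load it on the new server (shell)
mysqldump --all-databases --single-transaction > slave_dump.sql
mysql < slave_dump.sql       # run on the new slave after copying the file over
newslave> CHANGE MASTER TO MASTER_HOST='<master host>', MASTER_USER='<repl user>', MASTER_PASSWORD='<repl password>', MASTER_LOG_FILE='master-bin.000002', MASTER_LOG_POS=1307;
slave> START SLAVE;
newslave> START SLAVE;
newslave> SHOW SLAVE STATUS\G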
Good luck !