MySQL 5.5 sql_thread not applying row-based relay log events

The master and slave are both MySQL 5.5.60 with binlog_format=MIXED. It has now been found that the slave is missing some data, but the error log is empty and SHOW SLAVE STATUS looks normal.
Parsing the master's binlog and the slave's relay log shows that the "lost" data is present in both, yet the sql_thread did not apply those events. The events in question are in row-based format. What could this be caused by?
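A diagnostic sketch, assuming nothing beyond what is in the question. With row-based events, replication filters (replicate-do-db / replicate-ignore-db and the wild variants) are matched against the actual schema of each row event rather than the USE database, and events carrying the slave's own server_id are skipped silently by default, so those are the usual things to rule out first. The statements below only inspect, they change nothing; the relay log file name is a placeholder:
slave> SHOW SLAVE STATUS\G   -- check the Replicate_Do_DB / Replicate_Ignore_DB / Replicate_Wild_* columns
slave> SELECT @@server_id;   -- must differ from the master's server_id, otherwise the master's events are skipped
master> SELECT @@server_id;
# decode one of the "lost" row events straight from the relay log
mysqlbinlog -vv --base64-output=DECODE-ROWS slave-relay-bin.000002 | less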

Related

How do I find the command causing a replication error in MySQL

I am trying to figure out which statement is being executed on the replication slave that is causing an error. The slave status is no help, and the log file tells me it is trying to perform a delete, so I am not too worried about skipping this error, but I can't figure out what exactly it is trying to delete.
Here is the error.log entries...
2022-05-20T12:13:52.721940Z 7 [ERROR] [MY-010584] [Repl] Slave SQL for channel '': Worker 1 failed executing transaction 'ANONYMOUS' at master log mysql-bin.000004, end_log_pos 8759277; Could not execute Delete_rows event on table puzzleswaps_com.puzzle; Can't find record in 'puzzle', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log FIRST, end_log_pos 8759277, Error_code: MY-001032
2022-05-20T12:13:52.722594Z 6 [ERROR] [MY-010586] [Repl] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000004'
So the error is coming from the master's binlog file mysql-bin.000004 just before position 8759277
But I am looking at the mysqlbinlog output and it makes no sense to me since it is spitting out a huge amount of hex code (maybe base 64 encoded?) but it ends like this...
CwgY2FyZHMsIGNvbGxlY3RvciwgZWFtZXMsIG1vZGVybtwFAAsxMjAwIC0gMTc5OQAAAAAAAAAA
AJmszQNHYoVftwAAAAAAAAAAAA5TaGVsbGV5IERhdmllcxNDb2xsZWN0b3IncyBFZGl0aW9uAIAA
AAABAAAABAEnxSkIAAAAAApjYWNoZV9ob21lwuoHwtpRAAAQAExha2UgQ29tbyBCcmVlemUlAGFy
dCwgbGFrZSBjb21vLCBwYXN0ZWwsIHZlbHZldCBwaWVjZXPuAgAJNjAwIC0gODk5AAAAAAAAAAAA
mazNAzdihV+3AAAAAAAAAAAADlBhdWwgSm9yZ2Vuc2VuFVZlbHZldCAtIFRvdWNoIFBpZWNlcwCA
AAAAAQAAAAQHKcUpCAAAAAAKY2FjaGVfaG9tZcTqB8LaUQAAAwBPd2wEAGJpcmRKAQAJMzAwIC0g
NTk5AAAAAAAAAAAAmazNAxxihV+3AAAAAAAAAAAAAIAAAAABAAD2Es8r
nq6GYiACAAAAowAAAO2nhQAAAJYCAAAAAAEAAgAg/////wIAeILH6wfC2lEAAB0AV2lsZCBXaGlt
c3kgLSBXb29kbGFuZCBXaGltc3kAACYCADEAAAAAAAAAAAAAAAkAAAAAAAAAAACZrOTbxwxTcGFj
ZWJveWpvbm+ZrOTbxwxTcGFjZWJveWpvbm8AAAAAAAGQAQgBmazk28lihVtNU1taRg==
'/*!*/;
ROLLBACK /* added by mysqlbinlog */ /*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
Is there any way to see what exactly it is trying to delete?
Here is the thing: let's say you get the above error and it says you need to check binlog position 8759277 in file mysql-bin.000004.
mysqlbinlog --read-from-remote-server -h <ip-of-the-host> mysql-bin.000004 --verbose --base64-output=DECODE-ROWS | grep -A10 -B10 8759277
The command above will show you which statement was executed at that position. Once you have it, you can also check for the record on the replica. If the record is not there, you can either skip the event (only if that is the sole statement at that position), or insert the missing record on the replica, making sure you do not hit any constraints (a sketch of both options follows).
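A hedged sketch of the two options just described, for a replica that is not using GTIDs (with GTIDs you would inject an empty transaction instead, as shown further down this page). The column list and values are hypothetical and must come from the decoded binlog output:
-- option 1: skip the single failing event
slave> STOP SLAVE;
slave> SET GLOBAL sql_slave_skip_counter = 1;
slave> START SLAVE;
-- option 2: put the missing row back so the Delete_rows event can apply
slave> INSERT INTO puzzleswaps_com.puzzle (/* columns from the decoded event */) VALUES (/* values from the decoded event */);
slave> START SLAVE;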

MySQL-8.0.12 slave replication failed

I use MySQL 8.0.12 to set up a master-slave replication cluster, but the slave always gets the following errors. Does anyone know how to fix this?
2018-11-01T04:17:58.327576Z 19 [ERROR] [MY-010834] [Server] next log error: -1 offset: 50 log: ./mysql-relay-bin.000002 included: 1
2018-11-01T04:17:58.327675Z 19 [ERROR] [MY-010596] [Repl] Error reading relay log event for channel '': Error purging processed logs
2018-11-01T04:17:58.327932Z 19 [ERROR] [MY-013121] [Repl] Slave SQL for channel '': Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: MY-013121
2018-11-01T04:17:58.327982Z 19 [ERROR] [MY-010586] [Repl] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000003' position 805
Check the disk space on the slave.
I faced the same issue once.
During replication, if the slave server's disk is full and no space is left, the MySQL replication thread waits for space to be freed; the wait time is 60 seconds. If the server is restarted during that time, the relay log cannot be recovered and the slave cannot read the relay log (see the recovery sketch below).
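A hedged recovery sketch for a corrupted relay log, assuming the applied coordinates in SHOW SLAVE STATUS are still trustworthy: throw the relay logs away and re-fetch from the master starting at the point the SQL thread had reached. The file name and position below are taken from the error above but are effectively placeholders:
slave> STOP SLAVE;
slave> SHOW SLAVE STATUS\G   -- note Relay_Master_Log_File and Exec_Master_Log_Pos before resetting
slave> RESET SLAVE;          -- discards the (corrupted) relay logs, keeps the connection settings
slave> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=805;
slave> START SLAVE;
Setting relay_log_recovery=ON in my.cnf makes the slave rebuild its relay logs automatically after a crash, which avoids this class of failure.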

Mysql replication recovery after Slave Reset

I had MySQL master-slave replication configured. I accidentally ran RESET SLAVE on the slave instance. I do not have a note of the last binlog position of the master that the slave had completed. The SHOW SLAVE STATUS command returns a blank row as I have reset the slave.
Is there any way I can recover the last binlog position that the slave had finished syncing? Or is there any other way I can fix the replication without setting it up fresh?
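A hedged sketch of the easy case, assuming GTID-based replication was in use: RESET SLAVE does not clear gtid_executed (only RESET MASTER does), so the slave can simply re-attach with auto-positioning. The host and credentials below are placeholders:
slave> CHANGE MASTER TO MASTER_HOST='<master ip>', MASTER_USER='<repl user>', MASTER_PASSWORD='<repl password>', MASTER_AUTO_POSITION=1;
slave> START SLAVE;
slave> SHOW SLAVE STATUS\G
Without GTIDs there is no reliable position left after the reset; the usual fallbacks are to match the slave's last applied changes against the master's binlog with mysqlbinlog (by timestamp and content), or to rebuild the slave from a fresh backup.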

MySQL error 1236 When using GTID

I want to create a replica of my Percona Server with GTID enabled, but I got this error when I ran SHOW SLAVE STATUS:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
Normally, I would stop my slave, reset it, reset the master (on the slave), and get a new GTID_PURGED value from the master. But this time the master has a very unusual value (several values, in fact) and I am not sure how to determine which one to use:
mysql> show master status\G
*************************** 1. row ***************************
File: mysqld-bin.000283
Position: 316137263
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546512667
1 row in set (0.00 sec)
From the slave with the new backup copy, I get this:
root@ubuntu:/var/lib/mysql# cat xtrabackup_binlog_info
mysqld-bin.000283 294922064 1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,
c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,
cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960
One more thing: I purged the binary logs on the master just before I made the backup, and automatic binlog purging is set to 7 days, so I know it's not because the binlog has been purged, as the error suggests.
I am running Ubuntu 14.04 and Percona Server 5.6.31-77.
How can I resolve this issue? What is the correct value for the master's GTID_PURGED?
MySQL 5.6 GTID replication errors and fixes
What is a GTID?
A GTID has the form source_id:transaction_id, e.g. 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:<n>.
The first part is the server's 128-bit identification number (SERVER_UUID); it identifies where the transaction originated, and every server has its own SERVER_UUID. The second part is the sequence number of the transaction as committed on that originating server.
What problems does GTID solve?
It becomes possible to identify a transaction uniquely across the replication servers, which makes automating the failover process much easier: there is no need to do calculations or inspect the binary log, just use MASTER_AUTO_POSITION=1.
At the application level it is easier to do a WRITE/READ split: after a write on the MASTER you have a GTID, so just check whether that GTID has been executed on the SLAVE that you use for reads (see the sketch below).
Developing new automation tools is no longer a pain.
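A minimal sketch of that read/write-split check, assuming the application knows the GTID of its write (the value below is illustrative); GTID_SUBSET() is available from MySQL 5.6:
slave> SELECT GTID_SUBSET('4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:83345128', @@GLOBAL.gtid_executed) AS applied;
-- applied = 1 means the slave has already executed that transaction, so it is safe to read from it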
How can I implement it?
The following settings are needed on ALL servers of the replication chain (a minimal my.cnf sketch follows the reference link):
gtid_mode: It can be ON or OFF (not 1 or 0). It enables GTIDs on the server.
log_bin: Enables the binary log. Mandatory for a replication environment.
log-slave-updates: Slave servers must log the changes that come from the master in their own binary log.
enforce-gtid-consistency: Statements that can't be logged in a transactionally safe manner are rejected by the server.
ref: http://dev.mysql.com/doc/refman/5.6/en/replication-gtids-howto.html
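A minimal my.cnf sketch of those settings (server-id is a placeholder and must be unique per host; the bare-flag form of the last two options is the classic 5.6 style):
[mysqld]
server-id = 1
log-bin = mysql-bin
log-slave-updates
gtid-mode = ON
enforce-gtid-consistency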
Replication errors and fixes:
"'Got fatal error 1236 from master when reading data from binary log: "The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires." slave_io thread stop running.
Resolution: Considering following are the master – slave UUID's
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
Steps:
slave> STOP SLAVE;
slave> FLUSH TABLES WITH READ LOCK;
slave> SHOW MASTER STATUS;
'4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-13030:13032-13317:13322-13325:13328-653183:653185-654126:654128-1400817:1400820-3423394:3423401-5779965'
(here 83345127 is the last GTID executed on the master, and 5779965 is the last slave GTID executed on the master)
slave> RESET MASTER;
slave> SET GLOBAL GTID_PURGED='4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-83345127,5b37def1-6189-11e3-bee0-e89a8f22a444:1-5779965';
slave> START SLAVE;
slave> UNLOCK TABLES;
slave> SHOW SLAVE STATUS;
NOTE: After this, restart replication on any chained slaves if they stop replicating.
ERROR: 'Table … doesn't exist' on query (Default database: … Query: 'INSERT INTO …'), or Last_SQL_Error: '… Duplicate entry …' — skip the transaction on the slave (the slave_sql thread stops running). NOTE:
SQL_SLAVE_SKIP_COUNTER doesn't work anymore with GTID.
We need to find which transaction is causing the replication to fail, either:
– From the binary log
– From SHOW SLAVE STATUS (Retrieved_Gtid_Set vs Executed_Gtid_Set; see the sketch below)
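A minimal sketch of that comparison: paste the two sets from SHOW SLAVE STATUS into GTID_SUBTRACT() (the angle-bracket values are placeholders). It returns the GTIDs that were fetched but not yet applied, and the first of those is normally the failing transaction:
slave> SELECT GTID_SUBTRACT('<Retrieved_Gtid_Set>', '<Executed_Gtid_Set>') AS not_yet_applied;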
Type of errors (check Last_SQL_Error in SHOW SLAVE STATUS):
Resolution: Consider the following master and slave UUIDs:
MASTER UUID: 4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4
SLAVE UUID: 5b37def1-6189-11e3-bee0-e89a8f22a444
slave> SHOW SLAVE STATUS;
Copy the 'Executed_Gtid_Set' value: '4c2ad77f-697e-11e3-b2c3-c80aa9f17dc4:1-659731804,5b37def1-6189-11e3-bee0-e89a8f22a444:1-70734947-80436012:80436021-80437839'
It seems that the slave's (uuid 5b37def1-6189-11e3-bee0-e89a8f22a444) transaction '80437840' is the one causing the problem here.
slave> STOP SLAVE;
slave> SET GTID_NEXT="5b37def1-6189-11e3-bee0-e89a8f22a444:80437840"; (the last executed GTID for that UUID + 1)
slave> BEGIN; COMMIT;
slave> SET GTID_NEXT="AUTOMATIC";
slave> START SLAVE;
slave> SHOW SLAVE STATUS;
and it's ALL SET !!!
# If xtrabackup was used to take the backup on the master instance:
cat xtrabackup_info | grep binlog_pos
#Use the information you gave as an example:
binlog_pos = filename 'mysqld-bin.000283', position '294922064', GTID of the last change '1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960'
# Copy the 'GTID of the last change' value into gtid_purged:
slave> STOP SLAVE;
slave> RESET MASTER;
slave> SET GLOBAL gtid_purged='1570dee1-165b-11e6-a4a2-00e081e93212:1-3537,c73f3ee7-e8d4-ee19-6507-f898a9930ccd:1-18609,cdb70eaa-f753-ee1b-5c95-ecb8024ae729:1-2357789559:2357789561-2357790104:2357790106-2514115701:2514115703-2514115705:2514115707-2546400960';
slave> CHANGE MASTER TO MASTER_HOST='<master ip>', MASTER_PORT=<master port>, MASTER_USER='<your replication username>', MASTER_PASSWORD='<your replication password>', MASTER_AUTO_POSITION=1;
slave> START SLAVE;
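A quick verification sketch after starting replication (purely read-only):
slave> SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both be Yes,
-- and Retrieved_Gtid_Set / Executed_Gtid_Set should keep advancing.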

MySQL replication Slave_SQL_Running fails after inserting data

For school I have to use master slave replication with MySQL on the same computer.
Since you can't run multiple instances of the same MySQL version on your computer, I'm using MySQL 5.6 for the master (port 3306) and MySQL 5.5 for the slave (port 3307).
After performing the following queries:
stop slave;
CHANGE MASTER TO
MASTER_HOST='localhost',
MASTER_PORT=3306,
MASTER_USER='MySQL_SLAVE',
MASTER_PASSWORD='mypasswordgoeshere',
MASTER_LOG_FILE='mysql-bin.000007',
MASTER_LOG_POS=1571;
start slave;
show slave status
I see that both Slave_IO_Running and Slave_SQL_Running are successful.
However, after inserting data in the master database, the Slave_SQL_Running value switches from 'Yes' to 'No'.
The Last_Error column gives this:
1594 - Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
Using the mysqlbinlog command on the binary logs of my master and slave, I see no errors.
Since I run these two instances on one computer, I'm pretty sure my problem isn't caused by a network problem. And since I just imported the master's data into the slave, I'm pretty sure it also isn't caused by a bug in the MySQL code.
Any thoughts?
Thanks for your time!
Solved the problem by changing binlog_format from 'ROW' to 'MIXED' on the master. (Replicating from a newer master to an older slave is not officially supported, and the 5.5 slave apparently could not parse some of the row-based events written by the 5.6 master; with MIXED format the changes were logged in a form the older slave could apply.)
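A sketch of that change, assuming it is acceptable to switch the format at runtime (only new sessions pick it up) and persist it in the master's my.cnf:
master> SET GLOBAL binlog_format = 'MIXED';
# and in my.cnf so it survives a restart:
# [mysqld]
# binlog_format = MIXED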