MySQL-8.0.12 slave replication failed

I use MySQL 8.0.12 to set up a master-slave replication cluster, but the slave always gets the following errors. Does anyone know how to fix this?
2018-11-01T04:17:58.327576Z 19 [ERROR] [MY-010834] [Server] next log
error: -1 offset: 50 log: ./mysql-relay-bin.000002 included: 1,
2018-11-01T04:17:58.327675Z 19 [ERROR] [MY-010596] [Repl] Error
reading relay log event for channel '': Error purging processed logs,
2018-11-01T04:17:58.327932Z 19 [ERROR] [MY-013121] [Repl] Slave SQL
for channel '': Relay log read failure: Could not parse relay log
event entry. The possible reasons are: the master's binary log is
corrupted (you can check this by running 'mysqlbinlog' on the binary
log), the slave's relay log is corrupted (you can check this by
running 'mysqlbinlog' on the relay log), a network problem, or a bug
in the master's or slave's MySQL code. If you want to check the
master's binary log or slave's relay log, you will be able to know
their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code:
MY-013121,
2018-11-01T04:17:58.327982Z 19 [ERROR] [MY-010586] [Repl] Error
running query, slave SQL thread aborted. Fix the problem, and restart
the slave SQL thread with "SLAVE START". We stopped at log
'mysql-bin.000003' position 805

Check disk space on the slave.
I faced the same issue once.
During replication, if the slave server's disk is full and no space is left, the MySQL replication thread waits up to 60 seconds for space to be freed. If the server is restarted during that window, the relay log cannot be recovered and the slave can no longer read it.
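If the relay log really is unrecoverable, a common recovery sketch (after freeing disk space) is to discard the slave's relay logs and re-fetch them from the master. The coordinates below are the ones from the last error line above; in general they must be taken from Relay_Master_Log_File / Exec_Master_Log_Pos in SHOW SLAVE STATUS on your slave:

```sql
-- Run on the slave. RESET SLAVE discards the existing (corrupted) relay
-- logs; CHANGE MASTER then re-points the slave at the last position the
-- SQL thread actually executed, so events are downloaded again.
STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000003',
  MASTER_LOG_POS  = 805;
START SLAVE;
```

Check free space first (e.g. df -h on the datadir); if the disk is still full, the same failure will recur immediately.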

Related

mysql5.5 sql_thread not running row based relay log

The master and slave are both mysql5.5.60 with binlog_format=MIXED. I have found that the slave has lost some data, but the error log is empty and show slave status looks normal.
Parsing the master's binlog and the slave's relay log shows that the "lost" data is still present in both, but sql_thread does not apply those events. The events are in row-based format. What could cause this?
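To verify that the rows really are in the relay log, row-based events can be decoded into readable pseudo-SQL with mysqlbinlog. The file name below is an example; take the real one from SHOW SLAVE STATUS:

```shell
# --verbose decodes row events into commented pseudo-SQL statements;
# DECODE-ROWS suppresses the raw base64 payload for readability.
mysqlbinlog --verbose --base64-output=DECODE-ROWS mysql-relay-bin.000002 | less
```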

MySQL replication Slave_SQL_Running fails after inserting data

For school I have to use master-slave replication with MySQL on the same computer.
Since you can't run multiple instances of the same MySQL version on your computer, I'm using MySQL 5.6 for the master (port 3306) and MySQL 5.5 for the slave (port 3307).
After performing the following query:
stop slave;
CHANGE MASTER TO
MASTER_HOST='localhost',
MASTER_PORT=3306,
MASTER_USER='MySQL_SLAVE',
MASTER_PASSWORD='mypasswordgoeshere',
MASTER_LOG_FILE='mysql-bin.000007',
MASTER_LOG_POS=1571;
start slave;
show slave status;
I see that both Slave_IO_Running and Slave_SQL_Running show 'Yes'.
However, after inserting data in the master database, the Slave_SQL_Running value switches from 'Yes' to 'No'.
The Last_Error column gives this:
1594 - Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
Running the mysqlbinlog command on the binary logs of my master and slave, I see no errors.
Since I run these two instances on one computer, I'm pretty sure the problem isn't caused by the network. And since I just imported the master's data into the slave, I'm pretty sure it isn't a data problem either.
Any thoughts?
Thanks for your time!
I solved the problem by changing binlog_format from 'ROW' to 'MIXED' on the master.
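For reference, a sketch of that change. My assumption is that the 5.5 slave could not parse row events written by the newer 5.6 master, so switching the format avoided sending them; note that SET GLOBAL does not affect sessions that are already open, and the change is lost on restart unless it also goes into my.cnf:

```sql
-- On the master:
SET GLOBAL binlog_format = 'MIXED';

-- And in my.cnf under [mysqld], to make it permanent:
--   binlog_format = MIXED
```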

Percona xtradb cluster node crash

I have a PXC cluster with three nodes, but when one of the nodes crashed, the other nodes followed suit.
What does the first paragraph ("Trying to get some variables...") of the error below mean? Under what circumstances does this error or message happen?
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7f7bf80f5c30): is an invalid pointer
Connection ID (thread ID): 323203618
Status: NOT_KILLED
You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.
150809 05:13:51 mysqld_safe Number of processes running now: 0
150809 05:13:51 mysqld_safe WSREP: not restarting wsrep node automatically
150809 05:13:51 mysqld_safe mysqld from pid file /var/lib/mysql/db-3-2.pid ended

MySQL 5.5 'Binary log is not open', Error_code: 1236

I am trying to configure master-master replication, but I am getting an error. My configuration is below.
Server A
server-id = 1
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 1
master-host = Kooler-PC
master-user = replicacao
master-password = replicacao
master-connect-retry = 60
replicate-do-db = gestao_quadra
log-bin = C:\mysql\log\log-bin.log
binlog-do-db = gestao_quadra
CHANGE MASTER TO MASTER_HOST='Kooler-PC', MASTER_USER='replicacao', MASTER_PASSWORD='replicacao', MASTER_LOG_FILE='log-bin.log ', MASTER_LOG_POS=0;
I have done the same steps on the other server, changing the server-id and host, and created the file in that path.
I get this error:
130218 18:03:02 [Note] Slave I/O thread: connected to master 'replicacao@Kooler-PC:3306',replication started in log 'log-bin.log ' at position 4
130218 18:03:02 [ERROR] Error reading packet from server: Binary log is not open ( server_errno=1236)
130218 18:03:02 [ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Binary log is not open', Error_code: 1236
130218 18:03:02 [Note] Slave I/O thread exiting, read up to log 'log-bin.log ', position 4
I am using MySQL 5.5
If you read the MySQL manual on replication and binary logging, it tells you that this line:
log-bin = C:\mysql\log\log-bin.log
does not create a log file with exactly that name. It specifies the base name, and any extension you supply (.log here) is silently removed. The log files that actually get created have a sequence number appended, so they are named:
C:\mysql\log\log-bin.000001
and so on. To see the actual log names, use the commands:
SHOW MASTER STATUS;
SHOW BINARY LOGS;
This part of your change master statement is not valid:
MASTER_LOG_FILE='log-bin.log ', MASTER_LOG_POS=0;
There's no part of any replication related instructions I've ever read which would lead you to use position 0. You have to use the master's binary log file and position that correspond to the snapshot of the data with which you initialized the slave.
See the manual for more info. Start with basic master->slave replication first before you attempt more complex replication structures. http://dev.mysql.com/doc/refman/5.5/en/replication.html
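Putting that together, a corrected statement would look something like the sketch below. The log file name and position are illustrative; the real values must come from SHOW MASTER STATUS on the master, taken at the moment you snapshot its data for the slave:

```sql
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='Kooler-PC',
  MASTER_USER='replicacao',
  MASTER_PASSWORD='replicacao',
  MASTER_LOG_FILE='log-bin.000001',  -- actual name from SHOW MASTER STATUS
  MASTER_LOG_POS=107;                -- actual position from SHOW MASTER STATUS
START SLAVE;
```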

MySQL replication slave losing connection each 10 minutes

I have multiple servers set up with one-way MySQL replication for backup purposes. On one of these slaves I have a problem: exactly every 10 minutes it loses the connection, then reconnects without problems. Example from the error log:
121216 18:05:49 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.000002' at position 782733912
121216 18:05:49 [ERROR] Slave I/O: error reconnecting to master 'repl@127.0.0.1:5002' - retry-time: 60 retries: 86400, Error_code: 2013
121216 18:06:49 [Note] Slave: connected to master 'repl@127.0.0.1:5002',replication resumed in log 'mysql-bin.000002' at position 782733912
121216 18:15:49 [ERROR] Error reading packet from server: Lost connection to MySQL server during query ( server_errno=2013)
121216 18:15:49 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.000002' at position 822218944
121216 18:15:49 [ERROR] Slave I/O: error reconnecting to master 'repl@127.0.0.1:5002' - retry-time: 60 retries: 86400, Error_code: 2013
121216 18:16:49 [Note] Slave: connected to master 'repl@127.0.0.1:5002',replication resumed in log 'mysql-bin.000002' at position 822218944
121216 18:25:49 [ERROR] Error reading packet from server: Lost connection to MySQL server during query ( server_errno=2013)
121216 18:25:49 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.000002' at position 850106111
121216 18:25:49 [ERROR] Slave I/O: error reconnecting to master 'repl@127.0.0.1:5002' - retry-time: 60 retries: 86400, Error_code: 2013
So, everything works, but the error log is flooded with messages.
I looked at various MySQL settings, but I don't see any set to 10 minutes or 600 seconds.
FWIW, replication works through SSH tunnel using AutoSSH. I looked into sshd_config, but also do not see any timeout setting.
Which setting should I look into?
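For what it's worth, the server-side timeouts can be listed directly; none of the replication-related ones defaults to 600 seconds, which already hints that the cause is outside MySQL:

```sql
-- slave_net_timeout (default 3600 in older 5.x versions) governs when the
-- I/O thread decides the master connection is dead and reconnects.
SHOW GLOBAL VARIABLES LIKE '%timeout%';
```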
I was looking at a similar problem recently, and it turned out that our firewall blocked the autossh monitoring port, so autossh restarted ssh every 10 minutes. This may be happening to you too.
Check your autossh log. It usually goes to /var/log/syslog unless you specify AUTOSSH_LOGFILE.
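A quick way to check, assuming the default syslog destination:

```shell
# Look for autossh restart messages; a steady 10-minute cadence of
# "port down, restarting ssh" confirms the monitoring port is blocked.
grep autossh /var/log/syslog | tail -n 20
```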
As @interskh pointed out, the culprit may be ssh. My /var/log/syslog contained messages like the following:
Sep 15 16:34:57 servername autossh[2799]: timeout polling to accept read connection
Sep 15 16:34:57 servername autossh[2799]: port down, restarting ssh
Sep 15 16:34:57 servername autossh[2799]: starting ssh (count 136)
Sep 15 16:34:57 servername autossh[2799]: ssh child pid is 11664
I found a Debian bug report thread that suggested that contrary to many tutorials, it isn't necessary to include the -M parameter. Since version 1.4a-1, autossh will use a randomly selected "high" port by default (which is arguably better than manually specifying a monitoring port with -M).
Omitting the -M flag solved the problem for me.
Previous command (restarts the SSH connection every 10 minutes):
autossh -p2223 -M 20000 -f username@example.com -L 12345:127.0.0.1:3306 -N
New (working) command:
autossh -p2223 -f username@example.com -L 12345:127.0.0.1:3306 -N
In case it helps anyone, our SSH client is running Ubuntu and the SSH server is running CentOS.