My MySQL Slave server has stopped replicating its Master - mysql

I have a backup server that replicates my production server's MySQL database. A while ago I set up a staging server running the same PHP script, the only difference being that I gave it a read-only MySQL user so it can only read from the database, not change it. I'm not sure whether that is related, but today I noticed the backup had stopped replicating with the following error:
Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from impossible position'
Here is the output from the master:
File: mysql-bin.000208
Position: 24383202
Binlog_Do_DB: sexxymofo
Binlog_Ignore_DB:
and the slave:
Slave_IO_State:
Master_Host: 184.168.76.5
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000193
Read_Master_Log_Pos: 54442531
Relay_Log_File: mysqld-relay-bin.000155
Relay_Log_Pos: 54442676
Relay_Master_Log_File: mysql-bin.000193
Slave_IO_Running: No
Slave_SQL_Running: Yes
Replicate_Do_DB: sexxymofo
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 54442531
Relay_Log_Space: 54442875
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from impossible position'
Last_SQL_Errno: 0
Last_SQL_Error:
As you can see, the slave seems to have gotten ahead of the master to a non-existent position. Can anyone help? I'm probably going to restart the replication process, but I would like to make sure it doesn't happen again.

Position is the position within the current binlog file, so the slave is not ahead; it is way behind: it is still on file mysql-bin.000193 while the master is already on mysql-bin.000208.
What makes the requested position "impossible" is most likely that the master has already purged mysql-bin.000193, and since the slave's I/O thread isn't running, that file was never transferred.
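A minimal way to confirm this (a sketch, not from the thread; the CHANGE MASTER coordinates below are placeholders): check on the master which binlog files still exist, and if the one the slave needs is gone, re-seed the slave from a fresh backup and point it at the coordinates recorded when that backup was taken.
-- On the master: list the binlog files that are still available
SHOW BINARY LOGS;
-- If mysql-bin.000193 is no longer listed, the slave cannot resume from it.
-- After re-seeding the slave from a fresh backup of the master, point it at
-- the coordinates recorded with that backup (placeholder values shown):
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.000208',
    MASTER_LOG_POS  = 24383202;
START SLAVE;
SHOW SLAVE STATUS\G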

Related

Last_SQL_Error: Could not execute Delete_rows event on table - Replication error?

I am in the process of migrating an on-premises MySQL database to Amazon Aurora, following the official AWS documentation. Per the docs, these are the steps I have followed so far:
Set up Percona XtraBackup.
Took a backup and moved the data to S3.
Went to the RDS dashboard and restored the database from S3.
The database is up and running successfully.
Now I am setting up binlog replication where my Aurora instance will be the replica and my on-premises MySQL server the master. To set up replication, I followed the steps laid out in the official docs:
On my master, created a user 'replica-aurora' and granted it the necessary privileges using GRANT REPLICATION SLAVE ON *.* TO 'replica-aurora'@'%' IDENTIFIED BY 'my-secret-password';
Ran SHOW MASTER STATUS on my master, which is on-premises:
File: mysql-bin.004762
Position: 68093017
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
Connected to my Aurora instance and ran the following command:
CALL mysql.rds_set_external_master ('Ip-of-master', 3306, 'replica-aurora', 'password', 'mysql-bin.004762', 68093017, 0);
Ran CALL mysql.rds_start_replication;
When I check SHOW SLAVE STATUS\G, I keep running into the following error:
Slave_IO_State: Waiting for master to send event
Master_Host: Master-IP
Master_User: replica-aurora
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.004762
Read_Master_Log_Pos: 71341857
Relay_Log_File: relaylog.000002
Relay_Log_Pos: 195715
Relay_Master_Log_File: mysql-bin.004762
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table: mysql.rds_replication_status,mysql.rds_monitor,mysql.rds_sysinfo,mysql.rds_configuration,mysql.rds_history
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1032
Last_Error: Could not execute Delete_rows event on table db.feed; Can't find record in 'feed', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.004762, end_log_pos 68293560
Skip_Counter: 0
Exec_Master_Log_Pos: 68288412
Relay_Log_Space: 3249360
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1032
Last_SQL_Error: Could not execute Delete_rows event on table db.feed; Can't find record in 'feed', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.004762, end_log_pos 68293560
I tried setting sql_slave_skip_counter=1 and ran into this:
mysql> SET GLOBAL sql_slave_skip_counter = 1; ERROR 1227 (42000): Access denied; you need (at least one of) the SUPER privilege(s) for this operation
I tried skipping the offending event using CALL mysql.rds_skip_repl_error; but ran into the following:
mysql> CALL mysql.rds_skip_repl_error;
+-------------------------------------+
| Message |
+-------------------------------------+
| Statement in error has been skipped |
+-------------------------------------+
1 row in set (0.04 sec)
+-----------------------------------------------------------------------------------+
| Message |
+-----------------------------------------------------------------------------------+
| Slave has encountered a new error. Please use SHOW SLAVE STATUS to see the error. |
+-----------------------------------------------------------------------------------+
1 row in set (2.14 sec)
Query OK, 0 rows affected (2.14 sec)
which leads back to essentially the same error message as posted above, ERROR 1032.
Does anyone have ideas on how to fix this problem? I would really appreciate any help.
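No accepted answer appears in the thread, but one possible cause (an assumption, not confirmed here) is that the coordinates passed to rds_set_external_master came from SHOW MASTER STATUS run after the backup was taken, rather than from the binlog coordinates XtraBackup records at backup time in its xtrabackup_binlog_info file, so the replica replays deletes for rows it never received. A sketch of re-pointing the replica at the backup's coordinates (file name and position below are placeholders):
-- On the Aurora replica; coordinates should come from xtrabackup_binlog_info
CALL mysql.rds_stop_replication;
CALL mysql.rds_reset_external_master;
CALL mysql.rds_set_external_master ('Ip-of-master', 3306, 'replica-aurora', 'password', 'mysql-bin.004700', 1234567, 0);
CALL mysql.rds_start_replication;
SHOW SLAVE STATUS\G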

MySQL master-slave replication breaks when issuing commands from the master's phpMyAdmin

I have successfully configured a MySQL slave from an existing master server using this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-in-mysql
The setup works fine, and changes made to the master through the MySQL shell are reflected on the slave perfectly. But if I issue commands from phpMyAdmin, replication breaks down and shows the following error:
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: ***.***.***.***
Master_User: *****
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.0000**
Read_Master_Log_Pos: 107
Relay_Log_File: mysql-relay-bin.00000*
Relay_Log_Pos: 2407
Relay_Master_Log_File: mysql-bin.000**
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1146
Last_Error: Error 'Table 'phpmyadmin.pma_table_uiprefs' doesn't exist' on query. Default database: ''. Query: 'REPLACE INTO `phpmyadmin`.`pma_table_uiprefs` VALUES ('**', '******', '******', '{"sorted_col":"`******`.`date` DESC","CREATE_TIME":"2016-12-19 09:35:35","col_order":["1","2","3","4","5","0","6","7","8","9"]}', NULL)'
Skip_Counter: 0
Exec_Master_Log_Pos: 2261
Relay_Log_Space: 8323
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1146
Last_SQL_Error: Error 'Table 'phpmyadmin.pma_table_uiprefs' doesn't exist' on query. Default database: ''. Query: 'REPLACE INTO `phpmyadmin`.`pma_table_uiprefs` VALUES ('******', '******', '******', '{"sorted_col":"`******`.`date` DESC","******":"2016-12-19 09:35:35","col_order":["1","2","3","4","5","0","6","7","8","9"]}', NULL)'
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
1 row in set (0.00 sec)
phpMyAdmin is not configured on the slave (I installed it without pointing it at the slave), and in the master configuration I've explicitly set the phpmyadmin database not to be replicated to the slave with the following setting:
binlog_ignore_db = phpmyadmin
How do I remove this error:
'phpmyadmin.pma_table_uiprefs' doesn't exist'
Please guide me.
The problem is that the slave is trying to replicate the phpMyAdmin tables, which don't exist in your replica. To make it work, you need to exclude the phpmyadmin database.
In the slave's mysqld.cnf file, you need to add this:
replicate-ignore-db=phpmyadmin
and then restart the replica.
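If the SQL thread is still stopped on the already-failed REPLACE after adding the filter and restarting, a common (use-with-care) way to move past that single event is to skip it once, roughly like this:
-- On the slave, after adding replicate-ignore-db=phpmyadmin and restarting:
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;  -- skip only the event that failed
START SLAVE;
SHOW SLAVE STATUS\G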
I solved the issue with the following steps:
Installing MySQL and phpMyAdmin on the slave using this guide: http://usefulangle.com/post/35/how-to-install-linux-apache-mysql-php-phpmyadmin-lamp-stack-on-ubuntu-16-04
Importing the phpMyAdmin tables from my master DB into the slave DB. The problems were occurring because the phpMyAdmin tables on the master and on the slave did not match.
What I've not understood is why the slave was replicating the phpMyAdmin tables despite binlog_ignore_db = phpmyadmin being set in the master's configuration.
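A likely explanation (an assumption, not confirmed in the thread): with statement-based logging, binlog_ignore_db decides what to write based on the session's default database, and the failing query shows Default database: '' with a fully qualified table name, so it was logged anyway. A table-level wildcard filter on the slave is not tied to the default database; a sketch for the slave's config:
[mysqld]
# ignore everything in the phpmyadmin schema, regardless of which
# default database the statement was issued from on the master
replicate-wild-ignore-table = phpmyadmin.%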

Issues with MySQL replication on MariaDB

I have been trying to get MySQL replication set up on DigitalOcean with Forge-provisioned servers and MariaDB.
I keep getting this error when running SHOW SLAVE STATUS\G:
Fatal error: The slave I/O thread stops because master and slave have equal MySQL server ids; these ids must be different for replication to work (or the --replicate-same-server-id option must be used on slave but this does not always make sense; please check the manual before using it).
This is the tutorial I followed:
https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-in-mysql
I've checked the server-id in both my.conf files; the master is set to 1 and the slave to 2.
Here's a dump of the full SHOW SLAVE STATUS\G output:
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: *****
Master_User: slave_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mariadb-bin.000017
Read_Master_Log_Pos: 642
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 4
Relay_Master_Log_File: mariadb-bin.000017
Slave_IO_Running: No
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 1
Exec_Master_Log_Pos: 642
Relay_Log_Space: 249
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 1593
Last_IO_Error: Fatal error: The slave I/O thread stops because master and slave have equal MySQL server ids; these ids must be different for replication to work (or the --replicate-same-server-id option must be used on slave but this does not always make sense; please check the manual before using it).
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
Can anyone help?
Check that the config file is being used. It is probably /etc/my.cnf (not my.conf).
Run SHOW VARIABLES LIKE 'server_id'; on both servers.
Check that server_id is in the [mysqld] section of my.cnf.
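If the two files really do contain different IDs but the servers still report equal server IDs, the value is probably in a file or a section that isn't being read. A sketch of what should end up in each config (using the values from the question):
# /etc/my.cnf on the master
[mysqld]
server-id = 1
# /etc/my.cnf on the slave
[mysqld]
server-id = 2
After restarting each server, SHOW VARIABLES LIKE 'server_id'; should return 1 on the master and 2 on the slave.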

MySQL slave out of sync after crash

We have a "1 master, 1 slave" MySQL setup. We had a sudden power outage that took down the slave. After getting the machine back up, I found that the slave was out of sync with the master:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.0.0.1
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-log.001576
Read_Master_Log_Pos: 412565824
Relay_Log_File: mysqld-relay-bin.002671
Relay_Log_Pos: 6930
Relay_Master_Log_File: mysql-log.001573
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table: blah.table2
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1032
Last_Error: Could not execute Update_rows event on table blah.info; Can't find record in 'info', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-log.001573, end_log_pos 689031225
Skip_Counter: 0
Exec_Master_Log_Pos: 689030864
Relay_Log_Space: 2944772417
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1032
Last_SQL_Error: Could not execute Update_rows event on table blah.info; Can't find record in 'info', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-log.001573, end_log_pos 689031225
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
1 row in set (0.00 sec)
We're using a binlog format of "ROW", so when I try to use mysqlbinlog to look at the offending row, I don't see anything of use. I don't want to simply set the skip counter, because I think that would throw my table even further out of sync.
Is there anything I can do on the slave that would essentially "roll back" to a given point in time, from which I could then reset the master log file, position, etc.? If not, is there anything at all that I can do to get back in sync?
One can usually recover from small discrepancies using pt-table-checksum and pt-table-sync.
It looks to me like your slave lost its place in the binary log sequence when it crashed. The slave continually writes its last processed binlog event into datadir/relay-log.info, but this file uses buffered writes, so it is susceptible to losing data in a crash.
That's why Percona Server created a crash-resistant replication feature to store the same replica info in an InnoDB table, to recover from this scenario.
MySQL 5.6 has implemented a similar feature: you can set relay_log_info_repository=TABLE so the replica saves its state in a crash-resistant way.
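A minimal sketch of those crash-safe settings on the replica (stock MySQL 5.6+ variable names; adjust to your setup):
# my.cnf on the slave (MySQL 5.6+)
[mysqld]
relay_log_info_repository = TABLE   # SQL-thread position kept in mysql.slave_relay_log_info
master_info_repository    = TABLE   # I/O-thread position kept in mysql.slave_master_info
relay_log_recovery        = ON      # rebuild relay logs from the saved position after a crash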
Re your comment:
Yes, in theory pt-table-sync can fix any amount of replication drift, but it's not necessarily the most efficient way to correct large discrepancies. At some point, it's quicker and more efficient to trash the outdated replica and reinitialize it using a new backup from the master.
Check out How to setup a slave for replication in 6 simple steps with Percona Xtrabackup.

MySQL slave not updating

I have replication set up and everything looks fine; there are no errors, but the data is not being moved to the slave.
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: xxxxx
Master_User: xxxxxx
Master_Port: xxxx
Connect_Retry: 30
Master_Log_File: mysql-bin.000006
Read_Master_Log_Pos: 98
Relay_Log_File: xxxxx-relay-bin.002649
Relay_Log_Pos: 235
Relay_Master_Log_File: mysql-bin.000006
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 98
Relay_Log_Space: 235
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.00 sec)
Run SHOW MASTER STATUS or SHOW MASTER STATUS\G on the master DB. It will give you the correct values to update your slave with.
From your slave status, it looks like your slave has successfully connected to the master and is awaiting log events. To me, this means your slave user has been properly set up, and has the correct access. It really seems like you just need to sync the correct log file position.
Be careful: to get a good sync, you should probably stop the master, dump the DB, record the master log file and position, then start the master, import the dump on the slave, and finally start the slave using the correct master log file and position. I've done this about 30 times, and if you don't follow those steps almost exactly, you will get a bad sync.
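A sketch of that last step, once the dump has been imported on the slave (the file name and position are placeholders for the values recorded from SHOW MASTER STATUS at dump time):
-- On the slave, after importing the dump taken from the master:
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.000006',   -- placeholder: file recorded at dump time
    MASTER_LOG_POS  = 98;                   -- placeholder: position recorded at dump time
START SLAVE;
SHOW SLAVE STATUS\G   -- Exec_Master_Log_Pos should then advance as new writes arrive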
There could be a couple of issues:
The master does not know about the slave.
The slave and master are not in sync on the relay log file.
You have to re-sync the slave with the master from the point where it stopped updating, then start the slave. It should work fine.