I have MySQL replication set up with one master and one slave. Due to a bug in the code, at some point entries started being written to the slave server, and this was only detected a few days later.
Now I am thinking about how to switch it back correctly with minimal hassle and downtime. What would be the best way to do this? Let's consider only one table...
Solution 1
Simply start writing to the master from now on, after setting auto_increment to the slave's last id. I'm wondering if it will be troublesome to keep the master and slave out of sync.
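For example, a minimal sketch of solution 1, assuming a hypothetical table mytable whose last id on the slave is 123456:

-- run on the master; table name and id value are illustrative
ALTER TABLE mytable AUTO_INCREMENT = 123457;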
Solution 2
Stop the app from making any more entries, clear all the data from the master, refill it using a mysqldump taken from the slave, and then switch the app back on with the correct config.
STOP SLAVE;
-- load the dump
START SLAVE;
Will this stop the master from re-attempting to write the same data to the slave?
Any help appreciated. Any other solutions also welcomed.
Thanks
Sushil
I think you are on the right track with solution 2. Simply stopping the slave will not prevent the master from writing to its binary log, so when you start the slave again it will just replicate all the SQL statements from the master.
However, you can use this to your advantage if your dump includes 'DROP TABLE' before each table creation. This means you can do the following:
1) Stop the app from making any more entries in the master table(s)
2) Dump data from slave (ensure that mysqldump includes 'DROP TABLE' before each table import - it should do as it is a default option of mysqldump)
3) Run dump against master
4) Check slave status using SHOW SLAVE STATUS\G. Once Seconds_Behind_Master reaches 0 then you are good to switch on the app again (make sure it is writing to the master!!)
Step 3 will drop and recreate the tables on the master using the data from the slave. This drop and recreate will be replicated to the slave, so you should end up with the two in sync and a correct master-slave setup.
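As a rough sketch of steps 2-4, assuming hypothetical hostnames master-host and slave-host with credentials already configured:

# dump from the slave; --add-drop-table is a default option of mysqldump
mysqldump --host=slave-host --add-drop-table mydb > slave_dump.sql
# load into the master; the drop/recreate will replicate back to the slave
mysql --host=master-host mydb < slave_dump.sql
# on the slave, repeat until Seconds_Behind_Master reaches 0
mysql --host=slave-host -e 'SHOW SLAVE STATUS\G'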
Good luck!
I think your best option is to reset the slave/master completely. If the data on the slave is correct, reload the master from it, then export a new dump from the master and import it into the slave, and finally execute a new "CHANGE MASTER TO..." command.
I would recommend setting the "read_only" global variable on the slave.
http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_read-only
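For example, it can be enabled at runtime as below (or as read_only = 1 in my.cnf); note that it does not restrict users with the SUPER privilege or the replication SQL thread:

-- on the slave
SET GLOBAL read_only = ON;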
I've set up slave replication of a MySQL database. For development requirements, I want to write something into the slave database, but that would break the replication.
Since the database is huge, I don't want to restore the slave database from a MySQL dump file every time after I finish some development work.
My requirement:
All the changes in the slave database can be reverted by a simple command.
The replication keeps working.
One method is to use LVM filesystem snapshots. Before you begin testing:
Stop replication.
Take an LVM snapshot.
Do your tests. Replication is still off, but the data is up to date.
After you finish testing:
Stop mysqld.
Restore the snapshot. This reverts all files to the state they were in at the moment you created the LVM snapshot above.
Start mysqld and start replication. It will need to catch up and apply all changes since you stopped replication before your testing. This will take a little while, depending on how many changes happened on your master database.
See https://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/ for a nice tutorial on using LVM snapshots.
This method only works if your development database instance is on Linux.
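A rough sketch of the cycle, assuming the data directory sits on a hypothetical LVM volume /dev/vg0/mysql mounted at /var/lib/mysql (all names illustrative):

# before testing: stop replication, then snapshot the volume
mysql -e 'STOP SLAVE;'
lvcreate --size 10G --snapshot --name mysql-snap /dev/vg0/mysql
# ... do your tests ...
# after testing: stop mysqld, merge the snapshot back, restart
systemctl stop mysql
umount /var/lib/mysql
lvconvert --merge /dev/vg0/mysql-snap
mount /var/lib/mysql   # assumes an /etc/fstab entry for the volume
systemctl start mysql
mysql -e 'START SLAVE;'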
Insert the new records using a primary key that is not expected to be used by the master database (e.g. add a sufficiently large offset like 2^10 or negative numbers if allowed...).
In this way, the insertions coming from the master won't clash.
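For example, a hedged sketch against a hypothetical table mytable (offset, table, and column names are illustrative):

INSERT INTO mytable (id, note) VALUES (-1, 'dev-only row');         -- if negative ids are allowed
INSERT INTO mytable (id, note) VALUES (1000000000, 'dev-only row'); -- otherwise, a large offset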
I'm running MySQL as the database on Ubuntu instances. I'm using MySQL master-slave replication, where the master's changes are written to the slave but the slave's changes are not reflected on the master. That's fine. I'm using an HAProxy load balancer in front of the MySQL instances, so all requests are sent to the master MySQL instance. If the master MySQL instance is down, the slave MySQL instance will act as master and HAProxy will send all requests to the slave. An active-passive scenario.
HAProxy - 192.168.A.ABC
MySQL Master - 192.168.A.ABD
MySQL Slave - 192.168.A.ABE
Let's assume that the MySQL master (192.168.A.ABD) goes down. Now all requests will be sent by HAProxy to the MySQL slave (192.168.A.ABE), which acts as the master MySQL server for the time being.
My questions are:
What happens when the original master MySQL instance (192.168.A.ABD) comes back up?
Will changes written to the new MySQL master (192.168.A.ABE) be replicated to the original master (192.168.A.ABD) again?
How should I address this scenario?
First of all, I should say that I have never used HAProxy, so I can't comment on that directly.
However, in your current setup the master (ABD) will be out of sync and won't catch up. You will have to rebuild it using mysqldump or a similar tool.
What you would need is a master <-> master setup (as opposed to master -> slave), which enables you to write to either database and have it reflected in the other. This isn't quite as straightforward as it sounds, though.
Assuming you already have your master -> slave setup, and they are in sync:
On the Master (ABD) you want to add:
auto_increment_increment=2
auto_increment_offset=1
log-slave-updates
On the Slave (ABE) add:
auto_increment_increment=2
auto_increment_offset=2
log-slave-updates
to your my.cnf files. Restart the database. This will help prevent duplicate key errors. (N.b. log-slave-updates isn't strictly required, but it makes it easier to add another slave in future.)
Next you want to tell the Master (ABD) to replicate from the Slave (ABE).
Depending on your MySQL version and whether you are using GTIDs etc., the exact process differs slightly, but basically you are going to issue a CHANGE MASTER statement on the master so that it replicates from the slave.
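As a rough sketch (the replication user is assumed to already exist on ABE, and the log file/position are placeholders you would read from SHOW MASTER STATUS on ABE):

-- run on the Master (ABD)
CHANGE MASTER TO
  MASTER_HOST='192.168.A.ABE',
  MASTER_USER='repl',
  MASTER_PASSWORD='replpass',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;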
And away you go. You probably want to avoid writing to both at the same time as that opens up a whole other kettle of fish. But if the Master goes down, you can switch your writes to the slave, and when the master comes back up, it will simply start replicating the missing data.
I am considering your scenario:
Master - 192.168.A.ABD
Slave - 192.168.A.ABE
You cannot directly add the old master back into the system. To add it back you need to perform the steps below:
1) When the old master (192.168.A.ABD) is up again, add it as a slave. So now this happens:
Master - 192.168.A.ABE
Slave - 192.168.A.ABD
2) Once it has caught up, take the acting master (192.168.A.ABE) down, so writes go to 192.168.A.ABD again.
3) Then add 192.168.A.ABE back as a slave. After this you will get the original scenario:
Master - 192.168.A.ABD
Slave - 192.168.A.ABE
You can refer to this link:
https://dev.mysql.com/doc/refman/5.5/en/replication-solutions-switch.html
I need to get MySQL replication working from master to slave (I have tried it once already).
The database is quite large (over 100GB), and it will take some hours to get it ready for a new slave.
The database has both MyISAM and InnoDB tables, and both are being written to.
I think my only choice is to copy the data files from the master to a new slave? (Or make a database dump, which I refer to later under ROUND 2.)
Before that, do I have to shut down all the services which use the database and take a write lock on the tables, or should I shut down the whole database?
After syncing the data directory to the new replication server, I started it up and the database with its tables was there. The first error I got rid of by changing the binlog file to 007324 and the position to 0.
Error 1:
140213 4:52:07 [ERROR] Got fatal error 1236: 'Could not find first log file name in binary log index file' from master when reading data from binary log
140213 4:52:07 [Note] Slave I/O thread exiting, read up to log 'bin-log.007323', position 46774422
After that I got new problems from the database, and this error came up for every table.
Error 2:
Error 'Incorrect information in file: './database/table.frm'' on query. Default database: 'database'.
Seems that something went wrong.
ROUND 2!
After this episode I started to wonder whether this can be done without a long service break.
The master database has already been configured, and it replicates fine to another slave.
So I did some googling, and this is what I came up with.
Taking a read lock on the tables:
FLUSH TABLES WITH READ LOCK;
Taking dump:
mysqldump --skip-lock-tables --single-transaction --flush-logs --master-data=2 -A > dbdump.sql
Packaging and moving:
gzip (pigz) the dbdump and move it to the slave server; after that, find the MASTER_LOG_FILE and MASTER_LOG_POS in the dump.
After that I don't think I want to import dbdump.sql from the command line, because it's over 100GB and will take time, so I think SOURCE would be an OK option for it.
On SLAVE server:
CREATE DATABASE dbdump;
USE dbdump;
SOURCE dbdump.sql;
CHANGE MASTER TO MASTER_HOST='x.x.x.x', MASTER_USER='replication', MASTER_PASSWORD='slavepass',
MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=X;
START SLAVE;
SHOW SLAVE STATUS \G
I haven't tested this yet, am I on to something?
--bp
Realize that issuing a SOURCE command is the same as running an import of the dumped SQL from shell. Either way, it is going to take a long time. Outside of that, you have the steps correct - flush table with read lock on master, make a database dump of master, make sure you note master binlog coordinates, import dump on slave, set binlog coordinates, start replication. Do not work with the raw binaries unless you REALLY know what you are doing (especially for INNODB tables).
If you have a number of large tables (i.e. not just one big one), you could consider parallelizing your dumps/imports by table (or groups of tables) to speed things along. There are actually tools out there to help you do this.
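For instance, a hedged sketch of per-table parallel dumps and imports from the shell (database and table names are illustrative):

# on the master, dump tables in parallel
mysqldump mydb table1 > table1.sql &
mysqldump mydb table2 > table2.sql &
wait
# on the slave, import them in parallel the same way
mysql mydb < table1.sql &
mysql mydb < table2.sql &
wait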
You CAN work with the raw binaries, but it is not for the faint of heart. In the past, I have used rsync to differentially update the raw binaries between master and slave (you still must use flush table with read lock and gather master binlog coordinates before doing this). For MyISAM tables this works pretty well actually. For InnoDB, it can be more tricky. I prefer to use the option to set InnoDB to write index and data files per table. You would need to rsync the ibdata* files. You would delete ib_logfile* files from slave.
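The per-table file option referred to above is innodb_file_per_table; a minimal my.cnf sketch:

[mysqld]
# only affects tables created (or rebuilt) after it is set
innodb_file_per_table = 1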
This whole thing is a bit of a high wire act, so I would not resort to doing this unless you have no other viable options. Absolutely take a traditional SQL dump before even thinking about attempting a binary file sync, and each time until you are VERY comfortable that you actually know what you are doing.
I am new to MySQL, and after a long search I was able to configure master-slave ROW-based replication. I thought it would be safe and I would not have to recheck it again and again.
But today when I ran SHOW SLAVE STATUS; on the slave, I found the following:
Could not execute Write_rows event on table mydatabasename.atable; Duplicate entry '174465' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000004, end_log_pos 60121977
Can someone tell me how this can even happen when the master has no such error and the schema on both servers is the same? And how do I fix it to make this work again and prevent such a thing in the future?
Please also let me know what other unexpected things I should expect besides this.
It would never happen on the master. Why?
The SQL statements are replicated from the master:
if the record already exists on the master, MySQL rejects it on the master;
but on the slave the statement fails, and the replication position does not advance to the next statement (it just halts).
Reason?
The insert query for that record was written directly to the slave, bypassing replication from the master.
How to fix?
Skip the error on the slave, like:
SET GLOBAL sql_slave_skip_counter = N;
details - http://dev.mysql.com/doc/refman/5.0/en/set-global-sql-slave-skip-counter.html
Or delete the duplicate record on the slave and resume the slave again (let replication do the insertion).
The worst-case scenario requires you to redo the setup again to ensure data integrity on the slave.
How to prevent?
Check the application level and make sure nothing writes directly to the slave.
This includes how you connect to MySQL at the command prompt.
Split the MySQL users that can write and read:
your application should use the read user (against master and slave) when it does not need to write,
and use the write user (master only) for actions that require writing to the database.
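A minimal sketch of such split users (user names, host patterns, and passwords are hypothetical):

-- read-only user, safe to point at master or slave
CREATE USER 'app_read'@'%' IDENTIFIED BY 'readpass';
GRANT SELECT ON mydatabasename.* TO 'app_read'@'%';
-- write user, to be used against the master only
CREATE USER 'app_write'@'%' IDENTIFIED BY 'writepass';
GRANT SELECT, INSERT, UPDATE, DELETE ON mydatabasename.* TO 'app_write'@'%';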
The skip counter is not always a viable solution: you are skipping records, and that may affect subsequent records.
Here are the complete details on why sql_slave_skip_counter is bad:
http://www.mysqlperformanceblog.com/2013/07/23/another-reason-why-sql_slave_skip_counter-is-bad-in-mysql/
You can delete the duplicate row and any rows with a bigger ID in the slave DB:
DELETE FROM mydatabasename.atable WHERE ID>=174465;
then
START SLAVE;
I'm running a master-slave MySQL binary log replication system (phew!) that, for some data, is not in sync (meaning the master holds more data than the slave). But the slave stops very frequently on the slightest MySQL error. Can this be disabled? (Perhaps a my.cnf setting for the replicating slave like ignore-replicating-errors or some such ;) )
This is what happens: every now and then, when the slave tries to replicate an item that does not exist, the slave just dies. A quick check of SHOW SLAVE STATUS\G gives:
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '15218' for key 1' on query. Default database: 'db'. Query: 'INSERT INTO db.table ( FIELDS ) VALUES ( VALUES )'
which I promptly fix (once I realize that the slave has been stopped) by doing the following:
STOP SLAVE;
RESET SLAVE;
START SLAVE;
... lately this has been getting kind of tiresome, and before I spit out some sort of PHP which does this for me, I was wondering if there's some my.cnf entry which will not kill the slave on the first error.
Cheers,
/mp
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
You can ignore only the current error and continue the replication process.
Yes, with --slave-skip-errors=xxx in my.cnf, where xxx is 'all' or a comma-separated list of error codes.
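For example, a minimal my.cnf sketch skipping duplicate-key (1062) and row-not-found (1032) errors (the codes listed are illustrative):

[mysqld]
slave-skip-errors = 1062,1032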
First, do you really want to ignore errors? If you get an error, it is likely that the data is not in sync any more. Perhaps what you want is to drop the slave database and restart the sync process when you get an error.
Second, I think the error you are getting is not when you replicate an item that does not exist (what would that mean anyway?) - it looks like you are replicating an item that already exists in the slave database.
I suspect the problem mainly arises from not starting from a clean data copy. It seems that the master was copied to the slave; then replication was turned off (or failed); and then it was started up again without giving the slave the chance to catch up on what it had missed.
If you ever have a time when the master can be closed for write access long enough to clone the database and import it into the slave, this might make the problems go away.
Modern mysqldump commands have a couple of options to help with setting up consistent replication. Check out --master-data, which will put the binary log file and position in the dump and automatically set them when loaded into the slave. Also, --single-transaction will do the dump inside a transaction so that no write lock is needed to get a consistent dump.
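For example, a hedged one-liner for a consistent dump of InnoDB tables (the file name is illustrative):

# --master-data=2 records the coordinates as a comment; use =1 to have them applied on load
mysqldump --single-transaction --master-data=2 --all-databases > master_dump.sql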
If the slave isn't used for any writes other than replication, the authors of High Performance MySQL recommend setting read_only on the slave server to prevent users from mistakenly changing data on the slave, as this will also create the same errors you experienced.
I think you are doing replication without syncing the databases first. Sync the databases, then try replication. Also, the servers are generating the same unique IDs; try setting an auto-increment offset.
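A minimal sketch of staggered auto-increment settings for two servers (values illustrative; normally set in my.cnf so they survive restarts):

-- on server 1
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;
-- on server 2
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 2;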