How does MySQL replication work? - mysql

I have a question about MySQL replication. I have very limited knowledge about databases, so please help me clarify this. My goal is to be able to do a deployment that avoids downtime.
Suppose I have a replicated DB (master and slave). Suppose I want to do a new release and I need to run a migration script. My plan is to stop replication and run the script on the slave. The migration script might, for example:
Run multiple queries, based on some business logic, to set new values for a column in a table.
Add a new column.
What would actually happen when I start replication again? The slave will catch up with any changes on the master. But how would the master get the changes that were applied to the slave? If I run the same migration script on the master, it won't be running against the same data set it ran against on the slave.
Would it make sense, once the slave has caught up with the master, to take a snapshot of it to use as the new slave, with the old slave becoming the master?
I hope this is clear. Thanks, any help is really appreciated.

You either have to set up cross-master (master-master) replication, so that the slave can catch up with the master and the master can pick up the modifications carried out on the slave, or accept some downtime and run the script on the master.
1- Change the master-slave replication to cross-master replication; this can be done without any downtime.
2- Stop the ex-slave from replicating from the master.
3- Run your migration script on the ex-slave.
4- Start the ex-slave replicating again (a rough sketch of these steps follows).
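Here is roughly what that might look like; the host name, replication user, and binlog coordinates below are made up, and it assumes binary logging is already enabled on the ex-slave (read the real coordinates from SHOW MASTER STATUS on it):
-- On the current master, make it also replicate from the ex-slave,
-- turning the pair into cross-master (master-master) replication.
CHANGE MASTER TO
  MASTER_HOST = 'ex-slave.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000042',
  MASTER_LOG_POS = 4;
START SLAVE;

-- On the ex-slave: pause its replication, run the migration, then resume.
-- The migration statements land in the ex-slave's binary log, so the
-- master picks them up through the reverse channel.
STOP SLAVE;
-- run your migration script here
START SLAVE;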
I recommend that you set up a testing environment using a tool like VMware and try it out. That's what I have done.
Here is a link that explains how to set it up:
http://onlamp.com/onlamp/2006/04/20/advanced-mysql-replication.html
I can't stress enough the importance of testing before applying the changes to a real environment, so test again and again until you think you're ready. When that happens, test one more time. And don't forget to make a backup, too.

Related

How to setup a slave replication of mysql database for development?

I've set up slave replication of a MySQL database. For development purposes I want to write something into the slave database, but that would break replication.
Since the database is huge, I don't want to restore the slave database from a MySQL dump file every time after I finish some development work.
My requirement:
All the changes in the slave database can be reverted by a simple command.
The replication keeps working.
One method is to use LVM filesystem snapshots. Before you begin testing:
Stop replication.
Take an LVM snapshot.
Do your tests. Replication is still off, but the data is up to date.
After you finish testing:
Stop mysqld.
Restore the snapshot. This reverts all files to the state they were at the moment you created the LVM snapshot above.
Start mysqld and start replication. It will need to catch up and apply all changes since you stopped replication before your testing. This will take a little while, depending on how many changes happened on your master database.
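The sequence above might look roughly like this; the volume name and size are made up, and the LVM and service commands run at the shell (shown here only as comments, see the tutorial linked below for details):
-- Before testing, on the slave:
STOP SLAVE;
-- shell: lvcreate --size 10G --snapshot --name mysql_snap /dev/vg0/mysql_data
-- ... do your test writes against the slave ...

-- After testing:
-- shell: stop mysqld, then merge the snapshot back into the origin volume,
--   e.g. lvconvert --merge /dev/vg0/mysql_snap
-- shell: start mysqld again, then resume replication:
START SLAVE;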
See https://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/ for a nice tutorial on using LVM snapshots.
This method only works if your development database instance is on Linux.
Insert the new records using a primary key that is not expected to be used by the master database (e.g. add a sufficiently large offset like 2^10 or negative numbers if allowed...).
In this way, the insertions coming from the master won't clash.
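A small sketch of the idea, with a made-up table name and offset (any value safely above the range the master will ever reach works; negative keys only work if the column is signed):
-- Development-only rows on the slave use a key range the master will not touch.
INSERT INTO orders (id, customer_id, total)
VALUES (1000000001, 42, 9.99);
-- Or push the slave table's own counter into that range for local inserts:
ALTER TABLE orders AUTO_INCREMENT = 1000000000;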

writes to mysql slave server by mistake

I have MySQL replication set up with one master and one slave. Due to a bug in the code, at some point entries started being written to the slave server, and this was only detected a few days later.
Now I am thinking about how to fix this correctly, without any hassle and with minimal downtime. What would be the best way to do it? Let's consider only one table...
Solution 1
Simply start writing to the master from now on, after setting auto_increment to the slave's last ID. I'm wondering if having the master and slave out of sync will be troublesome.
Solution 2
Clear all the data from the master, stop the app from making any more entries, refill the data using mysqldump, and then switch the app back on with the correct config.
STOP SLAVE;
-- load the dump
START SLAVE;
Will this stop the master from re-attempting to write the same data to the slave?
Any help appreciated. Any other solutions also welcomed.
Thanks
Sushil
I think you are on the correct track with solution 2. Simply stopping the slave will not prevent the master from writing to its binary log, so when you start the slave again it will just replicate all the SQL statements from the master.
However, you can use this to your advantage if the dump includes 'DROP TABLE' before each table creation. That means the procedure looks like the following:
1) Stop the app from making any more entries in the master table(s)
2) Dump the data from the slave (ensure that mysqldump includes 'DROP TABLE' before each table creation - it should, as that is a default option of mysqldump)
3) Run dump against master
4) Check slave status using SHOW SLAVE STATUS\G. Once Seconds_Behind_Master reaches 0 then you are good to switch on the app again (make sure it is writing to the master!!)
Step 3 will drop and recreate the tables on the master using the data from the slave. This drop and recreate will be replicated to the slave, so you should end up with the two in sync and a correct master-slave setup.
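The steps above might look roughly like this; the host and database names are made up, and the mysqldump/mysql invocations run at the shell (shown as comments):
-- Step 2, dump from the slave (DROP TABLE statements are included by default):
--   shell: mysqldump --add-drop-table mydb > slave_dump.sql
-- Step 3, load the dump into the master; it replicates back to the slave:
--   shell: mysql -h master.example.com mydb < slave_dump.sql
-- Step 4, on the slave, watch it catch up before re-enabling the app:
SHOW SLAVE STATUS\G
-- ... repeat until Seconds_Behind_Master is 0.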
Good luck!
I think your best option is to reset the slave/master completely. If the data on the slave is correct, reload the master from it, then export a new dump from the master, import it into the slave, and execute a new "CHANGE MASTER TO..." command.
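If you go the full-reset route, mysqldump's --master-data option is handy because it writes the matching CHANGE MASTER TO coordinates into the dump itself; a rough sketch (the database name is made up, and --single-transaction assumes InnoDB tables):
-- shell, on the master:
--   mysqldump --master-data=1 --single-transaction mydb > master_dump.sql
-- Then on the slave:
STOP SLAVE;
-- shell: mysql mydb < master_dump.sql
-- The dump already contains CHANGE MASTER TO MASTER_LOG_FILE/MASTER_LOG_POS,
-- so (assuming the master host and user are still configured) just:
START SLAVE;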
I would recommend setting the "read_only" global variable on the slave.
http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_read-only
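For example (a minimal sketch; the replication SQL thread is not affected by read_only, so the slave still applies changes from the master, while ordinary client writes are blocked - users with the SUPER privilege can still bypass it):
-- On the slave:
SET GLOBAL read_only = ON;
-- To make it permanent, also add read_only = 1 under [mysqld] in my.cnf.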

Strategies for copying data from a live MySQL database to a staging server

I have a live MySQL DB which is too large to regularly copy the live data to the staging server.
Is there a way of getting just the changes from the last week with the intention of running that script weekly? Would every table have to have an updated timestamp field added to it to achieve this?
I don't know how large "too large to regularly copy" is, but I use SQLyog to synchronize databases. It intelligently does insert/update/deletes for only the records that have changed. I recommend it highly.
One way of going about this would be to make the staging server a replication slave of the production server. However, if you don't want the staging machine to be constantly up to date with the production master, you can keep the slave mode turned off.
Then weekly, run a script that starts the slave for a few hours, allowing it to bring itself up to date with the master, and stop the slave again.
START SLAVE;
-- Wait a while
-- Trial and error to determine how long it takes to come into sync
STOP SLAVE;
This will leave it in a state consistent with the master as of that week. On the other hand, if you don't really need it as a weekly snapshot, you can just leave the slave running all the time so it stays in sync.

Mysql 4.x LOAD DATA FROM MASTER; slave

I have a scenario where there are multiple mysql 4.x servers. These databases were supposed to be replicating to another server. After checking things out on a slave it appears that this slave has not replicated any databases in some time.
Some of these databases are > 4G in size, and one is 43G (it resides on another server). Has anyone out there replicated databases without creating a snapshot to copy over to a slave? I cannot shut down the master server because of the downtime; it would probably take over an hour and 40 minutes to create a snapshot, so that is out of the question.
I was going to perform a LOAD DATA FROM MASTER on the slave to pull everything from scratch. Any idea how long this will take on databases ranging from 1-4G? The 43G database will be for another day. All of the tables on the master are MyISAM, so I don't think I will have a problem with the LOAD DATA FROM MASTER method.
What are the best methods on the slave to clean things up or reset things so I can just start from a clean slate?
Any suggestions?
Thanks in advance
You need a snapshot to start replication, and taking a snapshot requires the database to be locked at least read-only, so you have a consistent point to start from.
Downtime is a necessary thing, customers usually understand it as long as it doesn't happen too often.
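For the record, the usual way to get that consistent, read-locked starting point on the master looks something like this (the actual file copy or snapshot happens at the shell while the lock is held):
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;  -- note File and Position for the slave's CHANGE MASTER TO
-- shell: copy or snapshot the data directory while the lock is held
UNLOCK TABLES;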

setting up replication in mysql pros and cons

Basically I want to set up a replication server for a MySQL database. I am completely new to this concept and would appreciate any help pointing me in the right direction.
If the slave goes down, will it affect the master in any way?
Thanks.
No, it will not affect the master if the slave goes down. The Slaves connect to the master and request the changes.
If you intend on having 2 servers replicated, then you can use master-master replication. This means that either one of the database servers can go down without loss of data or access. It is extremely resilient to failure; however, you can get duplicate key errors on fast successive inserts, for example when using MySQL to handle sessions. This can be worked around programmatically, though.
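Besides handling it in application code, a common server-side way to avoid those duplicate key errors in a master-master pair is to interleave the auto-increment values, for example:
-- On master A (use offset 2 on master B):
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;
-- A then generates 1, 3, 5, ... and B generates 2, 4, 6, ...
-- Put the same settings in my.cnf so they survive a restart.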
The downside of the master-slave setup is that if the master fails, you have to manually promote another server to master, fix the failed master, and then bring it back into the group. Otherwise, failure of any or all slaves will not affect the master.