Making new MySQL replication - mysql

I need to set up a working MySQL replication from master to slave (I have tried it once already).
The database is quite large (over 100GB) and it will take some hours to get it ready for a new slave.
The database has both MyISAM and InnoDB tables, and both are being written to.
I think my only choice is to copy the data files from the master to the new slave? (Or make a database dump, which I refer to later in ROUND 2.)
Before that I have to shut down all the services which use the database, and
then take a write lock on the tables - or should I shut down the whole database?
After syncing the data directory to the new replication server I started it up, and the database with its tables was there. The first error I got rid of by changing the binlog to 007324 and the position to 0.
Error 1:
140213 4:52:07 [ERROR] Got fatal error 1236: 'Could not find first log file name in binary log index file' from master when reading data from binary log
140213 4:52:07 [Note] Slave I/O thread exiting, read up to log 'bin-log.007323', position 46774422
After that I got new problems from the database, and this error came up for every table.
Error 2:
Error 'Incorrect information in file: './database/table.frm'' on query. Default database: 'database'.
Seems that something went wrong.
ROUND 2!
After this episode I started to wonder whether this can be done without a long service break.
The master database has already been configured and it works fine with another slave.
So I did some googling and this is what I came up with.
Making read lock to tables:
FLUSH TABLES WITH READ LOCK;
Taking dump:
mysqldump --skip-lock-tables --single-transaction --flush-logs --master-data=2 -A > dbdump.sql
Packaging and moving:
gzip (pigz) the dbdump, move it to the slave server, and after that find the MASTER_LOG_FILE and MASTER_LOG_POS in the dump.
After that I don't think that I want to import the dbdump.sql because it's over 100GB and
will take time. So I think SOURCE would be an OK option for it.
On SLAVE server:
CREATE DATABASE dbdump;
USE dbdump;
SOURCE dbdump.sql;
CHANGE MASTER TO MASTER_HOST='x.x.x.x',MASTER_USER='replication',MASTER_PASSWORD='slavepass',
MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=X;
START SLAVE;
SHOW SLAVE STATUS \G
I haven't tested this yet, am I on to something?
--bp

Realize that issuing a SOURCE command is the same as running an import of the dumped SQL from the shell. Either way, it is going to take a long time. Outside of that, you have the steps correct: FLUSH TABLES WITH READ LOCK on the master, make a database dump of the master, make sure you note the master binlog coordinates, import the dump on the slave, set the binlog coordinates, start replication. Do not work with the raw binaries unless you REALLY know what you are doing (especially for InnoDB tables).
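For reference, a minimal sketch of that workflow from the shell, using the dbdump.sql name from the question (the exact coordinates are whatever --master-data=2 recorded in the dump):

# The coordinates are written near the top of the dump as a commented-out
# CHANGE MASTER TO statement:
head -n 50 dbdump.sql | grep "CHANGE MASTER TO"

# Importing from the shell is equivalent to SOURCE and takes just as long:
mysql -u root -p < dbdump.sql
# or, if the dump was gzipped/pigz'ed for the transfer:
pigz -dc dbdump.sql.gz | mysql -u root -p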
If you have a number of large tables (i.e. not just one big one), you could consider parallelizing your dumps/imports by table (or groups of tables) to speed things along. There are actually tools out there to help you do this.
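As a rough illustration (not one of those dedicated tools, just plain mysqldump and xargs; the mydb schema name and the parallelism of 4 are made up for the example):

# Hypothetical sketch: dump every table of mydb, four at a time.
# Note this does NOT give one consistent binlog position across all tables;
# dedicated tools (mydumper, for instance) handle that part for you.
mysql -N -e "SHOW TABLES FROM mydb" \
  | xargs -P4 -I{} sh -c 'mysqldump --single-transaction mydb {} | gzip > /backup/{}.sql.gz'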
You CAN work with the raw binaries, but it is not for the faint of heart. In the past, I have used rsync to differentially update the raw binaries between master and slave (you still must use FLUSH TABLES WITH READ LOCK and gather the master binlog coordinates before doing this). For MyISAM tables this actually works pretty well. For InnoDB, it can be trickier. I prefer to use the option that makes InnoDB write index and data files per table (innodb_file_per_table). You would still need to rsync the ibdata* files, and you would delete the ib_logfile* files on the slave.
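A sketch of what that rsync dance can look like, assuming a default /var/lib/mysql datadir and a host named slave (adjust to your layout):

# Warm pass while mysqld is still serving traffic, to move the bulk of the data:
rsync -av /var/lib/mysql/ slave:/var/lib/mysql/

# Then, in an interactive mysql session on the master (the lock is released when
# the session closes, so keep it open):
#   FLUSH TABLES WITH READ LOCK;
#   SHOW MASTER STATUS;        -- note File and Position for CHANGE MASTER TO later

# Final differential pass while that lock is held, then UNLOCK TABLES in the same
# session. The ibdata* files must come across too, and ib_logfile* should be
# removed on the slave before starting mysqld there:
rsync -av --delete /var/lib/mysql/ slave:/var/lib/mysql/
ssh slave 'rm -f /var/lib/mysql/ib_logfile*'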
This whole thing is a bit of a high wire act, so I would not resort to doing this unless you have no other viable options. Absolutely take a traditional SQL dump before even thinking about attempting a binary file sync, and keep doing so every time until you are VERY comfortable that you actually know what you are doing.

Related

why issue 'reset master' when resyncing a mysql slave

I've recently needed to perform some DB resyncs and have a question regarding (what appears to be) the common practice of issuing a RESET MASTER before dumping the DB on the master.
Just about all of the documentation I have found surrounding this process has a RESET MASTER prior to dumping the databases from the master.
Example: https://stackoverflow.com/a/3229580/1570785
In a production environment, however, this seems counter-productive, mainly because the RESET MASTER command will clear the existing binary logs. So if something goes wrong with your master while replication is broken, you end up with an inconsistent/corrupt master and an out-of-sync slave.
Given that this process needs to be performed in the first place (i.e. something has gone wrong with MySQL replication), it seems unwise to wipe out binlogs (that could be used to recover from a COMPLETE disaster) just because the slave needs to be resynced.
What I am really asking is: what am I missing - is there a valid reason to perform a RESET MASTER before taking a dump from the master?
This is not necessary.
If you use mysqldump to create the dump, add these options:
--single-transaction - so that InnoDB tables are not locked and you get a consistent snapshot.
--master-data - to record the master's binary log position that the slave should start replicating from.
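For example, a sketch (with --master-data=2 the coordinates are written near the top of the dump as a comment rather than as an active statement; --single-transaction only gives a consistent snapshot for InnoDB tables):

mysqldump --single-transaction --master-data=2 -A > dump.sql
grep -m1 "CHANGE MASTER TO" dump.sql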

mysql replication insert only

I have several slave DBs replicated from the same master DB; however, for one of the slaves, I would like to keep it as a backup DB, which will never have rows updated or deleted.
Basically the purpose is to have a backup DB with all rows stored by using replication (mysqldump is waaay too slow for the backup): no UPDATE/DELETE queries get replicated, INSERT queries only. I know there will no doubt be some conflicts going on, but I still wonder if there are any filtering options on statements/queries on the slave end, or any other solutions.
You should never run a production database without a working backup scheme in place - at least as long as you value your data. If you fear that a wrong SQL statement can ruin your database, then you may try point-in-time recovery.
If you already use replication, your master server will log all write/update operations to its binlog, which it will send to the slave servers for replication. You can, for example, do nightly backups of your complete database. If you destroy your database in the morning, you can import the backup from the night and reapply the statements from the binlog from after the backup up to just before the statement that killed your database.
You can then skip that statement and apply the statements that came afterwards. This can still cause consistency issues, as the statements after the skipped one see different data in the database than they did when they were originally executed.
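A rough sketch of that point-in-time step with mysqlbinlog; the binlog file name and the <pos_...> values are placeholders you would look up yourself (e.g. with mysqlbinlog or SHOW BINLOG EVENTS):

# Replay from just after the nightly backup up to the offending statement...
mysqlbinlog --start-position=<pos_after_backup> --stop-position=<pos_of_bad_statement> \
    mysql-bin.000042 | mysql -u root -p
# ...then skip it and continue from the position right after it (consistency caveat above):
mysqlbinlog --start-position=<pos_after_bad_statement> mysql-bin.000042 | mysql -u root -p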
I had a similar problem. I know it's an old thread, but it may help others:
link: mysql replication works only if I choose database by USE database

MySql - create replication with minimal downtime

I have a ~80GB MySQL DB.
I want to set up replication on that DB, keeping the current DB as master and setting up a slave for it.
My main question is how I can move the data (all 80GB of it) from the master to the new slave with as little downtime as possible, preferably none.
My initial thought was to stop the DB (after taking the log position), copy the files from the MySQL data dir, and then restart the server, but just copying the files would take ~2 hours.
Any thoughts?
On July 8, 2011 I addressed a similar question. I wrote scripts that would zap binary logs and start performing an rsync.
On June 16, 2011, I wrote a post contrasting doing an rsync versus using XtraBackup.
On May 23, 2011, I discussed what considerations to make when doing this kind of backup.
Rather than reinvent the wheel and rewrite the information I already put in those posts, I simply provided the links to my own posts that address this question.
Please read them carefully.
Give it a Try !!!
CAVEAT
The only downtime in my rsync algorithm comes at the end: after you have performed multiple rsyncs as specified, you shut down mysql, perform one more rsync, and then start mysql back up.
I would like to clarify the reason for the shutdown:
When you shut down mysql:
All open MyISAM tables are closed. Each MyISAM table has a header that tracks how many file handles are open against it; that count must be zero (0) for the table to be OK. Otherwise, a closed MyISAM table with a nonzero value in this header field is marked as crashed and in need of a table repair. Shutting down mysql cleans all of that up.
All InnoDB tables that have data pages or index pages marked dirty in the buffer pool need those pages flushed to disk. Performing a shutdown triggers a full flush of the buffer pool. Naturally, the bigger the pool and the higher the number of dirty pages, the longer the buffer pool flush time will be. To shorten this phase of mysqld's shutdown, run SET GLOBAL innodb_max_dirty_pages_pct = 0; before performing any of the rsyncs.
All transactions are completed (either committed or rolled back).
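If it helps, that dirty-page tweak is simply (run it well before the final shutdown, then watch the dirty-page count fall toward zero):

mysql -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';"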
I think you have some misunderstanding.
Before you start, you must enable the binary log on the master
restart mysql on the master
log in to the master
lock ALL tables from writes
record the master binary log position
copy the binary data from the master (DIRECTLY copy *.MYI, *.MYD, etc.; you can copy to another location on the master server)
after the copy is complete, remove the write lock
scp the data to the slave (time depends on the network distance)
set the relevant master information on the slave (binary log file and position, and remember to disable the binary log on the slave)
start the slave
After that, there will be a huge delay on the slave,
and the slave will try to catch up with the master automatically;
once it has caught up, your slave is ready!
So the downtime is only while you are locking the tables and copying the binary data to another location on your master server (a rough sketch of these steps follows below).
docs: http://dev.mysql.com/doc/refman/5.1/en/replication-howto.html
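For illustration only, a minimal sketch of the lock/copy/point-the-slave steps above, assuming a default /var/lib/mysql datadir; the hostnames, paths, replication user, and the <file>/<pos> values are placeholders you would fill in from SHOW MASTER STATUS:

# In an interactive mysql session on the master (keep it open - the lock ends with the session):
#   FLUSH TABLES WITH READ LOCK;
#   SHOW MASTER STATUS;            -- note File and Position

# In another shell on the master, copy the data files aside, then UNLOCK TABLES in that session:
cp -a /var/lib/mysql /var/lib/mysql-copy

# Ship the copy to the slave, then point the slave at the master:
rsync -av /var/lib/mysql-copy/ slave:/var/lib/mysql/
mysql -h slave -e "CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='replication',
  MASTER_PASSWORD='...', MASTER_LOG_FILE='<file>', MASTER_LOG_POS=<pos>; START SLAVE;"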
I've found the following tool to be of GREAT help and efficiency. The author currently works for Facebook and used to work for DeNA in Japan.
It's quite easy to set up and you will reach 4 9's HA. ;-)
MHA tool for MySQL replication high availability
I have to say though that MySQL cluster is better, lol ;-)

writes to mysql slave server by mistake

I have MySQL replication set up with one master and one slave. Due to a bug in the code, at some point entries started to get written on the slave server, and it was only detected a few days later.
Now I am thinking of how to switch it back correctly without any hassle or with minimal downtime; what would be the best way to do this? Let's consider only one table...
Solution 1
Simply start writing to the master from now on, after setting auto_increment to the slave's last id. I am wondering if it will be troublesome to keep the master and slave out of sync.
Solution 2
Clear all the data from the master, stop the app from making any more entries, refill the data using mysqldump, and then switch the app back on with the correct config.
STOP SLAVE;
-- load the dump
START SLAVE;
Will this stop the master from re-attempting to write the same data to the slave?
Any help appreciated. Any other solutions also welcomed.
Thanks
Sushil
I think you are on the right track with solution 2. Simply stopping the slave will not prevent the master from writing to its binary log. So when you start the slave again, it will just replicate all the SQL statements from the master.
However, you can use this to your advantage if you have included 'DROP TABLE' before each table creation. This means you would do the following:
1) Stop the app from making any more entries in the master table(s)
2) Dump the data from the slave (ensure that mysqldump includes 'DROP TABLE' before each table creation - it should, as that is a default option of mysqldump)
3) Run the dump against the master
4) Check the slave status using SHOW SLAVE STATUS\G. Once Seconds_Behind_Master reaches 0, you are good to switch the app back on (make sure it is writing to the master!!)
Step 3 will drop and recreate the tables on the master using the data from the slave. This drop and recreate will be replicated to the slave, so you should end up with the two in sync and a correct master-slave setup. A sketch of steps 2-4 follows.
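As a sketch of steps 2-4 (mydb, mytable and fix.sql are placeholder names):

# On the slave: dump the authoritative copy of the affected table(s).
# --add-drop-table is already on by default; spelled out here for clarity.
mysqldump --add-drop-table mydb mytable > fix.sql

# On the master: load it; the DROP/CREATE/INSERT statements replicate back to the slave.
mysql mydb < fix.sql

# On the slave: wait for Seconds_Behind_Master to reach 0 before turning the app back on.
mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master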
Good luck!
I think your best option is to reset the slave/master completely. If the data on the slave is correct, reload the data from it, then export a new dump from the master and import it on the slave, and finally execute a new "CHANGE MASTER TO..." command.
I would recommend setting the "read_only" global variable on the slave.
http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_read-only
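For example (read_only does not block the replication SQL thread or users with the SUPER privilege, which is exactly what you want on a slave):

# At runtime on the slave:
mysql -e "SET GLOBAL read_only = ON;"
# And in my.cnf on the slave so it survives a restart:
#   [mysqld]
#   read_only = 1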

mysqldump | mysql yields 'too many open files' error. Why?

I have a RHEL 5 system with a fresh new hard drive I just dedicated to the MySQL server. To get things started, I used "mysqldump --host otherhost -A | mysql", even though I noticed the manpage never explicitly recommends trying this (mysqldump into a file is a no-go - we're talking 500G of database).
This process fails at random intervals, complaining that too many files are open (at which point mysqld gets the relevant signal and dies, then respawns).
I tried upping the limit with sysctl and ulimit, but the problem persists. What do I do about it?
mysqldump by default performs a per-table lock of all involved tables. If you have many tables, that can exceed the number of file descriptors available to the mysql server process.
Try --skip-lock-tables, or if locking is imperative, --lock-all-tables.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
--lock-all-tables, -x
Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
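Applied to the pipe from the question, that would look something like this (pick --single-transaction if the tables are InnoDB, --lock-all-tables if you need consistency across MyISAM tables):

# Avoid the per-table LOCK TABLES (and the file-descriptor blow-up) with one of these:
mysqldump --host otherhost --single-transaction -A | mysql   # consistent snapshot for InnoDB
mysqldump --host otherhost --lock-all-tables -A | mysql      # one global read lock instead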
mysqldump has been reported to yield that error for larger databases (1, 2, 3). Explanation and workaround from MySQL Bugs:
[3 Feb 2007 22:00] Sergei Golubchik
This is not really a bug.
mysqldump by default has --lock-tables enabled, which means it tries to lock all tables to be dumped before starting the dump. And doing LOCK TABLES t1, t2, ... for really big number of tables will inevitably exhaust all available file descriptors, as LOCK needs all tables to be opened.
Workarounds: --skip-lock-tables will disable such a locking completely. Alternatively, --lock-all-tables will make mysqldump to use FLUSH TABLES WITH READ LOCK which locks all tables in all databases (without opening them). In this case mysqldump will automatically disable --lock-tables because it makes no sense when --lock-all-tables is used.
Edit: Please check Dave's workaround for InnoDB in the comment below.
If your database is that large, you've got a few issues:
1) You have to lock the tables to dump the data.
2) mysqldump will take a very, very long time and your tables will need to be locked during this time.
3) Importing the data on the new server will also take a long time.
Since your database is going to be essentially unusable while #1 and #2 are happening I would actually recommend stopping the database and using rsync to copy the files to the other server. It's faster than using mysqldump and much faster than importing because you don't have the added IO and CPU of generating indexes.
In production environments on Linux many people put MySQL data on an LVM partition. Then they stop the database, take an LVM snapshot, start the database, and copy off the state of the stopped database at their leisure.
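A rough outline of that LVM approach, assuming the datadir lives on a logical volume at /dev/vg0/mysql and the service is called mysqld as on RHEL (all names here are placeholders):

# Stop mysqld just long enough to take the snapshot (usually seconds), then restart it:
service mysqld stop
lvcreate --size 10G --snapshot --name mysql-snap /dev/vg0/mysql
service mysqld start

# Mount the snapshot and copy the frozen datadir off at your leisure, then drop it:
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -av /mnt/mysql-snap/ backuphost:/backups/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap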
I just restarted the MySQL server and then I could use the mysqldump command flawlessly.
Thought this might be a helpful tip here.