Does running mysqldump modify the binary logs?

I've read the answers to similar questions, but I don't think they answer my specific question, sorry if I am repeating here.
I am setting up replication with existing data between a master and a slave, both using MyISAM tables. I have a master database that gets written to during the day but not overnight (i.e., not now). As explained on the dev.mysql.com site, I first ran FLUSH TABLES WITH READ LOCK on the master and obtained the binary log position using SHOW MASTER STATUS.
In another session, I then ran mysqldump on the master in order to copy this data to the slave. I ran mysqldump with the --lock-all-tables option.
However, after running mysqldump, I checked the master status again and the binary log position had increased by about 30. It has not moved up since the mysqldump finished.
Is this increase due to the mysqldump? Or did the lock not take effect, meaning I need to re-dump the master data?
Again, apologies if I'm repeating a question! Thanks.

Mysqldump should not cause the binary log position to change.
You need to investigate why it changed. Look inside the binary logs to get an idea of what was written to them; use the mysqlbinlog command for this.
For example, if you recorded the initial position as 1234 in binlog.0000003, then execute:
mysqlbinlog --start-position=1234 binlog.0000003
This will show you the changes that were applied after that position in the binary log.
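If the master uses row-based logging, the raw events are not human-readable by default; as a sketch (reusing the file name and position above), the decode options make them easier to read:
mysqlbinlog --start-position=1234 --verbose --base64-output=DECODE-ROWS binlog.0000003
Whatever events appear after position 1234 are what moved the position.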

Related

Data changes during a database backup

I have a question regarding MySQL database backups:
1. At 8:00:00 AM, I back up the database using the mysqldump command. It sometimes takes 5 seconds to finish.
2. While the backup is in progress (at 8:00:01 AM), someone makes some changes to the database.
Will the backup contain the data changes from step 2?
I have googled but have not found an explanation yet. Please help!
Percona provides a free tool for this purpose called xtrabackup.
For InnoDB tables, MySQL uses log files to store DML commands so that commands can be rolled back, among other things.
You can back up your database (in a non-locking way), and after you have created the backup you can apply the logs, so that you end up with a backup reflecting the state of the database at the moment the backup finished. I don't know the commands off the top of my head, sorry; you'll have to have a look in the documentation.
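Roughly, it is a two-step backup-then-prepare flow; a minimal sketch using Percona's xtrabackup binary (the target directory is an example, and older releases use the innobackupex wrapper instead):
xtrabackup --backup --target-dir=/data/backups/base
xtrabackup --prepare --target-dir=/data/backups/base
The prepare step applies the copied log records, so the backup is consistent as of the moment the copy finished.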
It depends on your mysqldump command and which table the dump is working on at the time of the update:
If you used the --single-transaction option, then no, it will not contain the later change (for InnoDB tables, the dump is a consistent snapshot taken when the dump started).
If you did not use that option, then:
if the table being updated has not been dumped yet, then yes, it will contain the change.
If the table being updated has already been dumped, then no, it won't contain the change.
Chances are you don't want that change in your backup, because you would like your backup to be consistent to a point in time. Here is some more discussion about these kinds of things:
How to obtain a correct dump using mysqldump and single-transaction when DDL is used at the same time?
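For reference, a consistent point-in-time dump of InnoDB tables typically looks something like this (the database name is a placeholder):
mysqldump --single-transaction --routines --triggers mydb > mydb_backup.sql
With --single-transaction, the dump reflects the InnoDB data as of the moment the dump began, regardless of writes that happen while it runs.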

Making a new MySQL replication

I need to set up working MySQL replication from a master to a slave (I've already tried it once).
The database is quite large (over 100 GB) and it will take some hours to get it ready for the new slave.
The database has both MyISAM and InnoDB tables, and both are being written to.
I think my only choice is to copy the data files from the master to the new slave (or make a database dump, which I refer to later under ROUND 2).
Before that, do I have to shut down all the services that use the database and
take a write lock on the tables, or should I shut down the whole database?
After syncing the data directory to the new replication server, I started it up and the database with its tables was there. The first error I got rid of by changing the bin-log to 007324 and the position to 0.
Error 1:
140213 4:52:07 [ERROR] Got fatal error 1236: 'Could not find first log file name in binary log index file' from master when reading data from binary log
140213 4:52:07 [Note] Slave I/O thread exiting, read up to log 'bin-log.007323', position 46774422
After that I got new problems from the database, and this error came up for every table.
Error 2:
Error 'Incorrect information in file: './database/table.frm'' on query. Default database: 'database'.
Seems that something went wrong.
ROUND 2!
After this scene I started to wonder whether this can be done without a long service break.
The master database has already been configured and it works fine with another slave.
So I did some googling and this is what I came up with.
Making read lock to tables:
FLUSH TABLES WITH READ LOCK;
Taking dump:
mysqldump --skip-lock-tables --single-transaction --flush-logs --master-data=2 -A > dbdump.sql
Packaging and moving:
gzip (or pigz) the dbdump and move it to the slave server; after that, find the MASTER_LOG_FILE and MASTER_LOG_POS in the dump (see the sketch below).
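Since the dump is taken with --master-data=2, the coordinates should be in a commented-out CHANGE MASTER TO line near the top of the dump, so something like this ought to find them (assuming the dump is still gzipped):
zcat dbdump.sql.gz | grep -m 1 "CHANGE MASTER TO"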
After that, I don't think I want to import dbdump.sql from the shell because it's over 100 GB and
will take time, so I think SOURCE would be an OK option for it.
On SLAVE server:
CREATE DATABASE dbdump;
USE dbdump;
SOURCE dbdump.sql;
CHANGE MASTER TO MASTER_HOST='x.x.x.x',MASTER_USER='replication',MASTER_PASSWORD='slavepass',
MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=X;
start slave;
SHOW SLAVE STATUS \G
I haven't tested this yet, am I on to something?
--bp
Realize that issuing a SOURCE command is the same as running an import of the dumped SQL from the shell. Either way, it is going to take a long time. Outside of that, you have the steps correct: flush tables with read lock on the master, make a database dump of the master, make sure you note the master binlog coordinates, import the dump on the slave, set the binlog coordinates, start replication. Do not work with the raw binaries unless you REALLY know what you are doing (especially for InnoDB tables).
If you have a number of large tables (i.e. not just one big one), you could consider parallelizing your dumps/imports by table (or groups of tables) to speed things along. There are actually tools out there to help you do this.
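One commonly used pair of tools for this is mydumper/myloader; a rough sketch of a parallel dump and restore (flag names may vary between versions, and the paths are examples):
mydumper --threads=4 --outputdir=/backups/dump
myloader --threads=4 --directory=/backups/dump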
You CAN work with the raw binaries, but it is not for the faint of heart. In the past, I have used rsync to differentially update the raw binaries between master and slave (you still must use FLUSH TABLES WITH READ LOCK and gather the master binlog coordinates before doing this). For MyISAM tables this actually works pretty well. For InnoDB, it can be more tricky. I prefer to use the option that makes InnoDB write index and data files per table (innodb_file_per_table). You would still need to rsync the ibdata* files, and you would delete the ib_logfile* files from the slave.
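As a very rough sketch of that differential rsync step (paths and host are placeholders; the final pass must happen while the master is either shut down or holding FLUSH TABLES WITH READ LOCK):
rsync -av /var/lib/mysql/ slave-host:/var/lib/mysql/
Re-running the same command transfers only what changed since the previous pass, which is what keeps the locked window short.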
This whole thing is a bit of a high wire act, so I would not resort to doing this unless you have no other viable options. Absolutely take a traditional SQL dump before even thinking about attempting a binary file sync, and each time until you are VERY comfortable that you actually know what you are doing.

why issue 'reset master' when resyncing a mysql slave

I've recently needed to perform some DB resyncs and have a question regarding (what appears to be) the common practice of issuing a RESET MASTER before dumping the DB on the master.
Just about all of the documentation I have found surrounding this process includes a RESET MASTER prior to dumping the databases from the master.
example: https://stackoverflow.com/a/3229580/1570785
In a production environment, however, this seems counter-productive, mainly because the RESET MASTER command will clear the existing binary logs. So if something goes wrong with your master while replication is broken, you end up with an inconsistent/corrupt master and an out-of-sync slave.
Given that this process needs to be performed in the first place (i.e. something has gone wrong with MySQL replication), it seems unwise to wipe out binlogs (which could be used to recover from a COMPLETE disaster) just because the slave needs to be resynced.
What I am really asking is: what am I missing? Is there a valid reason to perform a RESET MASTER before taking a dump from the master?
This is not necessary.
If you use mysqldump to create the dump, add these options (combined in the example below):
--single-transaction - to avoid locking InnoDB tables and to create a consistent snapshot.
--master-data - to record the master's binary log position that the slave should start replicating from.
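Putting those together, the command typically looks something like this (a sketch; --master-data=2 writes the coordinates as a comment rather than an active statement):
mysqldump --single-transaction --master-data=2 --all-databases > dump.sql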

MySQL - create replication with minimal downtime

I have an ~80 GB MySQL DB.
I want to set up replication on that DB, keeping the current DB as the master and setting up a slave for it.
My main question is how I can move the data (all 80 GB of it) from the master to the new slave with as little downtime as possible, preferably none.
My initial thought was to stop the DB (after taking the log position), then copy the files from the MySQL data dir, and then restart the server, but just copying the files would take ~2 hours.
Any thoughts?
On July 8, 2011, I addressed a similar question. I wrote scripts that would zap binary logs and start performing an rsync.
On June 16, 2011, I wrote a post contrasting doing an rsync versus using XtraBackup.
On May 23, 2011, I discussed what considerations to make when doing this kind of backup.
Rather than reinvent the wheel and rewrite the information I already put in those posts, I have simply provided the links to my own posts that address this question.
Please read them carefully.
Give it a Try !!!
CAVEAT
The only downtime in my rsync algorithm is when, after you have performed multiple rsyncs as specified, you shut down mysql, perform one more rsync, and then start up mysql.
I would like to clarify the reason for the shutdown:
When you shut down mysql:
All open MyISAM tables are closed. There is a header field in each MyISAM table that records how many file handles are open to it; that count must be zero (0) for the table to be OK. Otherwise, a closed MyISAM table with a nonzero value in this header field is marked as crashed and in need of a table repair. Shutting down mysql cleans all of that up.
All InnoDB data and index pages in the Buffer Pool that are marked dirty need to be flushed to disk. Performing a shutdown triggers a full flush of the Buffer Pool. Naturally, the bigger the pool and the higher the number of dirty pages, the longer the Buffer Pool flush will take. To shorten this phase of mysqld's shutdown, run SET GLOBAL innodb_max_dirty_pages_pct = 0; before performing any of the rsyncs.
All transactions are completed (either committed or rolled back).
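To watch that flush drain before the final shutdown, something like this works (a standard InnoDB status counter; the dirty-page count should fall toward zero once the variable above is set):
SET GLOBAL innodb_max_dirty_pages_pct = 0;
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';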
I think you have some misunderstanding.
Before you start, you must enable the binary log on the master
restart mysql on the master
log in to the master
lock ALL tables against writes
record the master binary log position (see the sketch after this list)
copy the binary data from the master (DIRECTLY copy *.MYI, *.MYD, etc.; you can copy to another location on the master server)
after the copy is completed, remove the write lock
scp the data to the slave (time depends on the network distance)
set up the relevant master information on the slave (binary log position, and remember to disable the binary log)
start the slave
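As a sketch of the lock-and-record steps above (run these in one session and keep that session open until the copy is finished, because disconnecting releases the lock):
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
-- copy the *.MYI, *.MYD, etc. files now, then:
UNLOCK TABLES;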
After that, the slave will initially have a huge delay, but it will try to catch up with the master automatically; once it has caught up, your slave is ready!
So the downtime is only while you are locking the tables and copying the binary data to another location on your master server.
docs:- http://dev.mysql.com/doc/refman/5.1/en/replication-howto.html
I've found the following tool to be of GREAT help and efficiency. The author currently works for Facebook and used to work for DeNA in Japan.
It's quite easy to set up and you will reach four nines of HA. ;-)
MHA tool for MySQL replication high availability
I have to say though that MySQL cluster is better, lol ;-)

mysqldump | mysql yields 'too many open files' error. Why?

I have a RHEL 5 system with a fresh new hard drive I just dedicated to the MySQL server. To get things started, I used "mysqldump --host otherhost -A | mysql", even though I noticed the manpage never explicitly recommends trying this (mysqldump into a file is a no-go; we're talking 500 GB of database).
This process fails at random intervals, complaining that too many files are open (at which point mysqld gets the relevant signal, and dies and respawns).
I tried raising the limit with sysctl and ulimit, but the problem persists. What do I do about it?
mysqldump by default performs a per-table lock of all involved tables. If you have many tables, that can exceed the number of file descriptors available to the mysql server process.
Try --skip-lock-tables or, if locking is imperative, --lock-all-tables.
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html:
--lock-all-tables, -x
Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
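Applied to the pipeline from the question, that would look something like this (host name taken from the question):
mysqldump --host otherhost --lock-all-tables -A | mysql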
mysqldump has been reported to yield that error for larger databases (1, 2, 3). Explanation and workaround from MySQL Bugs:
[3 Feb 2007 22:00] Sergei Golubchik
This is not really a bug.
mysqldump by default has --lock-tables enabled, which means it tries to lock all the tables to be dumped before starting the dump. And doing LOCK TABLES t1, t2, ... for a really big number of tables will inevitably exhaust all available file descriptors, as LOCK needs all the tables to be opened.
Workarounds: --skip-lock-tables will disable such locking completely. Alternatively, --lock-all-tables will make mysqldump use FLUSH TABLES WITH READ LOCK, which locks all tables in all databases (without opening them). In this case mysqldump will automatically disable --lock-tables because it makes no sense when --lock-all-tables is used.
Edit: Please check Dave's workaround for InnoDB in the comment below.
If your database is that large you've got a few issues.
You have to lock the tables to dump the data.
mysqldump will take a very, very long time, and your tables will need to be locked during this time.
Importing the data on the new server will also take a long time.
Since your database is going to be essentially unusable while #1 and #2 are happening, I would actually recommend stopping the database and using rsync to copy the files to the other server. It's faster than using mysqldump and much faster than importing, because you don't have the added IO and CPU of generating indexes.
In production environments on Linux, many people put MySQL data on an LVM partition. Then they stop the database, take an LVM snapshot, start the database, and copy off the state of the stopped database at their leisure.
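A rough sketch of that snapshot approach (service name, volume group, and size are placeholders):
service mysql stop
lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql-data
service mysql start
mkdir -p /mnt/mysql-snap
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ otherhost:/var/lib/mysql/
The database is only down for the stop/snapshot/start window; the actual copy runs against the snapshot afterwards.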
I just restarted the MySQL server and then I could use the mysqldump command flawlessly.
Thought this might be a helpful tip here.