I have a Master-Slave setup with MySQL v5.1.39 running ~10 databases on a 12-core Linux machine. I had to move the bin-log files to a separate disk for performance reasons, so I followed these steps:
Stop everything that's using the db
Stop Slave
Stop master
Change the paths in my.cnf from /mysql/log/* to /mysql/newlog/* on master and slave (see the my.cnf sketch just after these steps)
Copy /mysql/log/* to /mysql/newlog/. on master and slave
Start Slave
All Ok!
Start Master
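For reference, the path change in that step amounts to something like this in my.cnf (a sketch; the base name bin-log is taken from the errors below, and the option names are the MySQL 5.1 ones):
[mysqld]
log-bin = /mysql/newlog/bin-log
log-bin-index = /mysql/newlog/bin-log.index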
First problem! On the slave:
150113 12:21:22 [ERROR] Got fatal error 1236: 'Could not find first log file name in binary log index file' from master when reading data from binary log
150113 12:21:22 [Note] Slave I/O thread exiting, read up to log 'bin-log.005523', position 716864371
A quick Google search didn't resolve anything, and since downtime is an issue, I stopped the Master, changed the configuration back, and restarted. Now the second "problem"!
...
150113 13:02:22 InnoDB: Error: page 182380 log sequence number 3407 300161079
InnoDB: is in the future! Current system log sequence number 3407 299353326.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: for more information.
...
I put "problem" in quotes because everything works fine. Replication to the slave restarted and worked. I started the applications and those work fine. But when I start MySQL on the Master, I get the errors above, about 50 of them with different page and sequence numbers.
How does moving files affect page and sequence numbers, and where do they come from? How big is my problem? Everything seems to work fine.
Please ask if you need any more information, and thanks for your help.
The first problem was caused by the file /mysql/log/bin-log.index. I forgot to change the contents of this file to point to the new directory of the log files:
/mysql/log/bin-log.000028 -> /mysql/newlog/bin-log.000028
/mysql/log/bin-log.000029 -> /mysql/newlog/bin-log.000029
/mysql/log/bin-log.000030 -> /mysql/newlog/bin-log.000030
/mysql/log/bin-log.000031 -> /mysql/newlog/bin-log.000031
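If you hit the same thing, something like this would rewrite the entries in place (a sketch; it assumes the index file has already been copied to the new directory, so adjust the paths to your layout):
sed -i 's|^/mysql/log/|/mysql/newlog/|' /mysql/newlog/bin-log.index
MySQL reads this index file when it starts, so it has to be fixed before the master comes back up.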
The second problem was caused by the timestamps of some files. I should have preserved the timestamps with cp -p log/* newlog/. or rsync -avrx log/* newlog/.
Related
I'm trying to clone raw data from all databases on a MySQL instance in Live to a test environment. The network guys have told me the data has been synced and copied across, but I can't start the MySQL instance in the test environment. I'm using the InnoDB engine, and I can see the ibdata1 file, mysql-bin files and ib_logfiles copied over along with the relevant db folders.
The error I'm getting in the error log looks like the following:
130911 13:53:08 InnoDB: Error: table <table-name>
InnoDB: in InnoDB data dictionary has tablespace id <id>,
InnoDB: but tablespace with that id or name does not exist. Have
InnoDB: you deleted or moved .ibd files?
The cloning process doesn't stop the Live MySQL instance, and I'm wondering if this is the problem. I don't want to use mysqldump or another backup tool. I just want to copy the raw data across. Thanks for any advice.
You can't hot-copy these files and expect them to magically work.
You can use the innobackupex tool to create a stable snapshot. This will take care of adjusting the files as necessary to be consistent and complete.
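A rough outline with innobackupex (from Percona XtraBackup); the backup directory and its timestamped subdirectory are placeholders:
# on the live server: take a consistent snapshot while MySQL keeps running
innobackupex /backups/
# prepare the snapshot so the data files and InnoDB logs agree with each other
innobackupex --apply-log /backups/TIMESTAMP/
# on the test server: copy the prepared files into the datadir and fix ownership
innobackupex --copy-back /backups/TIMESTAMP/
chown -R mysql:mysql /var/lib/mysql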
I am running MySQL 5.1 on Ubuntu 10.04, and until recently, I used the default data dir location. Some other parts of the config file were tuned for performance, but the paths stayed default.
Recently, I started running out of space and decided to add another hard disk, mount it as /mysql and use it solely for MySQL data. So, I changed the paths, copied the old data dir into the new data dir and thought that would be the end of it.
Unfortunately, it wasn't. It later turned out that AppArmor was the issue, even though I updated the MySQL profile in AppArmor to reflect the new path(s). After some messing about and disabling AppArmor, the server would work, and I was able to import the big database that is the original reason I needed more space.
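For reference, updating the stock Ubuntu profile for a new path normally means adding lines like these and reloading it (the /mysql paths here are illustrative):
# in /etc/apparmor.d/usr.sbin.mysqld, alongside the existing /var/lib/mysql rules
  /mysql/ r,
  /mysql/** rwk,
# then reload the profile:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld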
Now, that was yesterday - the whole 200GB database was imported, keys were sorted and everything seemed fine until I tried to start the server today. Here's the error that I see in the log:
120913 13:53:38 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
InnoDB: File name /home/{my_username}/mysql/data/ibdata1
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.
Here are a few strange things about that:
a) I'm sudo-ed in as root, and I'm using the 'service mysql start' command to start it
b) There's no mention of /home/{my_username}... path ANYWHERE in any of the configs.
I couldn't find any info or bug reports regarding this type of problem. I don't even know what I would search for, since the problem can't really be explained in less than two paragraphs.
Further information: Manually setting innodb_data_home_dir eliminates the earlier problem; however, now I get this instead:
120913 14:08:06 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
InnoDB: File name /home/poplar/mysql/innodb-logs/ib_logfile0
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.
Now, there's no "poplar" user on this box, and I haven't got the faintest idea why it would want to put the log file there.
Well, it turned out that setting innodb_log_group_home_dir to the new MySQL data dir (where my log was before, and where it should default to anyway) did the trick.
The server now starts properly and all the data seems to be there.
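For reference, a minimal sketch of the relevant [mysqld] settings, assuming the new mount is /mysql and the data dir is /mysql/data (directory names are illustrative):
[mysqld]
datadir = /mysql/data
innodb_data_home_dir = /mysql/data
innodb_log_group_home_dir = /mysql/data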
I still don't know why it thought the 'poplar' user's home directory was a good place for the log files, but it could be some leftover (mis)configuration from AppArmor that wasn't cleanly reset when I uninstalled it.
I am moving a MySQL slave from one set of HDs to another. The configuration of the machine doesn't allow me to have both the old and new hard drives in it at the same time, so I rsync'ed the data directory to another machine.
When the new hard drives came online, I rsync'ed the data dir back. This worked fine.
However, I cannot start replication. This is the error I get.
120314 4:23:07 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
120314 4:23:07 [ERROR] Failed to open the relay log '/var/lib/mysqllogs/mysqld-relay-bin.000273' (relay_log_pos 677043943)
120314 4:23:07 [ERROR] Could not find target log during relay log initialization
120314 4:23:07 [ERROR] Failed to initialize the master info structure
I found this comment:
https://serverfault.com/questions/61471/moving-a-mysql-slave-to-a-new-host-failed-to-open-the-relay-log
If it is just complaining about the relay logs, in most cases, they
are disposable if the master still has the binary logs around. You can
just run CHANGE MASTER TO on the slave and it will flush the existing
relay logs and start anew. You don't need to make a new fresh copy.
This seems to suggest that I do not need these log files.
The host name is not changing.
My Questions:
Do I need these log files?
If not, what do I need to do to get replication started? Will it remember where it left off?
If I do need these log files, is there anything else I'm forgetting?
I don't think you need the relay log files to get it to work. It might remember where it left off; did you try RESET SLAVE? You should get the position from the slave with SHOW SLAVE STATUS to see if it still remembers, then check whether the log file on the master server still exists, because the master only keeps its binary logs around for as long as (and as large as) you've configured. But try RESET SLAVE if you haven't; it does magic. You are probably going to have to start the whole process over by dumping the existing server data right after you lock the tables and run SHOW MASTER STATUS. I wouldn't recommend trying to save this process if you have the option to start replication from scratch.
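A minimal sketch of re-pointing the slave (assuming the master still has the binary log that SHOW SLAVE STATUS reports; the file name and position below are placeholders):
# note Relay_Master_Log_File and Exec_Master_Log_Pos from the output
mysql -e "SHOW SLAVE STATUS\G"
mysql -e "STOP SLAVE"
# substitute the file/position noted above for the placeholders
mysql -e "CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4"
mysql -e "START SLAVE"
mysql -e "SHOW SLAVE STATUS\G"
CHANGE MASTER TO discards the old relay logs and starts new ones, which is exactly what the quoted comment describes.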
I have a problem starting the MySQL server.
The log says:
InnoDB: Error in opening ./ibdata1
111220 16:16:43 InnoDB: Operating system error number 11 in a file operation.
InnoDB: Error number 11 means 'Resource temporarily unavailable'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/operating-system-error-codes.html
InnoDB: Could not open or create data files.
InnoDB: If you tried to add new data files, and it failed here,
InnoDB: you should now edit innodb_data_file_path in my.cnf back
InnoDB: to what it was, and remove the new ibdata files InnoDB created
InnoDB: in this failed attempt. InnoDB only wrote those files full of
InnoDB: zeros, but did not yet use them in any way. But be careful: do not
InnoDB: remove old data files which contain your precious data!
/usr/libexec/mysqld: Disk is full writing './mysql-bin.000028' (Errcode: 28). Waiting for someone to free space... Retry
in 60 secs
I checked the disk, and it is indeed full.
So, after searching for a solution, I found that I need to purge the binary logs.
However, in order to purge them I need to start the MySQL server, but all the space on the disk is taken by the binary logs, so I can't start it...
It's also not advised to simply delete the binary logs.
So, I am kind of stuck.
I can't run MySQL to purge the logs, and I can't purge the logs because I can't run the server.
Any help? :)
Edit: The disk contains only the logs, there's nothing else.
If the disk is ext[2|3|4], you can use tune2fs to set the portion of the disk reserved for root to 0, giving you maybe enough breathing room to start the server.
This would be tune2fs -m 0 /dev/whatever (after unmounting, of course).
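For example (the device name below is a placeholder for whatever holds the log partition):
# see how many blocks are currently reserved for root
tune2fs -l /dev/sdb1 | grep -i 'reserved block count'
# hand the reserved blocks back so mysqld has room to start
tune2fs -m 0 /dev/sdb1
# once the server is up, purge the old binary logs properly from SQL
mysql -e "PURGE BINARY LOGS TO 'mysql-bin.000028'"
This keeps mysql-bin.000028 (the file the error mentions) and removes everything older.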
Try starting the MySQL server with the option --expire_logs_days=<days>; it should delete logs older than that many days at startup.
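For example, in my.cnf (the value of 7 is just an illustration):
[mysqld]
expire_logs_days = 7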
bye
Gianluca
I've got replication set up on a pair of servers. One is the master and the second is a slave.
Recently, on the master, the binlog files were purged too early (they were removed by filename, so MySQL couldn't prevent the removal of files that were still needed).
Now the SLAVE has status:
Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
I want to restore the missing binlog files so the slave will resume reading from the point where it stopped.
The files are already in place, but how can I force the master to 'unpurge' its log list (so they are visible in SHOW BINARY LOGS)?
OK, I managed to do it. However, this solution isn't perfect or 100% safe.
I've entered all the filenames into my mysql-bin.index:
find /var/log/mysql/ -wholename '/var/log/mysql/mysql-bin.0*' | sort > mysql-bin.index
(If you use it, check the filename format in the mysql-bin.index file first and adjust the command to your needs.)
Then restart MySQL; it reloads that file on startup.
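To check that the restored files are visible to the master again:
mysql -e "SHOW BINARY LOGS"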
The MASTER is ready.
Now it's enough to do
STOP SLAVE;
and
START SLAVE;
on the SLAVE, and it will continue its job.