We have recently deployed a Drupal 6 install, and the directory storing the mysqld-bin.XXX files, /var/run/mysqld, has caused the MySQL server to crash. These binary log files are over 1 GB each and there are tons of them. I would like to know a way to enable log rotation or something similar to ensure this does not happen again.
Any insight is greatly appreciated.
Thanks in advance,
JN
Short answer, yes you can:
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html
But you need to make sure you have a good backup system in place so you can take snapshots of your DB, then recover with your binlog.
Specifically, check the max_binlog_size option. Set it to something like 128 MB or 256 MB.
Then you can add a cron job that copies old binlogs off with each database dump and deletes the old binlogs afterwards.
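A minimal my.cnf sketch of that setup (the sizes and the seven-day retention are illustrative assumptions, not recommendations):

[mysqld]
max_binlog_size  = 256M   # rotate to a new binlog file at roughly 256 MB
expire_logs_days = 7      # auto-purge binlogs older than 7 days

Alternatively, the cron job can purge explicitly after the dump completes:

PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);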
Related
I will really appreciate it if someone can help me with this.
I have spent about 8 hours googling around and found no solution to the problem.
I have MySQL server version 5.7.7 on Windows Server 2008 R2.
Table engine is innodb
innodb_file_per_table = 1
I get the error "Table is full" when the table reaches 4 GB.
The MySQL documentation says that there is actually only one limit on table size: the filesystem.
(http://dev.mysql.com/doc/refman/5.7/en/table-size-limit.html)
The disk where the data is stored uses NTFS; just to be sure, I created a 5 GB file on it without problems. And there is certainly more than 10 GB of free space.
I understand that setting "innodb_data_file_path" is irrelevant if "innodb_file_per_table" is enabled, but I tried setting it anyway. No difference.
I have tried a clean install of MySQL. Same result.
EDIT
The guy who installed the MySQL server before me had accidentally installed the 32-bit version. Migrating to 64-bit MySQL solved the problem.
About the only way for 4 GB to be a file limit is if you have a 32-bit version of MySQL. Also check for a 32-bit operating system. (Moved from comment, where it was verified.)
I am not sure either, but read this; it may help you.
http://jeremy.zawodny.com/blog/archives/000796.html
One more thing: one guy had the same problem. He had made changes to the InnoDB settings innodb_log_file_size and innodb_log_buffer_size. The changes were:
1) shut down mysql
2) cd /var/lib/mysql
3) mkdir oldIblog
4) mv ib_logfile* oldIblog
5) edit /etc/my.cnf, find the line innodb_log_file_size=, and increase it to an appropriate value (he went to 1000 MB, as he was dealing with a very large dataset: 250 million rows in one table). If you are not sure, I suggest doubling the number every time you get a "table is full" error. He set innodb_log_buffer_size to 1/4 of the size of his log file and the problems went away. A sketch of the resulting settings follows after this list.
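A sketch of the corresponding my.cnf lines under those assumptions (the sizes are the ones he used, not general recommendations):

[mysqld]
innodb_log_file_size   = 1000M   # larger redo log for a very large dataset
innodb_log_buffer_size = 250M    # roughly 1/4 of innodb_log_file_size

Note that on older MySQL versions the server will refuse to start if the existing ib_logfile* files do not match the new size, which is why they are moved aside in step 4.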
I didn't find a solution to this; I have no idea why MySQL is unable to create a table larger than 4 GB.
As a workaround, I moved just this table back into ibdata by setting "innodb_file_per_table" back to 0 and recreating the table.
Interestingly, even ibdata1 reported "table is full" when it reached 4 GB, even though no maximum was set and autoextend was enabled.
So I created ibdata2 and let it autoextend; now I am able to write new data to that table.
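A sketch of a my.cnf line that adds a second, autoextending data file (the sizes are illustrative; ibdata1's declared size must match its actual size on disk, so adjust accordingly):

innodb_data_file_path = ibdata1:4G;ibdata2:512M:autoextend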
I have a problem with my Unix server. This started a week ago. One day, after a backup (I used to keep 3 backup files), I visited a website on the server but it wouldn't work. I restarted the server and it seemed to be working fine, except for the MySQL service. My attempts to restart it failed. Then I figured that was because the server was full, so I deleted one of the backups, cleaned up some space, and the MySQL service restarted successfully. Then I figured out that the tables in one of the databases (MyISAM tables) were corrupt. So I repaired them with the myisamchk command via SSH and all worked fine. However, the very next day they were corrupt again (even though MySQL was working fine), and this time there was no disk-space problem on the server. I repaired them again. The next day the same thing happened, and this time InnoDB tables that were part of another database were corrupt as well. I've fixed them too, so now everything is working, but I guess the same thing will happen after tonight's backup.
I can't identify the problem and I don't know what logs to look into to understand the problem. Can anyone please help me out? Thanks very much in advance.
No easy answer here. My immediate thought is that the database is still busy when the backups commence, possibly corrupting indexes, interfering with caches, etc. Turn on full logging and check for problems when the backup starts. Maybe you will find something.
Look for the my.cnf file. On my CentOS it is located at /etc/my.cnf. It will have a config setting for the location of the error log.
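The line to look for is something like this (the path is an example; yours may differ):

[mysqld]
log_error = /var/log/mysqld.log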
My strongest suspect is OOM kill by the kernel or some other issue that results from running the system out of memory. Try this:
Start top on the server and press M to sort by memory, so the biggest memory user is at the top.
Note the PID of mysqld.
Manually perform the backup while you observe the value of the RES column (resident memory size) in the top output.
Once the backup is over, see if the PID of mysqld has changed.
If the PID has changed (meaning a restart took place), and you saw the memory footprint of mysqld grow to something comparable to the total amount of system memory, then my suspicion is correct, and we need to lower some settings in my.cnf to make it use less memory, e.g. key_buffer_size and innodb_buffer_pool_size.
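A minimal sketch of such a change (the values are placeholders; size them to leave headroom for the rest of the system):

[mysqld]
key_buffer_size         = 128M   # MyISAM index cache
innodb_buffer_pool_size = 512M   # main InnoDB cache; keep well below total RAM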
EDIT - From the log you posted, there are additional issues, although it is not clear how they could be contributing to the table corruption. Your server appears to be running with --skip-innodb, and your backup script is not able to deal with the absence of the InnoDB storage engine: it prints exception error messages but nevertheless continues. It is also attempting to do a repair, which is failing due to the lack of system privileges (error 1 is Operation not permitted). It is possible that encountering those errors triggers some faulty logic in your backup script that leaves the tables corrupted.
At this point I would recommend disabling MySQL backup using the cPanel tool, and using mysqldump or some other solution (e.g. Xtrabackup (https://www.percona.com/doc/percona-xtrabackup/2.3/index.html)) from a cron job instead.
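An illustrative cron entry for the mysqldump route (the path, schedule, and destination are placeholders; credentials would come from a ~/.my.cnf here):

0 3 * * * /usr/bin/mysqldump --all-databases --lock-all-tables | gzip > /backup/all-$(date +\%F).sql.gz

--lock-all-tables is the safer choice here since InnoDB is disabled; with InnoDB tables you would use --single-transaction instead.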
EDIT2 - From the test results: the manual backup does not run the system out of memory and does not crash the server. The jury is still out on the automatic one.
Don't kill mysqld; shut it down gracefully.
Switch from MyISAM to InnoDB; the latter does not suffer from that 'error'.
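Converting a table is a single statement (the table name is hypothetical):

ALTER TABLE mytable ENGINE=InnoDB;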
Is it correct to use a backup of a vm as a means of restoring a MySQL database?
Are there any dangers in doing this?
My own feeling is that a VM backup/snapshot is at the OS level, not the DB level, and therefore may not back up the database in the correct way. Does anybody have any advice on this?
It's perfectly fine as long as you do one of two things:
Either ensure consistency of the tables, by shutting down the database or by using something like FLUSH TABLES WITH READ LOCK while taking the snapshot (you probably don't want to do this); see the sketch after this answer.
Use a transactionally-safe storage engine such as InnoDB (the default) for all tables that are likely to change around the time of the snapshot, and rely on its ability to recover from what looks like a crashed state, i.e. the copy of a running server.
Once you realise that taking a snapshot of a running VM and booting the snapshot on another machine looks just like pulling the plug on that server and rebooting it, your choice becomes relatively easy: Make sure the system can recover from pulling the plug, and it can recover from a VM snapshot backup.
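A minimal sketch of the read-lock option (the lock only holds while this client session stays open):

FLUSH TABLES WITH READ LOCK;
-- take the VM snapshot now, from another shell, while this session stays open
UNLOCK TABLES;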
Based on a recommendation from Jeff Hunter posted on the VMware blog, the answer is no, it's not safe to rely on snapshots for MySQL backups. His recommendation is basically to dump the DB through a separate process (and then allow the snapshot to copy the dump).
I am working on a project where I need to create a database with 300 tables for each user who wants to see the demo application. It was working fine, but today, when I was testing with a new user to see a demo, it showed me this error message:
1030 Got error 28 from storage engine
After spending some time googling, I found it is an error related to the disk space available for the database or temporary files. I tried to fix it but failed; now I am not even able to start MySQL. How can I fix this? I would also like to increase the available space as much as possible so that I won't face the same issue again and again.
Mysql error "28 from storage engine" - means "not enough disk space".
To show disk space, use the command below.
myServer# df -h
The output will look like this:
Filesystem Size Used Avail Capacity Mounted on
/dev/vdisk 13G 13G 46M 100% /
devfs 1.0k 1.0k 0B 100% /dev
To expand on this (even though it is an older question): it is probably not about the MySQL space itself, but about space in general, presumably for tmp files or something like that.
My MySQL data dir was not full; the / (root) partition was.
I had the same issue in AWS RDS. It was because the freeable space (hard drive storage) was full. You need to increase your space or remove some data.
My /tmp was 100% full. After removing all files and restarting MySQL, everything worked fine.
My /var/log/apache2 folder was 35 GB, and some logs in /var/log totaled the other 5 GB of my 40 GB hard drive. I cleared out all the *.gz logs, and after making sure the other logs weren't going to do bad things if I messed with them, I just cleared them too:
echo "clear" > access.log
etc.
Check your /backup to see if you can delete an older backup that is no longer needed.
I had a similar issue, because of my replication binary logs.
If this is the case, just create a cronjob to run this query every day:
PURGE BINARY LOGS BEFORE DATE_SUB( NOW(), INTERVAL 2 DAY );
This will remove all binary logs older than 2 days.
I found this solution here.
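An illustrative cron entry for that (assuming credentials are supplied via ~/.my.cnf):

0 4 * * * mysql -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 2 DAY);"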
A simple:
$sth->finish();
Would probably save you from worrying about this. MySQL uses the system's tmp space instead of its own space.
sudo su
cd /var/log/mysql
and lastly, truncate the slow log by typing: > mysql-slow.log
This worked for me
Drop the problem database, then restart the MySQL service (sudo service mysql restart, for example).
If you want to use the TokuDB plugin:
This can happen if you have less than 5% (by default) of free space.
See the option: tokudb_fs_reserve_percent
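A sketch of lowering that threshold in my.cnf (the value is illustrative; TokuDB refuses writes below this percentage of free space):

[mysqld]
tokudb_fs_reserve_percent = 2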
I have a master and a slave database running on different nodes. The master DB is subjected to a huge number of inserts/updates. The master DB size is close to 6 GB, while the log files now occupy more than 120 GB. I am running out of disk space and need to get rid of the log files.
Will deleting the log files in anyway affect the slave DB ? Presently, the slave is just a couple of seconds behind the master.
Is there someplace where I can see what steps I need to follow to delete those files?
E.g.:
1) Shut down the slave
2) Shut down the master
3) Delete the log files
4) Start the master
5) Start the slave
Do I need to inform the slave that the log files have been deleted? If yes, what is the way to do it?
Any help would be appreciated.
Thanks
Yes, you can delete the OLD binlog files. Make sure they're genuinely old, i.e. older than anything the slave still needs; the sketch at the end of this answer shows how to check.
Also, I would run FLUSH LOGS in MySQL.
You should also set your config file to expire your log files after X days (the expire_logs_days option).
This MySQL documentation has all the information beautifully laid out:
http://dev.mysql.com/doc/refman/4.1/en/purge-binary-logs.html
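A sketch of the usual safe sequence (the binlog file name is illustrative):

-- on the slave: find the oldest master binlog it still needs
SHOW SLAVE STATUS\G    -- note Relay_Master_Log_File

-- on the master: purge everything older than that file
PURGE BINARY LOGS TO 'mysql-bin.000123';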