Out of memory error from MySQL

We have a web application (RackTables) that's giving us grief on our production box. Whenever users try to run a search, it gives the following error:
Pdo exception: PDOException
SQLSTATE[HY000]: General error: 5 Out of memory (Needed 2057328 bytes) (HY000)
I cannot recreate the issue on our backup server. The servers match except that production has 16GB of RAM while the backup has 8GB. That's a moot point, though, because both are running 32-bit OSes and so can only use 4GB of RAM. We have also set up a swap partition...
Here's what I get back from the "free -m" command in production:
prod:/etc# free -m
             total       used       free     shared    buffers     cached
Mem:          3294       1958       1335          0        118          1
-/+ buffers/cache:       1839       1454
Swap:         3817        109       3707
prod:/etc#
I've checked to make sure that the my.cnf files on both boxes match. The database from production was replicated onto the backup server, so the data matches as well.
I guess our options are to:
A) convert the OS to 64-bit so we can use more RAM.
B) start tweaking some of the innodb settings in my.cnf.
But before I try either A or B, I wanted to know if there's anything else I should compare between the two servers, seeing as the backup is working just fine. There must be a difference somewhere that we are not accounting for.
Any suggestions would be appreciated.

I created a script to simulate load on the backup server and was then able to recreate the out of memory error message.
In the end, I added the "join_buffer_size" setting to my.cnf and set it to 3 MB. That has resolved the issue.
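For anyone who wants to confirm or try the same thing at runtime before editing my.cnf, here is a minimal sketch (the 3 MB figure is just what worked for us, not a universal value):
-- Check the current per-connection join buffer.
SHOW VARIABLES LIKE 'join_buffer_size';
-- Apply the same 3 MB value for new connections; the my.cnf entry makes it permanent.
SET GLOBAL join_buffer_size = 3 * 1024 * 1024;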
P.S. I downloaded and ran tuning-primer.sh as well as mysqltuner.pl to narrow down the issues.

Related

MySQLDump on Windows performance degrading over time

I have an issue that is driving me (and my customer) up the wall. They are running MySQL on Windows. I inherited this platform; I didn't design it, and changing to MSSQL on Windows or migrating the MySQL instance to a *nix VM isn't an option at this stage.
The server is a single Windows VM, reasonably specced (4 vCores, 16 GB RAM, etc.).
Initially they had a single disk for the OS, MySQL, and the MySQL backup location, and they were getting inconsistent backups, regularly failing with the error message:
mysqldump: Got errno 22 on write
Eventually we solved this by simply moving the backup destination to a second virtual disk (even though it is on the same underlying storage network, we believed that the above error was being caused by the underlying OS).
And life was good....
For about 2-3 months
Now we have a different (but my gut is telling me related) issue:
The MySQL dump process is taking progressively longer (over the last 4 days, the time taken for the dump has increased by about 30 minutes per backup).
The database itself is a bit large at 58 GB, but the daily delta in size is only about 100 MB (and unless I'm missing something, it shouldn't take 30 minutes extra to dump 100 MB of data).
Initially we thought this was underlying storage network I/O. However, as part of the backup script, once the .SQL file is created it gets zipped up (to about 8.5 GB), and this process is very consistent in the time it takes, which leads me not to suspect disk I/O (my presumption being that the zip time would also increase if that were the case).
The script that I use to invoke the backup is this:
%mysqldumpexe% --user=%dbuser% --password=%dbpass% --single-transaction --databases %databases% --routines --force=true --max_allowed_packet=1G --triggers --log-error=%errorLogPath% > %backupfldr%FullBackup
The mysqldump executable is C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe.
Now to compound matters, I'm primarily a Windows guy (so I have limited MySQL experience), and all the Linux guys at the office won't touch it because it's on Windows.
I suspect the cause of the increased time is something funky happening with the row locks (possibly due to the application that uses the MySQL instance), but I'm not sure.
Now for my questions: has anyone experienced anything similar, with a gradual increase in the time a MySQL dump takes?
Is there a way on Windows to natively monitor the performance of MySQLdump to find where the bottlenecks/locks are?
Is there a better way of doing a regular MySQL backup on a Windows platform?
Should I just tell the customer it's unsupported and to migrate it to a supported Platform?
Should I simply utter a 4 letter word and go to the Pub and drown my sorrows?
Eventually I found that the culprit was the underlying storage network.
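For the monitoring question above, one approach that works the same on Windows as anywhere else is to poll the server's process list from a second client while the dump runs; a minimal sketch, assuming nothing about this particular setup:
-- List every non-idle session, longest-running first, while mysqldump is active.
SELECT id, user, state, time, LEFT(info, 80) AS query
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep'
ORDER BY time DESC;
Sessions that sit for a long time in lock-related states would have supported the row-lock theory.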

HDD space in Ubuntu Apache server is running out

I've created a 10 GB HDD and 3.75 GB RAM instance in Google Cloud and hosted a fairly heavy DB-transaction application's backend/API there. The OS is Ubuntu 14.04 LTS and I'm using the Apache web server with PHP and MySQL for the backend. The problem is that the HDD space has almost run out, very quickly.
Using Linux (Ubuntu) commands, I've found that my source code (/var/www/html) is about 200 MB and the MySQL DB folder (/var/lib/mysql) is 3.7 GB (around 20,000,000 records in my project DB). I'm confused about how the rest of my HDD space is occupied (apart from OS files). As of today, I only have 35 MB left. Once, for testing purposes, I copied the source code to another folder; even then I had the same problem. When I realized that my HDD space was running out, I deleted that folder and freed around 200 MB. But around 10 minutes later, that freed space was gone as well!
I figured that some log file, like the Apache error log, access log, MySQL error log, or CakePHP debug log, might be occupying that space, but I disabled and truncated those files long ago and have checked that they are not being created again. So how is the space disappearing?
I'm seriously worried about continuing this project on this instance. I thought about adding an additional HDD to remedy the situation, but I need to be sure how my HDD space is being occupied first. Any help will be highly appreciated.
You can start by searching all the largest files in your system.
From the / directory, type:
sudo find . -type f -size +5000k -exec ls -lh {} \;
Once you find the files you can start to troubleshoot.
If you get too many files, you can increase +5000k to aim for the larger files.
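If the large files turn out to live under /var/lib/mysql, one common culprit worth ruling out is the MySQL binary log (an assumption here - the poster hasn't confirmed binary logging is enabled). It can be checked and trimmed from MySQL itself:
-- Errors with "You are not using binary logging" if log_bin is off.
SHOW BINARY LOGS;
-- If the logs dominate the disk, expire old ones automatically...
SET GLOBAL expire_logs_days = 7;
-- ...and purge anything older than a week right now.
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;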

MariaDB/MySQL Errors - Table & Server Crashes (WHM Server)

I don't know what's going on. My server has been fine for probably a year. Now I'm having a severe problem with MariaDB/MySQL. The DB server keeps crashing. When it does and I bring it back online I get errors, several tables are marked crashed and I have to repair them. Here are the server specs...
CloudLinux Server release 6.6 installed on Centos 6.5 (x64)
WHM/Cpanel 11.50.1 Build 1 (Current)
MariaDB 10.0.21
RAM: 3,820MB (3750MB+ in use)
Swap: 1,023MB (1,023MB in use)
4 Cores (Low idle load)
Available Disk Space: 26GB
I suspect it has to do with memory, based on a memory alert I keep getting in WHM.
Here's what I get when I try to visit a web site on my server that uses MySQL (as expected):
Warning: mysql_connect(): Connection refused in /home/mysite/public_html/index.php on line 19
Unable to connect to server.
Here's a link to the main error log of my database server (Too much to post here): http://wikisend.com/download/182056/proton.myserver.com.err.txt
This is what happens when I restart my database server from WHM: each time I restart the DB server, random tables are marked as crashed. Sometimes it's a lot of tables, sometimes just a few, and then I have to repair them.
Here is the contents of the /etc/my.cnf file:
root@proton [~]# cat /etc/my.cnf
[mysqld]
default-storage-engine=MyISAM
innodb_file_per_table=1
max_allowed_packet=268435456
open_files_limit=10000
innodb_buffer_pool_size=123731968
The only thing I've tried so far to fix this is setting an option in WHM.
I only have a handful of sites on the server. Any help is greatly appreciated.
SHOW VARIABLES LIKE '%buffer%';
Do you have other products running on the same VM/server? How much of the 3750MB are they using? Consider increasing RAM as a quick fix. Otherwise, let's look for what is chewing up RAM.
You are probably not using any InnoDB tables? If not, then change this to 0:
innodb_buffer_pool_size=123731968
For MyISAM, the most important factor is key_buffer_size; it should be no more than about 500M for your case.
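A quick sketch of checking the key buffer before resizing it (the 500M cap is the suggestion above, not a measured value):
-- Current setting, plus how much of it MyISAM is actually using.
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW GLOBAL STATUS LIKE 'Key_blocks_%';
-- key_buffer_size is dynamic, so the cap can be applied without a restart.
SET GLOBAL key_buffer_size = 500 * 1024 * 1024;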
What is WHM?
Abrupt stops of mysql (for any reason) lead to the need to REPAIR MyISAM tables ("marked crashed"). (Consider moving to InnoDB to avoid this recurring nuisance.)
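For reference, repairing one of those "marked crashed" tables looks like this (mydb.mytable is a placeholder, not a table from this server):
-- Verify the table, then rebuild it if corruption is reported.
CHECK TABLE mydb.mytable;
REPAIR TABLE mydb.mytable;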

MySQL my.cnf -- open-files-limit causing CPU Overload

I've got this server, an Intel Xeon quad-core E3-1230v2 with 8 GB of DDR3 RAM. Round the clock I see that this server is running out of CPU; it looks badly overloaded. After observing the "Daily Process Log" I realized that the process below is eating 25% of the CPU resources, and there were three such processes:
/usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.yacart.com.err --open-files-limit=16384 --pid-file=/var/lib/mysql/server.yacart.com.pid
As visible above, it appears something is wrong with open-files-limit=16384. I tried increasing open_files_limit in my.cnf to 16384, but in vain. Below is how my my.cnf now looks:
[mysqld]
innodb_file_per_table=1
local-infile=0
open_files_limit=9978
Can anyone advise me on a good my.cnf configuration that would help me get rid of the CPU overload?
There is a GoogleBot-like robot script I am running on slave servers to mine data from the internet; it's crawling the entire internet. When I shut down this script, everything gets back in order. I wonder if there is a fix I could apply to this script?
This robot program has about 40 databases, each 50-800 MB in size, with a total DB size of about 14 GB so far, and I expect this to shoot up to 500 GB in the future. At any one point (a whole day long) only ONE DB is used; the next day I use the next DB, and so on. I was thinking of increasing RAM once the biggest DB reaches 2 GB. Currently RAM does not seem to be an issue at all.
Thanks in advance for any help you guys can offer.
regards,
Sam
If you have WHM, look for this under Server Configuration >> Tweak Settings >> SQL
Let cPanel determine the best value for your MySQL open_files_limit configuration?
cPanel will adjust the open_files_limit value during each MySQL restart depending on your total number of tables.
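Before changing anything, it may be worth verifying whether the limit is actually under pressure; MySQL exposes both the setting and the live counters (a generic check, not specific to this server):
-- The limit mysqld is actually running with...
SHOW VARIABLES LIKE 'open_files_limit';
-- ...and how many files and tables it currently has open.
SHOW GLOBAL STATUS LIKE 'Open%';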

Increasing the number of simultaneous requests to MySQL

Recently we changed the app server of our Rails website from Mongrel to Passenger [with REE and Rails 2.3.8]. The production setup has 6 machines pointing to a single MySQL server and a memcached server. Before, each machine had 5 Mongrel instances; now we have 45 Passenger instances, as each machine has 16GB of RAM and two 4-core CPUs. Once we deployed this Passenger setup in production, the website became very slow, all the requests started to queue up, and eventually we had to roll back.
Now we suspect that the cause is the increased load on the MySQL server: before there were only 30 MySQL connections, and now we have 275. The MySQL server has a similar setup to our website machines, but all the configs were left at their default limits. The buffer_pool_size is only 8 MB even though we have 16GB of RAM, and the number of concurrent threads is 8.
Would this increase in simultaneous connections cause MySQL to respond more slowly than when we had only 30 connections? If so, how can we make MySQL perform better with 275 simultaneous connections in place?
Any advice greatly appreciated.
UPDATE:
More information on the mysql server:
RAM : 16GB CPU: two processors each having 4 cores
The tables are InnoDB, with only default InnoDB config values.
Thanks
An idle MySQL connection uses up a stack and a network buffer on the server. That is worth about 200 KB of memory and zero CPU.
In a database using InnoDB only, you should edit /etc/sysctl.conf to include vm.swappiness = 0 to delay swapping out processes as long as possible. You should then increase innodb_buffer_pool_size to about 80% of the system's memory, assuming a dedicated database server machine. Make sure the box does not swap; that is, VSIZE should not exceed system RAM.
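To see how far from that 80% guideline a given server is, a generic check (note that on MySQL of this era the value is not dynamic):
-- Current pool size in bytes; 8388608 would confirm the 8M default mentioned above.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- Changing it means editing my.cnf under [mysqld], e.g. innodb_buffer_pool_size = 12G
-- on a dedicated 16GB box, and restarting mysqld.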
innodb_thread_concurrency can be set to 0 (unlimited), or 32 to 64 if you are a bit paranoid, assuming MySQL 5.5. The limit is lower in 5.1, and around 4-8 in MySQL 5.0. It is not recommended to use such outdated versions of MySQL on a machine with 8 or 16 cores; there are huge improvements with regard to concurrency in MySQL 5.5 with InnoDB 1.1.
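Unlike the buffer pool, innodb_thread_concurrency is dynamic, so the suggestion above can be tried without a restart:
-- 0 = unlimited; takes effect for subsequent query executions.
SET GLOBAL innodb_thread_concurrency = 0;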
The variable thread_concurrency has no meaning on a current Linux. It is used to call pthread_setconcurrency(), which does nothing on Linux. It used to have a function on older Solaris/SunOS.
Without further information, the cause of your performance problems cannot be determined with any certainty, but the above general advice may help. More general advice, geared toward my limited experience with Ruby, can be found at http://mysqldump.azundris.com/archives/72-Rubyisms.html . That article is the summary of a consulting job I once did for an early version of a very popular Facebook application.
UPDATE:
According to http://pastebin.com/pT3r6A9q , you are running 5.0.45-community-log, which is awfully old and does not perform well under concurrent load. Use a current 5.5 build, it should perform way better than what you have there.
Also, fix the innodb_buffer_pool_size. You are going nowhere with only 8M of pool here.
While you are at it, innodb_file_per_table should be ON.
Do not switch on innodb_flush_log_at_trx_commit = 2 without understanding what that means, but it may help you temporarily, depending on your persistence requirements. It is not a permanent solution to your problems in any way, though.
If you have any substantial kind of writes going on, you need to review the innodb_log_file_size and innodb_log_buffer_size as well.
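A sketch of inspecting the settings named in the last two paragraphs before touching them (only the flush setting is dynamic; resizing the log files requires a my.cnf edit and a clean restart):
-- Redo log sizing and the durability trade-off knob.
SHOW VARIABLES LIKE 'innodb_log%';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- Only if relaxed durability is acceptable: the log is then flushed to disk
-- roughly once per second instead of at every commit.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;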
If that installation is earning money, you dearly need professional help. I am no longer doing this as a profession, but I can recommend people. Contact me outside of Stack Overflow if you want.
UPDATE:
According to your processlist, you have very many queries in the state "Sending data". MySQL is in this state while a query is being executed, that is, while the main interior join loop/query execution loop is busy. SHOW ENGINE INNODB STATUS\G will show you something like:
...
--------------
ROW OPERATIONS
--------------
3 queries inside InnoDB, 0 queries in queue
...
If that number ("queries inside InnoDB") is larger than, say, 4-8, 5.0.x is going to have trouble. 5.5.x will perform a lot better here.
Regarding the my.cnf: see my previous comments on your InnoDB settings. See also my comments on thread_concurrency (without the innodb_ prefix):
# On Linux, this does exactly nothing.
thread_concurrency = 8
You are missing all InnoDB configuration. Assuming that you ARE using InnoDB tables, you are not going to perform well, no matter what you do.
As far as I know, it's unlikely that merely maintaining/opening the connections would be the problem. Are you seeing this issue even when the site is idle?
I'd try http://www.quest.com/spotlight-on-mysql/ or similar to see if it's really your database that's the bottleneck here.
In the past, I've seen basic networking craziness lead to behaviour similar to what you describe - someone had set up the new machines with an incorrect subnet mask.
Have you looked at any of the machine statistics on the database server? Memory/CPU/disk IO stats? Is the database server struggling?