Ubuntu 10.10 - Memory Issue with MySQL - mysql

All of a sudden my database server is sitting at about 98% memory allocation (I have a 16 GB box running only a MySQL instance).
Here is what is displayed when I do a free -m:
             total       used       free     shared    buffers     cached
Mem:         15498      14565        932          0         76       8081
-/+ buffers/cache:       6408       9089
Swap:        31743          0      31743
I've already rebooted the machine; it's running on a very high-availability server. MySQL reports that it's running 562 queries per second.
Total      ø per hour   ø per minute   ø per second
22 M       2.03 M       33.77 k        562.90
Is this normal?

There is nothing wrong with those memory stats. Linux uses otherwise-idle RAM for buffers and page cache, and reclaims it when applications need it; you still have about 9 GB effectively free and no swap is in use.
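The `free -m` output above bears this out. A quick sanity check, using plain arithmetic on the figures from the question:

```javascript
// Figures from the question's `free -m` output (all in MB).
const total = 15498;
const used = 14565;
const free = 932;
const buffers = 76;
const cached = 8081;

// Memory actually held by applications, excluding reclaimable cache.
const appUsed = used - buffers - cached;       // matches the "-/+ buffers/cache" used column
// Memory effectively available to applications.
const effectiveFree = free + buffers + cached; // matches the "-/+ buffers/cache" free column

console.log(appUsed, effectiveFree); // 6408 9089
```

These are exactly the 6408 MB used / 9089 MB free shown on the "-/+ buffers/cache" row, which is the row that matters for judging memory pressure.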

Related

nodejs mysql memory leak on large volume of high frequency queries

TL;DR
https://gist.github.com/anonymous/1e0faaa99b8bb4afe3749ff94e52c84f - Demonstrates memory consumption implying a leak. Is this in my code or in the mysql package?
Full version:
I am seeing a severe memory leak (and an eventual crash every few hours) in my program. The program receives a lot of data over a UDP socket, stores the relevant bits in an in-memory hash, and writes the data in this hash to a MySQL DB (an AWS RDS instance) once every 10 seconds. The Node.js version is 6.9.4, running on RHEL 7.1.
I tried to do some profiling using the "--inspect" option and Chrome DevTools, and my initial suspicion is the mysql package. To that end I made a simple Node.js program which just makes a lot of queries to a local DB, and observed that it indeed consumes a lot of memory very fast. Here is the program: https://gist.github.com/anonymous/1e0faaa99b8bb4afe3749ff94e52c84f
I tried a few variations of the program, and all of them consumed memory at a rate which clearly pointed towards an out-of-memory crash. The variations are:
Use a single connection
Use a pool with 30 connections (This is the production setup)
Use a valid query statement
Use an invalid query statement that results in a parse error (the space before the string 123 on line 27 makes it a bad query; removing the space makes it a valid query)
The above program has nothing like the in-memory hash. It does one thing only: issue a lot of UPDATE queries to the MySQL DB at a high frequency.
I have set the interval to 2 seconds to demonstrate the memory consumption easily. Making the queries less frequent slows the memory growth, but it keeps growing nevertheless; that only delays the crash, which remains inevitable.
The real usage requirement is a 10-second interval, with up to 10,000 update queries per run. So the numbers in the sample program are pretty close to the real-world scenario, not just hypothetical simulation numbers.
Here is the output of "--trace-gc" which shows that the memory consumption rises to 400MB within a minute's time :
[29766:0x36c5120] 52326 ms: Scavenge 324.9 (365.1) -> 314.7 (369.1) MB, 8.3 / 0.0 ms [allocation failure].
[29766:0x36c5120] 53292 ms: Scavenge 330.3 (370.1) -> 317.4 (372.1) MB, 3.3 / 0.0 ms [allocation failure].
[29766:0x36c5120] 53477 ms: Scavenge 333.4 (374.1) -> 329.0 (375.1) MB, 15.6 / 0.0 ms [allocation failure].
[29766:0x36c5120] 53765 ms: Scavenge 335.5 (375.1) -> 331.9 (385.1) MB, 20.8 / 0.0 ms [allocation failure].
[29766:0x36c5120] 54701 ms: Scavenge 346.4 (386.1) -> 334.4 (388.1) MB, 5.3 / 0.0 ms [allocation failure].
[29766:0x36c5120] 55519 ms: Scavenge 349.9 (389.1) -> 338.9 (390.1) MB, 5.7 / 0.0 ms [allocation failure].
[29766:0x36c5120] 55614 ms: Scavenge 353.1 (392.1) -> 350.0 (395.1) MB, 17.8 / 0.0 ms [allocation failure].
[29766:0x36c5120] 56081 ms: Scavenge 356.8 (395.1) -> 351.3 (405.1) MB, 18.5 / 0.0 ms [allocation failure].
[29766:0x36c5120] 57195 ms: Scavenge 367.5 (406.1) -> 354.2 (407.1) MB, 3.2 / 0.0 ms (+ 20.1 ms in 236 steps since last GC) [allocation failure].
[29766:0x36c5120] 57704 ms: Scavenge 369.9 (408.1) -> 362.8 (410.1) MB, 12.5 / 0.0 ms (+ 15.7 ms in 226 steps since last GC) [allocation failure].
[29766:0x36c5120] 57940 ms: Scavenge 372.6 (412.1) -> 369.2 (419.1) MB, 21.6 / 0.0 ms (+ 8.5 ms in 139 steps since last GC) [allocation failure].
[29766:0x36c5120] 58779 ms: Scavenge 380.4 (419.1) -> 371.1 (424.1) MB, 11.4 / 0.0 ms (+ 11.3 ms in 165 steps since last GC) [allocation failure].
[29766:0x36c5120] 59795 ms: Scavenge 387.0 (426.1) -> 375.3 (427.1) MB, 10.6 / 0.0 ms (+ 14.4 ms in 232 steps since last GC) [allocation failure].
[29766:0x36c5120] 59963 ms: Scavenge 392.0 (431.3) -> 388.8 (434.3) MB, 19.1 / 0.0 ms (+ 50.5 ms in 207 steps since last GC) [allocation failure].
[29766:0x36c5120] 60471 ms: Scavenge 395.4 (434.3) -> 390.3 (444.3) MB, 20.2 / 0.0 ms (+ 19.2 ms in 96 steps since last GC) [allocation failure].
[29766:0x36c5120] 61781 ms: Scavenge 406.2 (446.3) -> 393.0 (447.3) MB, 3.7 / 0.0 ms (+ 107.6 ms in 236 steps since last GC) [allocation failure].
[29766:0x36c5120] 62107 ms: Scavenge 409.0 (449.3) -> 404.1 (450.3) MB, 16.0 / 0.0 ms (+ 41.0 ms in 227 steps since last GC) [allocation failure].
[29766:0x36c5120] 62446 ms: Scavenge 411.3 (451.3) -> 407.7 (460.3) MB, 22.6 / 0.0 ms (+ 20.2 ms in 103 steps since last GC) [allocation failure].
Questions:
Is this kind of memory consumption expected for so many queries, or is it indicative of a leak?
Is my code leaking the memory? Is there anything obvious that I have missed? Am I using the package the wrong way?
If this is indeed a leak in the package, are there any immediate workarounds until it is fixed?
I am more than happy to provide any other information that is needed to get to the root of this. Please let me know.
Putting an answer here just for the benefit of anyone facing a similar issue.
In my case the problem was not a memory leak but throughput. The MySQL server I was running was simply not capable of completing so many queries in such a short time; at that frequency I was effectively choking it.
Node.js would keep creating a new connection and/or query object for each new query, and each object would be released from memory once its query completed. But the client was sending so many queries, so frequently, that the MySQL server took a long time to complete each one.
Simply put, queries were being issued much faster than they were being completed, so the query/connection objects just kept piling up on the client side, resulting in ever-increasing memory usage.
This looked like a leak. But it wasn't.
One technique I learnt for distinguishing a leak from a throughput issue is to stop creating work (in this case, stop issuing new queries) and check whether memory usage comes down. If it does, it is a throughput issue; if not, it could be a memory leak.
In my case, up to about 8,000 queries per second worked fine; around 8.5k-9k the throughput issue appeared and eventually caused a crash.
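The mitigation that follows from this diagnosis is backpressure: never start a new batch while the previous one is still in flight, so unfinished work cannot pile up. A minimal, deterministic sketch of the idea (the tick/batch timings are illustrative, and `started`/`skipped` are names I made up; real code would gate a `setInterval` flush on the completion callback of the mysql queries):

```javascript
// Backpressure sketch: skip a flush tick while the previous batch is still
// in flight. Simulated deterministically: ticks arrive every `tickMs`, and
// each batch (a stand-in for a burst of UPDATE queries) takes `batchMs`.
function simulate(tickMs, batchMs, totalMs) {
  let busyUntil = -1;  // time at which the current batch finishes
  let started = 0;
  let skipped = 0;
  for (let now = 0; now < totalMs; now += tickMs) {
    if (now < busyUntil) {  // previous batch unfinished: drop this tick
      skipped++;
      continue;
    }
    started++;              // start a new batch
    busyUntil = now + batchMs;
  }
  return { started, skipped };
}

// Ticks every 10 ms, but each batch takes 25 ms: roughly two of every
// three ticks must be skipped to avoid unbounded client-side queueing.
console.log(simulate(10, 25, 300)); // { started: 10, skipped: 20 }
```

Skipped ticks are the visible symptom that the timer is outpacing the server; without the gate, each of those ticks would instead have queued another batch of query objects in memory.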

Errors and issues in the RDS with MySQL 5.7 and Wordpress website

I have a medium-sized website made in WordPress, with about 2,000 daily hits and 4,000 posts.
The website has run on an EC2 m3.xlarge instance without any problems for over a year, but I have constantly had problems with RDS MySQL.
My RDS instance is as follows:
Engine: MySQL 5.7.11
Instance: db.m3.medium
Type: Standard - Current Generation
vCPU: 1
Memory: 3.75 GiB
EBS Optimized: no
Network Performance: Moderate
Storage Type: General Purpose (SSD)
IOPS: disabled
Storage: 100 GB
Parameter groups: default
Option groups: With Memcached
Memcached configuration (defaults; I have no idea how to use these settings):
INNODB_API_BK_COMMIT_INTERVAL: 5
ERROR_ON_MEMORY_EXHAUSTED: 0
MAX_SIMULTANEOUS_CONNECTIONS: 1024
CHUNK_SIZE_GROWTH_FACTOR: 1.25
VERBOSITY: v
INNODB_API_DISABLE_ROWLOCK: 0
BINDING_PROTOCOL: auto
CHUNK_SIZE: 48
INNODB_API_TRX_LEVEL: 0
CAS_DISABLED: 0
DAEMON_MEMCACHED_W_BATCH_SIZE: 1
DAEMON_MEMCACHED_R_BATCH_SIZE: 1
INNODB_API_ENABLE_MDL: 0
BACKLOG_QUEUE_LIMIT: 1024
The most common problems are:
1) The RDS instance's CPU sits at 100% for several minutes, hampering navigation on the website (apparently resolved after enabling the Memcached option);
2) Frequent error-log messages: "Aborted connection 698,774 to db: 'unconnected' user: 'mydbuser' host: '+my computer ip+' (Got an error reading communication packets)" and "Warning: Host name '+my provider hostname+' could not be resolved: Name or service not known";
3) Constant Wordpress error message while edit posts: "Connection lost. Saving has been disabled until you’re reconnected..."
4) Simultaneous connections are limited to 296, but once or twice a day the count jumps from an average of 5 connections to 39 for several minutes, and the website goes down.
The global settings that this instance has by default: http://pastebin.com/Ws1D5VDs

Determine Issue with Slow CentOS 6 Apache Server

Background: I have 2 servers running CentOS 6 - 1 Apache server and 1 MySQL server. The web server connects to the MySQL server via a private IP behind the router. Both have /etc/hosts.allow/deny set up, and iptables restricts which IPs can reach the system on ports 22 and/or 3306. Ports 80 and 443 are open to all on the web server.
Starting last week, after a yum update, the web server is on occasion extremely slow serving data. Some JavaScript files served locally take 30 seconds or more to deliver. Running top rarely shows any resources being used:
top - 17:54:32 up 6 days, 21:37, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 123 total, 1 running, 122 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.0%sy, 0.0%ni, 99.5%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1916548k total, 1696236k used, 220312k free, 189352k buffers
Swap: 0k total, 0k used, 0k free, 386004k cached
free -m shows:
[gnet#itv ~]$ free -m
             total       used       free     shared    buffers     cached
Mem:          1871       1662        209          0        184        378
-/+ buffers/cache:       1099        772
Swap:            0          0          0
I cannot find anything that points to the issue. I have checked /var/log/secure and /var/log/messages with no issues. I ran mysqlcheck on the DB and all tables report OK. Pinging the servers on the private network is quick, as expected. I see very few long queries in the MySQL query log.
My host (Rackspace) tells me there are no 'noisy neighbors' and that the parent node is fine, and there are no network issues.
What can I look at to see what the issue might be? I have run iotop and see only minor, quick writes. But when I'm SSH'd in, the connection "acts" like the server is under tremendous load.
Any ideas would be appreciated!
It seems the issue was, indeed, hardware. They migrated my VM to another host and the problem has disappeared.

Apache Mysql MaxClients

I have a system running on CentOS + Apache + MySQL with the "Bitrix" CMS (a Russian CMS).
Sometimes the system becomes CPU-overloaded and Apache logs the message "MaxClients reached". After restarting the httpd daemon, CPU utilisation returns to normal, about 20-30%.
At first I thought it was memory leaks, but then I reduced MySQL's memory and now I have a minimum of about 2 GB of free RAM.
My apache settings:
<IfModule mpm_prefork_module>
StartServers 32
MinSpareServers 32
MaxSpareServers 32
MaxClients 64
MaxRequestsPerChild 5000
</IfModule>
<IfModule worker.c>
StartServers 2
MaxClients 300
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 2500
</IfModule>
To my shame, I did not realize the difference between these modules.
My system config: 16 GB RAM; one httpd process eats 50-150 MB; max 200-250 users online at any one moment.
Why does the CPU become overloaded when the number of connections reaches the MaxClients value?
What am I doing wrong?
Thanks!
htop when CPU overloads and MaxClients reaches some value
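One thing worth checking with the numbers in the question: prefork MaxClients should be sized so that worst-case httpd memory fits in the RAM left over after MySQL and the OS take their share. A rough worked calculation (the 2 GB reserve is an assumption; the per-process figure is the upper end quoted in the question):

```javascript
// Rough prefork MaxClients sizing: worst-case httpd memory must fit in
// what remains after MySQL and the OS. All figures in MB.
const totalRam = 16 * 1024;  // 16 GB box (from the question)
const reserved = 2 * 1024;   // assumed reserve for MySQL + OS
const perProcess = 150;      // worst-case httpd process size (from the question)

const safeMaxClients = Math.floor((totalRam - reserved) / perProcess);
console.log(safeMaxClients); // 95
```

With prefork MaxClients at 64 this box is nowhere near memory-bound, so the CPU spikes at MaxClients are more likely requests queueing behind slow backend work (e.g. MySQL) than Apache running out of RAM. Note also that only one MPM is active at a time, so only one of the two `<IfModule>` blocks shown actually applies.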

Intermittent MySQL error under high load: "Unknown MySQL server host 'XXXX'"

I have a MySQL server under heavy load (~960 queries per second) with ~400 clients continuously running queries against it. It is on a powerful machine (8-core Xeon, 3.3 GHz) and it looks like it can keep up with the load, no problem.
Occasionally (about once per week), all client processes error out at the same time with the message "Unknown MySQL server host 'XXXX'". Then, without my doing anything, they all come back to life a short time later.
I have max_connections set to 500, but if I exceeded that number I would expect a "Too many connections" error, not the one I'm seeing.
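That error typically indicates a transient hostname-resolution failure on the client side (the client could not resolve the server's name), not a connection limit, which would fit the "comes back by itself" behaviour. Besides fixing the resolver or pinning the host in /etc/hosts, clients can retry transient lookup failures. A generic retry sketch, with a simulated connect function standing in for the real client call (`withRetry` and `connect` are illustrative names, not from any particular library):

```javascript
// Retry a flaky operation a few times before giving up. `fn` is a stand-in
// for a real connect call that may fail transiently (e.g. a DNS blip).
function withRetry(fn, attempts) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return fn();
    } catch (err) {
      lastErr = err; // transient failure: try again
    }
  }
  throw lastErr; // all attempts exhausted
}

// Simulated connect: fails twice (as a DNS blip might), then succeeds.
let calls = 0;
function connect() {
  calls++;
  if (calls < 3) throw new Error("Unknown MySQL server host 'XXXX'");
  return 'connected';
}

console.log(withRetry(connect, 5)); // 'connected', after two failed attempts
```

In production the retry should be spaced out (e.g. exponential backoff) so that 400 clients retrying simultaneously don't hammer the resolver the moment it recovers.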
Can anyone help me figure out why I'm getting these errors?
Thanks,
Jonathan
System specs:
8 cores, Xeon 3.2 GHz
Ubuntu 8.04
Kernel: Linux 2.6.24-27
MySQL 5.0.51a-3ubuntu5.7