MySQL is taking more and more RAM on my Linux/CentOS server

I'm running a dedicated server with 16 GB of RAM and 1 GB of swap.
My real-time statistics on the server show that more than half of my RAM and 99% of my swap is used by:
/usr/libexec/mysqld --basedir=/usr --datadir=/home/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/
It keeps increasing over time, and even restarting MySQL won't change it.
When I run
mysql> SHOW PROCESSLIST;
I get as a result:
+------+-----------+-----------------+-------+---------+------+-------+------------------+
| Id   | User      | Host            | db    | Command | Time | State | Info             |
+------+-----------+-----------------+-------+---------+------+-------+------------------+
|    7 | root      | localhost:51312 | mysql | Sleep   |    5 |       | NULL             |
| 7156 | mailadmin | localhost:58878 | mail  | Sleep   | 3406 |       | NULL             |
| 9302 | mailadmin | localhost:32868 | mail  | Sleep   |  749 |       | NULL             |
| 9305 | mailadmin | localhost       | mail  | Sleep   |  747 |       | NULL             |
| 9802 | mailadmin | localhost       | mail  | Sleep   |    9 |       | NULL             |
| 9803 | mailadmin | localhost       | mail  | Sleep   |    9 |       | NULL             |
| 9807 | mailadmin | localhost       | mail  | Sleep   |    9 |       | NULL             |
| 9808 | mailadmin | localhost       | mail  | Sleep   |    9 |       | NULL             |
| 9825 | root      | localhost       | NULL  | Query   |    0 | NULL  | SHOW PROCESSLIST |
+------+-----------+-----------------+-------+---------+------+-------+------------------+
9 rows in set (0.00 sec)
and
free -m -l
shows me:
                    total       used       free     shared    buffers     cached
Mem:                16094      14431       1663          0       1318       5404
Low:                16094      14431       1663
High:                   0          0          0
-/+ buffers/cache:   7708       8385
Swap:                1021        996         25
I have no idea how to deal with this. It seems like I will reach the RAM limit of the server, which will probably cause slowness.
Thank you in advance; I'm standing by, ready to provide more information.

I think you are being spammed: your mail server is receiving or sending too much mail. It is better to check your incoming/outgoing mail.
Would you consider setting up SpamAssassin/Amavisd or something like that? I think that if you turn off your mail server you will see memory usage drop, which would verify that it's the mail server being hit by spam.

Your statement that "restarting mysql won't change it" seems to imply that it's not mysqld that's using all the memory.
As a rudimentary way to find the processes using the most memory, you could run htop and sort by one of the memory columns, like VIRT. It may not be just one process; it could be a whole slew of processes each using memory. (Some of the memory reported is shared, so you can't just add up the memory for all the mysql processes. In htop, use the keypresses F5 and H to get a "tree view".)
In this example, mysqld is using 11 GB, 73% of available memory. That's expected, because that's what we allocated; the bulk of it is allocated to the InnoDB buffer pool. (This is a dedicated MySQL server.)
  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%    TIME+  Command
19510 mysql      20   0 11.0G 5849M  3808 S 16.0 73.1 81h33:04  /opt/mysql/bin/mysqld --basedir=/opt/mysql --datadir=/opt/mysql_data --user=mysql --log-error=/
 1016 syslog     20   0  220M   940   580 S  0.0  0.0  5:28.12  rsyslogd -c4
 1651 root       20   0  145M  1100   784 S  0.0  0.0  8:26.81  /usr/sbin/automount
 1243 root       20   0 98496  1348  1036 S  0.0  0.0  3h19:31  /usr/sbin/vmtoolsd
13816 root       20   0 90868  1340   404 S  0.0  0.0  0:00.02  sshd: xxxxxx [priv]
13905 mysql      20   0 81548  1120   428 S  0.0  0.0  0:00.02  su - mysql
 1674 Debian-e   20   0 64724   408   332 S  0.0  0.0  0:09.08  /usr/sbin/exim4 -bd -q30m
 1030 root       20   0 63256   472   360 S  0.0  0.0  1:32.65  /usr/sbin/sshd
    1 root       20   0 61840   996   472 S  0.0  0.0  1:05.14  /sbin/init
(There are probably better ways to see what's using memory, but htop does a pretty good job of showing me the processes that are running.)
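As a non-interactive alternative (my own sketch, not part of the original answer), the same per-process resident-memory ranking can be read straight out of /proc on Linux:

```python
#!/usr/bin/env python3
"""List the processes with the largest resident memory, read from /proc."""
import os

def top_by_rss(n=10):
    """Return [(rss_kb, pid, command)] for the n largest processes by RSS."""
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                fields = dict(line.split(":", 1) for line in f if ":" in line)
            rss = int(fields["VmRSS"].split()[0])  # resident set size in kB
            cmd = fields["Name"].strip()
        except (FileNotFoundError, KeyError, PermissionError):
            continue  # process exited, or kernel thread with no VmRSS
        procs.append((rss, int(pid), cmd))
    return sorted(procs, reverse=True)[:n]

if __name__ == "__main__":
    for rss, pid, cmd in top_by_rss():
        print(f"{pid:>7} {rss:>10} kB  {cmd}")
```

Note that, like htop's RES column, this counts shared pages once per process, so summing the numbers still overstates total usage.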

Related

Google Cloud functions + SQL Broken Pipe error

I have various Google Cloud Functions which write to and read from a Cloud SQL database (MySQL). The processes work; however, when the functions happen to run at the same time I get a Broken pipe error. I am using SQLAlchemy with Python; the processes are Cloud Functions and the db is a Google Cloud SQL database. I have seen suggested solutions that involve setting the timeout values higher. I was wondering if this would be a good approach, or if there is a better one? Thanks for your help in advance.
Here's the SQL broken pipe error:
(pymysql.err.OperationalError) (2006, "MySQL server has gone away (BrokenPipeError(32, 'Broken pipe'))")
(Background on this error at: http://sqlalche.me/e/13/e3q8)
Here are the MySQL timeout values:
show variables like '%timeout%';
+-------------------------------------------+----------+
| Variable_name                             | Value    |
+-------------------------------------------+----------+
| connect_timeout                           | 10       |
| delayed_insert_timeout                    | 300      |
| have_statement_timeout                    | YES      |
| innodb_flush_log_at_timeout               | 1        |
| innodb_lock_wait_timeout                  | 50       |
| innodb_rollback_on_timeout                | OFF      |
| interactive_timeout                       | 28800    |
| lock_wait_timeout                         | 31536000 |
| net_read_timeout                          | 30       |
| net_write_timeout                         | 60       |
| rpl_semi_sync_master_async_notify_timeout | 5000000  |
| rpl_semi_sync_master_timeout              | 3000     |
| rpl_stop_slave_timeout                    | 31536000 |
| slave_net_timeout                         | 30       |
| wait_timeout                              | 28800    |
+-------------------------------------------+----------+
15 rows in set (0.01 sec)
If you cache your connection for performance, it's normal to lose the connection after a while. To prevent this, you have to handle disconnection.
In addition, because you are working with Cloud Functions, only one request can be handled at a time on one instance (if you have 2 concurrent requests, you will get 2 instances). Thus, set your pool size to 1 to save resources on your database side (in case of heavy parallelization).
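A minimal sketch of that advice with SQLAlchemy (the URL and credentials below are placeholders; `pool_pre_ping` and `pool_recycle` are SQLAlchemy's standard options for detecting and refreshing stale connections, which is one common way to avoid the "server has gone away" error):

```python
from sqlalchemy import create_engine

def make_engine(url, **kw):
    """Engine tuned for a Cloud Function: tiny pool, stale-connection checks."""
    return create_engine(
        url,
        pool_pre_ping=True,  # test each connection at checkout; reconnect if dead
        pool_recycle=1800,   # retire connections before the server's wait_timeout
        pool_size=1,         # one connection per single-request function instance
        max_overflow=0,      # never open more connections than pool_size
        **kw,
    )

# In the function you'd pass the real Cloud SQL URL, e.g.:
# engine = make_engine("mysql+pymysql://user:password@host/dbname")
```

With `pool_pre_ping`, a connection the server has already dropped is detected and replaced transparently instead of surfacing as a broken pipe.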

Are "cleaned up" MySQL connections from a connection pool safe to delete?

Consider following list of connections:
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   |   15 | cleaned up             |
| 90850182 | Sleep   |  105 | cleaned up             |
| 88009697 | Sleep   |   38 | delayed commit ok done |
| 88000267 | Sleep   |    6 | delayed commit ok done |
| 88009819 | Sleep   |   38 | delayed commit ok done |
| 90634882 | Sleep   |   21 | cleaned up             |
| 90634878 | Sleep   |   21 | cleaned up             |
| 90634884 | Sleep   |   21 | cleaned up             |
| 90634875 | Sleep   |   21 | cleaned up             |
+----------+---------+------+------------------------+
After a short time (under a minute):
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   |    9 | cleaned up             |
| 88009697 | Sleep   |   32 | delayed commit ok done |
| 88000267 | Sleep   |    9 | delayed commit ok done |
| 88009819 | Sleep   |   31 | delayed commit ok done |
| 90634882 | Sleep   |   14 | cleaned up             |
| 90634878 | Sleep   |   14 | cleaned up             |
| 90634884 | Sleep   |   14 | cleaned up             |
| 90634875 | Sleep   |   14 | cleaned up             |
+----------+---------+------+------------------------+
8 rows in set (0.02 sec)
After I finished writing this Stack Overflow post:
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   |    0 | cleaned up             |
| 88009697 | Sleep   |   53 | delayed commit ok done |
| 88000267 | Sleep   |    0 | delayed commit ok done |
| 88009819 | Sleep   |   52 | delayed commit ok done |
| 90634882 | Sleep   |    5 | cleaned up             |
| 90634878 | Sleep   |    5 | cleaned up             |
| 90634884 | Sleep   |    5 | cleaned up             |
| 90634875 | Sleep   |    5 | cleaned up             |
+----------+---------+------+------------------------+
Context:
This is some third-party vendor app opening the connections (the source code isn't available to us, so we don't know the details). We know that their connection management is awful; they know it as well. It is awful because connections leak, which you can see in the first table: connection 90850182. While the others have their timers reset, this one starts to age indefinitely. In older versions of the app it would stay forever. In newer versions it is eventually captured by a "patch" which the vendor introduced, which effectively cleans connections after the x seconds you specify. So it's "a leak-healing patch".
The problem:
We are hosting hundreds of such vendor apps, and most of them have far more than 8 connections, as they have more traffic. That results in a disgusting number (thousands) of connections we have to maintain. About 80% of the connections sit in the "cleaned up" state for under 120 seconds (cleaned eventually by the aforementioned configurable app parameter).
This is all handled by Aurora RDS, and AWS engineers told us that if the app doesn't close connections properly, the standard wait_timeout isn't going to work. Well, wait_timeout becomes a useless decoration in AWS Aurora, but let us take that up with Jeff in another thread/topic.
So regardless, we have this magic configurable parameter from the third-party vendor set on this obscure app which controls eviction of stale connections, and it works.
The questions:
Is it safe to evict connections that are in the "cleaned up" state immediately?
At the moment this happens after 120 seconds, which results in a huge number of such connections. Yet in the tables above you can see that the timers get reset, meaning that something is happening to these connections and they are not entirely stale. I.e., does the app's connection pooling "touch" them for further re-use?
I don't possess knowledge of the inner guts of connection pools as seen from within the database. Are all reserved connections of a connection pool "sleeping" in the "cleaned up" state by default?
If so, if you start cleaning too aggressively, will you be fighting the connection pool as it creates more connections to replenish itself?
Or do reserved connections have some different state?
Even if you don't fully understand the context, I'd expect a veteran DBA or a connection-pool library maintainer to be able to help with such questions. Otherwise I will get my hands dirty and answer this myself eventually: I would try the Apache connection pool and HikariCP, observe them, kill their idle connections (simulating the magic parameter), and try this third-party app's connections with a 0-second magic parameter to see if it still works.
Appreciate your time :bow:.
The Answer
Yes, from AWS forum (https://forums.aws.amazon.com/thread.jspa?messageID=708499)
In Aurora the 'cleaned up' state is the final state of a connection
whose work is complete but which has not been closed from the client
side. In MySQL this field is left blank (no State) in the same
circumstance.
Also from the same post:
Ultimately, explicitly closing the connection in code is the best
solution here
From my personal experience as a MySQL DBA, and knowing that "cleaned up" represents a blank state, I'd definitely kill those connections.
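If you do go ahead and kill them, here is one way to script it (my own sketch, not from the AWS answer; it assumes the tab-separated output format of `mysql -e "SHOW PROCESSLIST"`, and the sample thread data below is illustrative):

```python
def kill_statements(processlist, state="cleaned up", min_time=0):
    """Build KILL statements from tab-separated SHOW PROCESSLIST output."""
    stmts = []
    lines = processlist.strip().splitlines()
    header = lines[0].split("\t")
    id_i, time_i, state_i = (header.index(c) for c in ("Id", "Time", "State"))
    for line in lines[1:]:
        cols = line.split("\t")
        if cols[state_i] == state and int(cols[time_i]) >= min_time:
            stmts.append(f"KILL {cols[id_i]};")
    return stmts

# Example with rows shaped like the first table in the question:
sample = (
    "Id\tUser\tHost\tdb\tCommand\tTime\tState\tInfo\n"
    "87997796\tapp\thost\tdb\tSleep\t15\tcleaned up\tNULL\n"
    "88009697\tapp\thost\tdb\tSleep\t38\tdelayed commit ok done\tNULL\n"
)
print(kill_statements(sample))  # ['KILL 87997796;']
```

Feeding the generated statements back through `mysql -e` would apply them; only do that once you're confident the app's pool recreates its connections gracefully.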

apache/mysql generate too many request on amazon centos

My issue is that my website opens too many connections on the MySQL server; this hangs my website and also generates many requests on Apache. I have installed the Apache modules mod_evasive and mod_security, and I have also emptied the iptables rules. Basically, I am using Amazon EC2, which provides security groups; I have blocked all outbound and inbound traffic and opened only ports 443, 80 and SSH (22). But still, when I run netstat it shows:
netstat -anp |grep 'tcp\|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
4 --0.0.0.0
4 ---119.159.195.199
25 ---54.69.254.252
374 ---
On the above, my question is: why does my server have 374 Apache connections on ::1:80, and how can I block or reduce them?
The MySQL connection stats are:
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Connections       | 5208  |
| Threads_cached    | 0     |
| Threads_connected | 54    |
| Threads_created   | 5207  |
| Threads_running   | 54    |
+-------------------+-------+
5 rows in set (0.00 sec)
My second question is: why do my MySQL connections keep increasing?
Please, can anyone help me? I would really appreciate it.

Killing sleeping processes in Mysql?

Can anyone tell me how I can kill all the sleeping processes?
I searched for it and found that we can do it with the command
mk-kill --match-command Sleep --kill --victims all --interval 10
I connected to the DB server (Linux) but got a "command not found" message.
I tried to connect via MySQL Administrator; it doesn't say "command not found", but it also doesn't execute the query, it just says I have an SQL error.
Log in to MySQL as admin:
mysql -uroot -ppassword
And then run the command:
mysql> show processlist;
You will get something like this:
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| Id | User        | Host               | db       | Command | Time | State | Info             |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| 49 | application | 192.168.44.1:51718 | XXXXXXXX | Sleep   |  183 |       | NULL             |
| 55 | application | 192.168.44.1:51769 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 56 | application | 192.168.44.1:51770 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 57 | application | 192.168.44.1:51771 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 58 | application | 192.168.44.1:51968 | XXXXXXXX | Sleep   |   11 |       | NULL             |
| 59 | root        | localhost          | NULL     | Query   |    0 | NULL  | show processlist |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
You will see the complete details of the different connections. Now you can kill a sleeping connection like this:
mysql> kill 55;
Query OK, 0 rows affected (0.00 sec)
kill $queryID; is helpful, but only when there is a single query causing the issue.
Having a lot of sleeping MySQL processes can cause a huge spike in your CPU load or I/O.
Here is a simple one-line command (assuming the MySQL server runs on Linux) which kills all of the currently sleeping MySQL processes:
for i in `mysql -e "show processlist" | awk '/Sleep/ {print $1}'` ; do mysql -e "KILL $i;"; done
This is only a temporary fix; I strongly advise identifying and addressing the problem's root cause.
For instance, you may set the wait_timeout variable to the amount of time you want MySQL to hold connections open before closing them.
But if the issue still persists and you have to investigate the DB queries that cause the problem, there is another way: in a screen session, you can use a while loop to continuously kill the sleeping queries (while there is output from show processlist, grep for Sleep, print the Id column with awk, and kill it). If you are using MySQL replication between different hosts, this will help the replicas catch up: in SHOW SLAVE STATUS\G, Seconds_Behind_Master will start to go down.
Of course, you should investigate the root cause again.

How to delete sleep process in Mysql

I found that my MySQL server has many connections that are sleeping, and I want to close them all.
So how can I configure my MySQL server to close or dispose of connections that are sleeping and not currently processing anything?
Is it possible to do this in MySQL? Tell me how I can do the following:
allow a connection to open a data reader only once, and destroy the connection [process] after it has responded to the query.
If you want to do it manually, you can do it like this:
Log in to MySQL as admin:
mysql -uroot -ppassword
And then run the command:
mysql> show processlist;
You will get something like this:
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| Id | User        | Host               | db       | Command | Time | State | Info             |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| 49 | application | 192.168.44.1:51718 | XXXXXXXX | Sleep   |  183 |       | NULL             |
| 55 | application | 192.168.44.1:51769 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 56 | application | 192.168.44.1:51770 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 57 | application | 192.168.44.1:51771 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 58 | application | 192.168.44.1:51968 | XXXXXXXX | Sleep   |   11 |       | NULL             |
| 59 | root        | localhost          | NULL     | Query   |    0 | NULL  | show processlist |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
You will see the complete details of the different connections. Now you can kill a sleeping connection like this:
mysql> kill 52;
Query OK, 0 rows affected (0.00 sec)
Why would you want to kill a sleeping thread? MySQL creates threads for connection requests, and when the client disconnects, the thread is put back into the cache to wait for another connection.
This avoids a lot of the overhead of creating threads on demand, and it's nothing to worry about. A sleeping thread uses about 256 KB of memory.
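To put that 256 KB figure in perspective, a quick back-of-envelope calculation (the thread count here is illustrative):

```python
PER_THREAD_KB = 256  # approximate memory per sleeping thread, per the answer above

def sleeping_thread_cost_mb(n_threads):
    """Total memory (in MB) held by n sleeping threads."""
    return n_threads * PER_THREAD_KB / 1024

# Even ~50 idle connections cost on the order of a dozen megabytes:
print(sleeping_thread_cost_mb(50))  # 12.5
```

So sleeping threads are rarely the reason a server runs out of RAM; thousands of them would still only add up to hundreds of megabytes.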
You can find all the running processes by executing the SQL:
show processlist;
You will see the sleeping processes there. If you want to terminate one, note its process Id and execute:
kill <process_id>;
But actually, you can set timeout variables in my.cnf:
wait_timeout=15
connect_timeout=10
interactive_timeout=100
For me, with MySQL Server on Windows,
I updated the file (because I could not set the variables with an SQL request, due to privileges):
D:\MySQL\mysql-5.6.48-winx64\my.ini
adding the lines:
wait_timeout=61
interactive_timeout=61
then restarted the service, and confirmed the new values with:
SHOW VARIABLES LIKE '%_timeout';
==> I did a connection test, and after 1 minute all 10+ sleeping connections had disappeared!