MySQL SHOW PROCESSLIST lists many processes in Sleep with Info = NULL

I'm running a stress test against my web app, which connects to a MySQL server, and I'm monitoring the output of SHOW PROCESSLIST.
When the load is high (heavy swap I/O) I get many processes like these:
| 97535 | db | localhost | userA | Sleep | 515 | | NULL
| 97536 | db | localhost | userA | Sleep | 516 | | NULL
| 97786 | db | localhost | userA | Sleep | 343 | | NULL
| 97889 | db | localhost | userA | Sleep | 310 | | NULL
But I can't understand why they are still there and haven't been killed. This eventually leads to my app using up all max_connections and it stops processing incoming requests...
Any idea what those processes are and what they are doing there? :)

Those are idle connections being held by a client. You should make sure that whatever client library you are using (JDBC, ...) is configured not to keep unused connections open for so long, and that (number of clients) × (maximum connections per client) isn't too big.
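If you want to see which client is holding the sleepers, a rough sketch (assuming MySQL 5.1 or later, where INFORMATION_SCHEMA.PROCESSLIST is available, and a user with the PROCESS privilege) is to group the sleeping threads by user and client host:
-- Count sleeping connections per user / client host / schema
-- and show how long the longest one has been idle.
SELECT user,
       SUBSTRING_INDEX(host, ':', 1) AS client_host,
       db,
       COUNT(*) AS sleeping,
       MAX(time) AS longest_idle_seconds
FROM information_schema.processlist
WHERE command = 'Sleep'
GROUP BY user, client_host, db
ORDER BY sleeping DESC;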

My guess is that you are using persistent connections, e.g. pconnect in PHP:
[..] when connecting, the function would first try to find a (persistent) link that's already open with the same host, username and password. If one is found, an identifier for it will be returned instead of opening a new connection
and
[..] the connection to the SQL server will not be closed when the execution of the script ends. Instead, the link will remain open for future use
I had a similar situation and was using CodeIgniter with pconnect turned on. After turning it off (see how), every connection was closed properly after use, and my MySQL processlist was empty.
Performance: the above is not an argument about performance; it simply explains why you might see a lot of sleeping connections in MySQL. Keeping the connections alive is not necessarily bad for performance.
More info at: http://www.mysqlperformanceblog.com/2006/11/12/are-php-persistent-connections-evil/
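If persistent connections are exhausting max_connections anyway, a stopgap sketch (the 300-second value is just an example, not a recommendation from the linked article) is to let the server itself reap idle connections much sooner than the 8-hour default:
-- Affects new connections only; put the same settings in my.cnf to survive a restart.
SET GLOBAL wait_timeout = 300;
SET GLOBAL interactive_timeout = 300;
-- Verify what the server is currently using:
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';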

Related

AWS CloudWatch DatabaseConnections metric differs from MySQL Threads_connected

In our MySQL RDS database on AWS, I can see the number of database connections by going into CloudWatch Metrics and selecting the DatabaseConnections metric. It reports 13, which is as expected:
2 Docker containers containing our app point to the database container. Each app has 2 database connections (read and write); the read connection has 5 threads in its pool and the write connection has 1, so 12 pooled connections to the database in total.
CloudWatch reports 13, not 12, but the extra one is me connecting to the database to check its status.
However, when I connect to the database and run:
show status where `variable_name` like '%threads_connected%';
I get this:
+-------------------+-------+
| Variable_name | Value |
+-------------------+-------+
| Threads_connected | 17 |
+-------------------+-------+
That's 4 more than I expected. Where could the other 4 be coming from? This makes monitoring the database connections feel unreliable if MySQL is telling me one thing and CloudWatch is telling me another.
Any ideas?
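One way to account for each of the 17 threads (a sketch, assuming your monitoring user has the PROCESS privilege so it can also see connections owned by other users, which may include RDS's own monitoring session) is to list them with their source host:
-- List every connected thread so the unexpected ones can be traced back
-- to a client host or user.
SELECT id, user, SUBSTRING_INDEX(host, ':', 1) AS client_host, db, command, time
FROM information_schema.processlist
ORDER BY user, client_host;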

Managing MySQL connections using HikariCP and Slick

I'm running a Scala application on this software stack:
Mac OS X 10.6.8 Snow Leopard
MySQL 5.1.46
Java 1.6.0_65
Scala 2.11.2
ConnectorJ 5.1.33
HikariCP 2.1.0
Slick 2.1.0
I cannot understand why open connections to MySQL stay open even after shutting the Scala app down. The only thing that looks right is that Threads_connected drops from 16 down to 1 (that remaining connection is the console from which I'm executing the 'show status' command).
mysql> show status like '%onn%';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| Aborted_connects | 0 |
| Connections | 77 |
| Max_used_connections | 19 |
| Ssl_client_connects | 0 |
| Ssl_connect_renegotiates | 0 |
| Ssl_finished_connects | 0 |
| Threads_connected | 1 |
+--------------------------+-------+
7 rows in set (0.00 sec)
The strange thing is that every time I run the app, the number of open connections to the DB grows by the maximum number of connections configured in the connection pool (HikariCP maximumPoolSize), hence I conclude the connections are never given back to the connection pool for reuse.
According to the Slick documentation, using
db withSession { implicit session =>
  /* entering the scope: one connection is taken from the pool */
  /* do something within the session using the connection I've gotten */
}
/* out of the 'withSession' scope: the connection should be released */
will take a connection from the pool when entering its scope and release it when leaving the scope.
Am I doing something wrong or did I get something wrong about connection pool usage on this software stack?
Connections is a counter of how many connection attempts have been made since the last time you started mysqld. This counter always increases; it does not decrease when the connections end.
That counter is not the number of current connections -- that's Threads_connected.
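A quick way to see the two side by side (a minimal sketch; the numbers will of course depend on your server's history):
-- 'Connections' is cumulative since startup and only ever grows;
-- 'Threads_connected' is the number of sessions open right now.
SHOW GLOBAL STATUS WHERE Variable_name IN ('Connections', 'Threads_connected');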

php pdo construct mysql call extremely slow on drupal 7 site using nginx and php-fpm

I have a Drupal website that calls PDO::__construct for a MySQL connection, pointed at a remote server at first and then changed to a local server to remove network latency as a sanity check.
With the local DB, xhprof reports the following, together with an extremely slow page load:
Function Name | Calls | Calls% | Incl. Wall Time(ms) | IWall% | Excl. Wall Time(ms) | EWall%
PDO::__construct | 6 | 0.0% | 120,084,724 | 91.6% | 120,084,724 | 91.6%
The PHP version is 5.4 on Debian Wheezy. The website runs on an nginx and php5-fpm stack. The MySQL version is 5.5.
The tables are MyISAM but were originally InnoDB and had the same issue.
Does anyone know what could be causing this delay in the connection?

SHOW PROCESSLIST in MySQL command: sleep

When I run SHOW PROCESSLIST in MySQL database, I get this output:
mysql> show full processlist;
+--------+------+-----------+--------+---------+-------+-------+-----------------------+
| Id     | User | Host      | db     | Command | Time  | State | Info                  |
+--------+------+-----------+--------+---------+-------+-------+-----------------------+
| 411665 | root | localhost | somedb | Sleep   | 11388 |       | NULL                  |
| 412109 | root | localhost | somedb | Query   |     0 | NULL  | show full processlist |
+--------+------+-----------+--------+---------+-------+-------+-----------------------+
I would like to understand the "Sleep" value under Command. What does it mean? Why has it been there for such a long time and showing NULL? It is making the database slow, and when I kill the process, things work normally again. Please help me.
It's not a query waiting for a connection; it's a connection waiting for the timeout to terminate it.
It doesn't have an impact on performance. The only thing it's using is a few bytes, as every connection does.
The real worst case: it's using up one connection from your pool. If you connected multiple times via the console client and just closed the client without closing the connection, you could use up all your connections and would have to wait for the timeout before being able to connect again... but this is highly unlikely :-)
See MySql Proccesslist filled with "Sleep" Entries leading to "Too many Connections"? and https://dba.stackexchange.com/questions/1558/how-long-is-too-long-for-mysql-connections-to-sleep for more information.
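If you want to know how far you actually are from that worst case, here is a small sketch using standard server variables and status counters:
SHOW VARIABLES LIKE 'max_connections';      -- the ceiling
SHOW STATUS LIKE 'Threads_connected';       -- sessions open right now
SHOW STATUS LIKE 'Max_used_connections';    -- high-water mark since the server started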
"Sleep" state connections are most often created by code that maintains persistent connections to the database.
This could include either connection pools created by application frameworks, or client-side database administration tools.
As mentioned above in the comments, there is really no reason to worry about these connections... unless of course you have no idea where the connection is coming from.
(CAVEAT: If you had a long list of these kinds of connections, there might be a danger of running out of simultaneous connections.)
I found this answer here: https://dba.stackexchange.com/questions/1558. In short, using the following (or setting it in my.cnf) will remove the timeout issue.
SET GLOBAL interactive_timeout = 180;
SET GLOBAL wait_timeout = 180;
This ends connections that remain in the Sleep state for 3 minutes (or whatever value you define).
Sleep means the thread is doing nothing.
The Time is so large because another thread ran a query but never disconnected from the server; the default wait_timeout is 28800, so you can set a smaller value, e.g. 10.
You can also kill the thread.
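A sketch of that manual cleanup (assuming a user with the PROCESS privilege to see the threads and permission to kill them; the 600-second threshold is only an example):
-- Find threads that have been sleeping for more than 10 minutes.
SELECT id, user, host, time
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 600;
-- Then terminate one of them by id (substitute an id returned above).
KILL 411665;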

Tracking down MySQL connection leaks

I have an application server (Jetty 6 on a Linux box) hosting 15 individual applications (individual WARs). Every 3 or 4 days I get an alert from Nagios regarding the number of open TCP connections. Upon inspection, I see that the vast majority of these connections are to the MySQL server.
netstat -ntu | grep TIME_WAIT
Shows 10,000+ connections on the MySQL server from the application server (notice the state is TIME_WAIT). If I restart jetty the connections drop to almost zero.
Some interesting values from a show status:
mysql> show status;
+--------------------------+-----------+
| Variable_name | Value |
+--------------------------+-----------+
| Aborted_clients | 244 |
| Aborted_connects | 695853860 |
| Connections | 697203154 |
| Max_used_connections | 77 |
+--------------------------+-----------+
A "show processlist" doesn't show anything out of the ordinary (which is what I would expect since most of the connections are idle - remember the TIME_WAIT state from above).
I have a TEST env for this server but it never has any issues. It obviously doesn't get much traffic and the application server is constantly getting restarted so debugging there isn't much help. I guess I could dig into each individual app and write a load test which would hit the database code, but this would take a lot of time / hassle.
Any ideas how I could track down the application that is grabbing all these connections and never letting go?
The answer seems to be adding the following entries in my.cnf under [mysqld]:
wait_timeout=60
interactive_timeout=60
I found it here (all the way at the bottom): http://community.livejournal.com/mysql/82879.html
The default wait time before a stale connection is killed is 28800 seconds (8 hours).
To verify:
mysql> show variables like 'wait_%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout | 60 |
+---------------+-------+
EDIT: I forgot to mention, I also added the following to my /etc/sysctl.conf:
net.ipv4.tcp_fin_timeout = 15
This is supposed to help lower the threshold the OS waits before reusing connection resources.
EDIT 2: /etc/init.d/mysql reload won't really reload your my.cnf (see the link below)
Possibly the connection pool(s) are misconfigured to hold on to too many connections, and they're keeping too many idle processes open on the server.
Aside from that, all I can think of is that some piece of code is holding onto a result set, but that seems less likely. To check whether it's a slow query that's timing out, you can also configure MySQL to write a slow query log; it will then log all queries that take longer than X seconds (the default is 10 seconds).
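A sketch of enabling that at runtime on MySQL 5.1 or later, without a restart (the log file path is only an example; add the equivalent lines to my.cnf to make the setting permanent):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 10;    -- seconds; 10 is the default mentioned above
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';    -- example path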
Well, one thing that comes to mind (although I'm not an expert on this) is to increase the logging on MySQL and hunt down all the connect/close messages. If that doesn't work, you can write a tiny proxy to sit in between the actual MySQL server and your suite of applications which does the extra logging, and then you'll know who is connecting and leaving.
SHOW PROCESSLIST shows the user, host and database for each thread. Unless all 15 of your apps are using the same combination, you should be able to differentiate them using this information.
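For 15 apps that's tedious to read line by line, so here is a small sketch (assuming each app connects with its own user or to its own database) that tallies the processlist instead:
-- Count open connections per user/database combination to see which app owns them.
SELECT user, db, COUNT(*) AS open_connections
FROM information_schema.processlist
GROUP BY user, db
ORDER BY open_connections DESC;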
I had the same problem, with 30,000+ connections in TIME_WAIT on my client server. I fixed the problem by adding the following to /etc/sysctl.conf:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
Then:
/sbin/sysctl -p
After 2 or 3 minutes, TIME_WAIT connections went from 30,000 down to 7,000.
/proc/sys/net/ipv4/tcp_fin_timeout was 60 on RHEL 7; tcp_tw_reuse and tcp_tw_recycle were changed to 1 and performance improved.