MySQL CPU increases when I have a sleeping connection that stays open

I have MySQL 5.6.27-0ubuntu0.14.04.1 running on a Google Compute Engine instance with 4 CPUs.
I noticed that if I have a connection that sleeps for a long time, the server's CPU usage increases linearly, and I don't understand why. If I kill the sleeping connection, CPU usage returns to normal.
To summarize, I have the following:
I notice the CPU usage of my instance increasing steadily.
Then I check the processlist on my server:
mysql> show processlist;
+-------+--------+-------------------+----------------+---------+------+-------+------------------+
| Id    | User   | Host              | db             | Command | Time | State | Info             |
+-------+--------+-------------------+----------------+---------+------+-------+------------------+
| 85949 | nafora | paper-eee-2:58461 | state_recorder | Sleep   | 1300 |       | NULL             |
| 85956 | nafora | paper-eee-2:58568 | state_recorder | Sleep   |   64 |       | NULL             |
| 85959 | root   | localhost         | NULL           | Query   |    0 | init  | show processlist |
+-------+--------+-------------------+----------------+---------+------+-------+------------------+
You can see I have just two connections in the Sleep state, and one has been there for 1300 seconds (because I have a process that is stuck with its connection open).
So I kill connection 85949, and the CPU usage drops right away.
Can someone explain why a single sleeping connection can impact my database like this?
Thanks.

Non-closed connections or long-running slow queries can cause this behavior. You can limit non-closed connections by setting the global variable wait_timeout to a reasonable value, and set the related variable interactive_timeout according to best practice.
Stateful applications that use a connection pool (Java, .NET, etc.) need wait_timeout to match their connection pool settings. The default of 8 hours (wait_timeout = 28800) works well with properly configured connection pools.
Configure wait_timeout to be slightly longer than the connection lifetime the application's pool expects; this acts as a safety net. Also profile your queries to monitor the performance of the MySQL instance and avoid I/O bottlenecks.
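For example, a minimal sketch of checking and adjusting both timeouts at runtime (the 600-second value is only illustrative; pick one that matches your pool settings, and note that sessions already open keep the value they were created with):

mysql> SHOW GLOBAL VARIABLES WHERE Variable_name IN ('wait_timeout', 'interactive_timeout');
mysql> SET GLOBAL wait_timeout = 600;
mysql> SET GLOBAL interactive_timeout = 600;

To make the change survive a restart, also set wait_timeout and interactive_timeout under the [mysqld] section of my.cnf.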

ProxySQL Client Connections

When I query the ProxySQL client connection stats:
select * from stats.stats_mysql_global where variable_name like 'Client_connection%';
+-------------------------------------+----------------+
| Variable_Name                       | Variable_Value |
+-------------------------------------+----------------+
| Client_Connections_aborted          | 0              |
| Client_Connections_connected        | 495            |
| Client_Connections_created          | 43785          |
| Client_Connections_non_idle         | 495            |
| Client_Connections_hostgroup_locked | 0              |
+-------------------------------------+----------------+
Client_Connections_non_idle is always the same as Client_Connections_connected.
But when I query the process list:
show processlist;
the Command field for every connection is Sleep.
I would expect Client_Connections_non_idle to be zero, but it is not in my case.
What is wrong with my reasoning?
Thanks for any explanation.
I am using ProxySQL v2.4.2.
According to the ProxySQL documentation:
Client_Connections_non_idle: number of client connections that are currently handled by the main worker threads. If ProxySQL isn't running with --idle-threads, Client_Connections_non_idle is always equal to Client_Connections_connected.
So unless you run ProxySQL with --idle-threads, this is expected behaviour and nothing to worry about. You can read more about idle threads in the documentation.
I already found the solution to my problem. I had set wait_timeout too high; after I set wait_timeout to 30000 (30 seconds, since ProxySQL's wait_timeout is expressed in milliseconds), it went back to normal.
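For reference, a sketch of how that change can be applied through the ProxySQL admin interface (commonly on port 6032); again, mysql-wait_timeout is in milliseconds, so 30000 means 30 seconds:

UPDATE global_variables SET variable_value='30000' WHERE variable_name='mysql-wait_timeout';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;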

What is the best solution to MySQL connection timeouts?

I am writing a small web app in Go that uses MySQL to store data.
I get an intermittent MySQL error if the web server hasn't received any requests for some amount of time (more than 8 hours):
[mysql] 2017/02/08 16:31:56 packets.go:33: unexpected EOF
[mysql] 2017/02/08 16:31:56 packets.go:130: write tcp 127.0.0.1:49188->127.0.0.1:3306: write: broken pipe
I found some related discussions on GitHub (issue 529, issue 257, and issue 446). From what I understand, the MySQL server closes the connection when its timeout is reached.
I tried setting SetMaxOpenConns to 9 and SetMaxIdleConns to 0, as some people recommended. However, this produced the error immediately. (But if I set SetMaxIdleConns to a value larger than 0, no error occurred immediately.)
I also tried setting SetConnMaxLifetime to 5 minutes. This produced the error too, after 5 minutes.
Now I am trying the code below:
db.SetConnMaxLifetime(0)
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(5)
It has been running for 20 mins. It's still too early to tell.
(UPDATE: this doesn't work either)
Here is my configuration:
driver: go-sql-driver V1.3.
go version: go1.7.1 darwin/amd64
mysql: latest from docker hub
rkt version: 1.18
CoreOS: 1284.0.0
Perhaps you can start a heartbeat goroutine to avoid the timeout.
You can check your MySQL wait_timeout variable:
mysql> show global variables like 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout  | 300   |
+---------------+-------+
1 row in set (0.00 sec)
Then use db.SetConnMaxLifetime(120*time.Second), which means sql.DB will close a pooled connection once it has been open for more than 120 seconds and open or reuse another one for the next request, keeping it below the server's 300-second wait_timeout. If you don't set a maximum connection lifetime, you may end up using a connection the server has already closed and get the error above.
Watching the MySQL process list (mysql> show processlist;), you can see that a connection that sleeps for more than 300 seconds is closed by MySQL:
mysql> show processlist;
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
| Id    | User            | Host             | db          | Command | Time    | State                  | Info             |
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
|     4 | event_scheduler | localhost        | NULL        | Daemon  | 1363480 | Waiting on empty queue | NULL             |
| 26539 | root            | 172.17.0.1:48732 | NULL        | Query   |       0 | starting               | show processlist |
| 26575 | auditcenter     | 172.17.0.1:51714 | obs_gb_test | Sleep   |      51 |                        | NULL             |
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
3 rows in set (0.00 sec)
SetMaxOpenConns and SetMaxIdleConns are used to control the size of the connection pool; see the database/sql package documentation.
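To pick a safe value for SetConnMaxLifetime, you can check from the server side which timeout will actually apply to the driver's sessions (go-sql-driver connects as a non-interactive client, so wait_timeout is the one that matters), then keep the maximum lifetime comfortably below it; a sketch:

mysql> SHOW GLOBAL VARIABLES WHERE Variable_name IN ('wait_timeout', 'interactive_timeout');
mysql> SHOW SESSION VARIABLES LIKE 'wait_timeout';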

How do I close connections in MySQL?

What exactly does this mean:
mysql> show status like "Conn%";
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| Connection_errors_accept          |     0 |
| Connection_errors_internal        |     0 |
| Connection_errors_max_connections |     0 |
| Connection_errors_peer_address    |     0 |
| Connection_errors_select          |     0 |
| Connection_errors_tcpwrap         |     0 |
| Connections                       |    16 | <-- This value
+-----------------------------------+-------+
7 rows in set (0.00 sec)
Is this a count of how many times I've connected, or a count of how many open connections exist?
Assuming it's the number of open connections, how do I close them?
dev.mysql.com/doc/refman/5.0/en/server-status-variables.html
Okay, thanks to fqdn for the link to the answer. Connections is just a historical count of past connection attempts.
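If what you actually want is the number of currently open connections, look at Threads_connected instead (SHOW PROCESSLIST lists each of them individually); a quick check:

mysql> SHOW GLOBAL STATUS LIKE 'Threads_connected';
mysql> SHOW PROCESSLIST;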
Connections are usually closed by whoever opened them, so in the general case you as the DBA shouldn't close them.
Moreover, in most cases, if the client application crashes, the server will be notified (the TCP protocol usually takes care of that) and the connection will be closed automatically.
But in some cases the server is not notified that the client went down (e.g. the whole client machine crashed, or some router in the middle went down). If TCP does not detect those dead connections (via timeout or keepalive), the MySQL server will close them after wait_timeout.
If the DBA still wants to force-close a connection (e.g. because of suspected malicious activity, or because a connection is stuck or eating too many resources), they can use the SQL command KILL followed by the Id from the output of SHOW PROCESSLIST.
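A sketch, using a hypothetical thread id 12345 taken from SHOW PROCESSLIST (KILL with no modifier is the same as KILL CONNECTION and drops the whole session; KILL QUERY only aborts the statement that connection is currently running):

mysql> SHOW PROCESSLIST;
mysql> KILL QUERY 12345;   -- abort only the running statement, keep the session
mysql> KILL 12345;         -- or drop the whole connection (same as KILL CONNECTION 12345)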

Process that was killed is still in my processlist

The thread that I killed is still in my process list. How do I get rid of it?
+-----+------+-----------+---------+---------+-------+-----------+----------------------------------------------------------------------+
| Id  | User | Host      | db      | Command | Time  | State     | Info                                                                 |
+-----+------+-----------+---------+---------+-------+-----------+----------------------------------------------------------------------+
| 678 | root | localhost | hthtthv | Killed  | 36923 | query end | INSERT INTO `gtgttg` VALUES (1,'tgtg'),(2,'Shopping'),(4,'tgtgtg'),( |
| 695 | root | localhost | NULL    | Query   |     0 | NULL      | show processlist                                                     |
+-----+------+-----------+---------+---------+-------+-----------+----------------------------------------------------------------------+
2 rows in set (0.00 sec)
It needs to roll back the changes it made, so this can take a long time. If it is an InnoDB table, you can for instance look at this question: https://dba.stackexchange.com/questions/5654/internal-reason-for-killing-process-taking-up-long-time-in-mysql
So in the end, you need to wait for the rollback to finish.
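If you want to see how far the rollback has progressed, a sketch against InnoDB's transaction table (thread id 678 is the one from the processlist above):

mysql> SELECT trx_mysql_thread_id, trx_state, trx_rows_modified, trx_started
    ->   FROM information_schema.INNODB_TRX
    ->  WHERE trx_mysql_thread_id = 678;

trx_state should show ROLLING BACK, and you can watch trx_rows_modified to gauge how much work remains.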
In my case, my /var partition was full, where the MySQL binlogs are written. Once I freed some disk space, the killed connections immediately went away.
I know this is an old question. I faced a similar situation, where the thread was stuck in killed status.
Instead of forcibly killing mysqld, I issued:
sudo service mysqld stop
The command may differ depending on your OS, but the point is to ask the system to terminate the daemon gracefully. It kills the stuck thread, and there are no side effects on the database because it is a normal shutdown.
The other suggestion above was to kill the process, which forces crash recovery and can have its own issues, so I would recommend the graceful daemon stop.
Hope that helps.

Grails/Hibernate Database crashes under load: Unable to connect (even when pooling)

I have an application in Grails.
I use Hibernate to access the database (per standard Grails conventions).
I use MySQL, and the site works and has been stable for 6 months.
I am doing load testing, and recently discovered that the database rejects connections when under load.
Using MySQL Server 5, I can see threads connected hovering around 20, though it jumps between 11 and 30.
mysql> show status like '%con%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| Aborted_connects         |    72 |
| Connections              | 65539 |
| Max_used_connections     |   101 |
| Ssl_client_connects      |     0 |
| Ssl_connect_renegotiates |     0 |
| Ssl_finished_connects    |     0 |
| Threads_connected        |     1 |
+--------------------------+-------+
7 rows in set (0.00 sec)
My database configuration is standard. (The MySQL server is installed locally; its configuration is not shown.)
dataSource {
    pooled = false
    driverClassName = "com.mysql.jdbc.Driver"
    username = "username"
    password = "secret"
    maxIdle = 15
    maxActive = 100
}
Should I investigate C3P0? Or should I ratchet up my maxActive to 1000 and hope for the best?
What error is Grails reporting when it can't get a database connection? Timeout? Refused?
When you run your test, how loaded is the box? Percent CPU, memory usage, etc.
It's possible the database is just so overloaded that Grails is timing out getting connections. If you want to handle load, you will want to go to pooled DB connections. Without pooling, Grails will open and close a DB connection with each request.
Check your MySQL configuration (/etc/mysql.conf or its local equivalent), particularly the max_connections and max_user_connections settings; this sounds as though it may be coming from MySQL and not Grails.
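A quick way to check those limits and how close the server has come to them (a sketch; the statements only read current settings):

mysql> SHOW GLOBAL VARIABLES WHERE Variable_name IN ('max_connections', 'max_user_connections');
mysql> SHOW GLOBAL STATUS LIKE 'Max_used_connections';

In the status output above, Max_used_connections is 101, which is what you would expect if max_connections is 100 (MySQL allows one extra connection for a SUPER user), so the server has most likely been hitting its connection limit during the load test.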