I found that my MySQL server has many connections that are sleeping, and I want to kill them all.
How can I configure my MySQL server so that it drops or disposes of connections that are sleeping rather than currently processing a query?
Is this possible in MySQL? If so, tell me how I can do the following:
a connection allows only one open data reader at a time, and the connection (process) is destroyed after the query response has been returned.
If you want to do it manually, you can do it like this:
Log in to MySQL as admin:
mysql -uroot -ppassword
Then run the command:
mysql> show processlist;
You will get something like the following:
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| Id | User        | Host               | db       | Command | Time | State | Info             |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| 49 | application | 192.168.44.1:51718 | XXXXXXXX | Sleep   |  183 |       | NULL             |
| 55 | application | 192.168.44.1:51769 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 56 | application | 192.168.44.1:51770 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 57 | application | 192.168.44.1:51771 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 58 | application | 192.168.44.1:51968 | XXXXXXXX | Sleep   |   11 |       | NULL             |
| 59 | root        | localhost          | NULL     | Query   |    0 | NULL  | show processlist |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
You will see the complete details of the different connections. Now you can kill a sleeping connection by its Id, as below:
mysql> kill 55;
Query OK, 0 rows affected (0.00 sec)
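If there are many sleeping connections, you can also generate the KILL statements in bulk instead of typing them one by one; here is a sketch using information_schema (the 10-second minimum age is an arbitrary choice):

mysql> SELECT CONCAT('KILL ', id, ';') AS kill_stmt
    ->   FROM information_schema.processlist
    ->  WHERE command = 'Sleep' AND time > 10;

Copy the resulting statements back into the client and run them.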
Why would you want to kill a sleeping thread? MySQL creates threads for connection requests, and when the client disconnects the thread is put back into the thread cache, where it waits for another connection.
This avoids much of the overhead of creating threads on demand, and it's nothing to worry about: a sleeping thread uses only about 256 KB of memory.
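You can watch the thread cache at work yourself:

mysql> SHOW VARIABLES LIKE 'thread_cache_size';
mysql> SHOW STATUS LIKE 'Threads%';

Threads_cached shows how many idle threads are parked in the cache, and Threads_created stays low when the cache is absorbing reconnects.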
You can find all working processes by executing this SQL:
show processlist;
You will find the sleeping processes. If you want to terminate one, note its process Id and execute this SQL:
kill <process_id>;
But you can also set timeout variables in my.cnf:
wait_timeout=15
connect_timeout=10
interactive_timeout=100
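If you can't edit my.cnf right away, the same variables can usually be set at runtime (this requires the SUPER privilege, or SYSTEM_VARIABLES_ADMIN in MySQL 8.0, and only affects connections opened afterwards):

mysql> SET GLOBAL wait_timeout = 15;
mysql> SET GLOBAL interactive_timeout = 100;
mysql> SHOW VARIABLES LIKE '%_timeout';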
For me, with MySQL Server on Windows, I updated the configuration file directly (because I could not set the variables with an SQL statement, due to privileges):
D:\MySQL\mysql-5.6.48-winx64\my.ini
adding these lines under the [mysqld] section:
wait_timeout=61
interactive_timeout=61
Then I restarted the service and confirmed the new values with:
SHOW VARIABLES LIKE '%_timeout';
I ran a connection test, and after 1 minute all 10+ sleeping connections had disappeared!
Consider the following list of connections:
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   |   15 | cleaned up             |
| 90850182 | Sleep   |  105 | cleaned up             |
| 88009697 | Sleep   |   38 | delayed commit ok done |
| 88000267 | Sleep   |    6 | delayed commit ok done |
| 88009819 | Sleep   |   38 | delayed commit ok done |
| 90634882 | Sleep   |   21 | cleaned up             |
| 90634878 | Sleep   |   21 | cleaned up             |
| 90634884 | Sleep   |   21 | cleaned up             |
| 90634875 | Sleep   |   21 | cleaned up             |
+----------+---------+------+------------------------+
After a short time (under a minute):
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   |    9 | cleaned up             |
| 88009697 | Sleep   |   32 | delayed commit ok done |
| 88000267 | Sleep   |    9 | delayed commit ok done |
| 88009819 | Sleep   |   31 | delayed commit ok done |
| 90634882 | Sleep   |   14 | cleaned up             |
| 90634878 | Sleep   |   14 | cleaned up             |
| 90634884 | Sleep   |   14 | cleaned up             |
| 90634875 | Sleep   |   14 | cleaned up             |
+----------+---------+------+------------------------+
8 rows in set (0.02 sec)
After I finished writing this Stack Overflow post:
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   |    0 | cleaned up             |
| 88009697 | Sleep   |   53 | delayed commit ok done |
| 88000267 | Sleep   |    0 | delayed commit ok done |
| 88009819 | Sleep   |   52 | delayed commit ok done |
| 90634882 | Sleep   |    5 | cleaned up             |
| 90634878 | Sleep   |    5 | cleaned up             |
| 90634884 | Sleep   |    5 | cleaned up             |
| 90634875 | Sleep   |    5 | cleaned up             |
+----------+---------+------+------------------------+
Context:
This is a third-party vendor app opening the connections (the source code isn't available to us, so we don't know the details). We know that its connection management is awful, and they know it as well. It is awful because connections leak, which you can see in the first table: 90850182. While the others have their timers reset, this one starts to age indefinitely. In older versions of the app it would stay forever. In newer versions it is eventually caught by a "patch" the vendor introduced, which effectively cleans up connections after the x seconds you specify. So it's "a leak-healing patch".
The problem:
We are hosting hundreds of such vendor apps, and most of them have far more than 8 connections as they have more traffic. That results in a disgusting number (thousands) of connections we have to maintain. About 80% of the connections sit in the "cleaned up" state for under 120 seconds (cleaned up eventually by the aforementioned configurable app parameter).
This is all handled by Aurora RDS, and AWS engineers told us that if the app doesn't close connections properly, the standard wait_timeout isn't going to work. Well, wait_timeout becomes a useless decoration in AWS Aurora, but let us take that up with Jeff in another thread/topic.
So regardless, we have this magic configurable parameter from the third-party vendor set on this obscure app, which controls the eviction of stale connections, and it works.
The questions:
Is it safe to evict connections that are in the "cleaned up" state immediately?
At the moment this happens after 120 seconds, which results in a huge number of such connections. Yet in the tables above you can see that the timers get reset, meaning that something is happening to these connections and they are not entirely stale. I.e., does the app's connection pooling "touch" them for further re-use?
I don't possess knowledge of a connection pool's inner guts as seen from within the database. Are all reserved connections of a connection pool by default "sleeping" in the "cleaned up" state?
So if you start evicting too aggressively, will you be fighting the connection pool as it creates more connections to replenish itself?
Or do reserved connections have some different state?
Even if you don't fully understand the context, I'd expect a veteran DBA or connection pool library maintainer to be able to help with such questions. Otherwise I'll get my hands dirty and answer this myself eventually: I'd try an Apache connection pool and Hikari, observe them, try killing their idle connections (simulating the magic parameter), and try this third-party app's connections with a 0-second magic parameter to see if it still works.
Appreciate your time :bow:.
The Answer
Yes. From the AWS forum (https://forums.aws.amazon.com/thread.jspa?messageID=708499):
In Aurora the 'cleaned up' state is the final state of a connection whose work is complete but which has not been closed from the client side. In MySQL this field is left blank (no State) in the same circumstance.
Also from the same post:
Ultimately, explicitly closing the connection in code is the best solution here.
From my personal experience as a MySQL DBA, and knowing that "cleaned up" represents a blank state, I'd definitely kill those connections.
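If you'd rather evict them yourself than wait for the vendor parameter, here is a sketch of a bulk kill restricted to that state, assuming your Aurora MySQL edition exposes the standard information_schema.processlist view:

mysql> SELECT CONCAT('KILL ', id, ';') AS kill_stmt
    ->   FROM information_schema.processlist
    ->  WHERE command = 'Sleep' AND state = 'cleaned up';

Run the generated statements, and monitor whether the app's pool simply recreates the connections.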
I am writing a small web app in Go which uses MySQL to store data.
I get an intermittent MySQL error if the web server doesn't receive any request for some amount of time (> 8 hours):
[mysql] 2017/02/08 16:31:56 packets.go:33: unexpected EOF
[mysql] 2017/02/08 16:31:56 packets.go:130: write tcp 127.0.0.1:49188->127.0.0.1:3306: write: broken pipe
I found some related discussion on GitHub (issue 529, issue 257 and issue 446). From what I understand, the MySQL server closes the connection when its timeout is reached.
I tried setting SetMaxOpenConns to 9 and SetMaxIdleConns to 0, as some people recommended. However, this threw an error immediately. (But if I set SetMaxIdleConns larger than 0, no immediate error was thrown.)
I also tried setting SetConnMaxLifetime to 5 minutes. This threw an error too, after 5 minutes.
Now I am trying the code below:
db.SetConnMaxLifetime(0)
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(5)
It has been running for 20 mins. It's still too early to tell.
(UPDATE: this doesn't work either)
Here is the configuration:
driver: go-sql-driver V1.3.
go version: go1.7.1 darwin/amd64
mysql: latest from docker hub
rkt version: 1.18
CoreOS: 1284.0.0
Perhaps you can start a heartbeat goroutine to avoid the timeout.
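For example, a minimal sketch of such a heartbeat (assuming db is your *sql.DB, and a one-minute interval that sits well below the server's wait_timeout):

import (
	"database/sql"
	"log"
	"time"
)

// startHeartbeat pings the database periodically so connections
// do not sit idle long enough for the server to close them.
func startHeartbeat(db *sql.DB) {
	go func() {
		ticker := time.NewTicker(1 * time.Minute)
		defer ticker.Stop()
		for range ticker.C {
			if err := db.Ping(); err != nil {
				log.Println("heartbeat ping failed:", err)
			}
		}
	}()
}

Call startHeartbeat(db) once after opening the database.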
You can check your MySQL wait_timeout variable:
mysql> show global variables like 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout  | 300   |
+---------------+-------+
1 row in set (0.00 sec)
Then use db.SetConnMaxLifetime(120*time.Second), which means that when a connection has existed for more than 120s it is discarded, and sql.DB will open a new connection or fetch another from the pool as needed. If you don't set a maximum connection lifetime, you may end up using a connection that the server has already closed, and you'll get the error above.
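Putting these together, a minimal sketch (the DSN is hypothetical; the lifetime assumes the 300s wait_timeout shown above):

db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/mydb") // hypothetical DSN
if err != nil {
	log.Fatal(err)
}
// Recycle connections well before the server's 300s wait_timeout so the
// pool never hands out a connection the server has already closed.
db.SetConnMaxLifetime(120 * time.Second)
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(5)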
Watching the MySQL process list, you can see that if a connection sleeps for longer than 300s, it is recycled by MySQL:
mysql> show processlist;
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
| Id    | User            | Host             | db          | Command | Time    | State                  | Info             |
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
| 4     | event_scheduler | localhost        | NULL        | Daemon  | 1363480 | Waiting on empty queue | NULL             |
| 26539 | root            | 172.17.0.1:48732 | NULL        | Query   |       0 | starting               | show processlist |
| 26575 | auditcenter     | 172.17.0.1:51714 | obs_gb_test | Sleep   |      51 |                        | NULL             |
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
3 rows in set (0.00 sec)
SetMaxOpenConns and SetMaxIdleConns are used to manage the pool's connection resources; see the database/sql package documentation for details.
I have a thread showing in phpMyAdmin under processes. However, when I click kill, I get the error:
phpMyAdmin was unable to kill thread 148. It probably has already been closed.
Why does this thread still show as active? How can I remove it entirely?
Open the mysql client and type:
mysql> show processlist;
+-----+------+-----------+------+---------+------+-------+------------------+-----------+---------------+-----------+
| Id  | User | Host      | db   | Command | Time | State | Info             | Rows_sent | Rows_examined | Rows_read |
+-----+------+-----------+------+---------+------+-------+------------------+-----------+---------------+-----------+
| 106 | root | localhost | NULL | Query   |    0 | NULL  | show processlist |         0 |             0 |         0 |
+-----+------+-----------+------+---------+------+-------+------------------+-----------+---------------+-----------+
1 row in set (0.00 sec)
You'll see the processes with their IDs; then you can do this:
mysql> kill 106;
and your process (id = 106) will be killed.
Between the time that phpMyAdmin received the list of processes and the time you clicked to kill one of them, this process had finished by itself.
See also https://sourceforge.net/p/phpmyadmin/feature-requests/1490/.
This phenomenon is caused by the connection used to access phpMyAdmin itself, hence it doesn't show up in a direct MySQL query. It can't be killed, as that would close the phpMyAdmin connection.
How can I send a mail alert for MySQL?
Can we send an alert when MySQL has a large number of connections, or when MySQL is not responding properly? Can someone help me solve this problem?
You can do this in a number of ways. The SHOW FULL PROCESSLIST; query gives you information about the number of connections as well as the query being executed by each connection (thread). A sample result is as follows.
mysql> SHOW FULL PROCESSLIST;
+------+------+--------------------+------+---------+------+-------+-----------------------+
| Id   | User | Host               | db   | Command | Time | State | Info                  |
+------+------+--------------------+------+---------+------+-------+-----------------------+
| 1298 | root | 192.168.1.76:37648 | NULL | Sleep   |    0 |       | NULL                  |
| 1491 | root | localhost          | NULL | Query   |    0 | init  | show full processlist |
+------+------+--------------------+------+---------+------+-------+-----------------------+
If you are only concerned with the number of current connections (threads), you can use the following query:
mysql> SHOW STATUS WHERE `variable_name` = 'Threads_connected';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Threads_connected | 2     |
+-------------------+-------+
Now, about the mail alerts: you can set up a cron job (shell script) to fire a mail alert as soon as the number of current connections exceeds a certain limit. The mail command can be used for this:
$ echo "Max MySQL Connections reached" | mail -s "your subject" your@email.com
Also, I came across a great MySQL monitoring tool, MONyog. It lets you set up mail alerts for any MySQL variable.
Can anyone tell me how I can kill all the sleeping processes?
I searched for it and found that it can be done with this command:
mk-kill --match-command Sleep --kill --victims all --interval 10
I connected to the DB server (Linux), but I got the message that the command was not found.
I also tried to run it via MySQL Administrator; that doesn't say the command is not found, but it doesn't execute the query either, it just says I have an SQL error.
The same manual approach shown earlier works here: log in to MySQL as admin (mysql -uroot -ppassword), run show processlist; to see the complete details of the different connections, and then kill each sleeping connection by its Id:
mysql> kill 55;
Query OK, 0 rows affected (0.00 sec)
kill <query_id>; is helpful, but only when a single query is causing the issue.
Having a lot of sleeping MySQL processes can cause a huge spike in your CPU load or I/O.
Here is a simple one-line command (if the MySQL server runs on Linux) which will kill all of the current sleeping MySQL processes:
for i in `mysql -e "show processlist" | awk '/Sleep/ {print $1}'` ; do mysql -e "KILL $i;"; done
This is only a temporary fix; I strongly advise identifying and addressing the problem's root cause.
For instance, you can set the wait_timeout variable to the amount of time you want MySQL to hold connections open before closing them.
But if the issue still persists and you have to investigate the DB queries causing the problem, there is another way: in a screen session, you can run a while loop that continuously kills the sleeping queries (while there is output from show processlist, grep -i sleep, take the Id column with awk, and kill it), as shown in the sketch below. If you are using MySQL replication between different hosts, this will help the replicas catch up, so in show slave status\G the Seconds_Behind_Master value will shrink.
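A minimal sketch of such a loop (the 10-second pause is an arbitrary choice):

while true; do
    for i in $(mysql -e "show processlist" | grep -i sleep | awk '{print $1}'); do
        mysql -e "KILL $i;"
    done
    sleep 10
done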
Of course, you should investigate the root cause again.