Shell script to auto-kill MySQL sleep processes

How can we kill MySQL sleep processes like these:
+------+-----------+-----------+------------------------+---------+------+----------------+-------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+------+-----------+-----------+------------------------+---------+------+----------------+-------------------------------------------------------------------------------------------+
| 2477 | stageuser | localhost | jj_production_11102013 | Query | 0 | end | SELECT * FROM wp_comments WHERE blog_id = 1071 ORDER BY comment_date_gmt DESC LIMIT 0, 50 |
| 3050 | stageuser | localhost | jj_production_11102013 | Query | 0 | Sorting result | SELECT * FROM wp_comments WHERE blog_id = 1071 ORDER BY comment_date_gmt DESC LIMIT 0, 50 |
| 3052 | stageuser | localhost | jj_production_11102013 | Sleep | 336 | | NULL |
| 3056 | stageuser | localhost | NULL | Query | 0 | NULL | show processlist |
| 3057 | stageuser | localhost | jj_production_11102013 | Sleep | 301 | | NULL |
| 3058 | stageuser | localhost | jj_production_11102013 | Sleep | 299 | | NULL |
| 3059 | stageuser | localhost | jj_production_11102013 | Sleep | 298 | | NULL |
| 3061 | stageuser | localhost | jj_production_11102013 | Sleep | 273 | | NULL |
| 3068 | stageuser | localhost | jj_production_11102013 | Sleep | 251 | | NULL |
| 3072 | stageuser | localhost | jj_production_11102013 | Sleep | 233 | | NULL |
| 3111 | stageuser | localhost | jj_production_11102013 | Sleep | 1 | | NULL |
+------+-----------+-----------+------------------------+---------+------+----------------+-------------------------------------------------------------------------------------------+
11 rows in set (0.00 sec)
Do these sleep processes affect site performance, for example by slowing down other queries?

Here is how I did it.
Create a kill_sleep.sh file:
mysql -u<user> -p<password> -h<host> -e "select concat('KILL ',id,';') into outfile '/tmp/sleep_processes.txt' from information_schema.processlist where Command = 'Sleep'"
mysql -u<user> -p<password> -h<host> -e "source /tmp/sleep_processes.txt;"
rm -f /tmp/sleep_processes.txt
Then schedule kill_sleep.sh as a cron job.
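For example, a crontab entry could run it every few minutes (the schedule, script path, and log location below are illustrative assumptions, not part of the original setup):
# run the sleep-killer every 5 minutes and keep a log of what it did
*/5 * * * * /bin/bash /path/to/kill_sleep.sh >> /var/log/kill_sleep.log 2>&1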

Vishal's answer works well if you're running the command on the MySQL server itself, but it won't work if you're connecting to the server remotely or if you don't have permission to run SOURCE or SELECT ... INTO OUTFILE (e.g. Amazon's RDS). It's possible to rewrite it so that it doesn't rely on those features, and then it'll work anywhere (-N suppresses the column-header line so it doesn't get passed to mysql as a bogus statement):
mysql -h<host> -u<user> -p -N -e "SELECT CONCAT('KILL ',id,';') FROM information_schema.processlist WHERE Command = 'Sleep'" > sleep.txt
cat sleep.txt | xargs -I% mysql -h<host> -u<user> -p -e "%"

The syntax is:
KILL thread_id;
In your case:
mysql> KILL 3057;
But to kill all the sleep processes, a single command isn't enough; you need to loop through the whole processlist, for example by collecting all the KILL statements first and then running them:
select concat('KILL ',id,';') from information_schema.processlist where Command='Sleep';
select concat('KILL ',id,';') from information_schema.processlist where Command='Sleep' into outfile '/tmp/a.txt';
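For completeness, the second statement only writes the KILL statements to /tmp/a.txt; they still have to be executed, for example by sourcing the file from the mysql client:
mysql> SOURCE /tmp/a.txt;
Afterwards remove the file (rm -f /tmp/a.txt), since INTO OUTFILE refuses to overwrite an existing file on the next run.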
Referred from here

An easy way:
for i in `mysql -e "show processlist" | awk '/Sleep/ {print $1}'` ; do mysql -e "KILL $i;"; done
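A variant of the same idea, querying information_schema.processlist instead of parsing show processlist output, and only touching threads that have been idle for a while (the 300-second threshold is an arbitrary illustrative value):
for i in $(mysql -N -e "SELECT id FROM information_schema.processlist WHERE command = 'Sleep' AND time > 300"); do mysql -e "KILL $i;"; done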

Percona Tools:
pt-kill --match-command Sleep --idle-time 100 --victims all --interval 30 --kill
This finds all connections that are in the Sleep state and have been idle for 100 seconds or more, and kills them. --interval 30 makes it repeat the check every 30 seconds. So you can open a screen session (screen -S ptkill), run the above command in it, then press Ctrl-A, D to detach and close the terminal, and it will just keep running and cleaning up your connections.
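As a quick sketch of that workflow (the session name is arbitrary):
screen -S ptkill       # start a named screen session
pt-kill --match-command Sleep --idle-time 100 --victims all --interval 30 --kill
# press Ctrl-A, D to detach; reattach later with:
screen -r ptkill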
https://www.percona.com/doc/percona-toolkit/2.1/pt-kill.html

Related

MySQL Events Execution Cycle

I created an event in MySQL to gather some data from different tables, repeating every 5 minutes. Let's say the event may take more than 5 minutes to complete in some scenario (maybe the db is running slow or needs a restart). Many other events get fired simultaneously, so to handle this I read that locks can be used, as per the MySQL manual:
If a repeating event does not terminate within its scheduling interval, the result may be multiple instances of the event executing simultaneously. If this is undesirable, you should institute a mechanism to prevent simultaneous instances. For example, you could use the GET_LOCK() function, or row or table locking.
But simply taking a lock didn't resolve my issue: the events were still getting executed in a queue and unexpected data was getting dumped. What I wanted was simply: if the lock is already held, don't do anything and wait for the next scheduled run.
Reading about locks, I understood that while one session holds a named lock, another session cannot acquire a lock with the same name until the earlier lock is released.
IF GET_LOCK('ev_test', -1) IS NOT TRUE THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'failed to obtain lock; not continuing';
END IF;
some_event_body
DO RELEASE_LOCK('ev_test');
So I put this statement in the MySQL event body, and release the lock manually when the event body completes.
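For reference, a sketch of how that snippet sits inside the event definition (the event name and schedule here are placeholders, not my real ones):
DELIMITER $$
CREATE EVENT ev_test_runner
    ON SCHEDULE EVERY 5 MINUTE
DO BEGIN
    IF GET_LOCK('ev_test', -1) IS NOT TRUE THEN
        SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'failed to obtain lock; not continuing';
    END IF;
    -- some_event_body (the actual work goes here)
    DO RELEASE_LOCK('ev_test');
END$$
DELIMITER ;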
My question is: what happens when some_event_body raises some other exception, for example if a SELECT in the event body uses columns that have since been removed?
Will the lock get released automatically? Will the lock stay there forever?
The MySQL manual says locks stay until the session terminates, but I don't know whether events live inside one session or every event execution creates a new session.
Separately, without the code above and simply using GET_LOCK, I ran into this kind of situation:
+------+-----------------+-----------+-------------+---------+------+-----------------------------+-----------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+------+-----------------+-----------+-------------+---------+------+-----------------------------+-----------------------------+
| 5 | event_scheduler | localhost | NULL | Daemon | 30 | Waiting for next activation | NULL |
| 8 | root | localhost | logi_test_2 | Query | 0 | init | show processlist |
| 1330 | root | localhost | logi_test_2 | Connect | 2 | User sleep | SELECT SLEEP(30) |
| 1331 | root | localhost | logi_test_2 | Connect | 4974 | User lock | SELECT GET_LOCK('test', -1) |
| 1332 | root | localhost | logi_test_2 | Connect | 4969 | User lock | SELECT GET_LOCK('test', -1) |
| 1333 | root | localhost | logi_test_2 | Connect | 4964 | User lock | SELECT GET_LOCK('test', -1) |
| 1334 | root | localhost | logi_test_2 | Connect | 4959 | User lock | SELECT GET_LOCK('test', -1) |
| 1335 | root | localhost | logi_test_2 | Connect | 4953 | User lock | SELECT GET_LOCK('test', -1) |
| 1338 | root | localhost | logi_test_2 | Connect | 4949 | User lock | SELECT GET_LOCK('test', -1) |
| 1339 | root | localhost | logi_test_2 | Connect | 4944 | User lock | SELECT GET_LOCK('test', -1) |
| 1340 | root | localhost | logi_test_2 | Connect | 4939 | User lock | SELECT GET_LOCK('test', -1) |
| 1341 | root | localhost | logi_test_2 | Connect | 4934 | User lock | SELECT GET_LOCK('test', -1) |
| 1342 | root | localhost | logi_test_2 | Connect | 4929 | User lock | SELECT GET_LOCK('test', -1) |
| 1343 | root | localhost | logi_test_2 | Connect | 4924 | User lock | SELECT GET_LOCK('test', -1) |
| 1344 | root | localhost | logi_test_2 | Connect | 4919 | User lock | SELECT GET_LOCK('test', -1) |
| 1345 | root | localhost | logi_test_2 | Connect | 4914 | User lock | SELECT GET_LOCK('test', -1) |
| 1346 | root | localhost | logi_test_2 | Connect | 4909 | User lock | SELECT GET_LOCK('test', -1) |
| 1347 | root | localhost | logi_test_2 | Connect | 4904 | User lock | SELECT GET_LOCK('test', -1) |
| 1348 | root | localhost | logi_test_2 | Connect | 4899 | User lock | SELECT GET_LOCK('test', -1) |
| 1349 | root | localhost | logi_test_2 | Connect | 4894 | User lock | SELECT GET_LOCK('test', -1) |
| 1352 | root | localhost | logi_test_2 | Connect | 4889 | User lock | SELECT GET_LOCK('test', -1) |
| 1353 | root | localhost | logi_test_2 | Connect | 4884 | User lock | SELECT GET_LOCK('test', -1) |
Why are locks getting duplicated here, when only one named lock is supposed to be allowed regardless of session?
I tried searching Stack Overflow and reading the MySQL manual, but couldn't find anything.
This is a classic problem with cron, EVENT, etc.
I like to recommend this solution:
Instead of repeatedly firing off a potentially slow process, have a single process that loops: it does the task, then repeats.
Embellishments (a sketch follows this list):
Add a "sleep" between iterations.
Add a calculated sleep to pause for 'the rest of the 5 minutes'.
Do something to observe that the system is busy, and sleep longer.
Add a cron/EVENT as a "keepalive". This restarts the looping task if it dies. It might also be the way to get things initially fired up after any type of crash or graceful outage.
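A rough sketch of such a looping worker in shell (the script itself, the do_task placeholder, and the 300-second interval are assumptions for illustration, not part of the original recommendation):
#!/bin/bash
# do the work, then sleep out the remainder of each 5-minute slot
while true; do
    start=$(date +%s)
    do_task                                  # placeholder for the real work
    elapsed=$(( $(date +%s) - start ))
    (( elapsed < 300 )) && sleep $(( 300 - elapsed ))
done
A cron entry or EVENT then only needs to act as the keepalive that restarts this script if it dies.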
I would also look at the queries -- 5 minutes is a looooong time for an SQL task.

mysqlbinlog doesn't work in Google Cloud SQL MySQL

I have a MySQL instance on Google Cloud SQL with binary logs enabled. I can check the log files, as shown below.
mysql> SHOW BINARY LOGS;
+------------------+-----------+-----------+
| Log_name | File_size | Encrypted |
+------------------+-----------+-----------+
| mysql-bin.000001 | 1375216 | No |
| mysql-bin.000002 | 7336055 | No |
+------------------+-----------+-----------+
I am also able to check the events in the log file.
mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000001' limit 5;
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysql-bin.000001 | 4 | Format_desc | 883641454 | 124 | Server ver: 8.0.18-google, Binlog ver: 4 |
| mysql-bin.000001 | 124 | Previous_gtids | 883641454 | 155 | |
| mysql-bin.000001 | 155 | Gtid | 883641454 | 234 | SET @@SESSION.GTID_NEXT= 'd635d876-06de-11eb-b2ab-42010a9d0043:1' |
| mysql-bin.000001 | 234 | Query | 883641454 | 309 | BEGIN |
| mysql-bin.000001 | 309 | Table_map | 883641454 | 367 | table_id: 81 (mysql.heartbeat) |
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
But it gives me an error when I use the mysqlbinlog command.
mysql> mysqlbinlog mysqld-bin.000001;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqlbinlog mysqld-bin.000001' at line 1
I don't understand what is going wrong here. Please help.
mysqlbinlog is an operating-system command-line utility, not an SQL statement, so it has to be run from a shell rather than from the mysql> prompt (which is why the server returns a syntax error).
On Cloud Shell, install it:
sudo apt-get install mysql-server
Install the Cloud SQL Proxy client on your local machine.
Then run:
mysqlbinlog -R --protocol TCP --host localhost --user root --password --port 3306 mysqld-bin.000001
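That assumes the Cloud SQL Proxy is already running and exposing the instance on localhost:3306, started along these lines (the instance connection name is a placeholder):
./cloud_sql_proxy -instances=<project>:<region>:<instance>=tcp:3306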

Node.js module for MySQL - connections in pool not showing in status

I want to use MySQL connection pooling in Node.js with the felixge node-mysql module.
It appears to work fine, except that I am not sure how to verify that it is really working as intended. The pool connection params, passed to mysql.createPool(), are:
dbConnectionParams = {
    connectionLimit: 20,
    host: 'localhost',
    user: 'jdoe',
    password: 'somepasswd',
    database: 'myDB'
};
All queries using connections from the pool work fine. However, when I look at the actual connections using "show processlist" I see about 4 to 8 connections at any time, never 20. Shouldn't these be listed too? Is there any other MySQL statement to see them, or are they not opened until we actually need them? If so, is there any way to force them open, so that later, when a connection is actually needed, no time is lost opening it?
I have read the documentation, which states that "connections are lazily created by the pool." I thought the whole idea of pooling was to circumvent this, so that connections are not opened on an as-needed (lazy?) basis but are pre-opened.
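One way to watch this from the server side (assuming the pool user is jdoe, as in the config above) is to count that user's threads directly rather than eyeballing show processlist:
mysql -u jdoe -p -e "SELECT command, COUNT(*) FROM information_schema.processlist WHERE user = 'jdoe' GROUP BY command;"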
UPDATE: Here is the output. I am trying to correlate it with the connectionLimit parameter as new requests come in.
+----+------+-----------------+-------------+---------+------+-------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+-----------------+-------------+---------+------+-------+------------------+
| 38 | jdoe | localhost:50716 | myDB | Sleep | 669 | | NULL |
| 39 | jdoe | localhost | myDB | Query | 0 | NULL | show processlist |
| 41 | jdoe | localhost:50718 | myDB | Sleep | 4 | | NULL |
| 44 | jdoe | localhost:50721 | myDB | Sleep | 4 | | NULL |
| 45 | jdoe | localhost:50722 | myDB | Sleep | 4 | | NULL |
| 46 | jdoe | localhost:50723 | myDB | Sleep | 5 | | NULL |
| 47 | jdoe | localhost:50724 | myDB | Sleep | 4 | | NULL |
| 48 | jdoe | localhost:50725 | myDB | Sleep | 4 | | NULL |
+----+------+-----------------+-------------+---------+------+-------+------------------+

MySQL Connections causing server went away, nothing in processlist

I have a large number of connections, but when I issue a show full processlist I don't see anything close to the number of connections reported. Are these connections orphans of some sort? I tried the flush hosts command, but the connections persist, even after rebooting the server and restarting the MySQL server.
I believe these connections are causing issues with making new connections to the database. Users are getting a "server went away" error. How do I clear these?
See commands below:
mysql> show status like '%onn%';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| Aborted_connects | 5 |
| Connections | 11743 |
| Max_used_connections | 24 |
| Ssl_client_connects | 0 |
| Ssl_connect_renegotiates | 0 |
| Ssl_finished_connects | 0 |
| Threads_connected | 6 |
+--------------------------+-------+
7 rows in set (0.00 sec)
mysql> show full processlist;
+-------+---------+----------------------+--------------------+---------+-------+-------+-----------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-------+---------+----------------------+--------------------+---------+-------+-------+-----------------------+
| 4494 | rode | localhost:43411 | NULL | Sleep | 11159 | | NULL |
| 4506 | rode | localhost:43423 | information_schema | Sleep | 11159 | | NULL |
| 4554 | rode | localhost:43511 | performance_schema | Sleep | 11112 | | NULL |
| 11500 | ass | serv:1243 | Home-Tech | Sleep | 0 | | NULL |
| 11743 | root | localhost | NULL | Query | 0 | NULL | show full processlist |
| 11744 | ass | out:6070 | Home-Tech | Sleep | 4 | | NULL |
| 11745 | ass | out:6074 | HTGlobal | Sleep | 8 | | NULL
The MySQL "server has gone away" error (error 2006) has two main causes:
The server timed out and closed the connection. To fix this, check that the wait_timeout MySQL variable in your my.cnf configuration file is large enough.
The server dropped an incorrect or too-large packet. If mysqld gets a packet that is too large or malformed, it assumes that something has gone wrong with the client and closes the connection. To fix this, you can increase the maximum packet size limit max_allowed_packet in the my.cnf file, e.g. set max_allowed_packet = 128M, then sudo /etc/init.d/mysql restart.
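A quick way to check the current values before editing my.cnf (the two variable names are the real ones; the rest is illustrative):
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('wait_timeout', 'max_allowed_packet');"
If you raise max_allowed_packet in my.cnf, it goes under the [mysqld] section and only takes effect after the restart mentioned above.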
Those are the two main ways to fix this. If the changes above don't help, there may be an issue with your Linux or Windows MySQL database server itself; you either need to increase the RAM on the server or watch its processes.
Is this on a Windows or Linux box?

MySQL import taking a long time to finish/ never ends

My ~700 MB database has been importing for 1 hr 27 min now; the actual disk activity stopped at around 15 min, but I just left it running to see if it would finish by itself.
The command I ran:
mysqldump DB1 | pv | mysql DB2
So it's pretty much a straight copy from one database to another, with DB2 starting out empty.
I can actually see that the data is already in DB2, but the command refuses to end!
So the question is... should I let it continue to run, or can I kill it?
Updated:
SHOW PROCESSLIST;
+------+--------------+---------------------+------------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+------+--------------+---------------------+------------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| 2762 | user1 | localhost | DB2 | Query | 5298 | Locked | /*!50003 CREATE*/ /*!50020 DEFINER="user2"@"%"*/ /*!50003 FUNCTION "function1" |
| 2763 | user1 | localhost | DB1 | Sleep | 5298 | | NULL |
| 2770 | user2 | localhost | NULL | Query | 3633 | Locked | SELECT COUNT(*) FROM `INFORMATION_SCHEMA`.`ROUTINES` WHERE `ROUTINE_SCHEMA`='DB2' AND `ROUTIN |
| 2775 | user2 | localhost | NULL | Query | 381 | Locked | SELECT COUNT(*) FROM `INFORMATION_SCHEMA`.`ROUTINES` WHERE `ROUTINE_SCHEMA`='DB2' AND `ROUTIN |
| 2776 | user2 | <ipaddress>:<port> | NULL | Query | 0 | NULL | show processlist |
+------+--------------+---------------------+------------+---------+------+--------+------------------------------------------------------------------------------------------------------+
Some settings I have that are non-standard:
innodb_stats_on_metadata=0
innodb_flush_log_at_trx_commit=2