MySQL kill sleep connections

My DB seems to have almost 100 "sleep" connections and I believe it would be better to get rid of them. I have found the following script, which helps me analyze all tables in a DB from the command line. Does anybody know of a similar one-liner that can kill all "sleep" connections?
mysql -p -D<database> -B -e "SHOW TABLES" \
| awk '{print "CHECK TABLE "$1";"}' \
| mysql -p -D<database>

You can set the wait_timeout variable to a shorter time than the default and it should kill off the connections once they exceed this time value.
From what I've read there is some bugginess and connections will still randomly hang around, especially under load, but the wait_timeout variable would be the place to start.
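For example, a minimal sketch of lowering it (the 300-second value here is only an illustration, not a recommendation):
# Apply at runtime; affects new connections, not sessions that are already open.
mysql -u root -p -e "SET GLOBAL wait_timeout = 300; SET GLOBAL interactive_timeout = 300;"
# Or make it permanent in my.cnf under the [mysqld] section:
#   wait_timeout = 300
#   interactive_timeout = 300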

Some thoughts:
You could use SHOW PROCESSLIST instead of SHOW TABLES. I'm not very familiar with the Linux awk utility, but I believe you can do all sorts of pattern matching therein.
Alternatively, you can write a script that queries MySQL with "SHOW PROCESSLIST", parses the resultset for sleeping connections, and issues KILL <process id> accordingly.

for i in `mysql -e "show processlist" | awk '/Sleep/ {print $1}'` ; do mysql -e "KILL $i;"; done
https://www.digitalocean.com/community/questions/how-can-i-kill-all-mysql-sleeping-queries
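A variation on the same idea is to let MySQL generate the KILL statements itself from information_schema.processlist and pipe them back in; a rough sketch (assumes credentials are available, e.g. in ~/.my.cnf, and a user with the PROCESS privilege):
# -N suppresses the column header so only the generated KILL statements are piped back in.
mysql -N -e "SELECT CONCAT('KILL ', id, ';') FROM information_schema.processlist WHERE command = 'Sleep'" | mysql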


Close mysql connection through shell script

I am running shell scripts which have multiple mysql -u <> -e "select * from tbl" statements. Do I need to explicitly close the MySQL connection from my shell script? If yes, can you guide me on how to do that?
I am seeing many sleep connections in MySQL. Even if I kill them, they are created again within a short period, and I suspect that the shell script execution is causing this.
Thanks
If you don't fix the root cause of the sleeping connections, they will simply reappear and keep consuming resources each time you kill them, so check your code and modify it to close the connection after executing each query.
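For instance, instead of running many separate mysql -u ... -e commands (each of which opens its own connection), you can batch the statements over a single connection; a rough sketch with placeholder credentials and table names:
#!/bin/bash
# One mysql invocation, one connection; it is closed automatically when the heredoc ends.
mysql -u myuser -pmypassword mydb <<'SQL'
SELECT * FROM tbl1;
SELECT * FROM tbl2;
SQL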
Anyway, you can kill any connection by its ID, which can be determined with:
mysql -e 'show processlist;'
The ID will be in the first column, and then you can kill a connection with (note the double quotes so the shell expands $ID):
mysql -e "KILL $ID"
And to kill them all in one time:
for i in $(mysql -e "show processlist" | awk '/Sleep/ {print $1}'); do mysql -e "KILL $i;"; done

getting Lost connection to mysql when using mysqldump even with max_allowed_packet parameter

I want to dump a specific table in my remote server database, which works fine, but one of the tables has 9m rows and I get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359
So after reading online I understood I need to increase my max_allowed_packet, and it's possible to add it to my command.
So I'm running the following command to dump my table:
mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip > dump_test.sql.gz
And for some reason, I still get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499
Am I doing something wrong?
It's weird, it's only 9m records... not too big.
Try adding the --quick option to your mysqldump command; it works better with large tables. It streams the rows from the resultset to the output rather than buffering the whole table in memory and then writing it out.
mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz
You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Notice that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.
These set the timeouts to one calendar day.
SET GLOBAL wait_timeout=86400;
SET GLOBAL interactive_timeout=86400;
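To confirm the new values took effect, you can check them from another session, for example:
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'wait_timeout'; SHOW GLOBAL VARIABLES LIKE 'interactive_timeout'"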
Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.
Try logging into a machine closer to the server and running mysqldump on it.
Then use some other means (sftp?) to copy your gz file to your own machine.
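If you go that route, one option (just a sketch, with a placeholder host name) is to run the dump over ssh on a machine close to the database and stream the compressed output straight back:
ssh user@machine-near-db "mysqldump -uroot -h my.host -p'mypassword' --quick db_name table_name | gzip" > dump_test.sql.gz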
Or, you may have to segment the dump of this file. You can do something like this (not debugged).
mysqldump -uroot -h my.host -p'mypassword' \
db_name table_name --skip-create-options --skip-add-drop-table \
--where="id>=0 AND id < 1000000" | \
gzip....
Then repeat that with these lines.
--where="id>=1000000 AND id < 2000000" | \
--where="id>=2000000 AND id < 3000000" | \
...
until you get all the rows. Pain in the neck, but it will work.
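A rough sketch of automating those repeated runs (untested, same placeholder credentials as above, assuming a dense integer id column and one million rows per chunk):
#!/bin/bash
# Dump the table in 1,000,000-row id ranges, each into its own gzipped file.
for start in $(seq 0 1000000 9000000); do
  end=$((start + 1000000))
  mysqldump -uroot -h my.host -p'mypassword' \
    db_name table_name --skip-create-options --skip-add-drop-table \
    --where="id >= $start AND id < $end" | gzip > "dump_${start}.sql.gz"
done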
For me, everything worked fine when I skipped locking tables:
mysqldump -u xxxxx --password=xxxxx --quick --max_allowed_packet=512M --skip-lock-tables --verbose -h xxx.xxx.xxx.xxx > db.sql
It may create problems with consistency, but it allowed me to back up a 5GB database without any issue.
Another option to try:
net_read_timeout=3600
net_write_timeout=3600
in my.ini/my.cnf, or via SET GLOBAL ...
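For example, to apply them at runtime (the 3600-second value mirrors the config snippet above):
mysql -u root -p -e "SET GLOBAL net_read_timeout = 3600; SET GLOBAL net_write_timeout = 3600;"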
Using JohnBigs's comment above, the --compress flag was what worked for me.
I had previously tried --single-transaction, --skip-extended-insert, and --quick, without success.
Also, make sure your mysql.exe client is the same version as your MySQL server.
So, if your MySQL server version is 8.0.23 but your client version is 8.0.17 or 8.0.25, you may have issues. I ran into this problem using an 8.0.17 client against a MySQL 8.0.23 server; changing the client version to match the server version resolved the issue.
I had a similar problem on my server, where MySQL would apparently restart during the nightly backups. It was always the same database, but the actual table sometimes varied.
I tried several suggestions from the other answers here, but in the end it was just some cronjob executing queries that didn't finish. These didn't cause enough CPU and RAM usage to trigger the monitoring, but apparently enough that compressing the dump pushed the OOM killer into action. Fixing the cronjob made the next backup OK again.
Things to look for:
OOM? dmesg | grep invoked
Process killed? grep killed /var/log/kern.log
If none of the others work, you can use mysqldump's --where feature to break your huge dump into multiple smaller queries.
It might be tedious, but it will most likely work.
e.g.
"C:\Program Files\MySQL\MySQL Workbench 8.0 CE\mysqldump.exe" --defaults-file="C:\...\my_password.cnf"
--host=localhost --protocol=tcp --user=mydbuser --compress=TRUE --port=16861 --default-character-set=utf8 --quick --complete-insert --replace
--where="last_modify > '2022-01-01 00:00:00'"
> "C:\...\dump.txt"
my_password.cnf
[client]
password=xxxxxxxx
[mysqldump]
ignore-table=db.table1
ignore-table=db.table2
Then you just keep shifting the last_modify cutoff to cover successive date ranges, and your huge table is dumped as many smaller pieces.

Mysql count how many queries per second are executed, is there any way?

I have a busy web server with LAMP installed, and I was wondering: is there any way to count how many queries per second (MySQL) are executed on the server?
Thank you.
SELECT s1.variable_value / s2.variable_value
FROM information_schema.global_status s1, information_schema.global_status s2
WHERE s1.variable_name='queries'
AND s2.variable_name ='uptime';
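Note that on MySQL 5.7 and later these status tables live in performance_schema rather than information_schema, so (assuming you can read performance_schema) the equivalent would be:
mysql -e "SELECT s1.variable_value / s2.variable_value AS queries_per_second FROM performance_schema.global_status s1, performance_schema.global_status s2 WHERE s1.variable_name = 'Queries' AND s2.variable_name = 'Uptime'"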
Try Jeremy Zawodny's excellent utility mytop.
If you have the Perl module Time::HiRes installed, mytop will automatically use it to generate high-resolution queries-per-second information.
There's useful information to be mined from the SHOW GLOBAL STATUS; command, including the number of queries executed (if your MySQL is 5.0.76 or later).
See http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html
You can use:
mysqladmin -u root -p status
which will return output like:
Uptime: 17134 Threads: 2 Questions: 1245 Slow queries: 0 Opens: 49 Flush tables: 1 Open tables: 42 Queries per second avg: 0.072
Here queries per second is: 0.072, which is questions/uptime.
When you use the "STATUS" command (not SHOW STATUS), MySQL will calculate the queries per second since server start for you.
Tested with MySQL 5.1.63.
We can use a small script for this. It will be something like the one below.
#!/bin/bash
# Sample the server-wide 'Queries' counter twice, one second apart; the
# difference is roughly the number of queries executed during that second.
declare -i a
declare -i b
declare -i c
a=$(mysql -uroot -pxxxxx -e "show global status like 'Queries'" | tail -1 | awk '{print $2}')
echo "$a"
sleep 1
b=$(mysql -uroot -pxxxxx -e "show global status like 'Queries'" | tail -1 | awk '{print $2}')
echo "$b"
c=$b-$a
echo "Number of Queries per second is: $c"

How can I stop a MySQL query if it takes too long?

Is it possible to time out a query in MySQL?
That is, if any query exceeds the time I specify, it will be killed by MySQL and an error will be returned instead of waiting for eternity.
There is a nice Perl script on CPAN to do just this:
http://search.cpan.org/~rsoliv/mysql-genocide-0.03/mysql-genocide
One only needs to schedule it to run with the proper parameters. Create a crontab file /etc/cron.d/mysql_query_timeout to schedule it to run every minute:
* * * * * root /path/to/mysql-genocide -t 7200 -s -K
Where 7200 is the maximum allowed execution time in seconds. The -s switch filters out everything except SELECT queries. The -K switch instructs the script to kill the matching processes.
The root user should be able to run the local mysql tools without authentication; otherwise you will need to provide credentials on the command line.
I just set up the following bash script as a cron job to accomplish this with MySQL 5.0 (kills any query that has been executing for more than 30 seconds). Sharing it here in case it proves useful to anyone (apologies if my bash scripting style is inefficient or atrocious, it is not my primary development language):
#!/bin/bash
linecount=0
processes=$(echo "show processlist" | mysql -uroot -ppassword)
oldIfs=$IFS
IFS='
'
echo "Checking for slow MySQL queries..."
for line in $processes
do
if [ "$linecount" -gt 0 ]
then
pid=$(echo "$line" | cut -f1)
length=$(echo "$line" | cut -f6)
query=$(echo "$line" | cut -f8)
#Id User Host db Command Time State Info
if [ "$length" -gt 30 ]
then
#echo "$pid = $length"
echo "WARNING: Killing query with pid=$pid with total execution time of $length seconds! (query=$query)"
killoutput=$(echo "kill query $pid" | mysql -uroot -ppassword)
echo "Result of killing $pid: $killoutput"
fi
fi
linecount=`expr $linecount + 1`
done
IFS=$oldIfs
I thought it had been around a little longer, but according to this,
MySQL 5.7.4 introduces the ability to set server side execution time limits, specified in milliseconds, for top level read-only SELECT statements.
SELECT
MAX_STATEMENT_TIME = 1000 -- in milliseconds
*
FROM table;
Note that this only works for read-only SELECT statements.
Starting with MySQL 5.1 you can create a stored procedure that queries the information_schema.PROCESSLIST table for all queries that match your criteria for "long running", then iterates over a cursor to kill them. Then set up that procedure to execute on a recurring basis in the event scheduler.
See: http://forge.mysql.com/tools/tool.php?id=106
The MySQL forum has some threads about this.
This post details how to set up timeouts on the server using innodb_lock_wait_timeout.
Here's a way to do it programmatically, assuming you're using JDBC.
I think this old question needs an updated answer.
You can set a GLOBAL timeout for all your read-only SELECT queries like this:
SET GLOBAL MAX_EXECUTION_TIME=1000;
The time specified is in milliseconds.
If you want the timeout only for a specific query, you can set it inline like this:
SELECT /*+ MAX_EXECUTION_TIME(1000) */ my_column FROM my_table WHERE ...
MySQL returns an error instead of waiting for eternity.
Note that this method only works for read-only SELECTs. If a SELECT statement is determined not to be read-only, then any timer set for it is cancelled and the following NOTE message is reported to the user:
Note 1908 Select is not a read only statement, disabling timer
For statements with subqueries, it limits only the top-level SELECT. It does not apply to SELECT statements within stored programs; a MAX_EXECUTION_TIME hint used in a SELECT statement inside a stored program is ignored.
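If you only want the limit for your current connection rather than server-wide, the same setting also exists as a session variable (MySQL 5.7.8+). A small example from the shell, where both statements run over one connection so the SELECT is subject to the one-second limit (mydb and my_table are just placeholders):
mysql mydb -e "SET SESSION MAX_EXECUTION_TIME=1000; SELECT COUNT(*) FROM my_table;"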
I don't think the egrep in the answer below would find "2000".
Why not just select the id as well and avoid all of that posh shell stuff:
mysql -e 'select id from information_schema.processlist where info is not null and time > 30;'
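And if you want to kill those queries rather than just list them, a sketch along the same lines (reusing the 30-second threshold from the query above):
# -N drops the column header; KILL QUERY stops the statement without dropping the connection.
mysql -N -e 'select id from information_schema.processlist where info is not null and time > 30;' |
while read id; do
  mysql -e "KILL QUERY $id;"
done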
Since MySQL 5.7.8 there is the max_execution_time option, which defines the execution timeout for SELECT statements.
Here is my script:
# Note: the egrep pattern only matches Time values made up entirely of the digits 6-9,
# so it misses values such as 2000 (as pointed out in another answer above).
# The grep -v '155' excludes one specific thread by its Id (the binary log dump thread on this server).
mysql -e 'show processlist\G' |\
egrep -B5 'Time: [6-9]{3,10}' |\
grep 'Id:' |\
cut -d':' -f2 |\
grep -v '155' |\
sed 's/^ //' |\
while read id
do
mysql -e "kill $id;"
done