Process that was killed still on my processlist - mysql

The thread that I killed is still in my process list. How do I get rid of it?
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
| 678 | root | localhost | hthtthv | Killed | 36923 | query end | INSERT INTO `gtgttg` VALUES (1,'tgtg'),(2,'Shopping'),(4,'tgtgtg'),( |
| 695 | root | localhost | NULL | Query | 0 | NULL | show processlist |
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

The killed statement needs to roll back the changes it already made, so this can take a long time. If it is an InnoDB table, you can for instance look at this question: https://dba.stackexchange.com/questions/5654/internal-reason-for-killing-process-taking-up-long-time-in-mysql
So in the end, you simply have to wait for it to disappear.
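If you want to see how far the rollback has gone, one rough sketch (assuming InnoDB and a MySQL version that exposes the INNODB_TRX table in information_schema) is to watch the transaction shrink:
-- trx_rows_modified should count down toward 0 while trx_state is 'ROLLING BACK'
SELECT trx_mysql_thread_id, trx_state, trx_rows_modified
FROM information_schema.INNODB_TRX
WHERE trx_state = 'ROLLING BACK';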

In my case, the /var partition, where the MySQL binlogs are written, was full. Once I freed some disk space, the killed connections went away immediately.

I know this is an old question. I faced a similar situation, where the thread was stuck in the Killed state.
Instead of forcibly killing mysqld, I issued
sudo service mysqld stop
The exact command may differ depending on your OS, but the idea is to ask the system to terminate the daemon gracefully. This clears the stuck thread and has no side effects on the database, since it is a normal shutdown.
The other suggestion above was to kill the process, which forces crash recovery and can bring its own issues, so I would recommend the graceful daemon stop.
Hope that helps.
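A related option, in case the service manager is not available: MySQL 5.7.9 and later also accept a shutdown request over a normal client connection. This is only a sketch of that alternative; it requires the SHUTDOWN privilege and is equivalent to running mysqladmin shutdown:
mysql> SHUTDOWN;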

Related

How to completely disable GTID in mysql 5.7?

As my MySQL database is only used by a small web app, I will never need any replication features. While monitoring, I noticed something named thread/sql/compress_gtid_table. And while dumping some tables with mysqldump I got this warning:
Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.
How can I be sure that all GTID features are completely disabled and are not causing overhead?
Here is my config:
mysql> SHOW VARIABLES LIKE '%GTID%';
+----------------------------------+----------------+
| Variable_name | Value |
+----------------------------------+----------------+
| binlog_gtid_simple_recovery | ON |
| enforce_gtid_consistency | OFF |
| gtid_executed_compression_period | 1000 |
| gtid_mode | OFF |
| gtid_next | AUTOMATIC |
| gtid_owned | |
| gtid_purged | |
| session_track_gtids | OFF |
+----------------------------------+----------------+
I was just linked here while searching for that warning. I am setting up GTIDs myself, but this page may be helpful for you: https://dev.mysql.com/doc/refman/5.7/en/replication-gtids-howto.html
Note that it warns that once you turn GTIDs on, you cannot easily go back. I'm sure there is a way, but it may not be worth the trouble.
After you disable GTID replication, once you no longer need your old binary logs (which contain the GTID info) and the slave has caught up with everything in them, you can stop the slave and run RESET MASTER. That wipes all binlogs from the server, and no more GTID information will be kept. Refer to this post on how to resync replication properly.
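For reference, the online procedure MySQL 5.7 documents for turning GTIDs off looks roughly like the sketch below. It assumes replication has caught up before each step, and remember that RESET MASTER destroys all binary logs:
SET GLOBAL gtid_mode = ON_PERMISSIVE;    -- only needed if gtid_mode was still ON
SET GLOBAL gtid_mode = OFF_PERMISSIVE;   -- new transactions are now anonymous
-- wait until all GTID transactions have been applied everywhere
SET GLOBAL gtid_mode = OFF;
SET GLOBAL enforce_gtid_consistency = OFF;
RESET MASTER;                            -- wipes the binary logs and the stored GTID history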

What is the best solution to MySQL connection timeouts?

I am writing a small web app in Go that uses MySQL to store data.
I get an intermittent MySQL error if the web server doesn't receive any requests for some amount of time (> 8 hours):
[mysql] 2017/02/08 16:31:56 packets.go:33: unexpected EOF
[mysql] 2017/02/08 16:31:56 packets.go:130: write tcp 127.0.0.1:49188->127.0.0.1:3306: write: broken pipe
I found some related discussions on GitHub (issue 529, issue 257 and issue 446). From what I understand, MySQL closes the connection once its timeout is reached.
I tried setting SetMaxOpenConns to 9 and SetMaxIdleConns to 0 as some people recommended. However, that produced the error immediately. (If I set SetMaxIdleConns larger than 0, there was no immediate error.)
I also tried setting SetConnMaxLifetime to 5 minutes. That produced the error again after 5 minutes.
Now I am trying the code below:
db.SetConnMaxLifetime(0)
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(5)
It has been running for 20 mins. It's still too early to tell.
(UPDATE: this doesn't work either)
Here is my configuration:
driver: go-sql-driver V1.3.
go version: go1.7.1 darwin/amd64
mysql: latest from docker hub
rkt version: 1.18
CoreOS: 1284.0.0
Perhaps you can start a heartbeat goroutine to avoid the timeout.
You can check your MySQL wait_timeout variable:
mysql> show global variables like 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout | 300 |
+---------------+-------+
1 row in set (0.00 sec)
Then use db.SetConnMaxLifetime(120 * time.Second), which means a connection is not reused once it has been open for more than 120 s; database/sql closes it and opens a new connection (or takes a fresh one from the pool) instead. If you do not set a maximum connection lifetime, you may end up using a connection that the server has already closed and get the error above.
You can watch the MySQL process list with mysql> show processlist; if a connection sleeps for longer than 300 s, it is recycled by MySQL:
mysql> show processlist;
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
| 4 | event_scheduler | localhost | NULL | Daemon | 1363480 | Waiting on empty queue | NULL |
| 26539 | root | 172.17.0.1:48732 | NULL | Query | 0 | starting | show processlist |
| 26575 | auditcenter | 172.17.0.1:51714 | obs_gb_test | Sleep | 51 | | NULL |
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
3 rows in set (0.00 sec)
SetMaxOpenConns and SetMaxIdleConns are used to limit the connection pool's resources; see the documentation of Go's database/sql package for details.

MySQL CPU increase when I have Sleeping connection that stay open

I have MySQL 5.6.27-0ubuntu0.14.04.1 running on a Google Compute Engine instance with 4 CPUs.
I noticed that if I have a connection that sleeps for a long time, the server's CPU usage increases linearly, and I don't understand why. If I kill the sleeping connection, CPU usage drops back to a normal level.
So to summarize, I have the following:
I notice the CPU usage of my instance is increasing:
Then I check the process list on my server:
mysql> show processlist;
+-------+--------+-------------------+----------------+---------+------+-------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-------+--------+-------------------+----------------+---------+------+-------+------------------+
| 85949 | nafora | paper-eee-2:58461 | state_recorder | Sleep | 1300 | | NULL |
| 85956 | nafora | paper-eee-2:58568 | state_recorder | Sleep | 64 | | NULL |
| 85959 | root | localhost | NULL | Query | 0 | init | show processlist |
+-------+--------+-------------------+----------------+---------+------+-------+------------------+
You can see I have just two connections in the Sleep state, and one has been there for 1300 seconds (because a process is stuck holding the connection open).
So I kill connection 85949, and the CPU usage just falls back down.
Can someone explain to me why a single sleeping connection can impact my database like this?
Thanks.
Unclosed connections or long-running slow queries can cause this behavior. You can limit unclosed connections by setting the global variable wait_timeout to a reasonable value, and set the related variable interactive_timeout as high as best practice suggests.
Stateful applications that use a connection pool (Java, .NET, etc.) need wait_timeout to match their connection pool settings. The default of 8 hours (wait_timeout = 28800) works well with properly configured connection pools.
Configure wait_timeout to be slightly longer than the connection lifetime your application pool expects; this is a good safety check. Profiling your queries to observe the performance of the MySQL instance will also help you avoid I/O bottlenecks.
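If you decide to change the timeouts, it can be done at runtime. The sketch below uses an arbitrary example value of 300 seconds and assumes you have the privilege to set global variables; note that the new values only apply to connections opened afterwards, and you should also persist them in my.cnf:
SET GLOBAL wait_timeout = 300;          -- idle non-interactive connections are closed after 300 s
SET GLOBAL interactive_timeout = 300;   -- same for interactive clients such as the mysql prompt
SHOW GLOBAL VARIABLES LIKE '%_timeout';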

How to kill a thread in PHPmyadmin

I have a thread showing in phpMyAdmin under Processes. However, when I click Kill, I get the error:
phpMyAdmin was unable to kill thread 148. It probably has already been closed.
Why does this thread then still show as active? How can I remove it entirely?
Open the mysql client and type:
mysql> show processlist;
+-----+------+-----------+------+---------+------+-------+------------------+-----------+---------------+-----------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined | Rows_read |
+-----+------+-----------+------+---------+------+-------+------------------+-----------+---------------+-----------+
| 106 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0 | 0 | 0 |
+-----+------+-----------+------+---------+------+-------+------------------+-----------+---------------+-----------+
1 row in set (0.00 sec)
You'll see the processes with their IDs; then you can do this:
mysql> kill 106;
and your process (id = 106) will be killed.
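If you ever have many stale threads, you can also let MySQL generate the KILL statements for you from information_schema. This is just a sketch; the 600-second threshold is an arbitrary example and you still run the generated statements by hand:
-- list KILL statements for connections that have been sleeping for more than 600 s
SELECT CONCAT('KILL ', id, ';') AS kill_stmt
FROM information_schema.PROCESSLIST
WHERE command = 'Sleep' AND time > 600;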
Between the time that phpMyAdmin received the list of processes and the time you clicked to kill one of them, this process had finished by itself.
See also https://sourceforge.net/p/phpmyadmin/feature-requests/1490/.
This phenomenon is caused by the connection phpMyAdmin itself uses, which is why it doesn't show up in a direct MySQL query. It can't be killed, as that would close phpMyAdmin's own connection.
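To confirm whether thread 148 really is gone (or whether the only survivor is phpMyAdmin's own connection), you can check from the mysql client. A quick sketch, using the thread ID from the error message above:
SELECT ID, USER, COMMAND, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE ID = 148;
-- an empty result set means the thread has already terminated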

Linux Mint triggers a slow query on MySQL at system boot

My Debian-based system has been booting very slowly since I installed MySQL and imported some databases. Looking for the responsible statement, I found this one running during boot:
mysql> show full processlist;
+----+------------------+-----------+------+---------+------+----------------+----------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------------------+-----------+------+---------+------+----------------+----------------------------------------------------------------------+
| 9 | debian-sys-maint | localhost | NULL | Query | 12 | Opening tables | select count(*) into @discard from `information_schema`.`PARTITIONS` |
| 10 | root | localhost | NULL | Query | 0 | NULL | show full processlist |
+----+------------------+-----------+------+---------+------+----------------+----------------------------------------------------------------------+
2 rows in set (0.00 sec)
Here the statement that causing trouble:
select count(*) into @discard from `information_schema`.`PARTITIONS`
I have about 10 databases totaling over 8 GB of data.
Is there any configuration to disable this query at system boot? And if so, why is it run during boot at all?
Information
I have a standard MySQL installation without any custom configuration.
Best regards.
It seems Debian, which Linux Mint is based on, has scripts that run whenever the MySQL server is started or restarted to check for corrupted tables and raise an alert about them.
On my Debian server the culprit seems to be the /etc/mysql/debian-start bash script, which in turn calls /usr/share/mysql/debian-start.inc.sh, so check both scripts and comment out the function that iterates over all your tables. From a quick look it seems to be the following:
check_for_crashed_tables;
which is called from the debian-start script mentioned above.