EC2 Amazon Linux AMI MySQL CPU @ 62% When Idle?

I am running MySQL on an Amazon Linux AMI. There is nothing connected to it. There are no connections and no other applications running that use MySQL. It is completely idle, and yet top reports that mysqld is using 62% of the CPU. Why is this happening, and how do I fix it?
Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 97.8%id, 0.0%wa, 0.0%hi, 0.0%si, 1.7%st
Mem: 1738504k total, 390708k used, 1347796k free, 56888k buffers
Swap: 917500k total, 0k used, 917500k free, 229804k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2959 mysql 20 0 466m 39m 5244 S 62.2 2.3 4:00.67 mysqld
1 root 20 0 19252 1504 1212 S 0.0 0.1 0:00.20 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
There are no connections...
mysql> show processlist;
+----+------+-----------+------+---------+------+-------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+-----------+------+---------+------+-------+------------------+
| 5 | root | localhost | NULL | Query | 0 | NULL | show processlist |
+----+------+-----------+------+---------+------+-------+------------------+
UPDATE:
My issue was definitely related to the Leap Second bug. Kudos to nico-ekito. Thanks!

The only thing I can think of is to inspect what mysqld is really doing using strace, as user root:
strace -p 2959
Normally, strace should block immediately and show you a call to select(), because mysqld should be waiting for connections.
The call should be something like:
select(SOCKETNO, [OTHER_FDs], NULL, NULL, NULL)
especially important is the last parameter, the timeout (a struct timeval pointer). If it is NULL, mysqld will sleep until someone connects. If it is not NULL, mysqld will wait for the specified time and then do some maintenance work. A very small timeval might explain the CPU consumption.
I believe that MySQL always employs a NULL (infinite) timeout. It makes sense and this is how the mysqlds I'm able to reach now are behaving.
However, there might be some connection handling issues that prevent select from sleeping again. Check whether this behaviour appears as soon as mysqld starts, or after someone connects.
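If you only want to watch the select() calls, something along these lines works on most Linux systems (attaching by pidof and filtering with -e is just one convenient way to do it; adjust to your setup):
# attach to the running mysqld and show only select() calls; an idle server should sit in one select() with a NULL timeout
sudo strace -f -e trace=select -p "$(pidof mysqld)"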

Closing out this question. My issue was indeed related to the Leap Second fiasco.
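For anyone hitting the same thing: the workaround that was widely circulated for the 2012 leap-second CPU spin was to re-set the system clock, which reportedly clears the kernel's leap-second state. A sketch (the ntpd service name and init script path may differ on your distribution):
sudo /etc/init.d/ntpd stop         # only if ntpd is managing the clock
sudo date -s "$(date)"             # re-setting the clock reportedly clears the leap-second flag
sudo /etc/init.d/ntpd start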

Related

Closing mysql connection when executed from shell script

I am using a shell script to run a MySQL insert query. While the query runs, I am seeing connections that are not being closed by the shell script.
Server response after running the shell script:
$ date
Tue Feb 20 15:43:58
$ netstat -alnp | grep 3306 | wc -l
26
The 26 connections above looked like:
tcp6 0 0 192.168.10.169:31503 192.168.10.170:3306 ESTABLISHED 11603/java
$ netstat -alnp | grep 3306 | wc -l
50
Of the 50 connections above, 22 were in TIME_WAIT and 28 were ESTABLISHED, for example:
tcp6 0 0 192.168.10.169:48308 192.168.10.170:3306 ESTABLISHED 12603/java
tcp6 0 0 192.168.10.169:48990 192.168.10.170:3306 TIME_WAIT
$ date
Tue Feb 20 15:46:49
Does a MySQL connection opened from a shell script not get closed by itself?
What is the impact if a shell script with a MySQL insert command is run via a cron job every 30 minutes?
Script
#!/bin/bash
query="insert into table_name values ('foo', 'bar', 123, NOW())"
mysql -u username -ppassword mysql <<EOF
$query;
EOF
What will be the impact on MySQL's maximum connections? When I ran this on my system I got more than 100 ESTABLISHED connections.
What STATUS value corresponds to connections ESTABLISHED?
What was the value of
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
I expect a small number like 1 or 2.
For
SHOW GLOBAL STATUS LIKE 'Connections';
I expect over 100.
The command-line tool mysql will create a connection (bumping Connections and possibly increasing the "high water mark" for Max_used_connections), do the action, then close the connection (without decreasing any STATUS). Threads_running is also incremented and decremented.
Your cron job should not be threatening any limitations.
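If you want to convince yourself, a quick check is to compare the counters before and after one run of the script (insert_script.sh is just a placeholder name for the script above): Connections should go up by one per run while Threads_connected returns to its baseline, confirming the client closed its connection.
mysql -u username -p -e "SHOW GLOBAL STATUS LIKE 'Connections'; SHOW GLOBAL STATUS LIKE 'Threads_connected';"
./insert_script.sh    # placeholder for the cron script shown above
mysql -u username -p -e "SHOW GLOBAL STATUS LIKE 'Connections'; SHOW GLOBAL STATUS LIKE 'Threads_connected';"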

How is MySQL Uptime's value "computed"?

According to MySQL Documentation, the global variable Uptime is defined as "The number of seconds that the server has been up.".
However, can somebody please explain to me how this value is actually computed? What does it use as a reference, System Time?
I am asking this question because I just came across a weird situation: when rebooting a VM running MySQL, the ntpd service was terminated, and at startup (since it was not enabled via chkconfig) the time got shifted by +8 hours, as you can see in the following:
15:01:00 hostname shutdown[30383]: shutting down for system reboot
15:01:00 hostname init: Switching to runlevel: 6
...
15:01:06 hostname ntpd[27553]: ntpd exiting on signal 15
15:01:06 hostname syslog-ng[27399]: Termination requested via signal, terminating;
...
23:04:03 hostname kernel: Bootdata ok
The same shift is recorded in the MySQL error logs :
15:01:03 InnoDB: Starting shutdown...
15:01:05 InnoDB: Shutdown completed; log sequence number 2746293826
15:01:06 [Note] /usr/sbin/mysqld: Shutdown complete
15:01:06 mysqld_safe mysqld from pid file /var/lib/mysql/data/hostname.pid ended
23:04:06 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql/data
After we fixed the time by starting ntpd, it seemed that the Uptime got shifted :
mysql> show global status like 'Uptime';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Uptime | 18005 |
+---------------+-------+
1 row in set (0.00 sec)
mysql> show global status like 'Uptime_since_flush_status';
+---------------------------+-------+
| Variable_name | Value |
+---------------------------+-------+
| Uptime_since_flush_status | 18007 |
+---------------------------+-------+
Is this behavior possible, or is it probably related to other factors?
Thank you for your patience and understanding.
It should be very simple: the application records a timestamp when it starts and compares it to the current time, and both come from the system clock.
So if you modify the system time, the initial timestamp is not adjusted; the shifted clock becomes the "current time" in that comparison.
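A minimal sketch of that logic in shell, purely to illustrate the principle (mysqld does the equivalent internally with its own start timestamp):
start=$(date +%s)                          # wall-clock timestamp captured once at startup
# ... later, after the system clock has been stepped forward by ~8 hours ...
now=$(date +%s)                            # current wall-clock timestamp
echo "uptime: $(( now - start )) seconds"  # the forward step inflates this by the same ~8 hours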

MySQL Query with LARGE number of records gets Killed

I run the following query from my shell :
mysql -h my-host.net -u myuser -p -e "SELECT component_id, parent_component_id FROM myschema.components comp INNER JOIN my_second_schema.component_parents related_comp ON comp.id = related_comp.component_id ORDER BY component_id;" > /tmp/IT_component_parents.txt
The query runs for a LONG time and then gets KILLED.
However if I add LIMIT 1000, then the query runs till the end and output is written in file.
I further investigated and found (using COUNT(*)) that the total number of records that would be returned is 239553163.
Some information about my server is here:
MySQL 5.5.27
+----------------------------+----------+
| Variable_name | Value |
+----------------------------+----------+
| connect_timeout | 10 |
| delayed_insert_timeout | 300 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 31536000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| slave_net_timeout | 3600 |
| wait_timeout | 28800 |
+----------------------------+----------+
Here's STATE of the query as I monitored :
copying to tmp table on disk
sorting results
sending data
writing to net
sending data
writing to net
sending data
writing to net
sending data ...
KILLED
Any guesses what's wrong here ?
The mysql client probably runs out of memory.
Use the --quick option to not buffer results in memory.
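For example, with the query from the question abbreviated:
# --quick streams rows to the output file instead of buffering the whole result set in client memory
mysql --quick -h my-host.net -u myuser -p -e "SELECT component_id, parent_component_id FROM ... ORDER BY component_id" > /tmp/IT_component_parents.txt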
What is wrong is that you are returning 239 553 163 rows of data! Don't be surprised if it takes a lot of time to process. Actually, the longest part might very well be sending the result set back to your client.
Reduce the result set (do you really need all these rows?). Or try to output the data in smaller batches:
mysql -h my-host.net -u myuser -p -e "SELECT ... LIMIT 0, 10000" >> dump.txt
mysql -h my-host.net -u myuser -p -e "SELECT ... LIMIT 10000, 10000" >> dump.txt
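A rough way to automate those batches from the shell (the abbreviated query and the interactive -p password prompt are placeholders; note that large offsets get progressively slower, so a WHERE range on an indexed column is usually better for very big tables):
batch=10000
offset=0
while :; do
  # -N suppresses the column-name header so every output line is a data row
  rows=$(mysql -h my-host.net -u myuser -p -N -e \
    "SELECT ... ORDER BY component_id LIMIT $offset, $batch" | tee -a dump.txt | wc -l)
  [ "$rows" -lt "$batch" ] && break   # last (partial) batch fetched
  offset=$((offset + batch))
done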
Assuming you mean 8 hours when you say a long time: the value 28800 for wait_timeout causes the connection to be dropped after 28,800 seconds (i.e. 8 hours) of no further activity. If you can't optimize the statement to run in less than 8 hours, you should increase this value.
See this page for further information on the wait_timeout variable.
The interactive_timeout variable is used for interactive client connections, so if you run long queries from an interactive session, that's the one you need to look at.
You may want to use the OUTFILE mechanism if you are going to dump large amounts of data. That, or mysqldump, will be much more efficient (and OUTFILE has the benefit of not locking down the table).
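A sketch of the INTO OUTFILE route (the output path is illustrative; the file is written on the database server's filesystem by the mysqld process, so this does not work against managed hosts such as RDS):
mysql -h my-host.net -u myuser -p -e "
SELECT component_id, parent_component_id
FROM myschema.components comp
INNER JOIN my_second_schema.component_parents related_comp ON comp.id = related_comp.component_id
INTO OUTFILE '/tmp/IT_component_parents.csv'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';"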
You said in a comment that your MySQL instance is on RDS. This means you can't be running the query from the same host, since you can't log into an RDS host. I guess you might be doing this query over the WAN from your local network.
You're most likely having trouble because of a slow network. Your process state frequently showing "writing to net" makes me think this is your bottleneck.
Your bottleneck might also be the sorting. Your sort is writing to a temp table, and that can take a long time for a result set that large. Can you skip the ORDER BY?
Even so, I wouldn't expect the query to be killed even if it runs for 3100 seconds or more. I wonder if your DBA has some periodic job killing long-running queries, like pt-kill. Ask your DBA.
To reduce network transfer time, you could try using the compression protocol. You can use the --compress or -C flags to the mysql client for this (see https://dev.mysql.com/doc/refman/5.7/en/mysql-command-options.html#option_mysql_compress)
On a slow network, compression can help. For example, read about some comparisons here: https://www.percona.com/blog/2007/12/20/large-result-sets-vs-compression-protocol/
Another alternative is to run the query from an EC2 spot instance running in the same AZ as your RDS instance. The network between those two instances will be a lot faster, so it won't delay your data transfer. Save the query output to a file on the EC2 spot instance.
Once the query result is saved on your EC2 instance, you can download it to your local machine, using scp or something, which should be more tolerant of slow networks.
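Putting those two suggestions together, a sketch (host names are placeholders):
# on an EC2 instance in the same AZ as the RDS endpoint: compressed protocol, streamed, gzipped on disk
mysql -C --quick -h my-host.net -u myuser -p -e "SELECT ..." | gzip > /tmp/IT_component_parents.txt.gz
# then from your local machine, pull the file over the (slow) WAN link
scp ec2-user@my-ec2-host:/tmp/IT_component_parents.txt.gz .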

InnoDB missing from MySQL

I have no idea of what I have done here, but my InnoDB engine seems to have gone from my MySQL server. I recently upgraded it from the dotdeb repository, then installed mysql-server.
There is no mention of InnoDB in my my.cnf except some comments which explain InnoDB is enabled by default, which I don't understand. There is also no mention of InnoDB in SHOW ENGINES.
Is there something I'm missing here?
If it matters, my MySQL server version is: 5.5.24-1~dotdeb.1 (Debian).
EDIT: SHOW ENGINES:
mysql> SHOW ENGINES;
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| Engine | Support | Comment | Transactions | XA | Savepoints |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| MRG_MYISAM | YES | Collection of identical MyISAM tables | NO | NO | NO |
| PERFORMANCE_SCHEMA | YES | Performance Schema | NO | NO | NO |
| FEDERATED | NO | Federated MySQL storage engine | NULL | NULL | NULL |
| BLACKHOLE | YES | /dev/null storage engine (anything you write to it disappears) | NO | NO | NO |
| MyISAM | DEFAULT | MyISAM storage engine | NO | NO | NO |
| CSV | YES | CSV storage engine | NO | NO | NO |
| ARCHIVE | YES | Archive storage engine | NO | NO | NO |
| MEMORY | YES | Hash based, stored in memory, useful for temporary tables | NO | NO | NO |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
8 rows in set (0.00 sec)
The problem is most probably a non-matching log file size: mysql expects the innodb log files to be exactly the size that is specified in the config file. To check whether this is really the issue, do the following:
sudo /etc/init.d/mysql restart
sudo tail -n 1000 /var/log/syslog
(I'm assuming you are on Debian)
If you see some errors reported there regarding innodb and log file size (Sorry, I can't remember the exact wording of the message), then the fix is easy:
locate the logfiles (probably /var/lib/mysql/ib_logfile0 and /var/lib/mysql/ib_logfile1)
stop the mysql server
rename the log files: sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak etc.
start the mysql server
check in /var/log/syslog whether the errors are no longer happening
connect to mysql and check via SHOW ENGINES; whether InnoDB is available now...
Hope this helps!
The first thing to do is to run SHOW ENGINES at the MySQL prompt to confirm if Innodb is disabled.
If it is, check the error log for the MySQL server. It will have details on why InnoDB was disabled. There are several reasons MySQL might disable InnoDB on startup. For example, if the innodb log file size specified in my.cnf does not match the size of the existing log file(s) on disk.
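Where to look depends on the packaging; on Debian-style installs the messages usually end up in one of these places (the paths are assumptions, so check your own log_error setting):
sudo tail -n 200 /var/log/mysql/error.log
sudo grep -i innodb /var/log/syslog | tail -n 50
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_error';"   # shows where mysqld is actually logging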
I had this problem on a Debian 7 server with preinstalled MySQL 5.5. There was no InnoDB engine listed in SHOW ENGINES.
As severin mentioned before, run this:
sudo /etc/init.d/mysql restart
sudo tail -n 1000 /var/log/syslog
I got this one:
InnoDB: Error: io_setup() failed with EAGAIN after 5 attempts.
And the solution on another line:
InnoDB: You can disable Linux Native AIO by setting innodb_use_native_aio = 0 in my.cnf
After adding innodb_use_native_aio = 0 to my.cnf, InnoDB appeared in SHOW ENGINES.
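In other words, in the [mysqld] section of my.cnf (the path varies by distribution, e.g. /etc/mysql/my.cnf on Debian), followed by a restart of mysqld:
[mysqld]
innodb_use_native_aio = 0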
Check if you have enough space on disk and where mysql.sock is stored.
1. Stop MySQL.
2. Edit my.cnf and increase:
innodb_buffer_pool_size=100M (may vary per case)
3. Add:
[mysqld]
innodb_force_recovery = 1
4. Execute the following:
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
5. Start MySQL and take backups just in case.
6. Log in to mysql and: show engines; - check that InnoDB is listed and SUPPORT = YES.
7. If all is good up until 6, exit and edit my.cnf, setting this back:
[mysqld]
innodb_force_recovery = 0
8. Restart MySQL.
9. Go to your websites, check that all works, and good luck!
PS - You may want to check what caused this, perhaps working on your production server, or restarting caused your log files to get corrupted. You're in the clear for now, so have a look around and make sure all else looks good, especially free disk space and offsite backups.

Unable to get multiple connection to MySql on Windows 7

I have installed MySQL on Windows 7... the issue is that I'm unable to get multiple connections to MySQL.
If I connect to MySQL through the command line and at the same time open another MySQL command-line client, it goes into a wait state; as soon as I disconnect the first one, the later one gets connected.
Because of this issue I'm unable to run Tomcat in debug mode, as it tries to get more than one connection to MySQL in debug mode.
Previously I was using the same version of MySQL (5.1) on Vista and it was working fine.
When connected with only one MySQL command line, "show processlist" returns:
| 4 | root | localhost:49487 | NULL | Query | 0 | NULL | show processlist
1 row in set (0.00 sec)
and after connecting with a 2nd command line (which hangs), "show processlist" in the 1st window returns:
| 4 | root | localhost:49487 | NULL | Query | 0 | NULL | show processlist
| 5 | root | localhost:49518 | NULL | Sleep | 0 | NULL | NULL
2 rows in set (0.00 sec)
I entered the following command on the command line:
mysql -u root -h localhost -P 3306 -p
It asked me for the password and got connected. Then I opened another command prompt, entered the same command, it asked for the password and hung. I went back to the previous command line and closed it, and the current one got connected. max_connections is 100 in the my.ini file, and show processlist returns the same result as above.
What is your 'max_connections' setting (show variables like '%max_connections%') and how many connections are currently 'live' on the server (show processlist)?
I'm guessing it's set very low (1 or 2) and between tomcat and your monitor connections you're exceeding the limit.
Raising it would be done via the my.ini/my.cnf file, wherever it's kept on Windows.
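For example, in the [mysqld] section of my.ini (on Windows usually under the MySQL installation or data directory), followed by a restart of the MySQL service; the value shown is just an illustration:
[mysqld]
max_connections = 200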
Are you connecting over the network? or a local file socket? You may be locking on the windows equivalent of mysql.sock - not sure if that behavior changed in Win7. Something like:
mysql -u root -h localhost -P 3306 -p
and make sure that my.ini/my.cnf have networking enabled
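To rule out the pipe/shared-memory path entirely, you can force a TCP connection explicitly (a sketch):
mysql --protocol=TCP -h 127.0.0.1 -P 3306 -u root -p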
After too many reinstallations of Windows I guess I have identified the root cause... On every fresh installation MySQL would work fine, but after a while I would get stuck with this issue.
The cause was my VoIP messenger "Wizton": after installing it MySQL works fine, but when I restart my machine... same connection issue.
But Wizton was working perfectly fine with Vista Business... don't know what happens in Windows 7.