MySQL Server with high CPU and "kernel time" usage - mysql

I noticed that a MySQL server is running at 100% CPU, and the "kernel time" (I'm not sure what it means) is unusually high, at about 70%.
There are many connections to this server (around 400) and some active queries (about 40). Would that explain this behavior? Is something wrong, or is this expected?
Edit:
As suggested in a comment, I checked the 'handler_read%' variables with SHOW GLOBAL STATUS LIKE 'handler_read%'. Here are the results:
Handler_read_first 248684
Handler_read_key 3081370400
Handler_read_last 83333
Handler_read_next 3520958058
Handler_read_prev 330
Handler_read_rnd 2210158755
Handler_read_rnd_deleted 60107588
Handler_read_rnd_next 929907565
The complete SHOW STATUS and SHOW VARIABLES results are here:
https://www.dropbox.com/s/98pnd1rzgfp4jtf/server_status.txt?dl=0
https://www.dropbox.com/s/rh0m8np0mosx6tp/server_variables.txt?dl=0

The high values for Handler_read_rnd* indicate that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have.
Table scans also burn more CPU because of syscall overhead and context switches.
Before changing parameters or investing money in hardware, I would suggest optimizing your database:
Activate the slow query log for a limited time (the slow query log can grow very fast); you may additionally set log_queries_not_using_indexes and min_examined_row_limit (see the sketch after this list).
Analyze the queries in the slow query log with EXPLAIN or EXPLAIN EXTENDED.
If the problem occurs on a production server, replicate the content to a test system first.
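For example, a minimal sketch of enabling this at runtime; the long_query_time and min_examined_row_limit values are illustrative assumptions, not recommendations:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;                   -- log queries slower than 1 second (illustrative)
SET GLOBAL log_queries_not_using_indexes = 'ON';  -- also log queries that use no index
SET GLOBAL min_examined_row_limit = 1000;         -- but skip those examining fewer than 1000 rows
-- ...collect for a limited time, then turn it off again:
SET GLOBAL slow_query_log = 'OFF';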

A number of settings are too high or too low...
tmp_table_size and max_heap_table_size are 16G -- This is disastrous! Each connection might need one or more of these. Lower them to 1% of RAM (see the sketch after this list).
There are a large number of Com_show_fields -- complain to the 3rd party vendor.
Large number for Created_tmp_disk_tables -- this usually means poorly indexed or designed queries.
Select_scan / Com_select = 77% -- Missing lots of indexes?
Threads_running = 229 -- they are probably tripping over each other.
FLUSH STATUS was run recently, so some STATUS values are not useful.
table_open_cache is 256 -- There are some indications that a bigger number would be good. Try 1500.
key_buffer_size is only 1% of RAM; raise it to 20%.
Still, ... High CPU means poor indexes and/or poorly designed queries. Let's see some of them, together with SHOW CREATE TABLE.
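A hypothetical my.cnf fragment illustrating those settings changes, assuming (purely for illustration) a server with 16 GB of RAM; scale the values to the real machine:
[mysqld]
tmp_table_size      = 160M   # ~1% of RAM instead of 16G
max_heap_table_size = 160M   # keep in step with tmp_table_size
table_open_cache    = 1500   # up from 256
key_buffer_size     = 3G     # ~20% of RAM for the MyISAM key cache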

Related

Tuning MySQL RDS which has WriteIOPS unstabilities

We are having trouble tuning MySQL 5.6 on AWS, and are looking for pointers to solve some performance issues.
We used to have dedicated servers that we could configure however we liked.
Our application uses a lot of temporary tables and we had no performance issues in that regard.
That is, until we switched to an AWS RDS instance.
Now, many slow queries show up in the logs and it slows down the whole application.
Previously we worked with MySQL 5.4 and now it's 5.6.
Looking through the docs, we discovered some changes regarding the default storage engine for temporary tables.
It is now InnoDB by default; we set it back to MyISAM, as we were used to, and saw improvements in that regard.
first aspect:
Also, our DB is quite large, we have hundreds of simultaneous accesses to our application, and some tables require real-time computation. Joins and unions are used in those cases.
When developing the application (with MySQL 5.4), we found that splitting the larger queries into two or more steps and using intermediate tables improved the overall performance.
EXPLAIN showed "Using filesort" and "Using temporary", and splitting into temporary tables let us get rid of those.
Is splitting queries into temporary tables really a good idea?
Edit: We know the conditions that trigger the implicit conversion of MEMORY temporary tables to disk.
Another thing we are not sure about is: what makes a query use temporary files?
We know that using correct indexes is one way to go (correct column order in the WHERE clause, correct use of ORDER BY, etc.), but is there anything else we could do?
second aspect:
Regarding some of the settings we are using, we have a few hundred MB for max_heap_table_size and tmp_table_size, so we hoped that our temporary tables could be held in memory.
We also found articles recommending that we look at ReadIOPS and WriteIOPS.
The reads are stable and low, but the writes are showing unstable and high numbers.
Here is a graph:
The values on the vertical axis are operations/sec.
How can we interpret those numbers?
One thing to know about our application is that every user action is logged into one big logs table, but that should happen only once per page load.
third aspect:
How far can we go with those settings so temporary tables can be used in Memory?
For instance, we read some articles explaining that they set a few GB of max_heap_table_size on a dedicated MySQL server with about 12 GB of RAM (it sounded like 80% or so).
Is that really something we can try? (Are same settings applicable on RDS?)
Also, can we set innodb_buffer_pool_size at the same value as well?
Note: I can't find the article where I found that info, and I might have confused some parameter names. I'll edit the question if I find the source again.
The server settings are very different from what we used to have (the new servers on AWS were not set up by us); many settings values have been increased, and some decreased.
We fear it's not a good thing…
Here are some of the noticeable changes:
innodb_buffer_pool_size (increased *6)
innodb_log_buffer_size (decreased /4)
innodb_additional_mem_pool_size (decreased /4)
sort_buffer_size (decreased /2)
myisam_sort_buffer_size (increased *1279) (we have only InnoDB tables; do we need to touch that?)
read_buffer_size (increased *2)
join_buffer_size (decreased /4)
read_rnd_buffer_size (increased *4)
tmp_table_size + max_heap_table_size (increased *8)
Some changes look weird to us (like myisam_sort_buffer_size); we think that keeping the same settings as before would have been better in the first place.
Can we have some pointers on those variables? (Sorry, we can't provide the exact numbers)
Also, is there a good article we could read that sums up a good balance between all those parameters?
Because we are concerned about the temporary tables not fitting in memory, I wrote this query to see what percentage of temporary tables are actually written to disk:
SELECT
    tmp_tables,
    tmp_tables_disk,
    ROUND( (tmp_tables_disk / tmp_tables) * 100, 2 ) AS to_disk_percent
FROM
    (SELECT variable_value AS tmp_tables      FROM information_schema.GLOBAL_STATUS
     WHERE variable_name = 'Created_tmp_tables')      AS s1,
    (SELECT variable_value AS tmp_tables_disk FROM information_schema.GLOBAL_STATUS
     WHERE variable_name = 'Created_tmp_disk_tables') AS s2;
The result is 10~12% (depending on the instance). Is that high?
TL;DR
I tried to add as many details about the current situation as possible; I hope it's not more confusing than anything...
The issues are about writes.
Is there a way to diagnose what causes so many writes? (maybe linked to temporary tables?)
MySQL 5.4 never existed. Perhaps MariaDB 5.4?
Temp tables, even if seemingly not hurting performance, are a clue that things could be improved.
Do you have TEXT columns that you don't need to fetch? (This can force an in-memory temp table to be converted to on-disk.)
Do you have TEXT columns that could be a smaller VARCHAR? (Same reason.)
Do you understand that INDEX(a,b) may be better than INDEX(a), INDEX(b)? ("composite" index)
The buffer_pool is very important for performance. Did the value of innodb_buffer_pool_size change? How much data do/did you have? How much RAM do/did you have?
In general, "you cannot tune your way out of a performance problem".
Is this a Data Warehouse application? Building and maintaining Summary tables is often a huge performance boost for DW "reports".
MyISAM only has table locking. This leads to blocking, which leads to all sorts of performance problems, especially slowing down queries. I'm surprised that InnoDB was slower for you. MyISAM is a dead end. By the time you get to 8.0, there will be even stronger reasons to switch.
Let's see one of the slow queries that benefited from splitting up. There are several techniques for making complex queries run faster (at least in InnoDB).
The order of ANDed things in WHERE does not matter. It does matter in composite indexes.
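A minimal illustration of that composite-index point; the table and column names here are hypothetical:
-- One composite index whose column order matches the filter usually beats
-- two single-column indexes INDEX(customer_id), INDEX(order_date):
ALTER TABLE orders ADD INDEX idx_cust_date (customer_id, order_date);
SELECT *
FROM   orders
WHERE  customer_id = 42
  AND  order_date >= '2015-01-01';   -- both columns of idx_cust_date are usable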
Changing tmp_table_size rarely helps or hurts. It should, however, be kept under 1% of RAM -- this is to avoid running out of RAM. Swapping is the worst thing that can happen.
Graphs - what are the Y-axis units? Where did the graphs come from? They look useless.
"Gb of max_heap_table_size on a dedicated MySQL server with about 12Gb of ram. (sounded like 80% or so)" -- Check that article again. I recommend 120M for that setting on that machine.
When exclusively using MyISAM on a 12GB machine, key_buffer_size should be about 2400M and innodb_buffer_pool_size should be 0.
When exclusively using InnoDB on a 12GB machine, key_buffer_size should be about 20M and innodb_buffer_pool_size should be 8G. Did you change those when you tested InnoDB?
innodb_additional_mem_pool_size -- Unused, and eventually removed.
myisam_sort_buffer_size -- 250M on a 12G machine. If using only InnoDB, the setting can be ignored since that buffer won't be used.
As for the *6 and /4 -- I need to see the actual sizes to judge anything.
"sums up a good balance between all those parameters" -- Two things:
http://mysql.rjweb.org/doc.php/memory
Leave the rest at their defaults.
For more tuning advice, plus Slowlog advice, see http://mysql.rjweb.org/doc.php/mysql_analysis

How to see size of MySQL internal innodb temporary tables

I'm seeing a large number of internal temporary disk tables being written. I can see the count with SHOW GLOBAL STATUS where Variable_name like 'Created_tmp_disk_tables'.
I know I can update max_heap_table_size and tmp_table_size to help prevent this, but without knowing the size of the tables getting written to disk, it's difficult to know what values to use.
Does anyone know how to go about finding this value?
This is not easy to get. Percona Server has options to add extra information to the slow query log, including the size of temp tables (see https://www.percona.com/doc/percona-server/5.7/diagnostics/slow_extended.html).
# User@Host: mailboxer[mailboxer] @ [192.168.10.165]
# Thread_id: 11167745 Schema: board
# Query_time: 1.009400 Lock_time: 0.000190 Rows_sent: 4 Rows_examined: 1543719 Rows_affected: 0 Rows_read: 4
# Bytes_sent: 278 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# QC_Hit: No Full_scan: Yes Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
(The example above, taken from the Percona documentation, shows the extended fields, although the example is for a query that did not create temp tables, so the size is shown as 0.)
In Oracle MySQL, some of the same extended information is available in query events in the PERFORMANCE_SCHEMA—but not the temp table sizes.
In 2014, I logged a feature request to supply this information: https://bugs.mysql.com/bug.php?id=74484. The request has been acknowledged, but as far as I know it has not been implemented.
It's a little bit unclear how this would be implemented, since it's possible for any given query to create multiple temp tables of different sizes. I believe the Percona feature shows the sum total of the temp table sizes in such cases.
All I can offer as a suggestion is to increase max_heap_table_size and tmp_table_size in increments, and to monitor the rate of increase of Created_tmp_disk_tables reported by SHOW GLOBAL STATUS compared to Created_tmp_tables (temp tables that did not use disk). As the allowed temp table size covers a greater percentage of the temp tables actually created, you should see the ratio of on-disk to in-memory temp tables decrease.
It's typically not necessary to increase tmp_table_size to hold every possible temp table, no matter how large; you want the largest outliers to use the disk. As long as the temp tables use memory 98% of the time, you should be fine. That would mean the ratio of Created_tmp_disk_tables to Created_tmp_tables should be 1:50 or better.
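A simple way to watch that ratio while experimenting (a sketch; both counters are cumulative since the last restart or FLUSH STATUS):
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
-- Created_tmp_disk_tables / Created_tmp_tables should trend toward 1:50 or better
-- as max_heap_table_size and tmp_table_size are raised.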
(Addenda to Bill's Answer.)
Some significant changes have been made in the handling of tmp tables in 5.7 and 8.0. I am not sure if any of them apply here, but be aware.
Exceeding tmp_table_size is not the only reason for using a disk-based temp table -- TEXT, BLOB, @variables, UNION, etc. (More details in https://dev.mysql.com/doc/refman/5.7/en/internal-temporary-tables.html ) For that reason, I would question Bill's 1:50 advice.
Tmp tables (on-disk or in-memory) can be created by lots of queries; multiple tmp tables may be needed for a complex query. So, the number of simultaneous tmp tables is potentially more than max_connections. For this reason, I recommend keeping tmp_table_size less than 1% of available RAM. If you did have a high setting and need lots of simultaneous tmp tables, you could cause swapping, which is terribly bad for performance.
There are many "fixes" to having high values for Created_tmp*tables; too many to itemize here. If you would like to present a query (and the relevant SHOW CREATE TABLEs); we can discuss that particular case.
When analyzing a customer's system, I look for things such as
Created_tmp_disk_tables > 1/second
Created_tmp_disk_tables > 4% of queries
Created_tmp_tables > 20/second
Any of those, in my opinion, indicate the need for scrutiny of slow queries. But if tmp_table_size is greater than 1% of RAM, I advise lowering it, even if it hurts some queries.
Just looking through the documentation, and assuming that just knowing the number of tables (via Created_tmp_tables) is not sufficient: the variable tmpdir gives you the location of the directory storing these tables. Temporarily lowering tmp_table_size should force temp tables to be created on disk, so you can look at their sizes.
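A rough sketch of that approach; the directory shown is just an example of what tmpdir might report:
SHOW VARIABLES LIKE 'tmpdir';
-- e.g. /tmp
-- While a heavy query runs, its on-disk temp tables typically appear in that
-- directory as #sql* files, so their sizes can be watched from the shell
-- for as long as the query is executing.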

difference between opened files and open files in mysql

In the status output below, the Opened_files count is '95349'.
This value is increasing rapidly.
mysql> show global status like 'open_%';
Open_files = 721
Open_streams = 0
Open_table_definitions = 706
Open_tables = 741
Opened_files = 95349
Opened_table_definitions = 701
Opened_tables = 2851
Also see this:
mysql>show variables like '%open%';
have_openssl = DISABLED
innodb_open_files = 300
open_files_limit = 8502
table_open_cache = 4096
and
max_connections = 300
Is there any relation between open files and opened files? Will there be any performance issues because of the increasing Opened_files value? This is a server with 8 GB RAM and a 500 GB hard disk, with an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz processor. It is a dedicated MySQL server.
For the command
ulimit -n
the count was 1024.
The server hangs often. Using some online tools, I have already optimised some parameters; I need to know what else should be optimized. In what cases will the Opened_files count decrease? Is it necessary for the Opened_files count to stay within some limit? If so, how do I find the appropriate limit for my server? If I am not clear somewhere, please help me by asking more questions.
Opened_files is a counter of how many times files have been opened since the last time you restarted mysqld (see the status variable Uptime for the number of seconds since the last restart).
Open_files is not a counter; it's the current number of open files.
If your Opened_files counter is increasing rapidly, you may be able to gain improvement to performance by increasing the size of the table_open_cache.
For some tips on the performance implications of this variable (and some cautions about setting it too high), see:
http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scalability/ (the problem described there seems to be solved finally in MySQL 5.6)
Re your comments:
You misunderstand the purpose of the counter. It always increases. It counts the number of times a particular operation has occurred since the last restart of mysqld. In this case, opening a file for a table.
Having a high value in a counter isn't necessarily a problem. It could mean simply that your mysqld has been running for many days or weeks without a restart. So you have to look at that number compared to your Uptime (that is, MySQL status variable Uptime, not Linux uptime).
What is more meaningful is the rate of increase of a counter, that is, how fast it grows in a given interval of time. A high rate could indicate that you are re-opening tables rapidly.
Normally, MySQL shouldn't have to re-open tables, because it retains an open table handle for each table. But it can only have a finite number of those. That's what table_open_cache is for. In your case, your MySQL instance can "remember" that it has already opened up to 4096 tables at a time. If you need another table opened, it closes one of the file descriptors and opens the table you requested.
So if you have many thousands of tables (or partitions of tables) and you access a wide variety of them rapidly, you could see a lot of turnover in that table open cache. That would be indicated by the counter Opened_tables increasing rapidly.
Therefore sizing the table_open_cache higher means that MySQL can retain more open table handles, and possibly decrease the rate of turnover.
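A quick way to estimate that turnover rate (a sketch; both values come from SHOW GLOBAL STATUS):
SHOW GLOBAL STATUS WHERE Variable_name IN ('Opened_tables', 'Uptime');
-- Opened_tables / Uptime = table re-opens per second since the last restart;
-- if that rate stays high, a larger table_open_cache may reduce the churn.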
So the solution is either to upgrade my hardware (especially RAM), so that I can increase table_open_cache beyond 4096, or to optimize the queries.

Is tuning the innodb_buffer_pool_size important on Solaris ZFS?

We're running a moderate size (350GB) database with some fairly large tables (a few hundred million rows, 50GB) on a reasonably large server (2 x quad-core Xeons, 24GB RAM, 2.5" 10k disks in RAID10), and are getting some pretty slow inserts (e.g. simple insert of a single row taking 90 seconds!).
Our innodb_buffer_pool_size is set to 400MB, which would normally be way too low for this kind of setup. However, our hosting provider advises that this is irrelevant when running on ZFS. Is he right?
(Apologies for the double post on https://dba.stackexchange.com/questions/1975/is-tuning-the-innodb-buffer-pool-size-important-on-solaris-zfs, but I'm not sure how big the audience is over there!)
Your hosting provider is incorrect. There are various things you should tune differently when running MySQL on ZFS, but reducing the innodb_buffer_pool_size is not one of them. I wrote an article on the subject of running MySQL on ZFS and gave a lecture on it a while back. Specifically regarding innodb_buffer_pool_size, what you should do is set it to whatever would be reasonable on any other file system, and because O_DIRECT doesn't mean "don't cache" on ZFS, you should set primarycache=metadata on your ZFS file system containing your datadir. There are other optimisations to be made, which you can find in the article and the lecture slides.
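A minimal sketch of the ZFS side of that advice; the dataset name is hypothetical:
# Cache only metadata in the ARC for the dataset holding the MySQL datadir,
# and let the InnoDB buffer pool (sized as on any other file system) cache the data:
zfs set primarycache=metadata tank/mysql-data
zfs get primarycache tank/mysql-data   # verify the property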
I would still set innodb_buffer_pool_size much higher than 400M. The reason? The InnoDB buffer pool will still cache the data and index pages needed for frequently accessed tables.
Run this query to get the recommended innodb_buffer_pool_size in MB:
SELECT CONCAT(ROUND(KBS/POWER(1024,IF(pw<0,0,IF(pw>3,0,pw)))+0.49999),
              SUBSTR(' KMG',IF(pw<0,0,IF(pw>3,0,pw))+1,1)) AS recommended_innodb_buffer_pool_size
FROM (SELECT SUM(data_length+index_length) KBS
      FROM information_schema.tables
      WHERE engine='InnoDB') A,
     (SELECT 2 pw) B;
Simply use either the result of this query or 80% of installed RAM (in your case 19660M) whichever is smaller.
I would also set innodb_log_file_size to 25% of the InnoDB buffer pool size. Unfortunately, the maximum value of innodb_log_file_size is 2047M (1M short of 2G). Thus, set innodb_log_file_size to 2047M, since 25% of my recommended innodb_buffer_pool_size would be 4915M.
Yet another recommendation is to disable strict ACID compliance: use either 0 or 2 for innodb_flush_log_at_trx_commit (the default is 1, which gives full ACID compliance). This produces faster InnoDB writes AT THE RISK of losing up to 1 second's worth of transactions in the event of a crash.
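Put together, a hypothetical my.cnf fragment for this 24 GB server using the figures above (run the sizing query first and use the smaller of the two buffer pool numbers):
[mysqld]
innodb_buffer_pool_size        = 19660M   # 80% of 24 GB, or the query result if smaller
innodb_log_file_size           = 2047M    # capped; 25% of the buffer pool would be ~4915M
innodb_flush_log_at_trx_commit = 2        # faster writes, risking ~1s of transactions on a crash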
It may be worth reading slow-mysql-inserts if you haven't already, as well as this link to the MySQL docs on the matter, especially with regard to wrapping multiple inserts into a large table in a transaction.
More relevant is this MySQL article on the performance of InnoDB on ZFS, which specifically considers the buffer pool size.
The headline conclusion is;
With InnoDB, the ZFS performance curve suggests a new strategy of "set the buffer pool size low, and let ZFS handle the data buffering."
You may wish to add some more detail such as the number / complexity of the indexes on the table - this can obviously make a big difference.
Apologies for this being rather generic advice rather than personal experience; I haven't run ZFS in anger, but I hope some of those links are of use.

MySQL performance doctor: can someone translate these values for me?

Slow_queries 11
Select_full_join 13 k
Handler_read_next 203 k
Handler_read_rnd_next 5,174 M
Created_tmp_disk_tables 53 k
Opened_tables 59 k
These are the red-flagged values I found in my MySQL status. I'm a self-taught developer, so I'm not sure how to fix them, or whether those values are really that high; the descriptions given in phpMyAdmin are not always clear to me.
NOTE: my website is still in staging, so there is no web traffic besides my tests.
Thanks.
You need to optimize your MySQL queries. To find slow queries, you need to log them; you can enable the slow query log in the MySQL configuration file, my.cnf, for example as sketched below.
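A hypothetical my.cnf fragment; the log file path and the threshold are illustrative assumptions:
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2    # seconds; queries slower than this get logged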
Tip: use EXPLAIN to see what MySQL is doing with your query.
Here is the meaning of the values above, from the phpMyAdmin status page:
Slow_queries 11 : "The number of queries that have taken more than long_query_time seconds"
Select_full_join : "The number of joins that do not use indexes. If this value is not 0, you should carefully check the indexes of your tables.
Handler_read_next "The number of requests to read the next row in key order. This is incremented if you are querying an index column with a range constraint or if you are doing an index scan. "
Handler_read_rnd_next : "The number of requests to read the next row in the data file. This is high if you are doing a lot of table scans. Generally this suggests that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have. "
Created_tmp_disk_tables : "The number of temporary tables on disk created automatically by the server while executing statements. If Created_tmp_disk_tables is big, you may want to increase the tmp_table_size value to cause temporary tables to be memory-based instead of disk-based. "
Opened_tables : "The number of tables that have been opened. If opened tables is big, your table cache value is probably too small. "