I'm confused about my MySQL status.
mysql> show status like '%key%';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| Com_assign_to_keycache | 0     |
| Com_preload_keys       | 0     |
| Com_show_keys          | 0     |
| Handler_read_key       | 2     |
| Key_blocks_not_flushed | 0     |
| Key_blocks_unused      | 13396 |
| Key_blocks_used        | 0     |
| Key_read_requests      | 0     |
| Key_reads              | 0     |
| Key_write_requests     | 0     |
| Key_writes             | 0     |
+------------------------+-------+
However, the server handles heavy traffic (more than 1 billion inserts, updates, and queries each day), so why are these status values 0? The server has been running for nearly 3 days (Uptime: 2 days 18 hours 54 min 19 sec), and I did not flush the server's status.
Some of the relevant configuration: engine = innodb, key_buffer = 16M, innodb_buffer_pool_size = 2147483648.
Thanks for any information.
Perhaps you're using InnoDB tables?
Those Key_XXX server status values are for MyISAM tables.
The values you're looking at are for MyISAM tables. They represent the MyISAM Key Cache:
http://dev.mysql.com/doc/refman/5.0/en/myisam-key-cache.html
This cache holds recently used keys with the expectation that keys used recently are likely to be reused again soon -- therefore they could be valuable to cache.
Since you're using InnoDB, the key cache isn't being used.
For tuning purposes you should minimize the amount of memory dedicated to the key cache: any memory taken away from InnoDB processing is probably wasted.
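If no application tables use MyISAM, most of that memory can be reclaimed; a minimal sketch (the 8 MB floor is an assumption, kept non-zero because some internal MySQL tables may still use MyISAM):
-- Shrink the MyISAM key cache at runtime; takes effect immediately.
SET GLOBAL key_buffer_size = 8 * 1024 * 1024;  -- 8 MB, an assumed floor
-- To persist it, set key_buffer_size = 8M under [mysqld] in my.cnf.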
I am running a MySQL server and trying to tune my cache mechanism. Looking at my cache statistics below, I am concerned about the number of lowmem prunes as well as the not-cached stat. I believe I have enough memory dedicated, but I suspect my maximum cacheable query size may be too small.
mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      | 297      |
| Qcache_free_memory      | 15375480 |
| Qcache_hits             | 24724191 |
| Qcache_inserts          | 23954609 |
| Qcache_lowmem_prunes    | 2011492  |
| Qcache_not_cached       | 6987151  |
| Qcache_queries_in_cache | 6004     |
| Qcache_total_blocks     | 12386    |
+-------------------------+----------+
8 rows in set (0.00 sec)
Is there a way to get the server to report a historical statistic for the largest query result ever returned? My intention is to discover how large the returned query data is in order to better tune the cache. I feel that the not-cached number may be too large, and that it stems from not allowing a large enough maximum query result.
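As far as I know there is no built-in "largest result set ever" statistic, but you can probe whether the size cap itself is the bottleneck; a sketch (the 2 MB value is purely illustrative):
-- Results larger than query_cache_limit are never cached and
-- increment Qcache_not_cached.
SHOW VARIABLES LIKE 'query_cache%';
SET GLOBAL query_cache_limit = 2 * 1024 * 1024;  -- illustrative 2 MB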
I have two databases A and B on a MySQL server.
A is the original database, and B is derived from A by changing the format of some tables. So for each table_A in db A there's a corresponding table_B in db B, and for each row in table_A there is a corresponding row in table_B representing the exact same entry in a different format.
I'm pretty sure the details of this "format difference" between A and B are irrelevant to what I'm going to ask.
I use Java, JDBC actually, to interface with the MySQL server.
I have a number of "SELECT" queries for db A and the equivalent queries for db B. I want to execute them repeatedly and calculate some metrics, like so:
execute SELECT query on db A and calculate metrics;
execute equivalent SELECT query on db B and calculate metrics;
UPDATE data stored in db A and db B by a percentage
loop
The final goal is to compare the performance of the "same" queries on the two twin dbs, to see what effect the "format difference" has in query performance.
My questions:
How can I measure the CPU time of the query execution? Currently what I do is:
long startTime = System.currentTimeMillis();
ResultSet rs = stmt.executeQuery(QUERY);
long time = System.currentTimeMillis() - startTime;
Is this accurate?
How can I measure other metrics such as memory usage, cache usage, disk reads, disk writes, and buffer gets?
Could anyone suggest any other metrics to compare the performance of the "same" queries on the two databases?
There are a lot of metrics you cannot get. But here is a set I like to get:
FLUSH STATUS;
SELECT ...; -- or whatever query
SHOW SESSION STATUS LIKE 'Handler%';
The last command might give something like
mysql> SHOW SESSION STATUS LIKE 'Handler%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Handler_commit             | 1     |
| Handler_delete             | 0     |
| Handler_discover           | 0     |
| Handler_external_lock      | 2     |
| Handler_mrr_init           | 0     |
| Handler_prepare            | 0     |
| Handler_read_first         | 1     |
| Handler_read_key           | 1     |
| Handler_read_last          | 0     |
| Handler_read_next          | 5484  | -- rows in the table; so it did a table scan
| Handler_read_prev          | 0     |
| Handler_read_rnd           | 7     |
| Handler_read_rnd_next      | 14    |
| Handler_rollback           | 0     |
| Handler_savepoint          | 0     |
| Handler_savepoint_rollback | 0     |
| Handler_update             | 0     |
| Handler_write              | 13    | -- wrote 13 rows to a tmp table after a GROUP BY
+----------------------------+-------+
18 rows in set (0.00 sec)
Caching comes and goes, so timings can vary even by a factor of 10. Handlers, on the other hand, are very consistent. They give me insight into what is happening.
If you are running through JDBC, execute the FLUSH like you would any non-SELECT statement, and run the SHOW like a SELECT that returns two columns.
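If you also want server-side timing with a CPU breakdown, the session profiler can provide it (deprecated as of MySQL 5.6.7 but still functional in 5.x); a sketch, with the SELECT standing in for the query under test:
SET profiling = 1;             -- enable per-query profiling for this session
SELECT COUNT(*) FROM table_A;  -- hypothetical query under test
SHOW PROFILES;                 -- elapsed time of recent queries
SHOW PROFILE CPU FOR QUERY 1;  -- CPU time breakdown for query #1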
We ran an ALTER TABLE today that took down the DB. We failed over to the slave, and in the post-mortem we discovered this in the MySQL error.log:
InnoDB: ERROR: the age of the last checkpoint is 90608129,
InnoDB: which exceeds the log group capacity 90593280.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
This error rings true because we were working on a very large table that contains BLOB data types.
The best answer we found online said
To solve it, you need to stop MySQL cleanly (very important), delete the existing InnoDB log files (probably ib_logfile* in your MySQL data directory, unless you've moved them), then adjust innodb_log_file_size to suit your needs, and then start MySQL again. This article from the MySQL performance blog might be instructive.
and in the comments
Yes, the database server will effectively hang for any updates to InnoDB tables when the log fills up. It can cripple a site.
which is, I guess, what happened, based on our current (default) innodb_log_file_size of 48 MB?
SHOW GLOBAL VARIABLES LIKE '%innodb_log%';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| innodb_log_buffer_size      | 8388608  |
| innodb_log_compressed_pages | ON       |
| innodb_log_file_size        | 50331648 |
| innodb_log_files_in_group   | 2        |
| innodb_log_group_home_dir   | ./       |
+-----------------------------+----------+
So, this leads me to two pointed questions and one open-ended one:
How do we determine the largest row so we can set our innodb_log_file_size to be bigger than that?
What are the consequences of the change in question 1? I'd read about long recovery times with bigger logs.
Is there anything else I should worry about regarding migrations, considering that we have a large table (650k rows, 6169.8GB) with unrestrained, variable-length BLOB fields?
We're running MySQL 5.6, and here's our my.cnf:
[mysqld]
#defaults
basedir = /opt/mysql/server-5.6
datadir = /var/lib/mysql
port = 3306
socket = /var/run/mysqld/mysqld.sock
tmpdir = /tmp
bind-address = 0.0.0.0
#logs
log_error = /var/log/mysql/error.log
expire_logs_days = 4
slow_query_log = on
long_query_time = 1
innodb_buffer_pool_size = 11G
#http://stackoverflow.com/a/10866836/182484
collation-server = utf8_bin
init-connect ='SET NAMES utf8'
init_connect ='SET collation_connection = utf8_bin'
character-set-server = utf8
max_allowed_packet = 64M
skip-character-set-client-handshake
#cache
query_cache_size = 268435456
query_cache_type = 1
query_cache_limit = 1048576
As a follow-up to the suggestions listed below, I began investigating the file size of the table in question. I ran a script that wrote the combined byte size of the three BLOB fields to a table called pen_sizes. Here's the result of querying for the largest byte sizes:
select pen_size as bytes,
       pen_size / 1024 / 1024 as mb,
       pen_id
from pen_sizes
group by pen_id
order by bytes desc
limit 40
+---------+------------+--------+
| bytes   | mb         | pen_id |
+---------+------------+--------+
| 3542620 | 3.37850571 |  84816 |
| 3379107 | 3.22256756 |  74796 |
| 3019237 | 2.87936878 | 569726 |
| 3019237 | 2.87936878 | 576506 |
| 3019237 | 2.87936878 | 576507 |
| 2703177 | 2.57795048 | 346965 |
| 2703177 | 2.57795048 | 346964 |
| 2703177 | 2.57795048 |  93706 |
| 2064807 | 1.96915340 | 154627 |
| 2048592 | 1.95368958 | 237514 |
| 2000695 | 1.90801144 |  46798 |
| 1843034 | 1.75765419 | 231988 |
| 1843024 | 1.75764465 | 230423 |
| 1820514 | 1.73617744 |  76745 |
| 1795494 | 1.71231651 | 650208 |
| 1785353 | 1.70264530 |  74912 |
| 1754059 | 1.67280102 | 444932 |
| 1752609 | 1.67141819 |  76607 |
| 1711492 | 1.63220596 | 224574 |
| 1632405 | 1.55678272 |  76188 |
| 1500157 | 1.43066120 |  77256 |
| 1494572 | 1.42533493 | 137184 |
| 1478692 | 1.41019058 | 238547 |
| 1456973 | 1.38947773 | 181379 |
| 1433240 | 1.36684418 |  77631 |
| 1421452 | 1.35560226 | 102930 |
| 1383872 | 1.31976318 |  77627 |
| 1359317 | 1.29634571 | 454109 |
| 1355701 | 1.29289722 | 631811 |
| 1343621 | 1.28137684 |  75256 |
| 1343621 | 1.28137684 |  75257 |
| 1334071 | 1.27226925 |  77626 |
| 1327063 | 1.26558590 | 129731 |
| 1320627 | 1.25944805 | 636914 |
| 1231918 | 1.17484856 | 117269 |
| 1223975 | 1.16727352 |  75103 |
| 1220233 | 1.16370487 | 326462 |
| 1220233 | 1.16370487 | 326463 |
| 1203432 | 1.14768219 | 183967 |
| 1200373 | 1.14476490 | 420360 |
+---------+------------+--------+
This makes me believe that the average row size is closer to 1 MB than the suggested 10 MB. Maybe the table size I listed earlier includes the indexes, too?
I ran
SELECT table_name AS "Tables",
       round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = 'codepen';
+-------------------+------------+
| Tables            | Size in MB |
+-------------------+------------+
...snip
| pens              |    6287.89 |
...snip
0. Preliminary information
Your settings:
innodb_log_file_size = 50331648
innodb_log_files_in_group = 2
Therefore your "log group capacity" = 2 x 50331648 = 96 MB
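You can watch how close the server gets to that capacity while a heavy ALTER runs; a sketch using the InnoDB monitor:
-- In the LOG section of the output, checkpoint age =
--   "Log sequence number" - "Last checkpoint at";
-- keep it below the ~96 MB log group capacity computed above.
SHOW ENGINE INNODB STATUS\G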
1. How to determine the largest row
There is no direct method. But one can easily calculate the size of a given row based on the data-type storage-requirement tables in the manual (compression should not matter to us here if, as I assume, rows are not compressed in the log files).
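You can also ask the server directly, as the follow-up above ends up doing; a sketch, assuming the three BLOB columns on pens are named blob_a, blob_b, blob_c (placeholder names, as is the id column):
-- Approximate the largest row by the stored byte length of its BLOBs.
SELECT id,
       COALESCE(LENGTH(blob_a), 0)
     + COALESCE(LENGTH(blob_b), 0)
     + COALESCE(LENGTH(blob_c), 0) AS row_bytes
FROM pens
ORDER BY row_bytes DESC
LIMIT 1;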
2. Impact of innodb_log_file_size
Reference manual:
The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. Larger log files also make crash recovery slower, although improvements to recovery performance in MySQL 5.5 and higher make the log file size less of a consideration.
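A sketch of what the change might look like in my.cnf (sizes are illustrative, not a recommendation; on 5.6 the clean shutdown and removal of the old ib_logfile* files from the quoted advice still apply):
[mysqld]
# Combined capacity = innodb_log_files_in_group x innodb_log_file_size,
# here 2 x 256M = 512M.
innodb_log_file_size      = 256M
innodb_log_files_in_group = 2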
3. Anything else to worry about
6169.8 GB / 650k rows = about 10 MB per row on average
This is a serious problem per se if you intend to use your database in a transactional, multi-user situation. Consider storing your BLOBs as files outside of the database, or at least storing them in a separate MyISAM (non-transactional) table.
When I searched some status variables using the command below, I got:
mysql> show global status like '%key%';
+------------------------+--------+
| Variable_name          | Value  |
+------------------------+--------+
| Com_assign_to_keycache | 0      |
| Com_preload_keys       | 0      |
| Com_show_keys          | 0      |
| Handler_read_key       | 713132 |
| Key_blocks_not_flushed | 0      |
| Key_blocks_unused      | 14497  |
| Key_blocks_used        | 12     |
| Key_read_requests      | 48622  |
| Key_reads              | 0      |
| Key_write_requests     | 9384   |
| Key_writes             | 0      |
+------------------------+--------+
11 rows in set (0.00 sec)
I was curious why the values of both Key_reads and Key_writes are 0, and googled. The link below told me those Key_-prefixed variables are used by the MyISAM engine.
Why mysql status key_reads,key_reads_request's values are zero?
How do we know which variables are InnoDB-oriented and which are only used by the MyISAM engine? Where can I find the documentation? Thanks for any input.
Take a look at this page on server status variables. The documentation is not all-encompassing, and I would recommend searching further if it falls short for a certain status variable. For example, key_reads isn't documented as applying only to MyISAM, so you were right to do further digging. I've found slideshare to have some useful information: see this presentation, which contains some information on the various status variables. However, you'll probably not be able to know 100% about every variable listed without looking at the MySQL server source code!
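One rule of thumb that does hold: InnoDB-specific counters carry the Innodb_ prefix, so you can list them directly; a short sketch:
-- List the InnoDB-specific status counters.
SHOW GLOBAL STATUS LIKE 'Innodb%';
-- The InnoDB analogues of the MyISAM Key_read* counters:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';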
Hope some of this helps...
I have some experience as a MySQL DBA, but I don't dare add an "expert" tag to myself yet. To be honest, I've had lots of doubts about MySQL variables and status variables in the past; I could clear most of them through extensive testing, and some through some great websites. However, there have been a couple where I wasn't really convinced by my understanding, and one such item is MySQL's status variable Opened_tables.
There is one more status variable named Open_tables that's very much related.
Open_tables - number of tables that are open at the moment
Opened_tables - number of tables that have been opened since startup
Let's come to my questions:
Question #1: Even though MySQL states that Open_tables shows the number of "tables" that are open at the moment, I've read in the past that it's not actually the number of tables opened but the number of table file descriptors: it's said that if multiple threads try to open the same table simultaneously, multiple file descriptors are created. I've noticed myself that in some circumstances Open_tables was greater than the total number of tables present on the server, which seems to justify the above claim. I've also read that tmp_tables get added into this as well, which seems incorrect from my experience. Can someone confirm this?
And then: I have a MySQL server with around 965 tables (712 MyISAM & 253 InnoDB), and I've set table_cache to 1536. However, as soon as I start the MySQL service (within a couple of seconds), I notice this:
| Open_tables   | 6  |
| Opened_tables | 12 |
And that difference (here it's 6) remains like that for some time:
| Open_tables   | 133 |
| Opened_tables | 139 |
But some time later, the difference increases (here, it's 12):
| Open_tables   | 134 |
| Opened_tables | 146 |
Question #2: So can someone tell me how that difference occurs? Is it because:
a) MySQL closed 12 tables in between? If so, why did it close those tables instead of keeping them in the cache?
b) MySQL adds the count of something else (other than opened tables) into the Opened_tables variable?
Any response is much appreciated!
In my understanding, Opened_tables shows how many tables have been opened above and beyond the number held in table_open_cache. This is a cumulative amount for the lifetime of a MySQL instance, so if your table_open_cache is too low you'll see this value steadily increase, but if it never gets exceeded then you could conceivably have Opened_tables always at 0.
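A quick way to watch this in practice; a sketch (run it periodically and compare the two counters):
-- If Opened_tables keeps climbing while Open_tables sits at the
-- cache size, table_open_cache is probably too small.
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS WHERE Variable_name IN ('Open_tables', 'Opened_tables');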
Maybe it's using system tables from information_schema; if you restart MySQL and do nothing, Open_tables > 0 but Opened_tables = 0.
I tried creating a temporary table and executing a SELECT on it; the Open_tables status did not change.
Try:
mysql> flush tables;
mysql> show status like '%Open%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| Com_ha_open              | 0     |
| Com_show_open_tables     | 0     |
| Open_files               | 4     |
| Open_streams             | 0     |
| Open_table_definitions   | 0     |
| Open_tables              | 0     |
| Opened_files             | 68    |
| Opened_table_definitions | 2     |
| Opened_tables            | 2     |
| Slave_open_temp_tables   | 0     |
+--------------------------+-------+
10 rows in set (0.00 sec)
mysql> create temporary table demo(id int);
mysql> flush tables;
mysql> select * from demo;
Empty set (0.00 sec)
mysql> show status like '%Open%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| Com_ha_open              | 0     |
| Com_show_open_tables     | 0     |
| Open_files               | 4     |
| Open_streams             | 0     |
| Open_table_definitions   | 0     |
| Open_tables              | 0     |
| Opened_files             | 68    |
| Opened_table_definitions | 2     |
| Opened_tables            | 2     |
| Slave_open_temp_tables   | 0     |
+--------------------------+-------+
10 rows in set (0.00 sec)
You can see that Open_tables does not change.
Either your table_open_cache is not big enough, or you have done some operation like FLUSH TABLES. From the MySQL manual:
table_open_cache is related to max_connections. For example, for 200 concurrent running connections, you should have a table cache size of at least 200 * N, where N is the maximum number of tables per join in any of the queries which you execute. You must also reserve some extra file descriptors for temporary tables and files.
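As a worked example (both numbers hypothetical): with 200 concurrent connections and at most 4 tables per join, the cache should be at least 200 * 4 = 800:
-- table_open_cache can be set at runtime; persist it in my.cnf as well.
SET GLOBAL table_open_cache = 800;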