Wondering about Opened_tables - MySQL

I have some experience as a MySQL DBA, but I wouldn't dare add an expert tag to myself yet. To be honest, I've had lots of doubts about MySQL variables and status variables in the past; I could clear most of them through extensive testing, and some through a few great websites. However, there have been a couple I was never really convinced I understood, and one such item is MySQL's status variable Opened_tables.
There is one more status variable named Open_tables that's very much related.
Open_tables - number of tables that are open at the moment
Opened_tables - number of tables that have been opened since startup
Let's come to my questions:
Question #1: Even though MySQL states that Open_tables shows the number of "tables" that are open at the moment, I've read in the past that it's not actually the number of tables opened, but the number of table file descriptors. It's said that if multiple threads try to open the same table simultaneously, multiple file descriptors are created. I've noticed myself that in some circumstances Open_tables was greater than the total number of tables present on the server, which seems to support that claim. I've also read that tmp_tables get added into this count, which seems to be incorrect in my experience. Can someone confirm this?
And then, I have a MySQL server with around 965 tables (712 MyISAM and 253 InnoDB), and I've set table_cache to 1536. However, as soon as I start the MySQL service (within a couple of seconds), I notice this:
| Open_tables | 6 |
| Opened_tables | 12 |
And that difference (here it's 6) remains like that for some time:
| Open_tables | 133 |
| Opened_tables | 139 |
But some time later, the difference increases (here, it's 12):
| Open_tables | 134 |
| Opened_tables | 146 |
Question #2: So can someone tell me how that difference occurs? Is it because:
a) MySQL closed 12 tables in between? If so, why did it close those tables instead of keeping them in the cache?
b) MySQL adds the count of something else (other than opened tables) into the Opened_tables variable?
Any response is much appreciated!

In my understanding, Opened_tables shows how many tables have been opened above and beyond the number held in table_open_cache. This is a cumulative amount for the lifetime of a MySQL instance, so if your table_open_cache is too low you'll see this value steadily increase, but if it never gets exceeded then you could conceivably have Opened_tables always at 0.
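One way to act on this is to watch how fast Opened_tables grows relative to uptime: a steadily climbing rate on a long-running server is the classic sign that table_open_cache is too small. A minimal sketch in Python — the counter names match MySQL's status variables, but the numbers here are illustrative, not from a real server:

```python
def table_cache_pressure(opened_tables: int, uptime_seconds: int) -> float:
    """Tables (re)opened per second since startup.

    A rate that keeps growing on a long-running server suggests
    table_open_cache is too small; a rate near zero means the cache
    is absorbing essentially all opens.
    """
    return opened_tables / max(uptime_seconds, 1)

# Illustrative values (the 146 echoes the question's last snapshot):
rate = table_cache_pressure(opened_tables=146, uptime_seconds=3600)
print(f"{rate:.4f} table opens/sec")
```

In practice you would feed it the values from `SHOW GLOBAL STATUS LIKE 'Opened_tables'` and `SHOW GLOBAL STATUS LIKE 'Uptime'`.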

Maybe it's using system tables from information_schema: if you restart MySQL and do nothing, Open_tables > 0 but Opened_tables = 0.
I tried creating a temporary table and executing a SELECT on it; the Open_tables status did not change.
Try:
mysql> flush tables;
mysql> show status like '%Open%';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| Com_ha_open | 0 |
| Com_show_open_tables | 0 |
| Open_files | 4 |
| Open_streams | 0 |
| Open_table_definitions | 0 |
| Open_tables | 0 |
| Opened_files | 68 |
| Opened_table_definitions | 2 |
| Opened_tables | 2 |
| Slave_open_temp_tables | 0 |
+--------------------------+-------+
10 rows in set (0.00 sec)
mysql> create temporary table demo(id int);
mysql> flush tables;
mysql> select * from demo;
Empty set (0.00 sec)
mysql> show status like '%Open%';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| Com_ha_open | 0 |
| Com_show_open_tables | 0 |
| Open_files | 4 |
| Open_streams | 0 |
| Open_table_definitions | 0 |
| Open_tables | 0 |
| Opened_files | 68 |
| Opened_table_definitions | 2 |
| Opened_tables | 2 |
| Slave_open_temp_tables | 0 |
+--------------------------+-------+
10 rows in set (0.00 sec)
You can see that Open_tables did not change.
Either your table_open_cache is not big enough, or you have performed some operation like FLUSH TABLES. From the MySQL manual:
table_open_cache is related to max_connections. For example, for 200 concurrent running connections, you should have a table cache size of at least 200 * N, where N is the maximum number of tables per join in any of the queries which you execute. You must also reserve some extra file descriptors for temporary tables and files.
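The manual's rule of thumb reduces to a one-line calculation. A sketch in Python — the formula is straight from the excerpt above; the example values (200 connections, 4-table joins) are assumptions for illustration:

```python
def min_table_open_cache(max_connections: int, max_tables_per_join: int) -> int:
    """Lower bound for table_open_cache per the MySQL manual's rule of
    thumb: at least max_connections * N, where N is the maximum number
    of tables referenced in any single join."""
    return max_connections * max_tables_per_join

# 200 concurrent connections, worst query joins 4 tables:
print(min_table_open_cache(200, 4))  # -> 800
```

Remember the manual also says to reserve extra file descriptors on top of this for temporary tables and files.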

Related

MYSQL - Find the largest historical result?

I am running a mysql server. I am trying to tune my cache mechanism. I am viewing my cache statistics below and I am concerned about the number of lowmem prunes as well as the not cached stat. I believe I have enough memory dedicated but I feel it is possible that my maximum query size may be too small.
mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+----------+
| Variable_name | Value |
+-------------------------+----------+
| Qcache_free_blocks | 297 |
| Qcache_free_memory | 15375480 |
| Qcache_hits | 24724191 |
| Qcache_inserts | 23954609 |
| Qcache_lowmem_prunes | 2011492 |
| Qcache_not_cached | 6987151 |
| Qcache_queries_in_cache | 6004 |
| Qcache_total_blocks | 12386 |
+-------------------------+----------+
8 rows in set (0.00 sec)
Is there a way to get the server to report back a historical statistic for the largest query ever returned? My intention is to discover how large the returned query data is in order to better tune the cache. I feel that the not cached number may be too large and that stems from not having a large enough maximum query.
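Before hunting for a largest-query statistic, it helps to quantify what the counters already shown say about the cache. A rough sketch in Python using the numbers from the output above — the ratios are standard derived metrics, but any "good/bad" threshold you apply to them is a judgment call, not an official figure:

```python
# Counters copied from the SHOW STATUS output in the question.
qcache = {
    "Qcache_hits": 24724191,
    "Qcache_inserts": 23954609,
    "Qcache_lowmem_prunes": 2011492,
    "Qcache_not_cached": 6987151,
}

# Hit ratio: hits over all cacheable-or-not SELECT traffic.
selects = (qcache["Qcache_hits"] + qcache["Qcache_inserts"]
           + qcache["Qcache_not_cached"])
hit_ratio = qcache["Qcache_hits"] / selects

# Prune ratio: fraction of inserted results later evicted for lack of memory.
prune_ratio = qcache["Qcache_lowmem_prunes"] / qcache["Qcache_inserts"]

print(f"hit ratio:   {hit_ratio:.1%}")
print(f"prune ratio: {prune_ratio:.1%}")
```

A high Qcache_not_cached count relative to hits usually points at query_cache_limit (results bigger than it are never cached) or at non-deterministic queries, while a high prune ratio points at query_cache_size.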

Comparing most recent InnoDB_Rows_Inserted variable value with old value

This is the query that I have written:
SELECT variable_name,0 - variable_value
FROM information_schema.global_status
WHERE variable_name IN ('Innodb_rows_inserted','Innodb_rows_updated'
,'Innodb_rows_deleted','Innodb_rows_read'
,'Innodb_data_reads','Innodb_data_read'
, 'Innodb_data_writes','Innodb_data_written');
+----------------------+--------------------+
| variable_name | 0 - variable_value |
+----------------------+--------------------+
| INNODB_DATA_READ | -6672384 |
| INNODB_DATA_READS | -422 |
| INNODB_DATA_WRITES | -22 |
| INNODB_DATA_WRITTEN | -333312 |
| INNODB_ROWS_DELETED | 0 |
| INNODB_ROWS_INSERTED | -2 |
| INNODB_ROWS_READ | -17 |
| INNODB_ROWS_UPDATED | 0 |
+----------------------+--------------------+
8 rows in set (0.00 sec)
Now, I want the difference between the most recently updated value for INNODB_ROWS_INSERTED and its last value.
For example, in the above output the value of INNODB_ROWS_INSERTED is 2 (displayed as -2 because of the 0 - variable_value trick). If I make one more insert and re-run this query, the updated value will be 3. Now I want to display the difference, i.e. 1, in a new table or a file.
Thanks
Beware of the Observer effect.
Plan A: Put the original values in a temp table, but make sure it is not an InnoDB table. After the test, JOIN the information_schema query to it to get the diffs.
Plan B: Use SHOW GLOBAL STATUS LIKE 'Innodb%' before and after; parse the output (in, say, PHP); take the diffs. It seems that SHOW is less likely to be subject to the Observer effect. But the coding is more clumsy.
Plan C (won't work for those STATUS values): FLUSH STATUS zeros out some STATUS values. Do that before the test, then use SHOW SESSION STATUS LIKE '...' afterwards. (Note use of SESSION.) This works well for LIKE 'Handler%', which I find useful in digging into how queries work and how fast or slow they are.
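Plan B's diffing step can be sketched as follows — here the two SHOW GLOBAL STATUS snapshots are faked as dicts so the logic is visible; in a real script you would populate them by running the statement through your client and parsing the two-column name/value result:

```python
def status_diff(before: dict, after: dict) -> dict:
    """Per-counter difference between two status snapshots.

    Only counters present in both snapshots are diffed; cumulative
    counters (like Innodb_rows_inserted) yield the activity that
    happened between the two SHOW statements.
    """
    return {name: after[name] - before[name]
            for name in before if name in after}

# Hypothetical snapshots (real ones come from SHOW GLOBAL STATUS LIKE 'Innodb%'):
before = {"Innodb_rows_inserted": 2, "Innodb_rows_read": 17}
after  = {"Innodb_rows_inserted": 3, "Innodb_rows_read": 21}
print(status_diff(before, after))  # -> {'Innodb_rows_inserted': 1, 'Innodb_rows_read': 4}
```

This is exactly the "difference between the most recent value and its last value" the question asks for, without the Observer-effect risk of writing the baseline into an InnoDB table.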

Performance metrics calculation for SQL queries

I have two databases A and B on MySQL server.
A is the original database and B is derived from A changing the format of some tables. So for each table_A in db A there's a respective table table_B in db B and for each row in table_A there is a respective row in table_B representing the exact same table entry, in a different format.
I'm pretty sure that explaining this "format difference" between A and B is irrelevant of what I'm going to ask.
I use Java, JDBC actually, to interface with MySQL server.
I have a number of "SELECT" queries for db A and the equivalent queries for db B. I want to execute them repeatedly and calculate some metrics, like so:
execute SELECT query on db A and calculate metrics;
execute equivalent SELECT query on db B and calculate metrics;
UPDATE data stored in db A and db B by a percentage
loop
The final goal is to compare the performance of the "same" queries on the two twin dbs, to see what effect the "format difference" has in query performance.
My questions:
How can I calculate CPU time of the query execution? Currently what I do is:
long startTime = System.currentTimeMillis();
ResultSet rs = stmt.executeQuery(QUERY);
long time = System.currentTimeMillis() - startTime;
Is this accurate?
How can I calculate other metrics such as memory usage, cache usage, disk reads, disk writes, buffer gets
Could anyone suggest any other metrics to compare the performance of the "same" queries on the two databases?
There are a lot of metrics you cannot get. But here is a set I like to get:
FLUSH STATUS;
SELECT ...; -- or whatever query
SHOW SESSION STATUS LIKE 'Handler%';
The last command might give something like
mysql> SHOW SESSION STATUS LIKE 'Handler%';
+----------------------------+-------+
| Variable_name | Value |
+----------------------------+-------+
| Handler_commit | 1 |
| Handler_delete | 0 |
| Handler_discover | 0 |
| Handler_external_lock | 2 |
| Handler_mrr_init | 0 |
| Handler_prepare | 0 |
| Handler_read_first | 1 |
| Handler_read_key | 1 |
| Handler_read_last | 0 |
| Handler_read_next | 5484 | -- rows in the table; so it did a table scan
| Handler_read_prev | 0 |
| Handler_read_rnd | 7 |
| Handler_read_rnd_next | 14 |
| Handler_rollback | 0 |
| Handler_savepoint | 0 |
| Handler_savepoint_rollback | 0 |
| Handler_update | 0 |
| Handler_write | 13 | -- wrote to a tmp table 13 rows after a GROUP BY
+----------------------------+-------+
18 rows in set (0.00 sec)
Caching comes and goes, so timings can vary even by a factor of 10. Handlers, on the other hand, are very consistent. They give me insight into what is happening.
If you are running through JDBC, run the FLUSH like you would a non-SELECT; run the SHOW like a SELECT that gives you 2 columns.
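The Handler counters above lend themselves to a simple mechanical reading. A sketch of that interpretation in Python — the counter values are copied from the example output, and the "table scan" heuristic encodes the same reasoning as the inline comments (read_next close to the table's row count with almost no keyed lookups), so treat the exact thresholds as my own rule of thumb:

```python
# Handler counters from the example SHOW SESSION STATUS output above.
handlers = {
    "Handler_read_key": 1,
    "Handler_read_next": 5484,   # ~ rows in the table
    "Handler_read_rnd_next": 14,
    "Handler_write": 13,         # rows written to a tmp table (GROUP BY)
}

def looks_like_scan(h: dict, table_rows: int) -> bool:
    """Heuristic: Handler_read_next near the table's row count with at
    most one keyed lookup suggests the query scanned the whole table
    rather than using an index to narrow the rows."""
    return (h["Handler_read_next"] >= 0.9 * table_rows
            and h["Handler_read_key"] <= 1)

print(looks_like_scan(handlers, table_rows=5484))  # -> True
```

Because Handler counts are stable across runs (unlike timings, which caching can swing by a factor of 10), comparing these per-query fingerprints between your A and B databases is a fair way to measure what the format difference actually costs.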

how to distinguish which variables are used for innodb engine or for MyIsam engine?

When I searched for some status variables using the command below, I got:
mysql> show global status like '%key%';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| Com_assign_to_keycache | 0 |
| Com_preload_keys | 0 |
| Com_show_keys | 0 |
| Handler_read_key | 713132 |
| Key_blocks_not_flushed | 0 |
| Key_blocks_unused | 14497 |
| Key_blocks_used | 12 |
| Key_read_requests | 48622 |
| Key_reads | 0 |
| Key_write_requests | 9384 |
| Key_writes | 0 |
+------------------------+--------+
11 rows in set (0.00 sec)
I was curious why the values of both Key_reads and Key_writes are 0, and googled. The link below told me that those Key_-prefixed variables are used by the MyISAM engine:
Why mysql status key_reads,key_reads_request's values are zero?
How do we know which variables are InnoDB-oriented and which are used only by the MyISAM engine? Where can I find the documentation? Thanks for any input.
Take a look at this page on server status variables. The documentation is not all-encompassing, and I would recommend searching further if it falls short for a certain status variable. For example, the documentation doesn't call out Key_reads as MyISAM-only, so you were right to do further digging. I've found SlideShare to have some useful information: see this presentation, which covers various status variables. However, you'll probably not be able to know 100% about every variable listed without looking at the MySQL server source code!
Hope some of this helps...

Why mysql status key_reads,key_reads_request's values are zero?

I have some confusion about my mysql status.
mysql> show status like '%key%';
+------------------------+-------+
| Variable_name | Value |
+------------------------+-------+
| Com_assign_to_keycache | 0 |
| Com_preload_keys | 0 |
| Com_show_keys | 0 |
| Handler_read_key | 2 |
| Key_blocks_not_flushed | 0 |
| Key_blocks_unused | 13396 |
| Key_blocks_used | 0 |
| Key_read_requests | 0 |
| Key_reads | 0 |
| Key_write_requests | 0 |
| Key_writes             | 0     |
+------------------------+-------+
But there is a large volume (more than 1 billion each day) of inserts, updates, and queries on the server, so why are these status values 0? The server has been running nearly 3 days (Uptime: 2 days 18 hours 54 min 19 sec). I did not flush the server's status.
Some DB config: engine = InnoDB, key_buffer = 16M, innodb_buffer_pool_size = 2147483648.
Thanks for any information.
Perhaps you're using InnoDB tables?
Those Key_XXX server status values are for MyISAM tables.
The values you're looking at are for MyISAM tables. They represent the MyISAM Key Cache:
http://dev.mysql.com/doc/refman/5.0/en/myisam-key-cache.html
This cache holds recently used keys with the expectation that keys used recently are likely to be reused again soon -- therefore they could be valuable to cache.
Since you're using innodb, the key cache isn't being used.
For tuning purposes you should minimize the amount of memory you have dedicated to the key cache. Any memory taken away from InnoDB processing is probably wasted.
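When the key cache is actually in use (i.e. you have MyISAM tables), the usual figure of merit from these counters is the miss ratio. A minimal sketch in Python — the 48622 figure is borrowed from the earlier question's output; on an all-InnoDB server like this asker's, both counters stay at zero and the ratio is simply undefined:

```python
def key_cache_miss_ratio(key_reads, key_read_requests):
    """Key_reads / Key_read_requests for the MyISAM key cache.

    Returns None when the cache was never consulted (Key_read_requests
    is 0), e.g. on an all-InnoDB server, rather than dividing by zero.
    """
    if key_read_requests == 0:
        return None
    return key_reads / key_read_requests

print(key_cache_miss_ratio(0, 48622))  # -> 0.0  (every index read served from cache)
print(key_cache_miss_ratio(0, 0))      # -> None (cache unused, as in this question)
```

A miss ratio that stays very low means key_buffer is big enough for the MyISAM workload; a None result is itself the answer here: the memory should go to innodb_buffer_pool_size instead.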