Could you please explain the meaning of the following MySQL metric:
table cache hit rate = open_tables / opened_tables.
As I understand it, open_tables is the current number of open tables, opened_tables is a cumulative counter, and there is no direct correlation between these two status variables.
open_tables is the number of tables you have open right now; opened_tables is the total number of table-opening operations since the server started.
For example, if you have performed 100 table opening operations and have 25 tables open now, your table cache hit rate is 25/100 = 1/4.
The rationale is that you are trying to measure whether your table cache is big enough, but the ratio of open to opened tables doesn't give you the whole picture. Read "How MySQL Opens and Closes Tables" (http://dev.mysql.com/doc/refman/5.0/en/table-cache.html) to understand this better.
What you want to do is look at the value of opened tables over time - if it is growing rapidly while your system is busy, you might want to increase your table cache size. But be careful about making the table cache too large - it takes time for MySQL to check a large number of cached table descriptors to figure out which one to close next.
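If it helps, here is a minimal way to watch this yourself (assuming a server recent enough to support SHOW GLOBAL STATUS) by sampling the raw counters and seeing how fast Opened_tables grows between samples:

SHOW GLOBAL STATUS LIKE 'Open_tables';    -- tables open right now
SHOW GLOBAL STATUS LIKE 'Opened_tables';  -- cumulative table-opening operations since startup
-- Re-run the second statement a few minutes later; a rapidly growing delta
-- while the server is busy suggests the table cache is too small.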
We are facing a problem: our MySQL 8.0 DB instance (production environment) is continuously showing an alert that the number of open tables is equal to the table_open_cache value. The number of open tables increased by more than 43,200 in a 24-hour observation period, which brings the total count of open tables to 2,845,063.
Please help me figure out how to reduce this. If I go for the FLUSH TABLES command with read only or with read lock, will it cause any data loss or performance issues? I have to implement this on my production database. Is it good practice to run FLUSH TABLES manually once a day?
I posted a question regarding open tables on a MySQL DB instance and need to know how to reduce them by any method. Is it good practice to run FLUSH TABLES manually once a day?
I am attaching an image for reference:
image1
Misses/Hits is about 2% -- reasonable.
Apparently that screenshot should be talking about "opened" tables, not "open" tables. Only 4K are currently "open", limited by table_open_cache.
The image shows 43.2K vs 2.8M -- it is unclear what each means. 43.2K/24h is exactly 1 per 2 seconds. This is suspect.
2.8M openings of tables in 24 hours is high, but not necessarily "bad". (It's about the 95th percentile.)
Suggest increasing table_open_cache to 8000. What activity is going on? Perhaps you are opening a connection, performing a single operation (which involves opening one or more tables), then disconnecting? Can you cut back on the rapidity of creating connections?
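For example, a rough sketch of raising it on MySQL 8.0 (the 8000 figure is just the suggestion above; tune to your workload):

SET GLOBAL table_open_cache = 8000;   -- takes effect immediately for new table opens
SET PERSIST table_open_cache = 8000;  -- MySQL 8.0: also survives a server restart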
Please provide SHOW GLOBAL STATUS LIKE 'Connections'; 50 per second is "high".
I await seeing Opened_tables and Uptime fetched at the 'same' time.
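Something like this fetches both in a single result set, so the two values line up in time:

SHOW GLOBAL STATUS WHERE Variable_name IN ('Opened_tables', 'Uptime');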
No, I don't think FLUSH is the answer.
I'm running MariaDB 10.2.31 on Ubuntu 18.04.4 LTS.
On a regular basis I encounter the following conundrum, especially when starting out in the morning (i.e. when my DEV environment has been idle overnight), but also during the day from time to time.
I have a table (this applies to other tables as well) with approx. 15,000 rows and (amongst others) an index on a VARCHAR column containing on average 5 to 10 characters.
Notably, most columns including this one are GENERATED ALWAYS AS (JSON_EXTRACT(....)) STORED since 99% of my data comes from a REST API as JSON-encoded strings (and conveniently I simply store those in one column and extract everything else).
When running a query on that column with WHERE colname LIKE 'text%' I get query durations of e.g. 0.006 seconds. Nice. When I have the query EXPLAINed, I can see that the index is being used.
However, as I have mentioned, when I start out in the morning, this takes way longer (14 seconds this morning). I know about the query cache and I tried this with query cache turned off (both via SET GLOBAL query_cache_type=OFF and RESET QUERY CACHE). In this case I get consistent times of approx. 0.3 seconds - as expected.
So, what would you recommend I should look into? Is my DB sleeping? Is there such a thing?
There are a few things that could be going on:
1) Cold caches (overnight backup, mysqld restart, or large processing job results in this particular index and table data being evicted from memory).
2) Statistics on the table go stale and the query planner gets confused until you run some queries against the table and the statistics get refreshed. You can force an update using ANALYZE TABLE table_name.
3) Query planner heisenbug. Very common in MySQL 5.7 and later; I have never seen it on MariaDB, so this is rather unlikely.
You can get to the bottom of this by enabling the following in the config:
log_output='FILE'
slow_query_log=1
log_slow_verbosity='query_plan,explain'
long_query_time=1
Then review what is in the slow log just after you see a slow occurrence. If the logged explain plan looks the same for both slow and fast cases, you have a cold-caches issue. If they are different, you have a table stats issue and you need to cron ANALYZE TABLE at the end of the overnight task that reads/writes a lot to that table. If that doesn't help, as a last resort, hard-code an index hint into your query with FORCE INDEX (index_name), as illustrated below.
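Purely as an illustration of that last resort (table, column and index names here are hypothetical), a hint looks like this:

SELECT *
FROM my_table FORCE INDEX (idx_colname)  -- bypass the planner's index choice
WHERE colname LIKE 'text%';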
Enable your slow query log with log_slow_verbosity=query_plan,explain and a long_query_time low enough to catch the slow occurrences. See if occasionally it's using a different (or no) index.
Before you start your next day, look at SHOW GLOBAL STATUS LIKE "innodb_buffer_pool%" and after your query look at the values again. See how many buffer pool reads vs read requests are in this status output to see if all are coming off disk.
As #Solarflare mentioned, backups and nightly activity might be purging the InnoDB buffer pool of cached data, reverting back to disk reads and making it slow again. As part of your nightly activities you could set innodb_buffer_pool_dump_now=1 to save the hot pages before the scripted activity and innodb_buffer_pool_load_now=1 to restore them afterwards.
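A rough sketch of how that could be wired around the nightly job (these variables exist in MariaDB 10.2 and MySQL 5.7+; the exact timing is up to you):

SET GLOBAL innodb_buffer_pool_dump_now = ON;  -- before the heavy activity: snapshot the hot page list
-- ... run the nightly backup / batch job ...
SET GLOBAL innodb_buffer_pool_load_now = ON;  -- afterwards: reload the snapshot into the buffer pool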
Shout-out and Thank you to everyone giving valuable insight!
From all the tips you guys gave I think I am starting to understand the problem better and beginning to narrow it down:
First thing I found was my default innodb_buffer_pool_size of 134 MB. With the sort and amount of data I'm processing this is ridiculously low - so I was able to increase it.
Very helpful post: https://dba.stackexchange.com/a/27341
And from the docs: https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool-resize.html
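For anyone curious, on MariaDB 10.2 the resize can be done online; roughly along these lines (the value is rounded up to the buffer pool chunk size):

SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024;  -- ~2GB
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';         -- confirm the new value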
Now that I have increased it to close to 2GB and am able to monitor its usage and RAM usage in general (cli: cat /proc/meminfo) I realize that my 4GB RAM is in fact on the low side of things. I am nowhere near seeing any unused overhead (buffer usage still at 99% and free RAM around 100MB).
I will start to optimize RAM usage of my daemon next and see where this leads - but this will not free enough RAM altogether.
#danblack mentioned innodb_buffer_pool_dump_now and innodb_buffer_pool_load_now. This is an interesting approach to maybe use whenever the daemon accesses the DB as I would love to separate my daemon's buffer usage from the front end's (apparently this is not possible!). I will look into this further but as my daemon is running all the time (not only at night) this might not be feasible.
#Gordan Bobic mentioned "refreshing" DB tables by using ANALYZE TABLE tableName. I found this to be quite fast and incorporated it into the daemon after each time it does an extensive read/write. This increases daemon run times by a few seconds, but that is no issue at all. And I figure I can't go wrong with it :)
So, in the end I believe my issue to be a combination of things: Too small buffer size, too small RAM, too many read/write operations for that environment (evicting buffered indexes etc.).
Also I will have to learn more about memory allocation etc and optimize this better (large-pages=1 etc).
I have several tables with ~15 million rows. When I create an index on the id column and then execute a simple query like SELECT * FROM my_table WHERE id = 1, I retrieve the data within one second. But then, after a few minutes, if I execute the query with a different id it takes over 15 seconds.
I'm sure it is not the query cache because I'm trying different ids all the time to make sure I'm not retrieving from the cache. Also, I used EXPLAIN to make sure the index is being used.
The specs of the server are:
CPU: Intel Dual Xeon 5405 Harpertown 2.0GHz Quad Core
RAM: 8GB
Hard drive 2: 146GB SAS (15k rpm)
Another thing I noticed is that if I execute REPAIR TABLE my_table the queries complete within one second again. I assume something is being cached, either the table or the index. If so, is there any way to tell MySQL to keep it cached? Is it normal, given the specs of the server, to take around 13 seconds on an indexed table? The index is not unique and each query returns around 3000 rows.
NOTE: I'm using MyISAM and I know there won't be any write in these tables, all the queries will be to read data.
SOLVED: thank you for your answers; as many of you pointed out, it was the key_buffer_size. I also reordered the tables using the same column as the index so the records are not scattered, and now I'm executing the queries consistently under 1 second.
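For completeness, reordering a MyISAM table by an indexed column can be done along these lines (the ORDER BY clause is ignored for InnoDB tables; my_table and id are the names from the example above):

ALTER TABLE my_table ORDER BY id;  -- physically rewrites rows in id order so matching rows sit together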
Please provide
SHOW CREATE TABLE
SHOW VARIABLES LIKE '%buffer%';
Likely causes:
key_buffer_size (when using MyISAM) is not 20% of RAM; or innodb_buffer_pool_size is not 70% of available RAM (when using InnoDB).
Another query (or group of queries) is coming in and "blowing out the cache" (key_buffer or buffer_pool). Look for such queries.
When using InnoDB, you don't have a PRIMARY KEY. (It is really important to have such.)
For 3000 rows to take 15 seconds to load, I deduce:
The cache for the table (not necessarily for the index) was blown out, and
The 3000 rows were scattered around the table (hence fetching one row does not help much in finding subsequent rows).
Memory allocation blog: http://mysql.rjweb.org/doc.php/memory
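As a concrete illustration of the 20%-of-RAM guideline mentioned above (hypothetical value for an 8GB server running mostly MyISAM):

SET GLOBAL key_buffer_size = 1600 * 1024 * 1024;  -- ~1.6GB, roughly 20% of 8GB RAM
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';     -- confirm the new value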
Is it normal, given the specs of the server, to take around 13 seconds on an indexed table?
The high variance in response time indicates that something is amiss. With only 8 GB of RAM and 15 million rows, you might not have enough RAM to keep the index in memory.
Is swap enabled on the server? This could explain the extreme jump in response time.
Investigate the memory situation with a tool like top, htop or glances.
There are 10 InnoDB partitioned tables. MySQL is configured with the option innodb-file-per-table=1 (one InnoDB file per table/partition, for various reasons). Each table is about 40GB in size. They contain statistics data.
During normal operation, the system can handle the load. The accumulated data is processed every N minutes. However, if for some reason there is no processing for more than 30 minutes (e.g. system maintenance; it is rare, but once a year it is necessary to make changes), lock timeouts begin to appear.
I will not go into how we arrived at such an architecture, but it is the best solution; the road was long.
Each time, making changes requires more and more time. Today, for example, a simple ALTER TABLE took 2 hours 45 minutes. This is unacceptable.
So, as I said, processing the accumulated data requires a lot of resources, and SELECT statements are beginning to return lock timeout errors. Of course, the big tables are not involved in those SELECTs; they work against the results of the queries on the big tables. The total size of these 10 tables is about 400GB, plus a few dozen small tables whose combined size is comparable to (or maybe does not even reach) the size of one big table. The problems are with the small tables.
My question is: how can I solve the problem with the lock timeout errors? The server is not bad: 8-core Xeon, 64GB RAM. And this is only the database server; of course, the entire system is not located on the same machine.
There is only one reason why I get these errors: the process that transforms data from the big tables to the small ones.
Any ideas?
I have 81 tables in an innodb database (MySQL).
The data in them amounts to 2GB on disk.
My queries rarely join more than 3 tables together at once. My innodb_buffer_pool size is about 2.1 GB.
Running mysqltuner.pl I get the following !!
[!!] Table cache hit rate: 7% (274 open / 3K opened)
From mysqlreport I see that I indeed have 274 open, have had 3K opened and that my ceiling for open is 400.
However, doing this
show status like '%open%'
gets this result
...
Open_table_definitions 161
Open_tables 274
Opened_files 150232
Opened_table_definitions 0
Opened_tables 0
Two questions:
1) Shouldn't the "opened tables" say 3K and not zero in the above result from show status like '%open%'?
2) Any advice on what I need to do to remedy this !! i.e. the low table cache hit rate?
Thanks
PS. If it helps, the second !! I have in mysqltuner.pl is this:
[!!] Temporary tables created on disk: 29% (35K on disk / 119K total)
show status like '%open%' shows the status for the current session rather than for the whole MySQL server (see the SHOW STATUS syntax). To get the global status, use show global status like '%open%' instead.
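Side by side, the difference is just the GLOBAL keyword (a quick sketch):

SHOW STATUS LIKE 'Opened_tables';         -- session scope: activity of this connection only
SHOW GLOBAL STATUS LIKE 'Opened_tables';  -- server-wide counter since startup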
One problem I encountered with mysqltuner.pl is that whenever it is run, it opens all the tables in the database, thus inflating the opened_tables statistic. If that is not the case, the MySQL manual suggests setting table_open_cache to * .
1) Shouldn't the "opened tables" say 3K and not zero in the above result from show status like '%open%'?
YES. The mysqltuner result comes mostly from SHOW STATUS and SHOW VARIABLES plus some basic arithmetic.
2) Any advice on what I need to do to remedy this !! i.e. the low table cache hit rate?
A low table cache hit rate is due to:
1. too few tables having been opened so far, or
2. the total number of tables in all databases being <<< the table open cache size.