Tuning recommendations from mysqltuner.pl: query_cache_limit

The mysqltuner.pl script gives me the following recommendation:
query_cache_limit (> 1M, or use smaller result sets)
And MySQL status output shows:
mysql> SHOW STATUS LIKE 'Qcache%';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| Qcache_free_blocks      | 12264      |
| Qcache_free_memory      | 1001213144 |
| Qcache_hits             | 3763384    |
| Qcache_inserts          | 54632419   |
| Qcache_lowmem_prunes    | 0          |
| Qcache_not_cached       | 6656246    |
| Qcache_queries_in_cache | 55280      |
| Qcache_total_blocks     | 122848     |
+-------------------------+------------+
8 rows in set (0.00 sec)
From the status output above, how can I judge whether or not the suggested increase in query_cache_limit is needed?

Your best bet is to set up some kind of test harness that executes a realistic (defined by your scenario) load on your database, and then run that test against MySQL with different settings. Tuning is such an art in itself that it is very difficult to give an all-embracing answer without knowing your exact needs.
From http://dev.mysql.com/tech-resources/articles/mysql-query-cache.html:
The Qcache_free_memory counter provides insight into the cache's free memory. Low amounts observed vs. total allocated for the cache may indicate an undersized cache, which can be remedied by altering the global variable query_cache_size.
Qcache_hits and Qcache_inserts show the number of times a query was serviced from the cache and how many queries have been inserted into the cache. Low ratios of hits to inserts indicate little query reuse or a too-low setting of the query_cache_limit, which serves to govern the RAM devoted to each individual query cache entry. Large query result sets will require larger settings of this variable.
Another indicator of poor query reuse is an increasing Qcache_lowmem_prunes value. This indicates how often MySQL had to remove queries from the cache to make room for incoming statements. Other reasons for an increasing number of Qcache_lowmem_prunes are an undersized cache, which can't hold the needed amount of SQL statements and result sets, and memory fragmentation in the cache, which may be alleviated by issuing a FLUSH QUERY CACHE statement. You can remove all queries from the cache with the RESET QUERY CACHE command.
The Qcache_not_cached counter provides insight into the number of statements executed against MySQL that were not cacheable, due to either being a non-SELECT statement or being explicitly barred from entry with a SQL_NO_CACHE hint.
Your hits-to-inserts ratio is something like 1:15, or 6%, so it looks like your settings could do with some fine-tuning (although, as I said, you are the best judge of that as you know your requirements best).
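If you want the arithmetic spelled out, here is a rough sketch of that calculation using the counters from the status output above (the second expression treats hits + inserts + not-cached as an approximation of all SELECTs the server saw):
SELECT 3763384 / 54632419 AS hits_to_inserts_ratio,                          -- ~0.069, i.e. roughly 1:15
       3763384 / (3763384 + 54632419 + 6656246) AS approx_overall_hit_rate;  -- ~0.058 of SELECTs served from the cache
Whether a hit rate that low is acceptable depends, as noted above, on your workload.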

How to calculate amount of work performed by the page cleaner thread each second?

I'm trying to tune the InnoDB buffer pool flushing parameters.
The MySQL 5.7 manual says:
innodb_lru_scan_depth * innodb_buffer_pool_instances = amount of work performed by the page cleaner thread each second
My question is: how can I calculate the amount of work performed by the page cleaner thread each second?
Run the SQL command:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed'
once every second, and compare the value to the previous second. The difference of that value from one second to the next is the number of dirty pages the page cleaner requested to flush to disk.
Example:
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';
+----------------------------------+-----------+
| Variable_name                    | Value     |
+----------------------------------+-----------+
| Innodb_buffer_pool_pages_flushed | 496786650 |
+----------------------------------+-----------+
...wait a moment...
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';
+----------------------------------+-----------+
| Variable_name                    | Value     |
+----------------------------------+-----------+
| Innodb_buffer_pool_pages_flushed | 496787206 |
+----------------------------------+-----------+
So in the moment I waited, the page cleaner flushed 556 pages.
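If you would rather have the server do the sampling for you, here is a rough sketch of the same measurement (this assumes MySQL 5.7 or later, where the counter is also exposed in performance_schema.global_status; older versions expose it in information_schema.GLOBAL_STATUS instead):
-- Sample the counter, wait one second, then report the delta.
SELECT VARIABLE_VALUE INTO @flushed_t0
  FROM performance_schema.global_status
 WHERE VARIABLE_NAME = 'Innodb_buffer_pool_pages_flushed';
DO SLEEP(1);
SELECT VARIABLE_VALUE - @flushed_t0 AS pages_flushed_last_second
  FROM performance_schema.global_status
 WHERE VARIABLE_NAME = 'Innodb_buffer_pool_pages_flushed';
Run it a few times during a busy period to get a feel for the typical rate rather than relying on a single sample.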
The upper limit of this work is a complex calculation, involving several InnoDB configuration options. Read my answer to How to solve mysql warning: "InnoDB: page_cleaner: 1000ms intended loop took XXX ms. The settings might not be optimal"? for a description of how it works.

How to Tune Storage engine of MySQL

I have a database of just 5 million rows, but inner joins and IN clauses are taking too much time (55 seconds, 60 seconds), so I am checking whether there is a problem with my MyISAM settings.
Query: SHOW STATUS LIKE 'key%'
+------------------------+-------------+
| Variable_name          | Value       |
+------------------------+-------------+
| Key_blocks_not_flushed | 0           |
| Key_blocks_unused      | 275029      |
| Key_blocks_used        | 3316428     |
| Key_read_requests      | 11459264178 |
| Key_reads              | 3385967     |
| Key_write_requests     | 91281692    |
| Key_writes             | 27930218    |
+------------------------+-------------+
Please give me your suggestions to increase the performance of MyISAM.
I have worked with databases of more than 45 GB and also faced performance issues.
Here are some of the steps I have taken to improve performance.
(1) Remove any unnecessary indexes on the table, paying particular attention to UNIQUE indexes as these disable change buffering. Don't use a UNIQUE index if you have no reason for that constraint; prefer a regular INDEX.
(2) Inserting rows in order results in fewer page splits (which hurt most on tables that do not fit in memory). Bulk loading is not specifically related to table size, but it helps reduce redo log pressure.
(3) If bulk loading a fresh table, delay creating any indexes besides the PRIMARY KEY. If you create them once all data is loaded, then InnoDB is able to apply a pre-sort and bulk load process which is both faster and typically results in more compact indexes. This optimization was introduced in MySQL 5.5.
(4) Make sure to use InnoDB instead of MyISAM, even though MyISAM can be faster at inserts to the end of a table. InnoDB uses row-level locking, whereas MyISAM uses table-level locking.
(5) Try to avoid complex SELECT queries on MyISAM tables that are updated frequently, and write queries so that the first condition filters out as many rows as possible.
(6) For MyISAM tables that change frequently, try to avoid all variable-length columns (VARCHAR, BLOB, and TEXT). The table uses the dynamic row format if it includes even a single variable-length column.
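As a side note on the Key_% counters posted in the question, a quick sanity check is the key cache miss ratio; the usual rule of thumb from the MySQL manual is that Key_reads / Key_read_requests should stay well below about 1%. Rough arithmetic with the values shown:
SELECT 3385967 / 11459264178 AS key_read_miss_ratio,   -- ~0.0003, so the key cache itself looks healthy
       27930218 / 91281692   AS key_write_ratio;       -- ~0.31; a high value here is normal for write-heavy tables
A read miss ratio that low suggests key_buffer_size is probably not the bottleneck, and that the 55-60 second joins are more likely a query or indexing issue.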

Why do I receive notifications of lowmem prunes even though half the query cache is not being used?

I'm using a monitoring system that has been reporting every few hours that there were a lot of lowmem prunes
Thu Dec 5 01:21:52 UTC 2013
7347 query cache lowmem prunes in 600 seconds (12.24/sec)
Thu Dec 5 10:21:52 UTC 2013
10596 query cache lowmem prunes in 600 seconds (17.66/sec)
Thu Dec 5 11:26:52 UTC 2013
8979 query cache lowmem prunes in 600 seconds (14.96/sec)
mysql> SHOW STATUS LIKE 'Qc%';
Variable_name            Value
Qcache_free_blocks       2250
Qcache_free_memory       6938840
Qcache_hits              578811080
Qcache_inserts           331501709
Qcache_lowmem_prunes     124066063
Qcache_not_cached        135977294
Qcache_queries_in_cache  5638
Qcache_total_blocks      13625
About 6MB of my 16MB query cache is not being used
mysql> SHOW VARIABLES LIKE 'query_cache_size';
+------------------+----------+
| Variable_name | Value |
+------------------+----------+
| query_cache_size | 16777216 |
+------------------+----------+
1 row in set (0.00 sec)
Why are queries being pruned without the cache filling up?
Should I increase or decrease my cache size?
Additional information
mysql> FLUSH STATUS;
30 minutes later
mysql> SHOW STATUS LIKE '%Qcache%';
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1935    |
| Qcache_free_memory      | 5154904 |
| Qcache_hits             | 43918   |
| Qcache_inserts          | 33074   |
| Qcache_lowmem_prunes    | 4443    |
| Qcache_not_cached       | 10438   |
| Qcache_queries_in_cache | 6276    |
| Qcache_total_blocks     | 14713   |
+-------------------------+---------+
8 rows in set (0.00 sec)
The query cache expires entries when any INSERT/UPDATE/DELETE statements modify data in the associated table. This does not wait for the cache to fill up.
http://dev.mysql.com/doc/refman/5.6/en/query-cache-operation.html says:
If a table changes, all cached queries that use the table become invalid and are removed from the cache. This includes queries that use MERGE tables that map to the changed table. A table can be changed by many types of statements, such as INSERT, UPDATE, DELETE, TRUNCATE TABLE, ALTER TABLE, DROP TABLE, or DROP DATABASE.
Re your question:
If using InnoDB and the insertion is at the end of a table, does the query cache expire entries?
Yes, that's correct. Say for example a query cache entry is associated with a query SELECT COUNT(*) FROM mytable. An insert to the end of mytable would make the cached result from this query invalid.
The query cache doesn't have much intelligence with respect to deciding whether a given change to the data affects the cached entry. It assumes that if you change anything in a table, then all queries in the cache associated with that table in any way must be discarded.
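A quick way to see this behaviour for yourself, sketched against the same hypothetical mytable (the column name and value in the INSERT are made up, and the SELECT has to be cacheable in the first place):
SELECT COUNT(*) FROM mytable;                 -- result becomes eligible for the query cache
SHOW STATUS LIKE 'Qcache_queries_in_cache';   -- goes up by one if the result was cached
INSERT INTO mytable (id) VALUES (12345);      -- any write that touches mytable ...
SHOW STATUS LIKE 'Qcache_queries_in_cache';   -- ... drops every cached query that references it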
It could apply more intelligence to discard some query results only if the cached result would change after your insert. But how would it do that? It would have to run the query again after your insert, comparing the result to the result that is stored in the cache. If they differ, replace the result in the cache.
But it would have to do that with every query result in the cache. Note that your status output shows your query cache has 5638 queries in it. Of course not every one of these is associated with the same table you're inserting into, but we can assume that many of them are.
It would not be a good tradeoff for a single INSERT to cause hundreds or thousands of SELECT statements to be re-executed to refresh their cached results.
So the compromise is that a change to a table purges all cached results associated with that table, even if it was not strictly necessary.
The query cache is therefore not a very precise method for caching queries. It can be helpful for certain workloads, for example if your application tends to repeat a given query many times while the table receives no changes. But we have seen many cases where the workload makes the query cache not helpful, and in some cases the overhead of maintaining the query cache is actually a detriment to performance.
If you want some cache mechanism that is more precise, you have to code it yourself in your application, saving certain results to memcached or a similar fast in-memory cache. Then it becomes your responsibility to track which entries need to be refreshed when data changes.

How do I benchmark MySQL?

I'm currently using MySQL workbench. I want to see the difference in performance as the number of rows in a table increases. I want to specifically test and compare 1000 rows, 10,000 rows, 100,000 rows, 1,000,000 rows and 10,000,000 rows.
So, are there any tools that will allow me to do this and provide statistics on disk I/O, memory usage, CPU usage and time to complete query?
Yes. BENCHMARK() is probably your best option for some of them.
You can run simple queries like:
jcho360> select benchmark (10000000,1+1);
+--------------------------+
| benchmark (10000000,1+1) |
+--------------------------+
|                        0 |
+--------------------------+
1 row in set (0.18 sec)
jcho360> select benchmark (10000000,1/1);
+--------------------------+
| benchmark (10000000,1/1) |
+--------------------------+
|                        0 |
+--------------------------+
1 row in set (1.30 sec)
A sum is faster than a division (you can do this with anything you can imagine).
I'd also recommend taking a look at these programs, which will help with this part of performance testing:
mysqlslap (like BENCHMARK, but you can customize the results much more).
SysBench (tests CPU performance, I/O performance, mutex contention, memory speed, database performance).
MySQLTuner (analyzes general statistics, storage engine statistics, and performance metrics).
mk-query-profiler (performs analysis of an SQL statement).
mysqldumpslow (good for finding out which queries are causing problems).
Some of them are third-party tools, but you can find plenty of information by searching for the name of each one.
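To address the original question of comparing 1,000 up to 10,000,000 rows, one low-tech approach is to build test tables of each size and time the same query against them; the timing printed by the mysql client is the measurement. A rough sketch (the table, column names, and the final query are only illustrative):
-- Create a throwaway table and seed it with one row.
CREATE TABLE bench_rows (id INT AUTO_INCREMENT PRIMARY KEY, val INT);
INSERT INTO bench_rows (val) VALUES (FLOOR(RAND() * 1000000));
-- Each repetition doubles the row count: ~10 runs gives ~1,000 rows, ~20 runs ~1,000,000.
INSERT INTO bench_rows (val) SELECT FLOOR(RAND() * 1000000) FROM bench_rows;
-- Then time the query you care about at each size.
SELECT COUNT(*), AVG(val) FROM bench_rows WHERE val BETWEEN 1000 AND 2000;
For the disk I/O, memory, and CPU statistics you would still pair this with mysqlslap or SysBench plus an OS-level monitor, as listed above.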

Command to check read/write ratio?

Is there a command in MySQL that returns the read-to-write ratio of queries so that I'm able to know on what MySQL spends time, and whether the load would lower significantly by splitting data over two servers?
This SQL command will give you an indication as to the read/write ratio:
SHOW GLOBAL STATUS WHERE Variable_name = 'Com_insert'
OR Variable_name = 'Com_update'
OR Variable_name = 'Com_select'
OR Variable_name = 'Com_delete';
3rd party edit
On one of our servers this gave the following result:
Variable_name  | Value
Com_delete     | 6878
Com_insert     | 5975
Com_select     | 101061
Com_update     | 9026
Bytes_received | 136301641 <-- added by 3rd party
Bytes_sent     | 645476511 <-- added by 3rd party
I assume that UPDATE and INSERT have different I/O implications, but I combined them as (Com_insert + Com_update) / Com_select to get a "write/read" idea. I also used Bytes_received and Bytes_sent, but this might lead to false conclusions, since bytes received do not have to lead to a write on disk (for example, a long WHERE clause).
SELECT (136263935/1000000) AS MB_received
     , (644471797/1000000) AS MB_sent
     , (136263935/644471797) AS Ratio_Received_Sent
     , (6199+9108)/106789 AS Ins_Upd_Select_ratio;
This gave the following result:
MB_received | MB_sent | Ratio_Received_Sent | Ins_Upd_Select_ratio
136         | 644     | 0.2114              | 0.1433
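If you would rather have the server compute the ratio instead of copying the numbers by hand, a sketch along the same lines (this assumes MySQL 5.7 or later, where the counters are exposed in performance_schema.global_status; older versions expose them in information_schema.GLOBAL_STATUS):
-- Writes (INSERT + UPDATE + DELETE) divided by reads (SELECT), straight from the counters.
SELECT
    SUM(IF(VARIABLE_NAME IN ('Com_insert','Com_update','Com_delete'), VARIABLE_VALUE, 0))
  / SUM(IF(VARIABLE_NAME = 'Com_select', VARIABLE_VALUE, 0)) AS write_read_ratio
FROM performance_schema.global_status
WHERE VARIABLE_NAME IN ('Com_insert','Com_update','Com_delete','Com_select');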
You can use SHOW STATUS and check the Com_% variables for read/write ratios.
As for splitting the data, you'll have to check the slow query log (Google mysqlsla) and find out if those queries are amenable to being split.
http://forums.mysql.com/read.php?10,328920,337142#msg-337142
"SHOW GLOBAL STATUS LIKE 'Com%'. This will give overall counts (since last restart) of each statement type. This will not necessarily tell you whether you are mostly SELECT-bound versus write-bound. You might have a small number in Com_select, but the selects are terribly slow. "