Is there a command in MySQL that returns the read-to-write ratio of queries, so I can see where MySQL spends its time and whether the load would drop significantly if the data were split across two servers?
This SQL command will give you an indication as to the read/write ratio:
SHOW GLOBAL STATUS WHERE Variable_name = 'Com_insert'
OR Variable_name = 'Com_update'
OR Variable_name = 'Com_select'
OR Variable_name = 'Com_delete';
3rd party edit
On one of our servers this gave the following result:
+----------------+-----------+
| Variable_name  | Value     |
+----------------+-----------+
| Com_delete     | 6878      |
| Com_insert     | 5975      |
| Com_select     | 101061    |
| Com_update     | 9026      |
| Bytes_received | 136301641 |   <-- added by 3rd party
| Bytes_sent     | 645476511 |   <-- added by 3rd party
+----------------+-----------+
I assume that updates and inserts have different I/O implications, but I combined them as (Com_insert + Com_update) / Com_select to get a rough "write/read" idea. I also used Bytes_received and Bytes_sent, but this might lead to false conclusions, since bytes received do not necessarily lead to a write to disk (for example, a long WHERE clause).
SELECT (136263935/1000000) AS MB_received
     , (644471797/1000000) AS MB_sent
     , (136263935/644471797) AS Ratio_Received_Sent
     , (6199+9108)/106789 AS Ins_Upd_Select_ratio;
This gave the following result:
MB_received | MB_sent | Ratio_Received_Sent | Ins_Upd_Select_ratio
136         | 644     | 0.2114              | 0.1433
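If you prefer not to copy the counter values by hand, the same write/read ratio can be computed straight from the status counters. This is only a sketch, and it assumes the counters are readable from information_schema.GLOBAL_STATUS (MySQL 5.6 and earlier; on 5.7+ read from performance_schema.global_status instead):
SELECT SUM(CASE WHEN VARIABLE_NAME IN ('Com_insert', 'Com_update')
                THEN VARIABLE_VALUE ELSE 0 END)
     / SUM(CASE WHEN VARIABLE_NAME = 'Com_select'
                THEN VARIABLE_VALUE ELSE 0 END) AS Ins_Upd_Select_ratio
  FROM information_schema.GLOBAL_STATUS
 WHERE VARIABLE_NAME IN ('Com_insert', 'Com_update', 'Com_select');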
You can use SHOW STATUS and check the Com_% variables for read/write ratios.
As for splitting the data, you'll have to check the slow query log (Google mysqlsla) and find out whether those queries are amenable to being split.
http://forums.mysql.com/read.php?10,328920,337142#msg-337142
"SHOW GLOBAL STATUS LIKE 'Com%'. This will give overall counts (since last restart) of each statement type. This will not necessarily tell you whether you are mostly SELECT-bound versus write-bound. You might have a small number in Com_select, but the selects are terribly slow. "
Related
I am trying to tune the InnoDB buffer pool flushing parameters.
The MySQL 5.7 manual says:
innodb_lru_scan_depth * innodb_buffer_pool_instances = amount of work performed by the page cleaner thread each second
My question is : How can I calculate the amount of work performed by the page cleaner thread each second?
Run the SQL command:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed'
Once every second. Compare the value to the previous second.
The difference of that value from one second to the next is the number of dirty pages the page cleaner requested to flush to disk.
Example:
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';
+----------------------------------+-----------+
| Variable_name | Value |
+----------------------------------+-----------+
| Innodb_buffer_pool_pages_flushed | 496786650 |
+----------------------------------+-----------+
...wait a moment...
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';
+----------------------------------+-----------+
| Variable_name | Value |
+----------------------------------+-----------+
| Innodb_buffer_pool_pages_flushed | 496787206 |
+----------------------------------+-----------+
So during the interval I waited, the page cleaner flushed 496787206 - 496786650 = 556 pages.
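If you do not want to eyeball two samples yourself, here is a rough one-pass sketch of the same measurement. It assumes the counter is readable from information_schema.GLOBAL_STATUS (MySQL 5.6, or 5.7 with show_compatibility_56=ON; on newer servers read from performance_schema.global_status instead):
SELECT VARIABLE_VALUE INTO @flushed_before
  FROM information_schema.GLOBAL_STATUS
 WHERE VARIABLE_NAME = 'Innodb_buffer_pool_pages_flushed';

DO SLEEP(10);    -- sample over a 10-second window

SELECT (VARIABLE_VALUE - @flushed_before) / 10 AS pages_flushed_per_second
  FROM information_schema.GLOBAL_STATUS
 WHERE VARIABLE_NAME = 'Innodb_buffer_pool_pages_flushed';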
The upper limit of this work is a complex calculation, involving several InnoDB configuration options. Read my answer to How to solve mysql warning: "InnoDB: page_cleaner: 1000ms intended loop took XXX ms. The settings might not be optimal "? for a description of how it works.
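To see the two configuration values that the manual's formula multiplies together, you can query them with the same SHOW ... WHERE form used earlier (just a convenience; the linked answer covers what actually bounds the work):
SHOW GLOBAL VARIABLES
 WHERE Variable_name IN ('innodb_lru_scan_depth', 'innodb_buffer_pool_instances');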
The following questions will be answered.
How to enable slow query log in MySQL
How to set slow query time
How to read the logs generated by MySQL
Log analysis is becoming more of a chore day by day. Most tech companies have started using the ELK stack or similar tools for log analysis. But what if you don't have hours to spend setting up ELK and just want to spend some time analysing the logs on your own (manually, that is)?
Although it is not the best way, don't underestimate the power of analysing logs from the terminal. We can analyse logs quite efficiently from the terminal, though there are limits to what we can and cannot do. Here is the basic process of analysing a MySQL slow query log.
(In addition to the 'setup' provided by #MontyPython...)
Run
pt-query-digest, or mysqldumpslow -s t
Either will give you the details of the 'worst' queries first, so stop the output after a few dozen lines.
I prefer long_query_time=1. It's in seconds; you can specify less than 1.
Also, in more recent versions, you need log_output = FILE.
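Putting those settings together, a minimal sketch of enabling the slow log at runtime (these should also go into my.cnf so they survive a restart; the file path is just the example used below):
SET GLOBAL slow_query_log      = 'ON';
SET GLOBAL long_query_time     = 1;        -- seconds; fractional values such as 0.5 are allowed
SET GLOBAL log_output          = 'FILE';   -- write to the file named by slow_query_log_file
SET GLOBAL slow_query_log_file = '/var/lib/mysql/server-slow.log';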
show variables like '%slow%';
+---------------------------+-----------------------------------+
| Variable_name | Value |
+---------------------------+-----------------------------------+
| log_slow_admin_statements | OFF |
| log_slow_slave_statements | OFF |
| slow_launch_time | 2 |
| slow_query_log | OFF |
| slow_query_log_file | /var/lib/mysql/server-slow.log |
+---------------------------+-----------------------------------+
And then,
show variables like '%long_query%';
+-----------------+----------+
| Variable_name | Value |
+-----------------+----------+
| long_query_time | 5.000000 |
+-----------------+----------+
Change the long query time to whatever you want. Queries taking more than this will be captured in the slow query log.
set global long_query_time = 2.00;
Now, switch on the slow query log.
set global slow_query_log = 'ON';
flush logs;
Go to the terminal and check the directory where the log file is supposed to be.
cd /var/lib/mysql/
ls -lah | grep slow
-rw-rw---- 1 mysql mysql 4.6M Apr 24 08:32 server-slow.log
To open the file, use one of the following commands:
cat server-slow.log
tac server-slow.log
less server-slow.log
more server-slow.log
tail -f server-slow.log
How many unique slow queries have been logged during a day?
grep 'Time: 160411.*' server-slow.log | cut -c2-18 | uniq -c
I'm using Nagios with the mysql_check_health plugin to monitor my MySQL databases. I need to be able to return a numeric value to my plugin from an SQL query to tell me if the replicated database is up and running.
So here is what I have:
SHOW GLOBAL STATUS like 'slave_running'
will return:
Variable_name Value
Slave_running OFF/ON
I need to return a numeric value from a simple query for the plugin. Does anyone have any ideas? My thought was to return 3 - LENGTH(Slave_running), which would be 1 for ON and 0 for OFF, but I'm having trouble using the return values in that way.
The global status variable will be accessible in the information_schema.GLOBAL_STATUS table, from which you can query just the value. That makes it easy to conditionally convert it to a 0 or 1 based on the ON/OFF state.
For example:
> SELECT VARIABLE_VALUE
FROM information_schema.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'slave_running';
+----------------+
| VARIABLE_VALUE |
+----------------+
| ON |
+----------------+
So to convert that into a zero or one, there are a few possibilities. MySQL treats boolean expressions as 0 or 1, so you can simply compare the value to 'ON'.
Wrapping the above in a subquery (since it returns one value) and comparing to 'ON':
> SELECT (
'ON' = (SELECT VARIABLE_VALUE
FROM information_schema.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'slave_running')
) AS state;
+-------+
| state |
+-------+
| 1 |
+-------+
Or a similar expression formatted as a CASE:
> SELECT CASE WHEN (
SELECT VARIABLE_VALUE
FROM information_schema.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'slave_running') = 'ON' THEN 1
ELSE 0 END AS state;
+-------+
| state |
+-------+
| 1 |
+-------+
In both of the above, I aliased the result as state, but you could use any column alias you like when reading the output; just replace AS state accordingly.
What's already out there?
I couldn't help but wonder if there was already a Nagios plugin built for this purpose, and found this as a possibility.
I would strongly consider using:
SHOW SLAVE STATUS
as your informational query. This gives you a few more key fields to monitor.
From this, you might consider alarming on the following fields:
Slave_IO_Running: Yes / No (tells you if the binary log feed from the master is working)
Slave_SQL_Running: Yes / No (tells you if the slave's SQL execution thread is running)
Seconds_Behind_Master: INT value (set an appropriately low threshold here to alarm on)
The Slave_running global status is OK for determining overall state (that value is only 'On' when both IO thread and SQL thread on slave are running), but may not give you what you want in terms of more granular monitoring. For example, an interruption in the IO thread may be considered a higher severity event than the SQL thread breaking (and may have totally different recovery scenarios). The Seconds_Behind_Master may also be key to monitor, as you might have both IO and SQL threads happily running, while not realizing that the slave can't keep up for some reason.
If you need to convert the slave status values into INT results, you could do something like the following. Note that this is only pseudocode: MySQL does not accept SHOW SLAVE STATUS as a derived table, so the CASE conversion has to be applied in the monitoring script that reads the SHOW output.
SELECT
(CASE WHEN a.Slave_IO_Running = 'Yes' THEN 1 ELSE 0 END)
AS Slave_IO_Running,
(CASE WHEN a.Slave_SQL_Running = 'Yes' THEN 1 ELSE 0 END)
AS Slave_SQL_Running,
a.Seconds_Behind_Master AS Seconds_Behind_Master
FROM (SHOW SLAVE STATUS) AS a
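On MySQL 5.7 and later, the IO and SQL thread states are also exposed as performance_schema tables, so a plugin can fetch them as 0/1 with a plain query. A sketch, assuming a single replication channel (Seconds_Behind_Master still has to come from SHOW SLAVE STATUS):
SELECT (SELECT SERVICE_STATE = 'ON'
          FROM performance_schema.replication_connection_status) AS Slave_IO_Running,
       (SELECT SERVICE_STATE = 'ON'
          FROM performance_schema.replication_applier_status)    AS Slave_SQL_Running;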
I'm using a monitoring system that has been reporting, every few hours, that there were a lot of lowmem prunes:
Thu Dec 5 01:21:52 UTC 2013
7347 query cache lowmem prunes in 600 seconds (12.24/sec)
Thu Dec 5 10:21:52 UTC 2013
10596 query cache lowmem prunes in 600 seconds (17.66/sec)
Thu Dec 5 11:26:52 UTC 2013
8979 query cache lowmem prunes in 600 seconds (14.96/sec)
mysql> SHOW STATUS LIKE 'Qc%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 2250      |
| Qcache_free_memory      | 6938840   |
| Qcache_hits             | 578811080 |
| Qcache_inserts          | 331501709 |
| Qcache_lowmem_prunes    | 124066063 |
| Qcache_not_cached       | 135977294 |
| Qcache_queries_in_cache | 5638      |
| Qcache_total_blocks     | 13625     |
+-------------------------+-----------+
About 6 MB of my 16 MB query cache is not being used:
mysql> SHOW VARIABLES LIKE 'query_cache_size';
+------------------+----------+
| Variable_name | Value |
+------------------+----------+
| query_cache_size | 16777216 |
+------------------+----------+
1 row in set (0.00 sec)
Why are queries being pruned without the cache filling up?
Should I increase or decrease my cache size?
Additional information
mysql> FLUSH STATUS;
30 minutes later
mysql> SHOW STATUS LIKE '%Qcache%';
+-------------------------+---------+
| Variable_name | Value |
+-------------------------+---------+
| Qcache_free_blocks | 1935 |
| Qcache_free_memory | 5154904 |
| Qcache_hits | 43918 |
| Qcache_inserts | 33074 |
| Qcache_lowmem_prunes | 4443 |
| Qcache_not_cached | 10438 |
| Qcache_queries_in_cache | 6276 |
| Qcache_total_blocks | 14713 |
+-------------------------+---------+
8 rows in set (0.00 sec)
The query cache invalidates entries whenever an INSERT, UPDATE, or DELETE statement modifies data in the associated table. It does not wait for the cache to fill up.
http://dev.mysql.com/doc/refman/5.6/en/query-cache-operation.html says:
If a table changes, all cached queries that use the table become invalid and are removed from the cache. This includes queries that use MERGE tables that map to the changed table. A table can be changed by many types of statements, such as INSERT, UPDATE, DELETE, TRUNCATE TABLE, ALTER TABLE, DROP TABLE, or DROP DATABASE.
Re your question:
If using InnoDB and the insertion is at the end of a table, does the query cache expire entries?
Yes, it does. Say, for example, a query cache entry is associated with the query SELECT COUNT(*) FROM mytable. An insert at the end of mytable would make the cached result of this query invalid.
The query cache doesn't have much intelligence with respect to deciding whether a given change to the data affects the cached entry. It assumes that if you change anything in a table, then all queries in the cache associated with that table in any way must be discarded.
It could apply more intelligence to discard some query results only if the cached result would change after your insert. But how would it do that? It would have to run the query again after your insert, comparing the result to the result that is stored in the cache. If they differ, replace the result in the cache.
But it would have to do that with every query result in the cache. Note that your status output shows your query cache has 5638 queries in it. Of course not every one of these is associated with the same table you're inserting into, but we can assume that many of them are.
It would not be a good tradeoff for a single INSERT to cause hundreds or thousands of SELECT statements to be re-executed to refresh their cached results.
So the compromise is that a change to a table purges all cached results associated with that table, even if it was not strictly necessary.
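You can watch this purge happen with the status counters. A small demonstration, assuming the query cache is enabled; the table mytable and its data are hypothetical:
CREATE TABLE mytable (id INT PRIMARY KEY, val VARCHAR(20)) ENGINE=InnoDB;
INSERT INTO mytable VALUES (1, 'a');

SELECT COUNT(*) FROM mytable;                 -- result goes into the query cache
SHOW STATUS LIKE 'Qcache_queries_in_cache';   -- note the count

INSERT INTO mytable VALUES (2, 'b');          -- any write to mytable ...
SHOW STATUS LIKE 'Qcache_queries_in_cache';   -- ... and every cached query on it is purged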
The query cache is therefore not a very precise method for caching queries. It can be helpful for certain workloads, for example if your application tends to repeat a given query many times while the table receives no changes. But we have seen many cases where the workload makes the query cache unhelpful, and in some cases the overhead of maintaining the query cache is actually a detriment to performance.
If you want some cache mechanism that is more precise, you have to code it yourself in your application, saving certain results to memcached or similar fast in-memory cache. Then it becomes your responsibility to track which entries need to be refreshed when data changes.
The mysqltuner.pl script gives me the following recommendation:
query_cache_limit (> 1M, or use smaller result sets)
And MySQL status output shows:
mysql> SHOW STATUS LIKE 'Qcache%';
+-------------------------+------------+
| Variable_name | Value |
+-------------------------+------------+
| Qcache_free_blocks | 12264 |
| Qcache_free_memory | 1001213144 |
| Qcache_hits | 3763384 |
| Qcache_inserts | 54632419 |
| Qcache_lowmem_prunes | 0 |
| Qcache_not_cached | 6656246 |
| Qcache_queries_in_cache | 55280 |
| Qcache_total_blocks | 122848 |
+-------------------------+------------+
8 rows in set (0.00 sec)
From the status output above, how can I judge whether or not the suggested increase in query_cache_limit is needed?
Your best bet is to set up some kind of test harness that executes a realistic load (defined by your scenario) on your database, and then run that test against MySQL with different settings. Tuning is such an art in itself that it is very difficult to give an all-embracing answer without knowing your exact needs.
From http://dev.mysql.com/tech-resources/articles/mysql-query-cache.html:
The Qcache_free_memory counter provides insight into the cache's free memory. Low amounts observed vs. total allocated for the cache may indicate an undersized cache, which can be remedied by altering the global variable query_cache_size.

Qcache_hits and Qcache_inserts show the number of times a query was serviced from the cache and how many queries have been inserted into the cache. Low ratios of hits to inserts indicate little query reuse or a too-low setting of query_cache_limit, which serves to govern the RAM devoted to each individual query cache entry. Large query result sets will require larger settings of this variable.

Another indicator of poor query reuse is an increasing Qcache_lowmem_prunes value. This indicates how often MySQL had to remove queries from the cache to make room for incoming statements. Other reasons for an increasing number of Qcache_lowmem_prunes are an undersized cache, which can't hold the needed amount of SQL statements and result sets, and memory fragmentation in the cache, which may be alleviated by issuing a FLUSH QUERY CACHE statement. You can remove all queries from the cache with the RESET QUERY CACHE command.

The Qcache_not_cached counter provides insight into the number of statements executed against MySQL that were not cacheable, due to either being a non-SELECT statement or being explicitly barred from entry with a SQL_NO_CACHE hint.
Your hits-to-inserts ratio is something like 1:15, or about 7%, so it looks like your settings could do with some fine-tuning (although, as I said, you are the best judge of that, as you know your requirements best).
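If you want to recompute that hits-to-inserts ratio yourself rather than rely on the tuning script, here is a sketch against the status counters (information_schema.GLOBAL_STATUS here; on 5.7+ use performance_schema.global_status):
SELECT MAX(CASE WHEN VARIABLE_NAME = 'Qcache_hits'    THEN VARIABLE_VALUE END)
     / MAX(CASE WHEN VARIABLE_NAME = 'Qcache_inserts' THEN VARIABLE_VALUE END)
       AS hit_to_insert_ratio
  FROM information_schema.GLOBAL_STATUS
 WHERE VARIABLE_NAME IN ('Qcache_hits', 'Qcache_inserts');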