Does MySQL record how often it uses indices?

I've got a table (InnoDB) with a fair number of indices. It would be great to know if one (or more) of these was never actually used. I don't care as much about the disk space, but insertion speed is sometimes an issue.
Does MySQL record any statistics on how often it has used each index when running queries?

There is a way to find unused indexes with a small patch to MySQL.

I'd suggest turning on profiling and doing some bottleneck analysis.
set profiling=1;
After that, you can let some of your heavy queries run for a while. Eventually, turn it off and then examine the queries that ran to see which were heaviest in execution time.
Some other commands to note are:
show profiles;
select sum(duration) from information_schema.profiling where query_id=<ID you want to look at>;
show profile for query <ID you want to look at>;
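For example, a minimal profiling session might look like this (the table and query are placeholders; the Query_ID comes from the SHOW PROFILES output):
set profiling = 1;
SELECT * FROM orders WHERE customer_id = 42;  -- the query you want to inspect (placeholder)
show profiles;                                -- note the Query_ID of that statement
show profile for query 1;                     -- per-stage timings for Query_ID 1
set profiling = 0;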
Finally, if you wish to see unused indexes and you have userstats enabled:
SELECT DISTINCT s.TABLE_SCHEMA, s.TABLE_NAME, s.INDEX_NAME
FROM information_schema.statistics `s` LEFT JOIN information_schema.index_statistics INDXS
ON (s.TABLE_SCHEMA = INDXS.TABLE_SCHEMA AND
s.TABLE_NAME=INDXS.TABLE_NAME AND
s.INDEX_NAME=INDXS.INDEX_NAME)
WHERE INDXS.TABLE_SCHEMA IS NULL;
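Note that information_schema.index_statistics only exists when the userstats feature is available (e.g. Percona Server or MariaDB, not stock MySQL). On such builds, collection is enabled with something like the following sketch ('your_db' is a placeholder):
SET GLOBAL userstat = 1;  -- Percona Server / MariaDB only; gathers table and index usage statistics
-- after some representative traffic, indexes that never show up here are candidates for removal:
SELECT * FROM information_schema.index_statistics WHERE table_schema = 'your_db';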

AFAIK, it only records aggregate counters of index activity, not per-index usage.
SHOW STATUS includes the Select_scan counter, among others.
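For example, the relevant server-wide counters can be read like this (they are aggregate counts, not per-index statistics):
SHOW GLOBAL STATUS LIKE 'Select_scan';       -- joins that did a full scan of the first table
SHOW GLOBAL STATUS LIKE 'Select_full_join';  -- joins that did not use indexes at all
SHOW GLOBAL STATUS LIKE 'Handler_read%';     -- low-level row/index read counters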

The plans for queries don't vary much - turn on the slow query log with a threshold of 0 seconds, then write a bit of code in the language of your choice to parse the log files and strip all the literals out of the queries, then run EXPLAIN on any query executed more than 'N' times.
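A minimal sketch of that logging setup, assuming you can change global settings (the file path is only an example):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;  -- threshold of 0 seconds: every query gets logged
-- SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- optional; example path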
C.

How to check performance of mysql query?

I have been learning about query optimization and improving query performance, but in general, if we write a query, how can we know whether it is a good one?
I know we can look at the execution time, but that will not give a clear indication without a good amount of data, and when we create a new query we usually don't have much data to test against.
I have learned about the performance of various clauses and commands. But is there anything by which we can check the performance of a query? Performance here does not mean execution time; it means whether a query is "ok" or not, independent of the data.
We cannot create as much data as there would be in the live database.
General performance of a query can be checked using the EXPLAIN command in MySQL. See https://dev.mysql.com/doc/refman/5.7/en/using-explain.html
It shows you how the MySQL engine plans to execute the query and allows you to do some basic sanity checks, e.g. whether the engine will use keys and indexes to execute the query, how MySQL will execute the joins (i.e. whether foreign keys are missing), and more.
You can find some general tips about how to use EXPLAIN for optimizing queries here (along with some nice samples): http://www.sitepoint.com/using-explain-to-write-better-mysql-queries/
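As a quick sketch (the table and column names here are made up for illustration):
EXPLAIN SELECT o.id, c.name
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
WHERE o.created_at >= '2015-01-01';
-- In the output, check "type" (ALL means a full table scan), "key" (which index was chosen)
-- and "rows" (the estimated number of rows examined).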
As mentioned above, whether a query is right is always data-dependent. Up to a point, you can use the methods below to check its performance:
You can use EXPLAIN to understand the query execution plan, and that may help you correct a few things. For more info,
refer to the documentation Optimizing Queries with EXPLAIN.
You can use the Query Analyzer. Refer to MySQL Query Analyzer.
I like to throw my cookbook at Newbies because they often do not understand how important INDEXes are, or don't know some of the subtleties.
When experimenting with multiple choices of query/schema, I like to use
FLUSH STATUS;
SELECT ...;
SHOW SESSION STATUS LIKE 'Handler%';
That counts low-level actions, such as "read next record". It essentially eliminates caching issues, disk speed, etc., and is very reproducible. Often there is a counter in that output (or multiple counters) that matches the number of rows in the table (sometimes +/-1) -- that tells me there are table scan(s). This is usually not as good as if some INDEX were being used. If the query has a LIMIT, that value may show up in some Handler.
A really bad query, such as a CROSS JOIN, would show a value of N*M, where N and M are the row counts for the two tables.
I used the Handler technique to 'prove' that virtually all published "get me a random row" techniques require a table scan. Then I could experiment with small tables and Handlers to come up with a list of faster random routines.
Another tip when timing... Turn off the Query_cache (or use SELECT SQL_NO_CACHE).
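For example (the query cache only exists up to MySQL 5.7; it was removed in 8.0, and the table name below is a placeholder):
SELECT SQL_NO_CACHE COUNT(*) FROM my_table;  -- bypass the query cache for this one statement
SET SESSION query_cache_type = OFF;          -- or turn it off for the whole session (5.7 and earlier)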

SQL - Query Performance

How do I find the inefficient queries in a MySQL database? I want to do performance tuning on my queries, but I couldn't find where my queries are located. Please suggest where I can find the MySQL queries for my tables.
Thanks
Prabhakaran.R
You can enable the general log and slow query logs.
Enabling the general query log will log all queries and might be heavy if you have many reads/writes. In the slow query log, you can set a threshold, and only queries taking longer than that will be logged. After that, you can analyze the log manually or use the available tools (Percona has great tools).
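A sketch of turning those on at runtime (the threshold is only an example, and the general log in particular should be left on only briefly):
SET GLOBAL general_log = 'ON';   -- logs every statement; heavy on a busy server
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- only statements taking longer than 1 second are logged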
Have you analyzed your queries with explain plans? You may be able to find a query that returns the result set you want without being as heavy a load on the query engine. Remember to select only the columns you actually need (try to avoid SELECT *), plan your indexing well, and use inner/outer joins in favour of a huge list of WHERE clause filters.
Good luck.
RP
In addition to what the others said, use pt-query-digest (from percona.com) to summarize the slow log. That will tell you the worst queries.
Performance tuning often involves knowing what indexes to put on tables. My index cookbook is one place to learn how to build an INDEX for a given SELECT.

Determining which columns to index in MySQL in CakePHP

I have a web app, which has quite a few queries being fired from every page. As more data was added to the DB, we noticed that the pages were taking longer and longer to load.
On examining PhpMyAdmin -> Status -> Joins, we noticed this (with the number in red):
Select_full_join: 348.6 k (the number of joins that do not use indexes; if this value is not 0, you should carefully check the indexes of your tables)
How do I determine which joins are causing the problems? Are all the joins equally to be blamed?
How do I determine which columns should be indexed, for the performance to be proper?
We are using CakePHP + MySQL, and the queries are all auto-generated.
The rule of thumb that I have always used is that if I am using a join, the fields that I am joining on need to be indexed.
For instance, if you have a query like the following:
SELECT t1.name, t2.salary
FROM employee AS t1
INNER JOIN info AS t2 ON t1.name = t2.name;
Both t1.name and t2.name should be indexed.
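A sketch of adding those indexes (the index names are arbitrary; skip any column already covered by a primary key or existing index):
ALTER TABLE employee ADD INDEX idx_employee_name (name);
ALTER TABLE info ADD INDEX idx_info_name (name);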
Below are some good reads for this as well:
Optimizing MySQL: Importance of JOIN Order
How to optimize MySQL JOIN queries through indexing
And in general, this guy's site has some good info as well.
MySQL Optimizer Team
Edit: This is always helpful.
And if you have access to your server settings, check out:
MySQL Slow Server Logs
Once you have a log of slow queries, you can use explain on them to see what needs indexing.
If you don't know which queries are running inefficiently, you have a couple of choices.
You could try this:
Try issuing the command SHOW FULL PROCESSLIST from phpmyadmin while your web site is active. It will show you, hopefully, a bunch of slow running queries. The FULL processlist should give you the entire query. You could then use the EXPLAIN command to figure out what it's doing.
You should also try this:
Think through the work your application is doing on behalf of your users. Think through which of your queries have to romp through lots of data to deliver value to the users. Think through which tables are growing as your application gets used more and more.
Then, find your queries that deliver that value, and that access your growing tables. Again, use the EXPLAIN command to see how MySQL is processing them, and add indexes as needed.
I suspect it will be very obvious which indexes you should add. Add the obvious ones, then let your system stabilize for a couple of workdays, then remeasure.
Notice that this is a normal part of bringing a new application into production.

How to find slower MySQL queries from many small queries

I'm wondering if anyone has a suggestion for my situation:
I have a process that runs many tens of thousands of queries. The whole process takes between 5 and 10 minutes. I want to know which queries are running slower than the rest, but I know that none of them are running for more than say, 5 seconds (with this many queries, that would be very noticeable in my logs). How should I find out which ones are taking the most time, and are the ones that, if optimized, would provide the best results?
MORE DETAILS:
My queries run single-threaded and synchronous, and I'd say 70% SELECT and 30% INSERT/UPDATE. I'd have to get some heads together and determine if the work can be split up into different units that can be run simultaneously - I'm not sure...
All the queries are either simple INSERT statements, single-property UPDATE statements, or SELECT statements on either a primary or foreign key or a two-field ANDed restriction.
DESCRIPTION OF THE ISSUE:
What I'm doing is basically copying a complex directed graph structure, in its entirety. Nodes are database entries, and adjacencies represent essentially foreign keys, but not strictly-speaking (they could be a two-field combination, where the first says what table the second is the id for).
Take a look at MySQL's slow query log. You can configure the threshold of what is regarded as "slow".
Basically, you track time from query start to query end, e.g.:
function getmicrotime() {
    // current timestamp as a float, with microsecond precision
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}

$query_start = getmicrotime();
$query = 'SELECT ...';
mysql_query($query, $connect);   // legacy mysql_* API; prefer mysqli/PDO in new code
$query_end = getmicrotime();

if ($query_end - $query_start > 2) {
    // query took more than 2 seconds -- add it to the log
    your_slow_queries_logger($query);
}
I'd suggest making some kind of wrapper for the mysql_query function that would take care of logging slow queries.
I, personally, log my slow queries using syslog
If you're serious about optimizing MySQL, have a read of this book:
High Performance MySQL: Optimization, Backups, Replication, and More
Some of the contents may be outdated for the newest MySQL versions, but it should be sufficient as it covers MySQL 5.1. It also has an extensive chapter on benchmarking, with techniques and methods to guide you in creating a benchmark plan that suits your needs.
It also has chapters on Index and Query optimizations which would be very helpful to you if you need to optimize the slow queries identified with the benchmarks.

MySQL Inconsistent Performance

I'm running a MySQL query that joins various tables of 500,000+ rows. Sometimes it takes a second, other times around 15 seconds! This is on my local machine. I have experienced similarly varied times before on other intensive queries; does anyone know why this is?
Thanks
Thanks for the replies - I am using appropriate indexes, inner and left joins, and have a WHERE clause range of one week out of a possible two-year period of invoices. If I keep varying it (so presumably the query results are not cached) and re-running, the time varies a lot, even if the number of rows retrieved is similar. The server is not busy. A few scheduled queries run every minute, but they are not intensive and take around 200ms.
The explain plan shows that a table of around 2000 rows is always fully scanned. So maybe these rows are sometimes cached, or maybe the indexes are cached - I didn't know indexes could be cached. I will try again with caching turned off.
Editing again - the query cache is in fact off. I'm using InnoDB, so it looks like increasing innodb_buffer_pool_size is the way to go.
Same query each time?
It's hard to tell, based on what you've posted. If we assume that the schema and data aren't changing, I'd guess that there's something else running on your machine when the queries are long that would explain the difference. It could be that the state of memory is different, so paging is going on; an anti-virus program is running; some other service has started. It's impossible to answer.
Try running OPTIMIZE TABLE.
That should help refresh some of the statistics used by the query planner.
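For example (the table name is a placeholder taken from the question; ANALYZE TABLE alone is often enough to refresh the index statistics):
ANALYZE TABLE invoices;   -- refreshes index statistics used by the query planner
OPTIMIZE TABLE invoices;  -- rebuilds the table; heavier, and can lock large tables for a while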
You have not given us much information; if you're using MyISAM tables, it may be a matter of locks.
Are you using ANSI INNER JOINs? A little basic, but don't use "cross joins" - those are the joins with the comma, like:
SELECT * FROM t1, t2 WHERE t1.id_t1=t2.id_t1
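The same query written as an explicit ANSI join:
SELECT * FROM t1 INNER JOIN t2 ON t1.id_t1 = t2.id_t1;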
Last things you may want to try: increase your InnoDB buffers (innodb_buffer_pool_size), your key buffer (key_buffer_size, for MyISAM), and maybe the query cache buffers.
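A sketch of checking and raising those buffers (the sizes are only examples; innodb_buffer_pool_size is resizable online only in MySQL 5.7.5+, otherwise set it in my.cnf and restart):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 2147483648;  -- 2 GB, example value (InnoDB data and index cache)
SET GLOBAL key_buffer_size = 268435456;           -- 256 MB, example value (MyISAM index cache)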
Here are some common reasons (beyond your server simply being too busy):
The slow query is hitting the hard drive. In the fast case, the indexes and data are already cached in MySQL or the OS file cache.
Retrieving the data gets blocked by updates/inserts; for MyISAM tables, the whole table gets locked whenever someone inserts or updates data in it, in some cases.
Table statistics are out of date and/or the wrong index gets selected. Running ANALYZE or OPTIMIZE on the table can help.
You have the query cache enabled. Fetching the result of a cached query is fast; producing it when it's not in the cache might be slow. Try turning off the query cache to check whether the query is always slow when it's not fetched from the cache.
In any case, you should show the output of EXPLAIN on your queries to verify that indexes are being used properly - even if they're not, queries can be fast if everything is in RAM but grind to a halt if they need to hit the hard drive.