MySQL GROUP BY is slower when using index

I'm running on an AWS m4.large (2 vCPUs, 8 GB RAM) and I'm seeing slightly surprising behaviour from MySQL GROUP BY. I have this test database:
CREATE TABLE demo (
time INT,
word VARCHAR(30),
count INT
);
CREATE INDEX timeword_idx ON demo(time, word);
I insert 4,000,000 records with uniformly random words generated as "t%s" % random.randint(0, 30000) and times as random.randint(0, 86400).
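For reproducibility, a rough equivalent of that load in pure SQL might look like the following sketch (the original used a Python script; the count value of 1 is an assumption, since the question never shows it, and row-by-row inserts of 4M rows will be slow in practice):
DELIMITER //
CREATE PROCEDURE fill_demo(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    -- time in [0, 86400], word in t0..t30000, count fixed at 1 (assumed)
    INSERT INTO demo (time, word, count)
    VALUES (FLOOR(RAND() * 86401), CONCAT('t', FLOOR(RAND() * 30001)), 1);
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;
CALL fill_demo(4000000);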
SELECT word, time, sum(count) FROM demo GROUP BY time, word;
3996922 rows in set (1 min 28.29 sec)
EXPLAIN SELECT word, time, sum(count) FROM demo GROUP BY time, word;
+----+-------------+-------+-------+---------------+--------------+---------+------+---------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+--------------+---------+------+---------+-------+
| 1 | SIMPLE | demo | index | NULL | timeword_idx | 38 | NULL | 4002267 | |
+----+-------------+-------+-------+---------------+--------------+---------+------+---------+-------+
and then I don't use the index:
SELECT word, time, sum(count) FROM demo IGNORE INDEX (timeword_idx) GROUP BY time, word;
3996922 rows in set (34.75 sec)
EXPLAIN SELECT word, time, sum(count) FROM demo IGNORE INDEX (timeword_idx) GROUP BY time, word;
+----+-------------+-------+------+---------------+------+---------+------+---------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+------+---------+---------------------------------+
| 1 | SIMPLE | demo | ALL | NULL | NULL | NULL | NULL | 4002267 | Using temporary; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+---------+---------------------------------+
As you can see, using the index makes the query take about three times as long. I'm not entirely surprised: the index already contains time and word, so the query can avoid reading those columns from the table, but it does not contain count, so every row still requires a lookup in the data pages. Since the index entries point all over the table, that turns a sequential scan into a random access pattern when it comes to retrieving count.
I would just like to confirm that this is the reason, and I wonder whether there is a "compact rule" for when an index will actually make a GROUP BY perform worse.
EDIT:
I followed Gordon Linoff's answer and used:
CREATE INDEX timeword_idx ON demo(time, word, count);
The "covering index" computes the results 10 times faster when compared with the full scan:
SELECT word, time, sum(count) FROM demo GROUP BY time, word;
3996922 rows in set (3.36 sec)
EXPLAIN SELECT word, time, sum(count) FROM demo GROUP BY time, word;
+----+-------------+-------+-------+---------------+--------------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+--------------+---------+------+---------+-------------+
| 1 | SIMPLE | demo | index | NULL | timeword_idx | 43 | NULL | 4002267 | Using index |
+----+-------------+-------+-------+---------------+--------------+---------+------+---------+-------------+
Very impressive!

You have a reasonably sized table, so the problem is likely non-sequential access to the data, or thrashing. Using the index requires going through the index and then looking up the rows in the data pages to get count.
This can actually be worse than just reading all the pages and doing a sort, because the pages are not read in order; sequential reads are considerably better optimized than random reads. In the worst case, the page cache is full and the random reads require flushing pages, in which case a single page might need to be read multiple times. With only 4 million relatively small rows, though, thrashing is unlikely unless you are severely memory constrained.
If this interpretation is correct, then including count in the index should speed up the query:
CREATE INDEX timeword_idx ON demo(time, word, count);

From the manual page How MySQL Uses Indexes:
Indexes are less important for queries on small tables, or big tables where report queries process most or all of the rows. When a query needs to access most of the rows, reading sequentially is faster than working through an index. Sequential reads minimize disk seeks, even if not all the rows are needed for the query.
As for tacking on more columns to create covering indexes (ones where the data pages are never accessed because all needed data is available in the index), be careful: they come at a cost. In your case the index is getting wide anyway, but a careful balance is always needed.
As spencer alludes to, cardinality always plays a role with ranges. For cardinality information, use SHOW INDEX FROM tblName. It is not the driving issue for your query, but it is useful in other settings. To rephrase: cardinality is very high for your table, so the optimizer deems the index a hindrance for this query.
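For reference, the cardinality check and a statistics refresh look like this (a sketch using the demo table from the question):
SHOW INDEX FROM demo;   -- the Cardinality column shows the estimated distinct-value count
ANALYZE TABLE demo;     -- refreshes the statistics if they look stale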

Related

MySQL: EXPLAIN returns more rows than the actual number

I have a table that contains roughly 40M rows by actual count;
select count(*) from xxxs;
returns 38000389
but the explain:
mysql> explain select * from xxxs where s_uuid = "21eaef";
+----+-------------+-------+------------+------+---------------+------+---------+------+----------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+------+---------------+------+---------+------+----------+----------+-------------+
| 1 | SIMPLE | xxxs | NULL | ALL | NULL | NULL | NULL | NULL | 56511776 | 10.00 | Using where |
+----+-------------+-------+------------+------+---------------+------+---------+------+----------+----------+-------------+
1 row in set, 1 warning (0.06 sec)
Why is rows 56M, which is much larger than 40M?
Thanks
UPDATE
1. The above query may take several minutes. Is that normal? How can I tune the performance?
2. I plan to create an index on s_uuid. I expect it will improve the performance. Am I right?
The "rows" in EXPLAIN is an estimate based on statistics that were gathered in the recent past. The value is rarely exact; sometimes it is even off by more than a factor of two.
Still, the estimate is usually "good enough" for the Optimizer to decide how to perform the query.
Another place to see this estimate of row count is via
SHOW TABLE STATUS LIKE 'xxxs';
(As mentioned in a comment) Adding the following index is likely to speed up select * from xxxs where s_uuid = "21eaef":
INDEX(s_uuid)
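As a standalone statement that would be (the name idx_uuid is chosen to match the EXPLAIN note below):
ALTER TABLE xxxs ADD INDEX idx_uuid (s_uuid);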
I say "likely to" because, if a lot of rows have s_uuid = "21eaef", the Optimizer will shun the index and simply scan the entire table rather than bouncing back and forth from the index's BTree and the data's BTree. You can see the "shun" in EXPLAIN by having Possible keys = idx_uuid but key = NULL.
There will be cases where the Optimizer makes the 'wrong' choice. But we can discuss that in another Q&A.

MySQL composite index column order & performance

I have a table with approximately 500,000 rows and I'm testing two composite indexes for it. The first index puts the ORDER BY column last, and the second one has the columns in reverse order.
What I don't understand is why the second index appears to offer better performance, estimating 30 rows to be scanned versus 889 for the first query. I was under the impression the second index could not be used properly, since the ORDER BY column is not last. Would anyone be able to explain why this is the case? (MySQL prefers the first index if both exist.)
Note that the second EXPLAIN lists possible_keys as NULL but still lists a chosen key.
1) First index
ALTER TABLE user ADD INDEX test1_idx (city_id, quality);
(cardinality 12942)
EXPLAIN SELECT * FROM user u WHERE u.city_id = 3205 ORDER BY u.quality DESC LIMIT 30;
+----+-------------+-------+--------+---------------+-----------+---------+----------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+---------------+-----------+---------+----------------+------+-------------+
| 1 | SIMPLE | u | ref | test1_idx | test1_idx | 3 | const | 889 | Using where |
+----+-------------+-------+--------+---------------+-----------+---------+----------------+------+-------------+
2) Second index (same fields in reverse order)
ALTER TABLE user ADD INDEX test2_idx (quality, city_id);
(cardinality 7549)
EXPLAIN SELECT * FROM user u WHERE u.city_id = 3205 ORDER BY u.quality DESC LIMIT 30;
+----+-------------+-------+--------+---------------+-----------+---------+----------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+---------------+-----------+---------+----------------+------+-------------+
| 1 | SIMPLE | u | index | NULL | test2_idx | 5 | NULL | 30 | Using where |
+----+-------------+-------+--------+---------------+-----------+---------+----------------+------+-------------+
UPDATE:
The second query does not perform well in a real-life scenario, whereas the first one does, as expected. I would still be curious why MySQL's EXPLAIN provides such contradictory information.
The rows value in EXPLAIN is just an estimate of the number of rows MySQL believes it must examine to produce the result.
I remember reading an article by Peter Zaitsev of Percona saying that this number can be very inaccurate, so you cannot simply compare query efficiency based on it.
I agree with you that the first index will produce the better result in normal scenarios.
You should also have noticed that the type column in the first EXPLAIN is ref, while it is index for the second. ref is usually better than a full index scan. As you mentioned, if both keys exist, MySQL prefers the first one.
I guess your data types are:
city_id: MEDIUMINT, 3 bytes
quality: SMALLINT, 2 bytes
(These match the key_len values in the two EXPLAINs: 3 for city_id alone, 5 for quality plus city_id.)
For
SELECT * FROM user u WHERE u.city_id = 3205 ORDER BY u.quality DESC LIMIT 30;
the second index (quality, city_id) cannot be fully used, because the ORDER BY amounts to a range scan, and a range can only use the last part of an index. The first index looks like a perfect fit.
I guess that sometimes MySQL is not so smart; perhaps the number of rows matching city_id affects which index it decides to use. You may try the keyword
FORCE INDEX (test1_idx)
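Applied to the query above, the hint would read as follows (a sketch using the index name from the question; the hint goes right after the table alias):
SELECT * FROM user u FORCE INDEX (test1_idx)
WHERE u.city_id = 3205 ORDER BY u.quality DESC LIMIT 30;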

How to speed up mysql select in database with highly redundant key values

I have a very simple MySQL database with only 3 columns but several million rows.
Two of the columns (hid1, hid2) describe study objects (about 50,000 of them) and the third column (score) is the result of a comparison of hid1 with hid2. Thus, the number of rows is max(hid1)*max(hid2), which is quite a big number. Because the table has to be written only once and read many millions of times, I selected a MyISAM table (I hope this was a good idea). Initially, the plan was to retrieve score for a given pair of hid1, hid2, but it turned out to be more convenient to retrieve all scores (and hid2) for a given hid1.
My table ("result") looks like this:
+-------+-----------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-----------------------+------+-----+---------+-------+
| hid1 | mediumint(8) unsigned | YES | MUL | NULL | |
| hid2 | mediumint(8) unsigned | YES | | NULL | |
| score | float | YES | | NULL | |
+-------+-----------------------+------+-----+---------+-------+
and a typical query would be
select hid1,hid2,score from result where hid1=13531 into outfile "/tmp/ttt"
Here is the problem: the query just takes too long, at least sometimes. For some hid1 values, I get the result back in under a second. For other hid1 values (particularly big numbers), I have to wait for up to 40 seconds. As I said, I have to run thousands of these queries, so I am interested in speeding things up.
Let me reiterate: there are about 50,000 hits per query, and I don't need them in any particular order. Am I doing something wrong here, or is a relational database like MySQL not up to this task?
What I have already tried is increasing key_buffer in /etc/mysql/my.cnf; this appeared to help, but not much. The index on hid1 is a few GB; does the key_buffer have to be bigger than the index size to be effective?
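(For reference, both sizes can be checked from within MySQL; a sketch, where Index_length in SHOW TABLE STATUS is the total index size in bytes:)
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW TABLE STATUS LIKE 'result';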
Any hint would be appreciated.
Edit: here is an example run with the corresponding 'explain' output:
select hid1,hid2,score from result where hid1=132885 into outfile "/tmp/ttt"
Query OK, 16465 rows affected (31.88 sec)
As you can see below, the index hid1_idx is actually being used:
mysql> explain select hid1,hid2,score from result where hid1=132885 into outfile "/tmp/ttt";
+----+-------------+--------+------+---------------+------------+---------+-------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+------+---------------+------------+---------+-------+-------+-------------+
| 1 | SIMPLE | result | ref | hid1_index | hid1_index | 4 | const | 15456 | Using where |
+----+-------------+--------+------+---------------+------------+---------+-------+-------+-------------+
1 row in set (0.00 sec)
What I do find puzzling is that queries with low hid1 values are always much faster than those with high values. This is not what I would expect from using an index.
Two suggestions, based on a query pattern that always involves an equality filter on hid1:
Use an InnoDB table instead and take advantage of a clustered index on (hid1, hid2); that way all rows belonging to the same hid1 are physically located together, which speeds up retrieval (see the sketch after these suggestions).
Hash-partition the table on hid1, with a suitable number of partitions.
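A minimal sketch of the InnoDB suggestion, assuming the (hid1, hid2) pair is unique (column types taken from the table definition above):
CREATE TABLE result_innodb (
  hid1  MEDIUMINT UNSIGNED NOT NULL,
  hid2  MEDIUMINT UNSIGNED NOT NULL,
  score FLOAT,
  PRIMARY KEY (hid1, hid2)   -- clustered: rows with the same hid1 are stored together
) ENGINE=InnoDB;
INSERT INTO result_innodb SELECT hid1, hid2, score FROM result;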
The simplest way to optimize a query like that is to use an index. A simple statement like
alter table result add index(hid1)
would improve the query you sent. Even better, if you want to search by both fields at once, you can put both fields in the index:
alter table result add index(hid1, hid2)
That way, MySQL can access the results in a very organized way and find the information you want.
If you run an EXPLAIN on the query before adding the index, you might see something like
| select_type | table  | type | possible_keys | rows    | Extra       |
| SIMPLE      | result | ALL  |               | 7765605 | Using where |
After adding the index, you should see
| select_type | table  | type | possible_keys | rows    | Extra |
| SIMPLE      | result | ref  | hid1          | 2816304 |       |
which tells you that in the first case MySQL needs to check ALL the rows, while in the second it can find the information using a ref lookup.
If you know the combination of hid1 and hid2 is unique, you should consider making it your primary key. That will automatically also index lookups on hid1. See: http://dev.mysql.com/doc/refman/5.5/en/multiple-column-indexes.html
Also, check the output of EXPLAIN. See: http://dev.mysql.com/doc/refman/5.5/en/select-optimization.html and related links.

MySQL has indexed tables and EXPLAIN looks good, but still not using index

I am trying to optimize a query, and all looks well when I EXPLAIN it, but it still comes up in log_queries_not_using_indexes.
Here is the query:
SELECT t1.id_id,t1.change_id,t1.like_id,t1.dislike_id,t1.user_id,t1.from_id,t1.date_id,t1.type_id,t1.photo_id,t1.mobile_id,t1.mobiletype_id,t1.linked_id
FROM recent AS t1
LEFT JOIN users AS t2 ON t1.user_id = t2.id_id
WHERE t2.active_id=1 AND t1.postedacommenton_id='0' AND t1.type_id!='Friends'
ORDER BY t1.id_id DESC LIMIT 35;
So it grabs "wallpost" data, and then I join in the users table to make sure the user is still active (active_id = 1), plus two other small AND conditions.
When I run this with EXPLAIN in phpMyAdmin it shows:
id | select_type | table | type   | possible_keys     | key     | key_len | ref                       | rows | Extra
1  | SIMPLE      | t1    | index  | user_id           | PRIMARY | 4       | NULL                      | 35   | Using where
1  | SIMPLE      | t2    | eq_ref | PRIMARY,active_id | PRIMARY | 4       | hnet_user_info.t1.user_id | 1    | Using where
It shows the t1 pass examining 35 rows using WHERE, and the t2 pass finding 1 row (the user), also using WHERE.
So I can't figure out why it shows up in the log_queries_not_using_indexes report.
Any tips? I can post more info if you need it.
tl;dr: ignore the "not using index" warning. A query execution time of 1.3 milliseconds is not a problem; there is nothing to optimize here. Look at the entire performance profile to find real bottlenecks.
Trust the database engine. The query planner will use indices when it determines that doing so is beneficial. In this case, due to the low cardinality estimates (35x1), the planner decided there was no reason to use indices for the actual execution plan; using them in a trivial case like this could actually increase the execution time.
As always, apply the 97/3 rule: spend your optimization effort on the roughly 3% of the code where it actually matters.
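If the log noise itself is the concern, the logging option can simply be turned off (a sketch; the variable is named log_queries_not_using_indexes in MySQL):
SET GLOBAL log_queries_not_using_indexes = OFF;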

How can I optimize a query with multiple joins (I already have indexes)?

SELECT citing.article_id as citing, lac_a.year, r.id_when_cited, cited_issue.country, citing.num_citations
FROM isi_lac_authored_articles as lac_a
JOIN isi_articles citing ON (lac_a.article_id = citing.article_id)
JOIN isi_citation_references r ON (citing.article_id = r.article_id)
JOIN isi_articles cited ON (cited.id_when_cited = r.id_when_cited)
JOIN isi_issues cited_issue ON (cited.issue_id = cited_issue.issue_id);
I have indexes on all the fields being JOINED on.
Is there anything I can do? My tables are large (some have 1 million records, the references table has 500 million, and the articles table 25 million).
This is what EXPLAIN has to say:
+----+-------------+-------------+--------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+--------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
| 1 | SIMPLE | cited_issue | ALL | NULL | NULL | NULL | NULL | 1156856 | |
| 1 | SIMPLE | cited | ref | isi_articles_id_when_cited,isi_articles_issue_id | isi_articles_issue_id | 49 | func | 19 | Using where |
| 1 | SIMPLE | r | ref | isi_citation_references_article_id,isi_citation_references_id_when_cited | isi_citation_references_id_when_cited | 17 | mimir_dev.cited.id_when_cited | 4 | Using where |
| 1 | SIMPLE | lac_a | eq_ref | PRIMARY | PRIMARY | 16 | mimir_dev.r.article_id | 1 | |
| 1 | SIMPLE | citing | eq_ref | PRIMARY | PRIMARY | 16 | mimir_dev.r.article_id | 1 | |
+----+-------------+-------------+--------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
5 rows in set (0.07 sec)
If you really need all the returned data, I would suggest two things:
You probably know the data better than MySQL, and you can try to take advantage of that when MySQL's assumptions are wrong. Currently, MySQL thinks it is easiest to full-scan the whole isi_issues table at the start; if the result really includes all issues, that assumption is correct. But if many issues should not be in the result, you may want to force a join order you consider better (see the STRAIGHT_JOIN sketch below). You are the one who knows which table applies the strongest restrictions and which is the smallest to scan fully; something will be fully scanned in any case, since there is no WHERE clause.
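One way to force a join order is the STRAIGHT_JOIN modifier, which joins tables in exactly the order they are listed; here is a sketch on the original query (rearrange the FROM clause into whatever order you consider best):
SELECT STRAIGHT_JOIN citing.article_id AS citing, lac_a.year, r.id_when_cited,
       cited_issue.country, citing.num_citations
FROM isi_lac_authored_articles AS lac_a
JOIN isi_articles citing ON (lac_a.article_id = citing.article_id)
JOIN isi_citation_references r ON (citing.article_id = r.article_id)
JOIN isi_articles cited ON (cited.id_when_cited = r.id_when_cited)
JOIN isi_issues cited_issue ON (cited.issue_id = cited_issue.issue_id);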
You can profit from covering indexes (indexes that contain enough data in themselves, so the row data never needs to be touched). For example, an index (article_id, num_citations) on isi_articles, (article_id, year) on isi_lac_authored_articles, and even (country) on isi_issues will significantly speed up this query as long as the indexes fit in memory. On the other hand, they make your indexes larger and slightly slow down inserts.
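A sketch of the suggested covering indexes (the index names here are made up for illustration):
ALTER TABLE isi_articles ADD INDEX art_cover_idx (article_id, num_citations);
ALTER TABLE isi_lac_authored_articles ADD INDEX lac_cover_idx (article_id, year);
ALTER TABLE isi_issues ADD INDEX issue_country_idx (country);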
I think that's about the best you can do; at least it's not using nested/multiple queries. You should benchmark the SQL a little, and at minimum limit your result set as much as possible. 15-30 rows per returned page is fine (this depends on the app, but 15-30 is my tolerance range).
MySQL clients (phpMyAdmin, the console, a GUI, whatever) report some sort of "execution time", the time it took to process the query. Compare that with a benchmark of the query run from your server-side code, and then with the query run from the server-side code with the output rendered through your app interface.
That way you can see where your bottleneck is, and that is where you optimize.
Unless the result of your query is input to some other query or system, it is useless to return that many (3M) rows. It would be cleverer to return just an acceptable number of rows per query (say 1,000) for visualization.
Looking at your SQL, the lack of a WHERE clause means it is pulling all rows from:
JOIN isi_issues cited_issue ON (cited.issue_id = cited_issue.issue_id)
You could look at partitioning the large isi_issues table (see the sketch at the end); this would allow MySQL to perform a bit quicker, since smaller files are easier to handle.
Alternatively, you can loop the statement and use a LIMIT clause. Note that LIMIT takes an offset and a row count, so consecutive batches look like
LIMIT 0, 100000
then
LIMIT 100000, 100000
This lets each statement run quicker, and you can deal with the data in batches.
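A hash-partitioning sketch for the suggestion above (the partition count of 16 is arbitrary, and KEY partitioning requires the partition column to be part of every unique key of the table):
ALTER TABLE isi_issues PARTITION BY KEY (issue_id) PARTITIONS 16;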