Why can't MySQL optimize this query? - mysql

I have a query that is giving me problems and I can't understand why MySQL's query optimizer is behaving the way it is. Here is the background info:
I have 3 tables. Two are relatively small and one is large.
Table 1 (very small, 727 rows):
CREATE TABLE ipa (
ipa_id int(11) NOT NULL AUTO_INCREMENT,
ipa_code int(11) DEFAULT NULL,
ipa_name varchar(100) DEFAULT NULL,
payorcode varchar(2) DEFAULT NULL,
compid int(11) DEFAULT '2',
PRIMARY KEY (ipa_id),
KEY ipa_code (ipa_code) )
ENGINE=MyISAM
Table 2 (smallish, 59455 rows):
CREATE TABLE assign_ipa (
assignid int(11) NOT NULL AUTO_INCREMENT,
ipa_id int(11) NOT NULL,
userid int(11) NOT NULL,
username varchar(20) DEFAULT NULL,
compid int(11) DEFAULT NULL,
PayorCode char(10) DEFAULT NULL,
PRIMARY KEY (assignid),
UNIQUE KEY assignid (assignid,ipa_id),
KEY ipa_id (ipa_id)
) ENGINE=MyISAM
Table 3 (large, 24,711,730 rows):
CREATE TABLE master_final (
IPA int(11) DEFAULT NULL,
MbrCt smallint(6) DEFAULT '0',
PayorCode varchar(4) DEFAULT 'WC',
KEY idx_IPA (IPA)
) ENGINE=MyISAM
Now for the query. I'm doing a 3-way join, using the two smaller tables to essentially subset the big table on one of its indexed values. Basically, I get a list of IDs for a user, SJones, and query the big table for those IDs.
mysql> explain
SELECT master_final.PayorCode, sum(master_final.Mbrct) AS MbrCt
FROM master_final
INNER JOIN ipa ON ipa.ipa_code = master_final.IPA
INNER JOIN assign_ipa ON ipa.ipa_id = assign_ipa.ipa_id
WHERE assign_ipa.username = 'SJones'
GROUP BY master_final.PayorCode, master_final.ipa\G;
************* 1. row *************
id: 1
select_type: SIMPLE
table: master_final
type: ALL
possible_keys: idx_IPA
key: NULL
key_len: NULL
ref: NULL
rows: 24711730
Extra: Using temporary; Using filesort
************* 2. row *************
id: 1
select_type: SIMPLE
table: ipa
type: ref
possible_keys: PRIMARY,ipa_code
key: ipa_code
key_len: 5
ref: wc_test.master_final.IPA
rows: 1
Extra: Using where
************* 3. row *************
id: 1
select_type: SIMPLE
table: assign_ipa
type: ref
possible_keys: ipa_id
key: ipa_id
key_len: 4
ref: wc_test.ipa.ipa_id
rows: 37
Extra: Using where
3 rows in set (0.00 sec)
This query takes forever (like 30 minutes!). The EXPLAIN output tells me why: it's doing a full table scan on the big table even though there is a perfectly good index, which it's not using. I don't understand this. I can look at the query and see that it only needs to fetch a couple of IDs from the big table. If I can do it, why can't MySQL's optimizer do it?
To illustrate, here are the IDs associated with 'SJones':
mysql> select username, ipa_id from assign_ipa where username='SJones';
+----------+--------+
| username | ipa_id |
+----------+--------+
| SJones | 688 |
| SJones | 689 |
+----------+--------+
2 rows in set (0.02 sec)
Now, I can rewrite the query substituting the ipa_id values for the username in the where clause. To me this is equivalent to the original query. MySQL sees it differently. If I do this, the optimizer makes use of the index on the big table.
mysql> explain
SELECT master_final.PayorCode, sum(master_final.Mbrct) AS MbrCt
FROM master_final
INNER JOIN ipa ON ipa.ipa_code = master_final.IPA
INNER JOIN assign_ipa ON ipa.ipa_id = assign_ipa.ipa_id
WHERE assign_ipa.ipa_id in ('688','689')
GROUP BY master_final.PayorCode, master_final.ipa\G;
************* 1. row *************
id: 1
select_type: SIMPLE
table: ipa
type: range
possible_keys: PRIMARY,ipa_code
key: PRIMARY
key_len: 4
ref: NULL
rows: 2
Extra: Using where; Using temporary; Using filesort
************* 2. row *************
id: 1
select_type: SIMPLE
table: assign_ipa
type: ref
possible_keys: ipa_id
key: ipa_id
key_len: 4
ref: wc_test.ipa.ipa_id
rows: 37
Extra: Using where
************* 3. row *************
id: 1
select_type: SIMPLE
table: master_final
type: ref
possible_keys: idx_IPA
key: idx_IPA
key_len: 5
ref: wc_test.ipa.ipa_code
rows: 34953
Extra: Using where
3 rows in set (0.00 sec)
The only thing I've changed is a where clause that doesn't even directly touch the big table. And yet the optimizer uses the index 'idx_IPA' on the big table and the full table scan is gone. Rewritten like this, the query is very fast.
OK, that's a lot of background. Now my question: why should the where clause matter to the optimizer? Either where clause will return the same result set from the smaller table, and yet I'm getting dramatically different results depending on which one I use. Obviously, I want to use the where clause containing the username rather than passing all the associated IDs to the query. As written, though, that doesn't seem to be possible.
Can someone explain why this is happening?
How might I rewrite my query to avoid the full table scan?
Thanks for sticking with me. I know it's a very long question.

Not quite sure if I'm right, but I think the following is happening here. This:
WHERE assign_ipa.username = 'SJones'
may create a temporary table, since it requires a full table scan. Temporary tables have no indexes, and they tend to slow things down a lot.
The second case
INNER JOIN ipa ON ipa.ipa_code = master_final.IPA
INNER JOIN assign_ipa ON ipa.ipa_id = assign_ipa.ipa_id
WHERE assign_ipa.ipa_id in ('688','689')
on the other hand allows for joining of indexes, which is fast. Additionally, it can be transformed to
SELECT .... FROM master_final WHERE IPA IN (688, 689) ...
and I think MySQL is doing that, too.
Creating an index on assign_ipa.username may help.
Edit
I rethought the problem and now have a different explanation.
The reason, of course, is the missing index. This means that MySQL has no clue how large the result of filtering assign_ipa by username would be (MySQL does not store such counts), so it starts with the joins first, where it can rely on keys.
That's what rows 2 and 3 of the EXPLAIN output tell us.
And after that, it tries to filter the result by assign_ipa.username, which has no key, as stated in row 1.
As soon as there is an index, it filters assign_ipa first and joins afterwards, using the corresponding indexes.
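A concrete form of that suggestion might look like this (the index name is just an example):
-- Index on the filter column so the optimizer can estimate the result size
-- and drive the join from assign_ipa.
ALTER TABLE assign_ipa ADD INDEX idx_username (username);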

This is probably not a direct answer to your question, but here are few things that you can do:
Run ANALYZE TABLE ... it will update the table statistics, which have a great impact on what the optimizer decides to do.
If you still think the joins are not in the order you wish them to be (which happens in your case, and is why the optimizer is not using indexes as you expect), you can use STRAIGHT_JOIN ... from the manual: "STRAIGHT_JOIN forces the optimizer to join the tables in the order in which they are listed in the FROM clause. You can use this to speed up a query if the optimizer joins the tables in nonoptimal order"
For me, putting the "where part" right into the join sometimes makes a difference and speeds things up. For example, you can write:
...t1 INNER JOIN t2 ON t1.k1 = t2.k2 AND t2.k2=something...
instead of
...t1 INNER JOIN t2 ON t1.k1 = t2.k2 .... WHERE t2.k2=something...
So this is definitely not an explanation of why you see that behavior, just a few hints. The query optimizer is a strange beast, but fortunately there is the EXPLAIN command, which can help you nudge it into behaving the way you want.
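As a rough illustration of those hints applied to the original query (an untested sketch; the mf alias is just shorthand, and it assumes assign_ipa should drive the join, as discussed above):
-- Refresh index statistics first.
ANALYZE TABLE master_final, ipa, assign_ipa;
-- Force the join order to start from the filtered small table.
SELECT mf.PayorCode, SUM(mf.MbrCt) AS MbrCt
FROM assign_ipa
STRAIGHT_JOIN ipa ON ipa.ipa_id = assign_ipa.ipa_id
STRAIGHT_JOIN master_final AS mf ON mf.IPA = ipa.ipa_code
WHERE assign_ipa.username = 'SJones'
GROUP BY mf.PayorCode, mf.IPA;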

Related

"SELECT [value]" vs "SELECT [value] FROM [table] LIMIT 1" in MySQL

Which query is better?
SELECT true;
SELECT true FROM users LIMIT 1;
In terms of:
Best practice
Performance
The first query has less overhead because it doesn't reference any tables.
mysql> explain select true\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: NULL
partitions: NULL
type: NULL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: NULL
filtered: NULL
Extra: No tables used
Whereas the second query does reference a table, which means it has to spend time:
Checking that the table exists and, if the query references any columns, checking that those columns exist.
Checking that your user has privileges to read that table.
Acquiring a metadata lock, so no one does any DDL or LOCK TABLES while your query is reading it.
Starting to do an index-scan, even though it will be cut short by the LIMIT.
Here's the explain for the second query for comparison:
mysql> explain select true from mysql.user limit 1\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: user
partitions: NULL
type: index
possible_keys: NULL
key: PRIMARY
key_len: 276
ref: NULL
rows: 8
filtered: 100.00
Extra: Using index
The first query will return one row with the value true.
The second query will return one row for every row in the users table (though the LIMIT 1 here cuts it to a single row), with true as the only value.
So if you just need one row, use the first query. If you need that value repeated once per row of the table, use the second one.
In either case, it is obvious you want the value TRUE :) With that intention, "SELECT TRUE" is the most efficient, as it doesn't make MySQL look at the users table at all, no matter how many rows it has, nor apply the "LIMIT 1" afterwards.
As for BEST PRACTICE, I am not sure what you meant here, because from my point of view this doesn't even require a PRACTICE, let alone a BEST one, as I fail to see any real-life application of this approach.

Which query should be used? Deducing from MySQL Explain

The Explaining MySQL Explain chapter in the O'Reilly book Optimizing SQL Statements has this question at the end.
The following is an example of a business need that retrieves orphaned parent records in a parent/child relationship. This SQL query can be written in three different ways. While the output produces the same results, the QEP shows three different paths.
mysql> EXPLAIN SELECT p.*
-> FROM parent p
-> WHERE p.id NOT IN (SELECT c.parent_id FROM child c)\G
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: p
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 160
Extra: Using where
*************************** 2. row ***************************
id: 2
select_type: DEPENDENT SUBQUERY
table: c
type: index_subquery
possible_keys: parent_id
key: parent_id
key_len: 4
ref: func
rows: 1
Extra: Using index
2 rows in set (0.00 sec)
mysql> EXPLAIN SELECT p.*
-> FROM parent p
-> LEFT JOIN child c ON p.id = c.parent_id
-> WHERE c.child_id IS NULL\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: p
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 160
Extra:
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: c
type: ref
possible_keys: parent_id
key: parent_id
key_len: 4
ref: test.p.id
rows: 1
Extra: Using where; Using index; Not exists
2 rows in set (0.00 sec)
mysql> EXPLAIN SELECT p.*
-> FROM parent p
-> WHERE NOT EXISTS
-> (SELECT parent_id FROM child c WHERE c.parent_id = p.id)\G
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: p
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 160
Extra: Using where
*************************** 2. row ***************************
id: 2
select_type: DEPENDENT SUBQUERY
table: c
type: ref
possible_keys: parent_id
key: parent_id
key_len: 4
ref: test.p.id
rows: 1
Extra: Using index
2 rows in set (0.00 sec)
Which is best? Will data growth over time cause a different QEP to perform better?
There is no answer in the book or internet as far as I could research.
There is an old article from 2009 which I've seen linked on Stack Overflow many times. The test there shows that the NOT EXISTS query is 27% (it's actually 26%) slower than the other two queries (LEFT JOIN and NOT IN).
However, the optimizer has been improved from version to version. And the perfect optimizer would create the same execution plan for all three queries. But as long as the optimizer is not perfect, the answer on "Which query is faster?" can depend on actual setup (which includes version, settings and data).
I've run similar tests in the past, and all I remember is that the LEFT JOIN has never been significantly slower than any other method. But out of curiosity I've just created a new test on MariaDB 10.3.13 portable Windows version with default settings.
Dummy data:
set @parents = 1000;
drop table if exists parent;
create table parent(
parent_id mediumint unsigned primary key
);
insert into parent(parent_id)
select seq
from seq_1_to_1000000
where seq <= @parents
;
drop table if exists child;
create table child(
child_id mediumint unsigned primary key,
parent_id mediumint unsigned not null,
index (parent_id)
);
insert into child(child_id, parent_id)
select seq as child_id
, floor(rand(1)*@parents)+1 as parent_id
from seq_1_to_1000000
;
NOT IN:
set @start = TIME(SYSDATE(6));
select count(*) into @cnt
from parent p
where p.parent_id not in (select parent_id from child c);
select @cnt, TIMEDIFF(TIME(SYSDATE(6)), @start);
LEFT JOIN:
set @start = TIME(SYSDATE(6));
select count(*) into @cnt
from parent p
left join child c on c.parent_id = p.parent_id
where c.parent_id is null;
select @cnt, TIMEDIFF(TIME(SYSDATE(6)), @start);
NOT EXISTS:
set @start = TIME(SYSDATE(6));
select count(*) into @cnt
from parent p
where not exists (
select *
from child c
where c.parent_id = p.parent_id
);
select @cnt, TIMEDIFF(TIME(SYSDATE(6)), @start);
Execution time in milliseconds:
@parents | 1000 | 10000 | 100000 | 1000000
-----------|------|-------|--------|--------
NOT IN | 21 | 38 | 175 | 4459
LEFT JOIN | 24 | 40 | 183 | 1508
NOT EXISTS | 26 | 44 | 180 | 4463
I executed the queries multiple times and took the lowest value. And SYSDATE is probably not the best method to measure execution time, so don't take these numbers as exact. However, we can see that up to 100K parent rows there is not much difference, and the NOT IN method is a bit faster. But with 1M parent rows the LEFT JOIN is three times faster.
Conclusion
So what is the answer? I could just say "LEFT JOIN" wins. But the truth is that this test proves nothing, and the answer is (as so often): "It depends". When performance matters, the best you can do is run your own tests with real queries against real data. If you don't have real data (yet), create dummy data with the volume and distribution you expect to have in the future.
It depends on what version of MySQL you are using. In older versions, IN ( SELECT ...) performed terribly. In the latest version, it is often as good as the other variants. Also, MariaDB has some optimization differences, probably in this area.
EXISTS( SELECT 1 ... ) is perhaps the clearest in stating the intent. And it perhaps has always (once it came into existence) been fast.
NOT IN and NOT EXISTS are a different animal.
Some things in your Question that may have impact: func and index_subquery. In similar queries, you may not see these, and that difference may lead to performance differences.
Or, to repeat myself:
"There have been a number of improvements in the Optimizer since 2009.
"To the Author (Quassnoi): Please rerun your tests, and specify which version they are being run against. Note also that MySQL and MariaDB may yield different results.
"To the Reader: Test the variants yourself, do not blindly trust the conclusions in this blog."

mariadb not using the correct index with subquery

I'm facing a performance issue on a MariaDB database. It seems to me that MariaDB is not using the correct index when running a query with a subquery, while manually injecting the result of the subquery into the query successfully uses the index.
Here is the query with the bad behavior (note in the second row of the ANALYZE output that it reads more rows than necessary):
ANALYZE SELECT `orders`.* FROM `orders`
WHERE `orders`.`account_id` IN (SELECT `accounts`.`id` FROM `accounts` WHERE `accounts`.`user_id` = 88144)
AND ( orders.type not in ("LimitOrder", "MarketOrder")
OR orders.type in ("LimitOrder", "MarketOrder") AND orders.state <> "canceled"
OR orders.type in ("LimitOrder", "MarketOrder") AND orders.state = "canceled" AND orders.traded_btc > 0 )
AND (NOT (orders.type = 'AdminOrder' AND orders.state = 'canceled')) ORDER BY `orders`.`id` DESC LIMIT 20 OFFSET 0 \G;
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: accounts
type: ref
possible_keys: PRIMARY,index_accounts_on_user_id
key: index_accounts_on_user_id
key_len: 4
ref: const
rows: 7
r_rows: 7.00
filtered: 100.00
r_filtered: 100.00
Extra: Using index; Using temporary; Using filesort
*************************** 2. row ***************************
id: 1
select_type: PRIMARY
table: orders
type: ref
possible_keys: index_orders_on_account_id_and_type,index_orders_on_type_and_state_and_buying,index_orders_on_account_id_and_type_and_state,index_orders_on_account_id_and_type_and_state_and_traded_btc
key: index_orders_on_account_id_and_type_and_state_and_traded_btc
key_len: 4
ref: bitcoin_central.accounts.id
rows: 60
r_rows: 393.86
filtered: 100.00
r_filtered: 100.00
Extra: Using index condition; Using where
When manually injecting the result of the subquery I have the correct behaviour (and expected performance):
ANALYZE SELECT `orders`.* FROM `orders`
WHERE `orders`.`account_id` IN (433212, 433213, 433214, 433215, 436058, 436874, 437950)
AND ( orders.type not in ("LimitOrder", "MarketOrder")
OR orders.type in ("LimitOrder", "MarketOrder") AND orders.state <> "canceled"
OR orders.type in ("LimitOrder", "MarketOrder") AND orders.state = "canceled" AND orders.traded_btc > 0 )
AND (NOT (orders.type = 'AdminOrder' AND orders.state = 'canceled'))
ORDER BY `orders`.`id` DESC LIMIT 20 OFFSET 0\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: orders
type: range
possible_keys: index_orders_on_account_id_and_type,index_orders_on_type_and_state_and_buying,index_orders_on_account_id_and_type_and_state,index_orders_on_account_id_and_type_and_state_and_traded_btc
key: index_orders_on_account_id_and_type_and_state_and_traded_btc
key_len: 933
ref: NULL
rows: 2809
r_rows: 20.00
filtered: 100.00
r_filtered: 100.00
Extra: Using index condition; Using where; Using filesort
1 row in set (0.37 sec)
Note that I have exactly the same issue when JOINing the two tables.
Here is an extract of the definitions of my orders table:
SHOW CREATE TABLE orders \G;
*************************** 1. row ***************************
Table: orders
Create Table: CREATE TABLE `orders` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`account_id` int(11) NOT NULL,
`traded_btc` decimal(16,8) DEFAULT '0.00000000',
`type` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`state` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`),
KEY `index_orders_on_account_id_and_type_and_state_and_traded_btc` (`account_id`,`type`,`state`,`traded_btc`),
CONSTRAINT `orders_account_id_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=8575594 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Does anyone know what's going on here?
Is there a way to force the database to use the right index with my subquery?
IN ( SELECT ... ) optimizes poorly. The usual solution is to turn it into a JOIN:
FROM accounts AS a
JOIN orders AS o ON a.id = o.account_id
WHERE a.user_id = 88144
AND ... -- the rest of your WHERE
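Spelled out with the original WHERE conditions folded in, that rewrite might look roughly like this (an untested sketch):
SELECT o.*
FROM accounts AS a
JOIN orders AS o ON a.id = o.account_id
WHERE a.user_id = 88144
  AND ( o.type NOT IN ('LimitOrder', 'MarketOrder')
     OR o.type IN ('LimitOrder', 'MarketOrder') AND o.state <> 'canceled'
     OR o.type IN ('LimitOrder', 'MarketOrder') AND o.state = 'canceled' AND o.traded_btc > 0 )
  AND NOT (o.type = 'AdminOrder' AND o.state = 'canceled')
ORDER BY o.id DESC
LIMIT 20 OFFSET 0;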
Or is that what you did with "Note that I have exactly the same issue when JOINing the two tables"? If so, let's see that query and its EXPLAIN.
You refer to "expected performance"... Are you referring to the numbers in the EXPLAIN? Or do you have timings to back up the assertion?
I like to do this to get a finer grained look into how much "work" is going on:
FLUSH STATUS;
SELECT ...;
SHOW SESSION STATUS LIKE 'Handler%';
Those numbers usually make it clear whether a table scan was involved or whether the query stopped after OFFSET+LIMIT. The numbers are exact counts, unlike EXPLAIN, which is just estimates.
Presumably you usually look in orders via account_id? Here is a way to speed up such queries:
Replace the current two indexes
PRIMARY KEY (`id`),
KEY `account_id__type__state__traded_btc`
(`account_id`,`type`,`state`,`traded_btc`),
with these:
PRIMARY KEY (`account_id`, `type`, `id`),
KEY (id) -- to keep AUTO_INCREMENT happy.
This clusters all the rows for a given account, thereby making the queries run faster, especially if you are now I/O-bound. If some combination of columns makes a "natural" PK, then toss id completely.
(And notice how I shortened your key name without losing any info?)
Also, if you are I/O-bound, shrinking the table is quite possible by turning those lengthy VARCHARs (state & type) into ENUMs.
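A sketch of that re-clustering change (it rebuilds the table, and type must contain no NULLs to sit in the PRIMARY KEY, so try it on a copy first):
ALTER TABLE orders
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (account_id, type, id),
  ADD KEY (id);   -- keeps AUTO_INCREMENT working
-- For the ENUM idea, collect the real value lists first, e.g.
--   SELECT DISTINCT type FROM orders; SELECT DISTINCT state FROM orders;
-- then MODIFY type and state to ENUM(...) with those values.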
More
Given that the query involves
WHERE ... mess with INs and ORs ...
ORDER BY ...
LIMIT 20
and there are 2 million rows for that one user, there is no INDEX that can get past the WHERE to get into the ORDER BY so that it can consume the LIMIT. That is, it must perform this way:
filter through the 2M rows for that one user
sort (ORDER BY) some significant fraction of 2M rows
peel off 20 rows. (Yeah, 5.6 uses a "priority queue", so it does not have to fully sort all the rows, just keep the best 20, but this is not that much help.)
I'm actually amazed that the IN( constants ) worked well.
I had the same problem. Please use inner join instead of in-subquery.

Is there a better way of doing this in mysql? - update entire column with another select and group by

I have a table sample with two columns id and cnt and another table PostTags with two columns postid and tagid
I want to update all cnt values with their corresponding counts and I have written the following query:
UPDATE sample SET
cnt = (SELECT COUNT(tagid)
FROM PostTags
WHERE sample.postid = PostTags.postid
GROUP BY PostTags.postid)
I intend to update the entire column at once, and I seem to accomplish this. But performance-wise, is this the best way, or is there a better one?
EDIT
I've been running this query (without the GROUP BY) for over an hour on ~18M records. I'm looking for a query with better performance.
That query should not take an hour. I just did a test, running a query like yours on a table of 87520 keywords and matching rows in a many-to-many table of 2776445 movie_keyword rows. In my test, it took 32 seconds.
The crucial part that you're probably missing is that you must have an index on the lookup column, which is PostTags.postid in your example.
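If that index is missing, something along these lines would add it (the index name is illustrative; a compound (postid, tagid) index would also make the correlated count an index-only lookup):
ALTER TABLE PostTags ADD INDEX idx_postid (postid);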
Here's the EXPLAIN from my test (finally we can do EXPLAIN on UPDATE statements in MySQL 5.6):
mysql> explain update kc1 set count =
(select count(*) from movie_keyword
where kc1.keyword_id = movie_keyword.keyword_id) \G
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: kc1
type: index
possible_keys: NULL
key: PRIMARY
key_len: 4
ref: NULL
rows: 98867
Extra: Using temporary
*************************** 2. row ***************************
id: 2
select_type: DEPENDENT SUBQUERY
table: movie_keyword
type: ref
possible_keys: k_m
key: k_m
key_len: 4
ref: imdb.kc1.keyword_id
rows: 17
Extra: Using index
Having an index on keyword_id is important. In my case, I had a compound index, but a single-column index would help too.
CREATE TABLE `movie_keyword` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`movie_id` int(11) NOT NULL,
`keyword_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `k_m` (`keyword_id`,`movie_id`)
);
The difference between COUNT(*) and COUNT(movie_id) should be immaterial, assuming movie_id is NOT NULLable. But I use COUNT(*) because it'll still count as an index-only query if my index is defined only on the keyword_id column.
Remove the unnecessary GROUP BY and the statement looks good. If, however, you expect many sample.cnt values to already contain the correct count, then you would be updating many records that need no update. This may create some overhead (larger rollback segments, triggers executed, etc.) and thus take longer.
In order to only update the records that need be updated, join:
UPDATE sample
INNER JOIN
(
SELECT postid, COUNT(tagid) as cnt
FROM PostTags
GROUP BY postid
) tags ON tags.postid = sample.postid
SET sample.cnt = tags.cnt
WHERE sample.cnt != tags.cnt OR sample.cnt IS NULL;
Here is the SQL fiddle: http://sqlfiddle.com/#!2/d5e88.

Why would MySQL use index intersection instead of combined index?

From time to time I encounter a strange MySQL behavior. Let's assume I have indexes (type, rel, created), (type), (rel). The best choice for a query like this one:
SELECT id FROM tbl
WHERE rel = 3 AND type = 3
ORDER BY created;
would be to use index (type, rel, created).
But MySQL decides to intersect the indexes (type) and (rel), and that leads to worse performance. Here is an example:
mysql> EXPLAIN
-> SELECT id FROM tbl
-> WHERE rel = 3 AND type = 3
-> ORDER BY created\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: tbl
type: index_merge
possible_keys: idx_type,idx_rel,idx_rel_type_created
key: idx_type,idx_rel
key_len: 1,2
ref: NULL
rows: 4343
Extra: Using intersect(idx_type,idx_rel); Using where; Using filesort
And the same query, but with a hint added:
mysql> EXPLAIN
-> SELECT id FROM tbl USE INDEX (idx_type_rel_created)
-> WHERE rel = 3 AND type = 3
-> ORDER BY created\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: tbl
type: ref
possible_keys: idx_type_rel_created
key: idx_type_rel_created
key_len: 3
ref: const,const
rows: 8906
Extra: Using where
I think MySQL picks the execution plan with the smaller number in the "rows" column of the EXPLAIN output. From that point of view, the index intersection with 4343 rows looks better than my combined index with 8906 rows. So maybe the problem lies in those numbers?
mysql> SELECT COUNT(*) FROM tbl WHERE type=3 AND rel=3;
+----------+
| COUNT(*) |
+----------+
| 3056 |
+----------+
From this I conclude that MySQL miscalculates the approximate number of rows for the combined index.
So, what can I do here to make MySQL take the right execution plan?
I cannot use optimizer hints, because I have to stick to the Django ORM.
The only solution I've found so far is to remove those single-column indexes.
MySQL version is 5.1.49.
The table structure is:
CREATE TABLE tbl (
`id` int(11) NOT NULL AUTO_INCREMENT,
`type` tinyint(1) NOT NULL,
`rel` smallint(2) NOT NULL,
`created` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `idx_type` (`type`),
KEY `idx_rel` (`rel`),
KEY `idx_type_rel_created` (`type`,`rel`,`created`)
) ENGINE=MyISAM;
It's hard to tell exactly why MySQL chooses index_merge_intersection over the index scan, but note that for composite indexes, the statistics stored for a column cover the prefix of index columns up to and including it.
The value of information_schema.statistics.cardinality for the column type of the composite index will show the cardinality of (rel, type), not type itself.
If there is a correlation between rel and type, then the cardinality of (rel, type) will be less than the product of the cardinalities of rel and type taken separately from the indexes on the corresponding columns.
That's why the number of rows is estimated incorrectly (an intersection cannot be larger in size than a union).
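You can inspect those stored cardinalities yourself; a diagnostic sketch (adjust the schema name as needed):
-- Compare the single-column index cardinalities with the cumulative values
-- stored for each position of the composite index.
SELECT index_name, seq_in_index, column_name, cardinality
FROM information_schema.statistics
WHERE table_schema = DATABASE()
  AND table_name = 'tbl'
ORDER BY index_name, seq_in_index;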
You can forbid index_merge_intersection by setting it to off in @@optimizer_switch:
SET optimizer_switch = 'index_merge_intersection=off'
Another thing worth mentioning: you would not have this problem if you deleted the single-column index on type. That index is not required, since it duplicates a prefix of the composite index.
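Dropping that redundant single-column index would look like this (using the index name from the table definition above):
-- idx_type only duplicates the leading column of idx_type_rel_created.
ALTER TABLE tbl DROP INDEX idx_type;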
Sometimes the intersection on the same table can be useful, and you may not want to remove a single-column index because some other query works well with the intersection.
In that case, if the bad execution plan concerns only one query, a solution is to exclude the unwanted index for that query. That will prevent the use of the intersection only for that specific query.
In your example:
SELECT id FROM tbl IGNORE INDEX(idx_type)
WHERE rel = 3 AND type = 3
ORDER BY created;