I'm facing a performance issue on a MariaDB database. It seems to me that MariaDB is not using the correct index when running a query with a subquery, while manually injecting the subquery's result into the query successfully uses the index:
Here is the query with the bad behavior (note that the second table in the plan reads more rows than necessary):
ANALYZE SELECT `orders`.* FROM `orders`
WHERE `orders`.`account_id` IN (SELECT `accounts`.`id` FROM `accounts` WHERE `accounts`.`user_id` = 88144)
AND ( orders.type not in ("LimitOrder", "MarketOrder")
OR orders.type in ("LimitOrder", "MarketOrder") AND orders.state <> "canceled"
OR orders.type in ("LimitOrder", "MarketOrder") AND orders.state = "canceled" AND orders.traded_btc > 0 )
AND (NOT (orders.type = 'AdminOrder' AND orders.state = 'canceled')) ORDER BY `orders`.`id` DESC LIMIT 20 OFFSET 0 \G;
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: accounts
type: ref
possible_keys: PRIMARY,index_accounts_on_user_id
key: index_accounts_on_user_id
key_len: 4
ref: const
rows: 7
r_rows: 7.00
filtered: 100.00
r_filtered: 100.00
Extra: Using index; Using temporary; Using filesort
*************************** 2. row ***************************
id: 1
select_type: PRIMARY
table: orders
type: ref
possible_keys: index_orders_on_account_id_and_type,index_orders_on_type_and_state_and_buying,index_orders_on_account_id_and_type_and_state,index_orders_on_account_id_and_type_and_state_and_traded_btc
key: index_orders_on_account_id_and_type_and_state_and_traded_btc
key_len: 4
ref: bitcoin_central.accounts.id
rows: 60
r_rows: 393.86
filtered: 100.00
r_filtered: 100.00
Extra: Using index condition; Using where
When manually injecting the result of the subquery I have the correct behaviour (and expected performance):
ANALYZE SELECT `orders`.* FROM `orders`
WHERE `orders`.`account_id` IN (433212, 433213, 433214, 433215, 436058, 436874, 437950)
AND ( orders.type not in ("LimitOrder", "MarketOrder")
OR orders.type in ("LimitOrder", "MarketOrder") AND orders.state <> "canceled"
OR orders.type in ("LimitOrder", "MarketOrder") AND orders.state = "canceled" AND orders.traded_btc > 0 )
AND (NOT (orders.type = 'AdminOrder' AND orders.state = 'canceled'))
ORDER BY `orders`.`id` DESC LIMIT 20 OFFSET 0\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: orders
type: range
possible_keys: index_orders_on_account_id_and_type,index_orders_on_type_and_state_and_buying,index_orders_on_account_id_and_type_and_state,index_orders_on_account_id_and_type_and_state_and_traded_btc
key: index_orders_on_account_id_and_type_and_state_and_traded_btc
key_len: 933
ref: NULL
rows: 2809
r_rows: 20.00
filtered: 100.00
r_filtered: 100.00
Extra: Using index condition; Using where; Using filesort
1 row in set (0.37 sec)
Note that I have exactly the same issue when JOINing the two tables.
Here is an extract of the definitions of my orders table:
SHOW CREATE TABLE orders \G;
*************************** 1. row ***************************
Table: orders
Create Table: CREATE TABLE `orders` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`account_id` int(11) NOT NULL,
`traded_btc` decimal(16,8) DEFAULT '0.00000000',
`type` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`state` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`),
KEY `index_orders_on_account_id_and_type_and_state_and_traded_btc` (`account_id`,`type`,`state`,`traded_btc`),
CONSTRAINT `orders_account_id_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=8575594 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Does anyone know what's going on here?
Is there a way to force the database to use my index in the subquery?
IN ( SELECT ... ) optimizes poorly. The usual solution is to turn it into a JOIN:
FROM accounts AS a
JOIN orders AS o ON a.id = o.account_id
WHERE a.user_id = 88144
AND ... -- the rest of your WHERE
Or is that what you did with "Note that I have exactly the same issue when JOINing the two tables"? If so, let's see that query and its EXPLAIN.
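For reference, a JOIN version of the original query might look like this (a sketch only — the WHERE conditions are copied verbatim from the question, and since accounts.id is the primary key, the join cannot duplicate order rows):

```sql
SELECT o.*
FROM accounts AS a
JOIN orders AS o ON o.account_id = a.id
WHERE a.user_id = 88144
  AND ( o.type NOT IN ('LimitOrder', 'MarketOrder')
     OR o.type IN ('LimitOrder', 'MarketOrder') AND o.state <> 'canceled'
     OR o.type IN ('LimitOrder', 'MarketOrder') AND o.state = 'canceled' AND o.traded_btc > 0 )
  AND NOT (o.type = 'AdminOrder' AND o.state = 'canceled')
ORDER BY o.id DESC
LIMIT 20;
```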
You refer to "expected performance"... Are you referring to the numbers in the EXPLAIN? Or do you have timings to back up the assertion?
I like to do this to get a finer-grained look into how much "work" is going on:
FLUSH STATUS;
SELECT ...;
SHOW SESSION STATUS LIKE 'Handler%';
Those numbers usually make it clear whether a table scan was involved or whether the query stopped after OFFSET+LIMIT. The numbers are exact counts, unlike EXPLAIN, which is just estimates.
Presumably you usually look in orders via account_id? Here is a way to speed up such queries:
Replace the current two indexes
PRIMARY KEY (`id`),
KEY `account_id__type__state__traded_btc`
(`account_id`,`type`,`state`,`traded_btc`),
with these:
PRIMARY KEY (`account_id`, `type`, `id`),
KEY (id) -- to keep AUTO_INCREMENT happy.
This clusters all the rows for a given account, thereby making the queries run faster, especially if you are now I/O-bound. If some combination of columns makes a "natural" PK, then toss id completely.
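A sketch of that change (assumptions: InnoDB, and type must first be made NOT NULL, since primary-key columns cannot be nullable; rebuilding the clustered index copies the whole table, so do this in a maintenance window):

```sql
ALTER TABLE orders
  MODIFY `type` VARCHAR(255) COLLATE utf8_unicode_ci NOT NULL,
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (`account_id`, `type`, `id`),
  ADD KEY (`id`);   -- AUTO_INCREMENT columns must still be indexed
```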
(And notice how I shortened your key name without losing any info?)
Also, if you are I/O-bound, shrinking the table is quite possible by turning those lengthy VARCHARs (state & type) into ENUMs.
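If you try the ENUM route, the conversion might look like the following. Note that the value lists here are assumptions: they must include every value actually present in the table, or unknown values will be mangled (under strict SQL mode the ALTER fails instead):

```sql
ALTER TABLE orders
  MODIFY `type`  ENUM('LimitOrder', 'MarketOrder', 'AdminOrder') COLLATE utf8_unicode_ci,
  MODIFY `state` ENUM('pending', 'canceled', 'filled') COLLATE utf8_unicode_ci NOT NULL;
```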
More
Given that the query involves
WHERE ... mess with INs and ORs ...
ORDER BY ...
LIMIT 20
and there are 2 million rows for that one user, there is no INDEX that can get past the WHERE to get into the ORDER BY so that it can consume the LIMIT. That is, it must perform this way:
filter through the 2M rows for that one user
sort (ORDER BY) some significant fraction of 2M rows
peel off 20 rows. (Yes, 5.6 uses a "priority" queue for ORDER BY ... LIMIT, making the sort roughly O(N) instead of O(N log N), but that is not much help here.)
I'm actually amazed that the IN( constants ) worked well.
I had the same problem. Use an INNER JOIN instead of the IN subquery.
Related
I have the following query:
select *
from test_table
where app_id = 521
and is_deleted=0
and category in (7650)
AND created_timestamp >= '2020-07-28 18:19:26'
AND created_timestamp <= '2020-08-04 18:19:26'
ORDER BY created_timestamp desc
limit 30
All four fields, app_id, is_deleted, category and created_timestamp are indexed. However, the cardinality of app_id and is_deleted are very small (3 each).
category field is fairly distributed, but created_timestamp seems like a very good index choice for this query.
However, MySQL is not using the created_timestamp index and is in turn taking 4 seconds to return. If I force MySQL to use the created_timestamp index using USE INDEX (created_timestamp), it returns in 40ms.
I checked the output of the EXPLAIN command to see why that's happening, and found that MySQL is executing the query with the following plan:
Automatic index decision, takes > 4s
type: index_merge
key: category,app_id,is_deleted
rows: 10250
filtered: 0.36
Using intersect(category,app_id,is_deleted); Using where; Using filesort
Force index usage:
Use index created_timestamp, takes < 50ms
type: range
key: created_timestamp
rows: 47000
filtered: 0.50
Using index condition; Using where; Backward index scan
MySQL probably decides that a smaller number of rows to scan is better, and that makes sense too, but then why does the query take forever in that case? How can I fix this query?
The using intersection and the using filesort are both costly for performance. It's best if we can eliminate these.
Here's a test. I'm assuming the IN ( ... ) predicate could sometimes have multiple values, so it will be a range type query, and cannot be optimized as an equality.
CREATE TABLE `test_table` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`app_id` int(11) NOT NULL,
`is_deleted` tinyint(4) NOT NULL DEFAULT '0',
`category` int(11) NOT NULL,
`created_timestamp` timestamp NOT NULL,
`other` text,
PRIMARY KEY (`id`),
KEY `a_is_ct_c` (`app_id`,`is_deleted`,`created_timestamp`,`category`),
KEY `a_is_c_ct` (`app_id`,`is_deleted`,`category`,`created_timestamp`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
If we use your query and hint the optimizer to use the first index (created_timestamp before category), we get a query that eliminates both:
EXPLAIN SELECT * FROM test_table FORCE INDEX (a_is_ct_c)
WHERE app_id = 521
AND is_deleted=0
AND category in (7650,7651,7652)
AND created_timestamp >= '2020-07-28 18:19:26'
AND created_timestamp <= '2020-08-04 18:19:26'
ORDER BY created_timestamp DESC\G
id: 1
select_type: SIMPLE
table: test_table
partitions: NULL
type: range
possible_keys: a_is_ct_c
key: a_is_ct_c
key_len: 13
ref: NULL
rows: 1
filtered: 100.00
Extra: Using index condition
Whereas if we use the second index (category before created_timestamp), then at least the using intersection is gone, but we still have a filesort:
EXPLAIN SELECT * FROM test_table FORCE INDEX (a_is_c_ct)
WHERE app_id = 521
AND is_deleted=0
AND category in (7650,7651,7652)
AND created_timestamp >= '2020-07-28 18:19:26'
AND created_timestamp <= '2020-08-04 18:19:26'
ORDER BY created_timestamp DESC\G
id: 1
select_type: SIMPLE
table: test_table
partitions: NULL
type: range
possible_keys: a_is_c_ct
key: a_is_c_ct
key_len: 13
ref: NULL
rows: 3
filtered: 100.00
Extra: Using index condition; Using filesort
The "using index condition" is a feature of InnoDB to filter the fourth column at the storage engine level. This is called Index condition pushdown.
The optimal index for the query given, plus some others:
INDEX(app_id, is_deleted, -- put first, in either order
category, -- in this position, assuming it might have multiple INs
created_timestamp) -- a range; last.
"Index merge intersect" is probably always worse than having an equivalent composite index.
Note that an alternative for the Optimizer is to ignore the WHERE and focus on the ORDER BY, especially because of LIMIT 30. However, this is very risky. It may have to scan the entire table without finding the 30 rows desired. Apparently, it had to look at about 47000 rows to find the 30.
With the index above, it will touch only 30 (or fewer) rows.
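Assuming the test_table definition above, the suggested index could be added like so (the index name is illustrative):

```sql
ALTER TABLE test_table
  ADD INDEX app_isdel_cat_ts (app_id, is_deleted, category, created_timestamp);
```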
"All four fields, ... are indexed." -- This is a common misconception, especially by newcomers to databases. It is very rare for a query to use more than one index. So, it is better to try for a "composite" index, which is likely to work much better.
How to build the optimal INDEX for a given SELECT: http://mysql.rjweb.org/doc.php/index_cookbook_mysql
I have table
user[id, name, status] with index[status, name, id]
SELECT *
FROM user
WHERE status = 'active'
ORDER BY name, id
LIMIT 50
I have about 50000 users with status == 'active'
1.) Why does the MySQL EXPLAIN show about 50000 in the rows column? Why does it follow all leaf nodes even when the index columns match the ORDER BY clause?
2.) When I change order by clause to
ORDER BY status, name, id
EXTRA column of explain clause shows:
Using index condition; Using where; Using filesort
Is there any reason why it can't use index order in this query?
edit1:
CREATE TABLE `user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`status` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `status_name_id` (`status`,`name`,`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
query:
SELECT *
FROM `user`
WHERE status = 'complete'
ORDER BY status, name, id
LIMIT 50
explain:
id: 1
select_type: SIMPLE
table: f_order
type: ref
possible_keys: status_name_id
key: status_name_id
key_len: 768
ref: const
rows: 50331
Extra: "Using where; Using index; Using filesort"
The weirdest thing is that if I change the SELECT statement to
SELECT *, count(id)
it uses the index again and the query is twice as fast. And the Extra section contains only
Using where; Using index
Table contains 100k rows, 5 different statuses and 12 different names.
MySQL: 5.6.27
edit2:
Another example:
This takes 400ms (avg) and does explicit sort
SELECT *
FROM `user`
WHERE status IN('complete')
ORDER BY status, name, id
LIMIT 50
This takes 2ms (avg) and doesn't explicit sort
SELECT *
FROM `user`
WHERE status IN('complete', 'something else')
ORDER BY status, name, id
LIMIT 50
Q1: EXPLAIN is a bit lame: it fails to take the LIMIT into account when providing the rows estimate. Be assured that if the query can stop short, it will.
Q2: Did it say that it was using your index? Please provide the full EXPLAIN and SHOW CREATE TABLE.
More
With INDEX(status, name, id), the WHERE, ORDER BY, and LIMIT can be handled in the index. Hence it has to read only 50 rows.
Without that index, (or with practically any change to the query), much or all of the table would need to be read, stored in a tmp table, sorted, and only then could 50 rows be peeled off.
So, I suggest that it is more complicated than "explicit sort can kill my db server".
According to the comments, it is probably a bug.
I have a table sample with two columns id and cnt and another table PostTags with two columns postid and tagid
I want to update all cnt values with their corresponding counts and I have written the following query:
UPDATE sample SET
cnt = (SELECT COUNT(tagid)
FROM PostTags
WHERE sample.postid = PostTags.postid
GROUP BY PostTags.postid)
I intend to update the entire column at once, and I seem to accomplish this. But performance-wise, is this the best way? Or is there a better way?
EDIT
I've been running this query (without the GROUP BY) for over an hour on ~18M records. I'm looking for a query that performs better.
That query should not take an hour. I just did a test, running a query like yours on a table of 87520 keywords and matching rows in a many-to-many table of 2776445 movie_keyword rows. In my test, it took 32 seconds.
The crucial part that you're probably missing is that you must have an index on the lookup column, which is PostTags.postid in your example.
Here's the EXPLAIN from my test (finally we can do EXPLAIN on UPDATE statements in MySQL 5.6):
mysql> explain update kc1 set count =
(select count(*) from movie_keyword
where kc1.keyword_id = movie_keyword.keyword_id) \G
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: kc1
type: index
possible_keys: NULL
key: PRIMARY
key_len: 4
ref: NULL
rows: 98867
Extra: Using temporary
*************************** 2. row ***************************
id: 2
select_type: DEPENDENT SUBQUERY
table: movie_keyword
type: ref
possible_keys: k_m
key: k_m
key_len: 4
ref: imdb.kc1.keyword_id
rows: 17
Extra: Using index
Having an index on keyword_id is important. In my case, I had a compound index, but a single-column index would help too.
CREATE TABLE `movie_keyword` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`movie_id` int(11) NOT NULL,
`keyword_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `k_m` (`keyword_id`,`movie_id`)
);
The difference between COUNT(*) and COUNT(movie_id) should be immaterial, assuming movie_id is NOT NULLable. But I use COUNT(*) because it'll still count as an index-only query if my index is defined only on the keyword_id column.
Remove the unnecessary GROUP BY and the statement looks good. If, however, you expect many sample.cnt values to already be correct, then you would be updating many records that need no update. This can create some overhead (larger rollback segments, triggers being executed, etc.) and thus take longer.
In order to only update the records that need be updated, join:
UPDATE sample
INNER JOIN
(
SELECT postid, COUNT(tagid) as cnt
FROM PostTags
GROUP BY postid
) tags ON tags.postid = sample.postid
SET sample.cnt = tags.cnt
WHERE sample.cnt != tags.cnt OR sample.cnt IS NULL;
Here is the SQL fiddle: http://sqlfiddle.com/#!2/d5e88.
I have a query that is giving me problems and I can't understand why MySQL's query optimizer is behaving the way it is. Here is the background info:
I have 3 tables. Two are relatively small and one is large.
Table 1 (very small, 727 rows):
CREATE TABLE ipa (
ipa_id int(11) NOT NULL AUTO_INCREMENT,
ipa_code int(11) DEFAULT NULL,
ipa_name varchar(100) DEFAULT NULL,
payorcode varchar(2) DEFAULT NULL,
compid int(11) DEFAULT '2',
PRIMARY KEY (ipa_id),
KEY ipa_code (ipa_code) )
ENGINE=MyISAM
Table 2 (smallish, 59455 rows):
CREATE TABLE assign_ipa (
assignid int(11) NOT NULL AUTO_INCREMENT,
ipa_id int(11) NOT NULL,
userid int(11) NOT NULL,
username varchar(20) DEFAULT NULL,
compid int(11) DEFAULT NULL,
PayorCode char(10) DEFAULT NULL,
PRIMARY KEY (assignid),
UNIQUE KEY assignid (assignid,ipa_id),
KEY ipa_id (ipa_id)
) ENGINE=MyISAM
Table 3 (large, 24,711,730 rows):
CREATE TABLE master_final (
IPA int(11) DEFAULT NULL,
MbrCt smallint(6) DEFAULT '0',
PayorCode varchar(4) DEFAULT 'WC',
KEY idx_IPA (IPA)
) ENGINE=MyISAM
Now for the query. I'm doing a 3-way join, using the first two smaller tables to essentially subset the big table on one of its indexed values. Basically, I get a list of IDs for a user, SJones, and query the big table for those IDs.
mysql> explain
SELECT master_final.PayorCode, sum(master_final.Mbrct) AS MbrCt
FROM master_final
INNER JOIN ipa ON ipa.ipa_code = master_final.IPA
INNER JOIN assign_ipa ON ipa.ipa_id = assign_ipa.ipa_id
WHERE assign_ipa.username = 'SJones'
GROUP BY master_final.PayorCode, master_final.ipa\G;
************* 1. row *************
id: 1
select_type: SIMPLE
table: master_final
type: ALL
possible_keys: idx_IPA
key: NULL
key_len: NULL
ref: NULL
rows: 24711730
Extra: Using temporary; Using filesort
************* 2. row *************
id: 1
select_type: SIMPLE
table: ipa
type: ref
possible_keys: PRIMARY,ipa_code
key: ipa_code
key_len: 5
ref: wc_test.master_final.IPA
rows: 1
Extra: Using where
************* 3. row *************
id: 1
select_type: SIMPLE
table: assign_ipa
type: ref
possible_keys: ipa_id
key: ipa_id
key_len: 4
ref: wc_test.ipa.ipa_id
rows: 37
Extra: Using where
3 rows in set (0.00 sec)
This query takes forever (around 30 minutes!). The EXPLAIN output tells me why: it's doing a full table scan on the big table even though there is a perfectly good index that it's not using. I don't understand this. I can look at the query and see that it only needs to fetch a couple of IDs from the big table. If I can do it, why can't MySQL's optimizer?
To illustrate, here are the IDs associated with 'SJones':
mysql> select username, ipa_id from assign_ipa where username='SJones';
+----------+--------+
| username | ipa_id |
+----------+--------+
| SJones | 688 |
| SJones | 689 |
+----------+--------+
2 rows in set (0.02 sec)
Now, I can rewrite the query substituting the ipa_id values for the username in the where clause. To me this is equivalent to the original query. MySQL sees it differently. If I do this, the optimizer makes use of the index on the big table.
mysql> explain
SELECT master_final.PayorCode, sum(master_final.Mbrct) AS MbrCt
FROM master_final
INNER JOIN ipa ON ipa.ipa_code = master_final.IPA
INNER JOIN assign_ipa ON ipa.ipa_id = assign_ipa.ipa_id
WHERE assign_ipa.ipa_id IN ('688','689')   -- the changed line
GROUP BY master_final.PayorCode, master_final.ipa\G;
************* 1. row *************
id: 1
select_type: SIMPLE
table: ipa
type: range
possible_keys: PRIMARY,ipa_code
key: PRIMARY
key_len: 4
ref: NULL
rows: 2
Extra: Using where; Using temporary; Using filesort
************* 2. row *************
id: 1
select_type: SIMPLE
table: assign_ipa
type: ref
possible_keys: ipa_id
key: ipa_id
key_len: 4
ref: wc_test.ipa.ipa_id
rows: 37
Extra: Using where
************* 3. row *************
id: 1
select_type: SIMPLE
table: master_final
type: ref
possible_keys: idx_IPA
key: idx_IPA
key_len: 5
ref: wc_test.ipa.ipa_code
rows: 34953
Extra: Using where
3 rows in set (0.00 sec)
The only thing I've changed is a where clause that doesn't even directly hit the big table. And yet, the optimizer uses the index 'idx_IPA' on the big table and the full table scan is no longer used. The query when re-written like this is very fast.
OK, that's a lot of background. Now my question: why should the WHERE clause matter to the optimizer? Either WHERE clause will return the same result set from the smaller table, and yet I'm getting dramatically different results depending on which one I use. Obviously, I want to use the WHERE clause containing the username rather than passing all the associated IDs to the query. As written, though, this seems impossible.
Can someone explain why this is happening?
How might I rewrite my query to avoid the full table scan?
Thanks for sticking with me. I know its a very longish question.
Not quite sure if I'm right, but I think the following is happening here. This:
WHERE assign_ipa.username = 'SJones'
may create a temporary table, since it requires a full table scan. Temporary tables have no indexes, and they tend to slow things down a lot.
The second case
INNER JOIN ipa ON ipa.ipa_code = master_final.IPA
INNER JOIN assign_ipa ON ipa.ipa_id = assign_ipa.ipa_id
WHERE assign_ipa.ipa_id in ('688','689')
on the other hand allows for joining of indexes, which is fast. Additionally, it can be transformed to
SELECT .... FROM master_final WHERE IDA IN (688, 689) ...
and I think MySQL is doing that, too.
Creating an index on assign_ipa.username may help.
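A sketch of that index — appending ipa_id makes it covering for this particular lookup (the index name is illustrative):

```sql
ALTER TABLE assign_ipa
  ADD INDEX idx_username_ipa (username, ipa_id);
```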
Edit
I rethought the problem and now have a different explanation.
The reason, of course, is the missing index. MySQL has no clue how large the result of filtering assign_ipa on username would be (it keeps no statistics for unindexed columns), so it starts with the joins first, where it can rely on keys.
That's what rows 2 and 3 of the EXPLAIN output tell us.
After that, it filters the result by assign_ipa.username, which has no key, as stated in row 1.
As soon as there is an index, it filters assign_ipa first and joins afterwards, using the corresponding indexes.
This is probably not a direct answer to your question, but here are few things that you can do:
Run ANALYZE TABLE ... it updates table statistics, which have a great impact on what the optimizer decides to do.
If you still think that the joins are not in the order you wish them to be (which happens in your case, and thus the optimizer is not using indexes as you expect it to), you can use STRAIGHT_JOIN. From the manual: "STRAIGHT_JOIN forces the optimizer to join the tables in the order in which they are listed in the FROM clause. You can use this to speed up a query if the optimizer joins the tables in nonoptimal order."
For me, putting the WHERE condition right into the JOIN sometimes makes a difference and speeds things up. For example, you can write:
...t1 INNER JOIN t2 ON t1.k1 = t2.k2 AND t2.k2=something...
instead of
...t1 INNER JOIN t2 ON t1.k1 = t2.k2 .... WHERE t2.k2=something...
So this is definitely not an explanation of why you see that behavior, just a few hints. The query optimizer is a strange beast, but fortunately there is the EXPLAIN command, which can help you trick it into behaving the way you want.
I am using MySQL version 5.5.14 to run the following query, QUERY 1, from a table of 5 Million rows:
SELECT P.ID, P.Type, P.Name, P.cty
, X(P.latlng) as 'lat', Y(P.latlng) as 'lng'
, P.cur, P.ak, P.tn, P.St, P.Tm, P.flA, P.ldA, P.flN
, P.lv, P.bd, P.bt, P.nb
, P.ak * E.usD as 'usP'
FROM PIG P
INNER JOIN EEL E
ON E.cur = P.cur
WHERE act='1'
AND flA >= '1615'
AND ldA >= '0'
AND yr >= (YEAR(NOW()) - 100)
AND lv >= '0'
AND bd >= '3'
AND bt >= '2'
AND nb <= '5'
AND cDate >= NOW()
AND MBRContains(LineString( Point(39.9097, -2.1973)
, Point(65.5130, 41.7480)
), latlng)
AND Type = 'g'
AND tn = 'l'
AND St + Tm - YEAR(NOW()) >= '30'
HAVING usP BETWEEN 300/2 AND 300
ORDER BY ak
LIMIT 100;
Using an index (Type, tn, act, flA), I am able to obtain results within 800 ms. In QUERY 2, I changed the ORDER BY clause to lv and obtained results in similar time. In QUERY 3, I changed the ORDER BY clause to ID, and the query time slowed dramatically to a full 20 s, averaged over 10 trials.
Running the EXPLAIN SELECT statement produces exactly the same query execution plan:
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: P
type: range
possible_keys: Index
key: Index
key_len: 6
ref: NULL
rows: 132478
Extra: Using where; Using filesort
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: E
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 3
ref: BS.P.cur
rows: 1
Extra:
My question is: why does ordering by ID in QUERY 3 runs so slow compared to the rest?
The partial table definition is as such:
CREATE TABLE `PIG` (
`ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
`lv` smallint(3) unsigned NOT NULL DEFAULT '0',
`ak` int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`ID`),
KEY `id_ca` (`cty`,`ak`),
KEY `Index` (`Type`, `tn`, `act`, `flA`)
) ENGINE=MyISAM AUTO_INCREMENT=5000001 DEFAULT CHARSET=latin1
CREATE TABLE `EEL` (
`cur` char(3) NOT NULL,
`usD` decimal(11,10) NOT NULL,
PRIMARY KEY (`cur`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
UPDATE: After extensive testing of various ORDER BY options, I have confirmed that the ID column, which happens to be the primary key, is the only one causing the slow query time.
From MySQL documentation at http://dev.mysql.com/doc/refman/5.6/en/order-by-optimization.html
In some cases, MySQL cannot use indexes to resolve the ORDER BY, although it still uses indexes to find the rows that match the WHERE clause. These cases include the following:
. . .
The key used to fetch the rows is not the same as the one used in the ORDER BY:
`SELECT * FROM t1 WHERE key2=constant ORDER BY key1;`
This probably won't help, but what happens if you add AND ID > 0 to the WHERE clause? Would that cause MySQL to use the primary key for sorting? Worth a try, I suppose.
(It seems odd that ordering by ak is efficient, since ak does not even have its own index, but that may be because it has fewer distinct values.)
If the column used in the ORDER BY differs from the ones filtered in the WHERE clause and they are not part of one composite index, then the sorting does not take place in the storage engine but at the MySQL server level, which is much slower. Long story short: you must rearrange your indexes to satisfy both the row filtering and the sorting.
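Using the placeholder names from the documentation snippet quoted above (t1, key1, key2 are generic), the fix this implies is a composite index that serves both the filter and the sort:

```sql
-- The docs' problem case: rows fetched via key2, sorted by key1 -> filesort.
-- A composite index delivers rows already in key1 order within each key2 value:
ALTER TABLE t1 ADD INDEX key2_key1 (key2, key1);

SELECT * FROM t1
WHERE key2 = 42      -- equality on the leading column
ORDER BY key1;       -- can now be read straight from the index, no filesort
```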
You can use FORCE INDEX (PRIMARY).
Try it, and you will see in the EXPLAIN output that MySQL now uses the primary key index for the ORDER BY.
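Applied to the query in the question, the hint would look roughly like this (a sketch, abbreviated to the relevant clauses — note that forcing PRIMARY makes MySQL scan in ID order and stop after the LIMIT, but it may read many non-matching rows before finding 100 matches):

```sql
SELECT P.ID, P.Type, P.Name, P.cty
FROM PIG P FORCE INDEX (PRIMARY)
INNER JOIN EEL E ON E.cur = P.cur
WHERE P.act = '1'
  AND P.Type = 'g'   -- ...plus the rest of the original filters
ORDER BY P.ID
LIMIT 100;
```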