What's wrong with this query? EXPLAIN looks fine to me - mysql

I'm going through an application and trying to optimize some queries and I'm really struggling with a few of them. Here's an example:
SELECT `Item` . * , `Source` . * , `Keyword` . * , `Author` . *
FROM `items` AS `Item`
JOIN `sources` AS `Source` ON ( `Item`.`source_id` = `Source`.`id` )
JOIN `authors` AS `Author` ON ( `Item`.`author_id` = `Author`.`id` )
JOIN `items_keywords` AS `ItemsKeyword` ON ( `Item`.`id` = `ItemsKeyword`.`item_id` )
JOIN `keywords` AS `Keyword` ON ( `Keyword`.`id` = `ItemsKeyword`.`keyword_id` )
JOIN `keywords_profiles` AS `KeywordsProfile` ON ( `Keyword`.`id` = `KeywordsProfile`.`keyword_id` )
JOIN `profiles` AS `Profile` ON ( `Profile`.`id` = `KeywordsProfile`.`profile_id` )
WHERE `KeywordsProfile`.`profile_id` IN ( 17 )
GROUP BY `Item`.`id`
ORDER BY `Item`.`timestamp` DESC , `Item`.`id` DESC
LIMIT 0 , 20;
This one is taking 10-30 seconds...in the tables referenced, there are about 500k author rows, and about 750k items and items_keywords rows. Everything else is less than 500 rows.
Here's the explain output:
http://img.skitch.com/20090220-fb52wd7jf58x41ikfxaws96xjn.jpg
EXPLAIN is relatively new to me, but I went through this line by line and it all seems fine. Not sure what else I can do, as I've got indexes on everything...what am I missing?
The server this sits on is just a 256 slice over at Slicehost, but there's nothing else running on it and the CPU is at 0% before it's run. And yet it still cranks away on this query. Any ideas?
EDIT: Some further info; one of the things that makes this really frustrating is that if I repeatedly run this query, it takes less than .1 seconds. I'm assuming this is due to the query cache, but if I run RESET QUERY CACHE before it, it still runs extremely quickly. It's only after I wait a little while or run some other queries that the 10-30 second times return. All the tables are MyISAM...does this indicate that MySQL is loading stuff into memory and that's why it runs so much faster for a while?
EDIT 2: Thanks so much to everyone for your help...an update...I cut everything down to this:
SELECT i.id
FROM items AS i
ORDER BY i.timestamp DESC, i.id DESC
LIMIT 0, 20;
Consistently took 5-6 seconds, despite there only being 750k records in the DB. Once I dropped the 2nd column on the ORDER BY clause, it was pretty much instant. There's obviously several things going on here, but when I cut the query down to this:
SELECT i.id
FROM items AS i
JOIN items_keywords AS ik ON ( i.id = ik.item_id )
JOIN keywords AS k ON ( k.id = ik.keyword_id )
JOIN keywords_profiles AS kp ON ( k.id = kp.keyword_id )
WHERE kp.profile_id IN (139)
ORDER BY i.timestamp DESC
LIMIT 20;
It's still taking 10+ seconds...what else can I do?
Minor curiosity: on the explain, the rows column for items_keywords is always 1544, regardless of what profile_id I'm using in the query. Shouldn't it change depending on the number of items associated with that profile?
EDIT 3: Ok, this is getting ridiculous :). If I drop the ORDER BY clause entirely, things are very speedy and the temp table / filesort disappears from explain. There's currently an index on the item.timestamp column, but is it not being used for some reason? I thought I remembered something about MySQL only using one index per table or something? Should I create a multi-column index over all the columns on the items table that this query references (source_id, author_id, timestamp, etc)?

Try this and see how it does:
SELECT i.*, s.*, k.*, a.*
FROM items AS i
JOIN sources AS s ON (i.source_id = s.id)
JOIN authors AS a ON (i.author_id = a.id)
JOIN items_keywords AS ik ON (i.id = ik.item_id)
JOIN keywords AS k ON (k.id = ik.keyword_id)
WHERE k.id IN (SELECT kp.keyword_id
FROM keywords_profiles AS kp
WHERE kp.profile_id IN (17))
ORDER BY i.timestamp DESC, i.id DESC
LIMIT 0, 20;
I factored out a couple of the joins into a non-correlated subquery, so you wouldn't have to do a GROUP BY to map the result to distinct rows.
Actually, you may still get multiple rows per i.id in my example, depending on how many keywords map to a given item and also to profile_id 17.
The filesort reported in your EXPLAIN report is probably due to the combination of GROUP BY and ORDER BY using different fields.
I agree with #ʞɔıu's answer that the speedup is probably because of key caching.

It looks okay, every row in the explain is using an index of some sort. One possible worry is the filesort bit. Try running the query without the order by clause and see if that improves it.
Then, what I would do is gradually take out each join until you (hopefully) get that massive speed increase, then concentrate on why that's happening.
The reason I mention the filesort is because I can't see a mention of timestamp anywhere in the explain output (even though it's your primary sort criterion) - it might be requiring a full non-indexed sort.
UPDATE#1:
Based on edit#2, the query:
SELECT i.id
FROM items AS i
ORDER BY i.timestamp DESC, i.id DESC
LIMIT 0, 20;
takes 5-6 seconds. That's abhorrent. Try creating a composite index on both TIMESTAMP and ID and see if that improves it:
create index timestamp_id on items(timestamp,id);
select id from items order by timestamp desc,id desc limit 0,20;
select id from items order by timestamp,id limit 0,20;
select id from items order by timestamp desc,id desc;
select id from items order by timestamp,id;
On one of the tests, I've left off the descending bit (DB2 for one sometimes doesn't use indexes if they're in the opposite order). The other variation is to take off the limit in case that's affecting it.

For your query to run fast, you need:
Create an index: CREATE INDEX ix_timestamp_id ON items (timestamp, id)
Ensure that the ids on sources, authors and keywords are primary keys.
Force MySQL to use this index for items, and perform NESTED LOOP joins for the other tables:
EXPLAIN EXTENDED
SELECT Item.*, Source.*, Keyword.*, Author.*
FROM items AS Item FORCE INDEX FOR ORDER BY (ix_timestamp_id)
JOIN items_keywords AS ItemsKeyword FORCE INDEX (ix_item_keyword)
    ON ( Item.id = ItemsKeyword.item_id
         AND ItemsKeyword.keyword_id IN
         (
             SELECT keyword_id
             FROM keywords_profiles AS KeywordsProfile FORCE INDEX (ix_keyword_profile)
             WHERE KeywordsProfile.profile_id = 17
         )
       )
JOIN sources AS Source FORCE INDEX (PRIMARY) ON ( Item.source_id = Source.id )
JOIN authors AS Author FORCE INDEX (PRIMARY) ON ( Item.author_id = Author.id )
JOIN keywords AS Keyword FORCE INDEX (PRIMARY) ON ( Keyword.id = ItemsKeyword.keyword_id )
ORDER BY Item.timestamp DESC, Item.id DESC
LIMIT 0, 20
As you can see, we get rid of GROUP BY, push the subquery into the JOIN condition and force PRIMARY KEYs to be used for joins.
That's how we ensure that NESTED LOOPS with items as the leading table will be used for all joins.
As a result:
1, 'PRIMARY', 'Item', 'index', '', 'ix_timestamp_id', '12', '', 20, 2622845.00, ''
1, 'PRIMARY', 'Author', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'test.Item.author_id', 1, 100.00, ''
1, 'PRIMARY', 'Source', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'test.Item.source_id', 1, 100.00, ''
1, 'PRIMARY', 'ItemsKeyword', 'ref', 'PRIMARY', 'PRIMARY', '4', 'test.Item.id', 1, 100.00, 'Using where; Using index'
1, 'PRIMARY', 'Keyword', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'test.ItemsKeyword.keyword_id', 1, 100.00, ''
2, 'DEPENDENT SUBQUERY', 'KeywordsProfile', 'unique_subquery', 'PRIMARY', 'PRIMARY', '8', 'func,const', 1, 100.00, 'Using index; Using where'
and when we run this, we get:
20 rows fetched in 0,0038s (0,0019s)
There are 500k in items, 600k in items_keywords, 512 values in keywords and 512 values in keywords_profiles (all with profile 17).

I would suggest you run a profiler on the query; then you can see how long each subquery took and where the time is being consumed. If you have phpMyAdmin, it's a simple checkbox you need to check to get this functionality, but my guess is you can get it manually from the mysql terminal app as well. I haven't seen this EXPLAIN thing before; if it is in fact the profiling I am used to in phpMyAdmin, I apologize for the nonsense.
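For reference, a minimal sketch of doing that from the mysql terminal with the built-in profiler (available if your MySQL build supports SHOW PROFILE), using the cut-down query from EDIT 2:
SET profiling = 1;
SELECT i.id FROM items AS i ORDER BY i.timestamp DESC, i.id DESC LIMIT 0, 20;
SHOW PROFILES;               -- lists recent statements with their total durations
SHOW PROFILE FOR QUERY 1;    -- per-stage breakdown (Sorting result, Sending data, ...)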

What is the GROUP BY clause achieving? There are no aggregate functions in the SELECT, so the GROUP BY should be unnecessary.

Some things to try:
Try not selecting all columns from all tables; select only what you need. Pulling every column can preclude the use of covering indexes (look for "Using index" in the Extra column) and in general will soak up a lot of needless IO.
That filesort looks a little troubling. Try removing the ORDER BY and replacing it with ORDER BY NULL -- GROUP BY implicitly sorts in MySQL, so you have to ORDER BY NULL to remove that implicit sort.
Try adding an index on items (timestamp, id) or (id, timestamp). It might do something about that filesort (you never know). See the sketch after this list.
Why are you grouping by item id when you're not selecting any aggregate columns? If you group by a column and then select (much less order by) other non-aggregate columns, the values of those columns will be chosen more or less arbitrarily. Unless item id is always unique for this query, in which case the GROUP BY will not accomplish anything.
Lastly, in my experience, MySQL sometimes will just inexplicably freak out if you give it too many joins to try to optimize. Try to figure out if there's some way you don't have to do so many joins all at once, i.e. split it up into multiple queries if you can.
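A minimal sketch of the ORDER BY NULL and composite-index suggestions above, using the tables from the question (the index name is made up):
ALTER TABLE items ADD INDEX idx_items_ts_id (timestamp, id);   -- hypothetical name

SELECT Item.id
FROM items AS Item
JOIN items_keywords AS ItemsKeyword ON (Item.id = ItemsKeyword.item_id)
GROUP BY Item.id
ORDER BY NULL      -- suppresses the implicit sort that GROUP BY would otherwise do
LIMIT 20;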
"one of the things that makes this really frustrating is that if I repeatedly run this query, it takes less than .1 seconds. I'm assuming this is due to the query cache" -- add SQL_NO_CACHE after the SELECT keyword to disable the use of the query cache for this query.
"All the tables are MyISAM...does this indicate that MySQL is loading stuff into memory and that's why it runs so much faster for a while?" -- MyISAM uses a key buffer and caches only index data in memory; it relies on the OS to (hopefully) cache non-index data. Unlike InnoDB, which caches everything in the buffer pool.
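A quick sketch of both checks, assuming nothing about your current settings; SQL_NO_CACHE bypasses the query cache for one statement, and the Key_% counters show how well the MyISAM key buffer is doing:
SELECT SQL_NO_CACHE i.id
FROM items AS i
ORDER BY i.timestamp DESC, i.id DESC
LIMIT 0, 20;

SHOW VARIABLES LIKE 'key_buffer_size';
SHOW GLOBAL STATUS LIKE 'Key_read%';   -- compare Key_reads (disk) with Key_read_requests (total)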

Is it possible you're having issues because of filesystem I/O? The EXPLAIN shows that 1544 rows have to be fetched from the ItemsKeyword table. If you have to go to disk for each of those, you'll add about 10-15 seconds total to the run time (assuming a high-ish seek time because you're on a VM). Normally the tables are cached in RAM, or the data is stored close enough together on disk that reads can be combined. However, you're running on a VM with 256MB of RAM, so you may have no spare memory to cache into, and if your table file is fragmented enough, query performance could degrade this much.
You could probably get some idea of what's happening with I/O during the query by running something like pidstat -d 1 or iostat 1 in another shell on the server.
EDIT:
From looking at the query, adding an index on (ItemsKeyword.item_id, ItemsKeyword.keyword_id) should fix it, if my theory is right about the problem being the seeks on the ItemsKeyword table.
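A sketch of that index (the name is made up); with both columns in it, the lookups on items_keywords can be satisfied from the index alone instead of seeking into the table rows:
ALTER TABLE items_keywords
  ADD INDEX idx_item_keyword (item_id, keyword_id);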

MySQL loads a lot into different caches, including indexes and queries. In addition, your operating system will keep a file system cache that could speed up your query when executed repeatedly.
One thing to consider is how MySQL creates temporary tables during this type of query. As you can see in your EXPLAIN, a temporary table is being created, probably for sorting the results. Generally, MySQL will create these temporary tables in memory, except under two conditions. The first is if they exceed the maximum size set in the MySQL settings (max temp table size or heap size - check mysqlperformanceblogs.com for more info on these settings). The second and more important one is this:
Temporary tables will always be created on disk when TEXT or BLOB columns are selected in the query.
This can create a major performance hit, and even lead to an i/o bottleneck if your server is getting any amount of action.
Check to see if any of your columns are of these data types. If they are, you can try to rewrite the query so that a temporary table is not created (GROUP BY always causes one, I think), or avoid selecting those columns. Another strategy would be to break this up into several smaller queries that might execute in a fraction of the time.
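A sketch of how you might check the settings and column types mentioned above; these are all standard MySQL statements:
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';
SHOW GLOBAL STATUS LIKE 'Created_tmp%';   -- compare Created_tmp_disk_tables with Created_tmp_tables
SHOW COLUMNS FROM items;                  -- look for TEXT/BLOB columns that end up in the SELECT list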
Good luck!

I may be completely wrong but what happens when you change
WHERE kp.profile_id IN (139)
to
WHERE kp.profile_id = 139

Try this:
SELECT i.id
FROM ((items AS i
INNER JOIN items_keywords AS ik ON ( i.id = ik.item_id ))
INNER JOIN keywords AS k ON ( k.id = ik.keyword_id ))
INNER JOIN keywords_profiles AS kp ON ( k.id = kp.keyword_id AND kp.profile_id = 139)
ORDER BY i.timestamp DESC
LIMIT 20;

Looking at the pastie.org link in the comments to the question:
you're joining items.source_id int(4) to sources.id int(16)
also items.id int(16) to itemskeywords.item_id int(11)
I can't see any good reason for the two fields to have different widths in these cases
I realise that these are just display widths and that the actual range of numbers which the column can store is determined solely by the INT part but the MySQL 6.0 reference manual says:
"Note that if you store larger values than the display width in an integer column, you may experience problems when MySQL generates temporary tables for some complicated joins, because in these cases MySQL assumes that the data fits into the original column width."
From the rough figures you quoted, it doesn't look as though you are exceeding the display width on any of the ID columns. You may as well tidy up these inconsistencies though just to eliminate another possible bug.
You might as well remove the display widths altogether if you don't have a need for them.
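A sketch of that tidy-up on two of the columns mentioned, assuming plain signed INTs (copy the real NULL-ability and other attributes from your actual definitions before running anything like this):
ALTER TABLE items          MODIFY source_id INT NOT NULL;   -- was int(4); plain INT drops the display width
ALTER TABLE items_keywords MODIFY item_id   INT NOT NULL;   -- was int(11)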
edit:
I would hazard a guess that the original author of the database perhaps thought that int(4) meant "an integer with up to 4 digits" whereas it actually means "an integer between -2147483648 and 2147483647, displayed with at least 4 characters, left-padded with spaces if need be".
Definitions like authors.refreshed int(20) or items.timestamp int(30) don't really make sense as there can only be 10 digits plus the sign in an int. Even a bigint can't exceed 20 characters. Perhaps the original author thought that int(4) was analogous to varchar(4)?

Try making a backup copy of your tables. After that, rename the original tables to something else, rename the new tables to the original names, and try again with your new-but-old-named tables...
Or you can try to repair the tables, but this doesn't always help.
Edit: Man, this was an old question...

The problem appears to be that it has to do full joins across every single table before it even applies the WHERE clause. At 500k rows per table, you're looking at millions of rows being populated in memory. I would try changing the JOINs to LEFT JOINs with USING ().

Related

MySQL Slow query ~ 10 seconds

I have this query which basically goes through a bunch of tables to get me some formatted results, but I can't seem to find the bottleneck. The most obvious bottleneck was the ORDER BY RAND(), but the performance is still bad.
The query takes from 10 to 20 seconds without ORDER BY RAND():
SELECT
c.prix AS prix,
ST_X(a.point) AS X,
ST_Y(a.point) AS Y,
s.sizeFormat AS size,
es.name AS estateSize,
c.title AS title,
DATE_FORMAT(c.datePub, '%m-%d-%y') AS datePub,
dbr.name AS dateBuiltRange,
m.myId AS meuble,
c.rawData_id AS rawData_id,
GROUP_CONCAT(img.captionWebPath) AS paths
FROM
immobilier_ad_blank AS c
LEFT JOIN PropertyFeature AS pf ON (c.propertyFeature_id = pf.id)
LEFT JOIN Adresse AS a ON (c.adresse_id = a.id)
LEFT JOIN Size AS s ON (pf.size_id = s.id)
LEFT JOIN EstateSize AS es ON (pf.estateSize_id = es.id)
LEFT JOIN Meuble AS m ON (pf.meuble_id = m.id)
LEFT JOIN DateBuiltRange AS dbr ON (pf.dateBuiltRange_id = dbr.id)
LEFT JOIN ImageAd AS img ON (img.commonAd_id = c.rawData_id)
WHERE
c.prix != 0
AND pf.subCatMyId = 1
AND (
(
c.datePub > STR_TO_DATE('01-04-2016', '%d-%m-%Y')
AND c.datePub < STR_TO_DATE('30-04-2016', '%d-%m-%Y')
)
OR date_format(c.datePub, '%d-%m-%Y') = '30-04-2016'
)
AND a.validPoint = 1
GROUP BY
c.id
#ORDER BY
# RAND()
LIMIT
5000
Here is the explain query:
Visual Portion:
And here is a screenshot of mysqltuner
EDIT 1
I have many indexes. Here they are:
EDIT 2:
So you guys did it. It's down to between .5 and 2.5 seconds.
I mostly followed all of your advice, changed some settings in my.cnf, and ran OPTIMIZE on my tables.
You're searching for dates in a very suboptimal way. Try this.
... c.datePub >= STR_TO_DATE('01-04-2016', '%d-%m-%Y')
AND c.datePub < STR_TO_DATE('30-04-2016', '%d-%m-%Y') + INTERVAL 1 DAY
That allows a range scan on an index on the datePub column. You should create a compound index for that table on (datePub, prix, adresse_id, rawData_id) and see if it helps.
Also try an index on a (validPoint). Notice that your use of a geometry data type in that table is probably not helping anything.
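A sketch of those two indexes, with made-up names and the column spelled adresse_id as it appears in the query:
ALTER TABLE immobilier_ad_blank
  ADD INDEX idx_datepub_prix (datePub, prix, adresse_id, rawData_id);

ALTER TABLE Adresse
  ADD INDEX idx_validpoint (validPoint);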
To begin with, you have quite a lot of indexes, but many of them are not useful. Remember that more indexes mean slower INSERTs and UPDATEs. Also, MySQL is not good at using more than one index per table in complex queries. The following indexes have a cardinality < 10 and should probably be dropped:
IDX_...E88B
IDX....62AF
IDX....7DEE
idx2
UNIQ...F210
UNIQ...F210..
IDX....0C00
IDX....A2F1
At this point I got tired of the exercise; there are many more.
Then you have some duplicated data.
point
lat
lng
The point field has the lat and lng in it, so the latter two are not needed. That means you can lose two more indexes, idxlat and idxlng. I am not quite sure how idxlng appears twice in the index list for the same table.
These optimizations will lead to an overall increase in performance for INSERTS and UPDATES and possibly for all SELECTs as well because the query planner needs to spend less time deciding which index to use.
Then we notice from your EXPLAIN that the query does not use any index on the table Adresse (a). But your WHERE clause has a.validPoint = 1, so clearly you need an index on it, as suggested by #Ollie-Jones.
However, I suspect that this index may have low cardinality. In that case I recommend that you create a composite index on this column plus another.
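A sketch of that cleanup; idxlat and idxlng are the index names mentioned above, and the second column of the composite index is only a guess (id is what the query joins on), so adjust it to match your workload:
ALTER TABLE Adresse
  DROP INDEX idxlat,
  DROP INDEX idxlng,
  ADD INDEX idx_validpoint_id (validPoint, id);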
The problem is your join with (a). The table has an index, but the index can't be used, more than likely due to the sort (/group by), or possibly incompatible types. The EXPLAIN shows three quarters of a million rows examined, this means that index lookup was not possible.
When designing a query, look for the smallest possible result set - search by that index, and then join from there. Perhaps "c" isn't the best table for the primary query.
(You could try using FORCE INDEX (id) on table a, if it doesn't work, the error may give you more information).
As others have pointed out, you need an index on a.validPoint, but what about c.datePub, which is also used in the WHERE clause? Why not a multiple-column index on (datePub, adresse_id)? The index on adresse_id is already used, so a multiple-column index will be better here.

MySQL Query Extremely Slow any Suggestions?

I have this MySQL query which seems to be very, very slow. It takes 3 seconds to run. It's trying to get all the posts from either the people the user is following or any interests they have. It's also trying to make sure it doesn't show any duplicate shares that match any post_id. What do you guys think I should do?
SELECT p.*,
IFNULL(post_data_share, UUID()) AS unq_share,
UNIX_TIMESTAMP(p.post_time) AS a
FROM posts p
LEFT JOIN users_interests i ON (i.user_id=1
AND p.post_interest = i.interest)
LEFT JOIN following f ON (f.user_id=1
AND p.post_user_id = f.follower_id)
WHERE (post_user_id=1
OR f.follower_id IS NOT NULL
OR i.interest IS NOT NULL)
AND (POST_DATA_SHARE NOT IN
(SELECT POST_ID
FROM posts p
LEFT JOIN following f ON f.user_id=1
AND p.post_user_id = f.follower_id
LEFT JOIN users_interests i ON (i.user_id=1
AND p.post_interest = i.interest)
WHERE (post_user_id=1
OR f.follower_id IS NOT NULL
OR i.interest IS NOT NULL))
OR POST_DATA_SHARE IS NULL)
GROUP BY unq_share
ORDER BY `post_id` DESC LIMIT 10;
Below are some performance tips that will definitely make a difference.
Try the EXPLAIN statement.
Normalize your tables, and add primary and foreign keys.
Add indexes for repeated values.
Avoid SELECT * FROM table; specify only the column names you need.
Convert IS NULL to (= '').
Convert IS NOT NULL to (!= '').
Avoid too many OR conditions.
MySQL Configurations to explore
key_buffer_size
innodb_buffer_pool_size
query_cache_size
thread_cache
For much more, refer to this SO answer: Best my.cnf configuration for a 8GB MySQL server with MyISAM use only
I would start by looking at the execution plan for the query. Here is a link to MySQL documentation on the EXPLAIN keyword to show you how the optimizer is structuring your query: http://dev.mysql.com/doc/refman/5.5/en/using-explain.html
If CPU usage is low, likely the bottleneck is disk access for large table scans.
The way the query is executed is often different from how it was written. Once you see how the execution plan is structured, you are probably going to create indexes on the largest joins. Every table should have one clustered index (often it is created by default), but other fields can often benefit from unclustered indexes.
If the problem is extremely bad and this is vital to your application, you may want to consider reorganizing the database.
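As a concrete starting point, a sketch of indexes on the join and filter columns used in the query above (all index names are made up, and some of these may already exist in your schema):
ALTER TABLE users_interests ADD INDEX idx_ui_user_interest (user_id, interest);
ALTER TABLE following       ADD INDEX idx_f_user_follower  (user_id, follower_id);
ALTER TABLE posts           ADD INDEX idx_p_post_user      (post_user_id);
-- then re-run EXPLAIN on the original query to see which of these the optimizer picks up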

Why does the query take a long time in mysql even with a LIMIT clause?

Say I have an Order table that has 100+ columns and 1 million rows. It has a PK on OrderID and FK constraint StoreID --> Store.StoreID.
1) select * from `Order` order by OrderID desc limit 10;
the above takes a few milliseconds.
2) select * from `Order` o join `Store` s on s.StoreID = o.StoreID order by OrderID desc limit 10;
this somehow can take up to many seconds. The more inner joins I add, slows it down further more.
3) select OrderID, column1 from `Order` o join `Store` s on s.StoreID = o.StoreID order by OrderID desc limit 10;
this seems to speed the execution up, by limiting the columns we select.
There are a few points that I don't understand here and would really appreciate it if anyone more knowledgeable with MySQL (or RDBMS query execution in general) can enlighten me.
Query 1 is fast since it's just a reverse lookup by PK and DB only needs to return the first 10 rows it encountered.
I don't see why Query 2 should take forever. Shouldn't the operation be the same, i.e. get the first 10 rows by PK and then join with the other tables? Since there's a FK constraint, it is guaranteed that the relationship will be satisfied, so the DB doesn't need to join more rows than necessary and then trim the result, right? Unless the FK constraint allows NULL FKs, in which case I guess a left join would make this much faster than an inner join?
Lastly, I'm guessing query 3 is simply faster because fewer columns are used in those unnecessary joins? But why would the query execution need the other columns while joining? Shouldn't it just join using PKs first, and then fetch the columns for just the 10 rows?
Thanks!
My understanding is that the MySQL engine applies LIMIT after any joins happen.
From http://dev.mysql.com/doc/refman/5.0/en/select.html, The HAVING clause is applied nearly last, just before items are sent to the client, with no optimization. (LIMIT is applied after HAVING.)
EDIT: You could try using this query to take advantage of the PK speed.
select * from (select * from `Order` order by OrderID desc limit 10) o
join `Store` s on s.StoreID = o.StoreID;
All of your examples are asking for table scans of the existing tables, so none of them will be more or less performant beyond the degree to which MySQL can cache the data or results. Some of your queries have ORDER BY or JOIN criteria, which can take advantage of indexes purely to make the joining process more efficient; however, that still is not the same as having a set of criteria that will trigger the use of indexes.
LIMIT is not a criterion -- it can be thought of as filtration once a result set is determined. You save time on the client once the result set is prepared, but not on the server.
Really, the only way to get the answers you are seeking is to become familiar with:
EXPLAIN EXTENDED your_sql_statement
The output of EXPLAIN will show you how many rows are being looked at by mysql, as well as whether or not any indexes are being used.
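For example, a sketch against the second query from the question; after EXPLAIN EXTENDED, SHOW WARNINGS prints the statement as the optimizer rewrote it:
EXPLAIN EXTENDED
SELECT o.*, s.*
FROM `Order` AS o
JOIN `Store` AS s ON s.StoreID = o.StoreID
ORDER BY o.OrderID DESC
LIMIT 10;

SHOW WARNINGS;   -- shows the optimizer's rewritten form of the statement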

MySQL indexes on BINARY(16). What size?

I have a table with over 6.6 million rows.
It has a field named trip_id, which is BINARY(16). I find my query too slow (0.2 seconds), and it runs about once every 3 seconds.
Before doing anything stupid, I want to know: if I lower the index size on trip_id from full to 12 (see the sketch after the schema below), would it make a difference?
If I try to tweak my query more, would it make a difference ?
Thanks
EDIT:
Query :
SELECT stop_times.stop_id
FROM trips
LEFT JOIN stop_times ON trips.trip_id = stop_times.trip_id
WHERE trips.route_id = '141'
GROUP BY stop_times.stop_id
ORDER BY trips.trip_headsign ASC,
stop_times.stop_sequence ASC
trip_id BINARY(16)
route_id SMALLINT(3)
trip_headsign VARCHAR(50)
stop_sequence SMALLINT(3)
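For reference, a sketch of what "lowering the index size from full to 12" would look like as a prefix index (shown on stop_times as an example; the index name is made up, and prefix lengths are allowed on BINARY columns):
ALTER TABLE stop_times ADD INDEX idx_trip_prefix (trip_id(12));   -- indexes only the first 12 of the 16 bytes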
Explain of the query :
After doing some research, I've found the problem, because yes, 0.2 seconds is slow.
SELECT t.trip_headsign, st.stop_sequence, s.stop_code, s.stop_name
FROM stop_times AS st
JOIN stops AS s USING (stop_id)
JOIN ( SELECT trip_id,
route_id,
trip_headsign
FROM trips
WHERE route_id = '141'
LIMIT 2
) AS t
WHERE t.trip_id = st.trip_id
GROUP BY st.stop_id
First, instead of doing a LEFT JOIN, a plain JOIN is faster here. But the important point is that I was matching all results from trips in the WHERE statement.
However, since a bus can only have 2 directions, I only have to limit my results to 2. Now my results come back in about 0.018 seconds. Over 1000% improvement.
You've got 'Using temporary' and 'Using filesort' in your 'Extra' column.
These are surefire signs that you could improve things. The reason that these are showing up is because of your GROUP and ORDER clauses.
First step: are they truly necessary? You may find that, end to end, it's cheaper to sort them with the language that consumes this data.
Second step: if you still need ORDER BY, then take a look at ORDER BY Optimization in the MySQL docs. The reason that an index is not used for sorting here is the differing GROUP BY and ORDER BY clauses.
Think outside of the box. You're not doing any aggregation, so maybe grouping isn't necessary. Maybe just pull all of the rows and then ignore the duplicated ids.
Try adding trip_headsign to your "route" index. Because you are using that in the ORDER BY, mysql needs to go to the actual table to fetch it for every record it finds in the index that matches the route_id. If you don't see "Using index" in the Extra column of the explain, that means MySQL is forced to go back to the actual table to get additional information.
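A sketch of that change, assuming the existing index on trips is the one called "route" and currently covers only route_id:
ALTER TABLE trips
  DROP INDEX route,
  ADD INDEX route (route_id, trip_headsign);   -- lets MySQL read trip_headsign from the index ("Using index")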

MySQL performance, inner join, how to avoid Using temporary and filesort

I have a table 1 and table 2.
Table 1
PARTNUM - ID_BRAND
partnum is the primary key
id_brand is "indexed"
Table 2
ID_BRAND - BRAND_NAME
id_brand is the primary key
brand_name is "indexed"
Table 1 contains 1 million records and table 2 contains 1,000 records.
I'm trying to optimize some queries using EXPLAIN, and after a lot of trying I have reached a dead end.
EXPLAIN
SELECT pm.partnum, pb.brand_name
FROM products_main AS pm
LEFT JOIN products_brands AS pb ON pm.id_brand=pb.id_brand
ORDER BY pb.brand ASC
LIMIT 0, 10
The query returns this execution plan:
ID, SELECT_TYPE, TABLE, TYPE, POSSIBLE_KEYS, KEY, KEY_LEN , REF, ROWS, EXTRA
1, SIMPLE, pm, range, PRIMARY, PRIMARY, 1, , 1000000, Using where; Using temporary; Using filesort
1, SIMPLE, pb, ref, PRIMARY, PRIMARY, 4, demo.pm.id_pbrand, 1,
The MySQL query optimizer shows a temporary + filesort in the execution plan.
How can I avoid this?
The "EVIL" is in the ORDER BY pb.brand ASC. Ordering by that external field seems to be the bottleneck..
First of all, I question the use of an outer join, seeing as the ORDER BY is operating on the right-hand side, and the NULLs injected by the LEFT JOIN are likely to play havoc with it.
Regardless, the simplest approach to speeding up this query would be a covering index on pb.id_brand and pb.brand. This will allow the order by to be evaluated 'using index' with the join condition. The alternative is to find some way to reduce the size of the intermediate result passed to the order-by.
Still, the combination of outer-join, order-by, and limit, leaves me wondering what exactly you are querying for, and if there might not be a better way of expressing the query itself.
Try replacing the join with a subquery. MySQL's optimizer kind of sucks; subqueries often give better performance than joins.
First, try changing your index on the products_brands table. Delete the existing one on brand_name, and create a new one:
ALTER TABLE products_brands ADD INDEX newIdx (brand_name, id_brand)
Then, the table will already have a "orderedByBrandName" index with the ids you need for the join, and you can try:
EXPLAIN
SELECT pb.brand_name, pm.partnum
FROM products_brands AS pb
LEFT JOIN products_main AS pm ON pb.id_brand = pm.id_brand
LIMIT 0, 10
Note that I also changed the order of the tables in the query, so you start with the small one.
This question is somewhat outdated, but I did find it, and so will other people.
MySQL uses a temporary table if the ORDER BY or GROUP BY contains columns from tables other than the first table in the join queue.
So you just need to have the join order reversed by using STRAIGHT_JOIN, to bypass the join order chosen by the optimizer:
SELECT STRAIGHT_JOIN pm.partnum, pb.brand_name
FROM products_brands AS pb
RIGHT JOIN products_main AS pm ON pm.id_brand=pb.id_brand
ORDER BY pb.brand ASC
LIMIT 0, 10
Also make sure that max_heap_table_size AND tmp_table_size variables are set to a number big enough to store the results:
SET global tmp_table_size=100000000;
SET global max_heap_table_size=100000000;
-- 100 megabytes in this example. These can be set in my.cnf config file, too.