I have a music database with a table for releases and a table for release titles. The view "releases_view" returns the title/title_id and the alternative title/alternative title_id of a release. This is the code of the view:
SELECT
t1.`title` AS title,
t1.`id` AS title_id,
t2.`title` AS title_alt,
t2.`id` AS title_alt_id
FROM
releases
LEFT JOIN titles t1 ON t1.`id`=`releases`.`title_id`
LEFT JOIN titles t2 ON t2.`id`=`releases`.`title_alt_id`
The title_id and title_alt_id fields in the joined tables are both int(11), title and title_alt are varchars.
The issue
This query will take less than 1 ms:
SELECT * FROM `releases_view` WHERE title_id=12345
This query will take less than 1 ms, too:
SELECT * FROM `releases_view` WHERE title_id=12345 OR title_alt_id!=54321
BUT: This query will take 0.2 s. It's 200 times slower!
SELECT * FROM `releases_view` WHERE title_id=20956 OR title_alt_id=38849
As soon as I have two comparisons using "=" in the WHERE clause, things get really slow (although all queries return only a couple of rows).
Can you help me to understand what is going on?
EDIT
EXPLAIN shows "Using where" for the title_alt_id join, but I do not understand why. How can I avoid this?
EDIT 2
Here is the EXPLAIN DUMP.
id select_type table partitions type possible_keys key key_len ref rows Extra
1 SIMPLE releases NULL ALL NULL NULL NULL NULL 76802 Using temporary; Using filesort
1 SIMPLE t1 NULL eq_ref PRIMARY PRIMARY 4 db.releases.title_id 1
1 SIMPLE t2 NULL eq_ref PRIMARY PRIMARY 4 db.releases.title_alt_id 1 Using where
The query is "really slow" because the Optimizer does not work well with OR.
Plan A (of the Optimizer): Scan the entire table, evaluating the entire OR.
Plan B: "Index Merge Union" could be used for title_id = 20956 OR title_alt_id = 38849 if you have separate indexes on title_id and title_alt_id: use each index to get two lists of PRIMARY KEYs, "merge" the lists, then reach into the table to get *. Multiple steps, not cheap, so Plan B is rarely used.
title_id = 12345 OR title_alt_id != 54321 is a mystery, since it should return most of the table. Please provide EXPLAIN SELECT....
LEFT JOIN (as opposed to JOIN) needs to assume that the row may be missing in the 'right' table.
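A common workaround for the slow OR is to rewrite it as a UNION of two index-friendly queries, one per condition. Here is a minimal sketch of the idea; it uses Python's sqlite3 purely as a self-contained demo (the table and the data are simplified stand-ins for the question's schema), but the same UNION rewrite applies in MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE releases (id INTEGER PRIMARY KEY, title_id INT, title_alt_id INT);
CREATE INDEX idx_title ON releases(title_id);
CREATE INDEX idx_alt   ON releases(title_alt_id);
INSERT INTO releases VALUES (1, 20956, 1), (2, 5, 38849), (3, 20956, 38849), (4, 7, 8);
""")

# The OR form, which the optimizer may answer with a full table scan:
or_rows = conn.execute(
    "SELECT id FROM releases WHERE title_id = 20956 OR title_alt_id = 38849"
).fetchall()

# Equivalent UNION form: each branch can use its own single-column index,
# and UNION (not UNION ALL) removes the row that matches both branches.
union_rows = conn.execute("""
    SELECT id FROM releases WHERE title_id = 20956
    UNION
    SELECT id FROM releases WHERE title_alt_id = 38849
""").fetchall()

assert sorted(or_rows) == sorted(union_rows)  # same result set
```

This is essentially doing the "Index Merge Union" by hand, which is often faster than hoping the optimizer picks it.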
Related
I have a query that looks like this:
SELECT * FROM foo
LEFT JOIN bar
ON bar.id = foo.bar_id
WHERE bar.id IS NULL
LIMIT 30;
It's attempting to find entries in foo that have a bar_id that does not have a corresponding entry in bar.
There is an index on foo.bar_id and a unique (incremental) index on bar.id
The output of explain looks like this:
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 SIMPLE foo NULL ALL NULL NULL NULL NULL 4411867 100.00 NULL
1 SIMPLE bar NULL eq_ref PRIMARY PRIMARY 4 foo.bar_id 1 100.00 Using where; Not exists
(Sorry, I've struggled to get the formatting of that table to display nicely.)
This query sometimes takes 5+ minutes (although for some reason it sometimes takes 30 seconds shortly after running for 5 minutes).
Is there any way to improve it? Or is it just a hardware limitation?
In response to the comments, this version makes no noticeable improvement in terms of performance.
SELECT foo.* FROM foo
LEFT JOIN bar
ON bar.id = foo.bar_id
WHERE bar.id IS NULL
ORDER BY foo.id ASC
LIMIT 30;
There is a slight difference when selecting foo.* as opposed to *, but in my case I was selecting specific columns anyway; I just did not think to include them in the pseudo-example.
No
The query as written must check foo rows until it finds 30 missing rows from bar. If foo is big, and there aren't many missing rows, that could take a long time.
Maybe
You have SELECT *. See if it runs faster when it returns only foo.id. There is no need for the bar.* part of *; those columns will always be NULL. If id alone is not sufficient, you can use the ids to fetch the other columns afterwards.
Yes
If you are walking through the table picking a few (30) rows to work on and delete, this is a terribly inefficient way to do it: it is O(N^2). I have a much faster way (O(N)), which involves "remembering where you left off": http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks (That discussion is couched in terms of "deleting in chunks", but it can be adapted to other actions.)
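The "remembering where you left off" idea can be sketched as follows. This is an illustration, not the linked article's exact code; it uses Python's sqlite3 for a self-contained demo, and the tables are tiny stand-ins for foo/bar. Each pass scans only a bounded PRIMARY KEY range, so repeated passes never re-read rows already handled:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo (id INTEGER PRIMARY KEY, bar_id INT);
CREATE TABLE bar (id INTEGER PRIMARY KEY);
INSERT INTO bar VALUES (1), (2), (3);
-- foo rows 4 and 8 point at bar ids that do not exist
INSERT INTO foo VALUES (1,1),(2,2),(3,3),(4,99),(5,1),(6,2),(7,3),(8,98);
""")

def orphans_in_chunks(conn, chunk=3):
    """Walk foo in PRIMARY KEY ranges of `chunk` ids, remembering where
    we left off, and collect foo.ids whose bar_id has no bar row."""
    last_id = 0
    max_id = conn.execute("SELECT MAX(id) FROM foo").fetchone()[0]
    found = []
    while last_id < max_id:
        rows = conn.execute(
            """SELECT foo.id FROM foo
               LEFT JOIN bar ON bar.id = foo.bar_id
               WHERE foo.id > ? AND foo.id <= ? AND bar.id IS NULL""",
            (last_id, last_id + chunk),
        ).fetchall()
        found += [r[0] for r in rows]
        last_id += chunk  # remember where we left off
    return found

assert orphans_in_chunks(conn) == [4, 8]
```

Each chunk query is cheap because `foo.id > ? AND foo.id <= ?` is a range scan on the PRIMARY KEY, so total work is O(N) across all passes instead of O(N^2).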
So this might be a bit silly, but the alternative I was using is worse. I am trying to write an Excel sheet using data from my database and a PHP tool called Box/Spout. The thing is that Box/Spout reads rows one at a time, and they are not retrieved via index (e.g. rows[10], rows[42], rows[156]).
I need to retrieve data from the database in the order the rows come out. I have a database with a list of customers that came in via Import, and I have to write them into the Excel spreadsheet. They have phone numbers, emails, and an address. Sorry for the confusion... :/ So I compiled this fairly complex query:
SELECT
`Import`.`UniqueID`,
`Import`.`RowNum`,
`People`.`PeopleID`,
`People`.`First`,
`People`.`Last`,
GROUP_CONCAT(
DISTINCT CONCAT_WS(',', `PhonesTable`.`Phone`, `PhonesTable`.`Type`)
ORDER BY `PhonesTable`.`PhoneID` DESC
SEPARATOR ';'
) AS `Phones`,
GROUP_CONCAT(
DISTINCT CONCAT_WS(',', `EmailsTable`.`Email`)
ORDER BY `EmailsTable`.`EmailID` DESC
SEPARATOR ';'
) AS `Emails`,
`Properties`.`Address1`,
`Properties`.`city`,
`Properties`.`state`,
`Properties`.`PostalCode5`,
...(17 more `People` Columns)...,
FROM `T_Import` AS `Import`
LEFT JOIN `T_CustomerStorageJoin` AS `CustomerJoin`
ON `Import`.`UniqueID` = `CustomerJoin`.`ImportID`
LEFT JOIN `T_People` AS `People`
ON `CustomerJoin`.`PersID`=`People`.`PeopleID`
LEFT JOIN `T_JoinPeopleIDPhoneID` AS `PeIDPhID`
ON `People`.`PeopleID` = `PeIDPhID`.`PeopleID`
LEFT JOIN `T_Phone` AS `PhonesTable`
ON `PeIDPhID`.`PhoneID`=`PhonesTable`.`PhoneID`
LEFT JOIN `T_JoinPeopleIDEmailID` AS `PeIDEmID`
ON `People`.`PeopleID` = `PeIDEmID`.`PeopleID`
LEFT JOIN `T_Email` AS `EmailsTable`
ON `PeIDEmID`.`EmailID`=`EmailsTable`.`EmailID`
LEFT JOIN `T_JoinPeopleIDPropertyID` AS `PeIDPrID`
ON `People`.`PeopleID` = `PeIDPrID`.`PeopleID`
AND `PeIDPrID`.`PropertyCP`='CurrentImported'
LEFT JOIN `T_Property` AS `Properties`
ON `PeIDPrID`.`PropertyID`=`Properties`.`PropertyID`
WHERE `Import`.`CustomerCollectionID`=$ccID
AND `RowNum` >= $rnOffset
AND `RowNum` < $rnLimit
GROUP BY `RowNum`;
So I have indexes on every ON segment and on the WHERE segment. When RowNum is roughly in the 0 to 2500 range, the query runs great and executes within a couple of seconds. But the query execution time seems to grow dramatically the larger RowNum gets.
I have an EXPLAIN here: and at pastebin( https://pastebin.com/PksYB4n2 )
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 SIMPLE Import NULL ref CustomerCollectionID,RowNumIndex CustomerCollectionID 4 const 48108 8.74 Using index condition; Using where; Using filesort;
1 SIMPLE CustomerJoin NULL ref ImportID ImportID 4 MyDatabase.Import.UniqueID 1 100 NULL
1 SIMPLE People NULL eq_ref PRIMARY,PeopleID PRIMARY 4 MyDatabase.CustomerJoin.PersID 1 100 NULL
1 SIMPLE PeIDPhID NULL ref PeopleID PeopleID 5 MyDatabase.People.PeopleID 8 100 NULL
1 SIMPLE PhonesTable NULL eq_ref PRIMARY,PhoneID,PhoneID_2 PRIMARY 4 MyDatabase.PeIDPhID.PhoneID 1 100 NULL
1 SIMPLE PeIDEmID NULL ref PeopleID PeopleID 5 MyDatabase.People.PeopleID 5 100 NULL
1 SIMPLE EmailsTable NULL eq_ref PRIMARY,EmailID,DupeDeleteSelect PRIMARY 4 MyDatabase.PeIDEmID.EmailID 1 100 NULL
1 SIMPLE PeIDPrID NULL ref PeopleMSCP,PeopleID,PropertyCP PeopleMSCP 5 MyDatabase.People.PeopleID 4 100 Using where
1 SIMPLE Properties NULL eq_ref PRIMARY,PropertyID PRIMARY 4 MyDatabase.PeIDPrID.PropertyID 1 100 NULL
I apologize if the formatting is absolutely terrible. I'm not sure what good formatting looks like, so I may have jumbled it a bit by accident; plus the tabs got screwed up.
What I want to know is how to speed up the query time. The databases are very large, like in the 10s of millions of rows. And they aren't always like this as our tables are constantly changing, however I would like to be able to handle it when they are.
I tried using LIMIT 2000, 1000, for example, but I know that's less efficient than filtering on an indexed column, so I switched over to RowNum. I feel like this was a good decision, but it seems like MySQL is still looping over every single row before the offset, which kind of defeats the purpose of my index... I think? I'm not sure. I also tried splitting this particular query into about 10 single queries and running them one by one, for each row of the Excel file. That takes a LONG time... TOO LONG. The combined query is fast, but obviously I'm having a problem.
Any help would be greatly appreciated, and thank you ahead of time. I'm sorry again for my lack of post organization.
The order of the columns in an index matters. The order of the clauses in WHERE does not matter (usually).
INDEX(a), INDEX(b) is not the same as the "composite" INDEX(a,b). I deliberately made composite indexes where they seemed useful.
INDEX(a,b) and INDEX(b,a) are not interchangeable unless both a and b are tested with =. (Plus a few exceptions.)
A "covering" index is one where all the columns for the one table are found in the one index. This sometimes provides an extra performance boost. Some of my recommended indexes are "covering". It implies that only the index BTree need be accessed, not also the data BTree; this is where it picks up some speed.
In EXPLAIN SELECT ... a "covering" index is indicated by "Using index" (which is not the same as "Using index condition"). (Your Explain shows no covering indexes currently.)
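The "covering index" effect can be seen directly in a query plan. This sketch uses Python's sqlite3 for a self-contained demo with a simplified stand-in table; SQLite reports a covering index as "USING COVERING INDEX" where MySQL's EXPLAIN would say "Using index", but the mechanism (answering the query from the index BTree alone) is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_import (collection_id INT, row_num INT, unique_id INT)")
# Composite index holding every column the query touches -> "covering"
conn.execute("CREATE INDEX ix_cover ON t_import(collection_id, row_num, unique_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT unique_id FROM t_import WHERE collection_id = 1 AND row_num >= 10"
).fetchall()

# SQLite's analogue of MySQL's "Using index" is "USING COVERING INDEX"
assert any("COVERING INDEX" in row[-1] for row in plan)
```

Note the column order: the '=' column (collection_id) first, the range column (row_num) second, and the merely-selected column (unique_id) last, matching the rules above.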
An index 'should not' have more than 5 columns. (This is not a hard and fast rule.) Some of the indexes suggested below need several columns in order to be covering; for some tables a covering index was not practical.
When JOINing, the order of the tables does not matter; the Optimizer is free to shuffle them around. However, these "rules" apply:
A LEFT JOIN may force ordering of the tables. (I think it does in this case.) (I ordered the columns based on what I think the Optimizer wants; there may be some flexibility.)
The WHERE clause usually determines which table to "start with". (You filter on Import only, so obviously it will start with Import.)
The "next table" to be referenced (via NLJ - Nested Loop Join) is determined by a variety of things. (In your case it is pretty obvious -- namely the ON column(s).)
More on indexing: http://mysql.rjweb.org/doc.php/index_cookbook_mysql
Recommended Indexes
1. Import: (CustomerCollectionID, -- '=' comes first
RowNum, -- 'range'
UniqueID) -- 'covering'
Import shows up in WHERE, so it is first in the EXPLAIN; also due to the LEFTs
Properties: (PropertyID) -- is that the PK?
PeIDPrID: (PropertyCP, PeopleID, PropertyID)
3. People: (PeopleID)
I assume that is the `PRIMARY KEY`? (Too many for "covering")
(Since `People` leads to 3 other tables, I won't number the rest.)
EmailsTable: (EmailID, Email)
PeIDEmID: (PeopleID, -- JOIN from People
EmailID) -- covering
PhonesTable: (PhoneID, Type, Phone)
PeIDPhID: (PeopleID, PhoneID)
2. CustomerJoin: (ImportID, -- coming from `Import` (see ON...)
PersID) -- covering
After adding those, I expect most lines of EXPLAIN to say Using index.
The lack of at least a composite index on Import is the main problem leading to your performance complaint.
Bad GROUP BY
When a GROUP BY does not include every non-aggregated column (other than columns directly dependent on the grouped column(s)), you get arbitrary values for the extras. I see from the EXPLAIN ("rows") that several tables probably deliver multiple rows. You really ought to think about the garbage being generated by this query.
Curiously, Phones and Emails are fed into GROUP_CONCAT(), thereby avoiding the above issue, but their "rows" is only 1.
(Read about ONLY_FULL_GROUP_BY; it might explain the issue better.)
(I'm listing this as a separate Answer since it is orthogonal to my other Answer.)
I call this the "explode-implode" syndrome. The query does a JOIN, getting a bunch of rows, thereby generating several rows, and puts multiple rows into an intermediate table. Then the GROUP BY implodes back to down to the original set of rows.
Let me focus on a portion of the query that could be reformulated to provide a performance improvement:
SELECT ...
GROUP_CONCAT(
DISTINCT CONCAT_WS(',', `EmailsTable`.`Email`)
ORDER BY `EmailsTable`.`EmailID` DESC
SEPARATOR ';'
) AS `Emails`,
...
FROM ...
LEFT JOIN `T_Email` AS `EmailsTable`
ON `PeIDEmID`.`EmailID`=`EmailsTable`.`EmailID`
...
GROUP BY `RowNum`;
Instead, move the table and aggregation function into a subquery
SELECT ...
( SELECT GROUP_CONCAT(
DISTINCT CONCAT_WS(',', `Email`)
ORDER BY `EmailID` DESC
SEPARATOR ';' )
FROM T_Email
WHERE `PeIDEmID`.`EmailID` = `EmailID`
) AS `Emails`,
...
FROM ...
-- and Remove: LEFT JOIN `T_Email` ON ...
...
-- and possibly Remove: GROUP BY ...;
Ditto for PhonesTable.
(It is unclear whether the GROUP BY can be removed; other things may need it.)
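The explode-implode effect, and the correlated-subquery rewrite, can be demonstrated end to end. This sketch uses Python's sqlite3 with heavily simplified stand-in tables (only people and emails, no custom separator or ORDER BY inside the aggregate, since SQLite restricts those with DISTINCT); both formulations return identical results, but the second never multiplies rows in the first place:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (people_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE emails (email_id INTEGER PRIMARY KEY, people_id INT, email TEXT);
INSERT INTO people VALUES (1, 'Ann'), (2, 'Bob');
-- Ann's email appears twice, as a join against another table might duplicate it
INSERT INTO emails VALUES (1, 1, 'a@x'), (2, 1, 'a@x'), (3, 2, 'b@x');
""")

# Explode-implode: the JOIN multiplies rows, then GROUP BY collapses them,
# and DISTINCT is needed inside the aggregate to undo the duplication.
joined = conn.execute("""
    SELECT p.name, group_concat(DISTINCT e.email)
    FROM people p LEFT JOIN emails e ON e.people_id = p.people_id
    GROUP BY p.people_id
""").fetchall()

# Correlated subquery: aggregate per person, no join, no outer GROUP BY.
subquery = conn.execute("""
    SELECT p.name,
           (SELECT group_concat(DISTINCT e.email)
            FROM emails e WHERE e.people_id = p.people_id)
    FROM people p
""").fetchall()

assert joined == subquery
```

With several one-to-many joins (phones and emails together), the joined form multiplies their row counts per person before imploding, which is exactly the waste the subquery form avoids.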
I have the following two MySQL/MariaDB tables:
CREATE TABLE requests (
request_id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
unix_timestamp DOUBLE NOT NULL,
[...]
INDEX unix_timestamp_index (unix_timestamp)
);
CREATE TABLE served_objects (
request_id BIGINT UNSIGNED NOT NULL,
object_name VARCHAR(255) NOT NULL,
[...]
FOREIGN KEY (request_id) REFERENCES requests (request_id)
);
There are several million rows in each table. There are zero or more served_objects per request. I have a view that provides a complete served_objects view by joining these two tables:
CREATE VIEW served_objects_view AS
SELECT
r.request_id AS request_id,
unix_timestamp,
object_name
FROM requests r
RIGHT JOIN served_objects so ON r.request_id=so.request_id;
This all seems pretty straightforward so far. But when I do a simple SELECT like this:
SELECT * FROM served_objects_view ORDER BY unix_timestamp LIMIT 5;
It takes a full minute or more. It's obviously not using the index. I've tried many different approaches, including flipping things around and using a LEFT or INNER join instead, but to no avail.
This is the output of the EXPLAIN for this SELECT:
+------+-------------+-------+--------+---------------+---------+---------+------------------+---------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+--------+---------------+---------+---------+------------------+---------+---------------------------------+
| 1 | SIMPLE | so | ALL | NULL | NULL | NULL | NULL | 5196526 | Using temporary; Using filesort |
| 1 | SIMPLE | r | eq_ref | PRIMARY | PRIMARY | 8 | db.so.request_id | 1 | |
+------+-------------+-------+--------+---------------+---------+---------+------------------+---------+---------------------------------+
Is there something fundamental here that prevents the index from being used? I understand that it needs to use a temporary table to satisfy the view and that that's interfering with the ability to use the index. But I'm hoping that some trick exists that will allow me SELECT from the view while honouring the indexes in the requests table.
You're using a notorious performance antipattern.
SELECT * FROM served_objects_view ORDER BY unix_timestamp LIMIT 5;
You've told the query planner to make a copy of your whole view (in RAM or temp storage), sort it, and toss out all but five rows. So, it obeyed. It really didn't care how long it took.
SELECT * is generally considered harmful to query performance, and this is the kind of case why that's true.
Try this deferred-join optimization
SELECT a.*
FROM served_objects_view a
JOIN (
SELECT request_id
FROM served_objects_view
ORDER BY unix_timestamp
LIMIT 5
) b ON a.request_id = b.request_id
This sorts a smaller subset of data (just the request_id and timestamp values). It then fetches a small subset of the view's rows.
If it's still too slow for your purposes, try creating a compound index on requests (unix_timestamp, request_id). But that's probably unnecessary. If it is necessary, concentrate on optimizing the subquery.
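The deferred join can be demonstrated end to end. This sketch uses Python's sqlite3 with tiny stand-in tables (and a plain JOIN in the view, since the RIGHT JOIN is questioned below anyway); an outer ORDER BY is added so the result is deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE requests (request_id INTEGER PRIMARY KEY, unix_timestamp REAL);
CREATE TABLE served_objects (request_id INT, object_name TEXT);
CREATE INDEX ts_ix ON requests(unix_timestamp);
INSERT INTO requests VALUES (1, 30.0), (2, 10.0), (3, 20.0);
INSERT INTO served_objects VALUES (1, 'a'), (2, 'b'), (3, 'c');
CREATE VIEW served_objects_view AS
  SELECT r.request_id, r.unix_timestamp, so.object_name
  FROM requests r JOIN served_objects so ON r.request_id = so.request_id;
""")

# Deferred join: sort/limit over the narrow key column first,
# then fetch the wide rows for just those few keys.
rows = conn.execute("""
    SELECT a.* FROM served_objects_view a
    JOIN (SELECT request_id FROM served_objects_view
          ORDER BY unix_timestamp LIMIT 2) b
      ON a.request_id = b.request_id
    ORDER BY a.unix_timestamp
""").fetchall()

assert rows == [(2, 10.0, 'b'), (3, 20.0, 'c')]
```

The inner query sorts only request_id values; the expensive wide-row lookup happens for just the LIMITed handful.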
Remark: RIGHT JOIN? Really? Don't you want just JOIN?
VIEWs are not always well-optimized. Does the query run slow when you use the SELECT? Have you added the suggested index?
What version of MySQL/MariaDB are you using? There may have been optimization improvements in newer versions, and an upgrade might help.
My point is, you may have to abandon VIEW.
The answer provided by O. Jones was the right approach; thanks! The big saviour here is that if the inner SELECT refers only to columns from the requests table (such as the case when SELECTing only request_id), the optimizer can satisfy the view without performing a join, making it lickety-split.
I had to make two adjustments, though, to make it produce the same results as the original SELECT. First, if non-unique request_ids are returned by the inner SELECT, the outer JOIN creates a cross-product of these non-unique entries. These duplicate rows can be effectively discarded by changing the outer SELECT into a SELECT DISTINCT.
Second, if the ORDER BY column can contain non-unique values, the result can contain irrelevant rows. These can be effectively discarded by also SELECTing orderByCol and adding AND a.orderByCol = b.orderByCol to the JOIN rule.
So my final solution, which works well if orderByCol comes from the requests table, is the following:
SELECT DISTINCT a.*
FROM served_objects_view a
JOIN (
SELECT request_id, <orderByCol> FROM served_objects_view
<whereClause>
ORDER BY <orderByCol> LIMIT <startRow>,<nRows>
) b ON a.request_id = b.request_id AND a.<orderByCol> = b.<orderByCol>
ORDER BY <orderByCol>;
This is a more convoluted solution than I was hoping for, but it works, so I'm happy.
One final comment. An INNER JOIN and a RIGHT JOIN are effectively the same thing here, so I originally formulated it in terms of a RIGHT JOIN because that's the way I was conceptualizing it. However, after some experimentation (after your challenge) I discovered that an INNER join is much more efficient. (It's what allows the optimizer to satisfy the view without performing a join if the inner SELECT refers only to columns from the requests table.) Thanks again!
Simplifying, I have four tables.
ref_TagGroup (top-level descriptive containers for various tags)
ref_Tag (tags with name and unique tagIDs)
ref_Product
ref_TagMap (TagID,Container,ContainerType)
A fifth table, ref_ProductFamily exists but is not directly part of this query.
I use the ref_TagMap table to map tags to products, but also to map Tags to TagGroups and also to product families. The ContainerType is set to PROD/TAGGROUP/PRODFAM accordingly.
So, I want to return the tag group, tagname and the number of products AND product families that the tag is mapped to...so results like:
GroupName | TagName | TagHitCnt
My question is: why do the first and second queries come back in milliseconds, but the third query (which just adds an "OR" condition to include both tag-to-product and tag-to-family mappings) takes forever (well, over ten minutes anyway... I haven't let it run all night yet)?
QUERY 1:
SELECT ref_taggroup.groupname,ref_tag.tagname,COUNT(DISTINCT IFNULL(ref_product.familyid,ref_product.id + 100000000),ref_product.name) AS 'taghitcnt'
FROM (ref_taggroup,ref_tag,ref_product)
LEFT JOIN ref_tagmap GROUPMAP ON GROUPMAP.containerid=ref_taggroup.groupid
LEFT JOIN ref_tagmap PRODMAP ON PRODMAP.containerid=ref_product.id
WHERE
GROUPMAP.tagid=ref_tag.tagid AND GROUPMAP.containertype='TAGGROUP'
AND
PRODMAP.tagid=ref_tag.tagid AND PRODMAP.containertype='PROD'
GROUP BY tagname
ORDER BY groupname,tagname ;
QUERY 2:
SELECT ref_taggroup.groupname,ref_tag.tagname,COUNT(DISTINCT IFNULL(ref_product.familyid,ref_product.id + 100000000),ref_product.name) AS 'taghitcnt'
FROM (ref_taggroup,ref_tag,ref_product)
LEFT JOIN ref_tagmap GROUPMAP ON GROUPMAP.containerid=ref_taggroup.groupid
LEFT JOIN ref_tagmap PRODFAMMAP ON PRODFAMMAP.containerid=ref_product.familyid
WHERE
GROUPMAP.tagid=ref_tag.tagid AND GROUPMAP.containertype='TAGGROUP'
AND
PRODFAMMAP.tagid=ref_tag.tagid AND PRODFAMMAP.containertype='PRODFAM'
GROUP BY tagname
ORDER BY groupname,tagname ;
QUERY 3:
SELECT ref_taggroup.groupname,ref_tag.tagname,COUNT(DISTINCT IFNULL(ref_product.familyid,ref_product.id + 100000000),ref_product.name) AS 'taghitcnt'
FROM (ref_taggroup,ref_tag,ref_product)
LEFT JOIN ref_tagmap GROUPMAP ON GROUPMAP.containerid=ref_taggroup.groupid
JOIN ref_tagmap PRODMAP ON PRODMAP.containerid=ref_product.id
JOIN ref_tagmap PRODFAMMAP ON PRODFAMMAP.containerid=ref_product.familyid
WHERE
GROUPMAP.tagid=ref_tag.tagid AND GROUPMAP.containertype='TAGGROUP'
AND
((PRODMAP.tagid=ref_tag.tagid AND PRODMAP.containertype='PROD')
OR
(PRODFAMMAP.tagid=ref_tag.tagid AND PRODFAMMAP.containertype='PRODFAM' ))
GROUP BY tagname
ORDER BY groupname,tagname ;
--
To answer a question that may come up: the COUNT(DISTINCT IFNULL(...)) in the SELECT is designed to return one record for each family of grouped products, plus one record for each 'standalone' product that isn't in a family. This code works well in other queries.
I've tried doing a UNION on the first two queries, and that works and comes back very quickly, but it's not practical for other reasons that I won't go into here.
What is the best way to do this? What am I doing wrong?
Thanks!
ADDING EXPLAIN OUTPUT
QUERY1
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE GROUPMAP ALL 5640 Using where; Using temporary; Using filesort
1 SIMPLE ref_tag ref PRIMARY PRIMARY 4 lsslave01.GROUPMAP.tagid 1 Using index
1 SIMPLE ref_taggroup ref PRIMARY PRIMARY 4 lsslave01.GROUPMAP.containerid 3 Using index
1 SIMPLE PRODMAP ALL 5640 Using where; Using join buffer
1 SIMPLE ref_product eq_ref PRIMARY PRIMARY 4 lsslave01.PRODMAP.containerid 1
QUERY2
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE GROUPMAP ALL 5640 Using where; Using temporary; Using filesort
1 SIMPLE ref_tag ref PRIMARY PRIMARY 4 lsslave01.GROUPMAP.tagid 1 Using index
1 SIMPLE ref_taggroup ref PRIMARY PRIMARY 4 lsslave01.GROUPMAP.containerid 3 Using index
1 SIMPLE PRODFAMMAP ALL 5640 Using where; Using join buffer
1 SIMPLE ref_product ref FixtureType FixtureType 5 lsslave01.PRODFAMMAP.containerid 39 Using where
QUERY3
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE GROUPMAP ALL 5640 Using where; Using temporary; Using filesort
1 SIMPLE ref_tag ref PRIMARY PRIMARY 4 lsslave01.GROUPMAP.tagid 1 Using index
1 SIMPLE ref_taggroup ref PRIMARY PRIMARY 4 lsslave01.GROUPMAP.containerid 3 Using index
1 SIMPLE PRODMAP ALL 5640 Using join buffer
1 SIMPLE PRODFAMMAP ALL 5640 Using where; Using join buffer
1 SIMPLE ref_product eq_ref PRIMARY,FixtureType PRIMARY 4 lsslave01.PRODMAP.containerid 1 Using where
One more update for anyone who is interested:
I finally let the third query above run to completion. It took right around 1000 seconds. Dividing that time by the time either of queries 1 or 2 takes to run gives a number around 6000, which is very close to the size of the ref_tagmap table in our dev environment (it's much larger in production). So it looks like we're running one query against each record in that table... but I still can't see why.
Any help would be much appreciated...and I mean seriously, seriously appreciated.
This is less an "answer" than a couple of observations/suggestions.
First, I'm curious whether you could GROUP BY on an integer ID instead of the tag name? I'd change the ref_TagMap.containertype field to hold tinyint enumerated values representing the three possible values of TAGGROUP, PROD and PRODFAM. An indexed tinyint field should be slightly faster than an index of string values. It probably won't make much difference though because it's the second conditional in the join clause and there isn't that much spread in the indexed values anyway.
Next is the observation/reminder that when the first half of an OR evaluates to FALSE, MySQL must go on to evaluate the second half as well. So you want to put the condition most likely to evaluate to TRUE first (i.e., before the OR).
I doubt either of those two issues are your real problem... though the issue in the second paragraph may play a part. Seems like the quickest way to a performant version of query 3 may be to simply populate a temp table with the results from the first two queries and pull from that temp table to get the results you're looking for from the third. Perhaps in doing so you'll discover why that third query is so slow.
I was using a query that looked similar to this one:
SELECT `episodes`.*, IFNULL(SUM(`views_sum`.`clicks`), 0) as `clicks`
FROM `episodes`, `views_sum`
WHERE `views_sum`.`index` = "episode" AND `views_sum`.`key` = `episodes`.`id`
GROUP BY `episodes`.`id`
... which takes ~0.1s to execute. But it's problematic, because some episodes don't have a corresponding views_sum row, so those episodes aren't included in the result.
What I want is NULL values when a corresponding views_sum row doesn't exist, so I tried using a LEFT JOIN instead:
SELECT `episodes`.*, IFNULL(SUM(`views_sum`.`clicks`), 0) as `clicks`
FROM `episodes`
LEFT JOIN `views_sum` ON (`views_sum`.`index` = "episode" AND `views_sum`.`key` = `episodes`.`id`)
GROUP BY `episodes`.`id`
This query produces the same columns, and it also includes the few rows missing from the 1st query.
BUT, the 2nd query takes 10 times as long! A full second.
Why is there such a huge discrepancy between the execution times when the result is so similar? There's nowhere near 10 times as many rows — it's like 60 from the 1st query, and 70 from the 2nd. That's not to mention that the 10 additional rows have no views to sum!
Any light shed would be greatly appreciated!
(There are indexes on episodes.id, views_sum.index, and views_sum.key.)
EDIT:
I copy-pasted the SQL from above, and here are the EXPLAINs, in order:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE views_sum ref index,key index 27 const 6532 Using where; Using temporary; Using filesort
1 SIMPLE episodes eq_ref PRIMARY PRIMARY 4 db102914_itw.views_sum.key 1 Using where
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE episodes ALL NULL NULL NULL NULL 70 Using temporary; Using filesort
1 SIMPLE views_sum ref index,key index 27 const 6532
Here's the query I ultimately came up with, after many, many iterations. (The SQL_NO_CACHE flag is there so I can test execution times.)
SELECT SQL_NO_CACHE e.*, IFNULL(SUM(vs.`clicks`), 0) as `clicks`
FROM `episodes` e
LEFT JOIN
(SELECT * FROM `views_sum` WHERE `index` = "episode") vs
ON vs.`key` = e.`id`
GROUP BY e.`id`
Because the ON condition views_sum.index = "episode" is static, i.e., it isn't dependent on the row it's joined to, I was able to get a massive performance boost by using a subquery to filter the views_sum table before joining.
My query now takes ~0.2s. And what's even better, the time doesn't grow as you increase the offset of the query (unlike my first LEFT JOIN attempt). It stays the same, even if you do a sort on the clicks column.
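The equivalence of the two formulations can be checked mechanically. This sketch uses Python's sqlite3 with miniature stand-in tables ("index" and "key" are quoted since they are reserved words); both queries return identical results, while the derived-table form shrinks views_sum before the join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE episodes (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE views_sum ("index" TEXT, "key" INT, clicks INT);
INSERT INTO episodes VALUES (1, 'e1'), (2, 'e2'), (3, 'e3');
INSERT INTO views_sum VALUES ('episode', 1, 5), ('episode', 1, 2),
                             ('episode', 2, 4), ('other', 3, 9);
""")

# Static filter expressed in the ON clause of the LEFT JOIN:
direct = conn.execute("""
    SELECT e.id, IFNULL(SUM(vs.clicks), 0)
    FROM episodes e
    LEFT JOIN views_sum vs
      ON vs."index" = 'episode' AND vs."key" = e.id
    GROUP BY e.id
""").fetchall()

# Same filter applied once, up front, in a derived table:
prefiltered = conn.execute("""
    SELECT e.id, IFNULL(SUM(vs.clicks), 0)
    FROM episodes e
    LEFT JOIN (SELECT * FROM views_sum WHERE "index" = 'episode') vs
      ON vs."key" = e.id
    GROUP BY e.id
""").fetchall()

assert direct == prefiltered  # episode 3 still appears, with 0 clicks
```

Note episode 3 survives with a 0 total in both forms, which is exactly the behavior the original INNER-style query lost.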
You should have a combined index on views_sum.index and views_sum.key; judging by the names, I suspect you will always use both fields together. Also, I would rewrite the first query to use a proper INNER JOIN clause instead of a filtered Cartesian product.
I suspect the performance of both queries will be much closer together if you do this. And, more importantly, much faster than they are now.
edit: Thinking about it, I would probably add a third column to that index, views_sum.clicks, which can probably be used for the SUM. But remember that multi-column indexes can only be used left to right.
It's all about the indexes. You'll have to play around with it a bit or post your database schema here. As a rough guess, I'd say you should make sure you have an index on views_sum.key.
Normally, a LEFT JOIN will be slower than an INNER JOIN or a CROSS JOIN because it must read the left table in full. Put another way, the difference in time isn't related to the size of the result, but to the full size of the left table.
I also wonder if you're asking MySQL to figure things out for you that you should be doing yourself. Specifically, that SUM() function would normally require a GROUP BY clause.
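The row-preservation point above is easy to see concretely. This sketch uses Python's sqlite3 with throwaway stand-in tables: the LEFT JOIN must produce a row for every left-table row, matched or not, which is why it cannot skip scanning the left table the way an INNER JOIN sometimes can:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE episodes (id INTEGER PRIMARY KEY);
CREATE TABLE views_sum ("key" INT, clicks INT);
INSERT INTO episodes VALUES (1), (2), (3);
INSERT INTO views_sum VALUES (1, 5), (2, 4);  -- episode 3 has no views
""")

inner = conn.execute(
    'SELECT COUNT(*) FROM episodes e JOIN views_sum v ON v."key" = e.id'
).fetchone()[0]
left = conn.execute(
    'SELECT COUNT(*) FROM episodes e LEFT JOIN views_sum v ON v."key" = e.id'
).fetchone()[0]

# INNER drops the unmatched episode; LEFT keeps it with NULLs.
assert (inner, left) == (2, 3)
```

That extra unmatched row is the whole point of using LEFT JOIN here, and also the reason the optimizer has less freedom to reorder the tables.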