Joining the same table twice makes the query slow - MySQL

My problem is that my query becomes very slow when I JOIN the same table twice.
I want to retrieve all the products from a given category. But since a product can be in multiple categories, I also want to get the canonical category (c.canonical) that should provide the URL base. Therefore I have two extra JOINs, on categories AS c and categories_products AS cp2.
Original query
SELECT p.product_id
FROM products AS p
JOIN categories_products AS cp
ON p.product_id = cp.product_id
JOIN product_variants AS pv
ON pv.product_id = p.product_id
WHERE cp.category_id = 2
AND p.status = 2
GROUP BY p.product_id
ORDER BY cp.product_sortorder ASC
LIMIT 0, 40
EXPLAIN
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | extra |
|----|-------------|-------|--------|------------------------|------------------------|---------|-------------------------|------|----------------------------------------------|
| 1 | SIMPLE | cp | ref | FK_categories_products | FK_categories_products | 4 | const | 1074 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | p | eq_ref | PRIMARY | PRIMARY | 4 | superlove.cp.product_id | 1 | Using where |
| 1 | SIMPLE | pv | ref | FK_product_variants | FK_product_variants | 4 | superlove.p.product_id | 1 | Using where |
Slow query
SELECT p.product_id, c.category_id
FROM products AS p
JOIN categories_products AS cp
ON p.product_id = cp.product_id
JOIN categories_products AS cp2 -- Extra line
ON p.product_id = cp2.product_id -- Extra line
JOIN categories AS c -- Extra line
ON cp2.category_id = c.category_id -- Extra line
JOIN product_variants AS pv
ON pv.product_id = p.product_id
WHERE cp.category_id = 2
AND p.status = 2
AND c.canonical = 1 -- Extra line
GROUP BY p.product_id
ORDER BY cp.product_sortorder ASC
LIMIT 0, 40
EXPLAIN
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | extra |
|----|-------------|-------|--------|------------------------|------------------------|---------|--------------------------|------|----------------------------------------------|
| 1 | SIMPLE | c | ALL | PRIMARY | (null) | (null) | (null) | 221 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | cp2 | ref | FK_categories_products | FK_categories_products | 4 | superlove.c.category_id | 33 | |
| 1 | SIMPLE | p | eq_ref | PRIMARY | PRIMARY | 4 | superlove.cp2.product_id | 1 | Using where |
| 1 | SIMPLE | pv | ref | FK_product_variants | FK_product_variants | 4 | superlove.p.product_id | 1 | Using where |
| 1 | SIMPLE | cp | ref | FK_categories_products | FK_categories_products | 4 | const | 1074 | Using where |

The MySQL optimizer seems to have trouble with this query. I get the impression that only rather few products are in the requested category, while many categories are canonical. The optimizer, however, apparently cannot tell that cp.category_id = 2 is far more selective than c.canonical = 1, so it starts the new plan with c instead of cp, dragging a lot of superfluous rows along the way.
Providing data to the optimizer
Your first attempt should be to provide the optimizer with the required data: with the ANALYZE TABLE command you can collect information about key distribution. For this to work, you'd need suitable keys in place, so perhaps you should add a key on categories.canonical. Then MySQL would know that there are (if I understand you correctly) only two distinct values for that column, and perhaps even how many rows have each. With a bit of luck, that would tell it that using c.canonical = 1 as the starting point would be a poor choice.
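For example (the index name is my own):
ALTER TABLE categories ADD INDEX idx_canonical (canonical);
ANALYZE TABLE categories;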
Forcing join order
If that does not help, I suggest you force the join order using STRAIGHT_JOIN. In particular, you might want to force cp as the first table, just as your original (and fast) query had it. If that solves the problem, you can stick with that solution. If not, please provide a new EXPLAIN output, so we can see where that approach fails.
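A sketch of that forced order, listing cp first (your slow query, just reordered; untested):
SELECT STRAIGHT_JOIN p.product_id, c.category_id
FROM categories_products AS cp
JOIN products AS p
ON p.product_id = cp.product_id
JOIN categories_products AS cp2
ON p.product_id = cp2.product_id
JOIN categories AS c
ON cp2.category_id = c.category_id
JOIN product_variants AS pv
ON pv.product_id = p.product_id
WHERE cp.category_id = 2
AND p.status = 2
AND c.canonical = 1
GROUP BY p.product_id
ORDER BY cp.product_sortorder ASC
LIMIT 0, 40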
Schema considerations
One more thing to consider: your question implies that every product has exactly one canonical category associated with it, but your database schema does not reflect that fact. You might want to modify your schema accordingly. For example, you could add a canonical_category_id column to the products table and use categories_products for non-canonical categories only. With such a setup, you might want to create a VIEW which joins products to all their categories, both canonical and non-canonical, using a UNION like this:
CREATE VIEW products_all_categories AS
SELECT product_id, canonical_category_id AS category_id
FROM products
UNION ALL
SELECT product_id, category_id
FROM categories_products
You could use this view instead of categories_products in those places where you don't care whether a category is canonical or not. You could even rename the table and call the view categories_products instead, so that your existing queries keep working as they used to. You should add an index on the two products columns used in this view; perhaps even two indexes, one for each column order.
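For example, assuming the canonical_category_id column proposed above (index names are my own):
ALTER TABLE products
ADD INDEX idx_product_canonical (product_id, canonical_category_id),
ADD INDEX idx_canonical_product (canonical_category_id, product_id);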
Not sure whether this whole setup would be acceptable in your application, nor whether it would really bring the intended speed gain. In the end, you might be forced to maintain redundant data, such as a canonical flag on categories_products rows in addition to the canonical_category_id column in products. I know redundant data is ugly from a design point of view, but for the sake of performance it might be necessary to avoid long computations, at least on an RDBMS which doesn't support materialized views. You could probably use triggers to keep the data consistent, though I have no actual experience there.
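If you go the trigger route, a sketch might look like this (purely illustrative and untested; it assumes the redundant canonical flag on categories_products described above):
DELIMITER //
CREATE TRIGGER products_canonical_sync
AFTER UPDATE ON products
FOR EACH ROW
BEGIN
-- keep the redundant flag in sync with products.canonical_category_id
UPDATE categories_products
SET canonical = (category_id = NEW.canonical_category_id)
WHERE product_id = NEW.product_id;
END//
DELIMITER ;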

Related

How to improve this MySQL Query using join?

I have got a simple query and it takes more than 14 seconds.
select
e.title, e.date, v.name, v.city, v.region, v.country
from seminar e force index for join (venueid)
left join venues v on e.venueid = v.id
where v.country = 'US'
and v.city = 'New York'
and v.region = 'NY'
and e.date > curdate()
and e.someid != 0
Note: count(e.id) is just an abbreviation used for debugging purposes; in fact we get information from both tables.
Explain gives this:
+----+-------------+-------+-------------+--------------------------------------------------------------------------------------+--------------------------+---------+-----------------+------+--------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------------+--------------------------------------------------------------------------------------+--------------------------+---------+-----------------+------+--------------------------------------------------------+
| 1 | SIMPLE | v | index_merge | PRIMARY,city,country,region | city,region | 378,378 | NULL | 2 | Using intersect(city,region); Using where |
| 1 | SIMPLE | e | ref | venueid | venueid | 5 | v.id | 11 | Using where |
+----+-------------+-------+-------------+--------------------------------------------------------------------------------------+--------------------------+---------+-----------------+------+--------------------------------------------------------+
I have indexes on e.id, e.date, e.someid, as well as v.id, v.country, v.city and v.region.
I know the db-setup is a mess but that's what I have to deal with right now.
Why does the SQL take so long when the final result is only about 150 rows? The events table has about 1M entries and venues about 100K.
Both tables are MyISAM. Any ideas how to improve this?
Upon creating an index like this
create index location on venues (city, region, country)
it takes 20 seconds; the EXPLAIN is this:
+----+-------------+-------+------+--------------------------------------+--------------+---------+-------------------+------+------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+--------------------------------------+--------------+---------+-------------------+------+------------------------------------+
| 1 | SIMPLE | v | ref | PRIMARY,city,country,region,location | location | 765 | const,const,const | 410 | Using index condition; Using where |
| 1 | SIMPLE | e | ref | EventVenueID | venueid | 5 | v.id | 11 | Using where |
+----+-------------+-------+------+--------------------------------------+--------------+---------+-------------------+------+------------------------------------+
You have left join venues, but you also have conditions in the WHERE clause on the joined venues row, so only matching rows will be returned; the LEFT JOIN effectively behaves like an INNER JOIN. However, that's a side issue - read on for why you don't need a join at all.
Next, if the city is New York, there's no need to also test for country or region.
Finally, if you're trying to find "how many future events are in New York", you don't need a join, as the venue id(s) can be resolved once in a subquery!
Try this:
select count(*) as event_count
from seminar e
where e.venueid in -- IN rather than =, in case more than one venue matches
    (select id from venues where city = 'New York')
and e.date > curdate()
and e.someid != 0
MySQL will use the index on venueid without you having to use a hint. If it doesn't, execute this:
ANALYZE TABLE seminar;
which will update the statistics of the data distribution in the indexed columns. Note that if a lot of your events are in New York, it's more efficient not to use an index (as most of the rows will have to be accessed anyway).
This would make the first part of the query faster:
INDEX(city, region, country)
I went another way, since it seems that MySQL can't handle these joins effectively:
Created one big new table with all the columns I need from the join
So the seminar and venue data are in one table now
Added indexes
Now the query is fast. I don't know why...
From 25 seconds, we are down to .08 seconds
That's how I wanted it.
If anybody still knows why, you are more than welcome to provide an answer.
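For reference, the denormalization might have looked roughly like this (names are my own; untested):
-- flatten the join into one table, then index the filtered columns
CREATE TABLE seminar_flat AS
SELECT e.id, e.title, e.date, e.someid,
v.name, v.city, v.region, v.country
FROM seminar e
JOIN venues v ON e.venueid = v.id;
CREATE INDEX idx_flat_city_date ON seminar_flat (city, date);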

Why is my MySQL query so slow?

I'm trying to figure out why this query is so slow (it takes about 6 seconds to get a result):
SELECT DISTINCT
c.id
FROM
z1
INNER JOIN
c ON (z1.id = c.id)
INNER JOIN
i ON (c.member_id = i.member_id)
WHERE
c.id NOT IN (... big list of ids which should be excluded)
This is the execution plan:
+----+-------------+-------+--------+-------------------+---------+---------+--------------------+--------+----------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+--------+-------------------+---------+---------+--------------------+--------+----------+--------------------------+
| 1 | SIMPLE | z1 | index | PRIMARY | PRIMARY | 4 | NULL | 318563 | 99.85 | Using where; Using index; Using temporary |
| 1 | SIMPLE | c | eq_ref | PRIMARY,member_id | PRIMARY | 4 | z1.id | 1 | 100.00 | |
| 1 | SIMPLE | i | eq_ref | PRIMARY | PRIMARY | 4 | c.member_id | 1 | 100.00 | Using index |
+----+-------------+-------+--------+-------------------+---------+---------+--------------------+--------+----------+--------------------------+
Is it because MySQL has to read almost the whole first table? Can that be adjusted?
You can try to replace c with a subquery.
SELECT DISTINCT
c.id
FROM
z1
INNER JOIN
(select c.id, c.member_id -- member_id is needed by the outer join to i
from c
WHERE
c.id NOT IN (... big list of ids which should be excluded)) c ON (z1.id = c.id)
INNER JOIN
i ON (c.member_id = i.member_id)
to leave only the necessary ids.
It is impossible to say from the information you've provided whether there is a faster solution for obtaining the same data (we would need to know about the data distributions and which foreign keys are obligatory). However, assuming that this is a hierarchical data set, the plan is probably not optimal: the only predicate that reduces the number of rows is c.id NOT IN (...).
The first question to ask yourself when optimizing any query is: do I need all the rows? How many rows is this returning?
I'm struggling to see any utility in a query which returns a bare list of id values (implying a set of auto-increment integers).
You can't use an index for a NOT IN (or <>), hence the most efficient solution is probably to start with a full table scan on c - which should be the outcome of StanislavL's query.
Since you don't use any values from i or z1, the joins could be replaced with EXISTS, which may help performance.
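A hedged sketch of that rewrite (untested; note that DISTINCT is no longer needed, because each row of c is produced at most once):
SELECT c.id
FROM c
WHERE c.id NOT IN (... big list of ids which should be excluded)
AND EXISTS (SELECT 1 FROM z1 WHERE z1.id = c.id)
AND EXISTS (SELECT 1 FROM i WHERE i.member_id = c.member_id);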
I would also consider creating a compound index on c(id, member_id). That way the query can be resolved at the index level alone, without scanning any table rows.
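For example:
ALTER TABLE c ADD INDEX idx_c_id_member (id, member_id);
With this covering index, both the NOT IN filter on id and the member_id lookup for the join can be answered from the index alone.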

joining table in mysql not using index properly?

I have four tables that I am trying to join and output the result to a new table. My code looks like this:
create table tbl
select a.dte, a.permno, (ret - rf) f0_xs_ret, (xs_ret - (betav*xs_mkt)) f0_resid, mkt_cap last_year_mkt_cap, betav beta_value
from a inner join b using (dte)
inner join c on (year(a.dte) = c.yr and a.permno = c.permno)
inner join d on (a.permno = d.permno and year(a.dte)-1 = year(d.dte));
All of the tables have multiple indexes. For table a, (dte, permno) identifies a unique record; for table b, dte identifies a unique record; for table c, (yr, permno); and for table d, (dte, permno). The EXPLAIN for the SELECT part of the query is:
+----+-------------+-------+--------+-------------------+---------+---------+----------------------------------+--------+-------------------+
| id | select_type | table | type   | possible_keys     | key     | key_len | ref                              | rows   | Extra             |
+----+-------------+-------+--------+-------------------+---------+---------+----------------------------------+--------+-------------------+
|  1 | SIMPLE      | d     | ALL    | idx1              | NULL    | NULL    | NULL                             | 264129 |                   |
|  1 | SIMPLE      | c     | ref    | idx2              | idx2    | 4       | achernya.d.permno                |     16 |                   |
|  1 | SIMPLE      | b     | ALL    | PRIMARY,idx2      | NULL    | NULL    | NULL                             |  12336 | Using join buffer |
|  1 | SIMPLE      | a     | eq_ref | PRIMARY,idx1,idx2 | PRIMARY | 7       | achernya.b.dte,achernya.d.permno |      1 | Using where       |
+----+-------------+-------+--------+-------------------+---------+---------+----------------------------------+--------+-------------------+
Why does MySQL have to read so many rows to process this? If I'm reading this correctly, it has to examine 264129 * 16 * 12336 row combinations, which should take a good month.
Could someone please explain what's going on here?
MySQL has to read the rows because you're using functions as your join conditions. An index on dte will not help resolve YEAR(dte) in a query. If you want to make this fast, then put the year in its own column to use in joins and move the index to that column, even if that means some denormalization.
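A sketch of that denormalization for table a (column and index names are my own; untested):
ALTER TABLE a ADD COLUMN dte_yr SMALLINT;
UPDATE a SET dte_yr = YEAR(dte);
ALTER TABLE a ADD INDEX idx_yr_permno (dte_yr, permno);
The join to c can then be written as a.dte_yr = c.yr and a.permno = c.permno, which can use the new index; table d would need the same treatment for its YEAR(dte) comparison.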
As for the other columns in your index that you don't apply functions to, they may not be used if the index won't provide much benefit, or they aren't the leftmost column in the index and you don't use the leftmost prefix of that index in your join condition.
Sometimes MySQL does not use an index, even if one is available. One circumstance under which this occurs is when the optimizer estimates that using the index would require MySQL to access a very large percentage of the rows in the table. (In this case, a table scan is likely to be much faster because it requires fewer seeks.)
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html

MySQL Query Optimization; SELECT multiple fields vs. JOIN

We've got a relatively straightforward query that does LEFT JOINs across 4 tables. A is the "main" table or the top-most table in the hierarchy. B links to A, C links to B. Furthermore, X links to A. So the hierarchy is basically
A
C => B => A
X => A
The query is essentially:
SELECT
a.*, b.*, c.*, x.*
FROM
a
LEFT JOIN b ON b.a_id = a.id
LEFT JOIN c ON c.b_id = b.id
LEFT JOIN x ON x.a_id = a.id
WHERE
b.flag = true
ORDER BY
x.date DESC
LIMIT 25
Via EXPLAIN, I've confirmed that the correct indexes are in place and that MySQL's built-in query optimizer is actually using them.
So here's the strange part...
When we run the query as is, it takes about 1.1 seconds to run.
However, after doing some checking, it seems that if I remove most of the SELECT fields, I get a significant speed boost.
So if instead we made this into a two-step query process:
The first query is the same as above, except the SELECT clause only selects a.id instead of SELECT *.
The second query is also the same as above, except the WHERE clause only does an a.id IN (...) against the result of Query 1 instead of what we had before.
The result is drastically different. It's .03 seconds for the first query and .02 for the second query.
Doing this two-step query in code essentially gives us a 20x boost in performance.
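In SQL terms, the two-step approach looks roughly like this (schematic, reusing the placeholder tables from above):
-- Query 1: fetch only the ids of the 25 rows
SELECT a.id
FROM a
LEFT JOIN b ON b.a_id = a.id
LEFT JOIN c ON c.b_id = b.id
LEFT JOIN x ON x.a_id = a.id
WHERE b.flag = true
ORDER BY x.date DESC
LIMIT 25;
-- Query 2: fetch the full rows for just those ids
SELECT a.*, b.*, c.*, x.*
FROM a
LEFT JOIN b ON b.a_id = a.id
LEFT JOIN c ON c.b_id = b.id
LEFT JOIN x ON x.a_id = a.id
WHERE a.id IN (/* ids from query 1 */)
ORDER BY x.date DESC;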
So here's my question:
Shouldn't this type of optimization already be done by the DB engine? Why does the choice of which fields are actually SELECTed make such a difference to the overall performance of the query?
At the end of the day, both approaches select the exact same 25 rows and return the exact same full contents of those rows. So why the wide disparity in performance?
ADDED 2012-08-24 13:02 PM PDT
Thanks eggyal and invertedSpear for the feedback. First off, it's not a caching issue: I've run both approaches multiple times (about 10 runs), alternating between them. The results average 1.1 seconds for the first (single-query) approach and .03 + .02 seconds for the second (two-query) approach.
In terms of indexes, I thought I had done an EXPLAIN to ensure that we're going thru the keys, and for the most part we are. However, I just did a quick check again and one interesting thing to note:
The slower "single query" approach doesn't show the Extra note of "Using index" for the third line:
+----+-------------+-------+--------+------------------------+-------------------+---------+-------------------------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+------------------------+-------------------+---------+-------------------------------+------+----------------------------------------------+
| 1 | SIMPLE | t1 | index | PRIMARY | shop_group_id_idx | 5 | NULL | 102 | Using index; Using temporary; Using filesort |
| 1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | dbmodl_v18.t1.organization_id | 1 | Using where |
| 1 | SIMPLE | t0 | ref | bundle_idx,shop_id_idx | shop_id_idx | 4 | dbmodl_v18.t1.organization_id | 309 | |
| 1 | SIMPLE | t3 | eq_ref | PRIMARY | PRIMARY | 4 | dbmodl_v18.t0.id | 1 | |
+----+-------------+-------+--------+------------------------+-------------------+---------+-------------------------------+------+----------------------------------------------+
While it does show "Using index" for when we query for just the IDs:
+----+-------------+-------+--------+------------------------+-------------------+---------+-------------------------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+------------------------+-------------------+---------+-------------------------------+------+----------------------------------------------+
| 1 | SIMPLE | t1 | index | PRIMARY | shop_group_id_idx | 5 | NULL | 102 | Using index; Using temporary; Using filesort |
| 1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | dbmodl_v18.t1.organization_id | 1 | Using where |
| 1 | SIMPLE | t0 | ref | bundle_idx,shop_id_idx | shop_id_idx | 4 | dbmodl_v18.t1.organization_id | 309 | Using index |
| 1 | SIMPLE | t3 | eq_ref | PRIMARY | PRIMARY | 4 | dbmodl_v18.t0.id | 1 | |
+----+-------------+-------+--------+------------------------+-------------------+---------+-------------------------------+------+----------------------------------------------+
The strange thing is that both plans list the correct index being used... but I guess it raises the questions:
Why do they differ (considering all the other clauses are exactly the same)? And is this an indication of why one is slower?
Unfortunately, the MySQL docs do not give much information for when the "Extra" column is blank/null in the EXPLAIN results.
More important than speed, you have a flaw in your query logic. When you test a LEFT JOINed column in the WHERE clause (other than testing for NULL), you force that join to behave as if it were an INNER JOIN. Instead, you'd want:
SELECT
a.*, b.*, c.*, x.*
FROM
a
LEFT JOIN b ON b.a_id = a.id
AND b.flag = true
LEFT JOIN c ON c.b_id = b.id
LEFT JOIN x ON x.a_id = a.id
ORDER BY
x.date DESC
LIMIT 25
My next suggestion would be to examine all of those .*'s in your SELECT. Do you really need all the columns from all the tables?
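For example, you might list only the columns the page actually renders; everything here besides b.flag and x.date is a hypothetical column name:
SELECT a.id, a.title, x.date -- a.title is a made-up example column
FROM a
LEFT JOIN b ON b.a_id = a.id
AND b.flag = true
LEFT JOIN c ON c.b_id = b.id
LEFT JOIN x ON x.a_id = a.id
ORDER BY x.date DESC
LIMIT 25;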

SQL Query Optimization

I am trying to speed up this Django app (note: I didn't design this... just stuck maintaining it), and the biggest bottleneck seems to be the queries generated by the admin. We have a content class that 4-5 other subclasses inherit from, and anytime the master list is pulled up in the admin, a query like this is generated:
SELECT `content_content`.`id`,
`content_content`.`issue_id`,
`content_content`.`slug`,
`content_content`.`section_id`,
`content_content`.`priority`,
`content_content`.`group_id`,
`content_content`.`rotatable`,
`content_content`.`pub_status`,
`content_content`.`created_on`,
`content_content`.`modified_on`,
`content_content`.`old_pk`,
`content_content`.`content_type_id`,
`content_image`.`content_ptr_id`,
`content_image`.`caption`,
`content_image`.`kicker`,
`content_image`.`pic`,
`content_image`.`crop_x`,
`content_image`.`crop_y`,
`content_image`.`crop_side`,
`content_issue`.`id`,
`content_issue`.`special_issue_name`,
`content_issue`.`web_publish_date`,
`content_issue`.`issue_date`,
`content_issue`.`fm_name`,
`content_issue`.`arts_name`,
`content_issue`.`comments`,
`content_section`.`id`,
`content_section`.`name`,
`content_section`.`audiodizer_id`
FROM `content_image`
INNER JOIN `content_content`
ON `content_image`.`content_ptr_id` = `content_content`.`id`
INNER JOIN `content_issue`
ON `content_content`.`issue_id` = `content_issue`.`id`
INNER JOIN `content_section`
ON `content_content`.`section_id` = `content_section`.`id`
WHERE NOT ( `content_content`.`pub_status` = -1 )
ORDER BY `content_issue`.`issue_date` DESC LIMIT 30
I ran an EXPLAIN on this and got the following:
+----+-------------+-----------------+--------+-------------------------------------------------------------------------------------------------+---------+---------+--------------------------------------+-------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------------+--------+-------------------------------------------------------------------------------------------------+---------+---------+--------------------------------------+-------+---------------------------------+
| 1 | SIMPLE | content_image | ALL | PRIMARY | NULL | NULL | NULL | 40499 | Using temporary; Using filesort |
| 1 | SIMPLE | content_content | eq_ref | PRIMARY,issue_id,content_content_issue_id,content_content_section_id,content_content_pub_status | PRIMARY | 4 | content_image.content_ptr_id | 1 | Using where |
| 1 | SIMPLE | content_section | eq_ref | PRIMARY | PRIMARY | 4 | content_content.section_id | 1 | |
| 1 | SIMPLE | content_issue | eq_ref | PRIMARY | PRIMARY | 4 | content_content.issue_id | 1 | |
+----+-------------+-----------------+--------+-------------------------------------------------------------------------------------------------+---------+---------+--------------------------------------+-------+---------------------------------+
Now, from what I've read, I need to somehow make the access to content_image less terrible; however, I'm drawing a blank on where to start.
Currently, judging by the execution plan, MySQL is starting with content_image, retrieving all rows, and only thereafter using primary keys on the other tables: content_image has a foreign key to content_content, and content_content has foreign keys to content_issue and content_section. Also, only after all the joins are complete can it make much use of the ORDER BY content_issue.issue_date DESC LIMIT 30, since it can't tell which of these joins might fail, and therefore, how many records from content_issue will really be needed before it can get the first thirty rows of output.
So, I would try the following:
Change JOIN content_issue to JOIN (SELECT * FROM content_issue ORDER BY issue_date DESC LIMIT 30) content_issue. This will allow MySQL, if it starts with content_issue and works its way to the other tables, to grab a very small subset of content_issue.
Note: properly speaking, this changes the semantics of the query: it means that only records from at most the last 30 content_issues will be retrieved, and therefore that if some of those issues don't have published contents with images, then fewer than 30 records will be retrieved. I don't have enough information about your data to gauge whether this change of semantics would actually change the results you get.
Also note: I'm not suggesting to remove the ORDER BY content_issue.issue_date DESC LIMIT 30 from the end of the query. I think you want it in both places.
Add an index on content_issue.issue_date, to optimize the above subquery.
Add an index on content_image.content_ptr_id, so MySQL can work its way from content_content to content_image without doing a full table scan.
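Putting the three suggestions together, a sketch might look like this (index names are my own; the SELECT list is abbreviated with .* for brevity):
-- indexes from points 2 and 3
CREATE INDEX content_issue_issue_date_idx ON content_issue (issue_date);
CREATE INDEX content_image_content_ptr_idx ON content_image (content_ptr_id);
-- point 1: pre-limit content_issue to its 30 newest rows
SELECT content_content.*, content_image.*, content_issue.*, content_section.*
FROM content_image
INNER JOIN content_content
ON content_image.content_ptr_id = content_content.id
INNER JOIN (SELECT * FROM content_issue ORDER BY issue_date DESC LIMIT 30) content_issue
ON content_content.issue_id = content_issue.id
INNER JOIN content_section
ON content_content.section_id = content_section.id
WHERE NOT (content_content.pub_status = -1)
ORDER BY content_issue.issue_date DESC LIMIT 30;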