I have the following query which is working correctly, but it is running very poorly. I suspect the issue is the two comparison conditions in the INNER JOIN. Both fields have an index, but the MySQL query optimiser seems to be ignoring them. Here is my query:
EDIT: Changed the query to the one suggested below by Gordon, as it keeps the same results but performs faster. The EXPLAIN output is still not happy though, and is shown below.
SELECT a.id
FROM pc a INNER JOIN
(SELECT correction_value, MAX(seenDate) mxdate
FROM pc FORCE INDEX (IDX_SEENDATE)
WHERE seenDate BETWEEN '2017-03-01' AND '2017-04-01'
GROUP BY correction_value
) b
ON a.correction_value = b.correction_value AND
a.seenDate = b.mxdate INNER JOIN
cameras c
ON c.camera_id = a.camerauid
WHERE c.in_out = 0;
EXPLAIN
+----+-------------+------------+------------+-------+----------------------------------------------+--------------+---------+----------+---------+----------+--------------------------------------------------------+
| id | select_type | table      | partitions | type  | possible_keys                                | key          | key_len | ref      | rows    | filtered | Extra                                                  |
+----+-------------+------------+------------+-------+----------------------------------------------+--------------+---------+----------+---------+----------+--------------------------------------------------------+
|  1 | PRIMARY     | <derived2> | NULL       | ALL   | NULL                                         | NULL         | NULL    | NULL     | 2414394 |      100 | Using where; Using temporary; Using filesort           |
|  1 | PRIMARY     | a          | NULL       | ref   | correction_value,idx_seenDate,fk_camera_idx  | idx_seenDate | 5       | b.mxdate |       1 |      3.8 | Using where                                            |
|  1 | PRIMARY     | c          | NULL       | ALL   | PRIMARY                                      | NULL         | NULL    | NULL     |      41 |     2.44 | Using where; Using join buffer (Block Nested Loop)     |
|  2 | DERIVED     | pc         | NULL       | range | correction_value,idx_seenDate                | idx_seenDate | 5       | NULL     | 2414394 |      100 | Using index condition; Using temporary; Using filesort |
+----+-------------+------------+------------+-------+----------------------------------------------+--------------+---------+----------+---------+----------+--------------------------------------------------------+
How can the query be optimised but still have the same outcome?
Let's start by focusing on the subquery.
SELECT correction_value,
MAX(seenDate) mxdate
FROM pc
WHERE seenDate BETWEEN '2017-03-01' AND '2017-04-01'
GROUP BY correction_value
Please run that twice, with
INDEX sc (seenDate, correction_value)
INDEX cs (correction_value, seenDate)
Please FORCE one index, then the other. Depending on what version of MySQL you are running, one of the indexes will work better than the other.
I think that later versions will prefer "cs" because it can leapfrog through the index very efficiently.
Once you have determined which composite index to use, then remove the FORCE and the unused index, then try the entire query. The same index should do fine for the combined query.
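For concreteness, a minimal sketch of how that experiment might look (the index names sc and cs are placeholders, and the dates are taken from your query):
ALTER TABLE pc
    ADD INDEX sc (seenDate, correction_value),
    ADD INDEX cs (correction_value, seenDate);
-- Time the subquery once with each index forced and compare:
SELECT correction_value, MAX(seenDate) AS mxdate
FROM pc FORCE INDEX (sc)
WHERE seenDate BETWEEN '2017-03-01' AND '2017-04-01'
GROUP BY correction_value;
SELECT correction_value, MAX(seenDate) AS mxdate
FROM pc FORCE INDEX (cs)
WHERE seenDate BETWEEN '2017-03-01' AND '2017-04-01'
GROUP BY correction_value;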
Since your task seems to involve a "groupwise max", I suggest you see if there are performance tips here: http://mysql.rjweb.org/doc.php/groupwise_max
Try this
SELECT
a.id
FROM pc a
INNER JOIN
(SELECT correction_value, MAX(seenDate) mxdate
FROM pc
INNER JOIN cameras ON (cameras.camera_id = pc.camerauid AND cameras.in_out = 0)
WHERE pc.seenDate BETWEEN '2017-03-01' AND '2017-04-01'
GROUP BY correction_value) b ON (a.correction_value = b.correction_value AND a.seenDate = b.mxdate);
Use an index on the pc.seenDate column.
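If such an index does not exist yet (the EXPLAIN above suggests an IDX_SEENDATE already does), creating it might look like this; the index name is only an example:
CREATE INDEX idx_seenDate ON pc (seenDate);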
I would start by writing the query as:
SELECT a.id
FROM pc a INNER JOIN
(SELECT correction_value, MAX(seenDate) mxdate
FROM pc
WHERE seenDate BETWEEN '2017-03-01' AND '2017-04-01'
GROUP BY correction_value
) b
ON a.correction_value = b.correction_value AND
a.seenDate = b.mxdate INNER JOIN
cameras c
ON c.camera_id = a.camerauid
WHERE c.in_out = 0;
Don't use single quotes around the value if `in_out` is a number.
The place to start with this query is to have indexes: pc(seendate, correction_value) and cameras(camera_id, in_out).
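As a hedged sketch, the corresponding DDL could be (index names are mine):
CREATE INDEX idx_pc_seen_corr ON pc (seenDate, correction_value);
CREATE INDEX idx_cameras_id_inout ON cameras (camera_id, in_out);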
There may also be ways to rewrite the query, if that is not sufficient.
An RDBMS uses the output of the first query as an input to the next query. So, if we look at the derived query, it is using a filter, so we can use that as the first query, then join it to pc, then join to the cameras table.
Indexes: the ones mentioned by Gordon Linoff, or pc(id, correction_value, seendate) and cameras(camera_id, in_out).
The final query could be rewritten as below:
SELECT b.id
-- add any other columns you want to show in the output here
FROM
(
  SELECT correction_value, MAX(seenDate) mxdate
  FROM pc
  WHERE seenDate BETWEEN '2017-03-01' AND '2017-04-01'
  GROUP BY correction_value
) a
INNER JOIN pc b
  ON b.correction_value = a.correction_value
  AND b.seenDate = a.mxdate
INNER JOIN cameras c
  ON c.camera_id = b.camerauid
WHERE c.in_out = 0;
From your question it is not clear how the tables are indexed, but in this subquery
(SELECT correction_value, MAX(seenDate) mxdate
FROM pc FORCE INDEX (IDX_SEENDATE)
WHERE seenDate BETWEEN '2017-03-01' AND '2017-04-01'
GROUP BY correction_value
) b
you want to have a composite index on both fields seenDate, correction_value:
CREATE INDEX seenCorr_ndx ON pc (seenDate, correction_value);
(you can drop any index on seenDate alone, and I expect you do not need the FORCE INDEX either).
You may end up needing two composite indexes, one with seenDate first, one with correction_value first.
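If so, the second index would simply reverse the column order, for example:
CREATE INDEX corrSeen_ndx ON pc (correction_value, seenDate);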
Related
I have a select query that selects over 50k records from a MySQL 5.5 database at once, and this amount is expected to grow. The query contains multiple subqueries and takes over 120s to execute.
Initially some of the sale_items and stock tables didn't have more than the ID keys, so I added some more:
SELECT
`p`.`id` AS `id`,
`p`.`Name` AS `Name`,
`p`.`Created` AS `Created`,
`p`.`Image` AS `Image`,
`s`.`company` AS `supplier`,
`s`.`ID` AS `supplier_id`,
`c`.`name` AS `category`,
IFNULL((SELECT
SUM(`stocks`.`Total_Quantity`)
FROM `stocks`
WHERE (`stocks`.`Product_ID` = `p`.`id`)), 0) AS `total_qty`,
IFNULL((SELECT
SUM(`sale_items`.`quantity`)
FROM `sale_items`
WHERE (`sale_items`.`product_id` = `p`.`id`)), 0) AS `total_sold`,
IFNULL((SELECT
SUM(`sale_items`.`quantity`)
FROM `sale_items`
WHERE ((`sale_items`.`product_id` = `p`.`id`) AND `sale_items`.`Sale_ID` IN (SELECT
`refunds`.`Sale_ID`
FROM `refunds`))), 0) AS `total_refund`
FROM ((`products` `p`
LEFT JOIN `cats` `c`
ON ((`c`.`ID` = `p`.`cat_id`)))
LEFT JOIN `suppliers` `s`
ON ((`s`.`ID` = `p`.`supplier_id`)))
This is the explain result
+----+--------------------+------------+----------------+------------------------+------------------------+---------+---------------------------------
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+------------+----------------+------------------------+------------------------+---------+---------------------------------
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 20981 | |
| 2 | DERIVED | p | ALL | NULL | NULL | NULL | NULL | 20934 | |
| 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | p.cat_id | 1 | |
| 2 | DERIVED | s | eq_ref | PRIMARY | PRIMARY | 4 | p.supplier_id | 1 | |
| 5 | DEPENDENT SUBQUERY | sale_items | ref | sales_items_product_id | sales_items_product_id | 5 | p.id | 33 | Using where |
| 6 | DEPENDENT SUBQUERY | refunds | index_subquery | IDX_refunds_sale_id | IDX_refunds_sale_id | 5 | func | 1 | Using index; Using where |
| 4 | DEPENDENT SUBQUERY | sale_items | ref | sales_items_product_id | sales_items_product_id | 5 | p.id | 33 | Using where |
| 3 | DEPENDENT SUBQUERY | stocks | ref | IDX_stocks_product_id | IDX_stocks_product_id | 5 | p.id | 1 | Using where |
+----+--------------------+------------+----------------+------------------------+------------------------+---------+---------------------------------
I am expecting the query to take less than 3s at most, but I can't seem to figure out the best way to optimize this query.
The query looks fine to me. You select all the data and aggregate some of it. This takes time. Your explain plan shows there are indexes on the IDs, which is good. And at first glance there is not much we seem to be able to do here...
What you can do, though, is provide covering indexes, i.e. indexes that contain all columns you need from a table, so the data can be taken from the index directly.
create index idx1 on cats(id, name);
create index idx2 on suppliers(id, company);
create index idx3 on stocks(product_id, total_quantity);
create index idx4 on sale_items(product_id, quantity, sale_id);
This can really boost your query.
What you can try about the query itself is to move the subqueries to the FROM clause. MySQL's optimizer is not great, so although it should produce the same execution plan, it may well be that it favors the FROM clause.
SELECT
p.id,
p.name,
p.created,
p.image,
s.company as supplier,
s.id AS supplier_id,
c.name AS category,
COALESCE(st.total, 0) AS total_qty,
COALESCE(si.total, 0) AS total_sold,
COALESCE(si.refund, 0) AS total_refund
FROM products p
LEFT JOIN cats c ON c.id = p.cat_id
LEFT JOIN suppliers s ON s.id = p.supplier_id
LEFT JOIN
(
  SELECT product_id, SUM(total_quantity) AS total
  FROM stocks
  GROUP BY product_id
) st ON st.product_id = p.id
LEFT JOIN
(
  SELECT
    product_id,
    SUM(quantity) AS total,
    SUM(CASE WHEN sale_id IN (SELECT sale_id FROM refunds) THEN quantity END) AS refund
  FROM sale_items
  GROUP BY product_id
) si ON si.product_id = p.id;
(If sale_id is unique in refunds, then you can even join it to sale_items. Again: this should usually not make a difference, but in MySQL it may still. MySQL was once notorious for treating IN clauses much worse than the FROM clause. This may not be the case anymore, I don't know. You can try - if refunds.sale_id is unique).
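If you want to try that join-based variant, a sketch of what would replace the body of the si derived table might look like this (it assumes refunds.sale_id is unique; otherwise quantities get double-counted):
SELECT si.product_id,
       SUM(si.quantity) AS total,
       SUM(CASE WHEN r.sale_id IS NOT NULL THEN si.quantity END) AS refund
FROM sale_items si
LEFT JOIN refunds r ON r.sale_id = si.sale_id
GROUP BY si.product_id;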
I'm making a query that allows me to order recipes by score.
Tables structure
The structure is that a flyer contains one or many flyer_items, each of which can contain one or many ingredient_to_flyer_item rows (this table links an ingredient to the flyer item). The other table, ingredient_to_recipe, links the same ingredients to one or many recipes. A link to the .sql file is included at the end.
Example query
I want to get recipe_id and a SUM of the MAX price weight of each ingredient that is part of the recipe (linked by ingredient_to_recipe), but if a recipe has multiple ingredients that belong to the same flyer_item, it should only be counted once.
SELECT itr.recipe_id,
SUM(itr.weight),
SUM(max_price_weight),
SUM(itr.weight + max_price_weight) AS score
FROM
( SELECT MAX(itf.max_price_weight) AS max_price_weight,
itf.flyer_item_id,
itf.ingredient_id
FROM
(SELECT ifi.ingredient_id,
MAX(i.price_weight) AS max_price_weight,
ifi.flyer_item_id
FROM flyer_items i
JOIN ingredient_to_flyer_item ifi ON i.id = ifi.flyer_item_id
WHERE i.flyer_id IN (1,
2)
GROUP BY ifi.ingredient_id ) itf
GROUP BY itf.flyer_item_id) itf2
JOIN `ingredient_to_recipe` AS itr ON itf2.`ingredient_id` = itr.`ingredient_id`
WHERE recipe_id = 5730
GROUP BY itr.`recipe_id`
ORDER BY score DESC
LIMIT 0,10
The query almost works fine, because most of the results are good, but for some lines, some ingredients are ignored and not counted toward the score as they should be.
Test cases
| recipe_id | 'score' with current query | what 'score' should be | explanation |
|-----------|----------------------------|------------------------|-----------------------------------------------------------------------------|
| 8376 | 51 | 51 | Good result |
| 3152 | 1 | 18 | Only 1 ingredient having a score of one is counted, should be 4 ingredients |
| 4771 | 41 | 45 | One ingredient worth score 4 is ignored |
| 10230 | 40 | 40 | Good result |
| 8958 | 39 | 39 | Good result |
| 4656 | 28 | 34 | One ingredient worth 6 is ignored |
| 11338 | 1 | 10 | 2 ingredients, worth 4 and 5 are ignored |
I have a very difficult time finding an easy way to explain it. Let me know if anything else could help.
Here is a link to the demo database to run the query, test examples and test cases: https://nofile.io/f/F4YSEu8DWmT/meta.zip
Thank you very much.
Update (as asked by Rick James):
Here is the furthest I could get it to work. The results are always correct, in the subquery too, but I've completely taken out the GROUP BY on flyer_item_id. So with this query I get the right score, except that if several ingredients of a recipe belong to the same flyer_item, they are counted multiple times (for example, the score for recipe_id = 10557 would be 59 instead of the correct 56, because two ingredients worth 3 are in the same flyer_item). The only thing I still need is to count one MAX(price_weight) per flyer_item_id per recipe, which I originally tried to do by grouping by flyer_item_id on top of the first GROUP BY on ingredient_id.
SELECT itr.recipe_id,
SUM(itr.weight) as total_ingredient_weight,
SUM(itf.price_weight) as total_price_weight,
SUM(itr.weight+itf.price_weight) as score
FROM
(SELECT fi1.id, MAX(fi1.price_weight) as price_weight, ingredient_to_flyer_item.ingredient_id as ingredient_id, recipe_id
FROM flyer_items fi1
INNER JOIN (
SELECT flyer_items.id as id, MAX(price_weight) as price_weight, ingredient_to_flyer_item.ingredient_id as ingredient_id
FROM flyer_items
JOIN ingredient_to_flyer_item ON flyer_items.id = ingredient_to_flyer_item.flyer_item_id
GROUP BY id
) fi2 ON fi1.id = fi2.id AND fi1.price_weight = fi2.price_weight
JOIN ingredient_to_flyer_item ON fi1.id = ingredient_to_flyer_item.flyer_item_id
JOIN ingredient_to_recipe ON ingredient_to_flyer_item.ingredient_id = ingredient_to_recipe.ingredient_id
GROUP BY ingredient_to_flyer_item.ingredient_id) AS itf
INNER JOIN `ingredient_to_recipe` AS `itr` ON `itf`.`ingredient_id` = `itr`.`ingredient_id`
GROUP BY `itr`.`recipe_id`
ORDER BY `score` DESC
LIMIT 10
Here is the EXPLAIN, but I'm not sure it's useful as the last working part is still missing:
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | |
|----|-------------|--------------------------|------------|--------|-------------------------------|---------------|---------|-------------------------------------------------------|--------|----------|---------------------------------|---|
| 1 | PRIMARY | itr | NULL | ALL | recipe_id,ingredient_id | NULL | NULL | NULL | 151800 | 100.00 | Using temporary; Using filesort | |
| 1 | PRIMARY | <derived2> | NULL | ref | <auto_key0> | <auto_key0> | 4 | metadata3.itr.ingredient_id | 10 | 100.00 | NULL | |
| 2 | DERIVED | ingredient_to_flyer_item | NULL | ALL | NULL | NULL | NULL | NULL | 249 | 100.00 | Using temporary; Using filesort | |
| 2 | DERIVED | fi1 | NULL | eq_ref | id_2,id,price_weight | id_2 | 4 | metadata3.ingredient_to_flyer_item.flyer_item_id | 1 | 100.00 | NULL | |
| 2 | DERIVED | <derived3> | NULL | ref | <auto_key0> | <auto_key0> | 9 | metadata3.ingredient_to_flyer_item.flyer_item_id,m... | 10 | 100.00 | NULL | |
| 2 | DERIVED | ingredient_to_recipe | NULL | ref | ingredient_id | ingredient_id | 4 | metadata3.ingredient_to_flyer_item.ingredient_id | 40 | 100.00 | NULL | |
| 3 | DERIVED | ingredient_to_flyer_item | NULL | ALL | NULL | NULL | NULL | NULL | 249 | 100.00 | Using temporary; Using filesort | |
| 3 | DERIVED | flyer_items | NULL | eq_ref | id_2,id,flyer_id,price_weight | id_2 | 4 | metadata3.ingredient_to_flyer_item.flyer_item_id | 1 | 100.00 | NULL | |
Update 2
I managed to find a query that works, but now I have to make it faster; it takes over 500ms to run.
SELECT sum(ff.price_weight) as price_weight, sum(ff.weight) as weight, sum(ff.price_weight+ff.weight) as score, ff.recipe_id FROM
(
SELECT DISTINCT
itf.flyer_item_id as flyer_item_id,
itf.recipe_id,
itf.weight,
aprice_weight AS price_weight
FROM
(SELECT itfin.flyer_item_id AS flyer_item_id,
itfin.price_weight AS aprice_weight,
itfin.ingredient_id,
itr.recipe_id,
itr.weight
FROM
(SELECT ifi2.flyer_item_id, ifi2.ingredient_id as ingredient_id, MAX(ifi2.price_weight) as price_weight
FROM
ingredient_to_flyer_item ifi1
INNER JOIN (
SELECT id, MAX(price_weight) as price_weight, ingredient_to_flyer_item.ingredient_id as ingredient_id, ingredient_to_flyer_item.flyer_item_id
FROM ingredient_to_flyer_item
GROUP BY ingredient_id
) ifi2 ON ifi1.price_weight = ifi2.price_weight AND ifi1.ingredient_id = ifi2.ingredient_id
WHERE flyer_id IN (1,2)
GROUP BY ifi1.ingredient_id) AS itfin
INNER JOIN `ingredient_to_recipe` AS `itr` ON `itfin`.`ingredient_id` = `itr`.`ingredient_id`
) AS itf
) ff
GROUP BY recipe_id
ORDER BY `score` DESC
LIMIT 20
Here is the EXPLAIN:
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | |
|----|-------------|--------------------------|------------|-------|----------------------------------------------|---------------|---------|---------------------|------|----------|---------------------------------|---|
| 1 | PRIMARY | <derived2> | NULL | ALL | NULL | NULL | NULL | NULL | 1318 | 100.00 | Using temporary; Using filesort | |
| 2 | DERIVED | <derived4> | NULL | ALL | NULL | NULL | NULL | NULL | 37 | 100.00 | Using temporary | |
| 2 | DERIVED | itr | NULL | ref | ingredient_id | ingredient_id | 4 | itfin.ingredient_id | 35 | 100.00 | NULL | |
| 4 | DERIVED | <derived5> | NULL | ALL | NULL | NULL | NULL | NULL | 249 | 100.00 | Using temporary; Using filesort | |
| 4 | DERIVED | ifi1 | NULL | ref | ingredient_id,itx_full,price_weight,flyer_id | ingredient_id | 4 | ifi2.ingredient_id | 1 | 12.50 | Using where | |
| 5 | DERIVED | ingredient_to_flyer_item | NULL | index | ingredient_id,itx_full | ingredient_id | 4 | NULL | 249 | 100.00 | NULL | |
Sounds like "explode-implode". This is where the query has a JOIN and GROUP BY.
The JOIN gathers the appropriate combinations of rows from the joined tables; then
The GROUP BY COUNTs, SUMs, etc, giving you inflated values for the aggregates.
There are two common fixes, both involve doing the aggregation separate from the JOIN.
Case 1:
SELECT ...
( SELECT SUM(x) FROM t2 WHERE id = ... ) AS sum_x,
...
FROM t1 ...
That case gets clumsy if you need multiple aggregates from t2, since it allows only one at a time.
Case 2:
SELECT ...
FROM ( SELECT grp,
              SUM(x) AS sum_x,
              COUNT(*) AS ct
       FROM t2
       GROUP BY grp ) AS s
JOIN t1 ON t1.grp = s.grp
You have 2 JOINs and 3 GROUP BYs, so I recommend you debug (and rewrite) your query from the inside out.
SELECT ifi.ingredient_id,
MAX(price_weight) as max_price_weight,
flyer_item_id
from flyer_items i
join ingredient_to_flyer_item ifi ON i.id = ifi.flyer_item_id
where flyer_id in (1, 2)
group by ifi.ingredient_id
But I can't help you, since you have not qualified price_weight by the table (or an alias) it is in. (Ditto for some other columns.)
(Actually, MAX and MIN won't get inflated values; AVG will get slightly wrong values; COUNT and SUM get "wrong" values.)
Hence, I will leave the rest as an "exercise for the reader".
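To make that inflation concrete, here is a tiny self-contained demo with made-up numbers, reusing the t1/t2 placeholders from above:
SELECT t1.grp,
       SUM(t1.x) AS sum_x,   -- 20 instead of 10: x was duplicated by the 1:2 join
       MAX(t1.x) AS max_x,   -- still 10: MAX/MIN are immune to the duplication
       SUM(t2.y) AS sum_y    -- 12, which is correct: y really has two rows
FROM (SELECT 1 AS grp, 10 AS x) AS t1
JOIN (SELECT 1 AS grp, 5 AS y UNION ALL SELECT 1, 7) AS t2
  ON t2.grp = t1.grp
GROUP BY t1.grp;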
INDEXes
itr: (ingredient_id, recipe_id) -- for the JOIN and WHERE and GROUP BY
itr: (recipe_id, ingredient_id, weight) -- for 1st Update
(There is no optimization available for the ORDER BY and LIMIT)
flyer_items: (flyer_id, price_weight) -- unless flyer_id is the PRIMARY KEY
ifi: (flyer_item_id, ingredient_id)
ifi: (ingredient_id, flyer_item_id) -- for 1st Update
Please provide `SHOW CREATE TABLE` for the relevant tables.
Please provide `EXPLAIN SELECT ...`.
If ingredient_to_flyer_item is a many:many mapping table, please follow the tips here. Ditto for ingredient_to_recipe?
GROUP BY itf.flyer_item_id is probably invalid since it does not include the non-aggregated ifi.ingredient_id. See "only_full_group_by".
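If you are not sure whether that mode is in effect, you can check your session's setting with:
SELECT @@sql_mode;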
Reformulate
After you finish evaluating the INDEXes, try the following. Caution: I do not know if it will work correctly.
JOIN `ingredient_to_recipe` AS itr ON itf2.`ingredient_id` = itr.`ingredient_id`
to
JOIN ( SELECT recipe_id,
              ingredient_id,
              SUM(weight) AS sum_weight
       FROM ingredient_to_recipe
       GROUP BY recipe_id, ingredient_id ) AS itr
And change the initial SELECT to replace SUMs by these computed sums. (I suspect I have not handled ingredient_id correctly.)
What version of MySQL/MariaDB are you running?
I've been wanting to take a look at this but unfortunately haven't had time until now. I think this query will give you the results you are looking for.
SELECT recipe_id, SUM(weight) AS weight, SUM(max_price_weight) AS price_weight, SUM(weight + max_price_weight) AS score
FROM (SELECT recipe_id, ingredient_id, MAX(weight) AS weight, MAX(price_weight) AS max_price_weight
FROM (SELECT itr.recipe_id, MIN(itr.ingredient_id) AS ingredient_id, MAX(itr.weight) AS weight, fi.id, MAX(fi.price_weight) AS price_weight
FROM ingredient_to_recipe itr
JOIN ingredient_to_flyer_item itfi ON itfi.ingredient_id = itr.ingredient_id
JOIN flyer_items fi ON fi.id = itfi.flyer_item_id
GROUP BY itr.recipe_id, fi.id) ri
GROUP BY recipe_id, ingredient_id) r
GROUP BY recipe_id
ORDER BY score DESC
LIMIT 10
It groups first by flyer_item_id and then on MIN(ingredient_id) to take account of ingredients within a recipe which have the same flyer_item_id. Then it sums the results to get the score you want. If I use the query with a
HAVING recipe_id IN (8376, 3152, 4771, 10230, 8958, 4656, 11338)
clause it gives the following results, which match your "what score should be" column above:
recipe_id weight price_weight score
8376 10 41 51
4771 5 40 45
10230 10 30 40
8958 15 24 39
4656 15 19 34
3152 0 18 18
11338 0 10 10
I'm not sure how fast this query will execute on your system; it's comparable to your query on my laptop (which I would expect to be quite a bit slower). I'm pretty sure there are some optimisations possible but, again, I haven't had time to look into them thoroughly.
I hope this provides you with a bit more help getting to a workable solution.
I'm not sure I fully understood the problem. It seems to me you are grouping by the wrong column flyer_items.id. You should be grouping by the column ingredient_id instead. If you do this, it makes more sense (to me). Here's how I see it:
select
itr.recipe_id,
sum(itr.weight),
sum(max_price_weight),
sum(itr.weight + max_price_weight) as score
from (
select
ifi.ingredient_id,
max(price_weight) as max_price_weight
from flyer_items i
join ingredient_to_flyer_item ifi on i.id = ifi.flyer_item_id
where flyer_id in (1, 2)
group by ifi.ingredient_id
) itf
join `ingredient_to_recipe` as itr on itf.`ingredient_id` = itr.`ingredient_id`
group by itr.`recipe_id`
order by score desc
limit 0,10;
I hope it helps.
Below is the EXPLAIN output for the slow query that produces tens of "Copying to tmp table" entries in the MySQL processlist.
explain SELECT distinct
(radgroupreply.groupname),
count(distinct (radusergroup.username)) AS users
FROM
radgroupreply
LEFT JOIN
radusergroup ON radgroupreply.groupname = radusergroup.groupname
WHERE
(radgroupreply.groupname NOT LIKE 'FB-%' AND radgroupreply.groupname NOT LIKE '%Dropped%')
GROUP BY radgroupreply.groupname
UNION SELECT distinct
(radgroupcheck.groupname),
count(distinct (radusergroup.username))
FROM
radgroupcheck
LEFT JOIN
radusergroup ON radgroupcheck.groupname = radusergroup.groupname
WHERE
(radgroupcheck.groupname NOT LIKE 'FB-%' AND radgroupcheck.groupname NOT LIKE '%Dropped%')
GROUP BY radgroupcheck.groupname
ORDER BY groupname asc;
+----+--------------+---------------+------+---------------+------+---------+------+-------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------+---------------+------+---------------+------+---------+------+-------+----------------------------------------------+
| 1 | PRIMARY | radgroupreply | ALL | NULL | NULL | NULL | NULL | 456 | Using where; Using temporary; Using filesort |
| 1 | PRIMARY | radusergroup | ALL | NULL | NULL | NULL | NULL | 10261 | |
| 2 | UNION | radgroupcheck | ALL | NULL | NULL | NULL | NULL | 167 | Using where; Using temporary; Using filesort |
| 2 | UNION | radusergroup | ALL | NULL | NULL | NULL | NULL | 10261 | |
|NULL| UNION RESULT | <union1,2> | ALL | NULL | NULL | NULL | NULL | NULL | Using filesort |
+----+--------------+---------------+------+---------------+------+---------+------+-------+----------------------------------------------+
5 rows in set (0.00 sec)
I can't get my head around how to create compound or single-column indexes to optimize this query, since it has multiple joins, GROUP BY and LIKE operations.
Here are three observations to get started.
The select distinct is unnecessary (the group by takes care of that).
The left joins are unnecessary (the where clauses turn them into inner joins).
The UNION should probably be UNION ALL. I doubt you really want to incur the overhead of removing duplicates.
So, you can write the query as:
SELECT rr.groupname, count(distinct rg.username) AS users
FROM radgroupreply rr JOIN
radusergroup rg
ON rr.groupname = rg.groupname
WHERE rr.groupname NOT LIKE 'FB-%' AND rr.groupname NOT LIKE '%Dropped%'
GROUP BY rr.groupname
UNION ALL
SELECT rc.groupname, count(rg.username)
FROM radgroupcheck rc JOIN
radusergroup rg
ON rc.groupname = rg.groupname
WHERE rc.groupname NOT LIKE 'FB-%' AND rc.groupname NOT LIKE '%Dropped%'
GROUP BY rc.groupname
ORDER BY groupname asc;
This query can take advantage of indexes on radusergroup(groupname). I am guessing an index on radgroupcheck(groupname) would also be used.
I would also advise you to remove the DISTINCT in COUNT(DISTINCT) if it is not necessary.
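For reference, those indexes could be created like so (index names are mine):
CREATE INDEX idx_radusergroup_groupname ON radusergroup (groupname);
CREATE INDEX idx_radgroupcheck_groupname ON radgroupcheck (groupname);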
I have a query, which is not operating on a lot of data (IMHO) but takes a number of minutes (5-10) to execute and ends up filling the /tmp space (takes up to 20GB) while executing. Once it's finished the space is freed again.
The query is as follows:
SELECT c.name, count(b.id), c.parent_accounting_reference, o.contract,
       a.contact_person, a.address_email, a.address_phone, a.address_fax,
       concat(ifnull(concat(a.description, ', '),''),
              ifnull(concat(a.apt_unit, ', '),''),
              ifnull(concat(a.preamble, ', '),''),
              ifnull(addr_entered,''))
FROM
booking b
join visit v on (b.visit_id = v.id)
join super_booking s on (v.super_booking_id = s.id)
join customer c on (s.customer_id = c.id)
join address a on (a.customer_id = c.id)
join customer_number cn on (cn.customer_numbers_id = c.id)
join number n on (cn.number_id = n.id)
join customer_email ce on (ce.customer_emails_id = c.id)
join email e on (ce.email_id = e.id)
left join organization o on (o.accounting_reference = c.parent_accounting_reference)
left join address_type at on (a.type_id = at.id and at.name_key = 'billing')
where s.company_id = 1
and v.expected_start_date between '2015-01-01 00:00:00' and '2015-02-01 00:00:00'
group by s.customer_id
order by count(b.id) desc
And the explain plan for the same is:
+----+-------------+-------+--------+--------------------------------------------------------------+---------------------+---------+--------------------------------------+-------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+--------------------------------------------------------------+---------------------+---------+--------------------------------------+-------+----------------------------------------------+
| 1 | SIMPLE | s | ref | PRIMARY,FKC4F8739580E01B03,FKC4F8739597AD73B1 | FKC4F8739580E01B03 | 9 | const | 74088 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | ce | ref | FK864C4FFBAF6458E3,customer_emails_id,customer_emails_id_2 | customer_emails_id | 9 | id_dev.s.customer_id | 1 | Using where |
| 1 | SIMPLE | cn | ref | FK530F62CA30E87991,customer_numbers_id,customer_numbers_id_2 | customer_numbers_id | 9 | id_dev.ce.customer_emails_id | 1 | Using where |
| 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 8 | id_dev.s.customer_id | 1 | |
| 1 | SIMPLE | e | eq_ref | PRIMARY | PRIMARY | 8 | id_dev.ce.email_id | 1 | Using index |
| 1 | SIMPLE | n | eq_ref | PRIMARY | PRIMARY | 8 | id_dev.cn.number_id | 1 | Using index |
| 1 | SIMPLE | v | ref | PRIMARY,FK6B04D4BEF4FD9A | FK6B04D4BEF4FD9A | 8 | id_dev.s.id | 1 | Using where |
| 1 | SIMPLE | b | ref | FK3DB0859E1684683 | FK3DB0859E1684683 | 8 | id_dev.v.id | 1 | Using index |
| 1 | SIMPLE | o | ref | org_acct_reference | org_acct_reference | 767 | id_dev.c.parent_accounting_reference | 1 | |
| 1 | SIMPLE | a | ref | FKADDRCUST,customer_address_idx | FKADDRCUST | 9 | id_dev.c.id | 256 | Using where |
| 1 | SIMPLE | at | eq_ref | PRIMARY | PRIMARY | 8 | id_dev.a.type_id | 1 | |
+----+-------------+-------+--------+--------------------------------------------------------------+---------------------+---------+--------------------------------------+-------+----------------------------------------------+
It appears to be using the correct indexes and such, so I can't understand why there is such heavy /tmp usage and such a long execution time.
Your query uses a temporary table, which you can see by the Using temporary; note in the EXPLAIN result. Your MySQL settings are probably configured to use /tmp to store temporary tables.
If you want to optimize the query further, you should probably investigate why the temporary table is needed at all. The best way to do that is to gradually simplify the query until you figure out what is causing it. In this case, it is probably just the number of rows that need to be processed, so if you really do need all this data, you probably need the temp table too. But don't give up on optimizing on my account ;)
By the way, on another note, you might want to look into COALESCE for handling NULL values.
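For instance, a hedged sketch of the address expression from the question rewritten with COALESCE (assuming addr_entered lives on the address table; the full_address alias is mine):
SELECT CONCAT(COALESCE(CONCAT(a.description, ', '), ''),
              COALESCE(CONCAT(a.apt_unit, ', '), ''),
              COALESCE(CONCAT(a.preamble, ', '), ''),
              COALESCE(a.addr_entered, '')) AS full_address
FROM address a;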
You're stuck with a temporary table, because you're doing an aggregate query and then ordering it by one of the results in the aggregate. Your optimizing goal should be to reduce the number of rows and/or columns in that temporary table.
Add an index on visit.expected_start_date. This may help MySQL satisfy your query more quickly, especially if your visit table has many rows that lie outside the date range in your query.
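That might look like this (the index name is mine):
CREATE INDEX idx_visit_expected_start_date ON visit (expected_start_date);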
It looks like you're trying to find the customers with the most bookings in a particular date range.
So, let's start with a subquery to summarize the least amount of material from your database.
SELECT count(*) booking_count, s.customer_id
FROM visit v
JOIN super_booking s ON v.super_booking_id = s.id
JOIN booking b ON v.id = b.visit_id
WHERE v.expected_start_date >= '2015-01-01 00:00:00'
AND v.expected_start_date < '2015-02-01 00:00:00'
AND s.company_id = 1
GROUP BY s.customer_id
This gives back a list of booking counts and customer ids for the date range and company id in question. It will be pretty efficient, especially if you put an index on expected_start_date in the visit table.
Then, let's join that subquery to the one that pulls out all that information you need.
SELECT c.name, booking_count, c.parent_accounting_reference,
o.contract,
a.contact_person, a.address_email, a.address_phone, a.address_fax,
concat(ifnull(concat(a.description, ', '),''),
ifnull(concat(a.apt_unit, ', '),''),
ifnull(concat(a.preamble, ', '),''),
ifnull(addr_entered,''))
FROM (
SELECT count(*) booking_count, s.customer_id
FROM visit v
JOIN super_booking s ON v.super_booking_id = s.id
JOIN booking b ON v.id = b.visit_id
WHERE v.expected_start_date >= '2015-01-01 00:00:00'
AND v.expected_start_date < '2015-02-01 00:00:00'
AND s.company_id = 1
GROUP BY s.customer_id
) top
join customer c on top.customer_id = c.id
join address a on (a.customer_id = c.id)
join customer_number cn on (cn.customer_numbers_id = c.id)
join number n on (cn.number_id = n.id)
join customer_email ce on (ce.customer_emails_id = c.id)
join email e on (ce.email_id = e.id)
left join organization o on (o.accounting_reference = c.parent_accounting_reference)
left join address_type at on (a.type_id = at.id and at.name_key = 'billing')
order by booking_count DESC
That should speed your work up a whole bunch, by reducing the size of the data you need to summarize.
Note: Beware the trap in date BETWEEN this AND that. You really want
date >= this
AND date < that
because BETWEEN means
date >= this
AND date <= that
This is a follow-up from MySQL - Find rows matching all rows from joined table.
Thanks to this site the query runs perfectly.
But now I had to extend the query to search for both artist and track, which has led me to the following query:
SELECT DISTINCT `t`.`id`
FROM `trackwords` AS `tw`
INNER JOIN `wordlist` AS `wl` ON wl.id=tw.wordid
INNER JOIN `track` AS `t` ON tw.trackid=t.id
WHERE (wl.trackusecount>0) AND
(wl.word IN ('please','dont','leave','me')) AND
t.artist IN (
SELECT a.id
FROM artist as a
INNER JOIN `artistalias` AS `aa` ON aa.ref=a.id
WHERE a.name LIKE 'pink%' OR aa.name LIKE 'pink%'
)
GROUP BY tw.trackid
HAVING (COUNT(*) = 4);
The EXPLAIN for this query looks quite good, I think:
+----+--------------------+-------+--------+----------------------------+---------+---------+-----------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+--------+----------------------------+---------+---------+-----------------+------+----------------------------------------------+
| 1 | PRIMARY | wl | range | PRIMARY,word,trackusecount | word | 767 | NULL | 4 | Using where; Using temporary; Using filesort |
| 1 | PRIMARY | tw | ref | wordid,trackid | wordid | 4 | mbdb.wl.id | 31 | |
| 1 | PRIMARY | t | eq_ref | PRIMARY | PRIMARY | 4 | mbdb.tw.trackid | 1 | Using where |
| 2 | DEPENDENT SUBQUERY | aa | ref | ref,name | ref | 4 | func | 2 | |
| 2 | DEPENDENT SUBQUERY | a | eq_ref | PRIMARY,name,namefull | PRIMARY | 4 | func | 1 | Using where |
+----+--------------------+-------+--------+----------------------------+---------+---------+-----------------+------+----------------------------------------------+
Do you see room for optimization? The query has a runtime of around 7 seconds, which is too much, unfortunately. Any suggestions are welcome.
TIA
You have two possible selective conditions here: artists's name and the word list.
Assuming that the words are more selective than artists:
SELECT tw.trackid
FROM (
SELECT tw.trackid
FROM wordlist AS wl
JOIN trackwords AS tw
ON tw.wordid = wl.id
WHERE wl.trackusecount > 0
AND wl.word IN ('please','dont','leave','me')
GROUP BY
tw.trackid
HAVING COUNT(*) = 4
) tw
INNER JOIN
track AS t
ON t.id = tw.trackid
AND EXISTS
(
SELECT NULL
FROM artist a
WHERE a.name LIKE 'pink%'
AND a.id = t.artist
UNION ALL
SELECT NULL
FROM artist a
JOIN artistalias aa
ON aa.ref = a.id
AND aa.name LIKE 'pink%'
WHERE a.id = t.artist
)
You need to have the following indexes for this to be efficient:
wordlist (word, trackusecount)
trackwords (wordid, trackid)
artistalias (ref, name)
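As DDL, that could look like this (index names are mine):
CREATE INDEX ix_wordlist_word_usecount ON wordlist (word, trackusecount);
CREATE INDEX ix_trackwords_word_track ON trackwords (wordid, trackid);
CREATE INDEX ix_artistalias_ref_name ON artistalias (ref, name);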
Have you already indexed the name columns? That should speed this up.
You can also try using full-text search with MATCH ... AGAINST.
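A hedged sketch of that approach (it assumes you can add a FULLTEXT index to artist.name, which requires MyISAM or InnoDB on MySQL 5.6+; note that 'pink*' in BOOLEAN MODE matches word prefixes, so it is close to, but not identical to, LIKE 'pink%'):
ALTER TABLE artist ADD FULLTEXT INDEX ft_artist_name (name);
SELECT id
FROM artist
WHERE MATCH(name) AGAINST ('pink*' IN BOOLEAN MODE);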