I have a report that needs running to satisfy our reporting requirements for a government body. The report is supposed to return the study load for each student in each module for a given period of time.
For example, the report needs to return the students enrolled in a given module for a given intake in a given year and semester, along with a census date (a government-specified date after which the student is liable for the cost of the unit even if they withdraw).
So I've written this MySQL query:
SELECT
e.enrolstudent AS '313',
(SELECT c.ntiscode FROM course c WHERE c.courseid=ec.courseid) AS '307',
e.startdate as '534',
'AOU' as '333',
m.mod_eftsl as '339',
e.enrolmod as '354',
e.census_date as '489',
m.diciplinecode as '464',
(CASE
WHEN m.mode = 'Face to Face' THEN 1
WHEN m.mode = 'Online' THEN 2
WHEN m.mode = 'RPL' THEN 5
ELSE 3
END) AS '329',
'A6090' as '477',
up.citizen AS '358',
vf.maxcontribute as '392',
vf.studentstatus as '490',
vf.total_amount_charged as '384',
vf.amount_paid as '381',
vf.loan_fee as '529',
u.chessn as '488',
m.workexp as '337',
'0' as '390',
m.sumwinschool as '551',
vf.help_debt as '558'
FROM
enrolment e
INNER JOIN enrolcourse AS ec ON ec.studentid=e.enrolstudent
INNER JOIN vetfee AS vf ON vf.userid=e.enrolstudent
INNER JOIN users AS u ON u.userid = e.enrolstudent
INNER JOIN users_personal AS up ON up.userid = e.enrolstudent
INNER JOIN module AS m ON m.modshortname = e.enrolmod
WHERE
e.online_intake in (select oi.intakecode from online_intake oi where STR_TO_DATE(oi.censusdate,'%d-%m-%Y') > '2015-07-01' and STR_TO_DATE(oi.censusdate,'%d-%m-%Y') < '2015-09-31') AND
e.enrolstudent NOT LIKE '%onlinetutor%' AND
e.enrolstudent NOT LIKE '%tes%' AND
e.enrolstudent NOT like '%student%' AND
e.enrolrole = 'student'
ORDER BY e.enrolstudent;
It seems to hang; I've left it running for an hour with no result. There are only 10189 records in the enrolment table, 1538 in enrolcourse, and 650 in module. I don't think it's the number of records; I'm guessing I've just constructed my query wrong, as this is my first time using joins (other than natural joins). Any ideas or tips for improving this would be greatly appreciated.
select count(*) from enrolment;
+----------+
| count(*) |
+----------+
| 10189 |
+----------+
select count(*) from enrolcourse;
+----------+
| count(*) |
+----------+
| 1538 |
+----------+
select count(*) from vetfee;
+----------+
| count(*) |
+----------+
| 1538 |
+----------+
select count(*) from users;
+----------+
| count(*) |
+----------+
| 1249 |
+----------+
select count(*) from users_personal;
+----------+
| count(*) |
+----------+
| 941 |
+----------+
select count(*) from module;
+----------+
| count(*) |
+----------+
| 650 |
+----------+
Here are the results of the EXPLAIN:
+----+--------------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
| 1 | PRIMARY | m | ALL | NULL | NULL | NULL | NULL | 691 | Using temporary; Using filesort |
| 1 | PRIMARY | up | ALL | NULL | NULL | NULL | NULL | 987 | Using join buffer |
| 1 | PRIMARY | u | ALL | NULL | NULL | NULL | NULL | 1180 | Using where; Using join buffer |
| 1 | PRIMARY | ec | ALL | NULL | NULL | NULL | NULL | 1607 | Using where; Using join buffer |
| 1 | PRIMARY | e | ALL | NULL | NULL | NULL | NULL | 10629 | Using where; Using join buffer |
| 1 | PRIMARY | vf | ALL | NULL | NULL | NULL | NULL | 10959 | Using where; Using join buffer |
| 3 | DEPENDENT SUBQUERY | oi | ALL | NULL | NULL | NULL | NULL | 42 | Using where |
| 2 | DEPENDENT SUBQUERY | c | ALL | NULL | NULL | NULL | NULL | 23 | Using where |
+----+--------------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
Get rid of those correlated subqueries. Use a join instead.
Also, use BETWEEN to reduce one STR_TO_DATE call.
Finally, you should look at a way of eliminating all those LIKE calls (one possible approach is sketched at the end of this answer).
SELECT
e.enrolstudent AS '313',
c.ntiscode AS '307',
e.startdate as '534',
'AOU' as '333',
m.mod_eftsl as '339',
e.enrolmod as '354',
e.census_date as '489',
m.diciplinecode as '464',
(CASE
WHEN m.mode = 'Face to Face' THEN 1
WHEN m.mode = 'Online' THEN 2
WHEN m.mode = 'RPL' THEN 5
ELSE 3
END) AS '329',
'A6090' as '477',
up.citizen AS '358',
vf.maxcontribute as '392',
vf.studentstatus as '490',
vf.total_amount_charged as '384',
vf.amount_paid as '381',
vf.loan_fee as '529',
u.chessn as '488',
m.workexp as '337',
'0' as '390',
m.sumwinschool as '551',
vf.help_debt as '558'
FROM
enrolment e
INNER JOIN enrolcourse AS ec ON ec.studentid=e.enrolstudent
INNER JOIN course AS c ON c.courseid = ec.courseid
INNER JOIN vetfee AS vf ON vf.userid=e.enrolstudent
INNER JOIN users AS u ON u.userid = e.enrolstudent
INNER JOIN users_personal AS up ON up.userid = e.enrolstudent
INNER JOIN module AS m ON m.modshortname = e.enrolmod
INNER JOIN online_intake oi ON oi.intakecode = e.online_intake
AND STR_TO_DATE(oi.censusdate, '%d-%m-%Y') BETWEEN '2015-07-01' AND '2015-09-30'
WHERE e.enrolstudent NOT LIKE '%onlinetutor%'
AND e.enrolstudent NOT LIKE '%tes%'
AND e.enrolstudent NOT like '%student%'
AND e.enrolrole = 'student'
ORDER BY e.enrolstudent;
Given your posted EXPLAIN output, you'll also want to add the following indexes:
ALTER TABLE enrolment
ADD INDEX (enrolstudent),
ADD INDEX (enrolmod),
ADD INDEX (online_intake);
ALTER TABLE enrolcourse
ADD INDEX (studentid),
ADD INDEX (courseid);
ALTER TABLE course
ADD INDEX (courseid);
ALTER TABLE vetfee
ADD INDEX (userid);
ALTER TABLE users
ADD INDEX (userid);
ALTER TABLE users_personal
ADD INDEX (userid);
ALTER TABLE module
ADD INDEX (modshortname);
ALTER TABLE online_intake
ADD INDEX (intakecode);
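On those NOT LIKE filters: patterns with a leading wildcard such as '%tes%' can never use an index, so every remaining row has to be checked against each pattern. One possible way around them, purely a sketch (the excluded_accounts table below is hypothetical, not part of your schema), is to keep the test/tutor account ids in a small table and anti-join against it:
-- Hypothetical table listing the account ids to exclude from reporting.
CREATE TABLE excluded_accounts (
    enrolstudent VARCHAR(100) PRIMARY KEY
);
-- Then, instead of the three NOT LIKE conditions:
SELECT ...
FROM enrolment e
LEFT JOIN excluded_accounts x ON x.enrolstudent = e.enrolstudent
WHERE x.enrolstudent IS NULL
  AND e.enrolrole = 'student';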
Related
I have a select query that selects over 50k records from a MySQL 5.5 database at once, and this amount is expected to grow. The query contains multiple subqueries and takes over 120s to execute.
Initially the sale_items and stocks tables didn't have more than the ID keys, so I added some more:
SELECT
`p`.`id` AS `id`,
`p`.`Name` AS `Name`,
`p`.`Created` AS `Created`,
`p`.`Image` AS `Image`,
`s`.`company` AS `supplier`,
`s`.`ID` AS `supplier_id`,
`c`.`name` AS `category`,
IFNULL((SELECT
SUM(`stocks`.`Total_Quantity`)
FROM `stocks`
WHERE (`stocks`.`Product_ID` = `p`.`id`)), 0) AS `total_qty`,
IFNULL((SELECT
SUM(`sale_items`.`quantity`)
FROM `sale_items`
WHERE (`sale_items`.`product_id` = `p`.`id`)), 0) AS `total_sold`,
IFNULL((SELECT
SUM(`sale_items`.`quantity`)
FROM `sale_items`
WHERE ((`sale_items`.`product_id` = `p`.`id`) AND `sale_items`.`Sale_ID` IN (SELECT
`refunds`.`Sale_ID`
FROM `refunds`))), 0) AS `total_refund`
FROM ((`products` `p`
LEFT JOIN `cats` `c`
ON ((`c`.`ID` = `p`.`cat_id`)))
LEFT JOIN `suppliers` `s`
ON ((`s`.`ID` = `p`.`supplier_id`)))
This is the explain result
+----+--------------------+------------+----------------+------------------------+------------------------+---------+---------------------------------
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+------------+----------------+------------------------+------------------------+---------+---------------------------------
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 20981 | |
| 2 | DERIVED | p | ALL | NULL | NULL | NULL | NULL | 20934 | |
| 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | p.cat_id | 1 | |
| 2 | DERIVED | s | eq_ref | PRIMARY | PRIMARY | 4 | p.supplier_id | 1 | |
| 5 | DEPENDENT SUBQUERY | sale_items | ref | sales_items_product_id | sales_items_product_id | 5 | p.id | 33 | Using where |
| 6 | DEPENDENT SUBQUERY | refunds | index_subquery | IDX_refunds_sale_id | IDX_refunds_sale_id | 5 | func | 1 | Using index; Using where |
| 4 | DEPENDENT SUBQUERY | sale_items | ref | sales_items_product_id | sales_items_product_id | 5 | p.id | 33 | Using where |
| 3 | DEPENDENT SUBQUERY | stocks | ref | IDX_stocks_product_id | IDX_stocks_product_id | 5 | p.id | 1 | Using where |
+----+--------------------+------------+----------------+------------------------+------------------------+---------+---------------------------------
I am expecting the query to take less than 3s at most, but I can't seem to figure out the best way to optimize it.
The query looks fine to me. You select all data and aggregate some of it. This takes time. Your explain plan shows there are indexes on the IDs, which is good. And at first glance there is not much we seem to be able to do here...
What you can do, though, is provide covering indexes, i.e. indexes that contain all columns you need from a table, so the data can be taken from the index directly.
create index idx1 on cats(id, name);
create index idx2 on suppliers(id, company);
create index idx3 on stocks(product_id, total_quantity);
create index idx4 on sale_items(product_id, quantity, sale_id);
This can really boost your query.
What you can try about the query itself is to move the subqueries to the FROM clause. MySQL's optimizer is not great, so although it should produce the same execution plan, it may well be that it favors the FROM clause.
SELECT
p.id,
p.name,
p.created,
p.image,
s.company as supplier,
s.id AS supplier_id,
c.name AS category,
COALESCE(st.total, 0) AS total_qty,
COALESCE(si.total, 0) AS total_sold,
COALESCE(si.refund, 0) AS total_refund
FROM products p
LEFT JOIN cats c ON c.id = p.cat_id
LEFT JOIN suppliers s ON s.id = p.supplier_id
LEFT JOIN
(
SELECT product_id, SUM(total_quantity) AS total
FROM stocks
GROUP BY product_id
) st ON st.product_id = p.id
LEFT JOIN
(
SELECT
product_id,
SUM(quantity) AS total,
SUM(CASE WHEN sale_id IN (SELECT sale_id FROM refunds) THEN quantity END) as refund
FROM sale_items
GROUP BY product_id
) si ON si.product_id = p.id;
(If sale_id is unique in refunds, then you can even join it to sale_items. Again: this should usually not make a difference, but in MySQL it may still. MySQL was once notorious for treating IN clauses much worse than the FROM clause. This may not be the case anymore, I don't know. You can try - if refunds.sale_id is unique).
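For example, that second derived table could be written without the IN subquery. This is only a sketch and assumes refunds.sale_id is unique; if it is not, the join would duplicate sale_items rows and inflate the sums:
SELECT
    si.product_id,
    SUM(si.quantity) AS total,
    SUM(CASE WHEN r.sale_id IS NOT NULL THEN si.quantity END) AS refund
FROM sale_items si
LEFT JOIN refunds r ON r.sale_id = si.sale_id
GROUP BY si.product_id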
I'm making a query that allows me to order recipes by score.
Tables structure
Structure is that a flyer contains one or many flyer_items, which can contain one or many ingredient_to_flyer_item rows (this table links an ingredient to the flyer item). The other table, ingredient_to_recipe, links the same ingredients but to one or many recipes. A link to the .sql file is included at the end.
Example query
I want to get recipe_id and a SUM of the MAX price weight of each ingredient that is part of the recipe (linked by ingredient_to_recipe), but if a recipe has multiple ingredients that belong to the same flyer_items row, they should be counted only once.
SELECT itr.recipe_id,
SUM(itr.weight),
SUM(max_price_weight),
SUM(itr.weight + max_price_weight) AS score
FROM
( SELECT MAX(itf.max_price_weight) AS max_price_weight,
itf.flyer_item_id,
itf.ingredient_id
FROM
(SELECT ifi.ingredient_id,
MAX(i.price_weight) AS max_price_weight,
ifi.flyer_item_id
FROM flyer_items i
JOIN ingredient_to_flyer_item ifi ON i.id = ifi.flyer_item_id
WHERE i.flyer_id IN (1,
2)
GROUP BY ifi.ingredient_id ) itf
GROUP BY itf.flyer_item_id) itf2
JOIN `ingredient_to_recipe` AS itr ON itf2.`ingredient_id` = itr.`ingredient_id`
WHERE recipe_id = 5730
GROUP BY itr.`recipe_id`
ORDER BY score DESC
LIMIT 0,10
The query almost works: most of the results are good, but for some rows some ingredients are ignored and not counted toward the score as they should be.
Test cases
| recipe_id | 'score' with current query | what 'score' should be | explanation |
|-----------|----------------------------|------------------------|-----------------------------------------------------------------------------|
| 8376 | 51 | 51 | Good result |
| 3152 | 1 | 18 | Only 1 ingredient having a score of one is counted, should be 4 ingredients |
| 4771 | 41 | 45 | One ingredient worth score 4 is ignored |
| 10230 | 40 | 40 | Good result |
| 8958 | 39 | 39 | Good result |
| 4656 | 28 | 34 | One ingredient worth 6 is ignored |
| 11338 | 1 | 10 | 2 ingredients, worth 4 and 5 are ignored |
I have a very difficult time finding an easy way to explain it. Let me know if anything else could help.
Here is a link to the demo database to run the query, test examples and test cases: https://nofile.io/f/F4YSEu8DWmT/meta.zip
Thank you very much.
Update (as asked by Rick James):
Here is the furthest I could get it to work. The results are always good, in the subquery too, but I've completely taken out the GROUP BY on flyer_item_id. So with this query I get the right score, but if several ingredients of the recipe belong to the same flyer_item, they are counted multiple times (e.g. the score would be 59 for recipe_id = 10557 instead of the correct 56, because 2 ingredients worth 3 are in the same flyer_item). The only thing I still need is to count one MAX(price_weight) per flyer_item_id per recipe, which I originally tried by grouping by flyer_item_id over the first GROUP BY on ingredient_id.
SELECT itr.recipe_id,
SUM(itr.weight) as total_ingredient_weight,
SUM(itf.price_weight) as total_price_weight,
SUM(itr.weight+itf.price_weight) as score
FROM
(SELECT fi1.id, MAX(fi1.price_weight) as price_weight, ingredient_to_flyer_item.ingredient_id as ingredient_id, recipe_id
FROM flyer_items fi1
INNER JOIN (
SELECT flyer_items.id as id, MAX(price_weight) as price_weight, ingredient_to_flyer_item.ingredient_id as ingredient_id
FROM flyer_items
JOIN ingredient_to_flyer_item ON flyer_items.id = ingredient_to_flyer_item.flyer_item_id
GROUP BY id
) fi2 ON fi1.id = fi2.id AND fi1.price_weight = fi2.price_weight
JOIN ingredient_to_flyer_item ON fi1.id = ingredient_to_flyer_item.flyer_item_id
JOIN ingredient_to_recipe ON ingredient_to_flyer_item.ingredient_id = ingredient_to_recipe.ingredient_id
GROUP BY ingredient_to_flyer_item.ingredient_id) AS itf
INNER JOIN `ingredient_to_recipe` AS `itr` ON `itf`.`ingredient_id` = `itr`.`ingredient_id`
GROUP BY `itr`.`recipe_id`
ORDER BY `score` DESC
LIMIT 10
Here is the explain, but I'm not sure it's useful as the last working part is still missing:
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | |
|----|-------------|--------------------------|------------|--------|-------------------------------|---------------|---------|-------------------------------------------------------|--------|----------|---------------------------------|---|
| 1 | PRIMARY | itr | NULL | ALL | recipe_id,ingredient_id | NULL | NULL | NULL | 151800 | 100.00 | Using temporary; Using filesort | |
| 1 | PRIMARY | <derived2> | NULL | ref | <auto_key0> | <auto_key0> | 4 | metadata3.itr.ingredient_id | 10 | 100.00 | NULL | |
| 2 | DERIVED | ingredient_to_flyer_item | NULL | ALL | NULL | NULL | NULL | NULL | 249 | 100.00 | Using temporary; Using filesort | |
| 2 | DERIVED | fi1 | NULL | eq_ref | id_2,id,price_weight | id_2 | 4 | metadata3.ingredient_to_flyer_item.flyer_item_id | 1 | 100.00 | NULL | |
| 2 | DERIVED | <derived3> | NULL | ref | <auto_key0> | <auto_key0> | 9 | metadata3.ingredient_to_flyer_item.flyer_item_id,m... | 10 | 100.00 | NULL | |
| 2 | DERIVED | ingredient_to_recipe | NULL | ref | ingredient_id | ingredient_id | 4 | metadata3.ingredient_to_flyer_item.ingredient_id | 40 | 100.00 | NULL | |
| 3 | DERIVED | ingredient_to_flyer_item | NULL | ALL | NULL | NULL | NULL | NULL | 249 | 100.00 | Using temporary; Using filesort | |
| 3 | DERIVED | flyer_items | NULL | eq_ref | id_2,id,flyer_id,price_weight | id_2 | 4 | metadata3.ingredient_to_flyer_item.flyer_item_id | 1 | 100.00 | NULL | |
Update 2
I managed to find a query that works, but now I have to make it faster; it takes over 500ms to run.
SELECT sum(ff.price_weight) as price_weight, sum(ff.weight) as weight, sum(ff.price_weight+ff.weight) as score, ff.recipe_id FROM
(
SELECT DISTINCT
itf.flyer_item_id as flyer_item_id,
itf.recipe_id,
itf.weight,
aprice_weight AS price_weight
FROM
(SELECT itfin.flyer_item_id AS flyer_item_id,
itfin.price_weight AS aprice_weight,
itfin.ingredient_id,
itr.recipe_id,
itr.weight
FROM
(SELECT ifi2.flyer_item_id, ifi2.ingredient_id as ingredient_id, MAX(ifi2.price_weight) as price_weight
FROM
ingredient_to_flyer_item ifi1
INNER JOIN (
SELECT id, MAX(price_weight) as price_weight, ingredient_to_flyer_item.ingredient_id as ingredient_id, ingredient_to_flyer_item.flyer_item_id
FROM ingredient_to_flyer_item
GROUP BY ingredient_id
) ifi2 ON ifi1.price_weight = ifi2.price_weight AND ifi1.ingredient_id = ifi2.ingredient_id
WHERE flyer_id IN (1,2)
GROUP BY ifi1.ingredient_id) AS itfin
INNER JOIN `ingredient_to_recipe` AS `itr` ON `itfin`.`ingredient_id` = `itr`.`ingredient_id`
) AS itf
) ff
GROUP BY recipe_id
ORDER BY `score` DESC
LIMIT 20
Here is the EXPLAIN:
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | |
|----|-------------|--------------------------|------------|-------|----------------------------------------------|---------------|---------|---------------------|------|----------|---------------------------------|---|
| 1 | PRIMARY | <derived2> | NULL | ALL | NULL | NULL | NULL | NULL | 1318 | 100.00 | Using temporary; Using filesort | |
| 2 | DERIVED | <derived4> | NULL | ALL | NULL | NULL | NULL | NULL | 37 | 100.00 | Using temporary | |
| 2 | DERIVED | itr | NULL | ref | ingredient_id | ingredient_id | 4 | itfin.ingredient_id | 35 | 100.00 | NULL | |
| 4 | DERIVED | <derived5> | NULL | ALL | NULL | NULL | NULL | NULL | 249 | 100.00 | Using temporary; Using filesort | |
| 4 | DERIVED | ifi1 | NULL | ref | ingredient_id,itx_full,price_weight,flyer_id | ingredient_id | 4 | ifi2.ingredient_id | 1 | 12.50 | Using where | |
| 5 | DERIVED | ingredient_to_flyer_item | NULL | index | ingredient_id,itx_full | ingredient_id | 4 | NULL | 249 | 100.00 | NULL | |
Sounds like "explode-implode". This is where the query has a JOIN and GROUP BY.
The JOIN gathers the appropriate combinations of rows from the joined tables; then
The GROUP BY COUNTs, SUMs, etc., giving you inflated values for the aggregates.
There are two common fixes, both involve doing the aggregation separate from the JOIN.
Case 1:
SELECT ...
( SELECT SUM(x) FROM t2 WHERE id = ... ) AS sum_x,
...
FROM t1 ...
That case gets clumsy if you need multiple aggregates from t2, since it allows only one at a time.
Case 2:
SELECT ...
FROM ( SELECT grp,
SUM(x) AS sum_x,
COUNT(*) AS ct
FROM t2
GROUP BY grp ) AS s
JOIN t1 ON t1.grp = s.grp
You have 2 JOINs and 3 GROUP BYs, so I recommend you debug (and rewrite) your query from the inside out.
SELECT ifi.ingredient_id,
MAX(price_weight) as max_price_weight,
flyer_item_id
from flyer_items i
join ingredient_to_flyer_item ifi ON i.id = ifi.flyer_item_id
where flyer_id in (1, 2)
group by ifi.ingredient_id
But I can't help you, since you have not qualified price_weight by the table (or an alias) it is in. (Ditto for some other columns.)
(Actually, MAX and MIN won't get inflated values; AVG will get slightly wrong values; COUNT and SUM get "wrong" values.)
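To make that concrete, here is a tiny self-contained illustration of the inflation (toy tables, nothing to do with the schema in the question):
-- Toy tables: items holds the values to sum; notes is a second table that matches twice.
CREATE TABLE items (grp INT, x INT);
CREATE TABLE notes (grp INT, txt VARCHAR(10));
INSERT INTO items VALUES (1, 10), (1, 20);   -- the true SUM(x) for grp 1 is 30
INSERT INTO notes VALUES (1, 'a'), (1, 'b'); -- two rows join to each items row
-- Joining before aggregating duplicates every items row,
-- so SUM comes back doubled (60) while MAX is still correct (20).
SELECT i.grp, SUM(i.x) AS inflated_sum, MAX(i.x) AS correct_max
FROM items i
JOIN notes n ON n.grp = i.grp
GROUP BY i.grp;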
Hence, I will leave the rest as an "exercise for the reader".
INDEXes
itr: (ingredient_id, recipe_id) -- for the JOIN and WHERE and GROUP BY
itr: (recipe_id, ingredient_id, weight) -- for 1st Update
(There is no optimization available for the ORDER BY and LIMIT)
flyer_items: (flyer_id, price_weight) -- unless flyer_id is the PRIMARY KEY
ifi: (flyer_item_id, ingredient_id)
ifi: (ingredient_id, flyer_item_id) -- for 1st Update
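Expressed as DDL, those suggestions would look something like the following sketch (itr and ifi are the aliases ingredient_to_recipe and ingredient_to_flyer_item from the question's queries, and the "unless flyer_id is the PRIMARY KEY" caveat above still applies):
ALTER TABLE ingredient_to_recipe
    ADD INDEX (ingredient_id, recipe_id),
    ADD INDEX (recipe_id, ingredient_id, weight);
ALTER TABLE flyer_items
    ADD INDEX (flyer_id, price_weight);
ALTER TABLE ingredient_to_flyer_item
    ADD INDEX (flyer_item_id, ingredient_id),
    ADD INDEX (ingredient_id, flyer_item_id);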
Please provide `SHOW CREATE TABLE` for the relevant tables.
Please provide EXPLAIN SELECT ....
If ingredient_to_flyer_item is a many:many mapping table, please follow the tips here. Ditto for ingredient_to_recipe?
GROUP BY itf.flyer_item_id is probably invalid since it does not include the non-aggregated ifi.ingredient_id. See "only_full_group_by".
Reformulate
After you finish evaluating the INDEXes, try the following. Caution: I do not know if it will work correctly.
JOIN `ingredient_to_recipe` AS itr ON itf2.`ingredient_id` = itr.`ingredient_id`
to
JOIN ( SELECT recipe_id,
ingredient_id,
SUM(weight) AS sum_weight
FROM ingredient_to_recipe
GROUP BY recipe_id, ingredient_id ) AS itr
And change the initial SELECT to replace SUMs by these computed sums. (I suspect I have not handled ingredient_id correctly.)
What version of MySQL/MariaDB are you running?
I've been wanting to take a look at this but unfortunately haven't had time until now. I think this query will give you the results you are looking for.
SELECT recipe_id, SUM(weight) AS weight, SUM(max_price_weight) AS price_weight, SUM(weight + max_price_weight) AS score
FROM (SELECT recipe_id, ingredient_id, MAX(weight) AS weight, MAX(price_weight) AS max_price_weight
FROM (SELECT itr.recipe_id, MIN(itr.ingredient_id) AS ingredient_id, MAX(itr.weight) AS weight, fi.id, MAX(fi.price_weight) AS price_weight
FROM ingredient_to_recipe itr
JOIN ingredient_to_flyer_item itfi ON itfi.ingredient_id = itr.ingredient_id
JOIN flyer_items fi ON fi.id = itfi.flyer_item_id
GROUP BY itr.recipe_id, fi.id) ri
GROUP BY recipe_id, ingredient_id) r
GROUP BY recipe_id
ORDER BY score DESC
LIMIT 10
It groups first by flyer_item_id and then on MIN(ingredient_id) to take account of ingredients within a recipe which have the same flyer_item_id. Then it sums the results to get the score you want. If I use the query with a
HAVING recipe_id IN (8376, 3152, 4771, 10230, 8958, 4656, 11338)
clause it gives the following results, which match your "what score should be" column above:
recipe_id weight price_weight score
8376 10 41 51
4771 5 40 45
10230 10 30 40
8958 15 24 39
4656 15 19 34
3152 0 18 18
11338 0 10 10
I'm not sure how fast this query will execute on your system; it's comparable to your query on my laptop (which I would expect to be quite a bit slower). I'm pretty sure some optimisations are possible but, again, I haven't had time to look into them thoroughly.
I hope this provides you with a bit more help getting to a workable solution.
I'm not sure I fully understood the problem. It seems to me you are grouping by the wrong column flyer_items.id. You should be grouping by the column ingredient_id instead. If you do this, it makes more sense (to me). Here's how I see it:
select
itr.recipe_id,
sum(itr.weight),
sum(max_price_weight),
sum(itr.weight + max_price_weight) as score
from (
select
ifi.ingredient_id,
max(price_weight) as max_price_weight
from flyer_items i
join ingredient_to_flyer_item ifi on i.id = ifi.flyer_item_id
where flyer_id in (1, 2)
group by ifi.ingredient_id
) itf
join `ingredient_to_recipe` as itr on itf.`ingredient_id` = itr.`ingredient_id`
group by itr.`recipe_id`
order by score desc
limit 0,10;
I hope it helps.
I have a query, which is not operating on a lot of data (IMHO) but takes a number of minutes (5-10) to execute and ends up filling the /tmp space (takes up to 20GB) while executing. Once it's finished the space is freed again.
The query is as follows:
SELECT c.name, count(b.id), c.parent_accounting_reference, o.contract, a.contact_person, a.address_email, a.address_phone, a.address_fax, concat(ifnull(concat(a.description, ', '),''), ifnull(concat(a.apt_unit, ', '),''), ifnull(concat(a.preamble, ', '),''), ifnull(addr_entered,'')) FROM
booking b
join visit v on (b.visit_id = v.id)
join super_booking s on (v.super_booking_id = s.id)
join customer c on (s.customer_id = c.id)
join address a on (a.customer_id = c.id)
join customer_number cn on (cn.customer_numbers_id = c.id)
join number n on (cn.number_id = n.id)
join customer_email ce on (ce.customer_emails_id = c.id)
join email e on (ce.email_id = e.id)
left join organization o on (o.accounting_reference = c.parent_accounting_reference)
left join address_type at on (a.type_id = at.id and at.name_key = 'billing')
where s.company_id = 1
and v.expected_start_date between '2015-01-01 00:00:00' and '2015-02-01 00:00:00'
group by s.customer_id
order by count(b.id) desc
And the explain plan for the same is:
+----+-------------+-------+--------+--------------------------------------------------------------+---------------------+---------+--------------------------------------+-------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+--------------------------------------------------------------+---------------------+---------+--------------------------------------+-------+----------------------------------------------+
| 1 | SIMPLE | s | ref | PRIMARY,FKC4F8739580E01B03,FKC4F8739597AD73B1 | FKC4F8739580E01B03 | 9 | const | 74088 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | ce | ref | FK864C4FFBAF6458E3,customer_emails_id,customer_emails_id_2 | customer_emails_id | 9 | id_dev.s.customer_id | 1 | Using where |
| 1 | SIMPLE | cn | ref | FK530F62CA30E87991,customer_numbers_id,customer_numbers_id_2 | customer_numbers_id | 9 | id_dev.ce.customer_emails_id | 1 | Using where |
| 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 8 | id_dev.s.customer_id | 1 | |
| 1 | SIMPLE | e | eq_ref | PRIMARY | PRIMARY | 8 | id_dev.ce.email_id | 1 | Using index |
| 1 | SIMPLE | n | eq_ref | PRIMARY | PRIMARY | 8 | id_dev.cn.number_id | 1 | Using index |
| 1 | SIMPLE | v | ref | PRIMARY,FK6B04D4BEF4FD9A | FK6B04D4BEF4FD9A | 8 | id_dev.s.id | 1 | Using where |
| 1 | SIMPLE | b | ref | FK3DB0859E1684683 | FK3DB0859E1684683 | 8 | id_dev.v.id | 1 | Using index |
| 1 | SIMPLE | o | ref | org_acct_reference | org_acct_reference | 767 | id_dev.c.parent_accounting_reference | 1 | |
| 1 | SIMPLE | a | ref | FKADDRCUST,customer_address_idx | FKADDRCUST | 9 | id_dev.c.id | 256 | Using where |
| 1 | SIMPLE | at | eq_ref | PRIMARY | PRIMARY | 8 | id_dev.a.type_id | 1 | |
+----+-------------+-------+--------+--------------------------------------------------------------+---------------------+---------+--------------------------------------+-------+----------------------------------------------+
It appears to be using the correct indexes, so I can't understand the large /tmp usage and the long execution time.
Your query uses a temporary table, which you can see by the Using temporary; note in the EXPLAIN result. Your MySQL settings are probably configured to use /tmp to store temporary tables.
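If you want to see where those on-disk temporary tables are written and how large a temp table may grow in memory before spilling to disk, these server variables are worth a look (a diagnostic sketch, not a fix):
SHOW VARIABLES LIKE 'tmpdir';              -- directory used for on-disk temporary tables
SHOW VARIABLES LIKE 'tmp_table_size';      -- in-memory temporary table size limit
SHOW VARIABLES LIKE 'max_heap_table_size'; -- effective in-memory limit is the smaller of the two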
If you want to optimize the query further, you should probably investigate why the temporary table is needed at all. The best way to do that is to gradually simplify the query until you figure out what is causing it. In this case it is probably just the number of rows that need to be processed, so if you really do need all this data, you probably need the temp table too. But don't give up on optimizing on my account ;)
On another note, you might want to look into COALESCE for handling the NULL values.
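For instance, the address expression could use COALESCE directly, or CONCAT_WS, which skips NULL arguments. The CONCAT_WS form is only a sketch: it drops the trailing ', ' that the original keeps when the later fields are NULL, so it is not a byte-for-byte replacement.
-- COALESCE version (same behaviour as the IFNULL chain):
CONCAT(COALESCE(CONCAT(a.description, ', '), ''),
       COALESCE(CONCAT(a.apt_unit, ', '), ''),
       COALESCE(CONCAT(a.preamble, ', '), ''),
       COALESCE(addr_entered, ''))
-- CONCAT_WS version (skips NULLs, no manual separators):
CONCAT_WS(', ', a.description, a.apt_unit, a.preamble, addr_entered)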
You're stuck with a temporary table, because you're doing an aggregate query and then ordering it by one of the results in the aggregate. Your optimizing goal should be to reduce the number of rows and/or columns in that temporary table.
Add an index on visit.expected_start_date. This may help MySQL satisfy your query more quickly, especially if your visit table has many rows that lie outside the date range in your query.
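Something along these lines (the index name is arbitrary):
ALTER TABLE visit ADD INDEX idx_visit_expected_start_date (expected_start_date);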
It looks like you're trying to find the customers with the most bookings in a particular date range.
So, let's start with a subquery to summarize the least amount of material from your database.
SELECT count(*) booking_count, s.customer_id
FROM visit v
JOIN super_booking s ON v.super_booking_id = s.id
JOIN booking b ON v.id = b.visit_id
WHERE v.expected_start_date >= '2015-01-01 00:00:00'
AND v.expected_start_date < '2015-02-01 00:00:00'
AND s.company_id = 1
GROUP BY s.customer_id
This gives back a list of booking counts and customer ids for the date range and company id in question. It will be pretty efficient, especially if you put an index on expected_start_date in the visit table.
Then, let's join that subquery to the one that pulls out all that information you need.
SELECT c.name, booking_count, c.parent_accounting_reference,
o.contract,
a.contact_person, a.address_email, a.address_phone, a.address_fax,
concat(ifnull(concat(a.description, ', '),''),
ifnull(concat(a.apt_unit, ', '),''),
ifnull(concat(a.preamble, ', '),''),
ifnull(addr_entered,''))
FROM (
SELECT count(*) booking_count, s.customer_id
FROM visit v
JOIN super_booking s ON v.super_booking_id = s.id
JOIN booking b ON v.id = b.visit_id
WHERE v.expected_start_date >= '2015-01-01 00:00:00'
AND v.expected_start_date < '2015-02-01 00:00:00'
AND s.company_id = 1
GROUP BY s.customer_id
) top
join customer c on top.customer_id = c.id
join address a on (a.customer_id = c.id)
join customer_number cn on (cn.customer_numbers_id = c.id)
join number n on (cn.number_id = n.id)
join customer_email ce on (ce.customer_emails_id = c.id)
join email e on (ce.email_id = e.id)
left join organization o on (o.accounting_reference = c.parent_accounting_reference)
left join address_type at on (a.type_id = at.id and at.name_key = 'billing')
order by booking_count DESC
That should speed your work up a whole bunch, by reducing the size of the data you need to summarize.
Note: Beware the trap in date BETWEEN this AND that. You really want
date >= this
AND date < that
because BETWEEN means
date >= this
AND date <= that
I have a MySQL table called daily_measurements with the following structure:
+------------+----------+------+-----+---------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+----------+------+-----+---------------------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| user_id | int(11) | NO | | 0 | |
| date | datetime | NO | MUL | 0000-00-00 00:00:00 | |
| weight | float | NO | | 0 | |
| bicep | float | NO | | 0 | |
| chest | float | NO | | 0 | |
| waist | float | NO | | 0 | |
| neck | float | NO | | 0 | |
| thigh | float | NO | | 0 | |
| hips | float | NO | | 0 | |
| shoulders | float | NO | | 0 | |
| knee | float | NO | | 0 | |
| ankle | float | NO | | 0 | |
| created_on | datetime | NO | | 0000-00-00 00:00:00 | |
+------------+----------+------+-----+---------------------+----------------+
I need to retrieve a list of every user's weight for their first and last entries.
I've tried various combinations of GROUP BY, MIN(date), MAX(date), etc. but I can't seem to figure out a way to do it efficiently.
The only way I've been able to get this to work is the following query on the users table, with 2 subqueries, but since there are approx. 30,000 users and > 200,000 measurements the query chokes up pretty badly.
SELECT u.id,
(SELECT weight FROM daily_measurements WHERE user_id = u.id ORDER BY date ASC limit 1) as starting_weight,
(SELECT weight FROM daily_measurements WHERE user_id = u.id ORDER BY date DESC limit 1) as ending_weight
FROM users u
Any help would be appreciated.
My solution:
SELECT
u1.user_id,
u2.first_entry_weight,
u1.weight AS last_entry_weight
FROM daily_measurements u1
INNER JOIN (SELECT
u1.user_id,
u1.weight AS first_entry_weight,
u2.fe,
u2.le
FROM daily_measurements u1
INNER JOIN (SELECT
daily_measurements.user_id,
MIN(date) fe,
MAX(date) le
FROM daily_measurements
GROUP BY daily_measurements.user_id) u2
ON u1.user_id = u2.user_id
AND u1.date = u2.fe) u2
ON u1.user_id = u2.user_id
AND u1.date = u2.le
I cannot test it or its performance at the moment, but I think you can start from the following query:
SELECT
u.id,
SUBSTRING_INDEX( GROUP_CONCAT(CAST(d.weight AS CHAR) ORDER BY d.date ASC ), ',', 1 ) as starting_weight,
SUBSTRING_INDEX( GROUP_CONCAT(CAST(d.weight AS CHAR) ORDER BY d.date DESC), ',', 1 ) as ending_weight
FROM users as u
LEFT JOIN daily_measurements as d ON (u.id = d.user_id)
GROUP BY u.id
Edit: please treat this as a suggestion for your query...
With that many users, a JOIN could be hundreds of times faster than two SELECT subqueries.
SELECT A.user_id,
B.weight InitialWeight,
B.`date` InitialDate,
C.weight LatestWeight,
C.`date` LatestDate
FROM
(
SELECT user_id,MIN(id) idmin,MAX(id) idmax
FROM daily_measurements GROUP BY user_id
) A
INNER JOIN daily_measurements B ON (A.user_id=B.user_id AND A.idmin = B.id)
INNER JOIN daily_measurements C ON (A.user_id=C.user_id AND A.idmax = C.id);
Please make sure you have an index like this
ALTER TABLE daily_measurements ADD UNIQUE INDEX userid_id_ndx (user_id,id);
Try this:
select tb.* from daily_measurements tb
join (
select user_id, MIN(date) firstDate, MAX(date) lastDate
from daily_measurements
group by user_id
) temp
on tb.user_id = temp.user_id
and (tb.date = temp.firstDate or tb.date = temp.lastDate)
The subquery identifies the first-date and last-date rows for each user_id, and the main query then fetches those rows again to get all the data.
This is a follow-up to MySQL - Find rows matching all rows from joined table.
Thanks to this site the query runs perfectly.
But now I had to extend the query to search by both artist and track. This has led me to the following query:
SELECT DISTINCT `t`.`id`
FROM `trackwords` AS `tw`
INNER JOIN `wordlist` AS `wl` ON wl.id=tw.wordid
INNER JOIN `track` AS `t` ON tw.trackid=t.id
WHERE (wl.trackusecount>0) AND
(wl.word IN ('please','dont','leave','me')) AND
t.artist IN (
SELECT a.id
FROM artist as a
INNER JOIN `artistalias` AS `aa` ON aa.ref=a.id
WHERE a.name LIKE 'pink%' OR aa.name LIKE 'pink%'
)
GROUP BY tw.trackid
HAVING (COUNT(*) = 4);
The EXPLAIN for this query looks quite good, I think:
+----+--------------------+-------+--------+----------------------------+---------+---------+-----------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+--------+----------------------------+---------+---------+-----------------+------+----------------------------------------------+
| 1 | PRIMARY | wl | range | PRIMARY,word,trackusecount | word | 767 | NULL | 4 | Using where; Using temporary; Using filesort |
| 1 | PRIMARY | tw | ref | wordid,trackid | wordid | 4 | mbdb.wl.id | 31 | |
| 1 | PRIMARY | t | eq_ref | PRIMARY | PRIMARY | 4 | mbdb.tw.trackid | 1 | Using where |
| 2 | DEPENDENT SUBQUERY | aa | ref | ref,name | ref | 4 | func | 2 | |
| 2 | DEPENDENT SUBQUERY | a | eq_ref | PRIMARY,name,namefull | PRIMARY | 4 | func | 1 | Using where |
+----+--------------------+-------+--------+----------------------------+---------+---------+-----------------+------+----------------------------------------------+
Do you see room for optimization? The query has a runtime of around 7 seconds, which is too much, unfortunately. Any suggestions are welcome.
TIA
You have two possible selective conditions here: the artist's name and the word list.
Assuming that the words are more selective than artists:
SELECT tw.trackid
FROM (
SELECT tw.trackid
FROM wordlist AS wl
JOIN trackwords AS tw
ON tw.wordid = wl.id
WHERE wl.trackusecount > 0
AND wl.word IN ('please','dont','leave','me')
GROUP BY
tw.trackid
HAVING COUNT(*) = 4
) tw
INNER JOIN
track AS t
ON t.id = tw.trackid
AND EXISTS
(
SELECT NULL
FROM artist a
WHERE a.name LIKE 'pink%'
AND a.id = t.artist
UNION ALL
SELECT NULL
FROM artist a
JOIN artistalias aa
ON aa.ref = a.id
AND aa.name LIKE 'pink%'
WHERE a.id = t.artist
)
You need to have the following indexes for this to be efficient:
wordlist (word, trackusecount)
trackwords (wordid, trackid)
artistalias (ref, name)
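As DDL, that would be something like the following (index names are up to you):
ALTER TABLE wordlist    ADD INDEX idx_word_usecount (word, trackusecount);
ALTER TABLE trackwords  ADD INDEX idx_wordid_trackid (wordid, trackid);
ALTER TABLE artistalias ADD INDEX idx_ref_name (ref, name);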
Have you already indexed the name columns? That should speed this up.
You can also try using full-text searching with MATCH ... AGAINST.
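A rough sketch of the full-text route, assuming MyISAM or InnoDB on MySQL 5.6+ (prefix searches like 'pink%' become a boolean-mode wildcard):
-- Add full-text indexes on the name columns (hypothetical index names).
ALTER TABLE artist      ADD FULLTEXT INDEX ft_artist_name (name);
ALTER TABLE artistalias ADD FULLTEXT INDEX ft_alias_name (name);
-- The artist filter could then become:
SELECT a.id
FROM artist AS a
INNER JOIN artistalias AS aa ON aa.ref = a.id
WHERE MATCH(a.name) AGAINST ('pink*' IN BOOLEAN MODE)
   OR MATCH(aa.name) AGAINST ('pink*' IN BOOLEAN MODE);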