Why is this MySQL query slow?

I have the following query; all relevant columns are indexed correctly (MySQL version 5.0.8). The query takes forever:
SELECT COUNT(*) FROM `members` `t` WHERE t.member_type NOT IN (1,2)
AND ( SELECT end_date FROM subscriptions s
WHERE s.sub_auth_id = t.member_auth_id AND s.sub_status = 'Completed'
AND s.sub_pkg_id > 0 ORDER BY s.id DESC LIMIT 1 ) < curdate( )
EXPLAIN output:
----+--------------------+-------+-------+-----------------------------------------------------------------+---------+---------+------+------+-------------
 id | select_type        | table | type  | possible_keys                                                   | key     | key_len | ref  | rows | Extra
----+--------------------+-------+-------+-----------------------------------------------------------------+---------+---------+------+------+-------------
  1 | PRIMARY            | t     | ALL   | membership_type                                                  | NULL    | NULL    | NULL | 9610 | Using where
  2 | DEPENDENT SUBQUERY | s     | index | subscription_auth_id, subscription_pkg_id, subscription_status  | PRIMARY | 4       | NULL | 1    | Using where
----+--------------------+-------+-------+-----------------------------------------------------------------+---------+---------+------+------+-------------
Why?

Your subselect refers to values in the parent query. This is known as a correlated (dependent) subquery, and such a query has to be executed once for every row in the parent query, which often leads to poor performance. It is often faster to rewrite the query as a JOIN, for example like this
(Note: without a sample schema to test with, it is impossible to say in advance if this will be faster and still correct, you might need to adjust it a little):
SELECT COUNT(*) FROM members t
LEFT JOIN (
    SELECT latest.member_id, sdate.end_date
    FROM (
        SELECT sub_auth_id as member_id, max(id) as sid FROM subscriptions
        WHERE sub_status = 'Completed'
        AND sub_pkg_id > 0
        GROUP BY sub_auth_id
    ) latest
    LEFT JOIN (
        SELECT id AS subid, end_date FROM subscriptions
        WHERE sub_status = 'Completed'
        AND sub_pkg_id > 0
    ) sdate ON sid = subid
) sub ON sub.member_id = t.member_auth_id
WHERE t.member_type NOT IN (1,2)
AND sub.end_date < curdate( )
The logic here is:
For each member, find his latest subscription.
For each latest subscription, find its end date.
Join these member/latest-sub-end-date pairs to the members list.
Filter the list.

Your query is slow because, as written, you are considering 9,610 rows and therefore executing the SELECT subquery in your WHERE clause 9,610 times. You should rewrite your query to JOIN the members and subscriptions tables first; your WHERE conditions can then be applied to the joined result.
EDIT: Try this.
SELECT COUNT(*)
FROM `members` `t`
JOIN subscriptions s ON (s.sub_auth_id = t.member_auth_id)
WHERE t.member_type NOT IN (1,2)
AND s.sub_status = 'Completed'
AND s.sub_pkg_id > 0
AND s.end_date < curdate()

Caveat: I'm not a MySQL expert, though I'm pretty good in a different SQL flavour (VFP). I believe you will save some time if:
You count just one field, let's say memberid, instead of *.
Your comparison NOT IN (1,2) is replaced with > 2 (provided that is valid).
The ORDER BY in your subselect is unnecessary, I think. You're trying to get the last completed subscription?
The < curdate() should be inside your subselect's WHERE.
(SELECT end_date FROM subscriptions s
WHERE s.end_date < curdate() and s.sub_auth_id = t.member_auth_id AND
s.sub_status = 'Completed' AND s.sub_pkg_id > 0 ORDER BY s.id DESC LIMIT 1 )
Tune your subselect so as to trim down the set as quickly as possible. The first conditional should be the one least likely to occur.

I ended up doing it like this:
select count(*) from members t
JOIN subscriptions s ON s.sub_auth_id = t.member_auth_id
WHERE t.membership_type > 2 AND s.sub_status = 'Completed' AND s.sub_pkg_id > 0
AND s.sub_end_date < curdate( )
AND s.id = (SELECT MAX(ss.id) FROM subscriptions ss WHERE ss.sub_auth_id = t.member_auth_id)
I believe that the problem is due to a bug that won't be fixed until MySQL 6.


MySQL subquery much faster than join

I have the following queries which both return the same result and row count:
select * from (
select UNIX_TIMESTAMP(network_time) * 1000 as epoch_network_datetime,
hbrl.business_rule_id,
display_advertiser_id,
hbrl.campaign_id,
truncate(sum(coalesce(hbrl.ad_spend_network, 0))/100000.0, 2) as demand_ad_spend_network,
sum(coalesce(hbrl.ad_view, 0)) as demand_ad_view,
sum(coalesce(hbrl.ad_click, 0)) as demand_ad_click,
truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else 100*sum(hbrl.ad_click)/sum(hbrl.ad_view) end, 0), 2) as ctr_percent,
truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else sum(hbrl.ad_spend_network)/100.0/sum(hbrl.ad_view) end, 0), 2) as ecpm,
truncate(coalesce(case when sum(hbrl.ad_click) = 0 then 0 else sum(hbrl.ad_spend_network)/100000.0/sum(hbrl.ad_click) end, 0), 2) as ecpc
from hourly_business_rule_level hbrl
where (publisher_network_id = 31534)
and network_time between str_to_date('2017-08-13 17:00:00.000000', '%Y-%m-%d %H:%i:%S.%f') and str_to_date('2017-08-14 16:59:59.999000', '%Y-%m-%d %H:%i:%S.%f')
and (network_time IS NOT NULL and display_advertiser_id > 0)
group by network_time, hbrl.campaign_id, hbrl.business_rule_id
having demand_ad_spend_network > 0
OR demand_ad_view > 0
OR demand_ad_click > 0
OR ctr_percent > 0
OR ecpm > 0
OR ecpc > 0
order by epoch_network_datetime) as atb
left join dim_demand demand on atb.display_advertiser_id = demand.advertiser_dsp_id
and atb.campaign_id = demand.campaign_id
and atb.business_rule_id = demand.business_rule_id
I ran EXPLAIN EXTENDED, and these are the results:
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 1451739 | 100.00 | NULL |
| 1 | PRIMARY | demand | ref | PRIMARY,join_index | PRIMARY | 4 | atb.campaign_id | 1 | 100.00 | Using where |
| 2 | DERIVED | hourly_business_rule_level | ALL | _hourly_business_rule_level_supply_idx,_hourly_business_rule_level_demand_idx | NULL | NULL | NULL | 1494447 | 97.14 | Using where; Using temporary; Using filesort |
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+
and the other is:
select UNIX_TIMESTAMP(network_time) * 1000 as epoch_network_datetime,
hbrl.business_rule_id,
display_advertiser_id,
hbrl.campaign_id,
truncate(sum(coalesce(hbrl.ad_spend_network, 0))/100000.0, 2) as demand_ad_spend_network,
sum(coalesce(hbrl.ad_view, 0)) as demand_ad_view,
sum(coalesce(hbrl.ad_click, 0)) as demand_ad_click,
truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else 100*sum(hbrl.ad_click)/sum(hbrl.ad_view) end, 0), 2) as ctr_percent,
truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else sum(hbrl.ad_spend_network)/100.0/sum(hbrl.ad_view) end, 0), 2) as ecpm,
truncate(coalesce(case when sum(hbrl.ad_click) = 0 then 0 else sum(hbrl.ad_spend_network)/100000.0/sum(hbrl.ad_click) end, 0), 2) as ecpc
from hourly_business_rule_level hbrl
join dim_demand demand on hbrl.display_advertiser_id = demand.advertiser_dsp_id
and hbrl.campaign_id = demand.campaign_id
and hbrl.business_rule_id = demand.business_rule_id
where (publisher_network_id = 31534)
and network_time between str_to_date('2017-08-13 17:00:00.000000', '%Y-%m-%d %H:%i:%S.%f') and str_to_date('2017-08-14 16:59:59.999000', '%Y-%m-%d %H:%i:%S.%f')
and (network_time IS NOT NULL and display_advertiser_id > 0)
group by network_time, hbrl.campaign_id, hbrl.business_rule_id
having demand_ad_spend_network > 0
OR demand_ad_view > 0
OR demand_ad_click > 0
OR ctr_percent > 0
OR ecpm > 0
OR ecpc > 0
order by epoch_network_datetime;
and these are the results for the second query:
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+
| 1 | SIMPLE | hourly_business_rule_level | ALL | _hourly_business_rule_level_supply_idx,_hourly_business_rule_level_demand_idx | NULL | NULL | NULL | 1494447 | 97.14 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | demand | ref | PRIMARY,join_index | PRIMARY | 4 | my6sense_datawarehouse.hourly_business_rule_level.campaign_id | 1 | 100.00 | Using where; Using index |
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+
The first one takes about 2 seconds while the second one takes over 2 minutes!
Why is the second query taking so long?
What am I missing here?
Thanks.
Use a subquery whenever the subquery significantly shrinks the number of rows before any JOIN. This reinforces Rick James's Plan B below, and Paul's answer as well; both deserve acceptance.
One possible reason is the number of rows that have to be joined with the second table.
The GROUP BY clause and the HAVING clause will limit the number of rows returned from your subquery.
Only those rows will be used for the join.
Without the subquery only the WHERE clause is limiting the number of rows for the JOIN.
The JOIN is done before the GROUP BY and HAVING clauses are processed.
Depending on group size and the selectivity of the HAVING conditions, there can be many more rows that need to be joined.
Consider the following simplified example:
We have a table users with 1000 entries and the columns id, email.
create table users(
id smallint auto_increment primary key,
email varchar(50) unique
);
Then we have a (huge) log table user_actions with 1,000,000 entries and the columns id, user_id, timestamp, action
create table user_actions(
id mediumint auto_increment primary key,
user_id smallint not null,
timestamp timestamp,
action varchar(50),
index (timestamp, user_id)
);
The task is to find all users who have at least 900 entries in the log table since 2017-02-01.
The subquery solution:
select a.user_id, a.cnt, u.email
from (
select a.user_id, count(*) as cnt
from user_actions a
where a.timestamp >= '2017-02-01 00:00:00'
group by a.user_id
having cnt >= 900
) a
left join users u on u.id = a.user_id
The subquery returns 135 rows (users). Only those rows will be joined with the users table.
The subquery runs in about 0.375 seconds. The time needed for the join is almost zero, so the full query runs in about 0.375 seconds.
Solution without subquery:
select a.user_id, count(*) as cnt, u.email
from user_actions a
left join users u on u.id = a.user_id
where a.timestamp >= '2017-02-01 00:00:00'
group by a.user_id
having cnt >= 900
The WHERE condition filters the table to 866,081 rows.
The JOIN has to be done for all those 866K rows.
After the JOIN the GROUP BY and the HAVING clauses are processed and limit the result to 135 rows.
This query needs about 0.815 seconds.
So you can already see that a subquery can improve performance.
But let's make things worse and drop the primary key in the users table.
This way we have no index which can be used for the JOIN.
Now the first query runs in 0.455 seconds. The second query needs 40 seconds - almost 100 times slower.
Notes
It's difficult to say if the same applies to your case. Reasons are:
Your queries are quite complex and far away from being an MVCE.
I don't see anything being selected from the demand table, so it's unclear why you are joining it at all.
You use a LEFT JOIN in one query and an INNER JOIN in the other.
The relation between the two tables is unclear.
No information about indexes. You should provide the CREATE statements (SHOW CREATE TABLE table_name).
Test setup
drop table if exists users;
create table users(
id smallint auto_increment primary key,
email varchar(50) unique
)
select seq as id, rand(1) as email
from seq_1_to_1000
;
drop table if exists user_actions;
create table user_actions(
id mediumint auto_increment primary key,
user_id smallint not null,
timestamp timestamp,
action varchar(50),
index (timestamp, user_id)
)
select seq as id
, floor(rand(2)*1000)+1 as user_id
#, '2017-01-01 00:00:00' + interval seq*20 second as timestamp
, from_unixtime(unix_timestamp('2017-01-01 00:00:00') + seq*20) as timestamp
, rand(3) as action
from seq_1_to_1000000
;
MariaDB 10.0.19 with sequence plugin.
The queries are different. One says JOIN, the other says LEFT JOIN. You are not using demand, so the join is probably useless. However, in the case of JOIN, you are filtering out advertisers that are not in dim_demand; is that the intent?
But that does not address the question.
The EXPLAINs estimate that there are 1.5M rows in hbrl. But how many show up in the result? I would guess it is a lot fewer. From this, I can answer your question.
Consider these two:
SELECT ... FROM ( SELECT ... FROM a
GROUP BY or HAVING or LIMIT ) x
JOIN b
SELECT ... FROM a
JOIN b
GROUP BY or HAVING or LIMIT
The first will decrease the number of rows that need to join to b; the second will need to do a full 1.5M joins. I suspect that the time taken to do the JOIN (be it LEFT or not) is where the difference is.
Plan A: Remove demand from the query.
Plan B: Use a subquery whenever the subquery significantly shrinks the number of rows before the JOIN.
Indexing (may speed up both variants):
INDEX(publisher_network_id, network_time)
and get rid of this as being useless (since the between will fail anyway for NULL):
and network_time IS NOT NULL
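In ALTER TABLE form, the index suggestion above would look something like this (the index name pub_time_idx is just illustrative):
ALTER TABLE hourly_business_rule_level
    ADD INDEX pub_time_idx (publisher_network_id, network_time);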
Side note: I recommend simplifying and fixing this
and network_time
between str_to_date('2017-08-13 17:00:00.000000', '%Y-%m-%d %H:%i:%S.%f')
AND str_to_date('2017-08-14 16:59:59.999000', '%Y-%m-%d %H:%i:%S.%f')
to
and network_time >= '2017-08-13 17:00:00'
and network_time <  '2017-08-13 17:00:00' + INTERVAL 24 HOUR

SQL: Previous Column empty when setting AVG()

Ok, I am a bit of a noob when it comes to SQL (in fact very much so), so I apologize if this is self-evident.
I am trying to find out 3 things from my database (the table is a log of every message sent):
Total Reply Time
Total # of Replies that were Under 10 Mins
Average Reply Time
Here is my SQL:
SELECT
*, SUM(case when tmp.reply_time <= 10 then 1 else 0 end) as under_10_mins,
COUNT(tmp.reply_time) AS total_replies
FROM
(SELECT
TIMESTAMPDIFF(MINUTE, `date`, reply_date) as reply_time
FROM
tme_email_staff_reply sr
JOIN
tme_user u
ON
u.id = sr.staff_id
JOIN
tme_email_message m
ON
m.id = sr.message_id
WHERE
`reply_date` >= '2017-04-01 00:00:00'
AND
`reply_date` < '2017-04-27 00:00:00'
)
AS tmp
Which outputs:
| reply_time | under_10_mins | total_replies |
| 106 | 165 | 375 |
Now, when I add in:
SELECT
*, SUM(case when tmp.reply_time <= 10 then 1 else 0 end) as under_10_mins,
COUNT(tmp.reply_time) AS total_replies
FROM
(SELECT
TIMESTAMPDIFF(MINUTE, `date`, reply_date) as reply_time,
(AVG(TIMESTAMPDIFF(SECOND, `date`, reply_date))/60) AS average_reply_time
FROM
tme_email_staff_reply sr
JOIN
tme_user u
ON
u.id = sr.staff_id
JOIN
tme_email_message m
ON
m.id = sr.message_id
WHERE
`reply_date` >= '2017-04-01 00:00:00'
AND
`reply_date` < '2017-04-27 00:00:00'
)
AS tmp
my response is:
| reply_time | average_reply_time |under_10_mins | total_replies |
| 106 | 149.08626667 | 0 | 1 |
As you can see, the under_10_mins and total_replies fields have changed.
Schema for tables linked:
tme_email_staff_reply:
id | staff_id | message_id | reply_date |
1 | 234,221,001 | 15fg16d5dgw2 | 2017-04-01 09:34:16 |
tme_user
id | username | password | email | dob | gender |
// data omited
tme_email_message
id | thread_id | From | To | subject | message | message_id
// data omited
Can anyone tell me why this is so, and how to fix it?
Why is this so?
Let's look at AVG():
AVG([DISTINCT] expr)
Returns the average value of expr. The DISTINCT option can be used to return the average of the distinct values of expr.
If there are no matching rows, AVG() returns NULL.
And the documentation in 13.19.1, Aggregate (GROUP BY) Function Descriptions, also says:
If you use a group function in a statement containing no GROUP BY clause, it is equivalent to grouping on all rows. For more information, see Section 13.19.3, “MySQL Handling of GROUP BY”.
This means that in your subquery you used AVG() without GROUP BY, so it averages over all rows and the subquery returns a single row.
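A minimal throwaway demonstration of that collapse (the table demo_avg is made up for illustration):
CREATE TABLE demo_avg (x INT);
INSERT INTO demo_avg VALUES (2), (4), (6);

SELECT AVG(x) FROM demo_avg;
-- one row: 4.0000

SELECT x, AVG(x) FROM demo_avg;
-- still one row; with ONLY_FULL_GROUP_BY disabled (the pre-5.7.5 default),
-- the value of x is chosen arbitrarily from the collapsed group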
How to fix it?
I think you should move the AVG() from the subquery to the outer query:
SELECT
SUM(case when tmp.reply_time <= 10 then 1 else 0 end) as under_10_mins,
COUNT(tmp.reply_time) AS total_replies,
AVG(average_reply_time) AS average_reply_time
FROM
(SELECT
TIMESTAMPDIFF(MINUTE, `date`, reply_date) as reply_time,
(TIMESTAMPDIFF(SECOND, `date`, reply_date))/60 AS average_reply_time
FROM
tme_email_staff_reply sr
JOIN
tme_user u
ON
u.id = sr.staff_id
JOIN
tme_email_message m
ON
m.id = sr.message_id
WHERE
`reply_date` >= '2017-04-01 00:00:00'
AND
`reply_date` < '2017-04-27 00:00:00'
)
AS tmp
The issue is that, in your nested query, you are referring to nonaggregated columns alongside an aggregate function (with no GROUP BY clause, this means grouping over all rows) on a MySQL version under 5.7.5. See the documentation, and notice that: The server is free to choose any value from each group.
MySQL < 5.7.5 allows this syntax but has special behaviour (your case):
MySQL extends the standard SQL use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the GROUP BY are the same for each group. The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause. Result set sorting occurs after values have been chosen, and ORDER BY does not affect which values within each group the server chooses.
MySQL >= 5.7.5 allows this syntax and checks for functional dependence:
MySQL 5.7.5 and up implements detection of functional dependence. If the ONLY_FULL_GROUP_BY SQL mode is enabled (which it is by default), MySQL rejects queries for which the select list, HAVING condition, or ORDER BY list refer to nonaggregated columns that are neither named in the GROUP BY clause nor are functionally dependent on them.
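As an aside, on 5.7.5+ you can check whether that mode is active and, where an arbitrary per-group value really is acceptable, say so explicitly with ANY_VALUE():
SELECT @@sql_mode;
-- look for ONLY_FULL_GROUP_BY in the returned list

SELECT ANY_VALUE(staff_id), COUNT(*)
FROM tme_email_staff_reply;
-- ANY_VALUE() suppresses the ONLY_FULL_GROUP_BY rejection for that column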

Query is too slow when there are no results. How to improve it?

I have three tables
filters (id, name)
items(item_id, name)
items_filters(item_id, filter_id, value_id)
values(id, filter_id, filter_value)
about 20000 entries in items.
about 80000 entries in items_filters.
SELECT i.*
FROM items_filters itf INNER JOIN items i ON i.item_id = itf.item_id
WHERE (itf.filter_id = 1 AND itf.value_id = '1')
OR (itf.filter_id = 2 AND itf.value_id = '7')
GROUP BY itf.item_id
WITH ROLLUP
HAVING COUNT(*) = 2
LIMIT 0,10;
It takes 0.008s when there are entries that match the query, and 0.05s when no entries match.
I tried different variations before:
SELECT * FROM items WHERE item_id IN (
SELECT `item_id`
FROM `items_filters`
WHERE (`filter_id`='1' AND `value_id`=1)
OR (`filter_id`='2' AND `value_id`=7)
GROUP BY `item_id`
HAVING COUNT(*) = 2
) LIMIT 0,6;
This completely freezes mysql when there are no entries.
What I really don't get is that
SELECT i.*
FROM items_filters itf INNER JOIN items i ON i.item_id = itf.item_id
WHERE itf.filter_id = 1 AND itf.value_id = '1' LIMIT 0,1
takes ~0.05s when no entries are found and ~0.008s when there are.
Explain
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | i | ALL | PRIMARY | NULL | NULL | NULL | 10 | Using temporary; Using filesort |
| 1 | SIMPLE | itf | ref | item_id | item_id | 4 | ss_stylet.i.item_id | 1 | Using where; Using index |
Aside from ensuring an index exists on items_filters covering both (filter_id, value_id), I would prequalify your item IDs up front with a GROUP BY, THEN join to the items table. It looks like you are trying to find an item that meets two specific conditions, and for those, grab the items...
I've also left the "group by with rollup" in the outer, even though there will be a single instance per ID returned from the inner query. But since the inner query is already applying the limit of 0,10 records, it's not throwing too many results to be joined to your items table.
However, since you are not doing any aggregates, I believe the outer group by and rollup are not really going to provide you any benefit and could otherwise be removed.
SELECT i.*
FROM
( select itf.item_id
from items_filters itf
WHERE (itf.filter_id = 1 AND itf.value_id = '1')
OR (itf.filter_id = 2 AND itf.value_id = '7')
GROUP BY itf.item_id
HAVING COUNT(*) = 2
LIMIT 0, 10 ) PreQualified
JOIN items i
ON PreQualified.item_id = i.item_id
Another approach MIGHT be to do a JOIN on the inner query so you don't even need to apply a group by and having. Since you are explicitly looking for exactly two items, I would then try the following. This way, the first qualifier is it MUST have an entry of the ID = 1 and value = '1'. If it doesn't even hit THAT entry, it would never CARE about the second. Then, by applying a join to the same table (aliased itf2), it has to find on that same ID -- AND the conditions for the second (id = 2 value = '7'). This basically forces a look almost like a single pass against the one entry FIRST and foremost before CONSIDERING anything else. That would STILL result in your limited set of 10 before getting item details.
SELECT i.*
FROM
( select itf.item_id
from items_filters itf
join items_filters itf2
on itf.item_id = itf2.item_id
AND itf2.filter_id = 2
AND itf2.value_id = '7'
WHERE
itf.filter_id = 1 AND itf.value_id = '1'
LIMIT 0, 10 ) PreQualified
JOIN items i
ON PreQualified.item_id = i.item_id
I also removed the group by / with rollup as per your comment of duplicates (which is what I expected).
That looks like four tables to me.
Run EXPLAIN on the query and look for a full table scan (type = ALL). If you see one, add indexes on the columns in the WHERE clauses. Those will certainly help.
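A sketch of that advice against this schema (the index name filter_value_idx is illustrative, and assumes no equivalent index already exists):
EXPLAIN SELECT itf.item_id
FROM items_filters itf
WHERE (itf.filter_id = 1 AND itf.value_id = '1')
   OR (itf.filter_id = 2 AND itf.value_id = '7');

ALTER TABLE items_filters ADD INDEX filter_value_idx (filter_id, value_id);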

Optimizing unexplainably slow MySQL query

I'm losing hair over a stupid query. First, let me explain its goal. I have a set of values fetched every hour and stored in the DB. These values can increase or stay equal over time. This query extracts the latest value day by day for the latest 60 days (I have twin queries for extracting the latest value by week and by month; they are similar). The query is self-explanatory:
SELECT l.value AS value
FROM atable AS l
WHERE l.time = (
SELECT MAX(m.time)
FROM atable AS m
WHERE DATE(l.time) = DATE(m.time)
LIMIT 1
)
ORDER BY l.time DESC
LIMIT 60
It looks like nothing special, but it's extremely slow (> 30 secs), considering time is indexed and the table contains fewer than 5000 rows. And I'm sure the problem is with the subquery.
Where is the noob mistake?
Update 1: Same situation if I avoid MAX() using SELECT m.time ... ORDER BY m.time DESC.
Update 2: It seems it is not a problem with the DATE() function being called too many times. I've tried to create a calculated field day DATE. The UPDATE atable SET day = DATE(time) runs in less than 2 secs. The modified query, with l.day = m.day (no functions!), runs in exactly the same time as before.
The main issue I see is using DATE() on the indexed column on the left of the expression in the WHERE clause. Wrapping the left side of the comparison in DATE() prevents MySQL from using an index on that field; instead, it must scan all rows to apply the function to each one.
Instead of this:
WHERE DATE(l.time) = DATE(m.time)
Try something like this:
WHERE l.time BETWEEN
DATE_SUB(m.time, INTERVAL TIME_TO_SEC(m.time) SECOND)
AND DATE_ADD(DATE_SUB(m.time, INTERVAL TIME_TO_SEC(m.time) SECOND), INTERVAL 86399 SECOND)
Maybe you know of a better way to turn m.time into a range like 2012-02-09 00:00:00 and 2012-02-09 23:59:59 than the above example, but the idea is that you want to keep the left side of the expression as the raw column name, l.time in this case, and give it a range in the form of two constants (or two expressions that can be converted to constants) on the right side.
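One such simpler rewrite of the range (a sketch; the point is that the left side stays the raw l.time column so an index on it remains usable):
WHERE l.time >= DATE(m.time)
  AND l.time <  DATE(m.time) + INTERVAL 1 DAY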
EDIT
I'm using your pre-calculated day field:
SELECT *
FROM atable a
WHERE a.time IN
    (SELECT t FROM
        (SELECT MAX(time) AS t
         FROM atable
         GROUP BY day
         ORDER BY day DESC
         LIMIT 60) last60)
-- the extra derived table is needed because MySQL does not support
-- LIMIT directly inside an IN (...) subquery
At least here, the inner query is only run once, and then a binary search is done with the IN clause. You're still scanning the table, but just once, and the advantage of the inner query being run just once will probably make a huge dent.
If you know that you have values for every day, you could improve that inner query by adding a WHERE clause, limiting it to the last 60 calendar days, and losing the LIMIT 60. Make sure that day and time are indexed.
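A sketch of that variant of the inner query, using the pre-calculated day column from Update 2:
SELECT MAX(time)
FROM atable
WHERE day >= CURDATE() - INTERVAL 60 DAY
GROUP BY day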
Instead of using MAX(m.time) do the following in the sub-select
SELECT m.time
FROM atable AS m
WHERE DATE(l.time) = DATE(m.time)
ORDER BY m.time DESC
LIMIT 1
This might help speed up the query, since it gives the query parser an alternative.
However, one other piece I noticed is that you are using DATE(l.time) and DATE(m.time); if your index is not created on DATE(m.time), then you will not be using the index, which could cause slowness.
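As an aside for newer servers: MySQL 5.7+ can effectively index DATE(time) through a generated column (the column and index names here are made up):
ALTER TABLE atable
    ADD COLUMN time_day DATE AS (DATE(time)) STORED,
    ADD INDEX time_day_idx (time_day);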
Based on the feedback answer, if the entries are sequentially added via date/time, directly correlated to the auto-increment ID, who cares about the TIME... get the auto-inc number for exact, non-ambiguous join
select
A1.AutoID,
A1.time,
A1.Value
from
( select date( A2.time ) as SingleDate,
max( A2.AutoID ) as MaxAutoID
from aTable A2
where date( A2.Time ) >= date( date_sub( now(), interval 60 day ))
group by date( A2.time ) ) MaxPerDate
JOIN aTable A1
on MaxPerDate.MaxAutoID = A1.AutoID
order by
A1.AutoID DESC
You could use the EXPLAIN statement to get MySQL to tell you what it's doing.
EXPLAIN SELECT l.value AS value
FROM atable AS l
WHERE l.time = (
SELECT MAX(m.time)
FROM atable AS m
WHERE DATE(l.time) = DATE(m.time) LIMIT 1
)
ORDER BY l.time DESC LIMIT 60
That should at least give you an insight where to look further.
If you have an index on time, I would suggest getting the newest row via ORDER BY ... LIMIT 1 instead of MAX() as follows:
SELECT l.value AS value
FROM atable AS l
WHERE l.time = (
SELECT m.time
FROM atable AS m
ORDER BY m.time DESC LIMIT 1
)
ORDER BY l.time DESC LIMIT 60
Your outer query is using a filesort without indexes.
Try changing to InnoDB engine to see if it improves things.
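The conversion itself is a one-liner; note that it rewrites the whole table, so it can take a while on large tables:
ALTER TABLE atable ENGINE=InnoDB;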
Doing a quick test:
mysql> show create table atable\G
*************************** 1. row ***************************
Table: atable
Create Table: CREATE TABLE `atable` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`t` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `t` (`t`)
) ENGINE=InnoDB AUTO_INCREMENT=51 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql> explain SELECT id FROM atable AS l WHERE l.t = ( SELECT MAX(m.t) FROM atable AS m WHERE DATE(l.t) = DATE(m.t) LIMIT 1 ) ORDER BY l.t DESC LIMIT 50;
+----+--------------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| 1 | PRIMARY | l | index | NULL | t | 4 | NULL | 50 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | m | index | NULL | t | 4 | NULL | 50 | Using where; Using index |
+----+--------------------+-------+-------+---------------+------+---------+------+------+--------------------------+
2 rows in set (0.00 sec)
After changing to MyISAM:
mysql> explain SELECT id FROM atable AS l WHERE l.t = ( SELECT MAX(m.t) FROM atable AS m WHERE DATE(l.t) = DATE(m.t) LIMIT 1 ) ORDER BY l.t DESC LIMIT 50;
+----+--------------------+-------+-------+---------------+------+---------+------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+-------+---------------+------+---------+------+------+-----------------------------+
| 1 | PRIMARY | l | ALL | NULL | NULL | NULL | NULL | 50 | Using where; Using filesort |
| 2 | DEPENDENT SUBQUERY | m | index | NULL | t | 4 | NULL | 50 | Using where; Using index |
+----+--------------------+-------+-------+---------------+------+---------+------+------+-----------------------------+
2 rows in set (0.00 sec)

How to select an item, the one below and the one above in MySQL

I have a database with IDs that are non-integers, like this:
b01
b02
b03
d01
d02
d03
d04
s01
s02
s03
s04
s05
etc. The letters represent the type of product, the numbers the next one in that group.
I'd like to be able to select an ID, say d01, and get b03, d01, d02 back. How do I do this in MySQL?
Here is another way to do it using UNIONs. I think this is a little easier to understand and more flexible than the accepted answer. Note that the example assumes the id field is unique, which appears to be the case based on your question.
The SQL query below assumes your table is called demo and has a single unique id field, and the table has been populated with the values you listed in your question.
( SELECT id FROM demo WHERE STRCMP ( 'd01', id ) > 0 ORDER BY id DESC LIMIT 1 )
UNION ( SELECT id FROM demo WHERE id = 'd01' ORDER BY id ) UNION
( SELECT id FROM demo WHERE STRCMP ( 'd01', id ) < 0 ORDER BY id ASC LIMIT 1 )
ORDER BY id
It produces the following result: b03, d01, d02.
This solution is flexible because you can change each of the LIMIT 1 statements to LIMIT N where N is any number. That way you can get the previous 3 rows and the following 6 rows, for example.
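For example, to get the previous 3 and the following 6 rows around d01 with the same demo table:
( SELECT id FROM demo WHERE STRCMP ( 'd01', id ) > 0 ORDER BY id DESC LIMIT 3 )
UNION ( SELECT id FROM demo WHERE id = 'd01' ) UNION
( SELECT id FROM demo WHERE STRCMP ( 'd01', id ) < 0 ORDER BY id ASC LIMIT 6 )
ORDER BY id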
Note: this is from M$ SQL Server, but the only thing that needs tweaking is the isnull function.
select *
from #test m
where id between isnull((select max(id) from #test where id < 'd01'),'d01')
and isnull((select min(id) from #test where id > 'd01'),'d01')
Find your target row,
SELECT p.id FROM product AS p WHERE p.id = 'd01'
and the row above it with no other row between the two.
LEFT JOIN product AS p1 ON p1.id > p.id -- gets the rows above it
LEFT JOIN -- gets the rows between the two which needs to not exist
product AS p1a ON p1a.id > p.id AND p1a.id < p1.id
and similarly for the row below it. (Left as an exercise for the reader.)
In my experience this is also quite efficient.
SELECT
p.id, p1.id, p2.id
FROM
product AS p
LEFT JOIN
product AS p1 ON p1.id > p.id
LEFT JOIN
product AS p1a ON p1a.id > p.id AND p1a.id < p1.id
LEFT JOIN
product AS p2 ON p2.id < p.id
LEFT JOIN
product AS p2a ON p2a.id < p.id AND p2a.id > p2.id
WHERE
p.id = 'd01'
AND p1a.id IS NULL
AND p2a.ID IS NULL
Although not a direct answer to your question, I personally wouldn't rely on the natural order, since it may change due to imports/exports and produce side effects not easily understood by fellow programmers. What about creating an alternate INTEGER index and firing up another query? "WHERE id > ...yourdesiredid ... LIMIT 1"?
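A sketch of that idea, assuming a hypothetical INTEGER column seq kept in the desired order:
SELECT * FROM product
WHERE seq > (SELECT seq FROM product WHERE id = 'd01')
ORDER BY seq LIMIT 1;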
mysql> describe test;
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id | varchar(50) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
mysql> select * from test;
+------+
| id |
+------+
| b01 |
| b02 |
| b03 |
| b04 |
+------+
mysql> select * from test where id >= 'b02' LIMIT 3;
+------+
| id |
+------+
| b02 |
| b03 |
| b04 |
+------+
What about using a cursor? This would let you traverse the returned set one row at a time. Using it with two variables (like "current" and "last"), you could inchworm along the result until you hit your target. Then return the value of "last" (for n-1), your entered target (n), and then traverse / iterate one more time and return the "current" (n+1).
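A sketch of that cursor idea as a MySQL stored procedure, assuming the demo table from the UNION answer; the procedure name neighbors is made up:
DELIMITER //
CREATE PROCEDURE neighbors(IN target VARCHAR(10))
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE cur_id VARCHAR(10);
    DECLARE last_id VARCHAR(10) DEFAULT NULL;
    DECLARE c CURSOR FOR SELECT id FROM demo ORDER BY id;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN c;
    read_loop: LOOP
        FETCH c INTO cur_id;
        IF done THEN LEAVE read_loop; END IF;
        IF cur_id = target THEN
            SELECT last_id AS prev_id, cur_id AS this_id;  -- n-1 and n
            FETCH c INTO cur_id;                           -- inchworm one more step
            IF NOT done THEN
                SELECT cur_id AS next_id;                  -- n+1
            END IF;
            LEAVE read_loop;
        END IF;
        SET last_id = cur_id;
    END LOOP;
    CLOSE c;
END //
DELIMITER ;

CALL neighbors('d01');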