MySQL - get users who placed their 25th order during a period

I have users and orders tables with this structure (simplified for question):
USERS
userid
registered(date)
ORDERS
id
date (order placed date)
user_id
I need to get an array of users (an array of userid) who placed their 25th order during a specified period (for example in May 2019), the date of the 25th order for each user, and the number of days it took to place the 25th order (the difference between the user's registration date and the date the 25th order was placed).
For example, if a user registered in April 2018, placed 20 orders in 2018, and then placed his 21st-30th orders in Jan-May 2019, this user should be in the array, because he placed his 25th order (overall for his account) in May 2019.
How can I do this with a MySQL query?
Sample data and structure: http://www.sqlfiddle.com/#!9/998358 (for testing you can look for the 3rd order, for example, instead of the 25th, to avoid adding a lot of sample data records).
One query is not required - if this can't be done in one query, a few are possible and allowed.

You can use a correlated subquery to get the count of orders placed before the current one by a user. If that's 24 the current order is the 25th. Then check if the date is in the desired range.
SELECT o1.user_id,
       o1.date,
       datediff(o1.date, u1.registered)
FROM orders o1
INNER JOIN users u1
        ON u1.userid = o1.user_id
WHERE (SELECT count(*)
       FROM orders o2
       WHERE o2.user_id = o1.user_id
         AND (o2.date < o1.date
              OR (o2.date = o1.date
                  AND o2.id < o1.id))) = 24
  AND o1.date >= '2019-01-01'
  AND o1.date < '2019-06-01';
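The correlated subquery is re-evaluated for every order row, so on a large orders table a composite index covering its lookup columns should help; a hedged sketch (the index name is mine):
CREATE INDEX idx_orders_user_date_id ON orders (user_id, date, id);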

The basic inefficient way of doing this would be to get the user_id for every row in ORDERS where the date is in your target range AND the count of rows in ORDERS with the same user_id and a lower date is exactly 24.
This can get very ugly, very quickly, though.
If you're calling this from code you control, can't you do it from the code?
If not, there should be a way to assign to each row an index describing its rank among orders for its specific user_id, and to select from that all user_id from rows with an index of 25 and a correct date. This will give you a select from a select from a select, but it should be much faster. The difficulty here is to control the order of the rows, so here are the selects I envision:
1. Select all rows, ordered by user_id asc, date asc, union-ed to nothing from a table made of two vars you'll initialize at 0.
2. From this, select all while updating a var to know whether a row's user_id is the same as the last one, and adding a field that reports so (so for each user_id the first row in order will have a specific value like 0 while the other rows for the same user_id will have a 1).
3. From this, select all plus a field that equals itself plus one in case the first added field is 1, else 0.
4. From this, select the user_id from the rows where the second added field is 25 and the date is in range.
The union thingy is only necessary if you need to do it all in one request (you have to initialize the vars in a lower select than the one they're used in).
Edit: Well, if you need the date too you can just select it along with the user_id, but calculating the number of days in SQL will be a pain. Just join the result table to the users table and get both the date of the 25th order and the registration date; you'll surely be able to do the difference in code.
I'll try building an actual request; however, if you want to truly understand what you need to make this, you'll have to read up on MySQL variables, unions, and conditional statements.
"Looks too complicated. I am sure that this can be done with current DB structure and 1-2 requests." Well, yeah. Use the COUNT request, it will be easy, and slow as hell.
For the complex answer, see http://www.sqlfiddle.com/#!9/998358/21
Since you can use multiple requests, you can just initialize the vars first.
It isn't actually THAT complicated, you just have to understand how to concretely express what you mean by "a user's 25th order" to a SQL engine.
See http://www.sqlfiddle.com/#!9/998358/24 for the difference in days, turns out there's a method for that.
Edit 5: seems you're going with the COUNT method. I'll pray your DB is small.
Edit 6: For posterity:
The count method will take years on very large databases. Since the OP didn't come back, I'm assuming his is small enough to overlook query speed. If that's not your case, and let's say it's 10 years from now and the sqlfiddle links are dead, here's the two-query solution:
SET @PREV_USR := 0;

SELECT user_id, date_ FROM (
    SELECT user_id, date_, SAME_USR AS IGNORE_SMUSR,
           @RANK_USR := (CASE SAME_USR WHEN 0 THEN 1 ELSE @RANK_USR + 1 END) AS `RANK`
    FROM (
        SELECT orders.*,
               CASE WHEN @PREV_USR = user_id THEN 1 ELSE 0 END AS SAME_USR,
               @PREV_USR := user_id AS IGNORE_USR
        FROM orders
        ORDER BY user_id ASC, date_ ASC, id ASC
    ) AS DERIVED_1
) AS DERIVED_2
WHERE `RANK` = 25 AND YEAR(date_) = 2019 AND MONTH(date_) = 4;
Just change RANK = ? and the conditions to fit your needs. If you want to fully understand it, start with the innermost SELECT and work your way up; this version fuses points 1 & 2 of my explanation.
Now sometimes you will have to use an API or something, and it won't let you keep variable values in memory unless you commit them (or some other restriction), and you'll need to do it in one query. To do that, you put the initialization one step lower and make it so it does not affect the higher statements. IMO the best way to do this is a UNION with a fake table whose only row is excluded. You avoid the hassle of a JOIN and it's just better overall.
SELECT user_id, date_ FROM (
    SELECT user_id, date_, SAME_USR AS IGNORE_SMUSR,
           @RANK_USR := (CASE SAME_USR WHEN 0 THEN 1 ELSE @RANK_USR + 1 END) AS `RANK`
    FROM (
        SELECT DERIVED_4.*,
               CASE WHEN @PREV_USR = user_id THEN 1 ELSE 0 END AS SAME_USR,
               @PREV_USR := user_id AS IGNORE_USR
        FROM (
            SELECT * FROM orders
            UNION
            SELECT * FROM (
                SELECT (@PREV_USR := 0) AS INIT_PREV_USR, 0 AS COL_2, 0 AS COL_3
            ) AS DERIVED_3
            WHERE INIT_PREV_USR <> 0
        ) AS DERIVED_4
        ORDER BY user_id ASC, date_ ASC, id ASC
    ) AS DERIVED_1
) AS DERIVED_2
WHERE `RANK` = 25 AND YEAR(date_) = 2019 AND MONTH(date_) = 4;
With that method, the thing to watch for is the number and types of columns in your base table. Here orders' first field is an int, so I put INIT_PREV_USR first; then there are two more fields, so I just add two zeroes with names and call it a day. Most types work, since the union doesn't actually return anything, but I wouldn't try this when your first field is a blob (worst comes to worst you can use a JOIN).
You'll note this is derived from a method of pagination in MySQL. If you want to apply this to other engines, just check out their best pagination calls and you should be able to work things out.

Related

How can I do a MySQL query to find a user every time he is logged in a table, and all the other users with him, filtered by date and time

I am making a covid log db for easy contact tracing.
these are my tables
log_tbl (fk_UserID, fk_EstID, log_date, log_time)
est_tbl (EstID, EstName)
user_tbl (User_ID, Name, Address, MobileNumber)
I wanted to write a statement that shows when and where an individual (User_ID)
enters an Establishment (EstID),
SELECT l.*
FROM log_tbl l
WHERE (l.EstID, l.log_date) IN (SELECT l2.EstID, l2.log_date
FROM log_tbl l2
WHERE l2.User_ID = 'LIN78JFF5WG'
);
[Result of Query]
this currently works,
but it still has to be filtered by ±2 hours based on the time when User_ID was logged in log_tbl, so that it narrows down the result when the first query would spit out 1000 logs. Because these results will be contacted, and to reduce costs, the list needs to be narrowed down to less than 50%.
So, the table below should not include the first 2 and the last one, because they don't fit with 1) the date and 2) the time, in relation to the searched user LIN78JFF5WG
[Unfiltered Result]
FROM log_tbl
WHERE User_ID = 'LIN78JFF5WG'
AND (BETWEEN subtime(log_tbl.log_time, '02:00:00') AND addtime(log_tbl.log_time, '02:00:00'
I know this is wrong, but I don't have any idea how to join the two queries
and result should include
EstID, Name, Address, MobileNumber, log_date, log_time sorted by Date
Imagine it like this,
There are 3 baskets full of tomatoes,
2 of the baskets have rotten tomatoes inside.
Do you throw away the whole basket full of tomatoes?
No.. you select the rotten tomato, and others close to it, and throw them away.
I need that for the DB, instead of Getting Result for the Whole Day,
I only need the People who are in close contact with The Target user.
is it possible to do this on mysql? I have to use mysql because of reasons..
Here I include the data sample fiddle:
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=050b2103d3adf5828524f49066c12e74
MySQL supports window functions with the range window frame specification. I would suggest:
select l.*
from (select l.*,
             sum(case when fk_UserID = 'LIN78JFF5WG' then 1 else 0 end) over
                 (partition by log_date
                  order by log_time
                  range between interval 2 hour preceding and interval 2 hour following
                 ) as cnt_user
      from log_tbl l
     ) l
where cnt_user > 0;
Here is a db<>fiddle.
You can then annotate the results with other columns from other tables to get your final result.
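A hedged sketch of that annotation step, joining the lookup tables onto the window-filtered rows to get the columns the question asks for (table and column names are from the question; the inner query mirrors the one above):
SELECT e.EstID, e.EstName, u.Name, u.Address, u.MobileNumber, f.log_date, f.log_time
FROM (
    SELECT l.*,
           SUM(CASE WHEN fk_UserID = 'LIN78JFF5WG' THEN 1 ELSE 0 END) OVER (
               PARTITION BY log_date
               ORDER BY log_time
               RANGE BETWEEN INTERVAL 2 HOUR PRECEDING AND INTERVAL 2 HOUR FOLLOWING
           ) AS cnt_user
    FROM log_tbl l
) f
JOIN user_tbl u ON u.User_ID = f.fk_UserID
JOIN est_tbl  e ON e.EstID   = f.fk_EstID
WHERE f.cnt_user > 0
ORDER BY f.log_date, f.log_time;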
This should be much faster than alternative methods.
Note, however, that there is a flaw in this logic: because it partitions by date, it does not check across midnight (the four hours between 22:00 and 02:00 of the next day). You can store the date/time in a single column to make it easier to get a more accurate list.
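If you do combine date and time, a generated column is one hedged way to do it without touching the application (MySQL 5.7+; the column name log_ts is mine):
-- log_ts is a hypothetical name; TIMESTAMP(d, t) appends a TIME to a DATE
ALTER TABLE log_tbl
    ADD COLUMN log_ts DATETIME
    GENERATED ALWAYS AS (TIMESTAMP(log_date, log_time)) STORED;
The window could then order by log_ts over the whole table instead of partitioning by log_date, so contacts spanning midnight are not missed.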
I don't fully understand your requirements, but here is some sample SQL so that we can make it clearer.
-- solution one: single query
select *,
       (select UNIX_TIMESTAMP(CONCAT(log_date, ' ', log_time))
        from log_tbl
        where fk_UserID = 'LIN78JFF5WG') as target_time
from log_tbl as l
-- simple joins to get the intended information
left join user_tbl as u on (u.User_ID = l.fk_UserID)
left join est_tbl as e on (l.fk_EstID = e.EstID)
-- MySQL DATEDIFF only returns whole days, so we convert to a timestamp to do the diff;
-- the alias target_time cannot be referenced in WHERE, so the subquery is repeated,
-- and the window has to run from -2 hours to +2 hours
where UNIX_TIMESTAMP(CONCAT(l.log_date, ' ', l.log_time))
      - (select UNIX_TIMESTAMP(CONCAT(log_date, ' ', log_time))
         from log_tbl
         where fk_UserID = 'LIN78JFF5WG')
      between -(60*60*2) and (60*60*2);
-- solution two
-- but I suggest you divide it into two queries like this.
select UNIX_TIMESTAMP(CONCAT(log_date, ' ', log_time)) as ts
from log_tbl
where fk_UserID = 'LIN78JFF5WG';
-- we get the target user's log timestamp and use it in the next query
select *
from log_tbl as l
-- simple joins to get the intended information
left join user_tbl as u on (u.User_ID = l.fk_UserID)
left join est_tbl as e on (l.fk_EstID = e.EstID)
-- MySQL DATEDIFF only returns whole days, so we convert to a timestamp to do the diff
where UNIX_TIMESTAMP(CONCAT(l.log_date, ' ', l.log_time)) - [target_time(passed by code)]
      between -(60*60*2) and (60*60*2);

Generate a join similar to a vlookup based on closest date

I have the following two tables:
movie_sales (provided daily)
movie_id
date
revenue
movie_rank (provided every few days or weeks)
movie_id
date
rank
The tricky thing is that every day I have data for sales, but only data for ranks once every few days. Here is an example of sample data:
`movie_sales`
- titanic (ID), 2014-06-01 (date), 4.99 (revenue)
- titanic (ID), 2014-06-02 (date), 5.99 (revenue)
`movie_rank`
- titanic (ID), 2014-05-14 (date), 905 (rank)
- titanic (ID), 2014-07-01 (date), 927 (rank)
And, because the movie_rank.date of 2014-05-14 is closer to the two sales dates, the output should be:
id date revenue closest_rank
titanic 2014-06-01 4.99 905
titanic 2014-06-02 5.99 905
The following query works to get the results by getting the min date difference in the sub-select:
SELECT
id,
date,
revenue,
(SELECT rank from movie_rank where id=s.id ORDER BY ABS(DATEDIFF(date, s.date)) ASC LIMIT 1)
FROM
movie_sales s
But I'm afraid that this would have terrible performance, as it will literally be doing millions of subselects... on millions of rows. What would be a better way to do this, or is there really no proper way, since an index cannot be used effectively with a DATEDIFF?
Unfortunately, you are right. The movie_rank table must be searched for each movie sale, and of all matching rank rows the closest one picked.
With an index on movie_rank(id) the DBMS finds the movie rows quickly, but an index on movie_rank(id, date) would be better, because the date could be read from the index and only the one best match would be read from the table.
But you also say that there are new ranks every few days. If it is guaranteed that a rank exists within a certain range, e.g. for each sales date there is at least one rank in the twenty days before and at least one in the twenty days after, you can limit the search accordingly. (The index on movie_rank(id, date) would be essential for this, though.)
SELECT
  id,
  date,
  revenue,
  (
    select r.rank
    from movie_rank r
    where r.id = s.id
      and r.date between s.date - interval 20 day
                     and s.date + interval 20 day
    order by abs(datediff(r.date, s.date)) asc
    limit 1
  ) as closest_rank
FROM movie_sales s;
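For completeness, the two-column index mentioned above might look like this (the index name is mine; the column names follow the query above):
CREATE INDEX idx_movie_rank_id_date ON movie_rank (id, date);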
This is difficult to get quick with SQL. In a programming language I would choose this algorithm:
Sort the two tables by date and point to the first rows.
Move the rank pointer forward until we match the sales date or are beyond it. (If we aren't there already.)
Compare the sales date with the rank date we are pointing at and with the rank date of the previous row. Take the closer one.
Move the sales pointer one row forward.
Go to 2.
With this algorithm we would already be in about the position we want to be. Let's see, if we can do the same with SQL. Iterations are done with recursive queries in SQL. These are available in MySQL as of version 8.0.
We start with sorting the rows, i.e. giving them numbers. Then we iterate through both data sets.
with recursive
sales as
(
select *, row_number() over (partition by movie_id order by date) as rn
from movie_sales
),
ranks as
(
select *, row_number() over (partition by movie_id order by date) as rn
from movie_rank
),
cte (movie_id, revenue, srn, rrn, sdate, rdate, rrank, closest_rank) as
(
select
movie_id, s.revenue, s.rn, r.rn, s.date, r.date, r.ranking,
case when s.date <= r.date then r.ranking end
from (select * from sales where rn = 1) s
join (select * from ranks where rn = 1) r using (movie_id)
union all
select
cte.movie_id,
cte.revenue,
coalesce(s.rn, cte.srn),
coalesce(r.rn, cte.rrn),
coalesce(s.date, cte.sdate),
coalesce(r.date, cte.rdate),
coalesce(r.ranking, cte.rrank),
case when coalesce(r.date, cte.rdate) >= coalesce(s.date, cte.sdate) then
case when abs(datediff(coalesce(r.date, cte.rdate), coalesce(s.date, cte.sdate))) <
abs(datediff(cte.rdate, coalesce(s.date, cte.sdate)))
then coalesce(r.ranking, cte.rrank)
else cte.rrank
end
end
from cte
left join sales s on s.movie_id = cte.movie_id and s.rn = cte.srn + 1 and cte.closest_rank is not null
left join ranks r on r.movie_id = cte.movie_id and r.rn = cte.rrn + 1 and cte.rdate < cte.sdate
where s.movie_id is not null or r.movie_id is not null
-- where cte.closest_rank is null
)
select
movie_id,
sdate,
revenue,
closest_rank
from cte
where closest_rank is not null;
(BTW: I named the column ranking, because rank is a reserved word in SQL.)
Demo: https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=e994cb56798efabc8f7249fd8320e1cf
This is probably still slow. The reason for this is: there are no pointers to a row in SQL. If we want to go from row #1 to row #2, we must search that row, while in a programming language we would really just move the pointer one step forward. If the tables had an ID, we could build a chain (next_row_id) instead of using row numbers. That could speed this process up. But well, I guess you already notice: this is not an algorithm made for SQL.
Another approach... Avoid the problem by cleansing the data.
Make sure the rank is available for every day. When a new date comes in, find the previous rank, then fill in all the rows for the intervening days.
(This will take some initial effort to 'fix' all the previous missing dates. After that, it is a small effort when a new list of ranks comes in.)
The "report" would be a simple JOIN on the date. You would probably need a 2-column INDEX(movie_id, date) or something like that.
The ultimate solution would be not to calculate all the ranks every time, but to store them (in a new column, or even in a new table if you don't want to change existing tables).
Each time you update, you could look for sales data without a rank and calculate it only for those rows.
With the above approach you always get the last available rank BEFORE the sales date (e.g. if you have rank data 14 days before and 1 day after, the one before would still be used).
If you strictly need to use the ranking closest in time, then you need to run the UPDATE also for newly arrived ranking info. I believe it would still be more efficient in the long run.
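A hedged sketch of that incremental approach, assuming a new nullable column movie_sales.closest_rank (the column name is mine) and the "last rank before the sale" rule described above:
-- one-off schema change; closest_rank is a hypothetical column name
ALTER TABLE movie_sales ADD COLUMN closest_rank INT NULL;

-- run after each data load: only rows that still have no rank are touched
UPDATE movie_sales s
SET s.closest_rank = (
    SELECT r.`rank`
    FROM movie_rank r
    WHERE r.movie_id = s.movie_id
      AND r.date <= s.date        -- last available rank BEFORE the sale
    ORDER BY r.date DESC
    LIMIT 1
)
WHERE s.closest_rank IS NULL;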

Generating complex sql tables

I currently have an employee logging sql table that has 3 columns
fromState: String,
toState: String,
timestamp: DateTime
fromState is either In or Out. In means employee came in and Out means employee went out. Each row can only transition from In to Out or Out to In.
I'd like to generate a temporary table in sql to keep track during a given hour (hour by hour), how many employees are there in the company. Aka, resulting table has columns HourBucket, NumEmployees.
In non-SQL code I can do this by initializing the numEmployees as 0 and go through the table row by row (sorted by timestamp) and add (employee came in) or subtract (went out) to numEmployees (bucketed by timestamp hour).
I'm clueless as how to do this in SQL. Any clues?
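For what it's worth, that row-by-row tally is a running total, which MySQL 8.0 can express with a window function; a minimal sketch, assuming the logging table is called employee_log (a placeholder name):
-- running head count per hour bucket; employee_log is a hypothetical table name
SELECT HourBucket,
       SUM(delta) OVER (ORDER BY HourBucket) AS NumEmployees
FROM (
    SELECT DATE_FORMAT(`timestamp`, '%Y-%m-%d %H:00:00') AS HourBucket,
           SUM(CASE WHEN fromState = 'In' THEN 1 ELSE -1 END) AS delta
    FROM employee_log
    GROUP BY HourBucket
) per_hour
ORDER BY HourBucket;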
Use a COUNT ... GROUP BY query. I can't see what you're using toState for from your description, though! Also, I'm assuming you have an employeeID field.
E.g.
SELECT fromState AS 'Status', COUNT(*) AS 'Number'
FROM StaffinBuildingTable
INNER JOIN (
    SELECT employeeID AS 'empID', MAX(timestamp) AS 'latest'
    FROM StaffinBuildingTable
    GROUP BY employeeID
) AS LastEntry
    ON StaffinBuildingTable.employeeID = LastEntry.empID
   AND StaffinBuildingTable.timestamp = LastEntry.latest
GROUP BY fromState
The LastEntry subquery will produce a list of employeeIDs limited to the last timestamp for each employee.
The INNER JOIN (matching on both the employeeID and its latest timestamp) limits the main table to just each employee's most recent entry.
The outer GROUP BY produces the count.
SELECT HOUR(SBT.timestamp) AS 'Hour', SBT.fromState AS 'Status', COUNT(*) AS 'Number'
FROM StaffinBuildingTable AS SBT
INNER JOIN (
    SELECT SBIJ.employeeID AS 'empID', MAX(timestamp) AS 'latest'
    FROM StaffinBuildingTable AS SBIJ
    WHERE DATE(SBIJ.timestamp) = CURDATE()
    GROUP BY SBIJ.employeeID
) AS LastEntry
    ON SBT.employeeID = LastEntry.empID
   AND SBT.timestamp = LastEntry.latest
GROUP BY SBT.fromState, HOUR(SBT.timestamp)
Replace CURDATE() with whatever date you are interested in.
Note this is non-optimal as it calculates the HOUR twice - once for the data and once for the group.
Again you are using the INNER JOIN to limit the number of returned row, this time to the last timestamp on a given day.
To me your description of FromState and ToState seems the wrong way round; I'd expect to be doing this based on the ToState. But assuming I'm wrong on that, the following should point you in the right direction:
First, I create a "Numbers" table containing 24 rows one for each hour of the day:
create table tblHours
(Number int);
insert into tblHours values
(0),(1),(2),(3),(4),(5),(6),(7),
(8),(9),(10),(11),(12),(13),(14),(15),
(16),(17),(18),(19),(20),(21),(22),(23);
Then for each date in your employee logging table, I create a row in another new table to contain your counts:
create table tblDailyHours
(
HourBucket datetime,
NumEmployees int
);
insert into tblDailyHours (HourBucket, NumEmployees)
select distinct
date_add(date(t.timeStamp), interval h.Number HOUR) as HourBucket,
0 as NumEmployees
from
tblEmployeeLogging t
CROSS JOIN tblHours h;
Then I update this table to contain all the relevant counts:
update tblDailyHours h
join
(select
h2.HourBucket,
sum(case when el.fromState = 'In' then 1 else -1 end) as cnt
from
tblDailyHours h2
join tblEmployeeLogging el on
h2.HourBucket >= el.timeStamp
group by h2.HourBucket
) cnt ON
h.HourBucket = cnt.HourBucket
set NumEmployees = cnt.cnt;
You can now retrieve the counts with
select *
from tblDailyHours
order by HourBucket;
The counts give the number on site at each of the times displayed; if you want the number during the hour in question, we'd need to tweak this a little.
There is a working version of this code (using not very realistic data in the logging table) here: rextester.com/DYOR23344
Original Answer (based on a single overall count)
If you're happy to search over all rows, and want the current "head count" you can use this:
select
    sum(case when t.FromState = 'In' then 1 else -1 end) as Heads
from
    MyTable t
But if you know that there will always be no-one there at midnight, you can add a where clause to prevent it looking at more rows than it needs to:
where
date(t.timestamp) = curdate()
Again, on the assumption that the head count reaches zero at midnight, you can generalise that method to get a headcount at any time as follows:
where
date(t.timestamp) = "CENSUS DATE" AND
t.timestamp <= "CENSUS DATETIME"
Obviously you'd need to replace my quoted strings with code which returned the date and datetime of interest. If the headcount doesn't return to zero at midnight, you can achieve the same by removing the first line of the where clause.

SQL comparison within max function

I'm trying to get a list of 20 events grouped by their Ids and sorted by whether they are in progress, pending, or already finished. The problem is that there are events with the same id that include finished, pending, and in progress events and I want to have 20 distinct Ids in the end. What I want to do is group these events together but if one of them is in progress then sort that group by that event. So basically I want to sort by the latest end time that is also before now().
What I have so far is something like this where end and start are end/start times. I'm not sure if what is inside max() is behaving how I should expect.
select * from event_schedule as t1
JOIN (
SELECT DISTINCT(event_id) as e
from event_schedule
GROUP BY event_id
order by MAX(end < unix_timestamp(now())) asc,
MIN(start >= unix_timestamp(now())) asc,
MAX(start) desc
limit 0, 20
)
as t2 on (t1.event_id = t2.e)
This results in some running / pending events to be mixed around in order when I want them to be in the order running -> pending -> Ended.
I would suggest to first create a view in order to not get an overcomplicated SELECT statement:
CREATE VIEW v_event_schedule AS
SELECT *,
CASE
WHEN end < unix_timestamp(now())
THEN 1
WHEN start > unix_timestamp(now())
THEN 2
ELSE 3
END AS category
FROM event_schedule;
This view v_event_schedule returns an extra column, in addition to the columns of event_schedule, which represents the priority of the category (running, pending, past):
running (in progress)
pending (future)
past
Then the following will do what you want:
SELECT a.*
FROM v_event_schedule a
INNER JOIN (
SELECT id,
MIN(category) category
FROM v_event_schedule b
GROUP BY id
) b
ON a.id = b.id
AND a.category = b.category
ORDER BY category,
start DESC
LIMIT 20;
The ORDER BY can be further adapted to your needs as to how you want to sort within the same category. I added start DESC as that seemed what you were doing in your attempt.
About the original ORDER BY
You had this:
order by MAX(end < unix_timestamp(now())) asc,
MIN(start >= unix_timestamp(now())) asc,
The expressions you have there evaluate to boolean values, and both elements in the ORDER BY each divide the groups into two sections, one for false and one for true, so in total 4 groups.
The first of the two will order first those IDs that have no record with an end value in the past, because only then is the boolean expression false for every row, which is the only way to make the MAX of them false as well.
Now let's say that for the same ID you have both records with an end date in the future and records with an end date in the past. In that case the MAX aggregates to true, and so the ID will be sorted later. This is not intended, as this ID might have a "running" record.
I did not look into making your query work based on such aggregates on boolean expressions. It requires some time to understand what they are doing. A CASE WHEN to determine the category with a number really makes the SQL a lot easier to understand, at least to me.

MySql retrieve products and prices

I would like to retrieve a list of all the products, with their associated prices for a given period.
The Product table:
Id
Name
Description
The Price table:
Id
Product_id
Name
Amount
Start
End
Duration
The most important thing to note here is that a Product can have multiple prices, even over the same period, but not with the same duration.
For example, a price from "2013-06-01 -> 2013-06-08" and another from "2013-06-01 -> 2013-06-05".
So my aim is to retrieve, for a given period, the list of all products, paginated by 10 products for example, joined to the prices existing over the period.
The basic way to do so would be:
SELECT *
FROM product
LEFT JOIN prices ON ...
WHERE prices.start >= XXX And prices.end <= YYY
LIMIT 0,10
The problem with this simple solution is that I don't retrieve 10 Products, but 10 Product*Price rows, which is not acceptable in my case.
So the solution would be:
SELECT *
FROM product
LEFT JOIN prices ON ...
WHERE prices.start >= XXX And prices.end <= YYY
GROUP BY product.id
LIMIT 0,10
But the problem here is that I'll only retrieve one price for each product.
So I wonder what would be the best way to handle this.
I could for example use a group function like group_concat and retrieve all the prices in a single string field, like "200/300/100" and so on. That seems weird, and would need work on the server-language side to turn it back into readable information, but it could work.
Another solution would be to use different column for each prices, depending on duration:
SELECT
IF( NOT ISNULL(price.start) AND price.duration = 1, price.amount , NULL) AS price_1_day
---- same here for all possible durations ---
From ...
That would work too, I guess (I'm not really sure if this is possible, however), but I may need to create about 250 columns to cover all possibilities. Is that a safe option?
Any help will be much appreciated
I believe that a group_concat would be the best way forward on this, as its very purpose is to aggregate multiple pieces of data relating to a particular column.
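A minimal sketch of that GROUP_CONCAT approach, keeping the LEFT JOIN and the pagination (the date filter sits in the ON clause so products without matching prices are still returned; `End` is back-quoted because END is a reserved word):
SELECT p.Id,
       p.Name,
       GROUP_CONCAT(pr.Amount ORDER BY pr.Duration SEPARATOR '/') AS prices
FROM Product p
LEFT JOIN Price pr
       ON pr.Product_id = p.Id
      AND pr.`Start` >= '2013-05-01'
      AND pr.`End`   <= '2013-05-15'
GROUP BY p.Id, p.Name
LIMIT 0, 10;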
However, adapting peterm's SQL fiddle, this is possible to do in one query using user-defined variables (if one ignores the initial query for setting the vars).
http://dev.mysql.com/doc/refman/5.7/en/user-variables.html
SET @productTemp := '', @increment := 0;

SELECT
    @increment := if(@productTemp != Product_id, @increment + 1, @increment) AS limiter,
    @productTemp := Product_id AS Product_id,
    Product.name,
    Price.id AS Price_id,
    Price.start,
    Price.end
FROM
    Product
LEFT JOIN
    Price ON Product.Id = Price.Product_id
WHERE
    `start` >= '2013-05-01' AND `end` <= '2013-05-15'
GROUP BY
    Price_id
HAVING
    limiter <= 2
What we're doing here is only incrementing the user-defined var "increment" when the product id is not the same as the last one that was encountered.
As aliases cannot be used in the WHERE condition, we must GROUP by the unique ID (in this case price ID) so that we can reduce the result using HAVING. In this case, I have a full result set that should include 3 Product IDs, reduced to only showing 2.
Please note: this is not a solution I would recommend on large data sets, or in a production environment. Even the MySQL manual makes a point of highlighting that user-defined vars can behave somewhat erratically depending on what paths the optimizer takes. However, I have used them to great effect for some internal statistics in the past.
Fiddle: http://sqlfiddle.com/#!2/96c92/3
It's hard to tell without sample data and desired output but you can try something like this
SELECT p.*, q2.*
FROM
(
    SELECT Product_id
    FROM Price
    WHERE `start` >= '2013-05-01' AND `end` <= '2013-05-15'
    GROUP BY Product_id
    LIMIT 0, 10
) q1 JOIN
(
    SELECT *
    FROM Price
    WHERE `start` >= '2013-05-01' AND `end` <= '2013-05-15'
) q2 ON q1.Product_id = q2.Product_id JOIN product p
        ON q1.Product_id = p.Id
Here is SQLFiddle demo