Find the nearest date from entered date in SQL, both ways - mysql

I have a task to find the nearest date to a given date, looking both ways, older or younger. But I have no idea how to do it; I'm new to SQL and tried googling but didn't find any help.
create proc Task
(@Date date)
as
begin
select top(1) p.FirstName, p.LastName, e.BirthDate, e.JobTitle from HumanResources.Employee e
join Person.Person p
on p.BusinessEntityID = e.BusinessEntityID
where e.BirthDate > @Date
end
I started with something like this, and then got lost.

Always remember: TOP without ORDER BY doesn't make much sense; add an ORDER BY that is ascending (your BirthDate > @Date comparison asks for all birthdates greater than/after the parameter, so the TOP(1) ordered by birthdate ascending would be the earliest birthdate that is greater than your variable).
Then take the entire query, paste it again, put UNION ALL between the two, and in this second query flip your ORDER BY to descending and your comparison to less than.
You thus end up with a query that returns the smallest value that is greater than your parameter and the largest value that is less than it, i.e. the nearest ones on either side.
Consider whether you should be using >= and <= if a date that matches exactly meets the specification.
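A minimal sketch of that shape, reusing the AdventureWorks-style names from the question (each branch sits in its own derived table because an ORDER BY may not appear directly before UNION ALL):
SELECT * FROM (
    SELECT TOP (1) p.FirstName, p.LastName, e.BirthDate, e.JobTitle
    FROM HumanResources.Employee e
    JOIN Person.Person p ON p.BusinessEntityID = e.BusinessEntityID
    WHERE e.BirthDate >= @Date
    ORDER BY e.BirthDate ASC
) AS next_row
UNION ALL
SELECT * FROM (
    SELECT TOP (1) p.FirstName, p.LastName, e.BirthDate, e.JobTitle
    FROM HumanResources.Employee e
    JOIN Person.Person p ON p.BusinessEntityID = e.BusinessEntityID
    WHERE e.BirthDate <= @Date
    ORDER BY e.BirthDate DESC
) AS prev_row;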

I would not use functions in ORDER BY (as the server will not be able to use indexes).
Instead, I'd go for a two-query solution.
It could be wrapped in a stored routine, something like this (MySQL version):
CREATE FUNCTION `Task`(
    `aDate` DATE
)
RETURNS INT
BEGIN
    -- Reset the session variables so values left over from a previous call cannot leak in
    SET @id_next = NULL, @birthdate_next = NULL, @id_prev = NULL, @birthdate_prev = NULL;

    SELECT `BusinessEntityID`, `BirthDate`
    INTO @id_next, @birthdate_next
    FROM `Employee`
    WHERE `BirthDate` >= aDate
    ORDER BY `BirthDate` ASC
    LIMIT 1;

    IF @birthdate_next IS NULL THEN
        SELECT `BusinessEntityID`, `BirthDate`
        INTO @id_prev, @birthdate_prev
        FROM `Employee`
        WHERE `BirthDate` < aDate
        ORDER BY `BirthDate` DESC
        LIMIT 1;
    ELSE
        IF DATEDIFF(@birthdate_next, aDate) > 1 THEN
            SELECT `BusinessEntityID`, `BirthDate`
            INTO @id_prev, @birthdate_prev
            FROM `Employee`
            WHERE `BirthDate` < aDate
              AND `BirthDate` > DATE_SUB(aDate, INTERVAL DATEDIFF(@birthdate_next, aDate) DAY)
            ORDER BY `BirthDate` DESC
            LIMIT 1;
        END IF;
    END IF;

    CASE
        WHEN @id_prev IS NULL AND @id_next IS NULL THEN RETURN NULL;
        WHEN @id_prev IS NULL THEN RETURN @id_next;
        WHEN @id_next IS NULL THEN RETURN @id_prev;
        WHEN DATEDIFF(@birthdate_next, aDate) < DATEDIFF(aDate, @birthdate_prev) THEN RETURN @id_next;
        ELSE RETURN @id_prev;
    END CASE;
END
So in some cases only a single query (the first one) would be executed.
The query will use an index on BirthDate.
If the first query's result differs from the specified date by less than 2 days, the second query will not be executed at all (it is the more expensive one, as it is ordered DESC).
It is possible to simplify the SP further; however, I'm keeping it "as is" so it is easier to understand.
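For a quick check that the routine behaves as expected, it can be called like any scalar function (the date here is just an example value):
SELECT `Task`('1980-06-15') AS nearest_employee_id;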

Use datediff() to get the duration between the two dates. Since you don't care whether the date is in the future or in the past, use abs() to get the absolute value of the duration. Then order by the absolute duration and take the top one record.
I'm not sure if you're really on MySQL or on SQL Server. The TOP (1) indicates SQL Server, the tag says MySQL.
Here's the MySQL version:
SELECT p.firstname,
       p.lastname,
       e.birthdate,
       e.jobtitle
FROM humanresources.employee e
INNER JOIN person.person p
        ON p.businessentityid = e.businessentityid
ORDER BY abs(datediff(e.birthdate, @date))
LIMIT 1;
And here for SQL Server:
SELECT TOP (1)
       p.firstname,
       p.lastname,
       e.birthdate,
       e.jobtitle
FROM humanresources.employee e
INNER JOIN person.person p
        ON p.businessentityid = e.businessentityid
ORDER BY abs(datediff(day, e.birthdate, @date));
May need some tweaks depending on the actual data types you're using.
Edit:
Addressing fifoniks's concern, here is a version that could perform better if the respective indexes exist (on humanresources.employee.birthdate, optimally once ascending and once descending).
It first takes the union of the nearest record in the future of @date (including @date itself) and the analogous record from the past, hopefully using the indexes along the way. From these two records, the one with the lowest absolute duration to @date is picked; then person gets joined. Each branch of the union sits in its own derived table, because an ORDER BY may not appear directly before UNION ALL.
SELECT p.firstname,
       p.lastname,
       y.birthdate,
       y.jobtitle
FROM (SELECT TOP (1)
             x.businessentityid,
             x.birthdate,
             x.jobtitle
      FROM (SELECT nxt.*
            FROM (SELECT TOP (1)
                         e.businessentityid,
                         e.birthdate,
                         e.jobtitle
                  FROM humanresources.employee e
                  WHERE e.birthdate >= @date
                  ORDER BY e.birthdate ASC) nxt
            UNION ALL
            SELECT prv.*
            FROM (SELECT TOP (1)
                         e.businessentityid,
                         e.birthdate,
                         e.jobtitle
                  FROM humanresources.employee e
                  WHERE e.birthdate <= @date
                  ORDER BY e.birthdate DESC) prv) x
      ORDER BY abs(datediff(day, x.birthdate, @date)) ASC) y
     INNER JOIN person.person p
             ON p.businessentityid = y.businessentityid;
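For a quick test in SQL Server, the parameter can be declared inline before running either query in the same batch (the date is just an example value):
DECLARE @date date = '1969-01-29';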

Related

Speed up Mysql Query using a split function

I am trying to speed up a MySQL query.
In a column called "MISC", I first have to extract a "traceID" variable that will be used to match a row of another table.
Example of the MISC column:
PFFCC_Strip/fkk49322/PMethod=Diners/CardType=Diners/9999******9999/2010/TraceId=7122910
I am extracting the value "7122910" as traceID and finding the corresponding row with a left join. The traceId value being unique, only one row should be present in each table.
I cannot set an index on the tables to speed up the process. Is there any approach that could make this query run faster? As it is, it takes a few seconds to run, which is not acceptable.
select *
from
(select TraceID,PP,UDef2, Payment_Method, Approved, TransactionID, Amount
from pr) pr
left join
(select
PAYMENT_ID as Payment_ID_omega, TRANSACTION_TYPE,
REQUESTED_AMOUNT, AMOUNT, `STATUS` as StatusRef_omega,
REQUEST_DATE, Agent,
if (locate('TraceId=',MISC)>0, SUBSTRING_INDEX(MISC,'TraceId=',-1),'') as traceID
from BankingActivity ) omega
on pr.TraceID = omega.traceID
having
(REQUEST_DATE BETWEEN DATE_ADD(DATE(NOW()), INTERVAL -1 DAY) AND NOW())
ORDER BY pr.TraceID DESC
You can place your filter inside the derived query before the join, which should make a difference, and you should have indexes on pr(TraceID) and BankingActivity(REQUEST_DATE, traceID). For a more optimised query, please post the execution plan. (A sketch of how the derived traceID could be made indexable follows the rewritten query below.)
select * from(select TraceID
,PP
,UDef2
,Payment_Method
,Approved
,TransactionID
,Amount
from pr) pr
left join (select PAYMENT_ID as Payment_ID_omega
,TRANSACTION_TYPE
,REQUESTED_AMOUNT
,AMOUNT
,`STATUS` as StatusRef_omega
,REQUEST_DATE
,Agent
,if (locate('TraceId=', MISC) > 0, SUBSTRING_INDEX(MISC,'TraceId=',-1),'') as traceID
from BankingActivity
WHERE REQUEST_DATE BETWEEN DATE_ADD(DATE(NOW()), INTERVAL -1 DAY) AND NOW()) omega
on pr.TraceID = omega.traceID
ORDER BY pr.TraceID DESC
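Since traceID is derived from MISC, it cannot be indexed directly. If you were allowed to alter the table, one sketch (assuming MySQL 5.7+; the column and index names here are made up) would be to persist the extracted value as a generated column and index that:
ALTER TABLE BankingActivity
  ADD COLUMN trace_id VARCHAR(32)
      AS (IF(LOCATE('TraceId=', MISC) > 0, SUBSTRING_INDEX(MISC, 'TraceId=', -1), '')) STORED,
  ADD INDEX idx_ba_trace_id (trace_id),
  ADD INDEX idx_ba_request_date (REQUEST_DATE);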

Query taking lot of time to execute

I am trying to run a query to get data one time from a client database into our database, but the query takes a lot of time to execute when I change the ORDER BY from the primary key user_appoint.id to user_appoint.u_id. Below is my query:
SELECT
CONCAT('D',user_appoint.`id`) AS ApptId,
user_appoint.`u_id`,
tbl_questions.CandAns,
tbl_questions.ExamAns,
tbl_questions.QueNote,
CONCAT("[",GROUP_CONCAT(CONCAT('"',`tbl_investigations`.`test_id`,'":"',tbl_investigations.`result`,'"')),"]") AS CandInv,
CONCAT("[",GROUP_CONCAT(CONCAT('"',`tbl_investigations`.`test_id`,'":"',tbl_investigations.`comments`,'"')),"]") AS IntComm,
IF(tbl_questions.LastUpdatedDateTime>MAX(tbl_investigations.`ModifiedAt`),tbl_questions.LastUpdatedDateTime,MAX(tbl_investigations.`ModifiedAt`)) AS LastUpdatedDateTime,
CONCAT('D',user_appoint.`id`) AS UniqueId
FROM user_appoint
LEFT JOIN tbl_investigations ON tbl_investigations.`appt_id`=user_appoint.`id` AND tbl_investigations.`ModifiedAt`>'2011-01-01 00:00:00'
LEFT JOIN tbl_questions ON tbl_questions.`appt_id` =user_appoint.`id` AND tbl_questions.`LastUpdatedDateTime`>'2011-01-01 00:00:00'
GROUP BY user_appoint.`id`
HAVING LastUpdatedDateTime>'2011-01-01 00:00:00'
ORDER BY user_appoint.`u_id`
LIMIT 0, 2000;
user_appoint.u_id is properly indexed.
Please check the explain plan of your query. It's better to always share the explain plan with your original question:
explain format=json
SELECT CONCAT('D', user_appoint.id) AS ApptId,
       user_appoint.u_id,
       tbl_questions.CandAns,
       tbl_questions.ExamAns,
       tbl_questions.QueNote,
       CONCAT("[", GROUP_CONCAT(CONCAT('"', tbl_investigations.test_id, '":"', tbl_investigations.result, '"')), "]") AS CandInv,
       CONCAT("[", GROUP_CONCAT(CONCAT('"', tbl_investigations.test_id, '":"', tbl_investigations.comments, '"')), "]") AS IntComm,
       IF(tbl_questions.LastUpdatedDateTime > MAX(tbl_investigations.ModifiedAt), tbl_questions.LastUpdatedDateTime, MAX(tbl_investigations.ModifiedAt)) AS LastUpdatedDateTime,
       CONCAT('D', user_appoint.id) AS UniqueId
FROM user_appoint
LEFT JOIN tbl_investigations ON tbl_investigations.appt_id = user_appoint.id
                            AND tbl_investigations.ModifiedAt > '2011-01-01 00:00:00'
LEFT JOIN tbl_questions ON tbl_questions.appt_id = user_appoint.id
                       AND tbl_questions.LastUpdatedDateTime > '2011-01-01 00:00:00'
GROUP BY user_appoint.id
HAVING LastUpdatedDateTime > '2011-01-01 00:00:00'
ORDER BY user_appoint.u_id
LIMIT 0, 2000;
Looking at your query, I can see that a lot of CONCAT calls, aggregate functions, and joins are being performed in a single query.
These operations will be performed for all 2000 records, since that is the limit you have set on the query.
This may be what is slowing the execution down.
You have 2 identical columns with different aliases
CONCAT('D',user_appoint.`id`) AS ApptId,
CONCAT('D',user_appoint.`id`) AS UniqueId
(changed) Assuming NULLs may occur in these date columns, comparing the max() values will avoid any adverse impact from NULL:
if(max(tbl_questions.lastupdateddatetime) > max(tbl_investigations.`modifiedat`) , max(tbl_questions.lastupdateddatetime), max(tbl_investigations.`modifiedat`)) AS LastUpdatedDateTime
Try this:
SELECT *
FROM (
SELECT
Concat('D', user_appoint.`id`) AS ApptId
, user_appoint.`u_id`
, tbl_questions.candans
, tbl_questions.examans
, tbl_questions.quenote
, Concat("[", Group_concat(Concat('"', `tbl_investigations`.`test_id`, '":"', tbl_investigations.`result`, '"')), "]") AS CandInv
, Concat("[", Group_concat(Concat('"', `tbl_investigations`.`test_id`, '":"', tbl_investigations.`comments`, '"')), "]") AS IntComm
, if(max(tbl_questions.lastupdateddatetime) > max(tbl_investigations.`modifiedat`) , max(tbl_questions.lastupdateddatetime), max(tbl_investigations.`modifiedat`) ) AS LastUpdatedDateTime
, Concat('D', user_appoint.`id`) AS UniqueId
FROM user_appoint
LEFT JOIN tbl_investigations
ON tbl_investigations.`appt_id` = user_appoint.`id`
AND tbl_investigations.`modifiedat` > '2011-01-01 00:00:00'
LEFT JOIN tbl_questions
ON tbl_questions.`appt_id` = user_appoint.`id`
AND tbl_questions.`lastupdateddatetime` > '2011-01-01 00:00:00'
GROUP BY user_appoint.`id`
HAVING lastupdateddatetime > '2011-01-01 00:00:00'
) d
ORDER BY `u_id`
LIMIT 0, 2000
;
HOWEVER
You are using a non-current and non-standard form of GROUP BY clause. MySQL started life allowing this bizarre situation where you could select many columns but only group by one of those. This is completely non-standard for SQL.
In recent versions of MySQL the default settings have changed (ONLY_FULL_GROUP_BY is enabled by default), and selecting columns that are neither aggregated nor listed in the GROUP BY clause will cause an error. A quick way to check this is shown after the GROUP BY list below.
So, you may have to change the way you perform the grouping to
GROUP BY
user_appoint.`id`
, user_appoint.`u_id`
, tbl_questions.candans
, tbl_questions.examans
, tbl_questions.quenote
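If you want to verify whether your server enforces the strict GROUP BY behaviour, the active sql_mode shows it:
SELECT @@sql_mode;   -- ONLY_FULL_GROUP_BY in the returned list means the strict behaviour is on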
If none of these improve performance please provide the execution plan (as text).

Eliminate First 14 For Each Symbol From Query

The following query pulls all rows that do not exist in a relative_strength_index table. But I also need to eliminate the first 14 rows for each symbol, based on date ascending, from the historical_data table. I have made several attempts to do this but am having real trouble with the 14 days. How could this be resolved and added to my current query?
Current Query
select *
from historical_data hd
where not exists (select rsi_symbol, rsi_date from relative_strength_index where hd.symbol = rsi_symbol and hd.histDate = rsi_date);
What you want is the first argument of the two-argument LIMIT clause, which states which row to start from, accompanied by an ORDER BY ... ASC (the two-argument form is shown after the query below).
select * from historical_data hd where not exists (select rsi_symbol, rsi_date from relative_strength_index where hd.symbol = rsi_symbol and hd.histDate = rsi_date ORDER BY rsi_date ASC LIMIT 14)
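For reference, MySQL's two-argument LIMIT puts the offset first; the sketch below skips the first 14 rows of the ordered result and returns everything after them (the large second argument is the documented idiom for "no upper bound"):
SELECT rsi_symbol, rsi_date
FROM relative_strength_index
ORDER BY rsi_date ASC
LIMIT 14, 18446744073709551615;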
Use OFFSET along with LIMIT like this; this will return a maximum of 100,000 rows starting at row 15:
select *
from historical_data hd
where not exists (select rsi_symbol, rsi_date from relative_strength_index where hd.symbol = rsi_symbol and hd.histDate = rsi_date)
order by date asc
limit 100000 offset 14;
But because you're using LIMIT and OFFSET, you will want to ORDER BY something deterministic before specifying the limit and offset.
UPDATE: you mentioned "for each symbol", so try this query; it ranks each symbol's rows based on date ascending, then only selects rows where the rank is >= 15.
SELECT *
FROM
(select hd.*,
CASE WHEN @previous_symbol = hd.symbol THEN @rank := @rank + 1
ELSE @rank := 1
END as rank,
@previous_symbol := hd.symbol
from historical_data hd
where not exists (select rsi_symbol, rsi_date from relative_strength_index where hd.symbol = rsi_symbol and hd.histDate = rsi_date)
order by hd.symbol, hd.histDate asc
)T
WHERE T.rank >= 15
It's not clear (to me) what resultset you want to return, or the conditions that specify whether a row should be returned.
All we have to go on is a confusingly vague description, to exclude "the first 14 rows", or "the first 14 days" for each symbol.
What we don't have is a representative sample of the data, or an example of what rows should be returned.
Without that, we don't have a way to know if we understand the description of the specification, and we don't have anything to test against or to compare our results to.
So, we are basically just guessing. (Which seems to be the most popular kind of answer provided by the "try this" enthusiasts.)
I can provide some examples of some patterns, which may suit your specification, or may not.
To get the earliest `histdate` for each `symbol`, and add 14 days to that, we can use an inline view. We can then do a semi-join to the `historical_data` data, to exclude rows that have a `histdate` before the date returned from the inline view.
(This is based on an assumption that the datatype of the `histdate` column is DATE.)
SELECT hd.*
FROM ( SELECT d.symbol
, MIN(d.histdate) + INTERVAL 14 DAY AS histdate
FROM historical_data d
GROUP BY d.symbol
) dd
JOIN historical_data hd
ON hd.symbol = dd.symbol
AND hd.histdate > dd.histdate
ORDER
BY hd.symbol
, hd.histdate
But that query doesn't include any reference to the `relative_strength_index` table. The original query includes a NOT EXISTS predicate, with a correlated subquery of the `relative_strength_index` table.
If the goal is to get the earliest `rsi_date` for each `rsi_symbol` from that table, and then add 14 days to that value...
SELECT hd.*
FROM ( SELECT rsi.rsi_symbol
, MIN(rsi.rsi_date) + INTERVAL 14 DAY AS rsi_date
FROM relative_strength_index rsi
GROUP BY rsi.rsi_symbol
) rs
JOIN historical_data hd
ON hd.symbol = rs.rsi_symbol
AND hd.histdate > rs.rsi_date
ORDER
BY hd.symbol
, hd.histdate
If the goal is to exclude rows where a matching row in relative_strength_index already exists, I would use an anti-join pattern...
SELECT hd.*
FROM ( SELECT d.symbol
, MIN(d.histdate) + INTERVAL 14 DAY AS histdate
FROM historical_data d
GROUP BY d.symbol
) dd
JOIN historical_data hd
ON hd.symbol = dd.symbol
AND hd.histdate > dd.histdate
LEFT
JOIN relative_strength_index xr
ON xr.rsi_symbol = hd.symbol
AND xr.rsi_date = hd.histdate
WHERE xr.rsi_symbol IS NULL
ORDER
BY hd.symbol
, hd.histdate
These are just example query patterns, which are likely not suited to your exact specification, since they are guesses.
It doesn't make much sense to provide more examples of other patterns, without a more detailed specification.

SQL query for counting multiple strings with one output

I have a database including certain strings, such as '{TICKER|IBM}' to which I will refer as ticker-strings. My target is to count the amount of ticker-strings per day for multiple strings.
My database table 'tweets' includes the columns 'tweet_id', 'created_at' (dd/mm/yyyy hh/mm/ss) and 'processed_text'. The ticker-strings, such as '{TICKER|IBM}', are within the 'processed_text' column.
At this moment, I have a working SQL query for counting one ticker-string (thanks to the help of other Stackoverflow-ers). What I would like to have is a SQL query in which I can count multiple strings (next to '{TICKER|IBM}' also '{TICKER|GOOG}' and '{TICKER|BAC}' for instance).
The working SQL query for counting one ticker-string is as follows:
SELECT d.date, IFNULL(t.count, 0) AS tweet_count
FROM all_dates AS d
LEFT JOIN (
SELECT COUNT(DISTINCT tweet_id) AS count, DATE(created_at) AS date
FROM tweets
WHERE processed_text LIKE '%{TICKER|IBM}%'
GROUP BY date) AS t
ON d.date = t.date
The eventual output should thus give a column with the date, a column with {TICKER|IBM}, a column with {TICKER|GOOG} and one with {TICKER|BAC}.
I was wondering whether this is possible and whether you have a solution for this? I have more than 100 different ticker-strings. Of course, doing them one-by-one is an option, but it is a very time-consuming one.
If I understand correctly, you can do this with conditional aggregation:
SELECT d.date, coalesce(IBM, 0) as IBM, coalesce(GOOG, 0) as GOOG, coalesce(BAC, 0) AS BAC
FROM all_dates d LEFT JOIN
(SELECT DATE(created_at) AS date,
COUNT(DISTINCT CASE WHEN processed_text LIKE '%{TICKER|IBM}%' then tweet_id
END) as IBM,
COUNT(DISTINCT CASE WHEN processed_text LIKE '%{TICKER|GOOG}%' then tweet_id
END) as GOOG,
COUNT(DISTINCT CASE WHEN processed_text LIKE '%{TICKER|BAC}%' then tweet_id
END) as BAC
FROM tweets
GROUP BY date
) t
ON d.date = t.date;
I'd return the specified resultset like this, adding expressions to the SELECT list for each "ticker" I want returned as a separate column:
SELECT d.date
, IFNULL(SUM(t.processed_text LIKE '%{TICKER|IBM}%' ),0) AS `cnt_ibm`
, IFNULL(SUM(t.processed_text LIKE '%{TICKER|GOOG}%'),0) AS `cnt_goog`
, IFNULL(SUM(t.processed_text LIKE '%{TICKER|BAC}%' ),0) AS `cnt_bac`
, IFNULL(SUM(t.processed_text LIKE '%{TICKER|...}%' ),0) AS `cnt_...`
FROM all_dates d
LEFT
JOIN tweets t
ON t.created_at >= d.date
AND t.created_at < d.date + INTERVAL 1 DAY
GROUP BY d.date
NOTES: The expressions within the SUM aggregates above are evaluated as booleans, so they return 1 (if true), 0 (if false), or NULL. I'd avoid wrapping the created_at column in a DATE() function, and use a range scan instead, especially if a predicate is added (WHERE clause) that restricts the values of `date` being returned from `all_dates`.
As an alternative, expressions like this will return an equivalent result:
, SUM(IF(t.processed_text LIKE '%{TICKER|IBM}%' ,1,0)) AS `cnt_ibm`

Catching latest column value change in SQL

How can I get the date for the latest value change in one column with one SQL query?
Possible database situation:
Date State
2012-11-25 state one
2012-11-26 state one
2012-11-27 state two
2012-11-28 state two
2012-11-29 state one
2012-11-30 state one
So the result should return 2012-11-29 as the latest state change. If I group by the State value, I will get the date of the first time that state appears in the database.
The query will group the table on state and show the state and, in the date field, the latest date for that state (a sketch of such a query follows the sample output).
From the given input the output would be:
Date State
2012-11-30 state one
2012-11-28 state two
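A sketch of that grouping, assuming the sample table is called tableX with columns `Date` and `State` as shown above:
SELECT `State`, MAX(`Date`) AS `Date`
FROM tableX
GROUP BY `State`
ORDER BY `Date` DESC;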
This will get you the last state:
-- Query 1
SELECT state
FROM tableX
ORDER BY date DESC
LIMIT 1 ;
Encapsulating the above, we can use it to get the date just before the last change:
-- Query 2
SELECT t.date
FROM tableX AS t
JOIN
( SELECT state
FROM tableX
ORDER BY date DESC
LIMIT 1
) AS last
ON last.state <> t.state
ORDER BY t.date DESC
LIMIT 1 ;
And then use that to find the date (or the whole row) where the last change occurred:
-- Query 3
SELECT a.date -- can also be used: a.*
FROM tableX AS a
JOIN
( SELECT t.date
FROM tableX AS t
JOIN
( SELECT state
FROM tableX
ORDER BY date DESC
LIMIT 1
) AS last
ON last.state <> t.state
ORDER BY t.date DESC
LIMIT 1
) AS b
ON a.date > b.date
ORDER BY a.date
LIMIT 1 ;
Tested in SQL-Fiddle
And a solution that uses MySQL variables:
-- Query 4
SELECT date
FROM
( SELECT t.date
, @r := (@s <> state) AS result
, @s := state AS prev_state
FROM tableX AS t
CROSS JOIN
( SELECT @r := 0, @s := ''
) AS dummy
ORDER BY t.date ASC
) AS tmp
WHERE result = 1
ORDER BY date DESC
LIMIT 1 ;
I believe this is the answer:
SELECT
DISTINCT State AS State, `Date`
FROM
Table_1 t1
WHERE t1.`Date`=(SELECT MAX(`Date`) FROM Table_1 WHERE State=t1.State)
...and the test:
http://sqlfiddle.com/#!2/8b0d8/5
If you add another column 'changed' (a datetime), you can fill it using an update trigger that sets it to NOW(). If you query your table ordering on the changed column, the most recently changed row will end up first.
CREATE TRIGGER `trigger` BEFORE UPDATE ON `table`
FOR EACH ROW
BEGIN
SET NEW.changed = NOW();
END$$
Try this:
Select
MAX(`Date`), state from mytable
group by state
If you had been using Postgres, you could compare different rows in the same table using "LEAD .. OVER". I have not managed to find the same functionality in MySQL.
A bit hairy, but I think this will do:
select min(t1.date) from table_1 t1 where
(select count(distinct state) from table_1 where table_1.date>=t1.date)=1
Basically, this asks for the earliest date from which no further change in state is found in any later rows. Be warned, this query may scale terribly for large data sets....
I think your best choice here is analytic (window) functions, which MySQL supports as of version 8.0. Try this - it should be OK performance-wise:
SELECT *
FROM test
WHERE my_date = (SELECT MAX(my_date)
                 FROM (SELECT my_date
                       FROM (SELECT my_date,
                                    state,
                                    LAG(state) OVER (ORDER BY my_date) AS lag_val
                             FROM test
                             ORDER BY my_date) a
                       WHERE state != lag_val) b)
In the inner select, the LAG function gets the previous value in the STATE column and in the outer select I mark the date of a change - those with lag value different than the current state value. And outside, I'm getting the latest date from those dates of a change... I hope that this is what you needed.
SELECT MAX(DATE) FROM YOUR_TABLE
Above answer doesn't seem to satisfy what OP needs.
UPDATED ANSWER WITH AFTER INSERT/UPDATE TRIGGER
SET @latestState = NULL;
SET @latestDate = NULL;
CREATE TRIGGER latestInsertTrigger AFTER INSERT ON myTable
FOR EACH ROW
BEGIN
-- OLD is not available in an INSERT trigger; just record the newly inserted row
SET @latestState = NEW.state;
SET @latestDate = NEW.date;
END
;
CREATE TRIGGER latestUpdateTrigger AFTER UPDATE ON myTable
FOR EACH ROW
BEGIN
IF OLD.DATE = NEW.date AND OLD.STATE <> NEW.STATE THEN
SET @latestState = NEW.state;
SET @latestDate = NEW.date;
END IF;
END
;
You may use the following query to get the latest record added/updated:
SELECT DATE, STATE FROM myTable
WHERE STATE = @latestState
OR DATE = @latestDate
ORDER BY DATE DESC
;
Results:
DATE STATE
November, 30 2012 00:00:00+0000 state one
November, 28 2012 00:00:00+0000 state two
November, 27 2012 00:00:00+0000 state two
The above query's results need to be limited to 2, 3 or n rows, based on what you need.
Frankly, it seems like you want to get the max from both columns based on the data sample you have given, assuming that your state only increases with the date. I only wish the state were an integer :D
Then a union of two max subqueries on both columns would have solved it easily. Still, string manipulation or a regex can find what the max is in the state column. Finally, this approach needs a LIMIT x, and it still has a loophole. Anyway, it took me some time to figure out your need :$