I have a lot of columns to process in a query (column values can also be NULL), and at the end I need a single ordered list of all pieces for a timetable (e.g. "which part of which piece should I work on first").
My table is something like this:
piece type | deadline for first check | deadline for second check | deadline for third check | deadline for n. check
---------------------------------------------------------------------------------------------------------------------
FIRST | NULL | 2022-02-01 | 2022-01-18 | 2022-04-01
SECOND | 2022-03-01 | 2022-01-15 | 2022-03-15 | 2022-05-01
My current query plus PHP post-processing (slow) gives me something like:
2022-01-15 SECOND (second check)
2022-03-01 SECOND (first check)
2022-05-01 SECOND (n. check)
2022-01-18 FIRST (third check)
...
As I have more than 600 pieces (of different types) and 6-7 checks in total (something like 4 checks for one piece type, 2 for another, and so on), I would like to know if there is a way to limit the result (say, "least of the deadlines < today", or "least of the deadlines within 10 days"), since the PHP-based approach gives me an unfiltered list (per piece type).
Any help appreciated!
Consider something like:
SELECT piece_type, check_date, which
FROM (
SELECT piece_type, deadline1 AS check_date, 1 AS which
FROM tbl WHERE deadline1 IS NOT NULL
UNION ALL
SELECT piece_type, deadline2 AS check_date, 2 AS which
FROM tbl WHERE deadline2 IS NOT NULL
UNION ALL
SELECT piece_type, deadline3 AS check_date, 3 AS which
FROM tbl WHERE deadline3 IS NOT NULL
) AS x
ORDER BY check_date;
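If you only want upcoming work, the same unpivoted result can be filtered by date before sorting. A minimal sketch, assuming MySQL and the same hypothetical tbl/deadline1..3 columns as above (the 10-day window is just the example from your question):
SELECT piece_type, check_date, which
FROM (
SELECT piece_type, deadline1 AS check_date, 1 AS which FROM tbl
UNION ALL
SELECT piece_type, deadline2 AS check_date, 2 AS which FROM tbl
UNION ALL
SELECT piece_type, deadline3 AS check_date, 3 AS which FROM tbl
) AS x
WHERE check_date IS NOT NULL
AND check_date >= CURDATE() -- drop this line to also include overdue checks
AND check_date < CURDATE() + INTERVAL 10 DAY
ORDER BY check_date;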
Related
I have a set of conditions in my WHERE clause, like:
WHERE
d.attribute3 = 'abcd*'
AND x.STATUS != 'P'
AND x.STATUS != 'J'
AND x.STATUS != 'X'
AND x.STATUS != 'S'
AND x.STATUS != 'D'
AND CURRENT_TIMESTAMP - 1 < x.CREATION_TIMESTAMP
Which of these conditions will be executed first? I am using Oracle.
Will I get these details in my execution plan?
(I do not have the authority to do that in the db here, else I would have tried)
Are you sure you "don't have the authority" to see an execution plan? What about using AUTOTRACE?
SQL> set autotrace on
SQL> select * from emp
2 join dept on dept.deptno = emp.deptno
3 where emp.ename like 'K%'
4 and dept.loc like 'l%'
5 /
no rows selected
Execution Plan
----------------------------------------------------------
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 62 | 4 (0)|
| 1 | NESTED LOOPS | | 1 | 62 | 4 (0)|
|* 2 | TABLE ACCESS FULL | EMP | 1 | 42 | 3 (0)|
|* 3 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)|
|* 4 | INDEX UNIQUE SCAN | SYS_C0042912 | 1 | | 0 (0)|
----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("EMP"."ENAME" LIKE 'K%' AND "EMP"."DEPTNO" IS NOT NULL)
3 - filter("DEPT"."LOC" LIKE 'l%')
4 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")
As you can see, that gives quite a lot of detail about how the query will be executed. It tells me that:
the condition "emp.ename like 'K%'" will be applied first, on the full scan of EMP
then the matching DEPT records will be selected via the index on dept.deptno (via the NESTED LOOPS method)
finally the filter "dept.loc like 'l%'" will be applied.
This order of application has nothing to do with the way the predicates are ordered in the WHERE clause, as we can show with this re-ordered query:
SQL> select * from emp
2 join dept on dept.deptno = emp.deptno
3 where dept.loc like 'l%'
4 and emp.ename like 'K%';
no rows selected
Execution Plan
----------------------------------------------------------
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 62 | 4 (0)|
| 1 | NESTED LOOPS | | 1 | 62 | 4 (0)|
|* 2 | TABLE ACCESS FULL | EMP | 1 | 42 | 3 (0)|
|* 3 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)|
|* 4 | INDEX UNIQUE SCAN | SYS_C0042912 | 1 | | 0 (0)|
----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("EMP"."ENAME" LIKE 'K%' AND "EMP"."DEPTNO" IS NOT NULL)
3 - filter("DEPT"."LOC" LIKE 'l%')
4 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")
The database will decide what order to execute the conditions in.
Normally (but not always) it will use an index first where possible.
As has been said, looking at the execution plan will give you some information. However, unless you use the plan stability feature, you can't rely on the execution plan always remaining the same.
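If AUTOTRACE really is off limits, you may still be able to run a plain EXPLAIN PLAN (it needs a PLAN_TABLE in your schema or a shared one). A minimal sketch; the table names and join condition below are placeholders for whatever your real d and x tables are:
EXPLAIN PLAN FOR
SELECT d.*, x.*
  FROM d_table d
  JOIN x_table x ON x.d_id = d.id          -- placeholder join
 WHERE d.attribute3 = 'abcd*'
   AND x.STATUS NOT IN ('P', 'J', 'X', 'S', 'D')
   AND CURRENT_TIMESTAMP - 1 < x.CREATION_TIMESTAMP;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);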
In the case of the query you posted, it doesn't look like the order of evaluation will change the logic in any way, so I guess what you are thinking about is efficiency. It's fairly likely that the Oracle optimizer will choose a plan that is efficient.
There are tricks you can do to encourage a particular ordering if you want to compare the performance with the base query. Say for instance that you wanted the timestamp condition to be executed first. You could do this:
WITH subset AS
( SELECT /*+ materialize */ *
  FROM my_table x
  WHERE CURRENT_TIMESTAMP - 1 < x.CREATION_TIMESTAMP
)
SELECT *
FROM subset
WHERE
attribute3 = 'abcd*'
AND STATUS != 'P'
AND STATUS != 'J'
AND STATUS != 'X'
AND STATUS != 'S'
AND STATUS != 'D'
The "materialize" hint should cause the optimizer to execute the inline query first, then scan that result set for the other conditions.
I'm not advising you do this as a general habit. In most cases just writing the simple query will lead to the best execution plans.
To add to the other comments on execution plans: under the CPU-based costing model introduced in 9i and used by default in 10g+, Oracle will also assess which predicate evaluation order results in lower computational cost, even if that does not affect the table access order and method. If executing one predicate before another results in fewer predicate evaluations, that optimisation can be applied.
See this article for more details: http://www.oracle.com/technology/pub/articles/lewis_cbo.html
Furthermore, Oracle doesn't even have to execute predicates where comparison with a check constraint or partition definitions indicates that no rows would be returned anyway.
Complex stuff.
Finally, relational database theory says that you can never depend on the order of execution of the query clauses, so best not to try. As others have said, the cost-based optimizer tries to choose what it thinks is best, but even viewing explain plan won't guarantee the actual order that's used. Explain plan just tells you what the CBO recommends, but that's still not 100%.
Maybe if you explain why you're trying to do this, someone could suggest a plan?
Tricky question. I just faced the same dilemma: I need to call a function within a query, and the function itself runs another query, so you can see how that affects performance in general. But in most of our cases the function wouldn't be called as often if the rest of the conditions were evaluated first.
So I thought it would be useful to post another article on the topic here.
The following quote is copied from Donald Burleson's site (http://www.dba-oracle.com/t_where_clause.htm) .
The ordered_predicates hint is specified in the Oracle WHERE clause of a query and is used to specify the order in which Boolean predicates should be evaluated.
In the absence of ordered_predicates, Oracle uses the following steps to evaluate the order of SQL predicates:
Subqueries are evaluated before the outer Boolean conditions in the WHERE clause.
All Boolean conditions without built-in functions or subqueries are evaluated in reverse from the order they are found in the WHERE clause, with the last predicate being evaluated first.
Boolean predicates with built-in functions of each predicate are evaluated in increasing order of their estimated evaluation costs.
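For illustration only: per the quote above, the hint goes inside the WHERE clause, something like the sketch below (some_table is a placeholder, the aliases are from the question, and the hint was deprecated in Oracle 10g, so treat this as a curiosity rather than a recommendation):
SELECT x.*
  FROM some_table x
 WHERE /*+ ordered_predicates */
       x.STATUS != 'P'
   AND x.STATUS != 'J'
   AND CURRENT_TIMESTAMP - 1 < x.CREATION_TIMESTAMP;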
I have the following table in MySQL
Name Age Group
abel 7 A
joe 6 A
Rick 7 A
Diana 5 B
Billy 6 B
Pat 5 B
I want to randomize the rows, but they should still remain grouped by the Group column.
For example, I want my result to look something like this.
Name Age Group
joe 6 A
abel 7 A
Rick 7 A
Billy 6 B
Pat 5 B
Diana 5 B
What query should I use to get this result? The entire table should be randomised and then grouped by the "Group" column.
What you describe in your question as GROUPing is more correctly described as sorting. This is a particular issue when talking about SQL databases where "GROUP" means something quite different and determines the scope of aggregation operations.
Indeed "group" is a reserved word in SQL, so although mysql and some other SQL databases can work around this, it is a poor choice as an attribute name.
SELECT *
FROM yourtable
ORDER BY `group`
Using random values also has a lot of semantic confusion. A truly random number would have a different value every time it is retrieved - which would make any sorting impossible (and databases do a lot of sorting which is normally invisible to the user). As long as the implementation uses a finite time algorithm such as quicksort that shouldn't be a problem - but a bubble sort would never finish, and a merge sort could get very confused.
There are also degrees of randomness. There are different algorithms for generating random numbers. For encryption it's critical than the random numbers be evenly distributed and completely unpredictable - often these will use hardware events (sometimes even dedicated hardware) but I don't expect you would need that. But do you want the ordering to be repeatable across invocations?
SELECT *
FROM yourtable
ORDER BY `group`, RAND()
...will give different results each time.
OTOH
SELECT *
FROM yourtable
ORDER BY `group`, MD5(CONCAT(age, name, `group`))
...would give the results always sorted in the same order. While
SELECT *
FROM yourtable
ORDER BY `group`, MD5(CONCAT(CURDATE(), age, name, `group`))
...will give different results on different days.
DROP TABLE my_table;
CREATE TABLE my_table
(name VARCHAR(12) NOT NULL
,age INT NOT NULL
,my_group CHAR(1) NOT NULL
);
INSERT INTO my_table VALUES
('Abel',7,'A'),
('Joe',6,'A'),
('Rick',7,'A'),
('Diana',5,'B'),
('Billy',6,'B'),
('Pat',5,'B');
SELECT * FROM my_table ORDER BY my_group,RAND();
+-------+-----+----------+
| name | age | my_group |
+-------+-----+----------+
| Joe | 6 | A |
| Abel | 7 | A |
| Rick | 7 | A |
| Pat | 5 | B |
| Diana | 5 | B |
| Billy | 6 | B |
+-------+-----+----------+
Do the random ordering first, then sort by the group column.
select Name, Age, `Group`
from (
select *
FROM yourtable
order by RAND()
) t
order by `Group`
Try this:
SELECT * FROM yourtable order by `Group`, rand()
I have a scenario where I need to display the total number of attendees of an event. With the help of a registration form I have already captured the details of the people who are attending, and my table looks like below.
ID | NAME | PHONE_NUMBER | IS_LIFE_PARTNER_ATTENDING
1 | ABC | 1234567890 | N
2 | PQR | 1234567891 | Y
3 | XYZ | 1234567892 | N
I can easily display the number of registrations by using count(id). But when displaying the number of attendees I have to count a registrant as two attendees if he/she is coming with a partner (identified by the IS_LIFE_PARTNER_ATTENDING column).
So, in the above case, the number of registrants is 3, but the number of attendees is 4, because "PQR" is coming with his/her life partner.
How can we do this in a MySQL query?
You can use the following query:
SELECT
SUM( 1 + (IS_LIFE_PARTNER_ATTENDING = 'Y')) AS totalAttendees
FROM your_table;
WORKING DEMO
Since a boolean expression resolves to 0/1 in MySQL, you can capitalize on this in your case.
Note:
SUM(a=b) adds 1 for every row where a is equal to b, and 0 otherwise.
Caution:
Never underestimate the parentheses in (IS_LIFE_PARTNER_ATTENDING = 'Y'). If you omit them, the whole summation results in zero (0), because of operator precedence.
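To see why, here is a sketch of the parenthesis-free version against the same (placeholder) your_table. Because + binds tighter than =, it is parsed as SUM((1 + IS_LIFE_PARTNER_ATTENDING) = 'Y'), and that comparison is false for every row:
SELECT
SUM(1 + IS_LIFE_PARTNER_ATTENDING = 'Y') AS totalAttendees -- always returns 0
FROM your_table;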
Use SUM with CASE:
SELECT
Name,
SUM(CASE WHEN IS_LIFE_PARTNER_ATTENDING = 'Y' THEN 2 ELSE 1 END) AS 'Attendees'
FROM
your_table
GROUP BY Name
I am by no means a MySQL expert, so I am looking for any help on this matter.
I need to perform a simple test (in principle). I have this (simplified) table:
tableid | userid | car | From | To
--------------------------------------------------------
1 | 1 | Fiesta | 2015-01-01 | 2015-01-31
2 | 1 | MX5 | 2015-02-01 | 2015-02-28
3 | 1 | Navara | 2015-03-01 | 2015-03-31
4 | 1 | GTR | 2015-03-28 | 2015-04-30
5 | 2 | Focus | 2015-01-01 | 2015-01-31
6 | 2 | i5 | 2015-02-01 | 2015-02-28
7 | 2 | Aygo | 2015-03-01 | 2015-03-31
8 | 2 | 206 | 2015-03-29 | 2015-04-30
9 | 1 | Skyline | 2015-04-29 | 2015-05-31
10 | 2 | Skyline | 2015-04-29 | 2015-05-31
I need to find two things here:
If any user has date overlaps in his car assignments of more than one day (end of the assignment can be on the same day as the new assignment start).
Did any two users try to get the same car assigned on the same date, or did the date ranges overlap for them on the same car.
So the query (or queries) I am looking for should return those rows:
tableid | userid | car | From | To
--------------------------------------------------------
3 | 1 | Navara | 2015-03-01 | 2015-03-31
4 | 1 | GTR | 2015-03-28 | 2015-04-30
7 | 2 | Aygo | 2015-03-01 | 2015-03-31
8 | 2 | 206 | 2015-03-29 | 2015-04-30
9 | 1 | Skyline | 2015-04-29 | 2015-05-31
10 | 2 | Skyline | 2015-04-29 | 2015-05-31
I feel like I am bashing my head against the wall here; I would be happy with being able to do these comparisons in separate queries. I need to display them in one table, but I could always join the results afterwards.
I've done research and a few hours of testing, but I can't get anywhere near the result I want.
SQLFiddle with the above test data
I've tried these posts btw (they were not exactly what I needed but were close enough, or so I thought):
Comparing two date ranges within the same table
How to compare values of text columns from the same table
This was the closest solution I could find but when I tried it on a single table (joining table to itself) I was getting crazy results: Checking a table for time overlap?
EDIT
As a temporary solution I have adopted a different approach, similar to the posts I found during my research (above). I will now check if the new car rental / assignment date overlaps with any date range within the table. If so, I will save the id(s) of the rows that the date overlaps with. This way at least I will be able to flag overlaps and allow a user to look at the flagged rows and resolve any overlaps manually.
Thanks to everyone who offered their help with this. I will flag philipxy's answer as the chosen one (in the next 24h) unless someone has a better way of achieving this. I have no doubt that by following his answer I will eventually be able to reach the results I need. At the moment, though, I need to adopt any solution that works, as I need to finish my project in the next few days, hence the change of approach.
Edit #2
Both answers are brilliant, and to anyone who finds this post with the same issue I had: read them both and look at the fiddles! :) A lot of amazing brain-work went into them! Temporarily I had to go with the solution I mention in my Edit #1, but I will be adapting my queries to follow Ryan Vincent's approach plus philipxy's edits/comments about ignoring the initial one-day overlap.
Here is the first part: Overlapping cars per user...
SQLFiddle - correlated Query and Join Query
Second part - more than one user in one car at the same time: SQLFiddle - correlated Query and Join Query. Query below...
I use the correlated queries:
You will likely need indexes on userid and car. However, please check the 'explain plan' to see how MySQL is accessing the data. And just try it :)
Overlapping cars per user
The query:
SELECT `allCars`.`userid` AS `allCars_userid`,
`allCars`.`car` AS `allCars_car`,
`allCars`.`From` AS `allCars_From`,
`allCars`.`To` AS `allCars_To`,
`allCars`.`tableid` AS `allCars_id`
FROM
`cars` AS `allCars`
WHERE
EXISTS
(SELECT 1
FROM `cars` AS `overlapCar`
WHERE
`allCars`.`userid` = `overlapCar`.`userid`
AND `allCars`.`tableid` <> `overlapCar`.`tableid`
AND NOT ( `allCars`.`From` >= `overlapCar`.`To` /* starts after outer ends */
OR `allCars`.`To` <= `overlapCar`.`From`)) /* ends before outer starts */
ORDER BY
`allCars`.`userid`,
`allCars`.`From`,
`allCars`.`car`;
The results:
allCars_userid allCars_car allCars_From allCars_To allCars_id
-------------- ----------- ------------ ---------- ------------
1 Navara 2015-03-01 2015-03-31 3
1 GTR 2015-03-28 2015-04-30 4
1 Skyline 2015-04-29 2015-05-31 9
2 Aygo 2015-03-01 2015-03-31 7
2 206 2015-03-29 2015-04-30 8
2 Skyline 2015-04-29 2015-05-31 10
Why does it work? Or, how I think about it:
I use the correlated query so I don't have duplicates to deal with and it is probably the easiest to understand for me. There are other ways of expressing the query. Each has advantages and drawbacks. I want something I can easily understand.
Requirement: For each user ensure that they don't have two or more cars at the same time.
So, for each user record (allCars), check the complete table (overlapCar) to see if you can find a different record that overlaps the time of the current record. If we find one, then select the current record we are checking (in allCars).
Therefore the overlap check is:
the allCars userid and the overLap userid must be the same
the allCars car record and the overlap car record must be different
the allCars time range and the overLap time range must overlap.
The time range check:
Instead of trying to enumerate the ways two time ranges can overlap, use a positive test: the easiest approach is to check that they don't overlap, and apply a NOT to it.
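Equivalently, the NOT ( ... ) in the queries above can be written in the positive form, if that reads better to you (same logic, just De Morgan applied):
AND `allCars`.`From` < `overlapCar`.`To`   /* each range starts before the other ends */
AND `allCars`.`To`   > `overlapCar`.`From`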
One car with More than One User at the same time...
The query:
SELECT `allCars`.`car` AS `allCars_car`,
`allCars`.`userid` AS `allCars_userid`,
`allCars`.`From` AS `allCars_From`,
`allCars`.`To` AS `allCars_To`,
`allCars`.`tableid` AS `allCars_id`
FROM
`cars` AS `allCars`
WHERE
EXISTS
(SELECT 1
FROM `cars` AS `overlapUser`
WHERE
`allCars`.`car` = `overlapUser`.`car`
AND `allCars`.`tableid` <> `overlapUser`.`tableid`
AND NOT ( `allCars`.`From` >= `overlapUser`.`To` /* starts after outer ends */
OR `allCars`.`To` <= `overlapUser`.`From`)) /* ends before outer starts */
ORDER BY
`allCars`.`car`,
`allCars`.`userid`,
`allCars`.`From`;
The results:
allCars_car allCars_userid allCars_From allCars_To allCars_id
----------- -------------- ------------ ---------- ------------
Skyline 1 2015-04-29 2015-05-31 9
Skyline 2 2015-04-29 2015-05-31 10
Edit:
In view of the comments by philipxy about time ranges needing 'greater than or equal to' checks, I have updated the code here. I haven't changed the SQLFiddles.
For each input and output table, find its meaning, i.e. a statement template parameterized by column names (aka predicate) that a row makes into a true or false statement (aka proposition). A table holds the rows that make its predicate into a true proposition, i.e. rows that make a true proposition go in a table and rows that make a false proposition stay out. E.g. for your input table:
rental [tableid] was user [userid] renting car [car] from [from] to [to]
Then phrase the output table predicate in terms of the input table predicate. Don't use descriptions like your 1 & 2:
If any user has date overlaps in his car assignments of more than one day (end of the assignment can be on the same day as the new assignment start).
Instead find the predicate that an arbitrary row states when in the table:
rental [tableid] was user [user] renting car [car] from [from] to [to]
in self-conflict with some other rental
For the DBMS to calculate the rows making this true we must express this in terms of our given predicate(s) plus literals & conditions:
-- query result holds the rows where
FOR SOME t2.tableid, t2.userid, ...:
rental [t1.tableid] was user [t1.userid] renting car [t1.car] from [t1.from] to [t1.to]
AND rental [t2.tableid] was user [t2.userid] renting car [t2.car] from [t2.from] to [t2.to]
AND [t1.userid] = [t2.userid] -- userids id the same users
AND [t1.to] > [t2.from] AND ... -- tos/froms id intervals with overlap more than one day
...
(Inside an SQL SELECT statement the cross product of JOINed tables has column names of the form alias.column. Think of . as another character allowed in column names. Finally the SELECT clause drops the alias. prefixes.)
We convert a query predicate to an SQL query that calculates the rows that make it true:
A table's predicate gets replaced by the table alias.
To use the same predicate/table multiple times make aliases.
Changing column old to new in a predicate adds AND old = new.
AND of predicates gets replaced by JOIN.
OR of predicates gets replaced by UNION.
AND NOT of predicates gets replaced by EXCEPT, MINUS or appropriate LEFT JOIN.
AND condition gets replaced by WHERE or ON condition.
For a predicate true FOR SOME columns to drop, or when THERE EXISTS columns to drop, SELECT DISTINCT columns to keep.
Etc. (See this.)
Hence (completing the ellipses):
SELECT DISTINCT t1.*
FROM t t1 JOIN t t2
ON t1.userid = t2.userid -- userids id the same users
WHERE t1.to > t2.from AND t2.to > t1.from -- tos/froms id intervals with overlap more than one day
AND t1.tableid <> t2.tableid -- tableids id different rentals
Did any two users try to get the same car assigned on the same date, or did the date ranges overlap for them on the same car.
Finding the predicate that an arbitrary row states when in the table:
rental [tableid] was user [user] renting car [car] from [from] to [to]
in conflict with some other user's rental
In terms of our given predicate(s) plus literals & conditions:
-- query result holds the rows where
FOR SOME t2.*
rental [t1.tableid] was user [t1.userid] renting car [t1.car] from [t1.from] to [t1.to]
AND rental [t2.tableid] was user [t2.userid] renting car [t2.car] from [t2.from] to [t2.to]
AND [t1.userid] <> [t2.userid] -- userids id different users
AND [t1.car] = [t2.car] -- .cars id the same car
AND [t1.to] >= [t2.from] AND [t2.to] >= [t1.from] -- tos/froms id intervals with any overlap
AND [t1.tableid] <> [t2.tableid] -- tableids id different rentals
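Translating that with the same correspondences used for query 1 gives a sketch like this (same generic table t; swap in your actual table and column names):
SELECT DISTINCT t1.*
FROM t t1 JOIN t t2
ON t1.car = t2.car -- cars id the same car
WHERE t1.userid <> t2.userid -- userids id different users
AND t1.to >= t2.from AND t2.to >= t1.from -- tos/froms id intervals with any overlap
AND t1.tableid <> t2.tableid -- tableids id different rentals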
The UNION of queries for predicates 1 & 2 returns the rows for which predicate 1 OR predicate 2.
Try to learn to express predicates--what rows state when in tables--if only as the goal for intuitive (sub)querying.
PS It is good to always have data checking edge & non-edge cases for a condition being true & being false. Eg try query 1 with GTR starting on the 31st, an overlap of only one day, which should not be a self-conflict.
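For instance, a sketch of that test against the sample cars table used in the other answer (assuming query 1 is run over that table):
-- Give the GTR a start date equal to the Navara's end date: a one-day overlap only.
UPDATE cars SET `From` = '2015-03-31' WHERE tableid = 4;
-- After this, row 3 (Navara) should drop out of query 1's result,
-- while rows 4 and 9 still conflict with each other for user 1.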
PPS Querying involving duplicate rows, as with NULLs, has quite complex query meanings. It's hard to say when a tuple goes in or stays out of a table and how many times. For queries to have the simple intuitive meanings per my correspondences they can't have duplicates. Here SQL unfortunately differs from the relational model. In practice people rely on idioms when allowing non-distinct rows & they rely on rows being distinct because of constraints. Eg joining on UNIQUE columns per UNIQUEs, PKs & FKs. Eg: A final DISTINCT step is only doing work at a different time than a version that doesn't need it; time might or might not be an important implementation issue affecting the phrasing chosen for a given predicate/result.
I have a database with a created_at column containing the datetime in Y-m-d H:i:s format.
The latest datetime entry is 2011-09-28 00:10:02.
I need the query to be relative to the latest datetime entry.
The first value in the query should be the latest datetime entry.
The second value in the query should be the entry closest to 7 days from the first value.
The third value should be the entry closest to 7 days from the second value.
REPEAT #3.
What I mean by "closest to 7 days from":
The following are dates. The interval I desire is a week; in seconds, a week is 604800 seconds.
7 days from the first value is equal to 1316578202 (1317183002-604800)
the value closest to 1316578202 (7 days) is... 1316571974
unix timestamp | Y-m-d H:i:s
1317183002 | 2011-09-28 00:10:02 -> appear in query (first value)
1317101233 | 2011-09-27 01:27:13
1317009182 | 2011-09-25 23:53:02
1316916554 | 2011-09-24 22:09:14
1316836656 | 2011-09-23 23:57:36
1316745220 | 2011-09-22 22:33:40
1316659915 | 2011-09-21 22:51:55
1316571974 | 2011-09-20 22:26:14 -> closest to 7 days from 1317183002 (first value)
1316499187 | 2011-09-20 02:13:07
1316064243 | 2011-09-15 01:24:03
1315967707 | 2011-09-13 22:35:07 -> closest to 7 days from 1316571974 (second value)
1315881414 | 2011-09-12 22:36:54
1315794048 | 2011-09-11 22:20:48
1315715786 | 2011-09-11 00:36:26
1315622142 | 2011-09-09 22:35:42
I would really appreciate any help. I have not been able to do this via MySQL, and no online resources seem to deal with relative date manipulation such as this. I would like the query to be modular enough to change the interval to weekly, monthly, or yearly. Thanks in advance!
Answer #1 Reply:
SELECT
UNIX_TIMESTAMP(created_at)
AS unix_timestamp,
(
SELECT MIN(UNIX_TIMESTAMP(created_at))
FROM my_table
WHERE created_at >=
(
SELECT max(created_at) - 7
FROM my_table
)
)
AS `random_1`,
(
SELECT MIN(UNIX_TIMESTAMP(created_at))
FROM my_table
WHERE created_at >=
(
SELECT MAX(created_at) - 14
FROM my_table
)
)
AS `random_2`
FROM my_table
WHERE created_at =
(
SELECT MAX(created_at)
FROM my_table
)
Returns:
unix_timestamp | random_1 | random_2
1317183002 | 1317183002 | 1317183002
Answer #2 Reply:
RESULT SET:
This is the result set for a yearly interval:
id | created_at | period_index | period_timestamp
267 | 2010-09-27 22:57:05 | 0 | 1317183002
1 | 2009-12-10 15:08:00 | 1 | 1285554786
I desire this result:
id | created_at | period_index | period_timestamp
626 | 2011-09-28 00:10:02 | 0 | 0
267 | 2010-09-27 22:57:05 | 1 | 1317183002
I hope this makes more sense.
It's not exactly what you asked for, but the following example is pretty close....
Example 1:
select
floor(timestampdiff(SECOND, tbl.time, most_recent.time)/604800) as period_index,
unix_timestamp(max(tbl.time)) as period_timestamp
from
tbl
, (select max(time) as time from tbl) most_recent
group by period_index
gives results:
+--------------+------------------+
| period_index | period_timestamp |
+--------------+------------------+
| 0 | 1317183002 |
| 1 | 1316571974 |
| 2 | 1315967707 |
+--------------+------------------+
This breaks the dataset into groups based on "periods", where (in this example) each period is 7-days (604800 seconds) long. The period_timestamp that is returned for each period is the 'latest' (most recent) timestamp that falls within that period.
The period boundaries are all computed based on the most recent timestamp in the database, rather than computing each period's start and end time individually based on the timestamp of the period before it. The difference is subtle - your question requests the latter (iterative approach), but I'm hoping that the former (approach I've described here) will suffice for your needs, since SQL doesn't lend itself well to implementing iterative algorithms.
If you really do need to determine each period based on the timestamp in the previous period, then your best bet is going to be an iterative approach -- either using a programming language of your choice (like php), or by building a stored procedure that uses a cursor.
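For what it's worth, here is a minimal sketch of that iterative approach as a MySQL stored procedure, assuming the tbl(id, time) schema from Edit #1 below; the procedure name and the temporary periods table are invented for illustration, and it uses a simple loop with a scalar subquery rather than an explicit cursor. Each step picks the entry closest to exactly one interval before the previous pick:
DELIMITER //
CREATE PROCEDURE pick_closest_periods(IN p_interval_seconds INT)
BEGIN
  DECLARE cur_time  DATETIME;
  DECLARE next_time DATETIME;
  DECLARE idx       INT DEFAULT 0;

  DROP TEMPORARY TABLE IF EXISTS periods;
  CREATE TEMPORARY TABLE periods (period_index INT, period_timestamp BIGINT);

  -- First value: the latest entry in the table.
  SELECT MAX(`time`) INTO cur_time FROM tbl;

  WHILE cur_time IS NOT NULL DO
    INSERT INTO periods VALUES (idx, UNIX_TIMESTAMP(cur_time));
    SET idx = idx + 1;
    -- Next value: among strictly older entries (so the loop advances),
    -- the one closest to (previous value - interval).
    SET next_time = (SELECT `time`
                     FROM tbl
                     WHERE `time` < cur_time
                     ORDER BY ABS(TIMESTAMPDIFF(SECOND, `time`,
                                  cur_time - INTERVAL p_interval_seconds SECOND))
                     LIMIT 1);
    SET cur_time = next_time;
  END WHILE;

  SELECT * FROM periods ORDER BY period_index;
END //
DELIMITER ;

-- Weekly:  CALL pick_closest_periods(604800);
-- Yearly:  CALL pick_closest_periods(31536000);
Against the sample data in the question, this picks 1317183002, then 1316571974, then 1315967707, matching the "closest to 7 days from the previous value" description.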
Edit #1
Here's the table structure for the above example.
CREATE TABLE `tbl` (
`id` int(10) unsigned NOT NULL auto_increment PRIMARY KEY,
`time` datetime NOT NULL
)
Edit #2
Ok, first: I've improved the original example query (see revised "Example 1" above). It still works the same way, and gives the same results, but it's cleaner, more efficient, and easier to understand.
Now... the query above is a group-by query, meaning it shows aggregate results for the "period" groups as I described above - not row-by-row results like a "normal" query. With a group-by query, you're limited to using aggregate columns only. Aggregate columns are those columns that are named in the group by clause, or that are computed by an aggregate function like MAX(time). It is not possible to extract meaningful values for non-aggregate columns (like id) from within the projection of a group-by query.
Unfortunately, MySQL doesn't generate an error when you try to do this. Instead, it just picks a value at random from within the grouped rows, and shows that value for the non-aggregate column in the grouped result. This is what's causing the odd behavior the OP reported when trying to use the code from Example #1.
Fortunately, this problem is fairly easy to solve. Just wrap another query around the group query, to select the row-by-row information you're interested in...
Example 2:
SELECT
entries.id,
entries.time,
periods.idx as period_index,
unix_timestamp(periods.time) as period_timestamp
FROM
tbl entries
JOIN
(select
floor(timestampdiff( SECOND, tbl.time, most_recent.time)/31536000) as idx,
max(tbl.time) as time
from
tbl
, (select max(time) as time from tbl) most_recent
group by idx
) periods
ON entries.time = periods.time
Result:
+-----+---------------------+--------------+------------------+
| id | time | period_index | period_timestamp |
+-----+---------------------+--------------+------------------+
| 598 | 2011-09-28 04:10:02 | 0 | 1317183002 |
| 996 | 2010-09-27 22:57:05 | 1 | 1285628225 |
+-----+---------------------+--------------+------------------+
Notes:
Example 2 uses a period length of 31536000 seconds (365-days). While Example 1 (above) uses a period of 604800 seconds (7-days). Other than that, the inner query in Example 2 is the same as the primary query shown in Example 1.
If a matching period_time belongs to more than one entry (i.e. two or more entries have the exact same time, and that time matches one of the selected period_time values), then the above query (Example 2) will include multiple rows for the given period timestamp (one for each match). Whatever code consumes this result set should be prepared to handle such an edge case.
It's also worth noting that these queries will perform much, much better if you define an index on your datetime column. For my example schema, that would look like this:
ALTER TABLE tbl ADD INDEX idx_time ( time )
If you're willing to go for the closest entry that falls after the week is out, then this'll work. You can extend it to find the true closest, but it'll look so disgusting it's probably not worth it.
select unix_tstamp
     , ( select min(unix_tstamp)
         from my_table
         where sql_tstamp >= ( select max(sql_tstamp) - INTERVAL 7 DAY
                               from my_table )
       )
     , ( select min(unix_tstamp)
         from my_table
         where sql_tstamp >= ( select max(sql_tstamp) - INTERVAL 14 DAY
                               from my_table )
       )
from my_table
where sql_tstamp = ( select max(sql_tstamp)
                     from my_table )