mysql> select id,id_product,date_start from prices where date_start >= '2014-05-01 00:00:00.000' and id_product=21 and id_cr=2733 order by id limit 1;
| 660713 | 21 | 2014-05-01 01:00:00 |
mysql> select id,id_product,date_start,date_end from prices where date_start >= '2014-05-01 00:00:00.000' and id_product=21 and id_cr=2733 order by id limit 1;
| 660712 | 21 | 2014-05-01 00:00:00 | 2014-05-01 01:00:00 |
So, the simple fact that I added a new field to the second SELECT (date_end) changed the result of my query (and I can guarantee my database wasn't being modified at the same time... I also reran those same commands a couple of times, and this happened every time).
There's something else funny.
If I use "select * ... limit 8", I get 8 records starting from 660713
If I use "select * ... limit 7", I get 7 records starting from 660712 (which is correct, since 660712 matches this query)
So, any ideas what's going on with record 660712?!
I'm guessing it's some index problem
thanks!
Given a table "events_log" in this form :
| id | started_at | duration |
| 1 | 2017-06-01 09:00:00 | 80 |
| 2 | 2017-06-01 09:01:00 | 40 |
| 3 | 2017-06-01 09:01:23 | 20 |
I want to know when the most events were occurring (with minute precision):
|period |count|
| 2017-06-01 09:00:00 | 1 |
| 2017-06-01 09:01:00 | 3 |
In reality, there are millions of events to handle.
My solution is to :
Create a temporary table with event start grouped by minute
LEFT JOIN it with the events that fall within each period
See http://sqlfiddle.com/#!9/8546a/1
But performance is terrible ...
Is there a better way to do it ?
I would think group by, something like this:
select date_format(started_at, '%Y-%m-%d %H:%i') as yyyymmddhhmi, count(*)
from t
group by yyyymmddhhmi
order by count(*) desc
limit 10;
Performance will not be great.
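For readers without a MySQL instance handy, here is a minimal, hypothetical sketch of the same group-by idea using Python's sqlite3, with strftime() standing in for MySQL's DATE_FORMAT(). Note that, like the query above, it counts only the minute each event starts, not every minute an event is running:

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL events_log table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events_log "
    "(id INTEGER PRIMARY KEY, started_at TEXT, duration INTEGER)"
)
conn.executemany(
    "INSERT INTO events_log (started_at, duration) VALUES (?, ?)",
    [("2017-06-01 09:00:00", 80),
     ("2017-06-01 09:01:00", 40),
     ("2017-06-01 09:01:23", 20)],
)

# Group start times by minute; strftime() here plays the role of
# MySQL's DATE_FORMAT(started_at, '%Y-%m-%d %H:%i').
rows = conn.execute(
    "SELECT strftime('%Y-%m-%d %H:%M', started_at) AS minute, COUNT(*) "
    "FROM events_log GROUP BY minute ORDER BY COUNT(*) DESC"
).fetchall()
print(rows)  # [('2017-06-01 09:01', 2), ('2017-06-01 09:00', 1)]
```

Note the 09:00 event is still running at 09:01 but is only counted once, at its start minute, which is why this approach undercounts compared to the desired output.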
Here is a modified version of your code. It scans the events_log table twice: once when building the event_starts helper table, and a second time when selecting all the events that are happening in the specified interval. Also note the added index, which will significantly speed up execution; its absence might also be the reason your original query was so slow.
CREATE TABLE events_log (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,started_at DATETIME,duration INT(11));
INSERT INTO events_log (started_at, duration) VALUES ('2017-06-01 09:00:00', 80);
INSERT INTO events_log (started_at, duration) VALUES ('2017-06-01 09:01:00', 40);
INSERT INTO events_log (started_at, duration) VALUES ('2017-06-01 09:01:23', 20);
CREATE /* TEMPORARY */ TABLE tmp_event_starts AS (
select DISTINCT DATE_ADD(started_at, INTERVAL -SECOND(started_at) SECOND) AS period_start
from events_log
);
CREATE INDEX idx_tmp_event_starts
ON tmp_event_starts (period_start);
SELECT period_start, COUNT(*), GROUP_CONCAT(id)
FROM events_log AS log
JOIN tmp_event_starts AS per
ON per.period_start >= DATE_ADD(started_at, INTERVAL -SECOND(started_at) SECOND)
AND per.period_start <= DATE_ADD(started_at, INTERVAL -SECOND(started_at)+duration SECOND)
GROUP BY period_start;
If you have a lot of events happening in the same minute and there are no minutes without events, then you might consider generating the helper table as a sequence of minutes, independent of the data. In MySQL this is not a straightforward task, but some hints can be found in the blog post Calendar Tables: An Invaluable Database Tool.
That would also allow you to generate the helper table in advance, speeding up execution of the query itself significantly.
You might also consider adding an ended_at column to your events_log table, which would eliminate the need for the conversion during query execution.
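To illustrate what the interval-overlap counting is doing, here is the same logic in procedural form (a Python sketch, not the SQL above): each event is credited to every minute between its truncated start time and its end time. On the question's sample rows it reproduces the desired per-minute counts:

```python
from collections import Counter
from datetime import datetime, timedelta

# (started_at, duration in seconds) -- the sample rows from the question.
events = [
    ("2017-06-01 09:00:00", 80),
    ("2017-06-01 09:01:00", 40),
    ("2017-06-01 09:01:23", 20),
]

active = Counter()
for started_at, duration in events:
    start = datetime.strptime(started_at, "%Y-%m-%d %H:%M:%S")
    end = start + timedelta(seconds=duration)
    # Truncate to the minute, then credit every minute the event touches.
    minute = start.replace(second=0)
    while minute <= end:
        active[minute.strftime("%Y-%m-%d %H:%M:%S")] += 1
        minute += timedelta(minutes=1)

print(sorted(active.items()))
# [('2017-06-01 09:00:00', 1), ('2017-06-01 09:01:00', 3)]
```

The first event (80 seconds long) is counted in both the 09:00 and 09:01 buckets, which is exactly what the join against the helper table achieves in SQL.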
I want to run a CRON job at the beginning of every month to select all the records from the previous month or earlier. The script runs at ~00:15 on the first of the month, but the SELECT must not include records that may have been created within those ~15 minutes. The column I'm running the condition against is stored as DATETIME, and the database is MySQL.
EDIT:
Example:
rowID | time
---------------------------
6 | 2016-06-01 00:12:21
5 | 2016-06-01 00:04:34
4 | 2016-05-28 19:46:45
3 | 2016-05-17 19:25:01
2 | 2016-05-08 06:33:32
1 | 2016-04-25 12:22:54
Basically, looking for all rows where rowID < 5.
SELECT rowID
FROM table
WHERE time < beginning_of_current_month;
Thanks in advance!
Have you tried something like this?
select rowID from table
where time < DATE_FORMAT(NOW(), '%Y-%m-01')
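The same truncate-to-first-of-month idea can be sketched in Python for clarity. The "now" value is hard-coded here to mimic the cron firing at 00:15 on June 1st, and the rows are the example data from the question:

```python
from datetime import datetime

# (rowID, time) pairs from the question's example table.
rows = [
    (6, "2016-06-01 00:12:21"),
    (5, "2016-06-01 00:04:34"),
    (4, "2016-05-28 19:46:45"),
    (3, "2016-05-17 19:25:01"),
    (2, "2016-05-08 06:33:32"),
    (1, "2016-04-25 12:22:54"),
]

now = datetime(2016, 6, 1, 0, 15, 0)  # pretend the cron fired at 00:15
# Same idea as DATE_FORMAT(NOW(), '%Y-%m-01'): truncate to month start.
cutoff = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)

matched = [row_id for row_id, t in rows
           if datetime.strptime(t, "%Y-%m-%d %H:%M:%S") < cutoff]
print(matched)  # [4, 3, 2, 1]
```

Rows 6 and 5 (created in the ~15 minutes after midnight) are excluded, exactly as the question requires.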
What I need: an SQL statement that will look up the newest record, then look up the record 5 minutes behind that record's timestamp, and repeat until a limit is reached. Timestamps are Unix time. So: get the newest record, subtract 5 minutes, get that record, and repeat. The 5 minutes is an example; it will actually be a variable ranging from seconds to hours.
I can easily program something in Perl, Ruby or Bash to do a loop with a select inside that works, but was hoping for a pure SQL way that might be faster. Any help is appreciated.
Added more info below.
Below is a very small clip of the records in the DB table. Basically, data is inserted 20 seconds apart. I want to be able to select records at different intervals, based on a variable passed through a CGI script, along with how many records I want returned in total.
> select * from Sensor1 order by ts desc limit 5;
+------------+-------------+----------+
| ts | temperature | humidity |
+------------+-------------+----------+
| 1407612981 | 75.91 | 56.5 |
| 1407612961 | 75.92 | 56.4 |
| 1407612941 | 75.92 | 56.5 |
| 1407612921 | 75.91 | 56.4 |
| 1407612901 | 75.91 | 56.4 |
+------------+-------------+----------+
So an example would be: I want the newest record, then the one 5 minutes back, then the one 5 more minutes back, and so on, up to some other variable passed by the CGI script.
The below would be sample output based on wanting records that are closest to 5 minutes apart, for 5 iterations.
+------------+-------------+----------+
| ts | temperature | humidity |
+------------+-------------+----------+
| 1407612681 | 75.92 | 56.4 |
| 1407612381 | 75.92 | 56.4 |
| 1407612081 | 75.90 | 56.3 |
| 1407611781 | 75.91 | 56.4 |
| 1407611481 | 75.90 | 56.4 |
+------------+-------------+----------+
So I can accomplish the above with a simple bash script. See below.
#!/bin/bash
increment=5   # How many records we want in total
interval=300  # Number of seconds between each returned result
sensor=1      # Which SensorN table to query
time=$(date +%s)  # Seed with "now"; the first query then looks ~300s back
tc1=1
while [ $tc1 -le $increment ]  # -le so we actually return $increment records
do
    time=$(($time-$interval)) # Make the next select look $interval seconds further back
    record=`echo "select * from Sensor${sensor} where ts >= ${time} order by ts asc limit 1;" | mysql -u env -penv -h localhost dc_temp | sed 's/\t/|/g' | grep -v "ts|temperature|humidity"`
    echo "Debug: ${record}"
    time=`echo $record | cut -d'|' -f1` # Step from the timestamp actually returned
    tc1=$((tc1+1)) # Stop once we have returned the requested number of records
done
So the above script gives me the output and control I want, but I fear the huge number of SELECTs would be a slowdown. I'm looking to pull 24 hours, 1 week, 1 month, or 1 year of data at different intervals between records: for 24 hours, every 5 minutes; for 1 week, maybe a couple of records an hour; and so on. The data all goes to RGraph to create a line graph of the temperature and humidity history.
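If the rows are fetched once (sorted newest-first), the shell loop's work can be done in a single pass instead of one SELECT per step. Here is a hypothetical Python sketch under the question's assumption that samples arrive every 20 seconds; it takes the newest record, then the newest record at or before each successive target time (the shell script takes the oldest at or after the target, but either is a reasonable notion of "closest"):

```python
def sample_at_intervals(rows, interval, count):
    """rows: (ts, ...) tuples sorted by ts descending (newest first).

    Take the newest record, then repeatedly take the newest record at
    or before a target `interval` seconds behind the last picked one.
    """
    if not rows:
        return []
    picked = [rows[0]]
    target = rows[0][0] - interval
    for row in rows[1:]:
        if len(picked) == count:
            break
        if row[0] <= target:
            picked.append(row)           # newest record at/before target
            target = row[0] - interval   # step back from what we found
    return picked

# Synthetic data: readings every 20 seconds, like the table above.
rows = [(1407612981 - 20 * i,) for i in range(100)]
print([r[0] for r in sample_at_intervals(rows, 300, 5)])
# [1407612981, 1407612681, 1407612381, 1407612081, 1407611781]
```

This visits each row at most once, so pulling a year of data at hourly intervals costs one query plus a linear scan rather than thousands of round-trips.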
Can you try running the code below? (It is written in SQL Server's T-SQL syntax, but the 5-minute bucketing idea carries over.)
CREATE TABLE dbo.TimeTable(value INT,time DATETIME);
GO
SET NOCOUNT ON;
GO
INSERT dbo.TimeTable(value, time) SELECT 1, DATEADD(MINUTE, -7, GETDATE());
INSERT dbo.TimeTable(value, time) SELECT 2, DATEADD(MINUTE, -6, GETDATE());
INSERT dbo.TimeTable(value, time) SELECT 3, DATEADD(MINUTE, -5, GETDATE());
INSERT dbo.TimeTable(value, time) SELECT 4, DATEADD(MINUTE, -4, GETDATE());
INSERT dbo.TimeTable(value, time) SELECT 5, DATEADD(MINUTE, -3, GETDATE());
INSERT dbo.TimeTable(value, time) SELECT 6, DATEADD(MINUTE, -2, GETDATE());
INSERT dbo.TimeTable(value, time) SELECT 7, DATEADD(MINUTE, -1, GETDATE());
INSERT dbo.TimeTable(value, time) SELECT 8, GETDATE();
;WITH myTbl AS
(
SELECT
time, value, RANK() OVER (PARTITION BY (DATEDIFF(Mi,0, time)/5) ORDER BY time desc) RK
FROM TimeTable
)
SELECT * FROM myTbl
WHERE RK <= 1
ORDER BY time DESC
DROP TABLE TimeTable
This will return the most recent record in every interval of 5 minutes.
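The same fixed-grid bucketing can be sketched in Python with made-up sample times. The bucket index plays the role of DATEDIFF(MINUTE, 0, time) / 5, and keeping the maximum timestamp per bucket plays the role of RANK() ... ORDER BY time DESC with RK = 1:

```python
from datetime import datetime

# Hypothetical sample readings: two fall in the 12:00-12:05 bucket,
# two in 12:05-12:10, and one in 12:10-12:15.
times = [
    "2014-08-09 12:00:10", "2014-08-09 12:03:50",
    "2014-08-09 12:05:05", "2014-08-09 12:09:59",
    "2014-08-09 12:10:00",
]

latest = {}
for t in times:
    dt = datetime.strptime(t, "%Y-%m-%d %H:%M:%S")
    # Fixed 5-minute grid, like DATEDIFF(MINUTE, 0, time) / 5.
    bucket = (dt.date(), (dt.hour * 60 + dt.minute) // 5)
    # Keeping the max per bucket mirrors RANK() ... DESC with RK = 1.
    if bucket not in latest or dt > latest[bucket]:
        latest[bucket] = dt

picked = sorted(latest.values(), reverse=True)
print([p.strftime("%H:%M:%S") for p in picked])
# ['12:10:00', '12:09:59', '12:03:50']
```

Note the grid is fixed (buckets start at :00, :05, :10, ...), so the records kept are not necessarily exactly 5 minutes apart; they are simply the newest record in each 5-minute cell.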
If I want to get the total consumption over a range of dates, how would I do that?
I thought I could do:
SELECT id, SUM(consumption)
FROM consumption_info
WHERE date_time BETWEEN 2013-09-15 AND 2013-09-16
GROUP BY id;
however this returns: Empty set, 2 warnings(0.00 sec)
---------------------------------------
id | consumption | date_time |
=======================================|
1 | 5 | 2013-09-15 21:35:03 |
2 | 5 | 2013-09-15 24:35:03 |
3 | 7 | 2013-09-16 11:25:23 |
4 | 3 | 2013-09-16 20:15:23 |
----------------------------------------
Any ideas what I'm doing wrong here?
Thanks in advance.
You're missing quotes around the date strings: the WHERE clause should actually be written as...
BETWEEN '2013-09-15' AND '2013-09-16'
The irony is that 2013-09-15 is a valid SQL expression: it means 2013 minus 09 minus 15, which evaluates to 1989 (and 2013-09-16 to 1988). Obviously, nothing lies between those two results; hence the empty set in return.
Yet there might be another, more subtle error here: you probably should have used this clause...
BETWEEN '2013-09-15 00:00:00' AND '2013-09-16 23:59:59'
... instead. Without setting the time explicitly it'll be set to '00:00:00' on both dates (as DATETIME values are compared here).
While it's obviously ok for the starting date, it's not so for the ending one - unless, of course, exclusion of all the records for any time of that day but midnight is actually the desired outcome.
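A quick way to see the difference between the two BETWEEN forms, using Python's sqlite3 as a stand-in (in MySQL a bare date compared against a DATETIME becomes midnight of that day; here, plain string comparison against a TEXT column produces the same midnight cutoff; the question's invalid 24:35:03 row is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE consumption_info "
    "(id INTEGER, consumption INTEGER, date_time TEXT)"
)
conn.executemany("INSERT INTO consumption_info VALUES (?, ?, ?)", [
    (1, 5, "2013-09-15 21:35:03"),
    (3, 7, "2013-09-16 11:25:23"),
    (4, 3, "2013-09-16 20:15:23"),
])

# Bare dates: the upper bound acts like midnight on the 16th, so the
# rest of that day is silently excluded.
naive = conn.execute(
    "SELECT SUM(consumption) FROM consumption_info "
    "WHERE date_time BETWEEN '2013-09-15' AND '2013-09-16'"
).fetchone()[0]

# Explicit times keep the whole final day.
full = conn.execute(
    "SELECT SUM(consumption) FROM consumption_info "
    "WHERE date_time BETWEEN '2013-09-15 00:00:00' "
    "AND '2013-09-16 23:59:59'"
).fetchone()[0]
print(naive, full)  # 5 15
```

The naive form counts only the 21:35 row from the 15th; the explicit form also picks up both rows from the 16th.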
SELECT SUM(consumption)
FROM consumption_info
WHERE date_time >= '2013-09-15' AND date_time <= '2013-09-16';
or
SELECT SUM(consumption)
FROM consumption_info
WHERE date_time BETWEEN '2013-09-15' AND '2013-09-16';
It's better to use CAST when comparing against date values.
SELECT id, SUM(consumption)
FROM consumption_info
WHERE date_time
BETWEEN CAST('2013-09-15' AS DATETIME)
AND CAST('2013-09-16' AS DATETIME)
GROUP BY id;
I have a database with a created_at column containing the datetime in Y-m-d H:i:s format.
The latest datetime entry is 2011-09-28 00:10:02.
I need the query to be relative to the latest datetime entry.
The first value in the query should be the latest datetime entry.
The second value in the query should be the entry closest to 7 days from the first value.
The third value should be the entry closest to 7 days from the second value.
REPEAT #3.
What I mean by "closest to 7 days from":
The following are dates; the interval I desire is a week, which is 604800 seconds.
7 days from the first value is equal to 1316578202 (1317183002-604800)
the value closest to 1316578202 (7 days) is... 1316571974
unix timestamp | Y-m-d H:i:s
1317183002 | 2011-09-28 00:10:02 -> appear in query (first value)
1317101233 | 2011-09-27 01:27:13
1317009182 | 2011-09-25 23:53:02
1316916554 | 2011-09-24 22:09:14
1316836656 | 2011-09-23 23:57:36
1316745220 | 2011-09-22 22:33:40
1316659915 | 2011-09-21 22:51:55
1316571974 | 2011-09-20 22:26:14 -> closest to 7 days from 1317183002 (first value)
1316499187 | 2011-09-20 02:13:07
1316064243 | 2011-09-15 01:24:03
1315967707 | 2011-09-13 22:35:07 -> closest to 7 days from 1316571974 (second value)
1315881414 | 2011-09-12 22:36:54
1315794048 | 2011-09-11 22:20:48
1315715786 | 2011-09-11 00:36:26
1315622142 | 2011-09-09 22:35:42
I would really appreciate any help; I have not been able to do this in MySQL, and no online resources seem to deal with relative date manipulation like this. I would like the query to be modular enough to change the interval to weekly, monthly, or yearly. Thanks in advance!
Answer #1 Reply:
SELECT
UNIX_TIMESTAMP(created_at)
AS unix_timestamp,
(
SELECT MIN(UNIX_TIMESTAMP(created_at))
FROM my_table
WHERE created_at >=
(
SELECT max(created_at) - 7
FROM my_table
)
)
AS `random_1`,
(
SELECT MIN(UNIX_TIMESTAMP(created_at))
FROM my_table
WHERE created_at >=
(
SELECT MAX(created_at) - 14
FROM my_table
)
)
AS `random_2`
FROM my_table
WHERE created_at =
(
SELECT MAX(created_at)
FROM my_table
)
Returns:
unix_timestamp | random_1 | random_2
1317183002 | 1317183002 | 1317183002
Answer #2 Reply:
RESULT SET:
This is the result set for a yearly interval:
id | created_at | period_index | period_timestamp
267 | 2010-09-27 22:57:05 | 0 | 1317183002
1 | 2009-12-10 15:08:00 | 1 | 1285554786
I desire this result:
id | created_at | period_index | period_timestamp
626 | 2011-09-28 00:10:02 | 0 | 0
267 | 2010-09-27 22:57:05 | 1 | 1317183002
I hope this makes more sense.
It's not exactly what you asked for, but the following example is pretty close....
Example 1:
select
floor(timestampdiff(SECOND, tbl.time, most_recent.time)/604800) as period_index,
unix_timestamp(max(tbl.time)) as period_timestamp
from
tbl
, (select max(time) as time from tbl) most_recent
group by period_index
gives results:
+--------------+------------------+
| period_index | period_timestamp |
+--------------+------------------+
| 0 | 1317183002 |
| 1 | 1316571974 |
| 2 | 1315967707 |
+--------------+------------------+
This breaks the dataset into groups based on "periods", where (in this example) each period is 7-days (604800 seconds) long. The period_timestamp that is returned for each period is the 'latest' (most recent) timestamp that falls within that period.
The period boundaries are all computed based on the most recent timestamp in the database, rather than computing each period's start and end time individually based on the timestamp of the period before it. The difference is subtle - your question requests the latter (iterative approach), but I'm hoping that the former (approach I've described here) will suffice for your needs, since SQL doesn't lend itself well to implementing iterative algorithms.
If you really do need to determine each period based on the timestamp in the previous period, then your best bet is going to be an iterative approach -- either using a programming language of your choice (like php), or by building a stored procedure that uses a cursor.
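For clarity, here is the same period bucketing expressed procedurally in Python, using the Unix timestamps from the question; it reproduces the result set above:

```python
# Bucket timestamps into 7-day (604800 s) periods anchored at the most
# recent timestamp, keeping the latest timestamp in each period --
# the same floor(diff / 604800) grouping as the SQL above.
timestamps = [
    1317183002, 1317101233, 1317009182, 1316916554, 1316836656,
    1316745220, 1316659915, 1316571974, 1316499187, 1316064243,
    1315967707, 1315881414, 1315794048, 1315715786, 1315622142,
]

most_recent = max(timestamps)
periods = {}
for ts in timestamps:
    idx = (most_recent - ts) // 604800           # which 7-day period
    periods[idx] = max(periods.get(idx, 0), ts)  # latest ts in period

print(sorted(periods.items()))
# [(0, 1317183002), (1, 1316571974), (2, 1315967707)]
```

Every period boundary is measured from the single anchor timestamp, which is exactly why this differs from the iterative "7 days from the previous pick" formulation in the question.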
Edit #1
Here's the table structure for the above example.
CREATE TABLE `tbl` (
`id` int(10) unsigned NOT NULL auto_increment PRIMARY KEY,
`time` datetime NOT NULL
)
Edit #2
Ok, first: I've improved the original example query (see revised "Example 1" above). It still works the same way, and gives the same results, but it's cleaner, more efficient, and easier to understand.
Now... the query above is a group-by query, meaning it shows aggregate results for the "period" groups as I described above, not row-by-row results like a "normal" query. With a group-by query, you're limited to using aggregate columns only: columns named in the group by clause, or columns computed by an aggregate function (like MAX(time)). It is not possible to extract meaningful values for non-aggregate columns (like id) from within the projection of a group-by query.
Unfortunately, MySQL doesn't generate an error when you try to do this. Instead, it just picks a value at random from within the grouped rows and shows that value for the non-aggregate column in the grouped result. This is what caused the odd behavior the OP reported when trying to use the code from Example #1.
Fortunately, this problem is fairly easy to solve. Just wrap another query around the group query, to select the row-by-row information you're interested in...
Example 2:
SELECT
entries.id,
entries.time,
periods.idx as period_index,
unix_timestamp(periods.time) as period_timestamp
FROM
tbl entries
JOIN
(select
floor(timestampdiff( SECOND, tbl.time, most_recent.time)/31536000) as idx,
max(tbl.time) as time
from
tbl
, (select max(time) as time from tbl) most_recent
group by idx
) periods
ON entries.time = periods.time
Result:
+-----+---------------------+--------------+------------------+
| id | time | period_index | period_timestamp |
+-----+---------------------+--------------+------------------+
| 598 | 2011-09-28 04:10:02 | 0 | 1317183002 |
| 996 | 2010-09-27 22:57:05 | 1 | 1285628225 |
+-----+---------------------+--------------+------------------+
Notes:
Example 2 uses a period length of 31536000 seconds (365-days). While Example 1 (above) uses a period of 604800 seconds (7-days). Other than that, the inner query in Example 2 is the same as the primary query shown in Example 1.
If a matching period_timestamp belongs to more than one entry (i.e. two or more entries have the exact same time, and that time matches one of the selected period_timestamp values), then the above query (Example 2) will include multiple rows for that period timestamp, one for each match. Whatever code consumes this result set should be prepared to handle such an edge case.
It's also worth noting that these queries will perform much, much better if you define an index on your datetime column. For my example schema, that would look like this:
ALTER TABLE tbl ADD INDEX idx_time ( time )
If you're willing to settle for the closest record after the week is out, then this will work. You could extend it to find the true closest, but it would look so ugly it's probably not worth it.
select unix_timestamp
, ( select min(unix_tstamp)
from my_table
where sql_tstamp >= ( select max(sql_tstamp) - INTERVAL 7 DAY
from my_table )
)
, ( select min(unix_tstamp)
from my_table
where sql_tstamp >= ( select max(sql_tstamp) - INTERVAL 14 DAY
from my_table )
)
from my_table
where sql_tstamp = ( select max(sql_tstamp)
from my_table )