mysql: Integrate values of a table over time - mysql

I want to integrate the v values using the time differences of t from one row to the next, in a table like this ("p_values"):
+------------+-------+----------+
| measure_id | v | t |
+------------+-------+----------+
| 1 | 32 | 10:45:00 |
| 2 | 17 | 10:42:00 |
| 3 | 20 | 10:39:00 |
| 4 | 21 | 10:36:00 |
| 5 | 35 | 10:33:00 |
| 6 | 59 | 10:30:00 |
| 7 | 47 | 10:27:00 |
| 8 | 45 | 10:24:00 |
| 9 | 40 | 10:21:00 |
| 10 | 39 | 10:18:00 |
| 11 | 42 | 10:15:00 |
+------------+-------+----------+
That is, I want to compute:
result = v[1]*(t[1]-t[2]) + v[2]*(t[2]-t[3]) + v[3]*(t[3]-t[4]) + ...
Can I do this in a single query?
I'm trying to create a result set that joins each row with the row below it, like this:
select * from
(select measure_id, v, t from p_values order by t desc) a,
(select measure_id, v, t from p_values order by t desc) b
where a.t < b.t group by b.t desc;
+------------+----+----------+------------+----+----------+
| measure_id | v | t | measure_id | v | t |
+------------+----+----------+------------+----+----------+
| 9 | 83 | 11:12:00 | 10 | 25 | 11:15:00 |
| 8 | 90 | 11:09:00 | 9 | 83 | 11:12:00 |
| 7 | 24 | 11:06:00 | 8 | 90 | 11:09:00 |
| 6 | 29 | 11:03:00 | 7 | 24 | 11:06:00 |
| 5 | 72 | 11:00:00 | 6 | 29 | 11:03:00 |
| 4 | 28 | 10:57:00 | 5 | 72 | 11:00:00 |
| 3 | 22 | 10:54:00 | 4 | 28 | 10:57:00 |
| 2 | 42 | 10:51:00 | 3 | 22 | 10:54:00 |
| 1 | 35 | 10:48:00 | 2 | 42 | 10:51:00 |
| 0 | 31 | 10:45:00 | 1 | 35 | 10:48:00 |
+------------+----+----------+------------+----+----------+
Based on this table, I calculate the integral value in a single query as:
select sum(v) from
(select (a.v + b.v)/2 * (TIME_TO_SEC(b.t) - TIME_TO_SEC(a.t))/3600 as v from
(select measure_id, v, t from p_values order by t desc) a,
(select measure_id, v, t from p_values order by t desc) b
where a.t < b.t group by b.t desc) as c;
+---------+
| sum(v) |
+---------+
| 246.948 |
+---------+
But I'm not sure if this is the most efficient way to do this.
Thanks.

If you assume that the measure_id is incremental with no gaps, then you can do this with a self join. The resulting query is something like this:
select sum(p1.v*(p2.t - p1.t))
from p_values p1 join
p_values p2
on p2.measure_id = p1.measure_id + 1;
A couple of notes. First, this ignores the last v value, because there is no matching row. The question doesn't specify what to do in this case, so I assume you don't want that difference included.
I also left the time difference in simple notation. Your question appears to be about combining values from different rows, not about actually calculating the difference of the time column; that, in turn, depends on the data type of the column, which is not specified in the question.
Finally, your subquery has a fatal flaw. It has columns in the select that are not in the group by. This uses a group by extension that the documentation explicitly warns against using.
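If t really is a TIME column (an assumption, since the question doesn't state the data type), the time arithmetic could be made explicit along these lines, following the formula v[1]*(t[1]-t[2]) + ... from the question and expressing the result in hours. Treat this as a sketch rather than a tested query:
-- Sketch: assumes t is TIME and that measure_id + 1 is the next-earlier
-- reading, as in the sample data.
select sum(p1.v * (time_to_sec(p1.t) - time_to_sec(p2.t)) / 3600) as integral
from p_values p1
join p_values p2
  on p2.measure_id = p1.measure_id + 1;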

select sum(value)
from (select (p.v * (t - @prev)) as value,
             @prev := t
      from (select @prev := 0) sess, p_values p
      order by p.measure_id desc) raw
Here we introduce a variable @prev where we store the value from the previous row (but we sort in descending order).
Then we just sum the results.
UPDATE: query for the fiddle:
select sum(value)
from (select (p.v * (t - @prev)) as value,
             @prev := v
      from (select @prev := 0) sess, p_values p
      order by v desc) raw
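As a side note not taken from this answer: if the server is MySQL 8.0 or later (an assumption), the previous-row lookup can also be done with the LAG() window function instead of a user variable. A sketch of the trapezoidal version the asker computed, again assuming t is a TIME column and expressing the result in hours:
-- Sketch: assumes MySQL 8.0+.
-- LAG(...) OVER (ORDER BY t) fetches the previous (earlier) reading,
-- so each row pairs with the one just before it in time.
select sum((v + prev_v) / 2
           * (time_to_sec(t) - time_to_sec(prev_t)) / 3600) as integral
from (select v, t,
             lag(v) over (order by t) as prev_v,
             lag(t) over (order by t) as prev_t
      from p_values) d
where prev_t is not null;
The earliest row has no predecessor, so it is simply skipped by the WHERE clause.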


How can I get the last row from each given row value in a column through date? [duplicate]

I have the following table.
+---------------------+--------------+-------+
| Date                | SymbolNumber | Value |
+---------------------+--------------+-------+
| 2018-08-31 15:00:00 | 123          | data  |
| 2018-09-31 15:00:00 | 456          | data  |
| 2018-09-31 15:00:00 | 123          | data  |
| 2018-09-31 15:00:00 | 555          | data  |
| 2018-10-31 15:00:00 | 555          | data  |
| 2018-10-31 15:00:00 | 231          | data  |
| 2018-10-31 15:00:00 | 123          | data  |
| 2018-11-31 15:00:00 | 123          | data  |
| 2018-11-31 15:00:00 | 555          | data  |
| 2018-12-31 15:00:00 | 123          | data  |
| 2018-12-31 15:00:00 | 555          | data  |
+---------------------+--------------+-------+
I need a query that can select the last row of each SymbolNumber stated in the query.
SELECT
*
FROM
MyTable
WHERE
symbolNumber IN (123, 555)
AND
**lastOfRow ordered by latest-date**
Expected results:
| 2018-12-31 15:00:00 | 123          | data  |
| 2018-12-31 15:00:00 | 555          | data  |
How can I do this?
First, you will need a query that gets the latest date for each symbolNumber. Second, you can inner join to this derived table (using the date) to get the rest of the columns. Like this:
SELECT
    t.*
FROM
    <table_name> AS t
INNER JOIN
    (SELECT
         symbolNumber,
         MAX(date) AS maxDate
     FROM
         <table_name>
     GROUP BY
         symbolNumber) AS latest_date
    ON latest_date.symbolNumber = t.symbolNumber
    AND latest_date.maxDate = t.date
The previous query will get the latest data for each symbolNumber in the table. If you want to restrict it to symbolNumbers 123 and 555, you will need to make the following modification:
SELECT
    t.*
FROM
    <table_name> AS t
INNER JOIN
    (SELECT
         symbolNumber,
         MAX(date) AS maxDate
     FROM
         <table_name>
     WHERE
         symbolNumber IN (123, 555)
     GROUP BY
         symbolNumber) AS latest_date
    ON latest_date.symbolNumber = t.symbolNumber
    AND latest_date.maxDate = t.date
We can do a "self left join" on symbolNumber, matching each row to other rows in the same group that have a higher Date value on the right side.
We then keep only those rows for which no higher date could be found (meaning the current row has the highest date in its group).
Here is a solution that avoids a subquery and uses a LEFT JOIN:
SELECT t1.*
FROM MyTable AS t1
LEFT JOIN MyTable AS t2
ON t2.symbolNumber = t1.symbolNumber AND
t2.Date > t1.Date -- Joining to a row in same group with higher date
WHERE t1.symbolNumber IN (123, 555) AND
t2.symbolNumber IS NULL -- Higher date not found; so this is highest row
EDIT:
Benchmarks comparing the LEFT JOIN method vs. the derived table (subquery) approach
@Strawberry ran a little benchmark test on MySQL 5.6.21. Here's what he found...
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(id SERIAL PRIMARY KEY
,dense_user INT NOT NULL
,sparse_user INT NOT NULL
);
INSERT INTO my_table (dense_user,sparse_user)
SELECT RAND()*100,RAND()*100000;
INSERT INTO my_table (dense_user,sparse_user)
SELECT RAND()*100,RAND()*100000 FROM my_table;
-- REPEAT THIS LINE A FEW TIMES !!!
SELECT COUNT(DISTINCT dense_user) dense
, COUNT(DISTINCT sparse_user) sparse
, COUNT(*) total
FROM my_table;
+-------+--------+---------+
| dense | sparse | total |
+-------+--------+---------+
| 101 | 99999 | 1048576 |
+-------+--------+---------+
ALTER TABLE my_table ADD INDEX(dense_user);
ALTER TABLE my_table ADD INDEX(sparse_user);
-- dense test
SELECT x.*
FROM my_table x
LEFT
JOIN my_table y
ON y.dense_user = x.dense_user
AND y.id < x.id
WHERE y.id IS NULL
ORDER
BY dense_user
LIMIT 10;
+------+------------+-------------+
| id | dense_user | sparse_user |
+------+------------+-------------+
| 1212 | 0 | 1950 |
| 153 | 1 | 23193 |
| 255 | 2 | 27472 |
| 28 | 3 | 86440 |
| 18 | 4 | 47886 |
| 291 | 5 | 76563 |
| 15 | 6 | 85049 |
| 16 | 7 | 78384 |
| 135 | 8 | 52304 |
| 62 | 9 | 40930 |
+------+------------+-------------+
10 rows in set (2.64 sec)
SELECT x.*
FROM my_table x
JOIN
( SELECT dense_user, MIN(id) id FROM my_table GROUP BY dense_user ) y
ON y.dense_user = x.dense_user
AND y.id = x.id
ORDER
BY dense_user
LIMIT 10;
+------+------------+-------------+
| id | dense_user | sparse_user |
+------+------------+-------------+
| 1212 | 0 | 1950 |
| 153 | 1 | 23193 |
| 255 | 2 | 27472 |
| 28 | 3 | 86440 |
| 18 | 4 | 47886 |
| 291 | 5 | 76563 |
| 15 | 6 | 85049 |
| 16 | 7 | 78384 |
| 135 | 8 | 52304 |
| 62 | 9 | 40930 |
+------+------------+-------------+
10 rows in set (0.05 sec)
Uncorrelated query is 50 times faster.
-- sparse test
SELECT x.*
FROM my_table x
LEFT
JOIN my_table y
ON y.sparse_user = x.sparse_user
AND y.id < x.id
WHERE y.id IS NULL
ORDER
BY sparse_user
LIMIT 10;
+--------+------------+-------------+
| id | dense_user | sparse_user |
+--------+------------+-------------+
| 165055 | 75 | 0 |
| 37598 | 63 | 1 |
| 170596 | 70 | 2 |
| 46142 | 87 | 3 |
| 33546 | 21 | 4 |
| 323114 | 87 | 5 |
| 86592 | 96 | 6 |
| 156711 | 36 | 7 |
| 17148 | 62 | 8 |
| 139965 | 71 | 9 |
+--------+------------+-------------+
10 rows in set (0.03 sec)
SELECT x.*
FROM my_table x
JOIN ( SELECT sparse_user, MIN(id) id FROM my_table GROUP BY sparse_user ) y
ON y.sparse_user = x.sparse_user
AND y.id = x.id
ORDER
BY sparse_user
LIMIT 10;
+--------+------------+-------------+
| id | dense_user | sparse_user |
+--------+------------+-------------+
| 165055 | 75 | 0 |
| 37598 | 63 | 1 |
| 170596 | 70 | 2 |
| 46142 | 87 | 3 |
| 33546 | 21 | 4 |
| 323114 | 87 | 5 |
| 86592 | 96 | 6 |
| 156711 | 36 | 7 |
| 17148 | 62 | 8 |
| 139965 | 71 | 9 |
+--------+------------+-------------+
10 rows in set (4.73 sec)
The exclusion join is 150 times faster.
However, as you move further up the result set, the picture begins to change very dramatically...
SELECT x.*
FROM my_table x
JOIN ( SELECT sparse_user, MIN(id) id FROM my_table GROUP BY sparse_user ) y
ON y.sparse_user = x.sparse_user
AND y.id = x.id
ORDER
BY sparse_user
LIMIT 10000,10;
+--------+------------+-------------+
| id | dense_user | sparse_user |
+--------+------------+-------------+
| 9810 | 93 | 10000 |
| 162438 | 4 | 10001 |
| 467371 | 62 | 10002 |
| 8258 | 13 | 10003 |
| 297049 | 17 | 10004 |
| 68354 | 23 | 10005 |
| 192701 | 64 | 10006 |
| 176225 | 92 | 10007 |
| 156595 | 37 | 10008 |
| 318266 | 1 | 10009 |
+--------+------------+-------------+
10 rows in set (9.17 sec)
SELECT x.*
FROM my_table x
LEFT
JOIN my_table y
ON y.sparse_user = x.sparse_user
AND y.id < x.id
WHERE y.id IS NULL
ORDER
BY sparse_user
LIMIT 10000,10;
+--------+------------+-------------+
| id | dense_user | sparse_user |
+--------+------------+-------------+
| 9810 | 93 | 10000 |
| 162438 | 4 | 10001 |
| 467371 | 62 | 10002 |
| 8258 | 13 | 10003 |
| 297049 | 17 | 10004 |
| 68354 | 23 | 10005 |
| 192701 | 64 | 10006 |
| 176225 | 92 | 10007 |
| 156595 | 37 | 10008 |
| 318266 | 1 | 10009 |
+--------+------------+-------------+
10 rows in set (32.19 sec) -- !!!
In summary, the exclusion join (the so-called 'strawberry query') can be (significantly) faster in certain, limited situations. More generally, an uncorrelated query will be faster.
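For completeness, and not part of either answer above: on MySQL 8.0+ (an assumption about the server version) the same "latest row per symbolNumber" result can also be written with ROW_NUMBER(), avoiding both the exclusion join and the derived-table join; whether it outperforms them would need its own benchmark. A sketch, with <table_name> as the placeholder used above:
SELECT *
FROM (
    SELECT t.*,
           -- rn = 1 marks the newest row within each symbolNumber
           ROW_NUMBER() OVER (PARTITION BY symbolNumber
                              ORDER BY date DESC) AS rn
    FROM <table_name> AS t
    WHERE symbolNumber IN (123, 555)
) ranked
WHERE rn = 1;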

MySQL Group by complex script

I have a script that works perfectly, but I need to add values from another table.
The current script is:
select v.id, vm.producto_id, sum(vm.total), count(v.id)
from visita v, reporte r, visitamaquina vm, maquina m,
(select r.id, empleado_id, fecha, cliente_id from ruta r, rutacliente rc where r.id=rc.ruta_id and
fecha>='2016-10-01' and fecha<='2016-10-30' group by fecha, cliente_id, empleado_id) as rem
where rem.fecha=v.fecha and v.cliente_Id=rem.cliente_id and r.visita_id=v.id and vm.visita_id=v.id and m.id=vm.maquina_id
group by vm.visita_id, vm.producto_id
The current script returns this (I need some extra columns, but for this purpose I only show the ones with issues):
| Producto_Id | Id | Total | count(id) |
|---------------|--------------|-----------|-----------|
| 1 | 31 | 21 | 2 |
| 2 | 31 | 15 | 3 |
| 3 | 31 | 18 | 2 |
Table VisitaMaquina has multiple records for the same producto_id.
VisitaMaquina has this:
| Producto_Id | Visita_Id | Total |
|---------------|--------------|-----------|
| 1 | 31 | 8 |
| 1 | 31 | 13 |
| 2 | 31 | 9 |
The same situation happens with the table called reporteproducto, where producto_id is repeated multiple times.
Table reporteproducto has
| Producto_Id | Visita_Id | Quantity |
|---------------|--------------|-----------|
| 1 | 31 | 4 |
| 1 | 31 | 7 |
| 2 | 31 | 5 |
My previous query works fine, and I just need to add the sum of quantity.
I used this script and this is what I got:
select v.id, vm.producto_id, sum(vm.total), sum(quantity), count(id)
from visita v, reporte r, visitamaquina vm, maquina m, reporteproducto rp,
(select r.id, empleado_id, fecha, cliente_id from ruta r, rutacliente rc where r.id=rc.ruta_id and
fecha>='2016-10-01' and fecha<='2016-10-30' group by fecha, cliente_id, empleado_id) as rem
where rem.fecha=v.fecha and v.cliente_Id=rem.cliente_id and r.visita_id=v.id and vm.visita_id=v.id and m.id=vm.maquina_id and rp.visita_Id=v.id and rp.producto_id=vm.producto_id
group by vm.visita_id, vm.producto_id
I got this
| Producto_Id | Visita_Id | Total | Quantity | count(id) |
|-------------|-----------|-------|----------|-----------|
| 1           | 31        | 42    | 11       | 4         |
| 2           | 31        | 45    | 18       | 6         |
| 3           | 31        | 36    | 44       | 4         |
The desired result is (focus on producto_id=1):
| Producto_Id | Visita_Id | Total | Quantity |
|-------------|-----------|-------|----------|
| 1           | 31        | 21    | 11       |
| 2           | 31        | 15    | 18       |
| 3           | 31        | 18    | 44       |
Any idea on how to solve this?
Better to pre-aggregate the subtables that have multiple rows for the same combination of your outer GROUP BY columns. In your case, VisitaMaquina and reporteproducto should each be grouped by visita_id, producto_id, since they both have repeated rows for the same combination of visita_id = 31 and producto_id = 1.
You can change the visitamaquina vm and reporteproducto rp table aliases to subqueries of the following form:
(select visita_id, Producto_Id, sum(Total) as Total from visitamaquina
group by visita_id, Producto_Id) vm,
(select Producto_Id, Visita_Id, sum(Quantity) as Quantity from reporteproducto
group by Producto_Id, Visita_Id) rp
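Putting those derived tables in place of the original aliases might look roughly like this. It is only a sketch built from the tables and columns shown in the question; note that maquina is dropped, because the pre-aggregated vm no longer carries maquina_id (see the remark below):
-- Sketch only: the original query with vm and rp replaced by the
-- pre-aggregated derived tables; maquina is left out (an assumption).
select v.id, vm.producto_id, sum(vm.total), sum(rp.quantity), count(v.id)
from visita v, reporte r,
     (select visita_id, producto_id, sum(total) as total
        from visitamaquina group by visita_id, producto_id) vm,
     (select visita_id, producto_id, sum(quantity) as quantity
        from reporteproducto group by visita_id, producto_id) rp,
     (select r.id, empleado_id, fecha, cliente_id
        from ruta r, rutacliente rc
       where r.id = rc.ruta_id
         and fecha >= '2016-10-01' and fecha <= '2016-10-30'
       group by fecha, cliente_id, empleado_id) as rem
where rem.fecha = v.fecha
  and v.cliente_id = rem.cliente_id
  and r.visita_id = v.id
  and vm.visita_id = v.id
  and rp.visita_id = v.id
  and rp.producto_id = vm.producto_id
group by vm.visita_id, vm.producto_id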
Also, I noticed there is vm.maquina_id in your WHERE clause; maybe this is what causes your problem. If visitamaquina and reporteproducto both have repeated values of visita_id, producto_id, then the output should have both Total and Quantity multiplied. In your output the Quantity is right, which is odd.
My mistake. What I actually got was this:
| Producto_Id | Visita_Id | Total | Quantity | count(id) |
|-------------|-----------|-------|----------|-----------|
| 1           | 31        | 42    | 22       | 4         |
| 2           | 31        | 45    | 36       | 6         |
| 3           | 31        | 36    | 88       | 4         |

SQL Query Conditional accumulation

Is it possible to display accumulated data, resetting the count based on a condition?
I would like the count to accumulate while the number column holds the value 1, and to restart whenever another value appears. Something like what is displayed in the column cumulative_with_condition below.
+----+------------+--------+
| id | release | number |
+----+------------+--------+
| 1 | 2016-07-08 | 4 |
| 2 | 2016-07-09 | 1 |
| 3 | 2016-07-10 | 1 |
| 4 | 2016-07-12 | 2 |
| 5 | 2016-07-13 | 1 |
| 6 | 2016-07-14 | 1 |
| 7 | 2016-07-15 | 1 |
| 8 | 2016-07-16 | 2-3 |
| 9 | 2016-07-17 | 3 |
| 10 | 2016-07-18 | 1 |
+----+------------+--------+
select * from version where id > 1 and id < 9;
+----+------------+--------+---------------------------+
| id | release | number | cumulative_with_condition |
+----+------------+--------+---------------------------+
| 2 | 2016-07-09 | 1 | 1 |
| 3 | 2016-07-10 | 1 | 2 |
| 4 | 2016-07-12 | 2 | 0 |
| 5 | 2016-07-13 | 1 | 1 |
| 6 | 2016-07-14 | 1 | 2 |
| 7 | 2016-07-15 | 1 | 3 |
| 8 | 2016-07-16 | 2-3 | 0 |
+----+------------+--------+---------------------------+
You want something like row_number() (not exactly, but like that). You can do that using variables:
select t.*,
       (@rn := if(number = 1, @rn + 1,
                  if(@n := number, 0, 0)
                 )
       ) as cumulative_with_condition
from t cross join
     (select @n := '', @rn := 0) params
order by t.id;
As an alternative to using user variables, as demonstrated by Gordon Linoff, in this case it's also possible to self-join, group and count:
SELECT t.id, t.release, t.number, COUNT(version.id) AS cumulative_with_condition
FROM version RIGHT JOIN (
SELECT highs.*, MAX(lows.id) min
FROM version lows RIGHT JOIN version highs ON lows.id <= highs.id
WHERE lows.number <> '1'
GROUP BY highs.id
) t ON version.id > t.min AND version.id <= t.id
WHERE t.id > 1 AND t.id < 9
GROUP BY t.id
See it on sqlfiddle.
But, frankly, neither approach is particularly elegant—as I commented previously, you're probably best off implementing this within your application code.
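Side note, not from either answer: if MySQL 8.0+ is available (an assumption; the question predates it), the reset-on-condition count can also be expressed with window functions alone. The inner SUM counts how many "reset" rows (number <> '1') have been seen so far, which defines a group; the outer SUM then counts the 1s within each group, and the reset row itself stays at 0. A sketch against the version table from the question:
select id, `release`, number,
       -- running count of 1s within the current group; the reset row adds 0
       sum(number = '1') over (partition by grp order by id) as cumulative_with_condition
from (select v.*,
             -- grp increases every time a non-1 value appears, starting a new run
             sum(number <> '1') over (order by id) as grp
      from version v) x
where id > 1 and id < 9
order by id;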

Select most recent MAX() and MIN() - WebSQL

I'm building an exercises web app and I'm working with two tables like this:
Table 1: weekly_stats
| id | code | type | date | time |
|----|--------------|--------------------|------------|----------|
| 1 | CC | 1 | 2015-02-04 | 19:15:00 |
| 2 | CC | 2 | 2015-01-28 | 19:15:00 |
| 3 | CPC | 1 | 2015-01-26 | 19:15:00 |
| 4 | CPC | 1 | 2015-01-25 | 19:15:00 |
| 5 | CP | 1 | 2015-01-24 | 19:15:00 |
| 6 | CC | 1 | 2015-01-23 | 19:15:00 |
| .. | ... | ... | ... | ... |
Table 2: global_stats
| id | exercise_number |correct | wrong |
|----|-----------------|--------|-----------|
| 1 | 138 | 1 | 0 |
| 2 | 246 | 1 | 0 |
| 3 | 988 | 1 | 10 |
| 4 | 13 | 5 | 0 |
| 5 | 5 | 4 | 7 |
| 6 | 5 | 4 | 7 |
| .. | ... | ... | ... |
What I would like is to get MAX(correct - wrong) and MIN(correct - wrong), and right now I'm working with this query:
SELECT
exercise_number,
date,
time
FROM weekly_stats AS w JOIN global_stats AS g
ON w.id=g.id
WHERE correct - wrong = (SELECT MAX(correct - wrong) from global_stats)
UNION
SELECT
exercise_number,
date,
time
FROM weekly_stats AS w JOIN global_stats AS g
ON w.id=g.id
WHERE correct - wrong = (SELECT MIN(correct - wrong) from global_stats);
This query is working well, except for one thing: when "WHERE correct - wrong = (SELECT MIN(correct - wrong) [...]" matches more than one row, the first row is selected, but I would like the most recent one to be returned (in other words, ordered by datetime(date, time)). Is that possible?
Thanks!
I think you can solve it like this:
SELECT * FROM (
SELECT
1 as sort_column,
exercise_number,
date,
time
FROM weekly_stats AS w JOIN global_stats AS g
ON w.id=g.id
WHERE correct - wrong = (SELECT MAX(correct - wrong) from global_stats)
ORDER BY date DESC, time DESC
LIMIT 1 ) as a
UNION
SELECT * FROM (
SELECT
2 as sort_column,
exercise_number,
date,
time
FROM weekly_stats AS w JOIN global_stats AS g
ON w.id=g.id
WHERE correct - wrong = (SELECT MIN(correct - wrong) from global_stats)
ORDER BY date DESC, time DESC
LIMIT 1) as b
ORDER BY sort_column;
Here is the documentation about how UNION works.

get amount between range [closed]

This is my simple table:
+----+------------+-------+
| id | date       | meter |
+----+------------+-------+
| 1  | 2103-11-01 | 5     |
| 2  | 2103-11-10 | 8     |
| 4  | 2103-11-14 | 10    |
| 6  | 2103-11-20 | 18    |
| 7  | 2103-11-25 | 25    |
| 10 | 2103-11-29 | 30    |
+----+------------+-------+
How do I get the meter usage between each pair of consecutive readings, like below?
+----------------+----------------+-------+-----+--------+
| date1 | date2 | start | end | amount |
+----------------+----------------+-------+-----+--------+
| 2013-11-01 | 2013-11-10 | 5 | 8 | 3 |
| 2013-11-10 | 2013-11-14 | 8 | 10 | 2 |
| 2013-11-14 | 2013-11-20 | 10 | 18 | 8 |
| 2013-11-20 | 2013-11-25 | 18 | 25 | 7 |
| 2013-11-25 | 2013-11-29 | 25 | 30 | 5 |
+----------------+----------------+-------+-----+--------+
Edit:
I got it:
select meters1.date as date1, min(meters2.date) as date2, meters1.meter as start,
meters2.meter as end, (meters2.meter - meters1.meter) as amount
from meters meters1, meters meters2 where meters1.date < meters2.date
group by date1;
Outputs:
+------------+------------+-------+-----+--------+
| date1 | date2 | start | end | amount |
+------------+------------+-------+-----+--------+
| 2013-11-01 | 2013-11-10 | 5 | 8 | 3 |
| 2013-11-10 | 2013-11-14 | 8 | 10 | 2 |
| 2013-11-14 | 2013-11-20 | 10 | 18 | 8 |
| 2013-11-20 | 2013-11-25 | 18 | 25 | 7 |
| 2013-11-25 | 2013-11-29 | 25 | 30 | 5 |
+------------+------------+-------+-----+--------+
Original Post:
This is most of the way there:
select meters1.date as date1, meters2.date as date2, meters1.meter as start,
meters2.meter as end, (meters2.meter - meters1.meter) as amount
from meters meters1, meters meters2 having date1 < date2 order by date1;
It outputs:
+------------+------------+-------+-----+--------+
| date1 | date2 | start | end | amount |
+------------+------------+-------+-----+--------+
| 2013-11-01 | 2013-11-10 | 5 | 8 | 3 |
| 2013-11-01 | 2013-11-20 | 5 | 18 | 13 |
| 2013-11-01 | 2013-11-29 | 5 | 30 | 25 |
| 2013-11-01 | 2013-11-14 | 5 | 10 | 5 |
| 2013-11-01 | 2013-11-25 | 5 | 25 | 20 |
| 2013-11-10 | 2013-11-20 | 8 | 18 | 10 |
| 2013-11-10 | 2013-11-29 | 8 | 30 | 22 |
| 2013-11-10 | 2013-11-14 | 8 | 10 | 2 |
| 2013-11-10 | 2013-11-25 | 8 | 25 | 17 |
| 2013-11-14 | 2013-11-25 | 10 | 25 | 15 |
| 2013-11-14 | 2013-11-20 | 10 | 18 | 8 |
| 2013-11-14 | 2013-11-29 | 10 | 30 | 20 |
| 2013-11-20 | 2013-11-25 | 18 | 25 | 7 |
| 2013-11-20 | 2013-11-29 | 18 | 30 | 12 |
| 2013-11-25 | 2013-11-29 | 25 | 30 | 5 |
+------------+------------+-------+-----+--------+
If it's SQL Server, try it this way:
WITH cte AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY date) rnum
FROM table1
)
SELECT c.date date1, p.date date2, c.meter [start], p.meter [end], p.meter - c.meter amount
FROM cte c JOIN cte p
ON c.rnum = p.rnum - 1
Here is SQLFiddle demo
If it's MySQL then you can do
SELECT date1, date2, meter1, meter2, meter2 - meter1 amount
FROM
(
SELECT @d date2, date date1, @m meter2, meter meter1, @d := date, @m := meter
FROM table1 CROSS JOIN (SELECT @d := NULL, @m := NULL) i
ORDER BY date DESC
) q
WHERE date2 IS NOT NULL
ORDER BY date1
Here is SQLFiddle demo
Output in both cases:
| DATE1 | DATE2 | START | END | AMOUNT |
|------------|------------|-------|-----|--------|
| 2103-11-01 | 2103-11-10 | 5 | 8 | 3 |
| 2103-11-10 | 2103-11-14 | 8 | 10 | 2 |
| 2103-11-14 | 2103-11-20 | 10 | 18 | 8 |
| 2103-11-20 | 2103-11-25 | 18 | 25 | 7 |
| 2103-11-25 | 2103-11-29 | 25 | 30 | 5 |
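A side note not in the original answer: on MySQL 8.0+ (an assumption; the question predates it) the same pairing of each reading with the next one can be written with LEAD(), avoiding user variables. A sketch against the same table1:
SELECT date1, date2, `start`, `end`, `end` - `start` AS amount
FROM
(
  -- LEAD(...) OVER (ORDER BY date) looks at the following row, i.e. the next reading
  SELECT date AS date1,
         LEAD(date) OVER (ORDER BY date) AS date2,
         meter AS `start`,
         LEAD(meter) OVER (ORDER BY date) AS `end`
  FROM table1
) q
WHERE date2 IS NOT NULL
ORDER BY date1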
MySQL
SELECT DATES.date1,
DATES.date2,
m1.meter as start,
m2.meter as end,
m2.meter - m1.meter as amount
FROM
(SELECT date as date1,
(SELECT min(date)
FROM tableName t2
WHERE t2.date > t1.date) as date2
FROM tableName t1
)DATES,
tableName m1,
tableName m2
WHERE DATES.date2 IS NOT NULL
AND m1.date = DATES.date1
AND m2.date = DATES.date2
ORDER BY DATES.date1
sqlFiddle here
In MS SQL Server 2002, change the word end to "end", as it complains about a syntax error near end.
You haven't made it clear whether you're really using MySQL or SQL Server, but I'm posting a solution that works for SQL Server 2008 and above. It might work for 2005, but I can't test that.
-- Set up a temp table with sample data
DECLARE @testData AS TABLE(
id int,
dt date,
meter int)
INSERT @testData(id, dt, meter) VALUES
(1, '2013-11-01', 5)
,(2, '2013-11-10', 8)
,(4, '2013-11-14', 10)
,(6, '2013-11-20', 18)
,(7, '2013-11-25', 25)
,(10, '2013-11-29',30)
---------------------------------------------
-- Begin SQL Server solution
;WITH cte AS (
SELECT
ROW_NUMBER() OVER (ORDER BY id) AS rownum
,id
,dt
,meter
FROM
@testData AS [date2]
)
SELECT
t1.id
,t1.dt AS [date1]
,t2.dt AS [date2]
,t1.meter AS [start]
,t2.meter AS [end]
,t2.meter - t1.meter AS [amount]
FROM
cte t1
LEFT OUTER JOIN cte t2 ON (t2.rownum = t1.rownum + 1)
WHERE
t2.dt IS NOT NULL
If you're using MySQL, then a self-join will work well here. Join the table to itself, using an ON clause to make sure you don't join the same record to itself. This will give you ((N * N) - N) permutations of your data, where N is the number of original rows.
SELECT
...
FROM
tableName first
JOIN
tableName second
ON first.id != second.id
Then, it's all about SELECTing the right stuff (including the calculation of the difference between the two meter values). To get the columns in the result set you posted, you'd probably want to SELECT:
first.date AS date1,
second.date AS date2,
first.meter AS start,
second.meter AS end,
ABS(first.meter - second.meter) AS amount
Edit
Ah, I see. I'd envisioned something like an inter-city mileage chart that you used to see on road maps (where you'd have the same cities in the rows and columns, and the cell at the intersection would indicate the number of miles between those two cities).
But it looks like you just want to compare values from one date to the next. If that's the case, you can take advantage of the way MySQL handles GROUPing and ORDERing... but be careful, because I'm not sure this is guaranteed:
mysql> SELECT
table1.date AS date1,
table2.date AS date2,
table1.meter AS start,
table2.meter AS end,
ABS(table1.meter - table2.meter) AS amount
FROM tableName table1
JOIN tableName table2
WHERE table2.date > table1.date
GROUP BY table1.date
ORDER BY table2.date - table1.date;
+---------------------+---------------------+-------+------+--------+
| date1 | date2 | start | end | amount |
+---------------------+---------------------+-------+------+--------+
| 2103-11-25 00:00:00 | 2103-11-29 00:00:00 | 25 | 30 | 5 |
| 2103-11-10 00:00:00 | 2103-11-14 00:00:00 | 8 | 10 | 2 |
| 2103-11-20 00:00:00 | 2103-11-25 00:00:00 | 18 | 25 | 7 |
| 2103-11-14 00:00:00 | 2103-11-20 00:00:00 | 10 | 18 | 8 |
| 2103-11-01 00:00:00 | 2103-11-10 00:00:00 | 5 | 8 | 3 |
+---------------------+---------------------+-------+------+--------+
5 rows in set (0.00 sec)