I am running a cumulative sum ordered by date and I am not getting the expected result. Some records are presented in a different order compared to the order used for the sum.
Please have a look at SQL Fiddle
I would expect either the following result:
2015-05-05T00:00:00Z 50 30 20 90 120
2015-05-05T00:00:00Z 60 30 30 120 100
2015-05-04T00:00:00Z 70 50 20 30 70
2015-05-04T00:00:00Z 80 40 40 70 50
2015-05-03T00:00:00Z 30 20 10 10
or the following order:
2015-05-05T00:00:00Z 60 30 30 120
2015-05-05T00:00:00Z 50 30 20 90
2015-05-04T00:00:00Z 80 40 40 70
2015-05-04T00:00:00Z 70 50 20 30
2015-05-03T00:00:00Z 30 20 10 10
(Added) Please note that negative values are also possible. This is why I mentioned in the answers below that ordering by the cumulative sum would not solve the problem. As an example, I will slightly modify the result:
2015-05-05T00:00:00Z 30 60 -30 60
2015-05-05T00:00:00Z 50 30 20 90
2015-05-04T00:00:00Z 80 40 40 70
2015-05-04T00:00:00Z 70 50 20 30
2015-05-03T00:00:00Z 30 20 10 10
Thanks for the help.
http://sqlfiddle.com/#!9/7204d4/2
gives your expected output. I have added @cum := @cum + tot_earn_pts - tot_redeem_pts asc to the ORDER BY of your query.
Add an extra field to the ORDER BY: cum_liability_pts desc:
SQL Fiddle
SELECT *
FROM (
    SELECT date,
           tot_earn_pts,
           tot_redeem_pts,
           tot_earn_pts - tot_redeem_pts AS tot_liability_pts,
           @cum := @cum + tot_earn_pts - tot_redeem_pts AS cum_liability_pts
    FROM (
        SELECT date,
               earn_points AS tot_earn_pts,
               redeem_points AS tot_redeem_pts
        FROM i_report_total_order
        /* WHERE website_id = 36 */
    ) tots
    JOIN (SELECT @cum := 0) init
    ORDER BY date ASC
) res_asc
ORDER BY date DESC, cum_liability_pts DESC;
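If you are on MySQL 8.0 or later, a window function avoids the user-variable trick entirely and lets the accumulation order (ascending by date) differ from the display order (descending). A minimal sketch, assuming the same i_report_total_order table plus a hypothetical id column used as a tiebreaker within a date:

SELECT date,
       earn_points AS tot_earn_pts,
       redeem_points AS tot_redeem_pts,
       earn_points - redeem_points AS tot_liability_pts,
       -- running total accumulated in ascending date order
       SUM(earn_points - redeem_points) OVER (ORDER BY date ASC, id ASC) AS cum_liability_pts
FROM i_report_total_order
/* WHERE website_id = 36 */
ORDER BY date DESC, id DESC;  -- display newest first without affecting the sum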
Related
I have a transaction table with positive and negative numbers and a budget table. Subtracting a negative transaction from the budget increases the total.
bud_tab

id  budget
1   90
2   80
3   50
4   30

trans_tab

id  trans
1   -50
2   80
3   -70
4   60
What query should I use to get an output like this:
budget  trans  total
90      -50    140
80      80     0
50      -70    120
30      60     -30
From the look of the output, this seems like a basic math calculation. Am I missing something?
select budget, trans, budget - trans as total from bud_tab natural join trans_tab;
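For reference, a self-contained sketch using the sample data (table and column names as given in the question), which reproduces the expected output:

CREATE TABLE bud_tab   (id INT PRIMARY KEY, budget INT);
CREATE TABLE trans_tab (id INT PRIMARY KEY, trans INT);

INSERT INTO bud_tab   VALUES (1, 90), (2, 80), (3, 50), (4, 30);
INSERT INTO trans_tab VALUES (1, -50), (2, 80), (3, -70), (4, 60);

-- NATURAL JOIN matches rows on the shared id column;
-- 90 - (-50) = 140, 80 - 80 = 0, 50 - (-70) = 120, 30 - 60 = -30
SELECT budget, trans, budget - trans AS total
FROM bud_tab NATURAL JOIN trans_tab;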
I have a data that looks like more or less like this table. Let's call this "expenses" table.
date        transportation  fruits  vegetables
2022-05-25  10              10      30
2022-05-26  10              0       40
2022-05-27  10              20      40
2022-05-28  10              20      30
2022-05-29  10              0       60
2022-05-30  10              10      30
2022-05-31  10              10      40
2022-06-01  10              10      30
2022-06-02  10              20      30
2022-06-03  10              30      40
2022-06-04  10              0       20
2022-06-05  10              30      30
2022-06-06  10              20      30
2022-06-07  10              0       30
2022-06-08  10              0       30
2022-06-09  10              10      20
2022-06-10  10              30      30
I want to know how many days, for the months of May and June, the sum of fruits and vegetables was equal to or greater than 50.
The answer that I'm expecting to get is 4 days for May (05-27, 05-28, 05-29, 05-31) and 5 days for June (06-02, 06-03, 06-05, 06-06, 06-10).
I tried to use this script, however...
SELECT date_format(date, '%M' '%Y'), COUNT(date)
FROM expenses
GROUP BY date_format(date, '%M' '%Y')
HAVING SUM(fruits + vegetables)>=50
...instead of counting only the number of days per month whose sum of fruits and vegetables was equal to or greater than 50, it counts all days in the table for the two months; that is to say, it yields 7 days for May and 10 days for June.
I am using the latest version of MySQL.
HAVING filters whole groups after aggregation, so SUM(fruits + vegetables) >= 50 tests the monthly total rather than each day. Instead of a HAVING clause, filter the rows with WHERE before grouping.
Query
select date_format(date, '%M' '%Y') as month, count(*) as no_of_days
from expenses
where fruits + vegetables >= 50
group by date_format(date, '%M' '%Y');
SQL Fiddle
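If you also want to keep every month in the result (including months where no day reaches 50), conditional aggregation is an alternative; in MySQL a boolean comparison evaluates to 1 or 0, so it can be summed directly. A sketch against the same expenses table:

SELECT DATE_FORMAT(date, '%M %Y') AS month,
       SUM(fruits + vegetables >= 50) AS days_at_or_over_50,  -- counts only qualifying days
       COUNT(*) AS days_recorded
FROM expenses
GROUP BY DATE_FORMAT(date, '%M %Y');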
I have a database with a Datetime column containing intervals of +/- 30 seconds and a Value column containing random numbers between 10 and 100. My table looks like this:
datetime value
----------------------------
2016-05-04 20:47:20 12
2016-05-04 20:47:40 44
2016-05-04 20:48:30 56
2016-05-04 20:48:40 25
2016-05-04 20:49:30 92
2016-05-04 20:49:40 61
2016-05-04 20:50:00 79
2016-05-04 20:51:20 76
2016-05-04 20:51:30 10
2016-05-04 20:51:40 47
2016-05-04 20:52:40 23
2016-05-04 20:54:00 40
2016-05-04 20:54:10 18
2016-05-04 20:54:50 12
2016-05-04 20:56:00 55
What I want is the following output:
datetime max_val min_val
-----------------------------------------
2016-05-04 20:45:00 92 12
2016-05-04 20:50:00 79 10
2016-05-04 20:55:00 55 55
Before I can even continue getting the maximum and minimum values, I first have to GROUP the datetime column into 5-minute intervals. Based on my research, I came up with this:
SELECT
time,
value
FROM random_number_minute
GROUP BY
UNIX_TIMESTAMP(time) DIV 300
Which actually GROUPS the datetime column into 5 minute intervals like this:
datetime
-------------------
2016-05-04 20:47:20
2016-05-04 20:50:00
2016-05-04 20:56:00
This comes very close, as it takes the next closest datetime to, in this case, 20:45:00, 20:50:00, etc. However, I would like to round the datetime down to the nearest 5 minutes regardless of the seconds; for instance, if the minutes are:
minutes rounddown
--------------------
10 10
11 10
12 10
13 10
14 10
15 15
16 15
17 15
18 15
19 15
20 20
The minutes and seconds could be 14:59 and I would like to round down to 10:00. I also tried this after hours of research:
SELECT
time,
time_rounded =
dateadd(mi,(datepart(mi,dateadd(mi,1,time))/5)*5,dateadd(hh,datediff(hh,0,dateadd(mi,1,time)),0))
But sadly this did not work. I get this error:
Incorrect parameter count in the call to native function 'datediff'
I tried this too:
SELECT
time, CASE
WHEN DATEDIFF(second, DATEADD(second, DATEDIFF(second, 0, time_out) / 300 * 300, 0), time) >= 240
THEN DATEADD(second, (DATEDIFF(second, 0, time) / 300 * 300) + 300, 0)
ELSE DATEADD(second, DATEDIFF(second, 0, time) / 300 * 300, 0)
END
Returning the same error.
How can I do this? And after the datetime is grouped, how can I get the max and min value of the data grouping?
Sorry if I'm repeating another answer; I'll delete this if I am.
SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(datetime)/300)*300) x
     , MIN(value) min_value
     , MAX(value) max_value
FROM my_table
GROUP BY x;
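MySQL lets you group by the select alias x, but if you prefer to keep the bucket expression explicit (or need the query to be more portable), the same floor-to-300-seconds bucketing can be written against the question's random_number_minute table like this (a sketch; column names taken from the sample data):

SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(`datetime`) / 300) * 300) AS bucket_start,
       MIN(`value`) AS min_value,
       MAX(`value`) AS max_value
FROM random_number_minute
GROUP BY FLOOR(UNIX_TIMESTAMP(`datetime`) / 300);  -- same 5-minute bucket used for the label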
Use various date partition functions inside a GROUP BY.
Code:
SELECT from_unixtime(300 * floor(unix_timestamp(r.datetime) / 300)) AS 5datetime,  -- floor (not round) so the label matches the DIV 300 grouping
       MAX(r.value) AS max_value,
       MIN(r.value) AS min_value,
       (SELECT r.value
        FROM random_number_minute ra
        WHERE ra.datetime = r.datetime
        ORDER BY ra.datetime DESC
        LIMIT 1) AS first_val
FROM random_number_minute r
GROUP BY UNIX_TIMESTAMP(r.datetime) DIV 300
Output:
5datetime max_value min_value first_val
May, 04 2016 20:45:00 92 12 12
May, 04 2016 20:50:00 79 10 79
May, 04 2016 20:55:00 55 55 55
SQL Fiddle: http://sqlfiddle.com/#!9/e16b1/17/0
SELECT
timestamp(concat(date(time), ' ', hour(time), ':', minute(time) div 5 * 5)) as floor_time,
min(value),
max(value)
FROM random_number_minute
GROUP BY date(time), hour(time), minute(time) div 5 * 5
http://sqlfiddle.com/#!9/91212f/5
I have the following structure
Country - UserId - Points
840 23 24
840 32 31
840 22 38
840 15 35
840 10 20
250 15 33
724 17 12
etc
I want to get the position of a user in the ranking of each country according to points.
I'm using
select @rownum := @rownum + 1 Num, Country, UserId, points
from users, (SELECT @rownum := 0) r
where country = 840 order by points DESC;
I want to get the position of a single user inside his country
In this example, in country 840, if I select user id=23, I'll get position 4
Country - UserId - Points- Order
840 22 38 1
840 15 35 2
840 32 31 3
840 23 24 4
840 10 20 5
Try doing:
select * from (
    select @rownum := @rownum + 1 Num,
           Country,
           UserId,
           points
    from users, (select @rownum := 0) r
    where country = 840
    order by points desc
) a
where userId = 23
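On MySQL 8.0 or later you can skip the user variable entirely with ROW_NUMBER(); a sketch against the users table from the question:

SELECT Num, Country, UserId, points
FROM (
    SELECT ROW_NUMBER() OVER (PARTITION BY Country ORDER BY points DESC) AS Num,
           Country,
           UserId,
           points
    FROM users
) ranked
WHERE Country = 840
  AND UserId = 23;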
Using your query you'll receive the row number in your results, so it's not what you want. The best way is to generate the positions and save them to a separate column. This way you'll be able to select them easily and there will be no need to recalculate them on every query (which matters for performance).
To do that, you can modify your query to update rows instead of selecting them.
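A sketch of that update, assuming a hypothetical ranking column has been added to users (rerun it whenever points change):

SET @rownum := 0;

UPDATE users
SET ranking = (@rownum := @rownum + 1)  -- hypothetical column storing the position
WHERE country = 840
ORDER BY points DESC;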
For some reason I have to do this. I have a query whose result looks like this:
limit usage tariff total
0 10 10 700 7000
11 20 10 900 9000
21 30 10 1800 18000
31 > 11 2700 29700
The query returns 4 rows maximum (like above) or sometimes just 3 rows.
I want to change the rows into a single row with multiple columns, like this (the listing below is a single row, wrapped here for readability):
limit1 usage1 tariff1 total1 limit2 usage2 tariff2 total2
0 10 10 700 7000 11 20 10 900 9000
limit3 usage3 tariff3 total3 limit4 usage4 tariff4 total4
21 30 10 1800 18000 31 > 11 2700 29700
If the query returns just 3 rows, the values in columns limit4 through total4 should be empty. I don't know how to do that.
EDITED
I added an ID column, so the list is now:
ID limit usage tariff total
1 0 10 10 700 7000
2 11 20 10 900 9000
3 21 30 10 1800 18000
4 31 > 11 2700 29700
I tried to make it one row like this:
SELECT e.*, f.id AS id4, f.limit AS limit4, f.usage AS usage4, f.tariff AS tariff4, f.total AS total4
FROM
  (SELECT c.*, d.id AS id3, d.limit AS limit3, d.usage AS usage3, d.tariff AS tariff3, d.total AS total3
   FROM
     (SELECT b.id AS id1, b.limit AS limit1, b.usage AS usage1, b.tariff AS tariff1, b.total AS total1,
             a.id AS id2, a.limit AS limit2, a.usage AS usage2, a.tariff AS tariff2, a.total AS total2
      FROM testtariff a
      INNER JOIN testtariff b ON a.id != b.id
      LIMIT 1) c
   INNER JOIN testtariff d ON c.id1 != d.id AND c.id2 != d.id
   LIMIT 1) e
INNER JOIN testtariff f ON e.id1 != f.id AND e.id2 != f.id AND e.id3 != f.id
LIMIT 1
It works as I expected for 4 rows, but it does not work for 3 rows. Should I use a cursor?
This is generally called a "pivot". Here's how you do it:
select
  '0 10' as limit1,
  -- limit and usage are reserved words in MySQL, hence the backticks;
  -- the parentheses keep each BETWEEN test separate from the multiplication
  sum((`limit` between 0 and 10) * `usage`) as usage1,
  sum((`limit` between 0 and 10) * tariff) as tariff1,
  sum((`limit` between 0 and 10) * `usage` * tariff) as total1,
  '11 20' as limit2,
  sum((`limit` between 11 and 20) * `usage`) as usage2,
  sum((`limit` between 11 and 20) * tariff) as tariff2,
  sum((`limit` between 11 and 20) * `usage` * tariff) as total2,
  -- etc
from mytable
group by 1,5 -- etc
This works because (`limit` between x and y) evaluates to 1 if true and 0 if false, so multiplying by it is a simple way to split the results into the different groups.
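Since the edit adds an ID column, another option is to pivot on that ID with conditional aggregation; rows that are missing (the 3-row case) simply come back as NULL. A sketch, assuming the testtariff table from the edit (backticks again because limit and usage are reserved words):

SELECT
  MAX(CASE WHEN id = 1 THEN `limit` END) AS limit1,
  MAX(CASE WHEN id = 1 THEN `usage` END) AS usage1,
  MAX(CASE WHEN id = 1 THEN tariff END) AS tariff1,
  MAX(CASE WHEN id = 1 THEN total END) AS total1,
  MAX(CASE WHEN id = 2 THEN `limit` END) AS limit2,
  MAX(CASE WHEN id = 2 THEN `usage` END) AS usage2,
  MAX(CASE WHEN id = 2 THEN tariff END) AS tariff2,
  MAX(CASE WHEN id = 2 THEN total END) AS total2,
  -- same pattern for id = 3 and id = 4 up to total4
  MAX(CASE WHEN id = 4 THEN total END) AS total4
FROM testtariff;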