I'm trying to write a query that will return data grouped by date ranges and am wondering if there's a way to do it in one query (including the calculation), or if I need to write three separate ones? The dates in my table are stored as unix timestamps.
For example, my records look like this:
id type timestamp
312 87 1299218991
313 87 1299299232
314 90 1299337639
315 87 1299344130
316 87 1299348977
497 343 1304280210
498 343 1304280392
499 343 1304280725
500 343 1304280856
501 343 1304281015
502 343 1304281200
503 343 1304281287
504 343 1304281447
505 343 1304281874
566 90 1305222137
567 343 1305250276
568 343 1305387869
569 343 1305401114
570 343 1305405062
571 343 1305415659
573 343 1305421418
574 343 1305431457
575 90 1305431756
576 343 1305432456
577 259 1305441833
578 259 1305442234
580 343 1305456152
581 343 1305467261
582 343 1305483902
I'm trying to write a query that will find all records with a "created" date between:
2011-05-01 and 2011-06-01 (Month)
2011-03-01 and 2011-06-01 (Quarter)
2010-05-01 and 2011-06-01 (Year)
I tried the following (in this case, I hardcoded the unix value for just the month to see if I could get it to work... ):
SELECT COUNT(id) AS idCount,
MIN(FROM_UNIXTIME(timestamp)) AS fromValue,
MAX(FROM_UNIXTIME(timestamp)) AS toValue
FROM uc_items
WHERE ADDDATE(FROM_UNIXTIME(timestamp), INTERVAL 1 MONTH)
>= FROM_UNIXTIME(1304233200)
But it doesn't seem to work: the fromValue is 2011-04-02 21:12:56 and the toValue is 2011-10-25 06:20:14, which obviously isn't a range between 2011-05-01 and 2011-06-01.
This ought to work:
SELECT COUNT(id) AS idCount,
FROM_UNIXTIME(MIN(timestamp)) AS fromValue,
FROM_UNIXTIME(MAX(timestamp)) AS toValue
FROM uc_items
WHERE timestamp BETWEEN UNIX_TIMESTAMP('2011-05-01') AND UNIX_TIMESTAMP('2011-06-01 23:59:59')
Also, as a performance tip: avoid applying functions to columns in a WHERE clause (e.g. your WHERE ADDDATE(FROM_UNIXTIME(timestamp)) ...). Doing that prevents MySQL from using any indexes on the timestamp column.
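As for getting all three ranges (plus the count) out of a single query, one option is conditional aggregation. A minimal sketch, assuming the same uc_items table and the range boundaries listed in the question (adjust the boundary handling to taste):
SELECT
  SUM(timestamp >= UNIX_TIMESTAMP('2011-05-01') AND timestamp < UNIX_TIMESTAMP('2011-06-01')) AS month_count,   -- Month
  SUM(timestamp >= UNIX_TIMESTAMP('2011-03-01') AND timestamp < UNIX_TIMESTAMP('2011-06-01')) AS quarter_count, -- Quarter
  SUM(timestamp >= UNIX_TIMESTAMP('2010-05-01') AND timestamp < UNIX_TIMESTAMP('2011-06-01')) AS year_count     -- Year
FROM uc_items;
Each SUM counts the rows where the boolean comparison evaluates to 1, so all three counts come back in a single row.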
Does the following MySQL code work, i.e. is the DENSE_RANK() function available in MySQL, or is it only usable in an Oracle database?
Select Employee, Cost_Center, Cost_Grant, Percent
,DENSE_RANK() over (PARTITION BY Employee order by Percent ASC) as Rank
Employee  Cost_Center  Cost_Grant  Percent
AB61526   10030                    54
AB61526   14020                    46
AB60020   1040                     68
AB60020   10010                    32
AB60038   11000                    71
AB60038   10010                    29
AK50051   10020                    23
AK50051   11520                    78
Expected results output:
Employee  Cost_Center  Cost_Grant  Percent  Rank
AB61526   10030                    54       1
AB61526   14020                    46       2
AB60020   1040                     68       2
AB60020   10010                    32       1
AB60038   11000                    71       2
AB60038   10010                    29       1
AK50051   10020                    23       1
AK50051   11520                    78       2
DENSE_RANK() is supported in MySQL beginning with version 8.0, and in MariaDB beginning with version 10.2.
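So on MySQL 8.0+ the query above should run essentially as written once it has a FROM clause. A minimal sketch, assuming the data lives in a table called cost_allocations (the question doesn't give a table name), and quoting the alias because RANK is a reserved word in MySQL 8.0:
SELECT Employee, Cost_Center, Cost_Grant, Percent,
       DENSE_RANK() OVER (PARTITION BY Employee ORDER BY Percent ASC) AS `Rank`  -- RANK is reserved in MySQL 8.0
FROM cost_allocations;  -- hypothetical table name, not given in the question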
I'm trying to filter so that the salaryMonth column only contains data that has 2020 in it, i.e. the 2019 rows are filtered out.
SELECT sum(km_amount) as total
, user_id
, salaryMonth
from kms
, users
where users.id = kms.user_id
group
by salaryMonth
, user_id
Did you try something like this?
SELECT
sum(km_amount) as total,
user_id,
salaryMonth
FROM kms, users
WHERE
users.id=kms.user_id
AND salaryMonth LIKE '%2020%'
GROUP BY
salaryMonth, user_id
You could save yourself no end of misery by refactoring your table as:
total user_id salary_yearmonth
625 64 2020-02-01
595 70 2020-02-01
600 74 2020-02-01
632 75 2020-02-01
471 77 2020-02-01
788 29 2019-03-01
35 4 2020-03-01
22 39 2020-03-01
373 47 2020-03-01
196 53 2020-03-01
140 74 2020-03-01
228 75 2020-03-01
49 29 2019-04-01
96 63 2019-05-01
406 4 2019-06-01
966 4 2019-07-01
514 1 2019-08-01
637 4 2019-08-01
580 47 2019-08-01
11 1 2019-09-01
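With a real date column like that, the year filter becomes a simple range comparison instead of a LIKE over a string. A sketch, assuming the refactored table is called kms_monthly (that name is mine, not from the post):
SELECT total, user_id, salary_yearmonth
FROM kms_monthly                        -- hypothetical name for the refactored table
WHERE salary_yearmonth >= '2020-01-01'
  AND salary_yearmonth <  '2021-01-01';
A range like this can also use an index on salary_yearmonth, which LIKE '%2020%' cannot.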
Trying to sort the following TEAM_TOTAL column in descending order:
MATCHID TEAM_TOTAL
---------- -----------------
573 Total 112
573 Total 2 for 115
574 Total 9 for 97
574 Total 2 for 100
575 Total 9 for 129
575 Total 9 for 101
576 Total 4 for 191
576 Total 9 for 160
577 Total 8 for 157
577 Total 7 for 137
578 Total 6 for 193
578 Total 119
But the problem is that TEAM_TOTAL is a varchar. Is there a way, with a query alone, to get the results sorted in descending order? Moreover, there is text in that column as well, and I am running out of ideas.
The result should have been like the result set below:
MATCHID TEAM_TOTAL
---------- -----------------
578 Total 6 for 193
576 Total 4 for 191
576 Total 9 for 160
577 Total 8 for 157
577 Total 7 for 137
575 Total 9 for 129
578 Total 119
573 Total 2 for 115
573 Total 112
575 Total 9 for 101
574 Total 2 for 100
574 Total 9 for 97
Give this a try:
-- locate('for', team_total) finds the 'for' (0 when it is absent);
-- +4 skips "for " when it is present, +7 skips "Total " when it is not,
-- and the trailing + 0 coerces the extracted text to a number
select *
from t
order by substring(team_total,
                   locate('for', team_total) +
                   if(locate('for', team_total) > 0, 4, 7)) + 0 desc
Try to extract the integer (string after the last space):
-- 'Total 112' - extracts 112
SELECT SUBSTRING('Total 112', LENGTH('Total 112') - LOCATE(' ', REVERSE('Total 112')) + 2);
-- 'Total 6 for 193' - extracts 193
SELECT SUBSTRING('Total 6 for 193', LENGTH('Total 6 for 193') - LOCATE(' ', REVERSE('Total 6 for 193')) + 2);
Now, you can convert that string to a number and then order by it.
SELECT * FROM teams
ORDER BY
CAST(SUBSTRING(TEAM_TOTAL, LENGTH(TEAM_TOTAL) - LOCATE(' ', REVERSE(TEAM_TOTAL)) + 2) AS SIGNED) DESC
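For what it's worth, SUBSTRING_INDEX can do the same extraction a bit more directly, since with a negative count it returns everything after the last delimiter. A sketch against the same teams table:
SELECT *
FROM teams
ORDER BY SUBSTRING_INDEX(TEAM_TOTAL, ' ', -1) + 0 DESC;  -- + 0 converts the trailing text to a number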
I'm working on a problem of finding mean processing times. I'm trying to eliminate outlier data by essentially taking an average over only the best 80% of the data.
I am struggling to adapt existing Top N per Group solutions to perform averaging per group. I'm using SQL Server 2008.
Here is a sample of what the table looks like:
OpID | ProcessMin | Datestamp
2 | 234 | 2012-01-26 09:07:29.000
2 | 222 | 2012-01-26 10:04:22.000
3 | 127 | 2012-01-26 11:09:51.000
3 | 134 | 2012-01-26 05:02:11.000
3 | 566 | 2012-01-26 05:27:31.000
4 | 234 | 2012-01-26 04:08:41.000
I want it to take the lowest 80% of the ProcessMin values for each OpID and average them. Any help would be appreciated!
* UPDATE *
Given the following table:
OpID ProcessMin Datestamp
602 33 46:54.0
602 36 38:59.0
602 37 18:45.0
602 39 22:01.0
602 41 36:43.0
602 42 33:00.0
602 49 03:48.0
602 51 22:08.0
602 69 39:15.0
602 105 59:56.0
603 13 34:07.0
603 18 07:17.0
603 31 57:07.0
603 39 01:52.0
603 39 01:02.0
603 40 40:10.0
603 46 22:56.0
603 47 11:03.0
603 48 40:13.0
603 56 25:01.0
I would expect this output:
OptID ProcessMin
602 41
603 34.125
Notice that since there are 10 data points for each OpID, it would only average the lowest 8 values (80%).
You can use ntile
select OpID,
avg(ProcessMin) as ProcessMin
from
(
select OpID,
ProcessMin,
ntile(5) over(partition by OpID order by ProcessMin) as nt
from YourTable
) as T
where nt <= 4
group by OpID
If ProcessMin is an integer, you can do avg(cast(ProcessMin as float)) as ProcessMin to get the decimal average value.
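Putting those two pieces together, the full query with the cast applied would look like this (same placeholder YourTable name as above):
select OpID,
       avg(cast(ProcessMin as float)) as ProcessMin
from
(
    select OpID,
           ProcessMin,
           ntile(5) over(partition by OpID order by ProcessMin) as nt
    from YourTable
) as T
where nt <= 4      -- keep the lowest 4 of the 5 equal-sized buckets, i.e. roughly the lowest 80%
group by OpID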
I have a table like this.
id day1 day2 day3
1 411 523 223
2 413 554 245
3 417 511 209
4 420 515 232
5 422 522 212
6 483 567 212
7 456 512 256
8 433 578 209
9 438 532 234
10 418 555 223
11 460 510 263
12 453 509 245
13 441 524 233
14 430 543 261
15 456 582 222
16 444 524 241
17 478 511 211
18 421 583 222
I want to select all the IDs that have duplicate values in day2.
I'm doing
select day2, count(*) from resultater group by day2 having count(*) > 1;
Is it possible to list all the IDs within the groups?
select day2, count(*), group_concat(id)
from resultater
group by day2
having count(*) > 1;
should do the trick.
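If you'd rather get each id back as its own row instead of a comma-separated list, a variation with a subquery (a sketch against the same resultater table) also works:
select id, day2
from resultater
where day2 in (select day2
               from resultater
               group by day2
               having count(*) > 1)
order by day2, id;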