How do I get the minimum created date per user_id? sql_mode=only_full_group_by is enabled.
SELECT
user_id,
MIN(created_at)
FROM
subscribers
GROUP BY
user_id,
created_at
HAVING
MIN(DATE_FORMAT(created_at, '%Y-%m-%d')) > '2022-10-01'
Data is like this:
user_id  created_at
12       2022-11-11 12:10:11
13       2021-10-11 12:10:11
12       2022-08-11 11:10:11
15       2022-08-11 11:10:11
Expected result
user_id
13
12
Just use GROUP BY to group your users and then use MIN to find the minimum date:
SELECT
d.`user_id`,
MIN(d.`created_at`)
FROM
subscribers d
GROUP BY d.`user_id`
If you want it in a more sophisticated way, use:
SELECT
* ,
(SELECT MIN(d1.created_at) FROM subscribers d1 WHERE d1.user_id=a.userId ) AS createdAt
FROM
(SELECT
DISTINCT d.`user_id` AS userId
FROM
subscribers d) a
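If you also want the date filter from the original attempt, a HAVING on the aggregate is fine under only_full_group_by, because the grouping only needs user_id. A minimal sketch building on the first query above (the threshold value is taken from the question; adjust as needed):
-- Sketch: minimum created_at per user, keeping only users whose earliest
-- row is after 2022-10-01 (the threshold from the original HAVING)
SELECT
user_id,
MIN(created_at) AS min_created_at
FROM
subscribers
GROUP BY
user_id
HAVING
MIN(created_at) > '2022-10-01'
Here the filter is applied to the aggregate itself, so no non-aggregated column appears in HAVING and the mode is satisfied.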
My data looks like this:
Table - usr_weight
user_id  weight  log_time
1        10      2021-11-30 10:29:03
1        12      2021-11-30 12:29:03
1        11      2021-11-30 14:29:03
1        18      2021-12-01 08:29:03
1        12      2021-12-15 13:29:03
1        14      2021-12-15 17:29:03
Here, I have multiple rows per date with different times. I want to group by date and return the record with the max time for each date.
Query
select weight, log_time from usr_weight where user_id = 1 group by DATE(log_time)
Here, I get one record for each date, but it is not the row with max(log_time).
Using ROW_NUMBER we can try:
WITH cte AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY DATE(log_time)
ORDER BY log_time DESC) rn
FROM usr_weight
WHERE user_id = 1
)
SELECT user_id, weight, log_time
FROM cte
WHERE rn = 1;
Here is an old school join way of doing this:
SELECT uw1.user_id, uw1.weight, uw1.log_time
FROM usr_weight uw1
INNER JOIN
(
SELECT DATE(log_time) AS log_time_date, MAX(log_time) AS max_log_time
FROM usr_weight
WHERE user_id = 1
GROUP BY DATE(log_time)
) uw2
ON uw2.log_time_date = DATE(uw1.log_time) AND
uw2.max_log_time = uw1.log_time
WHERE
uw1.user_id = 1;
I am developing a WordPress website for e-learning. A student can attend a course many times and score a mark each time. Now I need to get, for each user, one row with the score from the last record. I have tried many examples, but I am not able to get the result. I have given my code below.
SELECT m.id
, m.email
, t.id_tracking
, t.user_id
, FROM_UNIXTIME(t.date)
, t.score
, t.groupe_id
FROM tracking t
join membres m
WHERE t.id_tracking IN (
SELECT MAX(date)
FROM tracking
GROUP BY user_id
)
I have used the above query; I don't know what I did wrong.
user_id email score date
1 test@testmail.com 78 15-06-2019
1 test@testmail.com 89 12-08-2019
2 sam@testmail.com 66 24-03-2018
2 sam@testmail.com 44 19-07-2019
3 siv@testmail.com 98 09-02-2019
3 siv@testmail.com 78 13-08-2020
I need to get a result like the one below:
user_id email score date
1 test@testmail.com 89 12-08-2019
2 sam@testmail.com 44 19-07-2019
3 siva@testmail.com 98 09-08-2020
You can GROUP BY email/user_id and select the maximum date from each group, converting the date to a UNIX timestamp, like this:
SELECT user_id, email, score, FROM_UNIXTIME(MAX(UNIX_TIMESTAMP(date)))
FROM tableName
GROUP BY user_id
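Note that the score in the query above is not guaranteed to come from the row holding the maximum date; MySQL may pick it from any row in the group. If the score must come from the same row as the latest date, one option is to join back on the per-user maximum date. A sketch only, assuming membres.id is the key that tracking.user_id points to (the original query has no ON clause, so that join condition is a guess):
SELECT m.id, m.email, t.user_id, t.score, FROM_UNIXTIME(t.date) AS last_date
FROM tracking t
JOIN (
-- latest tracking date per user
SELECT user_id, MAX(date) AS max_date
FROM tracking
GROUP BY user_id
) latest ON latest.user_id = t.user_id AND latest.max_date = t.date
JOIN membres m ON m.id = t.user_id
If two tracking rows share the same maximum date for a user, this returns both of them.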
I am not sure about your DB, but have you tried it like this?
SELECT
*
FROM
(SELECT * FROM process_table ORDER BY date desc) tbl1
GROUP BY
tbl1.id
I have a ticketing system that I am trying to run a report on. I am trying to get the number of tickets touched per user.
With this first query:
SELECT * FROM (
SELECT TicketID, UserID, EventDateTime
FROM dcscontact.ticketevents
WHERE EventDateTime BETWEEN '2016-06-22' AND '2016-06-23'
ORDER BY EventDateTime DESC) x
WHERE UserID=80
GROUP BY TicketID;
I am able to list the tickets touched for a particular user, and can count them manually:
TicketID UserID EventDateTime
99168 80 6/22/2016 13:21
99193 80 6/22/2016 7:42
99213 80 6/22/2016 13:02
99214 80 6/22/2016 6:30
99221 80 6/22/2016 6:57
99224 80 6/22/2016 7:48
99226 80 6/22/2016 6:27
99228 80 6/22/2016 8:49
99229 80 6/22/2016 8:53
99232 80 6/22/2016 9:18
99237 80 6/22/2016 13:08
But when I drop the WHERE UserID= clause and try to use it as a subquery, like so:
SELECT UserID, COUNT(*) as count FROM (
SELECT * FROM (
SELECT TicketID, UserID, EventDateTime
FROM dcscontact.ticketevents
WHERE EventDateTime BETWEEN '2016-06-22' AND '2016-06-23'
ORDER BY EventDateTime DESC) x
GROUP BY TicketID) y
GROUP BY UserID;
I get incorrect counts:
UserID count
9 2
28 1
31 1
42 1
80 5
95 1
99 6
108 4
116 12
117 26
123 24
As you can see, the count for UserID 80 should have been 11. Most of the other results are also incorrect; they all seem to be lower than I expect.
Am I doing something wrong with the GROUP BY/COUNT when using it on a subquery? How can I change my query to get the results I want?
Do you just want an aggregation?
SELECT UserID, COUNT(*)
FROM dcscontact.ticketevents
WHERE EventDateTime BETWEEN '2016-06-22' AND '2016-06-23'
GROUP BY UserID;
If the same ticket can appear in the data more than one time for a given user, then COUNT(DISTINCT) is more appropriate:
SELECT UserID, COUNT(DISTINCT TicketID)
FROM dcscontact.ticketevents
WHERE EventDateTime BETWEEN '2016-06-22' AND '2016-06-23'
GROUP BY UserID;
To get the number of tickets touched per user, let's start with a proper query for just that:
SELECT count(*) as N, UserID
FROM dcscontact.ticketevents
WHERE EventDateTime BETWEEN '2016-06-22' AND '2016-06-23'
GROUP BY UserID;
A GROUP BY clause should always include all the non-aggregate columns mentioned in the SELECT clause. It doesn't make sense to ask for "the ticket ID and the number of tickets (per user)"!
Also, the SQL standard says ORDER BY cannot apply to subqueries. Best to think of ORDER BY as a convenience for viewing the output, not as information to be used in the query.
You also want to know something about the TicketID and EventDateTime. You can't ask for "the id of the count of the tickets", but you can get the first and last ticket. Same for time:
SELECT count(*) as N
, min(TicketID) as T1
, max(TicketID) as Tn
, min(EventDateTime) as E1
, max(EventDateTime) as En
, UserID
FROM dcscontact.ticketevents
WHERE EventDateTime BETWEEN '2016-06-22' AND '2016-06-23'
GROUP BY UserID;
Note that the earliest time may not be the time of the smallest TicketID. To get everything about the first ticket for each user, plus the count, join the two sources of information:
select N.N, T.*
from dcscontact.ticketevents as T
join (
SELECT count(*) as N, min(TicketID) as T1, UserID
FROM dcscontact.ticketevents
WHERE EventDateTime BETWEEN '2016-06-22' AND '2016-06-23'
GROUP BY UserID
) as N
on T.UserID = N.UserID
and T.TicketID = N.T1
-- and maybe others, according to the key
order by EventDateTime DESC
In MySQL, I have a table similar to:
id user_id date
1 1 2014-09-27
2 1 2014-11-05
3 1 2014-11-14
4 2 2014-12-03
5 1 2014-12-23
I would like to select the total monthly count of users.
Expected output: 4
2014-09 = 1 user
2014-10 = 0 users
2014-11 = 1 user // user 1 is present twice in November, but I want him only once per month
2014-12 = 2 users
Total expected = 4
So far, my query is:
SELECT count(id)
FROM myTable u1
WHERE EXISTS(
SELECT id
FROM myTable u2
WHERE u2.user_id = u1.user_id
AND DATE_SUB(u2.date, INTERVAL 1 MONTH) > u1.date
);
It outputs the correct amount, but on my (not so heavy) table it takes hours to execute. Any hints to make this one lighter or faster?
Bonus:
Since INTERVAL 1 MONTH is not available in DQL, is there any way to do it with a Doctrine QueryBuilder?
Try this!
It should give you exactly what you need...
SELECT
EXTRACT(YEAR FROM `date`) AS the_year,
EXTRACT(MONTH FROM `date`) AS the_month,
COUNT(DISTINCT user_id) AS total
FROM
myTable
GROUP BY
EXTRACT(YEAR FROM `date`),
EXTRACT(MONTH FROM `date`);
For your problem, what I would do is:
Create a subquery counting the distinct users per month.
Create a query summing the sub-results.
Here is a working example (with your data): sqlFiddle
And here is the query:
SELECT SUM(nb_people)
FROM (
-- This request return the number of distinct people in one month.
SELECT count(distinct(user_id)) AS nb_people, MONTH(`date`), YEAR(`date`)
FROM test
GROUP BY YEAR(`date`), MONTH(`date`)
) AS subQuery
;
SELECT COUNT(DISTINCT user_id), CONCAT(YEAR(date), '-', MONTH(date))
FROM MyTable
GROUP BY YEAR(date), MONTH(date)
I have a table (Oracle or MySQL) which stores the dates users log in.
How can I write SQL (or something else) to find the users who have logged in continuously for n days?
For example:
userID | logindate
1000 2014-01-10
1000 2014-01-11
1000 2014-02-01
1000 2014-02-02
1001 2014-02-01
1001 2014-02-02
1001 2014-02-03
1001 2014-02-04
1001 2014-02-05
1002 2014-02-01
1002 2014-02-03
1002 2014-02-05
.....
We can see that user 1000 has logged in continuously for two days in 2014, user 1001 has logged in continuously for 5 days, and user 1002 has never logged in on consecutive days.
The SQL should be extensible, which means I can pick any value of n, modify the query a little or pass a new parameter, and get the expected results.
Thank you!
As we don't know which DBMS you are using (you named both MySQL and Oracle), here are two solutions, both doing the same thing: order the rows and subtract row_number days from the login date (so if the 6th record is 2014-02-12 and the 7th is 2014-02-13, they both result in 2014-02-06). We then group by user and that group day and count the days, and finally group by user to find the longest series.
Here is a solution for a dbms with analytic window functions (e.g. Oracle):
select userid, max(days)
from
(
select userid, groupday, count(*) as days
from
(
select
userid, logindate - row_number() over (partition by userid order by logindate) as groupday
from mytable
)
group by userid, groupday
)
group by userid
--having max(days) >= 3
And here is a MySQL query (untested, because I don't have MySQL available):
select
userid, max(days)
from
(
select
userid, date_add(logindate, interval -row_number day) as groupday, count(*) as days
from
(
select
userid, logindate,
@row_num := @row_num + 1 as row_number
from mytable
cross join (select @row_num := 0) r
order by userid, logindate
) numbered
group by userid, groupday
) grouped
group by userid
-- having max(days) >= 3
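If you are on MySQL 8.0 or later, window functions are available and the session-variable trick is not needed; here is a sketch of the same idea (untested, reusing the mytable/logindate names from above):
select userid, max(days) as max_consecutive_days
from
(
-- count the length of each run of consecutive days
select userid, groupday, count(*) as days
from
(
-- consecutive days collapse to the same groupday value
select
userid,
date_sub(logindate, interval row_number() over (partition by userid order by logindate) day) as groupday
from mytable
) numbered
group by userid, groupday
) grouped
group by userid
-- having max(days) >= 3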
I think the following query will give you a very extensible parametrization:
select z.userid, count(*) continuous_login_days
from
(
with max_dates as
( -- Get max date for every user ID
select t.userid, max(t.logindate) max_date
from test t
group by t.userid
),
ranks as
( -- Get ranks for login dates per user
select t.*,
row_number() over
(partition by t.userid order by t.logindate desc) rnk
from test t
)
-- So here, we select continuous days by checking if rank inside group
-- (per user ID) matches login date compared to max date
select r.userid, r.logindate, r.rnk, m.max_date
from ranks r, max_dates m
where m.userid = r.userid
and r.logindate + r.rnk - 1 = m.max_date -- here is the key
) z
-- Then we only group by user ID to get the number of continuous days
group by z.userid
;
Here is the result:
USERID  CONTINUOUS_LOGIN_DAYS
1000    2
1001    5
1002    1
So you can just filter on the CONTINUOUS_LOGIN_DAYS field.
EDIT: If you want to consider all ranges (not only the last one), my query structure no longer works because it relied on the last range. But here is a workaround:
with w as
( -- Parameter
select 2 nb_cont_days from dual
)
select *
from
(
select t.*,
-- Get number of days around
(select count(*) from test t2
where t2.userid = t.userid
and t2.logindate between t.logindate - nb_cont_days + 1
and t.logindate) m1,
-- Get also number of days more in the past, and in the future
(select count(*) from test t2
where t2.userid = t.userid
and t2.logindate between t.logindate - nb_cont_days
and t.logindate + 1) m2,
w.nb_cont_days
from w, test t
) x
-- If these 2 fields match, then we have what we want
where x.m1 = x.nb_cont_days
and x.m2 = x.nb_cont_days
order by 1, 2
You just have to change the parameter in the WITH clause, so you can even create a function from this query to call it with this parameter.
SELECT userID, COUNT(userID) AS numOfDays
FROM LOGINTABLE
WHERE logindate BETWEEN '2014-01-01' AND '2014-02-28'
GROUP BY userID
In this case you can check the number of login days per user in a specific period.