I need some help to do this right in one query (if it's possible).
(This is a theoretical example; I assume the presence of events in event_name, like registration/action, etc.)
I have 3 columns:
-user_id
-event_timestamp
-event_name
From these 3 columns we need to create a new table with 4 new columns:
-user registration year and month
-number of new user registrations in that month
-number of users who returned in the second calendar month after registration
-return probability
The result should look like this:
2019-1 | 1 | 1 | 100%
2019-2 | 3 | 2 | 67%
2019-3 | 2 | 0 | 0%
What I've done so far:
I'm using this toy example of my possible main table:
CREATE TABLE `main` (
`event_timestamp` timestamp,
`user_id` int(10),
`event_name` char(12)
) DEFAULT CHARSET=utf8;
INSERT INTO `main` (`event_timestamp`, `user_id`, `event_name`) VALUES
('2019-01-23 20:02:21.550', '1', 'registration'),
('2019-01-24 20:03:21.550', '2', 'action'),
('2019-02-21 20:04:21.550', '3', 'registration'),
('2019-02-22 20:05:21.550', '4', 'registration'),
('2019-02-23 20:06:21.550', '5', 'registration'),
('2019-02-23 20:06:21.550', '1', 'action'),
('2019-02-24 20:07:21.550', '6', 'action'),
('2019-03-20 20:08:21.550', '3', 'action'),
('2019-03-21 20:09:21.550', '4', 'action'),
('2019-03-22 20:10:21.550', '9', 'action'),
('2019-03-23 20:11:21.550', '10', 'registration'),
('2019-03-22 20:10:21.550', '4', 'action'),
('2019-03-22 20:10:21.550', '5', 'action'),
('2019-03-24 20:11:21.550', '11', 'registration');
I'm trying some queries to create the 4 new columns.
This is for column #1: we select the month and year from the timestamp where the event is registration (as I guess), but I need to group it by month (like 2019-11, 2019-12):
SELECT DATE_FORMAT(event_timestamp, '%Y-%m') AS column_1 FROM main
WHERE event_name='registration';
For column #2 we need to count users with event_name 'registration' in each month, for every month, or we could try searching for the first-time activity by user_id, but I don't know how to do this.
Here are some thoughts about it...
SELECT COUNT(DISTINCT user_id) AS user_count
FROM main
GROUP BY MONTH(event_timestamp);
SELECT COUNT(DISTINCT user_id) AS user_count FROM main
WHERE event_name='registration';
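Maybe something like this could combine columns #1 and #2, grouping the registration filter by year and month, but I'm not sure it's the right direction:
-- registrations grouped per year-month (assuming event_name marks registrations)
SELECT DATE_FORMAT(event_timestamp, '%Y-%m') AS reg_month,
       COUNT(DISTINCT user_id) AS new_users
FROM main
WHERE event_name = 'registration'
GROUP BY DATE_FORMAT(event_timestamp, '%Y-%m');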
For column #3 we need to match each user_id that has a registration event with any event by that user in the following month, so we get the users who returned the next month.
Any idea how to create this query?
This is how to calculate column #4:
SELECT *,
ROUND ((column_3/column_2)*100) AS column_4
FROM main;
I hope you will find the following answer helpful.
The first column is the extraction of the year and month. The new_users column is the COUNT of the unique user ids where the event is 'registration'; DISTINCT is needed because a user can be duplicated by the JOIN as a result of taking multiple actions the following month. The returned_users column is the number of users who have an action in the month after their registration; it also needs DISTINCT since a user can have multiple actions during one month. The final column is the probability you asked for, computed from the two previous columns.
The JOIN clause is a self-join that brings in the users who had at least one action in the month following their registration.
SELECT CONCAT(YEAR(A.event_timestamp), '-', MONTH(A.event_timestamp)),
       COUNT(DISTINCT CASE WHEN A.event_name LIKE 'registration' THEN A.user_id END) AS new_users,
       COUNT(DISTINCT B.user_id) AS returned_users,
       CASE WHEN COUNT(DISTINCT CASE WHEN A.event_name LIKE 'registration' THEN A.user_id END) = 0
            THEN 0
            ELSE COUNT(DISTINCT B.user_id)
                 / COUNT(DISTINCT CASE WHEN A.event_name LIKE 'registration' THEN A.user_id END) * 100
       END AS My_Ratio
FROM main AS A
LEFT JOIN main AS B
       ON A.user_id = B.user_id
      AND MONTH(A.event_timestamp) + 1 = MONTH(B.event_timestamp)
      AND A.event_name = 'registration' AND B.event_name = 'action'
GROUP BY CONCAT(YEAR(A.event_timestamp), '-', MONTH(A.event_timestamp));
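One caveat: MONTH(A.event_timestamp) + 1 never matches a January return after a December registration. If that matters, the join condition could be sketched with PERIOD_DIFF instead (untested against your real data):
ON A.user_id = B.user_id
   -- PERIOD_DIFF compares YYYYMM periods, so 201912 -> 202001 still yields 1
   AND PERIOD_DIFF(EXTRACT(YEAR_MONTH FROM B.event_timestamp),
                   EXTRACT(YEAR_MONTH FROM A.event_timestamp)) = 1
   AND A.event_name = 'registration' AND B.event_name = 'action'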
What we will do is use window functions and aggregation: window functions to get the earliest registration date, then some conditional aggregation.
One challenge is the handling of calendar months. To handle this, we will truncate the dates to the beginning of the month to facilitate the date arithmetic:
select yyyymm_reg, count(*) as regs_in_month,
sum( month_2 > 0 ) as visits_2months,
avg( month_2 > 0 ) as return_rate_2months
from (select m.user_id, m.yyyymm_reg,
max( (timestampdiff(month, m.yyyymm_reg, m.yyyymm) = 1) ) as month_1,
max( (timestampdiff(month, m.yyyymm_reg, m.yyyymm) = 2) ) as month_2,
max( (timestampdiff(month, m.yyyymm_reg, m.yyyymm) = 3) ) as month_3
from (select m.*,
cast(concat(extract(year_month from event_timestamp), '01') as date) as yyyymm,
cast(concat(extract(year_month from min(case when event_name = 'registration' then event_timestamp end) over (partition by user_id)), '01') as date) as yyyymm_reg
from main m
) m
where m.yyyymm_reg is not null
group by m.user_id, m.yyyymm_reg
) u
group by u.yyyymm_reg;
Here is a db<>fiddle.
Here you go, done in T-SQL:
;WITH cte AS (
    SELECT a.* FROM (
        SELECT form, user_id, SUM(count_regs) AS count_regs, SUM(count_action) AS count_action
        FROM (
            SELECT FORMAT(event_timestamp, 'yyyy-MM') AS form, user_id, event_name,
                   CASE WHEN event_name = 'registration' THEN 1 ELSE 0 END AS count_regs,
                   CASE WHEN event_name = 'action' THEN 1 ELSE 0 END AS count_action
            FROM main
        ) a
        GROUP BY form, user_id
    ) a
)
SELECT final.form, final.count_regs, final.count_action,
       ((CAST(final.count_action AS float)
         / (CASE WHEN final.count_regs = '0' THEN '1' ELSE final.count_regs END)) * 100) AS probability
FROM (
    SELECT a.form, SUM(a.count_regs) AS count_regs,
           CASE WHEN SUM(b.count_action) IS NULL THEN '0' ELSE SUM(b.count_action) END AS count_action
    FROM cte a
    LEFT JOIN cte b
      ON a.user_id = b.user_id
     AND DATEADD(month, 1, CONVERT(date, a.form + '-01')) = CONVERT(date, b.form + '-01')
    GROUP BY a.form
) final
WHERE final.count_regs != '0' OR final.count_action != '0';
I am currently working on a project that has 2 very large SQL tables, Users and UserDocuments, with around a million and 2-3 million records respectively. I have a query that will return the count of all the documents that each individual user has uploaded, provided the document is not rejected.
A user can have multiple documents against his/her id.
My current query:
SELECT
u.user_id,
u.name,
u.date_registered,
u.phone_no,
t1.docs_count,
t1.last_uploaded_on
FROM
Users u
JOIN(
SELECT user_id,
MAX(updated_at) AS last_uploaded_on,
SUM(CASE WHEN STATUS != 2 THEN 1 ELSE 0 END) AS docs_count
FROM
UserDocuments
WHERE
user_id IN(
SELECT
user_id
FROM
Users
WHERE
region_id = 1 AND city_id = 8 AND user_type = 1 AND user_suspended = 0 AND is_enabled = 1 AND verification_status = -1
) AND document_id IN('1', '2', '3', '4', '10', '11')
GROUP BY
user_id
ORDER BY
user_id ASC
) t1
ON
u.user_id = t1.user_id
WHERE
docs_count < 6 AND region_id = 1 AND city_id = 8 AND user_type = 1 AND user_suspended = 0 AND is_enabled = 1 AND verification_status = -1
LIMIT 1000, 100
Currently the query is taking very long, around 20 seconds, to return data even with indexes. Can someone suggest some tweaks to the query above to gain some more performance out of it?
SELECT
u.user_id,
max( u.name ) name,
max( u.date_registered ) date_registered,
max( phone_no ) phone_no,
MAX(d.updated_at) last_uploaded_on,
SUM(CASE WHEN d.STATUS != 2
THEN 1 ELSE 0 END) docs_count
FROM
Users u
JOIN UserDocuments d
ON u.user_id = d.user_id
AND d.document_id IN ('1', '2', '3', '4', '10', '11')
WHERE
u.region_id = 1
AND u.city_id = 8
AND u.user_type = 1
AND u.user_suspended = 0
AND u.is_enabled = 1
AND u.verification_status = -1
GROUP BY
u.user_id
HAVING
SUM(CASE WHEN d.STATUS != 2
THEN 1 ELSE 0 END) < 6
ORDER BY
u.user_id ASC
LIMIT
1000, 100
Have indexes on your tables such as:
user ( region_id, city_id, user_type, user_suspended, is_enabled, verification_status )
UserDocuments ( user_id, document_id, status, updated_at )
You are adding extra querying from the Users table in both the inner and outer parts, which might be killing it. Having an index on your critical WHERE components for users will pre-filter that set out. Only from that filtered set will it join to the UserDocuments table, with the outer query getting the counts at the top level.
Since the user's name, registration date, and phone don't change per user, applying MAX() to each prevents the need to add those columns to the GROUP BY clause.
The index on the documents table covers only the columns needed to confirm the status, the document_id, and when it was last updated. This prevents the engine from having to go to the raw data pages, since it can get the qualifying details directly from the index, saving you time too.
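Written as DDL, the suggested indexes might look like this (the index names are placeholders of my own):
ALTER TABLE Users
    ADD INDEX idx_users_filter (region_id, city_id, user_type, user_suspended, is_enabled, verification_status);
ALTER TABLE UserDocuments
    ADD INDEX idx_docs_covering (user_id, document_id, status, updated_at);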
LIMIT without ORDER BY does not make sense.
An ORDER BY in a 'derived table' is ignored.
Will you really have thousands of result rows? (I see the "offset of 1000".)
Use JOIN instead of IN ( SELECT ... )
What indexes do you have? I suggest INDEX(region_id, city_id, user_id)
CASE WHEN d.STATUS != 2 THEN 1 ELSE 0 END can be shortened to d.status != 2 (see the sketch after these notes).
How many different values of status are there? If only two, then flip the test to d.status = 1.
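A minimal sketch of that shorthand, relying on MySQL evaluating a boolean expression as 1 or 0 (table and column names taken from the question):
SELECT u.user_id,
       MAX(d.updated_at)  AS last_uploaded_on,
       SUM(d.status != 2) AS docs_count      -- counts rows where status <> 2
FROM Users u
JOIN UserDocuments d ON u.user_id = d.user_id
GROUP BY u.user_id
HAVING SUM(d.status != 2) < 6;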
There are 2 tables ost_ticket and ost_ticket_action_history.
create table ost_ticket(
ticket_id int not null PRIMARY KEY,
created timestamp,
staff bool,
status varchar(50),
city_id int
);
create table ost_ticket_action_history(
ticket_id int not null,
action_id int not null PRIMARY KEY,
action_name varchar(50),
started timestamp,
FOREIGN KEY(ticket_id) REFERENCES ost_ticket(ticket_id)
);
In the ost_ticket_action_history table the data is:
INSERT INTO newdb.ost_ticket_action_history (ticket_id, action_id, action_name, started) VALUES (1, 1, 'Consultation', '2022-01-06 18:30:29');
INSERT INTO newdb.ost_ticket_action_history (ticket_id, action_id, action_name, started) VALUES (2, 2, 'Bank Application', '2022-02-06 18:30:45');
INSERT INTO newdb.ost_ticket_action_history (ticket_id, action_id, action_name, started) VALUES (3, 3, 'Consultation', '2022-05-06 18:42:48');
In the ost_ticket table the data is:
INSERT INTO newdb.ost_ticket (ticket_id, created, staff, status, city_id) VALUES (1, '2022-04-04 18:26:41', 1, 'open', 2);
INSERT INTO newdb.ost_ticket (ticket_id, created, staff, status, city_id) VALUES (2, '2022-05-05 18:30:48', 0, 'open', 3);
INSERT INTO newdb.ost_ticket (ticket_id, created, staff, status, city_id) VALUES (3, '2022-04-06 18:42:53', 1, 'open', 4);
My task is to get the conversion from the “Consultation” stage to the “Bank Application” stage, broken down by month (based on the start date of the “Bank Application” stage). Conversion is calculated according to the following formula: (number of applications with the “Bank Application” stage / number of applications with the “Consultation” stage) * 100%.
My query is like this:
select SUM(action_name='Bank Application')/SUM(action_name='Consultation') * 2 as 'Conversion'
from ost_ticket_action_history
JOIN ost_ticket ot on ot.ticket_id = ost_ticket_action_history.ticket_id
where status = 'open' and created > '2020-01-01 00:00:00'
group by action_name, started
having action_name = 'Bank Application';
As a result I get:
Another query:
SELECT
SUM(CASE
WHEN b.ticket_id IS NOT NULL THEN 1
ELSE 0
END) / COUNT(*) conversion,
YEAR(a.started) AS 'year',
MONTH(a.started) AS 'month'
FROM
ost_ticket_action_history a
LEFT JOIN
ost_ticket_action_history b ON a.ticket_id = b.ticket_id
AND b.action_name = 'Bank Application'
WHERE
a.action_name = 'Consultation'
AND a.status = 'open'
AND a.created > '2020-01-01 00:00:00'
GROUP BY YEAR(a.started) , MONTH(a.started)
I apologize if I didn't write very clearly. Please explain what to do.
As I explained in my comment, you exclude rows with your HAVING clause.
I will show you next how to debug this.
First, check what the raw result of the SELECT query is.
As you see, when you select the raw rows, what you actually get is only 1 row, with 'Bank Application', because the HAVING clause excludes all other rows:
SELECT
*
FROM
ost_ticket_action_history
JOIN
ost_ticket ot ON ot.ticket_id = ost_ticket_action_history.ticket_id
WHERE
status = 'open'
AND created > '2020-01-01 00:00:00'
GROUP BY
action_name, started
HAVING
action_name = 'Bank Application';
Output:
ticket_id | action_id | action_name      | started             | ticket_id | created             | staff | status | city_id
--------: | --------: | :--------------- | :------------------ | --------: | :------------------ | ----: | :----- | ------:
        2 |         2 | Bank Application | 2022-02-06 18:30:45 |         2 | 2022-05-05 18:30:48 |     0 | open   |       3
Second step: see what the result set is without calculating anything.
As you can see, you make a division by 0, which, as you learned in school, is forbidden; that is why your result set is NULL:
SELECT
SUM(action_name = 'Bank Application')
#/
,SUM(action_name = 'Consultation') * 2 AS 'Conversion'
FROM
ost_ticket_action_history
JOIN
ost_ticket ot ON ot.ticket_id = ost_ticket_action_history.ticket_id
WHERE
status = 'open'
AND created > '2020-01-01 00:00:00'
GROUP BY action_name , started
HAVING action_name = 'Bank Application';
SUM(action_name = 'Bank Application') | Conversion
------------------------------------: | ---------:
1 | 0
db<>fiddle here
Third, what you can do is exclude a division by 0; here I didn't remove all the other rows, as this is only for emphasis:
SELECT
SUM(action_name = 'Bank Application')
/
SUM(action_name = 'Consultation') * 2 AS 'Conversion'
FROM
ost_ticket_action_history
JOIN
ost_ticket ot ON ot.ticket_id = ost_ticket_action_history.ticket_id
WHERE
status = 'open'
AND created > '2020-01-01 00:00:00'
GROUP BY action_name , started
HAVING SUM(action_name = 'Consultation') > 0;
| Conversion |
| ---------: |
| 0.0000 |
| 0.0000 |
db<>fiddle here
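As an alternative sketch (untested), you could sidestep the zero divisor with NULLIF instead of filtering the rows out; a NULL conversion then simply means there were no consultations in that month. The month breakdown follows your own second query:
SELECT YEAR(h.started) AS yr, MONTH(h.started) AS mn,
       SUM(h.action_name = 'Bank Application')
         / NULLIF(SUM(h.action_name = 'Consultation'), 0) * 100 AS conversion
FROM ost_ticket_action_history h
JOIN ost_ticket ot ON ot.ticket_id = h.ticket_id
WHERE ot.status = 'open'
  AND ot.created > '2020-01-01 00:00:00'
GROUP BY YEAR(h.started), MONTH(h.started);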
Final words: if you get a strange result, simply go back, remove everything that doesn't matter, and try to get all the raw values, so that you can check your math.
SQL God...I need some help!
I have a data table that has a route_complete_percentage column and a created_at column.
I need two pieces of data:
the timestamp (within the created_at column) when the route_complete_percentage is at its minimum but not zero
the timestamp (within the created_at column) when the route_complete_percentage is at its maximum; it might be 100% or not, but when it's at its highest.
Here is the kicker: there might be multiple timestamps for the highest route completion value. For example:
Example Table
I have multiple rows where the route_complete_percentage is at its maximum, but I need the minimum timestamp value among them.
Here is the query so far... but the two timestamps come out the same.
SELECT
A.fc,
A.plan_id,
A.route_id,
mintime.first_scan AS First_Batch_Scan,
min(route_complete_percentage),
maxtime.last_scan AS Last_Batch_Scan,
max(route_complete_percentage)
FROM
(SELECT
fc,
plan_id,
route_id,
route_complete_percentage,
CONCAT(plan_id, '-', route_id) AS JOINKEY
FROM
houdini_ops.BATCHINATOR_SCAN_LOGS_V2
WHERE
fc <> ''
AND order_id <> 'Can\'t find order'
AND source = 'scan'
AND created_at > DATE_ADD(CURDATE(), INTERVAL - 3 DAY)) A
LEFT JOIN
(SELECT
l.fc,
l.route_id,
l.plan_id,
CONCAT(l.plan_id, '-', l.route_id) AS JOINKEY,
CASE
WHEN MIN(route_complete_percentage) THEN CONVERT_TZ(l.created_at, 'UTC', s.time_zone)
END AS first_scan
FROM
houdini_ops.BATCHINATOR_SCAN_LOGS_V2 l
JOIN houdini_ops.O_SERVICE_AREA_ATTRIBUTES s ON l.fc = s.default_station_code
WHERE
l.fc <> ''
AND l.order_id <> 'Can\'t find order'
AND l.source = 'scan'
AND l.created_at > DATE_ADD(CURDATE(), INTERVAL - 3 DAY)
GROUP BY fc , plan_id , route_id) mintime ON A.JOINKEY = mintime.JOINKEY
LEFT JOIN
(SELECT
l.fc,
l.route_id,
l.plan_id,
CONCAT(l.plan_id, '-', l.route_id) AS JOINKEY,
CASE
WHEN MAX(route_complete_percentage) THEN CONVERT_TZ(l.created_at, 'UTC', s.time_zone)
END AS last_scan
FROM
houdini_ops.BATCHINATOR_SCAN_LOGS_V2 l
JOIN houdini_ops.O_SERVICE_AREA_ATTRIBUTES s ON l.fc = s.default_station_code
WHERE
l.fc <> ''
AND l.order_id <> 'Can\'t find order'
AND l.source = 'scan'
AND l.created_at > DATE_ADD(CURDATE(), INTERVAL - 3 DAY)
GROUP BY fc , plan_id , route_id) maxtime ON mintime.JOINKEY = maxtime.JOINKEY
GROUP BY fc , plan_id , route_id
I don't want to meddle with the rest of your query. Here is something that will do what it sounds like you need, with sample data included. (I interpreted your blank values as NULLs based on your sample data.)
Basically, what you are looking for is the minimum created_at value inside each of the route_complete_percentage groups, so I treated route_complete_percentage as a group identifier. But you only care about two of the groups, so I identify those groups first in the CTE and use them to filter the aggregate query.
if object_id('tempdb.dbo.#Data') is not null drop table #Data
go
create table #Data (
route_complete_percentage int,
created_at datetime
)
insert into #Data (route_complete_percentage, created_at)
values
(0, '20170531 19:58'),
(1, null),
(2, null),
(3, null),
(4, null),
(5, null),
(6, null),
(7, null),
(80, null),
(90, null),
(100, '20170531 20:10'),
(100, '20170531 20:12'),
(100, '20170531 20:15')
;with cteMinMax(min_route_complete_percentage, max_route_complete_percentage) as (
select
min(route_complete_percentage),
max(route_complete_percentage)
from #Data D
-- This ensures the condition that you don't get the timestamp for 0
where D.route_complete_percentage > 0
)
select
route_complete_percentage,
min_created_at = min(created_at)
from #Data D
join cteMinMax MM on D.route_complete_percentage in (MM.min_route_complete_percentage, MM.max_route_complete_percentage)
group by route_complete_percentage
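Since your own query looks like MySQL (CONVERT_TZ, DATE_ADD), here is a rough, untested MySQL translation of the same idea; I'm using a plain table named Data in place of the temp table, so swap in your real table and any extra grouping columns (fc, plan_id, route_id) as needed:
SELECT d.route_complete_percentage,
       MIN(d.created_at) AS min_created_at
FROM Data d
JOIN (
    -- identify the two groups of interest: lowest non-zero and highest percentage
    SELECT MIN(route_complete_percentage) AS min_pct,
           MAX(route_complete_percentage) AS max_pct
    FROM Data
    WHERE route_complete_percentage > 0
) mm ON d.route_complete_percentage IN (mm.min_pct, mm.max_pct)
GROUP BY d.route_complete_percentage;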
I need the following code to have all three columns proj1, proj4 and proj5 together in one row each, according to the dates.
As you can see the dates are the same, but the values are showing up in different records.
The MySQL query is as follows:
select DISTINCT dates, proj1, proj4, proj5
from (SELECT DISTINCT tc.dates AS dates,
             IF(tc.project_id = 1, tc.minutes, '') AS 'proj1',
             IF(tc.project_id = 5, tc.minutes, '') AS 'proj5',
             IF(tc.project_id = 4, tc.minutes, '') AS 'proj4'
      FROM timecard AS tc
      where (tc.dates between '2013-04-01' AND '2013-04-05')) as X
I need all three proj1, proj4 and proj5 values to display in the same rows, so the query should return only 5 rows.
You can group by the dates and then use max() to show the values that are not empty:
select dates,
       max(if(project_id = 1, minutes, '')) as proj1,
       max(if(project_id = 4, minutes, '')) as proj4,
       max(if(project_id = 5, minutes, '')) as proj5
from timecard
where dates between '2013-04-01' AND '2013-04-05'
group by dates
Try this SQL:
select t1.dates,
       (case when t1.proj1 is not null then t1.proj1
             when t2.proj1 is not null then t2.proj1
             when t3.proj1 is not null then t3.proj1
        end) as "proj1",
       (case when t1.proj2 is not null then t1.proj2
             when t2.proj2 is not null then t2.proj2
             when t3.proj2 is not null then t3.proj2
        end) as "proj2",
       (case when t1.proj3 is not null then t1.proj3
             when t2.proj3 is not null then t2.proj3
             when t3.proj3 is not null then t3.proj3
        end) as "proj3"
from timecard t1, timecard t2, timecard t3
where t1.dates = t2.dates
  and t2.dates = t3.dates
group by t1.dates
Having some trouble figuring out the logic to this. See the two queries below:
Query 1:
SELECT cId, crId, COUNT(EventType)
FROM Data
WHERE EventType='0' OR EventType='0p' OR EventType='n' OR EventType = 'np'
GROUP BY crId;
Query 2:
SELECT cId, crId, COUNT(EventType) AS Clicks
FROM Data
WHERE EventType='c'
GROUP BY crId;
Was just wondering if there was a way to make the column that I would get at the end of query 2 appear in query 1. Since the WHERE clauses are different, I'm not really sure where to go, and any subquery that I've written just hasn't worked.
Thanks in advance
SELECT cId, crId,
SUM(CASE WHEN EventType='0' OR EventType='0p' OR EventType='n' OR EventType = 'np' THEN 1 ELSE 0 END) AS Count_1,
SUM(CASE WHEN EventType='c' THEN 1 ELSE 0 END) AS Count_2
FROM Data
WHERE EventType IN ('0','0p','n','np','c')
GROUP BY crId;
You can join the two, using the second query as a derived-table subquery in the join.
SELECT
Data.cId,
Data.crId,
COUNT(EventType) AS event_type_count,
click_count.Clicks
FROM
Data
/* Derived-table subquery retrieves the Clicks (EventType 'c') per cId and crId */
LEFT JOIN (
SELECT cId, crId, COUNT(EventType) AS Clicks
FROM Data
WHERE EventType='c'
GROUP BY crId
) AS click_count ON Data.cId = click_count.cId AND Data.crId = click_count.crId
/* OR chain replaced with IN() clause */
WHERE Data.EventType IN ('0','0p','n','np')
/* This GROUP BY should probably also include Data.cId... */
GROUP BY Data.crId;
You can do this all by querying the table once and using CASE expressions:
SELECT cId, crId,
SUM(CASE WHEN EventType IN ('0', '0p', 'n', 'np') THEN 1 ELSE 0 END) as events,
SUM(CASE WHEN EventType = 'c' THEN 1 ELSE 0 END) as clicks
FROM Data
WHERE EventType IN ('0', '0p', 'n', 'np', 'c')
GROUP BY crId;
You want to use IN?
SELECT cId, crId, COUNT(EventType) as Clicks
FROM Data
WHERE EventType IN ('0','0p','n','np','c')
GROUP BY crId;
:) Putting myself in the right direction ;)
sqlfiddle demo
select id, crid,
count(case when type <> 'c'
then crid end) count_others,
count(case when type ='c'
then crid end) count_c
from tb
group by crid
;