Running WHILE or CURSOR or both in SQL Server 2008 - sql-server-2008

I am trying to run a loop of some sort in SQL Server 2008/T-SQL and I am unsure whether this should be a WHILE, a CURSOR, or both. The end result is that I am trying to loop through a list of user logins, determine the unique users, and then run a loop to determine how many visits it took for each user to be on the site for 5 minutes, broken out by channel.
Table: LoginHistory
UserID  Channel  DateTime         DurationInSeconds
1       Website  1/1/2013 1:13PM  170
2       Mobile   1/1/2013 2:10PM  60
3       Website  1/1/2013 3:10PM  180
4       Website  1/1/2013 3:20PM  280
5       Website  1/1/2013 5:00PM  60
1       Website  1/1/2013 5:05PM  500
3       Website  1/1/2013 5:45PM  120
1       Mobile   1/1/2013 6:00PM  30
2       Mobile   1/1/2013 6:10PM  90
5       Mobile   1/1/2013 7:30PM  400
3       Website  1/1/2013 8:00PM  30
1       Mobile   1/1/2013 9:30PM  200
SQL Fiddle to this schema
I can select the unique users into a new table like so:
SELECT UserID
INTO #Users
FROM LoginHistory
GROUP BY UserID
Now, the functionality I'm trying to develop is to loop over these unique UserIDs, order the logins by DateTime, then count the number of logins needed to get to 300 seconds.
The result set I would hope to get to would look something like this:
UserID  TotalLogins  WebsiteLogins  MobileLogins  LoginsNeededTo5Min
1       4            2              2             2
2       2            2              0             0
3       3            3              0             3
4       1            1              0             0
5       2            1              1             2
If I were performing this in another language, I think it would look something like this (and apologies, because this is not complete, just where I think I am going):
for (i in #Users):
    TotalLogins = Count(*),
    WebsiteLogins = Count(*) WHERE Channel = 'Website',
    MobileLogins = Count(*) WHERE Channel = 'Mobile',
    for (j in LoginHistory):
        if Duration < 300:
            count(NumLogins) + 1
** Ok - I'm laughing at myself at the way I combined multiple different languages/syntaxes, but this is how I am thinking about solving this **
Thoughts on a good way to accomplish this? My preference is to use a loop so I can continue to write if/then logic into the code.
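A minimal sketch of how that pseudocode might translate into an actual T-SQL cursor loop (untested; variable names are placeholders and it only counts logins up to the 300-second mark):

DECLARE @UserID INT, @Duration INT, @NumLogins INT, @RunningSeconds INT

DECLARE UserCursor CURSOR FOR SELECT UserID FROM #Users
OPEN UserCursor
FETCH NEXT FROM UserCursor INTO @UserID
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @NumLogins = 0
    SET @RunningSeconds = 0

    -- Walk this user's logins in DateTime order until 300 seconds is reached
    DECLARE LoginCursor CURSOR FOR
        SELECT DurationInSeconds FROM LoginHistory
        WHERE UserID = @UserID ORDER BY [DateTime]
    OPEN LoginCursor
    FETCH NEXT FROM LoginCursor INTO @Duration
    WHILE @@FETCH_STATUS = 0 AND @RunningSeconds < 300
    BEGIN
        SET @NumLogins = @NumLogins + 1
        SET @RunningSeconds = @RunningSeconds + @Duration
        FETCH NEXT FROM LoginCursor INTO @Duration
    END
    CLOSE LoginCursor
    DEALLOCATE LoginCursor

    -- If @RunningSeconds >= 300 here, @NumLogins is LoginsNeededTo5Min for @UserID;
    -- otherwise the user never reached 5 minutes. Per-user if/then logic can go here.
    FETCH NEXT FROM UserCursor INTO @UserID
END
CLOSE UserCursor
DEALLOCATE UserCursor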

Ok, this is one of those times where a CURSOR would probably outperform a set-based solution. Sadly, I'm not very good with cursors, so I can give you a set-based solution for you to try:
;WITH CTE AS
(
SELECT *, ROW_NUMBER() OVER(PARTITION BY UserID ORDER BY [DateTime]) RN
FROM LoginHistory
), CTE2 AS
(
SELECT *, 1 RecursionLevel
FROM CTE
WHERE RN = 1
UNION ALL
SELECT B.UserID, B.Channel, B.[DateTime],
A.DurationInSeconds+B.DurationInSeconds,
B.RN, RecursionLevel+1
FROM CTE2 A
INNER JOIN CTE B
ON A.UserID = B.UserID AND A.RN = B.RN - 1
)
SELECT A.UserID,
COUNT(*) TotalLogins,
SUM(CASE WHEN Channel = 'Website' THEN 1 ELSE 0 END) WebsiteLogins,
SUM(CASE WHEN Channel = 'Mobile' THEN 1 ELSE 0 END) MobileLogins,
ISNULL(MIN(RecursionLevel),0) LoginsNeededTo5Min
FROM LoginHistory A
LEFT JOIN ( SELECT UserID, MIN(RecursionLevel) RecursionLevel
FROM CTE2
WHERE DurationInSeconds > 300
GROUP BY UserID) B
ON A.UserID = B.UserID
GROUP BY A.UserID
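One caveat: SQL Server limits recursive CTEs to 100 recursion levels by default, so if any single user could have more than 100 logins, the query above would need a MAXRECURSION hint appended after the final GROUP BY:

OPTION (MAXRECURSION 0) -- default cap is 100 recursion levels; 0 removes the limit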

A slightly different piece-wise approach. A minor difference is that the recursive portion terminates when it reaches 300 seconds for each user rather than summing all of the available logins.
An index on UserId/StartTime should improve performance on larger datasets.
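On a permanent table (the sample below loads the data into a table variable, which can't be indexed after the fact), that index might look something like this, with placeholder names:

CREATE INDEX IX_Logins_UserId_StartTime
    ON Logins ( UserId, StartTime )
    INCLUDE ( Channel, DurationInSeconds )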
declare #Logins as Table ( UserId Int, Channel VarChar(10), StartTime DateTime, DurationInSeconds Int )
insert into #Logins ( UserId, Channel, StartTime, DurationInSeconds ) values
( 1, 'Website', '1/1/2013 1:13PM', 170 ),
( 2, 'Mobile', '1/1/2013 2:10PM', 60 ),
( 3, 'Website', '1/1/2013 3:10PM', 180 ),
( 4, 'Website', '1/1/2013 3:20PM', 280 ),
( 5, 'Website', '1/1/2013 5:00PM', 60 ),
( 1, 'Website', '1/1/2013 5:05PM', 500 ),
( 3, 'Website', '1/1/2013 5:45PM', 120 ),
( 1, 'Mobile', '1/1/2013 6:00PM', 30 ),
( 2, 'Mobile', '1/1/2013 6:10PM', 90 ),
( 5, 'Mobile', '1/1/2013 7:30PM', 400 ),
( 3, 'Website', '1/1/2013 8:00PM', 30 ),
( 1, 'Mobile', '1/1/2013 9:30PM', 200 )
select * from #Logins
; with MostRecentLogins as (
-- Logins with flags for channel and sequenced by StartTime (ascending) for each UserId .
select UserId, Channel, StartTime, DurationInSeconds,
case when Channel = 'Website' then 1 else 0 end as WebsiteLogin,
case when Channel = 'Mobile' then 1 else 0 end as MobileLogin,
Row_Number() over ( partition by UserId order by StartTime ) as Seq
from #Logins ),
CumulativeDuration as (
-- Start with the first login for each UserId .
select UserId, Seq, DurationInSeconds as CumulativeDurationInSeconds
from MostRecentLogins
where Seq = 1
union all
-- Accumulate additional logins for each UserId until the running total exceeds 300 or they run out of logins.
select CD.UserId, MRL.Seq, CD.CumulativeDurationInSeconds + MRL.DurationInSeconds
from CumulativeDuration as CD inner join
MostRecentLogins as MRL on MRL.UserId = CD.UserId and MRL.Seq = CD.Seq + 1 and CD.CumulativeDurationInSeconds < 300 )
-- Display the summary.
select UserId, Sum( WebsiteLogin + MobileLogin ) as TotalLogins,
Sum( WebsiteLogin ) as WebsiteLogins, Sum( MobileLogin ) as MobileLogins,
( select Max( Seq ) from CumulativeDuration where UserId = LT3.UserId and CumulativeDurationInSeconds >= 300 ) as LoginsNeededTo5Min
from MostRecentLogins as LT3
group by UserId
order by UserId
Note that your sample results seem to have an error. UserId 3 reaches 300 seconds in two calls: 180 + 120. Your example shows three calls.

Related

How to parse <first_value> aggregate in a group by statement [SNOWFLAKE] SQL

How do you rewrite this code correctly in Snowflake?
select account_code, date,
sum(box_revenue_recognition_amount) as box_revenue_recognition_amount
, sum(case when box_flg = 1 then box_sku_quantity end) as box_sku_quantity
, sum(box_revenue_recognition_refund_amount) as box_revenue_recognition_refund_amount
, sum(box_discount_amount) as box_discount_amount
, sum(box_shipping_amount) as box_shipping_amount
, sum(box_cogs) as box_cogs
, max(invoice_number) as invoice_number
, max(order_number) as order_number
, min(box_refund_date) as box_refund_date
, first (case when order_season_rank = 1 then box_type end) as box_type
, first (case when order_season_rank = 1 then box_order_season end) as box_order_season
, first (case when order_season_rank = 1 then box_product_name end) as box_product_name
, first (case when order_season_rank = 1 then box_coupon_code end) as box_coupon_code
, first (case when order_season_rank = 1 then revenue_recognition_reason end) as revenue_recognition_reason
from dedupe_sub_user_day
group by account_code, date
I have tried to apply the window rule as explained in the first_value Snowflake documentation, to no avail, getting the SQL compilation error: ... is not a valid group by expression
select account_code, date,
first_value(case when order_season_rank = 1 then box_type end) over (order by box_type ) as box_type,
first_value(case when order_season_rank = 1 then box_order_season end) over (order by box_order_season ) as box_order_season,
first_value(case when order_season_rank = 1 then box_product_name end) over (order by box_product_name ) as box_product_name,
first_value(case when order_season_rank = 1 then box_coupon_code end) over (order by box_coupon_code ) as box_coupon_code,
first_value(case when order_season_rank = 1 then revenue_recognition_reason end) over (order by revenue_recognition_reason ) as revenue_recognition_reason
, sum(box_revenue_recognition_amount) as box_revenue_recognition_amount
, sum(case when box_flg = 1 then box_sku_quantity end) as box_sku_quantity
, sum(box_revenue_recognition_refund_amount) as box_revenue_recognition_refund_amount
, sum(box_discount_amount) as box_discount_amount
, sum(box_shipping_amount) as box_shipping_amount
, sum(box_cogs) as box_cogs
, max(invoice_number) as invoice_number
, max(order_number) as order_number
, min(box_refund_date) as box_refund_date
from dedupe_sub_user_day
group by 1,2
FIRST_VALUE is not an aggregate function but a window function, thus you get an error when you use it in relation to a GROUP BY. If you want to use it with a GROUP BY, put an ANY_VALUE around it.
Here is some data I will use below in a CTE:
with data(id, seq, val) as (
select * from values
(1, 1, 10),
(1, 2, 11),
(1, 3, 12),
(1, 4, 13),
(2, 1, 20),
(2, 2, 21),
(2, 3, 22)
)
So to show FIRST_VALUE is a window function, we can just use it:
select *
,first_value(val)over(partition by id order by seq) as first_val
from data
ID  SEQ  VAL  FIRST_VAL
1   1    10   10
1   2    11   10
1   3    12   10
1   4    13   10
2   1    20   20
2   2    21   20
2   3    22   20
So if we GROUP BY id, to avoid an error we have to wrap the FIRST_VALUE in an aggregate function. Given the values are all equal, ANY_VALUE is a good pick, and it needs to be in another layer of SQL:
select id
,count(*) as count
,any_value(first_val) as first_val
from (
select *
,first_value(val)over(partition by id order by seq) as first_val
from data
)
group by 1
order by 1;
ID |COUNT |FIRST_VAL
1 |4 |10
2 |3 |20
Now MAX can be fun to use in relation to ROW_NUMBER() to pick the best value:
select id
,count(*) as count
,max(first_val) as first_val
from (
select *
,row_number() over (partition by id order by seq) as rn
,iff(rn=1, val, null) as first_val
from data
)
group by 1
order by 1;
But this is almost more complex than the ANY_VALUE solution. I feel the performance would be better, but if they have the same magnitude of performance, I would always choose what is readable to you and your team over a small performance difference.
With the way you've written your case statement, it leads me to believe that there is only one row with order_season_rank = 1 when grouping by account_code and date.
If that is true, then you can use several of Snowflake's aggregate functions and you will get what you want. Rather than trying to get the first value, you could use min, max, any_value, mode (or really any aggregate function that will ignore nulls) to return the only non-null value in the aggregation.
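For example, applied to the original query, each first(...) can become a conditional MAX while keeping the plain GROUP BY (one column shown; the other first() columns follow the same pattern):

select account_code, date
, max(case when order_season_rank = 1 then box_type end) as box_type
-- ...the remaining sums and maxes stay as they were
from dedupe_sub_user_day
group by account_code, date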
first(): this link suggests FIRST is only supported by MS Access, however you've tagged the question with MySQL and Snowflake. Could you confirm which DBMSs you are using?
By moving the first_value() function outside the aggregation, it seems to work fine.

loop over a date list (or any list) and append queries in mysql or snowflake

I am new to the SQL language and recently started with Snowflake. I have a table that contains all checkin dates for all users of a business:
user_id | checkin_date
001 03-06-2018
001 07-07-2018
001 08-01-2018
002 03-19-2018
002 03-27-2018
002 07-11-2018
Now I want to do a query such that I can look back from a query_date to see how many times each user checked in between query_date - 7 and query_date, query_date - 90 and query_date, etc. The following Snowflake query does the job properly for query_date = '2018-08-01':
with user_checkin_history_sum as (
select
user_id,
sum(iff(datediff(DAY, uc.checkin_date, '2018-08-01') <= 7, 1, 0)) as visits_past_7_days,
sum(iff(datediff(DAY, uc.checkin_date, '2018-08-01') <= 90, 1, 0)) as visits_past_90_days
from user_checkin as uc
where uc.checkin_date < '2018-08-01'
group by user_id
order by user_id
)
This gives me result
user_id | visits_past_7_days | visits_past_90_days
001 0 2
002 0 1
My question is: if I have more than one day as the query_date, i.e., I have a list of dates, then for each query_date in the list I do the query as above and append all the results together. Basically, it is a loop plus a table append, but I cannot find an answer for how to do this in SQL. Essentially, what I want to do is like the following:
with user_checkin_history_sum as (
select
user_id,
sum(iff(datediff(DAY, uc.checkin_date, query_date) <= 7, 1, 0)) as visits_past_7_days,
sum(iff(datediff(DAY, uc.checkin_date, query_date) <= 90, 1, 0)) as visits_past_90_days,
from user_checkin as uc
where uc.checkin_date < query_date and
LOOP OVER
query_date in ('2018-08-01', '2018-06-01')
group by user_id
order by user_id
)
And hopefully it gives this result
user_id | query_date | visits_past_7_days | visits_past_90_days
001 '08-01-2018' 0 2
002 '08-01-2018' 0 1
001 '06-01-2018' 0 1
002 '06-01-2018' 0 2
You should be able to cross join a table containing all the dates you want to examine:
WITH dates AS (
SELECT '2018-06-01' AS query_date UNION ALL
SELECT '2018-08-01' UNION ALL
... -- maybe other dates as well
),
user_checkin_history_sum AS (
SELECT
uc.user_id,
d.query_date,
SUM(IFF(DATEDIFF(DAY, uc.checkin_date, d.query_date) <= 7, 1, 0)) AS visits_past_7_days,
SUM(IFF(DATEDIFF(DAY, uc.checkin_date, d.query_date) <= 90, 1, 0)) AS visits_past_90_days
FROM dates d
CROSS JOIN user_checkin AS uc
WHERE uc.checkin_date < d.query_date
GROUP BY d.query_date, uc.user_id
ORDER BY d.query_date, uc.user_id
)
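Side note: if the date list grows long, it can be generated rather than typed out. A sketch using Snowflake's GENERATOR (the start date and row count here are placeholders):

WITH dates AS (
  SELECT DATEADD(DAY, ROW_NUMBER() OVER (ORDER BY NULL) - 1, '2018-06-01'::DATE) AS query_date
  FROM TABLE(GENERATOR(ROWCOUNT => 62))  -- one row per day, 62 days
)
SELECT * FROM dates;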

sql server 2008 running totals between 2 dates

I need to get running totals between 2 dates in my SQL Server table and update the records simultaneously. My data is as below and ordered by date, voucher_no:
DATE        VOUCHER_NO   OPEN_BAL   DEBITS   CREDITS   CLOS_BAL
----------------------------------------------------------------
10/10/2017  1            100        10                 110
12/10/2017  2            110                 5         105
13/10/2017  3            105        20                 125
Now if i insert a record with voucher_no 4 on 12/10/2017 the output should be like
DATE        VOUCHER_NO   OPEN_BAL   DEBITS   CREDITS   CLOS_BAL
----------------------------------------------------------------
10/10/2017  1            100        10                 110
12/10/2017  2            110                 5         105
12/10/2017  4            105        4                  109
13/10/2017  3            109        20                 129
I have seen several examples which find running totals up to a certain date, but not between 2 dates or from a particular date to the end of the file.
You should consider changing your database structure. I think it would be better to keep DATE, VOUCHER_NO, DEBITS, CREDITS in one table, and create a view to calculate the balances. In that case you will not have to update the table after each insert, and your table would look like:
create table myTable (
DATE date
, VOUCHER_NO int
, DEBITS int
, CREDITS int
)
insert into myTable values
('20171010', 1, 10, null),( '20171012', 2, null, 5)
, ('20171013', 3, 20, null), ('20171012', 4, 4, null)
And the view's query would be:
;with cte as (
select
DATE, VOUCHER_NO, DEBITS, CREDITS, bal = isnull(DEBITS, CREDITS) * case when DEBITS is null then -1 else 1 end
, rn = row_number() over (order by DATE, VOUCHER_NO)
from
myTable
)
select
a.DATE, a.VOUCHER_NO, a.DEBITS, a.CREDITS
, OPEN_BAL = sum(b.bal + case when b.rn = 1 then 100 else 0 end) - a.bal
, CLOS_BAL = sum(b.bal + case when b.rn = 1 then 100 else 0 end)
from
cte a
join cte b on a.rn >= b.rn
group by a.DATE, a.VOUCHER_NO, a.rn, a.bal, a.DEBITS, a.CREDITS
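Wrapped up as an actual view, that would be (the view name is just a placeholder):

create view myBalances as
with cte as (
    select
        DATE, VOUCHER_NO, DEBITS, CREDITS
        , bal = isnull(DEBITS, CREDITS) * case when DEBITS is null then -1 else 1 end
        , rn = row_number() over (order by DATE, VOUCHER_NO)
    from myTable
)
select
    a.DATE, a.VOUCHER_NO, a.DEBITS, a.CREDITS
    , OPEN_BAL = sum(b.bal + case when b.rn = 1 then 100 else 0 end) - a.bal
    , CLOS_BAL = sum(b.bal + case when b.rn = 1 then 100 else 0 end)
from cte a
join cte b on a.rn >= b.rn
group by a.DATE, a.VOUCHER_NO, a.rn, a.bal, a.DEBITS, a.CREDITS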
Here's another solution if you cannot change your db structure. In this case you must run an update statement each time after inserting. In both cases I assume that the initial balance is 100 when recalculating.
create table myTable (
DATE date
, VOUCHER_NO int
, OPEN_BAL int
, DEBITS int
, CREDITS int
, CLOS_BAL int
)
insert into myTable values
('20171010', 1, 100, 10, null, 110)
,( '20171012', 2, 110, null, 5, 105)
, ('20171013', 3, 105, 20, null, 125)
, ('20171012', 4, null, 4, null, null)
;with cte as (
select
DATE, VOUCHER_NO, DEBITS, CREDITS, bal = isnull(DEBITS, CREDITS) * case when DEBITS is null then -1 else 1 end
, rn = row_number() over (order by DATE, VOUCHER_NO)
from
myTable
)
, cte2 as (
select
a.DATE, a.VOUCHER_NO
, OPEN_BAL = sum(b.bal + case when b.rn = 1 then 100 else 0 end) - a.bal
, CLOS_BAL = sum(b.bal + case when b.rn = 1 then 100 else 0 end)
from
cte a
join cte b on a.rn >= b.rn
group by a.DATE, a.VOUCHER_NO, a.rn, a.bal
)
update a
set a.OPEN_BAL = b.OPEN_BAL, a.CLOS_BAL = b.CLOS_BAL
from
myTable a
join cte2 b on a.DATE = b.DATE and a.VOUCHER_NO = b.VOUCHER_NO
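As an aside, the triangular join (a.rn >= b.rn) is needed because SQL Server 2008 lacks ordered window aggregates; on SQL Server 2012 or later the same running balance can be computed directly, along these lines:

select DATE, VOUCHER_NO, DEBITS, CREDITS
    , CLOS_BAL = 100 + sum(isnull(DEBITS, 0) - isnull(CREDITS, 0))
                 over (order by DATE, VOUCHER_NO rows unbounded preceding)
from myTable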

SQL - Using sum but optionally using default value for row

Given tables
asset
col - id
date_sequence
col - date
daily_history
col - date
col - num_error_seconds
col - asset_id
historical_event
col - start_date
col - end_date
col - asset_id
I'm trying to count up all the daily num_error_seconds for all assets in a given time range, in order to display "Percentage NOT in error" by day. The catch is: if there is a historical_event involving an asset that has an end_date beyond the SQL query range, then daily_history should be ignored and a default value of 86400 seconds (one day of error seconds) should be used for that asset.
The query I have that does not use the historical_event is:
select ds.date,
IF(count(dh.date) = 0,
100,
100 - (100*sum(dh.num_error_seconds) / (86400 * count(*)))
) percent
from date_sequence ds
join asset a
left join daily_history dh on dh.date = ds.date and dh.asset_id = a.id
where ds.date >= in_start_time and ds.date <= in_end_time
group by ds.date;
To build on this is beyond my SQL knowledge. Because of the aggregate function, I cannot simply inject 86400 seconds for each asset that is associated with an event that has an end_date beyond the in_end_time.
Sample Data
Asset
1
2
Date Sequence
2013-09-01
2013-09-02
2013-09-03
2013-09-04
Daily History
2013-09-01, 1400, 1
2013-09-02, 1501, 1
2013-09-03, 1420, 1
2013-09-04, 0, 1
2013-09-01, 10000, 2
2013-09-02, 20000, 2
2013-09-03, 30000, 2
2013-09-04, 40000, 2
Historical Event
start_date, end_date, asset_id
2013-09-03 12:01:03, 2014-01-01 00:00:00, 1
What I would expect to see with this sample data is the % of time these assets are NOT in error:
2013-09-01 => 100 - (100*(1400 + 10000))/(86400*2)
2013-09-02 => 100 - (100*(1501 + 20000))/(86400*2)
2013-09-03 => 100 - (100*(1420 + 30000))/(86400*2)
2013-09-04 => 100 - (100*(0 + 40000))/(86400*2)
Except: there was a historical event which should take precedence. It happened on 9/3 and is open-ended (it has an end date in the future), so the calculations would change to:
2013-09-01 => 100 - (100*(1400 + 10000))/(86400*2)
2013-09-02 => 100 - (100*(1501 + 20000))/(86400*2)
2013-09-03 => 100 - (100*(86400 + 30000))/(86400*2)
2013-09-04 => 100 - (100*(86400 + 40000))/(86400*2)
Asset 1's num_error_seconds gets overwritten with a full day of error seconds if there is a historical event that has a start_date before in_end_time and an end_date after in_end_time.
Can this be accomplished in one query? Or do I need to stage data with an initial query?
I think you're after something like this:
Select
ds.date,
100 - 100 * Sum(
case
when he.asset_id is not null then 86400 -- have a historical_event
when dh.num_error_seconds is null then 0 -- no daily_history record
else dh.num_error_seconds
end
) / 86400 / count(a.id) as percent -- need to divide by number of assets
From
date_sequence ds
cross join
asset a
left outer join
daily_history dh
on a.id = dh.asset_id and
ds.date = dh.date
left outer join (
select distinct -- avoid counting multiple he records
asset_id
from
historical_event he
Where
he.end_date > in_end_time
) he
on a.id = he.asset_id
Where
ds.date >= in_start_time and
ds.date <= in_end_time -- I'd prefer < here
Group By
ds.date
Example Fiddle

count not null columns out of specific list of columns and then compare in where clause for each row

Table structure:
Table Structure http://imagebin.org/index.php?mode=image&id=238883
I want to fetch data from both the tables which satisfy some of the conditions like
WHERE batch='2009', sex='male', course='B.Tech', branch='cs', xth_percent>60,
x2percent>60, gradpercent>60 and (if ranktype='other' then
no._of_not_null_semester > number, else if ranktype='Leet' then
no._of_not_null_semester > number-2)
sem1-sem8 contain the percentages for 8 semesters, and I want to filter results for each student depending on whether they have cleared 3 semesters or 4 semesters, i.e., not-null values out of the 8 values.
no._of_not_null_semester
needs to be calculated; it is not a part of the database. I need help with that as well.
Required Query
SELECT * FROM student_test
INNER JOIN master_test ON student_test.id=master_test.id
WHERE sex='male' and batch='2009' and course='B.Tech'
and xthpercent>60 and x2percent>60 and
WHEN ranktype='Leet' THEN
SUM(CASE WHEN sem1 IS NOT NULL THEN 1 ELSE 0
WHEN sem2 IS NOT NULL THEN 1 ELSE 0
WHEN sem3 IS NOT NULL THEN 1 ELSE 0
WHEN sem4 IS NOT NULL THEN 1 ELSE 0
WHEN sem5 IS NOT NULL THEN 1 ELSE 0) >2
ELSE
SUM(CASE WHEN sem1 IS NOT NULL THEN 1 ELSE 0
WHEN sem2 IS NOT NULL THEN 1 ELSE 0
WHEN sem3 IS NOT NULL THEN 1 ELSE 0
WHEN sem4 IS NOT NULL THEN 1 ELSE 0
WHEN sem5 IS NOT NULL THEN 1 ELSE 0) >4
Without changing the structure you can't use COUNT to achieve this.
One way to solve the problem would be to create a semester table which would contain a row for each finished semester for each student. This would have a foreign key pointing to test_student.id and you could use COUNT(semester.id)
If that is an option for you, it would be the best solution.
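A sketch of that normalized design (table and column names are only suggestions):

CREATE TABLE semester (
    id INT PRIMARY KEY AUTO_INCREMENT,
    student_id INT NOT NULL,     -- references student_test.id
    semester_no INT NOT NULL,    -- 1 through 8
    percentage DECIMAL(5,2) NOT NULL
);

-- Number of cleared semesters per student
SELECT student_id, COUNT(id) AS cleared_semesters
FROM semester
GROUP BY student_id;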
EDIT:
Check this out; I didn't test the query, but it should work generally.
I decided to do the math in the select itself to prevent calculating the same thing twice.
The HAVING conditions are applied after your result is ready to deliver, just before a LIMIT.
In terms of speed optimization you could try moving the sSum block into the WHERE condition, just like you had it before. It probably doesn't make much of a difference.
SUM() does not work because it is an aggregate function which summarizes values in a column
I did some changes to your query in addition:
don't SELECT *; select specific fields and add a table identifier (in this case I used the aliases s for student_test and m for master_test)
you put s.batch = '2009' in single quotes; if the field batch is an integer field, you should use s.batch = 2009, which prevents MySQL from casting every single row to a string in order to compare it (int = int is much faster than cast(int as varchar) = varchar); the same goes for the other numeric values in your table
The Query:
SELECT
s.id,
s.sex,
s.course,
s.branch,
m.ranktype,
(
IF ( m.sem1 IS NOT NULL, 1, 0 ) +
IF ( m.sem2 IS NOT NULL, 1, 0 ) +
IF ( m.sem3 IS NOT NULL, 1, 0 ) +
IF ( m.sem4 IS NOT NULL, 1, 0 ) +
IF ( m.sem5 IS NOT NULL, 1, 0 ) +
IF ( m.sem6 IS NOT NULL, 1, 0 ) +
IF ( m.sem7 IS NOT NULL, 1, 0 ) +
IF ( m.sem8 IS NOT NULL, 1, 0 )
) AS sSum
FROM
student_test s
INNER JOIN master_test m ON m.id = s.id
WHERE
s.sex = 'male'
AND
s.batch = '2009' # I don't see this field in your database diagram!?
AND
s.course = 'B.Tech'
AND
m.xthpercent > 60
AND
m.x2percent > 60
HAVING
(
m.ranktype = 'OTHER'
AND
sSum > 4
)
OR
(
m.ranktype = 'LEET'
AND
sSum > 2
)
If you're generally interested in learning database design and usage, I found a very interesting opportunity for you.
Stanford University offers a free database class, "Introduction to Databases". It will cost you approx. 2 hours a week for 3 weeks, final exam included.
https://class2go.stanford.edu/db/Winter2013/preview/
SELECT
s.id,
s.sex,
s.course,
s.deptt,
m1.id,
m1.xthpercent,
m1.x2percent,
m1.sem1,
m1.sem2,
m1.sem3,
m1.ranktype,
m1.sem4,
m1.sem5,
m1.sem6,
m1.sem7,
m1.sem8,
m1.sSum
FROM
student_test s
INNER JOIN(SELECT m.id,
m.xthpercent,
m.x2percent,
m.sem1,
m.sem2,
m.sem3,
m.ranktype,
m.sem4,
m.sem5,
m.sem6,
m.sem7,
m.sem8,
( IF ( ceil(m.sem1)>0, 1, 0 ) +
IF ( ceil(m.sem2)>0, 1, 0 ) +
IF ( ceil(m.sem3)>0, 1, 0 ) +
IF ( ceil(m.sem4)>0, 1, 0 ) +
IF ( ceil(m.sem5)>0, 1, 0 ) +
IF ( ceil(m.sem6)>0, 1, 0 ) +
IF ( ceil(m.sem7)>0, 1, 0 ) +
IF ( ceil(m.sem8)>0, 1, 0 )
) AS sSum FROM master_test m
WHERE m.xthpercent>60 and
m.x2percent>60
HAVING (m.ranktype = 'Leet' AND sSum > 2)
OR
(m.ranktype != 'Leet' AND sSum > 4)
) as m1 ON m1.id = s.id
WHERE
s.sex='Male'
and
s.course='B.Tech'
and
s.deptt='ELE'
This is the query I'm finally using. Love that query :)