How to improve the execution efficiency of this SQL query? - mysql

The SQL throws a timeout exception in the PRD environment.
SELECT
COUNT(*) totalCount,
SUM(IF(t.RESULT_FLAG = 'success', 1, 0)) successCount,
SUM(IF(b.ERROR_CODE = 'Y140', 1, 0)) unrecognizedCount,
SUM(IF(b.ERROR_CODE LIKE 'Y%' OR b.ERROR_CODE = 'E008', 1, 0)) connectCall,
SUM(IF(b.ERROR_CODE = 'N004', 1, 0)) hangupUnconnect,
SUM(IF(b.ERROR_CODE = 'Y001', 1, 0)) hangupConnect
FROM
lbl_his b LEFT JOIN lbl_error_code t ON b.TASK_ID = t.TASK_ID AND t.CODE = b.ERROR_CODE
WHERE
b.TASK_ID = "5f460e4ffa99f51697ad4ae3"
AND b.CREATE_TIME BETWEEN "2020-07-01 00:00:00" AND "2020-10-28 00:00:00"
The table lbl_his is very large: about 20,000,000 rows, occupying 20 GB of disk.
The table lbl_error_code is small: only 305 rows.
The indexes of table lbl_his:
TASK_ID
UPDATE_TIME
CREATE_TIME
RECORD_ID
The union indexes of table lbl_his:
TASK_ID, ERROR_CODE, UPDATE_TIME
TASK_ID, CREATE_TIME
There are no index created for table lbl_error_code.
I ran EXPLAIN SELECT and found that the query uses the index on lbl_his.TASK_ID and the primary key of lbl_error_code.
How can I avoid the execution timeout?

For an index solution on lbl_his, try putting a non-clustered index on:
firstly, the columns you filter on by exact match;
then, the columns you filter on as ranges (or inexact matches).
e.g., the initial part of the index should be TASK_ID then CREATE_TIME. Putting these first is very important as it means the engine can do one seek to get the data.
Then include any other fields in use (either as part of index, or includes - doesn't matter) - in this case, ERROR_CODE. This makes your index a covering index.
Therefore your final new non-clustered index on lbl_his should be (TASK_ID, CREATE_TIME, ERROR_CODE)
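A minimal DDL sketch of that suggestion, using the table and column names from the question (the index name is made up, and on a 20 GB table the ALTER itself can take a while to build):
-- Hypothetical index name; exact-match column first, range column second, then ERROR_CODE so the index is covering
ALTER TABLE lbl_his ADD INDEX idx_task_create_error (TASK_ID, CREATE_TIME, ERROR_CODE);
With this index the WHERE clause becomes a single range scan on (TASK_ID, CREATE_TIME), and ERROR_CODE is read straight from the index entries, so the 20 GB of base rows are never touched for lbl_his.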

Related

Mysql query performance multiple and or conditions

I have this query in mysql with very poor performance.
select `notifiables`.`notification_id`
from `notifiables`
where `notifiables`.`notification_type` in (2, 3, 4)
and ( ( `notifiables`.`notifiable_type` = 16
and `notifiables`.`notifiable_id` = 53642)
or ( `notifiables`.`notifiable_type` = 17
and `notifiables`.`notifiable_id` = 26358)
or ( `notifiables`.`notifiable_type` = 18
and `notifiables`.`notifiable_id` = 2654))
order by `notifiables`.`id` desc limit 20
Can this query be optimized in any way? Please help.
The table has 2M rows, and the search takes up to 1-4 seconds.
Updated indexes and Explain select
Possible solutions:
Turning OR into UNION (see #hongnhat) -- a hedged sketch follows this list
Row constructors (see #Akina)
Adding
AND notifiable_type IN (16, 17, 18)
Index hint. I dislike this because it often does more harm than good. However, the Optimizer is erroneously picking the PRIMARY KEY(id) (because of the ORDER BY) instead of some filter which, according to the Cardinality, should be very good.
INDEX(notification_type, notifiable_type, notifiable_id, id, notification_id) -- This is "covering", which can help because the index is probably 'smaller' than the dataset. When adding this index, DROP your current INDEX(notification_type) since it distracts the Optimizer.
VIEW is very unlikely to help.
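A hedged sketch of that OR-to-UNION rewrite with the values from the question; the three branches are mutually exclusive on notifiable_type, so UNION ALL does not create duplicates, and each branch can use an index that starts with (notifiable_type, notifiable_id):
SELECT notification_id
FROM (
SELECT id, notification_id FROM notifiables
WHERE notification_type IN (2, 3, 4) AND notifiable_type = 16 AND notifiable_id = 53642
UNION ALL
SELECT id, notification_id FROM notifiables
WHERE notification_type IN (2, 3, 4) AND notifiable_type = 17 AND notifiable_id = 26358
UNION ALL
SELECT id, notification_id FROM notifiables
WHERE notification_type IN (2, 3, 4) AND notifiable_type = 18 AND notifiable_id = 2654
) AS u
ORDER BY id DESC LIMIT 20;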
More
Give this a try: Add this to the beginning of the WHERE
WHERE notifiable_id IN ( 53642, 26358, 2654 )
AND ... (all of what you have now)
And be sure to have an INDEX starting with notifiable_id. (I don't see one currently.)
Use the following syntax:
SELECT notification_id
FROM notifiables
WHERE notification_type IN (2, 3, 4)
AND (notifiable_type, notifiable_id) IN ( (16, 53642), (17, 26358), (18, 2654) )
ORDER BY id DESC LIMIT 20
Create an index on (notification_type, notifiable_type, notifiable_id) or (notifiable_type, notifiable_id, notification_type) (depending on the selectivity of the separate conditions).
Or create a covering index: (notification_type, notifiable_type, notifiable_id, notification_id) or (notifiable_type, notifiable_id, notification_type, notification_id).
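Spelled out as DDL, a hedged sketch of those two alternatives (index names are made up; pick one based on selectivity rather than creating both):
-- non-covering variant
ALTER TABLE notifiables ADD INDEX idx_ntype_nbl (notification_type, notifiable_type, notifiable_id);
-- covering variant
ALTER TABLE notifiables ADD INDEX idx_nbl_cover (notifiable_type, notifiable_id, notification_type, notification_id);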
You can make different kinds of "VIEW" from the data you want and then join them.

MySQL View 20x slower than Select

I have a query that selects ~8000 rows. When I execute the query it takes 0.1 sec.
When I copy the query into a view and execute the view, it takes about 2 seconds. In the first row of EXPLAIN it selects ~570K rows, and I don't know why.
I don't understand that first row and why it shows up only in the view's EXPLAIN:
1 PRIMARY ALL NULL NULL NULL NULL
This is the query (yes, I know I'm not a MySQL pro and the query is not that efficient, but it works and 0.1 sec would be OK for me). Does anyone know why it is so slow in a view?
MariaDB 10.5.9
select
`xxxxxxx`.`auftraege`.`Zustandigkeit` AS `Zustandigkeit`,
`xxxxxxx`.`auftraege`.`cms` AS `cms`,
`xxxxxxx`.`auftraege`.`auftrag_id` AS `auftrag_id`,
`xxxxxxx`.`angebot`.`angebot_id` AS `angebot_id`,
`xxxxxxx`.`kunden`.`kunde_id` AS `kid`,
`xxxxxxx`.`angebot`.`kunde_id` AS `kunde_id`,
`xxxxxxx`.`kunden`.`firma` AS `firma`,
`xxxxxxx`.`auftraege`.`gekuendigt` AS `gekuendigt`,
`xxxxxxx`.`kunden`.`ansprechpartnerVorname` AS `ansprechpartnerVorname`,
`xxxxxxx`.`kunden`.`ansprechpartner` AS `ansprechpartner`,
`xxxxxxx`.`auftraege`.`ampstatus` AS `ampstatus`,
`xxxxxxx`.`auftraege`.`autoMahnungen` AS `autoMahnungen`,
`xxxxxxx`.`kunden`.`mail` AS `mail`,
`xxxxxxx`.`kunden`.`ansprechpartnerAnrede` AS `ansprechpartnerAnrede`,
case
`xxxxxxx`.`kunden`.`ansprechpartnerAnrede`
when
'm'
then
concat('Herr ', ifnull(`xxxxxxx`.`kunden`.`ansprechpartnerVorname`, ''), ifnull(`xxxxxxx`.`kunden`.`ansprechpartner`, ''))
else
concat('Frau ', ifnull(`xxxxxxx`.`kunden`.`ansprechpartnerVorname`, ''), ifnull(`xxxxxxx`.`kunden`.`ansprechpartner`, ''))
end
AS `ansprechpartnerfullName`,
`xxxxxxx`.`kunden`.`website` AS `website`,
`xxxxxxx`.`personal`.`name_betrieb` AS `name_betrieb`,
`xxxxxxx`.`kunden`.`prioritaet` AS `prioritaet`,
`xxxxxxx`.`auftraege`.`infoemail` AS `infoemail`,
`xxxxxxx`.`auftraege`.`keywords` AS `keywords`,
`xxxxxxx`.`auftraege`.`ftp_h` AS `ftp_h`,
`xxxxxxx`.`auftraege`.`ftp_u` AS `ftp_u`,
`xxxxxxx`.`auftraege`.`ftp_pw` AS `ftp_pw`,
`xxxxxxx`.`auftraege`.`lgi_h` AS `lgi_h`,
`xxxxxxx`.`auftraege`.`lgi_u` AS `lgi_u`,
`xxxxxxx`.`auftraege`.`lgi_pw` AS `lgi_pw`,
`xxxxxxx`.`auftraege`.`autoRemind` AS `autoRemind`,
`xxxxxxx`.`kunden`.`telefon` AS `telefon`,
`xxxxxxx`.`kunden`.`mobilfunk` AS `mobilfunk`,
`xxxxxxx`.`auftraege`.`kommentar` AS `kommentar`,
`xxxxxxx`.`auftraege`.`phase` AS `phase`,
`xxxxxxx`.`auftraege`.`datum` AS `datum`,
`xxxxxxx`.`angebot`.`typ` AS `typ`,
case
`xxxxxxx`.`auftraege`.`gekuendigt`
when
'1'
then
'Ja'
else
'Nein'
end
AS `Gekuendigt ? `,
(
select
count(`xxxxxxx`.`status`.`aenderung`)
from
`xxxxxxx`.`status`
where
`xxxxxxx`.`status`.`auftrag_id` = `xxxxxxx`.`auftraege`.`auftrag_id`
)
AS `aenderungen`,
`xxxxxxx`.`auftraege`.`vertragStart` AS `vertragStart`,
`xxxxxxx`.`auftraege`.`vertragEnde` AS `vertragEnde`,
case
`xxxxxxx`.`auftraege`.`zahlungsart`
when
'U'
then
'Überweisung'
when
'L'
then
'Lastschrift'
else
'Unbekannt'
end
AS `Zahlungsart`,
`xxxxxxx`.`kunden`.`yyyyy_piwik` AS `yyyyy_piwik`,
(
select
max(`xxxxxxx`.`status`.`datum`) AS `mxDTst`
from
`xxxxxxx`.`status`
where
`xxxxxxx`.`status`.`auftrag_id` = `xxxxxxx`.`auftraege`.`auftrag_id`
and `xxxxxxx`.`status`.`typ` = 'SEO'
)
AS `mxDTst`,
(
select
case
`xxxxxxx`.`rechnungen`.`beglichen`
when
'YES'
then
'isOk'
else
'isAffe'
end
AS `neuUwe`
from
(
`xxxxxxx`.`zahlungsplanneu`
join
`xxxxxxx`.`rechnungen`
on(`xxxxxxx`.`zahlungsplanneu`.`rechnungsnummer` = `xxxxxxx`.`rechnungen`.`rechnungsnummer`)
)
where
`xxxxxxx`.`zahlungsplanneu`.`auftrag_id` = `xxxxxxx`.`auftraege`.`auftrag_id`
and `xxxxxxx`.`rechnungen`.`beglichen` <> 'STO' limit 1
)
AS `neuer`,
(
select
group_concat(`xxxxxxx`.`kunden_keywords`.`keyword` separator ',')
from
`xxxxxxx`.`kunden_keywords`
where
`xxxxxxx`.`kunden_keywords`.`kunde_id` = `xxxxxxx`.`kunden`.`kunde_id`
)
AS `keyword`,
(
select
case
count(0)
when
0
then
'Cool'
else
'Uncool'
end
AS `AusfallVor`
from
`xxxxxxx`.`rechnungen`
where
`xxxxxxx`.`rechnungen`.`rechnung_tag` < current_timestamp() - interval 15 day
and `xxxxxxx`.`rechnungen`.`kunde_id` = `xxxxxxx`.`kunden`.`kunde_id`
and `xxxxxxx`.`rechnungen`.`beglichen` = 'NO' limit 1
)
AS `Liquidiert`
from
(
((((`xxxxxxx`.`auftraege`
join
`xxxxxxx`.`angebot`
on(`xxxxxxx`.`auftraege`.`angebot_id` = `xxxxxxx`.`angebot`.`angebot_id`))
join
`xxxxxxx`.`kunden`
on(`xxxxxxx`.`angebot`.`kunde_id` = `xxxxxxx`.`kunden`.`kunde_id`))
left join
`xxxxxxx`.`kunden_keywords`
on(`xxxxxxx`.`angebot`.`kunde_id` = `xxxxxxx`.`kunden_keywords`.`kunde_id`))
join
`xxxxxxx`.`personal`
on(`xxxxxxx`.`kunden`.`bearbeiter` = `xxxxxxx`.`personal`.`personal_id`))
left join
`xxxxxxx`.`status`
on(`xxxxxxx`.`auftraege`.`auftrag_id` = `xxxxxxx`.`status`.`auftrag_id`)
)
group by
`xxxxxxx`.`auftraege`.`auftrag_id`
order by
NULL
UPDATE 1
1. The View Itself (Duration 1.83 sec)
1.1 Create the View: This is the view I created; it only contains the query from above.
1.2 Executing the View: It takes 1.83 sec to execute the view.
1.3 Analyze the View: This is the EXPLAIN of the view.
2. The view with added where clause (Duration 1.86 sec)
2.1 Analyze the View with added where clause: Rick wanted me to add a WHERE clause to the view, if I understood him correctly. This is the EXPLAIN of the view with the added WHERE clause; it takes 1.86 sec.
3. The Query, that is the source of the view (Duration: 0.1 sec)
3.1 Execute the query directly: This is the query that is the source of the view, executed directly against the server. It takes ~0.1 - 0.2 seconds.
3.2 Analyze the direct query: And this is the EXPLAIN of the pure query.
Why is the view so much slower, when it does nothing but encapsulate the query?
Update 2
These are the indexes I have set
ALTER TABLE angebot ADD INDEX angebot_idx_angebot_id (angebot_id);
ALTER TABLE auftraege ADD INDEX auftraege_idx_auftrag_id (auftrag_id);
ALTER TABLE kunden ADD INDEX kunden_idx_kunde_id (kunde_id);
ALTER TABLE kunden_keywords ADD INDEX kunden_keywords_idx_kunde_id (kunde_id);
ALTER TABLE personal ADD INDEX personal_idx_personal_id (personal_id);
ALTER TABLE rechnungen ADD INDEX rechnungen_idx_rechnungsnummer_beglichen (rechnungsnummer,beglichen);
ALTER TABLE rechnungen ADD INDEX rechnungen_idx_beglichen_kunde_id_rechnung (beglichen,kunde_id,rechnung_tag);
ALTER TABLE status ADD INDEX status_idx_auftrag_id (auftrag_id);
ALTER TABLE status ADD INDEX status_idx_typ_auftrag_id_datum (typ,auftrag_id,datum);
ALTER TABLE zahlungsplanneu ADD INDEX zahlungsplanneu_idx_auftrag_id (auftrag_id);
Be consistent between tables. kunde_id, for example, seems to be declared differently between tables. This may be preventing some obvious optimizations. (There are 6 JOINs that say `func` in EXPLAIN.)
Remove the extra parentheses in JOINs. They may be preventing what the Optimizer is happy to do -- rearrange the tables in a JOIN.
Turn the query inside out. By this, I mean to do the minimum amount of work to do the main JOIN. Collect mostly id(s). Then do the dependent subqueries in an outer select. Something like:
SELECT ... ( SELECT ... ), ...
FROM ( SELECT a1.id
FROM a AS a1
JOIN b ON ..
JOIN c ON .. )
JOIN a AS a2 ON a2.id = a1.id
JOIN d ON ...
The "inside-out" kludge may eliminate the need for the GROUP BY. (Your query is too complex for me to see for sure.) If so, then I call the problem "explode-implode" -- Your query first JOINs, producing a temp table with lots of rows ("explodes"). Then it does a GROUP BY ("implodes").
More
These indexes will probably help:
status: (auftrag_id, typ, datum, aenderung)
rechnungen: (beglichen, kunde_id, rechnung_tag)
rechnungen: (rechnungsnummer, beglichen)
zahlungsplanneu: (auftrag_id, rechnungsnummer)
kunden_keywords: (kunde_id, keyword) -- (unless `kunde_id` is the PK)
(I see from all 3 EXPLAINs that you probably have sufficient indexes on kunden_keywords and status. Show me what indexes you have, so I can see if the existing indexes are as good as my suggestions.) "Using index" == "covering index".
Near the end is this LEFT JOIN, but I did not spot any use for the table; perhaps it can be removed?
left join `kunden_keywords` on(`angebot`.`kunde_id` = `kunden_keywords`.`kunde_id`))

Tips to optimize query, with many subqueries in MySQL

I have ~6 tables where I have to count or sum fields based on matching site_ids and dates. I have the following query, with many subqueries, which takes an extraordinary amount of time to run. I am certain there is an easier, more efficient way, but I am rather new to these more complex queries. I have read about optimizations, specifically using JOIN ... ON, but I am struggling to understand and implement them.
The goal is to speed this up and not bring my small server to its knees when it runs. Any assistance or direction would be VERY much appreciated!
SELECT date(date_added) as dt_date,
site_id as dt_site_id,
(SELECT site_id from branch_mappings bm WHERE mark_id_site = dt.site_id) as site_id,
(SELECT parent_id from branch_mappings bm WHERE mark_id_site = dt.site_id) as main_site_id,
(SELECT corp_owned from branch_mappings bm WHERE mark_id_site = dt.site_id) as corp_owned,
count(id) as dt_calls,
(SELECT count(date_submitted) FROM mark_unbounce ub WHERE date(date_submitted) = dt_date AND ub.site_id = dt.site_id) as ub,
(SELECT count(timestamp) FROM mark_wordpress_contact wp WHERE date(timestamp) = dt_date AND wp.site_id = dt.site_id) as wp,
(SELECT count(added_on) FROM m_shrednations sn WHERE date(added_on) = dt_date AND sn.description = dt.site_id) as sn,
(SELECT sum(users) FROM mark_ga ga WHERE date(ga.date) = dt_date AND channel LIKE 'Organic%' AND ga.site_id = dt.site_id) as ga_organic
FROM mark_dialogtech dt
WHERE site_id is not null
GROUP BY site_name, dt_date
ORDER BY site_name, dt_date;
What you're doing is the equivalent of asking your server to query 7+ different tables every time you run this query. Personally, I use joins and nested queries because I can whittle down to what I need.
The first 3 subqueries can be replaced with...
SELECT date(date_added) as dt_date,
dt.site_id as dt_site_id,
bm.site_id as site_id,
bm.parent_id as main_site_id,
bm.corp_owned as corp_owned
FROM mark_dialogtech dt
INNER JOIN branch_mappings bm
ON bm.mark_id_site = dt.site_id
I'm not sure why you are running the last 3. Is there a business requirement? If so, consider how often this is to be run and when.
If absolutely necessary, add those to the joins like...
FROM mark_dialogtech dt
INNER JOIN
(SELECT site_id, count(date_submitted) AS ub_count FROM mark_unbounce GROUP BY site_id) ub
on ub.site_id = dt.site_id
This should limit the results to only records where the site_id exists in both the mark_dialogtech and mark_unbounce (or whatever table). From my experience, this method has sped things up.
Still, my concern is the number of aggregations you're performing. If they can be cached to a dashboard and pulled during slow times, that would be best.
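Putting those pieces together, a hedged sketch of the whole rewrite (only the ub and wp sources are shown; sn and ga_organic would follow the same pattern, and the per-site/per-day grouping inside each derived table is an assumption about what the original subqueries intend):
SELECT DATE(dt.date_added) AS dt_date,
dt.site_id AS dt_site_id,
bm.site_id AS site_id,
bm.parent_id AS main_site_id,
bm.corp_owned AS corp_owned,
COUNT(dt.id) AS dt_calls,
MAX(ub.ub_count) AS ub, -- constant within each group; MAX() just satisfies ONLY_FULL_GROUP_BY
MAX(wp.wp_count) AS wp
FROM mark_dialogtech dt
INNER JOIN branch_mappings bm ON bm.mark_id_site = dt.site_id
LEFT JOIN (
SELECT site_id, DATE(date_submitted) AS d, COUNT(*) AS ub_count
FROM mark_unbounce GROUP BY site_id, d
) ub ON ub.site_id = dt.site_id AND ub.d = DATE(dt.date_added)
LEFT JOIN (
SELECT site_id, DATE(timestamp) AS d, COUNT(*) AS wp_count
FROM mark_wordpress_contact GROUP BY site_id, d
) wp ON wp.site_id = dt.site_id AND wp.d = DATE(dt.date_added)
WHERE dt.site_id IS NOT NULL
GROUP BY dt_date, dt.site_id, bm.site_id, bm.parent_id, bm.corp_owned;
Each aggregate is computed once per site and day in its derived table and joined in, instead of being re-run as a correlated subquery for every row.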
It's hard to analyze how big your query is (no data examples), but in your case I highly recommend using CTEs (Common Table Expressions). Check this:
https://www.sqlpedia.pl/cte-common-table-expressions/
CTEs do not have a physical representation in tempdb like temporary tables or table variables. A CTE can be viewed as a temporary, non-materialized view. When MSSQL executes a query and encounters a CTE, it replaces the reference to that CTE with its definition. Therefore, if the CTE data is used several times in a given query, the same code will be executed several times, and MSSQL does not optimize it. So... it will work well only for small amounts of data, as in your case.
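For what it's worth, MySQL 8.0 and MariaDB 10.2+ also support CTEs; a short hedged sketch of the derived-table idea written with WITH, using the question's table names and only one of the sources:
WITH ub AS (
SELECT site_id, DATE(date_submitted) AS d, COUNT(*) AS ub_count
FROM mark_unbounce
GROUP BY site_id, d
)
SELECT dt.site_id, DATE(dt.date_added) AS dt_date, ub.ub_count
FROM mark_dialogtech dt
LEFT JOIN ub ON ub.site_id = dt.site_id AND ub.d = DATE(dt.date_added)
WHERE dt.site_id IS NOT NULL;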
Appreciate all the responses.
I ended up creating a Python script to run the queries separately and insert the results into the table for the respective KPI, so I scrapped the idea of a single query due to performance. I concatenated each date and site_id to create the id, then leveraged ON DUPLICATE KEY UPDATE with each INSERT statement.
The Python dictionaries look like this, and I simply looped over them. Again, thanks for the help.
SELECT STATEMENTS (Python Dict)
"dt":"SELECT date(date_added) as dt_date, site_id as dt_site, count(site_id) as dt_count FROM mark_dialogtech WHERE site_id is not null GROUP BY dt_date, dt_site ORDER BY dt_date, dt_site;",
"ub":"SELECT date_submitted as ub_date, site_id as ub_site, count(site_id) as ub_count FROM mark_unbounce WHERE site_id is not null GROUP BY ub_date, ub_site;",
"wp":"SELECT date(timestamp) as wp_date, site_id as wp_site, count(site_id) as wp_count FROM mark_wordpress_contact WHERE site_id is not null GROUP BY wp_date, wp_site;",
"sn":"SELECT date(added_on) as sn_date, description as sn_site, count(description) as sn_count FROM m_shrednations WHERE description <> '' GROUP BY sn_date, sn_site;",
"ga":"SELECT date as ga_date, site_id as ga_site, sum(users) as ga_count FROM mark_ga WHERE users is not null GROUP BY ga_date, ga_site;"
INSERT STATEMENTS (Python Dict)
"dt":f"INSERT INTO mark_helper_rollup (id, on_date, site_id, dt_calls, added_on) VALUES ('{dbdata[0]}','{dbdata[1]}',{dbdata[2]},{dbdata[3]},'{dbdata[4]}') ON DUPLICATE KEY UPDATE dt_Calls={dbdata[3]}, added_on='{dbdata[4]}';",
"ub":f"INSERT INTO mark_helper_rollup (id, on_date, site_id, ub, added_on) VALUES ('{dbdata[0]}','{dbdata[1]}',{dbdata[2]},{dbdata[3]},'{dbdata[4]}') ON DUPLICATE KEY UPDATE ub={dbdata[3]}, added_on='{dbdata[4]}';",
"wp":f"INSERT INTO mark_helper_rollup (id, on_date, site_id, wp, added_on) VALUES ('{dbdata[0]}','{dbdata[1]}',{dbdata[2]},{dbdata[3]},'{dbdata[4]}') ON DUPLICATE KEY UPDATE wp={dbdata[3]}, added_on='{dbdata[4]}';",
"sn":f"INSERT INTO mark_helper_rollup (id, on_date, site_id, sn, added_on) VALUES ('{dbdata[0]}','{dbdata[1]}',{dbdata[2]},{dbdata[3]},'{dbdata[4]}') ON DUPLICATE KEY UPDATE sn={dbdata[3]}, added_on='{dbdata[4]}';",
"ga":f"INSERT INTO mark_helper_rollup (id, on_date, site_id, ga_organic, added_on) VALUES ('{dbdata[0]}','{dbdata[1]}',{dbdata[2]},{dbdata[3]},'{dbdata[4]}') ON DUPLICATE KEY UPDATE ga_organic={dbdata[3]}, added_on='{dbdata[4]}';",
It would be very difficult to analyze the query without the data. Anyway,
try joining the tables and grouping; that should improve the performance.
here is a left join sample
SELECT column names
FROM table1
LEFT JOIN table2
ON table1.common_column = table2.common_column;
Check this for more detailed information: https://learnsql.com/blog/how-to-left-join-multiple-tables/

MySql - Abysmal Performance

I am trying to run a relatively simple query on a table that has half a million rows. It's just a small fragment I'm using to test the values I get back are correct. The problem is this query takes over 20 minutes to complete, which seems unusually slow even for 500,000 records.
DROP VIEW IF EXISTS view_temp_sortie_stats;
CREATE VIEW view_temp_sortie_stats AS
SELECT server_id, session_id, ucid, role, sortie_id,
(
SELECT COUNT(sortie_id)
FROM raw_gameevents_log
WHERE sortie_id = l.sortie_id AND server_id = l.server_id AND session_id = l.session_id AND target_player_ucid = l.ucid AND event = "HIT"
) AS HitsReceived
FROM raw_gameevents_log l
WHERE ucid IS NOT NULL AND sortie_id IS NOT NULL
GROUP BY server_id, session_id, ucid, role, sortie_id;
SELECT * FROM view_temp_sortie_stats;
Here is my table:
Next I tried to add indexes for server_id, session_id, sortie_id to see if that would help - applying them took more than 10 minutes and timed out, so I could not add them.
This seems abnormally slow; it shouldn't take this much time to add indexes or to run this query.
My innodb_buffer_pool_size is 5 GB, yet the mysqld process only consumes 300 MB of memory when these queries run.
I am running on Windows Server 2012 R2 Standard with 12 GB RAM and 2x Intel Haswell CPUs, so I should be seeing much better performance than this from MySQL.
There is no one else connected to this instance of MySql and no other operations are occurring.
EDIT - Here is the query explained
Does someone know what might be wrong?
EDIT2 - Partial Fix
After some googling I found out why the ADD INDEX was taking forever - the original query was still running in the background, for over 2 hours. Once I killed the query, the ADD INDEX took about 30 seconds.
Now when I run the above query it takes 27 seconds - which is a drastic improvement for sure, but that still seems pretty slow for 500,000 records. Here is the new query explain plan:
Your subquery is:
SELECT COUNT(sortie_id)
FROM raw_gameevents_log
WHERE sortie_id = l.sortie_id AND server_id = l.server_id
AND session_id = l.session_id AND target_player_ucid = l.ucid
AND event = "HIT"
and the main query is:
SELECT server_id, session_id, ucid, role, sortie_id, [...]
FROM raw_gameevents_log l
WHERE ucid IS NOT NULL AND sortie_id IS NOT NULL
GROUP BY server_id, session_id, ucid, role, sortie_id;
Let's start with the subquery. The COUNT can count anything, so we don't need to worry about the selected fields. The WHERE fields:
WHERE sortie_id = l.sortie_id AND server_id = l.server_id
AND session_id = l.session_id AND target_player_ucid = l.ucid
AND event = "HIT"
You create an index beginning with the constant fields, then the others:
CREATE INDEX subqindex ON raw_gameevents_log(
event,
sortie_id, server_id, session_id, target_player_ucid
)
Then the main query:
WHERE ucid IS NOT NULL AND sortie_id IS NOT NULL
GROUP BY server_id, session_id, ucid, role, sortie_id;
Here you need an index on
ucid, sortie_id, server_id, session_id, role
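Spelled out as DDL, in the same style as the subquery index above (the index name is made up):
CREATE INDEX mainqindex ON raw_gameevents_log(
ucid, sortie_id, server_id, session_id, role
);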
Finally, you might try getting rid of the correlated subquery (even if the optimizer probably already does a good job with it):
SELECT l.server_id, l.session_id, l.ucid, l.role, l.sortie_id,
COALESCE(MAX(h.hits), 0) AS hits -- hits is constant per group; MAX() satisfies ONLY_FULL_GROUP_BY
FROM raw_gameevents_log l
LEFT JOIN
(
SELECT sortie_id, server_id, session_id, target_player_ucid, COUNT(*) AS hits
FROM raw_gameevents_log
WHERE event = 'HIT'
GROUP BY sortie_id, server_id, session_id, target_player_ucid
) AS h
ON h.sortie_id = l.sortie_id AND h.server_id = l.server_id
AND h.session_id = l.session_id AND h.target_player_ucid = l.ucid
WHERE l.ucid IS NOT NULL AND l.sortie_id IS NOT NULL
GROUP BY l.server_id, l.session_id, l.ucid, l.role, l.sortie_id;

What index should be created in mysql when where condition is made up of 'AND' and 'OR' combination?

My MySQL query is like below; I need to create an index to speed up fetching the results.
SELECT * FROM tbl_name
WHERE seasonid = 1 AND status = 'N'
AND month = 10 AND (team_a = 'India' OR team_b = 'India');
Thanks in advance
(a = 1 OR a = 3) is turned into a IN (1,3), which is sometimes optimizable. However, you don't have that case. Therefore, the expressions on either side of OR are useless for indexing.
INDEX(seasonid,status,month)
with the 3 fields in any order is the 'best' index for that query.
See also my index cookbook.
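A hedged sketch of that index, plus the OR-to-UNION rewrite mentioned earlier in this thread in case the team filter turns out to be the selective part (index names are made up):
ALTER TABLE tbl_name ADD INDEX idx_season_status_month (seasonid, status, month);

-- Alternative shape: each branch can then use a longer index such as
-- (seasonid, status, month, team_a) and (seasonid, status, month, team_b);
-- plain UNION removes the duplicate rows where both teams are 'India'.
SELECT * FROM tbl_name
WHERE seasonid = 1 AND status = 'N' AND month = 10 AND team_a = 'India'
UNION
SELECT * FROM tbl_name
WHERE seasonid = 1 AND status = 'N' AND month = 10 AND team_b = 'India';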