I have a query that is taking way too long to execute (4 seconds) even though all the fields I am querying against are indexed. Below are the query and the EXPLAIN results. Any ideas what the problem is? (MySQL CPU usage shoots up to 100% when executing the query.)
EXPLAIN SELECT count(hd.did) as NumPo, `hd`.`sid`, `src`.`Name`
FROM (`hd`)
JOIN `result` ON `result`.`did` = `hd`.`did`
JOIN `sf` ON `sf`.`fid` = `hd`.`fid`
JOIN `src` ON `src`.`sid` = `hd`.`sid`
WHERE `sf`.`tid` = 2
AND `result`.`set` = 'xxxxxxx'
GROUP BY `hd`.`sid`
ORDER BY `NumPo` DESC
LIMIT 10;
+----+-------------+--------------+--------+-------------------------+---------+---------+--------------------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------+--------+-------------------------+---------+---------+--------------------------+------+----------------------------------------------+
| 1 | SIMPLE | sf | ref | PRIMARY,type | type | 2 | const | 4 | Using index; Using temporary; Using filesort |
| 1 | SIMPLE | hd | ref | PRIMARY,sid,fid | FeedID | 4 | f2.sf.fid | 3 | |
| 1 | SIMPLE | result | ALL | resultset | NULL | NULL | NULL | 5322 | Using where; Using join buffer |
| 1 | SIMPLE | src | eq_ref | PRIMARY | PRIMARY | 4 | f2.hd.sid | 1 | |
+----+-------------+--------------+--------+-------------------------+---------+---------+--------------------------+------+----------------------------------------------+
It looks like it's not using an index on the biggest table. I'm having trouble guessing what this query is supposed to do, but it looks like you have an index on result.set, so I'd try adding one to result.did and see if it helps.
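A minimal sketch of adding that index (the index name is just illustrative):
ALTER TABLE result ADD INDEX idx_result_did (did);
A composite index covering both the WHERE condition and the join might also be worth trying, assuming `set` and `did` are ordinary columns on result:
ALTER TABLE result ADD INDEX idx_result_set_did (`set`, did);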
I am having an issue with a MySQL query.
SELECT Count(*) AS aggregate
FROM (SELECT Group_concat(gateways.public_name) AS client_gateways,
`clients`.`id`,
`clients`.`name`,
`clients`.`status`,
`clients`.`api_key`,
`clients`.`user_name`,
`clients`.`psp_id`,
`clients`.`suspend`,
`clients`.`secret_key`,
`clients`.`created_at`,
`companies`.`name` AS `company_name`,
`mid_groups_mid`.`mid_id`,
`mid_groups_mid`.`mid_group_id`,
`mid_groups`.`id` AS `group_id`,
`mid_groups`.`user_id`,
`mids`.`mid_group_id` AS `id_of_mid`
FROM `clients`
LEFT JOIN `client_site_gateways`
ON `clients`.`id` = `client_site_gateways`.`client_id`
LEFT JOIN `gateways`
ON `client_site_gateways`.`gateway_id` = `gateways`.`id`
LEFT JOIN `client_broker`
ON `client_broker`.`client_id` = `clients`.`id`
LEFT JOIN `mid_groups`
ON `mid_groups`.`user_id` = `clients`.`psp_id`
LEFT JOIN `mid_groups_mid`
ON `mid_groups_mid`.`mid_group_id` = `mid_groups`.`id`
LEFT JOIN `mids`
ON `mids`.`mid_group_id` = `mid_groups_mid`.`mid_group_id`
INNER JOIN `companies`
ON `companies`.`id` = `clients`.`company_id`
WHERE `is_corp` = 0
AND `clients`.`suspend` = '0'
AND ( `clients`.`company_id` = 1 )
AND `clients`.`deleted_at` IS NULL
GROUP BY `clients`.`id`,
`clients`.`name`,
`clients`.`status`,
`clients`.`api_key`,
`clients`.`suspend`,
`clients`.`secret_key`,
`clients`.`created_at`,
`companies`.`name`,
`clients`.`user_name`,
`clients`.`psp_id`,
`mid_groups_mid`.`mid_id`,
`mid_groups_mid`.`mid_group_id`,
`mid_groups`.`id`,
`mid_groups`.`user_id`,
`mids`.`mid_group_id`) count_row_table
All tables have a few hundred records. Here is the EXPLAIN result:
+------+-------------+----------------------+--------+-------------------------------------+-------------------------------------+---------+----------------------------------------------+------------+-------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------------------+--------+-------------------------------------+-------------------------------------+---------+----------------------------------------------+------------+-------------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 2849642280 | |
| 2 | DERIVED | companies | const | PRIMARY | PRIMARY | 4 | const | 1 | Using temporary; Using filesort |
| 2 | DERIVED | clients | ref | clients_company_id_foreign | clients_company_id_foreign | 4 | const | 543 | Using where |
| 2 | DERIVED | client_site_gateways | ref | client_id | client_id | 4 | knox_staging.clients.id | 5 | |
| 2 | DERIVED | gateways | eq_ref | PRIMARY | PRIMARY | 4 | knox_staging.client_site_gateways.gateway_id | 1 | Using where |
| 2 | DERIVED | client_broker | ALL | NULL | NULL | NULL | NULL | 6 | Using where; Using join buffer (flat, BNL join) |
| 2 | DERIVED | mid_groups | ref | mid_groups_user_id_foreign | mid_groups_user_id_foreign | 4 | knox_staging.clients.psp_id | 1 | Using where; Using index |
| 2 | DERIVED | mid_groups_mid | ref | mid_groups_mid_mid_group_id_foreign | mid_groups_mid_mid_group_id_foreign | 8 | knox_staging.mid_groups.id | 433 | Using where |
| 2 | DERIVED | mids | ref | mids_mid_group_id_foreign | mids_mid_group_id_foreign | 9 | knox_staging.mid_groups_mid.mid_group_id | 404 | Using where; Using index |
+------+-------------+----------------------+--------+-------------------------------------+-------------------------------------+---------+----------------------------------------------+------------+-------------------------------------------------+
In the EXPLAIN results, what is causing the 2849642280 rows, while the tables have only a few hundred records each? All tables have proper indexing.
What I think is filling the storage is the temporary table holding those rows. I tried scaling the storage up to 60GB, while the database size is only a few MBs; all the storage filled up as soon as I ran the query. I am not sure what is causing the LEFT JOINs to produce 2849642280 rows.
The problem is probably the "aggregate." If the only thing you need is the count of records, you should write a new query which gets that count.
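A minimal sketch of such a query, assuming you only need the number of matching clients (note: if the LEFT JOINed mid_groups/mid_groups_mid/mids tables fan out the groups, the count will differ from the original query, so verify against your data):
SELECT COUNT(DISTINCT clients.id) AS aggregate
FROM clients
INNER JOIN companies ON companies.id = clients.company_id
WHERE is_corp = 0
  AND clients.suspend = '0'
  AND clients.company_id = 1
  AND clients.deleted_at IS NULL;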
Query
SELECT SQL_NO_CACHE contacts.id,
contacts.date_modified contacts__date_modified
FROM contacts
INNER JOIN
(SELECT tst.team_set_id
FROM team_sets_teams tst
INNER JOIN team_memberships team_membershipscontacts ON (team_membershipscontacts.team_id = tst.team_id)
AND (team_membershipscontacts.user_id = '5daa2e92-c347-11e9-afc5-525400a80916')
AND (team_membershipscontacts.deleted = 0)
GROUP BY tst.team_set_id) contacts_tf ON contacts_tf.team_set_id = contacts.team_set_id
LEFT JOIN contacts_cstm contacts_cstm ON contacts_cstm.id_c = contacts.id
WHERE contacts.deleted = 0
ORDER BY contacts.date_modified DESC,
contacts.id DESC
LIMIT 21;
Takes extremely long (2 minutes on 2M records). I can't change this query, since it is system-generated.
This is its EXPLAIN:
+----+-------------+--------------------------+------------+--------+-------------------------------------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------------------+------------+--------+-------------------------------------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
| 1 | PRIMARY | contacts | NULL | ref | idx_contacts_tmst_id,idx_del_date_modified,idx_contacts_del_last,idx_cont_del_reports,idx_del_id_user | idx_del_date_modified | 2 | const | 1113718 | 100.00 | Using temporary; Using filesort |
| 1 | PRIMARY | <derived3> | NULL | ALL | NULL | NULL | NULL | NULL | 2 | 50.00 | Using where; Using join buffer (Block Nested Loop) |
| 1 | PRIMARY | contacts_cstm | NULL | eq_ref | PRIMARY | PRIMARY | 144 | sugarcrm.contacts.id | 1 | 100.00 | Using index |
| 3 | DERIVED | team_membershipscontacts | NULL | ref | idx_team_membership,idx_teammemb_team_user,idx_del_team_user | idx_team_membership | 145 | const | 2 | 99.36 | Using index condition; Using where; Using temporary; Using filesort |
| 3 | DERIVED | tst | NULL | ref | idx_ud_set_id,idx_ud_team_id,idx_ud_team_set_id,idx_ud_team_id_team_set_id | idx_ud_team_id_team_set_id | 144 | sugarcrm.team_membershipscontacts.team_id | 1 | 100.00 | Using index |
+----+-------------+--------------------------+------------+--------+-------------------------------------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
But when I use FORCE INDEX (idx_del_date_modified) (which is the same index chosen in the EXPLAIN above), the query takes just 0.01s and I get a slightly different EXPLAIN.
+----+-------------+--------------------------+------------+--------+----------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------------------+------------+--------+----------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
| 1 | PRIMARY | contacts | NULL | ref | idx_del_date_modified | idx_del_date_modified | 2 | const | 1113718 | 100.00 | Using where |
| 1 | PRIMARY | <derived2> | NULL | ALL | NULL | NULL | NULL | NULL | 2 | 50.00 | Using where |
| 1 | PRIMARY | contacts_cstm | NULL | eq_ref | PRIMARY | PRIMARY | 144 | sugarcrm.contacts.id | 1 | 100.00 | Using index |
| 2 | DERIVED | team_membershipscontacts | NULL | ref | idx_team_membership,idx_teammemb_team_user,idx_del_team_user | idx_team_membership | 145 | const | 2 | 99.36 | Using index condition; Using where; Using temporary; Using filesort |
| 2 | DERIVED | tst | NULL | ref | idx_ud_set_id,idx_ud_team_id,idx_ud_team_set_id,idx_ud_team_id_team_set_id | idx_ud_team_id_team_set_id | 144 | sugarcrm.team_membershipscontacts.team_id | 1 | 100.00 | Using index |
+----+-------------+--------------------------+------------+--------+----------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
The first query uses a temporary table and filesort, but the query with FORCE INDEX uses just "Using where". Shouldn't the plans be the same? Why is the query with FORCE INDEX so much faster when the index used is still the same?
According to the MySQL manual:
Temporary tables can be created under conditions such as these:
If there is an ORDER BY clause and a different GROUP BY clause, or if
the ORDER BY or GROUP BY contains columns from tables other than the
first table in the join queue, a temporary table is created.
DISTINCT combined with ORDER BY may require a temporary table.
If you use the SQL_SMALL_RESULT option, MySQL uses an in-memory
temporary table, unless the query also contains elements (described
later) that require on-disk storage.
You are likely seeing the difference in performance because of MySQL's query optimizer.
Even when an index exists, the optimizer may decide not to use it for a particular query.
With FORCE INDEX (...), you are forcing MySQL to use it instead.
Please consider a detailed example here.
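For reference, the hint attaches to the table reference in the FROM clause. A minimal sketch against the contacts table (simplified here, without the team-set subquery) looks like this:
SELECT SQL_NO_CACHE c.id, c.date_modified
FROM contacts AS c FORCE INDEX (idx_del_date_modified)
WHERE c.deleted = 0
ORDER BY c.date_modified DESC, c.id DESC
LIMIT 21;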
When I EXPLAIN this:
EXPLAIN SELECT m.*,m.id AS mid FROM movie_category mc
LEFT JOIN movie m ON m.id=mc.movie_id
RIGHT JOIN movie_area ma ON ma.movie_id=mc.movie_id
LEFT JOIN area a ON a.id=ma.area_id
LEFT JOIN category c ON c.id=mc.category_id
WHERE 1 and ma.area_id>0
GROUP BY mid
ORDER BY m.read_count desc LIMIT 0,36;
I got this result:
+----+-------------+-------+------------+--------+-----------------+----------+---------+----------------------+-------+----------+---------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+--------+-----------------+----------+---------+----------------------+-------+----------+---------------------------------+
| 1 | SIMPLE | ma | NULL | ALL | NULL | NULL | NULL | NULL | 15545 | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | mc | NULL | ref | movie_id | movie_id | 5 | flask.ma.movie_id | 2 | 100.00 | NULL |
| 1 | SIMPLE | m | NULL | eq_ref | PRIMARY,year_id | PRIMARY | 4 | flask.ma.movie_id | 1 | 100.00 | NULL |
| 1 | SIMPLE | a | NULL | eq_ref | PRIMARY | PRIMARY | 4 | flask.ma.area_id | 1 | 100.00 | Using index |
| 1 | SIMPLE | c | NULL | eq_ref | PRIMARY | PRIMARY | 4 | flask.mc.category_id | 1 | 100.00 | Using index |
+----+-------------+-------+------------+--------+-----------------+----------+---------+----------------------+-------+----------+---------------------------------+
5 rows in set, 1 warning (0.00 sec)
How can I optimise this query? I really don't know what to do, help me out.
edit:
From the EXPLAIN result, the first line's Extra is "Using temporary; Using filesort", which is not good. The second and third lines' Extra are NULL, which is also not good.
ps:
The query takes 0.91 seconds, which is very bad. How do I add an index to improve the query speed?
Improve schema
Without seeing the schemas involved, I will assume you have two many-to-many mapping tables (mc, ma) that are inefficiently defined.
Follow the rules here; you will gain some efficiency.
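For a many-to-many mapping table, the usual pattern is a composite primary key plus the reverse index, with no surrogate id. A sketch (the column names and types are assumptions; adapt them to your schema):
CREATE TABLE movie_area (
    movie_id INT UNSIGNED NOT NULL,
    area_id  INT UNSIGNED NOT NULL,
    PRIMARY KEY (movie_id, area_id),  -- lookups by movie
    KEY (area_id, movie_id)           -- lookups by area
) ENGINE=InnoDB;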
Get rid of unnecessary tables
category and movie_category are not really used in the query, so remove them from the query.
I assume this is a "generated" query? Then make the generation a little more sophisticated!
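A trimmed version of the query might then look like this (a sketch only; it assumes every ma.movie_id has a matching row in movie, so the RIGHT JOIN can become a plain JOIN):
SELECT m.*, m.id AS mid
FROM movie_area ma
JOIN movie m ON m.id = ma.movie_id
WHERE ma.area_id > 0
GROUP BY mid
ORDER BY m.read_count DESC
LIMIT 0, 36;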
MySQL version 5.7.17
When I use a JOIN with small tables, MySQL does not use the index on the main table.
Compare:
EXPLAIN SELECT `ko_base_client_orders`.*
FROM `ko_base_client_orders`
LEFT JOIN `ko_base_new_perspe` AS `ko_of`
ON (`ko_of`.`id` = `ko_base_client_orders`.`office`)
ORDER BY `order_time` DESC
LIMIT 10 OFFSET 0;
+----+-------------+-----------------------+------------+-------+---------------+---------+---------+------+-------+----------+-----------------------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-----------------------+------------+-------+---------------+---------+---------+------+-------+----------+-----------------------------------------------------------------+
| 1 | SIMPLE | ko_base_client_orders | NULL | ALL | NULL | NULL | NULL | NULL | 60737 | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | ko_of | NULL | index | PRIMARY | PRIMARY | 4 | NULL | 2 | 100.00 | Using where; Using index; Using join buffer (Block Nested Loop) |
+----+-------------+-----------------------+------------+-------+---------------+---------+---------+------+-------+----------+-----------------------------------------------------------------+
2 rows in set, 1 warning (0.00 sec)
and the same query with an index hint:
EXPLAIN SELECT `ko_base_client_orders`.*
FROM `ko_base_client_orders`
LEFT JOIN `ko_base_new_perspe` AS `ko_of`
FORCE INDEX FOR JOIN (`PRIMARY`)
ON (`ko_of`.`id` = `ko_base_client_orders`.`office`)
ORDER BY `order_time` DESC
LIMIT 10 OFFSET 0;
+----+-------------+-----------------------+------------+--------+---------------+------------+---------+---------------------------------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-----------------------+------------+--------+---------------+------------+---------+---------------------------------+------+----------+-------------+
| 1 | SIMPLE | ko_base_client_orders | NULL | index | NULL | order_time | 6 | NULL | 10 | 100.00 | NULL |
| 1 | SIMPLE | ko_of | NULL | eq_ref | PRIMARY | PRIMARY | 4 | e3.ko_base_client_orders.office | 1 | 100.00 | Using index |
+----+-------------+-----------------------+------------+--------+---------------+------------+---------+---------------------------------+------+----------+-------------+
In the second case MySQL uses the key on order_time, but in the first example it ignores the index and scans the entire table.
I cannot predict which tables will be small and which will not (it differs from project to project), so I don't want to use index hints everywhere. Is there another solution to this problem?
I've recently noticed that a query I have is running quite slowly, at almost 1 second per query.
The query looks like this
SELECT eventdate.id,
eventdate.eid,
eventdate.date,
eventdate.time,
eventdate.title,
eventdate.address,
eventdate.rank,
eventdate.city,
eventdate.state,
eventdate.name,
source.link,
type,
eventdate.img
FROM source
RIGHT OUTER JOIN
(
SELECT event.id,
event.date,
users.name,
users.rank,
users.eid,
event.address,
event.city,
event.state,
event.lat,
event.`long`,
GROUP_CONCAT(types.type SEPARATOR ' | ') AS type
FROM event FORCE INDEX (latlong_idx)
JOIN users ON event.uid = users.id
JOIN types ON users.tid=types.id
WHERE `long` BETWEEN -74.36829174058 AND -73.64365405942
AND lat BETWEEN 40.35195025942 AND 41.07658794058
AND event.date >= '2009-10-15'
GROUP BY event.id, event.date
ORDER BY event.date, users.rank DESC
LIMIT 0, 20
)eventdate
ON eventdate.uid = source.uid
AND eventdate.date = source.date;
and the explain is
+----+-------------+------------+--------+---------------+-------------+---------+------------------------------+-------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+--------+---------------+-------------+---------+------------------------------+-------+---------------------------------+
| 1 | PRIMARY | | ALL | NULL | NULL | NULL | NULL | 20 | |
| 1 | PRIMARY | source | ref | iddate_idx | iddate_idx | 7 | eventdate.id,eventdate.date | 156 | |
| 2 | DERIVED | event | ALL | latlong_idx | NULL | NULL | NULL | 19500 | Using temporary; Using filesort |
| 2 | DERIVED | types | ref | eid_idx | eid_idx | 4 | active.event.id | 10674 | Using index |
| 2 | DERIVED | users | eq_ref | id_idx | id_idx | 4 | active.types.id | 1 | Using where |
+----+-------------+------------+--------+---------------+-------------+---------+------------------------------+-------+---------------------------------+
I've tried using 'force index' on latlong, but that doesn't seem to speed things up at all.
Is it the derived table that is causing the slow responses? If so, is there a way to improve the performance of this?
--------EDIT-------------
I've attempted to improve the formatting to make it more readable as well.
I ran the same query, changing only the WHERE clause to:
WHERE users.id = (
SELECT users.id
FROM users
WHERE uidname = 'frankt1'
ORDER BY users.approved DESC , users.rank DESC
LIMIT 1 )
AND date >= '2009-10-15'
GROUP BY date
ORDER BY date)
That query runs in 0.006 seconds
the explain looks like
+----+-------------+------------+-------+---------------+---------------+---------+------------------------------+------+----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+-------+---------------+---------------+---------+------------------------------+------+----------------+
| 1 | PRIMARY | | ALL | NULL | NULL | NULL | NULL | 42 | |
| 1 | PRIMARY | source | ref | iddate_idx | iddate_idx | 7 | eventdate.id,eventdate.date | 156 | |
| 2 | DERIVED | users | const | id_idx | id_idx | 4 | | 1 | |
| 2 | DERIVED | event | range | eiddate_idx | eiddate_idx | 7 | NULL | 24 | Using where |
| 2 | DERIVED | types | ref | eid_idx | eid_idx | 4 | active.event.bid | 3 | Using index |
| 3 | SUBQUERY | users | ALL | idname_idx | idname_idx | 767 | | 5 | Using filesort |
+----+-------------+------------+-------+---------------+---------------+---------+------------------------------+------+----------------+
The only way to clean up that mammoth SQL statement is to go back to the drawing board and carefully work through your database design and requirements. As soon as you start joining 6 tables and using an inner select you should expect incredible execution times.
As a start, ensure that all your id fields are indexed, but better still, ensure that your design is valid. I don't know where to START looking at your SQL, even after I reformatted it for you.
Note that "using indexes" means you need to issue the correct instructions when you CREATE or ALTER the tables you are using. See for instance MySQL 5.0 create indexes.
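A sketch of how those instructions look (the table and column names here are illustrative, not your actual schema):
-- add an index to an existing table
ALTER TABLE event ADD INDEX uid_date_idx (uid, date);
-- or declare it when creating the table
CREATE TABLE source (
    uid  INT UNSIGNED NOT NULL,
    date DATE NOT NULL,
    link VARCHAR(255),
    KEY uid_date_idx (uid, date)
) ENGINE=InnoDB;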