I have about 4.4K records in the stockmain table, 4.4K records in the stockdetail table, and about 1.04K records in the item table. I have the following query:
SELECT
item.model,
stockdetail.docs,
item.category,
stockdetail.item_id,
stockdetail.chasis,
stockdetail.price,
stockdetail.tax,
stockdetail.recycle,
stockdetail.auction,
stockdetail.shaken,
stockdetail.transport,
stockdetail.fee,
stockdetail.netamount,
IFNULL(SUM(QTY),0) as QTY,
item.DESCRIPTION
FROM
stockmain
INNER JOIN stockdetail
ON stockmain.STID=stockdetail.STID
INNER JOIN item
ON stockdetail.ITEM_ID = item.ITEM_ID
WHERE
stockmain.vrdate
BETWEEN '{$startDate}' AND '{$endDate}'
AND stockmain.company_id={$company_id}
GROUP BY
item.item_id, chasis
HAVING IFNULL(SUM(QTY),0) > 0
ORDER BY item.description, item.model
It takes about 45-48 seconds on average to load the data. How can I optimize this query to perform faster?
P.S. I have tried adding indexes on stockmain.vrdate and stockmain.company_id, but that changed nothing.
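Roughly, what I added looked like this (the index names are just ones I picked):
CREATE INDEX idx_stockmain_vrdate ON stockmain (vrdate);          -- illustrative name
CREATE INDEX idx_stockmain_company_id ON stockmain (company_id);  -- illustrative name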
Below is the EXPLAIN for the above query:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | stockdetail | ALL | | | | | 4180 | Using temporary; Using filesort |
| 1 | SIMPLE | item | eq_ref | PRIMARY | PRIMARY | 4 | kashmir.stockdetail.item_id | 1 | |
| 1 | SIMPLE | stockmain | eq_ref | PRIMARY | PRIMARY | 4 | kashmir.stockdetail.stid | 1 | Using where |
Related
I am having an issue with a MySQL query.
SELECT Count(*) AS aggregate
FROM (SELECT Group_concat(gateways.public_name) AS client_gateways,
`clients`.`id`,
`clients`.`name`,
`clients`.`status`,
`clients`.`api_key`,
`clients`.`user_name`,
`clients`.`psp_id`,
`clients`.`suspend`,
`clients`.`secret_key`,
`clients`.`created_at`,
`companies`.`name` AS `company_name`,
`mid_groups_mid`.`mid_id`,
`mid_groups_mid`.`mid_group_id`,
`mid_groups`.`id` AS `group_id`,
`mid_groups`.`user_id`,
`mids`.`mid_group_id` AS `id_of_mid`
FROM `clients`
LEFT JOIN `client_site_gateways`
ON `clients`.`id` = `client_site_gateways`.`client_id`
LEFT JOIN `gateways`
ON `client_site_gateways`.`gateway_id` = `gateways`.`id`
LEFT JOIN `client_broker`
ON `client_broker`.`client_id` = `clients`.`id`
LEFT JOIN `mid_groups`
ON `mid_groups`.`user_id` = `clients`.`psp_id`
LEFT JOIN `mid_groups_mid`
ON `mid_groups_mid`.`mid_group_id` = `mid_groups`.`id`
LEFT JOIN `mids`
ON `mids`.`mid_group_id` = `mid_groups_mid`.`mid_group_id`
INNER JOIN `companies`
ON `companies`.`id` = `clients`.`company_id`
WHERE `is_corp` = 0
AND `clients`.`suspend` = '0'
AND ( `clients`.`company_id` = 1 )
AND `clients`.`deleted_at` IS NULL
GROUP BY `clients`.`id`,
`clients`.`name`,
`clients`.`status`,
`clients`.`api_key`,
`clients`.`suspend`,
`clients`.`secret_key`,
`clients`.`created_at`,
`companies`.`name`,
`clients`.`user_name`,
`clients`.`psp_id`,
`mid_groups_mid`.`mid_id`,
`mid_groups_mid`.`mid_group_id`,
`mid_groups`.`id`,
`mid_groups`.`user_id`,
`mids`.`mid_group_id`) count_row_table
All tables have a few hundred records. Here is the EXPLAIN result:
+------+-------------+----------------------+--------+-------------------------------------+-------------------------------------+---------+----------------------------------------------+------------+-------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------------------+--------+-------------------------------------+-------------------------------------+---------+----------------------------------------------+------------+-------------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 2849642280 | |
| 2 | DERIVED | companies | const | PRIMARY | PRIMARY | 4 | const | 1 | Using temporary; Using filesort |
| 2 | DERIVED | clients | ref | clients_company_id_foreign | clients_company_id_foreign | 4 | const | 543 | Using where |
| 2 | DERIVED | client_site_gateways | ref | client_id | client_id | 4 | knox_staging.clients.id | 5 | |
| 2 | DERIVED | gateways | eq_ref | PRIMARY | PRIMARY | 4 | knox_staging.client_site_gateways.gateway_id | 1 | Using where |
| 2 | DERIVED | client_broker | ALL | NULL | NULL | NULL | NULL | 6 | Using where; Using join buffer (flat, BNL join) |
| 2 | DERIVED | mid_groups | ref | mid_groups_user_id_foreign | mid_groups_user_id_foreign | 4 | knox_staging.clients.psp_id | 1 | Using where; Using index |
| 2 | DERIVED | mid_groups_mid | ref | mid_groups_mid_mid_group_id_foreign | mid_groups_mid_mid_group_id_foreign | 8 | knox_staging.mid_groups.id | 433 | Using where |
| 2 | DERIVED | mids | ref | mids_mid_group_id_foreign | mids_mid_group_id_foreign | 9 | knox_staging.mid_groups_mid.mid_group_id | 404 | Using where; Using index |
+------+-------------+----------------------+--------+-------------------------------------+-------------------------------------+---------+----------------------------------------------+------------+-------------------------------------------------+
In the EXPLAIN results, what is causing it to show 2849642280 rows, while the tables have only a few hundred records? All tables have proper indexing.
What I think is filling the storage is the temporary table holding those rows. I tried scaling the storage up to 60GB even though the database is only a few MBs in size; all of it filled up as soon as I ran the query above. I am not sure what is causing the LEFT JOINs to produce 2849642280 rows.
The problem is probably the "aggregate." If the only thing you need is the count of records, you should write a new query which gets that count.
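For example, a count-only version could look something like this (a sketch only; I have kept just the joins the WHERE clause seems to need, and I am assuming is_corp is a column of clients, so adjust if it belongs to another table):
SELECT COUNT(*) AS aggregate
FROM clients
INNER JOIN companies
        ON companies.id = clients.company_id
WHERE clients.is_corp = 0            -- assumption: is_corp lives on clients
  AND clients.suspend = '0'
  AND clients.company_id = 1
  AND clients.deleted_at IS NULL;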
Query
SELECT SQL_NO_CACHE contacts.id,
contacts.date_modified contacts__date_modified
FROM contacts
INNER JOIN
(SELECT tst.team_set_id
FROM team_sets_teams tst
INNER JOIN team_memberships team_membershipscontacts ON (team_membershipscontacts.team_id = tst.team_id)
AND (team_membershipscontacts.user_id = '5daa2e92-c347-11e9-afc5-525400a80916')
AND (team_membershipscontacts.deleted = 0)
GROUP BY tst.team_set_id) contacts_tf ON contacts_tf.team_set_id = contacts.team_set_id
LEFT JOIN contacts_cstm contacts_cstm ON contacts_cstm.id_c = contacts.id
WHERE contacts.deleted = 0
ORDER BY contacts.date_modified DESC,
contacts.id DESC
LIMIT 21;
This takes extremely long (2 minutes on 2M records). I can't change this query, since it is system-generated.
This is its EXPLAIN:
+----+-------------+--------------------------+------------+--------+-------------------------------------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------------------+------------+--------+-------------------------------------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
| 1 | PRIMARY | contacts | NULL | ref | idx_contacts_tmst_id,idx_del_date_modified,idx_contacts_del_last,idx_cont_del_reports,idx_del_id_user | idx_del_date_modified | 2 | const | 1113718 | 100.00 | Using temporary; Using filesort |
| 1 | PRIMARY | <derived3> | NULL | ALL | NULL | NULL | NULL | NULL | 2 | 50.00 | Using where; Using join buffer (Block Nested Loop) |
| 1 | PRIMARY | contacts_cstm | NULL | eq_ref | PRIMARY | PRIMARY | 144 | sugarcrm.contacts.id | 1 | 100.00 | Using index |
| 3 | DERIVED | team_membershipscontacts | NULL | ref | idx_team_membership,idx_teammemb_team_user,idx_del_team_user | idx_team_membership | 145 | const | 2 | 99.36 | Using index condition; Using where; Using temporary; Using filesort |
| 3 | DERIVED | tst | NULL | ref | idx_ud_set_id,idx_ud_team_id,idx_ud_team_set_id,idx_ud_team_id_team_set_id | idx_ud_team_id_team_set_id | 144 | sugarcrm.team_membershipscontacts.team_id | 1 | 100.00 | Using index |
+----+-------------+--------------------------+------------+--------+-------------------------------------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
But when I use force index(idx_del_date_modified) (which is the same index chosen in the EXPLAIN above), the query takes just 0.01s and I get a slightly different EXPLAIN:
+----+-------------+--------------------------+------------+--------+----------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------------------+------------+--------+----------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
| 1 | PRIMARY | contacts | NULL | ref | idx_del_date_modified | idx_del_date_modified | 2 | const | 1113718 | 100.00 | Using where |
| 1 | PRIMARY | <derived2> | NULL | ALL | NULL | NULL | NULL | NULL | 2 | 50.00 | Using where |
| 1 | PRIMARY | contacts_cstm | NULL | eq_ref | PRIMARY | PRIMARY | 144 | sugarcrm.contacts.id | 1 | 100.00 | Using index |
| 2 | DERIVED | team_membershipscontacts | NULL | ref | idx_team_membership,idx_teammemb_team_user,idx_del_team_user | idx_team_membership | 145 | const | 2 | 99.36 | Using index condition; Using where; Using temporary; Using filesort |
| 2 | DERIVED | tst | NULL | ref | idx_ud_set_id,idx_ud_team_id,idx_ud_team_set_id,idx_ud_team_id_team_set_id | idx_ud_team_id_team_set_id | 144 | sugarcrm.team_membershipscontacts.team_id | 1 | 100.00 | Using index |
+----+-------------+--------------------------+------------+--------+----------------------------------------------------------------------------+----------------------------+---------+-------------------------------------------+---------+----------+---------------------------------------------------------------------+
The first query uses a temporary table and filesort, but the query with force index uses just "Using where". Shouldn't the plans be the same? Why is the query with force index so much faster, when the index used is still the same?
According to the MySQL manual:
Temporary tables can be created under conditions such as these:
If there is an ORDER BY clause and a different GROUP BY clause, or if
the ORDER BY or GROUP BY contains columns from tables other than the
first table in the join queue, a temporary table is created.
DISTINCT combined with ORDER BY may require a temporary table.
If you use the SQL_SMALL_RESULT option, MySQL uses an in-memory
temporary table, unless the query also contains elements (described
later) that require on-disk storage.
You likely get better performance because of how the MySQL query optimizer works.
Even if you create an index, the optimizer may decide not to use it, even though the index exists.
With force index(..) you are forcing MySQL to use that index instead.
Please consider a detailed example here.
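As a sketch (table, columns, and index name taken from the query above), the hint goes right after the table reference:
SELECT SQL_NO_CACHE contacts.id, contacts.date_modified
FROM contacts FORCE INDEX (idx_del_date_modified)   -- force the index the optimizer already considered
WHERE contacts.deleted = 0
ORDER BY contacts.date_modified DESC, contacts.id DESC
LIMIT 21;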
When I EXPLAIN this:
EXPLAIN SELECT m.*,m.id AS mid FROM movie_category mc
LEFT JOIN movie m ON m.id=mc.movie_id
RIGHT JOIN movie_area ma ON ma.movie_id=mc.movie_id
LEFT JOIN area a ON a.id=ma.area_id
LEFT JOIN category c ON c.id=mc.category_id
WHERE 1 and ma.area_id>0
GROUP BY mid
ORDER BY m.read_count desc LIMIT 0,36;
I got this result:
+----+-------------+-------+------------+--------+-----------------+----------+---------+----------------------+-------+----------+---------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+--------+-----------------+----------+---------+----------------------+-------+----------+---------------------------------+
| 1 | SIMPLE | ma | NULL | ALL | NULL | NULL | NULL | NULL | 15545 | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | mc | NULL | ref | movie_id | movie_id | 5 | flask.ma.movie_id | 2 | 100.00 | NULL |
| 1 | SIMPLE | m | NULL | eq_ref | PRIMARY,year_id | PRIMARY | 4 | flask.ma.movie_id | 1 | 100.00 | NULL |
| 1 | SIMPLE | a | NULL | eq_ref | PRIMARY | PRIMARY | 4 | flask.ma.area_id | 1 | 100.00 | Using index |
| 1 | SIMPLE | c | NULL | eq_ref | PRIMARY | PRIMARY | 4 | flask.mc.category_id | 1 | 100.00 | Using index |
+----+-------------+-------+------------+--------+-----------------+----------+---------+----------------------+-------+----------+---------------------------------+
5 rows in set, 1 warning (0.00 sec)
How can I optimise this query? I really don't know what to do; please help me out.
edit:
From the EXPLAIN result, the first line's Extra is "Using temporary; Using filesort", which is not good. The second and third lines' Extra values are NULL, which is also not good.
P.S.:
The query takes 0.91 seconds, which is very bad. How can I add an index to improve the query speed?
Improve schema
Without seeing the schemas involved, I will assume you have two many-to-many mapping tables (mc, ma) that are inefficiently defined.
Follow the rules here; you will gain some efficiency.
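The usual pattern for an efficient mapping table is something like this (a sketch with assumed column types; adapt it to your actual schema):
CREATE TABLE movie_area (
    movie_id INT UNSIGNED NOT NULL,
    area_id  INT UNSIGNED NOT NULL,
    PRIMARY KEY (movie_id, area_id),   -- lookups by movie
    INDEX (area_id, movie_id)          -- lookups by area
) ENGINE=InnoDB;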
Get rid of unnecessary tables
category and movie_category are not really used in the query, so remove them from the query.
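A sketch of the pared-down query (assuming movie can be joined directly on ma.movie_id, which is what the EXPLAIN already shows; verify against your schema):
SELECT m.*, m.id AS mid
FROM movie_area ma
JOIN movie m ON m.id = ma.movie_id
LEFT JOIN area a ON a.id = ma.area_id
WHERE ma.area_id > 0
GROUP BY mid
ORDER BY m.read_count DESC
LIMIT 0, 36;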
I assume this is a "generated" query? Then make the generation a little more sophisticated!
I have been having issues with MySQL (version 5.5) LEFT JOIN performance on a number of queries. In all cases I have been able to work around the issue by restructuring the queries with unions and subselects (I saw some examples of this in the book High Performance MySQL). The problem is that this leads to very messy queries.
Below is an example of two queries that produce the exact same results. The first query is roughly two orders of magnitude slower than the second. The second query is much less readable than the first.
As far as I can tell, these sorts of queries are not performing poorly because of bad indexing. In all cases, when I restructure the query it runs just fine. I have also tried carefully looking at the indexes and using hints, to no avail.
Has anyone else run into similar issues with MySQL? Are there any server parameters I should try tweaking? Has anyone found a cleaner way to work around this sort of issue?
Query 1
select
i.id,
sum(vp.measurement * pol.quantity_ordered) measurement_on_order
from items i
left join (vendor_products vp, purchase_order_lines pol, purchase_orders po) on
vp.item_id = i.id and
pol.vendor_product_id = vp.id and
pol.purchase_order_id = po.id and
po.received_at is null and
po.closed_at is null
group by i.id
explain:
+----+-------------+-------+--------+-------------------------------+-------------------+---------+-------------------------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+-------------------------------+-------------------+---------+-------------------------------------+------+-------------+
| 1 | SIMPLE | i | index | NULL | PRIMARY | 4 | NULL | 241 | Using index |
| 1 | SIMPLE | po | ref | PRIMARY,received_at,closed_at | received_at | 9 | const | 2 | |
| 1 | SIMPLE | pol | ref | purchase_order_id | purchase_order_id | 4 | nutkernel_dev.po.id | 7 | |
| 1 | SIMPLE | vp | eq_ref | PRIMARY,item_id | PRIMARY | 4 | nutkernel_dev.pol.vendor_product_id | 1 | |
+----+-------------+-------+--------+-------------------------------+-------------------+---------+-------------------------------------+------+-------------+
Query 2
select
i.id,
sum(on_order.measurement_on_order) measurement_on_order
from (
(
select
i.id item_id,
sum(vp.measurement * pol.quantity_ordered) measurement_on_order
from purchase_orders po
join purchase_order_lines pol on pol.purchase_order_id = po.id
join vendor_products vp on pol.vendor_product_id = vp.id
join items i on vp.item_id = i.id
where
po.received_at is null and po.closed_at is null
group by i.id
)
union all
(select id, 0 from items)
) on_order
join items i on on_order.item_id = i.id
group by i.id
explain:
+------+--------------+------------+--------+-------------------------------+--------------------------------+---------+-------------------------------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+--------------+------------+--------+-------------------------------+--------------------------------+---------+-------------------------------------+------+----------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 3793 | Using temporary; Using filesort |
| 1 | PRIMARY | i | eq_ref | PRIMARY | PRIMARY | 4 | on_order.item_id | 1 | Using index |
| 2 | DERIVED | po | ALL | PRIMARY,received_at,closed_at | NULL | NULL | NULL | 20 | Using where; Using temporary; Using filesort |
| 2 | DERIVED | pol | ref | purchase_order_id | purchase_order_id | 4 | nutkernel_dev.po.id | 7 | |
| 2 | DERIVED | vp | eq_ref | PRIMARY,item_id | PRIMARY | 4 | nutkernel_dev.pol.vendor_product_id | 1 | |
| 2 | DERIVED | i | eq_ref | PRIMARY | PRIMARY | 4 | nutkernel_dev.vp.item_id | 1 | Using index |
| 3 | UNION | items | index | NULL | index_new_items_on_external_id | 257 | NULL | 3380 | Using index |
| NULL | UNION RESULT | <union2,3> | ALL | NULL | NULL | NULL | NULL | NULL | |
+------+--------------+------------+--------+-------------------------------+--------------------------------+---------+-------------------------------------+------+----------------------------------------------+
I have a query:
SELECT listings.*, listingagents.agentid
FROM listings
LEFT JOIN listingagents ON (listingagents.id = listings.listingagentid)
LEFT JOIN ignore ON (ignore.system_key = listings.listingid)
WHERE ignore.id IS NULL
ORDER BY listings.id ASC
I am trying to improve the performance of this query since it is very slow and it is putting a heavy load on the MySQL server.
When I run a MySQL EXPLAIN, the output shows:
+--------+-------------+---------------+--------+---------------+------------+---------+----------------------------+--------+-------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+--------+-------------+---------------+--------+---------------+------------+---------+----------------------------+--------+-------------------------+
| 1 | SIMPLE | listings | ALL | NULL | NULL | NULL | NULL | 383360 | Using filesort |
| 1 | SIMPLE | listingagents | eq_ref | PRIMARY | PRIMARY | 4 | db.listings.listingagen... | 1 | |
| 1 | SIMPLE | ignore | ref | system_key | system_key | 1 | const | 404 | Using where; Not exists |
+--------+-------------+---------------+--------+---------------+------------+---------+----------------------------+--------+-------------------------+
I tried to do a simple query:
SELECT listings.*
FROM listings
ORDER BY listings.id ASC
That query also has "Using filesort".
The fields "listings.id", "listingagents.id" and "ignore.id" are Primary Keys
The fields "listingagents.id" and "ignore.system_key" have indexes.
What can I do to improve the 1st query?
Try to decrease the listings range (currently 383360 rows) by adding some condition, e.g. id > x, or a LIMIT.
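For example (the cutoff and limit values below are placeholders; ignore is backticked because it is a reserved word):
SELECT listings.*, listingagents.agentid
FROM listings
LEFT JOIN listingagents ON (listingagents.id = listings.listingagentid)
LEFT JOIN `ignore` ON (`ignore`.system_key = listings.listingid)
WHERE `ignore`.id IS NULL
  AND listings.id > 100000   -- placeholder cutoff
ORDER BY listings.id ASC
LIMIT 1000;                  -- placeholder limit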