When I run the query below, the client reports [Showing rows 0 - 29 (13,436 total, Query took 0.1715 sec)], but it takes about 3 to 5 minutes for the results to actually display in Chrome. Table delivery_to_shop (a) has 13,418 rows, shopdelivery_to_client (b) has about 7,000 rows, and the other tables have fewer. I'm trying to optimise it without success, and I'm trying to find out what the problem is. The query is below:
SELECT DISTINCT
    a.`factory_deli_id`, a.`shop_id`, a.`entry_date`,
    a.`slip_no` AS FSNo,
    s.`shop_name` AS FShopName,
    i.`dress_type_entry` AS FInitItem,
    a.`item_qty` AS FQty,
    FORMAT(i.`price_rate_max`, 2) AS FItemRate,
    FORMAT(a.`item_qty` * i.`price_rate_max`, 2) AS FTot,
    b.`entry_date`, b.`shop_id`, b.`factory_item_id`, b.`slip_no`
FROM delivery_to_shop a
INNER JOIN init_item_entry i
ON a.factory_item_id = i.factory_item_id
INNER JOIN shop_name_entry s
ON a.shop_id = s.shop_id
LEFT JOIN shopdelivery_to_client b
ON a.`slip_no` = b.`slip_no`
AND a.`factory_item_id` = b.`factory_item_id`
AND a.`shop_id` = b.`shop_id`
Any help?
Add some indices for a start:
ALTER TABLE delivery_to_shop ADD INDEX index1 (factory_item_id, shop_id, slip_no)
ALTER TABLE init_item_entry ADD INDEX index2 (factory_item_id)
ALTER TABLE shop_name_entry ADD INDEX index3 (shop_id)
ALTER TABLE shopdelivery_to_client ADD INDEX index4 (factory_item_id, shop_id, slip_no)
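After adding them, it is worth confirming with EXPLAIN that each joined table actually uses an index; any table still showing type ALL is being scanned in full. A trimmed-down sketch of the check:
EXPLAIN
SELECT a.`factory_deli_id`, a.`shop_id`
FROM delivery_to_shop a
INNER JOIN init_item_entry i ON a.factory_item_id = i.factory_item_id
INNER JOIN shop_name_entry s ON a.shop_id = s.shop_id
LEFT JOIN shopdelivery_to_client b
    ON a.slip_no = b.slip_no
    AND a.factory_item_id = b.factory_item_id
    AND a.shop_id = b.shop_id;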
Need help with MySQL query.
I have indexed the mandatory columns but am still getting results in 160 seconds.
I know I have a problem with the CONCAT join condition; without it, results come back in 15 seconds.
Any kind of help is appreciated.
My Query is :
SELECT `order`.invoicenumber, `order`.lastupdated_by AS processed_by, `order`.lastupdated_date AS LastUpdated_date,
`trans`.transaction_id AS trans_id,
GROUP_CONCAT(`trans`.subscription_id) AS subscription_id,
GROUP_CONCAT(`trans`.price) AS trans_price,
GROUP_CONCAT(`trans`.quantity) AS prod_quantity,
`user`.id AS id, `user`.businessname AS businessname,
`user`.given_name AS given_name, `user`.surname AS surname
FROM cdp_order_transaction_master AS `order`
INNER JOIN `cdp_order_transaction_detail` AS trans ON `order`.transaction_id=trans.transaction_id
INNER JOIN cdp_user AS user ON (`order`.user_id=user.id OR CONCAT( user.id , '_CDP' ) = `order`.lastupdated_by)
WHERE `order`.xero_invoice_status='Completed' AND `order`.order_date > '2021-01-01'
GROUP BY `order`.transaction_id
ORDER BY `order`.lastupdated_date DESC
LIMIT 100
1. Index the columns used in the JOIN and WHERE clauses, so that MySQL does not scan the entire table and only reads the desired rows. A full table scan performs extremely badly.
Create indexes for the cdp_order_transaction_master table:
CREATE INDEX idx_cdp_order_transaction_master_transaction_id ON cdp_order_transaction_master(transaction_id);
CREATE INDEX idx_cdp_order_transaction_master_user_id ON cdp_order_transaction_master(user_id);
CREATE INDEX idx_cdp_order_transaction_master_lastupdated_by ON cdp_order_transaction_master(lastupdated_by);
CREATE INDEX idx_cdp_order_transaction_master_xero_invoice_status ON cdp_order_transaction_master(xero_invoice_status);
CREATE INDEX idx_cdp_order_transaction_master_order_date ON cdp_order_transaction_master(order_date);
Create an index for the cdp_order_transaction_detail table:
CREATE INDEX idx_cdp_order_transaction_detail_transaction_id ON cdp_order_transaction_detail(transaction_id);
Create an index for the cdp_user table:
CREATE INDEX idx_cdp_user_id ON cdp_user(id);
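Beyond the single-column indexes, the WHERE clause here is an equality on xero_invoice_status plus a range on order_date, which a single composite index can satisfy in one pass. A sketch (the index name is illustrative):
CREATE INDEX idx_cdp_order_transaction_master_status_date
ON cdp_order_transaction_master(xero_invoice_status, order_date);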
2. Use the owner/schema name.
If the owner (schema) name is not specified, the SQL Server engine searches all schemas to resolve the object.
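The MySQL equivalent is qualifying each table with its database name. A sketch, assuming the database is called cdp (adjust to your schema):
SELECT o.invoicenumber
FROM cdp.cdp_order_transaction_master AS o
WHERE o.xero_invoice_status = 'Completed';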
SELECT `f`.*
FROM `files_table` `f`
WHERE f.`application_id` IN(6)
AND `f`.`project_id` IN(130418)
AND `f`.`is_last_version` = 1
AND `f`.`temporary` = 0
AND f.deleted_by is null
ORDER BY `f`.`date` DESC
LIMIT 5
When I remove the ORDER BY, the query executes in 0.1 seconds. With the ORDER BY it takes 3 seconds.
There is an index on every WHERE column, and there is also an index on the ORDER BY field (date).
What can I do to make this query faster? Why is the ORDER BY slowing it down so much? The table has 3M rows.
Instead of an index on each column in the WHERE clause, make sure you have a composite index that covers all the columns in the WHERE clause, e.g.:
create index idx1 on files_table (application_id, project_id, is_last_version, temporary, deleted_by)
Avoid the IN clause for single values; use = for these:
SELECT `f`.*
FROM `files_table` `f`
WHERE f.`application_id` = 6
AND `f`.`project_id` = 130418
AND `f`.`is_last_version` = 1
AND `f`.`temporary` = 0
AND f.deleted_by is null
ORDER BY `f`.`date` DESC
LIMIT 5
Adding the date column (or other selected columns) to the index can let MySQL retrieve everything from the index and avoid accessing the table data. With SELECT * you probably need several columns, so the table data will be accessed anyway, but you can try it and evaluate the performance. Be careful to place the columns not involved in the WHERE clause to the right of all the columns that are involved in the WHERE clause:
create index idx1 on files_table (application_id, project_id, is_last_version, temporary, deleted_by, date)
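To verify that the extended index also removes the sort, check the query plan; 'Using filesort' in the Extra column means MySQL is still sorting the rows itself instead of reading them in index order:
EXPLAIN SELECT `f`.*
FROM `files_table` `f`
WHERE f.`application_id` = 6
  AND `f`.`project_id` = 130418
  AND `f`.`is_last_version` = 1
  AND `f`.`temporary` = 0
  AND f.deleted_by IS NULL
ORDER BY `f`.`date` DESC
LIMIT 5;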
Does anyone know how to optimize this query?
SELECT planbook.*,
COUNT(pb_unit_id) AS total_units,
COUNT(pb_lsn_id) AS total_lessons
FROM planbook
LEFT JOIN planbook_unit ON pb_unit_pb_id = pb_id
LEFT JOIN planbook_lesson ON pb_lsn_pb_id = pb_id
WHERE pb_site_id = 1
GROUP BY pb_id
The slow part is getting the total number of matching units and lessons. I have indexes on the following fields (and others):
planbook.pb_id
planbook_unit.pb_unit_pb_id
planbook_lesson.pb_lsn_pb_id
My only objective is to get the total number of matching units and lessons along with the details of each planbook row.
However, this query is taking around 35 seconds. I have 1625 records in planbook, 13,693 records in planbook_unit, and 122,950 records in planbook_lesson.
Any suggestions?
Edit: EXPLAIN results (screenshot omitted).
Joining both planbook_unit and planbook_lesson multiplies the rows before COUNT, which inflates both totals and the runtime; correlated subqueries count each table independently:
SELECT planbook.*,
( SELECT COUNT(*) FROM planbook_unit
WHERE pb_unit_pb_id = planbook.pb_id ) AS total_units,
( SELECT COUNT(*) FROM planbook_lesson
WHERE pb_lsn_pb_id = planbook.pb_id ) AS total_lessons
FROM planbook
WHERE pb_site_id = 1
planbook: INDEX(pb_site_id)
planbook_unit: INDEX(pb_unit_pb_id)
planbook_lesson: INDEX(pb_lsn_pb_id)
Looking at your query, you should add an index for
table planbook on column pb_site_id
and possibly a composite one for
table planbook on (pb_site_id, pb_id)
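A sketch of those two statements (index names are illustrative):
CREATE INDEX idx_pb_site ON planbook (pb_site_id);
CREATE INDEX idx_pb_site_pb_id ON planbook (pb_site_id, pb_id);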
Below is my table, 'datapoints'. I am trying to retrieve rows where there are different values of 'sensorValue' for the same 'timeOfReading' and 'sensorNumber'.
For example:
sensorNumber sensorValue timeOfReading
5 5 6
5 5 6
5 6 10 <----same time/sensor diff value!
5 7 10 <----same time/sensor diff value!
Should output: sensorNumber:5, timeOfReading: 10 as a result.
I understand this is a duplicate question; in fact, one of the links I used for reference is provided below. However, none of the solutions work for me, as my query simply never finishes.
Below is my SQL code:
SELECT table1.sensorNumber, table1.timeOfReading
FROM datapoints table1
WHERE (SELECT COUNT(*)
FROM datapoints table2
WHERE table1.sensorNumber = table2.sensorNumber
AND table1.timeOfReading = table1.timeOfReading
AND table1.sensorValue != table2.sensorValue) > 1
AND table1.timeOfReading < 20;
Notice I have placed a bound on timeOfReading as low as 20. I also tried setting a bound on both table1 and table2, but the query just runs until it times out without displaying results, no matter what I put...
The database contains about 700 MB of data, so I do not think I can run this on the entire DB in a reasonable amount of time; I am wondering if this is the culprit?
If so, how could I limit my query so the search runs efficiently? If not, what am I doing wrong?
Select rows having 2 columns equal value
EDIT:
Error Code: 2013. Lost connection to MySQL server during query (600.000 sec)
When I try to run the query again, I get this error unless I restart:
Error Code: 2006. MySQL server has gone away (0.000 sec)
You can use a self-JOIN to match related rows in the same table.
SELECT DISTINCT t1.sensorNumber, t1.timeOfReading
FROM datapoints AS t1
JOIN datapoints AS t2
ON t1.sensorNumber = t2.sensorNumber
AND t1.timeOfReading = t2.timeOfReading
AND t1.sensorValue != t2.sensorValue
WHERE t1.timeOfReading < 20
To improve performance, make sure you have a composite index on sensorNumber and timeOfReading:
CREATE INDEX ix_sn_tr on datapoints (sensorNumber, timeOfReading);
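If the table is large, extending that index with sensorValue makes it a covering index for the self-join, so MySQL can answer the query from the index alone. A sketch (the index name is illustrative):
CREATE INDEX ix_sn_tr_sv ON datapoints (sensorNumber, timeOfReading, sensorValue);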
I think you have missed a condition. Add a not-equal condition as well, so that only instances with different values are retrieved:
SELECT *
FROM new_table a
WHERE EXISTS (SELECT * FROM new_table b
WHERE a.num = b.num
AND a.timeRead = b.timeRead
AND a.value != b.value);
You can try this query:
select testTable.*
from testTable
inner join (
    SELECT sensorNumber, timeOfReading
    FROM testTable
    GROUP BY sensorNumber, timeOfReading
    HAVING COUNT(distinct sensorValue) > 1
) t ON t.sensorNumber = testTable.sensorNumber AND t.timeOfReading = testTable.timeOfReading;
This query will return the sensorNumber and the timeOfReading where there are different values of sensorValue:
select sensorNumber, timeOfReading
from tablename
group by sensorNumber, timeOfReading
having count(distinct sensorValue)>1
and this will return the actual records:
select t.*
from
tablename t inner join (
select sensorNumber, timeOfReading
from tablename
group by sensorNumber, timeOfReading
having count(distinct sensorValue)>1
) d on t.sensorNumber=d.sensorNumber and t.timeOfReading=d.timeOfReading
I would suggest adding an index on (sensorNumber, timeOfReading):
alter table tablename add index idx_sensor_time (sensorNumber, timeOfReading)
I am working with an existing site and I came across the following MySQL query that needs optimization:
select
mo.mmrrc_order_oid,
mo.completed_by_email,
mo.completed_by_name,
mo.completed_by_title,
mo.order_submission_oid,
mo.order_dt,
mo.center_id,
mo.po_num_tx,
mo.mod_dt,
ste_s.state_cd,
group_concat(distinct osr.status_cd order by osr.status_cd) as test,
case group_concat(distinct osr.status_cd order by osr.status_cd)
when 'Fulfilled' then 'Fulfilled'
when 'Fulfilled,N/A' then 'Fulfilled'
when 'N/A' then 'N/A'
when 'Pending' then 'Pending'
else 'In Process'
end as restriction_status,
max(osr.closed_dt) as restriction_update_dt,
ot.milestone,
ot.completed_dt as tracking_update_dt,
dc.first_name,
dc.last_name,
inst.institution_name,
order_search.products as products_ordered,
mo.other_emails,
mo.customer_label,
mo.grant_numbers
from
t_mmrrc_order mo
join ste_state ste_s using(state_id)
left join t_order_contact oc
on oc.mmrrc_order_oid=mo.mmrrc_order_oid and oc.role_cd='Recipient'
left join t_distrib_cont_instn dci using(distrib_cont_instn_oid)
left join t_institution inst using(institution_oid)
left join t_distribution_contact dc using(distribution_contact_oid)
left join t_order_tracking ot
on ot.mmrrc_order_oid=mo.mmrrc_order_oid
and ifnull(ot.order_tracking_oid, '0000-00-00')= ifnull(
(
select max(order_tracking_oid)
from t_order_tracking ot3
where
ot3.mmrrc_order_oid=mo.mmrrc_order_oid
and ot3.completed_dt= (
select max(completed_dt)
from t_order_tracking ot2
where ot2.mmrrc_order_oid=mo.mmrrc_order_oid
)
), '0000-00-00')
left join t_order_strain_restriction osr
on osr.mmrrc_order_oid = mo.mmrrc_order_oid
left join order_search on order_search.mmrrc_order_oid=mo.mmrrc_order_oid
group by
mo.mmrrc_order_oid
LIMIT 0, 5
This query takes 10+ seconds to run regardless of the limit. When run without a limit, there are 5,727 results in total and the runtime is 10.624 seconds.
With "LIMIT 0, 5" it took 18.47 seconds.
I understand that there are a bunch of joins and nested selects, which is why it is so slow. Any ideas on how to optimize this without changing the database structure?
MySQL version: 5.0.95
Most tables have over 10,000 records.
This simpler query takes about 9 seconds:
select
mo.mmrrc_order_oid,
mo.completed_by_email,
mo.completed_by_name,
mo.completed_by_title,
mo.order_submission_oid,
mo.order_dt,
mo.center_id,
mo.po_num_tx,
mo.mod_dt,
dc.first_name,
dc.last_name,
inst.institution_name,
order_search.products as products_ordered,
mo.other_emails,
mo.customer_label,
mo.grant_numbers
from
t_mmrrc_order mo
join ste_state ste_s using(state_id)
left join t_order_contact oc
on oc.mmrrc_order_oid=mo.mmrrc_order_oid and oc.role_cd='Recipient'
left join t_distrib_cont_instn dci using(distrib_cont_instn_oid)
left join t_institution inst using(institution_oid)
left join t_distribution_contact dc using(distribution_contact_oid)
left join t_order_strain_restriction osr
on osr.mmrrc_order_oid = mo.mmrrc_order_oid
left join order_search on order_search.mmrrc_order_oid=mo.mmrrc_order_oid
group by mo.mmrrc_order_oid
limit 0,5
I suppose the grouping slows it down the most; without the grouping, this takes only 0.17 seconds. Any help would be appreciated. Thanks.
Additional details: the EXPLAIN output for the first query was attached as an image (not reproduced here).
I found that order_search is a view, and it is causing most of the slowdown. Because the view contains a GROUP BY, MySQL cannot merge it into the outer query; it materializes the view as an unindexed temporary table each time it is referenced. The query for the view is:
SELECT
t_oi.mmrrc_order_oid AS mmrrc_order_oid,
group_concat(t_im.icc_item_code separator ',') AS products
FROM
t_order_item t_oi
JOIN t_item_master t_im on t_oi.item_master_oid = t_im.item_master_oid
JOIN t_strain_archive on t_im.strain_archive_oid = t_strain_archive.strain_archive_oid
WHERE t_oi.item_status_cd IN (_utf8'Active',_utf8'Modified')
GROUP BY t_oi.mmrrc_order_oid
ORDER BY t_im.icc_item_code
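Given that, one possible workaround is to materialize the view once into an indexed temporary table and join to that instead of the view; a sketch:
CREATE TEMPORARY TABLE tmp_order_search AS
    SELECT mmrrc_order_oid, products FROM order_search;
ALTER TABLE tmp_order_search ADD INDEX (mmrrc_order_oid);
-- then reference tmp_order_search in place of order_search in the big query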
Assuming you haven't indexed the columns, I have created some indexes for your columns below; this should help. There are still many more columns worth indexing, such as the ones in your join conditions; apply the same treatment to those as well for better execution.
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamemmrrc_order_oid` (`mmrrc_order_oid`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamecompleted_by_email` (`completed_by_email`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamecompleted_by_name` (`completed_by_name`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamecompleted_by_title` (`completed_by_title`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnameorder_submission_oid` (`order_submission_oid`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnameorder_dt` (`order_dt`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamecenter_id` (`center_id`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamepo_num_tx` (`po_num_tx`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamemod_dt` (`mod_dt`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnameother_emails` (`other_emails`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamecustomer_label` (`customer_label`);
ALTER TABLE `t_mmrrc_order` ADD INDEX `Indexnamegrant_numbers` (`grant_numbers`);
ALTER TABLE `t_distribution_contact` ADD INDEX `Indexnamefirst_name` (`first_name`);
ALTER TABLE `t_distribution_contact` ADD INDEX `Indexnamelast_name` (`last_name`);
ALTER TABLE `order_search` ADD INDEX `Indexnameproducts` (`products`);
I managed to solve this problem by doing two separate queries from my PHP script.
First, I query the order_search view by itself and save all the data in a PHP array indexed by mmrrc_order_oid, which then serves as a quick lookup table for products. The resulting lookup table is an array of about 6,000 strings.
Next, I perform the big complex query with the order_search join omitted. This only takes about a second now. For each resulting record, I simply use the lookup table, keyed by mmrrc_order_oid, to get the products for that order.
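In SQL terms, the two round trips look roughly like this (a sketch of the approach described above; the PHP array handling is omitted):
-- Query 1: fetch the products lookup, keyed in PHP by mmrrc_order_oid
SELECT mmrrc_order_oid, products FROM order_search;
-- Query 2 is the original query with the "left join order_search" line and
-- the "order_search.products as products_ordered" column removed.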