I have a problem with a MySQL query that sometimes needs more than 80 seconds to execute.
Could you please help me optimize it?
Here is my SQL code:
SELECT
codFinAtl,
nome,
cognome,
dataNascita AS annoNascita,
MIN(tempoRisultato) AS tempoRisultato
FROM
graduatorie
INNER JOIN anagrafica ON codFin = codFinAtl
INNER JOIN manifestazionigrad ON manifestazionigrad.codice = graduatorie.codMan
WHERE
anagrafica.eliminato = 'n'
AND graduatorie.eliminato = 'n'
AND codGara IN('01', '81')
AND sesso = 'F'
AND manifestazionigrad.aa = '2018/19'
AND graduatorie.baseVasca = '25'
AND tempoRisultato IS NOT NULL
AND dataNascita BETWEEN '20050101' AND '20061231'
GROUP BY
codFinAtl
ORDER BY
tempoRisultato,
cognome,
nome
And here is my DB schema.
[UPDATE]
Here are the results of the EXPLAIN query:
+----+-------------+--------------------+------------+--------+--------------------------+-----------+---------+-------------------------------+--------+----------+----------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------------+------------+--------+--------------------------+-----------+---------+-------------------------------+--------+----------+----------------------------------------------+
| 1 | SIMPLE | anagrafica | NULL | ALL | codFin | NULL | NULL | NULL | 334094 | 0.11 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | graduatorie | NULL | ref | codMan,codFinAtl,codGara | codFinAtl | 33 | finsicilia.anagrafica.codFin | 20 | 0.24 | Using where |
| 1 | SIMPLE | manifestazionigrad | NULL | eq_ref | codice | codice | 32 | finsicilia.graduatorie.codMan | 1 | 10.00 | Using index condition; Using where |
+----+-------------+--------------------+------------+--------+--------------------------+-----------+---------+-------------------------------+--------+----------+----------------------------------------------+
graduatorie: INDEX(eliminato, baseVasca)
manifestazionigrad: INDEX(aa, codice)
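Expressed as DDL, that would be something like the following (a sketch only; the index names are invented here, so verify the column names against the real schema before applying):

-- Sketch of the composite indexes suggested above.
ALTER TABLE graduatorie
    ADD INDEX idx_grad_eliminato_vasca (eliminato, baseVasca);

-- codice is assumed to be the join column (manifestazionigrad.codice = graduatorie.codMan).
ALTER TABLE manifestazionigrad
    ADD INDEX idx_man_aa_codice (aa, codice);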
These may not be sufficient. Please qualify each column in the query so we know which table they come from.
But the real problem is probably "explode-implode". First it explodes by JOINing, then it implodes by GROUP BY.
Is it possible to compute MIN(tempoRisultato) without any JOINs? (I can't even tell which table is involved.) If so, provide that, then we can discuss what to do next. There may be two choices: a derived table with the MIN, or a subquery in the JOIN that may be correlated or uncorrelated.
(tempoRisultato seems to be in two tables!)
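For illustration only, here is one sketch of the derived-table route. It assumes tempoRisultato, codGara, baseVasca and eliminato live in graduatorie, and that sesso and dataNascita live in anagrafica (exactly the assumptions that need confirming); the join to manifestazionigrad stays inside only because of the aa filter.

-- Sketch: "implode" first (one row per athlete with the MIN), then join for names.
SELECT a.codFin, a.nome, a.cognome, a.dataNascita AS annoNascita, g.tempoRisultato
FROM (
    SELECT codFinAtl, MIN(tempoRisultato) AS tempoRisultato
    FROM graduatorie
    INNER JOIN manifestazionigrad ON manifestazionigrad.codice = graduatorie.codMan
    WHERE graduatorie.eliminato = 'n'
      AND graduatorie.codGara IN ('01', '81')
      AND graduatorie.baseVasca = '25'
      AND manifestazionigrad.aa = '2018/19'
      AND tempoRisultato IS NOT NULL
    GROUP BY codFinAtl
) AS g
INNER JOIN anagrafica a ON a.codFin = g.codFinAtl
WHERE a.eliminato = 'n'
  AND a.sesso = 'F'
  AND a.dataNascita BETWEEN '20050101' AND '20061231'
ORDER BY g.tempoRisultato, a.cognome, a.nome;

Since the MIN is taken per athlete, filtering on anagrafica afterwards does not change each athlete's minimum, so the result should match the original query.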
I have 300,000 rows in 'tblmessages', and I'm trying to run this query.
If I don't use "ORDER BY msgId DESC" it runs very fast, but when I add the ordering it becomes very slow.
What am I missing?
My indexes appear in the EXPLAIN output below; here is the query:
SELECT msgId
FROM tblmessages
LEFT JOIN wp_users AS a
    ON a.ID = (CASE WHEN msgFromUserId = 1 THEN tblmessages.msgToUserId ELSE tblmessages.msgFromUserId END)
LEFT JOIN tblforum_users u1
    ON u1.user_ID = (CASE WHEN msgFromUserId = 1 THEN tblmessages.msgToUserId ELSE tblmessages.msgFromUserId END)
WHERE (msgFromUserId = 1 OR msgToUserId = 1)
ORDER BY msgId DESC
LIMIT 0, 20
And the EXPLAIN output:
+----+-------------+-------------+------------+-------------+---------------------------------------+---------------------------------------+---------+------+--------+----------+-----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------------+------------+-------------+---------------------------------------+---------------------------------------+---------+------+--------+----------+-----------------------------------------------------+
| 1 | SIMPLE | tblmessages | NULL | index_merge | IX_tblmessages_from,IX_tblmessages_to | IX_tblmessages_from,IX_tblmessages_to | 5,5 | NULL | 726454 | 100.00 | Using union(IX_tblmessages_from,IX_tblmessages_to); Using where; Using filesort |
| 1 | SIMPLE | a | NULL | eq_ref | PRIMARY | PRIMARY | 8 | func | 1 | 100.00 | Using where; Using index |
| 1 | SIMPLE | u1 | NULL | eq_ref | PRIMARY | PRIMARY | 4 | func | 1 | 100.00 | Using where; Using index |
+----+-------------+-------------+------------+-------------+---------------------------------------+---------------------------------------+---------+------+--------+----------+-----------------------------------------------------+
and more information about tblmessages:
Data 12.1 GiB
Index 190.8 MiB
Total 12.3 GiB
"If I don't use 'ORDER BY msgId DESC' it runs very fast, but when I add the ordering it becomes very slow."

There are two reasons for this:
The first is that your query ends with "LIMIT 20". Ordering the results (via ORDER BY) requires every row that satisfies the WHERE clause to be pulled/generated, sorted, and so on before the correct 20 rows can be returned.
The second is somewhat subjective, but your tables don't have the indexing needed to support this query. Put another way, a query like this is generally going to run slowly because of the multiple joins on CASE expressions. My advice is to either split it into multiple simpler queries, or update your question with SHOW CREATE TABLE output for the relevant tables along with one of the INFORMATION_SCHEMA queries that summarizes indexes, partitions, etc. From there we could potentially adjust the design (e.g. by adding multi-column indexes).
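One concrete way to apply the "multiple queries" advice (a sketch only, not tested against this schema) is to split the OR into a UNION so each branch can walk a composite index ending in msgId and stop after 20 rows:

-- Hypothetical composite indexes so each branch satisfies its WHERE and ORDER BY from the index.
ALTER TABLE tblmessages ADD INDEX ix_from_msg (msgFromUserId, msgId);
ALTER TABLE tblmessages ADD INDEX ix_to_msg   (msgToUserId, msgId);

-- Each branch reads at most 20 ids in index order; UNION removes the rare duplicate
-- (a message both from and to user 1), and the outer query trims to the final 20.
SELECT msgId
FROM (
    (SELECT msgId FROM tblmessages WHERE msgFromUserId = 1 ORDER BY msgId DESC LIMIT 20)
    UNION
    (SELECT msgId FROM tblmessages WHERE msgToUserId = 1 ORDER BY msgId DESC LIMIT 20)
) AS m
ORDER BY msgId DESC
LIMIT 0, 20;

The CASE-based joins to wp_users and tblforum_users can then be applied to just those 20 ids rather than to every candidate row.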
I need to optimize this query:
SELECT
cp.id_currency_purchase,
MIN(cr.buy_rate),
MAX(cr.buy_rate)
FROM currency_purchase cp
JOIN currency_rate cr ON cr.id_currency = cp.id_currency AND cr.id_currency_source = cp.id_currency_source
WHERE cr.rate_date >= cp.purchase_date
GROUP BY cp.id_currency_purchase
Right now it takes about 0.5 s, which is way too long for me (the data is used and updated very often, so the query cache doesn't help in my case).
Here's the EXPLAIN EXTENDED output:
+------+-------------+-------+-------+--------------------------------+-------------+---------+-------------------------+-------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+-------+-------+--------------------------------+-------------+---------+-------------------------+-------+----------+-------------+
| 1 | SIMPLE | cp | index | id_currency_source,id_currency | PRIMARY | 8 | NULL | 9 | 100.00 | |
| 1 | SIMPLE | cr | ref | id_currency,id_currency_source | id_currency | 8 | lokatnik.cp.id_currency | 21433 | 100.00 | Using where |
+------+-------------+-------+-------+--------------------------------+-------------+---------+-------------------------+-------+----------+-------------+
There are some great answers on Stack Overflow about single-table GROUP BY optimization (like https://stackoverflow.com/a/11631480/1320081), but I really don't know how to optimize a query based on JOINed tables.
How can I speed it up?
The best index for this query:
SELECT cp.id_currency_purchase, MIN(cr.buy_rate), MAX(cr.buy_rate)
FROM currency_purchase cp JOIN
currency_rate cr
ON cr.id_currency = cp.id_currency AND
cr.id_currency_source = cp.id_currency_source
WHERE cr.rate_date >= cp.purchase_date
GROUP BY cp.id_currency_purchase;
is: currency_rate(id_currency, id_currency_source, rate_date, buy_rate). You might also try currency_purchase(id_currency, id_currency_source, purchase_date).
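As DDL, that would be something like the following (the index names are invented here):

-- Covering index for the cr side of the join and aggregation.
CREATE INDEX idx_cr_currency_source_date_rate
    ON currency_rate (id_currency, id_currency_source, rate_date, buy_rate);

-- Optional companion index on the cp side.
CREATE INDEX idx_cp_currency_source_date
    ON currency_purchase (id_currency, id_currency_source, purchase_date);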
SELECT archive.id, archive.file, archive.create_timestamp, archive.spent
FROM archive LEFT JOIN submissions ON archive.id = submissions.id
WHERE submissions.id is NULL
AND archive.file is not NULL
AND archive.create_timestamp < DATE_SUB(NOW(), INTERVAL 6 month)
AND spent = 0
ORDER BY archive.create_timestamp ASC LIMIT 10000
EXPLAIN result:
+----+-------------+--------------------+--------+--------------------------------+------------------+---------+--------------------------------------------+-----------+--------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------------+--------+--------------------------------+------------------+---------+--------------------------------------------+-----------+--------------------------------------+
| 1 | SIMPLE | archive | range | create_timestamp,file_in | create_timestamp | 4 | NULL | 111288502 | Using where |
| 1 | SIMPLE | submissions | eq_ref | PRIMARY | PRIMARY | 4 | production.archive.id | 1 | Using where; Using index; Not exists |
+----+-------------+--------------------+--------+--------------------------------+------------------+---------+--------------------------------------------+-----------+--------------------------------------+
I've tried hinting index use for the archive table with:
USE INDEX (create_timestamp,file_in)
The archive table is huge, ~150 million records.
Any help with speeding up this query would be greatly appreciated.
You want to use a composite index. For this query:
create index archive_file_spent_createts on archive(file, spent, create_timestamp);
In such an index, you want the columns compared with = in the WHERE clause to come first, followed by at most one column used in an inequality. In this case, I'm not sure whether MySQL will also use the index for the ORDER BY.
This assumes that the where conditions do, indeed, significantly reduce the size of the data.
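If you add that index, a quick way to check whether it helps (and whether the ORDER BY still forces a sort) is to drop the USE INDEX hint and look at the plan again, for example:

-- Re-check the plan without forcing an index choice.
EXPLAIN
SELECT archive.id, archive.file, archive.create_timestamp, archive.spent
FROM archive
LEFT JOIN submissions ON archive.id = submissions.id
WHERE submissions.id IS NULL
  AND archive.file IS NOT NULL
  AND archive.create_timestamp < DATE_SUB(NOW(), INTERVAL 6 MONTH)
  AND archive.spent = 0
ORDER BY archive.create_timestamp ASC
LIMIT 10000;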
I'm looking for some advice on how I might better optimize this query.
For each _piece_detail record that:

- Contains at least one matching _scan record on (zip, zip_4, zip_delivery_point, serial_number)
- Belongs to a company from mailing_groups (through a chain of relationships)
- Has either:
  - a first_scan_date_time that is greater than the MIN(scan_date_time) of the related _scan records, or
  - a latest_scan_date_time that is less than the MAX(scan_date_time) of the related _scan records

I will need to:

- Set _piece_detail.first_scan_date_time to MIN(_scan.scan_date_time)
- Set _piece_detail.latest_scan_date_time to MAX(_scan.scan_date_time)
Since I'm dealing with millions upon millions of records, I am trying to reduce the number of records that I actually have to search through. Here are some facts about the data:
- The _piece_detail table is partitioned by job_id, so it seems to make the most sense to run through these checks in the order of _piece_detail.job_id, _piece_detail.piece_id.
- The _scan table contains over 100,000,000 records right now and is partitioned by (zip, zip_4, zip_delivery_point, serial_number, scan_date_time), which is the same key that is used to match a _scan with a _piece_detail (aside from scan_date_time).
- Only about 40% of the _piece_detail records belong to a mailing_group, but we don't know which ones those are until we run through the full chain of relationship joins.
- Only about 30% of the _scan records belong to a _piece_detail with a mailing_group.
- There are typically between 0 and 4 _scan records per _piece_detail.
Now, I am having a hell of a time finding a way to execute this in a decent way. I had originally started with something like this:
UPDATE _piece_detail
INNER JOIN (
SELECT _piece_detail.job_id, _piece_detail.piece_id, MIN(_scan.scan_date_time) as first_scan_date_time, MAX(_scan.scan_date_time) as latest_scan_date_time
FROM _piece_detail
INNER JOIN _container_quantity
ON _piece_detail.cqt_database_id = _container_quantity.cqt_database_id
AND _piece_detail.job_id = _container_quantity.job_id
INNER JOIN _container_summary
ON _container_quantity.container_id = _container_summary.container_id
AND _container_summary.job_id = _container_quantity.job_id
INNER JOIN _mail_piece_unit
ON _container_quantity.mpu_id = _mail_piece_unit.mpu_id
AND _container_quantity.job_id = _mail_piece_unit.job_id
INNER JOIN _header
ON _header.job_id = _piece_detail.job_id
INNER JOIN mailing_groups
ON _mail_piece_unit.mpu_company = mailing_groups.mpu_company
INNER JOIN _scan
ON _scan.zip = _piece_detail.zip
AND _scan.zip_4 = _piece_detail.zip_4
AND _scan.zip_delivery_point = _piece_detail.zip_delivery_point
AND _scan.serial_number = _piece_detail.serial_number
GROUP BY _piece_detail.job_id, _piece_detail.piece_id, _scan.zip, _scan.zip_4, _scan.zip_delivery_point, _scan.serial_number
) as t1 ON _piece_detail.job_id = t1.job_id AND _piece_detail.piece_id = t1.piece_id
SET _piece_detail.first_scan_date_time = t1.first_scan_date_time, _piece_detail.latest_scan_date_time = t1.latest_scan_date_time
WHERE _piece_detail.first_scan_date_time < t1.first_scan_date_time
OR _piece_detail.latest_scan_date_time > t1.latest_scan_date_time;
I thought that this may have been trying to load too much into memory at once and might not be using the indexes properly.
Then I thought that I might be able to avoid that huge joined subquery by adding two LEFT JOIN subqueries to get the min/max, like so:
UPDATE _piece_detail
INNER JOIN _container_quantity
ON _piece_detail.cqt_database_id = _container_quantity.cqt_database_id
AND _piece_detail.job_id = _container_quantity.job_id
INNER JOIN _container_summary
ON _container_quantity.container_id = _container_summary.container_id
AND _container_summary.job_id = _container_quantity.job_id
INNER JOIN _mail_piece_unit
ON _container_quantity.mpu_id = _mail_piece_unit.mpu_id
AND _container_quantity.job_id = _mail_piece_unit.job_id
INNER JOIN _header
ON _header.job_id = _piece_detail.job_id
INNER JOIN mailing_groups
ON _mail_piece_unit.mpu_company = mailing_groups.mpu_company
LEFT JOIN _scan fs ON (fs.zip, fs.zip_4, fs.zip_delivery_point, fs.serial_number) = (
SELECT zip, zip_4, zip_delivery_point, serial_number
FROM _scan
WHERE zip = _piece_detail.zip
AND zip_4 = _piece_detail.zip_4
AND zip_delivery_point = _piece_detail.zip_delivery_point
AND serial_number = _piece_detail.serial_number
ORDER BY scan_date_time ASC
LIMIT 1
)
LEFT JOIN _scan ls ON (ls.zip, ls.zip_4, ls.zip_delivery_point, ls.serial_number) = (
SELECT zip, zip_4, zip_delivery_point, serial_number
FROM _scan
WHERE zip = _piece_detail.zip
AND zip_4 = _piece_detail.zip_4
AND zip_delivery_point = _piece_detail.zip_delivery_point
AND serial_number = _piece_detail.serial_number
ORDER BY scan_date_time DESC
LIMIT 1
)
SET _piece_detail.first_scan_date_time = fs.scan_date_time, _piece_detail.latest_scan_date_time = ls.scan_date_time
WHERE _piece_detail.first_scan_date_time < fs.scan_date_time
OR _piece_detail.latest_scan_date_time > ls.scan_date_time
These are the explains when I convert them to SELECT statements:
+----+-------------+---------------------+--------+----------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+--------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------------+--------+----------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+--------+----------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 844161 | NULL |
| 1 | PRIMARY | _piece_detail | eq_ref | PRIMARY,first_scan_date_time,latest_scan_date_time | PRIMARY | 18 | t1.job_id,t1.piece_id | 1 | Using where |
| 2 | DERIVED | _header | index | PRIMARY | date_prepared | 3 | NULL | 87 | Using index; Using temporary; Using filesort |
| 2 | DERIVED | _piece_detail | ref | PRIMARY,cqt_database_id,zip | PRIMARY | 10 | odms._header.job_id | 9703 | NULL |
| 2 | DERIVED | _container_quantity | eq_ref | unique,mpu_id,job_id,job_id_container_quantity | unique | 14 | odms._header.job_id,odms._piece_detail.cqt_database_id | 1 | NULL |
| 2 | DERIVED | _mail_piece_unit | eq_ref | PRIMARY,company,job_id_mail_piece_unit | PRIMARY | 14 | odms._container_quantity.mpu_id,odms._header.job_id | 1 | Using where |
| 2 | DERIVED | mailing_groups | eq_ref | PRIMARY | PRIMARY | 27 | odms._mail_piece_unit.mpu_company | 1 | Using index |
| 2 | DERIVED | _container_summary | eq_ref | unique,container_id,job_id_container_summary | unique | 14 | odms._header.job_id,odms._container_quantity.container_id | 1 | Using index |
| 2 | DERIVED | _scan | ref | PRIMARY | PRIMARY | 28 | odms._piece_detail.zip,odms._piece_detail.zip_4,odms._piece_detail.zip_delivery_point,odms._piece_detail.serial_number | 1 | Using index |
+----+-------------+---------------------+--------+----------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+--------+----------------------------------------------+
+----+--------------------+---------------------+--------+--------------------------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+-----------+-----------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+---------------------+--------+--------------------------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+-----------+-----------------------------------------------------------------+
| 1 | PRIMARY | _header | index | PRIMARY | date_prepared | 3 | NULL | 87 | Using index |
| 1 | PRIMARY | _piece_detail | ref | PRIMARY,cqt_database_id,first_scan_date_time,latest_scan_date_time | PRIMARY | 10 | odms._header.job_id | 9703 | NULL |
| 1 | PRIMARY | _container_quantity | eq_ref | unique,mpu_id,job_id,job_id_container_quantity | unique | 14 | odms._header.job_id,odms._piece_detail.cqt_database_id | 1 | NULL |
| 1 | PRIMARY | _mail_piece_unit | eq_ref | PRIMARY,company,job_id_mail_piece_unit | PRIMARY | 14 | odms._container_quantity.mpu_id,odms._header.job_id | 1 | Using where |
| 1 | PRIMARY | mailing_groups | eq_ref | PRIMARY | PRIMARY | 27 | odms._mail_piece_unit.mpu_company | 1 | Using index |
| 1 | PRIMARY | _container_summary | eq_ref | unique,container_id,job_id_container_summary | unique | 14 | odms._header.job_id,odms._container_quantity.container_id | 1 | Using index |
| 1 | PRIMARY | fs | index | NULL | updated | 1 | NULL | 102462928 | Using where; Using index; Using join buffer (Block Nested Loop) |
| 1 | PRIMARY | ls | index | NULL | updated | 1 | NULL | 102462928 | Using where; Using index; Using join buffer (Block Nested Loop) |
| 3 | DEPENDENT SUBQUERY | _scan | ref | PRIMARY | PRIMARY | 28 | odms._piece_detail.zip,odms._piece_detail.zip_4,odms._piece_detail.zip_delivery_point,odms._piece_detail.serial_number | 1 | Using where; Using index; Using filesort |
| 2 | DEPENDENT SUBQUERY | _scan | ref | PRIMARY | PRIMARY | 28 | odms._piece_detail.zip,odms._piece_detail.zip_4,odms._piece_detail.zip_delivery_point,odms._piece_detail.serial_number | 1 | Using where; Using index; Using filesort |
+----+--------------------+---------------------+--------+--------------------------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+-----------+-----------------------------------------------------------------+
Now, looking at the explains generated by each, I really can't tell which is giving me the best bang for my buck. The first one shows fewer total rows when multiplying the rows column, but the second appears to execute a bit quicker.
Is there anything that I could do to achieve the same results while increasing performance through modifying the query structure?
Disable index updates while doing the bulk updates:
ALTER TABLE _piece_detail DISABLE KEYS;
UPDATE ....;
ALTER TABLE _piece_detail ENABLE KEYS;
Refer to the MySQL docs: http://dev.mysql.com/doc/refman/5.0/en/alter-table.html
EDIT:
After looking at the MySQL docs I pointed to, I see they specify this for MyISAM tables, and it is not clear whether it applies to other storage engines. Further solutions here: How to disable index in innodb
There is something I was taught and still strictly follow today: create as many temporary tables as you want, but avoid derived tables, especially in the case of UPDATEs/DELETEs/INSERTs, because:

- You can't predict the indexes on derived tables.
- The derived result set might not be held in memory if it is big.
- The table (MyISAM) or rows (InnoDB) may be locked for a longer time, since the derived query runs each time. I prefer a temporary table with a primary key that joins to the parent table.
- And most importantly, it makes your code look neat and readable.
My approach would be:

CREATE TEMPORARY TABLE xxx (...);
INSERT INTO xxx SELECT q FROM y INNER JOIN z ...;
UPDATE _piece_detail INNER JOIN xxx ON (...) SET ...;

Always reduce your downtime!
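Applied to this question, a rough sketch of that pattern might look like the following. The column types are assumptions, and the mailing_groups relationship chain is omitted for brevity; add those joins back into the INSERT ... SELECT if only those pieces should be touched.

-- Sketch only: materialize the per-piece MIN/MAX once, keyed the same way as the UPDATE join.
CREATE TEMPORARY TABLE tmp_scan_times (
    job_id                INT NOT NULL,      -- assumed types; match _piece_detail
    piece_id              INT NOT NULL,
    first_scan_date_time  DATETIME,
    latest_scan_date_time DATETIME,
    PRIMARY KEY (job_id, piece_id)
);

INSERT INTO tmp_scan_times
SELECT pd.job_id, pd.piece_id,
       MIN(s.scan_date_time), MAX(s.scan_date_time)
FROM _piece_detail pd
INNER JOIN _scan s
       ON  s.zip = pd.zip
       AND s.zip_4 = pd.zip_4
       AND s.zip_delivery_point = pd.zip_delivery_point
       AND s.serial_number = pd.serial_number
GROUP BY pd.job_id, pd.piece_id;

UPDATE _piece_detail pd
INNER JOIN tmp_scan_times t
       ON pd.job_id = t.job_id
      AND pd.piece_id = t.piece_id
SET pd.first_scan_date_time  = t.first_scan_date_time,
    pd.latest_scan_date_time = t.latest_scan_date_time
WHERE pd.first_scan_date_time  < t.first_scan_date_time
   OR pd.latest_scan_date_time > t.latest_scan_date_time;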
Why aren't you using sub-queries for each join? Including the inner joins?
INNER JOIN (SELECT field1, field2, field3 from _container_quantity order by 1,2,3)
ON _piece_detail.cqt_database_id = _container_quantity.cqt_database_id
AND _piece_detail.job_id = _container_quantity.job_id
INNER JOIN (SELECT field1, field2, field3 from _container_summary order by 1,2,3)
ON _container_quantity.container_id = _container_summary.container_id
AND _container_summary.job_id = _container_quantity.job_id
You're definitely pulling a lot into memory by not limiting your selects on those inner joins. By using the order by 1,2,3 at the end of each sub-query you create an index on each sub-query. Your only index is on headers and you aren't joining on _headers....
A couple of suggestions to optimize this query: either create the indexes you need on each table, or use the sub-query join clauses to manually create the ordering you need on the fly.
Also remember that when you do a left join on a "temporary" table full of aggregates you are just asking for performance trouble.
"Contains at least one matching _scan record on (zip, zip_4, zip_delivery_point, serial_number)"
Umm...this is your first point in what you want to do, but none of these fields are indexed?
From your EXPLAIN results, it seems that the subquery is going through all the rows twice. How about keeping the MIN/MAX from the first approach and using just one LEFT JOIN instead of two?
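For example, a single derived table over _scan can supply both aggregates at once. This is a sketch only; in practice you would keep the mailing_groups filtering from the original derived table rather than aggregating all of _scan:

-- Sketch: one pass over _scan yields both MIN and MAX per key tuple.
UPDATE _piece_detail pd
INNER JOIN (
    SELECT zip, zip_4, zip_delivery_point, serial_number,
           MIN(scan_date_time) AS first_scan_date_time,
           MAX(scan_date_time) AS latest_scan_date_time
    FROM _scan
    GROUP BY zip, zip_4, zip_delivery_point, serial_number
) s ON  s.zip = pd.zip
    AND s.zip_4 = pd.zip_4
    AND s.zip_delivery_point = pd.zip_delivery_point
    AND s.serial_number = pd.serial_number
SET pd.first_scan_date_time  = s.first_scan_date_time,
    pd.latest_scan_date_time = s.latest_scan_date_time
WHERE pd.first_scan_date_time  < s.first_scan_date_time
   OR pd.latest_scan_date_time > s.latest_scan_date_time;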