How do I refactor this MySQL query?

I have the following MySQL query:
(SELECT c.Channel as name, count(*) as total_episode
 FROM (
     SELECT a.aid, a.vid
     FROM videoItem v INNER JOIN aid2vid a USING(vid)
     GROUP BY a.aid
 ) a1
 INNER JOIN channelListingItem c USING(aid)
 GROUP BY c.Channel
)
UNION
(SELECT c1.Channel as name, 0 as total_episode
 FROM channelListingItem c1
 LEFT JOIN (
     SELECT c.Channel
     FROM (
         SELECT a.aid, a.vid
         FROM videoItem v INNER JOIN aid2vid a USING(vid)
         GROUP BY a.aid
     ) a1
     INNER JOIN channelListingItem c USING(aid)
     GROUP BY c.Channel
 ) c2 USING(Channel)
 WHERE c2.Channel is null
 GROUP BY name
);
Basically, what this statement does is get the correct episode count for each channel and assign zero to channels that have no videos in the videoItem table.
Note that
SELECT a.aid, a.vid
FROM videoItem v
INNER JOIN aid2vid a USING(vid)
GROUP BY a.aid
is duplicated, and from the EXPLAIN output for this statement I don't see MySQL reusing the result of that subquery:
+----+--------------+------------+------+----------+---------+---------+----------+------+---------------------------------+
| id | select_type | table | type | pos_keys | key | key_len | ref | rows | Extra |
+----+--------------+------------+------+----------+---------+---------+----------+------+---------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 313 | Using temporary; Using filesort |
| 1 | PRIMARY | c | ALL | idx_vid | NULL | NULL | NULL | 616 | Using where; Using join buffer |
| 2 | DERIVED | a | ALL | vid | NULL | NULL | NULL | 1015 | Using temporary; Using filesort |
| 2 | DERIVED | v | ref | idx_vid | idx_vid | 32 | db.a.vid | 10 | Using index |
| 3 | UNION | c1 | ALL | NULL | NULL | NULL | NULL | 616 | Using temporary; Using filesort |
| 3 | UNION | <derived4> | ALL | NULL | NULL | NULL | NULL | 28 | Using where; Not exists |
| 4 | DERIVED | <derived5> | ALL | NULL | NULL | NULL | NULL | 313 | Using temporary; Using filesort |
| 4 | DERIVED | c | ALL | idx_vid | NULL | NULL | NULL | 616 | Using where; Using join buffer |
| 5 | DERIVED | a | ALL | vid | NULL | NULL | NULL | 1015 | Using temporary; Using filesort |
| 5 | DERIVED | v | ref | idx_vid | idx_vid | 32 | db.a.vid | 10 | Using index |
|NULL| UNION RESULT | <union1,3> | ALL | NULL | NULL | NULL | NULL | NULL | |
+----+--------------+------------+------+----------+---------+---------+----------+------+---------------------------------+
11 rows in set (0.02 sec)
How do I refactor this MySQL statement? Also, is there a good refactoring tool for MySQL statements?
Thanks.

This one seemed to work for me:
select Channel as name,count(distinct a1.aid) as total_episode
from channelListingItem c
left join
(
select a.aid, a.vid
from videoItem v INNER JOIN aid2vid a USING(vid)
) a1 on a1.aid = c.aid
group by Channel;
From what I can see, the following query, which you use twice as an inline view:
SELECT a.aid, a.vid
FROM videoItem v INNER JOIN aid2vid a USING(vid)
GROUP BY a.aid
is being used to get a distinct list of aid and vid values that exist in both videoItem and aid2vid. I have replaced the GROUP BY in the inline view with a COUNT(DISTINCT) in the outer query to achieve the same thing, since you are not using any aggregate functions in the inline view part of the query.
I think you do not need to split the query into two parts joined by a UNION (i.e. part 1 to get episode counts > 0 and part 2 to get episode counts = 0); this can be achieved with a single GROUP BY.
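To see why the single LEFT JOIN already yields 0 for channels with no videos, note that COUNT(expr) ignores NULLs. Here is a minimal, standalone illustration (not tied to the question's tables):
-- COUNT(expr) and COUNT(DISTINCT expr) skip NULLs, so an unmatched
-- LEFT JOIN row contributes 0 to COUNT(DISTINCT a1.aid), not 1:
SELECT COUNT(DISTINCT NULL) AS cd, COUNT(NULL) AS c, COUNT(*) AS cstar;
-- returns cd = 0, c = 0, cstar = 1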
Hope this helps!

I could be way off, but I believe the following provides the same results as your original query.
The gist of it is to
Add the total_episode field to your LEFT JOIN.
Use COALESCE to return either the total_episode value or 0.
SQL Statement
SELECT c1.Channel as name
, COALESCE(total_episode, 0) AS total_episode
FROM channelListingItem c1
LEFT JOIN (
SELECT c.Channel
, count(*) as total_episode
FROM (
SELECT a.aid
, a.vid
FROM videoItem v
INNER JOIN aid2vid a ON a.vid = v.vid
GROUP BY
a.aid
) a1
INNER JOIN channelListingItem c ON c.aid = a1.aid
GROUP BY
c.Channel
) c2 ON c2.Channel = c1.Channel
GROUP BY
name
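As a quick, standalone sanity check of the COALESCE step (not tied to the tables above):
-- COALESCE returns its first non-NULL argument, so channels with no match
-- in the derived table c2 come back with 0 instead of NULL:
SELECT COALESCE(NULL, 0) AS no_match, COALESCE(7, 0) AS with_match;
-- returns no_match = 0, with_match = 7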

Related

Optimize SELECT query with multiple JOINs, subqueries and MIN/MAX

I am trying to optimize the query below; it works, but very slowly. The subqueries iterate through the entire tables multiple times.
The challenge is that there are 3 situations where I need to get the lowest value from the JOINs where the value is not blank, and 1 situation where the JOIN is supposed to get the highest row value. I have used MIN and MAX.
Is there a better approach that I can use to build this query?
The columns below are the lowest-value columns, where the value is not blank. I am using a MIN() subquery with a JOIN:
j4.owner_perception,
j6.tl_perception,
The columns below are the highest-value columns. I am using a MAX() subquery with a JOIN:
j2.tl_perception,
j2.owner_perception
The Query
EXPLAIN SELECT
s.ticket_number,
s.request_id,
j4.owner_perception,
j6.tl_perception,
j2.tl_perception,
j2.owner_perception
FROM
survey
LEFT JOIN survey AS s ON survey.id = s.id
LEFT JOIN (
SELECT
request_id,
MIN(id) AS minumumownerid
FROM
history_gt
WHERE
owner_perception != ""
GROUP BY
request_id
) AS j3 ON s.request_id = j3.request_id
LEFT JOIN history_gt AS j4 ON j4.id = j3.minumumownerid
LEFT JOIN (
SELECT
request_id,
MIN(id) AS minumumtlid
FROM
history_gt
WHERE
tl_perception != ""
GROUP BY
request_id
) AS j5 ON s.request_id = j5.request_id
LEFT JOIN history_gt AS j6 ON j6.id = j5.minumumtlid
LEFT JOIN (
SELECT
request_id,
MAX(id) AS maximumid
FROM
history_gt
GROUP BY
request_id
) AS j1 ON s.request_id = j1.request_id
LEFT JOIN history_gt AS j2 ON j2.id = j1.maximumid
GROUP BY
s.request_id
ORDER BY
s.id ASC
LIMIT
50
These are the EXPLAIN details (I do not understand EXPLAIN :)
+------+--------------+-------------+---------+-------------------+-------------+----------+--------------------+----------+----------------------------------------------+--+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | |
+------+--------------+-------------+---------+-------------------+-------------+----------+--------------------+----------+----------------------------------------------+--+
| 1 | PRIMARY | survey | index | NULL | request_id | 767 | NULL | 476 | Using index; Using temporary; Using filesort | |
| 1 | PRIMARY | s | eq_ref | PRIMARY | PRIMARY | 4 | qo.survey.id | 1 | | |
| 1 | PRIMARY | <derived2> | ref | key0 | key0 | 153 | qo.s.request_id | 1060 | Using where | |
| 1 | PRIMARY | j4 | eq_ref | PRIMARY | PRIMARY | 4 | j3.minumumownerid | 1 | Using where | |
| 1 | PRIMARY | <derived3> | ref | key0 | key0 | 153 | qo.s.request_id | 2121 | Using where | |
| 1 | PRIMARY | j6 | eq_ref | PRIMARY | PRIMARY | 4 | j5.minumumtlid | 1 | Using where | |
| 1 | PRIMARY | <derived4> | ref | key0 | key0 | 153 | qo.s.request_id | 530 | Using where | |
| 1 | PRIMARY | j2 | eq_ref | PRIMARY | PRIMARY | 4 | j1.maximumid | 1 | Using where | |
| 4 | DERIVED | history_gt | range | NULL | request_id | 152 | NULL | 252406 | Using index for group-by | |
| 3 | DERIVED | history_gt | index | NULL | request_id | 152 | NULL | 1009620 | Using where | |
| 2 | DERIVED | history_gt | index | owner_perception | request_id | 152 | NULL | 1009620 | Using where | |
+------+--------------+-------------+---------+-------------------+-------------+----------+--------------------+----------+----------------------------------------------+--+
Yes, your original is bulky and goes through the history table 3 times. My suggestion is to run a conditional query against all records in history ONCE. Notice my "PQ" (prequery), which groups by request_id and takes the respective min/max based on a CASE WHEN of the != "" qualification. When the condition does not apply, the CASE returns NULL, which the aggregate ignores. I then apply COALESCE() to prevent NULLs from coming back in that result set. Now I have all your aggregates in one result set.
Now that that is available, I removed your redundant survey-to-survey (s) self-join and went directly from the survey table to the prequery result. I can then LEFT JOIN from the prequery back to history on each of the respective min/max IDs.
SELECT
s.ticket_number,
s.request_id,
j4.owner_perception,
j6.tl_perception,
j2.tl_perception,
j2.owner_perception
FROM
survey s
left join
(SELECT
request_id,
coalesce( min( case when owner_perception != ""
then id else null end ), 0 ) minOwnerID,
coalesce( min( case when tl_perception != ""
then id else null end ), 0 ) minTLID,
MAX(id) AS maxID
FROM
history_gt
GROUP BY
request_id ) PQ
on s.request_id = PQ.request_id
LEFT JOIN history_gt AS j4
ON PQ.minOwnerID = j4.id
LEFT JOIN history_gt AS j6
ON PQ.minTLID = j6.id
LEFT JOIN history_gt AS j2
ON PQ.maxID = j2.id
GROUP BY
s.request_id
ORDER BY
s.id ASC
LIMIT
50
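For reference, here is a minimal, self-contained illustration of the conditional-aggregate pattern the prequery relies on; the inline derived table is throwaway sample data, not one of the question's tables:
-- MIN()/MAX() ignore NULLs, so rows failing the CASE WHEN test simply drop
-- out of the aggregate instead of requiring a separate filtered subquery:
SELECT MIN(CASE WHEN val != '' THEN id END) AS min_id_nonblank,
       MAX(id) AS max_id
FROM (
    SELECT 1 AS id, '' AS val
    UNION ALL SELECT 2, 'a'
    UNION ALL SELECT 3, 'b'
) AS t;
-- returns min_id_nonblank = 2, max_id = 3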

MySQL LEFT JOIN optimisation

I have the following query, where:
A has 1 000 000 rows
B, C and D have 150 000 rows each
E has 50 000 rows
The query itself seems to take around 50 seconds to complete. It will usually contain a WHERE clause, as it is searching through all the possible data in the database. Is there any way this could be improved?
SELECT A.one, A.two, A.three
FROM A
LEFT JOIN B ON ( A.id = B.id )
LEFT JOIN C ON ( A.id = C.d )
LEFT JOIN D ON ( A.id = D.id )
LEFT JOIN E ON ( A.name = E.name
AND E.date <= A.date )
ORDER BY A.id ASC
Explain query:
+----+-------------+-------+--------+---------------+----------+---------+-----------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+---------------+----------+---------+-----------+--------+-------------+
| 1 | SIMPLE | A | index | NULL | PRIMARY | 17 | NULL | 357752 | |
| 1 | SIMPLE | B | eq_ref | PRIMARY | PRIMARY | 17 | db.A.id | 1 | Using index |
| 1 | SIMPLE | C | eq_ref | PRIMARY | PRIMARY | 17 | db.A.id | 1 | Using index |
| 1 | SIMPLE | D | eq_ref | PRIMARY | PRIMARY | 17 | db.A.id | 1 | Using index |
| 1 | SIMPLE | E | ref | Name,Date | Name | 62 | db.A.name | 1 | |
+----+-------------+-------+--------+---------------+----------+---------+-----------+--------+-------------+
I would recommend replacing the indexes that you have on E (Name and Date) with a two-column index covering both, because for that last join you're effectively selecting from E where name and date match criteria. Because name is an equality comparison, it should come first in the index.
ALTER TABLE `E` ADD INDEX idx_join_optimise (`name`,`date`)
This will let that join fully use an index.
Also, I assume that this is an example query, but you don't seem to be using B, C or D at all, which may be slowing it down.
If the WHERE clause that you mentioned uses values from the other tables, I'd suggest changing those joins to an INNER JOIN based on the criteria, as sketched below. (It would help if you posted an example of what you were doing.)
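For illustration only, a rough sketch of that INNER JOIN rewrite; B.status is a made-up column standing in for whatever the real WHERE clause filters on:
-- A predicate on B that rejects NULLs discards the NULL-extended rows anyway,
-- so the LEFT JOIN is effectively an inner join; writing it as one lets the
-- optimizer reorder the tables more freely:
SELECT A.one, A.two, A.three
FROM A
INNER JOIN B ON ( A.id = B.id )
WHERE B.status = 'active'
ORDER BY A.id ASC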

MySQL joins and group columns

In the following situation I need to somehow combine several columns into one. I have the following query:
SELECT a.id,b.id,c.id,d.id
FROM some_table AS a
LEFT JOIN some_table AS b ON ( a.id=b.parent_id )
LEFT JOIN some_table AS c ON ( b.id=c.parent_id )
LEFT JOIN some_table AS d ON ( c.id=d.parent_id )
WHERE a.id = '22'
Results in:
+--------+--------+--------+--------+
| a.id | b.id | c.id | d.id |
+--------+--------+--------+--------+
| 22 | 24 | 25 | null |
| 22 | 381 | null | null |
| 22 | 418 | 2389 | 9841 |
+--------+--------+--------+--------+
This is a category table populated with 220,000+ rows.
I need the last id that is not NULL, so in this case I need (25, 381, 9841).
What is the easiest way to achieve this?
The desired result would be:
+------+
| id |
+------+
| 25 |
| 381 |
| 9841 |
See COALESCE(). That's all you need!
To expand on the above answer, it seems like you need
SELECT a.id, COALESCE(d.id, c.id, b.id) AS 'id'
FROM some_table AS a
LEFT JOIN some_table AS b ON ( a.id=b.parent_id )
LEFT JOIN some_table AS c ON ( b.id=c.parent_id )
LEFT JOIN some_table AS d ON ( c.id=d.parent_id )
WHERE a.id = '22'
That would give you an output of
+------+------+
| a.id | id |
+------+------+
| 22 | 25 |
| 22 | 381 |
| 22 | 9841 |

MySQL ORDER BY extremely slow - even with indexes

I have a quite complex query with many joins which runs very well without ordering. But as soon as I try to order by any of my fields, it executes extremely slowly and takes about 30 seconds to complete.
Here's the query:
SELECT SQL_NO_CACHE *
FROM et_order
INNER JOIN et_order_type ON et_order.type_id = et_order_type.id
INNER JOIN et_order_data ON et_order.id = et_order_data.order_id
INNER JOIN et_user et_user_consultant ON et_order.user_id_consulting = et_user_consultant.id
INNER JOIN et_customer ON et_order.customer_id = et_customer.id
INNER JOIN et_appointment ON et_order.appointment_id = et_appointment.id
INNER JOIN et_order_status order_status ON et_order.order_status_id = order_status.id
INNER JOIN et_status glass_r_status ON et_order_data.status_id_glass_r = glass_r_status.id
INNER JOIN et_status glass_l_status ON et_order_data.status_id_glass_l = glass_l_status.id
ORDER BY et_order.id DESC
LIMIT 50
The original query is even bigger and has various WHERE operations as well, but even the base query without any condition is unreasonably slow. When I remove the ORDER BY et_order.id DESC the query takes about 0.01 secs to fetch.
In my original query I select every single field I need individually; I just changed it to SELECT * here for better readability of the statement.
EXPLAIN SELECT gives the following result:
+----+-------------+--------------------+--------+-------------------------------------------------------------------------------+-------------+---------+-----------------------------------------+-------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------------+--------+-------------------------------------------------------------------------------+-------------+---------+-----------------------------------------+-------+---------------------------------+
| 1 | SIMPLE | et_customer | ALL | PRIMARY | NULL | NULL | NULL | 59750 | Using temporary; Using filesort |
| 1 | SIMPLE | et_order | ref | PRIMARY,customer_id,appointment_id,user_id_consulting,order_status_id,type_id | customer_id | 4 | eyetool.et_customer.id | 1 | |
| 1 | SIMPLE | et_user_consultant | eq_ref | PRIMARY | PRIMARY | 4 | eyetool.et_order.user_id_consulting | 1 | |
| 1 | SIMPLE | et_appointment | ref | PRIMARY | PRIMARY | 8 | eyetool.et_order.appointment_id | 1 | |
| 1 | SIMPLE | et_order_data | ref | status_id_glass_l,status_id_glass_r,order_id | order_id | 5 | eyetool.et_order.id | 1 | Using where |
| 1 | SIMPLE | et_order_type | ALL | PRIMARY | NULL | NULL | NULL | 4 | Using where; Using join buffer |
| 1 | SIMPLE | glass_l_status | eq_ref | PRIMARY | PRIMARY | 4 | eyetool.et_order_data.status_id_glass_l | 1 | |
| 1 | SIMPLE | order_status | eq_ref | PRIMARY,id | PRIMARY | 4 | eyetool.et_order.order_status_id | 1 | |
| 1 | SIMPLE | glass_r_status | eq_ref | PRIMARY | PRIMARY | 4 | eyetool.et_order_data.status_id_glass_r | 1 | |
+----+-------------+--------------------+--------+-------------------------------------------------------------------------------+-------------+---------+-----------------------------------------+-------+---------------------------------+
9 rows in set (0.00 sec)
What I don't really understand is why EXPLAIN SELECT says it does not use any key for et_order_type. Maybe because it is not needed, since there are only 4 rows in it?
But there is an index on type_id in et_order: KEY type_id (type_id)
I've added a (single-column) index for every key I am using for joining and ordering. Could this be the problem? Do I need to create combined indexes?
The database contains about 200,000 rows in et_order and et_order_data, 60,000 in et_customer, and 150,000 in et_appointment. The other tables are negligible.
When I just join et_order_data and et_order_type, it also takes very long, and EXPLAIN SELECT still shows key NULL for et_order_type:
EXPLAIN SELECT SQL_NO_CACHE *
FROM et_order
INNER JOIN et_order_type ON et_order.type_id = et_order_type.id
INNER JOIN et_order_data ON et_order.id = et_order_data.order_id
ORDER BY et_order.id DESC
LIMIT 50
+----+-------------+---------------+------+-----------------+----------+---------+---------------------+--------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+------+-----------------+----------+---------+---------------------+--------+---------------------------------+
| 1 | SIMPLE | et_order | ALL | PRIMARY,type_id | NULL | NULL | NULL | 162007 | Using temporary; Using filesort |
| 1 | SIMPLE | et_order_data | ref | order_id | order_id | 5 | eyetool.et_order.id | 1 | Using where |
| 1 | SIMPLE | et_order_type | ALL | PRIMARY | NULL | NULL | NULL | 4 | Using where; Using join buffer |
+----+-------------+---------------+------+-----------------+----------+---------+---------------------+--------+---------------------------------+
The Table structure for et_order and et_order_type can be reviewed here: http://pastebin.com/PED6Edyx
Any tips to optimize my query?
I tried ordering in a subquery like:
SELECT SQL_NO_CACHE *
FROM (SELECT * FROM et_order ORDER BY et_order.id DESC) as et_order
INNER JOIN et_order_type ON et_order.type_id = et_order_type.id
...
This was very quick, but it does not help at all, because I have to order not only on et_order but also on fields of the joined tables.
Thanks in advance for your help!
Update:
Strange: when I change every INNER JOIN to a LEFT JOIN, it works like a charm...
SELECT SQL_NO_CACHE *
FROM et_order
LEFT JOIN et_order_type ON et_order.type_id = et_order_type.id
LEFT JOIN et_order_data ON et_order.id = et_order_data.order_id
LEFT JOIN et_user et_user_consultant ON et_order.user_id_consulting = et_user_consultant.id
LEFT JOIN et_customer ON et_order.customer_id = et_customer.id
LEFT JOIN et_appointment ON et_order.appointment_id = et_appointment.id
LEFT JOIN et_order_status order_status ON et_order.order_status_id = order_status.id
LEFT JOIN et_status glass_r_status ON et_order_data.status_id_glass_r = glass_r_status.id
LEFT JOIN et_status glass_l_status ON et_order_data.status_id_glass_l = glass_l_status.id
ORDER BY et_order.id DESC LIMIT 50
Does anyone know why?
Try this query
SELECT SQL_NO_CACHE *
FROM et_order
INNER JOIN et_order_type ON et_order.type_id = et_order_type.id
INNER JOIN et_order_data ON et_order.id = et_order_data.order_id
INNER JOIN et_user et_user_consultant ON et_order.user_id_consulting = et_user_consultant.id
INNER JOIN et_customer FORCE INDEX (PRIMARY) ON et_order.customer_id = et_customer.id
INNER JOIN et_appointment ON et_order.appointment_id = et_appointment.id
INNER JOIN et_order_status order_status ON et_order.order_status_id = order_status.id
INNER JOIN et_status glass_r_status ON et_order_data.status_id_glass_r = glass_r_status.id
INNER JOIN et_status glass_l_status ON et_order_data.status_id_glass_l = glass_l_status.id
ORDER BY et_order.id DESC LIMIT 50

MySQL query sending data too slow

I have a query with some joins in it. Every table I've joined in this query has a foreign key. When I run it, the execution time is very slow: about 23 seconds.
The main table has about 50,000 rows.
SELECT o.id, o.title, o.link, o.position_id, o.status, o.publish_date, o.archived_on, vos.name AS site_name, vorib.image AS ribbon, vop.picture,
GROUP_CONCAT(DISTINCT CAST(voci.name AS CHAR)) AS cities,
GROUP_CONCAT(DISTINCT CAST(vors.name AS CHAR)) AS regions,
GROUP_CONCAT(DISTINCT CAST(voi.icon_id AS CHAR)) AS icons,
GROUP_CONCAT(DISTINCT CAST(voc.city_id AS CHAR)) AS cities_id,
GROUP_CONCAT(DISTINCT CAST(vor.region_id AS CHAR)) AS regions_id,
GROUP_CONCAT(DISTINCT CAST(vose.section_id AS CHAR)) AS sections,
GROUP_CONCAT(DISTINCT CAST(vocat2.category_id AS CHAR)) AS categories,
GROUP_CONCAT(DISTINCT CAST(vocategories.name AS CHAR)) AS categories_names,
(SELECT SUM(vocount.clicks) FROM vo_offers_counter AS vocount WHERE vocount.offer_id = o.id) AS hits
FROM vo_offers AS o
LEFT JOIN offers_pictures AS vop ON o.id = vop.offer_id AND vop.number = 1
LEFT JOIN offer_sites AS vos ON o.site_id = vos.id
LEFT JOIN offers_city AS voc ON o.id = voc.offer_id
LEFT JOIN offers_category AS vocat ON o.id = vocat.offer_id
LEFT JOIN offers_category AS vocat2 ON o.id = vocat2.offer_id
LEFT JOIN offer_categories AS vocategories ON vocat2.category_id = vocategories.id
LEFT JOIN offers_city AS voc2 ON o.id = voc2.offer_id
LEFT JOIN offer_cities AS voci ON voc2.city_id = voci.id
LEFT JOIN offers_region AS vor ON o.id = vor.offer_id
LEFT JOIN offer_regions AS vors ON vor.region_id = vors.id
LEFT JOIN offer_ribbons AS vorib ON o.ribbon_id = vorib.id
LEFT JOIN offers_section AS vose ON o.id = vose.offer_id
LEFT JOIN offers_icons AS voi ON o.id = voi.offer_id
WHERE o.id IS NOT NULL AND o.status IN ('published','pending','xml')
GROUP BY o.id
ORDER BY CASE WHEN o.position_id IN('', '0') THEN ~0 ELSE 0 END asc, o.position_id asc, o.publish_microtime desc
LIMIT 100 OFFSET 200
Here is the EXPLAIN output for the query:
+----+--------------------+--------------+--------+---------------------------+---------------------------+---------+--------------------------------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+--------------+--------+---------------------------+---------------------------+---------+--------------------------------------+------+----------------------------------------------+
| 1 | PRIMARY | o | range | PRIMARY,status | status | 2 | NULL | 3432 | Using where; Using temporary; Using filesort |
| 1 | PRIMARY | vop | ref | offer_id | offer_id | 5 | new_vsichkioferti.v.id | 1 | |
| 1 | PRIMARY | vos | eq_ref | PRIMARY | PRIMARY | 4 | new_vsichkioferti.v.site_id | 1 | |
| 1 | PRIMARY | voc | ref | offer_id | offer_id | 5 | new_vsichkioferti.v.id | 2 | Using index |
| 1 | PRIMARY | vocat | ref | vo_offers_category_ibfk_1 | vo_offers_category_ibfk_1 | 5 | new_vsichkioferti.v.id | 1 | Using index |
| 1 | PRIMARY | vocat2 | ref | vo_offers_category_ibfk_1 | vo_offers_category_ibfk_1 | 5 | new_vsichkioferti.v.id | 1 | |
| 1 | PRIMARY | vocategories | eq_ref | PRIMARY | PRIMARY | 4 | new_vsichkioferti.vocat2.category_id | 1 | Using index |
| 1 | PRIMARY | voc2 | ref | offer_id | offer_id | 5 | new_vsichkioferti.v.id | 2 | |
| 1 | PRIMARY | voci | eq_ref | PRIMARY | PRIMARY | 4 | new_vsichkioferti.voc2.city_id | 1 | Using index |
| 1 | PRIMARY | vor | ref | offer_id | offer_id | 5 | new_vsichkioferti.v.id | 1 | |
| 1 | PRIMARY | vors | eq_ref | PRIMARY | PRIMARY | 4 | new_vsichkioferti.vor.region_id | 1 | Using index |
| 1 | PRIMARY | vorib | eq_ref | PRIMARY | PRIMARY | 4 | new_vsichkioferti.v.ribbon_id | 1 | |
| 1 | PRIMARY | vose | ref | offer_id | offer_id | 5 | new_vsichkioferti.v.id | 1 | Using index |
| 1 | PRIMARY | voi | ref | offer_id | offer_id | 5 | new_vsichkioferti.v.id | 1 | Using index |
| 2 | DEPENDENT SUBQUERY | vocount | ref | offer_id | offer_id | 5 | func | 1 | Using where |
+----+--------------------+--------------+--------+---------------------------+---------------------------+---------+--------------------------------------+------+----------------------------------------------+
15 rows in set
What can I do to get this to run faster?
[EDIT]
The problem is in these joins:
LEFT JOIN offers_city AS voc2 ON o.id = voc2.offer_id
LEFT JOIN offer_cities AS voci ON voc2.city_id = voci.id
Mostly in the first one: the offers_city table has 221,339 rows but only two columns, offer_id and city_id, both indexed and both foreign keys.
I see that in the WHERE part you filter only by main-table (vo_offers AS o) columns. If that is always the case, you could try to speed it up with a subselect. The thing with your query is that it (probably, not 100% sure) first joins all the records from the joined tables and then performs the filtering.
So you could try something like:
SELECT o.id, o.title, o.link, o.position_id, o.status, o.publish_date, o.archived_on, vos.name AS site_name, vorib.image AS ribbon, vop.picture,
GROUP_CONCAT(DISTINCT CAST(voci.name AS CHAR)) AS cities,
GROUP_CONCAT(DISTINCT CAST(vors.name AS CHAR)) AS regions,
GROUP_CONCAT(DISTINCT CAST(voi.icon_id AS CHAR)) AS icons,
GROUP_CONCAT(DISTINCT CAST(voc.city_id AS CHAR)) AS cities_id,
GROUP_CONCAT(DISTINCT CAST(vor.region_id AS CHAR)) AS regions_id,
GROUP_CONCAT(DISTINCT CAST(vose.section_id AS CHAR)) AS sections,
GROUP_CONCAT(DISTINCT CAST(vocat2.category_id AS CHAR)) AS categories,
GROUP_CONCAT(DISTINCT CAST(vocategories.name AS CHAR)) AS categories_names,
(SELECT SUM(vocount.clicks) FROM vo_offers_counter AS vocount WHERE vocount.offer_id = o.id) AS hits
FROM (SELECT * FROM vo_offers WHERE id IS NOT NULL AND status IN ('published','pending','xml')) AS o
LEFT JOIN offers_pictures AS vop ON o.id = vop.offer_id AND vop.number = 1
LEFT JOIN offer_sites AS vos ON o.site_id = vos.id
LEFT JOIN offers_city AS voc ON o.id = voc.offer_id
LEFT JOIN offers_category AS vocat ON o.id = vocat.offer_id
LEFT JOIN offers_category AS vocat2 ON o.id = vocat2.offer_id
LEFT JOIN offer_categories AS vocategories ON vocat2.category_id = vocategories.id
LEFT JOIN offers_city AS voc2 ON o.id = voc2.offer_id
LEFT JOIN offer_cities AS voci ON voc2.city_id = voci.id
LEFT JOIN offers_region AS vor ON o.id = vor.offer_id
LEFT JOIN offer_regions AS vors ON vor.region_id = vors.id
LEFT JOIN offer_ribbons AS vorib ON o.ribbon_id = vorib.id
LEFT JOIN offers_section AS vose ON o.id = vose.offer_id
LEFT JOIN offers_icons AS voi ON o.id = voi.offer_id
GROUP BY o.id
ORDER BY CASE WHEN o.position_id IN('', '0') THEN ~0 ELSE 0 END asc, o.position_id asc, o.publish_microtime desc
LIMIT 100 OFFSET 200
So in this case you first (in a subselect) find the records you really need, and only then join all the other tables.
Not sure if it will help, but you can try it out at least...