I have a problem with the following query, which is very slow:
SELECT A.* FROM B
INNER JOIN A ON A.id=B.fk_A
WHERE A.creationDate BETWEEN '20120309' AND '20120607'
GROUP BY A.id
ORDER BY RAND()
LIMIT 0,5
EXPLAIN:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE B index fk_A fk_A 4 \N 58962 Using index; Using temporary; Using filesort
1 SIMPLE A eq_ref PRIMARY,creationDate PRIMARY 4 B.fk_A 1 Using where
INDEXES:
A.id (int) = PRIMARY index
A.creationDate (date) = index
B.fk_A = index
Do you see anything to optimize?
Thanks a lot for your advice.
I think the RAND() function will create a RAND() value for every row; this is why Using temporary shows up, and Using filesort because it can't use an index.
The best way would be to SELECT MAX(id) FROM A to get the maximum value,
then generate 5 random numbers between 1 and MAX(id) and run a SELECT ... WHERE A.id IN (...) query.
If the result has fewer than 5 rows (because a record has been deleted), repeat the procedure until you have enough (or initially generate 100 random numbers and LIMIT the query to 5).
That is not a 100% MySQL solution, because you have to do the logic in your code, but I believe it will be much faster.
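A minimal sketch of the idea (assuming A.id is an AUTO_INCREMENT column without large gaps; the id values below are placeholders that would be generated in application code):

-- Step 1: find the upper bound for random id generation
SELECT MAX(id) FROM A;

-- Step 2 (after generating random numbers between 1 and MAX(id) in
-- application code): fetch the rows; the ids below are placeholders.
-- Over-sample (e.g. 100 candidates) to compensate for deleted rows,
-- and re-apply the original date filter and join to B as needed.
SELECT A.*
FROM A
WHERE A.id IN (17, 2045, 33012, 41977, 58320, 60111)
LIMIT 5;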
Update
Just found an interesting article on the web that basically says the same: http://akinas.com/pages/en/blog/mysql_random_row/
One possible rewriting of the query:
SELECT A.*
FROM A
WHERE A.creationDate BETWEEN '20120309' AND '20120607'
AND EXISTS
( SELECT *
FROM B
WHERE A.id = B.fk_A
)
ORDER BY RAND()
LIMIT 0,5
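Note that the GROUP BY A.id from the original query is no longer needed here: EXISTS only checks that at least one matching row exists in B, so each row of A appears at most once and nothing has to be de-duplicated.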
Related
The following query takes MySQL almost 7 times longer to execute than implementing the same thing with two separate queries, avoiding the OR in the WHERE clause. I prefer using a single query, as I can sort and group everything.
Here is the problematic query:
EXPLAIN SELECT *
FROM `posts`
LEFT JOIN `teams_users`
ON (teams_users.team_id=posts.team_id
AND teams_users.user_id='7135')
WHERE (teams_users.status='1'
OR posts.user_id='7135');
Result:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE posts ALL user_id NULL NULL NULL 169642
1 SIMPLE teams_users eq_ref PRIMARY PRIMARY 8 posts.team_id,const 1 Using where
Now if I run the following two queries instead, the combined execution time is, as mentioned, about 7 times shorter:
EXPLAIN SELECT *
FROM `posts`
LEFT JOIN `teams_users`
ON (teams_users.team_id=posts.team_id
AND teams_users.user_id='7135')
WHERE (teams_users.status='1');
Result:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE teams_users ref PRIMARY,status status 1 const 5822 Using where
1 SIMPLE posts ref team_id team_id 5 teams_users.team_id 9 Using where
and:
EXPLAIN SELECT *
FROM `posts`
LEFT JOIN `teams_users`
ON (teams_users.team_id=posts.team_id
AND teams_users.user_id='7135')
WHERE (posts.user_id='7135');
Result:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE posts ref user_id user_id 4 const 142
1 SIMPLE teams_users eq_ref PRIMARY PRIMARY 8 posts.team_id,const 1
Obviously the number of scanned rows is much lower for the two separate queries.
Why is the initial query slow?
Thanks.
Yes, OR is frequently a performance-killer. A common work-around is to do UNION. For your example:
SELECT *
FROM `posts`
LEFT JOIN `teams_users`
ON (teams_users.team_id=posts.team_id
AND teams_users.user_id='7135')
WHERE (teams_users.status='1')
UNION DISTINCT
SELECT *
FROM `posts`
LEFT JOIN `teams_users`
ON (teams_users.team_id=posts.team_id
AND teams_users.user_id='7135')
WHERE (posts.user_id='7135');
If you are sure there are no dups, change to the faster UNION ALL.
If you are not fishing for missing team_users rows, use JOIN instead of LEFT JOIN.
If you need ORDER BY, add some parens:
( SELECT ... )
UNION ...
( SELECT ... )
ORDER BY ...
Otherwise, the ORDER BY would apply only to the second SELECT. (If you also need pagination, see my blog.)
Please note that you might also need LIMIT in certain circumstances.
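For example, applied to the two SELECTs above (a sketch; created is a hypothetical column and the LIMIT value is illustrative, shown only to demonstrate where the shared ORDER BY and LIMIT go):

( SELECT *
  FROM `posts`
  LEFT JOIN `teams_users`
  ON (teams_users.team_id=posts.team_id
  AND teams_users.user_id='7135')
  WHERE (teams_users.status='1')
)
UNION DISTINCT
( SELECT *
  FROM `posts`
  LEFT JOIN `teams_users`
  ON (teams_users.team_id=posts.team_id
  AND teams_users.user_id='7135')
  WHERE (posts.user_id='7135')
)
ORDER BY created DESC
LIMIT 10;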
The queries without the OR clause are both sargable. That is, they both can be satisfied using indexes.
The query with the OR would be sargable if the MySQL query planner contained logic to figure out that it can be rewritten as the UNION ALL of two queries. But the MySQL query planner doesn't (yet) have that kind of logic.
So, it does table scans to get the result set. Those are often very slow.
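For example, WHERE posts.user_id='7135' on its own can be answered by seeking the user_id index, but once it is OR-ed with a condition on the joined table, no single index can drive the plan, and MySQL falls back to scanning all of posts (the 169642-row scan in the first EXPLAIN above).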
I have the following query:
SELECT *
FROM s
JOIN b ON s.borrowerId = b.id
JOIN (
SELECT MIN(id) AS id
FROM tbl
WHERE dealId IS NULL
GROUP BY borrowerId, created
) s2 ON s.id = s2.id
Is there a simple way to optimize this so that I can do the JOIN directly and utilize indexes?
UPDATE
The created field is part of the GROUP BY clause because, due to limitations of our version of MySQL and of the ORM being used, it is possible to have multiple records with the same created timestamp. As a result, I need to find the first record for each combination of borrowerId and created.
Typically I might attempt something like this:
SELECT *
FROM s
INNER JOIN b ON s.borrowerId = b.id
LEFT OUTER JOIN s AS s2
ON s.borrowerId = s2.borrowerId
AND s.created = s2.created
AND s2.id < s.id
WHERE s2.id IS NULL
AND s.dealId IS NULL;
But I'm not sure if that works 100% the way I want.
EXPLAIN from MySQL outputs the following:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY b ALL NULL NULL NULL NULL 129690
1 PRIMARY <derived2> ALL NULL NULL NULL NULL 317751 Using join buffer
1 PRIMARY s eq_ref PRIMARY,borrowerId_2,borrowerId PRIMARY 4 s2.id 1 Using where
2 DERIVED statuses ref dealId dealId 5 183987 Using where; Using temporary; Using filesort
As you can see, it has to scan a massive number of records to build the derived data set, and when joining to the derived table no indexes are available, so none are used.
The first query needs this composite index:
INDEX(borrowerId, created, id)
Note that MySQL rarely uses two indexes for one SELECT, but a composite index is often very handy.
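For instance (a sketch; assuming tbl is the real name of the table behind the derived query, and the index name is illustrative):

ALTER TABLE tbl
  ADD INDEX borrower_created_id (borrowerId, created, id);

With this index, the GROUP BY borrowerId, created can be resolved by walking the index in order instead of collecting rows into a temporary table and filesorting them.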
The second query seems grossly inefficient.
Please provide SHOW CREATE TABLE for each table.
I've got a composite key table CUSTOMER_PRODUCT_XREF
+------------------------------------+-----------------------------------+
| CUSTOMER_ID (PK, NN, VARCHAR(191)) | PRODUCT_ID (PK, NN, VARCHAR(191)) |
+------------------------------------+-----------------------------------+
In my batch program I need to select 500 updated customers, get each customer's purchased PRODUCT_IDs as a comma-separated list, and update our SOLR index. In my query I'm selecting 500 customers and doing a LEFT JOIN to CUSTOMER_PRODUCT_XREF:
SELECT
customer.*, group_concat(xref.PRODUCT_ID separator ', ')
FROM
CUSTOMER customer
LEFT JOIN CUSTOMER_PRODUCT_XREF xref ON customer.CUSTOMER_ID=xref.CUSTOMER_ID
group by customer.CUSTOMER_ID
LIMIT 500;
EDIT: EXPLAIN QUERY
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE customer ALL PRIMARY NULL NULL NULL 74236 Using where; Using temporary; Using filesort
1 SIMPLE xref index NULL PRIMARY 1532 NULL 121627 Using where; Using index; Using join buffer (Block Nested Loop)
I got a "lost connection" exception after the above query had been running for 20 minutes.
I tried the following (with a subquery) instead, and it took 1.7 seconds to get a result, but that is still slow.
SELECT
customer.*, (SELECT group_concat(PRODUCT_ID separator ', ')
FROM CUSTOMER_PRODUCT_XREF xref
WHERE customer.CUSTOMER_ID=xref.CUSTOMER_ID
GROUP BY customer.CUSTOMER_ID)
FROM
CUSTOMER customer
LIMIT 500;
EDIT: EXPLAIN QUERY produces
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY customer ALL NULL NULL NULL NULL 74236 NULL
2 DEPENDENT SUBQUERY xref index NULL PRIMARY 1532 NULL 121627 Using where; Using index; Using temporary; Using filesort
Question
CUSTOMER_PRODUCT_XREF already has both columns in the PRIMARY KEY and set NOT NULL, so why is my query still very slow? I thought having a primary key on a column was enough to build an index for it. Do I need further indexing?
DATABASE INFO:
All the IDs in my database are VARCHAR(191) because they can contain letters.
I'm using the utf8mb4_unicode_ci collation.
I'm using SET group_concat_max_len := @@max_allowed_packet to allow the maximum number of product IDs for each customer. I prefer using group_concat in one main query so that I don't have to run multiple separate queries to get the products for each customer.
Your original version of the query does the join first and then sorts all of the resulting data, which is probably pretty big given how large the fields are.
You can "fix" that version by selecting the 500 customers first and then doing the join:
SELECT c.*, group_concat(xref.PRODUCT_ID separator ', ')
FROM (select c.*
      from CUSTOMER c
      order by c.customer_id
      limit 500
     ) c LEFT JOIN
     CUSTOMER_PRODUCT_XREF xref
     ON c.CUSTOMER_ID = xref.CUSTOMER_ID
group by c.CUSTOMER_ID;
An alternative that might or might not have a big impact would be to do the aggregation by customer in a subquery and join to that, as in:
SELECT c.*, xref.products
FROM (select c.*
      from CUSTOMER c
      order by c.customer_id
      limit 500
     ) c LEFT JOIN
     (select customer_id,
             group_concat(xref.PRODUCT_ID separator ', ') as products
      from CUSTOMER_PRODUCT_XREF xref
      group by customer_id
     ) xref
     ON c.CUSTOMER_ID = xref.CUSTOMER_ID;
What you have discovered is that the MySQL optimizer does not recognize this situation (where the limit has a big impact on performance). Some other database engines do a better job of optimization in this case.
Alright, the speed of the queries in my question shot up when I created an index on just the CUSTOMER_ID column of the CUSTOMER_PRODUCT_XREF table.
So I've got two indexes now
PRIMARY_KEY_INDEX on PRODUCT_ID and CUSTOMER_ID
CUSTOMER_ID_INDEX on CUSTOMER_ID
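A sketch of the statement that adds that second index (the index name is illustrative):

ALTER TABLE CUSTOMER_PRODUCT_XREF
  ADD INDEX customer_id_index (CUSTOMER_ID);

This presumably helps because the composite primary key leads with PRODUCT_ID, so it cannot serve lookups on CUSTOMER_ID alone; a composite index is only usable when its leading column(s) are constrained.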
Given is a MySQL table named "orders_products" with the following relevant fields:
products_id
orders_id
Both fields are indexed.
I am running the following query:
SELECT products_id, count( products_id ) AS counter
FROM orders_products
WHERE orders_id
IN (
SELECT DISTINCT orders_id
FROM orders_products
WHERE products_id = 85094
)
AND products_id != 85094
GROUP BY products_id
ORDER BY counter DESC
LIMIT 4
This query takes extremely long, around 20 seconds. The database is not very busy otherwise, and performs well on other queries.
I am wondering, what causes the query to be so slow?
The table is rather big (around 1.5 million rows, around 210 MB); could this be a memory issue?
Is there a way to tell exactly what is taking MySQL so long?
Output of Explain:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY orders_products range products_id products_id 4 NULL 1577863 Using where; Using temporary; Using filesort
2 DEPENDENT SUBQUERY orders_products ref orders_id,products_id products_id 4 const 2 Using where; Using temporary
Queries that use WHERE id IN (subquery) perform notoriously badly with MySQL.
In most cases of such queries, however, it is possible to rewrite them as a JOIN, and this one is no exception:
SELECT
t2.products_id,
count(t2.products_id) AS counter
FROM orders_products t1
JOIN orders_products t2
ON t2.orders_id = t1.orders_id
AND t2.products_id != 85094
WHERE t1.products_id = 85094
GROUP BY t2.products_id
ORDER BY counter DESC
LIMIT 4
If you want to return rows where there are no other products (and show a zero count for them), change the join to a LEFT JOIN.
Note how the first instance of the table has the WHERE products_id = X filter, which allows an index lookup and immediately reduces the number of rows, while the second instance of the table holds the target data and is looked up on the orders_id field (again fast), with the join condition filtering the count down to the other products.
Give these a try:
- MySQL does not optimize IN with a subquery; join the tables together.
- Your query contains an != condition, which is very difficult to optimize. Can you narrow down the products and use multiple lookups rather than an inequality comparison?
I got a query:
SELECT a.nick, grp, count(*)
FROM help_mails h
JOIN accounts a ON h.helper = a.id
WHERE closed = 1
GROUP BY helper, grp, a.nick
What is wrong with this join?
When I made 2 queries:
SELECT helper,grp,count(*) FROM help_mails h WHERE closed=1 GROUP BY helper, grp;
SELECT nick FROM accounts WHERE id IN (...)
It is 100 times faster.
EXPLAIN returns this:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE h ref closed closed 1 const 1846 Using temporary; Using filesort
1 SIMPLE a ref PRIMARY PRIMARY 4 margonem.h.helper 1 Using where; Using index
accounts.id, help_mails.grp, and help_mails.closed are indexed.
Note that your first query is not the same as the second pair.
If two accounts have the same nick, their COUNT(*) values will be merged together in the first query but returned separately in the second.
If you always want separate counts for separate accounts, you may combine your queries into one:
SELECT a.nick, grp, cnt
FROM (
SELECT helper, grp, COUNT(*) AS cnt
FROM help_mails h
WHERE closed = 1
GROUP BY
helper, grp
) ho
JOIN accounts a
ON a.id = ho.helper
or change the GROUP BY clause of the first query:
SELECT a.nick, grp, count(*)
FROM help_mails h
JOIN accounts a
ON h.helper = a.id
WHERE closed = 1
GROUP BY
helper, grp, a.id, a.nick
Building a composite index on help_mails (closed, helper, grp) will help you a lot, since it can serve both the WHERE filter and the GROUP BY.
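For example (a sketch; the index name is illustrative):

ALTER TABLE help_mails
  ADD INDEX closed_helper_grp (closed, helper, grp);

With closed as the leading column, the WHERE closed = 1 filter narrows the scan to one index range, inside which rows are already ordered by helper and grp, so the GROUP BY needs no temporary table or filesort.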
It looks like what's wrong is that help_mails.helper isn't indexed.