Here's the general shape of my MySQL query:
select a,b,c
from table1
left join table2 on x=y
left join table3 on m=n
limit 100000, 10
I know how to optimize LIMIT when I have a large offset, but I couldn't find a solution for the case with multiple tables. Is there any way to make my query faster?
First of all, offsets and limits are unpredictable unless you include ORDER BY clauses in your query. Without ORDER BY, your SQL server is allowed to return result rows in any order it chooses.
Second, large offsets with small limits are a notorious query-performance antipattern. There's not much you can do to make the problem go away.
To get decent performance, it's helpful to rethink why you want to use this kind of access pattern, and then try to use WHERE filters on some indexed column value.
For example, let's say you're doing this kind of thing.
select a.user_id, b.user_email, c.user_account
from table1 a
left join table2 b on a.user_id = b.user_id
left join table3 c on b.account_id = c.account_id
limit whatever
Let's say you're paginating the query so you get fifty users at a time. Then you can start with a last_seen_user_id variable in your program, initialized to -1.
Your query looks like this:
select a.user_id, b.user_email, c.user_account
from (
select user_id
from table1
where user_id > ?last_seen_user_id?
order by user_id
limit 50
) u
join table1 a on u.user_id = a.user_id
left join table2 b on a.user_id = b.user_id
left join table3 c on b.account_id = c.account_id
order by a.user_id
Then, when you retrieve that result, set your last_seen_user_id to the value from the last row in the result.
Run the query again to get the next fifty users. If table1.user_id is a primary key or a unique index, this will be fast.
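If user_id isn't already the primary key, adding the unique index is a one-liner. A minimal sketch (the index name uk_user is arbitrary):

ALTER TABLE table1 ADD UNIQUE KEY uk_user (user_id);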
Related
An example table and data, along with the query, can be found at http://sqlfiddle.com/#!9/2e65dd/3
I'm interested in finding all distinct user_ids that don't have a certain record_type.
In my actual case, this table is huge: it has several million records, with an index on the user_id column. I'm planning to retrieve the results in batches by limiting the output to 1000 rows at a time.
select distinct user_id
from records o
where not exists (
    select *
    from records i
    where i.user_id = o.user_id
      and i.record_type = 3
)
limit 0, 1000
Is there a better approach to achieve this?
I would do it this way:
SELECT u.user_id
FROM (SELECT DISTINCT user_id FROM records) AS u
LEFT OUTER JOIN records as r
ON u.user_id = r.user_id AND r.record_type = 3
WHERE r.user_id IS NULL
That avoids the correlated subquery in your NOT EXISTS solution.
Alternatively, you should have another table that just lists users, so you don't have to do the subquery:
SELECT u.user_id
FROM users AS u
LEFT OUTER JOIN records as r
ON u.user_id = r.user_id AND r.record_type = 3
WHERE r.user_id IS NULL
In either case, it would help optimize the JOIN to add a compound index on the pair of columns:
ALTER TABLE records ADD KEY (user_id, record_type)
I'd suggest a join as well, but mine differs from Bill K's like so:
SELECT DISTINCT r.user_id
FROM records AS r
LEFT JOIN (SELECT DISTINCT user_id FROM records WHERE record_type = 3) AS rt3users
ON r.user_id = rt3users.user_id
WHERE rt3users.user_id IS NULL
;
However, here is an alternative that I would not expect better performance from, but it is worth checking, since performance can vary with the size and content of the data:
SELECT DISTINCT r.user_id
FROM records AS r
WHERE r.user_id NOT IN (
SELECT DISTINCT user_id
FROM records
WHERE record_type = 3
)
;
Note, this one is more similar to your original but does away with the correlated nature of the original subquery. (One caveat: if user_id is nullable, NOT IN returns no rows at all when the subquery yields a NULL, so the join form is safer in that case.)
You could create a temporary table holding the user ids with record_type 3 (the original SELECT ... INTO #users form is SQL Server syntax; in MySQL it looks like this):
CREATE TEMPORARY TABLE tmp_users AS
SELECT DISTINCT user_id
FROM records
WHERE record_type = 3;
Then create a unique index (or primary key) on this table; then your query would search indexes in both tables.
I can't say whether the performance would be better; you'd have to test it on your data.
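A sketch of those remaining steps, continuing with the tmp_users table created above (the anti-join mirrors the pattern from the earlier answers):

ALTER TABLE tmp_users ADD PRIMARY KEY (user_id);

SELECT DISTINCT r.user_id
FROM records AS r
LEFT JOIN tmp_users AS t ON r.user_id = t.user_id
WHERE t.user_id IS NULL;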
I have about twenty rather small tables (the largest has about 2k rows; typically around 100 rows, with 4 to 20 columns each) that I try to join with
select ... from table1
left join table2 on table1.name = table2.t2name
left join table3 on table1.name = table3.othername
left join table4 on table2.t2name = table4.something
and so on
In theory it should return about 2k rows with maybe 80 columns, so I guess the amount of data itself is not the problem.
But it runs out of memory. From reading several posts here I figured out that MySQL internally builds a big "all x all" table first and reduces it later. How can I force it to execute the joins one after another, so that it needs far less memory?
Just to make things clear, in your case the expected amount of data is not the problem.
What appears to be the problem is that you are asking the system to compare A x B x C x D... rows (calculate what that means and you will get the picture).
The general idea described in one of my previous comments is to make your query look as follows:
SELECT * FROM (select ... from table1
               where .....
              ) A
LEFT JOIN (select ... from table2
           where .....
          ) B
       ON A.name = B.t2name
LEFT JOIN (select ... from table3
           where .....
          ) C
       ON A.name = C.othername
LEFT JOIN (select ... from table4
           where .....
          ) D
       ON B.t2name = D.something
In this way, assuming you do have conditions to put in the where ..... clauses of the inner selects, you reduce the number of records from each table that need to be compared during the joins.
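Purely for illustration, with made-up filter columns (col1, col2 are hypothetical stand-ins for the where ..... placeholders), the first join of that pattern might look like:

SELECT *
FROM (select name, col1 from table1
      where col1 is not null
     ) A
LEFT JOIN (select t2name, col2 from table2
           where col2 > 0
          ) B
       ON A.name = B.t2name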
If I have the following two tables:
Table "a" with 2 columns: id (int) [Primary Index], column1 [Indexed]
Table "b" with 3 columns: id_table_a (int),condition1 (int),condition2 (int) [all columns as Primary Index]
I can run the following query to select rows from Table a where Table b condition1 is 1
SELECT a.id FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.id_table_a=a.id && condition1=1 LIMIT 1) ORDER BY a.column1 LIMIT 50
With a couple hundred million rows in both tables this query is very slow. If I do:
SELECT a.id FROM a INNER JOIN b ON a.id=b.id_table_a && b.condition1=1 ORDER BY a.column1 LIMIT 50
It is pretty much instant, but if multiple rows in table b match id_table_a, duplicates are returned. If I do a SELECT DISTINCT or GROUP BY a.id to remove the duplicates, the query becomes extremely slow.
Here is an SQLFiddle showing the example queries: http://sqlfiddle.com/#!9/35eb9e/10
Is there a way to make a join without duplicates fast in this case?
*Edited to show that INNER instead of LEFT join didn't make much of a difference
*Edited to show moving condition to join did not make much of a difference
*Edited to add LIMIT
*Edited to add ORDER BY
You can try an inner join with distinct:
SELECT distinct a.id
FROM a INNER JOIN b ON a.id=b.id_table_a AND b.condition1=1
But if you use DISTINCT on a SELECT *, be careful: including a unique id among the distinct-ed columns returns every row and gives the wrong result. In that case, list the columns explicitly:
SELECT distinct col1, col2, col3 ....
FROM a INNER JOIN b ON a.id=b.id_table_a AND b.condition1=1
You could also add a composite index that includes condition1, e.g. KEY (id, condition1), as sketched below.
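A sketch of that index as DDL, assuming it goes on table b with the join column first (the index name is arbitrary):

ALTER TABLE b ADD KEY idx_b_cond (id_table_a, condition1);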
If you can, you could also run
ANALYZE TABLE table_name;
on both tables.
Another technique is to invert the lead table:
SELECT distinct a.id
FROM b INNER JOIN a ON a.id=b.id_table_a AND b.condition1=1
This uses the most selective table to lead the query.
Using this, the index usage seems to differ: http://sqlfiddle.com/#!9/35eb9e/15 (the last query plan adds a "Using where").
# USING DISTINCT TO REMOVE DUPLICATES, without the extra column or ORDER BY
EXPLAIN
SELECT DISTINCT a.id
FROM a
INNER JOIN b ON a.id=b.id_table_a AND b.condition1=1
;
It looks like I found the answer.
SELECT a.id FROM a
INNER JOIN b ON
b.id_table_a=a.id &&
b.condition1=1 &&
b.condition2=(select b.condition2 from b WHERE b.id_table_a=a.id && b.condition1=1 LIMIT 1)
ORDER BY a.column1
LIMIT 5;
I don't know if there is a flaw in this or not, please let me know if so. If anyone has a way to compress this somehow I will gladly accept your answer.
SELECT id FROM a INNER JOIN b ON a.id=b.id_table_a AND b.condition1=1
Move the condition into the ON clause of the join; that way the index on table b can be used for filtering. Also, use INNER JOIN instead of LEFT JOIN.
Then fewer results have to be grouped.
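A sketch of the complete statement with the de-duplication and paging from the question added back (GROUP BY a.id stands in for DISTINCT, and since id is the primary key of a, MIN(a.column1) gives each id a single sort key):

SELECT a.id
FROM a
INNER JOIN b ON a.id = b.id_table_a AND b.condition1 = 1
GROUP BY a.id
ORDER BY MIN(a.column1)
LIMIT 50;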
Wrap the fast version in a query that handles de-duping and limit:
SELECT DISTINCT * FROM (
SELECT a.id, a.column1
FROM a
JOIN b ON a.id = b.id_table_a && b.condition1 = 1
) x
ORDER BY column1
LIMIT 50
We know the inner query is fast. The de-duping and ordering has to happen somewhere. This way it happens on the smallest rowset possible.
See SQLFiddle.
Option 2:
Try the following:
Create indexes as follows:
create index a_id_column1 on a(id, column1)
create index b_id_table_a_condition1 on b(id_table_a, condition1)
These are covering indexes - ones that contain all the columns you need for the query, which in turn means that index-only access to data can achieve the result.
Then try this:
SELECT * FROM (
SELECT a.id, MIN(a.column1) column1
FROM a
JOIN b ON a.id = b.id_table_a
AND b.condition1 = 1
GROUP BY a.id) x
ORDER BY column1
LIMIT 50
Use your fast query in a subselect and remove the duplicates in the outer select:
SELECT DISTINCT sub.id
FROM (
SELECT a.id
FROM a
INNER JOIN b ON a.id=b.id_table_a && b.condition1=1
WHERE b.id_table_a > :offset
ORDER BY a.column1
LIMIT 50
) sub
Because of the duplicate removal you might get fewer than 50 rows. Just repeat the query until you have enough rows. Start with :offset = 0, and use the last ID from the previous result as :offset in each following query.
If you know your statistics, you can also use two limits. The limit in the inner query should be high enough to return 50 distinct rows with a probability which is high enough for you.
SELECT DISTINCT sub.id
FROM (
SELECT a.id
FROM a
INNER JOIN b ON a.id=b.id_table_a && b.condition1=1
ORDER BY a.column1
LIMIT 1000
) sub
LIMIT 50
For example: if you have an average of 10 duplicates per ID, LIMIT 1000 in the inner query will return an average of 100 distinct rows. It's very unlikely that you'd get fewer than 50 rows.
If the condition2 column is a boolean, you know that you can have a maximum of two duplicates. In this case LIMIT 100 in the inner query would be enough.
There's something in a query I have to edit that I don't understand.
There are 4 tables that are joined: tickets, tasks, tickets_users, users. The whole query is not important, but you have an example at the end of the post. What bugs me is this kind of code used many times in relation to other tables:
(SELECT name
FROM users
WHERE users.id=tickets_users.users_id
) AS RequesterName,
Is this a subquery with the tables users and tickets_users joined? What is this?
WHERE users.id=tickets_users.users_id
If this was a join I would have expected to see:
ON users.id = tickets_users.users_id
And how is this different from a typical join? Couldn't the author just select users.name and join with the users table?
Can anyone enlighten me on the advanced SQL querying prowess of the original author?
The query looks like this:
SELECT
description,
(SELECT name
FROM users
WHERE users.id = tickets_users.users_id) AS RequesterName,
(SELECT description
FROM tickets
WHERE tickets.id = ticket_tasks.tickets_id) AS TicketDescription,
ticket_tasks.content AS TaskDescription
FROM
ticket_tasks
RIGHT JOIN
tickets ON ticket_tasks.tickets_id = tickets.id
INNER JOIN
tickets_users ON tickets_users.tickets_id = ticket_tasks.tickets_id
This is what is called a correlated subquery. To describe it in simple terms, it's doing a select inside a select.
However, doing this more than once in ANY query is not recommended AT ALL; the performance hit will be huge.
A correlated subquery performs a row-by-row comparison for each row of the outer select. If that doesn't make sense, think of it this way:
SELECT
id,
(SELECT id FROM tableA AS ta WHERE ta.id > t.id)
FROM
tableB AS t;
For each row in tableB, every row in tableA will be selected and compared against tableB's id.
NOTE:
If you have 100 rows in each of the 4 tables and you do a correlated subquery for each one, then you are doing 100*100*100*100 row comparisons. That's 100,000,000 (one hundred million) comparisons!
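The usual fix is to turn the lookup into a join. A sketch of the question's RequesterName subquery rewritten that way, using the question's own table and column names (the tickets join is omitted for brevity):

SELECT ticket_tasks.content AS TaskDescription,
       users.name AS RequesterName
FROM ticket_tasks
INNER JOIN tickets_users ON tickets_users.tickets_id = ticket_tasks.tickets_id
INNER JOIN users ON users.id = tickets_users.users_id;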
A correlated subquery is NOT a join, but rather a subquery:
SELECT *
FROM
(SELECT id FROM t -- this is a subquery
) AS temp
However, JOINs are different. Generally you can write one in either of these two ways.
This is the faster way
SELECT *
FROM t
JOIN t1 ON t1.id = t.id
This is the slower way
SELECT *
FROM t, t1
WHERE t1.id = t.id
What the second form does is build the Cartesian product of the two tables and then filter out the extra rows in the WHERE clause, as opposed to the first JOIN, which filters as it joins.
As for the different types of joins, there are a few, and each is useful in its respective situation:
INNER JOIN (same as JOIN)
LEFT JOIN
RIGHT JOIN
LEFT OUTER JOIN
RIGHT OUTER JOIN
In MySQL, FULL JOIN (FULL OUTER JOIN) does not exist, so in order to do a FULL join you need to combine a LEFT and a RIGHT join, as sketched below. See this link for a better understanding of what joins do, with Venn diagrams: LINK
REMEMBER, that covers standard SQL, so it includes the FULL joins as well; those don't work in MySQL.
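A sketch of that FULL join emulation with the same t and t1 tables as above (UNION rather than UNION ALL, so rows matched by both branches are not doubled):

SELECT *
FROM t
LEFT JOIN t1 ON t1.id = t.id
UNION
SELECT *
FROM t
RIGHT JOIN t1 ON t1.id = t.id;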
I have this query in MySQL, and since it takes almost 20 seconds to execute, I want to use selects instead of inner joins, with limits, to make the execution faster.
SELECT t1.order_id, CONCAT(t3.first_name,' ',t3.last_name),
buyer_first_name, buyer_last_name,
max(product_quantity) as product_quantity, order_status,
order_value, t5.first_name staff_firstnamelogin,
t5.last_name staff_lastnamelogin, t6.day_name
FROM t_seller_order t0
INNER JOIN t_orders t1
ON t0.event_id = t1.event_id
AND t1.seller_order_token = t0.seller_order_token
INNER JOIN t_tickets t2
ON t1.order_id = t2.order_id
INNER JOIN t_login t3
ON t3.login_id = t1.login_id
INNER JOIN t_login t5
ON t0.login_id = t5.login_id
INNER JOIN t_event_days t6
ON t2.product_id = t6.event_day_id
WHERE t0.event_id = 35
group by t1.order_id
order by order_id desc;
There are many things about the schema that prevent speeding up the query. Let's see what can or cannot be done...
Since the WHERE and GROUP BY hit different tables, no index is useful for both. The best is to have t0: INDEX(event_id).
Indexes for JOINs: t2..t6 need indexes (or PKs) on order_id, login_id, event_day_id. t1 needs INDEX(event_id, seller_order_token) in either order.
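Spelled out as DDL, those suggestions might look like this (index names are made up; skip any column that is already a primary key):

ALTER TABLE t_seller_order ADD INDEX idx_event (event_id);
ALTER TABLE t_orders ADD INDEX idx_event_token (event_id, seller_order_token);
ALTER TABLE t_tickets ADD INDEX idx_order (order_id);
ALTER TABLE t_login ADD INDEX idx_login (login_id);
ALTER TABLE t_event_days ADD INDEX idx_event_day (event_day_id);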
The GROUP BY and ORDER BY are the 'same', so that will take only one sort, not two.
A potential speedup is to finish the GROUP BY before doing some of the JOINs. The current structure is "inflate-deflate", wherein the JOINs conspire to create a huge temp table, then the GROUP BY deflates the results. So...
See if you can write a SELECT like this:
SELECT t0.id, t1.id -- I need the PRIMARY KEYs for these two tables
FROM t_seller_order AS t0
JOIN t_orders AS t1
     ON t0.event_id = t1.event_id
    AND t1.seller_order_token = t0.seller_order_token
WHERE t0.event_id = 35
GROUP BY t1.order_id
How fast is that? Hopefully we can build the rest of the query around this, but without taking too much more time. There are two approaches; I don't know which will be better.
Plan A: Use subqueries (when possible) instead of JOINs. For example, instead of JOINing to t3, plan on this being one item in the `SELECT`:
( SELECT CONCAT(first_name,' ',last_name)
FROM t_login WHERE login_id = t1.login_id
) AS login_name
(Ditto for any other columns in the SELECT that touch a table only once. As it stands, t5 is touched twice, so this approach may be impractical.)
Plan B: JOIN after the GROUP BY. That is, after the "deflate".
SELECT ...
FROM ( SELECT t0.id, t1.id ... GROUP BY... ) AS x -- as discussed above
JOIN y ON y.foo = x.foo
JOIN z ON z.bar = x.bar
-- the GROUP BY is avoided
ORDER BY x.order_id desc; -- The ORDER BY is still necessary
In your example, I lean toward Plan B, but a mixture of both Plans may be desirable.
Further notes: LEFT JOIN and LIMIT add wrinkles to the above discussion. Since you did not have either, I will not clutter this discussion with them.