Optimizing INNER JOIN across multiple tables - mysql

I have trawled many of the similar responses on this site and have improved my code at several stages along the way. Unfortunately, this query, which should return only three rows, still won't run.
I have one table with 100k+ rows and about 30 columns, which I can filter down to 3 rows (in this example) before performing INNER JOINs across 21 small lookup tables.
In my first attempt, I was lazy and used implicit joins.
SELECT `master_table`.*, `lookup_table`.`data_point` x 21
FROM `master_table`, `lookup_table` x 21
WHERE `master_table`.`indexed_col` = "value"
AND `lookup_table`.`id` = `lookup_col` x 21
The query looked to be timing out:
#2013 - Lost connection to MySQL server during query
Following this, I tried being explicit about the joins.
SELECT `master_table`.*, `lookup_table`.`data_point` x 21
FROM `master_table`
INNER JOIN `lookup_table` ON `lookup_table`.`id` = `master_table`.`lookup_col` x 21
WHERE `master_table`.`indexed_col` = "value"
Still got the same result. I then realised that the query was probably trying to perform the joins first, then filter down via the WHERE clause. So after a bit more research, I learned how I could apply a subquery to perform the filter first and then perform the joins on the newly created table. This is where I got to, and it still returns the same error. Is there any way I can improve this query further?
SELECT `temp_table`.*, `lookup_table`.`data_point` x 21
FROM (SELECT * FROM `master_table` WHERE `indexed_col` = "value") as `temp_table`
INNER JOIN `lookup_table` ON `lookup_table`.`id` = `temp_table`.`lookup_col` x 21
Is this the best way to write up this kind of query? I tested the subquery to ensure it only returns a small table and can confirm that it returns only three rows.

First, at its simplest, you are looking for
select
mt.*
from
Master_Table mt
where
mt.indexed_col = 'value'
That should be nearly instantaneous, provided you have an index on your master table with indexed_col in the first position (in case it is a compound index of many fields).
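A sketch of that index, using the placeholder names from the question (the index name itself is illustrative):
CREATE INDEX idx_indexed_col ON master_table (indexed_col);

-- quick check that the optimizer picks it up:
EXPLAIN SELECT mt.* FROM master_table mt WHERE mt.indexed_col = 'value';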
Now, if I understand you correctly about your different lookup columns (21 in total), you have simplified them for brevity in this post, but are actually doing something to the effect of:
select
mt.*,
lt1.lookupDescription1,
lt2.lookupDescription2,
...
lt21.lookupDescription21
from
Master_Table mt
JOIN Lookup_Table1 lt1
on mt.lookup_col1 = lt1.pk_col1
JOIN Lookup_Table2 lt2
on mt.lookup_col2 = lt2.pk_col2
...
JOIN Lookup_Table21 lt21
on mt.lookup_col21 = lt21.pk_col21
where
mt.indexed_col = 'value'
I had a project well over a decade ago dealing with a similar situation... the Master table had about 21+ million records and had to join to about 30+ lookup tables. The system crawled, and the query died after running for more than 24 hrs.
This too was on a MySQL server and the fix was a single MySQL keyword...
Select STRAIGHT_JOIN mt.*, ...
STRAIGHT_JOIN tells MySQL to join the tables in the order they are listed. With your master table in the primary position and the WHERE criteria applied directly to the master table, you are good. You know the relationships of the tables, so write the query in the exact order I presented it to you. Don't second-guess this by leading with a subsidiary table that happens to have a smaller record count in the hope that it will make the query faster... it won't.
Try the STRAIGHT_JOIN keyword. It took the query I was working on and finished it in about 1.5 hrs... that query was returning all 21 million rows with all the corresponding lookup descriptions for final output, hence it still needed much longer than a query returning just 3 records would.
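Applied to the query above, the change is just the one keyword (a sketch; the table and column names are the placeholders used earlier):
SELECT STRAIGHT_JOIN
mt.*,
lt1.lookupDescription1,
lt2.lookupDescription2,
...
lt21.lookupDescription21
FROM
Master_Table mt
JOIN Lookup_Table1 lt1
on mt.lookup_col1 = lt1.pk_col1
JOIN Lookup_Table2 lt2
on mt.lookup_col2 = lt2.pk_col2
...
JOIN Lookup_Table21 lt21
on mt.lookup_col21 = lt21.pk_col21
WHERE
mt.indexed_col = 'value'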

First, don't use a subquery. Write the query as:
SELECT mt.*, lt.`data_point`
FROM `master_table` mt INNER JOIN
     `lookup_table` lt
     ON lt.`id` = mt.`lookup_col`
WHERE mt.`indexed_col` = 'value';
The indexes that you want are master_table(indexed_col, lookup_col) and lookup_table(id, data_point).
If you are still having performance problems, then there are multiple possibilities. High among them is that the result set is simply too big to return in a reasonable amount of time. To see if that is the case, you can use select count(*) to count the number of returned rows.
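In DDL form, those indexes would be something like the following (index names are illustrative), and the count check can run against the same join:
CREATE INDEX idx_master_filter ON master_table (indexed_col, lookup_col);
CREATE INDEX idx_lookup_cover ON lookup_table (id, data_point);

SELECT COUNT(*)
FROM master_table mt INNER JOIN
     lookup_table lt
     ON lt.id = mt.lookup_col
WHERE mt.indexed_col = 'value';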


MySQL query and LEFT JOIN for large table

I am trying to optimize the query below, which takes 80 minutes to execute. :/
I have a very large table, prodaja, with 21m rows, and actual_stock with 960k rows.
SELECT
p.NazivMat,
sum(p.Kolicina) AS ProdajaKol,
sum(p.Iznos) AS ProdajaIznos,
s.Kolicina AS TrenutnaZaliha,
s.Iznos AS TrenZalIznos
FROM
prodaja p
LEFT JOIN actual_stock s ON s.BrojSklad = p.BrojSklad
AND s.SifraMat = p.SifraMat
WHERE
p.Dobavljac = 1664
AND p.DatumOtprem BETWEEN '2020-12-10'
AND '2020-12-11'
I have set indexes on the fields BrojSklad and SifraMat, but it does not change much at all, since the date range keeps changing and the query can run forever if a 10-day range is selected (with this query).
Is there any other way to get the same result with a different query, or with two of them, e.g. "prefetch" into a temp table and then run another one against it?
A table with 20m rows is a pain in the butt. :/
UPDATE: 30 Dec
Thanks for all the responses below. For the sake of simplicity I had shortened the query; the long version is below. I did add GROUP BY at the end of it, so that's sorted.
EXPLAIN SELECT
cm_prodaja.NazivGrupe,
cm_prodaja.Grupa,
cm_prodaja.DatumOtprem,
cm_prodaja.SifraMat,
cm_prodaja.BarCode,
cm_prodaja.SifArtOdDob,
cm_prodaja.NazivMat,
sum(cm_prodaja.Kolicina) AS Kolicina,
sum(cm_prodaja.Iznos) AS Iznos,
IFNULL(zaliha_artikala_radnje.Kolicina, 0) AS TrenutnaZaliha,
IFNULL(zaliha_artikala_radnje.Iznos, 0) AS TrenZalIznos
FROM
cm_prodaja
LEFT JOIN zaliha_artikala_radnje ON zaliha_artikala_radnje.BrojSklad = cm_prodaja.BrojSklad
AND zaliha_artikala_radnje.SifraMat = cm_prodaja.SifraMat
WHERE
cm_prodaja.Dobavljac = 1664
AND cm_prodaja.DatumOtprem BETWEEN '2020-08-10'
AND '2020-08-11'
GROUP BY cm_prodaja.BrojSklad, cm_prodaja.NazivRadnje, cm_prodaja.SifraMat,
cm_prodaja.BarCode, cm_prodaja.SifArtOdDob, cm_prodaja.NazivMat,
cm_prodaja.Kolicina, cm_prodaja.Iznos, cm_prodaja.Dobavljac,
cm_prodaja.NazivDobavljaca, cm_prodaja.Proizvodjac,
cm_prodaja.NazivProizvodjaca, cm_prodaja.Grupa, cm_prodaja.NazivGrupe
I made it a bit faster by adding the missing indexes on zaliha_artikala_radnje.BrojSklad and zaliha_artikala_radnje.SifraMat.
Another thing I did was enable partitioning: I "split" the table by date into 4 sections per year, and that helped a lot.
I've added an image with the EXPLAIN result.
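For reference, the quarterly range partitioning described above might be declared roughly like this (a sketch only; it assumes DatumOtprem is a DATE or DATETIME column and that it appears in every unique key of the table, which MySQL requires for partitioning; partition names are illustrative):
ALTER TABLE cm_prodaja
PARTITION BY RANGE (TO_DAYS(DatumOtprem)) (
    PARTITION p2020q1 VALUES LESS THAN (TO_DAYS('2020-04-01')),
    PARTITION p2020q2 VALUES LESS THAN (TO_DAYS('2020-07-01')),
    PARTITION p2020q3 VALUES LESS THAN (TO_DAYS('2020-10-01')),
    PARTITION p2020q4 VALUES LESS THAN (TO_DAYS('2021-01-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);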
Add these composite indexes, with the columns in the order given:
p: (Dobavljac, DatumOtprem)
s: (SifraMat, BrojSklad, Iznos, Kolicina)
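In DDL form, using the table names from the original query rather than the aliases (index names are illustrative):
ALTER TABLE prodaja ADD INDEX idx_dobavljac_datum (Dobavljac, DatumOtprem);
ALTER TABLE actual_stock ADD INDEX idx_sifmat_skl (SifraMat, BrojSklad, Iznos, Kolicina);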
If you need further assistance, please provide SHOW CREATE TABLE and fix the syntax error: ... LEFT The JOIN ...
What is the datatype of DatumOtprem? I am worried about the endpoints of the BETWEEN.
Another problem... The query has SUM(), but no GROUP BY; what is the intent?
Thanks for the suggestions. I ended up using a PHP-side JOIN as described here: https://www.koolreport.com/docs/processes/join/
My original query took 3-40 minutes to return results for 1-7 days selected in the filter.
Now it takes 2-5 seconds, where the final result has 1k - 8k rows.
I did try different methods, and anything that had a JOIN inside the query dropped performance drastically. As said, KoolReport's JOIN function solved my problem. I created two queries; both get their data sets ordered by SifraMat and match on the same field, Dobavljac.

SQL query takes too much time (3 joins)

I'm facing an issue with an SQL query. I'm developing a PHP website, and to avoid making too many queries, I prefer to make one big one that looks like:
select m.*, cj.*, cjb.*, me.pseudo as pseudo_acheteur
from mercato m
JOIN cartes_joueur cj
ON m.ID_carte = cj.ID_carte_joueur
JOIN cartes_joueur_base cjb
ON cj.ID_carte_joueur_base = cjb.ID_carte_joueur_base
JOIN membres me
ON me.ID_membre = cj.ID_membre
where not exists (select * from mercato_encheres me where me.ID_mercato = m.ID_mercato)
and cj.ID_membre = 2
and m.status <> 'cancelled'
ORDER BY total_carac desc, cj.level desc, cjb.nom_carte asc
This should return all the cards put up for sale by the member that don't have any bids on them yet. In the result, I need all the information required to display them.
Here is the approximate rows in each table :
mercato : 1200
cartes_joueur : 800 000
cartes_joueur_base : 62
membres : 2000
mercato_encheres : 15 000
I tried to reduce them (in a dev environment) by deleting old data, but the query still needs 10~15 seconds to execute (which is way too long for a website).
Thanks for your help.
Let's take a look.
The use of * in SELECT clauses is harmful to query performance. Why? It's wasteful. It needlessly adds to the volume of data the server must process, and in the case of JOINs, can force the processing of columns with duplicate values. If you possibly can do so, try to enumerate the columns you need.
You may not have useful indexes on your tables for accelerating this. We can't tell. Please notice that MySQL typically uses just one index per table in a query, so to make a query fast you often need a well-chosen compound index. I suggest you try defining the index (ID_membre, ID_carte_joueur, ID_carte_joueur_base) on your cartes_joueur table. Why? Your query matches for equality on the first of those columns, and then uses the second and third columns in ON conditions.
I have often found that writing a query with the largest table (most rows) first helps me think clearly about optimizing. In your case your largest table is cartes_joueur, and you are choosing just one ID_membre value from that table. Your clearest path to optimization is the knowledge that you only need to examine approximately 400 rows from that table, not 800 000. An appropriate compound index will make that possible, and it's easiest to imagine that index's columns if the table comes first in your query.
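A sketch of that compound index (the index name is illustrative):
ALTER TABLE cartes_joueur
    ADD INDEX idx_membre_carte (ID_membre, ID_carte_joueur, ID_carte_joueur_base);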
You have a correlated subquery -- this one.
where not exists (select *
from mercato_encheres me
where me.ID_mercato = m.ID_mercato)
MySQL's query planner can be stupidly literal-minded when it sees this, running it thousands of times. In your case it's even worse: it has SELECT * in it; see the first point above.
It should be refactored to use the LEFT JOIN ... IS NULL pattern. Here's how that goes.
select whatever
from mercato m
JOIN ...
JOIN ...
LEFT JOIN mercato_encheres mench ON mench.ID_mercato = m.ID_mercato
WHERE mench.ID_mercato IS NULL
and ...
ORDER BY ...
Explanation: The use of LEFT JOIN rather than an ordinary inner JOIN allows rows from the mercato table to be preserved in the output even when the ON condition does not match them to any row in the mercato_encheres table. The mismatching rows get NULL values for the second table's columns. The mench.ID_mercato IS NULL condition in the WHERE clause then selects only the mismatching rows.
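Applied to the query from the question, the rewrite would look roughly like this (a sketch; the SELECT list is kept as in the original for clarity, though per the first point it would be better to enumerate only the columns you need):
select m.*, cj.*, cjb.*, me.pseudo as pseudo_acheteur
from mercato m
JOIN cartes_joueur cj
ON m.ID_carte = cj.ID_carte_joueur
JOIN cartes_joueur_base cjb
ON cj.ID_carte_joueur_base = cjb.ID_carte_joueur_base
JOIN membres me
ON me.ID_membre = cj.ID_membre
LEFT JOIN mercato_encheres mench
ON mench.ID_mercato = m.ID_mercato
WHERE mench.ID_mercato IS NULL
and cj.ID_membre = 2
and m.status <> 'cancelled'
ORDER BY total_carac desc, cj.level desc, cjb.nom_carte asc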

Mysql range check instead of index usage on inner join

I'm having a serious problem with MySQL (innoDB) 5.0.
A very simple SQL query is executed with a very unexpected query plan.
The query:
SELECT
SQL_NO_CACHE
mbCategory.*
FROM
MBCategory mbCategory
INNER JOIN ResourcePermission as rp
ON rp.primKey = mbCategory.categoryId
where mbCategory.groupId = 12345 AND mbCategory.parentCategoryId = 0
limit 20;
MBCategory - contains 216583 rows
ResourcePermission - contains 3098354 rows.
In MBCategory I have multiple indexes (column order as in the index):
Primary (categoryId)
A (groupId,parentCategoryId,categoryId)
B (groupId,parentCategoryId)
In ResourcePermission I have multiple indexes (column order as in the index):
Primary - on some column
A (primKey).
When I look at the query plan, MySQL changes the table sequence and selects rows from ResourcePermission first, then joins to the MBCategory table (a crazy idea), and it takes ages. So I added STRAIGHT_JOIN to force the correct table sequence:
SELECT
STRAIGHT_JOIN SQL_NO_CACHE
mbCategory.*
FROM
MBCategory mbCategory
INNER JOIN ResourcePermission as rp
ON rp.primKey = mbCategory.categoryId
where mbCategory.groupId = 12345 AND mbCategory.parentCategoryId = 0
limit 20;
But here the second problem materializes:
In my opinion MySQL should use index A (primKey) for the join operation; instead it performs "Range checked for each record (index map: 0x400)" and it again takes ages!
FORCE INDEX doesn't help; MySQL still performs "Range checked for each record".
There are only 23 rows in MBCategory which fulfill the WHERE criteria, and after the join there are only 75 rows.
How can I make MySQL choose the correct index for this operation?
Ok,
elementary problem.
I owe myself a beer.
The system I've recently been tuning is not a system I developed - I was assigned to it by my management to improve its performance (the original team doesn't have knowledge of this topic).
After a few weeks of improving SQL queries, indexes, and the number of SQL queries being executed by the application, I hadn't checked one of the most important things in this case!!
THE COLUMN TYPES ARE DIFFERENT!
The developer who wrote that kind of code should get quite a big TALK.
Thanks for the help!
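For anyone hitting the same issue, the check and fix amount to comparing the joined columns' definitions and making them identical. A minimal sketch, where the BIGINT/VARCHAR types are purely hypothetical examples:
SHOW CREATE TABLE MBCategory;
SHOW CREATE TABLE ResourcePermission;

-- if, say, categoryId is BIGINT but primKey was created as VARCHAR (hypothetical),
-- align the types (and the charset/collation for string columns) so the index can be used:
ALTER TABLE ResourcePermission MODIFY primKey BIGINT NOT NULL;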
I had the same problem with a different cause. I was joining a large table, and the ON clause used OR to compare the primary key (ii.itemid) to two different columns:
SELECT *
FROM share_detail sd
JOIN box_view bv ON sd.container_id = bv.id
JOIN boxes b ON b.id = bv.shared_id
JOIN item_index ii ON ii.itemid = bv.shared_id OR b.parent_itemid = ii.itemid;
Fortunately, it turned out the parent_itemid comparison was redundant, so I was able to remove it. Now the index is being used as expected. Otherwise, I was going to try splitting the item_index join into two separate joins.
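For illustration, after dropping the redundant comparison the join was presumably just this (had it not been redundant, the fallback would have been two separate joins, or two queries combined with UNION):
SELECT *
FROM share_detail sd
JOIN box_view bv ON sd.container_id = bv.id
JOIN boxes b ON b.id = bv.shared_id
JOIN item_index ii ON ii.itemid = bv.shared_id;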

Putting together a SQL Stored Proc

So I have a couple of SQL commands that I basically want to turn into a proc, but while doing this, I'd like to optimize them a little bit more.
The first part of it is this:
select tr_reference_nbr
from cfo_daily_trans_hist
inner join cfo_fas157_valuation on fv_dh_daily_trans_hist_id = dh_daily_trans_hist_id
inner join cfo_tran_quote on tq_tran_quote_id = dh_tq_tran_quote_id
inner join cfo_transaction on tq_tr_transaction_id = tr_transaction_id
inner join cfo_fas157_project_valuation ON fpv_fas157_project_valuation_id = fv_fpv_fas157_project_valuation_id AND fpv_status_bit = 1
group by tr_reference_nbr, fv_dh_daily_trans_hist_id
having count(*)>1
This query returns the tr_reference_nbr values that have duplicate data in our system, which needs to be removed. After this runs, I run this other query, pasting in the tr_reference_nbr values the above query gave me, one at a time:
select
tr_reference_nbr , dh_daily_trans_hist_id ,cfo_fas157_project_valuation.*,
cfo_daily_trans_hist.* ,
cfo_fas157_valuation.*
from cfo_daily_trans_hist
inner join cfo_fas157_valuation on fv_dh_daily_trans_hist_id = dh_daily_trans_hist_id
inner join cfo_tran_quote on tq_tran_quote_id = dh_tq_tran_quote_id
inner join cfo_transaction on tq_tr_transaction_id = tr_transaction_id
INNER JOIN cfo_fas157_project_valuation ON fpv_fas157_project_valuation_id = fv_fpv_fas157_project_valuation_id
where
tr_reference_nbr in
(
[PASTEDREFERENCENUMBER]
)
and fpv_status_bit = 1
order by dh_val_time_stamp desc
Now this query gives me a bunch of records for that specific tr_reference_nbr. I then have to look through this data and find the rows that have a matching (duplicate) dh_daily_trans_hist_id. Once this is found, I look and make sure that the following columns also match for that row so I know they are true duplicates: fpv_unadjusted_sponsor_charge, fpv_adjusted_sponsor_charge, fpv_unadjusted_counterparty_charge, and fpv_adjusted_counterparty_charge.
If THOSE all match, I then look at yet another column, fv_create_dt, and make sure that there is less than a minute's difference between the two timestamps there. If there is, I run yet another query on the row that was stored EARLIER, which looks like this:
begin tran
update cfo_fas157_valuation set fpv_status_bit = 0 where fpv_fas157_project_valuation_id = [IDRECIEVEDFROMTHEOTHERTABLE]
commit
As you can see, this is still a very manual process even though we do have a few queries written, but I'm trying to find a solution where we can just run one query that would basically do EVERYTHING except the final update. So basically something that would give us the few fpv_fas157_project_valuation_id values that need to be updated.
From looking at these queries, do any of you guys see an easy way to combine all this? I've been working on it all day and can't seem to get something to run. I feel like I keep screwing up the joins and stuff.
Thanks!
You can combine these queries in multiple ways:
use temporary tables to store results of queries - suitable for stored procedure
use table variables to store results of queries - suitable for stored procedure
use Common Table Expressions (CTEs) to store results of queries - suitable for single query
Once you have them in separate tables/variables/CTEs you can easily join them.
Then you have to do one more thing, and that is to find the difference in datetime between two consecutive rows. There is a trick to do this:
use ROW_NUMBER() to add a column with the number of the row, partitioned by the grouping fields (tr_reference_nbr, ... ) and ordered by fv_create_dt
do a self join on A.ROW_NUMBER = B.ROW_NUMBER + 1
check the difference between A.fv_create_dt and B.fv_create_dt to filter the rows with a difference of less than a minute
Just test your self-join thoroughly to make sure you filter only the rows you need to filter.
If you still have problems with this, don't hesitate to leave a comment.
Interesting note: SQL Server Denali has T-SQL enhancements LEAD and LAG to access subsequent and previous row without self-joins.
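A minimal T-SQL sketch of the ROW_NUMBER() approach, assuming the combined result of the queries above has been stored in a temp table named #candidates (a hypothetical name):
WITH numbered AS (
    SELECT tr_reference_nbr,
           dh_daily_trans_hist_id,
           fpv_fas157_project_valuation_id,
           fv_create_dt,
           ROW_NUMBER() OVER (PARTITION BY tr_reference_nbr, dh_daily_trans_hist_id
                              ORDER BY fv_create_dt) AS rn
    FROM #candidates
    -- the equality checks on the four fpv_*_charge columns described above
    -- would be added to the PARTITION BY (or as join conditions below)
)
SELECT a.fpv_fas157_project_valuation_id   -- the EARLIER of the two rows
FROM numbered a
JOIN numbered b
  ON  a.tr_reference_nbr = b.tr_reference_nbr
  AND a.dh_daily_trans_hist_id = b.dh_daily_trans_hist_id
  AND b.rn = a.rn + 1
WHERE DATEDIFF(SECOND, a.fv_create_dt, b.fv_create_dt) < 60;
The IDs this returns are the "earlier" rows, which can then feed the UPDATE that sets fpv_status_bit = 0.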

Specific SQL Query Optimization Help Needed

So I'm working on a data mining project where we're looking at code elements and their relationships and changes to these things over time. What we want is to ask some questions about how often related elements are changed. I've set it up as a view, but it's taking like 10 min to run. I believe the problem is that I'm having to do a lot of subtraction, concatenation, and string comparisons to compare entries (for our window size), but I don't know a good way to fix this. The query looks like
select aw.same
, rw.k
, count(distinct concat_ws(',', r1.id, r2.id)) as num
from deltamethoddeclaration dmd1
join revision r1
on r1.id = dmd1.FKrevID
join methodinvocation mi
on mi.FKcallerID = dmd1.FKMDID
join deltamethoddeclaration dmd2
on mi.FKcalleeID = dmd2.FKMDID
join revision r2
on r2.id = dmd2.FKrevID
join revisionwindow rw
join authorwindow aw
where (dmd1.FKrevID - dmd2.FKrevID) < rw.k
and (dmd2.FKrevID - dmd1.FKrevID) < rw.k
and case aw.same
when 1 then
r1.author = r2.author
when 0 then
r1.author <> r2.author
else
1=1
end
group by aw.same
, rw.k
;
Ok, so revisionwindow stores the revision windows we're interested in (10, 20, 50, 100) and authorwindow stores which author types we want (same, different, and don't care). Part of the problem is that we could have the same revision pair with different elements matching, so the only hack I could come up with was that ugly count(distinct concat()) thing. This should return a table with 12 rows, one for each combination of the author and revision windows. The entries under 'num' are the unique pairs of revisions related in the manner specified (in this case, both change methods and one of the methods calls the other). It works perfectly, it's just crazy slow (~10 min running time). I'm basically looking for any advice or help to make this work better without sacrificing accuracy.
where (dmd1.FKrevID - dmd2.FKrevID) < rw.k
The most damaging thing about this statement is the less-than operator <, not the arithmetic. B-tree indexes cannot use it, forcing a full table scan every time. Gory details on why this is true: http://explainextended.com/2010/05/19/things-sql-needs-determining-range-cardinality/
I doubt your CASE statement can be optimized by the backend, and the <> operator suffers from the same problem as above. I would think about ways to join with = operators, perhaps breaking the query up and using UNION statements so you can always use indexes.
You're not using EXPLAIN. You need to start using it to optimize queries. You have no idea which indexes are being used and which are not, or whether your condition is selective enough for them to even be helpful (if it's not very selective, see the last point): http://dev.mysql.com/doc/refman/5.0/en/explain.html
Since this is a data mining application, you have a great opportunity to use temp tables of intermediate values. Since the data is probably dumped at periodic intervals (or maybe even only once!), it is easy to rebuild the long-running temp table every so often without risking data corruption. (Or it may just not matter, since you are looking for aggregate patterns.)
I have taken queries that were running over 60 minutes and reduced them to less than 100 ms (instant) by building temp tables that cached the hard stuff. If you are not able to use any of the ideas above, this is probably the lowest-hanging fruit. Take all the 'hard stuff' - the CASE joins and non-equality joins - and do it in one place. Then add an index to your temp table :-) The trick is to make the temp table general enough that you can still query it with the flexibility to ask different questions.
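A rough sketch of that idea for this query: cache the expensive caller/callee revision pairing once, index it, and then apply the small window tables to the cached result (table, column, and index names here are illustrative):
CREATE TEMPORARY TABLE caller_callee_pairs AS
SELECT r1.id     AS r1_id,
       r2.id     AS r2_id,
       r1.author AS author1,
       r2.author AS author2,
       dmd1.FKrevID - dmd2.FKrevID AS rev_diff
FROM deltamethoddeclaration dmd1
JOIN revision r1 ON r1.id = dmd1.FKrevID
JOIN methodinvocation mi ON mi.FKcallerID = dmd1.FKMDID
JOIN deltamethoddeclaration dmd2 ON mi.FKcalleeID = dmd2.FKMDID
JOIN revision r2 ON r2.id = dmd2.FKrevID;

ALTER TABLE caller_callee_pairs ADD INDEX idx_rev_diff (rev_diff);

-- the 12 window combinations then run against the much smaller cached table
SELECT aw.same, rw.k,
       COUNT(DISTINCT p.r1_id, p.r2_id) AS num
FROM caller_callee_pairs p
JOIN revisionwindow rw ON ABS(p.rev_diff) < rw.k
JOIN authorwindow aw
  ON CASE aw.same
       WHEN 1 THEN p.author1 = p.author2
       WHEN 0 THEN p.author1 <> p.author2
       ELSE 1 = 1
     END
GROUP BY aw.same, rw.k;
Because the cached table keeps the raw author names and the revision distance, it stays general enough to answer other window questions later.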
I suspect the two joins (join revisionwindow rw and join authorwindow aw) that have no ON condition but rely on the WHERE clause are causing this.
How many records do these two tables have? MySQL probably does a CROSS JOIN on these first and only later checks the complex (WHERE) conditions.
But please post the results of EXPLAIN.
--EDIT--
Oops, I missed your last paragraph which explains that the two tables have 4 and 3 rows.
Can you try this? (The concat has been replaced, and the WHERE clauses have been moved into JOIN ... ON conditions.)
select aw.same
, rw.k
, count(distinct r1.id, r2.id) as num
from deltamethoddeclaration dmd1
join revision r1
on r1.id = dmd1.FKrevID
join methodinvocation mi
on mi.FKcallerID = dmd1.FKMDID
join deltamethoddeclaration dmd2
on mi.FKcalleeID = dmd2.FKMDID
join revision r2
on r2.id = dmd2.FKrevID
join revisionwindow rw
on (dmd1.FKrevID - dmd2.FKrevID) < rw.k
and (dmd2.FKrevID - dmd1.FKrevID) < rw.k
join authorwindow aw
on case aw.same
when 1 then
r1.author = r2.author
when 0 then
r1.author <> r2.author
else
1=1
end
group by aw.same
, rw.k
;