How do I figure out which columns to index?
SELECT a.ORD_ID AS Manual_Added_Orders,
a.ORD_poOrdID_List AS Auto_Added_Orders,
a.ORDPOITEM_ModelNumber,
a.ORDPO_Number,
a.ORDPOITEM_ID,
(SELECT sum(ORDPOITEM_Qty) AS ORDPOITEM_Qty
FROM orderpoitems
WHERE ORDPOITEM_ModelNumber = a.ORDPOITEM_ModelNumber
AND ORDPO_Number = 123007)
AS ORDPOITEM_Qty,
a.ORDPO_TrackingNumber,
a.ORDPOITEM_Received,
a.ORDPOITEM_ReceivedQty,
a.ORDPOITEM_ReceivedBy,
b.ORDPO_ID
FROM orderpoitems a
LEFT JOIN orderpo b ON (a.ORDPO_Number = b.ORDPO_Number)
WHERE a.ORDPO_Number = 123007
GROUP BY a.ORDPOITEM_ModelNumber
ORDER BY a.ORD_poOrdID_List, a.ORD_ID
I ran EXPLAIN; that is how I got those screenshots. I added a few indexes, but it's still not looking good.
Well, firstly, your query could be simplified to:
SELECT a.ORD_ID AS Manual_Added_Orders,
a.ORD_poOrdID_List AS Auto_Added_Orders,
a.ORDPOITEM_ModelNumber,
a.ORDPO_Number,
a.ORDPOITEM_ID,
SUM(ORDPOITEM_Qty) AS ORDPOITEM_Qty,
a.ORDPO_TrackingNumber,
a.ORDPOITEM_Received,
a.ORDPOITEM_ReceivedQty,
a.ORDPOITEM_ReceivedBy,
b.ORDPO_ID
FROM orderpoitems a
LEFT JOIN orderpo b ON (a.ORDPO_Number = b.ORDPO_Number)
WHERE a.ORDPO_Number = 123007
GROUP BY a.ORDPOITEM_ModelNumber
ORDER BY a.ORD_poOrdID_List, a.ORD_ID
Secondly, I would start by creating an index on orderpoitems.ORDPO_Number and orderpo.ORDPO_Number.
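Something along these lines, for example (the index names are just placeholders):
-- index the column used in the WHERE filter and the join
CREATE INDEX idx_orderpoitems_ordpo_number ON orderpoitems (ORDPO_Number);
-- and the matching column on the other side of the join
CREATE INDEX idx_orderpo_ordpo_number ON orderpo (ORDPO_Number);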
It's a bit hard to say without the table structures.
Read up on indexes and covering indexes.
From what you have, start with what is in your WHERE clause and the join criteria to the other table. Also include, if possible and practical, the columns used in GROUP BY / ORDER BY, as the ORDER BY is typically the killer when finishing a query.
That said, I would have an index on your OrderPOItems table on
( ordpo_number, orderpoitem_ModelNumber, ord_poordid_list, ord_id )
This way, the FIRST column hits your WHERE clause, the next column handles your data grouping, and the final columns handle your ORDER BY. The joins and qualifying components can then be "covered" from the index alone, without having to go to the raw data pages for the rest of the columns being returned. Hopefully that's a good jump start specific to your scenario.
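For example, something like this (the index name is just a placeholder):
-- WHERE column first, then the GROUP BY column, then the ORDER BY columns
ALTER TABLE orderpoitems
  ADD INDEX idx_ordpoitems_cover (ORDPO_Number, ORDPOITEM_ModelNumber, ORD_poOrdID_List, ORD_ID);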
Related
Hi, I have two different tables, m_mp and m_answer.
Both have m_date and m_mdate columns, which are filled with the time() function. I want to select the entries from both, but order them by date.
Example:
first table: 'some text','6464647776'
second table: 'some answer','545454545'
So I want to show the second table's row first and then the first one's.
This is the code I'm using:
SELECT r.*,u.*
FROM `mensajes` as r
LEFT JOIN `m_answer` as u on r.id = u.id_m
WHERE r.id = '1'
ORDER BY m_date
and then I display the result of each table using a while loop.
I guess I get what you want to do.
You may force both grouping and ordering using ORDER BY, like this:
http://sqlfiddle.com/#!9/373d65/9
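The fiddle isn't reproduced here, but a minimal sketch of the idea might look like the following (this assumes mensajes holds m_date and m_answer holds m_mdate; adjust to your actual columns):
SELECT r.*, u.*
FROM mensajes AS r
LEFT JOIN m_answer AS u ON r.id = u.id_m
WHERE r.id = '1'
-- keep rows for the same message together, newest timestamp (answer if present, otherwise message) first
ORDER BY r.id, COALESCE(u.m_mdate, r.m_date) DESC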
I know the solution is not optimal in terms of speed but, given proper indices, I suspect performance will be acceptable, unless you are already aiming for millions of messages; when that time comes, you would want to either do a proper GROUP BY, or make subsequent queries for just the recently answered questions from the current page.
Ok, here we go. There's this messy SELECT crossing other tables and ordering to get the one desired row. Basically I do the "math" inside the ORDER BY.
1 base table.
7 JOINs pointing to local tables.
WHERE with 2 clauses and a NOT IN crossing another table.
You'll see in the code that the ORDER BY is pretty damn big/ugly; it sums the results of 5 different calculations, and I need to order by that sum to get the worst-case row.
The problem is that once I execute the stored procedure, it takes up to 8 seconds to run. That's kind of unacceptable, so I'm starting to look at indexes.
So I'm looking for advice on how to make this query run faster.
I'm indexing the columns in the WHERE clauses and the field LINEA. Should I index something else, like the columns I'm joining on? Or should I approach the query differently?
Query:
SET @LINEA = (
SELECT TOP 1
BOA.LIN
FROM
BAND_BA BOA
LEFT JOIN
TEL PAR
ON REPLACE(BOA.Lin,'-','') = SUBSTRING(PAR.Te,2,10)
LEFT JOIN
TELP CLP
ON REPLACE(BOA.Lin,'-','') = SUBSTRING(CLP.Numtel,2,10)
LEFT JOIN
CA C
ON REPLACE(BOA.Lin,'-','') = C.An
LEFT JOIN
RE R
ON REPLACE(BOA.Lin,'-','') = R.Lin
LEFT JOIN
PRODUCTOS2 P2
ON BOA.PRODUCTO = P2.codigo
LEFT JOIN
EN
ON REPLACE(BOA.Lin,'-','') = EN.G
LEFT JOIN
TIP ID
ON TIPID = ID.ID
WHERE
BOA.EST = 'C' AND
ID.SE = 'boA' AND
BOA.LIN NOT IN (
SELECT
LIN
FROM
BAN
)
ORDER BY (EN.VALUE + ANT.VALUE + REIT.VAL + C.VALUE + TEL.VALUE) DESC,
I'll be frank, this is some pretty terrible SQL. Without seeing all your table structures, advice here will be incomplete. That being said, please don't post all your table structures because you are already very close to "hire a consultant" territory with this.
All the REPLACE logic should be done away with. If you need to JOIN on these fields, then add comparable fields to the tables so you don't need to manipulate the data. Every single JOIN that uses a REPLACE or SUBSTRING is a table or index scan - those are non-SARGable and a definite anti-pattern.
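For example, assuming this is SQL Server (the TOP and variable syntax suggest it) and you are allowed to alter the table, one hedged sketch is a persisted computed column plus an index (the names here are made up); the same idea applies to the SUBSTRING() sides of those joins:
-- store the dash-free copy of Lin once, instead of computing it per join
ALTER TABLE BAND_BA ADD Lin_clean AS REPLACE(Lin, '-', '') PERSISTED;
-- index it so joins on the normalized value can seek instead of scan
CREATE INDEX IX_BAND_BA_Lin_clean ON BAND_BA (Lin_clean);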
The ORDER BY is probably the most convoluted ORDER BY I have ever seen. Some major issues there:
Subqueries should all be eliminated and materialized either in the outer query or as variables
String manipulation should be eliminated (see item 1 above)
The entire query is basically a code smell. If you need to write code like this to meet business requirements then you either have a terribly inappropriate design or some other much larger issue in the organization or data.
One thing that can kill performance is using a lot of LEFT JOINs. To improve their performance, make sure that the column(s) you join on have an index; that can have a huge impact.
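As a rough sketch against the join columns visible in the query above (index names are placeholders; the sides wrapped in REPLACE/SUBSTRING still need the normalization fix described earlier before an index can be used):
-- plain join columns on the right-hand tables can be indexed directly
CREATE INDEX IX_RE_Lin ON RE (Lin);
CREATE INDEX IX_CA_An ON CA (An);
CREATE INDEX IX_PRODUCTOS2_codigo ON PRODUCTOS2 (codigo);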
I have a reasonably complex MySQL query being run on another developer's database. I am trying to copy over his data to our new database structure, so I'm running this query to get a load of the data over to copy. The main table has around 45,000 rows.
As you can see from the query below, there's a lot of fields from several different tables. My problem is that the field Ref.refno (as ref_id) is being pulled through, in some cases, two or three times. This is because in the table LandlordOnlineRef (LLRef) there are sometimes multiple rows with this same reference number - in this case, because the row should have been edited, but instead was duplicated...
Here's what I've tried doing:
SELECT DISTINCT(Ref.refno) [...] - this makes no difference to the output at all, although I would've assumed it would stop selecting duplicate refno IDs
Is this a MySQL bug, or me? - I also tried adding GROUP BY ref_id to the end of my query. The query normally takes a few milliseconds to run, but when I add GROUP BY to the end, it seems to run infinitely - I waited several minutes but nothing was happening. I thought it might be struggling because I'm using LIMIT 1000, so I also tried LIMIT 10 but still get the same effect.
Here's the problem query - thanks!
SELECT
-- progress
Ref.refno AS ref_id,
Ref.tenantid AS tenant_id,
Ref.productid AS product_id,
Ref.guarantorid AS guarantor_id,
Ref.agentid AS agent_id,
Ref.companyid AS company_id,
Ref.status AS status,
Ref.startdate AS ref_start_date,
Ref.enddate AS ref_end_date,
-- ReferenceDetails
RefDetails.creditscore AS credit_score,
-- LandlordOnlineRef
LLRef.propaddress AS prev_ll_address,
LLRef.rent AS prev_ll_rent,
LLRef.startdate AS prev_ll_start_date,
LLRef.enddate AS prev_ll_end_date,
LLRef.arrears AS prev_ll_arrears,
LLRef.arrearsreason AS prev_ll_arrears_reason,
LLRef.propertycondition AS prev_ll_property_condition,
LLRef.conditionreason AS prev_ll_condition_reason,
LLRef.consideragain AS prev_ll_consider_again,
LLRef.completedby AS prev_ll_completed_by,
LLRef.contactno AS prev_ll_contact_no,
LLRef.landlordagent AS prev_ll_or_agent,
-- EmpDetails
EmpRef.cempname AS emp_name,
EmpRef.cempadd1 AS emp_address_1,
EmpRef.cempadd2 AS emp_address_2,
EmpRef.cemptown AS emp_address_town,
EmpRef.cempcounty AS emp_address_county,
EmpRef.cemppostcode AS emp_address_postcode,
EmpRef.ctelephone AS emp_telephone,
EmpRef.cemail AS emp_email,
EmpRef.ccontact AS emp_contact,
EmpRef.cgross AS emp_income,
EmpRef.cyears AS emp_years,
EmpRef.cmonths AS emp_months,
EmpRef.cposition AS emp_position,
-- EmpLlodReference
ELRef.lod_ref_status AS prev_ll_status,
ELRef.lod_ref_email AS prev_ll_email,
ELRef.lod_ref_tele AS prev_ll_telephone,
ELRef.emp_ref_status AS emp_status,
ELRef.emp_ref_tele AS emp_telephone,
ELRef.emp_ref_email AS emp_email
FROM ReferenceDetails AS RefDetails
LEFT JOIN progress AS Ref ON Ref.refno
LEFT JOIN LandlordOnlineRef AS LLRef ON LLRef.refno = Ref.refno
LEFT JOIN EmpLlodReference AS ELRef ON ELRef.refno = Ref.refno
LEFT JOIN EmpDetails AS EmpRef ON EmpRef.tenantid = Ref.tenantid
-- For testing purposes to speed things up, limit it to 1000 rows
LIMIT 1000
LEFT JOIN progress AS Ref ON Ref.refno
is basically going to turn that into a cartesian join. You're not doing an explicit comparison; you're saying "join all records where refno is a non-NULL, non-zero value".
Shouldn't it be
LEFT JOIN progress AS Ref ON Ref.refno = RefDetails.something
?
DISTINCT applies to all of the selected columns as a whole, not just the one in parentheses, so list every column you want de-duplicated after SELECT DISTINCT. If you want to keep the renaming, wrap your query in an outer SELECT DISTINCT * FROM (YOUR_SELECT) AS t.
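A minimal sketch of that wrapping (t is an arbitrary alias; the inner query is cut down to two columns here just to show the shape, yours would go in unchanged):
SELECT DISTINCT *
FROM (
    -- your full SELECT with its AS renames and LEFT JOINs goes here
    SELECT Ref.refno    AS ref_id,
           Ref.tenantid AS tenant_id
    FROM progress AS Ref
) AS t
LIMIT 1000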
Are there indexes on the columns in the GROUP BY clause? LIMIT is applied after GROUP BY. So limiting does not affect the query runtime.
A general rule is to never group by more columns than you have to. Use a subquery with a GROUP BY on the table that's returning duplicate rows to get rid of them.
Change:
LEFT JOIN LandlordOnlineRef AS LLRef ON LLRef.refno = Ref.refno
to:
LEFT JOIN (SELECT refno
                , othercolumns you need
             FROM LandlordOnlineRef
            GROUP BY refno, othercolumn) AS LLRef
       ON LLRef.refno = Ref.refno
I'm not sure which columns you'll want to include here, but at any table level you can change that table to a subquery to eliminate duplicate rows before the join. As MarkBannister says, you'll need some logic to identify a unique refno within LLRef. You can also use a date column for 'most recent', or any other logic you can think of, to get back a unique LLRef row and the info related to that record; see the sketch below.
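For instance, a 'most recent per refno' version using the startdate column (this assumes startdate is a sensible way to pick the latest row; swap in whatever column fits your data):
LEFT JOIN (
    SELECT l.*
    FROM LandlordOnlineRef AS l
    JOIN (
        -- newest startdate per reference number
        SELECT refno, MAX(startdate) AS max_start
        FROM LandlordOnlineRef
        GROUP BY refno
    ) AS latest
      ON latest.refno = l.refno
     AND latest.max_start = l.startdate
) AS LLRef ON LLRef.refno = Ref.refno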
ugh, auto spell check is changing refno to refine. ha
So I have a 560 MB database, with the largest table at 500 MB (over 10 million rows).
My query has to join 5 tables and takes about 10 seconds to finish...
SELECT DISTINCT trips.tripid AS tripid,
stops.stopdescrption AS "perron",
Date_format(segments.segmentstart, "%H:%i") AS "time",
Date_format(trips.tripend, "%H:%i") AS "arrival",
Upper(routes.routepublicidentifier) AS "lijn",
plcend.placedescrption AS "destination"
FROM calendar
JOIN trips
ON calendar.vsid = trips.vsid
JOIN routes
ON routes.routeid = trips.routeid
JOIN places plcstart
ON plcstart.placeid = trips.placeidstart
JOIN places plcend
ON plcend.placeid = trips.placeidend
JOIN segments
ON segments.tripid = trips.tripid
JOIN stops
ON segments.stopid = stops.stopid
WHERE stops.stopid IN ( 43914, 23899, 23925, 23908,
23913, 19899, 23871, 43902,
23876, 25563, 18956, 19912,
23889, 23861, 23879, 23884,
23856, 19920, 19898, 23916,
23894, 20985, 23930, 20932,
20986, 22434, 20021, 19893,
19903, 19707, 19935 )
AND calendar.vscdate = Str_to_date('25-10-2011', "%e-%c-%Y")
AND segments.segmentstart >= Str_to_date('15:56', "%H:%i")
AND routes.routeservicetype = 0
AND segments.segmentstart > "00:00:00"
ORDER BY segments.segmentstart
What are some things I can do to speed this up? Any tips are welcome; I'm pretty new to SQL...
but I can't change the structure of the db because it's not mine...
Use EXPLAIN to find the bottlenecks: http://dev.mysql.com/doc/refman/5.0/en/explain.html
Then perhaps, add indexes.
If you don't need to select ALL rows, use LIMIT to limit returned result count.
Just looking at the query, I would say that you should make sure that you have indexes on trips.vsid, calendar.vscdate, segments.segmentstart and routes.routeservicetype. I assume that there are already indexes on all the primary keys in the tables.
Using EXPLAIN, as Briedis suggested, will show you how well the indexes work.
You might want to add covering indexes for some tables, for example an index on trips.vsid that also includes tripid and routeid. That way the database can serve that data from the index alone, without having to read the actual table.
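For example, something like this (the index name is a placeholder):
-- join column first, then tripid and routeid so those joins can be resolved from the index
ALTER TABLE trips ADD INDEX idx_trips_vsid_cover (vsid, tripid, routeid);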
Edit:
The execution plan tells you that it successfully uses indexes for everything except the segments table, where it does a table scan and filters by the where condition. You should try to make a covering index for segments.segmentstart by including tripid and stopid.
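A sketch of that index (the name is a placeholder):
-- lets the segmentstart filter plus the tripid/stopid joins be resolved from the index
ALTER TABLE segments ADD INDEX idx_segments_start_cover (segmentstart, tripid, stopid);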
Try adding a clustered index to the routes table on both routeservicetype and routeid.
Depending on the frequency of the data within the routeservicetype field, you may get an improvement by shrinking the amount of data being compared in the join to the trips table.
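MySQL won't let you add a separate clustered index (InnoDB clusters only on the primary key), so as an ordinary composite index the suggestion would look something like this (the name is a placeholder):
-- filter column first, then the join column to trips
ALTER TABLE routes ADD INDEX idx_routes_type_route (routeservicetype, routeid);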
Looking at the explain plan, you may also want to force the sequence of the table usage by using STRAIGHT_JOIN instead of JOIN (or INNER JOIN), as I've had real improvements with this technique.
Essentially, put the table with the smallest row-count of extracted data at the beginning of the query, and the largest row count table at the end (in this case possibly the segments table?), with the exception of simple lookups (eg. for descriptions).
You may also consider altering the WHERE clause to filter the segments table on stopid instead of the stops table, and creating a clustered index on the segments table on (stopid, tripid and segmentstart) - this index will be effectively able to satisfy two joins and two where clauses from a single index...
To build the index...
ALTER TABLE segments ADD INDEX idx_qry_helper ( stopid, tripid, segmentstart );
And the altered WHERE clause...
WHERE segments.stopid IN ( 43914, 23899, 23925, 23908,
23913, 19899, 23871, 43902,
23876, 25563, 18956, 19912,
23889, 23861, 23879, 23884,
23856, 19920, 19898, 23916,
23894, 20985, 23930, 20932,
20986, 22434, 20021, 19893,
19903, 19707, 19935 )
:
:
At the end of the day, a 10 second response for what appears to be a complex query on a fairly large dataset isn't all that bad!
This particular query pops up in the slow query log all the time for me. Any way to improve its efficiency?
SELECT
mov_id,
mov_title,
GROUP_CONCAT(DISTINCT genres.genre_name) as all_genres,
mov_desc,
mov_added,
mov_thumb,
mov_hits,
mov_numvotes,
mov_totalvote,
mov_imdb,
mov_release,
mov_type
FROM movies
LEFT JOIN _genres
ON movies.mov_id = _genres.gen_movieid
LEFT JOIN genres
ON _genres.gen_catid = genres.grenre_id
WHERE mov_status = 1 AND mov_incomplete = 0 AND mov_type = 1
GROUP BY mov_id
ORDER BY mov_added DESC
LIMIT 0, 20;
My main concern is the GROUP_CONCAT function, which outputs a comma-separated list of the genres associated with a particular film; I then run it through a loop to make clickable links.
Do you need the genre names? If you can do with just the genre_id, you can eliminate the second join. (You can fill in the genre name later, in the UI, using a cache).
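A sketch of that variant, keeping only the first join and concatenating the ids instead of the names (this assumes ids alone are enough for the UI to build the links):
SELECT mov_id,
       mov_title,
       GROUP_CONCAT(DISTINCT _genres.gen_catid) AS all_genre_ids, -- ids only; names resolved later from a cache
       mov_desc,
       mov_added,
       mov_thumb,
       mov_hits,
       mov_numvotes,
       mov_totalvote,
       mov_imdb,
       mov_release,
       mov_type
FROM movies
LEFT JOIN _genres ON movies.mov_id = _genres.gen_movieid
WHERE mov_status = 1 AND mov_incomplete = 0 AND mov_type = 1
GROUP BY mov_id
ORDER BY mov_added DESC
LIMIT 0, 20;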
What indexes do you have?
You probably want
create index idx_movies on movies
(mov_added, mov_type, mov_status, mov_incomplete)
and most certainly the join index
create index ind_genres_movies on _genres
(gen_movieid, gen_catid)
Can you post the output of EXPLAIN? i.e. put EXPLAIN in front of the SELECT and post the results.
I've had quite a few wins with using SELECT STRAIGHT_JOIN and ordering the tables according to their size.
STRAIGHT_JOIN stops MySQL from guessing which order to join the tables in and does it in the order specified, so if you put your smallest table first you can reduce the number of rows being joined.
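Purely as a sketch of the syntax (STRAIGHT_JOIN as a SELECT modifier tells MySQL to join the tables in the order they are written):
SELECT STRAIGHT_JOIN
       mov_id,
       mov_title,
       GROUP_CONCAT(DISTINCT genres.genre_name) AS all_genres
       -- remaining mov_* columns as in the original query
FROM movies
LEFT JOIN _genres ON movies.mov_id = _genres.gen_movieid
LEFT JOIN genres ON _genres.gen_catid = genres.grenre_id
WHERE mov_status = 1 AND mov_incomplete = 0 AND mov_type = 1
GROUP BY mov_id
ORDER BY mov_added DESC
LIMIT 0, 20;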
I'm assuming you have indexes on mov_id, gen_movieid, gen_catid and grenre_id?
The 'Using temporary; Using filesort' comes from the GROUP_CONCAT(DISTINCT genres.genre_name).
Trying to get a distinct on a column without an index will cause a temp table to be used.
Try adding an index on the genre_name column.
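For example (the index name is a placeholder):
-- index on genre_name, as suggested above
ALTER TABLE genres ADD INDEX idx_genre_name (genre_name);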