I have a left join:
$query = "SELECT a.`id`, a.`documenttitle`, a.`committee`, a.`issuedate`, b.`tagname`
FROM `#__document_management_documents` AS a
LEFT JOIN `#__document_managment_tags` AS b
ON a.id = b.documentid
".$tagexplode."
".$issueDateText."
AND a.committee in (".$committeeQueryTextExplode.")
AND a.documenttitle LIKE '".$documentNameFilter."%'
GROUP BY a.id ORDER BY a.documenttitle ASC
";
It's really slow, about 7 seconds on 4,000 records.
Any ideas what I might be doing wrong? Here is the query with the PHP variables expanded:
SELECT a.`id`, a.`documenttitle`, a.`committee`, a.`issuedate`, b.`tagname`
FROM `w4c_document_management_documents` AS a
LEFT JOIN `document_managment_tags` AS b
ON a.id = b.documentid WHERE a.issuedate >= ''
AND a.committee in ('1','8','9','10','11','12','13','16','17','18','19','20','21','22','23','24','25','26','27','28','29','30','31','32','33','34','35','36','37','38','39','40','41','42','43','44','45','46','47')
AND a.documenttitle LIKE '%' GROUP BY a.id ORDER BY a.documenttitle ASC
I would put an index on a.committee and a FULLTEXT index on the documenttitle column. The IN and LIKE are immediate red flags to me. issuedate should also have an index, because you are comparing against it with >=.
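A sketch of those indexes, assuming the w4c_ table prefix from the expanded query (index names are illustrative):
-- Index for the IN (...) filter on committee
ALTER TABLE w4c_document_management_documents ADD INDEX idx_committee (committee);
-- Index for the >= comparison on issuedate
ALTER TABLE w4c_document_management_documents ADD INDEX idx_issuedate (issuedate);
-- FULLTEXT index on the title column; note a LIKE 'prefix%' search can
-- already use a plain index, FULLTEXT pays off with MATCH ... AGAINST
ALTER TABLE w4c_document_management_documents ADD FULLTEXT INDEX ft_title (documenttitle);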
Try running the following commands in a MySQL client:
show index from #__document_management_documents;
show index from #__document_managment_tags;
Check to see if there are keys/indexes on the id and documentid fields in the respective tables. If there aren't, MySQL will be doing a full table scan to look up the values. Creating indexes on these fields makes the search time logarithmic, because the values are sorted in a B-tree which is stored in the index file. Even better is to use primary keys (if possible), because that way the row data is stored in the leaf, which saves MySQL another I/O operation to look up the data.
It could also simply be that the IN and >= operators have bad performance, in which case you might have to rewrite your queries or redesign your tables.
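For example, assuming the #__ prefix resolves to w4c_ as in the expanded query, the check-and-add flow could look like this (index name illustrative):
-- See what indexes the tags table already has
SHOW INDEX FROM w4c_document_managment_tags;
-- If documentid is not indexed, add an index so the join stops scanning the whole table
ALTER TABLE w4c_document_managment_tags ADD INDEX idx_documentid (documentid);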
As mentioned above, check whether your columns have indexes. You can also put EXPLAIN in front of your query in your MySQL client to see whether the query is actually using them; look at the 'key' and 'Extra' columns. The MySQL documentation on EXPLAIN has more information.
This will help you optimize your query. Also, GROUP BY causes 'Using temporary; Using filesort', which makes MySQL create a temporary table and walk through its rows. If you can do the grouping in PHP instead, it will be faster.
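For example, running EXPLAIN against the expanded query above shows which indexes, if any, get picked:
EXPLAIN SELECT a.id, a.documenttitle, a.committee, a.issuedate, b.tagname
FROM w4c_document_management_documents AS a
LEFT JOIN document_managment_tags AS b ON a.id = b.documentid
WHERE a.issuedate >= ''
  AND a.committee IN ('1', '8', '9')  -- full list trimmed for readability
  AND a.documenttitle LIKE '%'
GROUP BY a.id
ORDER BY a.documenttitle ASC;
-- key column: the index actually chosen per table (NULL means full scan)
-- Extra column: watch for "Using temporary; Using filesort"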
I have a MySql query that looks like the following:
SELECT trace_access.Employe_Code, trace_access.Employe_Prenom,
trace_access.Employe_Nom, trace_access.Evenement_Date, trace_access.Evenement_Heure
FROM trace_access
INNER JOIN emp
ON trace_access.Employe_Code = emp.Employe_CodeEmploye
LEFT JOIN user
ON emp.Employe_ID = user.Employe_ID
LEFT JOIN role
ON role.User_ID = user.User_ID
WHERE trace_access.Employe_Nom NOT LIKE 'TEST%NU'
ORDER BY trace_access.Evenement_Date DESC , trace_access.Evenement_Heure DESC
The table "trace_access"contains almost 20 million entries.
When I EXPLAIN the query: [EXPLAIN output screenshot]
My question is: why doesn't MySQL use the key on the emp table, and how can I avoid "Using temporary; Using filesort"?
I have tried to force it to use my index, but that didn't work.
The query runs for over an hour and writes a file to /tmp that exceeds 8 GB!
Any help? Thanks a lot.
With your current execution plan (starting with emp), a full table scan is required because there is no useful index on emp. Starting with emp can make sense when the join to trace_access leaves only a relatively small number of rows, which probably isn't the case here.
To prevent the filesort, you need an index that supports your order by. So add, if it doesn't exist yet, the index trace_access(Evenement_Date, Evenement_Heure).
This might already be enough to make MySQL start with trace_access. If not, replace INNER JOIN emp with STRAIGHT_JOIN emp. This will force MySQL to do so.
Also add an index for emp(Employe_CodeEmploye).
Depending on your data, you can try to add Employe_Code and/or Employe_Nom as 3rd and/or 4th column to the index on trace_access, though it will probably not have much effect.
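Put together, the suggestions above look something like this (index names are illustrative):
-- Support the ORDER BY so the filesort can be avoided
ALTER TABLE trace_access ADD INDEX idx_evenement (Evenement_Date, Evenement_Heure);
-- Support the join into emp
ALTER TABLE emp ADD INDEX idx_code_employe (Employe_CodeEmploye);
-- If MySQL still refuses to start with trace_access, force the join order
SELECT trace_access.Employe_Code, trace_access.Employe_Prenom,
       trace_access.Employe_Nom, trace_access.Evenement_Date, trace_access.Evenement_Heure
FROM trace_access
STRAIGHT_JOIN emp ON trace_access.Employe_Code = emp.Employe_CodeEmploye
LEFT JOIN user ON emp.Employe_ID = user.Employe_ID
LEFT JOIN role ON role.User_ID = user.User_ID
WHERE trace_access.Employe_Nom NOT LIKE 'TEST%NU'
ORDER BY trace_access.Evenement_Date DESC, trace_access.Evenement_Heure DESC;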
I have the following tables in production, with their respective record counts:
people count= '565367'
donors count= '556325'
telerec_recipients count= '115147'
person_addresses count= '563183'
person_emails count= '106958'
person_phones count= '676474'
person_sms count= '22275'
On the UI end I want to display some data by applying joins or left joins, then grouping, ordering, and paginating to generate a JSON response.
To achieve this I tried two methods: first, creating a view of the join queries and applying the WHERE, GROUP BY, and ORDER BY clauses inside the controller; second, firing the Laravel join syntax directly in my controller.
Because I have a lot of data in all the tables, the join query works fine, but as soon as the GROUP BY comes in it takes a long time to execute.
Help me optimize or speed up this process.
My Example Query is:
SELECT people.id as person_id,donors.donor_id,telerec_recipients.id as telerec_recipient_id,people.first_name,people.last_name,people.birth_date,
person_addresses.address_id,person_emails.email_id,person_phones.phone_id,person_sms.sms_id
FROM `people`
inner join `donors`
on `people`.`id` = `donors`.`person_id`
left join `telerec_recipients`
on `people`.`id` = `telerec_recipients`.`person_id`
left join `person_addresses`
on `people`.`id` = `person_addresses`.`person_id`
left join `person_emails`
on `people`.`id` = `person_emails`.`person_id`
left join `person_phones`
on `people`.`id` = `person_phones`.`person_id`
left join `person_sms`
on `people`.`id` = `person_sms`.`person_id`
GROUP BY `people`.`id`, `telerec_recipients`.`id`
ORDER BY `last_name` ASC LIMIT 25 offset 0;
Result of the EXPLAIN: [screenshot]
Indexes on the donors table: [screenshot]
The problem is that MySQL uses the wrong index on the donors table; you need to instruct it, via the FORCE INDEX hint, to use the primary key instead.
Explanation
From the output of the EXPLAIN it is clear that something really goes wrong with the donors table, since you can see 'Using temporary; Using filesort' in its Extra column.
The key column tells you that MySQL uses a unique index on the donors.donor_id field. However, donor_id is not used anywhere in the query for joining, grouping, or filtering; donors.person_id is the field used in the join. Since the primary key of the donors table is on person_id, MySQL should use that for the join. The FORCE INDEX hint tells MySQL that you firmly believe a certain index should be used.
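A minimal sketch of the hint applied to the query above (the remaining left joins are unchanged and omitted here for brevity):
SELECT people.id AS person_id, people.last_name, donors.donor_id
FROM people
-- FORCE INDEX steers MySQL to the primary key on donors.person_id
INNER JOIN donors FORCE INDEX (PRIMARY)
    ON people.id = donors.person_id
LEFT JOIN telerec_recipients
    ON people.id = telerec_recipients.person_id
GROUP BY people.id, telerec_recipients.id
ORDER BY last_name ASC LIMIT 25 OFFSET 0;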
Note 1:
You have several duplicate indexes. Remove all duplicates; they are only slowing your system down.
Note 2:
I think your query goes against the SQL standard, because you have fields in the select list that are neither in the GROUP BY list, nor subject to an aggregate function such as SUM(), nor functionally dependent on the grouping fields. Under certain sql_mode settings MySQL allows such queries to run, but they may not produce quite the output you would like.
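For instance, on MySQL 5.7+ with the default ONLY_FULL_GROUP_BY mode, columns coming from the left-joined tables would have to be wrapped in ANY_VALUE() (or properly aggregated) to make the arbitrary pick explicit; a trimmed sketch:
SELECT people.id AS person_id,
       people.first_name,                             -- fine: functionally dependent on people.id
       ANY_VALUE(person_phones.phone_id) AS phone_id  -- arbitrary pick made explicit
FROM people
LEFT JOIN person_phones ON people.id = person_phones.person_id
GROUP BY people.id;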
I have this query which basically goes through a bunch of tables to get formatted results, but I can't seem to find the bottleneck. The most obvious one was the ORDER BY RAND(), but the performance is still bad without it.
The query takes 10 to 20 seconds without ORDER BY RAND():
SELECT
c.prix AS prix,
ST_X(a.point) AS X,
ST_Y(a.point) AS Y,
s.sizeFormat AS size,
es.name AS estateSize,
c.title AS title,
DATE_FORMAT(c.datePub, '%m-%d-%y') AS datePub,
dbr.name AS dateBuiltRange,
m.myId AS meuble,
c.rawData_id AS rawData_id,
GROUP_CONCAT(img.captionWebPath) AS paths
FROM
immobilier_ad_blank AS c
LEFT JOIN PropertyFeature AS pf ON (c.propertyFeature_id = pf.id)
LEFT JOIN Adresse AS a ON (c.adresse_id = a.id)
LEFT JOIN Size AS s ON (pf.size_id = s.id)
LEFT JOIN EstateSize AS es ON (pf.estateSize_id = es.id)
LEFT JOIN Meuble AS m ON (pf.meuble_id = m.id)
LEFT JOIN DateBuiltRange AS dbr ON (pf.dateBuiltRange_id = dbr.id)
LEFT JOIN ImageAd AS img ON (img.commonAd_id = c.rawData_id)
WHERE
c.prix != 0
AND pf.subCatMyId = 1
AND (
(
c.datePub > STR_TO_DATE('01-04-2016', '%d-%m-%Y')
AND c.datePub < STR_TO_DATE('30-04-2016', '%d-%m-%Y')
)
OR date_format(c.datePub, '%d-%m-%Y') = '30-04-2016'
)
AND a.validPoint = 1
GROUP BY
c.id
#ORDER BY
# RAND()
LIMIT
5000
Here is the EXPLAIN output: [screenshot]
Visual portion of the plan: [screenshot]
And here is a screenshot of mysqltuner: [screenshot]
EDIT 1: I have many indexes; here they are: [screenshot]
EDIT 2: You guys did it. Down to 0.5-2.5 seconds. I mostly followed your advice, changed some my.cnf settings, and ran OPTIMIZE TABLE on my tables.
You're searching for dates in a very suboptimal way. Try this.
... c.datePub >= STR_TO_DATE('01-04-2016', '%d-%m-%Y')
AND c.datePub < STR_TO_DATE('30-04-2016', '%d-%m-%Y') + INTERVAL 1 DAY
That allows a range scan on an index on the datePub column. You should create a compound index for that table on (datePub, prix, adresse_id, rawData_id) and see if it helps.
Also try an index on a (validPoint). Notice that your use of a geometry data type in that table is probably not helping anything.
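As a sketch (index names illustrative, column names as they appear in the query):
-- Compound index: the datePub range scan can then also cover the prix filter
ALTER TABLE immobilier_ad_blank
  ADD INDEX idx_datepub_prix (datePub, prix, adresse_id, rawData_id);
-- Index for the validPoint filter
ALTER TABLE Adresse ADD INDEX idx_validpoint (validPoint);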
To begin with, you have quite a lot of indexes, but many of them are not useful. Remember: more indexes mean slower inserts and updates. Also, MySQL is not good at using more than one index per table in complex queries. The following indexes have a cardinality < 10 and should probably be dropped:
IDX_...E88B
IDX....62AF
IDX....7DEE
idx2
UNIQ...F210
UNIQ...F210..
IDX....0C00
IDX....A2F1
At this point I got tired of the exercise; there are many more.
Then you have some duplicated data.
point
lat
lng
The point field already contains the lat and lng, so the latter two are not needed. That means you can lose two more indexes, idxlat and idxlng. (I am not quite sure how idxlng manages to appear twice in the index list for the same table.)
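A sketch of that cleanup, assuming lat, lng and their indexes live on the Adresse table alongside point:
-- The coordinates can always be read back from the geometry column
SELECT ST_X(point), ST_Y(point) FROM Adresse LIMIT 1;
-- So the duplicated columns and their indexes can go
ALTER TABLE Adresse
  DROP INDEX idxlat,
  DROP INDEX idxlng,
  DROP COLUMN lat,
  DROP COLUMN lng;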
These optimizations will lead to an overall increase in performance for INSERTS and UPDATES and possibly for all SELECTs as well because the query planner needs to spend less time deciding which index to use.
Then we notice from your EXPLAIN that the query does not use any index on table Adresse (a), yet your WHERE clause has a.validPoint = 1; clearly you need an index on it, as suggested by @Ollie-Jones.
However, I suspect that this index may have low cardinality. In that case I recommend that you create a composite index on this column plus another, for example as sketched below.
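For example (the second column is an illustrative choice; pick one your queries actually filter or join on):
-- A low-cardinality flag alone filters poorly; pairing it with a more
-- selective column makes the composite index worth using
ALTER TABLE Adresse ADD INDEX idx_validpoint_id (validPoint, id);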
The problem is your join with (a). The table has an index, but the index can't be used, more than likely due to the sort (/group by), or possibly incompatible types. The EXPLAIN shows three quarters of a million rows examined, which means that an index lookup was not possible.
When designing a query, look for the smallest possible result set - search by that index, and then join from there. Perhaps "c" isn't the best table for the primary query.
(You could try using FORCE INDEX (id) on table a, if it doesn't work, the error may give you more information).
As others have pointed out, you need an index on a.validPoint, but what about c.datePub, which is also used in the WHERE clause? Why not a multi-column index on (datePub, adresse_id)? The index on adresse_id is already used, so a multi-column index will be better here.
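As a sketch (index name illustrative):
ALTER TABLE immobilier_ad_blank ADD INDEX idx_datepub_adresse (datePub, adresse_id);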
SELECT SQL_NO_CACHE link.stop, stop.common_name, locality.name, stop.bearing, stop.latitude, stop.longitude
FROM service
JOIN pattern ON pattern.service = service.code
JOIN link ON link.section = pattern.section
JOIN naptan.stop ON stop.atco_code = link.stop
JOIN naptan.locality ON locality.code = stop.nptg_locality_ref
GROUP BY link.stop
The above query takes roughly 800ms - 1000ms to run.
If I append a group_concat statement the query then takes 8 - 10 seconds:
SELECT SQL_NO_CACHE link.stop, stop.common_name, locality.name, stop.bearing, stop.latitude, stop.longitude, group_concat(service.line) lines
How can I change this query so that it runs in less than 2 seconds with the group_concat statement?
SQL Fiddle: http://sqlfiddle.com/#!9/414fe
EXPLAIN statements for both queries: http://i.imgur.com/qrURgzV.png
How long does this query take?
SELECT p.section, GROUP_CONCAT(s.line)
FROM pattern p join
service s
ON p.service = s.code
GROUP BY p.section
I am thinking that you can do the group_concat() in a subquery, so the outer query does not need an aggregation. This can speed up queries when there is one table in the subquery. In your case, there are two.
The final result would be something like the following, correlated on link.section = pattern.section:
SELECT SQL_NO_CACHE . . .,
(SELECT GROUP_CONCAT(s.line)
FROM pattern p join
service s
ON p.service = s.code
WHERE p.section = link.section
) as lines
FROM link JOIN
naptan.stop
ON stop.atco_code = link.stop JOIN
naptan.locality
ON locality.code = stop.nptg_locality_ref;
For this query, you want the following additional indexes: pattern(section, service) and service(code, line).
I don't know if this will work, but it is worth a try.
Note: this is assuming that you really don't need the group by for the rest of the columns.
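The two suggested indexes, as a sketch (names illustrative):
ALTER TABLE pattern ADD INDEX idx_section_service (section, service);
ALTER TABLE service ADD INDEX idx_code_line (code, line);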
A remark: You're using the nonstandard MySQL extension to GROUP BY. It happens to work for you because link.stop is joined to stop.atco_code, which itself is a primary key. But you need to be very careful with this.
I suggest you add some compound indexes. You join into pattern on service and join out on section, so add this index:
ALTER TABLE pattern ADD INDEX service_section (service, section, line);
This will let the query use just the index, and not have to hit the table itself to retrieve the information needed for the JOIN or your GROUP_CONCAT() operation. (You might also delete the index on just service; this new index makes it redundant.)
Similarly, you want to create an index (section, stop) on the link table, and get rid of the index on just section.
On stop, you're using most of the columns, and you already have an index (PK) on atco_code, so let this one be.
Finally, on locality put an index on (code,name).
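Sketches of those two, mirroring the pattern index above (names illustrative):
ALTER TABLE link ADD INDEX section_stop (section, stop);
ALTER TABLE locality ADD INDEX code_name (code, name);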
All this indexing monkey business should cut down the amount of work MySQL must do to satisfy your query.
Now look, as soon as you add WHERE anything = anything to the query, you may need to add a column to one or more of these indexes. You definitely should read up on multi-column indexing and grouping; good indexing is a critical success factor for your kind of data.
You should also run ANALYZE TABLE xxxx on each of your tables after inserting lots of rows, to make sure the query optimizer can see appropriate information about the content of the table and indexes.
I have a MySQL query:
SELECT DISTINCT
c.id,
c.company_name,
cd.firstname,
cd.surname,
cis.description AS industry_sector
FROM (clients c)
JOIN clients_details cd ON c.id = cd.client_id
LEFT JOIN clients_industry_sectors cis ON cd.industry_sector_id = cis.id
WHERE c.record_type='virgin'
ORDER BY date_action, company_name asc, id desc
LIMIT 30
The clients table has about 60-70k rows and has an index for 'id', 'record_type', 'date_action' and 'company_name' - unfortunately the query still takes 5+ secs to complete. Removing the 'ORDER BY' reduces this to about 30ms since a filesort is not required. Is there any way I can alter this query to improve upon the 5+ sec response time?
See: http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html
Especially:
In some cases, MySQL cannot use indexes to resolve the ORDER BY (..). These cases include the following:
(..)
You are joining many tables, and the columns in the ORDER BY are not all from the first nonconstant table that is used to retrieve rows. (This is the first table in the EXPLAIN output that does not have a const join type.)
You have an index for id, record_type, date_action. But if you want to order by date_action, you really need an index that has date_action as the first field in the index, preferably matching the exact fields in the order by. Otherwise yes, it will be a slow query.
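A sketch of such an index; note that the trailing id desc in the ORDER BY mixes sort directions, which prevents full index-based ordering before MySQL 8.0, but the equality filter on record_type still lets the index narrow the rows first:
-- record_type is an equality filter, so it can lead the index;
-- date_action and company_name then come back already sorted
ALTER TABLE clients
  ADD INDEX idx_rt_date_company (record_type, date_action, company_name);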
Without seeing all your tables and indexes, it's hard to tell. When asking a question about speeding up a query, the query is just part of the equation.
Does clients have an index on id?
Does clients have an index on record_type
Does clients_details have an index on client_id?
Does clients_industry_sectors have an index on id?
These are the minimum you need for this query to have any chance of working quickly.
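A quick way to check, plus sketches of the two most likely missing ones (names illustrative; the id columns are normally covered by their primary keys already):
SHOW INDEX FROM clients;
SHOW INDEX FROM clients_details;
SHOW INDEX FROM clients_industry_sectors;
-- Add whichever of these are missing
ALTER TABLE clients ADD INDEX idx_record_type (record_type);
ALTER TABLE clients_details ADD INDEX idx_client_id (client_id);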
Thanks so much for the input and suggestions. In the end I decided to create a new DB table whose sole purpose is to return results for this query, so no joins are required; I just update the table when records are added to or deleted from the master clients table. Not ideal from a data storage point of view, but it solves the problem and I'm getting results fantastically fast. :)