MySQL query optimization - mysql

I have a join query that seems to fetch slowly. How can I optimize it, or is this
reasonable?
Time to execute: 29 rows total, query took 1.6956 sec
The MySQL query:
SELECT SQL_CALC_FOUND_ROWS
t2.AuctionID ,t2.product_name ,t3.user_name ,t1.date_time ,t1.owned_price
,t2.specific_product_id
FROM table_user_ownned_auction AS t1
INNER JOIN table_product AS t2 ON t1.specific_product_id=t2.specific_product_id
INNER JOIN table_user_information AS t3 ON t3.user_id=t1.user_id
ORDER BY ownned_id DESC
Here's the EXPLAIN output (posted as a screenshot):

Looking at the EXPLAIN output, your problem is in the second line: the join with table t1.
Put an index on t1.specific_product_id and t2.specific_product_id.
The first line involves only 3 rows, so using filesort there is actually faster than using an index, because it saves on I/O time.
The following code will add an index to t2.specific_product_id.
ALTER TABLE table_product ADD INDEX spi(specific_product_id);
Because you only have 29 rows of output, using the index should make your query near-instantaneous.
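The matching index on t1 can be added the same way. A sketch, assuming the table and column names from the question (the index name is arbitrary):

```sql
-- Sketch: index the join column on t1 as well; the index name is hypothetical
ALTER TABLE table_user_ownned_auction ADD INDEX spi (specific_product_id);
```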

If you want to understand the performance of a query, just put the EXPLAIN keyword in front of it:
EXPLAIN SELECT SQL_CALC_FOUND_ROWS
t2.AuctionID, t2.product_name, t3.user_name, t1.date_time, t1.owned_price,
t2.specific_product_id
FROM table_user_ownned_auction AS t1
INNER JOIN table_product AS t2 ON t1.specific_product_id = t2.specific_product_id
INNER JOIN table_user_information AS t3 ON t3.user_id = t1.user_id
ORDER BY ownned_id DESC
It will tell you important information about your query. The most important columns are "key" and "Extra".
If "key" is NULL, you need an index - usually on the columns used in WHERE, GROUP BY, or ORDER BY clauses. "Extra" tells you about resource-consuming (CPU or memory) operations.
So, add an index on ownned_id (which I presume should be owned_id) and run EXPLAIN again, then look at the performance gain.
If you have problems, I can help you better if you paste the EXPLAIN output.
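A minimal sketch of that step, assuming the table and column names from the question:

```sql
-- Index the ORDER BY column, then re-run the EXPLAIN above to compare plans
ALTER TABLE table_user_ownned_auction ADD INDEX idx_ownned (ownned_id);
```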

Looking at your EXPLAIN output, the join type is ALL (a full table scan), which is very bad if you have more than 10,000 rows in a table. I strongly advise you to index these columns:
t1.specific_product_id
t2.specific_product_id
t3.user_id
t1.user_id
Once your tables reach 10,000 rows you should see a clear performance boost. For more information, please watch this video from 00:00 to 02:04: before indexing, the query has to scan more than 90,000 rows of data; after indexing, it examines fewer than 5. Hope it helps.
https://www.youtube.com/edit?o=U&video_id=ojyEcNMAj8k
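As a sketch, the suggested indexes could be created like this (assuming the table names from the first question; index names are hypothetical, and t3.user_id may already be the primary key, in which case skip it):

```sql
-- Index each join column listed above; skip any that is already indexed
ALTER TABLE table_user_ownned_auction
  ADD INDEX idx_spi (specific_product_id),
  ADD INDEX idx_user (user_id);
ALTER TABLE table_product ADD INDEX idx_spi (specific_product_id);
ALTER TABLE table_user_information ADD INDEX idx_user (user_id);
```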

Related

Optimize MySQL Index Multiple Table JOIN

I have 5 tables in MySQL, and when I execute my query it takes too long.
Here is the structure of my tables:
Reciept (23,799,640 rows)
reciept_goods (39,398,989 rows)
good (17,514 rows)
good_categories (121 rows)
retail_category (10 rows)
(The table structures were posted as screenshots.)
My Indexes:
Date -->reciept.date #1
reciept_goods_index --> reciept_goods.recieptId #1,
reciept_goods.shopId #2,
reciept_goods.goodId #3
category_id -->good.category_id #1
Here is my SQL query:
SELECT
R.shopId,
sales,
sum(Amount) as sum_amount,
count(distinct R.id) as count_reciept,
RC.id,
RC.name
FROM
reciept R
JOIN reciept_goods RG
ON R.id = RG.RecieptId
AND R.ShopID = RG.ShopId
JOIN good G
ON RG.GoodId = G.id
JOIN good_categories GC
ON G.category_id = GC.id
JOIN retail_category RC
ON GC.retail_category_id = RC.id
WHERE
R.date >= '2018-01-01 10:00:00'
GROUP BY
R.shopId,
R.sales,
RC.id
Running EXPLAIN on this query gives the following result (posted as a screenshot), with execution time = 236 sec.
If I instead use STRAIGHT_JOIN good ON (good.id = reciept_goods.GoodId), the EXPLAIN output changes (also a screenshot) and execution time = 31 sec:
SELECT STRAIGHT_JOIN ... rest of query
I think the problem is in the indexes of my tables, but I don't understand how to fix them. Can someone help me?
With about 2% of the rows in reciept having the matching date, the second execution plan (with STRAIGHT_JOIN) seems to be the right join order. You should be able to optimize it by adding the following covering indexes:
reciept(date, sales)
reciept_goods(recieptId, shopId, goodId, amount)
I assume that the column order in your primary key for reciept_goods is currently (goodId, recieptId, shopId) (or (goodId, shopId, recieptId)). You could change it to (recieptId, shopId, goodId) - judging by the table name, you may have wanted that order anyway - in which case you do not need the second index (at least for this query). I would assume this primary key is what made MySQL pick the slower execution plan (believing it would be faster), although sometimes it's just bad statistics, especially on a test server.
With those covering indexes, MySQL should pick the faster execution plan even without STRAIGHT_JOIN; if it doesn't, just add it again (although I would like to see both execution plans in that case). Also check that the two new indexes are used in the EXPLAIN output; otherwise I may have missed a column.
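In DDL form, the two covering indexes above would look something like this (a sketch; column names spelled as in the question, index names arbitrary). Note that with InnoDB, the primary-key columns are appended to every secondary index automatically, which helps these indexes cover the query:

```sql
-- Covering indexes for the date filter/GROUP BY and the join + SUM(Amount)
ALTER TABLE reciept       ADD INDEX idx_date_sales (date, sales);
ALTER TABLE reciept_goods ADD INDEX idx_cover (recieptId, shopId, goodId, amount);
```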
It looks like you are depending on walking through a couple of many:many tables? Many people design them inefficiently.
I have compiled a list of 7 tips on making mapping tables more efficient. The most important is the use of composite indexes.
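The composite-index idea for a many:many mapping table can be sketched like this (a hypothetical table; the point is one composite key per traversal direction, and no surrogate id column):

```sql
-- Sketch of an efficient many:many mapping table
CREATE TABLE reciept_goods_map (
  recieptId INT UNSIGNED NOT NULL,
  goodId    INT UNSIGNED NOT NULL,
  PRIMARY KEY (recieptId, goodId),  -- efficient reciept -> goods lookups
  INDEX (goodId, recieptId)         -- efficient goods -> reciept lookups
) ENGINE=InnoDB;
```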

Fast to query, slow to create table

I have an issue with creating a table using a SELECT (it runs very slowly). The query is meant to take only the details of the animal with the latest entry date; that result will then be inner joined to another query.
SELECT *
FROM amusementPart a
INNER JOIN (
SELECT DISTINCT name, type, cageID, dateOfEntry
FROM bigRegistrations
GROUP BY cageID
) r ON a.type = r.cageID
But because of the slow performance, someone suggested these steps to improve it: 1) use a temporary table, 2) store the result and join it to the other statement.
use myzoo
CREATE TABLE animalRegistrations AS
SELECT DISTINCT name, type, cageID, MAX(dateOfEntry) as entryDate
FROM bigRegistrations
GROUP BY cageID
Unfortunately, it is still slow. If I run only the SELECT statement, the result is shown in 1-2 seconds, but if I add the CREATE TABLE, the query takes ages (approx. 25 minutes).
Is there any good approach to improving the query time?
Edit: the bigRegistrations table has around 3.5 million rows.
Can you please try the query below? It matches your stated requirement ("take only the details of the animal with the latest entry date; that query will be used to inner join another query") - the query you are using does not actually fetch the records you want - and it should be faster:
SELECT a.*, b.name, b.type, b.cageID, b.dateOfEntry
FROM amusementPart a
INNER JOIN bigRegistrations b ON a.type = b.cageID
INNER JOIN (SELECT c.cageID, max(c.dateOfEntry) dateofEntry
FROM bigRegistrations c
GROUP BY c.cageID) t ON t.cageID = b.cageID AND t.dateofEntry = b.dateofEntry
Suggested indexing on cageID and dateofEntry
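A single composite index can serve both needs here: the GROUP BY cageID in the derived table and the join on (cageID, dateofEntry). A sketch, assuming the column names from the question:

```sql
-- One composite index for the GROUP BY and the (cageID, dateOfEntry) join
ALTER TABLE bigRegistrations ADD INDEX idx_cage_entry (cageID, dateOfEntry);
```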
This is a multipart question.
Use Temporary Table
Don't use DISTINCT - GROUP BY all the columns to make rows distinct (don't forget to check for an index)
Check the SQL Execution plans
Here you are not creating a temporary table. Try the following...
CREATE TEMPORARY TABLE IF NOT EXISTS animalRegistrations AS
SELECT name, type, cageID, MAX(dateOfEntry) as entryDate
FROM bigRegistrations
GROUP BY cageID
Have you tried doing an explain to see how the plan is different from one execution to the next?
Also, I have found that there can be locking issues in some DB when doing insert(select) and table creation using select. I ran this in MySQL, and it solved some deadlock issues I was having.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
The reason the query runs so slowly is probably that it builds the temp table from all 3.5 million rows, when you really only need the subset of bigRegistrations that matches your join to amusementPart. The single SELECT statement is faster because SQL is smart enough to know it only needs to compute the bigRegistrations rows where a.type = r.cageID.
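One way to apply that observation (a sketch, assuming the schema from the question) is to build the table only for cageIDs that can actually join to amusementPart:

```sql
-- Restrict the aggregation to the subset the join will keep
CREATE TEMPORARY TABLE animalRegistrations AS
SELECT name, type, cageID, MAX(dateOfEntry) AS entryDate
FROM bigRegistrations
WHERE cageID IN (SELECT type FROM amusementPart)
GROUP BY cageID;
```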
I'd suggest that you don't need a temp table; your first query is quite simple. Rather, you may just need an index. You can determine this manually by studying the estimated execution plan, or by running your query through the database tuning advisor. My guess is you need to create an index similar to the one below. Notice I put cageID first, since that is what you join to amusementPart, so it helps SQL narrow the results down the quickest. But I'm guessing a bit - view the query plan or tuning advisor to be sure.
CREATE NONCLUSTERED INDEX IX_bigRegistrations ON bigRegistrations
(cageId, name, type, dateOfEntry)
Also, if you want the animal with the latest entry date, I think you want this query instead of the one you're using. I'm assuming the PK is all 4 columns.
SELECT name, type, cageID, dateOfEntry
FROM bigRegistrations BR
WHERE BR.dateOfEntry =
(SELECT MAX(BR1.dateOfEntry)
FROM bigRegistrations BR1
WHERE BR1.name = BR.name
AND BR1.type = BR.type
AND BR1.cageID = BR.cageID)

Very slow MySQL subquery

I have two tables that each contain about 500 customer data records. Each record in each of the tables has an email field. Sometimes the same email addresses exist on both tables, sometimes not. I want to retrieve every email address on table1 that doesn't exist on table2. The email field in each table is indexed. I'm doing the select with a sub query that is really slow, 10 to 20 seconds.
select email
from
t1
where
email not in (select email from t2)
There's actually about 30K rows in each table, but I can knock it down to 500 each very quickly with an additional 'where' to filter by category. It's only when I add that subquery that it slows down dramatically. So, I am sure this can be faster, and I know a join should be much faster than the subquery, but can't figure out how to do that. I found a left outer join explanation here on SO, that looked like it should help, but got nowhere with it. Any help is appreciated.
MySQL does not optimize a subquery in the WHERE clause (edit: it re-runs the subquery for every row tested).
to convert to a JOIN, try something like
SELECT email FROM t1
LEFT JOIN t2 ON (t1.email = t2.email)
WHERE t2.email IS NULL
this should run very fast, a covering index query.
The query optimizer should walk the email index of t1, check the
email index of t2, and output those emails that are in t1 but not in t2.
Edit: I should add that MySQL does optimize a subquery in the JOIN clause: it runs the subquery, puts the results into a "derived table" (a temporary table without any indexes), and joins the derived table like any other table. The syntax is a bit funny - each derived table must have an alias, i.e. ... JOIN (SELECT ...) AS derived ON ....
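A sketch of that derived-table form applied to this same anti-join (the alias is required):

```sql
-- The subquery is materialized once as "derived", then joined like a table
SELECT t1.email
FROM t1
LEFT JOIN (SELECT email FROM t2) AS derived ON t1.email = derived.email
WHERE derived.email IS NULL;
```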
Subqueries usually do more processing than a plain query. In your case it first fetches all the emails from t2 and compares them with the email list of t1.
You can try the following anti-join instead, without an IN subquery (note that a plain t1,t2 cross join with != would not work here, since it matches every non-equal pair of rows):
SELECT t1.email FROM t1 LEFT JOIN t2 ON t1.email = t2.email WHERE t2.email IS NULL
The best way to improve the performance of SELECT operations is to create indexes on one or more of the columns that are tested in the query. The index entries act like pointers to the table rows, allowing the query to quickly determine which rows match a condition in the WHERE clause, and retrieve the other column values for those rows. All MySQL data types can be indexed.
some tricks for creating mysql tables ..
see this.
I think this should work fine
SELECT email from T1
LEFT JOIN T2
ON T1.email=T2.email
WHERE T2.email IS NULL

Optimizing MySQL Query With MAX()

Apologies if this has been asked before, but is there any way at all I can optimize this query to run faster? At the moment it takes about 2 seconds, which, while not huge, makes it the slowest query on my site; all other queries take less than 0.5 sec.
Here is my query:
SELECT SQL_CALC_FOUND_ROWS MAX(images.id) AS maxID, celebrity.* FROM images
JOIN celebrity ON images.celeb_id = celebrity.id
GROUP BY images.celeb_id
ORDER BY maxID DESC
LIMIT 0,20
Here is an explain:
id  select_type  table      type  possible_keys  key       key_len  ref                             rows  Extra
1   SIMPLE       celebrity  ALL   PRIMARY        NULL      NULL     NULL                            536   Using temporary; Using filesort
1   SIMPLE       images     ref   celeb_id       celeb_id  4        celeborama_ignite.celebrity.id  191
I'm at a loss as to how to improve this query's performance further. I'm not super familiar with MySQL, but I do know it is slow because I am sorting on the value produced by MAX(), which has no index. I can't drop that sort, since it gives me the results I need, but is there something else I can do to keep it from slowing down the query?
Thanks.
If you really need a fast solution, then don't run such queries at runtime.
Just create an additional field last_image_id in the celebrity table and update it whenever a new image is uploaded (by a trigger or by your application logic; it doesn't matter which).
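For the trigger route, a minimal sketch (it assumes a last_image_id column has already been added to celebrity; the trigger name is hypothetical):

```sql
-- Keep the denormalized column current on every new image
CREATE TRIGGER trg_images_after_insert
AFTER INSERT ON images
FOR EACH ROW
  UPDATE celebrity SET last_image_id = NEW.id WHERE id = NEW.celeb_id;
```

The runtime query then becomes a simple indexed read, e.g. SELECT * FROM celebrity ORDER BY last_image_id DESC LIMIT 20 (with an index on last_image_id).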
I would get the latest image this way:
SElECT c.*, i.id AS image_id
FROM celebrity c
JOIN images i ON i.celeb_id = c.id
LEFT OUTER JOIN images i2 ON i2.celeb_id = c.id AND i2.id > i.id
WHERE i2.id IS NULL
ORDER BY image_id DESC
LIMIT 0,20;
In other words, try to find a row i2 for the same celebrity with a higher id than i.id. If the outer join fails to find that match, then i.id must be the max image id for the given celebrity.
SQL_CALC_FOUND_ROWS can cause queries to run extremely slowly. I've found some cases where just removing the SQL_CALC_FOUND_ROWS made the query run 200x faster (but it could also make only a small difference in other cases, it depends on the table, so you should test both ways).
If you need the equivalent of SQL_CALC_FOUND_ROWS, just run a separate query:
SELECT COUNT(*) FROM celebrity;
I think you need a compound index on (celeb_id, id) in table images (supposing it's a MyISAM table), so the GROUP BY celeb_id and MAX(id) can use this index.
But with big tables, you'll probably have to follow #zerkms' advice and add a new column in table celebrity
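In DDL form, the compound index suggested above would be (a sketch; the index name is arbitrary):

```sql
-- Lets GROUP BY celeb_id read MAX(id) straight out of the index
ALTER TABLE images ADD INDEX idx_celeb_image (celeb_id, id);
```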
MySQL doesn't perform so well with joins. I would recommend dividing your query in two: first select the celebrity, then select the image. Simply avoid the join.
Check out this link - http://phpadvent.org/2011/a-stitch-in-time-saves-nine-by-paul-jones
SELECT STRAIGHT_JOIN *
FROM (
SELECT MAX(id) as maxID, celeb_id as id
FROM images
GROUP BY celeb_id
ORDER by maxID DESC
LIMIT 0, 20) as ids
JOIN celebrity USING (id);
The query does not allow the row count to be precalculated, but an additional:
SELECT COUNT(DISTINCT celeb_id)
FROM images;
or even (if each celebrity has an image):
SELECT COUNT(*) FROM celebrity;
will not cost much, because it can easily be served from the query cache (if that is not switched off).

How can I improve the performance of this MySQL query?

I have a MySQL query:
SELECT DISTINCT
c.id,
c.company_name,
cd.firstname,
cd.surname,
cis.description AS industry_sector
FROM (clients c)
JOIN clients_details cd ON c.id = cd.client_id
LEFT JOIN clients_industry_sectors cis ON cd.industry_sector_id = cis.id
WHERE c.record_type='virgin'
ORDER BY date_action, company_name asc, id desc
LIMIT 30
The clients table has about 60-70k rows and has indexes on 'id', 'record_type', 'date_action' and 'company_name' - unfortunately the query still takes 5+ seconds to complete. Removing the ORDER BY reduces this to about 30 ms, since a filesort is no longer required. Is there any way I can alter this query to improve on the 5+ second response time?
See: http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html
Especially:
In some cases, MySQL cannot use indexes to resolve the ORDER BY (..). These cases include the following:
(..)
You are joining many tables, and the columns in the ORDER BY are not all from the first nonconstant table that is used to retrieve rows. (This is the first table in the EXPLAIN output that does not have a const join type.)
You have an index for id, record_type and date_action. But if you want to order by date_action, you really need an index that has date_action as its first field, preferably matching the exact fields in the ORDER BY. Otherwise, yes, it will be a slow query.
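A sketch of such an index, assuming the column names from the question. Because the WHERE tests record_type with equality, it can come before date_action; note that the mixed directions in the full ORDER BY (company_name asc, id desc) still cannot be satisfied by a single index before MySQL 8.0:

```sql
-- Equality column first, then the leading ORDER BY column
ALTER TABLE clients ADD INDEX idx_type_date (record_type, date_action);
```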
Without seeing all your tables and indexes, it's hard to tell. When asking a question about speeding up a query, the query is just part of the equation.
Does clients have an index on id?
Does clients have an index on record_type
Does clients_details have an index on client_id?
Does clients_industry_sectors have an index on id?
These are the minimum you need for this query to have any chance of working quickly.
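For any of those that are missing, the DDL is straightforward (a sketch; the id columns are typically primary keys already and need nothing):

```sql
-- Index the join columns on the details table; index names are hypothetical
ALTER TABLE clients_details
  ADD INDEX idx_client (client_id),
  ADD INDEX idx_sector (industry_sector_id);
```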
Thanks so much for the input and suggestions. In the end I decided to create a new DB table whose sole purpose is to return results for this query, so no joins are required; I just update the table when records are added to or deleted from the master clients table. Not ideal from a data-storage point of view, but it solves the problem and I'm getting results fantastically fast. :)