mysql query optimization - order by + indexes

I'm trying to learn something about optimizing and indexes because I ran an INSERT ... SELECT query that required 4 min to complete. Now I've added multiple indexes and it seems to have made my query run in 0.160 sec. What I'm wondering is why the customer table is getting the "Using filesort" message when I'm ordering by orderdate in my order table. Query and explain:
I've even tried an index on O (the Orders table) for (orderdate, orderid) and for (orderdate, orderid, customerid). I thought one of them would help, but no dice. Can anyone help me understand why?

There is nothing wrong with having a query that uses "filesort"; all that means is that the results can't be sorted based on an index.
The reason the sort can't be performed using an index in this case is that your ORDER BY contains columns from tables other than the first table in the join queue.
What happens is that as the query results are fetched, they are put into a temporary table so they can be sorted later. Since your query result doesn't contain very many rows, that temporary table is probably held in memory.
Adding the initial indexes sped up your query most likely because MySQL was previously doing a full table scan to fetch the results, which was very time-consuming. Once you added the proper indexes, finding the records is extremely quick. It probably had to do a filesort on a temporary table originally too, but that step was likely neither slower nor faster than it is now.
If you try moving the join for the Orders table so it comes before the join of the Products table, you may be able to eliminate the use of the temporary table and filesort; a rough sketch of the idea follows.
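The original query isn't shown, so this is only a hypothetical shape (table names, aliases, and columns are assumptions based on the post):
-- Hypothetical: with Orders read first in the join order,
-- an index on Orders(orderdate) can satisfy the ORDER BY directly.
SELECT c.customerid, o.orderid, o.orderdate
FROM Orders AS o
STRAIGHT_JOIN Customer AS c ON c.customerid = o.customerid
ORDER BY o.orderdate;
STRAIGHT_JOIN is one way to pin that table order; whether it actually removes the filesort depends on the full query.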
Check out "What does Using filesort mean?" and "How MySQL Uses Internal Temporary Tables" for more information.

Related

MySQL performance tuning for DELETE query

Can anyone help me rewrite the query to speed up the execution time? It took 37 seconds to execute.
DELETE FROM storefront_categories
WHERE userid IN (SELECT userid
                 FROM MASTER
                 WHERE expirydate < '2020-2-4')
At the same time, this query took only 4.69 seconds to execute.
DELETE FROM storefront_categories
WHERE userid NOT IN (SELECT userid FROM MASTER)
The table storefront_categories has 97K records, whereas MASTER has 40K records. We have created an index on the MASTER.expirydate field.
When deleting 40K rows, expect it to take time. The main cost (assuming adequate indexing and a decent query) is the overhead of the transactional semantics of an "atomic" delete. This involves making a copy of each row being deleted, just in case there is a crash; that way, InnoDB can bring the database back to what it had been before the crash.
When deleting 40% of a table, it is much faster to copy the rows you want to keep into another table, then swap the tables.
When deleting a large number of rows (regardless of the percentage), it is better to do it in chunks. And it is best to walk through the table based on the PRIMARY KEY.
I discuss both of those techniques, plus others, in http://mysql.rjweb.org/doc.php/deletebig
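For instance, the chunked approach could look like this sketch (the 1000-row batch size is an arbitrary assumption; the blog's PRIMARY KEY walk is more robust):
-- Run repeatedly until it affects 0 rows; each pass deletes one small
-- batch, which keeps the undo log small.
DELETE FROM storefront_categories
WHERE userid IN (SELECT userid FROM MASTER WHERE expirydate < '2020-02-04')
LIMIT 1000;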
As for the query formulation:
It is version-dependent; old versions of MySQL did a poor job on some flavors.
NOT IN (SELECT ...) and NOT EXISTS tend to be the worst performers.
IN (SELECT ...) and/or EXISTS may be better.
"Multi-table DELETE is another option. It works like JOIN.
(Bottom line: You did not say what version you are running; I can't predict which formulation will be best.)
My blog avoids the formulation debate.
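For reference, a multi-table DELETE for this case might be sketched as follows (column names taken from the question; verify against your schema before running):
-- Deletes storefront_categories rows whose user has an expired MASTER row
DELETE sc
FROM storefront_categories AS sc
JOIN MASTER AS m ON m.userid = sc.userid
WHERE m.expirydate < '2020-02-04';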
The query looks fine as it is.
I would suggest the following indexes for optimization:
MASTER(expirydate, userid)
storefront_categories(userid)
The first index is a covering index for the subquery on MASTER: it means the database should be able to execute the subquery by looking at the index only (whereas with just expirydate in the index, it would still need to look at the table data to fetch the related userid).
The second index lets the database optimize the IN operation.
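In MySQL, those two indexes could be created like this (index names are arbitrary):
CREATE INDEX idx_master_expirydate_userid ON MASTER (expirydate, userid);
CREATE INDEX idx_sc_userid ON storefront_categories (userid);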
I would try with EXISTS:
DELETE FROM storefront_categories
WHERE EXISTS (SELECT 1
              FROM MASTER M
              WHERE M.userid = storefront_categories.userid
                AND M.expirydate < '2020-02-04');
Indexes would matter here; I would expect indexes on storefront_categories(userid) and MASTER(userid, expirydate).
I would advise you to use NOT EXISTS with the correct index:
DELETE sc
FROM storefront_categories sc
WHERE NOT EXISTS (SELECT 1
                  FROM master m
                  WHERE m.userid = sc.userid
                    AND m.expirydate < '2020-02-04');
The index you want is on master(userid, expirydate). The order of the columns is important. For this version, an index on storefront_categories does not help.
Note that I changed the date format. I recommend using YYYY-MM-DD to avoid ambiguity -- and to use the full 10 characters.

10000 rows - selecting according to where clause - performance

I have a database table with 10000 rows in it and I'd like to select a few thousand items using something like the following:
SELECT id FROM models WHERE category_id = 2
EDIT: The id column is the primary index of the table. Also, the table has another index on category_id.
My question is what would be the impact on performance? Would the query run slow? Should I consider splitting my models table into separate tables (one table for each category)?
This is what database engines are designed for. Just make sure you have an index on the column that you're using in the WHERE clause.
You can try this to get the first 100 records:
SELECT id FROM models WHERE category_id = 2 LIMIT 100
Also, you can create an index on that column for fast retrieval of the results:
ALTER TABLE `models` ADD INDEX `category_id` (`category_id`)
EDIT:
If you have indexes created on your columns, then you don't have to worry about the performance; database engines are smart enough to take care of it.
My question is what would be the impact on performance? Would the query run slow? Should I consider splitting my models table into separate tables?
No, you don't have to split your tables, as that would not help you gain performance.
I'm fairly new to SQL; however, I would first index the column.
I agree with R.T.'s solution. In addition, I can recommend the link below:
https://indexanalysis.codeplex.com/
Download the SQL code. It's a stored procedure that helps me a lot when I want to analyze the impact of indexes or what status they have in my database.
Please check.

Query execution taking long

I am currently using MySQL.
I have two tables called person and zim_list_id; both tables have over 2 million rows.
I want to update the person table using the zim_list_id table.
The query I am using is:
UPDATE person p
JOIN zim_list_id z ON p.person_id = z.person_id
SET p.office_name = z.`Office Name`;
I have also created indexes on the zim_list_id table and the person table; the queries I executed were:
create index idx_person_office_name on person(`Office_name`);
create index idx_zim_list_id_office_name on zim_list_id(`Office name`);
The query execution is taking very long. Is there any way to reduce the execution time?
The indexes on Office Name do nothing at all for this query. All you've done with those indexes is make inserts and updates slower, as now the database has to update the index any time that column changes.
What you really need, if you don't already have them, are indexes on the person_id field in those tables, to make the join more efficient.
You might also consider adding Office Name as a second column in the zim_list_id table's person_id index, as this will allow the database to fulfill that part of the query entirely from the index. But I wouldn't do that until I had checked the results after setting the plain person_id indexes first.
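A sketch of those indexes (index names are arbitrary; this assumes person_id is not already the primary key of either table):
-- Make the join lookups fast
CREATE INDEX idx_zim_person ON zim_list_id (person_id);
CREATE INDEX idx_person_person ON person (person_id);
-- Covering variant suggested above (would replace idx_zim_person):
-- CREATE INDEX idx_zim_person_office ON zim_list_id (person_id, `Office Name`);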
Finally, I'm curious how much memory is in that server (especially relative to the total size of the database), how much of it is allocated to your innodb_buffer_pool_size setting, and what other work that server might be doing... there could always be an environmental factor as well.
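You can check the current allocation with:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';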

Most efficient query to get last modified record in large table

I have a table with a large number of records ( > 300,000). The most relevant fields in the table are:
CREATE_DATE
MOD_DATE
Those are updated every time a record is added or updated.
I now need to query this table to find the date of the record that was modified last. I'm currently using
SELECT mod_date FROM table ORDER BY mod_date DESC LIMIT 1;
But I'm wondering if this is the most efficient way to get the answer.
I've tried adding a where clause to limit the date to the last month, but it looks like that's actually slower (and I need the most recent date, which could be older than the last month).
I've also tried the suggestion I read elsewhere to use:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'db'
AND TABLE_NAME = 'table';
But since I might be working on a dump of the original, that query might result in NULL. And it looks like it is actually slower than the original query.
I can't resort to last_insert_id() because I'm not updating or inserting.
I just want to make sure I have the most efficient query possible.
The most efficient way for this query would be to use an index for the column MOD_DATE.
From How MySQL Uses Indexes (section 8.3.1):
"Indexes are used to find rows with specific column values quickly. Without an index, MySQL must begin with the first row and then read through the entire table to find the relevant rows. The larger the table, the more this costs. If the table has an index for the columns in question, MySQL can quickly determine the position to seek to in the middle of the data file without having to look at all the data. If a table has 1,000 rows, this is at least 100 times faster than reading sequentially."
You can use
SHOW CREATE TABLE `table`;
to get the CREATE statement and see if an index on MOD_DATE is defined.
To add an index, you can use CREATE INDEX:
CREATE [UNIQUE|FULLTEXT|SPATIAL] INDEX index_name
    [index_type]
    ON tbl_name (index_col_name,...)
    [index_option]
    [algorithm_option | lock_option] ...
see http://dev.mysql.com/doc/refman/5.6/en/create-index.html
Make sure that both of those fields are indexed.
Then I would just run -
select max(mod_date) from table
or create_date, whichever one you need.
Make sure to create 2 indexes, one on each date field, not a compound index on both.
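Using the placeholder names from the question (index names are arbitrary), that would be:
CREATE INDEX idx_mod_date ON `table` (mod_date);
CREATE INDEX idx_create_date ON `table` (create_date);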
As for a discussion of the difference between this and using limit, see MIN/MAX vs ORDER BY and LIMIT
Use EXPLAIN:
http://dev.mysql.com/doc/refman/5.0/en/explain.html
This tells you how MySQL executes the statement; with that, you can figure out the most efficient approach, because it depends on your DB structure and there is no single universal solution.

MySQL group by query complexity analysis

What is the complexity of the "group by" statement in MySQL?
I am managing very big tables, and I would also like to know if there is any method to calculate how much time a query is going to take.
This question is impossible to answer without knowledge of what the entire query looks like. Some GROUP BYs can be prohibitively expensive while others are very cheap; it all depends on how the indexes in the database are set up, whether the value you group by can be cached, etc.
For example, this is a very cheap GROUP BY:
CREATE TABLE t (a INT, KEY(a));
SELECT * FROM t WHERE 1 GROUP BY a;
since a is indexed.
But for something like this it's very expensive, since it would require a table scan:
CREATE TABLE t (a INT);
SELECT * FROM t WHERE 1 GROUP BY a;
Generally, if a key is not available, the database will create a temporary table in memory for the GROUP BY clause, go through all the values, and insert each value into the temporary table with an index to the corresponding row in the result set; then it will select from the temporary table, pick the first row from each group, and send that back as the result. Depending on whether you use "extra" values per group (i.e. MAX(), GROUP_CONCAT(), or similar), it will need to fetch all rows again.
You can use EXPLAIN to figure out what strategy MySQL will use. The 'Extra' column will contain (in ascending order of cost to execute) 'Using index' if an index can be used, 'Using temporary' if a temporary table will be required, and 'Using filesort' if reading all rows from disk will be necessary.
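For example, against the two tables above (the Extra values shown are what one would typically expect; exact plans vary by MySQL version):
EXPLAIN SELECT a FROM t GROUP BY a;
-- With KEY(a):    Extra typically shows "Using index"
-- Without KEY(a): Extra typically shows "Using temporary; Using filesort"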