Queries executing in orders [duplicate] - mysql

My thinking is that if I put the ANDs that filter out the greatest number of rows before those that filter out just a few, my query should run quicker, since the selection set is much smaller between AND statements.
But does the order of ANDs in the WHERE clause of an SQL statement really affect the performance of the SQL that much, or are the engines already optimized for this?

It really depends on the optimiser.
It shouldn't matter because it's the optimiser's job to figure out the optimal way to run your query regardless of how you describe it.
In practice, no optimiser is perfect so you might find that re-ordering the clauses does make a difference to particular queries. The only way to know for sure is to test it yourself with your own schema, data etc.

Most SQL engines are optimized to do this work for you. However, I have found situations in which trying to carve down the largest table first can make a big difference - and it doesn't hurt!

A lot depends how the indices are set up. If an index exists which combines the two keys, the optimizer should be able to answer the query with a single index search. Otherwise if independent indices exist for both keys, the optimizer may get a list of the records satisfying each key and merge the lists. If an index exists for one condition but not the other, the optimizer should filter using the indexed list first. In any of those scenarios, it shouldn't matter what order the conditions are listed.
If none of those scenarios applies, the order in which the conditions are specified may affect the order of evaluation, but since the database will have to fetch every single record to satisfy the query, the time spent fetching will likely dwarf the time spent evaluating the conditions.
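To make those scenarios concrete, here is a minimal sketch; the table, column, and index names are invented for illustration:

-- Hypothetical table for the scenarios above.
CREATE TABLE orders (
  id          INT PRIMARY KEY,
  customer_id INT,
  status      VARCHAR(20)
);

-- One composite index covering both keys: a single index search answers the query.
CREATE INDEX idx_customer_status ON orders (customer_id, status);

-- Or two independent indexes: the optimizer may fetch a list for each key and merge them.
-- CREATE INDEX idx_customer ON orders (customer_id);
-- CREATE INDEX idx_status   ON orders (status);

-- In every scenario, these two queries should behave identically:
SELECT id FROM orders WHERE customer_id = 42 AND status = 'shipped';
SELECT id FROM orders WHERE status = 'shipped' AND customer_id = 42;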

Related

Is it wrong to move part of your WHERE clause to a HAVING clause if it runs faster?

In my long, complicated query that is not using aggregation, I have moved one of the ANDed where clause parts to a new HAVING clause.
Logically, the result is the same: rows are filtered before being returned.
Semantically, the result may be different in some way I don't understand.
But performance-wise, this runs 3x faster. I understand this is because the thing I moved is doing an expensive NOT EXISTS (SELECT ...). Previously the server was spending time evaluating this for rows that could be excluded using the other simpler rules.
Are there any official or unofficial rules I have broken with this optimization approach?
No, there are no rules as such.
As the joins come before the WHERE clause, you would reduce the number of rows that will be checked against the WHERE clause.
It is usually somewhat frowned upon, because you could miss some rows that are needed.
So basically you can do it, but you have to check that all the wanted rows are there.
The order of WHERE clauses ANDed together --> The optimizer is free to rearrange them; however,
There are some exceptions: FULLTEXT search first; subqueries last. (I am not sure of this.)
Referencing aggregations --> must be in HAVING
Otherwise WHERE and HAVING have the same semantics.
WHERE is logically done before GROUP BY; HAVING is done after.
It seems that you have discovered that NOT EXISTS is more efficient if it is somehow forced to come after other tests; and moving it to HAVING seems to have achieved that.
Submit a bug report (jira.mariadb.com) suggesting that you have found a case where the Optimizer is not juggling the clauses as well as it should.
If you show us the actual query, we might be able to dig deeper.
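For what it's worth, here is a minimal sketch of the kind of rewrite being described, using an invented schema (the orders and refunds tables are hypothetical); whether the server accepts it and whether it actually helps depends on the version:

-- Before: the expensive NOT EXISTS sits in WHERE with the cheap filters.
SELECT o.id, o.total
FROM orders o
WHERE o.status = 'open'
  AND NOT EXISTS (SELECT 1 FROM refunds r WHERE r.order_id = o.id);

-- After: the cheap filter stays in WHERE; the expensive test moves to HAVING,
-- so (per the question) it is only evaluated for rows that survive the WHERE.
SELECT o.id, o.total
FROM orders o
WHERE o.status = 'open'
HAVING NOT EXISTS (SELECT 1 FROM refunds r WHERE r.order_id = o.id);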

How to judge the complexity of SQL queries

Any resource where it is explained how to judge the complexity of SQL queries would be much appreciated.
(By "complexity", I assume you mean "slowness"?) Some tips:
Subqueries may or may not slow down a query a lot.
GROUP BY and ORDER BY -- when both are present but different: Usually requires two sorts.
Usually only a single index is used per SELECT.
OR is almost always inefficient. Switching to UNION allows multiple indexes to be used efficiently (see the sketch after this list).
UNION ALL, with a few restrictions, is more efficient than UNION DISTINCT (which needs a de-duplication pass).
Non-sargable expressions cannot use an index, and hence are severely inefficient.
Only if the entire WHERE, GROUP BY, and ORDER BY are handled by a single index can LIMIT be handled efficiently. (Otherwise the server must collect all the rows and sort them; only then can it peel off a few rows.)
Entity-Attribute-Value schema is inefficient.
UUIDs and GUIDs are inefficient on very large tables.
A composite index is often better than a single-column index.
A "covering" index is somewhat better.
Sometimes, especially when a LIMIT is involved, it is better to turn the query inside-out. That is, start with a subquery that finds the few ids you need, then reach back into the same table and into other tables to get the rest of the desired columns (see the sketch after this list).
"Windowing functions" are poorly implemented in MySQL 8 and MariaDB 10.2. They are useful for "groupwise-max" and "hierarchical schemas". Until the Optimizer improves, I declare them to be "complex".
Recent versions have recognized "row constructors"; previously they were a performance hit.
Having an AUTO_INCREMENT id hurts performance in certain cases; helps in others.
EXPLAIN (or EXPLAIN FORMAT=JSON) tells you what is going on now; it fails to tell you how to rewrite the query or what better index to add.
More indexing tips: http://mysql.rjweb.org/doc.php/index_cookbook_mysql In that link, see "Handler counts" for a good way to measure complexity for specific queries. I use it for comparing query formulations, etc., even without populating a large table to get usable timings.
Give me a bunch of Queries; I'll point out the complexities, if any, in each.
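Two of the tips above, sketched against a hypothetical table t (all names are invented for illustration):

-- OR --> UNION: each arm can use its own index.
-- Original: SELECT id FROM t WHERE a = 1 OR b = 2;
SELECT id FROM t WHERE a = 1
UNION          -- UNION de-dups; use UNION ALL if the two arms cannot overlap
SELECT id FROM t WHERE b = 2;

-- Inside-out LIMIT: find the few ids first, then fetch the wide columns only for them.
SELECT t.*
FROM ( SELECT id FROM t ORDER BY created_at DESC LIMIT 10 ) AS picked
JOIN t ON t.id = picked.id
ORDER BY t.created_at DESC;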
Check out the official MySQL documentation on Query Execution Plan:
https://dev.mysql.com/doc/refman/5.7/en/execution-plan-information.html
You could use the EXPLAIN command to get more information about your query.
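For instance (the query itself is made up for illustration):

EXPLAIN SELECT id FROM t WHERE a = 1 AND b = 2;
EXPLAIN FORMAT=JSON SELECT id FROM t WHERE a = 1 AND b = 2;  -- more detail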

Will record order change between two identical queries in MySQL without ORDER BY

The problem is that I need to do pagination. I want to use ORDER BY and LIMIT, but my colleague told me MySQL will return records in the same order, and since this job doesn't care in which order the records are shown, we don't need ORDER BY.
So I want to ask: is what he said correct? Assume, of course, that no records are updated or inserted between the two queries.
You don't show your query here, so I'm going to assume that it's something like the following (where ID is the primary key of the table):
select *
from TABLE
where ID >= :x:
limit 100
If this is the case, then with MySQL you will probably get rows in the same order every time. This is because the only predicate in the query involves the primary key, which is a clustered index for MySQL, so is usually the most efficient way to retrieve.
However, probably may not be good enough for you, and if your actual query is any more complex than this one, probably no longer applies. Even though you may think that nothing changes between queries (ie, no rows inserted or deleted), so you'll get the same optimization plan, that is not true.
For one thing, the block cache will have changed between queries, which may cause the optimizer to choose a different query plan. Or maybe not. But I wouldn't take the word of anyone other than one of the MySQL maintainers that it won't.
Bottom line: use an order by on whatever column(s) you're using to paginate. And if you're paginating by the primary key, that might actually improve your performance.
The key point here is that database engines need to handle potentially large datasets and need to care (a lot!) about performance. MySQL is never going to waste any resource (CPU cycles, memory, whatever) doing an operation that doesn't serve any purpose. Sorting result sets that aren't required to be sorted is a pretty good example of this.
When issuing a given query, MySQL will try hard to return the requested data as quickly as possible. When you insert a bunch of rows and then run a simple SELECT * FROM my_table query, you'll often see that rows come back in the same order as they were inserted. That makes sense, because the obvious way to store the rows is to append them as inserted, and the obvious way to read them back is from start to end. However, this simplistic scenario won't apply everywhere, every time:
Physical storage changes. You won't just be appending new rows at the end forever. You'll eventually update values, delete rows. At some point, freed disk space will be reused.
Most real-life queries aren't as simple as SELECT * FROM my_table. The query optimizer will try to leverage indices, which can have a different order. Or it may decide that the fastest way to gather the required information is to perform internal sorts (that's typical for GROUP BY queries).
You mention paging. Indeed, I can think of some ways to create a paginator that doesn't require sorted results. For instance, you could assign page numbers in advance and keep them in a hash map or dictionary: items within a page may appear in random locations, but paging will be consistent. This is of course pretty suboptimal: it's hard to code and requires constant updating as data mutates. ORDER BY is basically the easiest way. What you can't do is base your paginator on the assumption that SQL data sets are ordered sets, because they aren't; neither in theory nor in practice.
As an anecdote, I once used a major framework that implemented pagination using the ORDER BY and LIMIT clauses. (I won't say the name because it isn't relevant to the question... well, dammit, it was CakePHP/2.) It worked fine when sorting by ID. But it also allowed users to sort by arbitrary columns, which were often not unique, and I once found an item that was being shown on two different pages: the framework was naively sorting by a single non-unique column, and that row made its way into both ORDER BY type LIMIT 10 and ORDER BY type LIMIT 10, 10 because both orderings complied with the requested condition.
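A minimal sketch of the fix for that anecdote (table and column names invented): append a unique column, such as the primary key, as a tiebreaker so every row lands on exactly one page.

-- Fragile: rows with equal type can swap between pages across executions.
SELECT * FROM items ORDER BY type LIMIT 10, 10;

-- Deterministic: the primary key breaks ties.
SELECT * FROM items ORDER BY type, id LIMIT 10, 10;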

Can I use index in MySQL in this way? [duplicate]

If I have a query like:
Select EmployeeId
From Employee
Where EmployeeTypeId IN (1,2,3)
and I have an index on the EmployeeTypeId field, does SQL server still use that index?
Yeah, that's right. If your Employee table has 10,000 records, and only 5 records have EmployeeTypeId in (1,2,3), then it will most likely use the index to fetch the records. However, if it finds that 9,000 records have the EmployeeTypeId in (1,2,3), then it would most likely just do a table scan to get the corresponding EmployeeIds, as it's faster just to run through the whole table than to go to each branch of the index tree and look at the records individually.
SQL Server does a lot of stuff to try and optimize how the queries run. However, sometimes it doesn't get the right answer. If you know that SQL Server isn't using the index, by looking at the execution plan in query analyzer, you can tell the query engine to use a specific index with the following change to your query.
SELECT EmployeeId
FROM Employee WITH (INDEX(Index_EmployeeTypeId))
WHERE EmployeeTypeId IN (1,2,3)
Assuming the index you have on the EmployeeTypeId field is named Index_EmployeeTypeId.
Usually it would, unless the IN clause covers too much of the table, and then it will do a table scan. Best way to find out in your specific case would be to run it in the query analyzer, and check out the execution plan.
Unless technology has improved of late in ways I can't imagine, the "IN" query shown will produce a result that's effectively the OR-ing of three result sets, one for each of the values in the "IN" list. The IN clause becomes an equality condition for each item in the list and will use an index if appropriate. In the case of unique IDs and a large enough table, I'd expect the optimiser to use an index.
If the items in the list were to be non-unique however, and I guess in the example that a "TypeId" is a foreign key, then I'm more interested in the distribution. I'm wondering if the optimiser will check the stats for each value in the list? Say it checks the first value and finds it's in 20% of the rows (of a large enough table to matter). It'll probably table scan. But will the same query plan be used for the other two, even if they're unique?
It's probably moot - something like an Employee table is likely to be small enough that it will stay cached in memory and you probably wouldn't notice a difference between that and indexed retrieval anyway.
And lastly, while I'm preaching: beware the subquery in the IN clause. It's often a quick way to get something working and (for me at least) can be a good way to express the requirement, but it's almost always better restated as a join (see the sketch below). Your optimiser may be smart enough to spot this, but then again it may not. If you don't currently performance-check against production data volumes, do so - in these days of cost-based optimisation you can't be certain of the query plan until you have a full load and representative statistics. If you can't, then be prepared for surprises in production...
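To illustrate that last point, a sketch of restating an IN subquery as a join; the EmployeeType table and its columns are invented for illustration:

-- IN with a subquery:
SELECT e.EmployeeId
FROM Employee e
WHERE e.EmployeeTypeId IN (SELECT t.EmployeeTypeId FROM EmployeeType t WHERE t.Active = 1);

-- Restated as a join (equivalent when EmployeeTypeId is unique in EmployeeType):
SELECT e.EmployeeId
FROM Employee e
JOIN EmployeeType t ON t.EmployeeTypeId = e.EmployeeTypeId
WHERE t.Active = 1;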
So there's the potential for an "IN" clause to run a table scan, but the optimizer will try to work out the best way to deal with it?
Whether an index is used doesn't depend so much on the type of query as on the type and distribution of data in the table(s), how up to date your table statistics are, and the actual datatype of the column.
The other posters are correct that an index will be used over a table scan if:
The query won't access more than a certain percentage of the indexed rows (say ~10%, though this varies between DBMSs).
Alternatively, if there are a lot of rows but relatively few unique values in the column, it may also be faster to do a table scan.
The other variable that might not be that obvious is making sure that the datatypes of the values being compared are the same. In PostgreSQL, I don't think that indexes will be used if you're filtering on a float but your column is made up of ints. There are also some operators that don't support index use (again, in PostgreSQL, the ILIKE operator is like this).
As noted though, always check the query analyser when in doubt and your DBMS's documentation is your friend.
@Mike: Thanks for the detailed analysis. There are definitely some interesting points you make there. The example I posted is somewhat trivial, but the basis of the question came from using NHibernate.
With NHibernate, you can write a clause like this:
int[] employeeIds = new int[] { 1, 5, 23463, 32523 };
NHibernateSession.CreateCriteria(typeof(Employee))
    .Add(Restrictions.InG("EmployeeId", employeeIds))
    .List();  // terminating call so the criteria actually executes
NHibernate then generates a query which looks like
select * from employee where employeeid in (1, 5, 23463, 32523)
So as you and others have pointed out, it looks like there are going to be times where an index will be used or a table scan will happen, but you can't really determine that until runtime.
SELECT EmployeeId FROM Employee USE INDEX (EmployeeTypeId) WHERE EmployeeTypeId IN (1,2,3)
This query will search using the index you have created (MySQL's hint syntax is USE INDEX (index_name), and this assumes the index is named EmployeeTypeId). It works for me; please give it a try.

Does it make sense to index a datetime field?

I have a table approaching a billion records, with various indexed fields, one of which is a datetime that I use only for selecting records between two dates, and never for a join.
Does it make sense, or is it even possible, to de-index it?
On the face of it, it seems a good idea to index it if it is used in a WHERE clause.
The way to find out whether it benefits you is to view the execution plan of one of the queries before and after adding the index and see if it makes a difference.
An index may still be ignored if the query returns a large proportion of the rows covered by the index, as it would be more efficient in that case to do an index scan rather than an index seek.
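As a concrete way to run that before-and-after experiment, a sketch with invented names (an events table with a created_at DATETIME column):

-- Add the candidate index:
CREATE INDEX idx_created_at ON events (created_at);

-- The between-two-dates query it is meant to serve:
SELECT id FROM events
WHERE created_at BETWEEN '2020-01-01' AND '2020-01-31';

-- Compare the plan with and without the index:
EXPLAIN SELECT id FROM events
WHERE created_at BETWEEN '2020-01-01' AND '2020-01-31';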