I have some questions on Composite Primary Keys and the cardinality of the columns. I searched the web, but did not find any definitive answer, so I am trying again. The questions are:
Context: Large (50M - 500M rows) OLAP Prep tables, not NOSQL, not Columnar. MySQL and DB2
1) Does the order of keys in a PK matter?
2) If the cardinality of the columns varies heavily, which should come first? For example, if I have CLIENT/CAMPAIGN/PROGRAM where CLIENT has high cardinality, CAMPAIGN is moderate, and PROGRAM is so low it is almost like a bitmap index, what order is best?
3) What order is best for a join, when there is a WHERE clause and when there is no WHERE clause (for views)?
Thanks in advance.
You've got "MySQL and DB2". This answer is for DB2; MySQL has none of this.
Yes, of course that is logical, but the optimiser takes much more than just that into account.
Generally, the order of the columns in the WHERE clause (join) does not (and should not) matter.
However, there are two items related to the order of predicates which may be the reason for your question.
What does matter, is the order of the columns in the index, against which the WHERE clause is processed. Yes, there it is best to specify the columns in the order of highest cardinality to lowest. That allows the optimiser to target a smaller range of rows.
And along those lines, do not bother implementing indices on single, low-cardinality columns (they are useless). If the index is correct, it will be used more often.
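To illustrate, a minimal sketch using the poster's CLIENT/CAMPAIGN/PROGRAM columns; the table, column and measure names here are hypothetical:

CREATE TABLE campaign_fact (
    client_id   INT NOT NULL,      -- high cardinality
    campaign_id INT NOT NULL,      -- moderate cardinality
    program_id  INT NOT NULL,      -- very low cardinality, almost a flag
    impressions BIGINT NOT NULL,   -- example measure
    PRIMARY KEY (client_id, campaign_id, program_id)   -- highest to lowest cardinality
);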
The order of tables being joined (not columns in the join) matters very much; it is probably the most important consideration. In fact, Join Transitive Closure is automatic, and the optimiser evaluates all possible join orders and chooses what it thinks is the best, based on statistics (which is why UPDATE STATS is so important).
Regardless of the number of rows in the tables, if you are joining 100 rows from table_A on a bad index with 1,000,000 rows in table_B on a good index, you want the order A:B, not B:A. If you are getting less than the max IOPS, you may want to do something about it.
The correct sequence of steps is, no surprise:
check that the index is correct as per (1). Do not just add another index; correct the ones you have.
check that update stats is being executed regularly (a sketch follows this list)
always try the default operation of the optimiser first. Set stats on and measure I/Os. Use representative sets of values (that the user will use in production).
check the showplan, to ensure that the code is correct. Of course that will also identify the join order chosen.
if the performance is not good enough, and you believe that the join order chosen by the optimiser for those sets of values is sub-optimal, SET JTC OFF (syntax depends on your version of DB2), then specify the order that you want in the WHERE clause. Measure I/Os. Use representative sets of values.
form an opinion. Choose whichever performs better overall. Never tune for single queries.
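For the update-stats step, a minimal sketch of keeping statistics current; the schema and table names are hypothetical:

-- DB2 (run from the command line processor): refresh table,
-- distribution and index statistics
RUNSTATS ON TABLE myschema.prep_table WITH DISTRIBUTION AND DETAILED INDEXES ALL;

-- MySQL equivalent
ANALYZE TABLE prep_table;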
1) Does the order of keys in a PK matter?
Yes, it changes the order of the records in the index that is used to police the PRIMARY KEY.
2) If the cardinality of the columns varies heavily, which should come first? For example, if I have CLIENT/CAMPAIGN/PROGRAM where CLIENT has high cardinality, CAMPAIGN is moderate, and PROGRAM is so low it is almost like a bitmap index, what order is best?
For select queries, this totally depends on the queries you are going to use. If you always search on all three columns at once, the order is not important; if you search on only one or two of the columns, those should be the leading columns in the index (see the sketch below).
For inserts, it is better to make the leading column match the order in which the records are inserted.
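To illustrate the leading-column point, reusing the hypothetical campaign_fact table from above, which has a composite index on (client_id, campaign_id, program_id):

-- Can use the index: the leftmost column is constrained
SELECT COUNT(*) FROM campaign_fact WHERE client_id = 5;
SELECT COUNT(*) FROM campaign_fact WHERE client_id = 5 AND campaign_id = 7;

-- Cannot use it efficiently: no leading column in the predicate
SELECT COUNT(*) FROM campaign_fact WHERE program_id = 2;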
3) What order is best for a join, when there is a WHERE clause and when there is no WHERE clause (for views)?
Again, this depends on the WHERE clause.
Related
If I have a query like:
Select EmployeeId
From Employee
Where EmployeeTypeId IN (1,2,3)
and I have an index on the EmployeeTypeId field, does SQL server still use that index?
Yeah, that's right. If your Employee table has 10,000 records, and only 5 records have EmployeeTypeId in (1,2,3), then it will most likely use the index to fetch the records. However, if it finds that 9,000 records have the EmployeeTypeId in (1,2,3), then it would most likely just do a table scan to get the corresponding EmployeeIds, as it's faster just to run through the whole table than to go to each branch of the index tree and look at the records individually.
SQL Server does a lot of stuff to try and optimize how the queries run. However, sometimes it doesn't get the right answer. If you know that SQL Server isn't using the index, by looking at the execution plan in query analyzer, you can tell the query engine to use a specific index with the following change to your query.
SELECT EmployeeId FROM Employee WITH (INDEX(Index_EmployeeTypeId)) WHERE EmployeeTypeId IN (1,2,3)
Assuming the index you have on the EmployeeTypeId field is named Index_EmployeeTypeId.
Usually it would, unless the IN clause covers too much of the table, and then it will do a table scan. Best way to find out in your specific case would be to run it in the query analyzer, and check out the execution plan.
Unless technology has improved of late in ways I can't imagine, the "IN" query shown will produce a result that's effectively the OR-ing of three result sets, one for each of the values in the "IN" list. The IN clause becomes an equality condition for each value in the list and will use an index if appropriate. In the case of unique IDs and a large enough table, I'd expect the optimiser to use an index.
If the items in the list were to be non-unique however, and I guess in the example that a "TypeId" is a foreign key, then I'm more interested in the distribution. I'm wondering if the optimiser will check the stats for each value in the list? Say it checks the first value and finds it's in 20% of the rows (of a large enough table to matter). It'll probably table scan. But will the same query plan be used for the other two, even if they're unique?
It's probably moot - something like an Employee table is likely to be small enough that it will stay cached in memory and you probably wouldn't notice a difference between that and indexed retrieval anyway.
And lastly, while I'm preaching: beware putting a query in the IN clause. It's often a quick way to get something working and (for me at least) can be a good way to express the requirement, but it's almost always better restated as a join. Your optimiser may be smart enough to spot this, but then again it may not. If you don't currently performance-check against production data volumes, do so; in these days of cost-based optimisation you can't be certain of the query plan until you have a full load and representative statistics. If you can't, then be prepared for surprises in production...
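For example, a sketch with a hypothetical EmployeeType lookup table and IsActive flag:

-- Subquery in the IN clause:
SELECT e.EmployeeId
FROM Employee e
WHERE e.EmployeeTypeId IN (SELECT t.EmployeeTypeId
                           FROM EmployeeType t
                           WHERE t.IsActive = 1);

-- The same requirement restated as a join (add DISTINCT if an
-- employee could match more than one row of the subquery):
SELECT e.EmployeeId
FROM Employee e
JOIN EmployeeType t ON t.EmployeeTypeId = e.EmployeeTypeId
WHERE t.IsActive = 1;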
So there's the potential for an "IN" clause to run a table scan, but the optimizer will try and work out the best way to deal with it?
Whether an index is used doesn't so much depend on the type of query as on the type and distribution of data in the table(s), how up to date your table statistics are, and the actual datatype of the column.
The other posters are correct that an index will be used over a table scan if:
The query won't access more than a certain percentage of the rows indexed (say ~10%, though this varies between DBMSs).
Alternatively, if there are a lot of rows, but relatively few unique values in the column, it also may be faster to do a table scan.
The other variable that might not be that obvious is making sure that the datatypes of the values being compared are the same. In PostgreSQL, I don't think that indexes will be used if you're filtering on a float but your column is made up of ints. There are also some operators that don't support index use (again, in PostgreSQL, the ILIKE operator is like this).
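To illustrate the kind of mismatch meant here, a sketch with a hypothetical order_items table (behaviour varies by PostgreSQL version):

-- quantity is an INTEGER column with a b-tree index on it
CREATE INDEX ix_items_qty ON order_items (quantity);

SELECT * FROM order_items WHERE quantity = 5.0;  -- float literal; may defeat the index
SELECT * FROM order_items WHERE quantity = 5;    -- literal matches the column type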
As noted though, always check the query analyser when in doubt and your DBMS's documentation is your friend.
#Mike: Thanks for the detailed analysis. There are definitely some interesting points you make there. The example I posted is somewhat trivial, but the basis of the question came from using NHibernate.
With NHibernate, you can write a clause like this:
int[] employeeIds = new int[] { 1, 5, 23463, 32523 };
// InG builds the IN restriction; List() executes the criteria query.
NHibernateSession.CreateCriteria(typeof(Employee))
    .Add(Restrictions.InG("EmployeeId", employeeIds))
    .List<Employee>();
NHibernate then generates a query which looks like
select * from employee where employeeid in (1, 5, 23463, 32523)
So as you and others have pointed out, it looks like there are going to be times where an index will be used or a table scan will happen, but you can't really determine that until runtime.
SELECT EmployeeId FROM Employee USE INDEX (EmployeeTypeId) WHERE EmployeeTypeId IN (1,2,3)
This query will search using the named index (MySQL index-hint syntax; it assumes your index is called EmployeeTypeId). It works for me. Please give it a try.
Let's say you have a table with columns A and B, among others. You create a multi-column index (A, B) on the table.
Does your query have to take the order of the index columns into account? For example,
select * from MyTable where B=? and A in (?, ?, ?);
In the query we put B first and A second. But the index is (A, B). Does the order matter?
Update: I do know that the order of columns in the index matters significantly because of the leftmost-prefix rule. What I'm asking is whether it matters which column comes first in the query itself.
In this case, no; but I recommend using the EXPLAIN keyword, and you will see which optimisations MySQL will (or will not) use.
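For example, against the compound (A, B) index from the question:

EXPLAIN SELECT * FROM MyTable WHERE B = 1 AND A IN (1, 2, 3);
-- The key and rows columns of the output show whether the compound
-- index was chosen and how many rows MySQL expects to examine.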
The order of columns in the index can affect the way the MySQL optimiser uses the index. Specifically, MySQL can use your compound index for queries on column A because it's the first part of the compound index.
However, your question refers to the order of column references in the query. Here, the optimiser will take care of the references appropriately, and the order is unimportant. The different clauses must come in a particular order to satisfy syntax rules, so you have little control anyway.
The MySQL reference on multi-column index optimisation is here
You can test out specific queries if you think they are problems, but otherwise I wouldn't worry about this optimisation. Your query will most likely be mangled from its original form by the query planner. That is to say, MySQL should do a good job of planning how it will use the indices to optimise speed. This may require the conditions to be in a different order, but I doubt it. If MySQL actually did have to reorder the conditions for optimisation, it would be a very minor cost relative to the execution of the query (at least if the result set is large).
Consider fetching data with
SELECT * FROM table WHERE column1='XX' && column2='XX'
MySQL will filter the results matching the first part of the WHERE clause, then the second part. Am I right?
Imagine the first part matches 10 records, and adding the second part filters it down to 5 records. Is it necessary to INDEX the second column too?
You are talking about short-circuit evaluation, but a DBMS has a cost-based optimizer: there is no guarantee which of the two conditions will be evaluated first.
To apply that to your question: yes, it might be beneficial to index your second column. Some things to consider:
Is it used regularly in searches?
What does the execution plan tell you?
Is the access pattern going to change in the near future?
How many records does the table contain?
Would a covering index be a better choice?
...
Indexes are optional in MySQL, but they can increase performance.
Currently, MySQL can only use one index per table select, so with the given query, if you have an index on both column1 and column2, MySQL will try to determine the index that will be the most beneficial, and use only one.
The general solution, if the speed of the select is of utmost importance, is to create a multi-column index that includes both columns.
This way, even though MySQL could only use one index for the table, it would use the multi-column index that has both columns indexed, allowing MySQL to quickly filter on both criteria in the WHERE clause.
In the multi-column index, you would put the column with the highest cardinality (the highest number of distinct values) first.
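A sketch using the question's columns (the table is renamed my_table here, since TABLE is a reserved word, and column1 is assumed to have more distinct values than column2):

CREATE INDEX ix_col1_col2 ON my_table (column1, column2);

-- Both WHERE predicates can now be satisfied from the one index:
SELECT * FROM my_table WHERE column1 = 'XX' AND column2 = 'XX';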
For even further optimization, "covering" indexes can be applied in some cases.
Note that indexes can increase performance, but with some cost. Indexes increase memory and storage requirements. Also, when updating or inserting records into a table, the corresponding indexes require maintenance. All of these factors must be considered when implementing indexes.
Update: MySQL 5.0 can now use an index on more than one column by merging the results from each index, with a few caveats.
The following query is a good candidate for Index Merge Optimization:
SELECT * FROM t1 WHERE key1=1 AND key2=1
When processing such a query, the RDBMS will use only one index. Having separate indices on both columns will allow it to choose the one that will be faster.
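Either way, EXPLAIN will show which strategy was chosen (index names here are hypothetical):

CREATE INDEX ix_key1 ON t1 (key1);
CREATE INDEX ix_key2 ON t1 (key2);

EXPLAIN SELECT * FROM t1 WHERE key1 = 1 AND key2 = 1;
-- With Index Merge, the type column shows index_merge and Extra shows
-- Using intersect(ix_key1, ix_key2); otherwise a single index appears under key.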
Whether it's needed depends on your specific situation.
Is the query slow as it is now?
Would it be faster with index on another column?
Would it be faster with one index containing both columns?
You may need to try and measure several approaches.
You don't have to INDEX the second column, but it may speed up your SELECT.
I have this MySQL query and I am not sure what the implications of indexing all the fields in the query are. I mean, is it OK to index all the fields in the CASE statement, the JOIN and the WHERE clause? Are there any performance implications of indexing fields?
SELECT roots.id AS root_id, root_words.*,
  CASE
    WHEN root_words.title LIKE '%text%' THEN 1
    WHEN root_words.unsigned_title LIKE '%normalised_text%' THEN 2
    WHEN unsigned_source LIKE '%normalised_text%' THEN 3
    WHEN roots.root LIKE '%text%' THEN 4
  END AS priorities
FROM roots
INNER JOIN root_words ON roots.id = root_words.root_id
WHERE (root_words.unsigned_title LIKE '%normalised_text%')
   OR (root_words.title LIKE '%text%')
   OR (unsigned_source LIKE '%normalised_text%')
   OR (roots.root LIKE '%text%')
ORDER BY priorities
Also, How can I further improve the speed of the query above?
Thanks!
You index columns in tables, not queries.
None of the search criteria you've specified will be able to make use of indexes (since the search terms begin with a wild card).
You should make sure that the id column is indexed, to speed the JOIN. (Presumably, it's already indexed as a PRIMARY KEY in one table and a FOREIGN KEY in the other).
To speed up this query you will need to use full text search. Adding indexes will not speed up this particular query and will cost you time on INSERTs, UPDATEs, and DELETEs.
Caveat: Indexes speed up retrieval time but cause inserts and updates to run slower.
To answer the implications of indexing every field, there is a performance hit when using indexes whenever the data that is indexed is modified, either through inserts, updates, or deletes. This is because SQL needs to maintain the index. It's a balance between how often the data is read versus how often it is modified.
In this specific query, the only index that could possibly help would be in your JOIN clause, on the fields roots.id and root_words.root_id.
None of the checks in your WHERE clause could be indexed, because of the leading '%'. This causes SQL to scan every row in these tables for a matching value.
If you are able to remove the leading '%', you would then benefit from indexes on these fields... if not, you should look into implementing full-text search; but be warned, this isn't trivial.
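If you do go the full-text route, a minimal sketch in MySQL (assumes a storage engine with FULLTEXT support; note that full-text matching finds words, not arbitrary substrings, so it is not a drop-in replacement for LIKE '%...%'):

ALTER TABLE root_words ADD FULLTEXT INDEX ft_titles (title, unsigned_title);

SELECT * FROM root_words
WHERE MATCH (title, unsigned_title) AGAINST ('text' IN NATURAL LANGUAGE MODE);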
Indexing won't help when used in conjunction with LIKE '%something%'.
It's like looking for words in a dictionary that have ae in them somewhere. The dictionary (or Index in this case) is organised based on the first letter of the word, then the second letter, etc. It has no mechanism to put all the words with ae in them close together. You still end up reading the whole dictionary from beginning to end.
Indexing the fields used in the CASE clause will likely not help you. Indexing helps by making it easy to find records in a table. The CASE clause is about processing the records you have found, not finding them in the first place.
Optimisers can also struggle with optimising multiple unrelated OR conditions such as yours. The optimiser is trying to narrow down the amount of effort to complete your query, but that's hard to do when unrelated conditions could make a record acceptable.
All in all, your query would benefit from indexes on root_words(root_id) and/or roots(id), but not much else.
If you were to index additional fields though, the two main costs are:
- Increased write time (insert, update or delete) due to additional indexes to write to
- Increased space taken up on the disk
When selecting columns from a MySQL table, is performance affected by the order in which you select the columns, as compared to their order in the table (not considering indexes that may cover the columns)?
For example, you have a table with columns uid, name, bday, and you have the following query.
SELECT uid, name, bday FROM table
Does MySQL see the following query any differently and thus cause any sort of performance hit?
SELECT uid, bday, name FROM table
The order doesn't matter, actually, so you are free to order them however you'd like.
edit: I guess a bit more background is helpful: As far as I know, the process of optimizing any query happens prior to determining exactly what subset of the row data is being pulled. So the query optimizer breaks it down into first what table to look at, joins to perform, indexes to use, aggregates to apply, etc., and then retrieves that dataset. The column ordering happens between the data pull and the formation of the result set, so the data actually "arrives" as ordered by the database, and is then reordered as it is returned to your application.
In practice, I suspect it might.
With a decent query optimiser: it shouldn't.
You can only tell for your cases by measuring. And the measurements will likely change as the distribution of data changes in the database.
The order of the attributes selected makes a negligible difference. The underlying storage engines certainly order their attribute locations internally, but you would not necessarily have a way to know that ordering (renames, ALTER TABLEs, row vs. column stores), and in most cases it is independent of the table description, which is just metadata anyway. The order of presentation into the result set is insignificant in terms of any measurable overhead.