I've got some questions about indexing a SQL database:
Is it better to index a boolean column, or rather not, because there are only 2 options? I know that if the table is small then indexing will not change anything, but I'm asking about a table with 1 million records.
If I have two dates, ValidFrom and ValidTo, is it better to create one index with both columns or two separate indexes? In 90% of queries I use WHERE ValidFrom < date AND ValidTo > date, but there are also a few selects with only ValidFrom or only ValidTo.
What's the difference between a clustered and a non-clustered index? I can't find any article, so a link would be great.
You tagged both MySQL and SQL Server. This answer is MySQL-inspired.
It depends on many things, but more important than the size is the variation. If about 50% of the values are TRUE, that means the rest of the values (also about 50%) are FALSE and an index will not help much. If only 2% of the values are TRUE and your queries often only need TRUE records, this index will be useful!
If your queries often use both, put both in the index. If one is used more than the other, put that one FIRST in the index, so the composite index can be used for the one field as well.
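For example, a minimal sketch of that advice (the table name t is hypothetical; the column names come from the question):
-- ValidFrom first, so the composite index also serves the ValidFrom-only queries
CREATE INDEX idx_validfrom_validto ON t (ValidFrom, ValidTo);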
A clustered index means that the data actually is inside the index. A non-clustered index just points to the data, which is actually stored elsewhere. The PRIMARY KEY in InnoDB is a clustered index.
If you want to use Indexes in MySQL, EXPLAIN is your friend!
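For instance, a sketch of such a check (table and index names assumed from the example above):
EXPLAIN SELECT * FROM t
WHERE ValidFrom < NOW() AND ValidTo > NOW();
-- the key column of the output shows whether idx_validfrom_validto is actually used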
This is all for SQL Server, which is what I know about...
1 - Depends on cardinality, but as a rule an index on a single boolean field (BIT in SQL Server) won't be used since it's not very selective.
2 - Make 2 indexes, one with both, and the other with just the second field from the first index. Then you are covered in both cases.
3 - Clustered indexes contain the data for ALL fields at the leaf level (the entire table basically) ordered by your clustered index field. Non-clustered indexes contain only the key fields and any INCLUDEd fields at the leaf level, with a pointer to the clustered index row if you need any other data from other fields for that row.
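A sketch of the two-index strategy from point 2, using the columns from the question (table and index names are hypothetical, SQL Server syntax):
CREATE INDEX IX_t_ValidFrom_ValidTo ON dbo.t (ValidFrom, ValidTo);  -- serves ValidFrom+ValidTo and ValidFrom-only queries
CREATE INDEX IX_t_ValidTo ON dbo.t (ValidTo);                       -- serves ValidTo-only queries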
If you use a filtered index, it can handle record counts up to 2 million with no problems.
Create one non-clustered index instead of two filtered indexes.
The difference shows up in how they are used; the two are not really related to each other. Looking up a row by the primary key (PK) is different from searching for a range of values (a non-clustered index is often used for tracing a value range); in fact, lookups by PK represent less than 1% of queries.
I have a table with the following columns:
id-> PK
customer_id-> index
store_id-> index
order_date-> index
last_modified-> index
other_columns...
other_columns...
I have three single-column indexes. I also have a customer_id_store_id index, which backs a foreign key constraint referencing other tables.
id, customer_id and store_id are char(36), which is a UUID. order_date is a datetime and last_modified is a UNIX timestamp.
I want to gain some performance by removing all the indexes and adding one on (customer_id, store_id, order_date). Most queries will have these fields in the WHERE clause. But sometimes the store_id will not be needed.
What is the best approach: to add "store_id IS NOT NULL" to the WHERE clause, or to create the index as (customer_id, order_date, store_id)?
I also frequently need to query the table by last_modified field (where clause includes customer_id=, store_id=, last_modified>).
As I only have a single-column index on it, and there are hundreds of customers inserting/updating the tables, the index often scans more rows than necessary. Is it better to create another index (customer_id, store_id, last_modified) or leave it as it is? Or should I add this column to the previous index, making it a four-column composite index? But then again, order_date is irrelevant here, and omitting it might result in the index not being used as intended.
The query works fast for customers that don't have many rows, possibly using the customer_id index there. But for customers with a large amount of data this isn't optimal, and more often than not I need only a few days of data.
Can anyone please advise what the best index is in this scenario?
It is true that lots of single column indexes on a MySQL table are generally considered harmful.
A query with
WHERE customer_id=constant AND store_id=constant AND last_modified>=constant
will be accelerated by an index on (customer_id, store_id, last_modified). Why? The MySQL query planner can random-access the index to the first item it needs to retrieve, then scan the index sequentially. That same index works for
WHERE customer_id=constant AND store_id=constant
AND last_modified>=constant
AND last_modified< constant + INTERVAL 1 DAY
BUT, that index will not be useful for a query with just
WHERE store_id=constant AND last_modified>constant
or
WHERE customer_id=constant AND store_id IS NOT NULL AND last_modified>=constant
For the first of those query patterns you need (store_id, last_modified) to achieve the ability to sequentially scan the index.
The second of those query patterns requires two different range searches. One is something IS NOT NULL. That's a range search because it has to romp through all the non-null values in the column. The second range search is last_modified>=constant. That's a range search, because it starts with the first value of last_modified that meets the given criterion, and scans to the end of the index.
MySQL indexes are B-trees. That means, essentially, that they're sorted into a particular single order. So, an index is best for accelerating queries that require just one range search. So, the second query pattern is inherently hard to satisfy with an index.
A table can have multiple compound indexes designed to satisfy multiple different query patterns. That's usually the strategy that makes large tables work well in practical applications. Each index imposes a small performance penalty on updates and inserts. Indexes also take storage space, but storage is very cheap these days.
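For instance, a sketch of that multi-index strategy for the query patterns above (the table name orders_table is hypothetical; the column names come from the question):
CREATE INDEX idx_cust_store_mod ON orders_table (customer_id, store_id, last_modified);  -- equality, equality, then the range column
CREATE INDEX idx_store_mod      ON orders_table (store_id, last_modified);               -- for the pattern that omits customer_id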
If you want to use a compound index to search on multiple criteria, these things must be true:
all but one of the criteria must be equality criteria like store_id = constant.
one criterion can be a range-scan criterion like last_modified >= constant or something IS NOT NULL.
the columns in the index must be ordered so that the columns involved in equality criteria all appear first, then the column involved in the range-scan criterion.
you may mention other columns after the range scan criterion. But they make up part of a covering index strategy (beyond the scope of this post).
http://use-the-index-luke.com/ is a good basic intro to the black art of indexing.
Is it useful for SELECT performance to set an index on a field that contains only distinct values?
eg:
order_id
--------
98317490
10928343
82931376
93438473
...
Is it useful for SELECT performance to set an index on a field that contains only distinct values?
That depends. An index is useful if you often search on this column:
WHERE column=value
WHERE column BETWEEN a AND b
The usefulness of an index is determined by its selectivity. For example, if your column contains a boolean, which is:
false in 99.9% of rows
true in 0.1% of rows
Then you can easily guess that using an index to find "true" values will be a huge boost relative to reading the entire table to search for them.
On the other hand, searching for "false" using an index will be slower than not using an index, since you're gonna read the whole table anyway, you might as well not bother to also process the index.
If values are all distinct, then selectivity is maximum, and index will be very useful. That is, assuming you actually search on that column!
An index that is never used only slows down updates.
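For the distinct order_id column from the question, a minimal sketch (table and index names are hypothetical):
CREATE INDEX idx_orders_order_id ON orders (order_id);
SELECT * FROM orders WHERE order_id = 98317490;  -- the index narrows this down to a single row instead of a full table scan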
Of course it is useful, as with all indexes - it is useful if you have select statements where you have this field on the WHERE clause.
Whether this field has distinct values or not doesn't really matter.
Note that if your field is marked as UNIQUE or PRIMARY KEY in the database, the database will technically already have an index for this field, so adding another index for it will not change anything.
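As an illustration of that last point (MySQL syntax; table and constraint names are hypothetical):
CREATE TABLE orders (
    order_id BIGINT NOT NULL,
    UNIQUE KEY uq_order_id (order_id)
);
SHOW INDEX FROM orders;  -- already lists uq_order_id, so adding another index on order_id gains nothing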
I have a table with two partitions. Partitions are pactive = 1 and pinactive = 0. I understand that two partitions does not make so much of a gain, but I have used it to truncate and load in one partition and plain inserts in another partition.
The problem comes when I create indexes.
Query goes this way
select partitionflag,companyid,activityname
from customformattributes
where companyid=47
and activityname = 'Activity 1'
and partitionflag=0
Created index -
create index idx_try on customformattributes(partitionflag,companyid,activityname,completiondate,attributename,isclosed)
There are around 200,000 records that will be retrieved by the above query. But the query, along with the mentioned index, takes 30+ seconds. What is the reason for such a long time? Also, if I remove partitionflag from the mentioned index, the index is not even used.
And is this understanding correct: even with the partitions available, the optimizer needs the partitioning column mentioned in the index definition, so that it only hits the required partition?
Any ideas on understanding this would be very helpful
You can optimize your index by reordering the columns in it. Usually the columns in the index are ordered by their cardinality (starting from the highest and going down to the lowest). Cardinality is the uniqueness of the data in the given column. So in your case I suppose there are many different values of companyid in the customformattributes table, while partitionflag will have a cardinality of 2 (if the only options for this column are 1 and 0).
Your query will first filter all the rows with partitionflag=0, then it will filter by company id and so on.
When you removed partitionflag from the index, the query did not use the index, maybe because the optimizer decided that it would be faster to do a full table scan instead of using the index (in most cases the optimizer is right).
For the given query:
select partitionflag,companyid,activityname
from customformattributes
where companyid=47
and activityname = 'Activity 1'
and partitionflag=0
the following index may be better:
create index idx_try on customformattributes(companyid,activityname, completiondate,attributename, partitionflag, isclosed)
For the query to use the index, the following rule must be met: the leftmost column in the index should be present in the WHERE clause. Depending on the MySQL version you are using, additional requirements may apply; for example, with an old version of MySQL you may need to order the columns in the WHERE clause in the same order they are listed in the index, while in recent versions the query optimizer takes care of matching the WHERE clause columns to the index order.
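To illustrate the leftmost-column rule with the suggested index (which starts with companyid):
SELECT partitionflag, companyid, activityname
FROM customformattributes
WHERE companyid = 47 AND activityname = 'Activity 1';  -- leftmost column present: the index can be used
SELECT partitionflag, companyid, activityname
FROM customformattributes
WHERE activityname = 'Activity 1';                     -- companyid missing: no leftmost prefix, so the index cannot be used for the lookup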
Your SELECT query took 30+ seconds because it returns 200k rows and because the index might not be optimal for the given query.
For the second question, about the partitioning: the common rule is that the column you are partitioning by must be part of all the UNIQUE keys in the table (a primary key is also a unique key by definition, so the column should be added to the PK as well). If the table structure and logic allow you to add the partitioning column to all the UNIQUE indexes in the table, then you add it and partition the table.
When the partitioning is done correctly, you can take advantage of partition pruning: a SELECT query searches only the partitions where the requested data is stored (otherwise it looks in all partitions).
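A minimal sketch of what the partitioning could look like for this table (the partition names come from the question; the exact partitioning clause is an assumption):
ALTER TABLE customformattributes
PARTITION BY LIST (partitionflag) (
    PARTITION pinactive VALUES IN (0),
    PARTITION pactive   VALUES IN (1)
);
-- EXPLAIN (EXPLAIN PARTITIONS on older versions) then shows that a query with partitionflag=0 reads only pinactive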
You can read more about partitioning here:
https://dev.mysql.com/doc/refman/5.6/en/partitioning-overview.html
The query is slow simply because disks are slow.
Cardinality is not important when designing an index.
The optimal index for that query is
INDEX(companyid, activityname, partitionflag) -- in any order
It is "covering" since it includes all the columns mentioned anywhere in the SELECT. This is indicated by "Using index" in the EXPLAIN.
Leaving off the other 3 columns makes the query faster because it will have to read less off the disk.
If you make any changes to the query (add columns, change from '=' to '>', add ORDER BY, etc), then the index may no longer be optimal.
"Also, if remove the partitionflag from the mentioned index, the index is not even used." -- That is because it was no longer "covering".
Keep in mind that there are two ways an index may be used -- "covering" versus being a way to look up the data. When you don't have a "covering" index, the optimizer chooses between using the index and bouncing between the index and the data versus simply ignoring the index and scanning the table.
Let's use lastName as an example.
Assuming that there are no duplicate last names in your DB (by chance, not because of a unique), would there be any benefit to indexing this lastName column?
The query that would be used to search would be something like SELECT * FROM t WHERE lastName='Smith'.
If every entry in the column is unique, then how can an index have an effect? Wouldn't it have to search every entry regardless?
Sorry, I am just learning about indexing and I would really like to understand it better.
Thanks.
Yes, there is a benefit in indexing even if the column values are unique. In the index the values are not only unique but they are also organised in a tree structure that lets you search for a row with O(log N) complexity.
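As a minimal sketch (the table name t comes from the question, the index name is hypothetical):
CREATE INDEX idx_lastname ON t (lastName);
-- the B-tree lets the engine jump straight to 'Smith' instead of scanning every row
SELECT * FROM t WHERE lastName = 'Smith';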
There is a great article in Wikipedia about it: Database Index
...
The data is present in arbitrary order, but the logical ordering is specified by the index. The data rows may be spread throughout the table regardless of the value of the indexed column or expression. The non-clustered index tree contains the index keys in sorted order, with the leaf level of the index containing the pointer to the record (page and the row number in the data page in page-organized engines; row offset in file-organized engines).
In a non-clustered index
The physical order of the rows is not the same as the index order. The indexed columns are typically non-primary key columns used in JOIN, WHERE, and ORDER BY clauses. There can be more than one non-clustered index on a database table.
...
Consider the following SQL statement:
SELECT first_name FROM people WHERE last_name = 'Smith';
To process this statement without an index the database software must look at the last_name column on every row in the table (this is known as a full table scan). With an index the database simply follows the B-tree data structure until the Smith entry has been found; this is much less computationally expensive than a full table scan.
Generally speaking, the more unique values there are in a column, or the higher its cardinality (see "What is cardinality in MySQL?"), the more useful an index will be on that column.
I run the following query on my database :
SELECT e.id_dernier_fichier
FROM Enfants e JOIN FichiersEnfants f
ON e.id_dernier_fichier = f.id_fichier_enfant
And the query runs fine. If I modify the query like this:
SELECT e.codega
FROM Enfants e JOIN FichiersEnfants f
ON e.id_dernier_fichier = f.id_fichier_enfant
The query becomes very slow! The problem is I want to select many columns in table e and f, and the query can take up to 1 minute! I tried different modifications but nothing works. I have indexes on the id_* columns and also on e.codega. Enfants has 9000 rows and FichiersEnfants has 20000 rows. Any suggestions?
Here is the info that was asked for (sorry for not having shown it from the beginning):
The difference in performance is possibly due to e.id_dernier_fichier being in the index used for the JOIN, but e.codega not being in that index.
Without a full definition of both tables, and all of their indexes, it's not possible to tell for certain. Also, including the two EXPLAIN PLANs for the two queries would help.
For now, however, I can elaborate on a couple of things...
If an INDEX is CLUSTERED (this also applies to PRIMARY KEYs), the data is actually physically stored in the order of the INDEX. This means that knowing you want position x in the INDEX also implicitly means you want position x in the TABLE.
If the INDEX is not clustered, however, the INDEX is just providing a lookup for you. Effectively saying position x in the INDEX corresponds to position y in the TABLE.
The importance here is when accessing fields not specified in the INDEX. Doing so means you have to actually go to the TABLE to get the data. In the case of a CLUSTERED INDEX, you're already there, so the overhead of finding that field is pretty low. If the INDEX isn't clustered, however, you effectively have to JOIN the TABLE to the INDEX, then find the field you're interested in.
Note: having a composite index on (id_dernier_fichier, codega) is very different from having one index on just (id_dernier_fichier) and a separate index on just (codega).
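As a quick sketch of that distinction (index names hypothetical):
CREATE INDEX idx_fichier_codega ON Enfants (id_dernier_fichier, codega);  -- one composite index
-- versus
CREATE INDEX idx_fichier ON Enfants (id_dernier_fichier);                 -- two separate single-column indexes
CREATE INDEX idx_codega  ON Enfants (codega);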
In the case of your query, I don't think you need to change the code at all. But you may benefit from changing the indexes.
You mention that you want to access many fields. Putting all those fields in a composite index is probably not the best solution. Instead you may want to create a CLUSTERED INDEX on (id_dernier_fichier). This will mean that once id_dernier_fichier has been located, you're already in the right place to get all the other fields as well.
EDIT Note About MySQL and CLUSTERED INDEXes
13.2.10.1. Clustered and Secondary Indexes
Every InnoDB table has a special index called the clustered index where the data for the rows is stored:
If you define a PRIMARY KEY on your table, InnoDB uses it as the clustered index.
If you do not define a PRIMARY KEY for your table, MySQL picks the first UNIQUE index that has only NOT NULL columns as the primary key and InnoDB uses it as the clustered index.
If the table has no PRIMARY KEY or suitable UNIQUE index, InnoDB internally generates a hidden clustered index on a synthetic column containing row ID values. The rows are ordered by the ID that InnoDB assigns to the rows in such a table. The row ID is a 6-byte field that increases monotonically as new rows are inserted. Thus, the rows ordered by the row ID are physically in insertion order.