Is indexing link tables smart? - mysql

So let's say for example I have 2 tables: Users > Items
Users can have favorite Items, and an Item can have multiple Users that see it as a favorite, so I'll be using a linking table.
Now my linking table would contain something like:
id (int 11 AI)
user_id (int 11)
item_id (int 11)
Now would it be necessary / useful to put an index on user_id and item_id, since this table will contain a lot of records over time?
I'm not 100% sure when to use indexes. My idea of when to use them (might be completely incorrect though) is that when you have a big database and need to search/filter on a column, you index it. If this is incorrect I'm sorry, it's just what I've always been told.

Basically, yes, that's how it goes.
In this case, I'd say that an index on the user_id column would be useful, because you will display to the user a list of their favorites, right?
An index on the item_id might be less useful, because I doubt you're going to display a list of users that have favorited a specific item. Although you might care about the count ("100 users like this item"), so you might add that index after all. Or you might de-normalize and keep the count in the items table. That would give better performance, although you'll need to write extra code to maintain that number.
Last but not least - in a link table, you can do away with the id column. Just add the primary key index on both columns (user_id and item_id in that order). This will make sure that you cannot enter duplicate rows, and since user_id is the first column in the index, you'll be able to use it in search queries. No need anymore to add a separate index on just the user_id column.
However this also depends on the code you're using. If you're using some kind of framework (ORM?) that REQUIRES an id column for every table, then this trick is useless.
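For illustration, here is a minimal sketch of such a link table (the table name user_favorites is just an example):

CREATE TABLE user_favorites (
  user_id INT NOT NULL,
  item_id INT NOT NULL,
  PRIMARY KEY (user_id, item_id)  -- prevents duplicates, serves lookups by user_id
) ENGINE=InnoDB;

If you later do need fast lookups by item ("how many users favorited this item?"), a secondary index on item_id can be added then.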
As requested by the author, here's a quick intro on what indexes are.
Suppose you have a DB table which is just a bunch of rows in no particular order. Let's say we have a table people with the columns name, surname, age.
Now, when you want to find the age for John Smith you probably make a query like this:
select age from people where name='John' and surname='Smith'
When you do this, the DB engine can do only one thing - it has to go through ALL the rows and look for the ones that match. If there's 100,000 rows, it will be slow.
Now there's a faster way of doing this. Think about a phonebook (the classic paper edition). On its thousands of yellow pages there are phone numbers for hundreds of people. Yet you can find the number you seek very quickly, even though you're a human being. That's because the numbers are sorted alphabetically by name and surname. You open a random page and you can immediately see whether the number you're looking for is before or after the page you opened. Repeat a couple of times and you've found it.
This kind of searching is called a "binary search". Your DB engine could do this too, if the records were sorted by name and surname. So this is what a Primary Key is - it tells the DB to store the records not in some random order, but sorted by some columns. When a new record comes, it can quickly find its rightful place and push it in there, thus keeping the table forever sorted.
There are a few things to note here already.
First, you can make it sort by one or more columns, but, just like in a phonebook, the order is important. If you sort by name first and then by surname, then that's the order the records will be in. So you'll be able to quickly find all the records where name='John', or where name='John' and surname='Smith', but it won't help you at all if you need to find just surname='Smith'. Just like in a phonebook. (There's a short sketch of this after these three points.)
Second, pushing a record somewhere in the middle is also somewhat slow. Not criminally so, but still. Appending a record at the end is faster. Therefore people tend to use auto_increment columns for their Primary Keys, because then every new row will be placed at the end.
Third, in most DBs the Primary Key is used not only to search quickly, but also to uniquely identify the row. Which means that the DB will not be happy if there are two rows that have equal values for the Primary Key columns - it could not determine their order, and the key would no longer identify a single row. Another reason to use auto_increment. Note that if the PK index has multiple columns in it, then their combination must be unique - every column individually may be non-unique. In our case that means that there can be many Johns and many Smiths, but only one John Smith.
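To make the first point concrete, here's a sketch assuming a primary key on (name, surname) for the people table:

ALTER TABLE people ADD PRIMARY KEY (name, surname);

SELECT age FROM people WHERE name = 'John';                       -- can use the PK
SELECT age FROM people WHERE name = 'John' AND surname = 'Smith'; -- can use the PK
SELECT age FROM people WHERE surname = 'Smith';                   -- cannot; full scan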
But we still have a problem. What if we want to quickly find rows both by just the name, and just the surname? A PK index can only do one of those things, not both at the same time.
This is where other non-PK indexes come in play. You can add as many of those as you want to the table. In our case, we could create another index to hold just the surname column.
When we do so, the DB creates another hidden table (OK, not true, but you can think of it this way) which is a copy of the original table, but only with the surname column and a special link back to the rows in the original table. This hidden index table is sorted by the surname column. So when you now need to find a row by specifying just the surname, the DB engine can look it up in the hidden index table, and then follow the links back to the original rows and get the data from them. Much faster.
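For example, a minimal sketch (the index name is arbitrary):

CREATE INDEX idx_surname ON people (surname);

SELECT age FROM people WHERE surname = 'Smith'; -- can now use idx_surname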
These non-PK indexes also typically come in a few flavors. There's the standard "index" which places no restrictions at all - you can have duplicate values in the columns, nulls, etc. There's a "unique" index, which enforces that all the values in the index need to be unique; and then there are sometimes speciality indexes like FullText, Spatial, etc. Indexes also tend to have some technical options, but you'll have to read the documentation of your DB for those.
One last important thing to note is - indexes make it fast to find things in a table, but they come at a cost. Modifications to the table (insert, update, delete) become slower, because the indexes need to be updated as well. Keep that in mind and only add them where necessary.
Except for Primary Keys. ALWAYS add Primary Keys. That's an order! :)

In short, yes.
Imagine how well joins would work if, each time you needed to match a primary key value to a foreign key in another table, the DBMS had to search the entire table for the matching keys.

Related

Index every column to add foreign keys

I am currently learning about foreign keys and trying to add them as much as I can in my application to ensure data integrity. I am using InnoDB on MySQL.
My clicks table has a structure something like...
id, timestamp, link_id, user_id, ip_id, user_agent_id, ... etc for about 12 _id columns.
Obviously these all point to other tables, so should I add a foreign key on them? MySQL creates an index automatically for every foreign key, so essentially I'll have an index on every column. Is this what I want?
FYI - this table will essentially be my most bulky table. My research basically tells me I'm sacrificing performance for integrity but doesn't suggest how harsh the performance drop will be.
Right before inserting such a row, you did 12 inserts or lookups to get the ids, correct? Then, as you do the INSERT, it will do 12 checks to verify that all of those ids have a match. Why bother; you just verified them with the code.
Sure, have FKs in development. But in production, you should have weeded out all the coding mistakes, so FKs are a waste.
A related tip -- Don't do all the work at once. Put the raw (not-yet-normalized) data into a staging table. Periodically do bulk operations to add new normalization keys and get the _id's back. Then move them into the 'real' table. This has the added advantage of decreasing the interference with reads on the table. If you are expecting more than 100 inserts/second, let's discuss further.
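A rough sketch of that staging pattern, with all table and column names hypothetical:

-- raw events land here first, with no FKs and no lookups
CREATE TABLE clicks_staging (
  ts  TIMESTAMP NOT NULL,
  url VARCHAR(255) NOT NULL
);

-- periodically: add any new values to the lookup table
-- (assumes links.url has a UNIQUE index so INSERT IGNORE skips duplicates)
INSERT IGNORE INTO links (url) SELECT DISTINCT url FROM clicks_staging;

-- then move the rows into the real table with their ids resolved in bulk
INSERT INTO clicks (timestamp, link_id)
SELECT s.ts, l.id FROM clicks_staging s JOIN links l ON l.url = s.url;

TRUNCATE TABLE clicks_staging;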
The generic answer is that if you considered a data item so important that you created a lookup table for the possible values, then you should create a foreign key relationship to ensure you are not getting any orphan records.
However, you should reconsider whether all data items (fields) in your clicks table need a lookup table. For example, the ip_id field probably represents an IP address. You can simply store the IP address directly in the clicks table; you do not really need a lookup table, since IP addresses have a wide range of values and are mostly unique anyway.
Based on the re-evaluation of the fields, you may be able to reduce the number of related tables, thus the number of foreign keys and indexes.
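For example, a sketch of inlining the IPv4 address using MySQL's built-in INET_ATON/INET_NTOA (the column name ip is hypothetical):

ALTER TABLE clicks ADD COLUMN ip INT UNSIGNED;  -- instead of ip_id plus a lookup table
UPDATE clicks SET ip = INET_ATON('192.0.2.1') WHERE id = 1;
SELECT INET_NTOA(ip) FROM clicks WHERE id = 1;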
Here are three things to consider:
What is the ratio of reads to writes on this table? If you are reading much more often than writing, then more indexes could be good, but if it is the other way around then the cost of maintaining those indexes becomes harder to bear.
Are some of the foreign keys not very selective? If you have an index on the gender_id column, then it is probably a waste of space (see the selectivity check sketched after this list). My general rule is that indexes without included columns should have about 1000 distinct values (unless the values are unique), and then tweak from there.
Are some foreign keys rarely or never going to be used as a filter for a query? If you have a last_modified_user_id field but you never have any queries that will return a list of items which were last modified by a particular user then an index on that field is less useful.
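A quick way to gauge selectivity is to compare distinct values to total rows; a sketch using the gender_id example:

SELECT COUNT(DISTINCT gender_id) / COUNT(*) AS selectivity FROM clicks;
-- values near 0 mean very few distinct values: a poor index candidate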
A little bit of knowledge about indexes can go a long way. I recommend http://use-the-index-luke.com

Short, single-field indexes or enormous covering indexes in MySQL

I am trying to understand exactly what is and is not useful in a multiple-field index. I have read this existing question (and many more) plus other sites/resources (MySQL Performance Blog, Percona slideshares, etc.) but I'm not totally confident that what I've found on the subject is current and accurate. So please bear with me while I repeat some of what I think I know.
By indexing wisely, I can not only reduce how long it takes to match my query condition(s), but also reduce how long it takes to fetch the fields I want in my query result.
The index is just a sorted, duplicated subset of the full data, paired with pointers (MyISAM) or PKs (InnoDB), that I can search more efficiently than the full table.
Given the above, using an index to match my condition(s) really happens in the same way as fetching my desired result, except I created this special-purpose table (the index) that gets me an intermediate result set really quickly; and with this intermediate result set I can retrieve my final desired result set much more efficiently than by performing a full table scan.
Furthermore, if the index covers all the fields in my query (not just the conditions), instead of an intermediate result set, the index will give me everything I need without having to fetch any rows from the complete table.
InnoDB tables are clustered on the PK, so rows with consecutive PKs are likely to be stored in the same block (given many rows per block), and I can grab a range of rows with consecutive PKs fairly efficiently.
MyISAM tables are not clustered; there is some hidden internal row ordering that has no fixed relation to the PK (or any index), so any time I want to grab a set of rows, I may have to retrieve a different block for every single row - even if these rows have consecutive PKs.
Assuming the above is at least generally accurate, here's my puzzle. I have a slowly changing dimension table defined with the following columns (more or less) and using MyISAM:
dim_owner_ID INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
person_ID INT UNSIGNED NOT NULL,
raw_name VARCHAR(92) NOT NULL,
first VARCHAR(30),
middle VARCHAR(50),
last VARCHAR(30),
suffix CHAR(3),
flag CHAR(1)
Each "owner" is a unique instance of a particular individual with a particular name, so if Sue Smith changes her name to Sue Brown, that results in two rows that are the same except for the last field and the surrogate key. My understanding is that the only way to enforce this constraint internally is to do:
UNIQUE INDEX uq_owner_complete (person_ID, raw_name, first, middle, last, suffix, flag)
And that's basically going to duplicate the entire table (except for the surrogate key).
I also need to index a few other fields for quick joins and searches. While there will be some writes, and disk space is neither free nor infinite, read performance is absolutely the #1 priority here. These smaller indexes should serve very well to cover the conditions of the queries that will be run against the table, but in almost every case, the entire row needs to be selected.
With that in mind:
Is there any reasonable middle ground between sticking with short, single-field indexes (prefix where possible) and expanding every index to cover the entire table?
How would the latter be any different from storing the entire dataset five times on disk, but sorted differently each time?
Is there any benefit to adding the PK/surrogate ID to each of the smaller indexes in the hope that the query optimizer will be able to work some sort of index merge magic?
If this were an InnoDB index, the PK would already be there, but since it's MyISAM it's got pointers to the full rows instead. So if I'm understanding things correctly, there's no point (no pun intended) to adding the PK to any other index, unless doing so would allow the retrieval of the desired result set directly from the index. Which is not likely here.
I understand if it seems like I'm trying too hard to optimize, and maybe I am, but the tasks I need to perform using this database take weeks at a time, so every little bit helps.
You have to understand one concept. An index (either InnoDB or MyISAM, either primary or secondary) is a data structure called a "B+ tree".
Each node in the B+ tree is a couple (k, v), where k is a key and v is a value. If you build an index on last_name, your keys will be "Smith", "Johnson", "Kuzminsky", etc.
The value in the index is some data. If the index is a secondary index, then the data is the primary key value.
So if you build an index on last_name, each node will be a couple (last_name, id), e.g. ("Smith", 5).
The primary index is an index where k is the primary key and the data is all the other fields.
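For example, a sketch using the names above:

CREATE INDEX idx_last_name ON people (last_name);
-- conceptually stores sorted couples like ('Johnson', 12), ('Kuzminsky', 3), ('Smith', 5)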
Bearing in mind the above, let me comment on some of your points:
By indexing wisely, I can not only reduce how long it takes to match my query condition(s), but also reduce how long it takes to fetch the fields I want in my query result.
Not exactly. If your secondary index is good, you can quickly find v based on your query condition. E.g. you can quickly find the PK by last name.
The index is just a sorted, duplicated subset of the full data, paired with pointers (MyISAM) or PKs (InnoDB), that I can search more efficiently than the full table.
The index is a B+ tree where each node is a couple of the indexed field value(s) and the PK.
Given the above, using an index to match my condition(s) really happens in the same way as fetching my desired result, except I created this special-purpose table (the index) that gets me an intermediate result set really quickly; and with this intermediate result set I can retrieve my final desired result set much more efficiently than by performing a full table scan.
Not exactly. If there were no index, you'd have to scan the whole table and choose only the records where last_name = "Smith". But you have an index (last_name, PK), so having the key "Smith" you can quickly find all the PKs where last_name = "Smith". And then you can quickly find your full result (because you need not only the last name, but the first name too). So you're right, queries like SELECT * FROM table WHERE last_name = "Smith" are executed in two steps:
Find all the matching PKs.
By PK, find the full records.
Furthermore, if the index covers all the fields in my query (not just the conditions), instead of an intermediate result set, the index will give me everything I need without having to fetch any rows from the complete table.
Exactly. If your index is actually (last_name, first_name, id) and your query is SELECT first_name FROM table WHERE last_name = "Smith", you don't do the second step. You have the first name in the secondary index, so you don't have to go to the primary index.
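A sketch of that covering case (the index name is arbitrary):

CREATE INDEX idx_full_name ON people (last_name, first_name);

SELECT first_name FROM people WHERE last_name = 'Smith';
-- everything the query needs is in the secondary index; no PK lookup required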
InnoDB tables are clustered on the PK, so rows with consecutive PKs are likely to be stored in the same block (given many rows per block), and I can grab a range of rows with consecutive PKs fairly efficiently.
Right. Two neighboring PK values will most likely be in the same page - except when one PK is the last value in a page and the next PK value is stored in the following page.
Basically, this is why B+ tree structure was invented. It's not only efficient for search but also efficient in sequential access. And until recently we had rotating hard drives.
MyISAM tables are not clustered; there is some hidden internal row ordering that has no fixed relation to the PK (or any index), so any time I want to grab a set of rows, I may have to retrieve a different block for every single row - even if these rows have consecutive PKs.
Right. If you insert new records into a MyISAM table, the records will be added to the end of the MYD file regardless of the PK order.
The primary index of a MyISAM table will be a B+ tree with pointers to the records in the MYD file.
Now about your particular problem. I don't see any reason to define UNIQUE INDEX uq_owner_complete.
Is there any reasonable middle ground between sticking with short, single-field indexes (prefix where possible) and expanding every index to cover the entire table?
The best approach is to have a secondary index on all columns that are used in the WHERE clause, except low-selectivity fields (like sex). The most selective fields must go first in the index. For example, (last_name, eye_color) is good; (eye_color, last_name) is bad.
If the covering index allows you to avoid the additional PK lookup, that's excellent. But if not, that's acceptable too.
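In code form, using the example columns above (a sketch):

CREATE INDEX idx_good ON people (last_name, eye_color);  -- selective column first
CREATE INDEX idx_bad  ON people (eye_color, last_name);  -- low-selectivity column first: avoid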
How would the latter be any different from storing the entire dataset five times on disk, but sorted differently each time?
It wouldn't be much different - a covering index on every column is essentially another copy of the data, sorted differently.
Is there any benefit to adding the PK/surrogate ID to each of the smaller indexes in the hope that the query optimizer will be able to work some sort of index merge magic?
The PK is already a part of the secondary index (remember, it's stored as the data), so it makes no sense to explicitly add PK fields to it. In MyISAM, however, both the primary and secondary indexes store pointers to the rows in the MYD file rather than PK values.
To summarize:
Make your PK as short as possible (a surrogate PK works great)
Add as many indexes as you need, until write performance becomes unacceptable for you.

When should my indexes have the active column?

I have several tables and I'm wondering if my composite index is helpful or not. I am using MySQL 5+ but I guess this would apply to any database (or not?).
Anyway, say I have the following table:
username active
-----------------------------------
Moe.Howard 1
Larry.Fine 0
Shemp.Howard 1
So I normally select like:
select * from users where username = 'shemp.howard' and active = 1;
The active=1 condition is used in many of our tables. Normally, my index would be on the username column, but I'm thinking of adding the active flag as well (to the same index).
My logic is that as the query engine is scanning through the index, it would be scanning against an index like:
moe.howard,1
shemp.howard,1
larry.fine,0
and find Shemp before it hits the inactive users (Larry).
Now, our active columns are usually unsigned TINYINTs. But I'm concerned the index might be backward!
larry.fine,0
moe.howard,1
shemp.howard,1
How should I best handle this and make sure my indexes are correct? Should I not add the active column to the same index as username? Or should I create a separate index for the active column and make it descending?
Thanks.
If you combine those two fields in a composite index with the active flag as the second part of the key, then the index order will only depend on that value when (and only if) the name field is identical for two or more rows - which seems unlikely in this situation, on the assumption that one would want user names in a system to be unique. The first key in the composite index defines the order of the entries whenever they differ. In other words, if the user name is unique, then adding the active flag as the second segment of a composite index will not change the order of the index.
Also, note that for the example query, the database won't "scan" the index to find the value. Rather it will seek to the first matching entry, which in the example given consists of a single match. The "scan" would happen if multiple entries pass the WHERE clause.
Having said that, unless there are lots of cases where you have duplicate names, my initial reaction would be to not create the composite key. If the names are "generally" unique, then you would not be buying a lot of savings with the composite key. On the other hand, if there are generally quite a few duplicate names with differing active flag values, it could help. At that point, you may need to just test.
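If you do decide to test, the two alternatives would look something like this (index names hypothetical):

CREATE UNIQUE INDEX idx_username ON users (username);                 -- likely sufficient
CREATE INDEX idx_username_active ON users (username, active);         -- the composite variant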
Really we can only second-guess what the query optimiser will try to do, but it is commonly recommended that if the selectivity of an index is over 20%, then a full table scan is preferable to an index access. This means it is very likely that even if you index active, the index won't actually be used, assuming you have many more active than non-active users.
MySQL can only use the index in order, so creating a composite index of username,active is entirely pointless, as you're not going to have multiple users with the same username.
You really need to analyse your query requirements; then you can design an indexing plan to suit them. Profile each query, and don't try to over-optimize everything, as this can have a negative result.
An index should be added only if the values you expect it to help you filter in/out are representative, statistically speaking.
What does that mean?
If, say, the filter in your WHERE clause on the column you're indexing helps you retrieve only 20% of the rows, you should add an index on it. The exact percentage depends on your particular case and should be tried out, but that's the idea.
In your case, the name alone already excludes nearly 100% of the rows. Adding the active column to the index would then be useless, because it wouldn't help reduce the final recordset (except if you can possibly have n times the same name but only one active?).
The situation would be different if you decided to filter ONLY active users, not caring about the name.

MySQL Optimization When Not All Columns Are Indexed

Say I have a table with 3 columns and thousands of records like this:
id # primary key
name # indexed
gender # not indexed
And I want to find "All males named Alex", i.e., a specific name and specific gender.
Is the naive way (select * from people where name='alex' and gender=2) good enough here? Or is there a more optimal way, like a sub-query on name?
Assuming that you don't have thousands of records matching the name, with only a few of them actually being male, the index on name is enough. Generally you should not index fields with little cardinality (only 2 possible values means that you are going to match about 50% of the rows, which does not justify using an index).
The only useful exception I can think of is if you are selecting name and gender only: if you put both of them in the index, you can perform an index-covered query, which is faster than selecting rows by index and then retrieving the data from the table.
If creating an index is not an option, or you have a large volume of data in the table (or even if there is an index, but you still want to quicken the pace) it can often have a big impact to reorder the table according to the data you are grouping together.
I have a query at work for getting KPIs together for my division, and even though everything was nicely indexed, the data that was being pulled was still searching through a couple of gigs of table. This means a LOT of disk access while the query aggregates all the correct rows together. I reordered the table using alter table tableName order by column1, column2; and the query went from taking around 15 seconds to returning data in under 3. So the physical gathering of the data can be a significant influence - even if the tables are indexed and the DB knows exactly where to get it. Arranging the data so it is easier for the database to get to everything it needs will improve performance.
A better way is to have a composite index.
i.e.
CREATE INDEX <some name for the index> ON <table name> (name, gender)
Then the WHERE clause can use it for both the name and the gender.
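Whichever approach you take, you can verify which index the query actually uses with EXPLAIN:

EXPLAIN SELECT * FROM people WHERE name = 'alex' AND gender = 2;
-- the key column of the output shows the index MySQL chose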

Correct indexing when using OR operator

I have a query like this:
SELECT fields FROM table
WHERE field1='something' OR field2='something'
OR field3='something' OR field4='something'
What would be the correct way to index such a table for this query?
A query like this takes an entire second to run! I have 1 index with all 4 of those fields in it, so I'd think MySQL would do something like this:
Go through each row in the index thinking this:
Is field1 something? How about field2? field3? field4? Ok, nope, go to the next row.
You misunderstand how indexes work.
Think of a telephone book (the equivalent of a two-column index on last name, then first name). If I ask you to find all people in the telephone book whose last name is "Smith," you can benefit from the fact that the names are ordered that way; you can assume that the Smiths are grouped together. But if I ask you to find all the people whose first name is "John", you get no benefit from the index. Johns can have any last name, and so they are scattered throughout the book, and you end up having to search the hard way, from cover to cover.
Now if I ask you to find all people whose last name is "Smith" OR whose first name is "John", you can find the Smiths easily as before, but that doesn't help you at all to find the Johns. They're still scattered throughout the book and you have to search for them the hard way.
It's the same with multi-column indexes in SQL. The index is sorted by the first column, then sorted by the second column in cases of ties in the first column, then sorted by the third column in cases of ties in both the first two columns, etc. It is not sorted by all columns simultaneously. So your multi-column index doesn't help to make your search terms more efficient, except for the left-most column in the index.
Back to your original question.
What would be the correct way to index such a table for this query?
Create a separate, single-column index on each column. One of these indexes will be a better choice than the others, based on MySQL's estimation of how many I/O operations the index will incur if it is used.
Modern versions of MySQL also have some smarts about index merging, so the query may use more than one index on a given table and then merge the results. Otherwise MySQL tends to be limited to using one index per table in a given query.
Another trick that a lot of people use successfully is to do a separate query for each of your indexed columns (which should use the respective index) and then UNION the results.
SELECT fields FROM table WHERE field1='something'
UNION
SELECT fields FROM table WHERE field2='something'
UNION
SELECT fields FROM table WHERE field3='something'
UNION
SELECT fields FROM table WHERE field4='something'
One final observation: if you find yourself searching for the same 'something' across four fields, you should reconsider whether all four fields are actually the same kind of thing, and whether you're guilty of designing a table that violates First Normal Form with repeating groups. If so, perhaps field1 through field4 belong in a single column in a child table. Then it becomes a lot easier to index and query:
SELECT fields from table INNER JOIN child_table ON table.pk = child_table.fk
WHERE child_table.field = 'something'
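A sketch of such a child table (all names hypothetical, mirroring the query above):

CREATE TABLE child_table (
  fk INT NOT NULL,             -- points back to the parent table's pk
  field VARCHAR(100) NOT NULL,
  INDEX idx_field (field)      -- one index now covers all the former field1..field4 searches
);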
In addition to the previous answer:
Some RDBMSs, like MySQL/PostgreSQL, can use an index merge if the optimizer thinks it's a good idea.
So you can create a different index for each field, or create some composite indexes like (field1, field2) and (field3, field4). Finally, you should try several different solutions and choose the one with the best explain plan.