Improve MySQL performance by indexing everything - mysql

By indexing a column, MySQL does not need to look through every row in the table; instead, it can find the value you are searching for in the sorted index and skip straight to the row(s) where the data is located.
So I'm starting to think that, by indexing every column, the performance should be even better. Am I right?
And if so, what would be the downside? Because if it's better for performance and there is no downside, every column should be indexed by default.
Thanks for your advice.

Large numbers of indexes can slow down INSERT/UPDATE queries and take up significant amounts of disk space (potentially more than the data itself). You should index the columns intelligently, based on the sorts of queries your application makes.
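For instance, if the application mostly looks up users by email, one targeted index covers that query. A minimal sketch, with table and column names invented for illustration:

    -- hypothetical schema: users(id, name, email, ...)
    CREATE INDEX idx_users_email ON users (email);
    -- this lookup can now use the index instead of scanning the whole table
    SELECT id, name FROM users WHERE email = 'someone@example.com';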

Related

Sorting 1 billion rows by one varchar column in MySQL quickly

I have 1 billion rows stored in MySQL, and I need to output them alphabetically by a varchar column. What's the most efficient way to go about it? Using other Linux utilities like sort and awk is allowed.
MySQL can deal with a billion rows. Efficiency depends on 3 main factors: Buffers, Indexes and Joins.
Some suggestions:
Try to fit the data set you're working with in memory
Processing in memory is much faster, and a whole bunch of problems are solved just by doing so. Use multiple servers to host portions of the data set, store the portion of data you're going to work with in a temporary table, etc.
Prefer full table scans to index accesses
For large data sets, full table scans are often faster than range scans and other types of index lookups. Even if you look at 1% of rows or less, a full table scan may be faster.
Avoid joins to large tables
Joining large data sets using nested loops is very expensive; try to avoid it. Joins to smaller tables are OK, but you might want to preload them into memory before the join so no random IO is needed to populate the caches.
Be aware of MySQL limitations which require you to be extra careful when working with large data sets. In MySQL, a query runs as a single thread (with the exception of MySQL Cluster) and MySQL issues IO requests one by one for query execution, which means that if single-query execution time is your concern, many hard drives and a large number of CPUs will not help.
Sometimes it is a good idea to manually split the query into several, run them in parallel, and aggregate the result sets.
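A rough sketch of that split-and-merge idea, assuming a primary key named id and a sort column named name (both invented): run each chunk on its own connection, then merge the already-sorted outputs outside MySQL (for example with sort -m).

    -- each chunk scans its own primary-key range and sorts it independently
    SELECT name FROM big_table WHERE id BETWEEN 1 AND 250000000 ORDER BY name;
    SELECT name FROM big_table WHERE id BETWEEN 250000001 AND 500000000 ORDER BY name;
    -- ...remaining ranges; the sorted chunks are then merged outside MySQL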
You did not give much info on your setup or your dataset, but this should give you a couple of clues on what to watch out for. In my opinion having the (properly tuned) database sort this for you would be faster than doing it programmatically unless you have very specific needs not mentioned in your post.
Have you just tried indexing the column and dumping the rows out? I'd try that first to see if the performance was inadequate before going exotic.
It depends on how you define efficient. CPU/Memory/IO/Time/Coding Effort. What is important in this case?
"select * from big_table order by the_varchar_column" That is probably the most efficient use of developer resources. Adding an index might make it run a lot faster.

Indexes: why not just index everything, and when to use indexes?

Indexes are used to find rows with specific column values quickly. Without an index, MySQL must begin with the first row and then read through the entire table to find the relevant rows.
Says our beloved MySQL manual.
In that case, why not just index every column?
And since I have a feeling that it would actually be a bigger hit to performance, when should we use indexes, and what are the best practices for indexes?
Thanks in advance!
Creating an index always comes at a cost: The more indices you have on a table, the more expensive it is to modify that table (i.e. inserts, updates and deletes take longer).
In turn, queries that can use the indices will be faster. It's a classical tradeoff. On most tables a small number of commonly used indices is worth the cost, because queries happen often enough (or their performance is much more important than the modification performance).
On the other hand, if you have some kind of log table that is updated very often, but queried only very rarely (for example in case of a catastrophic failure), then adding an index would add a big cost and provide very little advantage.
Also: whether or not an index is useful depends a lot on the exact query to be executed. It's possible that you have indices spanning each column, but the query can't use them because the indices are in the wrong order, have the wrong information, or the wrong format. So not all indices help all queries.
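A quick illustration of the "wrong order" point, with invented names: an index on (last_name, first_name) helps a query that filters on last_name, but not one that filters only on first_name.

    CREATE INDEX idx_name ON people (last_name, first_name);
    -- can use the index: the filter covers the leftmost column
    SELECT * FROM people WHERE last_name = 'Smith';
    -- cannot use the index effectively: the leftmost column is skipped
    SELECT * FROM people WHERE first_name = 'Anna';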
By your logic, you wouldn't index just every column, but every permutation of every column. The overhead involved in storing this information, and in keeping it up to date, would be utterly vast.
Generally, an index is helpful if it has good selectivity, i.e. when the query selects a small portion of the data based on the value (or range) of the indexed attribute.
Indexes are also good for merge joins, where sorting rows by the join attribute in both joined tables allows the engine to match rows and retrieve data in one pass.
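For example, joins get much cheaper when the join column on the looked-up side is indexed; a small sketch with invented table names:

    -- index the column used in the join condition on the table being looked up
    CREATE INDEX idx_orders_customer ON orders (customer_id);
    SELECT c.name, o.total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id;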
As already mentioned, indexes slow down updates and take up some memory (which, by itself, slows down performance as well).

Does the number of columns affect MySQL speed?

I have a table. I only need to run one type of query: find a given unique value in column 1, then get, say, the first 3 columns out.
Now, how much would it affect speed if I added an extra few columns to the table for basically "data storage"? I know I should use a separate table, but let's assume I am constrained to having just 1 table, so the only way is to add some columns at the end.
So, if I add some columns at the end, say 10 of varchar(30) each, will this slow down the query described in the first sentence? If so, by how much of a factor, compared to not having the extra redundant yet present columns?
Yes, extra data can slow down queries because it means fewer rows can fit into a page, and this means more disk accesses to read a certain number of rows and fewer rows can be cached in memory.
The exact factor in slow down is hard to predict. It could be negligible, but if you are near the boundary between being able to cache the entire table in memory or not, a few extra columns could make a big difference to the execution speed. The difference in the time it takes to fetch a row from a cache in memory or from disk is several orders of magnitude.
If you add a covering index the extra columns should have less of an impact as the query can use the relatively narrow index without needing to refer to the wider main table.
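For example, assuming the unique lookup column is col1 and the query also needs col2 and col3 (names invented), a covering index keeps the wide extra columns out of the picture entirely:

    -- covering index: lookup column plus the other columns the query returns
    CREATE INDEX idx_cover ON mytable (col1, col2, col3);
    -- answered from the index alone, however wide the rest of the row becomes
    SELECT col1, col2, col3 FROM mytable WHERE col1 = 'some-value';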
I don't understand the 'I know I should use a separate table' bit. What you've described is the reason you have a DB, to associate a key with some related data. Look at it another way, how else do you retrieve this information if you don't have the key?
To answer your question, the only way to know what the performance hit is going to be is empirical testing (though Mark's answer, posted just prior to mine, covers one of the VERY many factors that affect speed).
That depends a bit on how much data you already have in the records. The difference would normally be somewhere between almost none at all and not so big.
The difference comes from how much more data has to be loaded from disk to get to the data you want. The extra columns will likely mean that there is room for fewer records in each page, but it's possible that there happens to be enough room left in each page for most of the extra data, so that few extra blocks are needed. It depends on how well the current data lines up in the pages.
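One way to gauge the effect is to compare the average row length and total data size before and after adding the columns (the table name is a placeholder):

    -- Avg_row_length and Data_length show how much wider the rows have become
    SHOW TABLE STATUS LIKE 'mytable';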

Are indexes good or bad for a large database?

I read on the MySQL Performance Blog that when tables are large, it is better to scan the full table instead of using indexes.
I have a table with tens of millions of rows. When I run queries without indexes, they are 24 times slower than with indexes. I know a lot of things may cause this (e.g., whether rows are stored sequentially), but can you please give me some hints about what might be happening, or how I should start examining this issue? I want to understand when the use of indexes is preferred and when it's not.
Thanks
The article says that when dealing with very large data sets, where the number of rows you need to work with approaches the number of rows in the table, using an index might hurt performance.
In this case, going through the index will indeed hurt performance, as long as you need more data than is present in the index.
To go through the index, the database engine first has to read large parts of the index table (it is a type of table), then for each row (or set of rows) from this result, go to the real table and start cherrypicking pages to read.
If, on the other hand, you only need to retrieve columns that are already part of the index table, then the database engine only has to read from that, and not continue on to the full table for more data.
If you end up reading most or close to most of the actual table in question, all the work required to deal with the index might be more overhead than just doing a full table-scan to begin with.
Now, this is all the article is saying. For most work dealing with a database, using indexes is the exact right thing to do.
For instance, if you need to extract a small set of rows, going through an index instead of a full table scan will be many orders of magnitude faster.
In any case, if you're in doubt, you should do some performance profiling to find out how your application behaves under different types of loads, and then start tweaking; don't take a single article as a silver bullet for anything.
For instance, one way to speed up the example query in the article that does a count on the pad column would be to create a single index covering both val and pad. That way, the count would simply be an index scan, and not an index scan plus table lookups, and would run faster than the full table scan.
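Sketching that suggestion with a placeholder table name (the article's exact schema and query are not reproduced here):

    -- covering index over both columns, so the count never touches the base table
    ALTER TABLE t ADD INDEX idx_val_pad (val, pad);
    -- now an index-only scan rather than index lookups plus table reads
    SELECT COUNT(pad) FROM t WHERE val = 5;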
Your best option is to know your data, and to experiment, and to know how the tools you use work, so indeed, learn more about indexes, but in the end, it is you who decides what is best for your program.
As always, it depends. I've so far never run into a scenario like the one described in that blog post. Using indexes on my queries against large tables (50+ million rows) has been on the order of 100 to 10000 times faster than doing a full table scan on these big tables.
There's probably no silver bullet here, you have to test for your particular data and your particular queries.
It is good practice to put an index on each column you use in a WHERE clause.

Scanning a MySQL table from the bottom

When I run a MySQL SELECT statement, it takes a very long time because I have previously deleted a very large number of rows.
Is there a way for the table to start scanning from the bottom, as opposed to from the top?
A query does not scan the table in any particular order; it might do so if it happens to traverse a particular index in order (e.g. a range scan), which MIGHT be because you used an ORDER BY.
Databases just don't work like that. You cannot rely on their behaviour in that way.
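If "the bottom" means the most recently inserted rows, the reliable way to ask for them is an explicit ORDER BY on an indexed column; a minimal sketch, with an assumed auto-increment id column:

    -- rows have no inherent order; state the order you want explicitly
    SELECT * FROM mytable ORDER BY id DESC LIMIT 100;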
If you're doing a full table scan, expect it to take a while, particularly if you've deleted a lot of rows recently. However, it will take even longer if you have lots of rows.
Ensure that the query uses indexes instead. Look at the explain plan and make sure it uses them.
Maybe you need an additional index for your table. It also doesn't hurt to issue OPTIMIZE TABLE and ANALYZE TABLE occasionally. Query performance shouldn't be affected by having deleted rows, even large numbers of rows, provided you have suitable indexes.
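A minimal way to check and tidy up, with a placeholder table name:

    -- check whether the query uses an index (look at the key and rows columns)
    EXPLAIN SELECT * FROM mytable WHERE some_column = 'some value';
    -- reclaim space left by the deleted rows and refresh index statistics
    OPTIMIZE TABLE mytable;
    ANALYZE TABLE mytable;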