Is there a SQL database implementation that allows selecting from a table a block of the n following records -- with respect to the index -- starting from a specified index value, with performance O(n + log table_size)? And that also allows adding a record in O(log table_size)? If so, how is it done?
I'm probably a dreamer, but is it possible with MySQL?
If id is the primary key on a table, then the following runs in time proportional to the number of records fetched, plus the cost of the initial index seek:
select t.*
from t
where id >= SOMEID
order by id
limit <n>;
Adding a record consists of two parts. The first part is finding available space and the second part is inserting into the index. A b-tree index should require O(log table_size) for an insert. If the pages are full and you are only inserting at the end of the table, then finding the right page is constant time.
In other words, if I understand your question correctly, primary key clustered indexes do exactly what you are asking for.
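A minimal sketch of the setup this describes, with hypothetical table/column names and values: the InnoDB PRIMARY KEY is the clustered index, so the seek-then-scan query costs O(log table_size) for the seek plus O(n) for the scan, and an append-style insert costs O(log table_size).

CREATE TABLE t (
  id      BIGINT NOT NULL AUTO_INCREMENT,
  payload VARCHAR(255),
  PRIMARY KEY (id)          -- InnoDB clusters the rows by this key
) ENGINE=InnoDB;

-- Seek to the starting id, then scan n rows in key order:
SELECT * FROM t WHERE id >= 1000 ORDER BY id LIMIT 20;

-- Adding a record is a single B-tree insert:
INSERT INTO t (payload) VALUES ('example');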
Related
I have the following query:
select * from `tracked_employments`
where `tracked_employments`.`file_id` = 10006000
and `tracked_employments`.`user_id` = 1003230
and `tracked_employments`.`can_be_sent` = 1
and `tracked_employments`.`type` = 'jobchange'
and `tracked_employments`.`file_type` = 'file'
order by `tracked_employments`.`id` asc
limit 1000
offset 2000;
and a composite index on the five columns in the WHERE clause.
EXPLAIN tells me that it does not use the index, but when I replace * with id it does use it. Why does it make a difference which columns I select?
Both you and Akina have misconceptions about how InnoDB indexing works.
Let me explain the two ways that query may be executed.
Case 1. Index is used.
This assumes the datatypes, etc, all match the 5-column composite index that seems to exist on the table. Note: because all the tests are for =, the order of the columns in the WHERE clause and the INDEX does not matter.
In InnoDB, id (or whatever column(s) are in the PRIMARY KEY) is implicitly appended to the index.
The lookup will go directly (in the Index's BTree) to the first row that matches all 5 tests. From there, it will scan forward. Each 'row' in the index has the PK, so it can reach over into the data's BTree to find any other columns needed for * (cf SELECT *).
But, it must skip over 2000 rows before delivering the 1000 that are desired. This is done by actually stepping over each one, one at a time. That is, OFFSET is not necessarily fast.
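A common alternative to a large OFFSET is keyset pagination: remember the last id delivered and seek past it on the next request. A sketch (the id value is a hypothetical last-seen id); because id is appended to the composite index, the same index can satisfy it:

SELECT *
FROM tracked_employments
WHERE file_id = 10006000
  AND user_id = 1003230
  AND can_be_sent = 1
  AND type = 'jobchange'
  AND file_type = 'file'
  AND id > 12345            -- hypothetical: last id from the previous page
ORDER BY id ASC
LIMIT 1000;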
Case 2. Don't bother with the index.
This happens based on some nebulous analysis of the 3000 rows that need to be touched and the size of the table.
The rationale behind possibly scanning the table without using the index is that the bouncing between the index BTree and the data BTree may be more costly than simply scanning the data BTree. Note that the data BTree is already in the desired order -- namely by id. (Assuming that is the PK.) That avoids a sort of up to 1000 rows.
Also, certain datatype issues may prevent the use of the index.
I do need to ask what the client will do with 1000 rows all at once. If it is a web page, that seems awfully big.
Case 3 -- Just SELECT id
In this case, all the info is available in the index, so there is no need to reach into the data's BTree.
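As a sketch, the id-only variant is a covering query; EXPLAIN should report "Using index" for it, since every needed column lives in the index BTree:

SELECT id
FROM tracked_employments
WHERE file_id = 10006000
  AND user_id = 1003230
  AND can_be_sent = 1
  AND type = 'jobchange'
  AND file_type = 'file'
ORDER BY id ASC
LIMIT 1000 OFFSET 2000;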
I want a query that does a fulltext search on one field and then a sort on a different field (imagine searching some text document and order by publication date). The table has about 17M rows and they are more or less uniformly distributed in dates. This is to be used in a webapp request/response cycle, so the query has to finish in at most 200ms.
Schematically:
SELECT * FROM table WHERE MATCH(text) AGAINST('query') ORDER BY date DESC LIMIT 10;
One possibility is having a fulltext index on the text field and a btree on the publication date:
ALTER TABLE table ADD FULLTEXT ft_text(text);
CREATE INDEX date_index ON table (date);
This doesn't work very well in my case. What happens is that MySQL evaluates two execution paths. One is using the fulltext index to find the relevant rows, and once they are selected use a FILESORT to sort those rows. The second is using the BTREE index to sort the entire table and then look for matches using a FULL TABLE SCAN. They're both bad. In my case MySQL chooses the former. The problem is that the first step can select some 30k results which it then has to sort, which means the entire query might take on the order of 10 seconds.
So I was thinking: do composite indexes of FULLTEXT+BTREE exist? If you know how a FULLTEXT index works, it first tokenizes the column you're indexing and then builds an index for the tokens. It seems reasonable to me to imagine a composite index such that the second index is a BTREE in dates for each token. Does this exist in MySQL and if so what's the syntax?
BONUS QUESTION: If it doesn't exist in MySQL, would PostgreSQL perform better in this situation?
Use IN BOOLEAN MODE.
The date index is not useful. There is no way to combine the two indexes.
Beware, if a user searches for something that shows up in 30K rows, the query will be slow. There is no straightforward way around it.
I suspect you have a TEXT column in the table? If so, there is hope. Instead of blindly doing SELECT *, let's first find the ids and get the LIMIT applied, then do the *.
SELECT a.*
FROM tbl AS a
JOIN ( SELECT date, id
FROM tbl
WHERE MATCH(...) AGAINST (...)
ORDER BY date DESC
LIMIT 10 ) AS x
USING(date, id)
ORDER BY date DESC;
Together with
PRIMARY KEY(date, id),
INDEX(id),
FULLTEXT(...)
This formulation and indexing should work like this:
Use FULLTEXT to find 30K rows, deliver the PK.
With the PK, sort 30K rows by date.
Pick the last 10, delivering date, id
Reach back into the table 10 times using the PK.
Sort again. (Yeah, this is necessary.)
More (Responding to a plethora of Comments):
The goal behind my reformulation is to avoid fetching all columns of 30K rows. Instead, it fetches only the PRIMARY KEY, then whittles that down to 10, then fetches * only 10 rows. Much less stuff shoveled around.
Concerning COUNT on an InnoDB table:
INDEX(col) makes it so that an index scan works for SELECT COUNT(*) or SELECT COUNT(col) without a WHERE.
Without INDEX(col), SELECT COUNT(*) will use the "smallest" index; but SELECT COUNT(col) will need a table scan.
A table scan is usually slower than an index scan.
Be careful of timing -- It is significantly affected by whether the index and/or table is already cached in RAM.
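To illustrate the COUNT points above, a sketch with a hypothetical table t and nullable column col:

ALTER TABLE t ADD INDEX (col);

SELECT COUNT(*)   FROM t;   -- can scan the smallest available index
SELECT COUNT(col) FROM t;   -- counts non-NULL values of col; with INDEX(col) this is an
                            -- index scan, without it a table scan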
Another thing about FULLTEXT is the + in front of words -- to say that each word must exist, else there is no match. This may cut down on the 30K.
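A boolean-mode sketch (the column name and search terms are hypothetical; it assumes a FULLTEXT index on text_col):

SELECT date, id
FROM tbl
WHERE MATCH(text_col) AGAINST('+word1 +word2' IN BOOLEAN MODE)  -- every +term must be present
ORDER BY date DESC
LIMIT 10;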
The FULLTEXT index will deliver the date, id in random order, not PK order. Anyway, it is 'wrong' to assume any ordering, hence it is 'right' to add ORDER BY, then let the Optimizer toss it if it knows that it is redundant. And sometimes the Optimizer can take advantage of the ORDER BY (not in your case).
Removing just the ORDER BY, in many cases, makes a query run much faster. This is because it avoids fetching, say, 30K rows and sorting them. Instead it simply delivers "any" 10 rows.
(I have no experience with Postgres, so I cannot address that question.)
I have a large MySql table with over 11 million rows. This is just a huge data set and my task is to be able to analyze the dataset based on certain rules.
Each row belongs to a certain category. There are 2 million different categories. I want to get all rows for a category and perform operations on that.
So currently, I do the following:
Select distinct categories from the table.
for each category : Select fields from table WHERE category=category
Even though my category column is indexed, it takes a really long time to execute Step 2. This is mainly because of the huge data set.
Alternatively, I can use a GROUP BY clause; however, I am not sure it will be as fast, since GROUP BY on such a huge dataset may be expensive, especially considering that I will be running my analysis several times on parts of the dataset. A way to permanently ensure a sorted table would be useful.
Therefore as an alternative, I can speed up my queries if only my table is pre-sorted by category. Now I can just read the table row by row and perform the same operations in a much faster time, as all rows of one category will be fetched consecutively.
As the dataset (MySQL table) is fixed and no update, delete, or insert operations will be performed on it, I want a way to maintain a default sort order by category. Can anyone suggest a trick to ensure the default sort order of the rows?
Maybe read all rows and rewrite them to a new table or add a new primary key which ensures this order?
Even though my category column is indexed
Indexed by a secondary index? If so, you can encounter the following performance problems:
InnoDB tables are always clustered and the secondary index in clustered table can require a double-lookup (see the "Disadvantages of clustering" in this article).
Indexed rows can be scattered all over the place (the index can have a bad clustering factor - the link is for Oracle but the principle is the same). If so, an index range scan (such as WHERE category = whatever) can end up loading many table pages, even though the index is actually used and only a small subset of rows is actually selected. This can destroy the range scan performance.
In alternative to the secondary index, consider using a natural primary key, which in InnoDB tables also acts as a clustering key. The primary/clustering key such as {category, no} will keep the rows of the same category physically close together, making both of your queries (and especially the second one) maximally efficient.
OTOH, if you want to keep the secondary index, consider covering all the fields that you query, so the primary B-Tree doesn't have to be touched at all.
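A sketch of both options, with hypothetical table, column, and index names:

-- Option 1: make category the leading part of the clustering (primary) key
CREATE TABLE rows_by_category (
  category INT NOT NULL,
  no       INT NOT NULL,         -- per-category sequence number
  payload  VARCHAR(255),
  PRIMARY KEY (category, no)     -- InnoDB stores rows physically in this order
) ENGINE=InnoDB;

INSERT INTO rows_by_category (category, no, payload)
SELECT category, seq, payload FROM old_table;    -- hypothetical source columns

-- Option 2: keep the secondary index, but let it cover the queried fields
ALTER TABLE old_table ADD INDEX idx_cat_covering (category, payload);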
You can do this in one step regardless of indexing by doing something like (pseudo code):
Declare #LastCategory int = Null
Declare #Category int
For Each Row In
Select
#Category = Category,
...
From
Table
Order By
Category
If #LastCategory Is Null Or #LastCategory != #Category
Do any "New Category Steps"
Set #LastCategory = #Category
End
Process Row
End For
With the index on category I'd expect this to perform OK. Your performance issues may be down to what you are doing when processing each row.
Here's an example: http://sqlfiddle.com/#!2/e53c98/1
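For reference, a runnable MySQL sketch of the same loop as a stored procedure with a cursor (table and column names are hypothetical; the processing steps are left as comments):

DELIMITER //
CREATE PROCEDURE process_by_category()
BEGIN
  DECLARE done          INT DEFAULT 0;
  DECLARE cur_category  INT;
  DECLARE last_category INT DEFAULT NULL;
  DECLARE cur CURSOR FOR SELECT category FROM my_table ORDER BY category;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO cur_category;
    IF done THEN
      LEAVE read_loop;
    END IF;
    IF last_category IS NULL OR last_category <> cur_category THEN
      -- "new category" steps go here
      SET last_category = cur_category;
    END IF;
    -- per-row processing goes here
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;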
I have a query of the following form:
SELECT * FROM MyTable WHERE Timestamp > [SomeTime] AND Timestamp < [SomeOtherTime]
I would like to optimize this query, and I am thinking about putting an index on timestamp, but am not sure if this would help. Ideally I would like to make timestamp a clustered index, but MySQL does not support clustered indexes, except for primary keys.
MyTable has 4 million+ rows.
Timestamp is actually of type INT.
Once a row has been inserted, it is never changed.
The number of rows with any given Timestamp is on average about 20, but could be as high as 200.
Newly inserted rows have a Timestamp that is greater than most of the existing rows, but could be less than some of the more recent rows.
Would an index on Timestamp help me to optimize this query?
No question about it. Without the index, your query has to look at every row in the table. With the index, the query will be pretty much instantaneous as far as locating the right rows goes. The price you'll pay is a slight performance decrease in inserts; but that really will be slight.
You should definitely use an index. MySQL has no clue what order those timestamps are in, and in order to find a record for a given timestamp (or timestamp range) it needs to look through every single record. And with 4 million of them, that's quite a bit of time! Indexes are your way of telling MySQL about your data -- "I'm going to look at this field quite often, so keep a list of where I can find the records for each value."
Indexes in general are a good idea for regularly queried fields. The only downside to defining indexes is that they use extra storage space, so unless you're really tight on space, you should use them. If they don't apply, MySQL will just ignore them anyway.
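A minimal sketch, assuming the table and column names from the question (the numeric bounds are placeholders):

ALTER TABLE MyTable ADD INDEX idx_timestamp (Timestamp);

-- the range predicate can now be resolved with an index range scan
SELECT *
FROM MyTable
WHERE Timestamp > 1600000000
  AND Timestamp < 1600086400;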
I don't disagree with the importance of indexing to improve select query times, but if you can index on other keys (and form your queries with those indexes), an index on timestamp may not be needed.
For example, if you have a table with timestamp, category, and userId, it may be better to create an index on userId instead. In a table with many different users this will considerably reduce the remaining set in which to search for the timestamp.
...and if I'm not mistaken, the advantage of this would be to avoid the overhead of maintaining the timestamp index on each insertion -- in a table with high insertion rates and highly unique timestamps this could be an important consideration.
I'm struggling with the same problems of indexing based on timestamps and other keys. I still have testing to do so I can put proof behind what I say here. I'll try to postback based on my results.
A scenario for better explanation:
timestamp 99% unique
userId 80% unique
category 25% unique
Indexing on timestamp will quickly reduce query results to 1% of the table size
Indexing on userId will quickly reduce query results to 20% of the table size
Indexing on category will quickly reduce query results to 75% of the table size
Insertion with indexes on timestamp will have high overhead **
Despite knowing that our insertions will have incrementing timestamps, I don't see any discussion of MySQL optimisation based on incremental keys.
Insertion with indexes on userId will have reasonably high overhead.
Insertion with indexes on category will have reasonably low overhead.
** I'm sorry, I don't know the calculated overhead of insertion with indexing.
If your queries are mainly using this timestamp, you could test this design (enlarging the Primary Key with the timestamp as first part):
CREATE TABLE perf (
  ts INT NOT NULL
, oldPK                       -- the previous primary key column
, ...                         -- other columns
, PRIMARY KEY (ts, oldPK)
, UNIQUE (oldPK)
) ENGINE = InnoDB ;
This will ensure that the queries like the one you posted will be using the clustered (primary) key.
The disadvantage is that your inserts will be a bit slower. Also, if you have other indices on the table, they will use a bit more space (as they will include the primary key, now 4 bytes wider).
The biggest advantage of such a clustered index is that queries with big range scans, e.g. queries that have to read large parts of the table or the whole table, will find the related rows sequentially and in the wanted order (by timestamp), which will also be useful if you want to group by day, week, month, or year.
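For example, a grouping query of that kind might look like this (a sketch that assumes ts holds Unix epoch seconds, which the question does not actually state):

SELECT FROM_UNIXTIME(ts, '%Y-%m-%d') AS day, COUNT(*) AS rows_per_day
FROM perf
WHERE ts >= UNIX_TIMESTAMP('2020-01-01')
  AND ts <  UNIX_TIMESTAMP('2020-02-01')
GROUP BY day
ORDER BY day;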
The old PK can still be used to identify rows by keeping a UNIQUE constraint on it.
You may also want to have a look at TokuDB, a MySQL (and open source) variant that allows multiple clustered indices.
Right now, I'm debating whether to use COUNT(id) or "count" columns. I heard that InnoDB COUNT is very slow without a WHERE clause because it needs to lock the table and do a full index scan. Is that the same behavior when using a WHERE clause?
For example, if I have a table with 1 million records. Doing a COUNT without a WHERE clause will require looking up 1 million records using an index. Will the query become significantly faster if adding a WHERE clause decreases the number of rows that match the criteria from 1 million to 500,000?
Consider the "Badges" page on SO, would adding a column in the badges table called count and incrementing it whenever a user earned that particular badge be faster than doing a SELECT COUNT(id) FROM user_badges WHERE user_id = 111?
Using MyISAM is not an option because I need the features of InnoDB to maintain data integrity.
SELECT COUNT(*) FROM tablename seems to do a full table scan.
SELECT COUNT(*) FROM tablename USE INDEX (colname) seems to be quite fast if the index available is NOT NULL, UNIQUE, and fixed-length. A non-UNIQUE index doesn't help much, if at all. Variable-length indices (VARCHAR) seem to be slower, but that may just be because the index is physically larger. Integer UNIQUE NOT NULL indices can be counted quickly. Which makes sense.
MySQL really should perform this optimization automatically.
Performance of COUNT() is fine as long as you have an index that's used.
If you have a million records and the column in question is NOT NULL, then a COUNT() will reach a million quite easily. If NULL values are allowed, those aren't indexed, so the number of records is easily obtained by looking at the index size.
If you're not specifying a WHERE clause, then the worst case is the primary key index will be used.
If you specify a WHERE clause, just make sure the column(s) are indexed.
I wouldn't say avoid, but it depends on what you are trying to do:
If you only need to provide an estimate, you could do SELECT MAX(id) FROM table. This is much cheaper, since it just needs to read the max value in the index.
If we consider the badges example you gave, InnoDB only needs to count up the number of badges that user has (assuming an index on user_id). I'd say in most cases that's not going to be more than 10-20, and it's not much harm at all.
It really depends on the situation. I probably would keep the count of the number of badges someone has on the main user table as a column (count_badges_awarded) simply because every time an avatar is shown, so is that number. It saves me having to do 2 queries.
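If you do keep a counter column, a sketch along those lines (the users table and the specific statements are illustrative; count_badges_awarded follows the naming above):

ALTER TABLE users ADD COLUMN count_badges_awarded INT NOT NULL DEFAULT 0;

-- bump the counter in the same transaction that awards the badge
UPDATE users
SET count_badges_awarded = count_badges_awarded + 1
WHERE id = 111;

-- or keep using COUNT, backed by an index so only that user's entries are scanned
ALTER TABLE user_badges ADD INDEX idx_user (user_id);
SELECT COUNT(*) FROM user_badges WHERE user_id = 111;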