MySQL index selection on large table

I have a couple of tables that look like this:
CREATE TABLE Entities (
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(45) NOT NULL,
client_id INT NOT NULL,
display_name VARCHAR(45),
PRIMARY KEY (id)
)
CREATE TABLE Statuses (
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(45) NOT NULL,
PRIMARY KEY (id)
)
CREATE TABLE EventTypes (
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(45) NOT NULL,
PRIMARY KEY (id)
)
CREATE TABLE Events (
id INT NOT NULL AUTO_INCREMENT,
entity_id INT NOT NULL,
date DATE NOT NULL,
event_type_id INT NOT NULL,
status_id INT NOT NULL,
PRIMARY KEY (id)
)
Events is large: > 100,000,000 rows.
Entities, Statuses, and EventTypes are small: < 300 rows apiece.
I have several indexes on Events, but the ones that come into play are
idx_events_date_ent_status_type (date, entity_id, status_id, event_type_id)
idx_events_ent_status_type (entity_id, status_id, event_type_id)
idx_events_date_ent_type (date, entity_id, event_type_id)
I have a large complicated query, but I'm getting the same slow query results with a simpler one like the one below (note, in the real queries, I don't use evt.*)
SELECT evt.*, ent.name AS ent_name, s.name AS stat_name, et.name AS type_name
FROM `Events` evt
JOIN `Entities` ent ON evt.entity_id = ent.id
JOIN `EventTypes` et ON evt.event_type_id = et.id
JOIN `Statuses` s ON evt.status_id = s.id
WHERE
evt.date BETWEEN #start_date AND #end_date AND
evt.entity_id IN ( 19 ) AND -- this in clause is built by code
evt.event_type_id = #type_id
For some reason, MySQL keeps choosing the index that doesn't cover Events.date, and the query takes 15 seconds or more while returning only a couple of thousand rows. If I change the query to:
SELECT evt.*, ent.name AS ent_name, s.name AS stat_name, et.name AS type_name
FROM `Events` evt force index (idx_events_date_ent_status_type)
JOIN `Entities` ent ON evt.entity_id = ent.id
JOIN `EventTypes` et ON evt.event_type_id = et.id
JOIN `Statuses` s ON evt.status_id = s.id
WHERE
evt.date BETWEEN #start_date AND #end_date AND
evt.entity_id IN ( 19 ) AND -- this in clause is built by code
evt.event_type_id = #type_id
The query takes .014 seconds.
Since this query is built by code, I would much rather not force the index, but mostly, I want to know why it chooses one index over the other. Is it because of the joins?
To give some stats, there are ~2500 distinct dates, and ~200 entities in the Events table. So I suppose that might be why it chooses the index with all of the low cardinality columns.
Do you think it would help to add date to the end of idx_events_ent_status_type? Since this is a large table, it takes a long time to add indexes.
I tried adding an additional index,
ix_events_ent_date_status_et (entity_id, date, status_id, event_type_id)
and it actually made the queries slower.
I will experiment a bit more, but I feel like I'm not sure how the optimizer makes its decisions.
Additional Info:
I tried removing the join to the Statuses table, and MySQL switches to idx_events_date_ent_type, and the query runs in 0.045 seconds.
I can't wrap my head around why removing a join to a table that is not part of the filter impacts the choice of index.

I would add this index:
ALTER TABLE Events ADD INDEX (event_type_id, entity_id, date);
The order of columns is important. Put all column(s) used in equality conditions first. This is event_type_id in this case.
The optimizer can use multiple columns to optimize equalities, if the columns are left-most and consecutive.
Then the optimizer can use one more column to optimize a range condition. A range condition is anything other than = or IS NULL. So range conditions include >, !=, BETWEEN, IN(), LIKE (with no leading wildcard), IS NOT NULL, and so on.
The condition on entity_id is also an equality condition if the IN() list has one element. MySQL's optimizer can treat a list of one value as an equality condition. But if the list has more than one value, it becomes a range condition. So if the example you showed of IN (19) is typical, then all three columns of the index will be used for filtering.
It's still worth putting date in the index, because it can at least tell the InnoDB storage engine to filter rows before returning them. See https://dev.mysql.com/doc/refman/8.0/en/index-condition-pushdown-optimization.html It's not quite as good as a real index lookup, but it's worthwhile.
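If you want to check whether the pushdown is actually kicking in, an EXPLAIN along these lines should show "Using index condition" in the Extra column (a sketch only; the index name, ids, and dates below are made up):
ALTER TABLE Events ADD INDEX idx_events_type_ent_date (event_type_id, entity_id, date);
EXPLAIN
SELECT evt.*
FROM Events evt
WHERE evt.event_type_id = 4                             -- equality on the leftmost index column
  AND evt.entity_id IN (19, 23)                         -- multi-value IN: a range, so date can't be used for the lookup
  AND evt.date BETWEEN '2023-01-01' AND '2023-01-31';   -- but ICP can still filter on date inside InnoDB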
I would also suggest creating a smaller table to test with. Doing experiments on a 100 million row table is time-consuming. But you do need a table with a non-trivial amount of data, because if you test on an empty table, the optimizer behaves differently.
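For instance (just a sketch; the scratch-table name, date cutoff, and row count are arbitrary):
CREATE TABLE Events_test LIKE Events;          -- copies the column definitions and indexes
INSERT INTO Events_test
SELECT * FROM Events
WHERE date >= '2023-01-01'                     -- grab a recent slice instead of all 100M rows
LIMIT 5000000;
ANALYZE TABLE Events_test;                     -- refresh index statistics so the optimizer sees realistic numbers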

Rearrange your indexes to have columns in this order:
Any column(s) that will be tested with = or IS NULL.
Column(s) tested with IN -- If there is a single value, this will be further optimized to = for you.
One "range" column, such as your date.
Note that nothing after a "range" test will be used by WHERE.
(There are exceptions, but most are not relevant here.)
More discussion: Index Cookbook
Since the tables smell like Data Warehousing, I suggest looking into Summary Tables. In some cases, long queries on Events can be moved to the summary table(s), where they run much faster. Also, this may eliminate the need for some (or maybe even all) secondary indexes.
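To make that concrete, here is a minimal sketch of such a summary table (the table name, daily granularity, and refresh schedule are assumptions on my part):
CREATE TABLE EventDailyCounts (
date DATE NOT NULL,
entity_id INT NOT NULL,
event_type_id INT NOT NULL,
status_id INT NOT NULL,
event_count INT UNSIGNED NOT NULL,
PRIMARY KEY (date, entity_id, event_type_id, status_id)
);
-- run once per day, aggregating the previous day's events
INSERT INTO EventDailyCounts
SELECT date, entity_id, event_type_id, status_id, COUNT(*)
FROM Events
WHERE date = CURRENT_DATE - INTERVAL 1 DAY
GROUP BY date, entity_id, event_type_id, status_id;
Range queries can then SUM(event_count) over a handful of summary rows instead of touching Events at all.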
Since Events is rather large, I suggest using smaller numbers where practical. INT takes 4 bytes. Speed will improve slightly if you shrink those where appropriate.
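For example (a sketch that assumes the id ranges really will stay under 65,535; note that an ALTER like this rewrites the 100-million-row table):
ALTER TABLE Events
MODIFY entity_id SMALLINT UNSIGNED NOT NULL,      -- 2 bytes instead of 4
MODIFY status_id SMALLINT UNSIGNED NOT NULL,
MODIFY event_type_id SMALLINT UNSIGNED NOT NULL;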
When you have INDEX(a,b,c), that index will handle cases that need INDEX(a,b) and INDEX(a). Keep the longer one. (Sometimes the Optimizer picks the shorter index 'erroneously'.)

To most effectively use a composite index on multiple values of two different fields, you need to specify the values with joins instead of simple where conditions. So assuming you are selecting dates from 2022-12-01 to 2022-12-03 and entity_id in (1,2,3), do:
select ...
from (select date('2022-12-01') date union all select date('2022-12-02') union all select date('2022-12-03')) dates
join Entities on Entities.id in (1,2,3)
join Events on Events.entity_id=Entities.id and Events.date=dates.date
If you pre-create a dates table with all dates from 0000-01-01 to 9999-12-31, then you can do:
select ...
from dates
join Entities on Entities.id in (1,2,3)
join Events on Events.entity_id=Entities.id and Events.date=dates.date
where dates.date between #start_date and #end_date
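One way to pre-create that dates table (a sketch that assumes MySQL 8.0+ for the recursive CTE and only covers 2000 through 2099 rather than all of 0000-01-01 to 9999-12-31):
CREATE TABLE dates (
date DATE NOT NULL,
PRIMARY KEY (date)
);
SET SESSION cte_max_recursion_depth = 100000;   -- default is 1000, far fewer than ~36,500 days
INSERT INTO dates
WITH RECURSIVE d AS (
SELECT DATE '2000-01-01' AS date
UNION ALL
SELECT date + INTERVAL 1 DAY FROM d WHERE date < '2099-12-31'
)
SELECT date FROM d;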

Related

How to optimize SQL query with WHERE IN subquery

I have two tables in MySQL 5.6 for collecting event data.
When an event occurs, it generates data over a certain time period.
The parent table, named 'event', stores the last state of the event.
The child table, named 'event_version', stores all data versions generated by any event.
The schemas for these tables look like this:
CREATE TABLE `event` (
`id` BIGINT(20) NOT NULL,
`version_id` BIGINT(20), -- refers to the last event_version
`version_number` BIGINT(20), -- consecutive number, increased when a new version appears
`first_event_time` TIMESTAMP(6), -- time when a set of event data was generated first time,
-- it is immutable after creation
`event_time` TIMESTAMP(6), -- time when a set of event data changed last time
`other_event_data` VARCHAR(30), -- more columns omitted
PRIMARY KEY (`id`),
INDEX `event_time` (`event_time`),
INDEX `version_id` (`version_id`),
CONSTRAINT `FK_version_id` FOREIGN KEY (`version_id`) REFERENCES `event_version` (`id`)
);
CREATE TABLE `event_version` (
`id` BIGINT(20) NOT NULL,
`event_id` BIGINT(20), -- refers to event
`version_number` BIGINT(20), -- consecutive number, increased when a new version appears
`event_time` TIMESTAMP(6) NULL DEFAULT NULL, -- time when a set of event data was generated
`other_event_data` VARCHAR(30), -- more columns omitted
PRIMARY KEY (`id`),
INDEX `event_time` (`event_time`), -- time when a set of event data changed
INDEX `event_id` (event_id),
CONSTRAINT `FK_event_id` FOREIGN KEY (`event_id`) REFERENCES `event` (`id`)
);
I want to get all event_version rows for events that had new versions added in the selected time period.
For example: there is an event with event.id=21 that appeared on 2019-04-28 and produced versions on:
2019-04-28 version_number: 1, event_version.event_id=21
2019-04-30 version_number: 2, event_version.event_id=21
2019-05-02 version_number: 3, event_version.event_id=21
2019-05-04 version_number: 4, event_version.event_id=21
I want these records to be found when I search for the period from 2019-05-01 to 2019-06-01.
The idea is to find all event_version.event_id values created in the selected period, and then all rows from event_version whose event_id is in that list.
To create the list of event ids I have two inner SELECT queries.
The first query:
SELECT DISTINCT event_id FROM event_version WHERE event_time>='2019-05-01' AND event_time<'2019-06-01';
It takes about 10s and returns about 500 000 records.
But I have second query which uses parent table and looks like this:
SELECT id FROM event WHERE (first_event_time>='2019-05-01' AND first_event_time<'2019-06-01') OR (first_event_time<'2019-05-01' AND event_time>'2019-05-01');
It takes about 7s and returns the same set of ids.
Then I use these subqueries in my final query:
SELECT * FROM event_version WHERE event_id IN (<one of the previous two queries>);
The problem is that when I use the second subquery it takes about 8s to produce the result (about 5 million records).
Producing the same result with the first subquery takes 3 minutes and 15s.
I can't understand why there is such a big difference in execution time even though the subqueries produce the same list of ids.
I want to use a subquery like the first one because it depends only on event_time, not on additional data from the parent table.
I have more similar tables where I can rely only on event_time.
My question: is it possible to optimize the query to produce the expected result using only event_time?
As I understand, you want the following query to be optimized:
SELECT *
FROM event_version
WHERE event_id IN (
SELECT DISTINCT event_id
FROM event_version
WHERE event_time >= '2019-05-01'
AND event_time < '2019-06-01'
)
Things I would try:
Create an index on event_version(event_time, event_id). This should improve the performance of the subquery by avoiding a second lookup to get the event_id, though the overall performance will probably be similar. The reason is that WHERE IN (<subquery>) tends to be slow (at least in older versions) when the subquery returns a lot of rows.
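For reference, that index could be added like this (the name is arbitrary):
ALTER TABLE event_version ADD INDEX ix_event_time_event_id (event_time, event_id);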
Try a JOIN with your subquery as derived table:
SELECT *
FROM (
SELECT DISTINCT event_id
FROM event_version
WHERE event_time >= '2019-05-01'
AND event_time < '2019-06-01'
) s
JOIN event_version USING(event_id)
Check whether the index mentioned above is of any help here.
Try an EXISTS subquery:
SELECT v.*
FROM event e
JOIN event_version v ON v.event_id = e.id
WHERE EXISTS (
SELECT *
FROM event_version v1
WHERE v1.event_id = e.id
AND v1.event_time >= '2019-05-01'
AND v1.event_time < '2019-06-01'
)
Here you would need an index on event_version(event_id, event_time). Though the performance might be even worse. I would bet on the derived table join solution.
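That supporting index for the EXISTS variant might be created like so (again, the name is arbitrary):
ALTER TABLE event_version ADD INDEX ix_event_id_event_time (event_id, event_time);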
My guess - why your second query runs faster - is that the optimizer is able to convert the IN condition to a JOIN, because the returned column is the primary key of the event table.
I'm guessing the event_version table is a lot bigger than the event table. The subqueries on their own are easy: you scan a table once for a predicate and return the rows. But inside a subquery, for every row the outer query checks, the subquery may get executed again. So if event_version has 1M rows, it could execute the subquery 1M times. There is probably some smarter logic that keeps it from being quite this extreme, but the principle stands.
However, I fail to see the point of the third query. If you use the third query with the first query as the subquery, you get exactly the same rows as if you had simply run the first query as a SELECT * against event_version, so why the subquery?
Wouldn't this:
SELECT * FROM event_version WHERE event_id IN (insert query 1);
be the same as
SELECT * FROM event_version WHERE event_time>='2019-05-01' AND event_time<'2019-06-01';
?

Keep subset of records separately for query performance mysql

I have a large table containing over 10 million records, and it will keep growing. I am performing an aggregation query (a count of a particular value) on records from the last 24 hours. The time taken by this query will keep increasing with the number of records in the table.
I could limit the time taken by keeping the last 24 hours of records in a separate table and performing the aggregation on that table. Does MySQL provide any functionality to handle this kind of scenario?
Table schema and query for reference:
CREATE TABLE purchases (
Id int(11) NOT NULL AUTO_INCREMENT,
ProductId int(11) NOT NULL,
CustomerId int(11) NOT NULL,
PurchaseDateTime datetime(3) NOT NULL,
PRIMARY KEY (Id),
KEY ix_purchases_PurchaseDateTime (PurchaseDateTime) USING BTREE,
KEY ix_purchases_ProductId (ProductId) USING BTREE,
KEY ix_purchases_CustomerId (CustomerId) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
select COALESCE(sum(ProductId = v_ProductId), 0),
COALESCE(sum(CustomerId = v_CustomerId), 0)
into v_ProductCount, v_CustomerCount
from purchases
where PurchaseDateTime > NOW() - INTERVAL 1 DAY
and ( ProductId = v_ProductId
or CustomerId = v_CustomerId );
Build and maintain a separate Summary table.
With partitioning, you might get a small improvement, or you might get no improvement. With a summary table, you might get a factor of 10 improvement.
The summary table could have a 1-day resolution, or you might need 1-hour. Please provide SHOW CREATE TABLE for what you currently have, so we can discuss more specifics.
(There is no built-in mechanism for what you want.)
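To illustrate the idea, here is a rough sketch at 1-hour resolution (the summary table and column names are invented):
CREATE TABLE purchases_hourly (
HourStart DATETIME NOT NULL,
ProductId INT NOT NULL,
CustomerId INT NOT NULL,
PurchaseCount INT NOT NULL,
PRIMARY KEY (HourStart, ProductId, CustomerId)
);
-- run once per hour, aggregating the hour that just finished
INSERT INTO purchases_hourly
SELECT DATE_FORMAT(PurchaseDateTime, '%Y-%m-%d %H:00:00') AS HourStart,
ProductId, CustomerId, COUNT(*)
FROM purchases
WHERE PurchaseDateTime >= DATE_FORMAT(NOW() - INTERVAL 1 HOUR, '%Y-%m-%d %H:00:00')
AND PurchaseDateTime < DATE_FORMAT(NOW(), '%Y-%m-%d %H:00:00')
GROUP BY HourStart, ProductId, CustomerId;
The 24-hour counts then become a SUM over the last 24 summary rows plus a small query against purchases for the current, partial hour.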
Plan A
I would leave off
and ( ProductId = v_ProductId
or CustomerId = v_CustomerId )
since the rest of the query will simply deal with it anyway.
Then I would add
INDEX(PurchaseDateTime, ProductId, CustomerId)
which would be "covering" -- that is, the entire SELECT can be performed in the INDEX's BTree. It would also be 'clustered' in the sense that all the data needed would be consecutively stored in the index. Yes, the datetime is deliberately first. (OR is a nuisance to optimize. I don't trust the Optimizer to do "index merge union".)
Plan B
If you expect to touch very few rows (because of v_ProductId and v_CustomerId), then the following may be faster, in spite of being more complex:
SELECT COALESCE(sum(ProductId = v_ProductId), 0)
INTO v_ProductCount
FROM purchases
WHERE PurchaseDateTime > NOW() - INTERVAL 1 DAY
AND ProductId = v_ProductId;
SELECT COALESCE(sum(CustomerId = v_CustomerId), 0)
INTO v_CustomerCount
FROM purchases
WHERE PurchaseDateTime > NOW() - INTERVAL 1 DAY
AND CustomerId = v_CustomerId;
together with both:
INDEX(ProductId, PurchaseDateTime),
INDEX(CustomerId, PurchaseDateTime)
Yes, the order of the columns is deliberately different.
Original Question
Both of these approaches are better than your original suggestion of a separate table. These isolate the data in one part of an index (or two indexes), thereby having the effect of "separate". And these do the task with less effort on your part.

MySQL how to create a correct index

MySQL 5.5
I am trying to find the correct index for a query.
Table:
create table trans (
trans_id int(11) not null auto_increment,
acct_id int(11) not null,
status_id int(11) not null,
trans_transaction_type varchar(5) not null,
trans_amount float(9,3) default null,
trans_datetime datetime not null default '0000-00-00 00:00:00',
primary key (trans_id)
)
Query:
select trans_id
from trans
where acct_id = _acctid
and trans_transaction_type in ('_1','_2','_3','_4')
and trans_datetime between _start and _end
and status_id = 6
Cardinality:
select *
from information_schema.statistics
where table_name='trans'
Result:
trans_id 424339375
acct_id 12123818
trans_transaction_type 70722272
trans_datetime 84866726
status_id 22
What is the correct index for this query?
alter table trans add index (acct_id, trans_transaction_type, trans_datetime, status_id);
alter table trans add index (acct_id, trans_datetime, trans_transaction_type, status_id);
etc...
Which columns go first in the index?
The goal is query speed/performance. Disk space usage is of no concern.
The point of indexing a table is to make queries lighter and improve performance. The first index to add should always be the primary key of the table (trans_id in this case), and after that, the other id columns should be indexed too.
alter table trans add index (trans_id, acct_id, status_id);
The other fields do not need indexes unless you query on them often.
Plan A
Start with any WHERE clause that is col = constant. Then move on to one more thing.
Suggest you add both of the following, because it is not easy to predict which will be better:
INDEX(acct_id, status_id, trans_transaction_type)
INDEX(acct_id, status_id, trans_datetime)
Plan B
Do you really have only trans_id in the SELECT list? If so, then it should not be bad to turn this into a "covering" index. That's an index where the entire operation can be performed in the BTree where the index lives, thereby avoiding having to reach over into the data.
To build such, first build the optimal non-covering index, then add the rest of the fields mentioned anywhere in the query. Either of these should work:
INDEX(acct_id, status_id, trans_datetime, trans_transaction_type, trans_id)
INDEX(acct_id, status_id, trans_transaction_type, trans_datetime, trans_id)
The first two fields can be in either order (both are '='). The last two fields can be in either order (both are useless for finding the rows; they exist only for 'covering').
I recommend against having more than, say, 5 columns in an index.
More info in my Index Cookbook.
Notes
Perform EXPLAIN SELECT. You should see 'Using index' when it is a 'covering' index.
I think the EXPLAIN's Key_len will (in all cases here) show the combined lengths of only acct_id and status_id.
Are you in a Stored Procedure? If the version in the SP runs significantly slower than when you experiment by hand, you may need to re-code it to CONCAT, PREPARE, and EXECUTE the query.
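A rough sketch of that pattern (the procedure name and parameters are placeholders, and the IN-list on transaction type is left out for brevity):
DELIMITER //
CREATE PROCEDURE trans_report(IN _acctid INT, IN _start DATETIME, IN _end DATETIME)
BEGIN
  -- build the statement as a string so the optimizer sees literal values instead of SP variables
  SET @sql = CONCAT('SELECT trans_id FROM trans',
                    ' WHERE acct_id = ', _acctid,
                    ' AND status_id = 6',
                    ' AND trans_datetime BETWEEN ', QUOTE(_start), ' AND ', QUOTE(_end));
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;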

Proper Indexing/Optimization of a MySQL GROUP BY and JOIN Query

I've done a lot of reading and Googling on this and I cannot find any satisfactory answer so I'd appreciate any help. Most answers I find come close to my situation but do not address it (and attempting to follow the solutions has not done me any good).
See Edit #2 below for the best example
[This was the original question but is not a great representation of what I'm asking.]
Say I have 2 tables, each with 4 columns:
key (int, auto increment)
c1 (a date)
c2 (a varchar of length 3)
c3 (also a varchar of length 3)
And I want to perform the following query:
SELECT t.c1, t.c2, COUNT(*)
FROM test1 t
LEFT JOIN test2 t2 ON t2.key = t.key
GROUP BY t.c1, t.c2
Both key fields are indexed as primary keys. I want to get the number of rows returned in each grouping of c1, c2.
When I explain this query I get "using temporary; using filesort". The actual table I'm performing this query on is over 500,000 rows, so that means it's a time consuming query.
So my question is (assuming I'm not doing anything wrong in the query): is there a way to index this table to eliminate the temporary/filesort usage?
Thanks in advance for any help.
Edit
Here is the table definition (in this example both tables are identical - in reality they're not but I'm not sure it makes a difference at this point):
CREATE TABLE `test1` (
`key` int(11) NOT NULL auto_increment,
`c1` date NOT NULL,
`c2` varchar(3) NOT NULL,
`c3` varchar(3) NOT NULL,
PRIMARY KEY (`key`),
UNIQUE KEY `c1` (`c1`,`c2`),
UNIQUE KEY `c2_2` (`c2`,`c1`),
KEY `c2` (`c2`,`c3`)
) ENGINE=MyISAM AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
Full EXPLAIN statement:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t ALL NULL NULL NULL NULL 2 Using temporary; Using filesort
1 SIMPLE t2 eq_ref PRIMARY PRIMARY 4 tracking.t.key 1 Using index
This is just for my sample tables. In my real tables the rows value for t says 500,000+ (every row in the table, though that could be related to something else).
Edit #2
Here is a more concrete example to better explain my situation.
Let's say I have data on Little League baseball games. I have two tables. One holds data on the games:
CREATE TABLE `ex_games` (
`game_id` int(11) NOT NULL auto_increment,
`home_team` int(11) NOT NULL,
`date` date NOT NULL,
PRIMARY KEY (`game_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
The other holds data on the at bats in each game:
CREATE TABLE `ex_atbats` (
`ab_id` int(11) NOT NULL auto_increment,
`game` int(11) NOT NULL,
`team` int(11) NOT NULL,
`player` int(11) NOT NULL,
`result` tinyint(1) NOT NULL,
PRIMARY KEY (`ab_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
So I have two questions. Let's start with the simple version: I want to return a list of games with a count of how many at bats are in each game. So I think I would do something like this:
SELECT date, home_team, COUNT(h.ab_id) FROM `ex_atbats` h
LEFT JOIN ex_games g ON g.game_id = h.game
GROUP BY g.game_id
This query uses filesort/temporary. Is there a better way to structure this or to index the tables to get rid of that?
Then, the trickier part: say I now want to not only include a count of the number of at bats, but also include a count of the number of at bats that were preceded by an at bat with the same result by the same team. I assume that would be something like:
SELECT g.date, g.home_team, COUNT(ab.ab_id), COUNT(ab2.ab_id) FROM `ex_atbats` ab
LEFT JOIN ex_games g ON g.game_id = ab.game
LEFT JOIN ex_atbats ab2 ON ab2.ab_id = ab.ab_id - 1 AND ab2.result = ab.result
GROUP BY g.game_id
Is that the correct way to structure that query? This also uses filesort/temporary.
So what is the optimal way to go about accomplishing these tasks?
Thanks again.
The phrases Using temporary and Using filesort are usually not related to the indexes used in the JOIN operation. There are numerous examples where all the indexes are in place (they show up in the key and key_len columns in EXPLAIN) but you still get Using temporary and Using filesort.
Check out what the manual says about Using temporary and Using filesort:
How MySQL Uses Internal Temporary Tables
ORDER BY Optimization
Having a combined index on all columns used in the GROUP BY clause may help get rid of Using filesort in certain circumstances. If you also use ORDER BY, you may need to add more complex indexes.
If you have a huge dataset consider partitioning it using some criteria like date or timestamp by means of actual partitioning or a simple WHERE clause.
First of all, the tables' definitions do matter. It's one thing to join using two primary keys, another to join using a primary key on one side and a non-unique key on the other, etc. It also matters which engine the tables use, as InnoDB treats primary keys differently than the MyISAM engine does.
What I notice though is that on table test1, the (c1,c2) combination is Unique and the fields are not nullable. This allows your query to be rewritten as:
SELECT t.c1, t.c2, COUNT(*)
FROM test1 t
LEFT JOIN test2 t2 ON t2.key = t.key
GROUP BY t.key
It will give the same results while using the same field for the JOIN and the GROUP BY. Note that MySQL allows you to use in the SELECT list fields that are not in the GROUP BY list, without having aggregate functions on them. This is not allowed in most other systems and is seen as a bug by some. In this situation though it is a very nice feature. Every row can be either identified by (key) or (c1,c2), so it shouldn't matter which of the two is used for the grouping.
Another thing to note is that when you use LEFT JOIN, it's common to use the joining column from the right side for the counting: COUNT(t2.key) and not COUNT(*). Your original query will give 1 in that column for records in test1 that do not match any record in test2, because it counts rows, while you probably want to count the related records in test2 and show 0 in those cases.
So, try this query and post the EXPLAIN:
SELECT t.c1, t.c2, COUNT(t2.key)
FROM test1 t
LEFT JOIN test2 t2 ON t2.key = t.key
GROUP BY t.key
The indexes help with the join, but you still need to do a full sort in order to do the group by. Essentially, it still has to process every record in the set.
Adding a where clause and limiting the set would run faster, of course. It just won't get you the results you want.
There may be other options than doing a group by on the entire table. I notice you're doing a SELECT * - What are you trying to get out of the query?
SELECT DISTINCT c1, c2
FROM test t
LEFT JOIN test2 t2 ON t2.key = t.key
may run faster, for instance. (I realize this was just a sample query, but understand that it's hard to optimize when you don't know what the end goal is!)
EDIT - In doing some reading (http://dev.mysql.com/doc/refman/5.0/en/group-by-optimization.html), I learned that, under the correct circumstances, indexes can help significantly with the group by.
What I'm seeing is that it needs to be a sorted index (like BTREE), not a HASH. Perhaps:
CREATE INDEX c1c2 ON test1 (c1, c2) USING BTREE;
might help.
For InnoDB this will work, as a secondary index carries your primary key by default. For MyISAM you have to add `key` yourself as the last column of the index. That gives the optimizer all keys in the same order and it can skip the sort. You cannot do any range queries on the index prefix then, or you are right back in filesort. I'm currently struggling with a similar problem.
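In other words, for the MyISAM case the index might look like this (a sketch following the comment above; the index name is arbitrary):
ALTER TABLE test1 ADD INDEX c1_c2_key (c1, c2, `key`);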

MySQL: Optimizing COUNT(*) and GROUP BY

I have a simple MyISAM table resembling the following (trimmed for readability -- in reality, there are more columns, all of which are constant width and some of which are nullable):
CREATE TABLE IF NOT EXISTS `history` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`time` int(11) NOT NULL,
`event` int(11) NOT NULL,
`source` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `event` (`event`),
KEY `time` (`time`)
);
Presently the table contains only about 6,000,000 rows (of which currently about 160,000 match the query below), but this is expected to increase. Given a particular event ID and grouped by source, I want to know how many events with that ID were logged during a particular interval of time. The answer to the query might be something along the lines of "Today, event X happened 120 times for source A, 105 times for source B, and 900 times for source C."
The query I concocted does perform this task, but it performs monstrously badly, taking well over a minute to execute when the timespan is set to "all time" and in excess of 30 seconds for as little as a week back:
SELECT COUNT(*) AS count FROM history
WHERE event=2000 AND time >= 0 AND time < 1310563644
GROUP BY source
ORDER BY count DESC
This is not for real-time use, so even if the query takes a second or two that would be fine, but several minutes is not. Explaining the query gives the following, which troubles me for obvious reasons:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE history ref event,time event 4 const 160399 Using where; Using temporary; Using filesort
I've experimented with various multi-column indexes (such as (event, time)), but with no improvement. This seems like such a common use case that I can't imagine there not being a reasonable solution, but all my Googling boils down to versions of the query I already have, with no particular suggestions on how to avoid the temporary table (and no explanation of why performance is so abysmal).
Any suggestions?
You say you have tried multi-column indexes. Have you also tried single-column indexes, one per column?
UPDATE: Also, the COUNT(*) operation over a GROUP BY clause is probably a lot faster, if the grouped column also has an index on it... Of course, this depends on the number of NULL values that are actually in that column, which are not indexed.
For event, MySQL can execute a UNIQUE SCAN, which is quite fast, whereas for time, a RANGE SCAN will be applied, which is not so fast... If you separate indexes, I'd expect better performance than with multi-column ones.
Also, maybe you could gain something by partitioning your table by some expected values / value ranges:
http://dev.mysql.com/doc/refman/5.5/en/partitioning-overview.html
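If you go that route, a sketch might look like the following; note that MySQL requires every unique key, including the primary key, to contain the partitioning column, so the primary key would need to be widened first, and the boundary values here are arbitrary unix timestamps:
ALTER TABLE history
DROP PRIMARY KEY,
ADD PRIMARY KEY (id, time);   -- the partitioning column must be part of every unique key
ALTER TABLE history
PARTITION BY RANGE (time) (
PARTITION p2011h1 VALUES LESS THAN (1309478400),   -- 2011-07-01
PARTITION p2011h2 VALUES LESS THAN (1325376000),   -- 2012-01-01
PARTITION pmax    VALUES LESS THAN MAXVALUE
);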
I suggest you try this multi-column index:
ALTER TABLE `history` ADD INDEX `history_index` (`event` ASC, `time` ASC, `source` ASC);
Then, if it doesn't help, try forcing the index in this query:
SELECT COUNT(*) AS count FROM history USE INDEX (history_index)
WHERE event=2000 AND time >= 0 AND time < 1310563644
GROUP BY source
ORDER BY count DESC
If the sources are known, or you want to find the count for specific sources, then you can try something like this:
select count(source= 'A' or NULL) as A,count(source= 'B' or NULL) as B from history;
You can do the ordering in your application code. Also try indexing event and source together.
This should be noticeably faster than the original query.