MySQL query performance degrades with filesort

I have the below table with more than 190M records,
CREATE TABLE notification (
_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
recipient CHAR(11) NOT NULL,
recipient_group CHAR(11),
topic VARCHAR(25) NOT NULL,
identifier VARCHAR(60) NOT NULL,
timestamp TIMESTAMP(3) NOT NULL,
type VARCHAR(255) NOT NULL,
actioned BIT NOT NULL DEFAULT 0,
expiry_timestamp TIMESTAMP DEFAULT NULL,
INDEX recipient_recipient_group_timestamp_id (recipient, recipient_group, timestamp DESC, _id DESC),
INDEX topic_identifier (topic, identifier),
INDEX expiry_timestamp (expiry_timestamp),
UNIQUE recipient_recipient_group_topic_identifier (recipient, recipient_group, topic, identifier)
) CHARACTER SET ascii COLLATE ascii_bin;
Now I want to query all the notifications for a recipient belonging to a group based on the timestamp,
explain
select * from notification
where (recipient = 'recipient' and (recipient_group = 'group' or recipient_group is null)
and (expiry_timestamp > {ts '2018-06-26 08:00:00.0'} or expiry_timestamp is null)
and timestamp > {ts '1970-01-01 00:00:00.0'} and type in ('TYPE'))
order by timestamp desc, _id desc limit 10;
I have noticed that this query performs poorly when there is a large number of notifications for a user, as MySQL ends up using a filesort for the ORDER BY on timestamp and _id.
+----+-------------+--------------+------------+-------------+----------------------------------------------------------------------------------------------------+--------------------------------------------+---------+-------------+------+----------+----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------+------------+-------------+----------------------------------------------------------------------------------------------------+--------------------------------------------+---------+-------------+------+----------+----------------------------------------------------+
| 1 | SIMPLE | notification | NULL | ref_or_null | recipient_recipient_group_topic_identifier,recipient_recipient_group_timestamp_id,expiry_timestamp | recipient_recipient_group_topic_identifier | 23 | const,const | 2 | 5.01 | Using index condition; Using where; Using filesort |
+----+-------------+--------------+------------+-------------+----------------------------------------------------------------------------------------------------+--------------------------------------------+---------+-------------+------+----------+----------------------------------------------------+
Is there a way to improve the query performance, maybe by adding/modifying an index?
Edit:
It seems that MySQL uses the index recipient_recipient_group_timestamp_id if I remove "or recipient_group is null" from the WHERE condition.

In general, you can't optimize an inequality condition with an index and also eliminate the filesort in the same query.
Think of a telephone book. It's sorted by last name, first name, then if there are still ties (people with the same name), it's sorted by the phone number. So if you want this query:
SELECT * FROM PhoneBook WHERE last_name=? AND first_name=?
ORDER BY phone_number;
Then the sorting will be a no-op, because if the first two are tied, the matching rows will naturally be stored in the requested order already. The query can skip the filesort if it simply reads the rows in the index order.
But if you query any type of inequality:
SELECT * FROM PhoneBook WHERE last_name=? AND first_name LIKE 'S%'
ORDER BY phone_number;
This matches multiple first names, so the matching rows are no longer tied on the indexed equality columns, and reading them in index order does not guarantee they are sorted by phone number. The query has to sort the rows it matched.
The same is true of any other type of inequality or range search that can be indexed: !=, IN(), LIKE, >, etc.
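Applied to the question's query, one common workaround (a sketch only, not verified against the 190M-row table, and using plain literals in place of the JDBC {ts ...} escapes) is to split the OR on recipient_group into two UNION ALL branches. Each branch then has only equality conditions ahead of the timestamp column, so it can read recipient_recipient_group_timestamp_id in index order and stop after 10 rows; only the combined 20 candidate rows need sorting at the end:
(SELECT * FROM notification
 WHERE recipient = 'recipient' AND recipient_group = 'group'
   AND (expiry_timestamp > '2018-06-26 08:00:00' OR expiry_timestamp IS NULL)
   AND timestamp > '1970-01-01 00:00:00' AND type IN ('TYPE')
 ORDER BY timestamp DESC, _id DESC LIMIT 10)
UNION ALL
(SELECT * FROM notification
 WHERE recipient = 'recipient' AND recipient_group IS NULL
   AND (expiry_timestamp > '2018-06-26 08:00:00' OR expiry_timestamp IS NULL)
   AND timestamp > '1970-01-01 00:00:00' AND type IN ('TYPE')
 ORDER BY timestamp DESC, _id DESC LIMIT 10)
ORDER BY timestamp DESC, _id DESC LIMIT 10;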

Related

Optimize selecting all rows from a table based on results from the same table?

I'll be the first to admit that I'm not great at SQL (and I probably shouldn't be treating it like a rolling log file), but I was wondering if I could get some pointers for improving some slow queries...
I have a large MySQL table with 2M rows where I do two full-table lookups based on a subset of the most recent data. When I load the page that contains these queries, I often find they take several seconds to complete, although the queries inside them are quite quick.
PMA's (supposedly terrible) advisor pretty much throws the entire kitchen sink at me: temporary tables, too many sorts, joins without indexes (I don't even have any joins?), reading from fixed position, reading next position, temporary tables written to disk... That last one especially makes me wonder if it's a configuration issue, but I've played with all the knobs, and even paid for a managed service, which didn't seem to help.
CREATE TABLE `archive` (
`id` bigint UNSIGNED NOT NULL,
`ip` varchar(15) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
`service` enum('ssh','telnet','ftp','pop3','imap','rdp','vnc','sql','http','smb','smtp','dns','sip','ldap') CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
`hostid` bigint UNSIGNED NOT NULL,
`date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
ALTER TABLE `archive`
ADD PRIMARY KEY (`id`),
ADD KEY `service` (`service`),
ADD KEY `date` (`date`),
ADD KEY `ip` (`ip`),
ADD KEY `date-ip` (`date`,`ip`),
ADD KEY `date-service` (`date`,`service`),
ADD KEY `ip-date` (`ip`,`date`),
ADD KEY `ip-service` (`ip`,`service`),
ADD KEY `service-date` (`service`,`date`),
ADD KEY `service-ip` (`service`,`ip`);
Adding indexes definitely helped (even though they're 4x the size of the actual data), but I'm kind of at a loss as to where I can optimize further. Initially I thought about caching the subquery results in PHP and reusing them for the two main queries, but I don't think I have access to the result once I close the subquery. I looked into doing joins, but they look like they're meant for two or more separate tables; here the subquery is from the same table, so I'm not sure that would even work. The queries are supposed to find the most active IPs/services based on whether I have data from an IP in the past 24 hours...
SELECT service, COUNT(service) AS total FROM `archive`
WHERE ip IN
(SELECT DISTINCT ip FROM `archive` WHERE date > DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 24 HOUR))
GROUP BY service HAVING total > 1
ORDER BY total DESC, service ASC LIMIT 10
+----+--------------+-----------------+------------+-------+----------------------------------------------------------------------------+------------+---------+------------------------+-------+----------+---------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+--------------+-----------------+------------+-------+----------------------------------------------------------------------------+------------+---------+------------------------+-------+----------+---------------------------------+
| 1 | SIMPLE | <subquery2> | NULL | ALL | NULL | NULL | NULL | NULL | NULL | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | archive | NULL | ref | service,ip,date-service,ip-date,ip-service,service-date,service-ip | ip-service | 47 | <subquery2>.ip | 5 | 100.00 | Using index |
| 2 | MATERIALIZED | archive | NULL | range | date,ip,date-ip,date-service,ip-date,ip-service | date-ip | 5 | NULL | 44246 | 100.00 | Using where; Using index |
+----+--------------+-----------------+------------+-------+----------------------------------------------------------------------------+------------+---------+------------------------+-------+----------+---------------------------------+
SELECT ip, COUNT(ip) AS total FROM `archive`
WHERE ip IN
(SELECT DISTINCT ip FROM `archive` WHERE date > DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 24 HOUR))
GROUP BY ip HAVING total > 1
ORDER BY total DESC, INET_ATON(ip) ASC LIMIT 10
+----+--------------+-----------------+------------+-------+---------------------------------------------------------------+---------+---------+------------------------+-------+----------+---------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+--------------+-----------------+------------+-------+---------------------------------------------------------------+---------+---------+------------------------+-------+----------+---------------------------------+
| 1 | SIMPLE | <subquery2> | NULL | ALL | NULL | NULL | NULL | NULL | NULL | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | archive | NULL | ref | ip,date-ip,ip-date,ip-service,service-ip | ip-date | 47 | <subquery2>.ip | 5 | 100.00 | Using index |
| 2 | MATERIALIZED | archive | NULL | range | date,ip,date-ip,date-service,ip-date,ip-service | date-ip | 5 | NULL | 44168 | 100.00 | Using where; Using index |
+----+--------------+-----------------+------------+-------+---------------------------------------------------------------+---------+---------+------------------------+-------+----------+---------------------------------+
common subquery: 0.0351s
whole query 1: 1.4270s
whole query 2: 1.5601s
total page load: 3.050s (7 queries total)
Am I just doomed to terrible performance with this table?
Hopefully there's enough information here to get an idea of what's going on, but if anyone can help I would certainly appreciate it. I don't mind throwing more hardware at the issue, but when an 8c/16t server with 16GB can't handle 150MB of data, I'm not sure what will. Thanks in advance for reading my long-winded question.
You have the right indexes (as well as many others), and your query both meets your spec and runs close to optimally. It's unlikely that you can make this much faster: it needs to look all the way back to the beginning of your table.
If you can change your spec so you only have to look back a limited amount of time, like a year, you'll get a good speedup.
Some possible minor tweaks:
Use the latin1_bin collation for your ip column. It uses single-byte characters and compares them by simple byte value, with no case-folding rules to evaluate. That's plenty for IPv4 dotted-quad addresses (and for IPv6 addresses). You'll shed a bit of overhead in matching and grouping. Or, even better:
If you know you will have nothing but IPv4 addresses, rework your ip column to store their binary representations (that is, the INET_ATON()-generated value of each address). Those fit in the UNSIGNED INT 32-bit integer type, making lookups, grouping, and ordering faster still.
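A sketch of that conversion (ip_num is a hypothetical column name; try it on a copy of the table first):
ALTER TABLE `archive` ADD COLUMN ip_num INT UNSIGNED NULL;
UPDATE `archive` SET ip_num = INET_ATON(ip);  -- yields NULL for anything that isn't a valid IPv4 address
-- after verifying, make ip_num NOT NULL, rebuild the ip-based keys on it,
-- drop the old ip column, and use INET_NTOA(ip_num) for display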
It's also possible to rework the way you gather these data. For example, you could arrange to keep at most one row per ip, service, and day. That reduces the time-series resolution of your data, but it also makes your queries much faster. Define your table something like this:
CREATE TABLE archive2 (
ip VARCHAR(15) COLLATE latin1_bin NOT NULL,
service ENUM ('ssh','telnet','ftp',
'pop3','imap','rdp',
'vnc','sql','http','smb',
'smtp','dns','sip','ldap') COLLATE latin1_bin NOT NULL,
`date` DATE NOT NULL,
`count` INT NOT NULL,
hostid bigint UNSIGNED NOT NULL,
PRIMARY KEY (`date`, ip, service)
) ENGINE=InnoDB;
Then, when you insert a row, use this query:
INSERT INTO archive2 (`date`, ip, service, `count`, hostid)
VALUES (CURDATE(), ?ip, ?service, 1, ?hostid)
ON DUPLICATE KEY UPDATE
`count` = `count` + 1;
This will automatically increment your count column if the row for the ip, service, and date already exists.
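For example (hypothetical values), running the same statement twice leaves a single row with count = 2:
INSERT INTO archive2 (`date`, ip, service, `count`, hostid)
VALUES (CURDATE(), '198.51.100.7', 'ssh', 1, 42)
ON DUPLICATE KEY UPDATE `count` = `count` + 1;
-- the second identical call hits the (`date`, ip, service) primary key
-- and increments `count` instead of inserting a second row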
Then your second query will look like:
SELECT ip, SUM(`count`) AS total
FROM archive2
WHERE ip IN (
SELECT ip FROM archive2
WHERE `date` > CURDATE() - INTERVAL 1 DAY
)
GROUP BY ip
HAVING total > 1
ORDER BY total DESC, INET_ATON(ip) ASC LIMIT 10;
The index of the primary key will satisfy this query.
First query
(I'm not convinced that it can be made much faster.)
(currently)
SELECT service, COUNT(service) AS total
FROM `archive`
WHERE ip IN (
SELECT DISTINCT ip
FROM `archive`
WHERE date > DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 24 HOUR)
)
GROUP BY service
HAVING total > 1
ORDER BY total DESC, service ASC
LIMIT 10
Notes:
COUNT(service) --> COUNT(*)
DISTINCT is not needed in IN (SELECT DISTINCT ...)
IN ( SELECT ... ) is often slow; rewrite using EXISTS ( SELECT 1 ... ) or JOIN (see below)
INDEX(date, IP) -- for subquery
INDEX(service, IP) -- for your outer query
INDEX(IP, service) -- for my outer query
Toss redundant indexes; they can get in the way. (See below)
It will have to gather all the possible results before getting to the ORDER BY and LIMIT. (That is, LIMIT has very little impact on performance for this query.)
CHARACTER SET utf8 COLLATE utf8_unicode_ci is gross overkill for IP addresses; switch to CHARACTER SET ascii COLLATE ascii_bin.
If you are running MySQL 8.0 (or MariaDB 10.2), a WITH to calculate the subquery once, together with a UNION to compute the two outer queries, may provide some extra speed; see the sketch after the JOIN rewrite below.
MariaDB has a "subquery cache" that might have the effect of skipping the second subquery evaluation.
By using DATETIME instead of TIMESTAMP, you will avoid two minor hiccups per year when daylight saving time kicks in/out.
I doubt if hostid needs to be a BIGINT (8-bytes).
To switch to a JOIN, think of fetching the candidate rows first:
SELECT service, COUNT(*) AS total
FROM ( SELECT DISTINCT IP
FROM archive
WHERE `date` > NOW() - INTERVAL 24 HOUR
) AS x
JOIN archive USING(IP)
GROUP BY service
HAVING total > 1
ORDER BY total DESC, service ASC
LIMIT 10
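And a sketch of the WITH idea from the notes above (MySQL 8.0 / MariaDB 10.2+). Shown for the service query; the ip query has the same shape with GROUP BY ip, and the two could be UNIONed under one WITH so the subquery is written (and possibly evaluated) only once:
WITH recent AS (
    SELECT DISTINCT ip
    FROM archive
    WHERE `date` > NOW() - INTERVAL 24 HOUR
)
SELECT a.service, COUNT(*) AS total
FROM recent
JOIN archive AS a USING(ip)
GROUP BY a.service
HAVING total > 1
ORDER BY total DESC, a.service ASC
LIMIT 10;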
For any further discussion of a slow (but working) query, please provide both flavors of EXPLAIN:
EXPLAIN SELECT ...
EXPLAIN FORMAT=JSON SELECT ...
Drop these indexes:
ADD KEY `service` (`service`),
ADD KEY `date` (`date`),
ADD KEY `ip` (`ip`),
Recommend only
ADD PRIMARY KEY (`id`),
-- as discussed:
ADD KEY `date-ip` (`date`,`ip`),
ADD KEY `ip-service` (`ip`,`service`),
ADD KEY `service-ip` (`service`,`ip`),
-- maybe other queries need these:
ADD KEY `date-service` (`date`,`service`),
ADD KEY `ip-date` (`ip`,`date`),
ADD KEY `service-date` (`service`,`date`),
The general rule here is that you don't need INDEX(a) when you also have INDEX(a,b). In particular, they may be preventing the use of better indexes; see the EXPLAINs.
Second query
Rewrite it as:
SELECT ip, COUNT(*) AS total
FROM `archive`
WHERE date > DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 24 HOUR)
GROUP BY ip
HAVING total > 1
ORDER BY total DESC, INET_ATON(ip) ASC
LIMIT 10
It will use only INDEX(date, ip).

Query becomes slow with GROUP BY

I have spent 4 hours googling, trying all sorts of indexes, SQLyog, reading, searching, etc. When I add the GROUP BY, the query goes from 0.002 seconds to 0.093 seconds. Is this normal and acceptable? Or can I alter the indexes and/or the query?
Table (DESCRIBE output: Field, Type, Null, Key, Default, Extra):
uniqueid int(11) NO PRI NULL auto_increment
ip varchar(64) YES NULL
lang varchar(16) YES MUL NULL
timestamp int(11) YES MUL NULL
correct decimal(12,2) YES NULL
user varchar(32) YES NULL
timestart int(11) YES NULL
timeend int(11) YES NULL
speaker varchar(64) YES NULL
postedAnswer int(32) YES NULL
correctAnswerINT int(32) YES NULL
Query:
SELECT
SQL_NO_CACHE
user,
lang,
COUNT(*) AS total,
SUM(correct) AS correct,
ROUND(SUM(correct) / COUNT(*) * 100) AS score,
TIMESTAMP
FROM
maths_score
WHERE TIMESTAMP > 1
AND lang = 'es'
GROUP BY USER
ORDER BY (
(SUM(correct) / COUNT(*) * 100) + SUM(correct)
) DESC
LIMIT 500
explain extended:
id select_type table type possible_keys key key_len ref rows filtered Extra
------ ----------- ----------- ------ ------------------------- -------------- ------- ------ ------ -------- ---------------------------------------------------------------------
1 SIMPLE maths_score ref scoretable,fulltablething fulltablething 51 const 10631 100.00 Using index condition; Using where; Using temporary; Using filesort
Current indexes (I have tried many)
Keyname Type Unique Packed Column Cardinality Collation Null Comment
uniqueid BTREE Yes No uniqueid 21262 A No
scoretable BTREE No No timestamp 21262 A Yes
lang 21262 A Yes
fulltablething BTREE No No lang 56 A Yes
timestamp 21262 A Yes
user 21262 A Yes
Please use SHOW CREATE TABLE; it is more descriptive than DESCRIBE.
Do you have INDEX(lang, TIMESTAMP)? It is likely to help both versions of the query.
Without the GROUP BY, you get one row, correct? With the GROUP BY, you get many rows, correct? Guess what, it takes more time to deliver more rows.
In addition, the GROUP BY probably involves an extra sort. The ORDER BY involves a sort, but in one case there is only 1 row to sort, hence faster. If there are a million USERs, then the ORDER BY will need to sort a million rows, only to deliver 500.
Please provide EXPLAIN SELECT ... for each case -- you will see some of what I am saying.
So you ran the query without GROUP BY and got one result row in 0.002 secs. Then you added GROUP BY (and, with it, the ORDER BY) and ended up with multiple result rows in 0.093 secs.
To produce this result, the DBMS must somehow order your records by user, or create buckets per user, so as to get the record count, sum, etc. per user. That of course takes much more time than running through the table once, counting records and summing a value unconditionally. Finally, the DBMS must sort these results again. I am not surprised this runs much longer.
The most appropriate index for this query should be:
create index idx on maths_score (lang, timestamp, user, correct);
This is a covering index, starting with the columns in WHERE, continuing with the column in GROUP BY and ending with all other columns used in the query.

Index Columns and Order

If I have a select statement like the statement below, what order and what columns should be included in an index?
SELECT MIN(BenchmarkID),
MIN(BenchmarkDateTime),
Currency1,
Currency2,
BenchmarkType
FROM Benchmark
INNER JOIN MyCurrencyPairs ON Currency1 = Pair1
AND Currency2 = Pair2
WHERE BenchmarkDateTime > IN_BeginningTime
GROUP BY Currency1, Currency2, BenchmarkType;
Items to note:
The Benchmark table will have billions of rows
The MyCurrencyPairs table is a local table that will have less than 10 records
IN_BeginningTime is a input parameter
Columns Currency1 and Currency2 are VARCHARs
Columns BenchmarkID and BenchmarkType are INTs
Column BenchmarkDateTime is a datetime (hopefully that was obvious)
I've created an index with Currency1, Currency2, BenchmarkType, BenchmarkDateTime, and BenchmarkID, but I wasn't getting the speed I wanted. Could I create a better index?
Edit #1: Someone requested the explain results below. Let me know if anything else is needed
Edit #2: Someone requested the DDL (I'm assuming this is the create statement) for the two tables:
(this benchmark table exists in the database)
CREATE TABLE `benchmark` (
`SequenceNumber` INT(11) NOT NULL,
`BenchmarkType` TINYINT(3) UNSIGNED NOT NULL,
`BenchmarkDateTime` DATETIME NOT NULL,
`Identifier` CHAR(6) NOT NULL,
`Currency1` CHAR(3) NULL DEFAULT NULL,
`Currency2` CHAR(3) NULL DEFAULT NULL,
`AvgBMBid` DECIMAL(18,9) NOT NULL,
`AvgBMOffer` DECIMAL(18,9) NOT NULL,
`AvgBMMid` DECIMAL(18,9) NOT NULL,
`MedianBMBid` DECIMAL(18,9) NOT NULL,
`MedianBMOffer` DECIMAL(18,9) NOT NULL,
`OpenBMBid` DECIMAL(18,9) NOT NULL,
`ClosingBMBid` DECIMAL(18,9) NOT NULL,
`ClosingBMOffer` DECIMAL(18,9) NOT NULL,
`ClosingBMMid` DECIMAL(18,9) NOT NULL,
`LowBMBid` DECIMAL(18,9) NOT NULL,
`HighBMOffer` DECIMAL(18,9) NOT NULL,
`BMRange` DECIMAL(18,9) NOT NULL,
`BenchmarkId` INT(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`BenchmarkId`),
INDEX `NextBenchmarkIndex01` (`Currency1`, `Currency2`, `BenchmarkType`),
INDEX `NextBenchmarkIndex02` (`BenchmarkDateTime`, `Currency1`, `Currency2`, `BenchmarkType`, `BenchmarkId`),
INDEX `BenchmarkOptimization` (`BenchmarkType`, `BenchmarkDateTime`, `Currency1`, `Currency2`)
)
(I'm creating the MyCurrencyPairs table in my routine)
CREATE TEMPORARY TABLE MyCurrencyPairs
(
Pair1 VARCHAR(50),
Pair2 VARCHAR(50)
) ENGINE=memory;
CREATE INDEX IDX_MyCurrencyPairs ON MyCurrencyPairs (Pair1, Pair2);
BenchmarkDateTime should be the first column in your index.
The rule is, if you use only a part of a composite index, the used part should be the leading part.
Second, the GROUP BY should match an index.
Performance would also be better if you could somehow make your query use "=" instead of ">", which is a range check.
The main problem is that MySQL can't directly use the index to handle the aggregation. This is due to the join with MyCurrencyPairs and the fact that you're asking for MIN(BenchmarkId) while also having the range condition on BenchmarkDateTime. These two need to be eliminated to get a better execution plan.
Let's have a look at the required indexes and the resulting query first:
ALTER TABLE benchmark
ADD KEY `IDX1` (
`Currency1`,
`Currency2`,
`BenchmarkType`,
`BenchmarkDateTime`
),
ADD KEY `IDX2` (
`Currency1`,
`Currency2`,
`BenchmarkType`,
`BenchmarkId`,
`BenchmarkDateTime`
);
SELECT
(
SELECT
BenchmarkId
FROM
benchmark FORCE KEY (IDX2)
WHERE
Currency1 = ob.Currency1 AND
Currency2 = ob.Currency2 AND
BenchmarkType = ob.BenchmarkType
AND BenchmarkDateTime > IN_BeginningTime
ORDER BY
Currency1, Currency2, BenchmarkType, BenchmarkId
LIMIT 1
) AS BenchmarkId,
ob.*
FROM
(
SELECT
MIN(BenchmarkDateTime),
Currency1,
Currency2,
BenchmarkType
FROM
benchmark
WHERE
BenchmarkDateTime > IN_BeginningTime
GROUP BY
Currency1, Currency2, BenchmarkType
) AS ob
INNER JOIN
MyCurrencyPairs ON Currency1 = Pair1 AND Currency2 = Pair2;
The first change is that the GROUP BY part happens in its own subquery. This means that it generates all combinations of Currency1, Currency2, BenchmarkType, even those that don't appear in MyCurrencyPairs, but unless there are lots of combinations, the fact that MySQL can now use an index to perform the operation should make this faster. This subquery uses IDX1 without requiring a temporary table or a filesort.
The second change is the isolation of the MIN(BenchmarkId) part into its own subquery. The sorting in that subquery can be handled using IDX2, so no sorting is required here either. The FORCE KEY (IDX2) hint, and the fact that even the "fixed-value" columns Currency1, Currency2 and BenchmarkType appear in the ORDER BY, are required to make the MySQL optimizer do the right thing. Again, this is a trade-off: if the final result set is large, the subqueries might turn out to be a loss, but I presume that there aren't that many rows.
Explaining that query gives the following query plan (uninteresting columns dropped for readability):
+----+--------------------+-----------------+-------+---------+------+---------------------------------------+
| id | select_type | table | type | key_len | rows | Extra |
+----+--------------------+-----------------+-------+---------+------+---------------------------------------+
| 1 | PRIMARY | <derived3> | ALL | NULL | 1809 | |
| 1 | PRIMARY | MyCurrencyPairs | ref | 106 | 2 | Using where |
| 3 | DERIVED | benchmark | range | 17 | 1225 | Using where; Using index for group-by |
| 2 | DEPENDENT SUBQUERY | benchmark | ref | 9 | 520 | Using where; Using index |
+----+--------------------+-----------------+-------+---------+------+---------------------------------------+
We see that all the interesting parts are properly covered by indexes, and we require neither temporary tables nor filesorts.
Timings on my test data show this version to be about 20 times as fast (1.07s vs. 0.05s), but I have only about 1.2 million rows in my benchmark table and the data distribution is likely way off, so YMMV.

Optimizing Datetime fields where indexes aren't being used as expected

I have a large, fast-growing log table in an application running with MySQL 5.0.77. I'm trying to find the best way to optimize queries that count instances within the last X days according to message type:
CREATE TABLE `counters` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`kind` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`created_at` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `index_counters_on_kind` (`kind`),
KEY `index_counters_on_created_at` (`created_at`)
) ENGINE=InnoDB AUTO_INCREMENT=302 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
For this test set, there are 668521 rows in the table. The query I'm trying to optimize is:
SELECT kind, COUNT(id) FROM counters WHERE created_at >= ? GROUP BY kind;
Right now, that query takes between 3-5 seconds, and is being estimated as follows:
+----+-------------+----------+-------+----------------------------------+------------------------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+-------+----------------------------------+------------------------+---------+------+---------+-------------+
| 1 | SIMPLE | counters | index | index_counters_on_created_at_idx | index_counters_on_kind | 258 | NULL | 1185531 | Using where |
+----+-------------+----------+-------+----------------------------------+------------------------+---------+------+---------+-------------+
1 row in set (0.00 sec)
With the created_at index removed, it looks like this:
+----+-------------+----------+-------+---------------+------------------------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+-------+---------------+------------------------+---------+------+---------+-------------+
| 1 | SIMPLE | counters | index | NULL | index_counters_on_kind | 258 | NULL | 1185531 | Using where |
+----+-------------+----------+-------+---------------+------------------------+---------+------+---------+-------------+
1 row in set (0.00 sec)
(Yes, for some reason the row estimate is larger than the number of rows in the table.)
So, apparently, there's no point to that index.
Is there really no better way to do this? I tried the column as a timestamp, and it just ended up slower.
Edit: I discovered that changing the query to use an interval instead of a specific date ends up using the index, cutting down the row estimate to about 20% of the query above:
SELECT kind, COUNT(id) FROM counters WHERE created_at >=
(NOW() - INTERVAL 7 DAY) GROUP BY kind;
I'm not entirely sure why that happens, but I'm fairly confident that if I understood it then the problem in general would make a lot more sense.
Why not use a concatenated index?
CREATE INDEX idx_counters_created_kind ON counters(created_at, kind);
It should result in an index-only scan ("Using index" in the Extra column). COUNT(id) is safe here because id is NOT NULL, so it counts the same rows as COUNT(*), and as the InnoDB primary key it is carried in every secondary index anyway.
References:
Concatenated index vs. merging multiple indexes
Index-Only Scan
After reading the latest edit on the question, the problem seems to be that the parameter used in the WHERE clause was being interpreted by MySQL as a string rather than as a datetime value. That would explain why the index_counters_on_created_at index was not selected by the optimizer; instead there would be a scan converting the created_at values to their string representation before comparing. I think this can be prevented by an explicit cast to datetime in the WHERE clause:
where `created_at` >= convert({specific_date}, datetime)
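For example, with a hypothetical literal in place of {specific_date}:
SELECT kind, COUNT(id)
FROM counters
WHERE created_at >= CONVERT('2011-02-01 00:00:00', DATETIME)
GROUP BY kind;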
My original comments still apply for the optimization part.
The real performance killer here is the kind column, because when doing the GROUP BY the database engine first needs to determine all the distinct values in the kind column, which results in a table or index scan. That's why the estimated row count is bigger than the total number of rows in the table: in one pass it determines the distinct values in the kind column, and in a second pass it determines which rows meet the created_at >= ? condition.
To make matters worse, the kind column is a varchar(255), which is too big to be efficient; add to that the utf8 character set and utf8_unicode_ci collation, which increase the complexity of the comparisons needed to determine the unique values in that column.
This will perform a lot better if you change the type of the kind column to int, because integer comparisons are more efficient and simpler than Unicode character comparisons. It would also help to have a catalog table for the kinds of messages, in which you store the kind_id and description. Then do the grouping on a join of the kind catalog table and a subquery of the log table that first filters by date:
select k.kind_id, count(*)
from
kind_catalog k
inner join (
select kind_id
from counters
where created_at >= ?
) c on k.kind_id = c.kind_id
group by k.kind_id
This will first filter the counters table by created_at >= ?, benefiting from the index on that column. Then it joins the result to the kind_catalog table, and if the SQL optimizer is good, it will scan the smaller kind_catalog table for the grouping, instead of the counters table.
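A sketch of that catalog table and the column change (table and index names here are illustrative):
CREATE TABLE kind_catalog (
  kind_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  description VARCHAR(255) NOT NULL
);
ALTER TABLE counters
  ADD COLUMN kind_id INT,
  ADD KEY index_counters_on_created_at_kind_id (created_at, kind_id);
-- backfill kind_id from the old kind strings, then the join query above applies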

Why an index is not used for GROUP BY and/or JOIN when a key exists on the column

I have this query:
SELECT p.prodno AS id,
proddesc AS label
FROM product p
JOIN sales s
ON s.custno = 00800
AND s.deptno = 0
AND s.prodno = p.prodno
GROUP BY p.prodno
ORDER BY p.prodno ASC
Explain returns this:
+---+-----------+------+--------+------------------------------------------------------------------------------+------------+------+----------------------------------------+------+---------+------------------------------------+
| 1 | 'SIMPLE' | 'p' | 'ALL' | 'PRIMARY' | '' | '' | '' | 481 | 100.00 | 'Using temporary; Using filesort' |
| 1 | 'SIMPLE' | 's' | 'ref' | 'PRIMARY,in_sales_custnodeptnoprodno,in_sales_deptnocustno,in_sales_custno' | 'PRIMARY' | '6' | 'const,const,bkp_teststats2.p.PRODNO' | 93 | 100.00 | 'Using index' |
+---+-----------+------+--------+------------------------------------------------------------------------------+------------+------+----------------------------------------+------+---------+------------------------------------+
As you can see, no index is used in the first row for PRODNO, even though the table schema has an index on it.
CREATE TABLE IF NOT EXISTS `product` (
`PRODNO` decimal(4,0) unsigned zerofill NOT NULL DEFAULT '0000',
`PRODDESC` char(21) NOT NULL DEFAULT '',
`UPCCODE12` decimal(12,0) unsigned zerofill NOT NULL DEFAULT '000000000000',
PRIMARY KEY (`PRODNO`)
)
And sales has these keys:
PRIMARY KEY (`CUSTNO`,`DEPTNO`,`PRODNO`,`ARDATE8N`),
KEY `in_sales_custnodeptnoprodno` (`CUSTNO`,`DEPTNO`,`PRODNO`),
KEY `in_sales_deptnocustno` (`DEPTNO`,`CUSTNO`),
KEY `in_sales_custno` (`CUSTNO`),
I would like to get rid of Using temporary; Using filesort, because the above query takes 14 seconds on a 50GB table.
Update:
Problem: I want to get a unique list of products that have sales data for a given custno and deptno.
The database has to check every row in the product table to satisfy your query. If it used the index, it would have to go back to the main table for every row to pick up proddesc. Going back to the main table (a "bookmark lookup") is quite expensive. So the query optimizer chooses to scan and sort the main table, which seems like a good choice to me.
If you omit proddesc from the result, the query only requires prodno. In that case the query optimizer will probably use the index.
You could also expand the indexing on product from (prodno) alone to (prodno, proddesc). Expanded this way, the index can satisfy the query without table lookups.
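A sketch of that expanded (covering) index, with an illustrative name:
CREATE INDEX in_product_prodno_proddesc ON product (PRODNO, PRODDESC);
-- PRODNO stays the leading column, so GROUP BY / ORDER BY p.prodno can read the
-- index in order, and PRODDESC comes along without a bookmark lookup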