mysql log_queries_not_using_indexes works wrong?

I have the following procedure:
CREATE PROCEDURE getProjectTeams(IN p_idProject INTEGER)
BEGIN
SELECT idTeam, name, workersCount, confirmersCount, isConfirm
FROM Teams JOIN team_project USING (idTeam)
WHERE idProject = p_idProject;
END $$
And here are the CREATE TABLE scripts for the tables Teams and team_project:
CREATE TABLE Teams (
idTeam INT PRIMARY KEY auto_increment,
name CHAR(20) NOT NULL UNIQUE,
isConfirm BOOL DEFAULT 0,
workersCount SMALLINT DEFAULT 0,
confirmersCount SMALLINT DEFAULT 0
) engine = innodb DEFAULT CHARACTER SET=utf8 COLLATE=utf8_polish_ci;
CREATE TABLE team_project (
idTeam INT NOT NULL,
idProject INT NOT NULL,
FOREIGN KEY(idTeam) REFERENCES Teams(idTeam)
ON DELETE CASCADE
ON UPDATE CASCADE,
FOREIGN KEY (idProject) REFERENCES Projects(idProject)
ON DELETE CASCADE
ON UPDATE CASCADE,
PRIMARY KEY(idTeam, idProject)
) engine = innodb DEFAULT CHARACTER SET=utf8 COLLATE=utf8_polish_ci;
I have a few databases with an identical schema on the server, but this procedure is logged only when it is called from one of those databases. Calls made from the other databases are not logged. It's not a question of the query being slow or not (it always takes about 0.0001 s); it's about why it is logged as not using indexes. How is that possible?
As Zagor23 suggested, I ran that EXPLAIN and here are the results.
a) in database where procedure is logged:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | team_project | ref | PRIMARY,idProject | idProject | 4 | const | 3 | Using index |
| 1 | SIMPLE | Teams | ALL | PRIMARY | NULL | NULL | NULL | 4 | Using where; Using join buffer |
b) database, where procedure is not logged:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | team_project | ref | PRIMARY,idProject | idProject | 4 | const | 1 | Using index |
| 1 | SIMPLE | Teams | eq_ref | PRIMARY | PRIMARY | 4 | ecovbase.team_project.idTeam | 1 | |
The fact is, the data are a bit different, but not that much. The GoodDB (the one where the procedure is not logged) has 11 rows in Teams and 420 rows in team_project; the BadDB has 4 rows in Teams and about 800 in team_project. That doesn't seem like a big difference. Is there a way to avoid logging that procedure?

Maybe it's not being logged because it uses indexes in those cases.
Try running
EXPLAIN SELECT idTeam, name, workersCount, confirmersCount, isConfirm
FROM Teams JOIN team_project USING (idTeam)
WHERE idProject = p_idProject;
on the database where you feel it shouldn't use an index and see whether it really does.
MySQL will use an index if one is available and suitable for the query, and if the returned result set is up to about 7-8% of the entire table.
You say that information_schema is identical, but if the data isn't, that could be a reason for different behavior.

@Zagor23 explains why this can happen. Your tables are probably much bigger in this database and you don't have the appropriate indices.
My advice would be to add a UNIQUE index on table team_project, on (idProject, idTeam).
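For example (a minimal sketch; the index name ux_project_team is just an illustrative choice):
ALTER TABLE team_project
  ADD UNIQUE INDEX ux_project_team (idProject, idTeam);
With idProject leading, the lookup from a given project to its teams can be resolved from this index alone.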
Given the added EXPLAIN outputs, it seems that in the logged case the MySQL optimizer chooses a plan that doesn't need any index on the Teams table and just scans that whole (4 rows!) table, which is most probably faster since the table has only 4 rows.
Now, the slow log has some default settings which, if I remember correctly, add to the log any query that doesn't use an index, even if the query takes 0.0001 sec to finish.
You can simply ignore this logging, or change the slow-log settings so that queries that don't use an index are not logged. See the MySQL documentation: The Slow Query Log.
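For example, a sketch using the standard server variables (these are global settings and need the appropriate privileges):
SET GLOBAL log_queries_not_using_indexes = OFF;
-- or keep that option on, but skip queries that examine only a few rows:
SET GLOBAL min_examined_row_limit = 100;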

Related

MySQL: why is a query using a VIEW less efficient compared to a query directly using the view's underlying JOIN?

I have three tables, bug, bugrule and bugtrace, for which relationships are:
bug 1--------N bugrule
id = bugid
bugrule 0---------N bugtrace
id = ruleid
Because I'm almost always interested in relations between bug <---> bugtrace I have created an appropriate VIEW which is used as part of several queries. Interestingly, queries using this VIEW have significantly worse performance than equivalent queries using the underlying JOIN explicitly.
VIEW definition:
CREATE VIEW bugtracev AS
SELECT t.*, r.bugid
FROM bugtrace AS t
LEFT JOIN bugrule AS r ON t.ruleid=r.id
WHERE r.version IS NULL
Execution plan for a query using the VIEW (bad performance):
mysql> explain
SELECT c.id,state,
(SELECT COUNT(DISTINCT(t.id)) FROM bugtracev AS t
WHERE t.bugid=c.id)
FROM bug AS c
WHERE c.version IS NULL
AND c.id<10;
+----+--------------------+-------+-------+---------------+--------+---------+-----------------+---------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+-------+---------------+--------+---------+-----------------+---------+-----------------------+
| 1 | PRIMARY | c | range | id_2,id | id_2 | 8 | NULL | 3 | Using index condition |
| 2 | DEPENDENT SUBQUERY | t | index | NULL | ruleid | 9 | NULL | 1426004 | Using index |
| 2 | DEPENDENT SUBQUERY | r | ref | id_2,id | id_2 | 8 | bugapp.t.ruleid | 1 | Using where |
+----+--------------------+-------+-------+---------------+--------+---------+-----------------+---------+-----------------------+
3 rows in set (0.00 sec)
Execution plan for a query using the underlying JOIN directly (good performance):
mysql> explain
SELECT c.id,state,
(SELECT COUNT(DISTINCT(t.id))
FROM bugtrace AS t
LEFT JOIN bugrule AS r ON t.ruleid=r.id
WHERE r.version IS NULL
AND r.bugid=c.id)
FROM bug AS c
WHERE c.version IS NULL
AND c.id<10;
+----+--------------------+-------+-------+---------------+--------+---------+-------------+--------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+-------+---------------+--------+---------+-------------+--------+-----------------------+
| 1 | PRIMARY | c | range | id_2,id | id_2 | 8 | NULL | 3 | Using index condition |
| 2 | DEPENDENT SUBQUERY | r | ref | id_2,id,bugid | bugid | 8 | bugapp.c.id | 1 | Using where |
| 2 | DEPENDENT SUBQUERY | t | ref | ruleid | ruleid | 9 | bugapp.r.id | 713002 | Using index |
+----+--------------------+-------+-------+---------------+--------+---------+-------------+--------+-----------------------+
3 rows in set (0.00 sec)
CREATE TABLE statements (reduced by irrelevant columns) are:
mysql> show create table bug;
CREATE TABLE `bug` (
`id` bigint(20) NOT NULL,
`version` int(11) DEFAULT NULL,
`state` varchar(16) DEFAULT NULL,
UNIQUE KEY `id_2` (`id`,`version`),
KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
mysql> show create table bugrule;
CREATE TABLE `bugrule` (
`id` bigint(20) NOT NULL,
`version` int(11) DEFAULT NULL,
`bugid` bigint(20) NOT NULL,
UNIQUE KEY `id_2` (`id`,`version`),
KEY `id` (`id`),
KEY `bugid` (`bugid`),
CONSTRAINT `bugrule_ibfk_1` FOREIGN KEY (`bugid`) REFERENCES `bug` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
mysql> show create table bugtrace;
CREATE TABLE `bugtrace` (
`id` bigint(20) NOT NULL,
`ruleid` bigint(20) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `ruleid` (`ruleid`),
CONSTRAINT `bugtrace_ibfk_1` FOREIGN KEY (`ruleid`) REFERENCES `bugrule` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
You're asking why the optimizer behaves this way for a couple of complex queries with COUNT(DISTINCT val) and dependent subqueries. It's hard to know for sure.
You will probably fix most of your performance problem by getting rid of your dependent subquery, though. Try something like this:
SELECT c.id,state, cnt.cnt
FROM bug AS c
LEFT JOIN (
SELECT bugid, COUNT(DISTINCT id) cnt
FROM bugtracev
GROUP BY bugid
) cnt ON c.id = cnt.bugid
WHERE c.version IS NULL
AND c.id<10;
Why does this help? To satisfy the query the optimizer can choose to run the GROUP BY subquery just once, rather than many times. And, you can use EXPLAIN on the GROUP BY subquery to understand its performance.
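For example, to see how that derived table behaves on its own, you can EXPLAIN just the GROUP BY subquery:
EXPLAIN
SELECT bugid, COUNT(DISTINCT id) AS cnt
FROM bugtracev
GROUP BY bugid;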
You may also get a performance boost by creating a compound index on bugrule that matches the query in your view. Try this one:
CREATE INDEX bugrule_v ON bugrule (version, id, bugid);
and try switching the last two columns like so:
CREATE INDEX bugrule_v2 ON bugrule (version, bugid, id);
These indexes are called covering indexes because they contain all the columns needed to satisfy your query. version appears first because that helps optimize WHERE version IS NULL in your view definition. That makes it faster.
Pro tip: Avoid using SELECT * in views and queries, especially when you have performance problems. Instead, list the columns you actually need. The * may force the query optimizer to avoid a covering index, even when the index would help.
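For instance, a minimal sketch of the view with explicit columns (assuming the outer queries only need id, ruleid and bugid; the real bugtrace table may have more columns you want to list):
CREATE OR REPLACE VIEW bugtracev AS
SELECT t.id, t.ruleid, r.bugid
FROM bugtrace AS t
LEFT JOIN bugrule AS r ON t.ruleid = r.id
WHERE r.version IS NULL;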
When using MySQL 5.6 (or older), try with at least MySQL 5.7. According to What’s New in MySQL 5.7?:
We have to a large extent unified the handling of derived tables and views. Until now, subqueries in the FROM clause (derived tables) were unconditionally materialized, while views created from the same query expressions were sometimes materialized and sometimes merged into the outer query. This behavior, beside being inconsistent, can lead to a serious performance penalty.

MySQL - Created index isn't showing up as possible key

I have the following table (it has more data columns, removed them because it would be a long post):
CREATE TABLE `members` (
`memberid` int(11) NOT NULL AUTO_INCREMENT,
`firstname` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
`lastname` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`memberid`),
KEY `members_lname_ix` (`lastname`)
) ENGINE=InnoDB AUTO_INCREMENT=1019 DEFAULT CHARSET=utf8
COLLATE=utf8_unicode_ci;
By default, a user only ever accesses 10-20 rows from this table at a time, and it is usually sorted by the lastname column; it's all paginated server side. So I decided to add an index on lastname to help with sorting; however, the index does not seem to be working like I would expect it to. When I run EXPLAIN SELECT * FROM members ORDER BY lastname ASC I get:
id | select_type | table | type | possible_keys | key | key_len | ref | rows | extra
1 | simple | members | ALL | null | null | null | null | 711 | using filesort
I can at least confirm the index exists because if I run SHOW INDEX FROM members I get:
Table | Non_Unique | Key_name | Seq_in_ix | Col_name | Collation | Cardinality | Sub part | Packed | Null | Ix type
members | 0 | PRIMARY | 1 | memberid | A | 711 | null | null | (blank) | BTREE
members | 1 | members_lname_ix | 1 | lastname | A | 711 | null | null | YES | BTREE
If I add USE INDEX (members_lname_ix), both possible_keys and key remain null. However, if I add FORCE INDEX (members_lname_ix), possible_keys remains null and key shows members_lname_ix. This is my first time trying to apply indexing, but to me this doesn't seem very intuitive; it feels like MySQL should know that I created an index for lastname, no? I can't quite figure out what I'm doing wrong here unless I am misunderstanding something. Is the solution here to just keep using FORCE INDEX?
There are two ways to perform that query:
Plan A (as you were expecting):
Scan through the index sequentially, reading the entire (estimated) 711 rows.
Randomly look up each row in the data BTree. This involves reading the entire dataset.
Deliver the data in order.
Plan B (what it does):
Scan through the data, reading all 711 rows.
Sort the data
Deliver the sorted data.
Plan B does not touch the index at all; skipping those random lookups was deemed to be a bigger saving than avoiding the sort.
In a table as tiny as yours, it would be hard to see a difference in speed. (In my test case, it took under 10 milliseconds either way.) In huge tables, the difference could be significant.
For optimal pagination, see http://mysql.rjweb.org/doc.php/pagination
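As an illustration of the trade-off (a sketch, not guaranteed for every version or data distribution): once only a handful of random lookups are needed, the index becomes attractive again, so a paginated query with a small LIMIT will typically use members_lname_ix without any hint:
EXPLAIN SELECT * FROM members
ORDER BY lastname ASC
LIMIT 20;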

mariadb optimisation of primary key not working

If you use COUNT on a non-null column of one table, without any WHERE clause, the optimizer just returns the number of rows in that table.
If you ask for a DISTINCT count on a UNIQUE non-null column, like the PRIMARY KEY, the answer should be the same, but this time MariaDB does the calculation instead.
And if you LEFT JOIN other tables, still with no WHERE clause, the result should still be the number of rows in that table.
Is there a reason for MariaDB not using those optimizations? Is there a case where the DISTINCT count of an unfiltered primary key could give any result other than the number of rows in that table?
case:
CREATE TABLE products (
our_article_id varchar(50) CHARACTER SET utf8 NOT NULL,
...,
PRIMARY KEY(our_article_id)
);
CREATE TABLE product_article_id (
article_id varchar(255) COLLATE utf8_bin NOT NULL,
our_article_id varchar(50) CHARACTER SET utf8 NOT NULL,
...
PRIMARY KEY(article_id),
INDEX(our_article_id)
);
Count queries, 1st, basic count
DESCRIBE SELECT COUNT(our_article_id) FROM products;
+------+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
| 1 | SIMPLE | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away |
+------+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
2nd DISTINCT on primary key
DESCRIBE SELECT COUNT(DISTINCT our_article_id) FROM products;
+------+-------------+----------+-------+---------------+---------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------+-------+---------------+---------+---------+------+--------+-------------+
| 1 | SIMPLE | products | index | NULL | PRIMARY | 152 | NULL | 225089 | Using index |
+------+-------------+----------+-------+---------------+---------+---------+------+--------+-------------+
3rd, DISTINCT on PRIMARY KEY, and a LEFT JOIN without a WHERE clause
DESCRIBE SELECT COUNT(DISTINCT our_article_id) FROM products LEFT JOIN product_article_id USING (our_article_id);
+------+-------------+--------------------+-------+---------------+---------+---------+----------------------------------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+--------------------+-------+---------------+---------+---------+----------------------------------+--------+-------------+
| 1 | SIMPLE | products | index | NULL | PRIMARY | 152 | NULL | 225089 | Using index |
| 1 | SIMPLE | product_article_id | ref | PRIMARY | PRIMARY | 152 | testseek.products.our_article_id | 12579 | Using index |
+------+-------------+--------------------+-------+---------------+---------+---------+----------------------------------+--------+-------------+
"Is there a reason for mariadb not using thous optimizations?" -- There are a zillion missing optimizations in MySQL/MariaDB; that's missing. Let's look at the history.
MySQL started about 2 decades ago as a lean and mean database engine. It focused on features that most people needed, while minimizing the overhead. This meant that a lot of rare optimizations were not in the early releases, and only get added over time if they seem important enough.
Take the PRIMARY KEY, for example. It is defined as UNIQUE. It is BTree organized. And, with InnoDB, it is also defined as Clustered. Other vendors allow various combinations of clustering, non-BTree indexing, etc. MySQL decided that the limitations were "good enough" for "most" people.
Over the years, the 'worst' omissions have been gradually fixed. Transactions are probably the biggest and most important; they arrived in 2001(?), and MyISAM is being removed this year (2016) with the advent of 8.0.
4.1 (2002?) saw subqueries. Before that, creating a tmp table was "good enough". Now (8.0) subqueries are being one-upped by CTEs, which cover a few things that neither tmp tables nor subqueries can do efficiently.
There have been a huge number of optimizations put into MySQL 5.6 and 5.7 and MariaDB 10.x; you probably have not used more than a couple of them. The product is into "diminishing returns". It would damage its "lean and mean" heritage if it slowed down the optimizer to check for the next thousand extremely rare optimizations.
Meanwhile, guys like me spend a lot of time saying "MySQL/MariaDB doesn't have that; here's the workaround". In your case, it's the shorter COUNT(*). Since there is a clean workaround, it may be another decade before your suggestions are implemented. It is OK to file a bug report at bugs.mysql.com or mariadb.com to suggest the optimizations.
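That is, instead of the DISTINCT count you would write the plain count (a sketch of the workaround referred to above; it gives the same number because our_article_id is the primary key, hence unique and non-null):
SELECT COUNT(*) FROM products;
-- and since the LEFT JOIN adds no filtering, it can simply be dropped
-- from the third query as well.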
Another, almost never needed case, is INDEX(a ASC, b DESC) as a way of optimizing ORDER BY a ASC, b DESC. That is coming with 8.0. But I doubt if more than one query in 5,000 really needs it. (I have seen a lot of queries.) I suggest that its rarity is why it took two decades to implement it. The lack of a clean workaround is why it did not take another decade.

Why is COUNT() query from large table much faster than SUM()

I have a data warehouse with the following tables:
main
about 8 million records
CREATE TABLE `main` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`cid` mediumint(8) unsigned DEFAULT NULL, -- This is the customer id
`iid` mediumint(8) unsigned DEFAULT NULL, -- This is the item id
`pid` tinyint(3) unsigned DEFAULT NULL, -- This is the period id
`qty` double DEFAULT NULL,
`sales` double DEFAULT NULL,
`gm` double DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `idx_pci` (`pid`,`cid`,`iid`) USING HASH,
KEY `idx_pic` (`pid`,`iid`,`cid`) USING HASH
) ENGINE=InnoDB AUTO_INCREMENT=7978349 DEFAULT CHARSET=latin1
period
This table has about 50 records and has the following fields
id
month
year
customer
This has about 23,000 records and the following fields
id
number //This field is unique
name //This is simply a description field
The following query runs very fast (less than 1 second) and returns a count of about 2,000:
select count(*)
from mydb.main m
INNER JOIN mydb.period p ON p.id = m.pid
INNER JOIN mydb.customer c ON c.id = m.cid
WHERE p.year = 2013 AND c.number = 'ABC';
But this query is much slower (more than 45 seconds); it is the same as the previous one but sums instead of counting:
select sum(sales)
from mydb.main m
INNER JOIN mydb.period p ON p.id = m.pid
INNER JOIN mydb.customer c ON c.id = m.cid
WHERE p.year = 2013 AND c.number = 'ABC';
When I explain each query, the ONLY difference I see is that on the 'count()'
query the 'Extra' field says 'Using index', while for the 'sum()' query this field is NULL.
Explain count() query
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | c | const | PRIMARY,idx_customer | idx_customer | 11 | const | 1 | Using index |
| 1 | SIMPLE | p | ref | PRIMARY,idx_period | idx_period | 4 | const | 6 | Using index |
| 1 | SIMPLE | m | ref | idx_pci,idx_pic | idx_pci | 6 | mydb.p.id,const | 7 | Using index |
Explain sum() query
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | c | const | PRIMARY,idx_customer | idx_customer | 11 | const | 1 | Using index |
| 1 | SIMPLE | p | ref | PRIMARY,idx_period | idx_period | 4 | const | 6 | Using index |
| 1 | SIMPLE | m | ref | idx_pci,idx_pic | idx_pci | 6 | mydb.p.id,const | 7 | NULL |
Why is the count() so much faster than sum()? Shouldn't it be using the index for both?
What can I do to make the sum() go faster?
Thanks in advance!
EDIT
All the tables show that it is using Engine InnoDB
Also, as a side note, if I just do a 'SELECT *' query, this runs very quickly (less than 2 seconds). I would expect that the 'SUM()' shouldn't take any longer than that since SELECT * has to retrieve the rows anyways...
SOLVED
This is what I've learned:
Since the sales field is not part of the index, it has to retrieve the records from the hard drive (which can be kind of slow).
I'm not too familiar with this, but it looks like I/O performance can be increased by switching to an SSD (solid-state drive). I'll have to research this more.
For now, I think I'm going to create another layer of summary in order to get the performance I'm looking for.
I redefined my index on the main table to be (pid,cid,iid,sales,gm,qty) and now the sum() queries are running VERY fast!
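In SQL terms, that change looks roughly like this (a sketch; the name idx_pci_cover is just illustrative, and you may prefer to keep the original idx_pci as well):
ALTER TABLE main
  DROP INDEX idx_pci,
  ADD INDEX idx_pci_cover (pid, cid, iid, sales, gm, qty);
With the summed columns included, SUM(sales) can be answered from the index alone, with no row lookups.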
Thanks everybody!
The index is a list of the key values, one entry per row.
When you do the count() query the actual data from the database can be ignored and just the index used.
When you do the sum(sales) query, then each row has to be read from disk to get the sales figure, hence much slower.
Additionally, the indexes can be read in bulk and then processed in memory, while the disk access will randomly thrash the drive trying to read rows from across the disk.
Finally, the index itself may have summaries of the counts (to help with plan generation).
Update
You actually have three indexes on your table:
PRIMARY KEY (`id`),
KEY `idx_pci` (`pid`,`cid`,`iid`) USING HASH,
KEY `idx_pic` (`pid`,`iid`,`cid`) USING HASH
So you only have indexes on the columns id, pid, cid, iid. (As an aside, most databases are smart enough to combine indexes, so you could probably optimize your indexes somewhat)
If you added another key like KEY idx_sales (id, sales), that could improve performance; but given the likely distribution of sales values numerically, you would be adding an extra performance cost on updates, which is likely a bad thing.
The simple answer is that count() is only counting rows. This can be satisfied by the index.
The sum() needs to identify each row and then fetch the page in order to get the sales column. This adds a lot of overhead -- about one page load per row.
If you add sales into the index, then it should also go very fast, because it will not have to fetch the original data.

How to make MySQL intersect keys of INTEGER field and FLOAT field?

I have the following MySQL table (table size - around 10K records):
CREATE TABLE `tmp_index_test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`m_id` int(11) DEFAULT NULL,
`r_id` int(11) DEFAULT NULL,
`price` float DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `m_key` (`m_id`),
KEY `r_key` (`r_id`),
KEY `price_key` (`price`)
) ENGINE=InnoDB AUTO_INCREMENT=16390 DEFAULT CHARSET=utf8;
As you can see, I have two INTEGER fields (r_id and m_id) and one FLOAT field (price).
For each of these fields I have an index.
Now, when I run a query with condition on the first integer AND on the second one, everything is fine:
mysql> explain select * from tmp_index_test where m_id=1 and r_id=2;
+----+-------------+----------------+-------------+---------------+-------------+---------+------+------+-------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------------+-------------+---------------+-------------+---------+------+------+-------------------------------------------+
| 1 | SIMPLE | tmp_index_test | index_merge | m_key,r_key | r_key,m_key | 5,5 | NULL | 1 | Using intersect(r_key,m_key); Using where |
+----+-------------+----------------+-------------+---------------+-------------+---------+------+------+-------------------------------------------+
Seems like MySQL performs it very well since there is the Using intersect(r_key,m_key) in the Extra field.
I'm not a MySQL expert, but as I understand it, MySQL first performs the intersection on the indexes, and only then fetches the rows in the intersection from the table itself.
HOWEVER, when I run a very similar query, but put a similar condition on an integer and a float instead of on two integers, MySQL refuses to intersect using the indexes:
mysql> explain select * from tmp_index_test where m_id=3 and price=100;
+----+-------------+----------------+------+-----------------+-----------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------------+------+-----------------+-----------+---------+-------+------+-------------+
| 1 | SIMPLE | tmp_index_test | ref | m_key,price_key | price_key | 5 | const | 1 | Using where |
+----+-------------+----------------+------+-----------------+-----------+---------+-------+------+-------------+
As you can see, MySQL decides to use the index of price only.
My first question is why, and how to fix it?
In addition, I need to run queries with a greater-than sign (>) instead of the equals sign (=) on price. Currently EXPLAIN shows that, for such queries, MySQL uses the integer key only.
mysql> explain select * from tmp_index_test where m_id=3 and price > 100;
+----+-------------+----------------+------+-----------------+-------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------------+------+-----------------+-------+---------+-------+------+-------------+
| 1 | SIMPLE | tmp_index_test | ref | m_key,price_key | m_key | 5 | const | 2 | Using where |
+----+-------------+----------------+------+-----------------+-------+---------+-------+------+-------------+
I somehow need to make MySQL do the intersection on the indexes first. Does anybody have any idea how?
Thanks a lot in advance!
From the MySQL manual:
ref is used if the join uses only a leftmost prefix of the key or if
the key is not a PRIMARY KEY or UNIQUE index (in other words, if the
join cannot select a single row based on the key value). If the key
that is used matches only a few rows, this is a good join type.
price is not unique or primary, so ref is chosen. I don't believe you can force an intersect.
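One commonly used alternative (a sketch, not something the quoted answer suggests): a composite index that covers both columns in a single B-tree avoids the need for an index intersect entirely, and also handles the range condition on price; the index name is illustrative:
ALTER TABLE tmp_index_test ADD INDEX m_price_key (m_id, price);
-- now both of these can be resolved with the one index:
-- SELECT * FROM tmp_index_test WHERE m_id = 3 AND price = 100;
-- SELECT * FROM tmp_index_test WHERE m_id = 3 AND price > 100;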