MySQL query optimization (wrong indexes?): avoiding filesort

I will try to explain quickly.
I have a table called artikli which has about 1M records.
I run a lot of different queries on this table, but one in particular is causing problems (long execution time) when an ORDER BY is present.
This is my table structure:
CREATE TABLE IF NOT EXISTS artikli (
id int(11) NOT NULL,
name varchar(250) NOT NULL,
datum datetime NOT NULL,
kategorije_id int(11) default NULL,
id_valute int(11) default NULL,
podogovoru int(1) default '0',
cijena decimal(10,2) default NULL,
valuta int(1) NOT NULL default '0',
cijena_rezerva decimal(10,0) NOT NULL,
cijena_kupi decimal(10,0) default NULL,
cijena_akcija decimal(10,2) NOT NULL,
period int(3) NOT NULL default '30',
dostupnost enum('svugdje','samobih','samomojgrad','samomojkanton') default 'svugdje',
zemlja varchar(10) NOT NULL,
slike varchar(500) NOT NULL,
od_s varchar(34) default NULL,
od_id int(10) unsigned default NULL,
vrsta int(1) default '0',
trajanje datetime default NULL,
izbrisan int(1) default '0',
zakljucan int(1) default '0',
prijava int(3) default '0',
izdvojen decimal(1,0) NOT NULL default '0',
izdvojen_kad datetime NOT NULL,
izdvojen_datum datetime NOT NULL,
sajt int(1) default '0',
PRIMARY KEY (id),
KEY brend (brend),
KEY kanton (kanton),
KEY datum (datum),
KEY cijena (cijena),
KEY kategorije_id (kategorije_id,podogovoru,sajt,izdvojen,izdvojen_kad,datum)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
And this is the query:
SELECT artikli.datum as brojx,
artikli.izdvojen as i,
artikli.izdvojen_kad as ii,
artikli.cijena as cijena, artikli.name
FROM artikli
WHERE artikli.izbrisan=0 and artikli.prodano!=3
and artikli.zavrseno=0 and artikli.od_id!=0
and (artikli.sajt=0 or (artikli.sajt=1 and artikli.dostupnost='svugdje'))
and kategorije_id IN (18)
ORDER by i DESC, ii DESC, brojx DESC
LIMIT 0,20
What I want to do is avoid the filesort, which is very slow.

It would have been a big help if you'd provided the EXPLAIN plan for the query.
Why do you think it's the filesort that is causing the problem? Looking at the query, you seem to be applying a lot of filtering - which should reduce the output set significantly - but none of it can use the available indexes.
artikli.izbrisan=0 and artikli.prodano!=3
and artikli.zavrseno=0 and artikli.od_id!=0
and (artikli.sajt=0 or (artikli.sajt=1 and artikli.dostupnost='svugdje'))
and kategorije_id IN (18)
Although I don't know what the pattern of your data is, I suspect that you might get a lot more benefit by adding an index on :
kategorije_id,izbrisan,sajt
Are all those other indexes really being used already?
You'd get a LOT more bang for your buck by denormalizing all those booleans (assuming that the table is normalised to start with and there are no hidden functional dependencies in there).
C.

The problem is that you don't have an index on the izdvojen, izdvojen_kad and datum columns that are used by the ORDER BY.
Note that the large index you have starting with kategorije_id can't be used for sorting (although it will help somewhat with the where clause) because the columns you are sorting by are at the end of the index.
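As a sketch only (assuming the WHERE clause could be reduced to equality tests on the leading columns - which the OR on sajt and the != tests currently prevent), an index whose trailing columns match the ORDER BY could let MySQL read the rows pre-sorted:

```sql
-- Hypothetical index: equality-tested columns first, then the ORDER BY
-- columns (izdvojen, izdvojen_kad, datum) in the query's sort order.
ALTER TABLE artikli
  ADD INDEX idx_kat_sort (kategorije_id, izbrisan,
                          izdvojen, izdvojen_kad, datum);
```

Since all three ORDER BY columns are DESC, MySQL can scan such an index backwards and skip the filesort entirely.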

Actually, the ORDER BY is not the basis for the index you want... it's the CRITERIA that you mostly want the index to match. Filter out the smaller set of data first and you'll be working with a smaller slice of the table. I would change the WHERE clause a bit, but you'll know your data best: put your most selective expected condition first and ensure an index is based on it... something like
WHERE
artikli.izbrisan = 0
and artikli.zavrseno = 0
and artikli.kategorije_id IN (18)
and artikli.prodano != 3
and artikli.od_id != 0
and ( artikli.sajt = 0
or ( artikli.sajt = 1
and artikli.dostupnost='svugdje')
)
and having a compound index on (izbrisan, zavrseno, kategorije_id)... I've moved the other != comparisons to the end, as they are not specific key values; instead, they match ALL EXCEPT the value in question.
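In DDL form, that compound index would be something like this (the index name is illustrative, and note that zavrseno appears in the query but not in the posted schema, so the column name is taken on trust):

```sql
ALTER TABLE artikli
  ADD INDEX idx_izbrisan_zavrseno_kat (izbrisan, zavrseno, kategorije_id);
```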

Related

Improve Laravel Eloquent Query

I have this relation in my model...
$this->hasMany('App\Inventory')->where('status',1)
->whereNull('deleted_at')
->where(function($query){
$query
->where('count', '>=', 1)
->orWhere(function($aQuery){
$aQuery
->where('count', '=' , 0)
->whereHas('containers', function($bQuery){
$bQuery->whereIn('status', [0,1]);
});
});
})
->orderBy('updated_at','desc')
->with('address', 'cabin');
And the SQL query generated is:
select
*
from
`inventories`
where
`inventories`.`user_id` = 17
and `inventories`.`user_id` is not null
and `status` = 1
and `deleted_at` is null
and (
`count` >= 1
or (
`count` = 0
and exists (
select
*
from
`containers`
where
`inventories`.`id` = `containers`.`inventory_id`
and `status` in (0, 1)
)
)
)
and `inventories`.`deleted_at` is null
order by
`updated_at` desc
limit
10 offset 0
Unfortunately this takes more than 2 seconds in MySQL.
Is there any way to improve this and reduce the query time?
Each inventory has many containers. When an inventory's count is 0 (0 means out of stock, but sometimes there are disabled containers, meaning the inventory is not actually out of stock yet), the real count depends on the number of containers with status [0,1] (containers have other statuses too).
I have an idea to add a column on inventory that counts the containers with status [0,1], updated by other processes, to improve this query, but that would take too much time and require modifying those other processes.
inventories SHOW CREATE TABLE:
CREATE TABLE `inventories` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`user_id` bigint unsigned NOT NULL,
`cabin_id` bigint unsigned NOT NULL,
`address_id` bigint unsigned NOT NULL,
`count` mediumint NOT NULL,
`status` mediumint NOT NULL,
`name` varchar(191) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`available_at` datetime DEFAULT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
`deleted_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=37837 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
containers SHOW CREATE TABLE:
CREATE TABLE `containers` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(191) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`inventory_id` bigint unsigned NOT NULL,
`order_id` bigint unsigned DEFAULT NULL,
`status` tinyint unsigned NOT NULL DEFAULT '1',
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=64503 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Solution used, per the comments (thanks to ysth, vixducis and Breezer):
Changed the containers engine from MyISAM to InnoDB,
added an INDEX on containers.inventory_id,
and optimized the code as below, limiting the whereHas select query:
$this->hasMany('App\Inventory')->where('status',1)
->whereNull('deleted_at')
->where(function($query){
$query
->where('count', '>=', 1)
->orWhere('count', '=' , 0)
->whereHas('containers', function ($bQuery) {
$bQuery
->select('inventory_id')
->whereIn('status', [0, 1]);
});
})
->orderBy('updated_at','desc')
->with('address', 'cabin');
For whereHas we can use whereIn with a subquery, like below:
->whereIn('id', function ($subQuery) {
$subQuery->select('inventory_id')
->from('containers')
->whereIn('status', [0, 1]);
});
and for limiting the select of doesntHave:
->doesntHave('requests', 'and', function($query){
$query->select('inventory_id');
})
It looks like the containers table is still running on the MyISAM engine. While that engine is not deprecated, the development focus has shifted heavily towards InnoDB, and it should be a lot more performant. Switching to InnoDB is a good first step.
Secondly, I see that there is no index on containers.inventory_id. When experiencing performance issues when relating two tables, it's often a good idea to check whether adding an index on the column that relates the tables improves performance.
These two steps should make your query a lot faster.
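As DDL, those two steps amount to something like (the index name is illustrative):

```sql
-- Move the table to InnoDB, then index the column that relates the tables.
ALTER TABLE containers ENGINE = InnoDB;
ALTER TABLE containers ADD INDEX containers_inventory_id_index (inventory_id);
```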
When your data is big, a whereHas statement sometimes runs slowly because it uses EXISTS syntax. For a more detailed explanation, you can read this post.
To solve this, I suggest using mpyw/eloquent-has-by-non-dependent-subquery, because it uses IN syntax, which improves performance. I have already used this package in my project, with no problems so far.
Change to InnoDB.
inventories needs this composite index: INDEX(user_id, status, deleted_at, updated_at)
containers needs this composite index, not simply (inventory_id), but (inventory_id, status).
Redundant: inventories.user_id is not null because the test for 17 requires NOT NULL.
Redundant: deleted_at is null simply because it is in the query twice.
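In DDL form, the two composite indexes suggested above would be (index names are illustrative):

```sql
ALTER TABLE inventories
  ADD INDEX idx_user_status_del_upd (user_id, status, deleted_at, updated_at);
ALTER TABLE containers
  ADD INDEX idx_inventory_status (inventory_id, status);
```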

Speed Up A Large Insert From Select Query With Multiple Joins

I'm trying to denormalize a few MySQL tables I have into a new table that I can use to speed up some complex queries with lots of business logic. The problem that I'm having is that there are 2.3 million records I need to add to the new table and to do that I need to pull data from several tables and do a few conversions too. Here's my query (with names changed)
INSERT INTO database_name.log_set_logs
(offload_date, vehicle, jurisdiction, baselog_path, path,
baselog_index_guid, new_location, log_set_name, index_guid)
(
select STR_TO_DATE(logset_logs.offload_date, '%Y.%m.%d') as offload_date,
logset_logs.vehicle, jurisdiction, baselog_path, path,
baselog_trees.baselog_index_guid, new_location, logset_logs.log_set_name,
logset_logs.index_guid
from
(
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(path, '/', 7), '/', -1) as offload_date,
SUBSTRING_INDEX(SUBSTRING_INDEX(path, '/', 8), '/', -1) as vehicle,
SUBSTRING_INDEX(path, '/', 9) as baselog_path, index_guid,
path, log_set_name
FROM database_name.baselog_and_amendment_guid_to_path_mappings
) logset_logs
left join database_name.log_trees baselog_trees
ON baselog_trees.original_location = logset_logs.baselog_path
left join database_name.baselog_offload_location location
ON location.baselog_index_guid = baselog_trees.baselog_index_guid);
The query itself works: I was able to run it using a filter on log_set_name. However, that filter's condition only covers less than 1% of the total records, because one value of log_set_name accounts for 2.2 million records, the majority of the table. So there is nothing else I can use to break this query up into smaller chunks, as far as I can see. The problem is that the query takes too long on those 2.2 million records: it times out after a few hours, the transaction is rolled back, and nothing is added to the new table. Only the 0.1 million records could be processed, and that was because I could add a filter saying where log_set_name != 'value with the 2.2 million records'.
Is there a way to make this query more performant? Am I trying to do too many joins at once, and perhaps I should populate the row's columns in their own individual queries? Or is there some way I can page this type of query so that MySQL executes it in batches? I already got rid of all my indexes on the log_set_logs table because I read that those will slow down inserts. I also jacked my RDS instance up to a db.r4.4xlarge write node, and since I am using MySQL Workbench, I increased all of its timeout values to their maximums. All three of these steps helped and were necessary for me to get the 1% of the records into the new table, but it still wasn't enough to get the 2.2 million records in without timing out. I'd appreciate any insights, as I'm not adept at this type of bulk insert-from-select.
CREATE TABLE `log_set_logs` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`purged` tinyint(1) NOT NULL DEFAULT '0',
`baselog_path` text,
`baselog_index_guid` varchar(36) DEFAULT NULL,
`new_location` text,
`offload_date` date NOT NULL,
`jurisdiction` varchar(20) DEFAULT NULL,
`vehicle` varchar(20) DEFAULT NULL,
`index_guid` varchar(36) NOT NULL,
`path` text NOT NULL,
`log_set_name` varchar(60) NOT NULL,
`protected_by_retention_condition_1` tinyint(1) NOT NULL DEFAULT '1',
`protected_by_retention_condition_2` tinyint(1) NOT NULL DEFAULT '1',
`protected_by_retention_condition_3` tinyint(1) NOT NULL DEFAULT '1',
`protected_by_retention_condition_4` tinyint(1) NOT NULL DEFAULT '1',
`general_comments_about_this_log` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1736707 DEFAULT CHARSET=latin1
CREATE TABLE `baselog_and_amendment_guid_to_path_mappings` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`path` text NOT NULL,
`index_guid` varchar(36) NOT NULL,
`log_set_name` varchar(60) NOT NULL,
PRIMARY KEY (`id`),
KEY `log_set_name_index` (`log_set_name`),
KEY `path_index` (`path`(42))
) ENGINE=InnoDB AUTO_INCREMENT=2387821 DEFAULT CHARSET=latin1
...
CREATE TABLE `baselog_offload_location` (
`baselog_index_guid` varchar(36) NOT NULL,
`jurisdiction` varchar(20) NOT NULL,
KEY `baselog_index` (`baselog_index_guid`),
KEY `jurisdiction` (`jurisdiction`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
CREATE TABLE `log_trees` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`baselog_index_guid` varchar(36) DEFAULT NULL,
`original_location` text NOT NULL, -- This is what I have to join everything on, and since it's text I cannot index it; the largest value is above 255 characters, so I cannot change it to a varchar and then index it either.
`new_location` text,
`distcp_returncode` int(11) DEFAULT NULL,
`distcp_job_id` text,
`distcp_stdout` text,
`distcp_stderr` text,
`validation_attempt` int(11) NOT NULL DEFAULT '0',
`validation_result` tinyint(1) NOT NULL DEFAULT '0',
`archived` tinyint(1) NOT NULL DEFAULT '0',
`archived_at` timestamp NULL DEFAULT NULL,
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`dir_exists` tinyint(1) NOT NULL DEFAULT '0',
`random_guid` tinyint(1) NOT NULL DEFAULT '0',
`offload_date` date NOT NULL,
`vehicle` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `baselog_index_guid` (`baselog_index_guid`)
) ENGINE=INNODB AUTO_INCREMENT=1028617 DEFAULT CHARSET=latin1
baselog_offload_location has no PRIMARY KEY; what's up?
GUIDs/UUIDs can be terribly inefficient. A partial solution is to convert them to BINARY(16) to shrink them. More details here: http://mysql.rjweb.org/doc.php/uuid (MySQL 8.0 has similar functions.)
It would probably be more efficient to have a separate (optionally redundant) column for vehicle rather than needing to do
SUBSTRING_INDEX(SUBSTRING_INDEX(path, '/', 8), '/', -1) as vehicle
Why JOIN baselog_offload_location? There seems to be no reference to columns in that table. If there are, be sure to qualify them so we know what is where. Preferably use short aliases.
The lack of an index on baselog_index_guid may be critical to performance.
Please provide EXPLAIN SELECT ... for the SELECT in your INSERT and for the original (slow) query.
SELECT MAX(LENGTH(original_location)) FROM .. -- to see if it really is too big to index. What version of MySQL are you using? The limit increased recently.
For the above item, we can talk about having a 'hash'.
"paging the query". I call it "chunking". See http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks . That talks about deleting, but it can be adapted to INSERT .. SELECT since you want to "chunk" the select. If you go with chunking, Javier's comment becomes moot. Your code would be chunking the selects, hence batching the inserts:
Loop:
INSERT .. SELECT .. -- of up to 1000 rows (see link)
End loop
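Concretely, each pass of the loop could walk the source table's PRIMARY KEY in fixed-size ranges. This is only a sketch: the joins are omitted, the batch size of 10,000 is arbitrary, and the column list is abbreviated.

```sql
-- One iteration: insert rows whose source id falls in the current range,
-- then advance the range and repeat until it passes MAX(id).
INSERT INTO database_name.log_set_logs (offload_date, vehicle, path, log_set_name, index_guid)
SELECT STR_TO_DATE(SUBSTRING_INDEX(SUBSTRING_INDEX(m.path, '/', 7), '/', -1), '%Y.%m.%d'),
       SUBSTRING_INDEX(SUBSTRING_INDEX(m.path, '/', 8), '/', -1),
       m.path, m.log_set_name, m.index_guid
FROM database_name.baselog_and_amendment_guid_to_path_mappings m
WHERE m.id BETWEEN 1 AND 10000;   -- next pass: BETWEEN 10001 AND 20000, ...
```

Each batch is its own transaction, so a failure part-way only rolls back one small chunk instead of hours of work.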

MYSQL - How to add index for group by / order by / sum / with where

I am processing a MySQL table with 40K rows. Current execution time is around 2 seconds with the table indexed. Could someone guide me on how to optimize this query and table better, and how to get rid of "Using where; Using temporary; Using filesort"? Any help is appreciated.
The GROUP BY will be for the following cases...
LS_CHG_DTE_OCR
LS_CHG_DTE_OCR/RES_STATE_HSE
LS_CHG_DTE_OCR/RES_STATE_HSE/RES_CITY_HSE
LS_CHG_DTE_OCR/RES_STATE_HSE/RES_CITY_HSE/POSTAL_CDE_HSE
Thanks in advance
SELECT DATE_FORMAT(`LS_CHG_DTE_OCR`, '%Y-%b') AS fmt_date,
SUM(IF(`TYPE`='Connect',COUNT_SUBS,0)) AS connects,
SUM(IF(`TYPE`='Disconnect',COUNT_SUBS,0)) AS disconnects,
SUM(IF(`TYPE`='Connect',ROUND(REV,2),0)) AS REV,
SUM(IF(`TYPE`='Upgrade',COUNT_SUBS,0)) AS upgrades,
SUM(IF(`TYPE`='Downgrade',COUNT_SUBS,0)) AS downgrades,
SUM(IF(`TYPE`='Upgrade',ROUND(REV,2),0)) AS upgradeRev FROM `hsd`
WHERE LS_CHG_DTE_OCR!='' GROUP BY MONTH(LS_CHG_DTE_OCR) ORDER BY LS_CHG_DTE_OCR ASC
CREATE TABLE `hsd` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`SYS_OCR` varchar(255) DEFAULT NULL,
`PRIN_OCR` varchar(255) DEFAULT NULL,
`SERV_CDE_OHI` varchar(255) DEFAULT NULL,
`DSC_CDE_OHI` varchar(255) DEFAULT NULL,
`LS_CHG_DTE_OCR` datetime DEFAULT NULL,
`SALESREP_OCR` varchar(255) DEFAULT NULL,
`CHANNEL` varchar(255) DEFAULT NULL,
`CUST_TYPE` varchar(255) DEFAULT NULL,
`LINE_BUS` varchar(255) DEFAULT NULL,
`ADDR1_HSE` varchar(255) DEFAULT NULL,
`RES_CITY_HSE` varchar(255) DEFAULT NULL,
`RES_STATE_HSE` varchar(255) DEFAULT NULL,
`POSTAL_CDE_HSE` varchar(255) DEFAULT NULL,
`ZIP` varchar(100) DEFAULT NULL,
`COUNT_SUBS` double DEFAULT NULL,
`REV` double DEFAULT NULL,
`TYPE` varchar(255) DEFAULT NULL,
`lat` varchar(100) DEFAULT NULL,
`long` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `idx` (`LS_CHG_DTE_OCR`,`CHANNEL`,`CUST_TYPE`,`LINE_BUS`,`RES_CITY_HSE`,`RES_STATE_HSE`,`POSTAL_CDE_HSE`,`ZIP`,`COUNT_SUBS`,`TYPE`)
) ENGINE=InnoDB AUTO_INCREMENT=402342 DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC
Using where; Using temporary; Using filesort
The only condition you apply is LS_CHG_DTE_OCR != ''. Other than that, you are doing a full table scan because of the aggregations. Index-wise, you can't do much here.
I ran into the same problem. I had fully optimized my queries (I had joins and more conditions) but the table kept growing and with it query time. Finally I decided to mirror the data to ElasticSearch. In my case it cut down query time to about 1/20th to 1/100th (for different queries).
The only possible index for that SELECT is INDEX(LS_CHG_DTE_OCR). But it is unlikely for it to be used.
Perform the WHERE -- If there are a lot of '' values, then the index may be used for filtering.
GROUP BY MONTH(...) -- You might be folding the same month from multiple years. The Optimizer can't tell, so it will punt on using the index.
ORDER BY LS_CHG_DTE_OCR -- This is done after the GROUP BY; the ORDER BY can't be performed until the data is gathered -- too late for any index. However, if multiple years are folded together, you could get some strange results. Cure it by making the ORDER BY be the same as the GROUP BY. This will also prevent an extra sort that is caused by the GROUP BY and ORDER BY being different.
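For example, grouping and ordering by the same year-month expression would keep different years separate and avoid the extra sort. This is a sketch of the shape, not a drop-in replacement; the remaining SUM(IF(...)) columns are elided:

```sql
SELECT DATE_FORMAT(LS_CHG_DTE_OCR, '%Y-%m') AS fmt_date,
       SUM(IF(`TYPE`='Connect', COUNT_SUBS, 0)) AS connects
       -- ... the other SUM(IF(...)) columns from the original query
FROM hsd
WHERE LS_CHG_DTE_OCR != ''
GROUP BY fmt_date
ORDER BY fmt_date;
```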
Yeah, if that idx you added has all the columns in the SELECT, then it is a "covering index". But it won't help here because of the comments above; "Using index" won't help a lot.
GROUP BY LS_CHG_DTE_OCR/RES_STATE_HSE -- Eh? Divide a DATETIME by a VARCHAR? That sounds like a disaster.
This table will grow even bigger over time, correct? Consider building and maintaining Summary Table(s) with month as part of the PRIMARY KEY.
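Such a summary table might look like this (a sketch only; the name and column list are illustrative, and it would be maintained as new rows arrive):

```sql
CREATE TABLE hsd_monthly_summary (
  yr_month       DATE NOT NULL,                     -- first day of the month
  res_state_hse  VARCHAR(255) NOT NULL DEFAULT '',
  connects       DOUBLE NOT NULL DEFAULT 0,
  disconnects    DOUBLE NOT NULL DEFAULT 0,
  rev            DOUBLE NOT NULL DEFAULT 0,
  PRIMARY KEY (yr_month, res_state_hse)
);
```

The report query then reads a few hundred summary rows instead of scanning the whole fact table.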

Very Slow simple MySql query with index

I have this table:
CREATE TABLE `messenger_contacts` (
`number` varchar(15) NOT NULL,
`has_telegram` tinyint(1) NOT NULL DEFAULT '0',
`geo_state` int(11) NOT NULL DEFAULT '0',
`geo_city` int(11) NOT NULL DEFAULT '0',
`geo_postal` int(11) NOT NULL DEFAULT '0',
`operator` tinyint(1) NOT NULL DEFAULT '0',
`type` tinyint(1) NOT NULL DEFAULT '0'
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `messenger_contacts`
ADD PRIMARY KEY (`number`),
ADD KEY `geo_city` (`geo_city`),
ADD KEY `geo_postal` (`geo_postal`),
ADD KEY `type` (`type`),
ADD KEY `type1` (`operator`),
ADD KEY `has_telegram` (`has_telegram`),
ADD KEY `geo_state` (`geo_state`);
with about 11 million records.
A simple count select on this table takes about 30 to 60 seconds to complete, which seems very high.
select count(number) from messenger_contacts where geo_state=1
I am not a database pro, so besides setting indexes I don't know what else I can do to make the query faster.
UPDATE:
OK, I made some changes to the column types and sizes:
CREATE TABLE IF NOT EXISTS `messenger_contacts` (
`number` bigint(13) unsigned NOT NULL,
`has_telegram` tinyint(1) NOT NULL DEFAULT '0' ,
`geo_state` int(2) NOT NULL DEFAULT '0',
`geo_city` int(4) NOT NULL DEFAULT '0',
`geo_postal` int(10) NOT NULL DEFAULT '0',
`operator` tinyint(1) NOT NULL DEFAULT '0' ,
`type` tinyint(1) NOT NULL DEFAULT '0' ,
PRIMARY KEY (`number`),
KEY `has_telegram` (`has_telegram`,`geo_state`),
KEY `geo_city` (`geo_city`),
KEY `geo_postal` (`geo_postal`),
KEY `type` (`type`),
KEY `type1` (`operator`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Now the query only takes 4 to 5 seconds, with both * and number.
Thanks everyone for your help, even the guy that gave me -1. This will be good enough for now, considering that my server is low-end hardware and I will be caching the select count results.
Maybe
select count(geo_state) from messenger_contacts where geo_state=1
as it will give the same result but will not use the number column from the clustered index?
If this does not help, I would try to change the number column into an INT type, which should reduce the index size, or try to increase the amount of memory MySQL can use for caching indexes.
You did not change the datatypes. INT(11) == INT(2) == INT(100) -- each is a 4-byte signed integer. You probably want 1-byte unsigned TINYINT UNSIGNED or 2-byte SMALLINT UNSIGNED.
It is a waste to index "flags", which I assume type and has_telegram are. The optimizer will never use them, because doing so would be less efficient than simply doing a table scan.
The standard coding pattern is:
select count(*)
from messenger_contacts
where geo_state=1
unless you need to not count NULLs, which is what COUNT(geo_state) implies.
Once you have the index on geo_state (or an index starting with geo_state), the query will scan the index (which is a separate BTree structure) starting with the first occurrence of geo_state=1 until the last, counting as it goes. That is, it will touch 1.1 million index entries. So, a few seconds is to be expected. Counting a 'rare' geo_state will run much faster.
The reason for 30-60 seconds versus 4-5 seconds is very likely to be caching. The former had to read stuff from disk; the latter did not. Run the query twice.
Using the geo_state index will be faster for that query than using the PRIMARY KEY unless there are caching differences.
INDEX(number, geo_state) is virtually useless for any of the SELECTs mentioned -- geo_state should be first. INDEX(geo_state, number), by contrast, is an example of a "covering" index for the select count(number)... case.
More on building indexes.
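In DDL form, a covering index for the count query might be (a sketch; the index name is illustrative):

```sql
-- geo_state first (it is the filter); number second so that
-- COUNT(number) can be answered entirely from the index BTree.
ALTER TABLE messenger_contacts
  ADD INDEX idx_geo_state_number (geo_state, number);
```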

Query runs faster without an index. Why?

I have two tables. One of those tables has this schema:
CREATE TABLE `object_master_70974_` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`id_object` int(10) unsigned NOT NULL DEFAULT '0',
`id_master` int(10) unsigned NOT NULL DEFAULT '0',
`id_slave` int(10) unsigned NOT NULL DEFAULT '0',
`id_field` bigint(20) unsigned NOT NULL DEFAULT '0',
`id_slave_field` bigint(20) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `id_object` (`id_object`,`id_master`,`id_slave`,`id_field`,`id_slave_field`),
KEY `id_object_2` (`id_object`,`id_master`,`id_field`,`id_slave_field`),
KEY `id_object_3` (`id_object`,`id_slave`,`id_field`),
KEY `id_object_4` (`id_object`,`id_slave_field`),
KEY `id_object_5` (`id_object`,`id_master`,`id_slave`,`id_field`),
KEY `id_object_6` (`id_object`,`id_master`,`id_slave`,`id_slave_field`),
KEY `id_master` (`id_master`,`id_slave_field`),
KEY `id_object_7` (`id_object`,`id_field`)
) ENGINE=InnoDB AUTO_INCREMENT=17827 DEFAULT CHARSET=utf8;
As you can see, there is an overlapping index KEY id_object_5 (id_object,id_master,id_slave,id_field) and there is no index that would cover these three fields: id_object, id_master, id_field. However, when I run these two queries:
SELECT f1.id
FROM object_70974_ f1
LEFT JOIN object_master_70974_ mss0 ON mss0.id_object IN (70974,71759)
AND mss0.id_master = 71100 AND mss0.id_slave = 70912 AND mss0.id_field = f1.id
and
SELECT f1.id
FROM object_70974_ f1
LEFT JOIN object_master_70974_ mss0 ON mss0.id_object IN (70974,71759)
AND mss0.id_master = 71100 AND mss0.id_field = f1.id
they both return the same number of rows (since in fact the id_slave field does not really matter) - 3530 - but the first query is slower than the second by one second: 8 and 7 seconds respectively. So, I guess I have to ask two questions: 1) why does the second query run faster, even though it does not use the index, and 2) why does the first query run so slowly, and why does it not use an index (obviously)? In short, what the heck is going on?
EDIT
This is the result of EXPLAIN command (identical for both queries):
"id" "select_type" "table" "type" "possible_keys" "key" "key_len" "ref" "rows" "Extra"
"1" "SIMPLE" "f1" "index" \N "attr_80420_" "5" \N "3340" "Using index"
"1" "SIMPLE" "mss0" "ref" "id_object,id_object_2,id_object_3,id_object_4,id_object_5,id_object_6,id_master,id_object_7" "id_master" "4" "const" "3529" "Using where"
EDIT
It's extremely interesting, because if I DROP id_master index (which for some strange reason is used by both queries), then it starts to use id_object_5 index.
EDIT
And, yes, with id_master index being dropped, both queries start to run super-fast. So, I guess there is some trouble with optimizer.
EDIT
I even have a guess as to what trouble the optimizer faces - it may be incorrectly treating the id_slave_field field name in the key as if it were two fields instead: id_slave and id_field. In that case it becomes reasonable why it first used this key in both queries.
EDIT
Schema of object_70974_
CREATE TABLE `object_70974_` (
`id` BIGINT(20) NOT NULL AUTO_INCREMENT,
`id_inherit` BIGINT(20) NOT NULL DEFAULT '0',
`id_obj` INT(10) UNSIGNED NOT NULL DEFAULT '0',
`if_control` TINYINT(1) NOT NULL DEFAULT '0',
`id_order` BIGINT(20) NOT NULL DEFAULT '0',
`if_archive` TINYINT(1) NOT NULL DEFAULT '0',
`id_group` BIGINT(20) NOT NULL DEFAULT '0',
`if_hist` SMALLINT(6) NOT NULL DEFAULT '0',
`if_garbage` TINYINT(1) NOT NULL DEFAULT '0',
`id_color` CHAR(6) DEFAULT NULL,
`id_text` TINYINT(4) NOT NULL DEFAULT '0',
`if_default` TINYINT(1) NOT NULL DEFAULT '0',
`id_parent` BIGINT(20) NOT NULL DEFAULT '0',
.... a long list of other fields
PRIMARY KEY (`id`),
KEY `id_order` (`id_order`)
) ENGINE=INNODB AUTO_INCREMENT=3636 DEFAULT CHARSET=utf8;
Why does the SELECT mention f1 at all? It is essentially useless. This would give the same answer, possibly except for some edge case:
SELECT mss0.id_field
FROM object_master_70974_ mss0
WHERE mss0.id_object IN (70974, 71759)
AND mss0.id_master = 71100
AND mss0.id_slave = 70912
The optimal index for that is
INDEX(id_master, id_slave, id_object)
where master and slave can be in either order, but id_object is last. Build the 'best' index by starting with any WHERE clauses that test = (constant).
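As DDL (the index name is illustrative):

```sql
ALTER TABLE object_master_70974_
  ADD INDEX idx_master_slave_object (id_master, id_slave, id_object);
```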
Don't use LEFT unless you want to see NULLs for the 'right' table when there is no match. I think this is part of the problem -- the optimizer was forced to start with f1 when it would be a lot better to start with the other table.
8 vs 7 seconds could be caching.
Note in the EXPLAIN that it needs to hit 3K rows in each table.