I have a pretty simple query that gives me headaches because of a very long execution time which I cannot explain. The query:
explain select A.* from NAV_ADRESSEN A left outer join MITGL_KENNZEICHEN K on (K.MNR=A.MNR)
where ((A.MNR='19012546') or (IMPORTID='19012546') or (K.KENNZEICHEN='19012546')) and
(not UNGUELTIG) order by ZUNAME, VORNAME limit 0, 25;
The query (the real one, not the explain) takes about 17 seconds, whether or not any matches are found.
The explain result:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE A index MNR,IMPORTID ZUNAME 164 NULL 25 Using where
1 SIMPLE K ref MNR MNR 23 gsco-test.A.MNR 1 Using where
This looks pretty normal to me. All relevant columns have keys (A.MNR, K.MNR, A.IMPORTID, K.KENNZEICHEN); the tables contain ~600 000 rows (NAV_ADRESSEN) and 180 rows (MITGL_KENNZEICHEN).
What could be the problem?
Edited to add:
The explain looks slightly different when leaving out the limit clause (but the execution time doubles):
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE A ALL MNR,IMPORTID NULL NULL NULL 544587 Using where; Using filesort
1 SIMPLE K ref MNR MNR 23 gsco-test.A.MNR 1 Using where
The table definition:
CREATE TABLE `MITGL_KENNZEICHEN` (
`PK` int(11) unsigned NOT NULL AUTO_INCREMENT,
`MNR` varchar(20) DEFAULT NULL,
`KENNZEICHEN` varchar(80) DEFAULT NULL,
`DESCRIPTION` varchar(80) DEFAULT NULL,
PRIMARY KEY (`PK`),
KEY `MNR` (`MNR`),
KEY `KENNZEICHEN` (`KENNZEICHEN`)
) ENGINE=InnoDB AUTO_INCREMENT=247 DEFAULT CHARSET=latin1;
... and ...
CREATE TABLE `NAV_ADRESSEN` (
`PK` int(11) NOT NULL AUTO_INCREMENT,
`MNR` varchar(20) NOT NULL,
-- ... 50 fields omitted for brevity ...
`UNGUELTIG` tinyint(1) NOT NULL,
`IMPORTID` varchar(20) NOT NULL,
`MATCHCODE` varchar(255) DEFAULT NULL,
PRIMARY KEY (`PK`),
UNIQUE KEY `MNR` (`MNR`),
KEY `ZUNAME` (`ZUNAME`,`VORNAME`),
KEY `IMPORTID` (`IMPORTID`),
KEY `MATCHCODE` (`MATCHCODE`),
KEY `ANGELEGTDAT` (`ANGELEGTDAT`)
) ENGINE=InnoDB AUTO_INCREMENT=1076829 DEFAULT CHARSET=latin1;
Just FYI, that query can (presumably) be rewritten as follows:
SELECT a.*
FROM nav_adressen a
JOIN mitgl_kennzeichen k
ON k.mnr = a.mnr
AND '19012546' IN (a.mnr, importid, k.kennzeichen)
AND NOT ungueltig
ORDER
BY zuname
, vorname
LIMIT 0,25;
So basically I created a table:
CREATE TABLE IF NOT EXISTS `student` (
`id` int(4) unsigned NOT NULL AUTO_INCREMENT,
`campus` enum('CAMPUS1', 'CAMPUS2') NOT NULL,
`fullname` char(32) NOT NULL,
`gender` enum('MALE', 'FEMALE') NOT NULL,
`birthday` char(16) NOT NULL,
`phone` char(32) NOT NULL,
`emergency` char(32) NOT NULL,
`address` char(128) NOT NULL,
PRIMARY KEY (`id`),
KEY `key_student` (`campus`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
I have about 20 rows, only 12 of them in CAMPUS1.
But when I query it: SELECT * FROM student WHERE campus='CAMPUS1'; the EXPLAIN is this:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE student ALL key_student NULL NULL NULL 20 Using where
I am new to this, so how does a KEY really work? I read the documentation but I couldn't understand much of it.
MySQL tries to be smart (with varying success) when deciding which index to use for a query.
There are cases where it is faster to scan the entire table instead of using the index. E.g.: if your table has 500 records for CAMPUS1 and 100 records for CAMPUS2, it is faster to do a full scan (600 records) than to use the index when looking for campus='CAMPUS1'.
When you have only 20 rows you run into the edge cases of the algorithm. Try adding some more rows and see what happens.
Also, this index will have very low cardinality (an even split between only 2 values), so it will probably not be very useful.
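As a rough illustration (the INSERT ... SELECT trick is just a quick way to multiply the test data, not something you'd do in production), you could grow the table and watch the plan change:
INSERT INTO student (campus, fullname, gender, birthday, phone, emergency, address)
SELECT campus, fullname, gender, birthday, phone, emergency, address FROM student;
-- repeat the INSERT ... SELECT a few times to keep doubling the row count
ANALYZE TABLE student;
EXPLAIN SELECT * FROM student WHERE campus='CAMPUS1';
-- once the table is large enough (and the value is selective enough),
-- the key column should start showing key_student instead of NULL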
I have the following MySQL query which takes about 40 seconds on a linux VM:
SELECT
* FROM `clients_event_log`
WHERE
`ex_long` = 1475461 AND
`type` in (2, 1) AND NOT
(
(category=1 AND error=-2147212542) OR
(category=7 AND error=67)
)
ORDER BY `ev_time` DESC LIMIT 100
The table has around 7 million rows, is approx. 800 MB in size, and has indexes on all the fields used in the WHERE and ORDER BY clauses.
Now if I change the query in such a way that the ordering is done in an outer SELECT, everything works much faster (around 100ms):
SELECT res.* FROM
(
SELECT * FROM `clients_event_log`
WHERE
`ex_long` = 1475461 AND
`type` in (2, 1) AND NOT
(
(category=1 AND error=-2147212542) OR
(category=7 AND error=67)
)
) AS res
ORDER BY res.ev_time DESC LIMIT 0, 100
Do you have any idea why the first query takes such a long time? Thank you.
Later Update:
1st Query EXPLAIN:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE clients_event_log index category,ex_long,type,error,categ_error ev_time 4 NULL 5636 Using where
2nd Query EXPLAIN:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY <derived2> system NULL NULL NULL NULL 1
2 DERIVED clients_event_log ref category,ex_long,type,error,categ_error ex_long 5 131264 Using where
Table definition:
CREATE TABLE `clients_event_log` (
`ev_id` int(11) NOT NULL,
`type` int(6) NOT NULL,
`ev_time` int(11) NOT NULL,
`category` smallint(6) NOT NULL,
`error` int(11) NOT NULL,
`ev_text` varchar(1024) DEFAULT NULL,
`userid` varchar(20) DEFAULT NULL,
`ex_long` int(11) DEFAULT NULL,
`client_ex_long` int(11) DEFAULT NULL,
`ex_text` varchar(1024) DEFAULT NULL,
PRIMARY KEY (`ev_id`),
KEY `category` (`category`),
KEY `ex_long` (`ex_long`),
KEY `type` (`type`),
KEY `ev_time` (`ev_time`),
KEY `error` (`error`),
KEY `categ_error` (`category`,`error`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
I ended up using the second query (inner SELECT) because the MySQL optimiser decided to always use the ev_time index even if I tried multiple versions of a composite index containing the columns in the WHERE and ORDER BY clauses.
Using force index (ex_long) also worked.
The MySQL version was 5.5.38
Thank you.
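For reference, the force-index variant mentioned above presumably looks something like this (the same query, just with the hint added):
SELECT *
FROM clients_event_log FORCE INDEX (ex_long)
WHERE ex_long = 1475461
  AND `type` IN (2, 1)
  AND NOT ((category = 1 AND error = -2147212542) OR (category = 7 AND error = 67))
ORDER BY ev_time DESC
LIMIT 100;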
Add these
INDEX(ex_long, ev_time),
INDEX(ex_long, type)
and use the first format of the query and let the optimizer decide which is better based on the statistics.
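Written out as DDL, that suggestion would be something like this (the index names are arbitrary):
ALTER TABLE clients_event_log
  ADD INDEX ex_long_ev_time (ex_long, ev_time),
  ADD INDEX ex_long_type (ex_long, type);
With (ex_long, ev_time) the optimizer can filter on ex_long and read rows already in ev_time order, so it can avoid the filesort and stop early for the LIMIT instead of sorting the whole result.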
I'm struggling to understand whether I've indexed this query properly; it's somewhat slow and I feel it could use optimization. MySQL 5.1.70.
select snaps.id, snaps.userid, snaps.ins_time, usr.gender
from usersnaps as snaps
join user as usr on usr.id = snaps.userid
left join user_convert as conv on snaps.userid = conv.userid
where (conv.level is null or conv.level = 4) and snaps.active = 'N'
and (usr.status = "unfilled" or usr.status = "unapproved") and usr.active = 1
order by snaps.ins_time asc
usersnaps table (irrelevant details removed, size about 250k records):
CREATE TABLE IF NOT EXISTS `usersnaps` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`userid` int(11) unsigned NOT NULL DEFAULT '0',
`picture` varchar(250) NOT NULL,
`active` enum('N','Y') NOT NULL DEFAULT 'N',
`ins_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`,`userid`),
KEY `userid` (`userid`,`active`),
KEY `ins_time` (`ins_time`),
KEY `active` (`active`)
) ENGINE=InnoDB;
user table (irrelevant details removed, size about 300k records):
CREATE TABLE IF NOT EXISTS `user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`active` tinyint(1) NOT NULL DEFAULT '1',
`status` enum('15','active','approval','suspended','unapproved','unfilled','rejected','suspended_auto','incomplete') NOT NULL DEFAULT 'approval',
PRIMARY KEY (`id`),
KEY `status` (`status`,`active`)
) ENGINE=InnoDB;
user_convert table (size about 60k records):
CREATE TABLE IF NOT EXISTS `user_convert` (
`userid` int(10) unsigned NOT NULL,
`level` tinyint(4) NOT NULL,
UNIQUE KEY `userid` (`userid`),
KEY `level` (`level`)
) ENGINE=InnoDB;
Explain extended returns :
id select_type table type possible_keys key key_len ref rows filtered Extra
1 SIMPLE snaps ref userid,default_pic,active active 1 const 65248 100.00 Using where; Using filesort
1 SIMPLE usr eq_ref PRIMARY,active,status PRIMARY 4 snaps.userid 1 100.00 Using where
1 SIMPLE conv eq_ref userid userid 4 snaps.userid 1 100.00 Using where
Using filesort is probably your performance killer.
You need the records from usersnaps where active = 'N' and you need them sorted by ins_time.
ALTER TABLE usersnaps ADD KEY active_ins_time (active,ins_time);
Indexes are stored in sorted order, and read in sorted order... so if the optimizer chooses that index, it will go for the records with active = 'N' and, conveniently, they're already sorted by ins_time because of that index. As it reads the rows referenced by the index, the result set is already in the order you want to ORDER BY, and the optimizer should realize this... no filesort required.
I would recommend changing the userid index (assuming you're not using it right now) to have active first and userid later.
That should make it more useful for this query.
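If you go that route (and assuming nothing else depends on the existing userid index; the new index name is just illustrative), the change could look like this:
ALTER TABLE usersnaps DROP KEY userid, ADD KEY active_userid (active, userid);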
I have a MyISAM table with a primary key spanning 5 columns. I do a SELECT using a WHERE on all five of those columns ANDed together. Using the primary key (multi-column index) it takes 25 s; using a single-column index on one of the columns it takes 1 s. I did some profiling and most of the 25 s is spent in the “Sending data” stage. The primary key has a cardinality of about 7M and the single column about 80. Am I missing something?
CREATE TABLE `mytable` (
`a` int(11) unsigned NOT NULL,
`b` varchar(2) NOT NULL,
`c` int(11) unsigned NOT NULL,
`d` varchar(560) NOT NULL,
`e` varchar(45) NOT NULL,
PRIMARY KEY (`a`,`e`,`d`,`b`,`c`),
KEY `d` (`d`),
KEY `e` (`e`),
KEY `b` (`b`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
EXPLAIN SELECT * FROM mytable USE INDEX (PRIMARY)
WHERE a=12 AND e=1319677200 AND d='69.171.242.53' AND b='*' AND c=0;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE i ref PRIMARY PRIMARY 4 const 5912231 Using where
EXPLAIN SELECT * FROM mytable
WHERE a=12 AND e=1319677200 AND d='69.171.242.53' AND b='*' AND c=0;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE i ref PRIMARY,d,e,b d 562 const 158951 Using where
The problem is caused by casting: try quoting every varchar column (b, d, e). When a varchar column is compared to a numeric literal, MySQL has to cast the stored value in every row before comparing, so it cannot use an index on that column.
SELECT * FROM mytable USE INDEX (PRIMARY)
WHERE a=12 AND e='1319677200' AND d='69.171.242.53' AND b='*' AND c=0;
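If that works, the USE INDEX hint presumably isn't needed any more; with all the varchar comparisons done string-to-string, the optimizer should pick the primary key on its own. Worth checking with:
EXPLAIN SELECT * FROM mytable
WHERE a=12 AND e='1319677200' AND d='69.171.242.53' AND b='*' AND c=0;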
This query:
explain
SELECT `Lineitem`.`id`, `Donation`.`id`, `Donation`.`order_line_id`
FROM `order_line` AS `Lineitem`
LEFT JOIN `donations` AS `Donation`
ON (`Donation`.`order_line_id` = `Lineitem`.`id`)
WHERE `Lineitem`.`session_id` = '1'
correctly uses the Donation.order_line_id and Lineitem.id indexes, shown in this EXPLAIN output:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE Lineitem ref session_id session_id 97 const 1 Using where; Using index
1 SIMPLE Donation ref order_line_id order_line_id 4 Lineitem.id 2 Using index
However, this query, which simply includes another field:
explain
SELECT `Lineitem`.`id`, `Donation`.`id`, `Donation`.`npo_id`,
`Donation`.`order_line_id`
FROM `order_line` AS `Lineitem`
LEFT JOIN `donations` AS `Donation`
ON (`Donation`.`order_line_id` = `Lineitem`.`id`)
WHERE `Lineitem`.`session_id` = '1'
Shows that the Donation table does not use an index:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE Lineitem ref session_id session_id 97 const 1 Using where; Using index
1 SIMPLE Donation ALL order_line_id NULL NULL NULL 3
All of the _id fields in the tables are indexed, but I can't figure out how adding this field into the list of selected fields causes the index to be dropped.
As requested by James C, here are the table definitions:
CREATE TABLE `donations` (
`id` int(10) unsigned NOT NULL auto_increment,
`npo_id` int(10) unsigned NOT NULL,
`order_line_detail_id` int(10) unsigned NOT NULL default '0',
`order_line_id` int(10) unsigned NOT NULL default '0',
`created` datetime default NULL,
`modified` datetime default NULL,
PRIMARY KEY (`id`),
KEY `npo_id` (`npo_id`),
KEY `order_line_id` (`order_line_id`),
KEY `order_line_detail_id` (`order_line_detail_id`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8
CREATE TABLE `order_line` (
`id` bigint(20) unsigned NOT NULL auto_increment,
`order_id` bigint(20) NOT NULL,
`npo_id` bigint(20) NOT NULL default '0',
`session_id` varchar(32) collate utf8_unicode_ci default NULL,
`created` datetime default NULL,
PRIMARY KEY (`id`),
KEY `order_id` (`order_id`),
KEY `npo_id` (`npo_id`),
KEY `session_id` (`session_id`)
) ENGINE=InnoDB AUTO_INCREMENT=23 DEFAULT CHARSET=utf8
I also did some reading about cardinality, and it looks like both the Donations.npo_id and Donations.order_line_id have a cardinality of 2. Hopefully this suggests something useful?
I'm thinking that a USE INDEX might solve the problem, but I'm using an ORM that makes this a bit tricky, and I don't understand why it wouldn't grab the correct index when the JOIN specifically names indexed fields?!?
Thanks for your brainpower!
The first EXPLAIN has "Using index" at the end. This means MySQL was able to find the rows and return the result by looking only at the index, without having to fetch or analyse any row data.
In the second query you add a column (npo_id) that isn't part of the order_line_id index, so MySQL has to look at the table data. I'm not sure why the optimiser chose a full table scan, but I think it's likely that when the table is this small it's easier to just read everything than to pick out individual rows via the index.
edit: I think adding the following indexes will improve things even more and let the whole join use only indexes:
ALTER TABLE order_line ADD INDEX(session_id, id);
ALTER TABLE donations ADD INDEX(order_line_id, npo_id, id);
This will allow order_line to find the rows using session_id and then return id, and also allow donations to join on order_line_id and return the other two columns straight from the index.
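One way to verify: re-run the second EXPLAIN after adding the indexes. If they are picked up, both rows of the plan should show "Using index" in the Extra column:
EXPLAIN
SELECT `Lineitem`.`id`, `Donation`.`id`, `Donation`.`npo_id`,
`Donation`.`order_line_id`
FROM `order_line` AS `Lineitem`
LEFT JOIN `donations` AS `Donation`
ON (`Donation`.`order_line_id` = `Lineitem`.`id`)
WHERE `Lineitem`.`session_id` = '1';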
Looking at the auto_increment values, I assume there's not much data in there. It's worth noting that the amount of data in the tables will affect the query plan, and it's good practice to put some sample data in to test things out. For more detail, have a look at this blog post I wrote some time back: http://webmonkeyuk.wordpress.com/2010/09/27/what-makes-a-good-mysql-index-part-2-cardinality/