I ran into an unexpected result on my MySQL server.
The more rows a filter matches, the less time the query takes??
I have one table; here are the total row counts for each filter:
select count(*) from tcr where eid=648;
+----------+
| count(*) |
+----------+
|    11336 |
+----------+
select count(*) from tcr where eid=997;
+----------+
| count(*) |
+----------+
|  1262307 |
+----------+
but the query times are the opposite of what the row counts would suggest:
select * from tcr where eid=648 order by start_time desc limit 0,10;
[data display]
10 rows in set (16.92 sec)
select * from tcr where eid=997 order by start_time desc limit 0,10;
[data display]
10 rows in set (0.21 sec)
"reset query cache" has been execute before every query sql.
The indexes on table tcr are
KEY `cridx_eid` (`eid`),
KEY `cridx_start_time` (`start_time`)
BTW, attaching the EXPLAIN results. This is very strange, but it does match the timings we observed (the row estimate for eid=997 is lower than for eid=648):
explain select * from talk_call_record where eid=648 order by start_time desc limit 0,10;
+----+-------------+------------------+-------+---------------+------------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------------+-------+---------------+------------------+---------+------+------+-------------+
|  1 | SIMPLE      | talk_call_record | index | cridx_eid     | cridx_start_time | 5       | NULL | 3672 | Using where |
+----+-------------+------------------+-------+---------------+------------------+---------+------+------+-------------+
explain select * from talk_call_record where eid=997 order by start_time desc limit 0,10;
+----+-------------+------------------+-------+---------------+------------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------------+-------+---------------+------------------+---------+------+------+-------------+
|  1 | SIMPLE      | talk_call_record | index | cridx_eid     | cridx_start_time | 5       | NULL |   32 | Using where |
+----+-------------+------------------+-------+---------------+------------------+---------+------+------+-------------+
First, you must have a very large table.
MySQL is using the index on start_time for these queries. What is happening is that it "walks" the table in start_time order, one row at a time, checking the WHERE condition. It happens to find 10 rows with eid=997 much more quickly than 10 rows with eid=648, and since it only needs 10 matches, the engine stops as soon as it reaches the 10th one.
What can you do? The optimal index for this query is a composite index on (eid, start_time). It lets MySQL go directly to the rows you want, already ordered by start_time.
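A minimal sketch of that index, using the table and column names from the question (the index name is made up):

-- equality column (eid) first, then start_time, so ORDER BY start_time DESC LIMIT 10
-- can be satisfied by reading just the last 10 index entries for that eid
ALTER TABLE tcr ADD INDEX cridx_eid_start_time (eid, start_time);

-- the original query should then touch only the 10 newest matching rows
SELECT * FROM tcr WHERE eid=648 ORDER BY start_time DESC LIMIT 0,10;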
Related
MariaDB 10 (MyISAM)
The query executes rather slowly; it takes about 90 seconds.
I tried deleting some old rows and then optimizing the table.
SELECT ceil(rate * 8 / 1000000)
FROM db.Octets
WHERE id = 5344
order by datetime DESC
LIMIT 1;
The query takes a really long time to execute.
+------+-------------+----------------+-------+---------------+------------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------------+-------+---------------+------------------+---------+------+------+-------------+
| 1 | SIMPLE | Octets | index | NULL | Octets_1_idx | 8 | NULL | 1 | Using where |
+------+-------------+----------------+-------+---------------+------------------+---------+------+------+-------------+
You could try adding a redundant composite index:
CREATE INDEX idx2 ON Octets (id, datetime, rate);
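To confirm the new index gets picked up, you could re-run EXPLAIN on the original query (a sketch; idx2 is the name suggested above). Because (id, datetime, rate) contains every column the query touches, it acts as a covering index:

EXPLAIN
SELECT ceil(rate * 8 / 1000000)
FROM db.Octets
WHERE id = 5344
ORDER BY datetime DESC
LIMIT 1;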
I can't make sense of the following two queries. The first one only gets the count of the entire result set.
The second one gets the actual data, but limits the result set to 10 rows.
Somehow the first one can't use an index. I have tried USE INDEX (timestamp_index, Fulltext_title, Fulltext_description) to no avail.
The count query does not need an ORDER BY, but I added it just to see whether it would use the index that way.
As far as I can see the WHERE clause is the same, which to my knowledge is the biggest factor in selecting the index.
GET THE COUNT
SELECT count(*) as total FROM table1
WHERE 1=1
AND type in ('category1','category3','category2')
AND (
MATCH(title) AGAINST (' +"apple"' IN BOOLEAN MODE)
OR
MATCH(description) AGAINST (' +"apple"' IN BOOLEAN MODE)
)
ORDER BY timestamp DESC
;
+-------+
| total |
+-------+
| 798 |
+-------+
1 row in set (3.75 sec)
EXPLAIN EXTENDED
+----+-------------+----------+------+---------------+------+---------+------+--------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------+------+---------------+------+---------+------+--------+----------+-------------+
| 1 | SIMPLE | table1 | ALL | NULL | NULL | NULL | NULL | 669689 | 100.00 | Using where |
+----+-------------+----------+------+---------------+------+---------+------+--------+----------+-------------+
Get the Actual Result
SELECT id, title, description, timestamp FROM table1
WHERE 1=1
AND type in ('category1','category3','category2')
AND (
MATCH(title) AGAINST (' +"apple"' IN BOOLEAN MODE)
OR
MATCH(description) AGAINST (' +"apple"' IN BOOLEAN MODE)
)
ORDER BY timestamp DESC
LIMIT 0, 10 ;
10 rows in set (0.06 sec)
EXPLAIN EXTENDED
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+------------+-------------+
| id | select_type | table  | type  | possible_keys | key             | key_len | ref  | rows | filtered   | Extra       |
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+------------+-------------+
|  1 | SIMPLE      | table1 | index | NULL          | timestamp_index | 21      | NULL |   10 | 6696890.00 | Using where |
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+------------+-------------+
On the second query you want the first 10 rows, so the optimizer uses the timestamp index, reads the table in that order, and keeps checking rows until it has found 10 that match your WHERE.
On your first query, the database has to scan the whole table to find every row that matches, so the ORDER BY doesn't help: you want to count the total number of rows matching your WHERE.
It also depends on how your indexes are defined. Do you have one index each for type, title, and description? Do you have a composite index?
Check this one: MySQL index TIPS
I found the answer: I combined the two fulltext indexes into one, so we don't need a full table scan just because we are doing a count(*).
SELECT count(*) as total FROM table1 WHERE 1=1
AND type in ('category1','category2','category3')
AND MATCH(title, description) AGAINST (' +"apple"' IN BOOLEAN MODE)
;
+----+-------------+----------+----------+----------------------+----------------------+---------+------+------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------+----------+----------------------+----------------------+---------+------+------+----------+-------------+
| 1 | SIMPLE | table1 | fulltext | FT_title_description | FT_title_description | 0 | NULL | 1 | 100.00 | Using where |
+----+-------------+----------+----------+----------------------+----------------------+---------+------+------+----------+-------------+
+-------+
| total |
+-------+
| 798 |
+-------+
1 row in set (0.83 sec)
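For reference, the combined index behind FT_title_description would be created with something like this (a sketch; the index name comes from the EXPLAIN output above):

-- one FULLTEXT index spanning both columns, so a single MATCH(title, description)
-- replaces the OR of two separate MATCH() calls and can drive the count
ALTER TABLE table1 ADD FULLTEXT INDEX FT_title_description (title, description);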
I have a query which is running far slower than it should. I have distilled the problem down to a simple select statement (some fields have been renamed for privacy):
SELECT SQL_NO_CACHE SQL_CALC_FOUND_ROWS id, date_started, date_complete, status
FROM table_a
ORDER BY date DESC
LIMIT 0, 100
When SQL_CALC_FOUND_ROWS is used, the query completes in about 0.70 seconds; when SQL_CALC_FOUND_ROWS is removed, the query completes in about 0.0005 seconds (SQL_NO_CACHE is used in both cases).
table_a has an index on the date field.
Apparently SQL_CALC_FOUND_ROWS can prevent an index from being used:
So, obvious conclusion from this simple test is: when we have
appropriate indexes for WHERE/ORDER clause in our query, it is much
faster to use two separate queries instead of one with
SQL_CALC_FOUND_ROWS.
I have confirmed this. No index is used when SQL_CALC_FOUND_ROWS is included:
EXPLAIN SELECT SQL_NO_CACHE SQL_CALC_FOUND_ROWS id, date_started, date_complete, status FROM table_a ORDER BY date DESC limit 0, 100;
+----+-------------+-------------+------+---------------+------+---------+------+--------+----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+------+---------------+------+---------+------+--------+----------------+
| 1 | SIMPLE | table_a | ALL | NULL | NULL | NULL | NULL | 132208 | Using filesort |
+----+-------------+-------------+------+---------------+------+---------+------+--------+----------------+
But when SQL_CALC_FOUND_ROWS is not used then the index on the date field is used:
EXPLAIN SELECT SQL_NO_CACHE id, date_started, date_complete, status FROM table_a ORDER BY date DESC limit 0, 100;
+----+-------------+-------------+-------+---------------+------+---------+------+--------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+-------+---------------+------+---------+------+--------+-------+
| 1 | SIMPLE | table_a | index | NULL | date | 13 | NULL | 132208 | |
+----+-------------+-------------+-------+---------------+------+---------+------+--------+-------+
Is there any way to speed the query up without removing SQL_CALC_FOUND_ROWS from the query?
I'm using MySQL version 5.0.51a-3ubuntu5.1-log.
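For reference, the "two separate queries" alternative mentioned in the quote above would look roughly like this with the columns from the question (a sketch only; it obviously does not apply if SQL_CALC_FOUND_ROWS must stay in the query):

-- page of results, served by the index on date
SELECT SQL_NO_CACHE id, date_started, date_complete, status
FROM table_a
ORDER BY date DESC
LIMIT 0, 100;

-- separate total; no sort needed since there is no WHERE clause
SELECT COUNT(*) FROM table_a;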
I am seeing strange behavior with my MySQL queries below:
SELECT domain_id, domain_name, domain_lastupdate
FROM domains
WHERE domain_id > 300000 LIMIT 2000
takes ~15 seconds...
while
SELECT domain_id, domain_name
FROM domains
WHERE domain_id > 300000 LIMIT 2000
takes ~0.05 seconds...
I've tried different ids with different limits, running one query before the other and then the other way around so as not to get cached results, but I always end up with dramatic time differences.
I have one index on domain_id and one on domain_name, but none with both columns...
I just don't get it...
The domain_lastupdate is a simple Date column.
Here's the EXPLAIN output of both queries:
explain SELECT domain_id, domain_name, domain_lastupdate FROM domains WHERE domain_id > 255000 LIMIT 500;
+----+-------------+---------+-------+---------------+-------------+---------+------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+-------+---------------+-------------+---------+------+----------+-------------+
| 1 | SIMPLE | domains | range | UN_domainid | UN_domainid | 4 | NULL | 12575357 | Using where |
+----+-------------+---------+-------+---------------+-------------+---------+------+----------+-------------+
1 row in set (0.00 sec)
second one:
explain SELECT domain_id, domain_name FROM domains WHERE domain_id > 255000 LIMIT 500;
+----+-------------+---------+-------+---------------+-------------+---------+------+----------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+-------+---------------+-------------+---------+------+----------+--------------------------+
| 1 | SIMPLE | domains | range | UN_domainid | UN_domainid | 4 | NULL | 12575369 | Using where; Using index |
+----+-------------+---------+-------+---------------+-------------+---------+------+----------+--------------------------+
1 row in set (0.01 sec)
Any idea why the first one doesn't use the index?
When you are pulling out only the non-date columns that you have indexed, the server is able to read your data directly from the index and needn't go to the table at all. To get the date it has to hit the table. Add an index that includes the date column.
Also, I suppose you could create a multi-column index. Make sure domain_id is the first column in the index. Creating Indexes
What you want to use is what is called a Covering Index.
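A sketch of such a covering index for this query, using the column names from the question (the index name is made up):

-- domain_id first for the range condition; the other two columns make the index
-- covering, so the query never has to read the table rows themselves
ALTER TABLE domains
  ADD INDEX idx_domainid_name_lastupdate (domain_id, domain_name, domain_lastupdate);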
mysql> EXPLAIN SELECT fldjobitemid, fldstatus, tblbulkreportjobitems.fldparticipantid, CONCAT(fldFirstName, ' ', fldLastName) as full_name FROM tblbulkreportjobitems FORCE INDEX (fldparticipantid) JOIN tblparticipant ON tblparticipant.fldParticipantId = tblbulkreportjobitems.fldparticipantid WHERE fldjobid = 9 ORDER BY fldjobitemid;
+----+-------------+-----------------------+--------+--------------------------+---------+---------+------------------------------------------------------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------------------+--------+--------------------------+---------+---------+------------------------------------------------------+------+-----------------------------+
| 1 | SIMPLE | tblbulkreportjobitems | ALL | fldparticipantid | NULL | NULL | NULL | 869 | Using where; Using filesort |
| 1 | SIMPLE | tblparticipant | eq_ref | PRIMARY,fldParticipantId | PRIMARY | 4 | medicus_devel.tblbulkreportjobitems.fldparticipantid | 1 | Using where |
+----+-------------+-----------------------+--------+--------------------------+---------+---------+------------------------------------------------------+------+-----------------------------+
2 rows in set (0.05 sec)
Why is it using a filesort still?
The MySQL query optimizer will always overrule your choice of index if the keys it would have to read make up more than about 5% of the rows in the table.
Run these queries please:
SELECT COUNT(1) FROM tblbulkreportjobitems;
I guess this should be 869 (from the explain plan)
SELECT COUNT(1) FROM tblbulkreportjobitems WHERE fldjobid = 9;
SELECT COUNT(1),fldjobid FROM tblbulkreportjobitems GROUP BY fldjobid WITH ROLLUP;
You will see which fldjobid values account for more than 5% of the total rows. In those cases, the MySQL query optimizer will choose a full table scan over using a lopsided index.
If you had fldparticipantid in the WHERE clause, you would get a different result.