Count with WHERE is faster than without? - mysql

I have a table with around 60,000 rows. I have these two queries that are drastically different in speed. Can you explain why?
SELECT COUNT(id) FROM table;
300ms - 58936 rows
Explain:
id select_type table partitions type possible_keys key key_length ref rows filtered Extra
1 SIMPLE table NULL index NULL table_id_index 8 NULL 29325 100.00 Using index
SELECT COUNT(id) FROM table WHERE dummy = 1;
50ms - 58936 rows
Explain:
id select_type table partitions type possible_keys key key_length ref rows filtered Extra
1 SIMPLE table NULL index NULL dummy_index 5 const 14662 100.00 Using index

It depends.
COUNT(id) may be slower than COUNT(*). The former checks id for being NOT NULL; the latter simply counts the rows. (If id is the PRIMARY KEY, then this is unlikely to make any measurable difference.)
The Optimizer may decide to scan the entire table rather than use an index.
The Optimizer may pick an irrelevant index if not forced to by the WHERE clause. In your example, any index with id can be used for the first, and any index with both dummy and id for the second.
If you run the same query twice, it may run much faster the second time due to caching. This can happen even for a 'similar' query. I suspect this is the "answer". I often see a speedup of 10x if the first run was from disk and the second found everything needed in cache (the buffer_pool).
To get more insight, do EXPLAIN SELECT ...
The optimal index for your second query is INDEX(dummy, id).
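A sketch of both suggestions (the index name dummy_id_index is mine; `table` stands in for the real table name, as in the question):

-- Covering index for the filtered count, per the note above:
ALTER TABLE `table` ADD INDEX dummy_id_index (dummy, id);

-- COUNT(*) simply counts rows, skipping the NOT NULL check on id:
SELECT COUNT(*) FROM `table` WHERE dummy = 1;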

Related

SQL query with subquery takes longer than both queries separately

Problem
I have two queries where one needs the result of the other one. My first guess was to use an independent subquery:
SELECT P2.*
FROM ExampleTable P2
WHERE P2.delivery_start >= (
    SELECT MIN(P1.delivery_start)
    FROM ExampleTable P1
    WHERE 1641288602 < P1.delivery_end
);
The entire query takes 5-6 seconds, which is way too long for my application. Running these queries one after another takes only around 800ms for both:
SELECT MIN(P1.delivery_start)
FROM ExampleTable P1
WHERE 1641288602 < P1.delivery_end;
SELECT P2.*
FROM ExampleTable P2
WHERE P2.delivery_start >= 1641286800;
I am using MariaDB 10.2 and have indices on both delivery_start and delivery_end.
What I have tried
I have used a CTE instead of a subquery, which resulted in the same performance. Using a variable with SET yields similar results to running both queries separately, so that's what I will use for the time being (sketched below).
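A sketch of that SET workaround (the variable name @min_start is mine):

SET @min_start = (
    SELECT MIN(P1.delivery_start)
    FROM ExampleTable P1
    WHERE 1641288602 < P1.delivery_end
);

SELECT P2.*
FROM ExampleTable P2
WHERE P2.delivery_start >= @min_start;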
I ran EXPLAIN on all 3 Queries:
1. Query with subquery
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY P2 ALL delivery_start NULL NULL NULL 6388282 Using where
2 SUBQUERY P1 range delivery_end delivery_end 4 NULL 36378 Using index condition
2. Separate Queries
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE P1 range delivery_end delivery_end 4 NULL 36432 Using index condition

id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE P2 range delivery_start delivery_start 4 NULL 35944 Using index condition
Question
I think the issue is shown in the first EXPLAIN table, as it has type ALL, which means the database performs a full table scan. My question is simply: why? Is the optimizer not able to figure out that the subquery produces a number, with which we would only need a range-type query? And why does it not use any index?
The problem is described in the MariaDB docs:
In all remaining cases when NULL cannot be substituted with FALSE, it is not possible to use index lookups. This is not a limitation in the server, but a consequence of the NULL semantics in the ANSI SQL standard.
There is a full examination here:
https://mariadb.com/kb/en/non-semi-join-subquery-optimizations/
The result of your subquery can potentially be NULL if no rows are found. Hence, MariaDB cannot use the index for the outer query.
You must rewrite your subquery so that it always returns a row with a non-NULL scalar, or stick with two separate queries. However, what should happen if your first query returns NULL? With a compound statement you could put an IF around the second query and not execute it at all if the first returns NULL.
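One possible rewrite along those lines (a sketch; the sentinel 2147483647 is my assumption for the "no rows" case, and whether the optimizer can then use the delivery_start index depends on the MariaDB version):

SELECT P2.*
FROM ExampleTable P2
WHERE P2.delivery_start >= (
    -- COALESCE guarantees a non-NULL scalar: if the inner query finds
    -- no rows, the far-future sentinel makes the outer query match nothing.
    SELECT COALESCE(MIN(P1.delivery_start), 2147483647)
    FROM ExampleTable P1
    WHERE 1641288602 < P1.delivery_end
);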
Replace these
INDEX(delivery_start)
INDEX(delivery_end)
with these:
INDEX(delivery_start, delivery_end)
INDEX(delivery_end, delivery_start)
The second one will help significantly with the subquery. Then the first may help with the outer query.
(If those don't help, please add SHOW CREATE TABLE, EXPLAIN SELECT ... and table sizes.)
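A sketch of that swap (assuming the existing indexes are named after their columns; check SHOW CREATE TABLE for the real names):

ALTER TABLE ExampleTable
    DROP INDEX delivery_start,
    DROP INDEX delivery_end,
    ADD INDEX (delivery_start, delivery_end),
    ADD INDEX (delivery_end, delivery_start);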

Optimizing query with fts + composite index

I have the following query:
SELECT *
FROM table
WHERE
structural_type=1
AND parent_id='167F2-F'
AND points_to_id=''
# AND match(search) against ('donotmatch124213123123')
The search takes about 10ms to run, running on the composite index (structural_type, parent_id, points_to_id). However, when I add in the fts index, the query balloons to taking ~1s, regardless of what is contained in the match criteria. Basically it seems like it 'skips the index' whenever I have a fts search applied.
What would be the best way to optimize this query?
Update: a few explains:
EXPLAIN SELECT... # without fts
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 SIMPLE table NULL ref structural_type structural_type 209 const,const,const 2 100.00 NULL
With fts (also adding 'force index'):
explain SELECT ... force INDEX (structural_type) AND match...
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 SIMPLE table NULL fulltext structural_type,search search 0 const 1 5.00 Using where; Ft_hints: sorted
The only thing I can think of, which would be incredibly hack-ish, would be to add an additional term to the fts so it does the filter 'within' that. For example:
fts_term += " StructuralType1ParentID167F2FPointsToID"
The MySQL optimizer can only use one index for your WHERE clause, so it has to choose between the composite one and the FULLTEXT one.
Since it can't run both plans to benchmark which one is faster, it estimates how fast the different execution plans will be.
To do so, MySQL uses some internal stats it keeps about each table. But those stats can be very different from reality if they aren't updated while the data in the table changes.
Running an OPTIMIZE TABLE query allows MySQL to refresh its table stats, so it will be able to make better estimates and choose the better index.
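A minimal sketch (`table` stands in for the real table name, as in the question):

-- Rebuilds the table and refreshes its index statistics:
OPTIMIZE TABLE `table`;

-- Lighter-weight alternative that only refreshes the statistics:
ANALYZE TABLE `table`;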
Try expressing this without the full-text logic, using LIKE:
SELECT *
FROM table
WHERE structural_type = 1 AND
      parent_id = '167F2-F' AND
      points_to_id = '' AND
      search NOT LIKE '%donotmatch124213123123%';
The index should still be used for the first three columns. LIKE might be slow, but if not many rows match the first three, this might not be as bad as using the full text index.

MySQL (MyISAM) SELECT query takes too long with join

I have a pretty long INSERT query that inserts data from a SELECT query into a table. The problem is that the SELECT query takes too long to execute. The table is MyISAM and the SELECT locks the table, which affects other users who also use it. I have found that the problem with the query is a join.
When I remove this part of the query, it takes less than a second to execute, but when I leave it in, the query takes more than 15 minutes:
LEFT JOIN enq_217 Pex_217
ON e.survey_panelId = Pex_217.survey_panelId
AND e.survey_respondentId = Pex_217.survey_respondentId
AND Pex_217.survey_respondentId != 0
db.table_1 contains 590,145 rows and e contains 4,703 rows.
Explain Output:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY e ALL survey_endTime,survey_type NULL NULL NULL 4703 Using where
1 PRIMARY Pex_217 ref survey_respondentId,idx_table_1 idx_table_1 8 e.survey_panelId,e.survey_respondentId 2 Using index
2 DEPENDENT SUBQUERY enq_11525_timing eq_ref code code 80 e.code 1
How can I edit this part of the query to be faster?
I suggest creating an index on the table db.table_1 for the fields panelId and respondentId.
You want an index on the table. The best index for this logic is:
create index idx_table_1 on table_1(panelId, respondentId)
The order of these two columns in the index should not matter.
You might want to include other columns in the index, depending on what the rest of the query is doing.
Note: a single index with both columns is different from two indexes with each column.
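To make that note concrete (index names here are illustrative):

-- Two single-column indexes: the join can use at most one of them.
CREATE INDEX idx_panel ON table_1(panelId);
CREATE INDEX idx_respondent ON table_1(respondentId);

-- One composite index: both equality conditions in the ON clause
-- are resolved with a single index lookup.
CREATE INDEX idx_table_1 ON table_1(panelId, respondentId);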
Why is it a LEFT join?
How many rows in Pex_217?
Run ANALYZE TABLE on each table used. (This sometimes helps MyISAM; it is rarely needed for InnoDB.)
Since the 'real problem' seems to be that the query "holds up other users", switch to InnoDB.
Tips on conversion
The JOIN is not that bad (with the new index -- note Using index): 4703 rows scanned, then reaching into the other table's index about 2 times for each.
Perhaps the "Dependent subquery" is the costly part. Let's see that.
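For illustration only, since the original INSERT ... SELECT isn't shown: the EXPLAIN's DEPENDENT SUBQUERY runs once per row of e, and when it is a single-value lookup (eq_ref on code suggests code is unique in enq_11525_timing) it can usually be rewritten as a join. The use of e as a table name and the column duration below are hypothetical:

-- Hypothetical correlated form: the inner SELECT runs for every row of e.
SELECT e.code,
       (SELECT t.duration
        FROM enq_11525_timing t
        WHERE t.code = e.code) AS duration
FROM e;

-- Join form: one pass over e, probing enq_11525_timing via its code index.
SELECT e.code, t.duration
FROM e
LEFT JOIN enq_11525_timing t ON t.code = e.code;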

MySQL index questions: simple query takes too long for a relatively small number of indexed rows on a medium-large table

I have a relatively large table (5,208,387 rows, 400MB data / 670MB index);
all columns I use to search with are indexed.
name and type are VARCHAR(255) with BTREE indexes,
and sdate is an INTEGER column containing timestamps.
I fail to understand some issues.
First, this query is very slow (5 sec):
SELECT *
FROM `mytable`
WHERE `name` LIKE 'hello%my%big%text%thing%'
AND `type` LIKE '%'
ORDER BY `sdate` DESC LIMIT 3
EXPLAIN for the above:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE mytable range name name 257 NULL 5191 Using where
while this one is very fast (5msec):
SELECT *
FROM `mytable`
WHERE `name` LIKE 'hello.my%big%text%thing%'
AND `type` LIKE '%'
ORDER BY `sdate` DESC LIMIT 3
EXPLAIN for the above:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE mytable range name name 257 NULL 204 Using where
The difference in the number of rows scanned makes sense because of the indexes,
but having 5k indexed rows take 5 seconds seems way too much.
Also, ordering by name instead of sdate makes the queries very fast, but I need to order by the timestamp.
The second thing I do not understand is that before
adding the last column to the index,
the db had an index of 1.4GB;
now, after running an OPTIMIZE/REPAIR, the size is just 670MB.
The problem is that only the portion before the first % can take advantage of the index; the rest of the LIKE pattern has to be checked against every row matching hello% or hello.my%, without the index's help. Also, ordering by a different column than the one in the index used probably requires a second pass, or at least a sort, rather than reading an already-sorted index. Options for better performance (each can be implemented independently; both are sketched below) are:
Using a full-text index on the name column and a MATCH() AGAINST() search rather than LIKE with %'s.
Adding sdate to a combined index (name, sdate), which could very well speed up sorting.
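Both options, sketched (index names are mine; FULLTEXT on InnoDB needs MySQL 5.6+ or MariaDB 10.0.5+):

-- Option 1: full-text index plus MATCH() AGAINST() instead of LIKE:
ALTER TABLE mytable ADD FULLTEXT INDEX ft_name (name);

SELECT *
FROM mytable
WHERE MATCH(name) AGAINST('+hello +my +big +text +thing' IN BOOLEAN MODE)
ORDER BY sdate DESC LIMIT 3;

-- Option 2: combined index, which may help the sort as suggested above:
ALTER TABLE mytable ADD INDEX name_sdate (name, sdate);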

mysql query optimization

I need some help optimizing this query.
select * from transaction where id < 7500001 order by id desc limit 16
When I run EXPLAIN on this, the type is "range" and rows is "7500000".
According to some online references, this means it took the query 7,500,000 scanned rows to get the data.
Is there any way I can optimize this so it scans fewer rows to get the data? Also, id is the primary key column.
According to some online references, this means it took the query 7,500,000 scanned rows to get the data
Not actually. It's the approximate number of rows that will potentially be scanned (the optimizer cannot determine the correct number in many cases). But you specified LIMIT, so only the first 16 rows will be read while the query executes.
PS: I hope the key used in EXPLAIN is id?
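One way to verify that only a handful of rows are actually read (a sketch using session status counters):

FLUSH STATUS;

SELECT * FROM transaction WHERE id < 7500001 ORDER BY id DESC LIMIT 16;

-- The Handler_read_% counters show rows actually touched by the query;
-- expect values on the order of 16, not the 7,500,000 EXPLAIN estimated:
SHOW SESSION STATUS LIKE 'Handler_read%';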
I performed an EXPLAIN with your query on an 8-million-row table:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE transaction range PRIMARY PRIMARY 8 NULL 4079100 Using where
The actual execution was fast: Execution Time: 00:00:00:044.