Is it better to use this SQL code, assuming the right index is applied on the column?
Assume Constant is an input from a text field.
select ...
from .....
where lower(column) like 'Constant%' or lower(column) like '%Constant%'
Is it better than this?
select ...
from .....
where lower(column) like '%Constant%'
In the first query I try to match the constant with LIKE, using the index first and hoping to get lucky with a prefix match, and only then fall back to the full wildcard match.
All I want is for performance not to decrease: if both queries run in the same time, or if the first query sometimes runs faster, that's fine with me.
I use lower() because we use DEFAULT CHARSET=utf8 COLLATE=utf8_bin.
I created a little table:
create table dotdotdot (
col varchar(20),
othercol int,
key(col)
);
I did an EXPLAIN on a query similar to the one you showed:
explain select * from dotdotdot where lower(col) = 'value'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: dotdotdot
partitions: NULL
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 1
filtered: 100.00
Extra: Using where
Notice the type: ALL, which means it can't use the index on col. By using the lower() function, we spoil MySQL's ability to use the index, and it has to resort to a table-scan, evaluating the expression for every row. As your table gets larger, this will get more and more expensive.
And it's unnecessary anyway! String comparisons are case-insensitive in the default collations. So unless you deliberately declared your table with a case-sensitive collation or binary collation, it's just as good to skip the lower() function call, so you can use an index.
Example:
explain select * from dotdotdot where col = 'value'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: dotdotdot
partitions: NULL
type: ref
possible_keys: col
key: col
key_len: 23
ref: const
rows: 1
filtered: 100.00
Extra: NULL
The type: ref indicates the use of a non-unique index.
Also compare to using wildcards for pattern-matching. This also defeats the use of an index, and it has to do a table-scan.
explain select * from dotdotdot where col like '%value%'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: dotdotdot
partitions: NULL
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 1
filtered: 100.00
Extra: Using where
Using wildcards like this for pattern-matching is terribly inefficient!
Instead, you need to use a fulltext index.
You might like my presentation Full Text Search Throwdown and the video here: https://www.youtube.com/watch?v=-Sa7TvXnQwY
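As a rough sketch against the dotdotdot table from above (the index name ft_col is just illustrative):
alter table dotdotdot add fulltext index ft_col (col);
select * from dotdotdot where match(col) against ('value');
With the fulltext index in place, EXPLAIN shows type: fulltext instead of ALL, and the search no longer has to evaluate every row.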
In the other answer you ask if using OR helps. It doesn't.
explain select * from dotdotdot where col like 'value%' or col like '%value%'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: dotdotdot
partitions: NULL
type: ALL
possible_keys: col
key: NULL
key_len: NULL
ref: NULL
rows: 1
filtered: 100.00
Extra: Using where
Notice the optimizer identifies the col index as a possible key, but then ultimately decides not to use it (key: NULL).
No, this would not improve the query performance significantly.
MySQL evaluates the WHERE clause per row and therefore inspects all of the conditions before proceeding to the next row. Hitting the index first may slightly improve performance when there is a match, but that gain will most likely be wiped out by the double evaluation whenever the first condition does not match.
What could have helped is:
1) run the query with like 'Constant%'
2) run another query with like '%Constant%'
in which case, the first one may be accelerated if there is a match.
However, you will most likely suffer from the overhead and perform worse in 2 queries than in one.
Moreover, the LIKE operator is case-insensitive with the default (_ci) collations, so the lower(column) is unnecessary there.
Meanwhile, if you expect your data to match mostly on the first condition, and rarely on the second, then yes, this would lead to an improvement, since the second condition is not evaluated.
Using LOWER() prevents use of the index. So, switch to a ..._ci collation and ditch the LOWER.
Consider a FULLTEXT index; it is much faster than LIKE '%...%'. The former is fast; the latter is a full table scan.
OR is almost always a performance killer.
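As a rough sketch of the collation switch (the table name, column name, and length are placeholders; pick the _ci collation that matches your character set):
alter table your_table modify your_column varchar(100) character set utf8 collate utf8_general_ci;
After that, your_column LIKE 'Constant%' is case-insensitive without LOWER() and can still use the index; only the '%Constant%' form still forces a scan.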
Related
Which query is better?
SELECT true;
SELECT true FROM users LIMIT 1;
In terms of:
Best practice
Performance
The first query has less overhead because it doesn't reference any tables.
mysql> explain select true\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: NULL
partitions: NULL
type: NULL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: NULL
filtered: NULL
Extra: No tables used
Whereas the second query does reference a table, which means it has to spend time:
Checking that the table exists and, if the query references any columns, that those columns exist.
Checking that your user has privileges to read that table.
Acquiring a metadata lock, so no one does any DDL or LOCK TABLES while your query is reading it.
Starting to do an index-scan, even though it will be cut short by the LIMIT.
Here's the explain for the second query for comparison:
mysql> explain select true from mysql.user limit 1\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: user
partitions: NULL
type: index
possible_keys: NULL
key: PRIMARY
key_len: 276
ref: NULL
rows: 8
filtered: 100.00
Extra: Using index
The first query will return one row with the value true.
The second query will return rows from the users table, but with true as the only value (with LIMIT 1, just one such row).
So if you only need one row, use the first query. If you need one row per row of the table with the same value, use the second one without the LIMIT.
In either case, it is obvious you want the value TRUE :) With that intention, SELECT TRUE is the most efficient, as it doesn't make MySQL look at the users table at all, no matter how many rows are in it, nor apply the LIMIT 1 afterwards.
As for BEST PRACTICE, I am not sure what you meant here; from my point of view this hardly constitutes a practice at all, let alone a best one, as I fail to see any real-life application of this approach.
From what I understand about multi-column indexes, they are only useful if you're using columns starting from the left and not skipping any. That is, when you have an index on (a, b, c), you can query on (a), (a, b), or (a, b, c).
But today I found out that when there's an index (BTREE on an InnoDB table) on:
some_varchar, some_bigint, other_varchar
I can query:
SELECT MAX(some_bigint) FROM the_table
and the plan for it says:
id: 1
select_type: SIMPLE
table: the_table
type: index
possible_keys: NULL
key: index_some_varchar_some_bigint_other_varchar
key_len: 175
ref: NULL
rows: 1
Extra: Using index
This seems to disagree with the docs. It's also confusing since the key is set, but possible_keys isn't.
How does this work in practice? If the key is ordered by some_varchar first, (or a prefix of it) how can MySQL get a MAX of the second column from it?
(a guess would be that MySQL collects some extra information about all columns in an index, but if that's true - is it possible to see it directly?)
My understanding of the indexes was correct, but my understanding of what Using index means was wrong.
Using index doesn't necessarily mean that the value was accessed via a fast lookup. It simply means that the row data was not accessed. When the type is index and the Extra is Using index, it still means that the whole index is being scanned:
From the documentation:
The index join type is the same as ALL, except that the index tree is scanned.
For a MAX lookup which is actually using a prefix of an index, the explain looks like this:
id: 1
select_type: SIMPLE
table: NULL
type: NULL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: NULL
Extra: Select tables optimized away
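For example, a query along these lines, where the leading column is pinned with an equality (the constant is just illustrative), lets MAX() on the second column be read straight from the end of that index range:
select max(some_bigint) from the_table where some_varchar = 'abc';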
I have a nested query and I am trying to see whether there is any full table scan in it.
explain delete from ACCESS where ACCESS.MESSAGEID in (select ID from MESSAGE where MESSAGE.CID = 'xzy67sd')\G
The subquery is hitting an index, but the outer query is not using one. Here is the query plan.
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: ACCESS
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 18295
Extra: Using where
*************************** 2. row ***************************
id: 2
select_type: DEPENDENT SUBQUERY
table: MESSAGE
type: unique_subquery
possible_keys: PRIMARY
key: PRIMARY
key_len: 8
ref: func
rows: 1
Extra: Using where
But if I separate the query and check the query plan, then it does use an index. I am not able to understand why, and I am looking for some hints.
explain delete from ACCESS where ACCESS.MESSAGEID in (2,3)\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: ACCESS
type: range
possible_keys: ACCESS_ID1
key: ACCESS_ID1
key_len: 8
ref: const
rows: 2
Extra: Using where
The subquery returns a constant, so rather than using the SELECT statement I typed the integers directly, and the query plan started picking the index. The subquery was:
select ID from MESSAGE where MESSAGE.CID = 'xzy67sd'
Thanks in advance
You don't need a subquery, here, and as a general rule, you shouldn't use one in MySQL unless you actually do need one.
DELETE a
FROM ACCESS a
JOIN MESSAGE m ON m.ID = a.MESSAGEID
WHERE m.CID = 'xzy67sd';
This will delete the rows from ACCESS while leaving MESSAGE alone because only ACCESS is listed (by its alias "a") between DELETE and FROM, which is where you specify which tables you want to delete matching rows from.
The optimizer should use the indexes appropriately.
https://dev.mysql.com/doc/refman/5.6/en/delete.html (multi-table syntax)
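To double-check the plan, you should be able to EXPLAIN the multi-table form directly (a sketch; on older MySQL versions you may need to EXPLAIN the equivalent SELECT instead):
explain delete a
from ACCESS a
join MESSAGE m on m.ID = a.MESSAGEID
where m.CID = 'xzy67sd';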
Let's say I have an "articles" table with these columns:
article_text: fulltext indexed
author_id: indexed
Now I want to search for a term that appears in an article that a particular author has written.
So, something like:
select * from articles
where author_id=54
and match (article_text) against ('foo');
The EXPLAIN for this query tells me that MySQL is only going to use the fulltext index.
I believe MySQL can only use one index here, but it sure seems like a wise idea to get all the articles a particular author has written first, before fulltext-searching for the term... so is there any way to help MySQL?
For example, what if you did a self-join?
select articles.* from articles as acopy
join articles on acopy.author_id = articles.author_id
where
articles.author_id = 54
and match(article_text) against ('foo');
The EXPLAIN for this lists the use of the author_id index first, then the fulltext search.
Does that mean it's actually only doing the fulltext search on the limited set as filtered by author_id?
ADDENDUM
The EXPLAIN plan for the self-join is as follows:
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: acopy
type: ref
possible_keys: index_articles_on_author_id
key: index_articles_on_author_id
key_len: 5
ref: const
rows: 20
filtered: 100.00
Extra: Using where; Using index
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: articles
type: fulltext
possible_keys: index_articles_on_author_id,fulltext_articles
key: fulltext_articles
key_len: 0
ref:
rows: 1
filtered: 100.00
Extra: Using where
2 rows in set (0.00 sec)
Ok, so, since
Index Merge is not applicable to full-text indexes
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html
I would try this approach (replace author_id_index with the name of your index on author_id):
select * from articles use index (author_id_index)
where author_id=54
and match (article_text) against ('foo');
Here the problem is the following:
it is indeed impossible to use a regular index in combination with a full-text index
if you join the table with itself, you are already using an index on each side of the join (the ON clause will use the author_id column; you definitely need the index here)
Which approach is most efficient has to be decided by you, with some test cases: whether using the author index is better than the fulltext one.
I have the following table structure:
town:
id (MEDIUMINT, PRIMARY KEY, auto_increment),
town (VARCHAR(150), NOT NULL),
lat (FLOAT(10,6), NOT NULL),
lng (FLOAT(10,6), NOT NULL)
I frequently use the query SELECT * FROM town ORDER BY town. I tried indexing town, but the index is not being used. So what is the best way to index so that I can speed up my queries?
Using EXPLAIN (a unique index is present on town):
mysql> EXPLAIN SELECT * FROM studpoint_town order by town \G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: studpoint_town
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 3
Extra: Using filesort
1 row in set (0.00 sec)
Your EXPLAIN output indicates that currently the studpoint_town table has only 3 rows. As explained in the manual:
The output from EXPLAIN shows ALL in the type column when MySQL uses a table scan to resolve a query. This usually happens under the following conditions:
[...]
The table is so small that it is faster to perform a table scan than to bother with a key lookup. This is common for tables with fewer than 10 rows and a short row length. Don't worry in this case.
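If the table ever grows to the point where the filesort matters, one option (a sketch; the index name is illustrative, and it assumes an InnoDB table) is a covering index. Because InnoDB appends the primary key to every secondary index, an index on (town, lat, lng) covers SELECT *, so MySQL can read the rows already sorted by town:
ALTER TABLE studpoint_town ADD INDEX idx_town_cover (town, lat, lng);
With enough rows, EXPLAIN should then show that index under key and Using index in Extra, with no Using filesort.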