Query optimization with LIKE statement - mysql

Create a table like this:
CREATE TABLE test ( a INT(10), b char(10));
add two indexes:
alter table test add key aa (a), add key bb (b);
insert some rows:
insert into test values (132,'logi');
insert into test values (322,'koko');
insert into test values (32,'kola');
etc...
You can test with a small number of rows, BUT in my application I will have a hundred million rows, so what I am going to ask makes sense.
So, let's say that I want to execute two queries:
1) explain select * from test where b like "frif%";
2) explain select * from test where a like "32%";
The first uses the index bb since b is a char, while the second doesn't use the index aa since a is just an INT (and I treat it like a char!).
How can I make the second query execute like the first, searching for numbers like "some_number%", so that EXPLAIN shows a range query instead of ALL?

If you are treating the number as a character string, then store it that way. It sounds like the number is a code of some sort. Although consisting of numbers, it is really just the name of something (such as an account number).
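A sketch of that approach, reusing the test table and index aa from the question (the VARCHAR length of 10 is an assumption):
ALTER TABLE test MODIFY a VARCHAR(10);            -- store the code as text; index aa is rebuilt on the new type
EXPLAIN SELECT * FROM test WHERE a LIKE '32%';    -- should now be able to use a range scan on aa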
If the number is fixed length, say 5, then you can do:
where a >= 32000 and a < 33000;
You could extend this idea for different lengths:
where (a >= 32    and a < 33)
   or (a >= 320   and a < 330)
   or (a >= 3200  and a < 3300)
   or (a >= 32000 and a < 33000)
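Putting those ranges back into the original query, EXPLAIN should then report a range scan on key aa instead of ALL (a sketch, assuming values of at most five digits):
explain select * from test
where (a >= 32    and a < 33)
   or (a >= 320   and a < 330)
   or (a >= 3200  and a < 3300)
   or (a >= 32000 and a < 33000);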

Related

How to speed up a query containing HAVING?

I have a table with close to a billion records, and need to query it with HAVING. It's very slow (about 15 minutes on decent hardware). How to speed it up?
SELECT ((mean - 3.0E-4)/(stddev/sqrt(N))) as t, ttest.strategyid, mean, stddev, N,
kurtosis, strategies.strategyId
FROM ttest,strategies
WHERE ttest.strategyid=strategies.id AND dataset=3 AND patternclassid="1"
AND exitclassid="1" AND N>= 300 HAVING t>=1.8
I think the problem is t cannot be indexed because it needs to be computed. I cannot add it as a column because the '3.0E-4' will vary per query.
Table:
create table ttest (
strategyid bigint,
patternclassid integer not null,
exitclassid integer not null,
dataset integer not null,
N integer,
mean double,
stddev double,
skewness double,
kurtosis double,
primary key (strategyid, dataset)
);
create index ti3 on ttest (mean);
create index ti4 on ttest (dataset,patternclassid,exitclassid,N);
create table strategies (
id bigint ,
strategyId varchar(500),
primary key(id),
unique key(strategyId)
);
explain select ... :
+----+-------------+------------+------------+--------+---------------+---------+---------+---------------------------------+---------+----------+----------------------------------+
| id | select_type | table      | partitions | type   | possible_keys | key     | key_len | ref                             | rows    | filtered | Extra                            |
+----+-------------+------------+------------+--------+---------------+---------+---------+---------------------------------+---------+----------+----------------------------------+
|  1 | SIMPLE      | ttest      | NULL       | range  | PRIMARY,ti4   | ti4     | 17      | NULL                            | 1910344 |   100.00 | Using index condition; Using MRR |
|  1 | SIMPLE      | strategies | NULL       | eq_ref | PRIMARY       | PRIMARY | 8       | Jellyfish_test.ttest.strategyid |       1 |   100.00 | Using where                      |
+----+-------------+------------+------------+--------+---------------+---------+---------+---------------------------------+---------+----------+----------------------------------+
The query needs to be reformulated and an index needs to be added.
Plan A:
SELECT ((tt.mean - 3.0E-4)/(tt.stddev/sqrt(tt.N))) as t,
tt.strategyid, tt.mean, tt.stddev, tt.N, tt.kurtosis,
s.strategyId
FROM ttest AS tt
JOIN strategies AS s ON tt.strategyid = s.id
WHERE tt.dataset = 3
AND tt.patternclassid = 1
AND tt.exitclassid = 1
AND tt.N >= 300
AND ((tt.mean - 3.0E-4)/(tt.stddev/sqrt(tt.N))) >= 1.8
and a 'composite' and 'covering' index on ttest. Replace your ti4 with this (to make it 'covering'):
INDEX(dataset, patternclassid, exitclassid, -- any order
N, strategyid) -- in this order
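For concreteness, a sketch of the DDL for that replacement (dropping the old ti4 and recreating it under the same name is just one way to do it):
ALTER TABLE ttest DROP INDEX ti4;
ALTER TABLE ttest ADD INDEX ti4 (dataset, patternclassid, exitclassid, N, strategyid);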
Plan B:
SELECT ((tt.mean - 3.0E-4)/(tt.stddev/sqrt(tt.N))) as t,
tt.strategyid, tt.mean, tt.stddev, tt.N, tt.kurtosis,
( SELECT s.strategyId
FROM strategies AS s
WHERE s.id = tt.strategyid
) AS strategyId
FROM ttest AS tt
WHERE tt.dataset = 3
AND tt.patternclassid = 1
AND tt.exitclassid = 1
AND tt.N >= 300
AND ((tt.mean - 3.0E-4)/(tt.stddev/sqrt(tt.N))) >= 1.8
With the same index.
Unfortunately the expression for t needs to be repeated. Moving it from HAVING to WHERE avoids gathering unwanted rows only to end up throwing them away. Maybe the optimizer will do that automatically; please provide EXPLAIN SELECT ... so we can see.
Also, it is unclear whether one of the two formulations will run faster than the other.
To be honest, I've never seen HAVING being used like this; for 20+ years I've assumed it can only be used in GROUP BY situations!
Anyway, IMHO you don't need it here; as Rick James points out, you can put it all in the WHERE.
Rewriting it a bit I end up with:
SELECT ((t.mean - 3.0E-4)/(t.stddev/sqrt(t.N))) as t,
t.strategyid,
t.mean,
t.stddev,
t.N,
t.kurtosis,
s.strategyId
FROM ttest t
JOIN strategies s
  ON s.id = t.strategyid
WHERE t.dataset=3
AND t.patternclassid="1"
AND t.exitclassid="1"
AND t.N>= 300
AND ((t.mean - 3.0E-4)/(t.stddev/sqrt(t.N))) >= 1.8
For most of that we can indeed foresee a reasonable index. The problem remains with the last calculation:
AND ((t.mean - 3.0E-4)/(t.stddev/sqrt(t.N))) >= 1.8
However, before we go to that: how many rows are there if you ignore this 'formula'? 100? 200? If so, indexing as foreseen in Rick James' answer should be sufficient IMHO.
If it's in the thousands or many more, then the question becomes: how many of those are thrown out by the formula? 1%? 50%? 99%? If it's on the low side then again, indexing as proposed by Rick James will do. If, however, you only need to keep a few, you may want to optimize this further and index accordingly.
From your explanation I understand that 3.0E-4 is variable, so we can't include it in the index; we'll need to extract the parts we can:
If my algebra isn't failing me (and noting that t.stddev / sqrt(t.N) is positive, so the inequality does not flip), you can rework the formula like this:
AND ((t.mean - 3.0E-4) / (t.stddev / sqrt(t.N))) >= 1.8
AND ((t.mean - 3.0E-4) ) >= 1.8 * (t.stddev / sqrt(t.N))
AND t.mean - 3.0E-4 >= (1.8 * (t.stddev / sqrt(t.N)))
AND - 3.0E-4 >= (1.8 * (t.stddev / sqrt(t.N))) - t.mean
So the query becomes:
SELECT ((t.mean - 3.0E-4)/(t.stddev/sqrt(t.N))) as t,
t.strategyid,
t.mean,
t.stddev,
t.N,
t.kurtosis,
s.strategyId
FROM ttest t
JOIN strategies s
  ON s.id = t.strategyid
WHERE t.dataset=3
AND t.patternclassid="1"
AND t.exitclassid="1"
AND t.N>= 300
AND (1.8 * (t.stddev / sqrt(t.N))) - t.mean <= -3.0E-4
I'm not familiar with MySQL, but glancing at the documentation it should be possible to include 'generated columns' in the index. So we'll do exactly that with (1.8 * (t.stddev / sqrt(t.N))) - t.mean.
Your indexed fields thus become:
dataset, patternclassid, exitclassid, N, (1.8 * (t.stddev / sqrt(t.N))) - t.mean
Note that the system will have to calculate this value for each and every row you insert (and possibly update) in the table. However, once it is there (and indexed) it should make the query quite a bit faster.
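For concreteness, a sketch of what that could look like with MySQL 5.7+ generated columns (the column name t_bound and index name ti5 are made up for illustration):
ALTER TABLE ttest
    ADD COLUMN t_bound DOUBLE
        GENERATED ALWAYS AS (1.8 * (`stddev` / SQRT(`N`)) - `mean`) STORED,  -- computed once per row on write
    ADD INDEX ti5 (dataset, patternclassid, exitclassid, `N`, t_bound);
The query's final predicate then becomes AND t.t_bound <= -3.0E-4.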

How do I query with the BETWEEN operator for text searches in a MySQL database?

I have a SQL table in which I use the BETWEEN operator.
The BETWEEN operator selects values within a range. The values can be numbers, text, or dates.
stu_id  name           city     pin
1       Raj            Ranchi   123456
2       sonu           Delhi    652345
3       ANU            KOLKATA  879845
4       K.K's Company  Delhi    345546
5       J.K's Company  Delhi    123456
I have queries like this:
SELECT * FROM student WHERE stu_id BETWEEN 2 AND 4 //including 2 & 4
SELECT * FROM `student` WHERE name between 'A' and 'K' //including A & not K
My question is: why is 'K' not included? I want names starting with K in the results as well.
Don't use between -- until you really understand it. That is just general advice. BETWEEN is inclusive, so your second query is equivalent to:
WHERE name >= 'A' AND
name <= 'K'
Because of the equality, 'K' is included in the result set. However, names longer than one character and starting with 'K' are not -- "Ka" for instance.
Instead, be explicit:
WHERE name >= 'A' AND
name < 'L'
Of course, BETWEEN can be useful. However, it is useful for discrete values, such as integers. It is a bit dangerous with numbers with decimals, strings, and date/time values. That is why I encourage you to express the logic as inequalities.
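For example, with date/time values (the orders table and its created_at DATETIME column are made up purely for illustration), BETWEEN silently drops most of the last day:
SELECT * FROM orders WHERE created_at BETWEEN '2024-01-01' AND '2024-01-31';            -- misses everything on Jan 31 after midnight
SELECT * FROM orders WHERE created_at >= '2024-01-01' AND created_at < '2024-02-01';    -- the inequality form covers the whole month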
To supplement Gordon's answer, one way to get what you're expecting is to turn the name into a discrete set of values:
SELECT * FROM `student` WHERE LEFT(name, 1) between 'A' and 'K'
You need to appreciate that 'K.K's Company' sorts alphabetically AFTER the letter 'K' on its own, so it is not BETWEEN, in the same way that 4.1 is not BETWEEN 2 and 4.
Stripping the string down to just its first character makes it work like you expect. But take cautionary note: you should always avoid running functions on values stored in tables, because if you had a million names, that's a million strings MySQL has to reduce to their first letter, and it may no longer be able to use an index on name, battering the performance.
Instead, you could:
SELECT * FROM `student` WHERE name >= 'A' and name < 'L'
which is more likely to permit the use of an index, as you aren't manipulating the stored values before comparing them.
This works because it asks for everything up to, but not including, 'L', which includes all of your names starting with K, even 'Kzzzzzzzz'. Numerically it is equivalent to saying number >= 2 and number < 5, which gives you all the numbers starting with 2, 3 or 4 (like the 4.1 from before) but not 5.
Remember that BETWEEN is inclusive at both ends. When you want to specify ranges that capture all possible values, prefer the pattern a >= b and a < c, a >= c and a < d, and so on.
Comparison happens in lexicographical order, and 'K.K's Company' > 'K'.
We should convert the string to an integer. You can try this MySQL script with CAST and SUBSTRING. I've updated your script here; it will include the last record as well.
SELECT * FROM student WHERE CAST(SUBSTRING(name FROM 1) AS UNSIGNED) BETWEEN 'A' AND 'K';
The script will work. Hope it helps.
Here I've attached my test sample.

Reorganizing mysql aggregate row into single piece rows

Consider the following mysql table:
ID  WeightS  AmountS  WeightM  AmountM  WeightL  AmountL  Someothercolumnshere
1   6        3        10       2        18       2        ...
I need to reorganize this data into a pivot-friendly table, where each piece in the amount columns should be one result row. E.g. from the first two columns, WeightS and AmountS, the SELECT should produce 3 result rows, each having a weight of 2 kgs (=6 kgs total). So the full result table should be like this:
Weight  Someothercolumnshere
2       ...
2       ...
2       ...
5       ...
5       ...
9       ...
9       ...
I don't even know if there's SQL syntax capable of this kind of operation; I've never had a request like this before. Worst-case scenario, I have to do it in PHP instead, but I think MySQL is a lot more fun :p
I've built the schema on sqlfiddle, but I'm afraid that's all I've got.
You need a Tally table for a task like this. Create as many rows in it as needed (at least as many as your largest Amount value).
Create table Tally(`N` int);
insert into Tally( `N`) values(1),(2),(3),(4),(5);
Then
(select `ID`, `WeightS`/`AmountS` as `Weight`, `Someothercolumnshere`
from Catches
join Tally on Catches.`AmountS` >= Tally.`N`
)
UNION ALL
(select `ID`, `WeightL`/`AmountL`, `Someothercolumnshere`
from Catches
join Tally on Catches.`AmountL` >= Tally.`N`
)
UNION ALL
(select `ID`, `WeightM`/`AmountM`, `Someothercolumnshere`
from Catches
join Tally on Catches.`AmountM` >= Tally.`N`
)
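As an aside, on MySQL 8.0+ a recursive CTE can stand in for the hand-filled Tally table; a sketch (the bound of 100 is an arbitrary assumption, pick anything at least as large as your biggest Amount value):
WITH RECURSIVE Tally(N) AS (
    SELECT 1
    UNION ALL
    SELECT N + 1 FROM Tally WHERE N < 100
)
select `ID`, `WeightS`/`AmountS` as `Weight`, `Someothercolumnshere`
from Catches
join Tally on Catches.`AmountS` >= Tally.`N`;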

MySQL: Optimized query to find matching strings from set of strings

I have 10 sets of strings, each set having 9 strings. Of these 10 sets, all strings in the first set have length 10, those in the second set have length 9, and so on. Finally, all strings in the 10th set have length 1.
There is a common prefix of (length - 2) characters in each set, and the prefix length reduces by 1 in the next set. Thus, the first set has 8 characters in common, the second has 7, and so on.
Here is what a sample of the 10 sets looks like:
pu3q0k0vwn
pu3q0k0vwp
pu3q0k0vwr
pu3q0k0vwq
pu3q0k0vwm
pu3q0k0vwj
pu3q0k0vtv
pu3q0k0vty
pu3q0k0vtz
pu3q0k0vw
pu3q0k0vy
pu3q0k0vz
pu3q0k0vx
pu3q0k0vr
pu3q0k0vq
pu3q0k0vm
pu3q0k0vt
pu3q0k0vv
pu3q0k0v
pu3q0k0y
pu3q0k1n
pu3q0k1j
pu3q0k1h
pu3q0k0u
pu3q0k0s
pu3q0k0t
pu3q0k0w
pu3q0k0
pu3q0k2
pu3q0k3
pu3q0k1
pu3q07c
pu3q07b
pu3q05z
pu3q0hp
pu3q0hr
pu3q0k
pu3q0m
pu3q0t
pu3q0s
pu3q0e
pu3q07
pu3q05
pu3q0h
pu3q0j
pu3q0
pu3q2
pu3q3
pu3q1
pu3mc
pu3mb
pu3jz
pu3np
pu3nr
pu3q
pu3r
pu3x
pu3w
pu3t
pu3m
pu3j
pu3n
pu3p
pu3
pu9
pud
pu6
pu4
pu1
pu0
pu2
pu8
pu
pv
0j
0h
05
pg
pe
ps
pt
p
r
2
0
b
z
y
n
q
Requirement:
I have a table PROFILES having columns SRNO (type bigint, primary key) and UNIQUESTRING (type char(10), unique key). I want to find 450 SRNOs for matching UNIQUESTRINGs from those 10 sets.
First find strings like those in the first set. If we don't get enough results (i.e. 450), find strings like those in the second set. If we still don't get enough results (450 minus the results of the first set), find strings like those in the third set. And so on.
Existing Solution:
I've written a query something like this:
select srno from profiles
where ( (uniquestring like 'pu3q0k0vwn%')
or (uniquestring like 'pu3q0k0vwp%') -- all those above uniquestrings after this and finally the last one
or (uniquestring like 'n%')
or (uniquestring like 'q%')
)
limit 450
However, after getting feedback from Rick James in this answer, I realized this is not an optimized query, as it touches many more rows than it needs to.
So I plan to rewrite the query like this:
(select srno from profiles where uniquestring like 'pu3q0k0vwn%' LIMIT 450)
UNION DISTINCT
(select srno from profiles where uniquestring like 'pu3q0k0vwp%' LIMIT 450); -- and more such clauses after this for each uniquestring
I'd like to know if there are any better solutions for this.
SELECT ...
WHERE str LIKE 'pu3q0k0vw%' AND -- the 10-char set
str REGEXP '^pu3q0k0vw[nprqmj]' -- the 9 next letters
LIMIT ...
# then check for 450; if not enough, continue...
SELECT ...
WHERE str LIKE 'pu3q0k0vt%' AND -- the 10-char set
str REGEXP '^pu3q0k0vt[vyz]' -- the 9 next letters
LIMIT 450
# then check for 450; if not enough, continue...
etc.
SELECT ...
WHERE str LIKE 'pu3q0k0v%' AND -- the 9-char set
str REGEXP '^pu3q0k0v[wyzxrqmtv]' -- the 9 next letters
LIMIT ...
# check, etc; for a total of 10 SELECTs or 450 rows, whichever comes first.
This will be 10+ selects. Each select will be somewhat optimized by first picking rows with a common prefix with LIKE, then it double checks with a REGEXP.
(If you don't like splitting the inconsistent pu3q0k0vw vs. pu3q0k0vt, we can discuss things further.)
You say "prefix"; I have coded the LIKE and REGEXP to assume arbitrary text after the prefix given.
UNION is not viable, since it will (I think) gather all the rows before picking 450. Each SELECT will stop at the LIMIT if there is no DISTINCT, GROUP BY, or ORDER BY that requires gathering everything first.
REGEXP is not smart enough to avoid scanning the entire table; adding the LIKE avoids such (except when more than, say, 20% of the rows match the LIKE).
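Filled in against the profiles table from the question, the first probe would look something like this (a sketch; the application runs the remaining probes in order, keeping a running count until 450 srno values have been collected):
SELECT srno
FROM   profiles
WHERE  uniquestring LIKE 'pu3q0k0vw%'              -- range scan on the common prefix
  AND  uniquestring REGEXP '^pu3q0k0vw[nprqmj]'    -- then filter to this set's strings
LIMIT  450;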

Perform MySQL select on unsorted digits

I am working on an application that requires me to validate whether 3 randomly generated digits match a 3-digit string that has been entered into a database from user input. I also need to preserve the exact order in which the user entered the string, so sorting on input is not an option.
For example, the randomly generated digits may be 6 4 0, and in the database the string may be stored as '406'.
Is there an easy way this can be accomplished in a single query without enumerating the options or adding an extra column/view?
maybe you could try
create table y (z varchar(10));
insert into y values ('406');
insert into y values ('604');
insert into y values ('446');
insert into y values ('106');
insert into y values ('123');
and then
SELECT *
FROM y
-- MAKE_SET(7, d1, d2, d3) builds the string 'd1,d2,d3'; each digit of '640' must appear in it
WHERE FIND_IN_SET(SUBSTRING('640', 1, 1), MAKE_SET(7, SUBSTRING(z, 1, 1), SUBSTRING(z, 2, 1), SUBSTRING(z, 3, 1)))
  AND FIND_IN_SET(SUBSTRING('640', 2, 1), MAKE_SET(7, SUBSTRING(z, 1, 1), SUBSTRING(z, 2, 1), SUBSTRING(z, 3, 1)))
  AND FIND_IN_SET(SUBSTRING('640', 3, 1), MAKE_SET(7, SUBSTRING(z, 1, 1), SUBSTRING(z, 2, 1), SUBSTRING(z, 3, 1)));
returns
406
604
Sum the three random digits
Something like
Select * From Triplets Where (Ascii(Substring(Number, 1, 1)) - 48) + (Ascii(Substring(Number, 2, 1)) - 48) +
(Ascii(Substring(Number, 3, 1)) - 48) = MySumOfNumber
(Note that MySQL's SUBSTRING is 1-based, so the three digits sit at positions 1, 2 and 3.)
Easy is a state of mind, isn't it? Weigh the storage requirement of an extra "CheckSum" int against the high cost of a query like this.
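If you do go the precomputed route, a sketch of that extra "CheckSum" column using MySQL 5.7+ generated columns (keeping in mind that a matching sum is only a coarse pre-filter, since different digit triplets can share the same sum):
ALTER TABLE Triplets
    ADD COLUMN CheckSum TINYINT
        GENERATED ALWAYS AS (
              (ASCII(SUBSTRING(Number, 1, 1)) - 48)
            + (ASCII(SUBSTRING(Number, 2, 1)) - 48)
            + (ASCII(SUBSTRING(Number, 3, 1)) - 48)
        ) STORED,
    ADD INDEX (CheckSum);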