I am developing a web application that has a listbox with thousands of records from a MySQL database ('Table1').
I have a quick search field that searches all fields with the OR
operator (value: aaaa).
I have a dialog box that allows filtering by all fields with the AND
operator (values: xxx, yyy, zzz).
The idea is to create a query that combines both the filtered values and the search value.
I have created a subquery in the FROM clause like this:
SELECT b.*
FROM
( SELECT *
FROM Table1
WHERE Field1 LIKE '%xxx%'
AND Field2 LIKE '%yyy%'
AND Field3 LIKE '%zzz%'
) b
WHERE b.Field1 LIKE '%aaaa%'
OR b.Field2 LIKE '%aaaa%'
OR b.Field3 LIKE '%aaaa%'
After running this query, it seems to me that the performance is not optimal.
Would it be possible to improve the query in some way to optimize performance (and lower the response time)?
Thank you very much.
Wardiam
Update:
It seems to me that it would be more correct to use a Common Table Expression (CTE). I have used this CTE:
WITH CTE_Expression AS
(SELECT *
FROM Table1
WHERE Field1 LIKE '%xxx%'
AND Field2 LIKE '%yyy%'
AND Field3 LIKE '%zzz%'
)
SELECT b.*
FROM CTE_Expression b
WHERE b.Field1 LIKE '%aaaa%'
OR b.Field2 LIKE '%aaaa%'
OR b.Field3 LIKE '%aaaa%'
What do you think?
Found a similar issue:
Mysql Improve Search Performance with wildcards (%%)
No, because MySQL will not be able to utilize the index when you have
a leading wildcard. If you changed your LIKE to 'aaaa%', then it would
be able to use the index.
If you want to check if indices are being utilized, check the execution plan with EXPLAIN:
https://dev.mysql.com/doc/refman/8.0/en/using-explain.html
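For instance (a minimal sketch using the names from the question):
EXPLAIN
SELECT *
FROM Table1
WHERE Field1 LIKE '%xxx%'
  AND Field2 LIKE '%yyy%'
  AND Field3 LIKE '%zzz%';
With leading wildcards you should expect to see type: ALL (a full table scan) on Table1 in the output.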
Or try using the MySQL full-text index functions MATCH() ... AGAINST(). Here are some articles about them:
https://severalnines.com/database-blog/full-text-searches-mysql-good-bad-and-ugly
https://www.w3resource.com/mysql/mysql-full-text-search-functions.php
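As a rough sketch of the full-text route, again using the Table1/Field names from the question (the index name ft_all_fields is arbitrary; note that MATCH ... AGAINST matches whole words, not arbitrary substrings, so it is not a drop-in replacement for LIKE '%aaaa%'):
ALTER TABLE Table1 ADD FULLTEXT INDEX ft_all_fields (Field1, Field2, Field3);

SELECT *
FROM Table1
WHERE MATCH(Field1, Field2, Field3) AGAINST('aaaa' IN NATURAL LANGUAGE MODE)
  AND Field1 LIKE '%xxx%'
  AND Field2 LIKE '%yyy%'
  AND Field3 LIKE '%zzz%';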
Edit:
After some investigation, I came to the conclusion that there is no way a leading wildcard can use a table index:
Wildcard search in MySQL full-text search
https://blog.monyog.com/problem-queries-are-killing-your-database-performance/
Related
I'm facing a problem when dealing with a large table with many records in it (MySQL by the way).
I have two scenarios. The first one is a single table with a primary key and 18 different varchar fields.
id  | Field1 | Field2 | ... | Field18
1   | abc    | abc    | ... | abc
2   | def    | def    | ... | def
100 | xyz    | xyz    | ... | xyz
The second scenario is that I have a single table that organizes all information in a different way:
id_record | field_name | value
1         | field1     | abc
1         | field2     | abc
1         | field3     | abc
2         | field1     | def
...       | ...        | ...
100       | field18    | xyz
With the first, I have a fixed structure (no flexibility) and there may be a lot of blank values. With the second, I can easily add new fields, but the table grows quickly.
In some tests I ran, both perform well with about 200,000 records stored. But as the data grows (I tested with 500,000 and 1M records), things get painfully slow in the second scenario.
In the second scenario, id_record and field_name are indexed and value has a full-text index, but it does not help much.
When I try to combine two matches, things get especially slow:
select f1.id_record
from table f1
where f1.field_name = 'field1'
  and f1.value like '%abc%'
  and f1.id_record in (
    select f2.id_record
    from table f2
    where f2.field_name = 'field18'
      and f2.value like '%abc%'
  );
or
select f1.id_record
from table f1, table f2
where f1.field_name = 'field1'
  and f2.field_name = 'field18'
  and f1.id_record = f2.id_record
  and f1.value like '%abc%'
  and f2.value like '%abc%';
Any ideas on how to perform better on the second scenario? Or if there's any new ideas on how to structure better this kind of data?
If you have exactly 18 columns and don't need to add more, then the first is a very reasonable way to store the data. The query that you want is simply:
select t.*
from t
where t.field1 like '%abc%' and t.field18 like '%abc%';
Unfortunately, this query requires a full-table scan (because of the wildcards in the like). Without using a full text index, this is probably the best that you can do unless the data is quite sparse.
The second structure gives you two possibilities for the above query. One uses JOIN:
select f1.id_record
from table f1
join table f2
  on f1.id_record = f2.id_record
where f1.field_name = 'field1'
  and f2.field_name = 'field18'
  and f1.value like '%abc%'
  and f2.value like '%abc%';
The best index for this is (id_record, field_name, value). This might have okay performance if field1 or field18 is quite sparse.
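A minimal sketch of that composite index (`table` is backticked because TABLE is a reserved word; the value column is assumed to be a VARCHAR here, since a TEXT column would need a key prefix such as value(64)):
CREATE INDEX idx_record_field_value ON `table` (id_record, field_name, value);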
I usually recommend group by for this type of query:
select f.id_record
from table f
where (f.field_name = 'field1' and f.value like '%abc%') or
      (f.field_name = 'field18' and f.value like '%abc%')
group by f.id_record
having count(*) = 2;
However, I recommend group by because of its flexibility, not specifically because of its performance.
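For instance, the same shape extends to any number of field conditions, or to "match at least N of them", just by editing the WHERE and HAVING (a sketch reusing the names above):
select f.id_record
from table f
where (f.field_name = 'field1' and f.value like '%abc%') or
      (f.field_name = 'field3' and f.value like '%abc%') or
      (f.field_name = 'field18' and f.value like '%abc%')
group by f.id_record
having count(*) = 3; -- require all three; use >= 2 for "at least two"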
I tried lots of things but none of them worked; I hope someone can help me with this query.
Let me show my query first, then the issue:
select log.*, client.client_name
from ( select *
       from sessions
       where `report_error_status` like CONCAT('%', 'consec', '%')
          or `ipaddress` like CONCAT('%', 'consec', '%')
          or `last_updated` like CONCAT('%', 'consec', '%')
       ORDER BY `id` DESC LIMIT 10 OFFSET 0 ) log
inner join
     ( select *
       from clients
       where `client_name` like CONCAT('%', 'consec', '%') ) client
  on log.client_id = client.id
To prevent the query speed from degrading exponentially, I apply a LIMIT inside the subquery on the sessions table. The query above works perfectly fine without the WHERE clauses. My problem is this: when a user searches for anything in the datatable from the front end, the WHERE clause is attached dynamically in the backend (the query above, with WHERE). Now suppose the sessions table does not contain the search value consec but the clients table does; the final query still returns nothing. Is there any way to apply a conditional WHERE, like the query below?
ifnull(
  (select id from sessions
   where `report_error_status` like CONCAT('%', 'consec', '%')
      or `ipaddress` like CONCAT('%', 'consec', '%')
      or `last_updated` like CONCAT('%', 'consec', '%')),
  (select * from sessions ORDER BY `id` DESC LIMIT 10 OFFSET 0)
)
That would resolve all my problems. Is there any way to achieve this in MySQL?
If the sessions table contains 100,000 rows, the search will run against the clients table row by row across those 100k records. Suppose the time taken to execute is 1 second; if my sessions table grows to 200k rows, the time will again increase exponentially in the inner join. To avoid this, I'm using a subquery on sessions with a LIMIT.
Note: report_error_status, ipaddress, client_name, etc. are indexed.
There is no way to optimize a MySQL SELECT statement whose LIKE pattern opens with a wildcard. Your pattern is %consec%, and you could add an index, but to quote the official MySQL documentation...
The index also can be used for LIKE comparisons if the argument to LIKE is a constant string that does not start with a wildcard character. For example, the following SELECT statements use indexes:
SELECT * FROM tbl_name WHERE key_col LIKE 'Patrick%';
SELECT * FROM tbl_name WHERE key_col LIKE 'Pat%_ck%';
Source: Dev.MySQL.com: Comparison of B-Tree and Hash Indexes; B-Tree Index Characteristics
Your query falls outside of this use case, so indices will not help. Here's another answer suggesting the same.
I am going to suggest Database Normalization...
You're selecting fields that are LIKE %consec%. Why? What is this value? Is it a special, internal code that means something special to your software and your software alone? After all, look at the names of the fields: report_error_status, ipaddress, last_updated. Except for maybe the error-code one, there's no reason "consec" would appear in these unless it had some internal significance.
For instance, table.field has a value of "userconsec"; sometimes you want to search for "user", other times for "consec".
In that case, you'd want a new table, tableType, with tableType.tableid pointing to the other table and tableType.Type holding the type value ("user", "consec", etc.), plus an index on both tableid and Type. Then you can drop the WHERE ... LIKE ... from your query and instead add JOIN tableType ON tableType.tableid = table.id AND tableType.Type = "consec".
It will be faster because...
It is not looking through all the text of several text fields.
It is looking through an ordered list of integers to identify the record you need.
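A minimal sketch of that shape, mapped onto the sessions table from the question (session_tags and its columns are hypothetical names):
CREATE TABLE session_tags (
  session_id INT NOT NULL,
  tag        VARCHAR(32) NOT NULL,
  PRIMARY KEY (session_id, tag),
  KEY idx_tag (tag)
);

SELECT s.*
FROM sessions s
JOIN session_tags t
  ON t.session_id = s.id
 AND t.tag = 'consec'
ORDER BY s.id DESC
LIMIT 10;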
Is it possible to make a query that changes the WHERE clause according to some condition? For instance, I want to select * from table1 where the date is 19/July/2016, but if the field id is null then do nothing, else compare id to something else. Like the query below?
Select * from table1 where date="2016-07-19" if(isnull(id),"",and id=(select * from ...))
Yes. This should be possible.
If we assume that date and id are references to columns in (the unfortunately named) table table1, and if I'm understanding what you are attempting to achieve, we could write a query like this:
SELECT t.id
, t.date
, t....
FROM table1 t
WHERE t.date='2016-07-19'
AND ( t.id IS NULL
OR t.id IN ( SELECT expr FROM ... )
)
It would also be possible to incorporate the MySQL IF() and IFNULL() functions, if there's some requirement to do that.
As far as dynamically changing the text of the SQL statement after the statement is submitted to the database, no, that's not possible. Any dynamic changes to the SQL text would need to be done when the SQL statement is generated, before it is submitted to the database.
My personal preference would be to use a join operation rather than an IN (subquery) predicate.
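A sketch of that join form, assuming the subquery pulls ids from some other table (other_table is a hypothetical name; add DISTINCT if its id is not unique):
SELECT t.id
     , t.date
FROM table1 t
LEFT JOIN other_table o
  ON o.id = t.id
WHERE t.date = '2016-07-19'
  AND ( t.id IS NULL OR o.id IS NOT NULL );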
I think you're trying too hard. If id is NULL, the comparison id = ... evaluates to NULL, which the WHERE clause treats as false. So:
Select * from table1 where date="2016-07-19" and id=(select * from ...)
should only match the records you want. If id is NULL you get nothing.
Say you have the following query:
SELECT * FROM table1 WHERE table1.id IN (1, 2, 3, 4, 5, ..., 999999)
What is a reasonable maximum for the number of items in the IN clause? I'm using Sphinx to generate full-text search results and inserting the IDs into a MySQL query. Is this an acceptable way to do it?
You can also have the IN clause take the results of a query, such as:
SELECT * FROM table1
WHERE table1.id IN
(
SELECT id from table2
)
That way, you don't need to generate a text string with all the possible values.
In MySQL, you should be able to put as many values in the IN clause as you want, constrained only by the value of max_allowed_packet.
http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_in
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_allowed_packet
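To see the current limit on your server:
SHOW VARIABLES LIKE 'max_allowed_packet';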
MariaDB (10.3.22 in my case) has a limit of 999 parameters to IN() before it creates a materialized temporary table, resulting in possibly much longer execution times. Depending on your indices. I haven't found a way to control this behaviour. MySQL 5.6.27 does not have this limit, at least not at ~1000 parameters. MySQL 5.7 might very well have the same "feature".
I ended up using a series of where id = a or id = b ..., but it also works fine using a series of where id in (a, b) or id in (c, d) ....
You have to use a Laravel raw query and add the NOT IN condition to it (note that whereRaw appends to the WHERE clause, so the string must not start with the keyword where):
$object->whereRaw('id NOT IN (' . $array_list . ')');
This worked for my code.
In my experience, the maximum is 1000 values in an IN ('1', ..., '1000') clause.
I had 1,300 values in my Excel sheet; I put them all into the IN clause, and MySQL returned only 1,000 rows.
I ran a query that resulted in the string '1,2,3,4'.
How can I run a second query that treats that string as a list of numbers. So I'll be able to do:
select * from tbl where name not in (1,2,3,4)
I would like an answer in pure MySQL.
Well first of all, this usually means that your database structure is not good; you should normalize your database.
However, you can do what you want, with the FIND_IN_SET function:
SELECT * FROM tbl WHERE NOT FIND_IN_SET(name, '1,2,3,4')
Use FIND_IN_SET:
select * from tbl where FIND_IN_SET(name, '1,2,3,4') = 0
Like the other answer, I would also recommend normalizing your database if at all possible. This query could be slow, as it will require a full scan of the table. Even if there is an index on name, this query won't be able to use it efficiently.
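For reference, FIND_IN_SET returns the 1-based position of the value within the comma-separated string, or 0 when it is absent, which is why both answers above filter the same rows:
SELECT FIND_IN_SET('3', '1,2,3,4'); -- returns 3
SELECT FIND_IN_SET('5', '1,2,3,4'); -- returns 0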