Why is my MySQL query so slow with two WHERE conditions?

I have a problem. Over time, my table has accumulated a lot of rows, and I want to filter them to get some aggregate information. This is my query:
SELECT COUNT(*) AS cases,
SUM(`item_price`) AS preturile
FROM cases
WHERE opened = 1 AND
trade_id = '1234'
It is very slow: it takes about 1.5 seconds. If I take out opened = 1, so that it looks like this:
SELECT COUNT(*) AS cases,
SUM(`item_price`) AS preturile
FROM cases
WHERE trade_id = '1234'
then the speed is fast and good! But I need that opened = 1 condition in there. Why is it so slow?
opened is an int(11) and has an index.
I don't know what I can do here; it's just so slow.

Be sure you have a proper index, e.g. a composite index on the columns (trade_id, opened):
create index myidx on cases ( trade_id, opened )
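To confirm the new index is actually chosen, you can check the plan. A minimal sketch, using the index name myidx from above (the output format varies by MySQL version):
EXPLAIN
SELECT COUNT(*) AS cases, SUM(`item_price`) AS preturile
FROM cases
WHERE opened = 1 AND trade_id = '1234';
-- the `key` column should show myidx; extending the index to
-- (trade_id, opened, item_price) would make it covering, so the
-- SUM could be answered from the index alone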

Related

SQL gets slow on a simple query with ORDER BY

I have a problem with MySQL ORDER BY: it slows down the query and I really don't know why. My query was a little more complex, so I simplified it to a light query with no joins, but it still runs really slow.
Query:
SELECT
W.`oid`
FROM
`z_web_dok` AS W
WHERE
W.`sent_eRacun` = 1 AND W.`status` IN(8, 9) AND W.`Drzava` = 'BiH'
ORDER BY W.`oid` ASC
LIMIT 0, 10
The table has 946,566 rows and takes about 500 MB in memory. The fields I am selecting are all indexed, as follows:
oid - INT PRIMARY KEY AUTO_INCREMENT
status - INT INDEXED
sent_eRacun - TINYINT INDEXED
Drzava - VARCHAR(3) INDEXED
I am posting screenshots: first the EXPLAIN of the query, then the query as executed against the database, and finally the speed after I remove the ORDER BY.
I have also tried sorting by a DATETIME field, which is also indexed, but I get the same slow query as when ordering by the primary key. This started today; it was always fast and light before.
What can cause something like this?
The kind of query you use here calls for a composite covering index. This one should handle your query very well:
CREATE INDEX someName ON z_web_dok (Drzava, sent_eRacun, status, oid);
Why does this work? You're looking for equality matches on the first three columns, and sorting on the fourth column. The query planner will use this index to satisfy the entire query. It can random-access the index to find the first row matching your query, then scan through the index in order to get the rows it needs.
Pro tip: indexes on single columns are generally harmful to performance unless they happen to match the requirements of particular queries in your application, or are used for primary or foreign keys. You generally choose your indexes to match your most active, or your slowest, queries.
Edit: You asked whether it's better to create specific indexes for each query in your application. The answer is yes.
There may be an even faster way. (Or it may not be any faster.)
The IN(8, 9) gets in the way of handling the WHERE .. ORDER BY .. LIMIT completely efficiently. A possible solution is to treat it as OR, convert that to UNION, and play some tricks with the LIMIT, especially if you are also using OFFSET.
( SELECT oid FROM z_web_dok
  WHERE sent_eRacun = 1 AND status = 8 AND Drzava = 'BiH'
  ORDER BY oid ASC LIMIT 10 )
UNION ALL
( SELECT oid FROM z_web_dok
  WHERE sent_eRacun = 1 AND status = 9 AND Drzava = 'BiH'
  ORDER BY oid ASC LIMIT 10 )
ORDER BY oid LIMIT 10
This will allow the covering index described by OJones to be fully used in each of the subqueries. Furthermore, each will provide up to 10 rows without any temp table or filesort. Then the outer part will sort up to 20 rows and deliver the 'correct' 10.
For OFFSET, see http://mysql.rjweb.org/doc.php/index_cookbook_mysql#or

How to optimize the following SELECT query

We have the following table
id # primary key
device_id_fk
auth # there's an index on it
old_auth # there's an index on it
And the following query.
$select_user = $this->db->prepare("
SELECT device_id_fk
FROM wtb_device_auths AS dv
WHERE (dv.auth= :auth OR dv.old_auth= :auth)
LIMIT 1
");
Here is the EXPLAIN (I can't reach the main client's server, but this is from another client with less data):
Since there are a lot of other UPDATE queries on auth, those UPDATE queries start getting written to the slow query log and the CPU spikes.
If you remove the index from auth, then the SELECT query gets written to the slow query log instead, but not the UPDATE. Adding an index on device_id_fk makes no difference.
I tried rewriting the query using UNION instead of OR, but I was told there was still a CPU spike and the SELECT query still gets written to the slow query log:
$select_user = $this->db->prepare("
(SELECT device_id_fk
FROM wtb_device_auths AS dv
WHERE dv.auth= :auth)
UNION ALL
(SELECT device_id_fk
FROM wtb_device_auths AS dv
WHERE dv.old_auth= :auth)
LIMIT 1
");
Explain
Most often, this is the only query in the slow query log. Is there a more optimal way to write the query? Is there a better way to add indexes? The client is using an old MariaDB version, the equivalent of MySQL 5.5, on a CentOS 6 server running LAMP.
Additional info
The UPDATE query that gets logged to the slow query log whenever an index is added to auth is:
$update_device_auth = $this->db->prepare("UPDATE wtb_device_auths SET auth= :auth WHERE device_id_fk= :device_id_fk");
Your few indexes should not be slowing down your updates.
You need two indexes to make both your update and select perform well. My best guess is you never had both at the same time.
UPDATE wtb_device_auths SET auth= :auth WHERE device_id_fk= :device_id_fk
You need an index on device_id_fk for this update to perform well. And regardless of its index, it should be declared a foreign key.
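A minimal sketch of that DDL (the index name, and a wtb_devices(id) parent table, are assumptions):
-- speeds up: UPDATE wtb_device_auths SET auth = ... WHERE device_id_fk = ...
CREATE INDEX idx_device_id_fk ON wtb_device_auths (device_id_fk);
-- assuming the parent table is wtb_devices(id):
ALTER TABLE wtb_device_auths
  ADD CONSTRAINT fk_device FOREIGN KEY (device_id_fk) REFERENCES wtb_devices (id);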
SELECT device_id_fk
FROM wtb_device_auths AS dv
WHERE (dv.auth= :auth OR dv.old_auth= :auth)
LIMIT 1
You need a single combined index on (auth, old_auth) for this query to perform well.
Separate auth and old_auth indexes should also work well, assuming there aren't too many duplicates: MySQL will merge the results from the two indexes, and that merge should be fast... unless a lot of rows match.
If you also search on old_auth alone, add an index on old_auth.
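For example (the index names are placeholders):
-- one combined index serving the OR lookup:
CREATE INDEX idx_auth_old_auth ON wtb_device_auths (auth, old_auth);
-- or two separate single-column indexes, which MySQL can index-merge:
CREATE INDEX idx_auth ON wtb_device_auths (auth);
CREATE INDEX idx_old_auth ON wtb_device_auths (old_auth);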
And, as others have pointed out, the SELECT could return any one of several devices matching auth or old_auth. This is probably bad. If auth and old_auth are intended to identify a device, add a unique constraint.
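A sketch of those constraints, assuming each auth value should map to exactly one row:
ALTER TABLE wtb_device_auths ADD UNIQUE (auth);
ALTER TABLE wtb_device_auths ADD UNIQUE (old_auth);
-- note: MySQL still allows multiple NULLs in a UNIQUE column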
Alternatively, you need to restructure your data. Multiple columns holding the same value is a red flag. It can result in a proliferation of indexes, as you're experiencing, and also limit how many versions you can store. Instead, have just one auth per row and allow each device to have multiple rows.
create table wtb_device_auths (
id serial primary key,
device_id bigint not null references wtb_devices(id),
auth varchar(255) not null, -- varchar, not text: MySQL can't index text without a prefix length
created_at datetime not null default current_timestamp,
index(auth)
);
Now you only need to search one column.
select device_id from wtb_device_auths where auth = ?
Now one device can have many wtb_device_auths rows. If you want the current auth for a device, search for the newest one.
select device_id
from wtb_device_auths
where device_id = ?
order by created_at desc
limit 1
Since each device will only have a few auths, this is likely to be plenty fast with the device_id index alone; sorting the handful of rows for a device will be fast.
If not, you might need an additional combined index on (device_id, created_at): it covers the equality lookup on device_id and returns that device's rows already sorted by created_at, so the LIMIT 1 can stop immediately.
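A sketch of that index (the name is a placeholder):
-- seeks to one device, then reads its rows already ordered by created_at,
-- so the ORDER BY ... LIMIT 1 stops after a single index entry
CREATE INDEX idx_device_created ON wtb_device_auths (device_id, created_at);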
OR usually leads to a slow, full-table scan. This UNION trick, together with appropriate indexes, is much faster:
( SELECT device_id_fk
FROM wtb_device_auths AS dv
WHERE dv.auth= :auth
LIMIT 1 )
UNION ALL
( SELECT device_id_fk
FROM wtb_device_auths AS dv
WHERE dv.old_auth= :auth
LIMIT 1 )
LIMIT 1
And have these "composite" indexes:
INDEX(auth, device_id_fk)
INDEX(old_auth, device_id_fk)
These indexes can replace the existing indexes with the same first column.
Notice that I have 3 LIMITs; you had only 1.
That UNION ALL involves a temp table. You should upgrade to 5.7 (at least); that version optimizes away the temp table.
A LIMIT without an ORDER BY gives a random row; is that OK?
Please provide the entire text of the slowlog entry for this one query -- it has info that might be useful. If "Rows_examined" is more than 2 (or maybe 3), then something strange is going on.
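If you are unsure whether the slow log is enabled or where it writes, these server variables exist in both MySQL and MariaDB:
SHOW VARIABLES LIKE 'slow_query_log%'; -- on/off and the log file path
SHOW VARIABLES LIKE 'long_query_time'; -- threshold in seconds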

Count query works too slow in Couchbase

I am using couchbase:community-6.0.0 in my Spring application. I have about 250,000 records in the database. My query runs very fast as long as it does not use COUNT:
SELECT app.*, META(app).id AS id FROM app WHERE ( deleted = FALSE OR
deleted IS MISSING ) AND _class =
"com.myexample.app.device.data.model.DeviceEntity" AND appId =
"something" AND dp.language = "somelanguage" LIMIT 100 OFFSET 0
This query works very well and fast: response time is under 50 ms.
However
SELECT COUNT(*) AS count FROM app WHERE ( deleted = FALSE OR deleted
IS MISSING ) AND _class =
"com.myexample.app.device.data.model.DeviceEntity" AND appId =
"something"
It takes 1 minute, and I cannot get it any lower.
Indexes
primary
CREATE INDEX class_appId_idx ON app(_class,appId)
CREATE INDEX ix1 ON app(_class,appId,ifmissing(deleted, false))
What is the solution to this? Does an index not work with COUNT? Any advice on how I can achieve this?
Note: I tried with the EE edition; it did not work either.
The system isn't able to match the index to the query. Sometimes the optimizer isn't all that bright. Try this:
create index ix_test on app(_class, appId) WHERE deleted = FALSE OR deleted IS MISSING
That will use the index.
Generally speaking, because of how we build the indexes, we have trouble with IS MISSING clauses, but putting that condition in the WHERE clause of the index makes it work. This is a very specialized index, though. Consider changing your data so the "deleted" field is always present.
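A hedged sketch of that backfill in N1QL (this assumes a missing value should be treated as not deleted):
-- make `deleted` explicit on every document so the query can drop the
-- IS MISSING branch and use a plain index on (_class, appId, deleted)
UPDATE app SET deleted = FALSE WHERE deleted IS MISSING;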
It works in milliseconds using enterprise-6.0.0.

How to optimize MIN + ORDER + LIMIT

I am trying to implement backward pagination in my app. The data comes from a large NoSQL database, and if I paginate in the trivial way, the further the page I jump to, the longer it takes to get there. To improve performance I plan to use a MySQL table that stores just the indices. What I want from MySQL is to find the starting index of a page as fast as possible. On a table with 3 million rows, this approach takes almost 3 seconds to get an index:
SELECT MIN(id) FROM index_77635_ ORDER BY id DESC LIMIT $large_skip_number
As you can see, I am trying to find the row with the smallest index so I can jump to the rows that were added earlier. There is probably a better way to implement this task.
EDIT
The correct query, which works quite well (relatively fast, or at least faster than pure Mongo), turned out to be this one:
SELECT a.id FROM index_77635_ a
INNER JOIN (
SELECT MAX(id) AS id FROM (
SELECT id FROM index_77635_ ORDER BY id DESC LIMIT $skip,$limit
) t
) b ON a.id = b.id
In this case I find the starting index (that is, the maximum id for backward pagination), and then in Mongo I query the chunk of data up to that index.
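For what it's worth, if the client can remember the last id it has seen, a keyset (seek) predicate avoids stepping over the skipped rows entirely. A sketch, assuming id is the auto-increment primary key (:last_seen_id and :page_size are placeholders):
-- next (older) page: seek directly to it instead of skipping $skip rows
SELECT id
FROM index_77635_
WHERE id < :last_seen_id
ORDER BY id DESC
LIMIT :page_size;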

Very slow query, any other ways to format this with better performance?

I have this query (I didn't write it) that was working fine for a client until the table got more than a few thousand rows in it; now it takes 40+ seconds on only 4,200 rows.
Any suggestions on how to optimize it and get the same result?
I've tried a few other methods, but they didn't give the correct result that this slower query returns...
SELECT COUNT(*) AS num
FROM `fl_events`
WHERE id IN(
SELECT DISTINCT (e2.id)
FROM `fl_events` AS e1, fl_events AS e2
WHERE e1.startdate >= now() AND e1.startdate = e2.startdate
)
ORDER BY `startdate`
Any help would be greatly appreciated!
Apart from the obvious indexes needed, I don't really see why you are joining the table with itself to build the IN condition. The ORDER BY is also not needed. Are you sure your query can't be written just like this?
SELECT COUNT(*) AS num
FROM `fl_events` AS e1
WHERE e1.startdate >= now()
I don't think rewriting the query will help. The key to your question is "until the table got more than a few thousand rows." This implies that important columns aren't indexed. Below a certain number of records the data all fits in memory; past that point lookups spill to disk, and an index is the only way to speed up the search.
First, check that id in fl_events is actually marked as a primary key. InnoDB physically orders records by the primary key, and without one you can occasionally see super-slow results. The use of DISTINCT in the query suggests id might NOT be unique, which would pose a problem.
Then, make sure to add an index on startdate.
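For example (the index name is a placeholder):
CREATE INDEX idx_startdate ON fl_events (startdate);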
The slowness is probably related to the join of the events table with itself, and possibly to startdate not having an index.