MySQL Query Needs Optimization?

I have a couple of queries that serve their purposes, but since I'm not an expert and just dabble here and there, I'm wondering if they can be optimized.
I'm asking because as more records are added, the time the queries take to complete also seems to increase.
SELECT u.`UserId`, u.`GroupId`, u.`MemberType`, g.`GroupLevel`, g.`U`, g.`D`,
(SELECT COUNT(u2.`UserId`) FROM `users` u2
INNER JOIN `groups` g2 ON g2.`Active` = 1 AND u2.`UserId` = g2.`UserId`
WHERE u2.`Level` = '$L_MEMBER'
AND u2.`MemberType` = '$M_Type'
AND u2.`CounUserId` = u.`UserId`
AND u2.`Active` = 1
AND g2.`U` > g.`U`
AND g2.`D` < g.`D`) as UsersGroup
FROM `users` u
INNER JOIN `groups` g ON g.`UserId` = u.`UserId` AND g.`Active`
WHERE u.`Level` = '$L_MEMBER' AND u.`DateCreated` < '$subdate' AND u.`Active` = 1
ORDER BY u.`UserId`
SELECT g.`UserId` FROM `groups` g
WHERE g.`U` BETWEEN '$U' AND '$D'
AND g.`UserId` !=0
AND g.`UserId` NOT IN (SELECT da.`TaggedUserId` as UserId FROM `dateawarded` da WHERE da.`UserId` = '$userid' AND `DateTagged` != '$datetagged')
AND g.`UserId` NOT IN (SELECT u.`UserId` FROM `users` u WHERE u.`Membership` <= '1')
AND g.`UserId` NOT IN (SELECT d.`DemotedUserId` FROM `demoted` d WHERE d.`UserId` = '$userid' AND d.`DateDemoted` < '$datetagged 00:00:00')
AND g.`DateModified` < '$thedate'
EXPLAIN Results (columns: id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra):
Query 1:
1 PRIMARY g ALL NULL NULL NULL NULL 18747 Using where; Using temporary; Using filesort
1 PRIMARY u eq_ref PRIMARY PRIMARY 4 user_db.g.UserId 1 Using where
2 DEPENDENT SUBQUERY g2 ALL NULL NULL NULL NULL 18747 Using where
2 DEPENDENT SUBQUERY u2 eq_ref PRIMARY PRIMARY 4 user_db.g2.UserId 1 Using where
Query 2:
1 PRIMARY g ALL NULL NULL NULL NULL 18747 Using where
4 SUBQUERY d ALL NULL NULL NULL NULL 6895 Using where
3 SUBQUERY u ALL PRIMARY NULL NULL NULL 9354 Using where
2 SUBQUERY da ALL NULL NULL NULL NULL 39260 Using where
Any help would be appreciated.
Thanks!

To optimize the first query, add these indexes:
ALTER TABLE `groups` ADD INDEX `groups_idx_userid_grouple_u_d` (`UserId`, `GroupLevel`, `U`, `D`);
ALTER TABLE `groups` ADD INDEX `groups_idx_active_userid_u` (`Active`, `UserId`, `U`);
ALTER TABLE `users` ADD INDEX `users_idx_level_activ_useri_datec_group_membe` ( `Level`, `Active`, `UserId`, `DateCreated`, `GroupId`, `MemberType` );
ALTER TABLE `users` ADD INDEX `users_idx_level_member_active_userid_counus` ( `Level`, `MemberType`, `Active`, `UserId`, `CounUserId` );
To optimize the second query, add these indexes:
ALTER TABLE `dateawarded` ADD INDEX `dateawarded_idx_userid_datetagged_taggeduser` (`UserId`, `DateTagged`, `TaggedUserId`);
ALTER TABLE `demoted` ADD INDEX `demoted_idx_userid_datedemote_demoteduse` (`UserId`, `DateDemoted`, `DemotedUserId`);
ALTER TABLE `groups` ADD INDEX `groups_idx_u_userid` (`U`, `UserId`);
ALTER TABLE `users` ADD INDEX `users_idx_membership_userid` (`Membership`, `UserId`);
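After adding them, re-running EXPLAIN on both queries should show the new indexes in the key column instead of the full scans (type ALL) above. A quick sanity check against a simplified form of the second query (the values are placeholders):
EXPLAIN SELECT g.`UserId` FROM `groups` g
WHERE g.`U` BETWEEN 100 AND 200
AND g.`UserId` != 0;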

Did you profile these queries to see how long they take to run?
Then add more (dummy test) data, see how much longer they take, and decide if that is acceptable.
See https://dev.mysql.com/doc/refman/5.5/en/show-profile.html for details.
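To collect a profile, enable it for the session, run the query, then ask for the breakdown. A minimal sketch (SHOW PROFILE exists in MySQL 5.x; the SELECT is a placeholder for your own query):
SET profiling = 1;
SELECT COUNT(*) FROM `users` WHERE `Active` = 1; -- placeholder: run your slow query here
SHOW PROFILES; -- lists recent statements with their total durations
SHOW PROFILE FOR QUERY 1; -- stage-by-stage timing for statement #1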
You will get output something like this
mysql> SHOW PROFILE;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| checking permissions | 0.000040 |
| creating table | 0.000056 |
| After create | 0.011363 |
| query end | 0.000375 |
| freeing items | 0.000089 |
| logging slow query | 0.000019 |
| cleaning up | 0.000005 |
+----------------------+----------+
7 rows in set (0.00 sec)
Only you can decide what is an acceptable time for the query to run.
[Update] I only posted this as an answer because it was too large for a comment. @Raymond is correct to say that EXPLAIN [query] will be very helpful.

Related

MYSQL delete - Table 'USER_TABLE' is specified twice, both as a target for 'DELETE' and as a separate source for data

I am new to large MySQL queries and am trying to find a solution to my problem:
I'm looking to delete duplicate values based on the "ID_OBJECT" column in my USER_TABLE.
Here is my USER_TABLE description:
`USER_TABLE` (
`ID` varchar(256) NOT NULL,
`ID_OBJECT` varchar(256) DEFAULT NULL,
`INSERTION_TIME` date DEFAULT NULL,
KEY `USER_TABLE_inx01` (`ID`(255)),
KEY `user_inx02` (`ID_OBJECT`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
I tried the following query to remove the duplicate ID_OBJECTs,
delete from USER_TABLE where id in (
select ID from USER_TABLE,
(select ID_OBJECT, min(INSERTION_TIME) as fecha from USER_TABLE group by ID_OBJECT having count(ID_OBJECT)>1) tbpr
where USER_TABLE.ID_OBJECT = tbpr.ID_OBJECT and USER_TABLE.INSERTION_TIME=tbpr.fecha);
But it says,
SQL Error (1093): Table 'USER_TABLE' is specified twice, both as a target for 'DELETE' and as a separate source for data
Can anyone assist me with this?
This will do it. I haven't attempted to check whether your actual business logic for removing duplicates is correct, since your stated requirement isn't 100% clear anyway, but this is one way you can overcome the error message:
CREATE TEMPORARY TABLE IF NOT EXISTS duplicates AS (
SELECT UT.id
FROM `USER_TABLE` AS UT
INNER JOIN
(SELECT
ID_OBJECT,
MIN(INSERTION_TIME) AS fecha
FROM `USER_TABLE`
GROUP BY ID_OBJECT
HAVING COUNT(ID_OBJECT)>1) AS tbpr
ON
UT.ID_OBJECT = tbpr.ID_OBJECT AND UT.INSERTION_TIME = tbpr.fecha
);
DELETE FROM `USER_TABLE`
WHERE id IN (SELECT id FROM duplicates);
DROP TABLE IF EXISTS duplicates;
You can see a working demo here: https://www.db-fiddle.com/f/amnAPUftLD1SmW67fjVSEv/0
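As an aside, MySQL's multi-table DELETE syntax also avoids error 1093 without a temporary table. A hedged sketch, not a drop-in replacement for the logic above: it keeps only the newest INSERTION_TIME per ID_OBJECT, and rows tied on that timestamp all survive.
DELETE t1
FROM `USER_TABLE` t1
JOIN `USER_TABLE` t2
ON t1.ID_OBJECT = t2.ID_OBJECT
AND t1.INSERTION_TIME < t2.INSERTION_TIME; -- t1 is older than some other row for the same ID_OBJECT, so it goes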
You could change your query slightly
delete from USER_TABLE
where concat(id_object,insertion_time) in
(
select concat(ID_object,fecha) from
(
select ID_OBJECT, min(INSERTION_TIME) as fecha
from USER_TABLE
group by ID_OBJECT
having count(ID_OBJECT)>1
) tbpr
)
But this would not cope with triplicates, quadruplicates, etc., so maybe you need to reverse the logic and keep only the max where there are multiples:
delete from USER_TABLE
where concat(id_object,insertion_time) not in
(
select concat(ID_object,fecha) from
(
select ID_OBJECT, max(INSERTION_TIME) as fecha
from USER_TABLE
group by ID_OBJECT
having count(ID_OBJECT)>1
) tbpr
)
and
id_object not in
(
select ID_object from
(
select ID_OBJECT, count(*) as fecha
from USER_TABLE
group by ID_OBJECT
having count(ID_OBJECT) = 1
) tbpr2
)
;
create table `USER_TABLE` (
`ID` varchar(256) NOT NULL,
`ID_OBJECT` varchar(256) DEFAULT NULL,
`INSERTION_TIME` date DEFAULT NULL,
KEY `USER_TABLE_inx01` (`ID`(255)),
KEY `user_inx02` (`ID_OBJECT`(255))
) ;
truncate table user_table;
insert into user_table values
(1,1,'2017-01-01'),(2,1,'2017-01-02'),(3,1,'2017-01-03'),
(4,2,'2017-01-01');
Result of first query
MariaDB [sandbox]> select * from user_table;
+----+-----------+----------------+
| ID | ID_OBJECT | INSERTION_TIME |
+----+-----------+----------------+
| 2 | 1 | 2017-01-02 |
| 3 | 1 | 2017-01-03 |
| 4 | 2 | 2017-01-01 |
+----+-----------+----------------+
3 rows in set (0.00 sec)
Result of second query
MariaDB [sandbox]> select * from user_table;
+----+-----------+----------------+
| ID | ID_OBJECT | INSERTION_TIME |
+----+-----------+----------------+
| 3 | 1 | 2017-01-03 |
| 4 | 2 | 2017-01-01 |
+----+-----------+----------------+
2 rows in set (0.00 sec)
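Whichever variant you run, it can be worth previewing the rows a DELETE would remove first by swapping DELETE FROM for SELECT * FROM with the same predicate, e.g. for the first variant:
SELECT * FROM USER_TABLE
WHERE concat(id_object, insertion_time) IN
(
select concat(ID_object, fecha) from
(
select ID_OBJECT, min(INSERTION_TIME) as fecha
from USER_TABLE
group by ID_OBJECT
having count(ID_OBJECT) > 1
) tbpr
);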

MySQL subquery much faster than join

I have the following queries which both return the same result and row count:
select * from (
select UNIX_TIMESTAMP(network_time) * 1000 as epoch_network_datetime,
hbrl.business_rule_id,
display_advertiser_id,
hbrl.campaign_id,
truncate(sum(coalesce(hbrl.ad_spend_network, 0))/100000.0, 2) as demand_ad_spend_network,
sum(coalesce(hbrl.ad_view, 0)) as demand_ad_view,
sum(coalesce(hbrl.ad_click, 0)) as demand_ad_click,
truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else 100*sum(hbrl.ad_click)/sum(hbrl.ad_view) end, 0), 2) as ctr_percent,
truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else sum(hbrl.ad_spend_network)/100.0/sum(hbrl.ad_view) end, 0), 2) as ecpm,
truncate(coalesce(case when sum(hbrl.ad_click) = 0 then 0 else sum(hbrl.ad_spend_network)/100000.0/sum(hbrl.ad_click) end, 0), 2) as ecpc
from hourly_business_rule_level hbrl
where (publisher_network_id = 31534)
and network_time between str_to_date('2017-08-13 17:00:00.000000', '%Y-%m-%d %H:%i:%S.%f') and str_to_date('2017-08-14 16:59:59.999000', '%Y-%m-%d %H:%i:%S.%f')
and (network_time IS NOT NULL and display_advertiser_id > 0)
group by network_time, hbrl.campaign_id, hbrl.business_rule_id
having demand_ad_spend_network > 0
OR demand_ad_view > 0
OR demand_ad_click > 0
OR ctr_percent > 0
OR ecpm > 0
OR ecpc > 0
order by epoch_network_datetime) as atb
left join dim_demand demand on atb.display_advertiser_id = demand.advertiser_dsp_id
and atb.campaign_id = demand.campaign_id
and atb.business_rule_id = demand.business_rule_id
I ran EXPLAIN EXTENDED, and these are the results:
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 1451739 | 100.00 | NULL |
| 1 | PRIMARY | demand | ref | PRIMARY,join_index | PRIMARY | 4 | atb.campaign_id | 1 | 100.00 | Using where |
| 2 | DERIVED | hourly_business_rule_level | ALL | _hourly_business_rule_level_supply_idx,_hourly_business_rule_level_demand_idx | NULL | NULL | NULL | 1494447 | 97.14 | Using where; Using temporary; Using filesort |
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+
and the other is:
select UNIX_TIMESTAMP(network_time) * 1000 as epoch_network_datetime,
hbrl.business_rule_id,
display_advertiser_id,
hbrl.campaign_id,
truncate(sum(coalesce(hbrl.ad_spend_network, 0))/100000.0, 2) as demand_ad_spend_network,
sum(coalesce(hbrl.ad_view, 0)) as demand_ad_view,
sum(coalesce(hbrl.ad_click, 0)) as demand_ad_click,
truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else 100*sum(hbrl.ad_click)/sum(hbrl.ad_view) end, 0), 2) as ctr_percent,
truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else sum(hbrl.ad_spend_network)/100.0/sum(hbrl.ad_view) end, 0), 2) as ecpm,
truncate(coalesce(case when sum(hbrl.ad_click) = 0 then 0 else sum(hbrl.ad_spend_network)/100000.0/sum(hbrl.ad_click) end, 0), 2) as ecpc
from hourly_business_rule_level hbrl
join dim_demand demand on hbrl.display_advertiser_id = demand.advertiser_dsp_id
and hbrl.campaign_id = demand.campaign_id
and hbrl.business_rule_id = demand.business_rule_id
where (publisher_network_id = 31534)
and network_time between str_to_date('2017-08-13 17:00:00.000000', '%Y-%m-%d %H:%i:%S.%f') and str_to_date('2017-08-14 16:59:59.999000', '%Y-%m-%d %H:%i:%S.%f')
and (network_time IS NOT NULL and display_advertiser_id > 0)
group by network_time, hbrl.campaign_id, hbrl.business_rule_id
having demand_ad_spend_network > 0
OR demand_ad_view > 0
OR demand_ad_click > 0
OR ctr_percent > 0
OR ecpm > 0
OR ecpc > 0
order by epoch_network_datetime;
and these are the results for the second query:
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+
| 1 | SIMPLE | hourly_business_rule_level | ALL | _hourly_business_rule_level_supply_idx,_hourly_business_rule_level_demand_idx | NULL | NULL | NULL | 1494447 | 97.14 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | demand | ref | PRIMARY,join_index | PRIMARY | 4 | my6sense_datawarehouse.hourly_business_rule_level.campaign_id | 1 | 100.00 | Using where; Using index |
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+
The first one takes about 2 seconds while the second one takes over 2 minutes!
Why is the second query taking so long?
What am I missing here?
Thanks.
Use a subquery whenever the subquery significantly shrinks the number of rows before any JOIN; this reinforces Rick James's Plan B below.
The answers by Rick and Paul deserve acceptance; this only reinforces what they have already documented.
One possible reason is the number of rows that have to be joined with the second table.
The GROUP BY clause and the HAVING clause will limit the number of rows returned from your subquery.
Only those rows will be used for the join.
Without the subquery only the WHERE clause is limiting the number of rows for the JOIN.
The JOIN is done before the GROUP BY and HAVING clauses are processed.
Depending on group size and the selectivity of the HAVING conditions there would be much more rows that need to be joined.
Consider the following simplified example:
We have a table users with 1000 entries and the columns id, email.
create table users(
id smallint auto_increment primary key,
email varchar(50) unique
);
Then we have a (huge) log table user_actions with 1,000,000 entries and the columns id, user_id, timestamp, action
create table user_actions(
id mediumint auto_increment primary key,
user_id smallint not null,
timestamp timestamp,
action varchar(50),
index (timestamp, user_id)
);
The task is to find all users who have at least 900 entries in the log table since 2017-02-01.
The subquery solution:
select a.user_id, a.cnt, u.email
from (
select a.user_id, count(*) as cnt
from user_actions a
where a.timestamp >= '2017-02-01 00:00:00'
group by a.user_id
having cnt >= 900
) a
left join users u on u.id = a.user_id
The subquery returns 135 rows (users). Only those rows will be joined with the users table.
The subquery runs in about 0.375 seconds. The time needed for the join is almost zero, so the full query runs in about 0.375 seconds.
Solution without subquery:
select a.user_id, count(*) as cnt, u.email
from user_actions a
left join users u on u.id = a.user_id
where a.timestamp >= '2017-02-01 00:00:00'
group by a.user_id
having cnt >= 900
The WHERE condition filters the table to 866,081 rows.
The JOIN has to be done for all those 866K rows.
After the JOIN the GROUP BY and the HAVING clauses are processed and limit the result to 135 rows.
This query needs about 0.815 seconds.
So you can already see that a subquery can improve performance.
But let's make things worse and drop the primary key in the users table.
This way we have no index which can be used for the JOIN.
Now the first query runs in 0.455 seconds. The second query needs 40 seconds - almost 100 times slower.
Notes
It's difficult to say if the same applies to your case. Reasons are:
Your queries are quite complex and far from being an MCVE.
I don't see anything being selected from the demand table, so it's unclear why you are joining it at all.
You use a LEFT JOIN in one query and an INNER JOIN in another one.
The relation between the two tables is unclear.
No information about indexes. You should provide the CREATE statements (SHOW CREATE TABLE table_name).
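In the mysql client, for example:
SHOW CREATE TABLE hourly_business_rule_level\G
SHOW CREATE TABLE dim_demand\G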
Test setup
drop table if exists users;
create table users(
id smallint auto_increment primary key,
email varchar(50) unique
)
select seq as id, rand(1) as email
from seq_1_to_1000
;
drop table if exists user_actions;
create table user_actions(
id mediumint auto_increment primary key,
user_id smallint not null,
timestamp timestamp,
action varchar(50),
index (timestamp, user_id)
)
select seq as id
, floor(rand(2)*1000)+1 as user_id
#, '2017-01-01 00:00:00' + interval seq*20 second as timestamp
, from_unixtime(unix_timestamp('2017-01-01 00:00:00') + seq*20) as timestamp
, rand(3) as action
from seq_1_to_1000000
;
MariaDB 10.0.19 with sequence plugin.
The queries are different. One says JOIN, the other says LEFT JOIN. You are not using demand, so the join is probably useless. However, in the case of JOIN, you are filtering out advertisers that are not in dim_demand; is that the intent?
But that does not address the question.
The EXPLAINs estimate that there are 1.5M rows in hbrl. But how many show up in the result? I would guess it is a lot fewer. From this, I can answer your question.
Consider these two:
SELECT ... FROM ( SELECT ... FROM a
GROUP BY or HAVING or LIMIT ) x
JOIN b
SELECT ... FROM a
JOIN b
GROUP BY or HAVING or LIMIT
The first will decrease the number of rows that need to join to b; the second will need to do a full 1.5M joins. I suspect that the time taken to do the JOIN (be it LEFT or not) is where the difference is.
Plan A: Remove demand from the query.
Plan B: Use a subquery whenever the subquery significantly shrinks the number of rows before the JOIN.
Indexing (may speed up both variants):
INDEX(publisher_network_id, network_time)
and get rid of this as being useless (since the between will fail anyway for NULL):
and network_time IS NOT NULL
Side note: I recommend simplifying and fixing this
and network_time
between str_to_date('2017-08-13 17:00:00.000000', '%Y-%m-%d %H:%i:%S.%f')
AND str_to_date('2017-08-14 16:59:59.999000', '%Y-%m-%d %H:%i:%S.%f')
to
and network_time >= '2017-08-13 17:00:00'
and network_time < '2017-08-13 17:00:00' + INTERVAL 24 HOUR
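For reference, the composite index suggested above could be created like this (the index name is illustrative):
ALTER TABLE hourly_business_rule_level
ADD INDEX idx_pubnet_time (publisher_network_id, network_time);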

Need help tuning SQL query

My MySQL DB has become CPU hungry trying to execute a particularly slow query. When I do an EXPLAIN, MySQL says "Using where; Using temporary; Using filesort". Please help me decipher and solve this puzzle.
Table structure:
CREATE TABLE `topsources` (
`USER_ID` varchar(255) NOT NULL,
`UPDATED_TIME` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`URL_ID` int(11) NOT NULL,
`SOURCE_SLUG` varchar(100) NOT NULL,
`FEED_PAGE_URL` varchar(255) NOT NULL,
`CATEGORY_SLUG` varchar(100) NOT NULL,
`REFERRER` varchar(2048) DEFAULT NULL,
PRIMARY KEY (`USER_ID`,`URL_ID`),
KEY `USER_ID` (`USER_ID`),
KEY `FEED_PAGE_URL` (`FEED_PAGE_URL`),
KEY `SOURCE_SLUG` (`SOURCE_SLUG`),
KEY `CATEGORY_SLUG` (`CATEGORY_SLUG`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
The table has 370K rows...sometimes higher. The below query takes 10+ seconds.
SELECT topsources.SOURCE_SLUG, COUNT(topsources.SOURCE_SLUG) AS VIEW_COUNT
FROM topsources
WHERE CATEGORY_SLUG = '/newssource'
GROUP BY topsources.SOURCE_SLUG
HAVING MAX(CASE WHEN topsources.USER_ID = 'xxxx' THEN 1 ELSE 0 END) = 0
ORDER BY VIEW_COUNT DESC;
Here's the extended explain:
+----+-------------+------------+------+---------------+---------------+---------+-------+--------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------+---------------+---------------+---------+-------+--------+----------+----------------------------------------------+
| 1 | SIMPLE | topsources | ref | CATEGORY_SLUG | CATEGORY_SLUG | 302 | const | 160790 | 100.00 | Using where; Using temporary; Using filesort |
+----+-------------+------------+------+---------------+---------------+---------+-------+--------+----------+----------------------------------------------+
Is there a way to improve this query? Also, are there any MySQL settings that can help reduce CPU load? I can allocate more of the memory available on my server.
The most likely thing to help the query is an index starting with CATEGORY_SLUG (your table already has one, and the EXPLAIN shows it being used), especially if that column takes on many values. (That is, if the query is highly selective.) Otherwise the query needs to read a large share of the table to get the results -- although 10 seconds seems like a long time.
I don't think the HAVING clause would be affecting the query processing.
Does the query take just as long if you run it two times in a row?
If there are many rows that match your CATEGORY_SLUG criteria, it may be difficult to make this fast, but is this any quicker?
SELECT ts.SOURCE_SLUG, COUNT(ts.SOURCE_SLUG) AS VIEW_COUNT
FROM topsources ts
WHERE ts.CATEGORY_SLUG = '/newssource'
AND NOT EXISTS(SELECT 1 FROM topsources ts2
WHERE ts2.CATEGORY_SLUG = '/newssource'
AND ts.SOURCE_SLUG = TS2.SOURCE_SLUG
AND ts2.USER_ID = 'xxxx')
GROUP BY ts.SOURCE_SLUG
ORDER BY VIEW_COUNT DESC;
That should do the trick, if I read my SQL alteration correctly:
SELECT topsources.SOURCE_SLUG, COUNT(topsources.SOURCE_SLUG) AS VIEW_COUNT
FROM topsources
WHERE CATEGORY_SLUG = '/newssource' and
topsources.SOURCE_SLUG not in (
select distinct SOURCE_SLUG
from topsources
where USER_ID = 'xxxx'
)
GROUP BY topsources.SOURCE_SLUG
ORDER BY VIEW_COUNT DESC;
Always hard to optimise something when you can't just throw queries at the data yourself, but this would be my first attempt:
SELECT t.SOURCE_SLUG, COUNT(t.SOURCE_SLUG) AS VIEW_COUNT
FROM topsources t
LEFT JOIN (
SELECT SOURCE_SLUG
FROM topsources ts
WHERE ts.CATEGORY_SLUG = '/newssource'
AND ts.USER_ID = 'xxxx'
GROUP BY ts.SOURCE_SLUG
) x USING (SOURCE_SLUG)
WHERE t.CATEGORY_SLUG = '/newssource'
AND x.SOURCE_SLUG IS NULL
GROUP BY t.SOURCE_SLUG
ORDER BY VIEW_COUNT DESC;
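Whichever rewrite you try, a composite index might also help: with (CATEGORY_SLUG, SOURCE_SLUG), MySQL can apply the WHERE filter and read rows in SOURCE_SLUG order for the GROUP BY, avoiding the temporary table (the final ORDER BY VIEW_COUNT still needs a sort). A sketch; the index name is illustrative, and USER_ID is left out to stay under the utf8/MyISAM key-length limit:
ALTER TABLE topsources
ADD INDEX idx_cat_source (CATEGORY_SLUG, SOURCE_SLUG);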

MySQL simple select query is slow

I have a large MySQL table with about 110,000,000 rows.
The table design is:
CREATE TABLE IF NOT EXISTS `tracksim` (
`tracksimID` int(11) NOT NULL AUTO_INCREMENT,
`trackID1` int(11) NOT NULL,
`trackID2` int(11) NOT NULL,
`sim` double NOT NULL,
PRIMARY KEY (`tracksimID`),
UNIQUE KEY `TrackID1` (`trackID1`,`trackID2`),
KEY `sim` (`sim`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Now I want to run a normal query:
SELECT trackID1, trackID2 FROM `tracksim`
WHERE sim > 0.5 AND
(`trackID1` = 168123 OR `trackID2`= 168123)
ORDER BY sim DESC LIMIT 0,100
The Explain statement gives me:
+----+-------------+----------+-------+---------------+------+---------+------+----------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------+-------+---------------+------+---------+------+----------+----------+-------------+
| 1 | SIMPLE | tracksim | range | TrackID1,sim | sim | 8 | NULL | 19980582 | 100.00 | Using where |
+----+-------------+----------+-------+---------------+------+---------+------+----------+----------+-------------+
The query seems to be very slow (about 185 seconds), but I don't know if that is only because of the number of rows in the table. Do you have a tip on how I can speed up the query or the table lookup?
With 110 million records, I can't imagine there are many entries with the track ID in question. I would have indexes such as
(trackID1, sim )
(trackID2, sim )
(tracksimID, sim)
and do a PREQUERY via union and join against that result
select STRAIGHT_JOIN
TS2.*
from
( select ts.tracksimID
from tracksim ts
where ts.trackID1 = 168123
and ts.sim > 0.5
UNION
select ts.trackSimID
from tracksim ts
where ts.trackid2 = 168123
and ts.sim > 0.5
) PreQuery
JOIN tracksim TS2
on PreQuery.TrackSimID = TS2.TrackSimID
order by
TS2.SIM DESC
LIMIT 0, 100
Mostly I agree with Drap, but the following variation of the query might be even more efficient, especially for larger LIMIT:
SELECT TS2.*
FROM (
SELECT tracksimID, sim
FROM tracksim
WHERE trackID1 = 168123
AND sim > 0.5
UNION
SELECT trackSimID, sim
FROM tracksim
WHERE trackid2 = 168123
AND sim > 0.5
ORDER BY sim DESC
LIMIT 0, 100
) as PreQuery
JOIN tracksim TS2 USING (tracksimID);
Requires (trackID1, sim) and (trackID2, sim) indexes.
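Both could be added in one statement, for example (the index names are illustrative):
ALTER TABLE tracksim
ADD INDEX idx_track1_sim (trackID1, sim),
ADD INDEX idx_track2_sim (trackID2, sim);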
Try filtering your query so you don't return the full table. Alternatively, you could try applying an index to the table on one of the track IDs, for example:
CREATE INDEX TRACK_INDEX
ON tracksim (trackID1)
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
http://www.tutorialspoint.com/mysql/mysql-indexes.htm

Why is MySQL using the wrong index?

MySQL is using an index on (faver_profile_id, removed, notice_id) when it should be using the index on (faver_profile_id, removed, id). The weird thing is that for some values of faver_profile_id it does use the correct index. I can use FORCE INDEX, which drastically speeds up the query, but I'd like to figure out why MySQL is doing this.
This is a new table (35m rows) copied from another table using INSERT INTO.. SELECT FROM.
I did not run OPTIMIZE TABLE or ANALYZE TABLE afterwards. Could that help?
SELECT `Item`.`id` , `Item`.`cached_image` , `Item`.`submitter_id` , `Item`.`source_title` , `Item`.`source_url` , `Item`.`source_image` , `Item`.`nudity` , `Item`.`tags` , `Item`.`width` , `Item`.`height` , `Item`.`tumblr_id` , `Item`.`tumblr_reblog_key` , `Item`.`fave_count` , `Item`.`file_size` , `Item`.`animated` , `Favorite`.`id` , `Favorite`.`created`
FROM `favorites` AS `Favorite`
LEFT JOIN `items` AS `Item` ON ( `Favorite`.`notice_id` = `Item`.`id` )
WHERE `faver_profile_id` =11619
AND `Favorite`.`removed` =0
AND `Item`.`removed` =0
AND `nudity` =0
ORDER BY `Favorite`.`id` DESC
LIMIT 26
Query execution plan ("idx_notice_id_profile_id" is an index on (faver_profile_id, removed, notice_id)); columns are id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra:
1 | SIMPLE | Favorite | ref | idx_faver_idx_id,idx_notice_id_profile_id,notice_id_idx | idx_notice_id_profile_id | 4 | const,const | 15742 | Using where; Using filesort |
1 | SIMPLE | Item | eq_ref | PRIMARY | PRIMARY | 4 | gragland_imgfave.Favorite.notice_id | 1 | Using where
I don't know if it's causing any confusion or not, but moving some of the AND qualifiers into the Item join might help, as they are directly correlated to the Item and not the Favorite. In addition, I've explicitly qualified table.field references where they were missing.
SELECT
Item.id,
Item.cached_image,
Item.submitter_id,
Item.source_title,
Item.source_url,
Item.source_image,
Item.nudity,
Item.tags,
Item.width,
Item.height,
Item.tumblr_id,
Item.tumblr_reblog_key,
Item.fave_count,
Item.file_size,
Item.animated,
Favorite.id,
Favorite.created
FROM favorites AS Favorite
LEFT JOIN items AS Item
ON Favorite.notice_id = Item.id
AND Item.Removed = 0
AND Item.Nudity = 0
WHERE Favorite.faver_profile_id = 11619
AND Favorite.removed = 0
ORDER BY Favorite.id DESC
LIMIT 26
So now, from the "Favorites" table, the criteria come down explicitly to faver_profile_id, removed, and id (for the ORDER BY), which matches the index you want it to use.
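On the two side questions in the post: ANALYZE TABLE refreshes the index statistics the optimizer bases these choices on, which is worth doing after a bulk INSERT INTO.. SELECT, and FORCE INDEX can pin the plan in the meantime. A sketch, assuming idx_faver_idx_id (listed in possible_keys above) is the (faver_profile_id, removed, id) index:
ANALYZE TABLE favorites; -- refresh statistics after the bulk copy
SELECT Favorite.id, Favorite.created
FROM favorites AS Favorite FORCE INDEX (idx_faver_idx_id)
WHERE Favorite.faver_profile_id = 11619
AND Favorite.removed = 0
ORDER BY Favorite.id DESC
LIMIT 26; -- with that index, rows come back already in id order, so no filesort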