I'm trying to understand a strange performance behaviour happening in a MySQL data structure that I'm working on:
CREATE TABLE metric_values
(
dmm_id INT NOT NULL,
dtt_id BIGINT NOT NULL,
cus_id INT NOT NULL,
nod_id INT NOT NULL,
dca_id INT NULL,
value DOUBLE NOT NULL
)
ENGINE = InnoDB;
CREATE INDEX metric_values_dmm_id_index
ON metric_values (dmm_id);
CREATE INDEX metric_values_dtt_index
ON metric_values (dtt_id);
CREATE INDEX metric_values_cus_id_index
ON metric_values (cus_id);
CREATE INDEX metric_values_nod_id_index
ON metric_values (nod_id);
CREATE INDEX metric_values_dca_id_index
ON metric_values (dca_id);
CREATE TABLE dim_metric
(
dmm_id INT AUTO_INCREMENT
PRIMARY KEY,
met_id INT NOT NULL,
name VARCHAR(45) NOT NULL,
instance VARCHAR(45) NULL,
active BIT DEFAULT b'0' NOT NULL
)
ENGINE = InnoDB;
CREATE INDEX dim_metric_dmm_id_met_id_index
ON dim_metric (dmm_id, met_id);
CREATE INDEX dim_metric_met_id_index
ON dim_metric (met_id);
CONTEXT:
The metric_values table has close to 100 million rows and dim_metric has 1024 rows.
I'm doing a simple JOIN between these two tables and I'm having huge performance issues. While trying to figure out what the problem is, I stumbled upon this strange behaviour.
I can't execute the JOIN using the column met_id as a filter: I left it running for 10 minutes and lost the connection to the database due to a timeout before I got any results back.
Running an EXPLAIN on the query, I can see that the indexes are being used correctly (I assume) and that only 1052 rows are being scanned from metric_values.
EXPLAIN
SELECT
count(0)
FROM metric_values v
INNER JOIN dim_metric m ON m.dmm_id = v.dmm_id
WHERE 1=1
AND m.met_id = 1;
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE m ref PRIMARY,dim_metric_met_id_index,dim_metric_dmm_id_met_id_index dim_metric_met_id_index 4 const 1 Using index
1 SIMPLE v ref metric_values_dmm_id_index metric_values_dmm_id_index 4 oi_fact.m.dmm_id 1052 Using index
Making a simple change to the query to use a subquery instead of a JOIN, I can get the results after ~45 seconds.
Running an EXPLAIN on the modified query, I can see that the index is not the primary resource being used to fetch the data and that almost 20 million rows were scanned to produce the result.
EXPLAIN
SELECT
count(0)
FROM metric_values v
WHERE 1=1
AND v.dmm_id = (SELECT m.dmm_id FROM dim_metric m WHERE m.met_id = 1);
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY v ref metric_values_dmm_id_index metric_values_dmm_id_index 4 const 19589800 Using where; Using index
2 SUBQUERY m ref dim_metric_met_id_index dim_metric_met_id_index 4 const 1 Using index
Can someone explain to me what is happening? Did I misunderstand what the EXPLAIN is telling me? Can I make changes to the data model to improve the query performance?
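One thing worth checking here (a hedged suggestion, not a confirmed fix): the JOIN plan's estimate of 1052 rows is wildly off compared with the ~20 million rows the constant version actually scans, which can happen when InnoDB's index statistics are stale. Refreshing the statistics, or resolving the dmm_id into a user variable first so that the big table is probed with a constant (mirroring the plan of the ~45-second subquery version), is a cheap experiment; the variable name below is illustrative.
-- Hedged sketch: refresh statistics, then probe metric_values with a constant.
ANALYZE TABLE metric_values;

SET @dmm := (SELECT m.dmm_id FROM dim_metric m WHERE m.met_id = 1 LIMIT 1);

SELECT COUNT(*)
FROM metric_values v
WHERE v.dmm_id = @dmm;   -- constant lookup, same shape as the subquery plan that at least completes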
Related
I'm new to using MySQL.
I'm trying to run an inner join query between a table of 80,000 records (this is table B) and a 40 GB data set with approx 600 million records (this is table A).
Is MySQL suitable for running this sort of query?
What sort of time should I expect it to take?
The code I tried is below. However, it failed as my database connection dropped at 60,000 secs.
set net_read_timeout = 36000;
INSERT
INTO C
SELECT A.id, A.link_id, link_ref, network,
date_1, time_per,
veh_cls, data_source, N, av_jt
from A
inner join B
on A.link_id = B.link_id;
I'm starting to look into ways of cutting the 40 GB table down to a temp table, to try to make the query more manageable. But I keep getting:
Error Code: 1206. The total number of locks exceeds the lock table size 646.953 sec
Am I on the right track?
cheers!
my code for splitting the database is:
LOCK TABLES TFM_830_car WRITE, tfm READ;
INSERT
INTO D
SELECT A.id, A.link_id, A.time_per, A.av_jt
from A
where A.time_per = 34 and A.veh_cls = 1;
UNLOCK TABLES;
Perhaps my table indices are incorrect; all I have is a simple primary key:
CREATE Table A
(
id int unsigned Not Null auto_increment,
link_id varchar(255) not Null,
link_ref int not Null,
network int not Null,
date_1 varchar(255) not Null,
#date_2 time default Null,
time_per int not null,
veh_cls int not null,
data_source int not null,
N int not null,
av_jt int not null,
sum_squ_jt int not null,
Primary Key (id)
);
Drop table if exists B;
CREATE Table B
(
id int unsigned Not Null auto_increment,
TOID varchar(255) not Null,
link_id varchar(255) not Null,
ABnode varchar(255) not Null,
#date_2 time not Null,
Primary Key (id)
);
In terms of the schema, it is just these two tables (A and B) loaded into one database.
I believe that answer has already been given in this post: The total number of locks exceeds the lock table size
i.e. use a table lock to avoid InnoDB's default row-by-row locking mode.
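If the table lock is not an option, a hedged alternative is to keep InnoDB's row locking but copy the data in primary-key chunks, so that no single INSERT ... SELECT accumulates enough locks to exhaust the lock table (the id ranges below are purely illustrative):
-- Copy the filtered rows range by range instead of in one huge transaction.
INSERT INTO D
SELECT A.id, A.link_id, A.time_per, A.av_jt
FROM A
WHERE A.time_per = 34 AND A.veh_cls = 1
  AND A.id BETWEEN 1 AND 10000000;
-- then repeat with id BETWEEN 10000001 AND 20000000, and so on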
Thanks for your help.
Indexing seems to have solved the problem. I managed to reduce the query time from 700 secs to approx 0.2 secs per record by indexing on:
A.link_id
i.e. from
from A
inner join B
on A.link_id = B.link_id;
I found this really useful post, very helpful for a newbie like myself:
http://hackmysql.com/case4
code used to index was:
CREATE INDEX linkid_index ON A(link_id);
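For the record, the join can also benefit from an index on the other side of the ON condition; the following is a hedged extra step, not something confirmed as necessary here (the index name is made up):
CREATE INDEX b_linkid_index ON B(link_id);   -- speeds up lookups from A into B on link_id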
I have a database with the following three tables:
matches table has 200,000 matches...
CREATE TABLE `matches` (
`match_id` bigint(20) unsigned NOT NULL,
`start_time` int(10) unsigned NOT NULL,
PRIMARY KEY (`match_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
heroes table has ~100 heroes...
CREATE TABLE `heroes` (
`hero_id` smallint(5) unsigned NOT NULL,
`name` char(40) NOT NULL,
PRIMARY KEY (`hero_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
matches_heroes table has 2,000,000 relationships (10 random heroes per match)...
CREATE TABLE `matches_heroes` (
`relation_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`match_id` bigint(20) unsigned NOT NULL,
`hero_id` smallint(6) unsigned NOT NULL,
PRIMARY KEY (`relation_id`),
KEY `match_id` (`match_id`),
KEY `hero_id` (`hero_id`),
CONSTRAINT `matches_heroes_ibfk_2` FOREIGN KEY (`hero_id`)
REFERENCES `heroes` (`hero_id`),
CONSTRAINT `matches_heroes_ibfk_1` FOREIGN KEY (`match_id`)
REFERENCES `matches` (`match_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=3689891 DEFAULT CHARSET=utf8
The following query takes over 1 second, which seems pretty slow to me for something so simple:
SELECT SQL_NO_CACHE COUNT(*) AS match_count
FROM matches INNER JOIN matches_heroes ON matches.match_id = matches_heroes.match_id
WHERE hero_id = 5
Removing only the WHERE clause doesn't help, but if I take out the INNER JOIN also, like so:
SELECT SQL_NO_CACHE COUNT(*) AS match_count FROM matches
...it only takes 0.05 seconds. It seems that INNER JOIN is very costly. I don't have much experience with joins. Is this normal or am I doing something wrong?
UPDATE #1: Here's the EXPLAIN result.
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE matches_heroes ref match_id,hero_id,match_id_hero_id hero_id 2 const 34742
1 SIMPLE matches eq_ref PRIMARY PRIMARY 8 mydatabase.matches_heroes.match_id 1 Using index
UPDATE #2: After listening to you guys, I think it's working properly and this is simply as fast as it gets. Please let me know if you disagree. Thanks for all the help. I really appreciate it.
Use COUNT(matches.match_id) instead of COUNT(*); when using joins it's best not to use *, as it does extra computation. Using columns from the join is the best way to ensure you are not requesting any other operations. (Not a problem with MySQL inner joins, my bad.)
Also, you should verify that all your keys are defragmented and that there is enough free RAM for the indexes to load into memory.
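In concrete terms, a hedged sketch of those two checks using standard MySQL statements:
-- Rebuild the table to defragment its indexes (on InnoDB this maps to a table rebuild).
OPTIMIZE TABLE matches_heroes;
-- Refresh index statistics so the optimizer has accurate cardinalities.
ANALYZE TABLE matches_heroes;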
Update 1:
Try to add a composed index for match_id,hero_id as it should give better performance.
ALTER TABLE `matches_heroes` ADD KEY `match_id_hero_id` (`match_id`,`hero_id`)
Update 2:
I wasn't satisfied with the accepted answer (that MySQL is just that slow for 2 million records), so I ran benchmarks on my Ubuntu PC (i7 processor, with a standard HDD).
-- pre-requirements
CREATE TABLE seq_numbers (
number INT NOT NULL
) ENGINE = MYISAM;
DELIMITER $$
CREATE PROCEDURE InsertSeq(IN MinVal INT, IN MaxVal INT)
BEGIN
DECLARE i INT;
SET i = MinVal;
START TRANSACTION;
WHILE i <= MaxVal DO
INSERT INTO seq_numbers VALUES (i);
SET i = i + 1;
END WHILE;
COMMIT;
END$$
DELIMITER ;
CALL InsertSeq(1,200000)
;
ALTER TABLE seq_numbers ADD PRIMARY KEY (number)
;
-- create tables
-- DROP TABLE IF EXISTS `matches`
CREATE TABLE `matches` (
`match_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`start_time` int(10) unsigned NOT NULL,
PRIMARY KEY (`match_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
CREATE TABLE `heroes` (
`hero_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
`name` char(40) NOT NULL,
PRIMARY KEY (`hero_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
CREATE TABLE `matches_heroes` (
`match_id` bigint(20) unsigned NOT NULL,
`hero_id` smallint(6) unsigned NOT NULL,
PRIMARY KEY (`match_id`,`hero_id`),
KEY (match_id),
KEY (hero_id),
CONSTRAINT `matches_heroes_ibfk_2` FOREIGN KEY (`hero_id`) REFERENCES `heroes` (`hero_id`),
CONSTRAINT `matches_heroes_ibfk_1` FOREIGN KEY (`match_id`) REFERENCES `matches` (`match_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
-- insert DATA
-- 100
INSERT INTO heroes(name)
SELECT SUBSTR(CONCAT(char(RAND()*25+65),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97),char(RAND()*25+97)),1,RAND()*9+4) as RandomName
FROM seq_numbers WHERE number <= 100;
-- 200000
INSERT INTO matches(start_time)
SELECT rand()*1000000
FROM seq_numbers WHERE number <= 200000;
-- 2000000
INSERT INTO matches_heroes(hero_id,match_id)
SELECT a.hero_id, b.match_id
FROM heroes as a
INNER JOIN matches as b ON 1=1
LIMIT 2000000;
-- warm-up database, load INDEXes in ram (optional, works only for MyISAM tables)
LOAD INDEX INTO CACHE matches_heroes, matches, heroes;
-- get random hero_id
SET @randHeroId = (SELECT hero_id FROM matches_heroes ORDER BY rand() LIMIT 1);
-- test 1
SELECT SQL_NO_CACHE @randHeroId, COUNT(*) AS match_count
FROM matches as a
INNER JOIN matches_heroes as b ON a.match_id = b.match_id
WHERE b.hero_id = @randHeroId
; -- Time: 0.039s
-- test 2: adding some complexity
SET @randName = (SELECT `name` FROM heroes WHERE hero_id = @randHeroId LIMIT 1);
SELECT SQL_NO_CACHE @randName, COUNT(*) AS match_count
FROM matches as a
INNER JOIN matches_heroes as b ON a.match_id = b.match_id
INNER JOIN heroes as c ON b.hero_id = c.hero_id
WHERE c.name = @randName
; -- Time: 0.037s
Conclusion: My test results are about 20x faster, and my server load was about 80% before testing, as it's not a dedicated MySQL server and had other CPU-intensive tasks running. So if you run the whole script above and get slower results, it can be because:
you have a shared host and the load is too big. In that case there isn't much you can do: either complain to your current host, pay for a better host/VM, or try another host
your configured key_buffer_size (for MyISAM) or innodb_buffer_pool_size (for InnoDB) is too small; the optimum size here would be over 150 MB (a quick check is sketched right after this list)
your available RAM is not enough; you would need about 100-150 MB of RAM for the indexes to be loaded into memory. Solution: free up some RAM or buy more of it
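A quick, hedged way to check the buffer settings mentioned above (the suggested value is only an example):
SHOW VARIABLES LIKE 'key_buffer_size';          -- MyISAM index cache
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- InnoDB data/index cache
-- Example: raise the MyISAM key cache to 256 MB for testing
-- (innodb_buffer_pool_size is only resizable at runtime on MySQL 5.7.5+).
SET GLOBAL key_buffer_size = 256 * 1024 * 1024;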
Note that by generating fresh data, the test script rules out the index fragmentation problem.
Hope this helps; ask if you have issues testing this.
Note:
SELECT SQL_NO_CACHE COUNT(*) AS match_count
FROM matches INNER JOIN matches_heroes ON matches.match_id = matches_heroes.match_id
WHERE hero_id = 5
is the equivalent to:
SELECT SQL_NO_CACHE COUNT(*) AS match_count
FROM matches_heroes
WHERE hero_id = 5
So you wouldn't require a join, if that's the count you need, but I'm guessing that was just an example.
So you are saying that reading a table of 200,000 records is faster than reading a table of 2,000,000 records, finding the desired rows, and then taking them all to find matching records in the 200,000-record table?
And this surprises you? It's simply a lot more work for the DBMS. (It may even be, by the way, that the DBMS decides not to use the hero_id index when it considers a full table scan to be faster.)
So in my opinion there is nothing wrong with what is happening here.
I want to reduce the time taken by the query in mysql.
There are three tables say
A ~600k rows,
B ~2K rows,
C ~100K rows
having 2 columns each.
A has one column which is used in aggregation and other to join with table B.
B has one column to join with A and other with C
C has one column to join with B and other column to group by.
What should the indexing plan be to reduce the run time? As of now the query is using temporary tables and then filesort. Is there any way we could avoid the temporary tables?
Sample query :
SELECT
sum(`revenue_facts`.`total_price`) AS `m0`
FROM
`category_groups` AS `category_groups`,
`revenue_facts` AS `revenue_facts`,
`dim_products` AS `dim_products`
WHERE
`dim_products`.`product_category_group_sk` = `category_groups`.`product_category_group_sk` AND
`revenue_facts`.`product_sk` = `dim_products`.`product_sk`
GROUP BY `category_groups`.`category_name`;
I already have indexes on group by column and the columns in join.
My query is currently taking 6 minutes. I want to reduce the time taken. The table structures are as follows.
table A :
CREATE TABLE `revenue_facts` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`product_sk` bigint(20) unsigned NOT NULL,
`total_price` decimal(12,2) NOT NULL,
PRIMARY KEY (`id`),
KEY `product_sk` (`product_sk`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Table B :
CREATE TABLE `dim_products` (
`product_sk` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`product_category_group_sk` bigint(20) unsigned NOT NULL,
PRIMARY KEY (`product_sk`),
KEY `product_category_group_index` (`product_category_group_sk`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
table C :
CREATE TABLE `category_groups` (
`product_category_group_sk` bigint(20) unsigned NOT NULL,
`category_sk` bigint(20) unsigned NOT NULL,
`category_name` varchar(255) NOT NULL,
PRIMARY KEY (`product_category_group_sk`,`category_sk`),
KEY `category_sk` (`category_sk`),
KEY `product_category_group_sk` (`product_category_group_sk`),
KEY `category_name` (`category_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Execution plan used is:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE dim_products index PRIMARY,product_category_group_index product_category_group_index 8 NULL 651264 Using index; Using temporary; Using filesort
1 SIMPLE category_groups ref PRIMARY,category_sk,product_category_group_sk,category_name product_category_group_sk 8 etl_testing.dim_products.product_category_group_sk 4 Using index
1 SIMPLE revenue_facts ref product_sk product_sk 8 etl_testing.dim_products.product_sk 5 NULL
Try this:
SELECT
sum(`revenue_facts`.`total_price`) AS `m0`
FROM
(`dim_products` LEFT JOIN `category_groups` ON `dim_products`.`product_category_group_sk` = `category_groups`.`product_category_group_sk`)
LEFT JOIN `revenue_facts` ON `dim_products`.`product_sk` = `revenue_facts`.`product_sk`
GROUP BY `category_groups`.`category_name`;
Also, as Abdul said:
"Post your table structures and explain plan"
I have a normalized database structure, which I will try to explain.
3 tables:
profiles
keywords
keyword_profile
Every profile on my website can have a various number of keywords linked to it. Every keyword gets an ID-number in the keywords-table. Every profile gets an ID-number in the profiles table. The keyword_profile table has about 600k rows with a keywordID linked to a profileID.
I have a PRIMARY index on my ID column in the profiles table.
I have a PRIMARY index on my ID column in the keywords table.
I have a UNIQUE index on my keyword-name column in the keywords table.
I have a PRIMARY index on the keyword_profile table like this: (profile_id, keyword_id)
I have an index on the profile_id column in the keyword_profile table
Next: when I execute the following query (the specific keyword is named 'dienst'):
EXPLAIN SELECT profiles.hoofdrubriek, profiles.plaats, profiles.bedrijfsnaam, profiles.gemeente, profiles.bedrijfsslogan, profiles.straatnaam, profiles.huisnummer, profiles.postcode, profiles.telefoonnummer, profiles.fax,profiles.email, profiles.website, profiles.bedrijfslogo
FROM profiles
INNER JOIN profile_dienst ON profiles.ID = profile_dienst.profile_id
INNER JOIN diensten ON profile_dienst.dienst_id = diensten.ID
WHERE (
diensten.dienst = 'Aannemersdiensten'
)
ORDER BY profiles.grade DESC , profiles.bedrijfsnaam
I get the following result. It scans all 600k rows!! That's not really the result I was hoping for.. What indexes can I apply so it won't scan the entire table?
id - select_type - table - type - key - rows - Extra
1 - SIMPLE - diensten - const - dienst - 1 - Using temporary; Using filesort
1 - SIMPLE - profile_dienst - index - PRIMARY - 662000 - Using where; Using index
1 - SIMPLE - profiles - eq_ref - PRIMARY - 1 - Using where
Thanks for the help guys!!
EDIT: Added SHOW CREATE TABLE results:
CREATE TABLE `diensten` (
`ID` mediumint(9) NOT NULL AUTO_INCREMENT,
`dienst` varchar(255) NOT NULL,
PRIMARY KEY (`ID`),
UNIQUE KEY `dienst` (`dienst`)
) ENGINE=MyISAM AUTO_INCREMENT=1903 DEFAULT CHARSET=utf8
CREATE TABLE `profile_dienst` (
`profile_id` varchar(20) NOT NULL,
`dienst_id` varchar(20) NOT NULL,
PRIMARY KEY (`dienst_id`,`profile_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
CREATE TABLE `profiles` (
`ID` varchar(255) NOT NULL DEFAULT '',
`username` varchar(255) DEFAULT NULL,
...more columns...,
`grade` int(5) NOT NULL,
PRIMARY KEY (`ID`),
KEY `IDX_TIMESTAMP` (`timestamp`),
KEY `IDX_NIEUW` (`nieuw`),
KEY `IDX_HOOFDRUBRIEK` (`hoofdrubriek`),
KEY `bedrijfsnaam` (`bedrijfsnaam`),
KEY `grade` (`grade`),
KEY `gemeente` (`gemeente`),
KEY `plaats` (`plaats`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
I think it's fine; it's scanning through all the values of profile_dienst because MySQL has to look up diensten.ID. The good news is that it is using an index.
You can find more info about the Extra column of the MySQL explain plan here: EXPLAIN Extra Information
You need to do your normalization better: you are joining a varchar(20) with a mediumint (profile_dienst.dienst_id = diensten.ID), and that's why a full index scan is needed. That is what the EXPLAIN columns type: index and Extra: "Using index" mean. MySQL can only use indexes for a join if the datatypes are the same.
Here is a little demo with an inner self join, http://sqlfiddle.com/#!2/1ef09/4 , showing when MySQL can use indexes. INT, SMALLINT, CHAR and VARCHAR datatypes are used. You can see that a JOIN on INT and SMALLINT can use indexes, and a JOIN on CHAR and VARCHAR can as well, but when mixing INT with CHAR, MySQL can't use indexes and a full table scan is needed (look at type: ALL).
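A hedged sketch of the corresponding fix, assuming the stored dienst_id values really are numeric and the profile ids fit the target type (back up first, since these columns are part of the primary key):
ALTER TABLE profile_dienst
  MODIFY dienst_id mediumint(9) NOT NULL,    -- match diensten.ID
  MODIFY profile_id varchar(255) NOT NULL;   -- match profiles.ID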
I'm having some real issues with a few queries, this one in particular. Info below.
tgmp_games, about 20k rows
CREATE TABLE IF NOT EXISTS `tgmp_games` (
`g_id` int(8) NOT NULL AUTO_INCREMENT,
`site_id` int(6) NOT NULL,
`g_name` varchar(255) NOT NULL,
`g_link` varchar(255) NOT NULL,
`g_url` varchar(255) NOT NULL,
`g_platforms` varchar(128) NOT NULL,
`g_added` datetime NOT NULL,
`g_cover` varchar(255) NOT NULL,
`g_impressions` int(8) NOT NULL,
PRIMARY KEY (`g_id`),
KEY `g_platforms` (`g_platforms`),
KEY `site_id` (`site_id`),
KEY `g_link` (`g_link`),
KEY `g_release` (`g_release`),
KEY `g_genre` (`g_genre`),
KEY `g_name` (`g_name`),
KEY `g_impressions` (`g_impressions`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
tgmp_reviews - about 200k rows
CREATE TABLE IF NOT EXISTS `tgmp_reviews` (
`r_id` int(8) NOT NULL AUTO_INCREMENT,
`site_id` int(6) NOT NULL,
`r_source` varchar(128) NOT NULL,
`r_date` date NOT NULL,
`r_score` int(3) NOT NULL,
`r_copy` text NOT NULL,
`r_link` text NOT NULL,
`r_int_link` text NOT NULL,
`r_parent` int(8) NOT NULL,
`r_platform` varchar(12) NOT NULL,
`r_impressions` int(8) NOT NULL,
PRIMARY KEY (`r_id`),
KEY `site_id` (`site_id`),
KEY `r_parent` (`r_parent`),
KEY `r_platform` (`r_platform`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ;
Here is the query, which takes about 3 seconds:
SELECT * FROM tgmp_games g
RIGHT JOIN tgmp_reviews r ON g_id = r.r_parent
WHERE g.site_id = '34'
GROUP BY g_name
ORDER BY g_impressions DESC LIMIT 15
EXPLAIN
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE r ALL r_parent NULL NULL NULL 201133 Using temporary; Using filesort
1 SIMPLE g eq_ref PRIMARY,site_id PRIMARY 4 engine_comp.r.r_parent 1 Using where
I am just trying to grab the 15 most viewed games, then grab a single review for each game (it doesn't really matter which; I guess the highest rated, by r_score, would be ideal).
Can someone help me figure out why this is so horribly inefficient?
I don't understand the purpose of having GROUP BY g_name in your query, but it makes MySQL perform aggregates on the selected columns, or on all columns from both tables. So please try to exclude it and check whether it helps.
Also, RIGHT JOIN makes the database query tgmp_reviews first, which is not what you want. I suppose LEFT JOIN is a better choice here. Please try changing the join type.
If neither of the first options helps, you need to redesign your query. As you need to obtain the 15 most viewed games for the site, that query will be:
SELECT g_id
FROM tgmp_games g
WHERE site_id = 34
ORDER BY g_impressions DESC
LIMIT 15;
This is the very first part that should be executed by the database, as it provides the best selectivity. Then you can get the desired reviews for the games:
SELECT r_parent, max(r_score)
FROM tgmp_reviews r
WHERE r_parent IN (/*1st query*/)
GROUP BY r_parent;
Such a construct will force the database to execute the first query first (sorry for the tautology) and will give you the maximal score for each of the wanted games. I hope you will be able to use the obtained results for your purpose.
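If you prefer a single statement, a hedged variant is to put the first query into a derived table and join against it; MySQL does not allow LIMIT inside an IN (...) subquery, so the derived-table form is the usual workaround:
SELECT r.r_parent, MAX(r.r_score) AS best_score
FROM tgmp_reviews r
INNER JOIN (
    SELECT g_id
    FROM tgmp_games
    WHERE site_id = 34
    ORDER BY g_impressions DESC
    LIMIT 15
) top_games ON r.r_parent = top_games.g_id
GROUP BY r.r_parent;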
Your MyISAM table is small, so you can try converting it to InnoDB to see if that resolves the issue. Do you have a reason for using MyISAM instead of InnoDB for that table?
You can also try running an analyze on each table to update the statistics to see if the optimizer chooses something different.
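For completeness, here is a hedged example of both suggestions (the conversion rewrites the whole table, so test it on a copy first):
ALTER TABLE tgmp_games ENGINE=InnoDB;      -- convert the small MyISAM table
ANALYZE TABLE tgmp_games, tgmp_reviews;    -- refresh the optimizer statistics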