Why would this simple MySQL update query take so long?

My host has been sending me messages over the last few months saying that my site is using way too many MySQL minutes. They also occasionally send logs showing which queries use up the most time. Some of the queries are fairly long and complicated, so I understand why they would be an issue. But a few have me scratching my head. The one I want to focus on next is this:
UPDATE parentmessages SET views=views+1 WHERE parentid='11308'
The number is just an example, it could be any parentid. The parentmessages table has parentid as the primary key, so I would think it would be indexed and easily found. There are about 11,000 records in the table, which is not really that many. Here are the numbers my host gave me for how long this query took over 6 instances yesterday:
Taking 0.126455, 1.472929, 1.638743, 3.040538, 7.130041, 112.498037 seconds to complete
The 112 could be a random glitch, I suppose, but why would it sometimes take 3 or 7 seconds?! My best guess is that it's because I have a lot of indexes on the table, but I don't know enough about MySQL to know whether that matters. And why would it sometimes be 1/10th of a second and sometimes many seconds?
Here is the show create table:
CREATE TABLE `parentmessages` (
`parentid` int(7) NOT NULL AUTO_INCREMENT,
`active` tinyint(1) NOT NULL,
`level` int(2) NOT NULL,
`type` varchar(10) NOT NULL,
`hidden` tinyint(1) DEFAULT NULL,
`sticky` tinyint(1) NOT NULL,
`poll` tinyint(1) NOT NULL,
`topic` varchar(120) DEFAULT NULL,
`message` varchar(30000) NOT NULL,
`views` int(6) NOT NULL,
`replies` int(5) NOT NULL,
`userid` int(7) NOT NULL,
`datetimecalc` int(11) NOT NULL,
`lastreplycalc` int(11) NOT NULL,
`lastreplyuser` int(7) NOT NULL,
`editedcalc` int(11) DEFAULT NULL,
`editeduser` int(7) DEFAULT NULL,
`realediteduser` int(7) DEFAULT NULL,
`altint` int(7) DEFAULT NULL,
`imageurl` varchar(125) DEFAULT NULL,
`locked` tinyint(1) NOT NULL,
`tempid` int(12) NOT NULL,
PRIMARY KEY (`parentid`),
KEY `useridindex` (`userid`),
KEY `datetimecalcindex` (`datetimecalc`),
KEY `activeindex` (`active`),
KEY `lastreplycalcindex` (`lastreplycalc`),
KEY `levelindex` (`level`),
KEY `stickyindex` (`sticky`)
) ENGINE=MyISAM AUTO_INCREMENT=11716 DEFAULT CHARSET=latin1

One reason could be that another slow query is locking the table, and your update is simply waiting for the other query to finish.

Don't use MyISAM. I forget who said it, maybe PeterZ, but "using MyISAM means you don't care about your data". The easiest way to check for table locking is to look at the processlist. Dumps, inserts, updates, etc. will all lock the table. MyISAM is all but deprecated in 5.6, for good reason.
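For example, while one of these UPDATEs is running slowly, you can check from a second connection whether it is sitting behind another statement's lock:

-- A MyISAM table-lock wait shows up in the State column as
-- "Locked" (older versions) or "Waiting for table level lock";
-- the row holding the lock is usually a long-running query on the same table.
SHOW FULL PROCESSLIST;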

Related

MySQL Query Optimization that touches three tables via a union of two of them

I have a query that returns results from a single table based on the provided ID existing in a column in one of two, or both, tables. The DB schema for the relevant tables is provided below as well as the initial query and then what was later recommended to me by a peer. I go into some details below as to why this query works but I need to optimize it farther for larger datasets and pagination.
CREATE TABLE `killmails` (
`id` BIGINT(20) UNSIGNED NOT NULL,
`hash` VARCHAR(255) NOT NULL,
`moon_id` BIGINT(20) NULL DEFAULT NULL,
`solar_system_id` BIGINT(20) UNSIGNED NOT NULL,
`war_id` BIGINT(20) NULL DEFAULT NULL,
`is_npc` TINYINT(1) NOT NULL DEFAULT '0',
`is_awox` TINYINT(1) NOT NULL DEFAULT '0',
`is_solo` TINYINT(1) NOT NULL DEFAULT '0',
`dropped_value` DECIMAL(18,4) UNSIGNED NOT NULL DEFAULT '0.0000',
`destroyed_value` DECIMAL(18,4) UNSIGNED NOT NULL DEFAULT '0.0000',
`fitted_value` DECIMAL(18,4) UNSIGNED NOT NULL DEFAULT '0.0000',
`total_value` DECIMAL(18,4) UNSIGNED NOT NULL DEFAULT '0.0000',
`killmail_time` DATETIME NOT NULL,
`created_at` DATETIME NOT NULL,
`updated_at` DATETIME NOT NULL,
PRIMARY KEY (`id`, `hash`),
INDEX `total_value` (`total_value`),
INDEX `killmail_time` (`killmail_time`),
INDEX `solar_system_id` (`solar_system_id`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
;
CREATE TABLE `killmail_attackers` (
`id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`killmail_id` BIGINT(20) UNSIGNED NOT NULL,
`alliance_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`character_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`corporation_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`faction_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`damage_done` BIGINT(20) UNSIGNED NOT NULL,
`final_blow` TINYINT(1) NOT NULL DEFAULT '0',
`security_status` DECIMAL(17,15) NOT NULL,
`ship_type_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`weapon_type_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`created_at` DATETIME NOT NULL,
`updated_at` DATETIME NOT NULL,
PRIMARY KEY (`id`),
INDEX `ship_type_id` (`ship_type_id`),
INDEX `weapon_type_id` (`weapon_type_id`),
INDEX `alliance_id` (`alliance_id`),
INDEX `corporation_id` (`corporation_id`),
INDEX `killmail_id_character_id` (`killmail_id`, `character_id`),
CONSTRAINT `killmail_attackers_killmail_id_killmails_id_foreign_key` FOREIGN KEY (`killmail_id`) REFERENCES `killmails` (`id`) ON UPDATE CASCADE ON DELETE CASCADE
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
;
CREATE TABLE `killmail_victim` (
`id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`killmail_id` BIGINT(20) UNSIGNED NOT NULL,
`alliance_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`character_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`corporation_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`faction_id` BIGINT(20) UNSIGNED NULL DEFAULT NULL,
`damage_taken` BIGINT(20) UNSIGNED NOT NULL,
`ship_type_id` BIGINT(20) UNSIGNED NOT NULL,
`ship_value` DECIMAL(18,4) NOT NULL DEFAULT '0.0000',
`pos_x` DECIMAL(30,10) NULL DEFAULT NULL,
`pos_y` DECIMAL(30,10) NULL DEFAULT NULL,
`pos_z` DECIMAL(30,10) NULL DEFAULT NULL,
`created_at` DATETIME NOT NULL,
`updated_at` DATETIME NOT NULL,
PRIMARY KEY (`id`),
INDEX `corporation_id` (`corporation_id`),
INDEX `alliance_id` (`alliance_id`),
INDEX `ship_type_id` (`ship_type_id`),
INDEX `killmail_id_character_id` (`killmail_id`, `character_id`),
CONSTRAINT `killmail_victim_killmail_id_killmails_id_foreign_key` FOREIGN KEY (`killmail_id`) REFERENCES `killmails` (`id`) ON UPDATE CASCADE ON DELETE CASCADE
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
;
This first query is where the problem started:
SELECT
*
FROM
killmails k
LEFT JOIN killmail_attackers ka ON k.id = ka.killmail_id
LEFT JOIN killmail_victim kv ON k.id = kv.killmail_id
WHERE
ka.character_id = ?
OR kv.character_id = ?
ORDER BY k.killmail_time DESC
LIMIT ? OFFSET ?
This worked okay, but query times were long. We optimized it to this:
SELECT
killmails.*
FROM (
SELECT killmail_victim.killmail_id FROM killmail_victim
WHERE killmail_victim.corporation_id = ?
UNION
SELECT killmail_attackers.killmail_id FROM killmail_attackers
WHERE killmail_attackers.corporation_id = ?
) SELECTED_KMS
LEFT JOIN killmails ON killmails.id = SELECTED_KMS.killmail_id
ORDER BY killmails.killmail_time DESC
LIMIT ? OFFSET ?
I saw a huge improvement in query times when looking up killmails for characters; however, when I started querying for larger datasets like corporation and alliance killmails, the query slows down. This is because the union'd subqueries can each return a large set of killmail IDs, and reading all of that into memory to build the SELECTED_KMS derived table is, I believe, what takes so much time. Most of the time, with alliances, my application's connection to the database times out. One alliance returned 900K killmail IDs from one of the union'd tables; I'm not sure what the other returned.
I can easily add limit statements to the internal queries, but this will introduce a lot of complications when I get to paginating the data or when I introduce a feature to search for KMs by date for example.
I am looking for suggestions on how this query can be optimized and still allow for easy pagination in the near future.
Thank You
Change INDEX(corporation_id) in both tables to INDEX(corporation_id, killmail_id) so that the inner queries will be "covering".
In general, INDEX(a) is useless when you also have INDEX(a,b). Any query that needs just a, can use either of those indexes. (This rule does not apply to b; only the "leftmost" column(s).)
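A minimal sketch of that change, reusing the index names from the dumps above:

-- Make the corporation_id indexes "covering" for the inner UNION queries,
-- which only read corporation_id and killmail_id.
ALTER TABLE killmail_attackers
  DROP INDEX `corporation_id`,
  ADD INDEX `corporation_id` (`corporation_id`, `killmail_id`);
ALTER TABLE killmail_victim
  DROP INDEX `corporation_id`,
  ADD INDEX `corporation_id` (`corporation_id`, `killmail_id`);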
Where does killmails.id come from? It's not AUTO_INCREMENT; it is not alone in the PRIMARY KEY, so there is no specified "uniqueness" constraint. Is it unique by some other design? Is it computed somewhere else in the code? (I ask because I need a feel for its uniqueness and other characteristics.)
Add INDEX(id, killmail_time) on killmails.
What version are you using?
Perhaps UNION ALL would give the same results? It would be faster because it would not need to de-dup.
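As a sketch, the derived table in the optimized query would become the following (note that if a corporation can appear as both attacker and victim on the same killmail, UNION ALL will return that killmail_id twice):

SELECT killmail_victim.killmail_id FROM killmail_victim
WHERE killmail_victim.corporation_id = ?
UNION ALL
SELECT killmail_attackers.killmail_id FROM killmail_attackers
WHERE killmail_attackers.corporation_id = ?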
How much RAM do you have? What is the value of innodb_buffer_pool_size?
Do you really need 8-byte BIGINTs? Even if your application is using longlong (or whatever it calls it), you can probably change the schema without changing the app.
Do you need this much precision and range? DECIMAL(30,10) -- it takes 14 bytes each. DOUBLE would give you about 16 significant digits in 8 bytes, with a wider range of values (up to about 10^308). What "units" are you using? (Overkill for light-years or parsecs; inadequate for miles or km. Perhaps AUs? Then the bottom digit would be a precision of a few meters?)
The last few questions are aimed at shrinking the table and seeing if we can avoid it being as I/O-bound as it apparently is now.
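As an illustration of the shrinking idea (a sketch only; choose types that fit your real value ranges, and remember that a BIGINT column involved in a foreign key has to be changed on both sides of the constraint):

-- Positions as 8-byte DOUBLE instead of 14-byte DECIMAL(30,10)
ALTER TABLE killmail_victim
  MODIFY pos_x DOUBLE NULL DEFAULT NULL,
  MODIFY pos_y DOUBLE NULL DEFAULT NULL,
  MODIFY pos_z DOUBLE NULL DEFAULT NULL;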
Important
innodb_buffer_pool_size = 128M is terribly small, especially for a 32GB machine, and especially if your dataset is much bigger than 128MB. If there are not any other apps running on the server, bump that setting up to 20G.
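For example, in my.cnf (20G assumes the 32GB server discussed here with no other big memory consumers; a restart is needed on versions before 5.7.5, which introduced online buffer pool resizing):

[mysqld]
innodb_buffer_pool_size = 20G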

MySQL - ordering by column with many different values in big table

Probably through poor database design, the following really simple query is taking ~1.5 minutes to run.
SELECT s.title, t.name AS team_name
FROM stories AS s
JOIN teams AS t ON s.team_id = t.id
WHERE s.pubdate >= "1970-01-01 00:00"
ORDER BY s.hits /* <-- here's the problem */
LIMIT 3 OFFSET 0
The problem is the stories table is fairly big, with ~1.5m rows, and there's a ton of unique values for hits (this column logs the hits to each story.)
Take out the order clause and it resolves almost instantly.
Question: what can I do to optimise for queries like this? Presumably I shouldn't apply an index to hits, since no direct look-ups take place on that column.
[UPDATE]
SHOW CREATE TABLE for all tables concerned:
CREATE TABLE stories (
`id` varchar(11) NOT NULL,
`link` text NOT NULL,
`title` varchar(255) CHARACTER SET utf8 NOT NULL,
`description` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
`pubdate` datetime NOT NULL,
`source_id` varchar(11) NOT NULL,
`team_id` varchar(11) NOT NULL,
`hits` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `Unique combo (title + date)` (`title`,`pubdate`),
KEY `team (FK)` (`team_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
CREATE TABLE teams (
`id` varchar(11) NOT NULL,
`is_live` enum('y') DEFAULT NULL,
`name` varchar(50) NOT NULL,
`short_name` varchar(12) DEFAULT NULL,
`server` varchar(11) DEFAULT NULL,
`url_token` varchar(255) NOT NULL,
`league` varchar(11) NOT NULL,
`away_game_id` varchar(255) DEFAULT NULL,
`digest_list_id` varchar(25) DEFAULT NULL,
`twitter_handle` varchar(255) DEFAULT NULL,
`no_official_news` enum('y') DEFAULT NULL,
`alt_names` varchar(255) DEFAULT NULL,
`no_use_nickname` enum('y') DEFAULT NULL,
`official_hashtag` varchar(30) DEFAULT NULL,
`merge_news_and_fans` enum('y') DEFAULT NULL,
`colour_1` varchar(6) NOT NULL,
`colour_2` varchar(6) DEFAULT NULL,
`colour_3` varchar(6) DEFAULT NULL,
`link_colour_modifier` float DEFAULT NULL,
`alt_link_colour_modifier` float DEFAULT NULL,
`title_shade` enum('dark','light') NOT NULL,
`shirt_style` enum('vert_stripes','horiz_stripes','vert_stripes_thin','horiz_stripes_thin','vert_split','horiz_split') DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `URL token` (`url_token`),
KEY `league (FK)` (`league`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
Consider removing the filter on pubdate if the user does not need it. It confuses the optimizer.
INDEX(hits, pubdate, title)
will probably help the query the most. It is "covering".
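A sketch of adding that index (the index name is mine; since the query also reads team_id from stories, appending team_id would make the index fully covering for this exact query):

ALTER TABLE stories
  ADD INDEX hits_pubdate_title (hits, pubdate, title);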
The reason the query is fast when you remove ORDER BY: without it, MySQL can hand you any 3 rows. With it, and without a useful index, it needs to sort all 1.5M rows to discover the 3 with the fewest hits.
Perhaps you wanted ORDER BY s.hits DESC? -- to get those with the most hits.

MySQL index help - which is faster?

What I'm dealing with:
I have a project which uses ActiveCollab 2, and the database structure is new to me - practically everything gets stored to a project_objects table and has a recursively hierarchical relationship:
Record 1234 might be type "Ticket" with parent_id of 123
Record 123 might be type "Category" with parent_id of 12
Record 12 might be type "Milestone" and so on.
Currently there are upwards of 450,000 records in this table and many of the queries in the code reference the name field which does NOT have an index on it. An example value might be Design or Development.
This might be an example query:
SELECT * FROM project_objects WHERE type = "Ticket" and name = "Design"
My problem:
I have a query that is taking upwards of 12-15 seconds, and I have a feeling it's because the name column lacks an index, forcing a full table scan. My understanding of indexes is that adding one to the name field will speed up the reads but slow down the inserts and updates. Does the index need to be rebuilt completely every time a record is added or updated, or is it just altered/appended? I don't want to optimize this query with an index if it means drastically slowing down other parts of the code base which depend on faster writes.
My question:
Assume 100 reads and 100 writes per day: which is more likely to be faster for MySQL, executing the above query on the above table without the index, or having to rebuild the index every time a record is added?
I don't have the knowledge or authority to start running benchmarks, but I would like to offer a suggestion to the client without sounding like a complete novice. Thanks!
EDIT: Here is the table:
CREATE TABLE `project_objects` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`source` varchar(50) DEFAULT NULL,
`type` varchar(30) NOT NULL DEFAULT 'ProjectObject',
`module` varchar(30) NOT NULL DEFAULT 'system',
`project_id` int(10) unsigned NOT NULL DEFAULT '0',
`milestone_id` int(10) unsigned DEFAULT NULL,
`parent_id` int(10) unsigned DEFAULT NULL,
`parent_type` varchar(30) DEFAULT NULL,
`name` varchar(150) DEFAULT NULL,
`body` longtext,
`tags` text,
`state` tinyint(4) NOT NULL DEFAULT '0',
`visibility` tinyint(4) NOT NULL DEFAULT '0',
`priority` tinyint(4) DEFAULT NULL,
`created_on` datetime DEFAULT NULL,
`created_by_id` smallint(5) unsigned NOT NULL DEFAULT '0',
`created_by_name` varchar(100) DEFAULT NULL,
`created_by_email` varchar(100) DEFAULT NULL,
`updated_on` datetime DEFAULT NULL,
`updated_by_id` smallint(5) unsigned DEFAULT NULL,
`updated_by_name` varchar(100) DEFAULT NULL,
`updated_by_email` varchar(100) DEFAULT NULL,
`due_on` date DEFAULT NULL,
`completed_on` datetime DEFAULT NULL,
`completed_by_id` smallint(5) unsigned DEFAULT NULL,
`completed_by_name` varchar(100) DEFAULT NULL,
`completed_by_email` varchar(100) DEFAULT NULL,
`comments_count` smallint(5) unsigned DEFAULT NULL,
`has_time` tinyint(1) unsigned NOT NULL DEFAULT '0',
`is_locked` tinyint(3) unsigned DEFAULT NULL,
`estimate` float(9,2) DEFAULT NULL,
`start_on` date DEFAULT NULL,
`start_on_text` varchar(50) DEFAULT NULL,
`due_on_text` varchar(50) DEFAULT NULL,
`workflow_status` int(4) DEFAULT NULL,
`varchar_field_1` varchar(255) DEFAULT NULL,
`varchar_field_2` varchar(255) DEFAULT NULL,
`integer_field_1` int(11) DEFAULT NULL,
`integer_field_2` int(11) DEFAULT NULL,
`float_field_1` double(10,2) DEFAULT NULL,
`float_field_2` double(10,2) DEFAULT NULL,
`text_field_1` longtext,
`text_field_2` longtext,
`date_field_1` date DEFAULT NULL,
`date_field_2` date DEFAULT NULL,
`datetime_field_1` datetime DEFAULT NULL,
`datetime_field_2` datetime DEFAULT NULL,
`boolean_field_1` tinyint(1) unsigned DEFAULT NULL,
`boolean_field_2` tinyint(1) unsigned DEFAULT NULL,
`position` int(10) unsigned DEFAULT NULL,
`version` int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `type` (`type`),
KEY `module` (`module`),
KEY `project_id` (`project_id`),
KEY `parent_id` (`parent_id`),
KEY `created_on` (`created_on`),
KEY `due_on` (`due_on`),
KEY `milestone_id` (`milestone_id`)
) ENGINE=InnoDB AUTO_INCREMENT=993109 DEFAULT CHARSET=utf8
As @Ray points out, indexes do not have to be rebuilt on every INSERT, UPDATE or DELETE operation. So, if you only want to improve the efficiency of this (or similar) queries, add either an index on (name, type) or on (type, name).
Since you already have an index on (type) alone, I would add the first one:
ALTER TABLE project_objects
ADD INDEX name_type_IDX
(name, type) ;
It may take a few seconds on a busy server, but it only has to be done once, and then all queries with conditions like yours will benefit. It may also improve the efficiency of several other types of queries that involve name only, or name and type:
WHERE name = 'Design' AND type = 'Ticket' --- your query
WHERE name = 'Design' --- condition on `name` only
GROUP BY name --- group by `name`
WHERE name LIKE 'Design%' --- range condition on `name` only
WHERE name = 'Design' --- equality condition on `name`
AND type LIKE 'Ticket%' --- and range condition on `type`
WHERE name = 'Design' --- equality condition on `name`
GROUP BY type --- and group by `type`
GROUP BY name --- group by `name`
, type --- and `type`
The insert cost of adding a single index on the name column is most likely negligible: it will probably amount to a constant-time increase of no more than a few milliseconds per write. You will eat up some extra disk space, but that's usually not a concern. Nothing like the multiple seconds you're experiencing on SELECT performance.
Add the index, enjoy the performance improvement.
BTW: Indexes aren't 'rebuilt' on every insert. They're usually implemented as B-trees, and unless you're deleting frequently, they require very little rebalancing once the tree is more than a few levels deep (and rebalancing at shallow depth is pretty cheap).
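Once the index is in place, a quick way to confirm the query actually uses it (name_type_IDX is the name from the ALTER above):

EXPLAIN SELECT * FROM project_objects
WHERE type = 'Ticket' AND name = 'Design';
-- The "key" column of the EXPLAIN output should show name_type_IDX,
-- and the "rows" estimate should drop sharply.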

MySQL Error - unused primary key value returns already used

A MySQL MyISAM database, whose table currently has 2,280 rows, has locked up twice in the past 6 months or so.
When trying to add a new row it says "Primary key already used", even though the next auto-increment value is higher than the last id in the table. It seems to fix itself when I reset the auto increment.
The database is for a site which gets around 200+ hits a day, with a peak of about 20-25 an hour, so I can't imagine it's due to overload on the database.
I'm trying to figure out why this keeps happening and whether I can fix the issue, so the client doesn't have to keep calling up whenever they can't add a new article to their site.
EDIT: Just to preempt potential comments: I realise that the table and code are not ideal, but I'm not looking for ways to improve them unless that's the root cause of the problem. Poor performance/security I can live with (just), but I do need to figure out what could be causing it to lock the table. Thanks.
Table Definition
CREATE TABLE `articles` (
`article_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`article_meta_desc` text,
`article_meta_keyw` text,
`article_title` varchar(255) DEFAULT NULL,
`article_date` int(10) unsigned DEFAULT NULL,
`article_intro` text,
`article_embed` text,
`article_content` text,
`article_sector` varchar(255) DEFAULT NULL,
`article_type` varchar(255) DEFAULT NULL,
`article_ma` tinyint(1) unsigned DEFAULT NULL,
`article_pn` tinyint(1) unsigned DEFAULT NULL,
`article_cw` tinyint(1) unsigned DEFAULT NULL,
`article_er` tinyint(1) unsigned DEFAULT NULL,
`article_kr` tinyint(1) unsigned DEFAULT NULL,
`article_rc` tinyint(1) unsigned DEFAULT NULL,
`article_rs` tinyint(1) DEFAULT NULL,
`article_img_s` varchar(255) DEFAULT NULL,
`article_img_l` varchar(255) DEFAULT NULL,
`article_link` varchar(255) DEFAULT NULL,
`article_highlight` int(10) unsigned DEFAULT NULL,
`article_slug` text,
`article_alias` text,
`article_hide` smallint(1) unsigned NOT NULL DEFAULT '0',
`article_ad_layout` tinyint(1) unsigned DEFAULT '0',
`article_ad_banner` smallint(5) unsigned DEFAULT NULL,
`article_ad_sky` smallint(5) unsigned DEFAULT NULL,
`article_ad_square1` smallint(5) unsigned DEFAULT NULL,
`article_ad_square2` smallint(5) unsigned DEFAULT NULL,
`article_ad_square3` smallint(5) unsigned DEFAULT NULL,
`article_newswire` smallint(1) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`article_id`),
FULLTEXT KEY `full_text` (`article_title`,`article_intro`,`article_content`)
) ENGINE=MyISAM AUTO_INCREMENT=2384 DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC;
The _ma, _pn, _cw, _er, _kr, _rc and _rs columns are used to show which categories articles belong to. Please ignore the bad use of a table for the ads and sections; the site was made quite a long time ago and I have learnt better since :p
Insert statement
INSERT INTO articles (article_id, article_meta_desc, article_meta_keyw, article_title, article_date, article_intro, article_embed, article_content, article_sector, article_type, article_ma, article_pn, article_cw, article_er, article_kr, article_rc, article_rs, article_img_s, article_img_l, article_link, article_highlight, article_slug, article_alias, article_hide)
VALUES ('','$insert_article_meta_desc','$insert_article_meta_keyw','$insert_article_title','$insert_article_date','$insert_article_intro','$insert_article_embed','$insert_article_content','$insert_article_sector','$insert_article_category','$insert_article_ma','$insert_article_pn','$insert_article_cw','$insert_article_er','$insert_article_kr','$insert_article_rc','$insert_article_rs','$insert_article_img_s','$insert_article_img_l','','$insert_article_highlight','$insert_article_slug','','$insert_article_hide')
Again, it's an old site, please forgive me. I'm not sure if it has something to do with the insert setting the id to '', which would then be replaced by the next auto-increment value; could this cause problems?
Try taking the article_id field name, and the first '' in the values, out of the INSERT statement; the auto-increment ID should then get allocated automatically.
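A sketch of the corrected statement (same columns and variables as above, just without article_id and its '' placeholder; inserting NULL into an AUTO_INCREMENT column would also generate the next value, but omitting the column entirely is cleaner):

INSERT INTO articles (article_meta_desc, article_meta_keyw, article_title, article_date, article_intro, article_embed, article_content, article_sector, article_type, article_ma, article_pn, article_cw, article_er, article_kr, article_rc, article_rs, article_img_s, article_img_l, article_link, article_highlight, article_slug, article_alias, article_hide)
VALUES ('$insert_article_meta_desc','$insert_article_meta_keyw','$insert_article_title','$insert_article_date','$insert_article_intro','$insert_article_embed','$insert_article_content','$insert_article_sector','$insert_article_category','$insert_article_ma','$insert_article_pn','$insert_article_cw','$insert_article_er','$insert_article_kr','$insert_article_rc','$insert_article_rs','$insert_article_img_s','$insert_article_img_l','','$insert_article_highlight','$insert_article_slug','','$insert_article_hide')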

Is there a better index to speed up this query?

The following query is using temporary and filesort. I'd like to avoid that if possible.
SELECT lib_name, description, count(seq_id), floor(avg(size))
FROM libraries l JOIN sequence s ON (l.lib_id=s.lib_id)
WHERE s.is_contig=0 and foreign_seqs=0 GROUP BY lib_name;
The EXPLAIN says:
id,select_type,table,type,possible_keys,key,key_len,ref,rows,Extra
1,SIMPLE,s,ref,libseq,contigs,contigs,4,const,28447,Using temporary; Using filesort
1,SIMPLE,l,eq_ref,PRIMARY,PRIMARY,4,s.lib_id,1,Using where
The tables look like this:
libraries
CREATE TABLE `libraries` (
`lib_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`lib_name` varchar(30) NOT NULL,
`method_id` int(10) unsigned DEFAULT NULL,
`lib_efficiency` decimal(4,2) unsigned DEFAULT NULL,
`insert_avg` decimal(5,2) DEFAULT NULL,
`insert_high` decimal(5,2) DEFAULT NULL,
`insert_low` decimal(5,2) DEFAULT NULL,
`amtvector` decimal(4,2) unsigned DEFAULT NULL,
`description` text,
`foreign_seqs` tinyint(1) NOT NULL DEFAULT '0' COMMENT '1 means the sequences in this library are not ours',
PRIMARY KEY (`lib_id`),
UNIQUE KEY `lib_name` (`lib_name`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=latin1;
sequence
CREATE TABLE `sequence` (
`seq_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`seq_name` varchar(40) NOT NULL DEFAULT '',
`lib_id` int(10) unsigned DEFAULT NULL,
`size` int(10) unsigned DEFAULT NULL,
`add_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`sequencing_date` date DEFAULT '0000-00-00',
`comment` text DEFAULT NULL,
`is_contig` int(10) unsigned NOT NULL DEFAULT '0',
`fasta_seq` longtext,
`primer` varchar(15) DEFAULT NULL,
`gc_count` int(10) DEFAULT NULL,
PRIMARY KEY (`seq_id`),
UNIQUE KEY `seq_name` (`seq_name`),
UNIQUE KEY `libseq` (`lib_id`,`seq_id`),
KEY `primer` (`primer`),
KEY `sgitnoc` (`seq_name`,`is_contig`),
KEY `contigs` (`is_contig`,`seq_name`) USING BTREE,
CONSTRAINT `FK_sequence_1` FOREIGN KEY (`lib_id`) REFERENCES `libraries` (`lib_id`)
) ENGINE=InnoDB AUTO_INCREMENT=61508 DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC;
Are there any changes I can make so the query goes faster? If not, when (for a web application) is it worth putting the results of a query like the above into a MEMORY table?
First strategy: make it faster for MySQL to locate the records you want summarized.
You've already got an index on sequence.is_contig. You might try indexing on libraries.foreign_seqs. I don't know if that will help, but it's worth a try.
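For example (the index name is mine):

ALTER TABLE libraries ADD INDEX foreign_seqs_idx (foreign_seqs);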
Second strategy: see if you can get your sort to run in memory, rather than in a file. Try making the sort_buffer_size parameter bigger. This will consume RAM on your server, but that's what RAM is for.
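A quick way to experiment per session before committing a value to my.cnf (the 8MB figure is only illustrative; tune it against your workload and available RAM):

SET SESSION sort_buffer_size = 8 * 1024 * 1024;  -- affects this connection only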
Third strategy: IF your application needs to do this query a lot but updates the underlying data only a little, take your own suggestion and create a summary table. Perhaps use an EVENT to remake the summary table, and run it once every few minutes. If you're going to follow that strategy, start by creating a view with this data in it and have your app retrieve information from the view. Then get the summary table working, drop the view, and give the summary table the same name as the view. That way your data-model work and your application-design work can proceed independently of each other.
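A minimal sketch of that approach (the table and event names are hypothetical, and the event scheduler must be enabled with SET GLOBAL event_scheduler = ON):

CREATE TABLE lib_summary (
  lib_name    varchar(30) NOT NULL PRIMARY KEY,
  description text,
  seq_count   int unsigned NOT NULL,
  avg_size    int unsigned
) ENGINE=InnoDB;

DELIMITER //
CREATE EVENT ev_refresh_lib_summary
ON SCHEDULE EVERY 5 MINUTE
DO
BEGIN
  -- Rebuild the summary from scratch; cheap because the result set is tiny.
  DELETE FROM lib_summary;
  INSERT INTO lib_summary
    SELECT lib_name, description, COUNT(seq_id), FLOOR(AVG(size))
    FROM libraries l JOIN sequence s ON (l.lib_id = s.lib_id)
    WHERE s.is_contig = 0 AND foreign_seqs = 0
    GROUP BY lib_name;
END//
DELIMITER ;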
Final suggestion: If this is truly slowly-changing summary data, switch to MyISAM. It's a little faster for this kind of data wrangling.