I have a MySQL event that runs every day at 9:45 AM.
BEGIN
  SET @v_ym := (SELECT EXTRACT(YEAR_MONTH FROM DATE_SUB(SYSDATE(), INTERVAL 1 DAY)));
  SELECT CAST(@v_ym AS CHAR);
  SELECT ssaname,
         EXTRACT(YEAR_MONTH FROM DATE_SUB(SYSDATE(), INTERVAL 1 DAY)) AS ym,
         omcr.btscount_ssa(ssaname) AS btscount,
         SUM(CASE WHEN duration > 30 THEN duration ELSE 0 END) AS dur_30
  FROM btsoutage.bts_faults
  WHERE ym = @v_ym
    AND ssaname IS NOT NULL
  GROUP BY ssaname;
END;
In the query, ym is the year-month column and it is indexed. When I substitute the variable @v_ym, the query does a full table scan and the table is locked against further inserts, whereas when I give the value directly it uses the index and the output is fast.
The table contains more than 10 million records.
The CREATE TABLE is:
CREATE TABLE `bts_faults` (
`bts_name` varchar(250) DEFAULT NULL,
`make` varchar(10) DEFAULT NULL,
`occuredtime` datetime DEFAULT NULL,
`clearedtime` datetime DEFAULT NULL,
`duration` int(10) DEFAULT NULL,
`reason` varchar(100) DEFAULT NULL,
`site_type` varchar(10) DEFAULT NULL,
`tech` varchar(5) DEFAULT NULL,
`fault_id` bigint(20) NOT NULL AUTO_INCREMENT,
`ssaname` varchar(20) DEFAULT NULL,
`fault_type` int(1) DEFAULT '0',
`remarks` varchar(250) DEFAULT NULL,
`bts_section` varchar(100) DEFAULT NULL,
`vendor` varchar(50) DEFAULT NULL,
`occureddate` date DEFAULT NULL,
`cleareddate` date DEFAULT NULL,
`ym` varchar(6) DEFAULT NULL,
`updatedate` datetime DEFAULT NULL,
`USERNAME` varchar(100) DEFAULT NULL,
`mask` int(1) DEFAULT '0',
`mask_cat` varchar(10) DEFAULT NULL,
`outage_cat` varchar(20) DEFAULT NULL,
`site_category` varchar(50) DEFAULT NULL,
`escalated_time` datetime DEFAULT NULL,
`zone` varchar(20) DEFAULT NULL,
`zone_fault_reason` varchar(500) DEFAULT NULL,
`zone_fault_remarks` varchar(500) DEFAULT NULL,
`zone_username` varchar(20) DEFAULT NULL,
`zone_updatetime` datetime DEFAULT NULL,
`zone_fault_duration` int(11) DEFAULT NULL,
`fault_category` varchar(250) DEFAULT NULL,
`remarks_1` varchar(2500) DEFAULT NULL,
PRIMARY KEY (`fault_id`),
UNIQUE KEY `UIDX_BTS_FAULTS` (`bts_name`,`occuredtime`),
KEY `indx_btsfaults_ym` (`ym`),
KEY `indx_btsfaults_cleareddate` (`cleareddate`),
KEY `Index_btsfaults_btsname` (`bts_name`),
KEY `index_btsfaults_ssaname` (`ssaname`),
KEY `indx_btsfaults_occureddate` (`occureddate`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=3807469710 DEFAULT CHARSET=latin1
The EXPLAIN plans for the two variants were attached as screenshots (not reproduced here).
What percentage of the table is in the "current month"? If that is more than something like 20%, then there is no fix -- a table scan is likely to be faster. If it is less than 20%, then, as you suspect, @variables may be the villain. In that case, change the test to be
WHERE ym = CAST(EXTRACT(YEAR_MONTH FROM DATE_SUB(SYSDATE(), INTERVAL 1 DAY)) AS CHAR)
  AND ...
Much faster would be to build and maintain a Summary Table with a PRIMARY KEY of day and ssaname. This would have the subtotals for each day. It would be maintained either as the data is INSERTed or each night after midnight.
Then the 9:45 query becomes very fast. Maybe so fast that you don't even need to do it just once a day, but instead "on-demand".
More discussion: http://mysql.rjweb.org/doc.php/summarytables
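For illustration, a minimal sketch of such a summary table and its nightly refresh (all table, column, and index names here are assumptions, not from the original schema):
CREATE TABLE bts_faults_daily (
  day     DATE        NOT NULL,
  ssaname VARCHAR(20) NOT NULL,
  dur_30  BIGINT      NOT NULL,  -- daily sum of outage durations longer than 30
  PRIMARY KEY (day, ssaname)
) ENGINE=InnoDB;

-- Run shortly after midnight; idempotent thanks to the upsert:
INSERT INTO bts_faults_daily (day, ssaname, dur_30)
SELECT occureddate, ssaname,
       SUM(CASE WHEN duration > 30 THEN duration ELSE 0 END)
FROM btsoutage.bts_faults
WHERE occureddate = CURDATE() - INTERVAL 1 DAY
  AND ssaname IS NOT NULL
GROUP BY occureddate, ssaname
ON DUPLICATE KEY UPDATE dur_30 = VALUES(dur_30);
The 9:45 report then scans a handful of summary rows per month instead of millions of fault rows.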
I suggest you use NOW() instead of SYSDATE() -- The former is constant throughout a statement; the latter is not.
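A quick way to see the difference:
SELECT NOW(), SLEEP(2), NOW();         -- both NOW() values are identical
SELECT SYSDATE(), SLEEP(2), SYSDATE(); -- the second SYSDATE() is ~2 seconds later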
bts_faults looks like it might be a terabyte in size. If so, you may want to hear ways to make it smaller.
If the Auto_inc value is at 3.8B, yet there are only 10M rows, does this mean that you are purging 'old' data? Do you want to discuss speeding up the Deletes? (Start a new Question if you do.)
Related
I have a very long-running MySQL query. The query simply joins two tables, which are very huge:
bizevents - nearly 34 million rows
bizevents_actions - nearly 17 million rows
Here is the query:
select
bizevent0_.id as id1_37_,
bizevent0_.json as json2_37_,
bizevent0_.account_id as account_3_37_,
bizevent0_.createdBy as createdB4_37_,
bizevent0_.createdOn as createdO5_37_,
bizevent0_.description as descript6_37_,
bizevent0_.iconCss as iconCss7_37_,
bizevent0_.modifiedBy as modified8_37_,
bizevent0_.modifiedOn as modified9_37_,
bizevent0_.name as name10_37_,
bizevent0_.version as version11_37_,
bizevent0_.fired as fired12_37_,
bizevent0_.preCreateFired as preCrea13_37_,
bizevent0_.entityRefClazz as entityR14_37_,
bizevent0_.entityRefIdAsStr as entityR15_37_,
bizevent0_.entityRefIdType as entityR16_37_,
bizevent0_.entityRefName as entityR17_37_,
bizevent0_.entityRefType as entityR18_37_,
bizevent0_.entityRefVersion as entityR19_37_
from
BizEvent bizevent0_
left outer join BizEvent_actions actions1_ on
bizevent0_.id = actions1_.BizEvent_id
where
bizevent0_.createdOn >= '1969-12-31 19:00:01.0'
and (actions1_.action <> 'SoftLock'
and actions1_.targetRefClazz = 'com.biznuvo.core.orm.domain.org.EmployeeGroup'
and actions1_.targetRefIdAsStr = '1'
or actions1_.action <> 'SoftLock'
and actions1_.objectRefClazz = 'com.biznuvo.core.orm.domain.org.EmployeeGroup'
and actions1_.objectRefIdAsStr = '1')
order by
bizevent0_.createdOn;
Below are the table definitions. As you can see, I have defined indexes on these two tables on all the search columns plus the sort column, but my queries are still running for a very long time. I'd appreciate any more ideas with respect to indexing.
-- bizevent definition
CREATE TABLE `bizevent` (
`id` bigint(20) NOT NULL,
`json` longtext,
`account_id` int(11) DEFAULT NULL,
`createdBy` varchar(50) NOT NULL,
`createdon` datetime(3) DEFAULT NULL,
`description` varchar(255) DEFAULT NULL,
`iconCss` varchar(50) DEFAULT NULL,
`modifiedBy` varchar(50) NOT NULL,
`modifiedon` datetime(3) DEFAULT NULL,
`name` varchar(255) NOT NULL,
`version` int(11) NOT NULL,
`fired` bit(1) NOT NULL,
`preCreateFired` bit(1) NOT NULL,
`entityRefClazz` varchar(255) DEFAULT NULL,
`entityRefIdAsStr` varchar(255) DEFAULT NULL,
`entityRefIdType` varchar(25) DEFAULT NULL,
`entityRefName` varchar(255) DEFAULT NULL,
`entityRefType` varchar(50) DEFAULT NULL,
`entityRefVersion` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `IDXk9kxuuprilygwfwddr67xt1pw` (`createdon`),
KEY `IDXsf3ufmeg5t9ok7qkypppuey7y` (`entityRefIdAsStr`),
KEY `IDX5bxv4g72wxmjqshb770lvjcto` (`entityRefClazz`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
-- bizevent_actions definition
CREATE TABLE `bizevent_actions` (
`BizEvent_id` bigint(20) NOT NULL,
`action` varchar(255) DEFAULT NULL,
`objectBizType` varchar(255) DEFAULT NULL,
`objectName` varchar(255) DEFAULT NULL,
`objectRefClazz` varchar(255) DEFAULT NULL,
`objectRefIdAsStr` varchar(255) DEFAULT NULL,
`objectRefIdType` int(11) DEFAULT NULL,
`objectRefVersion` int(11) DEFAULT NULL,
`targetBizType` varchar(255) DEFAULT NULL,
`targetName` varchar(255) DEFAULT NULL,
`targetRefClazz` varchar(255) DEFAULT NULL,
`targetRefIdAsStr` varchar(255) DEFAULT NULL,
`targetRefIdType` int(11) DEFAULT NULL,
`targetRefVersion` int(11) DEFAULT NULL,
`embedJson` longtext,
`actions_ORDER` int(11) NOT NULL,
PRIMARY KEY (`BizEvent_id`,`actions_ORDER`),
KEY `IDXa21hhagjogn3lar1bn5obl48gll` (`action`),
KEY `IDX7agsatk8u8qvtj37vhotja0ce77` (`targetRefClazz`),
KEY `IDXa7tktl678kqu3tk8mmkt1mo8lbo` (`targetRefIdAsStr`),
KEY `IDXa22eevu7m820jeb2uekkt42pqeu` (`objectRefClazz`),
KEY `IDXa33ba772tpkl9ig8ptkfhk18ig6` (`objectRefIdAsStr`),
CONSTRAINT `FKr9qjs61id11n48tdn1cdp3wot` FOREIGN KEY (`BizEvent_id`) REFERENCES `bizevent` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
By the way, we are using MySQL 5.7.33 on Amazon RDS, with 16 GB RAM and 4 vCPUs.
I also ran EXPLAIN EXTENDED on the query; its output was attached as a screenshot. Appreciate any help.
Initially the bizevent_actions search columns didn't have indexes defined. I have since defined them and retried the query, but to no avail.
One technique that worked for me in a similar situation was abandoning the idea of a JOIN completely and switching to queries by primary key. In more detail: find out which table in the join would give fewer rows on average if you used only that table and its related filter; get the primary keys from that table, and then query the other one using WHERE pk IN (...).
In your case one example would be:
SELECT
bizevent0_.id as id1_37_,
bizevent0_.json as json2_37_,
bizevent0_.account_id as account_3_37_,
...
FROM BizEvent bizevent0_
WHERE
bizevent0_.createdOn >= '1969-12-31 19:00:01.0'
AND bizevent0_.id IN (
SELECT BizEvent_id
FROM BizEvent_actions actions1_
WHERE
actions1_.action <> 'SoftLock'
and actions1_.targetRefClazz = 'com.biznuvo.core.orm.domain.org.EmployeeGroup'
and actions1_.targetRefIdAsStr = '1'
or actions1_.action <> 'SoftLock'
and actions1_.objectRefClazz = 'com.biznuvo.core.orm.domain.org.EmployeeGroup'
and actions1_.objectRefIdAsStr = '1')
ORDER BY
bizevent0_.createdOn;
This assumes that you're not actually willing to select 33+ million rows from BizEvent, though -- your code with LEFT OUTER JOIN would have done exactly that.
I have 2 tables. The first, called stazioni, is where I store live weather data from some weather stations; the second, called archivio2, is where archived daily data is stored. The two tables share the station ID (ID on stazioni, IDStazione on archivio2).
stazioni (1,743 rows)
CREATE TABLE `stazioni` (
`ID` int(10) NOT NULL,
`user` varchar(100) NOT NULL,
`nome` varchar(100) NOT NULL,
`email` varchar(50) NOT NULL,
`localita` varchar(100) NOT NULL,
`provincia` varchar(50) NOT NULL,
`regione` varchar(50) NOT NULL,
`altitudine` int(10) NOT NULL,
`stazione` varchar(100) NOT NULL,
`schermo` varchar(50) NOT NULL,
`installazione` varchar(50) NOT NULL,
`ubicazione` varchar(50) NOT NULL,
`immagine` varchar(100) NOT NULL,
`lat` double NOT NULL,
`longi` double NOT NULL,
`file` varchar(255) NOT NULL,
`url` varchar(255) NOT NULL,
`temperatura` decimal(10,1) DEFAULT NULL,
`umidita` decimal(10,1) DEFAULT NULL,
`pressione` decimal(10,1) DEFAULT NULL,
`vento` decimal(10,1) DEFAULT NULL,
`vento_direzione` decimal(10,1) DEFAULT NULL,
`raffica` decimal(10,1) DEFAULT NULL,
`pioggia` decimal(10,1) DEFAULT NULL,
`rate` decimal(10,1) DEFAULT NULL,
`minima` decimal(10,1) DEFAULT NULL,
`massima` decimal(10,1) DEFAULT NULL,
`orario` varchar(16) DEFAULT NULL,
`online` int(1) NOT NULL DEFAULT '0',
`tipo` int(1) NOT NULL DEFAULT '0',
`webcam` varchar(255) DEFAULT NULL,
`webcam2` varchar(255) DEFAULT NULL,
`condizioni` varchar(255) DEFAULT NULL,
`Data2` datetime DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
archivio2 (2,127,347 rows)
CREATE TABLE `archivio2` (
`ID` int(10) NOT NULL,
`IDStazione` int(4) NOT NULL DEFAULT '0',
`localita` varchar(100) NOT NULL,
`temp_media` decimal(10,1) DEFAULT NULL,
`temp_minima` decimal(10,1) DEFAULT NULL,
`temp_massima` decimal(10,1) DEFAULT NULL,
`pioggia` decimal(10,1) DEFAULT NULL,
`pressione` decimal(10,1) DEFAULT NULL,
`vento` decimal(10,1) DEFAULT NULL,
`raffica` decimal(10,1) DEFAULT NULL,
`records` int(10) DEFAULT NULL,
`Data2` datetime DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
The indexes that I set
-- Indexes for table `archivio2`
--
ALTER TABLE `archivio2`
ADD PRIMARY KEY (`ID`),
ADD KEY `IDStazione` (`IDStazione`),
ADD KEY `Data2` (`Data2`);
-- Indexes for table `stazioni`
--
ALTER TABLE `stazioni`
ADD PRIMARY KEY (`ID`),
ADD KEY `Tipo` (`Tipo`);
ALTER TABLE `stazioni` ADD FULLTEXT KEY `localita` (`localita`);
On a map, a calendar control supplies the date used to search the archivio2 table via this INNER JOIN query (example date shown):
SELECT *, c.pioggia AS rain, c.raffica AS raff, c.vento AS wind, c.pressione AS press
FROM stazioni as o
INNER JOIN archivio2 as c ON o.ID = c.IDStazione
WHERE c.Data2 LIKE '2019-01-01%'
Everything works, but the time needed to show the result is really slow (4-5 seconds), even though the query execution time seems to be fine (about 0.5-1.0 s).
I tried executing the query in phpMyAdmin, and the results are the same: execution is quick, but the time to show the result is extremely slow.
EXPLAIN query result
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE o ALL PRIMARY,ID NULL NULL NULL 1743 NULL
1 SIMPLE c ref IDStazione,Data2 IDStazione 4 sccavzuq_rete.o.ID 1141 Using where
UPDATE: the query runs fine if I remove the index from IDStazione. But that way I lose all the advantages and speed on other queries... why does only this query become slow when that field is indexed?
In your WHERE clause
WHERE c.Data2 LIKE '2019-01-01%'
the value of Data2 must be cast to a string. No index can be used for that condition.
Change it to
WHERE c.Data2 >= '2019-01-01' AND c.Data2 < '2019-01-01' + INTERVAL 1 DAY
This way the engine should be able to use the index on (Data2).
Now check the EXPLAIN result. I would expect that the table order is swapped and the key column will show Data2 (for c) and ID (for o).
(Fixing the DATE comparison is the main performance solution; what follows is less critical.)
The tables are much bigger than necessary. Size impacts disk space and, to some extent, speed.
You have 1743 stations, yet the datatype is a 32-bit (4-byte) number (INT). SMALLINT UNSIGNED would allow for 64K stations and use only 2 bytes.
Does it get really, really hot there? Like 999999999.9 degrees? DECIMAL(10,1) takes 5 bytes; DECIMAL(4,1) takes only 3 and allows up to 999.9 degrees. DECIMAL(3,1) has a max of 99.9 and takes only 2 bytes.
What is "localita varchar(100)" doing in the big table? Seems like you could JOIN to the stations table when you need it? Removing that might cut the table size in half.
I have a table (see the MySQL fiddle) with about 500k records.
CREATE TABLE IF NOT EXISTS `p_transactions` (
`transaction_id` bigint(10) unsigned NOT NULL,
`amount` decimal(19,2) NOT NULL,
`dt` bigint(1) NOT NULL,
`transaction_status` int(1) NOT NULL,
`transaction_type` varchar(15) NOT NULL,
`payment_method` varchar(25) NOT NULL,
`notes` text NOT NULL,
`member_id` int(10) unsigned NOT NULL,
`new_amount` decimal(19,2) NOT NULL,
`paid_amount` decimal(19,2) NOT NULL,
`secret_code` char(40) NOT NULL,
`internal_status` varchar(40) NOT NULL,
`ip_addr` varchar(15) NOT NULL,
`description` text NOT NULL,
`seller_transaction_id` varchar(50) DEFAULT NULL,
`return_url` varchar(255) DEFAULT NULL,
`fail_url` varchar(255) DEFAULT NULL,
`success_url` varchar(255) DEFAULT NULL,
`result_url` varchar(255) DEFAULT NULL,
`user_fee` decimal(19,3) DEFAULT '0.000',
`currency` char(255) DEFAULT 'USD',
`gateway_transaction_id` char(255) DEFAULT NULL,
`load_amount` decimal(19,2) NOT NULL,
`transaction_mode` varchar(1) NOT NULL DEFAULT '',
`p_fee` decimal(19,2) NOT NULL,
`country` varchar(2) NOT NULL,
`email` varchar(255) NOT NULL,
`vat` decimal(19,2) NOT NULL DEFAULT '0.00',
`name` varchar(255) NOT NULL,
`bdate` varchar(255) NOT NULL,
`child_method` varchar(255) NOT NULL,
`processing_fee` decimal(19,2) NOT NULL DEFAULT '0.00',
`flat_fee` varchar(1) NOT NULL DEFAULT 'n',
`user_fee_sum` decimal(19,2) NOT NULL DEFAULT '0.00',
`p_fee_sum` decimal(19,2) NOT NULL DEFAULT '0.00',
`dt_open` bigint(1) NOT NULL DEFAULT '0',
`user_fee_type` varchar(1) NOT NULL DEFAULT 'r',
`custom_gateway_fee` decimal(19,2) NOT NULL DEFAULT '0.00',
`paid_currency` varchar(3) NOT NULL DEFAULT 'USD',
`paid_microtime` bigint(10) unsigned NOT NULL,
`check_ballance` varchar(1) NOT NULL DEFAULT 'n',
PRIMARY KEY (`transaction_id`),
KEY `member_id` (`member_id`),
KEY `payment_method` (`payment_method`),
KEY `child_method` (`child_method`),
KEY `check_ballance` (`check_ballance`),
KEY `dt` (`dt`),
KEY `transaction_type` (`transaction_type`),
KEY `paid_microtime` (`paid_microtime`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
When I execute this query:
SELECT *
FROM `p_transactions`
WHERE dt >= 1517443200
AND dt <= 1523404799
AND member_id = 2051
ORDER BY `paid_microtime` DESC
LIMIT 50;
it runs in 0.000 sec (+ 0.016 sec network),
but if I add the condition AND transaction_status = 7 to the query:
SELECT *
FROM `p_transactions`
WHERE dt >= 1517443200
AND dt <= 1523404799
AND member_id = 2051
AND transaction_status = 7
ORDER BY `paid_microtime` DESC
LIMIT 50
the query runs 12.938 sec (+ 0.062 sec network).
Please help me find out the reason for this behavior.
P.S. There was an index on transaction_status, and it increased execution time even more.
Add a suitable index, such as:
ON p_transactions (member_id, dt)
or
ON p_transactions (member_id, dt, transaction_status)
We want the member_id column as the leading column in the index because of the equality comparison, and we expect the result to be a substantially smaller subset of the entire table. We want the dt column after that, because of the "range scan" on it.
Including additional columns in the index may allow MySQL to check that condition using values from the index, without a visit/lookup of the row in the underlying table pages.
Either of these indexes would be suitable for both of the queries shown in the question.
Use EXPLAIN to see the execution plan... which index is being used.
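For example, creating the second suggested index and checking the plan (the index name is illustrative):
CREATE INDEX ix_member_dt_status
  ON p_transactions (member_id, dt, transaction_status);

EXPLAIN
SELECT *
FROM p_transactions
WHERE dt >= 1517443200
  AND dt <= 1523404799
  AND member_id = 2051
  AND transaction_status = 7
ORDER BY paid_microtime DESC
LIMIT 50;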
There's really no getting around the "Using filesort" operation, since we're pulling a small subset of the entire table.
(If we were pulling the entire table (or a huge subset), we might be able to avoid an expensive sort operation with an access plan that pulls rows in reverse index order, using an index with a leading column of paid_microtime.)
For the original query, have these:
INDEX(member_id, dt)
INDEX(member_id, paid_microtime)
For the secondary query, have
INDEX(transaction_status, member_id, dt)
INDEX(transaction_status, member_id, paid_microtime)
Without getting into the details of the distribution of the data values, we cannot explain why one query is so much slower; however, my 4 indexes should make both queries run faster most of the time.
More discussion of how I came up with those indexes (and why (member_id, dt, transaction_status) is not so good): http://mysql.rjweb.org/doc.php/index_cookbook_mysql
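In DDL form, those four indexes might be added like this (index names are illustrative):
ALTER TABLE p_transactions
  ADD INDEX ix_member_dt        (member_id, dt),
  ADD INDEX ix_member_pm        (member_id, paid_microtime),
  ADD INDEX ix_status_member_dt (transaction_status, member_id, dt),
  ADD INDEX ix_status_member_pm (transaction_status, member_id, paid_microtime);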
Probably through poor database design, the following really simple query is taking ~1.5 minutes to run.
SELECT s.title, t.name AS team_name
FROM stories AS s
JOIN teams AS t ON s.team_id = t.id
WHERE s.pubdate >= "1970-01-01 00:00"
ORDER BY s.hits /* <-- here's the problem */
LIMIT 3 OFFSET 0
The problem is that the stories table is fairly big, with ~1.5m rows, and there are a ton of unique values for hits (this column logs the hits on each story).
Take out the order clause and it resolves almost instantly.
Question: what can I do to optimise queries like this? Presumably I shouldn't apply an index to hits, since no direct look-ups take place on that column.
[UPDATE]
SHOW CREATE TABLE for all tables concerned:
CREATE TABLE stories (
`id` varchar(11) NOT NULL,
`link` text NOT NULL,
`title` varchar(255) CHARACTER SET utf8 NOT NULL,
`description` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
`pubdate` datetime NOT NULL,
`source_id` varchar(11) NOT NULL,
`team_id` varchar(11) NOT NULL,
`hits` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `Unique combo (title + date)` (`title`,`pubdate`),
KEY `team (FK)` (`team_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
CREATE TABLE teams (
`id` varchar(11) NOT NULL,
`is_live` enum('y') DEFAULT NULL,
`name` varchar(50) NOT NULL,
`short_name` varchar(12) DEFAULT NULL,
`server` varchar(11) DEFAULT NULL,
`url_token` varchar(255) NOT NULL,
`league` varchar(11) NOT NULL,
`away_game_id` varchar(255) DEFAULT NULL,
`digest_list_id` varchar(25) DEFAULT NULL,
`twitter_handle` varchar(255) DEFAULT NULL,
`no_official_news` enum('y') DEFAULT NULL,
`alt_names` varchar(255) DEFAULT NULL,
`no_use_nickname` enum('y') DEFAULT NULL,
`official_hashtag` varchar(30) DEFAULT NULL,
`merge_news_and_fans` enum('y') DEFAULT NULL,
`colour_1` varchar(6) NOT NULL,
`colour_2` varchar(6) DEFAULT NULL,
`colour_3` varchar(6) DEFAULT NULL,
`link_colour_modifier` float DEFAULT NULL,
`alt_link_colour_modifier` float DEFAULT NULL,
`title_shade` enum('dark','light') NOT NULL,
`shirt_style` enum('vert_stripes','horiz_stripes','vert_stripes_thin','horiz_stripes_thin','vert_split','horiz_split') DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `URL token` (`url_token`),
KEY `league (FK)` (`league`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
Consider removing the filter on pubdate if the user does not need it. It confuses the optimizer.
INDEX(hits, pubdate, title)
will probably help the query the most. It is "covering".
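A sketch of adding it (the index name is illustrative):
ALTER TABLE stories
  ADD INDEX ix_hits_pubdate_title (hits, pubdate, title);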
The reason why removing ORDER BY runs fast: Without it, it gives you any 3 rows. With it, and without a useful index, it needs to sort the 1.5M rows to discover the 3 with the least number of hits.
Perhaps you wanted ORDER BY s.hits DESC? -- to get those with the most hits.
What I'm dealing with:
I have a project which uses ActiveCollab 2, and the database structure is new to me - practically everything gets stored to a project_objects table and has a recursively hierarchical relationship:
Record 1234 might be type "Ticket" with parent_id of 123
Record 123 might be type "Category" with parent_id of 12
Record 12 might be type "Milestone" and so on.
Currently there are upwards of 450,000 records in this table and many of the queries in the code reference the name field which does NOT have an index on it. An example value might be Design or Development.
This might be an example query:
SELECT * FROM project_objects WHERE type = "Ticket" and name = "Design"
My problem:
I have a query that is taking upwards of 12-15 seconds, and I have a feeling it's from that
name column lacking an index and requiring a full table scan. My understanding of indexes is that if I add one to the name field, it'll speed up reads but slow down inserts and updates. Does the index need to be rebuilt completely every time a record is added or updated, or is it just altered/appended? I don't want to optimize this query with an index if it means drastically slowing down other parts of the code base which depend on faster writes.
My question:
Assume 100 reads and 100 writes per day, which is more likely to be a faster process for MySQL - executing the above query on the above table without the index or having to rebuild the index every time a record is added?
I don't have the knowledge or authority to start running benchmarks, but I would like to offer a suggestion to the client without sounding completely novice. Thanks!
EDIT: Here is the table:
CREATE TABLE `project_objects` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`source` varchar(50) DEFAULT NULL,
`type` varchar(30) NOT NULL DEFAULT 'ProjectObject',
`module` varchar(30) NOT NULL DEFAULT 'system',
`project_id` int(10) unsigned NOT NULL DEFAULT '0',
`milestone_id` int(10) unsigned DEFAULT NULL,
`parent_id` int(10) unsigned DEFAULT NULL,
`parent_type` varchar(30) DEFAULT NULL,
`name` varchar(150) DEFAULT NULL,
`body` longtext,
`tags` text,
`state` tinyint(4) NOT NULL DEFAULT '0',
`visibility` tinyint(4) NOT NULL DEFAULT '0',
`priority` tinyint(4) DEFAULT NULL,
`created_on` datetime DEFAULT NULL,
`created_by_id` smallint(5) unsigned NOT NULL DEFAULT '0',
`created_by_name` varchar(100) DEFAULT NULL,
`created_by_email` varchar(100) DEFAULT NULL,
`updated_on` datetime DEFAULT NULL,
`updated_by_id` smallint(5) unsigned DEFAULT NULL,
`updated_by_name` varchar(100) DEFAULT NULL,
`updated_by_email` varchar(100) DEFAULT NULL,
`due_on` date DEFAULT NULL,
`completed_on` datetime DEFAULT NULL,
`completed_by_id` smallint(5) unsigned DEFAULT NULL,
`completed_by_name` varchar(100) DEFAULT NULL,
`completed_by_email` varchar(100) DEFAULT NULL,
`comments_count` smallint(5) unsigned DEFAULT NULL,
`has_time` tinyint(1) unsigned NOT NULL DEFAULT '0',
`is_locked` tinyint(3) unsigned DEFAULT NULL,
`estimate` float(9,2) DEFAULT NULL,
`start_on` date DEFAULT NULL,
`start_on_text` varchar(50) DEFAULT NULL,
`due_on_text` varchar(50) DEFAULT NULL,
`workflow_status` int(4) DEFAULT NULL,
`varchar_field_1` varchar(255) DEFAULT NULL,
`varchar_field_2` varchar(255) DEFAULT NULL,
`integer_field_1` int(11) DEFAULT NULL,
`integer_field_2` int(11) DEFAULT NULL,
`float_field_1` double(10,2) DEFAULT NULL,
`float_field_2` double(10,2) DEFAULT NULL,
`text_field_1` longtext,
`text_field_2` longtext,
`date_field_1` date DEFAULT NULL,
`date_field_2` date DEFAULT NULL,
`datetime_field_1` datetime DEFAULT NULL,
`datetime_field_2` datetime DEFAULT NULL,
`boolean_field_1` tinyint(1) unsigned DEFAULT NULL,
`boolean_field_2` tinyint(1) unsigned DEFAULT NULL,
`position` int(10) unsigned DEFAULT NULL,
`version` int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `type` (`type`),
KEY `module` (`module`),
KEY `project_id` (`project_id`),
KEY `parent_id` (`parent_id`),
KEY `created_on` (`created_on`),
KEY `due_on` (`due_on`),
KEY `milestone_id` (`milestone_id`)
) ENGINE=InnoDB AUTO_INCREMENT=993109 DEFAULT CHARSET=utf8
As @Ray points out, indexes do not have to be rebuilt on every INSERT, UPDATE, or DELETE operation. So, if you only want to improve the efficiency of this (or similar) queries, add either an index on (name, type) or on (type, name).
Since you already have an index on (type) alone, I would add the first one:
ALTER TABLE project_objects
ADD INDEX name_type_IDX
(name, type) ;
It may take a few seconds on a busy server, but it only has to be done once, and then all the queries with conditions like yours will benefit. It may also improve the efficiency of several other types of queries that involve name only, or name and type:
WHERE name = 'Design' AND type = 'Ticket' --- your query
WHERE name = 'Design' --- condition on `name` only
GROUP BY name --- group by `name`
WHERE name LIKE 'Design%' --- range condition on `name` only
WHERE name = 'Design' --- equality condition on `name`
AND type LIKE 'Ticket%' --- and range condition on `type`
WHERE name = 'Design' --- equality condition on `name`
GROUP BY type --- and group by `type`
GROUP BY name --- group by `name`
, type --- and `type`
The insert cost of adding a single-column index on the name column is most likely negligible -- it will probably amount to a constant time increase of no more than a few milliseconds. You will eat up some extra disk space, but that's usually not a concern. Nothing like the multiple seconds you're experiencing in select performance.
Add the index, enjoy the performance improvement.
BTW: Indexes aren't "rebuilt" on every insert. They're usually implemented as B-trees and, unless you're deleting frequently, should require very little rebalancing once you get larger than a few levels (and rebalancing at shallow depth is pretty cheap).