Getting locks on a table due to SELECT queries? - MySQL

We have been running the SELECT query below for a long time, but today we are seeing many locks on the database.
Please help me understand how to resolve locks caused by SELECT queries.
The table is very small (about 300 KB). We ran OPTIMIZE TABLE, but no luck.
The query info and table structure are below.
Req-SQL:[select max(fullname) from prod_sets where name='view_v01' for update]
Req-Time: 5 sec
Blocker-SQL:[]
Blocker-Command:[Sleep]
Blocker-Time: 73 sec
Req-SQL:[select max(fullname) from prod_sets where name='view_v01' for update]
Req-Time: 22 sec
Blocker-SQL:[]
Blocker-Command:[Sleep]
Blocker-Time: 73 sec
CREATE TABLE `prod_sets` (
`modified` datetime DEFAULT NULL,
`create` datetime DEFAULT NULL,
`name` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`fullname` decimal(12,0) DEFAULT NULL,
UNIQUE KEY `idx_n` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin
Explain Plan:
mysql> explain select max(fullname) from prod_sets where name='view_v01' for update;
+----+-------------+---------------+-------+---------------+----------+---------+-------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------+---------------+----------+---------+-------+------+-------+
| 1 | SIMPLE | prod_sets | const | idx_name | idx_name | 53 | const | 1 | |
+----+-------------+---------------+-------+---------------+----------+---------+-------+------+-------+
1 row in set (0.01 sec)

If you are locking some rows of a table, you must explicitly release the locks once your work is done.
Use:
UNLOCK TABLES;
or kill the blocking connection:
KILL put_process_id_here;
(a sketch for finding the blocking connection follows the links below)
Refer to these links for further reading:
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
http://lmorgado.com/blog/2008/09/10/mysql-locks-and-a-bit-of-the-query-cache/
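To see which connection is actually the blocker, InnoDB exposes the lock-wait graph through information_schema on MySQL 5.5-5.7 (on 8.0 the equivalents live in performance_schema, e.g. data_lock_waits). A minimal diagnostic sketch, assuming those tables are available:
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_started         AS blocking_trx_started
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;
-- blocking_thread is the connection id to COMMIT from its own session or to pass to KILL
A blocker in the Sleep state, as in the output above, is usually a connection that started a transaction, touched the rows, and never committed, so committing or rolling back that transaction (or killing the connection) releases the row locks.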

Assuming you know what FOR UPDATE means: is there any reason name is DEFAULT NULL? If not, I would make name the PRIMARY KEY. InnoDB's primary key is clustered, so it makes access to fullname faster:
CREATE TABLE `prod_sets` (
`modified` datetime DEFAULT NULL,
`create` datetime DEFAULT NULL,
`name` varchar(50) COLLATE latin1_bin NOT NULL,
`fullname` decimal(12,0) DEFAULT NULL,
PRIMARY KEY (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin
Or simply add the following index:
ALTER TABLE prod_sets ADD INDEX(name, fullname);
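After adding it, you can re-run EXPLAIN to confirm the optimizer now resolves the statement from the new index; a quick check, assuming the ALTER above has been applied:
EXPLAIN SELECT MAX(fullname) FROM prod_sets WHERE name = 'view_v01' FOR UPDATE;
-- compare key/Extra with the plan above: the goal is for MAX(fullname) to be
-- answered from the (name, fullname) index entries for name = 'view_v01' alone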

Related

Struggling to update a value using connection.query("UPDATE ...") when a customer selects a product - NodeJS/MySQL

I am looking to decrease the stock quantity (stockQuantity) in main_product_info each time a customer makes a purchase.
I am trying to use the code below, but it does not seem to work. I am new to databases, so I am not sure if I am using the correct code.
Also, in the code below I am updating the quantity directly against producNumber. Should I instead use a JOIN and include main_products_sub_category (as it has the primary key)? I have seen a few posts about this but couldn't understand them; I just want to understand the correct way of doing it.
Error: stockQuantity is not defined.
Any suggestions, please.
connection.query("UPDATE main_product_info SET stockQuantity=? WHERE main_product_info.producNumber=?", [stockQuantity - 1, checkQuantity], function (err, result) {});
// here I am looking to decrease stockQuantity by 1
Here is the relevant schema, from SHOW CREATE TABLE:
mysql> SHOW CREATE TABLE main_Products_category;
CREATE TABLE `main_products_category` (
  `productId` int NOT NULL AUTO_INCREMENT,
  `productCategory` varchar(45) NOT NULL,
  PRIMARY KEY (`productId`)
) ENGINE=InnoDB AUTO_INCREMENT=105 DEFAULT CHARSET=utf8

mysql> SHOW CREATE TABLE main_products_sub_category;
CREATE TABLE `main_products_sub_category` (
  `main_Products_sub_category_id` varchar(45) NOT NULL,
  `main_Products_sub_category_name` varchar(45) DEFAULT NULL,
  `main_Products_category_productId` int NOT NULL,
  PRIMARY KEY (`main_Products_sub_category_id`),
  KEY `fk_main_Products_sub_category_main_Products_category1_idx` (`main_Products_category_productId`),
  CONSTRAINT `fk_main_Products_sub_category_main_Products_category1` FOREIGN KEY (`main_Products_category_productId`) REFERENCES `main_products_category` (`productId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

mysql> SHOW CREATE TABLE main_product_info;
CREATE TABLE `main_product_info` (
  `producInfoId` varchar(45) NOT NULL,
  `productDescription` mediumtext,
  `stockQuantity` int DEFAULT NULL,
  `producNumber` int DEFAULT NULL,
  `price` varchar(255) DEFAULT NULL,
  `product_name` varchar(255) DEFAULT NULL,
  `main_Products_sub_category_main_Products_sub_category_id` varchar(45) NOT NULL,
  PRIMARY KEY (`producInfoId`),
  KEY `fk_main_Product_Info_main_Products_sub_category1_idx` (`main_Products_sub_category_main_Products_sub_category_id`),
  CONSTRAINT `fk_main_Product_Info_main_Products_sub_category1` FOREIGN KEY (`main_Products_sub_category_main_Products_sub_category_id`) REFERENCES `main_products_sub_category` (`main_Products_sub_category_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
You would do it in MySQL directly, without the parameter:
connection.query("UPDATE main_product_info SET stockQuantity = stockQuantity - 1 WHERE main_product_info.producNumber=?", [checkQuantity], function (err, result) {});
// this decreases stockQuantity by 1 on the database side
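If you also want to guard against overselling, the stock check can go into the same statement; a plain-SQL sketch (the literal 42 is just a placeholder for the product number you would bind from Node):
UPDATE main_product_info
SET stockQuantity = stockQuantity - 1
WHERE producNumber = 42        -- placeholder: bind the purchased product number here
  AND stockQuantity > 0;       -- refuse to go below zero
-- if this reports 0 affected rows (result.affectedRows in the Node callback),
-- the product was already out of stock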

Huge innodb tables with SELECT performance issue

I have two huge InnoDB tables (pages: 40M+ rows, 30+ GB and stat: 45M+ rows, 10+ GB). I have a query that selects rows from the join of these two tables, and it used to take about a second to execute. Recently it has been taking more than 20 seconds (sometimes up to a few minutes) for the exact same query. I suspected that, with lots of inserts and updates, it might need optimization. I ran OPTIMIZE TABLE using phpMyAdmin but saw no improvement. I've Googled a lot but couldn't find anything that helps with this situation.
The query I mentioned earlier looks like below:
SELECT `c`.`unique`, `c`.`pub`
FROM `pages` `c`
LEFT JOIN `stat` `s` ON `c`.`unique`=`s`.`unique`
WHERE `s`.`isc`='1'
AND `s`.`haa`='0'
AND (`pubID`='24')
ORDER BY `eid` ASC LIMIT 0, 10
These are the table structures:
CREATE TABLE `pages` (
`eid` int(10) UNSIGNED NOT NULL,
`ti` text COLLATE utf8_persian_ci NOT NULL,
`fat` text COLLATE utf8_persian_ci NOT NULL,
`de` text COLLATE utf8_persian_ci NOT NULL,
`fad` text COLLATE utf8_persian_ci NOT NULL,
`pub` varchar(100) COLLATE utf8_persian_ci NOT NULL,
`pubID` int(10) UNSIGNED NOT NULL,
`pubn` text COLLATE utf8_persian_ci NOT NULL,
`unique` tinytext COLLATE utf8_persian_ci NOT NULL,
`pi` tinytext COLLATE utf8_persian_ci NOT NULL,
`kw` text COLLATE utf8_persian_ci NOT NULL,
`fak` text COLLATE utf8_persian_ci NOT NULL,
`te` text COLLATE utf8_persian_ci NOT NULL,
`fae` text COLLATE utf8_persian_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_persian_ci;
ALTER TABLE `pages`
ADD PRIMARY KEY (`eid`),
ADD UNIQUE KEY `UNIQ` (`unique`(128)),
ADD KEY `pub` (`pub`),
ADD KEY `unique` (`unique`(128)),
ADD KEY `pubID` (`pubID`) USING BTREE;
ALTER TABLE `pages` ADD FULLTEXT KEY `faT` (`fat`);
ALTER TABLE `pages` ADD FULLTEXT KEY `faA` (`fad`,`fae`);
ALTER TABLE `pages` ADD FULLTEXT KEY `faK` (`fak`);
ALTER TABLE `pages` ADD FULLTEXT KEY `pubn` (`pubn`);
ALTER TABLE `pages` ADD FULLTEXT KEY `faTAK` (`fat`,`fad`,`fak`,`fae`);
ALTER TABLE `pages` ADD FULLTEXT KEY `ab` (`de`,`te`);
ALTER TABLE `pages` ADD FULLTEXT KEY `Ti` (`ti`);
ALTER TABLE `pages` ADD FULLTEXT KEY `Kw` (`kw`);
ALTER TABLE `pages` ADD FULLTEXT KEY `TAK` (`ti`,`de`,`kw`,`te`);
ALTER TABLE `pages`
MODIFY `eid` int(10) UNSIGNED NOT NULL AUTO_INCREMENT;
CREATE TABLE `stat` (
`sid` int(10) UNSIGNED NOT NULL,
`unique` tinytext COLLATE utf8_persian_ci NOT NULL,
`haa` tinyint(1) UNSIGNED NOT NULL,
`isc` tinyint(1) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_persian_ci;
ALTER TABLE `stat`
ADD PRIMARY KEY (`sid`),
ADD UNIQUE KEY `Unique` (`unique`(128)),
ADD KEY `isc` (`isc`),
ADD KEY `haa` (`haa`);
ALTER TABLE `stat`
MODIFY `sid` int(10) UNSIGNED NOT NULL AUTO_INCREMENT;
The following query took only 0.0126 seconds, with 38685601 total results according to phpMyAdmin:
SELECT `sid` FROM `stat` WHERE `isc`='1' AND `haa`='0'
and this one took 0.0005 seconds with 5159484 total results
SELECT `eid`, `unique`, `pubn`, `pi` FROM `pages` WHERE `pubID`='24'
Am I missing something? Can anybody help?
The slowdown is probably due to scanning so many rows, and that is now more than can fit in cache. So, let's try to improve the query.
Replace INDEX(pubID) with INDEX(pubID, eid) -- This may allow both the WHERE and ORDER BY to be handled by the index, thereby avoiding a sort.
Replace TINYTEXT with VARCHAR(255) or some smaller limit. This may speed up tmp tables.
Don't use a prefix index on eid -- it's an INT!
Don't use UNIQUE with prefixing -- UNIQUE(x(128)) only checks the uniqueness of the first 128 characters!
Once you change to VARCHAR(255) (or less), you can apply UNIQUE to the entire column.
The biggest performance issue is filtering on two tables -- can you move the status flags into the main table?
Change LEFT JOIN to JOIN.
What does unique look like? If it is a "UUID", that could further explain the trouble.
If that is a UUID that is 39 characters, the string can be converted to a 16-byte column for further space savings (and speedup); see the sketch after this answer. Let's discuss this further if necessary.
5 million results in 0.5ms is bogus -- it was fetching from the Query cache. Either turn off the QC or run with SELECT SQL_NO_CACHE...
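On the UUID point above, a conversion sketch; it only applies if the column really holds hyphenated hexadecimal UUIDs, and the new column name is my own invention:
ALTER TABLE stat ADD COLUMN unique_bin BINARY(16);               -- 16 bytes instead of a 36+ character string
UPDATE stat SET unique_bin = UNHEX(REPLACE(`unique`, '-', ''));  -- strip the dashes, pack the hex into binary
-- on a 45M-row table, run the UPDATE in batches; afterwards index and join on unique_bin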
+1 to @RickJames' answer, but following it I have done a test.
I would also recommend you do not use the name unique for a column name, because it's an SQL reserved word.
ALTER TABLE pages
CHANGE `unique` objectId VARCHAR(128) NOT NULL COMMENT 'Document Object Identifier',
DROP KEY pubId,
ADD KEY bktest1 (pubId, eid, objectId, pub);
ALTER TABLE stat
CHANGE `unique` objectId VARCHAR(128) NOT NULL COMMENT 'Document Object Identifier',
DROP KEY `unique`,
ADD UNIQUE KEY bktest2 (objectId, isc, haa);
mysql> explain SELECT `c`.`objectId`, `c`.`pub` FROM `pages` `c` JOIN `stat` `s` ON `c`.`objectId`=`s`.`objectId` WHERE `s`.`isc`='1' AND `s`.`haa`='0' AND (`pubID`='24') ORDER BY `eid` ASC LIMIT 0, 10;
+----+-------------+-------+------------+--------+-------------------------+---------+---------+-----------------------------+------+----------+--------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+--------+-------------------------+---------+---------+-----------------------------+------+----------+--------------------------+
| 1 | SIMPLE | c | NULL | ref | unique,unique_2,bktest1 | bktest1 | 4 | const | 1 | 100.00 | Using where; Using index |
| 1 | SIMPLE | s | NULL | eq_ref | bktest2,haa,isc | bktest2 | 388 | test.c.objectId,const,const | 1 | 100.00 | Using index |
+----+-------------+-------+------------+--------+-------------------------+---------+---------+-----------------------------+------+----------+--------------------------+
By creating the multi-column indexes, this makes them covering indexes, and you see "Using index" in the EXPLAIN report.
It's important to put eid second in the bktest1 index, so you avoid a filesort.
This is the best you can hope to optimize this query without denormalizing or partitioning the tables.
Next you should make sure your buffer pool is large enough to hold all the requested data.
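To gauge whether the buffer pool is in the right ballpark, compare its size with the data-plus-index volume of the two tables; a quick sketch, assuming you run it from the schema that holds them:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS total_mb
FROM information_schema.TABLES
WHERE table_schema = DATABASE()
  AND table_name IN ('pages', 'stat');
-- if the working set is much larger than the buffer pool, raise
-- innodb_buffer_pool_size in my.cnf (often a large share of RAM on a
-- dedicated database server); it is resizable online as of MySQL 5.7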

Where oh where is my FULLTEXT index?

Okay, I'm fully prepared to be told this is something dumb. I've got a table like so:
mysql> show create table node\G
*************************** 1. row ***************************
Table: node
Create Table: CREATE TABLE `node` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`graph` varchar(100) CHARACTER SET latin1 DEFAULT NULL,
`subject` varchar(200) NOT NULL,
`predicate` varchar(200) NOT NULL,
`object` mediumtext NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `nodeindex` (`graph`(20),`subject`(100),`predicate`(100),`object`(100)),
KEY `ix_node_subject` (`subject`),
KEY `ix_node_graph` (`graph`),
KEY `ix_node_object` (`object`(255)),
KEY `ix_node_predicate` (`predicate`),
KEY `node_po` (`predicate`,`object`(130)),
KEY `node_so` (`subject`,`object`(130)),
KEY `node_sp` (`subject`,`predicate`(130)),
FULLTEXT KEY `node_search` (`object`)
) ENGINE=MyISAM AUTO_INCREMENT=574161093 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
Note the line FULLTEXT KEY `node_search` (`object`). When I try the query
mysql> select count(*) from node where match(object) against ('Entrepreneur');
I get the error
ERROR 1191 (HY000): Can't find FULLTEXT index matching the column list
Duh?
Update
I tried an ANALYZE TABLE, to no avail:
mysql> analyze table node;
+------------------+---------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+------------------+---------+----------+----------+
| xxxxxxxxxxx.node | analyze | status | OK |
+------------------+---------+----------+----------+
1 row in set (21 min 13.86 sec)
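One way to double-check which indexes the server actually sees on this table is to query information_schema directly; a diagnostic sketch:
SELECT index_name, column_name, index_type
FROM information_schema.STATISTICS
WHERE table_schema = DATABASE()
  AND table_name = 'node'
  AND index_type = 'FULLTEXT';
-- an empty result would mean no FULLTEXT index is present on the copy of
-- the table this connection is actually querying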

Why does MySQL not use the index from EXPLAIN?

I have a straightforward table which currently has ~10M rows.
Here is the definition:
CREATE TABLE `train_run_messages` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`train_id` int(10) unsigned NOT NULL,
`customer_id` int(10) unsigned NOT NULL,
`station_id` int(10) unsigned NOT NULL,
`train_run_id` int(10) unsigned NOT NULL,
`timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`type` tinyint(4) NOT NULL,
`customer_station_track_id` int(10) unsigned DEFAULT NULL,
`lateness_type` tinyint(3) unsigned NOT NULL,
`lateness_amount` mediumint(9) NOT NULL,
`lateness_code` tinyint(3) unsigned DEFAULT '0',
`info_text` varchar(32) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `timestamp` (`timestamp`),
KEY `lateness_amount` (`lateness_amount`),
KEY `customer_timestamp` (`customer_id`,`timestamp`),
KEY `trm_customer` (`customer_id`),
KEY `trm_train` (`train_id`),
KEY `trm_station` (`station_id`),
KEY `trm_trainrun` (`train_run_id`),
KEY `FI_trm_customer_station_tracks` (`customer_station_track_id`),
CONSTRAINT `FK_trm_customer_station_tracks` FOREIGN KEY (`customer_station_track_id`) REFERENCES `customer_station_tracks` (`id`),
CONSTRAINT `trm_customer` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `trm_station` FOREIGN KEY (`station_id`) REFERENCES `stations` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `trm_train` FOREIGN KEY (`train_id`) REFERENCES `trains` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `trm_trainrun` FOREIGN KEY (`train_run_id`) REFERENCES `train_runs` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=9928724 DEFAULT CHARSET=utf8;
We have lots of queries that filter by customer_id and timestamp so we have created a combined index for that.
Now I have this simple query:
SELECT * FROM `train_run_messages` WHERE `customer_id` = '5' AND `timestamp` >= '2013-12-01 00:00:57' AND `timestamp` <= '2013-12-31 23:59:59' LIMIT 0, 100
On our current machine with ~10M entries this query takes ~16 seconds, which is way too long for my taste, since there is an index for queries like this.
So let's look at the output of EXPLAIN for this query:
+----+-------------+--------------------+------+-------------------------------------------+--------------------+---------+-------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------------+------+-------------------------------------------+--------------------+---------+-------+--------+-------------+
| 1 | SIMPLE | train_run_messages | ref | timestamp,customer_timestmap,trm_customer | customer_timestamp | 4 | const | 551405 | Using where |
+----+-------------+--------------------+------+-------------------------------------------+--------------------+---------+-------+--------+-------------+
So MySQL is telling me that it would use the customer_timestamp index, fine! Why does the query still take ~16 seconds?
Since I don't always trust the MySQL query analyzer, let's try it with a forced index:
SELECT * FROM `train_run_messages` USE INDEX (customer_timestamp) WHERE `customer_id` = '5' AND `timestamp` >= '2013-12-01 00:00:57' AND `timestamp` <= '2013-12-31 23:59:59' LIMIT 0, 100
Query Time: 0.079s!!
Me: puzzled!
So can anyone explain why MySQL is obviously not using the index that it says it would use from the EXPLAIN output? And is there any way to prove what index it really used when performing the real query?
Btw: Here is the output from the slow-log:
# Time: 131217 11:18:04
# User@Host: root[root] @ localhost [127.0.0.1]
# Query_time: 16.252878 Lock_time: 0.000168 Rows_sent: 100 Rows_examined: 9830711
SET timestamp=1387275484;
SELECT * FROM `train_run_messages` WHERE `customer_id` = '5' AND `timestamp` >= '2013-12-01 00:00:57' AND `timestamp` <= '2013-12-31 23:59:59' LIMIT 0, 100;
Although it does not specifically say that it is not using any index, the Rows_examined value suggests that it does a full table scan.
So is this fixable without using USE INDEX? We are using Propel as ORM and there is currently no way to use MySQL-specific "USE INDEX" without manually writing the query.
Edit:
Here is the output of EXPLAIN with USE INDEX:
+----+-------------+--------------------+-------+--------------------+--------------------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------------+-------+--------------------+--------------------+---------+------+--------+-------------+
| 1 | SIMPLE | train_run_messages | range | customer_timestmap | customer_timestmap | 8 | NULL | 191264 | Using where |
+----+-------------+--------------------+-------+--------------------+--------------------+---------+------+--------+-------------+
MySQL has three candidate indexes
(timestamp)
(customer_id, timestamp)
(customer_id)
and you are asking
`customer_id` = '5' AND `timestamp` BETWEEN ? AND ?
The optimizer has chosen (customer_id, timestamp) based on statistics.
The InnoDB engine's optimizer depends on statistics, which are gathered by sampling when the table is opened. The default sampling reads 8 pages of the index file.
So, I suggest three things, as follows (sketched in SQL below):
Increase innodb_stats_sample_pages to 64.
The default value of innodb_stats_sample_pages is 8 pages.
Refer to http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_stats_sample_pages
Remove the redundant indexes; the following indexes are just fine, since currently there is only customer_id = 5 (you said):
(timestamp)
(customer_id)
Run OPTIMIZE TABLE train_run_messages to reorganize the table.
This reduces the table and index size, and sometimes it makes the optimizer smarter.
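A minimal sketch of suggestions 1 and 3 (the variable name here is the MySQL 5.5 one; on 5.6+ the transient-statistics counterpart is innodb_stats_transient_sample_pages, so adjust accordingly):
SET GLOBAL innodb_stats_sample_pages = 64;  -- requires the SUPER privilege; affects future sampling
ANALYZE TABLE train_run_messages;           -- re-collect the index statistics right away
OPTIMIZE TABLE train_run_messages;          -- InnoDB rebuilds the table and refreshes statistics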
To me, the biggest thing is that you are wrapping the customer ID in quotes, such as = '5'. By doing this, the customer/timestamp index can't be used efficiently, because the customer ID has to be converted to match your '5'. Use just = 5 and you should be good to go.

Simple SELECT mysql query very slow (using intersect)

A query that used to work just fine on a production server has started becoming extremely slow (in a matter of hours).
This is it:
SELECT * FROM news_articles WHERE published = '1' AND news_category_id = '4' ORDER BY date_edited DESC LIMIT 1;
This takes up to 20-30 seconds to execute (the table has ~200,000 rows).
This is the output of EXPLAIN:
+----+-------------+---------------+-------------+----------------------------+----------------------------+---------+------+------+--------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------------+----------------------------+----------------------------+---------+------+------+--------------------------------------------------------------------------+
| 1 | SIMPLE | news_articles | index_merge | news_category_id,published | news_category_id,published | 5,5 | NULL | 8409 | Using intersect(news_category_id,published); Using where; Using filesort |
+----+-------------+---------------+-------------+----------------------------+----------------------------+---------+------+------+--------------------------------------------------------------------------+
Playing around with it, I found that hinting a specific index (date_edited) makes it much faster:
SELECT * FROM news_articles USE INDEX (date_edited) WHERE published = '1' AND news_category_id = '4' ORDER BY date_edited DESC LIMIT 1;
This one takes milliseconds to execute.
EXPLAIN output for this one is:
+----+-------------+---------------+-------+---------------+-------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------+---------------+-------------+---------+------+------+-------------+
| 1 | SIMPLE | news_articles | index | NULL | date_edited | 8 | NULL | 1 | Using where |
+----+-------------+---------------+-------+---------------+-------------+---------+------+------+-------------+
Columns news_category_id, published and date_edited are all indexed.
The storage engine is InnoDB.
This is the table structure:
CREATE TABLE `news_articles` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` text NOT NULL,
`subtitle` text NOT NULL,
`summary` text NOT NULL,
`keywords` varchar(500) DEFAULT NULL,
`body` mediumtext NOT NULL,
`source` varchar(255) DEFAULT NULL,
`source_visible` int(11) DEFAULT NULL,
`author_information` enum('none','name','signature') NOT NULL DEFAULT 'name',
`date_added` datetime NOT NULL,
`date_edited` datetime NOT NULL,
`views` int(11) DEFAULT '0',
`news_category_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
`c_forwarded` int(11) DEFAULT '0',
`published` int(11) DEFAULT '0',
`deleted` int(11) DEFAULT '0',
`permalink` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `user_id` (`user_id`),
KEY `news_category_id` (`news_category_id`),
KEY `published` (`published`),
KEY `deleted` (`deleted`),
KEY `date_edited` (`date_edited`),
CONSTRAINT `news_articles_ibfk_3` FOREIGN KEY (`news_category_id`) REFERENCES `news_categories` (`id`) ON DELETE SET NULL ON UPDATE CASCADE,
CONSTRAINT `news_articles_ibfk_4` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE SET NULL ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=192588 DEFAULT CHARSET=utf8
I could possibly change all the queries my web application runs to hint that index, but this is considerable work.
Is there some way to tune MySQL so that the first query is made more efficient without actually rewriting all queries?
Just a few tips:
1 - It seems to me that the fields published and news_category_id are INTEGER. If so, please remove the single quotes from the values in your query; it can make a huge difference when it comes to performance.
2 - Also, I'd say your published field has very few distinct values (probably just 1 for yes and 0 for no, or something like that). If I'm right, this is not a good field to index on its own; the engine still has to go through a lot of records to find what it is looking for. In this case, move news_category_id to be the first field in your WHERE clause.
3 - "Don't forget about the leftmost index." This holds for your SELECT, JOINs, WHERE and ORDER BY. Even the position of the columns on the table is important; keep the indexed ones at the top. Indexes are your friend as long as you know how to play with them (a concrete index sketch follows this answer).
Hope it can help you somehow.
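To make tip 3 concrete, here is one possible composite index for this exact query, as a sketch only (the index name is my own invention):
ALTER TABLE news_articles ADD INDEX idx_cat_pub_edited (news_category_id, published, date_edited);
-- equality columns first, the ORDER BY column last: with this index the
-- LIMIT 1 query can read the matching index entries in date_edited order
-- (backwards for DESC) and stop at the first row, with no filesort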
Original query:
SELECT * FROM news_articles
WHERE published = 1 AND news_category_id = 4
ORDER BY date_edited DESC LIMIT 1;
Since you have LIMIT 1, you're only selecting the latest row. ORDER BY date_edited tells MySQL to sort all the matching rows and then take 1 row off the top. That is really slow, and it is why USE INDEX helps.
Try to match MAX(date_edited) in the WHERE clause instead. That should get the query planner to use its index automatically.
Choose MAX(date_edited):
SELECT * FROM news_articles
WHERE published = 1 AND news_category_id = 4
AND date_edited = (SELECT MAX(date_edited) FROM news_articles
                   WHERE published = 1 AND news_category_id = 4);
Please change your query to:
SELECT * FROM news_articles WHERE published = 1 AND news_category_id = 4 ORDER BY date_edited DESC LIMIT 1;
Please note that I have removed the quotes from the '1' and '4' values supplied in the query.
The mismatch between the datatype passed in and the column type keeps MySQL from using the indexes on these two columns effectively.