Hello, this query is producing the EXPLAIN output below, which is odd considering I have an index set up for both columns:
# id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra
1, SIMPLE, vtr_video_transactions, ALL, user_standard,user_date, NULL, NULL, NULL, 5, Using where; Using filesort
CREATE TABLE `vtr_video_transactions` (
`vtr_id` int(11) NOT NULL AUTO_INCREMENT,
`vtr_transaction_id` int(11) unsigned DEFAULT NULL,
`vtr_user_id` int(11) unsigned DEFAULT NULL,
`vtr_standards_id` int(11) unsigned DEFAULT NULL,
`vtr_video_date` datetime DEFAULT NULL,
PRIMARY KEY (`vtr_id`),
KEY `user_standard` (`vtr_user_id`,`vtr_standards_id`),
KEY `user_date` (`vtr_user_id`,`vtr_video_date`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8;
See the user_date index. I have it set to DESC in MySQL Workbench, but I still get a filesort in the EXPLAIN. Not sure why. Cheers
Data for the table:
LOCK TABLES `vtr_video_transactions` WRITE;
/*!40000 ALTER TABLE `vtr_video_transactions` DISABLE KEYS */;
INSERT INTO `vtr_video_transactions` VALUES (1,1,1,2,'2015-09-05 17:18:59'),(2,2,1,3,'2015-08-27 19:04:12'),(3,2,1,4,'2015-08-27 18:55:53'),(4,10,1,119,'2015-08-27 19:04:12'),(5,11,1,10,'2015-08-27 19:04:12');
See the manual page on how MySQL uses indexes:
Indexes are less important for queries on small tables, or big tables
where report queries process most or all of the rows. When a query
needs to access most of the rows, reading sequentially is faster than
working through an index. Sequential reads minimize disk seeks, even
if not all the rows are needed for the query.
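In other words, with only five rows in the table (the EXPLAIN even says rows = 5), a full scan plus filesort is the cheapest plan, so the optimizer is behaving correctly. Note also that in MySQL versions before 8.0, the DESC keyword on an index definition is parsed but ignored; indexes are stored ascending and can simply be scanned in reverse. If you want to confirm the index itself is usable, you can force it. The query shape below is an assumption, since the original query wasn't posted:
-- Hypothetical query shape; FORCE INDEX makes the optimizer use
-- user_date even though the table is tiny.
SELECT vtr_id, vtr_video_date
FROM vtr_video_transactions FORCE INDEX (user_date)
WHERE vtr_user_id = 1
ORDER BY vtr_video_date DESC;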
Related
I have a large table (over 2 billion records) which is partitioned; each partition contains roughly 500 million records. I recently moved from physical hardware to AWS, using mysqldump to back up and restore the MySQL data. I have also recently created a new partition (p108). Queries against the old partitions (created on the old server) run as normal, very quick, returning data in seconds. However, querying records in the newly created partition (p108) is very slow, taking minutes.
SHOW CREATE TABLE results:
CREATE TABLE `termusage`
(
`id` BIGINT(20) NOT NULL auto_increment,
`terminal` BIGINT(20) DEFAULT NULL,
`date` DATETIME DEFAULT NULL,
`dest` VARCHAR(255) DEFAULT NULL,
`feattrans` BIGINT(20) DEFAULT NULL,
`cost_type` TINYINT(4) DEFAULT NULL,
`cost` DECIMAL(16, 6) DEFAULT NULL,
`gprsup` BIGINT(20) DEFAULT NULL,
`gprsdown` BIGINT(20) DEFAULT NULL,
`duration` TIME DEFAULT NULL,
`file` BIGINT(20) DEFAULT NULL,
`custcost` DECIMAL(16, 6) DEFAULT '0.000000',
`invoice` BIGINT(20) NOT NULL DEFAULT '99999999',
`carriertrans` BIGINT(20) DEFAULT NULL,
`session_start` DATETIME DEFAULT NULL,
`session_end` DATETIME DEFAULT NULL,
`mt_mo` VARCHAR(4) DEFAULT NULL,
`grps_rounded` BIGINT(20) DEFAULT NULL,
`gprs_rounded` BIGINT(20) DEFAULT NULL,
`country` VARCHAR(25) DEFAULT NULL,
`network` VARCHAR(25) DEFAULT NULL,
`ctn` VARCHAR(20) DEFAULT NULL,
`pricetrans` BIGINT(20) DEFAULT NULL,
PRIMARY KEY (`id`, `invoice`),
KEY `idx_terminal` (`invoice`, `terminal`),
KEY `idx_feattrans` (`invoice`, `feattrans`),
KEY `idx_file` (`invoice`, `file`),
KEY `termusage_carriertrans_idx` (`carriertrans`),
KEY `idx_ctn` (`invoice`, `ctn`),
KEY `idx_pricetrans` (`invoice`, `pricetrans`)
)
engine=innodb
auto_increment=17449438880
DEFAULT charset=latin1
/*!50500 PARTITION BY RANGE COLUMNS(invoice)
(PARTITION p103 VALUES LESS THAN (621574) ENGINE = InnoDB,
PARTITION p104 VALUES LESS THAN (628214) ENGINE = InnoDB,
PARTITION p106 VALUES LESS THAN (634897) ENGINE = InnoDB,
PARTITION p107 VALUES LESS THAN (649249) ENGINE = InnoDB,
PARTITION p108 VALUES LESS THAN (662763) ENGINE = InnoDB,
PARTITION plast VALUES LESS THAN (MAXVALUE) ENGINE = InnoDB) */
I created the partition p108 using the following statement:
ALTER TABLE termusage reorganize partition plast
INTO ( partition p108 VALUES less than (662763),
partition plast VALUES less than maxvalue )
I can see the file termusage#p#p108.ibd, which looks normal, and the data is there, since I can get results from queries against it.
information_schema.PARTITIONS shows the following for the table, which indicates there is some kind of issue:
Name   Pos  Rows       Avg Row Len  Data Length  Method
p103   1    412249206  124          51124371456  RANGE COLUMNS
p104   2    453164890  133          60594061312  RANGE COLUMNS
p106   3    542767414  135          73562849280  RANGE COLUMNS
p107   4    587042147  129          76288098304  RANGE COLUMNS
p108   5    0          0            16384        RANGE COLUMNS
plast  6    0          0            16384        RANGE COLUMNS
How can I fix the partition?
Update:
EXPLAIN for the good query:
# id, select_type, table, partitions, type, possible_keys, key, key_len, ref, rows, filtered, Extra
1, SIMPLE, t, p107, ref, idx_terminal,idx_feattrans,idx_file,idx_ctn,idx_pricetrans, idx_terminal, 17, const,const, 603, 100.00, Using index condition; Using temporary; Using filesort
EXPLAIN for the poor query:
# id, select_type, table, partitions, type, possible_keys, key, key_len, ref, rows, filtered, Extra
1, SIMPLE, t, p108, ALL, idx_terminal,idx_feattrans,idx_file,idx_ctn,idx_pricetrans, , , , 1, 100.00, Using where; Using temporary; Using filesort
For future readers, the issue was resolved by running ALTER TABLE ... ANALYZE PARTITION p108.
The table and index statistics that guide the optimizer to choose the best way to read the table were out of date. It's common to use ANALYZE to make sure these statistics are updated after a significant data load or delete.
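A minimal sketch of that fix, plus a quick check that the statistics are repopulated (the columns queried are standard information_schema.PARTITIONS columns):
-- Rebuild the statistics for the new partition so the optimizer
-- stops estimating zero rows.
ALTER TABLE termusage ANALYZE PARTITION p108;

-- The row estimate for p108 should no longer be zero.
SELECT PARTITION_NAME, TABLE_ROWS, AVG_ROW_LENGTH, DATA_LENGTH
FROM information_schema.PARTITIONS
WHERE TABLE_NAME = 'termusage';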
I am using a MySQL database in my ASP.NET with C# web application. The MySQL Server version is 5.7, and the PC has 8 GB of RAM. When I execute a SELECT query against a table it takes a long time to run; a simple SELECT takes around 42 seconds across 1 crore (10 million) records in the table. I have also added indexes to the table. How can I fix this?
The following is my table structure.
CREATE TABLE `smstable_read` (
`MessageID` int(11) NOT NULL AUTO_INCREMENT,
`ApplicationID` int(11) DEFAULT NULL,
`Api_userid` int(11) DEFAULT NULL,
`ReturnMessageID` varchar(255) DEFAULT NULL,
`Sequence_Id` int(11) DEFAULT NULL,
`messagetext` longtext,
`adtextid` int(11) DEFAULT NULL,
`mobileno` varchar(255) DEFAULT NULL,
`deliverystatus` int(11) DEFAULT NULL,
`SMSlength` int(11) DEFAULT NULL,
`DOC` varchar(255) DEFAULT NULL,
`DOM` varchar(255) DEFAULT NULL,
`BatchID` int(11) DEFAULT NULL,
`StudentID` int(11) DEFAULT NULL,
`SMSSentTime` varchar(255) DEFAULT NULL,
`SMSDeliveredTime` varchar(255) DEFAULT NULL,
`SMSDeliveredTimeTicks` decimal(28,0) DEFAULT '0',
`SMSSentTimeTicks` decimal(28,0) DEFAULT '0',
`Sent_SMS_Day` int(11) DEFAULT NULL,
`Sent_SMS_Month` int(11) DEFAULT NULL,
`Sent_SMS_Year` int(11) DEFAULT NULL,
`smssent` int(11) DEFAULT '1',
`Batch_Name` varchar(255) DEFAULT NULL,
`User_ID` varchar(255) DEFAULT NULL,
`Year_ID` int(11) DEFAULT NULL,
`Date_Time` varchar(255) DEFAULT NULL,
`IsGroup` double DEFAULT NULL,
`Date_Time_Ticks` decimal(28,0) DEFAULT NULL,
`IsNotificationSent` int(11) DEFAULT NULL,
`Module_Id` double DEFAULT NULL,
`Doc_Batch` decimal(28,0) DEFAULT NULL,
`SMS_Category_ID` int(11) DEFAULT NULL,
`SID` int(11) DEFAULT NULL,
PRIMARY KEY (`MessageID`),
KEY `index2` (`ReturnMessageID`),
KEY `index3` (`mobileno`),
KEY `BatchID` (`BatchID`),
KEY `smssent` (`smssent`),
KEY `deliverystatus` (`deliverystatus`),
KEY `day` (`Sent_SMS_Day`),
KEY `month` (`Sent_SMS_Month`),
KEY `year` (`Sent_SMS_Year`),
KEY `index4` (`ApplicationID`,`SMSSentTimeTicks`),
KEY `smslength` (`SMSlength`),
KEY `studid` (`StudentID`),
KEY `batchid_studid` (`BatchID`,`StudentID`),
KEY `User_ID` (`User_ID`),
KEY `Year_Id` (`Year_ID`),
KEY `IsNotificationSent` (`IsNotificationSent`),
KEY `isgroup` (`IsGroup`),
KEY `SID` (`SID`),
KEY `SMS_Category_ID` (`SMS_Category_ID`),
KEY `SMSSentTimeTicks` (`SMSSentTimeTicks`)
) ENGINE=MyISAM AUTO_INCREMENT=16513292 DEFAULT CHARSET=utf8;
The following is my select query:
SELECT messagetext, SMSSentTime, StudentID, batchid,
User_ID,MessageID,Sent_SMS_Day, Sent_SMS_Month,
Sent_SMS_Year,Module_Id,Year_ID,Doc_Batch
FROM smstable_read
WHERE StudentID=977 AND SID = 8582 AND MessageID>16013282
You need to learn about compound indexes and covering indexes. Read about those things.
Your query is slow because it's doing a half-scan of the table. It uses the primary key to find the first row with a qualifying MessageID, then looks at every subsequent row of the table to find matching rows.
Your filter criteria are StudentID = constant, SID = constant, AND MessageID > constant. That means you need those three columns, in that order, in an index. The first two filter criteria will random-access your index to the correct place. The third criterion will scan the index starting right after the constant value in your query. It's called an Index Range Scan operation, and it's quite efficient.
ALTER TABLE smstable_read
  ADD INDEX StudentSidMessage (StudentID, SID, MessageID);
This compound index should make your query efficient. Notice that with MyISAM, the primary key column is not implicitly appended to secondary indexes the way it is in InnoDB, so it needs to appear explicitly in a compound index. That works out here, because MessageID is also part of your query criteria.
If this query is used very frequently, you could make a covering index: you could add the other columns of the query (the ones mentioned in your SELECT clause) to the index.
But, unfortunately, you have defined your messagetext column with a longtext data type, which allows each message to contain up to four gigabytes. (Why? Is this really SMS data? SMS messages are limited to 160 characters; four gigabytes is vastly more than that.)
Now, the point of a covering index is to allow the query to be satisfied entirely from the index, without referring back to the table. But when you include a longtext or any other LOB column in an index, the index can only hold a prefix of the column's data, so the point of the covering index is lost.
If I were you, I would change the table so messagetext is a VARCHAR(255) data type, and then create this covering index:
ALTER TABLE smstable_read
  ADD INDEX StudentSidMessage (StudentID, SID, MessageID,
                               SMSSentTime, BatchID, User_ID,
                               Sent_SMS_Day, Sent_SMS_Month, Sent_SMS_Year,
                               Module_Id, Year_ID, Doc_Batch,
                               messagetext);
(Notice that you should put variable-length items last in the index if you can.)
If you can't change your application to handle VARCHAR(255) then go with the first index I mentioned.
Pro tip: putting lots of single-column indexes on MySQL tables rarely helps SELECT performance and always harms INSERT and UPDATE performance. You need an index on your primary key, and you need indexes to support the queries you run. Extra indexes are harmful.
It looks like your database is not properly indexed, and perhaps not properly normalized. Normalizing your database will go a long way toward speeding up all of your queries, particularly in view of the fact that MySQL generally uses only one index per table in a query. Even though you have lots of indexes, most of them cannot be used.
Your current query filters on StudentID, SID, and MessageID. The last is an inequality comparison, so an index will not be very effective for it, but the other two columns are equality comparisons. I suggest an index like this:
KEY `studid` (`StudentID`,`SID`)
Follow that up by dropping your existing index on SID. If you find that you don't want to drop it because it's used in another query, that's further evidence that your table is in desperate need of normalization.
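A sketch of how to apply both steps in a single statement, assuming you do drop the old indexes:
ALTER TABLE smstable_read
  DROP INDEX studid,                  -- old single-column (StudentID) index
  ADD INDEX studid (StudentID, SID),  -- the suggested compound index
  DROP INDEX SID;                     -- as suggested; keep it if another query filters on SID alone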
Too many indexes slow down inserts and add a little overhead to each SELECT, because the query planner needs more effort to figure out which index to use.
I have this table on one server:
CREATE TABLE `mh` (
`M` char(13) NOT NULL DEFAULT '',
`F` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`D` char(6) DEFAULT NULL,
`A` int(11) DEFAULT NULL,
`DC` char(13) DEFAULT NULL,
`S` char(22) DEFAULT NULL,
`S0` int(11) DEFAULT NULL,
PRIMARY KEY (`F`,`M`),
KEY `IDX_S` (`S`),
KEY `IDX_M` (`M`),
KEY `IDX_A` (`M`,`A`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1;
The same table, but using the MyISAM engine, exists on another similar server.
When I execute this query:
CREATE TEMPORARY TABLE temp
(S VARCHAR(22) PRIMARY KEY)
AS
(
SELECT S, COUNT(S) AS HowManyS
FROM mh
WHERE A = 1 AND S IS NOT NULL
GROUP BY S
);
The table has 120 million rows. The server using TokuDB executes the query in 3 hours; the server using MyISAM takes 22 minutes.
The query using TokuDB shows a "Queried about 38230000 rows, Fetched about 303929 rows, loading data still remains" status.
Why does the TokuDB query take so long? TokuDB is a really good engine, but I don't know what I'm doing wrong with this query.
Both servers are running MariaDB 5.5.38.
TokuDB is not currently using its bulk-fetch algorithm on this statement, as noted in https://github.com/Tokutek/tokudb-engine/issues/143. I've added a link to this page so it is considered as part of the upcoming effort.
I've read heaps of posts here on Stack Overflow, blog posts, tutorials, and more, but I still can't resolve a rather nasty performance issue with my MySQL database. Keep in mind that I'm a novice when it comes to large MySQL databases.
I have a table with approximately 11,000,000 rows (which will increase to 20,000,000 or more). Here's the layout:
CREATE TABLE `myTable` (
`intcol1` int(11) DEFAULT NULL,
`charcol1` char(25) DEFAULT NULL,
`intcol2` int(11) DEFAULT NULL,
`charcol2` char(50) DEFAULT NULL,
`charcol3` char(50) DEFAULT NULL,
`charcol4` char(50) DEFAULT NULL,
`intcol3` int(11) DEFAULT NULL,
`charcol5` char(50) DEFAULT NULL,
`intcol4` int(20) DEFAULT NULL,
`intcol5` int(20) DEFAULT NULL,
`intcol6` int(20) DEFAULT NULL,
`intcol7` int(11) DEFAULT NULL,
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
FULLTEXT KEY `idx` (`charcol2`,`charcol3`)
) ENGINE=MyISAM AUTO_INCREMENT=11665231 DEFAULT CHARSET=latin1;
A select statement like
SELECT * FROM myTable WHERE charcol2='bogus' AND charcol3='bogus2';
takes 25 seconds or so to execute. That's too slow, and will be even slower as the table grows.
The table will not have any inserts or updates at all (so to speak), and will be used primarily for searches on the char columns.
I've tried to make indexing work (playing around with FULLTEXT, as you can see), but it seems that I'm missing something. Any ideas on how to speed up performance?
Please note: I'm currently running MySQL on my MacBook Air (1.7 GHz i5, 4GB RAM). If this is the only answer to my performance issues, I'll move the database to something more appropriate ;-)
EDIT: EXPLAIN output
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE myTable ALL NULL NULL NULL NULL 11596725 Using where
You don't need a FULLTEXT index for requests like this, where the equality operator is used. Just create a compound index over the char fields used in the WHERE condition, and remove the FULLTEXT index:
ALTER TABLE myTable DROP INDEX idx;
ALTER TABLE myTable ADD INDEX charcol_idx (charcol2, charcol3);
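After rebuilding the index, EXPLAIN should switch from type ALL (the full scan in the output above) to ref, with the rows estimate dropping from millions to a handful:
-- Quick verification that the new index is picked up.
EXPLAIN SELECT * FROM myTable WHERE charcol2='bogus' AND charcol3='bogus2';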
I have an incredibly simple query (table engine InnoDB), and EXPLAIN says that MySQL must do an extra pass to find out how to retrieve the rows in sorted order.
SELECT * FROM `comments`
WHERE (commentable_id = 1976)
ORDER BY created_at desc LIMIT 0, 5
Exact EXPLAIN output:
table     select_type  type  extra                        possible_keys   key             key_len  ref    rows
comments  SIMPLE       ref   Using where; Using filesort  common_lookups  common_lookups  5        const  89
commentable_id is indexed. The comments table has nothing tricky in it, just a content field.
The manual suggests that if the ORDER BY differs from the WHERE, there is no way the filesort can be avoided:
http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html
I also tried ORDER BY id (as well as its equivalent), but it makes no difference, even if I add id as an index (which I understand is not required, since the primary key is indexed implicitly in MySQL).
Thanks in advance for any ideas!
To Mark: here's the SHOW CREATE TABLE output:
CREATE TABLE `comments` (
`id` int(11) NOT NULL auto_increment,
`user_id` int(11) default NULL,
`commentable_type` varchar(255) default NULL,
`commentable_id` int(11) default NULL,
`content` text,
`created_at` datetime default NULL,
`updated_at` datetime default NULL,
`hidden` tinyint(1) default '0',
`public` tinyint(1) default '1',
`access_point` int(11) default '0',
`item_id` int(11) default NULL,
PRIMARY KEY (`id`),
KEY `created_at` (`created_at`),
KEY `common_lookups` (`commentable_id`,`commentable_type`,`hidden`,`created_at`,`public`),
KEY `index_comments_on_item_id` (`item_id`),
KEY `index_comments_on_item_id_and_created_at` (`item_id`,`created_at`),
KEY `index_comments_on_user_id` (`user_id`),
KEY `id` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=31803 DEFAULT CHARSET=latin1
Note that the MySQL term filesort doesn't necessarily mean it writes to the disk. It just means it's going to sort without using an index. If the result set is small enough, MySQL will sort it in memory, which is orders of magnitude faster than disk I/O.
You can increase the amount of memory MySQL allocates for in-memory filesorts using the sort_buffer_size server variable. In MySQL 5.1, the default sort buffer size is 2MB, and the maximum you can allocate is 4GB.
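For example, you can raise it for just your session before running the query; the 8MB value below is only illustrative:
SHOW VARIABLES LIKE 'sort_buffer_size';          -- check the current value
SET SESSION sort_buffer_size = 8 * 1024 * 1024;  -- 8MB, illustrative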
Update: Regarding Jonathan Leffler's comment about measuring how long the sorting takes, you can learn how to use SHOW PROFILE FOR QUERY, which will give you a breakdown of how long each phase of query execution takes.
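A short sketch of the profiling workflow (query 1 refers to the first statement profiled in the session):
SET profiling = 1;
SELECT * FROM comments
WHERE commentable_id = 1976
ORDER BY created_at DESC LIMIT 0, 5;
SHOW PROFILES;             -- lists profiled statements with their Query_IDs
SHOW PROFILE FOR QUERY 1;  -- per-phase timings, including the sort phase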
Try adding a combined index on (commentable_id, created_at).
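A minimal sketch of that suggestion (the index name is arbitrary). Unlike the existing common_lookups key, created_at here immediately follows commentable_id, so MySQL can read the rows for one commentable_id already in created_at order and skip the filesort:
ALTER TABLE comments
  ADD INDEX commentable_created (commentable_id, created_at);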