I have a table storing weekly viewing statistics for around 40K businesses. The table has passed 2.2M records and is starting to slow things down, so I'm looking at partitioning it to speed things up, but I'm not sure how best to do it.
My ORM requires an id field as a primary key, but that field has no relevance to the data; I've been using a unique index on the year, week number and business ID fields.
As the primary key needs to be involved in the partition map, I'm not sure how best to organise this (I've never used partitioning before).
Currently I have...
CREATE TABLE `weekly_views` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`business_id` int(11) NOT NULL,
`year` smallint(4) UNSIGNED NOT NULL,
`week` tinyint(2) UNSIGNED NOT NULL,
`hits` int(5) NOT NULL,
`created` timestamp NOT NULL ON UPDATE CURRENT_TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
`updated` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
UNIQUE `search` USING BTREE (business_id, `year`, `week`),
UNIQUE `id` USING BTREE (id, `week`)
) ENGINE=`InnoDB` AUTO_INCREMENT=2287009 DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci ROW_FORMAT=COMPACT CHECKSUM=0 DELAY_KEY_WRITE=0
PARTITION BY LIST(week) PARTITIONS 52
(PARTITION p1 VALUES IN (1) ENGINE = InnoDB,
PARTITION p2 VALUES IN (2) ENGINE = InnoDB,
PARTITION p3 VALUES IN (3) ENGINE = InnoDB,
PARTITION p4 VALUES IN (4) ENGINE = InnoDB,
... (p5 through p51) ...
PARTITION p52 VALUES IN (52) ENGINE = InnoDB);
One partition per week seemed the only logical way to break them up. Am I right that when I search for a record for the current week/business using 'business_id = xx and week = xx and year = xx' it's going to know which partition to use without searching them all? But, when I get the result and save it via the ORM, it's going to use the id field and not know which partition to use?
I guess I could use a custom query to insert or update (I didn't do this originally as the ORM doesn't support it).
Am I going the right way about this, or is there a better way to partition a table like this?
Thanks for your help!
As long as the query has the week column in the WHERE clause, MySQL will look only in the correct partition. However, weeks repeat each year, so you'll end up with data from different years in the same partition.
Also, you need 53 partitions, not 52, to deal with years that contain a 53rd week.
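If you also want to keep years apart, one option is to partition on an expression that combines year and week. This is only a rough sketch (untested, and the boundary years are just examples); it assumes you extend the `id` unique key to include `year`, since every unique key must contain all of the partitioning columns:

ALTER TABLE weekly_views
  DROP KEY `id`,
  ADD UNIQUE KEY `id` (id, `year`, `week`);

ALTER TABLE weekly_views
  PARTITION BY RANGE (`year` * 100 + `week`) (
    PARTITION p2011 VALUES LESS THAN (201200),  -- all weeks of 2011
    PARTITION p2012 VALUES LESS THAN (201300),  -- all weeks of 2012
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );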
Related
I have a table that contains a month and a year column.
I have a query which usually looks something like WHERE month=1 AND year=2022
Given how large this table is, I would like to make it more efficient using partitions and subpartitions.
Table 1
Querying the data I need took around 2 minutes and 30 seconds.
CREATE TABLE `table_1` (
`id` int NOT NULL AUTO_INCREMENT,
`entity_id` varchar(36) NOT NULL,
`entity_type` varchar(36) NOT NULL,
`score` decimal(4,3) NOT NULL,
`month` int NOT NULL DEFAULT '0',
`year` int NOT NULL DEFAULT '0',
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`deleted_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `idx_month_year` (`month`,`year`, `entity_type`)
)
Partitioning by "month"
Querying the data I need took around 21 seconds (a big improvement).
CREATE TABLE `table_1` (
`id` int NOT NULL AUTO_INCREMENT,
`entity_id` varchar(36) NOT NULL,
`entity_type` varchar(36) NOT NULL,
`score` decimal(4,3) NOT NULL,
`month` int NOT NULL DEFAULT '0',
`year` int NOT NULL DEFAULT '0',
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`deleted_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`,`month`),
KEY `idx_month_year` (`month`,`year`, `entity_type`)
) ENGINE=InnoDB AUTO_INCREMENT=21000001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
/*!50100 PARTITION BY LIST (`month`)
(PARTITION p0 VALUES IN (0) ENGINE = InnoDB,
PARTITION p1 VALUES IN (1) ENGINE = InnoDB,
PARTITION p2 VALUES IN (2) ENGINE = InnoDB,
PARTITION p3 VALUES IN (3) ENGINE = InnoDB,
PARTITION p4 VALUES IN (4) ENGINE = InnoDB,
PARTITION p5 VALUES IN (5) ENGINE = InnoDB,
PARTITION p6 VALUES IN (6) ENGINE = InnoDB,
PARTITION p7 VALUES IN (7) ENGINE = InnoDB,
PARTITION p8 VALUES IN (8) ENGINE = InnoDB,
PARTITION p9 VALUES IN (9) ENGINE = InnoDB,
PARTITION p10 VALUES IN (10) ENGINE = InnoDB,
PARTITION p11 VALUES IN (11) ENGINE = InnoDB,
PARTITION p12 VALUES IN (12) ENGINE = InnoDB) */
I would like to see if I can improve the performance even further by partitioning by year and then subpartitioning by month. How can I do that?
I'm not sure the question Partition by year and sub-partition by month mysql is relevant, as it has no marked answers and looks to be specific to MySQL 5.* and PHP. I'm asking about MySQL 8 - have there been no changes since then regarding partitioning/subpartitioning/LIST COLUMNS/RANGE COLUMNS etc. which could help me?
Broader query I'm making
SELECT
table_1.entity_id AS entity_id,
table_1.entity_type,
table_1.score
FROM table_1
WHERE table_1.month = 12 AND table_1.year = 2022
AND table_1.score > 0
AND table_1.entity_type IN ('type1', 'type2', 'type3', 'type4') # only ever 4 types; usually all 4 are present in the query
To answer your question directly, below is example syntax that accomplishes the subpartitioning. Notice the PRIMARY KEY must include all columns used for partitioning or subpartitioning. Read the manual on subpartitioning for more information: https://dev.mysql.com/doc/refman/8.0/en/partitioning-subpartitions.html
Schema (MySQL v8.0)
CREATE TABLE `table_1` (
`id` int NOT NULL AUTO_INCREMENT,
`entity_id` varchar(36) NOT NULL,
`entity_type` varchar(36) NOT NULL,
`score` decimal(4,3) NOT NULL,
`month` int NOT NULL DEFAULT '0',
`year` int NOT NULL DEFAULT '0',
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`deleted_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`,`month`, `year`),
KEY `idx_month_year` (`month`,`year`, `score`, `entity_type`)
) ENGINE=InnoDB AUTO_INCREMENT=21000001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
PARTITION BY LIST (`month`)
SUBPARTITION BY HASH(`year`)
SUBPARTITIONS 10 (
PARTITION p0 VALUES IN (0) ENGINE = InnoDB,
PARTITION p1 VALUES IN (1) ENGINE = InnoDB,
PARTITION p2 VALUES IN (2) ENGINE = InnoDB,
PARTITION p3 VALUES IN (3) ENGINE = InnoDB,
PARTITION p4 VALUES IN (4) ENGINE = InnoDB,
PARTITION p5 VALUES IN (5) ENGINE = InnoDB,
PARTITION p6 VALUES IN (6) ENGINE = InnoDB,
PARTITION p7 VALUES IN (7) ENGINE = InnoDB,
PARTITION p8 VALUES IN (8) ENGINE = InnoDB,
PARTITION p9 VALUES IN (9) ENGINE = InnoDB,
PARTITION p10 VALUES IN (10) ENGINE = InnoDB,
PARTITION p11 VALUES IN (11) ENGINE = InnoDB,
PARTITION p12 VALUES IN (12) ENGINE = InnoDB
);
Using EXPLAIN on your query reveals that the query references only one subpartition.
Query #1
EXPLAIN
SELECT
table_1.entity_id AS entity_id,
table_1.entity_type,
table_1.score
FROM table_1
WHERE table_1.month = 12
AND table_1.year = 2022
AND table_1.score > 0
AND table_1.entity_type IN ('type1', 'type2', 'type3', 'type4');
id:            1
select_type:   SIMPLE
table:         table_1
partitions:    p12_p12sp2
type:          range
possible_keys: idx_month_year
key:           idx_month_year
key_len:       11
ref:
rows:          1
filtered:      100
Extra:         Using index condition
The partitions field of the EXPLAIN output shows that it accesses only partition p12_p12sp2. The year the query references, 2022, modulo the number of subpartitions, 10, is 2, so it reads from subpartition 2.
In addition to the partitioning by month and year, it is also helpful to use an index. In this case, I added score to the index so it would filter out rows where score <= 0. The note in the EXPLAIN "Using index condition" shows that it is delegating further filtering on entity_type to the storage engine. Though in your example, you said there are only four values for entity type, and all four are selected, so that condition won't filter out any rows anyway.
View on DB Fiddle
Re your questions in comments below:
a little bit confused on SUBPARTITIONS 10, why 10?
It's just an example. You can choose a different number of subpartitions. Whatever you feel is required to reduce the search as much as you want.
To be honest, I've never encountered a situation that required subpartitioning at all, if the search is also optimized with indexes. So I have no guidance on what is an appropriate number of subpartitions.
It's your responsibility to test performance until you are satisfied.
also a bit confused on the partition name p12_p12sp2, how do I know it selected the partition with year 2022 from looking at that?
The query has a condition year = 2022.
There are 10 subpartitions in my example.
Hash partitioning just uses the integer value to be partitioned, modulus the number of partitions.
2022 modulus 10 is 2. Hence the partition ending in ...sp2 is the one used.
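You can verify the arithmetic directly, e.g.:

SELECT 2022 MOD 10;  -- returns 2, matching the ...sp2 suffix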
I also came across this anothermysqldba.blogspot.com/2014/12/… do you know how yours differs from what is shown there (bear in mind that blog is from 2014)?
They chose to name the subpartitions. There's no need to do that.
would there be any performance difference in having a single date, e.g. (2022-12-21), instead of separate month and year columns?
That depends on the query, and I'll leave it to you to test. Any predictions I make won't be accurate with your data on your server.
I can also see that you partition by month and subpartition by year, as opposed to partitioning by year and subpartitioning by month. Can you explain the reasoning?
Subpartitioning works only if the outer partitions are LIST or RANGE partitions, and the subpartitions are HASH or KEY partitions. This is in the manual page I linked to.
There are a finite number of months (12). This makes it easy to partition by LIST as you did. You won't ever need more partitions. If you had partitioned by YEAR as the outer partition, you would have needed to specify year values in the list, and this is a growing set, so you would periodically have to alter the table to extend the list or range to account for new years.
Whereas when partitioning by HASH for the subpartitioning, the new year values are mapped into the finite set of subpartitions, so it's okay that it's not a finite list. You won't have to alter table to repartition (unless you want to change the number of subpartitions).
Splitting a date into columns is usually counterproductive. It is much easier to split during SELECT.
PARTITIONing is usually useless for performance of any SELECT.
When partitioning (or unpartitioning), the indexes usually need changing.
For that query, I recommend a combined date column,
WHERE date >= '2022-01-01'
AND date < '2022-01-01' + INTERVAL 1 MONTH
and some INDEX starting with date.
(You probably have other queries; let's see some of them; they may need a different index.)
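As an illustration only (the `dt` column and index name are made up; adjust them to your real queries), the change might look like:

ALTER TABLE table_1 ADD COLUMN dt DATE;
UPDATE table_1 SET dt = MAKEDATE(`year`, 1) + INTERVAL (`month` - 1) MONTH;  -- backfill from year/month
ALTER TABLE table_1 ADD INDEX idx_dt (dt, entity_type, score);

SELECT entity_id, entity_type, score
FROM table_1
WHERE dt >= '2022-12-01'
  AND dt <  '2022-12-01' + INTERVAL 1 MONTH
  AND score > 0;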
Covering index -- This is an index that contains all the columns found anywhere in the SELECT. It may be better (faster) than an index having only the columns needed for WHERE or WHERE + GROUP BY + ORDER BY. It depends on a lot of variables.
Order of columns in an index (or PK): The leftmost column(s) have priority. That is the order of the index rows on disk. PK(id, date) is useful if looking up by id (in the WHERE), but not if you are just searching by date.
Sargable -- Hiding a column inside a function disables the use of an index. That is, MONTH(date) cannot use INDEX(date).
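For example (assuming the combined dt column sketched above):

-- Not sargable: the functions hide the column, so an index starting with dt cannot be used.
SELECT COUNT(*) FROM table_1 WHERE MONTH(dt) = 12 AND YEAR(dt) = 2022;

-- Sargable: a bare range on the column can use INDEX(dt, ...).
SELECT COUNT(*) FROM table_1 WHERE dt >= '2022-12-01' AND dt < '2023-01-01';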
Blogs -- Index Cookbook and Partition
Test plan
I recommend you time all your queries against a variety of Create Tables.
For the WHERE clause:
The order of ANDs does not matter.
When using IN, a single value is equivalent to = and optimizes better. Multiple values may optimize more poorly. As Bill hints at, when the IN list contains all the options, you should eliminate the clause, since the Optimizer is not smart enough to do that for you. So be sure to test with 1 and/or many items, so as to be realistic for your app.
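For example, if all four entity types are always requested, the clause could simply be dropped (a hypothetical rewrite of your query):

SELECT entity_id, entity_type, score
FROM table_1
WHERE `month` = 12 AND `year` = 2022
  AND score > 0;
-- returns the same rows as with entity_type IN ('type1','type2','type3','type4')
-- when those four are the only types that exist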
For the table
Try Partition BY year + Subpartition by month.
Try Partition by a column that is the combination of year and month.
Try without partitioning.
For indexes
Order of the columns (in a composite index) does matter, so try different orderings.
When partitioning, be sure to tack onto the end of the PK the partition key(s).
A partitioned table needs different indexes than a non-partitioned table. That is, what works well for one may work poorly for the other.
Simply use something like this pattern to test various layouts:
CREATE TABLE (( a new layout with or without partitioning and with indexes ))
INSERT INTO test_table SELECT ... FROM real_table;
Change the "..." to adapt to any extra/missing columns in test_table
SELECT ...
Run various 'real' queries
Run each query twice (caching sometimes messes with the timing)
Report the results -- If you provide sufficient info (CREATE TABLE and SELECT), I may have suggestions on further speeding up the test (whether it is partitioned or not).
I am trying to partition my MySQL InnoDB table. Right now there are approximately 2 million rows of history data in the location table (and always growing). I must periodically delete the old data by year.
I use MySQL 5.7.22 Community Server.
CREATE TABLE `geo_data` (
`ID` bigint(20) NOT NULL AUTO_INCREMENT,
`ID_DISP` bigint(20) DEFAULT NULL,
`SYS_TIMESTAMP` datetime DEFAULT NULL,
`DATA_TIMESTAMP` bigint(20) DEFAULT NULL,
`X` double DEFAULT NULL,
`Y` double DEFAULT NULL,
`SPEED` bigint(20) DEFAULT NULL,
`HEADING` bigint(20) DEFAULT NULL,
`ID_DATA_TYPE` bigint(20) DEFAULT NULL,
`PROCESSED` bigint(20) DEFAULT NULL,
`ALTITUDE` bigint(20) DEFAULT NULL,
`ID_UNIT` bigint(20) DEFAULT NULL,
`ID_DRIVER` bigint(20) DEFAULT NULL,
UNIQUE KEY `part_id` (`ID`,`DATA_TIMESTAMP`,`ID_DISP`),
KEY `Index_idDisp_dataTS_type` (`ID_DISP`,`DATA_TIMESTAMP`,`ID_DATA_TYPE`),
KEY `Index_idDisp_dataTS` (`ID_DISP`,`DATA_TIMESTAMP`),
KEY `Index_TS` (`DATA_TIMESTAMP`),
KEY `idx_sysTS_idDisp` (`ID_DISP`,`SYS_TIMESTAMP`),
KEY `idx_clab_geo_data_ID_UNIT_DATA_TIMESTAMP_ID_DATA_TYPE` (`ID_UNIT`,`DATA_TIMESTAMP`,`ID_DATA_TYPE`),
KEY `idx_idUnit_dataTS` (`ID_UNIT`,`DATA_TIMESTAMP`),
KEY `idx_clab_geo_data_ID_DRIVER_DATA_TIMESTAMP_ID_DATA_TYPE` (`ID_DRIVER`,`DATA_TIMESTAMP`,`ID_DATA_TYPE`)
) ENGINE=InnoDB AUTO_INCREMENT=584390 DEFAULT CHARSET=latin1;
I have to partition by DATA_TIMESTAMP (which holds the GPS date as a numeric timestamp).
ALTER TABLE geo_data
PARTITION BY RANGE (year(from_unixtime(data_timestamp)))
(
PARTITION p2018 VALUES LESS THAN ('2018'),
PARTITION p2019 VALUES LESS THAN ('2019'),
PARTITION pmax VALUES LESS THAN MAXVALUE
);
Error Code: 1697. VALUES value for partition 'p2018' must have type INT
How can I fix this?
I would also like to later add a subpartition range by ID_DISP. How can I do that?
Thanks in advance!
Since data_timestamp is actually a BIGINT, you are not permitted to use date functions on it in the partitioning expression. There seemed to be two errors, and this probably fixes both:
ALTER TABLE geo_data
PARTITION BY RANGE (data_timestamp)
(
PARTITION p2018 VALUES LESS THAN (UNIX_TIMESTAMP('2018-01-01') * 1000),
PARTITION p2019 VALUES LESS THAN (UNIX_TIMESTAMP('2019-01-01') * 1000),
PARTITION pmax VALUES LESS THAN MAXVALUE
);
I am assuming your data_timestamp is really milliseconds, a la Java? If not, then decide what to do with the * 1000.
SUBPARTITIONs are useless; don't bother with them. If you really want to partition by Month or Quarter, then simply do it at the PARTITION level.
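For example, if you wanted quarterly granularity, a sketch (still assuming millisecond values, as above) could look like:

ALTER TABLE geo_data
PARTITION BY RANGE (data_timestamp)
(
  PARTITION p2018q1 VALUES LESS THAN (UNIX_TIMESTAMP('2018-04-01') * 1000),
  PARTITION p2018q2 VALUES LESS THAN (UNIX_TIMESTAMP('2018-07-01') * 1000),
  PARTITION p2018q3 VALUES LESS THAN (UNIX_TIMESTAMP('2018-10-01') * 1000),
  PARTITION p2018q4 VALUES LESS THAN (UNIX_TIMESTAMP('2019-01-01') * 1000),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);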
Recommendation: Don't have more than about 50 partitions.
How many "drivers" do you have? I suspect you do not have trillions. So, don't blindly use BIGINT for ids. Each takes 8 bytes. SMALLINT UNSIGNED, for example, would take only 2 bytes and allow 64K drivers (etc).
If X and Y are latitude and longitude, it might be clearer to name them such. Here are what datatype to use instead of the 8-byte DOUBLE, depending on the resolution you have (and need). 4-byte FLOATs are probably good enough for vehicles.
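As a sketch (the exact types are assumptions; check your real value ranges before shrinking anything):

ALTER TABLE geo_data
  MODIFY ID_DRIVER SMALLINT UNSIGNED,  -- 2 bytes, up to 65535 drivers
  MODIFY X FLOAT,                      -- 4 bytes; a few metres of resolution
  MODIFY Y FLOAT;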
The table has several redundant indexes; toss them. Also, note that when you have INDEX(a,b,c), it is redundant to also have INDEX(a,b).
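In this table, for example, these two look redundant and could probably be dropped (verify against your query patterns first):

ALTER TABLE geo_data
  DROP KEY Index_idDisp_dataTS,  -- prefix of Index_idDisp_dataTS_type (ID_DISP, DATA_TIMESTAMP, ID_DATA_TYPE)
  DROP KEY idx_idUnit_dataTS;    -- prefix of idx_clab_geo_data_ID_UNIT_DATA_TIMESTAMP_ID_DATA_TYPE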
See also my discussion on Partitioning, especially related to time-series, such as yours.
Hmmm... I wonder if the 63 bits of precision for SPEED will let you record them when they go at the speed of light?
Another point: Don't create p2019 until just before the start of 2019. You have pmax in case you goof and fail to add that partition in time. And the REORGANIZE PARTITION mentioned in my discussion covers how to recover from such a goof.
Update:
It seems you cannot use from_unixtime in a PARTITION BY RANGE expression, because partitioning expressions must evaluate to an integer. For more info, see this answer.
It's expecting an INT, not a STRING (as per the error message), so try:
ALTER TABLE geo_data
PARTITION BY RANGE (year(from_unixtime(data_timestamp)))
(
PARTITION p2018 VALUES LESS THAN (2018),
PARTITION p2019 VALUES LESS THAN (2019),
PARTITION pmax VALUES LESS THAN MAXVALUE
);
Here I have specified the year in the partition values as an int, i.e. 2018 / 2019, and not as strings, i.e. '2018' / '2019'.
I have two tables with 65.5 million rows:
1)
CREATE TABLE RawData1 (
cdasite varchar(45) COLLATE utf8_unicode_ci NOT NULL,
id int(20) NOT NULL DEFAULT '0',
timedate datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
type int(11) NOT NULL DEFAULT '0',
status int(11) NOT NULL DEFAULT '0',
branch_id int(20) DEFAULT NULL,
branch_idString varchar(64) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (id,cdasite,timedate),
KEY idx_timedate (timedate,cdasite)
) ENGINE=InnoDB;
2)
Same table with partition (call it RawData2)
PARTITION BY RANGE ( TO_DAYS(timedate))
(PARTITION p20140101 VALUES LESS THAN (735599) ENGINE = InnoDB,
PARTITION p20140401 VALUES LESS THAN (735689) ENGINE = InnoDB,
.
.
PARTITION p20201001 VALUES LESS THAN (738064) ENGINE = InnoDB,
PARTITION future VALUES LESS THAN MAXVALUE ENGINE = InnoDB);
I'm using the same query:
SELECT count(id) FROM RawData1
where timedate BETWEEN DATE_FORMAT(date_sub(now(),INTERVAL 2 YEAR),'%Y-%m-01') AND now();
2 problems:
1. Why does the partitioned table run longer than the regular table?
2. The regular table returns 36380217 in 17.094 sec. Is that normal? All the R&D leaders think it is not fast enough; it needs to return in ~2 sec.
What do I need to check / do / change?
Is it realistic to scan 35732495 rows and return a count of 36380217 in less than 3-4 sec?
You have found one example of why PARTITIONing is not a performance panacea.
Where does id come from?
How many different values are there for cdasite? If thousands, not millions, build a table mapping cdasite <=> id and switch from a bulky VARCHAR(45) to a MEDIUMINT UNSIGNED (or whatever is appropriate). This item may help the most, but perhaps not enough.
Ditto for status, but probably using TINYINT UNSIGNED. Or think about ENUM. Either is 1 byte, not 4.
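A minimal sketch of that mapping idea (table and column names are illustrative):

CREATE TABLE cdasite_map (
  cdasite_id MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
  cdasite VARCHAR(45) COLLATE utf8_unicode_ci NOT NULL,
  PRIMARY KEY (cdasite_id),
  UNIQUE KEY (cdasite)
) ENGINE=InnoDB;
-- RawData then stores the 3-byte cdasite_id instead of the 45-byte string.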
The (20) on INT(20) means nothing. You get a 4-byte integer with a limit of about 2 billion.
Are you sure there are no duplicate timedates?
branch_id and branch_idString -- this smells like a pair that needs to be in another table, leaving only the id here?
Smaller -> faster.
COUNT(*) is the same as COUNT(id) since id is NOT NULL.
Do not include future partitions before they are needed; it slows things down. (And don't use partitioning at all.)
To get that query even faster, build and maintain a Summary Table. It would have at least a DATE in the PRIMARY KEY and at least COUNT(*) as a column. Then the query would fetch from that table. More on Summary tables: http://mysql.rjweb.org/doc.php/summarytables
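A rough sketch of such a summary table (daily granularity and the names are assumptions; pick whatever granularity your reports need):

CREATE TABLE RawData1_daily (
  dy DATE NOT NULL,
  cdasite VARCHAR(45) COLLATE utf8_unicode_ci NOT NULL,
  row_count INT UNSIGNED NOT NULL,
  PRIMARY KEY (dy, cdasite)
) ENGINE=InnoDB;

-- Populate once a day for the previous day, then query this table instead of the 65M-row one:
INSERT INTO RawData1_daily (dy, cdasite, row_count)
SELECT DATE(timedate), cdasite, COUNT(*)
FROM RawData1
WHERE timedate >= CURDATE() - INTERVAL 1 DAY
  AND timedate <  CURDATE()
GROUP BY DATE(timedate), cdasite
ON DUPLICATE KEY UPDATE row_count = VALUES(row_count);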
I'm running a table that has built up to 600 million rows and is rapidly growing, which has been slowing down queries that need to run as quickly as possible. Current schema is:
CREATE TABLE `user_history` (
`userId` int(11) NOT NULL,
`asin` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
`dateSent` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
KEY `userId` (`userId`,`asin`,`dateSent`),
KEY `dateSent` (`dateSent`,`asin`),
KEY `asin` (`asin`,`dateSent`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Everything I've read about partitioning suggested that this was a prime candidate for partitioning by date range. We only tend to use the last 14 days data, but the client doesn't want to delete old data. The new schema looks like:
CREATE TABLE `user_history_partitioned` (
`userId` int(11) NOT NULL,
`asin` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
`dateSent` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`dateSent`,`asin`,`userId`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
PARTITION BY RANGE ( UNIX_TIMESTAMP(dateSent) ) (
PARTITION Apr2013 VALUES LESS THAN (UNIX_TIMESTAMP('2013-05-01')),
etc...
PARTITION Mar2014 VALUES LESS THAN (UNIX_TIMESTAMP('2014-04-01')),
PARTITION Apr2014 VALUES LESS THAN (UNIX_TIMESTAMP('2014-05-01')),
PARTITION May2014 VALUES LESS THAN (UNIX_TIMESTAMP('2014-06-01')),
PARTITION Future VALUES LESS THAN MAXVALUE);
The idea behind the Future partition is that a REORGANIZE PARTITION run on a populated partition was taking a long time to complete. So Future will always be empty and can be reorganized into new partitions instantly. And other queries using this table have been reordered to use the primary key only, to reduce the number of indexes on the table.
The time-critical query is apropos of:
SELECT SQL_NO_CACHE *
FROM books B
WHERE (non-relevant stuff deleted)
AND NOT EXISTS
(
SELECT 1 FROM user_history H
WHERE
H.userId=$userId
AND H.asin=B.ASIN
AND dateSent > DATE_SUB(NOW(), INTERVAL 14 DAY)
)
AND (non-relevant stuff deleted)
LIMIT 1
So we're avoiding duplicates that have already been selected for the same user in the last 14 days. On the current schema this returns in < 0.1 secs, which is okay but slower than it used to be.
For the new schema, the inner SELECT has been reordered to:
SELECT 1 FROM user_history_partitioned H
WHERE dateSent > DATE_SUB(NOW(), INTERVAL 14 DAY)
AND H.asin=B.ASIN
AND H.userId=$userId
And it's taking 5 minutes per query, and I can't see why. The idea is that the current partition and indexes should be in memory (or maybe the previous month too, at some times of the month), and the primary index covers the WHERE clause. But from the time it's taking, it could be performing a full table scan on asin or userId, which is difficult to identify from EXPLAIN because it's an inner query.
What am I missing? Do I need another combined index for (asin, userID)? If so, why?
Thanks,
PS: Tried wrapping the DATE_SUB(...) as UNIX_TIMESTAMP(DATE_SUB(...)) just in case it was a type conversion issue, but made no difference.
I was trying to check whether implementing MySQL database partitioning is beneficial for our application or not. I have heard a lot about the benefits of using partitioning for large number of records.
But surprisingly, the response time of the application got three times worse when doing the load testing after partitioning was implemented. Could someone please help with the reason why this may happen?
Let me explain in detail:
Below is the DDL of the table when partitioning was ‘not’ in place.
CREATE TABLE `myTable` (
`column1` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`column2` char(3) NOT NULL,
`column3` char(3) NOT NULL,
`column4` char(2) NOT NULL,
`column5` smallint(4) unsigned NOT NULL,
`column6` date NOT NULL,
`column7` varchar(2) NOT NULL,
`column8` tinyint(3) unsigned NOT NULL COMMENT 'Seat Count Ranges from 0-9.',
`column9` varchar(2) NOT NULL,
`column10` varchar(4) NOT NULL,
`column11` char(2) NOT NULL,
`column12` datetime NOT NULL,
`column13` datetime DEFAULT NULL,
PRIMARY KEY (`column1`),
KEY `index1` (`column2`,`column3`,`column4`,`column5`,`column7`,`column6`),
KEY `index2` (`column2`,`column3`,`column6`,`column4`)
) ENGINE=InnoDB AUTO_INCREMENT=342024674 DEFAULT CHARSET=latin1;
And below is the DDL of the same table after implementing ‘Range’ partitioning based on a date field.
CREATE TABLE `myTable` (
`column1` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`column2` char(3) NOT NULL,
`column3` char(3) NOT NULL,
`column4` char(2) NOT NULL,
`column5` smallint(4) unsigned NOT NULL,
`column6` date NOT NULL,
`column7` varchar(2) NOT NULL,
`column8` tinyint(3) unsigned NOT NULL COMMENT 'Seat Count Ranges from 0-9.',
`column9` varchar(2) NOT NULL,
`column10` varchar(4) NOT NULL,
`column11` char(2) NOT NULL,
`column12` datetime NOT NULL,
`column13` datetime DEFAULT NULL,
PRIMARY KEY (`column1`,`column6`),
KEY `index1` (`column2`,`column3`,`column4`,`column5`,`column7`,`column6`),
KEY `index2` (`column2`,`column3`,`column6`,`column4`)
) ENGINE=InnoDB AUTO_INCREMENT=342024674 DEFAULT CHARSET=latin1
PARTITION BY RANGE COLUMNS(`column6`)
(PARTITION date_jul_11 VALUES LESS THAN ('2011-08-01') ENGINE = InnoDB,
PARTITION date_aug_11 VALUES LESS THAN ('2011-09-01') ENGINE = InnoDB,
PARTITION date_sep_11 VALUES LESS THAN ('2011-10-01') ENGINE = InnoDB,
PARTITION date_oct_11 VALUES LESS THAN ('2011-11-01') ENGINE = InnoDB,
PARTITION date_nov_11 VALUES LESS THAN ('2011-12-01') ENGINE = InnoDB,
PARTITION date_dec_11 VALUES LESS THAN ('2012-01-01') ENGINE = InnoDB,
PARTITION date_jan_12 VALUES LESS THAN ('2012-02-01') ENGINE = InnoDB,
PARTITION date_feb_12 VALUES LESS THAN ('2012-03-01') ENGINE = InnoDB,
PARTITION date_mar_12 VALUES LESS THAN ('2012-04-01') ENGINE = InnoDB,
PARTITION date_apr_12 VALUES LESS THAN ('2012-05-01') ENGINE = InnoDB,
PARTITION date_may_12 VALUES LESS THAN ('2012-06-01') ENGINE = InnoDB,
PARTITION date_jun_12 VALUES LESS THAN ('2012-07-01') ENGINE = InnoDB,
PARTITION date_jul_12 VALUES LESS THAN ('2012-08-01') ENGINE = InnoDB,
PARTITION date_aug_12 VALUES LESS THAN ('2012-09-01') ENGINE = InnoDB,
PARTITION date_sep_12 VALUES LESS THAN ('2012-10-01') ENGINE = InnoDB,
PARTITION date_oct_12 VALUES LESS THAN ('2012-11-01') ENGINE = InnoDB,
PARTITION date_nov_12 VALUES LESS THAN ('2012-12-01') ENGINE = InnoDB,
PARTITION date_dec_12 VALUES LESS THAN ('2013-01-01') ENGINE = InnoDB,
PARTITION date_jan_13 VALUES LESS THAN ('2013-02-01') ENGINE = InnoDB,
PARTITION date_feb_13 VALUES LESS THAN ('2013-03-01') ENGINE = InnoDB,
PARTITION date_mar_13 VALUES LESS THAN ('2013-04-01') ENGINE = InnoDB,
PARTITION date_apr_13 VALUES LESS THAN ('2013-05-01') ENGINE = InnoDB,
PARTITION date_may_13 VALUES LESS THAN ('2013-06-01') ENGINE = InnoDB,
PARTITION date_jun_13 VALUES LESS THAN ('2013-07-01') ENGINE = InnoDB,
PARTITION date_oth VALUES LESS THAN (MAXVALUE) ENGINE = InnoDB);
Below is a sample query which was used for doing the load testing to test the performance.
SELECT column8, column9
FROM myTable
WHERE column2 = ? AND column3 = ? AND column4 =? AND column5 = ? AND column7 = ? AND column6 = ?
LIMIT 1
The ? above were replaced with real values present in the database for testing.
Please note that the number of records in the ‘myTable’ table is around 342 million, and the number of test records used for the performance testing is about 2 million.
However, as I said, the performance after implementing partitioning was reduced by a shocking 3 times. Any idea what may have caused this?
Also, please let me know if doing any further change in the table structure or indexing may help resolve this issue.
Remember, the goal of partitioning is to speed up queries where the query limits the number of partitions the result could be found in. I think the issue is the column6 = ? in your test query. I'm guessing that requiring an exact value, rather than a range, for column6 reduces your result set to very few values. Therefore, in the process of narrowing down the partitions, you've already essentially found the result. And since the indexes are split across the multiple partitions, there is a cost to that narrowing process.
The kind of query you would expect to benefit from partitioning on column6 is one that returns a range of values, limited to a small number of partitions. For example, try something like this as a test query:
SELECT column8, column9
FROM myTable
WHERE column6 < ? AND column6 > ? AND column2 = ? AND column3 = ? AND column4 =? AND column5 = ?
where that column6 range spans around 2 partitions, and the total result count is expected to be reasonably large.
This might help: http://dev.mysql.com/tech-resources/articles/partitioning.html
Looking at this, there are several things I would consider.
The first, and most glaring, issue is that the big benefit from partitioning comes when you spread your data across different devices (disks) - and there's no evidence of that in the code posted.
Next, your partitioning is hard coded to specific date ranges - hence you're going to have to come up with a better plan when date_oth starts to fill up.
AND column6 = ?
So you only tested the performance of data from a single partition? At best this will be no faster than having all the data in one table.
As Nathan points out, you are partitioning by column6 - but you don't have this column at the front of any of your indexes, hence the DBMS must search the index in each partition to find the data - this is likely the reason why the performance is so poor. (I disagree that partitioning only helps range queries.)
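A sketch of what that could look like for the test query (the index name is made up; benchmark before and after, since some existing indexes may then become redundant):

ALTER TABLE myTable
  ADD KEY index3 (column6, column2, column3, column4, column5, column7);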