MyISAM vs InnoDB for Logging - mysql

I am optimizing a database for my bachelor thesis, with almost no prior knowledge of the subject. I don't want anyone to do the work for me, but I have some questions that no one has been able to answer so far.
Table Structure:
CREATE TABLE `data_inc` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`id_para` int(10) unsigned NOT NULL DEFAULT '0',
`t_s` int(11) unsigned NOT NULL DEFAULT '0',
`t_ms` smallint(6) unsigned NOT NULL DEFAULT '0',
`t_ns` bigint(20) unsigned NOT NULL DEFAULT '0',
`id_inst` smallint(6) NOT NULL DEFAULT '1',
`value` varchar(255) NOT NULL DEFAULT '',
`isanchor` tinyint(4) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`,`t_ns`),
KEY `t_s` (`t_s`),
KEY `t_ns` (`t_ns`)
) ENGINE=MyISAM AUTO_INCREMENT=2128295174 DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (t_ns)
(PARTITION 19_02_2015_23_59 VALUES LESS THAN (1424386799000000000) ENGINE = MyISAM,
PARTITION 20_02_2015_23_59 VALUES LESS THAN (1424473199000000000) ENGINE = MyISAM,
PARTITION 21_02_2015_23_59 VALUES LESS THAN (1424559599000000000) ENGINE = MyISAM,
PARTITION 22_02_2015_23_59 VALUES LESS THAN (1424645999000000000) ENGINE = MyISAM,
PARTITION 23_02_2015_23_59 VALUES LESS THAN (1424732399000000000) ENGINE = MyISAM,
PARTITION 24_02_2015_23_59 VALUES LESS THAN (1424818799000000000) ENGINE = MyISAM,
PARTITION 25_02_2015_23_59 VALUES LESS THAN (1424905199000000000) ENGINE = MyISAM,
PARTITION 05_03_2015_23_59 VALUES LESS THAN (1425596399000000000) ENGINE = MyISAM,
PARTITION 13_03_2015_23_59 VALUES LESS THAN (1426287599000000000) ENGINE = MyISAM,
PARTITION 14_03_2015_23_59 VALUES LESS THAN (1426373999000000000) ENGINE = MyISAM,
PARTITION 15_03_2015_23_59 VALUES LESS THAN (1426460399000000000) ENGINE = MyISAM,
PARTITION 16_03_2015_23_59 VALUES LESS THAN (1426546799000000000) ENGINE = MyISAM,
PARTITION 17_03_2015_23_59 VALUES LESS THAN (1426633199000000000) ENGINE = MyISAM,
PARTITION 18_03_2015_23_59 VALUES LESS THAN (1426719599000000000) ENGINE = MyISAM)
*/
The system is currently logging up to 4000 parameters per second into the database (into different tables; which table is decided in stored procedures). Every 5 minutes, every hour and once a day, different scripts are called to analyse the logged data, and data continues to be written to the tables while they run. This results in some heavy load right now. Is there a chance that switching from MyISAM to InnoDB (or another engine) would improve performance?
Thanks for your help!

For logging quickly followed by analysis...
Gather the data into a MyISAM table with no indexes. After 5 min (1.2M rows!):
Analyze it into InnoDB "Summary Table(s)".
DROP TABLE or TRUNCATE TABLE.
The analysis would be put into other table(s). These would have summary information and be much smaller than 1.2M rows.
To get hourly data, summarize the summary table(s). But don't create "hourly" tables; simply fetch and recalculate as needed.
Here are some related discussions: High speed ingestion and Summary Tables.
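A minimal sketch of that gather/summarize/truncate cycle, assuming a hypothetical staging table data_staging and summary table data_5min; the names and columns are illustrative, not taken from the schema above.
-- Staging table with no indexes, so the ~4000 inserts/second stay as cheap as possible.
CREATE TABLE IF NOT EXISTS data_staging (
  id_para int unsigned NOT NULL,
  t_s     int unsigned NOT NULL,
  value   varchar(255) NOT NULL
) ENGINE=MyISAM;
-- Every 5 minutes: fold the ~1.2M raw rows into a much smaller InnoDB summary table...
INSERT INTO data_5min (id_para, t_slot, row_count, min_value, max_value)
SELECT id_para,
       t_s - (t_s % 300) AS t_slot,   -- 5-minute time bucket
       COUNT(*), MIN(value), MAX(value)
FROM data_staging
GROUP BY id_para, t_slot;
-- ...then empty the staging table for the next interval.
TRUNCATE TABLE data_staging;
If rows keep arriving while the summary runs, an atomic RENAME TABLE swap to a spare staging table avoids losing the rows inserted between the INSERT ... SELECT and the TRUNCATE.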

How to partition a table by year and then subpartition by month in mysql 8

I have a table that contains a month and a year column.
I have a query which usually looks something like WHERE month=1 AND year=2022.
Given how large this table is, I would like to make it more efficient using partitions and subpartitions.
Table 1 (unpartitioned)
Querying the data I need took around 2 minutes and 30 seconds.
CREATE TABLE `table_1` (
`id` int NOT NULL AUTO_INCREMENT,
`entity_id` varchar(36) NOT NULL,
`entity_type` varchar(36) NOT NULL,
`score` decimal(4,3) NOT NULL,
`month` int NOT NULL DEFAULT '0',
`year` int NOT NULL DEFAULT '0',
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`deleted_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `idx_month_year` (`month`,`year`, `entity_type`)
)
Partitioning by "month"
Querying the data I need took around 21 seconds (a big improvement).
CREATE TABLE `table_1` (
`id` int NOT NULL AUTO_INCREMENT,
`entity_id` varchar(36) NOT NULL,
`entity_type` varchar(36) NOT NULL,
`score` decimal(4,3) NOT NULL,
`month` int NOT NULL DEFAULT '0',
`year` int NOT NULL DEFAULT '0',
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`deleted_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`,`month`),
KEY `idx_month_year` (`month`,`year`, `entity_type`)
) ENGINE=InnoDB AUTO_INCREMENT=21000001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
/*!50100 PARTITION BY LIST (`month`)
(PARTITION p0 VALUES IN (0) ENGINE = InnoDB,
PARTITION p1 VALUES IN (1) ENGINE = InnoDB,
PARTITION p2 VALUES IN (2) ENGINE = InnoDB,
PARTITION p3 VALUES IN (3) ENGINE = InnoDB,
PARTITION p4 VALUES IN (4) ENGINE = InnoDB,
PARTITION p5 VALUES IN (5) ENGINE = InnoDB,
PARTITION p6 VALUES IN (6) ENGINE = InnoDB,
PARTITION p7 VALUES IN (7) ENGINE = InnoDB,
PARTITION p8 VALUES IN (8) ENGINE = InnoDB,
PARTITION p9 VALUES IN (9) ENGINE = InnoDB,
PARTITION p10 VALUES IN (10) ENGINE = InnoDB,
PARTITION p11 VALUES IN (11) ENGINE = InnoDB,
PARTITION p12 VALUES IN (12) ENGINE = InnoDB) */
I would like to see if I can improve the performance even further by partitioning by year and then subpartitioning by month. How can I do that?
I'm not sure the existing question "Partition by year and sub-partition by month mysql" is relevant: it has no accepted answer and looks to be specific to MySQL 5.x and PHP. I'm asking about MySQL 8: have there been no changes since then regarding partitioning, subpartitioning, LIST COLUMNS, RANGE COLUMNS, etc. that could help me?
The broader query I'm making:
SELECT
table_1.entity_id AS entity_id,
table_1.entity_type,
table_1.score
FROM table_1
WHERE table_1.month = 12 AND table_1.year = 2022
AND table_1.score > 0
AND table_1.entity_type IN ('type1', 'type2', 'type3', 'type4') # only ever 4 types; usually all 4 are present in the query
To answer your question directly, below is example syntax that accomplishes the subpartitioning. Notice the PRIMARY KEY must include all columns used for partitioning or subpartitioning. Read the manual on subpartitioning for more information: https://dev.mysql.com/doc/refman/8.0/en/partitioning-subpartitions.html
Schema (MySQL v8.0)
CREATE TABLE `table_1` (
`id` int NOT NULL AUTO_INCREMENT,
`entity_id` varchar(36) NOT NULL,
`entity_type` varchar(36) NOT NULL,
`score` decimal(4,3) NOT NULL,
`month` int NOT NULL DEFAULT '0',
`year` int NOT NULL DEFAULT '0',
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`deleted_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`,`month`, `year`),
KEY `idx_month_year` (`month`,`year`, `score`, `entity_type`)
) ENGINE=InnoDB AUTO_INCREMENT=21000001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
PARTITION BY LIST (`month`)
SUBPARTITION BY HASH(`year`)
SUBPARTITIONS 10 (
PARTITION p0 VALUES IN (0) ENGINE = InnoDB,
PARTITION p1 VALUES IN (1) ENGINE = InnoDB,
PARTITION p2 VALUES IN (2) ENGINE = InnoDB,
PARTITION p3 VALUES IN (3) ENGINE = InnoDB,
PARTITION p4 VALUES IN (4) ENGINE = InnoDB,
PARTITION p5 VALUES IN (5) ENGINE = InnoDB,
PARTITION p6 VALUES IN (6) ENGINE = InnoDB,
PARTITION p7 VALUES IN (7) ENGINE = InnoDB,
PARTITION p8 VALUES IN (8) ENGINE = InnoDB,
PARTITION p9 VALUES IN (9) ENGINE = InnoDB,
PARTITION p10 VALUES IN (10) ENGINE = InnoDB,
PARTITION p11 VALUES IN (11) ENGINE = InnoDB,
PARTITION p12 VALUES IN (12) ENGINE = InnoDB
);
Using EXPLAIN on your query reveals that the query references only one subpartition.
Query #1
EXPLAIN
SELECT
table_1.entity_id AS entity_id,
table_1.entity_type,
table_1.score
FROM table_1
WHERE table_1.month = 12
AND table_1.year = 2022
AND table_1.score > 0
AND table_1.entity_type IN ('type1', 'type2', 'type3', 'type4');
id: 1
select_type: SIMPLE
table: table_1
partitions: p12_p12sp2
type: range
possible_keys: idx_month_year
key: idx_month_year
key_len: 11
ref: NULL
rows: 1
filtered: 100
Extra: Using index condition
The partitions field of the EXPLAIN shows that it accesses only partition p12_p12sp2. The year the query references, 2022, modulo the number of subpartitions, 10, is 2, so the rows are read from subpartition 2.
In addition to the partitioning by month and year, it is also helpful to use an index. In this case, I added score to the index so it would filter out rows where score <= 0. The note in the EXPLAIN "Using index condition" shows that it is delegating further filtering on entity_type to the storage engine. Though in your example, you said there are only four values for entity type, and all four are selected, so that condition won't filter out any rows anyway.
View on DB Fiddle
Re your questions in comments below:
I'm a little bit confused about SUBPARTITIONS 10; why 10?
It's just an example. You can choose a different number of subpartitions. Whatever you feel is required to reduce the search as much as you want.
To be honest, I've never encountered a situation that required subpartitioning at all, if the search is also optimized with indexes. So I have no guidance on what is an appropriate number of subpartitions.
It's your responsibility to test performance until you are satisfied.
I'm also a bit confused about the partition name p12_p12sp2; how do I know, from looking at that, that it selected the partition with year 2022?
The query has a condition year = 2022.
There are 10 subpartitions in my example.
Hash partitioning just uses the integer value to be partitioned, modulus the number of partitions.
2022 modulus 10 is 2. Hence the partition ending in ...sp2 is the one used.
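As a sanity check, you can also ask the server which subpartitions exist and roughly how many rows each holds; this is a generic information_schema query with placeholder schema/table names.
-- Subpartition names default to <partition>sp<N>; TABLE_ROWS is approximate for InnoDB.
SELECT PARTITION_NAME, SUBPARTITION_NAME, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'your_db'        -- placeholder
  AND TABLE_NAME   = 'table_1'
  AND PARTITION_NAME = 'p12';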
I also came across anothermysqldba.blogspot.com/2014/12/… Do you know how yours differs from what is shown there? (Bear in mind that blog is from 2014.)
They chose to name the subpartitions. There's no need to do that.
Would there be any performance difference in having a single date, e.g. (2022-12-21), instead of separate month and year columns?
That depends on the query, and I'll leave it to you to test. Any predictions I make won't be accurate with your data on your server.
I can also see that you partition by month and subpartition by year, as opposed to partitioning by year and subpartitioning by month. Can you explain the reasoning?
Subpartitioning works only if the outer partitions are LIST or RANGE partitions, and the subpartitions are HASH or KEY partitions. This is in the manual page I linked to.
There are a finite number of months (12). This makes it easy to partition by LIST as you did. You won't ever need more partitions. If you had partitioned by YEAR as the outer partition, you would have needed to specify year values in the list, and this is a growing set, so you would periodically have to alter the table to extend the list or range to account for new years.
Whereas when partitioning by HASH for the subpartitioning, the new year values are mapped into the finite set of subpartitions, so it's okay that it's not a finite list. You won't have to alter table to repartition (unless you want to change the number of subpartitions).
Splitting a date into columns is usually counterproductive. It is much easier to split during SELECT.
PARTITIONing is usually useless for performance of any SELECT.
When partitioning (or unpartitioning), the indexes usually need changing.
For that query, I recommend a combined date column,
WHERE date >= '2022-01-01'
AND date < '2022-01-01' + INTERVAL 1 MONTH
and some INDEX starting with date.
(You probably have other queries; let's see some of them; they may need a different index.)
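As a sketch of that approach, assuming a new DATE column (here called score_date; the column and index names are illustrative) and that the backfill would be batched on a table this size:
-- Add the combined date column plus an index that starts with it.
ALTER TABLE table_1
  ADD COLUMN score_date DATE NOT NULL DEFAULT '2000-01-01',
  ADD INDEX idx_date_type (score_date, entity_type, score);
-- Backfill from the existing columns (the day of month is unknown, so use the 1st).
UPDATE table_1
SET score_date = CONCAT(year, '-', LPAD(month, 2, '0'), '-01')
WHERE month BETWEEN 1 AND 12;
-- The query stays sargable: no function wraps the indexed column.
SELECT entity_id, entity_type, score
FROM table_1
WHERE score_date >= '2022-12-01'
  AND score_date <  '2022-12-01' + INTERVAL 1 MONTH
  AND score > 0
  AND entity_type IN ('type1', 'type2', 'type3', 'type4');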
Covering index -- This is an index that contains all the columns found anywhere in the SELECT. It may be better (faster) than having only the columns needed for the WHERE or WHERE + GROUP BY + ORDER BY. It depends on a lot of variables.
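For the query shown earlier, a covering index on the existing month/year schema might look like this (the index name is illustrative, and whether it beats the narrower index has to be measured):
-- Every column the query touches (WHERE + SELECT list) is in the index,
-- so InnoDB can answer it from the index without visiting the base rows.
ALTER TABLE table_1
  ADD INDEX idx_cover (month, year, entity_type, score, entity_id);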
Order of columns in an index (or PK): The leftmost column(s) have priority. That is the order of the index rows on disk. PK(id, date) is useful if looking up by id (in the WHERE), but not if you are just searching by date.
Sargable -- Hiding a column inside a function disables the use of an index. That is, MONTH(date) cannot use INDEX(date).
Blogs -- Index Cookbook and Partition
Test plan
I recommend you time all your queries against a variety of Create Tables.
For the WHERE clause:
The order of ANDs does not matter.
When using IN, a single value is equivalent to = and optimizes better; multiple values may optimize more poorly. As Bill hints at, when the IN list contains all the possible options you should eliminate the clause, since the Optimizer is not smart enough to do that for you. So be sure to test with 1 and/or many items, so as to be realistic for your app.
For the table
Try Partition BY year + Subpartition by month.
Try Partition by a column that is the combination of year and month.
Try without partitioning.
For indexes
Order of the columns (in a composite index) does matter, so try different orderings.
When partitioning, be sure to tack onto the end of the PK the partition key(s).
A partitioned table needs different indexes than a non-partitioned table. That is, what works well for one may work poorly for the other.
Simply use something like this pattern to test various layouts:
CREATE TABLE (( a new layout with or without partitioning and with indexes ))
INSERT INTO test_table SELECT ... FROM real_table;
Change the "..." to adapt to any extra/missing columns in test_table
SELECT ...
Run various 'real' queries
Run each query twice (caching sometimes messes with the timing)
Report the results -- If you provide sufficient info (CREATE TABLE and SELECT), I may have suggestions on further speeding up the test (whether it is partitioned or not).
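A minimal sketch of that test harness, assuming a candidate layout named test_table (all names are placeholders; adapt the columns, indexes and partitioning to whichever variant is being timed):
-- 1. One candidate layout: LIST by month, subpartitioned by HASH of year.
CREATE TABLE test_table (
  id int NOT NULL AUTO_INCREMENT,
  entity_id varchar(36) NOT NULL,
  entity_type varchar(36) NOT NULL,
  score decimal(4,3) NOT NULL,
  month int NOT NULL DEFAULT 0,
  year int NOT NULL DEFAULT 0,
  PRIMARY KEY (id, month, year),              -- partition columns must be in the PK
  KEY idx_month_year (month, year, entity_type, score)
) ENGINE=InnoDB
PARTITION BY LIST (month)
SUBPARTITION BY HASH (year) SUBPARTITIONS 10 (
  PARTITION p0 VALUES IN (0),   PARTITION p1 VALUES IN (1),
  PARTITION p2 VALUES IN (2),   PARTITION p3 VALUES IN (3),
  PARTITION p4 VALUES IN (4),   PARTITION p5 VALUES IN (5),
  PARTITION p6 VALUES IN (6),   PARTITION p7 VALUES IN (7),
  PARTITION p8 VALUES IN (8),   PARTITION p9 VALUES IN (9),
  PARTITION p10 VALUES IN (10), PARTITION p11 VALUES IN (11),
  PARTITION p12 VALUES IN (12)
);
-- 2. Load it with the real data.
INSERT INTO test_table (id, entity_id, entity_type, score, month, year)
SELECT id, entity_id, entity_type, score, month, year
FROM table_1;
-- 3. Run each real query twice and record both timings (the second run shows the warm-cache cost).
SELECT entity_id, entity_type, score
FROM test_table
WHERE month = 12 AND year = 2022
  AND score > 0
  AND entity_type IN ('type1', 'type2', 'type3', 'type4');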

Does mysql process queries involving multiple partitions parallely or serially for each partition?

This is my table schema.
CREATE TABLE `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`created_at` datetime DEFAULT NULL,
`account_id` tinyint(4) NOT NULL,
PRIMARY KEY (`id`,`account_id`)
) ENGINE=InnoDB AUTO_INCREMENT=25600033 DEFAULT CHARSET=utf8
PARTITION BY LIST (account_id)
(PARTITION p0 VALUES IN (1) ENGINE = InnoDB,
PARTITION p1 VALUES IN (2) ENGINE = InnoDB,
PARTITION p2 VALUES IN (3) ENGINE = InnoDB)
The query is
select * from users where account_id in (1,2);
Will MySQL check partitions 1 and 2 in parallel or one by one?
Yes, one by one.
There is no parallelism in a single connection in MySQL. Not for UNION, not for PARTITIONs. Not (so far) in version 8.0.
There is probably no performance to be gained by PARTITION BY LIST. Further comments: http://mysql.rjweb.org/doc.php/partitionmaint
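A quick way to see which partitions a query touches is the partitions column of EXPLAIN; both are listed, but within a single connection they are still read one after the other (the plan details below are illustrative):
EXPLAIN SELECT * FROM users WHERE account_id IN (1, 2);
-- partitions: p0,p1  -- with no index on account_id, each listed partition
-- gets scanned in turn; pruning limits which partitions are read, not how they run.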

mysql repartitioned table much larger

I have a very large table on a mysql 5.6.10 instance (roughly 480 million rows).
The storage engine is InnoDB. (Table and DB Default).
The table was partitioned by hash of merchantId (a bigint client identifier), which helped with queries related to a single merchant. Due to significant performance degradation when queries spanned multiple merchants, I decided to repartition the table by range on ACTION_DATE (the DATE that an activity occurred). Thinking I was being clever, I decided to add a few (5) new fields for future use (unused_varchar1 varchar(200), etc.); since the table is so large, adding new fields essentially requires a rebuild anyway, so why not...
I created the new table structure as _new and dumped the existing table to a secondary server using mysqldump. I then used an awk script to finesse the name and a few other details to fit the new table (change tableName to tableName_new), and started the load.
The existing table was approximately 430 GB. The text file, similarly, was about 403 GB. I was therefore surprised that the new table ended up taking about 840 GB!! (Based on the Linux file size of the .ibd files.)
So, I have 2 basic questions, which really amount to why and what now...
I imagine that the new table is larger because the dump file was in the order of the previous partitioning (merchantId) while the load was inserting into the new partitioning (activity date), creating a semi-random insertion order. The randomness led MySQL to leave plenty of space (roughly 50%) in the pages for future insertions. (I'm a little fuzzy on the terminology here, having spent much more of my career with SQL Server databases than MySQL databases...) I'm not able to find any internal statistics in MySQL for free space per page. The INFORMATION_SCHEMA.TABLES DATA_FREE stat is an unconvincing 68 MB.
If it helps these are the relevant stats from I_S.TABLES:
TABLE_TYPE: BASE TABLE
Engine: InnoDB
VERSION: 10
ROW_FORMAT: Compact
TABLE_ROWS: 488,094,271
AVG_ROW_LENGTH: 1,564
DATA_LENGTH: 763,509,358,592 (711 GB)
INDEX_LENGTH: 100,065,574,912 (93.19 GB)
DATA_FREE: 68,157,440 (0.06 GB)
I realize that this doesn't add up to 840 GB, but as I said, that was the size of the .ibd files, which seems to be slightly different from the I_S.TABLES stats. Either way, it is significantly more than the text dump file.
I digress...
My question is whether my theory, that the repartitioning explains the roughly doubled size, is correct. Or is there another explanation? I think the extra columns (2 BIGINT, 2 VARCHAR(200), 1 DATE) are not the culprit, since they are all NULL. My napkin calculation was that the additional columns would add < 9 GB. Likewise, one additional index on UID should be a relatively small addition.
The follow-up question is what I can do now if I want to try to compact the table. (The server now has only about 385 GB free...)
If I repeated the procedure, dump to file, reload, this time in the current partition order, would I end up with a table more like the size of my original table ~430 GB?
Following are relevant parts of DDL.
OLD TABLE:
CREATE TABLE table_name (
`AUTO_SEQ` bigint(20) NOT NULL,
`MERCHANT_ID` bigint(20) NOT NULL,
`AFFILIATE_ID` bigint(20) DEFAULT NULL,
`PROGRAM_ID` bigint(20) NOT NULL,
`ACTION_DATE` date DEFAULT NULL,
`UID` varchar(128) DEFAULT NULL,
... additional columns ...
PRIMARY KEY (`AUTO_SEQ`,`MERCHANT_ID`,`PROGRAM_ID`),
KEY `oc_rpt_mpad_idx` (`MERCHANT_ID`,`PROGRAM_ID`,`ACTION_DATE`,`AFFILIATE_ID`),
KEY `oc_rpt_mapd` (`MERCHANT_ID`,`ACTION_DATE`),
KEY `oc_rpt_apda_idx` (`AFFILIATE_ID`,`PROGRAM_ID`,`ACTION_DATE`,`MERCHANT_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY HASH (merchant_id)
PARTITIONS 16 */
NEW TABLE:
CREATE TABLE `tableName_new` (
`AUTO_SEQ` bigint(20) NOT NULL,
`MERCHANT_ID` bigint(20) NOT NULL,
`AFFILIATE_ID` bigint(20) DEFAULT NULL,
`PROGRAM_ID` bigint(20) NOT NULL,
`ACTION_DATE` date NOT NULL DEFAULT '0000-00-00',
`UID` varchar(128) DEFAULT NULL,
... additional columns...
# NEW COLUMNS (ALL NULL)
`UNUSED_BIGINT1` bigint(20) DEFAULT NULL,
`UNUSED_BIGINT2` bigint(20) DEFAULT NULL,
`UNUSED_VARCHAR1` varchar(200) DEFAULT NULL,
`UNUSED_VARCHAR2` varchar(200) DEFAULT NULL,
`UNUSED_DATE1` date DEFAULT NULL,
PRIMARY KEY (`AUTO_SEQ`,`ACTION_DATE`),
KEY `oc_rpt_mpad_idx` (`MERCHANT_ID`,`PROGRAM_ID`,`ACTION_DATE`,`AFFILIATE_ID`),
KEY `oc_rpt_mapd` (`ACTION_DATE`),
KEY `oc_rpt_apda_idx` (`AFFILIATE_ID`,`PROGRAM_ID`,`ACTION_DATE`,`MERCHANT_ID`),
KEY `oc_uid` (`UID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50500 PARTITION BY RANGE COLUMNS(ACTION_DATE)
(PARTITION p01 VALUES LESS THAN ('2012-01-01') ENGINE = InnoDB,
PARTITION p02 VALUES LESS THAN ('2012-04-01') ENGINE = InnoDB,
PARTITION p03 VALUES LESS THAN ('2012-07-01') ENGINE = InnoDB,
PARTITION p04 VALUES LESS THAN ('2012-10-01') ENGINE = InnoDB,
PARTITION p05 VALUES LESS THAN ('2013-01-01') ENGINE = InnoDB,
PARTITION p06 VALUES LESS THAN ('2013-04-01') ENGINE = InnoDB,
PARTITION p07 VALUES LESS THAN ('2013-07-01') ENGINE = InnoDB,
PARTITION p08 VALUES LESS THAN ('2013-10-01') ENGINE = InnoDB,
PARTITION p09 VALUES LESS THAN ('2014-01-01') ENGINE = InnoDB,
PARTITION p10 VALUES LESS THAN ('2014-04-01') ENGINE = InnoDB,
PARTITION p11 VALUES LESS THAN ('2014-07-01') ENGINE = InnoDB,
PARTITION p12 VALUES LESS THAN ('2014-10-01') ENGINE = InnoDB,
PARTITION p13 VALUES LESS THAN ('2015-01-01') ENGINE = InnoDB,
PARTITION p14 VALUES LESS THAN ('2015-04-01') ENGINE = InnoDB,
PARTITION p15 VALUES LESS THAN ('2015-07-01') ENGINE = InnoDB,
PARTITION p16 VALUES LESS THAN ('2015-10-01') ENGINE = InnoDB,
PARTITION p17 VALUES LESS THAN ('2016-01-01') ENGINE = InnoDB,
PARTITION p18 VALUES LESS THAN ('2016-04-01') ENGINE = InnoDB,
PARTITION p19 VALUES LESS THAN ('2016-07-01') ENGINE = InnoDB,
PARTITION p20 VALUES LESS THAN ('2016-10-01') ENGINE = InnoDB,
PARTITION p21 VALUES LESS THAN ('2017-01-01') ENGINE = InnoDB,
PARTITION p22 VALUES LESS THAN ('2017-04-01') ENGINE = InnoDB,
PARTITION p23 VALUES LESS THAN ('2017-07-01') ENGINE = InnoDB,
PARTITION p24 VALUES LESS THAN ('2017-10-01') ENGINE = InnoDB,
PARTITION p25 VALUES LESS THAN ('2018-01-01') ENGINE = InnoDB,
PARTITION p26 VALUES LESS THAN ('2018-04-01') ENGINE = InnoDB,
PARTITION p27 VALUES LESS THAN ('2018-07-01') ENGINE = InnoDB,
PARTITION p28 VALUES LESS THAN ('2018-10-01') ENGINE = InnoDB,
PARTITION p29 VALUES LESS THAN ('2019-01-01') ENGINE = InnoDB,
PARTITION p30 VALUES LESS THAN (MAXVALUE) ENGINE = InnoDB) */
adding new fields essentially requires a rebuild anyway, so why not
I predict you will regret it.
The existing table was approximately 430 GB.
According to the size of the .ibd files? Or SHOW TABLE STATUS? Or the dump size, which would be bogus (see below)?
it is significantly more than the text dump file
The lengths in TABLE STATUS include several flavors of overhead (BTree, free space, extra extents, etc), plus the indexes (which are not in the dump file).
Also, think about a BIGINT that contains 1234. The .ibd will use 8 bytes plus some overhead; the dump will have 5 bytes ('1234' plus a comma). That leads to my next point...
Are there really more than 4 billion merchants? merchant_id is BIGINT (8 bytes); INT UNSIGNED is only 4 bytes and allows 0..4 billion.
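If the ids really do fit in 32 bits, a sketch of the shrink (this rebuilds the table, so it would be worth folding into any other restructuring):
-- Hypothetical: saves 4 bytes per row in the data, plus 4 bytes per row in every
-- secondary index that contains MERCHANT_ID.
ALTER TABLE tableName_new
  MODIFY MERCHANT_ID int unsigned NOT NULL;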
What's in uid? If it is some sort of UUID, it seems awfully long.
Do you happen to have the "stats from I_S.TABLES" from the old table?
So far, I have not addressed "whether the repartioning explains the roughly doubled size".
extra columns (2 Bigint, 2 Varchar(200), 1 Date)
That's about 29 bytes per row (15GB of Data_length), perhaps less since they are NULL.
You seem to be using the default ROW_FORMAT. I suspect this did not change in the conversion.
It is usually unwise to start an index with the "partition key" (merchant_id or action_date). This is because you are already "pruning" on that key; you are better off starting the index with something else. (Caveat: There are exceptions.)
Check the CHARACTER SET and datatype of the "additional columns". If something changed, that could be significant.
would I end up with a table more like the size of my original table ~430 GB?
Alas, until we figure out why it grew, I can't answer that question.
I'm more interested in whether random insertion vs. the partition (ACTION_DATE) would lead to wasted space / half empty pages.
I recommend you try the following experiment. Do not use OPTIMIZE PARTITION; see http://bugs.mysql.com/bug.php?id=42822. Instead, do this to defragment one partition (such as p02):
ALTER TABLE table_name REBUILD PARTITION p02;
You could do this SELECT before and after in order to see the change(s) to the PARTITIONs:
SELECT *
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'dbname' -- change as needed
AND TABLE_NAME = 'table_name' -- change as needed
ORDER BY PARTITION_ORDINAL_POSITION,
SUBPARTITION_ORDINAL_POSITION;
It's a generic query to get the table-status-like info for the partitions of one table.
If the REBUILD cuts the partition by about 50%, then we have the answer.
Generally, randomly inserting into a BTree should leave you with about 69% (not 50%) of the "full" size. Hence, I'm not 'expecting' this to be the solution/answer.

MySQL Partitioning showing low performance

I was trying to check whether implementing MySQL database partitioning is beneficial for our application or not. I have heard a lot about the benefits of using partitioning for large numbers of records.
But surprisingly, when doing load testing after partitioning was implemented, the application's response time got about 3 times worse. Could someone please help with the reason why this may happen?
Let me explain in detail:
Below is the DDL of the table when partitioning was ‘not’ in place.
CREATE TABLE `myTable` (
`column1` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`column2` char(3) NOT NULL,
`column3` char(3) NOT NULL,
`column4` char(2) NOT NULL,
`column5` smallint(4) unsigned NOT NULL,
`column6` date NOT NULL,
`column7` varchar(2) NOT NULL,
`column8` tinyint(3) unsigned NOT NULL COMMENT 'Seat Count Ranges from 0-9.',
`column9` varchar(2) NOT NULL,
`column10` varchar(4) NOT NULL,
`column11` char(2) NOT NULL,
`column12` datetime NOT NULL,
`column13` datetime DEFAULT NULL,
PRIMARY KEY (`column1`),
KEY `index1` (`column2`,`column3`,`column4`,`column5`,`column7`,`column6`),
KEY `index2` (`column2`,`column3`,`column6`,`column4`)
) ENGINE=InnoDB AUTO_INCREMENT=342024674 DEFAULT CHARSET=latin1;
And below is the DDL of the same table after implementing ‘Range’ partitioning based on a date field.
CREATE TABLE `myTable` (
`column1` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`column2` char(3) NOT NULL,
`column3` char(3) NOT NULL,
`column4` char(2) NOT NULL,
`column5` smallint(4) unsigned NOT NULL,
`column6` date NOT NULL,
`column7` varchar(2) NOT NULL,
`column8` tinyint(3) unsigned NOT NULL COMMENT 'Seat Count Ranges from 0-9.',
`column9` varchar(2) NOT NULL,
`column10` varchar(4) NOT NULL,
`column11` char(2) NOT NULL,
`column12` datetime NOT NULL,
`column13` datetime DEFAULT NULL,
PRIMARY KEY (`column1`,`column6`),
KEY `index1` (`column2`,`column3`,`column4`,`column5`,`column7`,`column6`),
KEY `index2` (`column2`,`column3`,`column6`,`column4`)
) ENGINE=InnoDB AUTO_INCREMENT=342024674 DEFAULT CHARSET=latin1
PARTITION BY RANGE COLUMNS(`column6`)
(PARTITION date_jul_11 VALUES LESS THAN ('2011-08-01') ENGINE = InnoDB,
PARTITION date_aug_11 VALUES LESS THAN ('2011-09-01') ENGINE = InnoDB,
PARTITION date_sep_11 VALUES LESS THAN ('2011-10-01') ENGINE = InnoDB,
PARTITION date_oct_11 VALUES LESS THAN ('2011-11-01') ENGINE = InnoDB,
PARTITION date_nov_11 VALUES LESS THAN ('2011-12-01') ENGINE = InnoDB,
PARTITION date_dec_11 VALUES LESS THAN ('2012-01-01') ENGINE = InnoDB,
PARTITION date_jan_12 VALUES LESS THAN ('2012-02-01') ENGINE = InnoDB,
PARTITION date_feb_12 VALUES LESS THAN ('2012-03-01') ENGINE = InnoDB,
PARTITION date_mar_12 VALUES LESS THAN ('2012-04-01') ENGINE = InnoDB,
PARTITION date_apr_12 VALUES LESS THAN ('2012-05-01') ENGINE = InnoDB,
PARTITION date_may_12 VALUES LESS THAN ('2012-06-01') ENGINE = InnoDB,
PARTITION date_jun_12 VALUES LESS THAN ('2012-07-01') ENGINE = InnoDB,
PARTITION date_jul_12 VALUES LESS THAN ('2012-08-01') ENGINE = InnoDB,
PARTITION date_aug_12 VALUES LESS THAN ('2012-09-01') ENGINE = InnoDB,
PARTITION date_sep_12 VALUES LESS THAN ('2012-10-01') ENGINE = InnoDB,
PARTITION date_oct_12 VALUES LESS THAN ('2012-11-01') ENGINE = InnoDB,
PARTITION date_nov_12 VALUES LESS THAN ('2012-12-01') ENGINE = InnoDB,
PARTITION date_dec_12 VALUES LESS THAN ('2013-01-01') ENGINE = InnoDB,
PARTITION date_jan_13 VALUES LESS THAN ('2013-02-01') ENGINE = InnoDB,
PARTITION date_feb_13 VALUES LESS THAN ('2013-03-01') ENGINE = InnoDB,
PARTITION date_mar_13 VALUES LESS THAN ('2013-04-01') ENGINE = InnoDB,
PARTITION date_apr_13 VALUES LESS THAN ('2013-05-01') ENGINE = InnoDB,
PARTITION date_may_13 VALUES LESS THAN ('2013-06-01') ENGINE = InnoDB,
PARTITION date_jun_13 VALUES LESS THAN ('2013-07-01') ENGINE = InnoDB,
PARTITION date_oth VALUES LESS THAN (MAXVALUE) ENGINE = InnoDB);
Below is a sample query which was used for doing the load testing to test the performance.
SELECT column8, column9
FROM myTable
WHERE column2 = ? AND column3 = ? AND column4 =? AND column5 = ? AND column7 = ? AND column6 = ?
LIMIT 1
The ? above were replaced with real values present in the database for testing.
Please note that the number of records in the ‘myTable’ table is around 342 million, and the test data used for the performance testing amounts to about 2 million records.
However, as I said, performance after implementing partitioning dropped by a shocking factor of 3. Any idea what may have caused this?
Also, please let me know if doing any further change in the table structure or indexing may help resolve this issue.
Remember, the goal of partitioning is to speed up queries where your query limits the number of partitions the result could be found in. I think the issue is the column6 = ? in your test query. I'm guessing that requiring an exact value, rather than a range, for column6 reduces your result set to very few rows. Therefore, in the process of narrowing down the partitions, you've already essentially found the result. And since the indexes are split across the multiple partitions, there is a cost to that narrowing process.
The kind of query you would expect to benefit from partitioning on column6 is one that returns a range of values, limited to a small number of partitions. For example, try something like this as a test query:
SELECT column8, column9
FROM myTable
WHERE column6 < ? AND column6 > ? AND column2 = ? AND column3 = ? AND column4 =? AND column5 = ?
where that column6 range spans around 2 partitions, and the total result count is expected to be reasonably large.
This might help: http://dev.mysql.com/tech-resources/articles/partitioning.html
Looking at this, there's several things I would consider.
The first, and most glaring issue is that the big benefit from partitioning comes when you spread your data across different devices (disks) - and there's no evidence of that from the code posted.
Next, your partitioning is hard coded to specific date ranges - hence you're going to have to come up with a better plan when date_oth starts to fill up.
AND column6 = ?
So you only tested the performance of data from a single partition? At best this will be no faster than with all the data in one table.
As Nathan points out, you are partitioning by column6 - but you don't have it at the front of any of your indexes, hence the DBMS must search the index in each partition to find the data - this is likely the reason why the performance is so poor. (I disagree that partitioning only helps range queries.)
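If you want to verify how many partitions a given test query actually touches, EXPLAIN shows the pruning (on 5.6 this is EXPLAIN PARTITIONS; later versions include the partitions column by default). The literal values below are placeholders for your test data:
EXPLAIN PARTITIONS
SELECT column8, column9
FROM myTable
WHERE column2 = 'AAA' AND column3 = 'BBB' AND column4 = 'CC'
  AND column5 = 1 AND column7 = 'DD' AND column6 = '2012-05-15'
LIMIT 1;
-- The partitions column should list only date_may_12; if it lists many
-- partitions, the WHERE clause is not pruning the way you expect.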

Convert to a Partitioned Table

I have the following table structure with live data in it:
CREATE TABLE IF NOT EXISTS `userstatistics` (
`user_id` int(10) unsigned NOT NULL,
`number_logons` int(7) unsigned NOT NULL DEFAULT '0',
`number_profileminiviews` int(7) unsigned NOT NULL DEFAULT '0',
`number_profilefullviews` int(7) unsigned NOT NULL DEFAULT '0',
`number_mailsreceived` int(7) unsigned NOT NULL DEFAULT '0',
`number_interestreceived` int(7) unsigned NOT NULL DEFAULT '0',
`number_favouratesreceived` int(7) unsigned NOT NULL DEFAULT '0',
`number_friendshiprequestreceived` int(7) unsigned NOT NULL DEFAULT '0',
`number_imchatrequestreceived` int(7) unsigned NOT NULL DEFAULT '0',
`yearweek` int(6) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`user_id`,`yearweek`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I want to convert this to a partitioned table with the following structure:
CREATE TABLE IF NOT EXISTS `userstatistics` (
`user_id` int(10) unsigned NOT NULL,
`number_logons` int(7) unsigned NOT NULL DEFAULT '0',
`number_profileminiviews` int(7) unsigned NOT NULL DEFAULT '0',
`number_profilefullviews` int(7) unsigned NOT NULL DEFAULT '0',
`number_mailsreceived` int(7) unsigned NOT NULL DEFAULT '0',
`number_interestreceived` int(7) unsigned NOT NULL DEFAULT '0',
`number_favouratesreceived` int(7) unsigned NOT NULL DEFAULT '0',
`number_friendshiprequestreceived` int(7) unsigned NOT NULL DEFAULT '0',
`number_imchatrequestreceived` int(7) unsigned NOT NULL DEFAULT '0',
`yearweek` int(6) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`user_id`,`yearweek`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (yearweek)
(PARTITION userstats_201108 VALUES LESS THAN (201108) ENGINE = InnoDB,
PARTITION userstats_201109 VALUES LESS THAN (201109) ENGINE = InnoDB,
PARTITION userstats_201110 VALUES LESS THAN (201110) ENGINE = InnoDB,
PARTITION userstats_201111 VALUES LESS THAN (201111) ENGINE = InnoDB,
PARTITION userstats_201112 VALUES LESS THAN (201112) ENGINE = InnoDB,
PARTITION userstats_201113 VALUES LESS THAN (201113) ENGINE = InnoDB,
PARTITION userstats_201114 VALUES LESS THAN (201114) ENGINE = InnoDB,
PARTITION userstats_201115 VALUES LESS THAN (201115) ENGINE = InnoDB,
PARTITION userstats_201116 VALUES LESS THAN (201116) ENGINE = InnoDB,
PARTITION userstats_201117 VALUES LESS THAN (201117) ENGINE = InnoDB,
PARTITION userstats_201118 VALUES LESS THAN (201118) ENGINE = InnoDB,
PARTITION userstats_201119 VALUES LESS THAN (201119) ENGINE = InnoDB,
PARTITION userstats_201120 VALUES LESS THAN (201120) ENGINE = InnoDB,
PARTITION userstats_201121 VALUES LESS THAN (201121) ENGINE = InnoDB,
PARTITION userstats_max VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */;
How can I do this conversion?
Simply changing the first line of the second SQL statement to
ALTER TABLE 'userstatistics' (
Would this do it?
Going from MySQL 5.0 to 5.1.
First, you need to be running MySQL 5.1 or later. MySQL 5.0 does not support partitioning.
Second, please be aware of the difference between single-quotes (which delimit strings and dates) and back-ticks (which delimit table and column identifiers in MySQL). Use the correct type where appropriate. I mention this, because your example uses the wrong type of quotes:
ALTER TABLE 'userstatistics' (
That should be:
ALTER TABLE `userstatistics` (
Finally, yes, you can restructure a table into partitions with ALTER TABLE. Here's an exact copy & paste from a statement I tested on MySQL 5.1.57:
ALTER TABLE userstatistics PARTITION BY RANGE (yearweek) (
PARTITION userstats_201108 VALUES LESS THAN (201108) ENGINE = InnoDB,
PARTITION userstats_201109 VALUES LESS THAN (201109) ENGINE = InnoDB,
PARTITION userstats_201110 VALUES LESS THAN (201110) ENGINE = InnoDB,
PARTITION userstats_201111 VALUES LESS THAN (201111) ENGINE = InnoDB,
PARTITION userstats_201112 VALUES LESS THAN (201112) ENGINE = InnoDB,
PARTITION userstats_201113 VALUES LESS THAN (201113) ENGINE = InnoDB,
PARTITION userstats_201114 VALUES LESS THAN (201114) ENGINE = InnoDB,
PARTITION userstats_201115 VALUES LESS THAN (201115) ENGINE = InnoDB,
PARTITION userstats_201116 VALUES LESS THAN (201116) ENGINE = InnoDB,
PARTITION userstats_201117 VALUES LESS THAN (201117) ENGINE = InnoDB,
PARTITION userstats_201118 VALUES LESS THAN (201118) ENGINE = InnoDB,
PARTITION userstats_201119 VALUES LESS THAN (201119) ENGINE = InnoDB,
PARTITION userstats_201120 VALUES LESS THAN (201120) ENGINE = InnoDB,
PARTITION userstats_201121 VALUES LESS THAN (201121) ENGINE = InnoDB,
PARTITION userstats_max VALUES LESS THAN MAXVALUE ENGINE = InnoDB);
Note that this causes a table restructure, so if you already have a lot of data in this table, it will take a while to run. Exactly how long depends on how much data you have, and your hardware speed, and other factors. Be aware that while the table is being restructured, it is locked and unavailable for reading and writing by other queries.
Look at http://dev.mysql.com/doc/refman/5.1/en/alter-table.html for ALTER TABLE, and in particular its partitioning options.
The ADD/DROP/COALESCE/REORGANIZE PARTITION clauses provide almost all the functions you need to manage your partitions.
Note that HASH partitioning can only be used with integer values.
ALTER TABLE ... ADD PARTITION creates no temporary table except when used with NDB tables. ADD or DROP operations for RANGE or LIST partitions are immediate operations or nearly so. ADD or COALESCE operations for HASH or KEY partitions copy data between changed partitions; unless LINEAR HASH or LINEAR KEY was used, this is much the same as creating a new table (although the operation is done partition by partition). REORGANIZE operations copy only changed partitions and do not touch unchanged ones.
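As a follow-on, a sketch of the routine maintenance this layout needs once new weeks arrive (the week numbers are examples): split the catch-all partition with REORGANIZE, and drop old weeks outright.
-- Split the MAXVALUE partition so a new week gets its own partition;
-- only userstats_max is copied, the existing partitions are untouched.
ALTER TABLE userstatistics
  REORGANIZE PARTITION userstats_max INTO (
    PARTITION userstats_201122 VALUES LESS THAN (201122),
    PARTITION userstats_max VALUES LESS THAN MAXVALUE
  );
-- Old weeks can be removed instantly, without a row-by-row DELETE:
ALTER TABLE userstatistics DROP PARTITION userstats_201108;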