I am quite new to the subject of partitioning, and the need has arisen because of the large amount of data that has accumulated.
Basically, it is an access control system. There are currently 20 departments and each department has approximately 100 users. The system records the date and time of entries and exits (from_date / to_date). My intention is to divide the table by department and then by month across the year.
Plan:
Partition the table by [ dep_id and date (from_date and to_date) ]
Problem
I have the following table.
CREATE TABLE `employee` (
`employee_id` smallint(5) NOT NULL,
`dep_id` int(11) NOT NULL,
`from_date` int(11) NOT NULL,
`to_date` int(11) NOT NULL,
KEY `index1` (`employee_id`,`from_date`,`to_date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The dates (from_date and to_date) are stored as UNIX timestamps (INT(11)).
I am looking to split the table across all the months of the year.
Is that possible?
MySQL 5.7
It is possible to use range partitioning on an integer column.
Assuming my_int_col holds Unix-style integer seconds since 1970-01-01, we could achieve monthly partitions with something like this:
PARTITION BY RANGE (my_int_col)
( PARTITION p20180101 VALUES LESS THAN ( UNIX_TIMESTAMP('2018-01-01 00:00') )
, PARTITION p20180201 VALUES LESS THAN ( UNIX_TIMESTAMP('2018-02-01 00:00') )
, PARTITION p20180301 VALUES LESS THAN ( UNIX_TIMESTAMP('2018-03-01 00:00') )
, PARTITION p20180401 VALUES LESS THAN ( UNIX_TIMESTAMP('2018-04-01 00:00') )
, PARTITION p20180501 VALUES LESS THAN ( UNIX_TIMESTAMP('2018-05-01 00:00') )
, PARTITION p20180601 VALUES LESS THAN ( UNIX_TIMESTAMP('2018-06-01 00:00') )
)
Be careful of the time_zone setting of the session. Those date literals will be interpreted as values in the current time_zone... e.g. if you want those to be UTC datetime, time_zone should be +00:00.
Or, replace the UNIX_TIMESTAMP() expression with a literal integer value... that's what MySQL is going to do with the UNIX_TIMESTAMP() expressions.
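For example, you can check what literal a given boundary becomes in your session before writing it down (a quick sanity check, not part of the partition DDL itself):

SET time_zone = '+00:00';
SELECT UNIX_TIMESTAMP('2018-01-01 00:00');   -- 1514764800, the literal you could use instead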
Obviously, you can name the partitions whatever you want.
Note: applying partitioning to an existing table will require MySQL to create an entire copy of the table, holding an exclusive lock on the original table while the operation completes. So you will need sufficient storage (disk) space, and a window of time for the operation to complete.
It's possible to create a new table that is partitioned, and then copy the older data a chunk at a time. But make the chunks reasonably sized, to avoid ballooning the ibdata1 with large transactions. And then do some RENAME TABLE statements to move the old table out, and move the new table in.
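A minimal sketch of that copy-and-swap approach, assuming a staging table named employee_new and monthly chunks (names and boundaries are illustrative):

CREATE TABLE employee_new LIKE employee;

ALTER TABLE employee_new
PARTITION BY RANGE (from_date)
( PARTITION p20180101 VALUES LESS THAN ( UNIX_TIMESTAMP('2018-01-01 00:00') )
, PARTITION p20180201 VALUES LESS THAN ( UNIX_TIMESTAMP('2018-02-01 00:00') )
, PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- copy the old rows one chunk (here, one month) at a time; repeat per month
INSERT INTO employee_new
SELECT * FROM employee
 WHERE from_date >= UNIX_TIMESTAMP('2017-12-01 00:00')
   AND from_date <  UNIX_TIMESTAMP('2018-01-01 00:00');

-- when all chunks are copied, swap the tables in a single step
RENAME TABLE employee TO employee_old, employee_new TO employee;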
Some caveats to note with partitioned tables: there's no foreign key support, and there's no guarantee that partitioned table will give better DML performance than a non-partitioned table.
Strategic indexes and carefully planned queries are the key to performance with "very large" tables, and this is true of partitioned tables as well.
Partitioning isn't a magic bullet for performance problems that some novices would like it to be.
As far as creating subpartitions within partitions, I wouldn't recommend it.
Related
I have a huge table that stores many tracked events, such as a user click.
The table is already in the 10s of millions, and it's growing larger every day.
The queries are starting to get slower when I try to fetch events from a large timeframe, and after reading quite a bit on the subject I understand that partitioning the table may boost the performance.
What I want to do is partition the table on a per month basis.
I have only found guides that show how to partition each month manually. Is there a way to just tell MySQL to partition by month so that it does so automatically?
If not, what is the command to do it manually, given that the column I want to partition by is a DATETIME?
As explained in the manual: http://dev.mysql.com/doc/refman/5.6/en/partitioning-overview.html
This can be done with hash partitioning on the month value.
CREATE TABLE ti (id INT, amount DECIMAL(7,2), tr_date DATE)
ENGINE=INNODB
PARTITION BY HASH( MONTH(tr_date) )
PARTITIONS 6;
Do note that this partitions only by month and not by year; also, there are only 6 partitions (so 6 months) in this example.
And for partitioning an existing table (manual: https://dev.mysql.com/doc/refman/5.7/en/alter-table-partition-operations.html):
ALTER TABLE ti
PARTITION BY HASH( MONTH(tr_date) )
PARTITIONS 6;
Querying can be done both from the entire table:
SELECT * from ti;
Or from specific partitions:
SELECT * from ti PARTITION (HASH(MONTH(some_date)));
CREATE TABLE `mytable` (
`post_id` int DEFAULT NULL,
`viewid` int DEFAULT NULL,
`user_id` int DEFAULT NULL,
`post_Date` datetime DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
PARTITION BY RANGE (extract(year_month from `post_Date`))
(PARTITION P0 VALUES LESS THAN (202012) ENGINE = InnoDB,
PARTITION P1 VALUES LESS THAN (202104) ENGINE = InnoDB,
PARTITION P2 VALUES LESS THAN (202108) ENGINE = InnoDB,
PARTITION P3 VALUES LESS THAN (202112) ENGINE = InnoDB,
PARTITION P4 VALUES LESS THAN MAXVALUE ENGINE = InnoDB)
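With a MAXVALUE catch-all partition like P4 above, new ranges have to be split out of it as time goes on; a sketch of that step (boundary and partition names are illustrative):

ALTER TABLE `mytable` REORGANIZE PARTITION P4 INTO (
  PARTITION P5 VALUES LESS THAN (202204) ENGINE = InnoDB,
  PARTITION P4 VALUES LESS THAN MAXVALUE ENGINE = InnoDB
);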
Be aware of the overhead of the partitioning expression when partitioning by hash. As the docs say:
You should also keep in mind that this expression is evaluated each time a row is inserted or updated (or possibly deleted); this means that very complex expressions may give rise to performance issues, particularly when performing operations (such as batch inserts) that affect a great many rows at one time.
The most efficient hashing function is one which operates upon a single table column and whose value increases or decreases consistently with the column value, as this allows for “pruning” on ranges of partitions. That is, the more closely that the expression varies with the value of the column on which it is based, the more efficiently MySQL can use the expression for hash partitioning.
For example, where date_col is a column of type DATE, then the expression TO_DAYS(date_col) is said to vary directly with the value of date_col, because for every change in the value of date_col, the value of the expression changes in a consistent manner. The variance of the expression YEAR(date_col) with respect to date_col is not quite as direct as that of TO_DAYS(date_col), because not every possible change in date_col produces an equivalent change in YEAR(date_col).
HASHing by month with 6 partitions means that two months a year will land in the same partition. What good is that?
Don't bother partitioning, index the table.
Assuming these are the only two queries you use:
SELECT * from ti;
SELECT * from ti PARTITION (HASH(MONTH(some_date)));
then start the PRIMARY KEY with the_date.
The first query simply reads the entire table; no change between partitioned and not.
The second query, assuming you want a single month, not all the months that map into the same partition, would need to be
SELECT * FROM ti WHERE the_date >= '2019-03-01'
AND the_date < '2019-03-01' + INTERVAL 1 MONTH;
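For reference, a minimal sketch of that indexing approach, assuming the events table is ti with an id column alongside the_date (both names are placeholders):

ALTER TABLE ti
  ADD PRIMARY KEY (the_date, id);

With InnoDB the primary key is also the clustered index, so a month's rows are stored together and the range query above only touches that slice of the table.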
If you have other queries, let's see them.
(I have not found any performance justification for ever using PARTITION BY HASH.)
I have been reading lots of great answers to different problems on this site over time, but this is the first time I am posting. Thanks in advance for your help.
Here is my question:
I have a MySQL table that tracks visits to different websites we have. This is the table structure:
create table navigation_base (
uid int(11) NOT NULL,
date datetime not null,
dia date not null,
ip int(4) unsigned not null default 0,
session_id int unsigned not null,
cliente smallint unsigned not null default 0,
campaign mediumint unsigned not null default 0,
trackcookie int unsigned not null,
adgroup int unsigned not null default 0,
PRIMARY KEY (uid)
) ENGINE=MyISAM;
This table has approx. 70 million rows (an average of 110,000 per day).
On that table we have created indexes with following commands:
alter table navigation_base add index dia_cliente_campaign_ip (dia,cliente,campaign,ip);
alter table navigation_base add index dia_cliente_campaign_ip_session (dia,cliente,campaign,ip,session_id);
alter table navigation_base add index dia_cliente_campaign_ip_session_trackcookie (dia,cliente,campaign,ip,session_id,trackcookie);
We then use this table to get visitor statistics grouped by clients, days and campaigns with the following query:
select
dia,
navigation_base.campaign,
navigation_base.cliente,
count(distinct ip) as visitas,
count(ip) as paginas_vistas,
count(distinct session_id) as sesiones,
count(distinct trackcookie) as cookies
from navigation_base where
(dia between '2017-01-01' and '2017-01-31')
group by dia,cliente,campaign order by NULL
Even with those indexes in place, response times for a one-month period are relatively slow: about 3 seconds on our server.
Are there ways to speed up these queries?
Thanks in advance.
With this much data, indexing alone may not be all that helpful, since there is a lot of similarity in the data. Besides, you have GROUP BY and sorting along with aggregation, and all of these things combined make optimization very hard. Partitioning is the way forward, because:
Some queries can be greatly optimized in virtue of the fact that data
satisfying a given WHERE clause can be stored only on one or more
partitions, which automatically excludes any remaining partitions from
the search. Because partitions can be altered after a partitioned
table has been created, you can reorganize your data to enhance
frequent queries that may not have been often used when the
partitioning scheme was first set up.
And if that alone doesn't help, it's still possible to select specific partitions explicitly:
In addition, MySQL 5.7 supports explicit partition selection for
queries. For example, SELECT * FROM t PARTITION (p0,p1) WHERE c < 5
selects only those rows in partitions p0 and p1 that match the WHERE
condition.
ALTER TABLE navigation_base
-- note: every unique key (including the PRIMARY KEY) must contain the partitioning
-- column, so the PRIMARY KEY would need to become e.g. (uid, dia) first
PARTITION BY RANGE( TO_DAYS(dia)) (
    PARTITION p0 VALUES LESS THAN (TO_DAYS('2015-12-31')),
    PARTITION p1 VALUES LESS THAN (TO_DAYS('2016-12-31')),
    PARTITION p2 VALUES LESS THAN (TO_DAYS('2017-12-31')),
    PARTITION p3 VALUES LESS THAN (TO_DAYS('2018-12-31')),
    ..
    PARTITION p10 VALUES LESS THAN MAXVALUE);
Use bigger or smaller partitions as you see fit.
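With a layout like the one above and the explicit partition selection quoted earlier, a January 2017 report could be restricted to the partition that holds 2017 (p2 in this sketch; the name depends on your actual layout):

select
    dia,
    navigation_base.campaign,
    navigation_base.cliente,
    count(distinct ip) as visitas,
    count(ip) as paginas_vistas,
    count(distinct session_id) as sesiones,
    count(distinct trackcookie) as cookies
from navigation_base partition (p2) where
    (dia between '2017-01-01' and '2017-01-31')
group by dia,cliente,campaign order by NULL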
The most important factor to keep in mind is that MySQL generally uses only one index per table in a query, so choose your index wisely.
If you only do COUNT(DISTINCT ...) at the granularity of a day, then build and incrementally maintain a summary table. It would be augmented each night by a query nearly identical to your SELECT, but fetching only yesterday's data.
Then use this Summary Table for the monthly "report".
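A sketch of what that could look like, assuming a summary table named navigation_daily (all names illustrative):

CREATE TABLE navigation_daily (
  dia date NOT NULL,
  cliente smallint unsigned NOT NULL,
  campaign mediumint unsigned NOT NULL,
  visitas int unsigned NOT NULL,
  paginas_vistas int unsigned NOT NULL,
  sesiones int unsigned NOT NULL,
  cookies int unsigned NOT NULL,
  PRIMARY KEY (dia, cliente, campaign)
) ENGINE=InnoDB;

-- run each night, for yesterday only
INSERT INTO navigation_daily
SELECT dia, cliente, campaign,
       count(distinct ip), count(ip),
       count(distinct session_id), count(distinct trackcookie)
FROM navigation_base
WHERE dia = CURDATE() - INTERVAL 1 DAY
GROUP BY dia, cliente, campaign;

The monthly report then just selects the precomputed daily rows from navigation_daily for the month's date range instead of scanning the raw table.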
More on Summary Tables
(MySQL version: 5.6.15)
I have a huge table (Table_A) with 10M rows, in an entity-attribute-value model.
It has a compound unique key [Field_A + Element + DataTime].
CREATE TABLE TABLE_A
(
`Field_A` varchar(5) NOT NULL,
`Element` varchar(5) NOT NULL,
`DataTime` datetime NOT NULL,
`Value` decimal(10,2) DEFAULT NULL,
UNIQUE KEY `A_ELE_TIME` (`Field_A`,`Element`,`DataTime`),
KEY `DATATIME` (`DataTime`),
KEY `ELEID` (`Element`),
KEY `ELE_TIME` (`Element`,`DataTime`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Rows are inserted into or updated in the table every minute, so the number of rows per [DataTime] value (i.e. per minute) is fairly constant, around 3K rows.
I have a SELECT query on this table that runs after the insert/update above.
The query selects one specified element within the most recent 25 hours (around 30K rows). This query usually completes within 3 seconds.
SELECT
Field_A, Element, DataTime, `Value`
FROM
Table_A
WHERE
Element = "XX"
AND DataTime BETWEEN [time] AND [time]
The original housekeeping job would remove any rows older than 3 days, running every 5 minutes.
For better housekeeping, I am trying to partition the table based on [DataTime], in 6-hour ranges (00, 06, 12, 18 local time).
PARTITION BY RANGE (TO_DAYS(DataTime)*100+hour(DataTime))
(PARTITION p2014103112 VALUES LESS THAN (73590212) ENGINE = InnoDB,
...
PARTITION p2014110506 VALUES LESS THAN (73590706) ENGINE = InnoDB,
PARTITION pFuture VALUES LESS THAN MAXVALUE ENGINE = InnoDB)
My housekeeping script drops the expired partition and then creates a new one:
ALTER TABLE TABLE_A REORGANIZE PARTITION pFuture INTO (
PARTITION [new_partition_name] VALUES LESS THAN ([bound_value]),
PARTITION pFuture VALUES LESS THAN MAXVALUE
)
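The corresponding drop step might look like this (partition name taken from the layout sketched above):

ALTER TABLE TABLE_A DROP PARTITION p2014103112;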
The new process seems to run smoothly.
However, the SELECT query sometimes slows down suddenly (> 100 sec).
The query stays slow even after all other processes are stopped, and it isn't fixed until we analyze the partitions (which reads and stores the key distributions of the partitions).
It usually happens once a day.
It does not happen on a non-partitioned table.
Therefore, we think it is caused by stale or corrupted index statistics in a huge partitioned MySQL table.
Does anyone have any idea how to solve it?
Many thanks!
If you PARTITION BY RANGE (TO_DAYS(DataTime)*100+hour(DataTime)), then when you filter DataTime with BETWEEN [from] AND [to], MySQL cannot prune and will scan all partitions unless [from] equals [to].
So it is no surprise that your query slows down suddenly.
My suggestion is to partition using TO_DAYS(DataTime) without the hour; if you query the most recent 25 hours of data, it will scan at most two (daily) partitions.
I'm not an expert in MySQL and can't explain it in more depth; hopefully someone else can. But you can use EXPLAIN PARTITIONS to verify it, and here is the SQL Fiddle demo.
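For instance, a quick check along those lines (MySQL 5.6; assumes the table has been repartitioned by TO_DAYS(DataTime) as suggested):

EXPLAIN PARTITIONS
SELECT Field_A, Element, DataTime, `Value`
FROM TABLE_A
WHERE Element = 'XX'
  AND DataTime BETWEEN NOW() - INTERVAL 25 HOUR AND NOW();

The partitions column of the output should list only the one or two daily partitions actually touched.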
My table schema:
CREATE TABLE `test_table` (
`his_id` int(11) NOT NULL,
`user_id` varchar(45) NOT NULL,
`gps_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`his_id`,`user_id`)
)
I want to partition this table by user_id and gps_time:
the user_id column should be partitioned by its first character (A~Z, a~z, 0~9),
and the gps_time column should be partitioned by the last 3 months (i.e. 3 partitions).
How can I do that?
Thanks a lot!
With MySQL 5.5, you can use multiple columns with RANGE partitioning.
From your question, it's not entirely clear how many partitions you want; it sounds as if you want a whole boatload of partitions, but I don't believe that's what you really want.
The syntax for RANGE partitioning is in the MySQL Reference Manual, available online.
here: http://dev.mysql.com/doc/refman/5.5/en/partitioning.html
(Be sure to check the manual for the version of MySQL you are actually running; there have been some significant changes to partitioning in 5.0, 5.1, 5.5, etc.)
With MySQL 5.5.x, if you want separate partitions for each first character of user_id combined with ranges of gps_time values, you could do something like this:
PARTITION BY RANGE COLUMNS(user_id, gps_time)
( PARTITION pA0 VALUES LESS THAN ('B','2014-07-01')
, PARTITION pA1 VALUES LESS THAN ('B','2014-08-01')
, PARTITION pA2 VALUES LESS THAN ('B','2014-09-01')
, PARTITION pA3 VALUES LESS THAN ('B',MAXVALUE)
, PARTITION pB0 VALUES LESS THAN ('C','2014-07-01')
, PARTITION pB1 VALUES LESS THAN ('C','2014-08-01')
, PARTITION pB2 VALUES LESS THAN ('C','2014-09-01')
, PARTITION pB3 VALUES LESS THAN ('C',MAXVALUE)
, ...
, PARTITION pMX VALUES LESS THAN (MAXVALUE,MAXVALUE)
)
But that'd be over 100 partitions, and I can't imagine a scenario where that's what you really want. (I'm not sure what the upper limit on the number of partitions for a table is.)
With MySQL 5.1, I don't believe it's possible to partition on multiple columns. You could, however, partition on just the user_id column, and then create subpartitions (within each partition) on the gps_time column... but I've never done that before.