How does the WHERE clause work with PARTITION BY RANGE? - mysql

My database is on AWS RDS and is getting bigger day by day.
The reason is that we have several cron jobs that fetch data through various APIs and add it to our database. The growing volume is slowing down our SQL SELECT operations.
I am thinking of archiving the previous years' data so that the WHERE clause keeps running without latency and does not have to traverse the complete record set (the previous years' data).
I recently came across the MySQL partitioning concept, and with a RANGE partition we can split the data by year. My only concern is this: if I have columns in the table like:
id, first_name, last_name, email, created_date
and the partitioning is done as:
PARTITION BY RANGE(YEAR(created_date)) (
PARTITION p0 VALUES LESS THAN (2019),
PARTITION p1 VALUES LESS THAN (2020),
PARTITION p2 VALUES LESS THAN MAXVALUE
)
If I run the SQL query as:
select * from mytable where email = 'abc@....com';
Here the partitioning is on the created_date column, but the WHERE clause filters on the email column, so which partition(s) will the result be fetched from?
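A quick way to check this yourself is to ask the optimizer which partitions it will touch. A minimal sketch (the table name and address are placeholders, not from the question):
-- In MySQL 5.7+ the EXPLAIN output includes a "partitions" column;
-- in 5.6 and earlier use EXPLAIN PARTITIONS instead.
-- A WHERE clause on email alone cannot be pruned against a
-- created_date-based partitioning, so expect all partitions (p0,p1,p2) listed.
EXPLAIN SELECT * FROM mytable WHERE email = 'user@example.com';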

Related

MySQL partitioning over time

I have a table which will grow large over time; moreover, I only need a small amount of data, say the last 7 days.
I want to configure it so that each 7 days of data goes into one partition, and then into the next. This way I would keep only two partitions and archive the others.
I read about MySQL partitions here, but the approach in the article is to specify all the partitions when creating the table.
I am not sure that is the best way to do it when the partitioning logic has to keep working for a long time.
Any ideas?
Unfortunately, it'll be a fairly manual process. Your best bet is to create the partitions, week by week ahead of time, then have a job that runs periodically to archive the old data into the 'catchall' partition.
e.g. with:
PARTITION BY RANGE ( TO_DAYS(date) ) (
PARTITION pmin VALUES LESS THAN ( TO_DAYS('2016-10-02 00:00:00') ),
PARTITION p1 VALUES LESS THAN ( TO_DAYS('2016-10-09 00:00:00') ),
PARTITION p2 VALUES LESS THAN ( TO_DAYS('2016-10-16 00:00:00') ),
PARTITION p3 VALUES LESS THAN ( TO_DAYS('2016-10-23 00:00:00') ),
PARTITION pmax VALUES LESS THAN (MAXVALUE)
);
There's no real harm in having a few empty partitions sitting there with higher dates and then doing a 'shift' once a week. It'll be fast enough, as long as the data window shifts by the partition size whenever you change the partitioning definition.
Your job would do something like
ALTER TABLE x REORGANIZE PARTITION pmin, p1 INTO (
PARTITION pmin VALUES LESS THAN ( TO_DAYS('2016-10-09 00:00:00') )
);
-- A MAXVALUE partition must stay last, so extend the range by splitting
-- pmax rather than using ADD PARTITION:
ALTER TABLE x REORGANIZE PARTITION pmax INTO (
PARTITION px VALUES LESS THAN ( TO_DAYS('2016-10-30 00:00:00') ),
PARTITION pmax VALUES LESS THAN (MAXVALUE)
);
There is no "automatic" partition management in MySQL. We have to run some specific SQL statements to add and drop partitions from a partitioned table.
We automated the task with a cron job which runs a MySQL PROCEDURE we wrote to drop (swap out) old partitions, and another PROCEDURE to add new partitions. The procedures are specific to a particular table.
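If you'd rather keep the scheduling inside MySQL itself, the Event Scheduler can play the cron role. A minimal sketch, not our actual setup (assumes event_scheduler=ON; the procedure names are placeholders for the procedures described here):
-- Run the partition-maintenance procedures once a day from inside MySQL.
CREATE EVENT IF NOT EXISTS add_partitions_daily
ON SCHEDULE EVERY 1 DAY
DO CALL add_new_partitions();
CREATE EVENT IF NOT EXISTS drop_partitions_daily
ON SCHEDULE EVERY 1 DAY
DO CALL drop_old_partitions();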
Our table is partitioned by RANGE on a TIMESTAMP column. The partition expression is like UNIX_TIMESTAMP(col).
To add a new partition, we reorganize the MAXVALUE partition, which is always (or should always be) empty, so the operation is very quick. We dynamically prepare and execute a statement of the form:
ALTER TABLE ourtable REORGANIZE PARTITION pmax
INTO ( PARTITION pn_name VALUES LESS THAN (UNIX_TIMESTAMP(pn_date))
, PARTITION pmax VALUES LESS THAN MAXVALUE)
To get the date value for the new partition (pn_name), we take the partition_description value from the second-to-last partition (the last partition is the MAXVALUE partition) and add 7 days to it to get the pn_date string. We use that same value to generate pn_name for the new partition. (We name the partitions following a pattern like p20161030, based on the date value in the partition_description, e.g. UNIX_TIMESTAMP('2016-10-30').)
(This information is obtained from a fairly involved query with a couple of references to the information_schema.partitions view.)
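As a rough sketch of that lookup (the database and table names are placeholders, and the real query has more guards):
-- Find the highest finite boundary and add 7 days to it.
-- partition_description stores the boundary as a string; "+ 0" makes it numeric.
SELECT CONCAT('p', DATE_FORMAT(next_boundary, '%Y%m%d')) AS pn_name,
DATE_FORMAT(next_boundary, '%Y-%m-%d') AS pn_date
FROM (
SELECT FROM_UNIXTIME(MAX(partition_description + 0)) + INTERVAL 7 DAY AS next_boundary
FROM information_schema.partitions
WHERE table_schema = 'ourdatabase'
AND table_name = 'ourtable'
AND partition_description <> 'MAXVALUE'
) AS t;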
With the other procedure to drop old partitions, we actually "swap out" the old partition to an archive table. (The archive table is later backed up, and dropped by a different task.)
The procedure basically runs a series of statements like this:
DROP TABLE IF EXISTS `_et` ;
CREATE TABLE `_et` LIKE `ourtable` ;
ALTER TABLE `_et` REMOVE PARTITIONING ;
ALTER TABLE `ourtable` EXCHANGE PARTITION `oldest_partition` WITH TABLE `_et` ;
ALTER TABLE `ourtable` DROP PARTITION `oldest_partition` ;
RENAME TABLE `_et` TO `archive_oldest_partition` ;
(I wish there were a cleaner way to create a new un-partitioned table in a single statement, such as a CREATE TABLE ... LIKE ... WITHOUT PARTITIONING, but absent that, we settled on the two separate statements.)
Just dropping the oldest partition would be a simpler process.
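In that case the whole procedure collapses to a single statement (the partition name is illustrative):
-- Rows in a dropped partition are deleted, not archived anywhere.
ALTER TABLE ourtable DROP PARTITION p20161030;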
To obtain information about the oldest partition, our query is probably overkill. But it's where most of the "magic" happens. Just to give you an idea of what that query looks like...
SELECT p1.partition_name
FROM information_schema.partitions p1
JOIN information_schema.partitions px
ON px.table_schema = 'ourdatabase'
AND px.table_name = 'ourtable'
AND px.partition_method = 'RANGE'
AND px.partition_expression = 'UNIX_TIMESTAMP(ourcol)'
AND px.partition_description = 'MAXVALUE'
WHERE p1.table_schema = 'ourdatabase'
AND p1.table_name = 'ourtable'
AND p1.partition_method = 'RANGE'
AND p1.partition_expression = 'UNIX_TIMESTAMP(ourcol)'
AND p1.partition_description <> 'MAXVALUE'
AND p1.partition_description + 0 <= UNIX_TIMESTAMP(DATE(NOW()) + INTERVAL -187 DAY)
AND p1.partition_ordinal_position = 1
You could probably get away with a simpler query. (Ours is designed to return the "oldest" partition only if all of the timestamp values in it are at least six months old, and only if there is a MAXVALUE partition defined.)
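For instance, a stripped-down version (a sketch: it drops the MAXVALUE guard and the six-month age check) could be just:
-- The oldest non-MAXVALUE partition, by ordinal position.
SELECT partition_name
FROM information_schema.partitions
WHERE table_schema = 'ourdatabase'
AND table_name = 'ourtable'
AND partition_description <> 'MAXVALUE'
ORDER BY partition_ordinal_position
LIMIT 1;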
Each of the procedures uses the current date to see if "it's time" to add or drop a partition. (The amount of time forward and back is hardcoded into the queries in the procedure... the query returns 0 rows if it's not time yet.)
The procedures only need to be executed once per week, and we designed them so that any "extra" runs won't add or drop partitions outside of the specified time ranges.
We have the procedures scheduled to execute every day, and on most days, the procedure runs a query which returns zero rows, and exits. Only when the query returns a row is there any work to do.

What columns to PARTITION BY in a time-series table?

I want to collect time-series data and store it in snappydata store. I will be collecting millions of rows of data and I want to make queries across timeslices/ranges.
Here is an example query I want to do:
select avg(value)
from example_timeseries_table
where time >= :startDate and time < :endDate;
So I am thinking that I want to PARTITION BY COLUMN on the time column rather than on the classic PRIMARY KEY column. In other technologies I am familiar with, like Cassandra DB, using the time columns in the partition key would point me directly at the right partition and let me pull the data for the timeslice from a single node rather than from many distributed nodes.
To be performant, I assume I need to partition by the 'time' column in this table:
example_timeseries_table
------------------------
id int not nullable,
value varchar(128) not nullable,
time timestamp not nullable
PERSISTENT ASYNCHRONOUS
PARTITION BY COLUMN time
Is this the correct column to partition on for efficient time-slice queries, or do I need to add more columns, like year_num, month_num, day_num, and hour_num, PARTITION BY COLUMN on all of them as well, and then run a query like the following to focus the work on a particular partitioned node?
select avg(value)
from example_table
where year_num = 2016
and month_num= 1
and day_num = 4
and hour_num = 11
and time >= :startDate and time < :endDate;
When a single partition has all the data, a single processor processes that data and you lose distributed processing. In fact, with time-series data, most of the time you will be querying the node that holds the latest time range while the rest of your compute capacity sits idle. This may be fine if you expect concurrent queries over various time ranges, but most of the time that is not the case.
Assuming that you are working with row tables, another way to speed up your queries would be by creating an index on your time column.
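For instance (a sketch, assuming a row table; the index name is arbitrary):
-- A secondary index on the time column lets range predicates like
-- "time >= ? AND time < ?" seek instead of scanning the whole table.
CREATE INDEX example_timeseries_time_idx ON example_timeseries_table (time);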
SnappyData supports partition pruning on row tables, so if you decide to go the way you mention here, pruning on the timestamp column should work.

Partitioning a MySQL table based on a column value.

I want to partition a table in MySQL while preserving the table's structure.
I have a column, 'Year', based on which I want to split up the table into different tables for each year respectively. The new tables will have names like 'table_2012', 'table_2013' and so on. The resultant tables need to have all the fields exactly as in the source table.
I have tried the following two pieces of SQL script with no success:
1.
CREATE TABLE all_data_table
( column1 int default NULL,
column2 varchar(30) default NULL,
column3 date default NULL
) ENGINE=InnoDB
PARTITION BY RANGE ((year))
(
PARTITION p0 VALUES LESS THAN (2010),
PARTITION p1 VALUES LESS THAN (2011),
PARTITION p2 VALUES LESS THAN (2012),
PARTITION p3 VALUES LESS THAN (2013),
PARTITION p4 VALUES LESS THAN MAXVALUE
);
2.
ALTER TABLE all_data_table PARTITION BY RANGE COLUMNS (`year`) (
PARTITION p0 VALUES LESS THAN (2011),
PARTITION p1 VALUES LESS THAN (2012),
PARTITION p2 VALUES LESS THAN (2013),
PARTITION p3 VALUES LESS THAN (MAXVALUE)
);
Any assistance would be appreciated!
This is old, but seeing as it comes up highly ranked in partitioning searches, I figured I'd give some additional details for people who might hit this page. What you are talking about in having a table_2012 and table_2013 is not "MySQL Partitioning" but "Manual Partitioning".
Partitioning means that you have one "logical table" with a single table name, which--behind the scenes--is divided among multiple files. When you have millions to billions of rows, over years, but typically you are only searching a single month, partitioning by Year/Month can have a great performance benefit because MySQL only has to search against the file that contains the Year/Month that you are searching for...so long as you include the partition key in your WHERE.
When you create multiple tables like table_2012 and table_2013, you are MANUALLY partitioning the tables, which you don't do with the MySQL PARTITION configuration. To manually partition the tables, during 2012, you put all data into the 2012 table. When you hit 2013, you start putting all the data into the 2013 table. You have to make sure to create the table before you hit 2013 or it won't have any place to go. Then, when you query across the years (e.g. from Nov 2012 - Jan 2013), you have to do a UNION between table_2012 and table_2013.
SELECT * FROM table_2012 WHERE #...
UNION
SELECT * FROM table_2013 WHERE #...
With partitioning, this manual work is not necessary. You do the initial setup of the partitions, then you treat it as a single table. No unions required, no checking the date before you insert, etc. This makes life much easier. MySQL figures out which partitions it needs to query. However, you MUST make sure to query against the Year column, or it will have to scan ALL the partition files. E.g. SELECT * FROM all_data_table WHERE Month=12 will scan all partitions for Month=12. To ensure you only scan the partition files you need to, include the partition column in every query that you can.
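A quick way to verify pruning is to compare the "partitions" column of EXPLAIN output for the two cases (a sketch, assuming Year is the partitioning column as above):
EXPLAIN SELECT * FROM all_data_table WHERE `Year` = 2012 AND Month = 12; -- lists one partition
EXPLAIN SELECT * FROM all_data_table WHERE Month = 12; -- lists all partitions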
Possible negatives to partitioning...if you have billions of rows and you do an ALTER TABLE on the table to--say--add a column...it's going to have to update every row taking a VERY long time. At the company I currently work for, the boss doesn't think it's worth the time it takes to update the billion rows historically when we are adding a new column for going forward...so this is one of the reasons we do manual partitioning instead of letting MySQL do it.
DISCLAIMER: I am not an expert at partitioning...so if I'm wrong in any of this, please let me know and I'll fix the incorrect parts.
From what I see, you want to create many tables from one big table.
I think you should try to create views instead.
From what I've read about partitioning, it actually partitions the physical storage of the table and stores the pieces separately, but viewed from the top you still see them as a single table.
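For example, one view per year over the original table gives you the table_2012-style names without copying any rows (a sketch, assuming the 'Year' column from the question):
CREATE VIEW table_2012 AS SELECT * FROM all_data_table WHERE `Year` = 2012;
CREATE VIEW table_2013 AS SELECT * FROM all_data_table WHERE `Year` = 2013;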

How can I use mysql partitioning on this table?

I am working on a social network type project, as most social networks have, a user feed that will show things that YOUR friends have done on the site.
So let's say I have a MySQL table for these items with these fields:
// user_actions
auto_id = auto increment ID
type = a number (1 = photo upload, 2 = friend added, 3 = status post, 4 = so other action, etc..)
user_id = The id of the user who did the action
datetime = date and time
action_id = this could be the ID of the action, so if it is for a status post, it could be the ID of the actual status post record
Now in my PHP script, I would query this table to get all friend actions of a user.
I think this is the perfect type of table for MySQL partitioning. Instead of showing all actions from your friends by querying every action ever posted on the site, which could be millions of records (based on a previous site I had done), it would be good to partition by date, maybe into 6-month partitions, so there are fewer records to query.
I have never worked with partitions but have been looking for a solution like this for a few years; I just discovered the built-in MySQL partitions and they seem like the ticket here.
Can someone show me how I could go about creating a table like that with partitions? Also, since I would need a new partition created every 6 months, is there a way to automate new partitions? Please help.
This is untested, but should be close.
CREATE TABLE user_actions (
auto_id INT NOT NULL AUTO_INCREMENT,
type INT NOT NULL,
user_id INT NOT NULL,
insert_datetime DATE NOT NULL,
action_id INT NOT NULL,
-- AUTO_INCREMENT needs to be part of a key, and every unique key on a
-- partitioned table must include the partitioning column:
PRIMARY KEY (auto_id, insert_datetime))
PARTITION BY RANGE(TO_DAYS(insert_datetime))
(
PARTITION p0 VALUES LESS THAN (to_days('2011-06-01')),
PARTITION p1 VALUES LESS THAN (to_days('2012-01-01')),
PARTITION pM VALUES LESS THAN MAXVALUE
);
You can manage this in the following way:
You can have the MAXVALUE partition always represent your "active" partition (in your case, the current 6-month period). When the period is up, you can split/reorg that MAXVALUE partition so that the period just past goes into a new partition, with the MAXVALUE partition again representing the current/active partition.
For example, on Jan 1st of 2011 you would have one partition; let's call it pM. It would store everything, since it has the LESS THAN MAXVALUE clause. Then, after 6 months have passed, you would reorg/split that single partition, creating a new partition that holds all the data for the previous 6 months, with the MAXVALUE partition again representing the current/active period.
-- Untested, but again should be close
ALTER TABLE user_actions REORGANIZE PARTITION pM INTO
(PARTITION p20110101 VALUES LESS THAN (to_days('2011-07-01')),
PARTITION pM VALUES LESS THAN MAXVALUE);
You may also consider sub-partitioning. You could sub-partition your user_id by HASH and therefore further reduce I/O and cost on queries for data based on the user_id.
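A sketch of what that could look like (untested; note that every unique key, including the primary key, must then contain both the RANGE and the HASH columns):
CREATE TABLE user_actions_sub (
auto_id INT NOT NULL AUTO_INCREMENT,
type INT NOT NULL,
user_id INT NOT NULL,
insert_datetime DATE NOT NULL,
action_id INT NOT NULL,
PRIMARY KEY (auto_id, insert_datetime, user_id))
PARTITION BY RANGE(TO_DAYS(insert_datetime))
SUBPARTITION BY HASH(user_id)
SUBPARTITIONS 4
(
PARTITION p0 VALUES LESS THAN (to_days('2011-06-01')),
PARTITION p1 VALUES LESS THAN (to_days('2012-01-01')),
PARTITION pM VALUES LESS THAN MAXVALUE
);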
Check out the following links for more information on partitioning.
MySQL Partitioning
Partition Management

How to partition a MyISAM table by day in MySQL

I want to keep the last 45 days of log data in a MySQL table for statistical reporting purposes. Each day could be 20-30 million rows. I'm planning on creating a flat file and using LOAD DATA INFILE to get the data in there each day. Ideally I'd like to have each day in its own partition without having to write a script to create a partition every day.
Is there a way in MySQL to just say each day gets its own partition automatically?
thanks
I would strongly suggest using Redis or Cassandra rather than MySQL to store high traffic data such as logs. Then you could stream it all day long rather than doing daily imports.
You can read more on those two (and more) in this comparison of "NoSQL" databases.
If you insist on MySQL, I think the easiest would be to create a new table per day, like logs_2011_01_13, and then load it all in there. That makes dropping older dates very easy, and you could also easily move different tables onto different servers.
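A sketch of the daily cycle (the template table and file path are placeholders):
-- Create today's table from a template, bulk-load the day's flat file,
-- then drop the day that just aged out of the 45-day window.
CREATE TABLE logs_2011_01_13 LIKE logs_template;
LOAD DATA INFILE '/var/data/logs_2011_01_13.csv' INTO TABLE logs_2011_01_13;
DROP TABLE IF EXISTS logs_2010_11_29;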
Er... number them mod 45 with a composite key and cycle through them...
Seriously, one table per day was a valid suggestion, and since it is static data I would create packed MyISAM tables, depending upon my host's ability to sort.
Building queries to UNION some or all of them would be only moderately challenging.
1 table per day, and partition those to improve load performance.
Yes, you can partition MySQL tables by date:
CREATE TABLE ExampleTable (
id INT AUTO_INCREMENT,
d DATE,
PRIMARY KEY (id, d)
) PARTITION BY RANGE COLUMNS(d) (
PARTITION p1 VALUES LESS THAN ('2014-01-01'),
PARTITION p2 VALUES LESS THAN ('2014-01-02'),
PARTITION pN VALUES LESS THAN (MAXVALUE)
);
Later, when you get close to overflowing into partition pN, you can split it:
ALTER TABLE ExampleTable REORGANIZE PARTITION pN INTO (
PARTITION p3 VALUES LESS THAN ('2014-01-03'),
PARTITION pN VALUES LESS THAN (MAXVALUE)
);
This doesn't automatically partition by date, but you can reorganize when you need to. Best to reorganize before you fill the last partition, so the operation will be quick.
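And when the oldest day falls out of the 45-day window, dropping it is a single cheap statement (partition name per the example above):
-- Rows in the dropped partition are deleted along with it.
ALTER TABLE ExampleTable DROP PARTITION p1;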
I have stumbled on this question while looking for something else and wanted to point out the MERGE storage engine (http://dev.mysql.com/doc/refman/5.7/en/merge-storage-engine.html).
The MERGE storage engine is more or less a simple pointer to multiple tables, and can be redone in seconds. For cycling logs, it can be very powerful! Here's what I'd do:
Create one table per day and use LOAD DATA as the OP mentioned to fill it up. Once that is done, drop the MERGE table and recreate it, including the new table and omitting the oldest one. Once done, I could delete/archive the old table. This would let me rapidly query a specific day, or all of them, as both the original tables and the MERGE are valid.
CREATE TABLE logs_day_46 LIKE logs_day_45;
DROP TABLE IF EXISTS logs;
-- CREATE TABLE ... LIKE does not accept table options, so convert to
-- MERGE and set the union list with a separate ALTER:
CREATE TABLE logs LIKE logs_day_46;
ALTER TABLE logs ENGINE=MERGE UNION=(logs_day_2,[...],logs_day_46);
DROP TABLE logs_day_1;
Note that a MERGE table is not the same as a PARTITIONED one and offers its own advantages and disadvantages. But do remember that if you are trying to aggregate across all tables, it will be slower than if all the data were in a single table (the same is true for partitions, as they are basically different tables under the hood). If you are going to query mostly on specific days, you will need to choose the table yourself, but if partitions are keyed on the day values, MySQL will automatically pick the correct table(s), which can come out faster and easier to write.