I am creating a calendar webapp and I'm stuck on a performance-versus-storage tradeoff in the design and subsequent querying of my events table. The question is: how do I model a table with repeating events (daily/weekly)? Here is my current solution:
CREATE TABLE `events` (
`eventid` int(10) NOT NULL AUTO_INCREMENT, -- primary key
`evttitle` varchar(255) NOT NULL, -- title of event
`createdby` char(8) NOT NULL, -- user identification (I'm using another's login system)
`evtdatestart` date NOT NULL,
`evtdateend` date NOT NULL,
`evttimestart` time NOT NULL,
`evttimeend` time NOT NULL,
`evtrepdaily` tinyint(1) NOT NULL DEFAULT 0, -- if both repeat flags are 0,
`evtrepweekly` tinyint(1) NOT NULL DEFAULT 0, -- it's a one-time event
`evtrepsun` tinyint(1) NOT NULL DEFAULT 0,
`evtrepmon` tinyint(1) NOT NULL DEFAULT 0,
`evtreptue` tinyint(1) NOT NULL DEFAULT 0,
`evtrepwed` tinyint(1) NOT NULL DEFAULT 0,
`evtrepthu` tinyint(1) NOT NULL DEFAULT 0,
`evtrepfri` tinyint(1) NOT NULL DEFAULT 0,
`evtrepsat` tinyint(1) NOT NULL DEFAULT 0,
PRIMARY KEY (`eventid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I also have a very small numbers table containing 0 to 62, which can be used for many different things; in the event-lookup query it supplies the day of the week via MOD(num,7). Here is that query:
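For context, the numbers helper table can be built once up front. This is just a hypothetical sketch (my actual table isn't shown here); any method that yields the integers 0-62 works:

```sql
-- Hypothetical setup for the numbers helper table described above.
CREATE TABLE numbers (
  num TINYINT UNSIGNED NOT NULL,
  PRIMARY KEY (num)
);

-- Populate 0..62 by cross-joining two small digit lists.
INSERT INTO numbers (num)
SELECT t.d * 10 + u.d
FROM (SELECT 0 AS d UNION SELECT 1 UNION SELECT 2 UNION SELECT 3
      UNION SELECT 4 UNION SELECT 5 UNION SELECT 6) AS t
JOIN (SELECT 0 AS d UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
      UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) AS u
WHERE t.d * 10 + u.d <= 62;
```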
SELECT date, evttitle, evttimestart, evttimeend
FROM ( -- generate one row for every day in the span between the two dates
SELECT DATE_ADD(startdate, INTERVAL (num-startday) DAY) AS date,
IF(MOD(num,7)=0,7,MOD(num,7)) AS weekday -- 1=Sun ... 7=Sat
FROM (
SELECT '#startdate' AS startdate, DAYOFWEEK('#startdate') AS startday,
'#enddate' AS enddate, DATEDIFF('#enddate','#startdate') AS diff
) AS span, numbers -- numbers holds 0-62
WHERE num>=startday AND num<=startday+diff
) AS daysinspan, events
WHERE evtdatestart<=date AND evtdateend>=date AND (
(evtdatestart=evtdateend) OR -- single event
(evtrepdaily) OR -- daily event
(evtrepweekly AND ( -- weekly event
(weekday=1 AND evtrepsun) OR -- on Sunday
(weekday=2 AND evtrepmon) OR -- on Monday
(weekday=3 AND evtreptue) OR -- on Tuesday
(weekday=4 AND evtrepwed) OR -- on Wednesday
(weekday=5 AND evtrepthu) OR -- on Thursday
(weekday=6 AND evtrepfri) OR -- on Friday
(weekday=7 AND evtrepsat) -- on Saturday
)) -- end of repeat truths
)
ORDER BY date, evttimestart;
I like this approach mostly because it is a pure-SQL way of generating the results: it saves me from having to duplicate an event 50+ times for repeating events, and it makes repeating events easier to modify. However, this is likely the most common query users will run, so slow performance is unacceptable.
So another way would be to drop the evtrep columns and simply create a new, slightly different, event row as many times as needed for the span. But I don't like this idea, as the thought of duplicating that much data makes me cringe. However, if it guarantees significantly faster results (and clearly the lookup query would be much simpler and faster), then I guess it can justify the extra storage.
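For what it's worth, if I went the materialized route, the expansion itself could stay in SQL by reusing the numbers table. A sketch, assuming a hypothetical evtoccur table (not part of the schema above) and a daily-repeating event with id 42 (an illustrative value); note the 0-62 numbers table caps spans at 63 days:

```sql
-- Hypothetical occurrence table: one row per day an event occurs.
CREATE TABLE evtoccur (
  eventid INT(10) NOT NULL,
  occurdate DATE NOT NULL,
  PRIMARY KEY (eventid, occurdate)
);

-- Expand one daily-repeating event into individual rows,
-- one per day of its date span.
INSERT INTO evtoccur (eventid, occurdate)
SELECT e.eventid, DATE_ADD(e.evtdatestart, INTERVAL n.num DAY)
FROM events e
JOIN numbers n ON n.num <= DATEDIFF(e.evtdateend, e.evtdatestart)
WHERE e.eventid = 42;
```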
Which do you all think to be the better plan? Or is there another one that I have not thought of/mentioned here?
Thanks in advance!
Update 1:
person-b made a good suggestion. I believe person-b is suggesting that my table may not be in first normal form, which could be true assuming many (more than ~30%) of the events are non-repeating (in my case, however, 80%+ of the events will likely be repeating). But I think my question is due for a restating: as in his first update, person-b's suggested change of adding a reptimes column would simply push the date processing to the back end (in my case, PHP), whose processing time I still have to account for. My main concern (question) is this: what is the fastest way, on an average per-query basis, to compute the dates and times of every event without manually creating x entries for repeating events?
Try normalising your tables - you would separate the event information from the (multiple) date and recurrence information.
Update: Here's an example:
Table 1:
CREATE TABLE `events` (
`eventid` int(10) NOT NULL AUTO_INCREMENT, -- primary key
`repeatid` int(10) NOT NULL, -- references the repeats table
`evttitle` varchar(255) NOT NULL, -- title of event
`createdby` char(8) NOT NULL, -- user identification (I'm using another's login system)
`evtdatestart` date NOT NULL,
`evtdateend` date NOT NULL,
`evttimestart` time NOT NULL,
`evttimeend` time NOT NULL,
PRIMARY KEY (`eventid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Table 2:
CREATE TABLE `repeats` (
`repeatid` int(10) NOT NULL AUTO_INCREMENT, -- referenced by events.repeatid
`repweek` tinyint(1) NOT NULL DEFAULT 0, -- if 0, don't repeat; otherwise the day 1..7 of the week
`repday` tinyint(1) NOT NULL DEFAULT 0, -- repeat daily if 1
`reptimes` int(10) NOT NULL DEFAULT 0, -- 0 for indefinite, otherwise number of times
PRIMARY KEY (`repeatid`)
);
Then, use a query like this (untested):
SELECT e.evttitle, r.reptimes FROM events e JOIN repeats r ON r.repeatid = e.repeatid WHERE e.eventid = 9;
More information (both from the same guide) on Simple joins and Normalisation.
This will make your system more flexible, and hopefully faster too.
Related
I'm thinking my way of doing this is a little archaic and not optimized. I don't need super-detailed statistics; let's say I want the number of clicks on a link in a blog post per (actual) day/week/month/year, nothing more. I don't want the hour of the click, just a number for each corresponding period (day/week/month/year).
I've created this table:
CREATE TABLE `clicks` (
`file_id` bigint(20) unsigned NOT NULL,
`filename` varchar(100) NOT NULL,
`day` int(10) unsigned DEFAULT 0,
`week` int(10) unsigned DEFAULT 0,
`month` int(10) unsigned DEFAULT 0,
`year` int(10) unsigned DEFAULT 0,
`all` int(10) unsigned DEFAULT 0,
PRIMARY KEY (`file_id`)
)
And each time there's a click, I increment every column of the row by 1:
UPDATE clicks SET day = day+1, week = week+1 [..] WHERE file_id = $id
At every end of day/week/month/year, a cron job resets the corresponding column for every file. For each day's end it is:
UPDATE clicks SET day = 0 [No WHERE Clause]
When there's a new click on a file tomorrow, it increments the day column again.
I have a small VPS server: small storage space, small RAM, etc. I just need to know how many times a file has been clicked this day only (not yesterday), this week (not the week before), etc., and I'm trying to avoid the big & slow queries that would come from storing a row for each click and ending up with millions of them.
Does my approach seem OK, or is there a better way to do what I want?
Thanks everyone for the help.
You could create a table just storing the clicks, something like this:
CREATE TABLE clicks (
file_id INT NOT NULL,
filename VARCHAR(100) NOT NULL,
click_time TIMESTAMP NOT NULL
);
Then you just need to use the group by, to extract the clicks. For Example:
-- group clicks by day
SELECT DATE(click_time) AS day, COUNT(*) AS clicks
FROM clicks
GROUP BY day;
-- group clicks by week
SELECT YEARWEEK(click_time) AS week, COUNT(*) AS clicks
FROM clicks
GROUP BY week;
This is more flexible, and the aggregate queries stay simple and efficient.
Build and maintain a "summary table" by date. Summarize only by DAY, then sum the daily counts to get "by week", etc. That also lets you get tallies for arbitrary ranges of days.
More on Summary Tables
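As a sketch of that idea, assuming the clicks log table from the answer above (the clicks_daily name and the literal file_id 123 are illustrative):

```sql
-- Hypothetical daily summary table: one row per file per day.
CREATE TABLE clicks_daily (
  file_id INT NOT NULL,
  day DATE NOT NULL,
  clicks INT UNSIGNED NOT NULL DEFAULT 0,
  PRIMARY KEY (file_id, day)
);

-- Record a click: insert the day's row or bump its counter.
INSERT INTO clicks_daily (file_id, day, clicks)
VALUES (123, CURDATE(), 1)
ON DUPLICATE KEY UPDATE clicks = clicks + 1;

-- Clicks for the current ISO week, summed from the daily rows.
SELECT SUM(clicks) AS week_clicks
FROM clicks_daily
WHERE file_id = 123
  AND YEARWEEK(day, 1) = YEARWEEK(CURDATE(), 1);
```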
I have a table for storing stats. It accumulates about 10 million rows over the course of a day; at the end of the day they are copied to a daily stats table and deleted. For this reason I can't have an auto-incrementing primary key.
This is the table structure:
CREATE TABLE `stats` (
`shop_id` int(11) NOT NULL,
`title` varchar(255) CHARACTER SET latin1 NOT NULL,
`created` datetime NOT NULL,
`mobile` tinyint(1) NOT NULL DEFAULT '0',
`click` tinyint(1) NOT NULL DEFAULT '0',
`conversion` tinyint(1) NOT NULL DEFAULT '0',
`ip` varchar(20) CHARACTER SET latin1 NOT NULL,
KEY `shop_id` (`shop_id`,`created`,`ip`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
I have a key on shop_id, created, ip but I'm not sure what columns I should use to create the optimal index to increase lookup speeds any further?
The query below takes about 12 seconds with no key and about 1.5 seconds using the index above:
SELECT DATE(CONVERT_TZ(`created`, 'UTC', 'Australia/Brisbane')) AS `date`, COUNT(*) AS `views`
FROM `stats`
WHERE `created` <= '2017-07-18 09:59:59'
AND `shop_id` = '17515021'
AND `click` != 1
AND `conversion` != 1
GROUP BY DATE(CONVERT_TZ(`created`, 'UTC', 'Australia/Brisbane'))
ORDER BY DATE(CONVERT_TZ(`created`, 'UTC', 'Australia/Brisbane'));
If there is no column (or combination of columns) that is guaranteed unique, then do have an AUTO_INCREMENT id. Don't worry about truncating/deleting. (However, if the id does not reset, you probably need to use BIGINT, not INT UNSIGNED to avoid overflow.)
Don't use id as the primary key, instead, PRIMARY KEY(shop_id, created, id), INDEX(id).
That unconventional PK will help with performance in 2 ways, while being unique (due to the addition of id). The INDEX(id) is to keep AUTO_INCREMENT happy. (Whether you DELETE hourly or daily is a separate issue.)
Build a Summary table based on each hour (or minute). It will contain the count for each bucket -- roughly 400K rows/hour or 7K/minute at your volume. Augment it each hour (or minute) so that you don't have to do all the work at the end of the day.
The summary table can also filter on click and/or conversion. Or it could keep both, if you need them.
If click/conversion have only two states (0 & 1), don't say != 1, say = 0; the optimizer is much better at = than at !=.
If they are 2-state and you changed to =, then this becomes viable and much better: INDEX(shop_id, click, conversion, created) -- created must be last.
Don't bother with TZ when summarizing into the Summary table; apply the conversion later.
Better yet, don't use DATETIME, use TIMESTAMP so that you won't need to convert (assuming you have TZ set correctly).
After all that, if you still have issues, start over on the Question; there may be further tweaks.
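Combining the = 0 rewrite with that index might look like this (a sketch, assuming click and conversion only ever hold 0 or 1; the index name is arbitrary):

```sql
-- Composite index: equality columns first, the range column (created) last.
ALTER TABLE stats
  ADD INDEX shop_click_conv_created (shop_id, click, conversion, created);

-- Rewritten query using = 0 instead of != 1 so the index is fully usable.
SELECT DATE(CONVERT_TZ(created, 'UTC', 'Australia/Brisbane')) AS `date`,
       COUNT(*) AS views
FROM stats
WHERE shop_id = 17515021
  AND click = 0
  AND conversion = 0
  AND created <= '2017-07-18 09:59:59'
GROUP BY `date`
ORDER BY `date`;
```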
In your WHERE clause, put first the column that will return the smallest set of results, and so on, then create the index in the same column order.
You have
WHERE created <= '2017-07-18 09:59:59'
AND shop_id = '17515021'
AND click != 1
AND conversion != 1
If created returns a smaller set than the other three columns, then you are good; otherwise, put the most selective column first in your WHERE clause, choose the second column by the same reasoning, and create the index in the same order as your WHERE clause.
If you think the order is fine, then create the index:
KEY `created_shopid_click_conversion` (`created`, `shop_id`, `click`, `conversion`);
I have added advertisements to my website which have quite some conditions to meet before delivering to a browsing user. Here's a detailed explanation:
These are the fields that require explaining:
start is '0000-00-00' by default, which indicates that the ad has not been paid for yet. When an ad payment is accepted, start is set to the day after, or to any date the customer chooses.
impressions is the remaining number of impressions for the advertisement
impressions_total and impressions_perday are self-explanatory
and the other fields used in the query are just fields that validate whether the user falls into the specifications of the advertisement's target audience
An advertisement has to be paid for before it starts displaying at all; however, it can be set to start on a future date, so the start value may be set while the ad should not yet show. Since customers can limit impressions per day, I need to pick only advertisements that have enough impressions left for the day in progress. For example, if an advertisement started on 30/08/2013 with 10,000 impressions and 2,000 impressions per day, then it shouldn't show up today (31/08/2013) if it has fewer than 6,000 impressions remaining, because it's the second day of the campaign. Likewise, if the term period is, say, 5 days and 5 days have passed, the advertisement has to be shown regardless of remaining impressions. Then there are the other comparisons to validate that the user fits the ad's audience, and the whole thing gets complicated.
I am not that good with MySQL; although I have managed to construct a working query, I am very concerned about optimizing it. I am fairly certain the methods I have used are highly inefficient, but I couldn't find a better way online. That's why I'm asking here: can anyone help me improve the performance of this query?
SELECT `fields` -- placeholder for the actual column list
FROM `ads`
WHERE (`impressions`>0 && `start`!='0000-00-00')
AND `start`<CURDATE() AND
(
`impressions`>(`impressions_total`-(DATEDIFF(CURDATE(),`start`)*`impressions_perday`))
OR (`impressions_total`/`impressions_perday` < DATEDIFF(CURDATE(),`start`))
-- this is the part where I validate the impressions for the day
-- and am most concerned that I haven't built correctly
)
AND
(
(
(YEAR(NOW())-YEAR("user's birthday") BETWEEN `ageMIN` AND `ageMax`)
AND (`sex`=2 OR `sex`="user's gender")
AND (`country`='' OR `country`="user's country")
) OR `applyToUnregistered` = 1
)
ORDER BY $random_order -- Generate random order pattern
Schema:
CREATE TABLE `ads` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`headline` varchar(25) NOT NULL,
`text` varchar(90) NOT NULL,
`url` varchar(50) NOT NULL,
`country` varchar(2) DEFAULT '0',
`ageMIN` tinyint(2) unsigned NOT NULL,
`ageMax` tinyint(2) unsigned NOT NULL,
`sex` tinyint(1) unsigned NOT NULL DEFAULT '2',
`applyToUnregistered` tinyint(1) unsigned NOT NULL DEFAULT '0',
`creator` int(10) unsigned NOT NULL,
`created` int(10) unsigned NOT NULL,
`start` date NOT NULL,
`impressions_total` int(10) unsigned NOT NULL,
`impressions_perday` mediumint(8) unsigned NOT NULL,
`impressions` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=27 DEFAULT CHARSET=utf8
You have a very complicated query from an optimization perspective.
The only indexes that can be used on the where clause are on ads(impressions) or ads(start). Because you use inequalities, you cannot combine them.
Can you modify the table structure to have an ImpressionsFlag? This would be 1 if there are any impressions and 0 otherwise. If so, then you can try an index on ads(ImpressionsFlag, Start).
If that helps with performance, the next step would be to break up the query into separate subqueries and bring them together using union all. The purpose is to design indexes to optimize the underlying queries.
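A sketch of the flag idea (ImpressionsFlag is a hypothetical new column, kept in sync by the application or a trigger; MySQL evaluates single-table UPDATE SET assignments left to right, which the decrement below relies on):

```sql
-- Hypothetical flag: 1 while the ad has impressions left, 0 otherwise.
ALTER TABLE ads
  ADD COLUMN ImpressionsFlag TINYINT(1) NOT NULL DEFAULT 0,
  ADD INDEX flag_start (ImpressionsFlag, start);

-- Decrement an ad's impressions and refresh the flag in one statement;
-- `impressions` is already decremented when the flag test runs.
UPDATE ads
SET impressions = impressions - 1,
    ImpressionsFlag = (impressions > 0)
WHERE id = 27 AND impressions > 0;

-- The equality + range predicates can now use the composite index.
SELECT id, headline
FROM ads
WHERE ImpressionsFlag = 1
  AND start <> '0000-00-00'
  AND start < CURDATE();
```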
Since I launched a podcast recently, I wanted to analyse our download data. But some clients seem to send multiple requests, so I want to count only one request per IP and User-Agent every 15 minutes. The best I could come up with is the following query, which counts one request per IP and User-Agent per hour. Any ideas how to solve this problem in MySQL?
SELECT episode, podcast, DATE_FORMAT(date, '%d.%m.%Y %k') AS blurry_date, useragent, ip FROM downloaddata GROUP BY ip, useragent, blurry_date
This is the table I've got
CREATE TABLE `downloaddata` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`date` datetime NOT NULL,
`podcast` varchar(255) DEFAULT NULL,
`episode` int(4) DEFAULT NULL,
`source` varchar(255) DEFAULT NULL,
`useragent` varchar(255) DEFAULT NULL,
`referer` varchar(255) DEFAULT NULL,
`filetype` varchar(15) DEFAULT NULL,
`ip` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=216 DEFAULT CHARSET=utf8;
Personally I'd recommend collecting every request, and then only taking one per 15 minutes with a DISTINCT query, or perhaps counting the number per 15 minutes.
If you are determined to throw data away so it can never be analysed, though:
Quick and simple is to store just the date plus an int column holding the 15-minute period,
hour part of the time * 4 + minute part DIV 15
Date-part functions are what you want to look up. The thing is, each time you want to record a request, you'll have to check whether that client already has a row in the current 15-minute period. Extra work, extra complexity, and less / lower-quality data...
MINUTE(date) DIV 15 will give you the quarter hour (0-3). Ensure that it, together with the date and hour, is unique (or ensure UNIX_TIMESTAMP(date) DIV (15*60) is unique).
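To make the 15-minute grouping concrete against the downloaddata table shown above, a sketch (900 seconds = 15 minutes; column names follow the posted schema):

```sql
-- One row per episode per ip/useragent per 15-minute bucket.
SELECT episode, podcast,
       FROM_UNIXTIME((UNIX_TIMESTAMP(`date`) DIV 900) * 900) AS bucket_start,
       useragent, ip
FROM downloaddata
GROUP BY episode, podcast, bucket_start, useragent, ip;

-- Or: total downloads per episode, counting each ip/useragent
-- at most once per 15-minute bucket.
SELECT episode,
       COUNT(DISTINCT ip, useragent, UNIX_TIMESTAMP(`date`) DIV 900) AS downloads
FROM downloaddata
GROUP BY episode;
```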
For reference, this is my current table:
`impression` (
`impressionid` bigint(19) unsigned NOT NULL AUTO_INCREMENT,
`creationdate` datetime NOT NULL,
`ip` int(4) unsigned DEFAULT NULL,
`canvas2d` tinyint(1) DEFAULT '0',
`canvas3d` tinyint(1) DEFAULT '0',
`websockets` tinyint(1) DEFAULT '0',
`useragentid` int(10) unsigned NOT NULL,
PRIMARY KEY (`impressionid`),
UNIQUE KEY `impressionsid_UNIQUE` (`impressionid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=447267 ;
It keeps a record of all the impressions on a certain page. After one day of running, it has gathered 447,266 views. That's a lot of records.
Now I want the amount of visitors per minute. I can easily get them like this:
SELECT COUNT( impressionid ) AS visits, DATE_FORMAT( creationdate, '%m-%d %H%i' ) AS DATE
FROM `impression`
GROUP BY DATE
This query takes a long time, of course. Right now around 56 seconds.
So I'm wondering what to do next. Do I:
Create an index on creationdate (I don't know if that will help, since I'm using a function to transform the data before grouping)
Create new fields that store the hours and minutes separately
The last one would introduce duplicate data, and I hate that. But maybe it's the only way in this case?
Or should I go about it in some different way?
If you run this query often, you could denormalize the calculated value into a separate column (perhaps maintained by a trigger on insert/update) and then group by that.
Your idea of separate hour and minute fields is a good one too, since it lets you group a few different ways beyond just minutes. It's still denormalization, but it's more versatile.
Denormalization is fine, as long as it's justified and understood.
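A sketch of that denormalization using a stored generated column (available in MySQL 5.7+; on older versions an insert trigger would maintain the column instead; the column and index names are illustrative):

```sql
-- Persist the minute-resolution timestamp and index it, so grouping
-- no longer computes DATE_FORMAT over every row at query time.
ALTER TABLE impression
  ADD COLUMN creation_minute CHAR(10)
    AS (DATE_FORMAT(creationdate, '%m-%d %H%i')) STORED,
  ADD INDEX idx_creation_minute (creation_minute);

-- Visitors per minute, now served from the indexed column.
SELECT COUNT(*) AS visits, creation_minute AS `DATE`
FROM impression
GROUP BY creation_minute;
```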