SQL: from event data to probability map - mysql

I'd like to know whether the following can also be solved with some (for me super advanced) SQL query.
I have a MySQL table with events (one row represents a device being turned on or off at a specific date & time). From this I'd like to extract patterns by calculating the probability that a specific device is on or off at a specific time of day (averaged over all days in the DB).
With coding this is rather easy, but quite computation- and I/O-intensive. (I would just run through all rows and fill an array with a column for each minute of the day and then normalise.)
I am not super experienced in SQL, so I wonder whether I can do this smarter:
is it possible to construct an SQL query that can do the same?
Or at least the intermediate step (link an on and an off event and, for that combination, fill a row with one column per minute of the day, holding a 0 or a 1).
Or even better, extract the patterns :)
This is how far I got so far (not very):
SELECT `deviceID`
, HOUR(FROM_UNIXTIME(timestamp)) AS hour
, MINUTE(FROM_UNIXTIME(timestamp)) AS minute
, DAYOFWEEK(FROM_UNIXTIME(timestamp)) AS `dayofweek`
, newValue > 0 AS onoff
FROM `log_raw`
WHERE 1
Edit:
Thanks for the very quick replies. Rd_nielsen's answer already helps me one important step ahead.
For Degan:
My ideal output would be a row for each device (there are circa 30 devices) with, for each minute of the day, the probability that the device is on (so the column for 0:00 would hold 0, while the column for 6:30 would hold 0.5, meaning the device is on at that minute every other day).
Alternatively, a meaningful intermediate step would be a row for each on-off sequence where all columns between the time it was turned on and the time it was turned off hold ones, while all the others hold zeros. Would either of those work in SQL?
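For reference, here is one way the whole computation could be sketched in a single query. This is a hedged sketch, not a drop-in solution: it needs MySQL 8+ (CTEs and window functions), assumes on and off events strictly alternate per device, and for brevity ignores on-off intervals that cross midnight.

SET SESSION cte_max_recursion_depth = 2000; -- the default of 1000 is below 1440

WITH RECURSIVE minutes AS (
  SELECT 0 AS minute_no                      -- minute of the day, 0 .. 1439
  UNION ALL
  SELECT minute_no + 1 FROM minutes WHERE minute_no < 1439
),
events AS (
  SELECT `deviceID`,
         FROM_UNIXTIME(`timestamp`) AS ts,
         `newValue` > 0 AS is_on,
         LEAD(FROM_UNIXTIME(`timestamp`))
           OVER (PARTITION BY `deviceID` ORDER BY `timestamp`) AS next_ts
  FROM `log_raw`
),
on_intervals AS (                            -- pair each ON event with the next (assumed OFF) event
  SELECT `deviceID`, ts AS on_ts, next_ts AS off_ts
  FROM events
  WHERE is_on = 1 AND next_ts IS NOT NULL
)
SELECT i.`deviceID`,
       m.minute_no,
       COUNT(*) / d.day_cnt AS p_on          -- fraction of observed days the device is on
FROM on_intervals i
JOIN minutes m
  ON TIMESTAMP(DATE(i.on_ts)) + INTERVAL m.minute_no MINUTE >= i.on_ts
 AND TIMESTAMP(DATE(i.on_ts)) + INTERVAL m.minute_no MINUTE <  i.off_ts
CROSS JOIN (SELECT COUNT(DISTINCT DATE(FROM_UNIXTIME(`timestamp`))) AS day_cnt
            FROM `log_raw`) d
GROUP BY i.`deviceID`, m.minute_no, d.day_cnt
ORDER BY i.`deviceID`, m.minute_no;

This produces one row per device per minute rather than one wide 1440-column row per device; pivoting to the wide layout is easier done in application code.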

Related

Store overlapping date ranges to be filtered using a custom date range

I need help regarding how to structure overlapping date ranges in my data warehouse. My objective is to model the data in a way that allows date-level filtering on the reports.
I have dimensions — DimEmployee and DimDate — and a fact called FactAttendance, in which each record stores an employee's leave as a date range, and these ranges can overlap.
A report needs to be created out of this data that will allow the end user to filter it by selecting a date range. Let's assume the user selects the range D1 to D20. On making this selection, the user should see for how many days at least one of the employees was on leave; in this particular example, that is 11 days.
An approach that I am considering is to store one row per employee per date for each of the leaves. The only problem with this approach is that it will exponentially increase the number of records in the fact table. Besides, there are other columns in the fact that will have redundant data.
How are such overlapping date/time problems usually handled in a warehouse? Is there a better way that does not involve inserting numerous rows?
Consider modelling your fact like this:
fact_attendance (date_id, employee_id, hours, ...)
This will enable you to answer your original question by simply filtering on the Date dimension, but you will also be able to handle issues like leave credits, and fractional day leave usage.
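As a sketch, the original question then reduces to counting distinct leave dates in the selected range (assuming DimDate carries a full_date column; @range_start/@range_end stand in for the user's D1 and D20):

SELECT COUNT(DISTINCT fa.date_id) AS days_with_leave
FROM fact_attendance fa
JOIN dim_date dd ON dd.date_id = fa.date_id
WHERE dd.full_date BETWEEN @range_start AND @range_end;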
Yes, it might use a little more storage than your first proposal, but it is a better dimensional representation, and will satisfy more (potential) requirements.
If you are really worried about storage - probably not a real worry - use a DBMS with columnar compression, and you'll see large savings in disk.
The reason I say "not a real worry" about storage is that your savings are meaningless in today's world of storage. 1,000 employees with 20 days leave each per year, over five years would mean a total of 100,000 rows. Your DBMS would probably execute the entire star join in RAM. Even one million employees would require less than one terabyte before compression.

Database design - How much data to store, performance vs quality

There is some value, x, which I am recording every 30 seconds, currently into a database with three fields:
ID
Time
Value
I am then creating a mobile app which will use that data to plot charts in views of:
Last hour
Last 24 hours.
7 Day
30 Day
Year
Obviously, saving every 30 seconds for the last year and then sending that data to a mobile device will be too much (it would mean sending 1051200 values).
My second thought was perhaps I could use the average function in MySQL, for example, collect all of the averages for every 7 days (creating 52 points for a year), and send those points. This would work, but still MySQL would be trawling through creating averages and if many users connect, it's going to be bad.
So simply put, if these are my views, then I do not need to keep track of all that data. Nobody should care what x was a year ago to the precision of every 30 seconds, this is fine. I should be able to use "triggers" to create some averages.
I'm looking for someone to check what I have below is reasonable:
Store values every 30s in a table (this will be used for the hour view, 120 points)
When there are 120 rows in the 30s table (120 × 30 s = 60 min = 1 hour), use a trigger to store the average of the first half hour in a "half hour average" table and remove the first 60 entries from the 30s table (a sketch of this step follows the list). This new table will need an id, start time, end time and value. This half hour average will be used for the 24 hour view (48 data points).
When the half hour table has more than 24 entries (12 hours), store the average of the first 12 entries (12 × 30 min = 6 hours) in a 6 hour average table and then remove them. This 6 hour average will be used for the 7 day view (28 data points).
When there are 8 entries in the 6 hour table, remove the first 4 and store their average as a day, to be used in the 30 day view (30 data points).
When there are 14 entries in the day table, remove the first 7 and store their average in a week table; this will be used for the year view.
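A sketch of what the half-hour rollup step could look like in MySQL; the table and column names are illustrative, and in practice both statements belong in one transaction:

-- Fold the oldest 60 thirty-second samples (30 minutes) into one average row...
INSERT INTO half_hour_avg (start_time, end_time, value)
SELECT MIN(t.`time`), MAX(t.`time`), AVG(t.`value`)
FROM (SELECT `time`, `value`
      FROM samples_30s
      ORDER BY `time`
      LIMIT 60) AS t;

-- ...then drop them from the high-resolution table.
DELETE FROM samples_30s
ORDER BY `time`
LIMIT 60;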
This doesn't seem like the best way to me, as it seems to be more complicated than I would imagine it should be.
The alternative is to keep all of the data and let MySQL find averages as and when needed. This will create a monstrously huge database, and I have no idea about the performance yet. The id is an int, time is a datetime and value is a float. Is 1051200 records too many? Now is a good time to add that I would like to run this on a Raspberry Pi; if that's not feasible, I do have my main machine which I could use.
Your proposed design looks OK. Perhaps there are more elegant ways of doing this, but your proposal should work too.
RRD (http://en.wikipedia.org/wiki/Round-Robin_Database) is a specialised database designed to do all of this automatically, and it should be much more performant than MySQL for this specialised purpose.
An alternative is the following: keep only the original table (1051200 records), but have a trigger that generates the last hour/day/year etc views every time a new record is added (e.g. every 30 seconds) and store/cache the result somewhere. Then your number-crunching workload is independent of the number of requests/clients you have to serve.
1051200 records may or may not be too many. Test on your Raspberry Pi to find out.
Let me give a suggestion on the physical layout of your table, regardless on whether you decide to keep all data or "prune" it from time to time...
Since you generate a new row "every 30 seconds", Time can serve as a natural key without fear of exceeding the resolution of the underlying data type and causing duplicate keys. You don't need ID in this scenario¹, so your table is simply:
Time (PK)
Value
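In DDL form, that layout is simply (a sketch, with the types from the question):

CREATE TABLE samples (
  `Time`  DATETIME NOT NULL PRIMARY KEY,  -- natural key: one row per 30 seconds
  `Value` FLOAT NOT NULL
) ENGINE = InnoDB;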
And since InnoDB tables are clustered, not having secondary indexes² means the whole table is stored in a single B-tree, which is as efficient as it gets from a storage and querying perspective. On top of that, Value is automatically covered, which may not have been the case in your original design unless you specifically designed your index(es) for that.
Using time as key can be tricky in general, but I think may be worth it in this particular case.
¹ Unless there are other tables that reference it through FOREIGN KEYs, or you have already written too much code that depends on it.
² Which would be necessary in the original design to support efficient aggregation.

Filter records from a database on a minimum time interval for making graph

We have a MySQL database table with statistical data that we want to present as a graph, with timestamp used as the x axis. We want to be able to zoom in and out of the graph between resolutions of, say, 1 day and 2 years.
In the zoomed-out state, we will not want to get all data from the table, since that would mean too much data being shipped through the servers, and the graph resolution will be good enough with less data anyway.
In MySQL you can make queries that only select e.g. every tenth value, which could be usable in this case. However, the intervals between values stored in the database aren't consistent; two values can be separated by as little as 10 minutes or as much as 6 hours, possibly more.
So the issue is that it is difficult to calculate a good stepping interval for the query: if we keep only every tenth value for some resolution, that may work for series with 10 minutes in between, but for 6-hour intervals we will throw away too much and the graph will end up having too low a resolution for comfort.
My impression is that MySQL isn't able to make the stepping interval depend on time, so that it would skip rows that are, say, within five minutes of an already included row.
One solution could be to set 6 hours as a minimal resolution requirement for the graph, so we don't throw away values unless 6 hours is represented by a sufficiently small distance in the graph. I fear that this may result in too much data being read and sent through the system if the interval actually is smaller.
Another solution is to have more intelligence in the Java code, reading sets of data iteratively from low resolution and downwards until the data is good enough.
Any ideas for a solution that would enable us to get optimal resolution in one read, without too large result sets being read from the database, while not putting too much load on the database? I'm having wild ideas about installing an intermediate NoSQL component to store the values in, that might support time intervals the way I want - not sure if that actually is an option in the organisation.
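For what it's worth, plain MySQL can get close to a time-dependent step by grouping on a computed time bucket and returning one aggregated point per bucket, with the bucket width derived from the zoom level; sparse stretches then simply yield fewer points instead of being thinned further. A sketch, with placeholder table and column names and an hourly bucket:

SELECT FLOOR(UNIX_TIMESTAMP(`ts`) / 3600) AS bucket,  -- 3600 s = chosen resolution
       MIN(`ts`)    AS bucket_start,
       AVG(`value`) AS avg_value
FROM stats
WHERE `ts` BETWEEN @from AND @to
GROUP BY bucket
ORDER BY bucket;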

good design for a db that decreases resolution of accuracy after time

I have something like 20,000 data points in a database and I want to display it on the google annotated graph. I think around 2000 points would be a good number to actually use the graph for, so I want to use averages instead of the real amount of data points I have.
This data counts the frequency of something at a certain time; it would be like Table(frequency, datetime).
So for the first week the datetime will have an interval of every 10 minutes, and frequency will be an average of all the frequencies in that 10-minute interval. Similarly, for the month after that I will have a datetime interval of an hour, etc.
I think this is something you can see on google finance too, after some time the resolution of the datapoints decreases even when you zoom in.
So what would be a good design for this? Is there already a tool that exists to do something like this?
I already thought of (though it might not be good) a giant table of all 20,000 points and several smaller tables that represent each time interval (1 week, 1 month etc) that are built through queries to the larger table and constantly updated and trimmed with new averages.
Keep the raw data in the db in one table. Then have a second reporting table which you populate from the raw table with a script or query. The transformation that populates the reporting table can group and average the buckets however you want. The important thing is to not transform your data on initial insert--keep all your raw data. That way you can always roll back or rebuild if you mess something up.
ETL. Learn it. Love it. Live it.
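A hypothetical version of that transformation, averaging the raw points into 10-minute buckets (all names are illustrative):

INSERT INTO report_10min (bucket_start, avg_frequency)
SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(`datetime`) / 600) * 600) AS bucket_start,
       AVG(frequency)
FROM raw_points
GROUP BY bucket_start;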

How would you store and query hours of operation?

We're building an app that stores "hours of operation" for various businesses. What is the easiest way to represent this data so you can easily check if an item is open?
Some options:
Segment out blocks (every 15 minutes) that you can mark "open/closed". Checking involves seeing if the "open" bit is set for the desired time (a bit like a train schedule).
Storing a list of time ranges (11am-2pm, 5-7pm, etc.) and checking whether the current time falls in any specified range (this is what our brain does when parsing the strings above).
Does anyone have experience in storing and querying timetable information and any advice to give?
(There are all sorts of crazy corner cases, like "closed the first Tuesday of the month", but we'll leave those for another day.)
Store each contiguous block of time as a start time and a duration; this makes it easier to check when the hours cross date boundaries.
If you're certain that hours of operation will never cross date boundaries (i.e. there will never be an open-all-night sale or a 72-hour marathon event), then start/end times will suffice.
The most flexible solution might be use the bitset approach. There are 168 hours in a week, so there are 672 15-minute periods. That's only 84 bytes worth of space, which should be tolerable.
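A sketch of the membership test, assuming the 84 bytes live in a BINARY(84) column named hours_bitmap and that bit 0 of the first byte is the first 15-minute slot after midnight Sunday:

SET @slot = (DAYOFWEEK(NOW()) - 1) * 96   -- DAYOFWEEK: 1 = Sunday
          + HOUR(NOW()) * 4
          + MINUTE(NOW()) DIV 15;         -- slot 0 .. 671

SELECT (ORD(SUBSTRING(hours_bitmap, @slot DIV 8 + 1, 1))
        >> (@slot MOD 8)) & 1 AS is_open
FROM business_hours
WHERE business_id = @id;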
I'd use a table like this:
BusinessID | weekDay | OpenTime | CloseTime
---------------------------------------------
1 | 1 | 9 | 13
1 | 2 | 5 | 18
1 | 3 | 5 | 18
1 | 4 | 5 | 18
1 | 5 | 5 | 18
1 | 6 | 5 | 18
1 | 7 | 5 | 18
Here, we have a business that has regular hours of 5 AM to 6 PM, but shorter hours on Sunday (weekDay 1).
A query to check whether it is open would be (pseudo-SQL):
SELECT #isOpen = CAST(
  (SELECT 1 FROM tblHours
   WHERE BusinessId = #id AND weekDay = #Day
     AND CONVERT(CurrentTime TO 24 hour) BETWEEN OpenTime AND CloseTime)
  AS BIT);
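In concrete MySQL that check might look like this (a sketch; note that DAYOFWEEK() returns 1 for Sunday, which happens to match the weekDay values above, and that the sample data only has whole-hour granularity):

SELECT EXISTS (
  SELECT 1
  FROM tblHours
  WHERE BusinessId = @id
    AND weekDay = DAYOFWEEK(NOW())
    AND HOUR(NOW()) BETWEEN OpenTime AND CloseTime
) AS isOpen;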
If you need to store edge cases, then just have 365 entries, one per day... it's really not that much in the grand scheme of things; place an index on the day column and the businessId column.
Don't forget to store the business's timezone in a separate table (normalize!), and transform between your time and it before making these comparisons.
OK, I'll throw in on this for what it's worth.
I need to handle quite a few things.
Fast / Performant Query
Any increments of time, 9:01 PM, 12:14, etc.
International (?) - not sure if this is an issue even with timezones, at least in my case but someone more versed here feel free to chime in
Open - Close spanning to the next day (open at noon, close at 2:00 AM)
Multiple timespans / day
Ability to override specific days (holidays, whatever)
Ability for overrides to be recurring
Ability to query for any point in time and get businesses open (now, future time, past time)
Ability to easily exclude businesses closing soon (filter out businesses closing in 30 minutes; you don't want to make your users "that guy" who shows up 5 minutes before closing in the food/beverage industry)
I like a lot of the approaches presented and I'm borrowing from a few of them. In my website/project I need to take into consideration that I may have millions of businesses, and a few of the approaches here don't seem to scale well to me personally.
Here's what I propose for an algorithm and structure.
We have to make some concrete assumptions, across the globe, anywhere, any time:
There are 7 days in a week.
There are 1440 minutes in one day.
There are a finite number of permutations of minutes of open / closed that are possible.
Not concrete but decent assumptions:
Many permutations of open/closed minutes will be shared across businesses reducing total permutations actually stored.
There was a time in my life I could easily calculate the actual possible combinations to this approach but if someone could assist/thinks it would be useful, that would be great.
I propose 3 tables:
Before you stop reading, consider that in the real world two of these tables will be small enough to cache neatly. This approach isn't going to be for everyone either, due to the sheer complexity of code required to interpret a UI to the data model and back again if needed. Your mileage and needs may vary. This is an attempt at a reasonable 'enterprise' level solution, whatever that means.
HoursOfOperations Table
ID | OPEN (minute of day) | CLOSE (minute of day)
1 | 540 | 1020 (example: 9 AM - 5 PM)
2 | 545 | 1021 (example: edge case 9:05 AM - 5:01 PM (weirdos))
etc.
HoursOfOperations doesn't care about what days, just open and close and uniqueness. There can be only a single entry per open/close combination. Now, depending on your environment either this entire table can be cached or it could be cached for the current hour of the day, etc. At any rate, you shouldn't need to query this table for every operation. Depending on your storage solution I envision every column in this table as indexed for performance. As time progresses, this table likely has an exponentially inverse likelihood of INSERT(s). Really though, dealing with this table should mostly be an in-process operation (RAM).
Business2HoursMap
Note: In my example I'm storing "Day" as a bit-flag field/column. This is largely due to my needs and the advancement of LINQ / Flags Enums in C#. There's nothing stopping you from expanding this to 7 bit fields. Both approaches should be relatively similar in both storage logic and query approach.
Another Note: I'm not entering into a semantics argument on "every table needs a PK ID column", please find another forum for that.
BusinessID | HoursID | Day (or, if you prefer split into: BIT Monday, BIT Tuesday, ...)
1 | 1 | 1111111 (this business is open 9-5 every day of the week)
2 | 2 | 1111110 (this business is open 9:05 - 5:01 Mon-Sat; Monday = day 1)
The reason this is easy to query is that we can always determine quite easily the MOTD (minute of the day) that we're after. If I want to know what's open at 5 PM tomorrow, I grab all HoursOfOperations IDs WHERE Close >= 1020. Unless I'm looking for a time range, Open becomes insignificant. If you don't want to show businesses closing in the next half hour, just adjust your incoming time accordingly (search for 5:30 PM (1050), not 5:00 PM (1020)).
The second query would naturally be 'give me all business with HoursID IN (1, 2, 3, 4, 5), etc. This should probably raise a red flag as there are limitations to this approach. However, if someone can answer the actual permutations question above we may be able to pull the red flag down. Consider we only need the possible permutations on any one side of the equation at one time, either open or close.
Considering we've got our first table cached, that's a quick operation. Second operation is querying this potentially large-row table but we're searching very small (SMALLINT) hopefully indexed columns.
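A hedged sketch of that lookup written as a single join (in practice step 1 would be served from the cache; the bit encoding of Day used here, Monday = bit 0 so Wednesday = bit 2, is my assumption):

SET @motd = 1020;                 -- 5:00 PM as a minute of the day

SELECT m.BusinessID
FROM Business2HoursMap m
JOIN HoursOfOperations h ON h.ID = m.HoursID
WHERE h.`Open` <= @motd
  AND h.`Close` > @motd
  AND (m.`Day` & (1 << 2)) <> 0;  -- bit 2 = Wednesday under this encoding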
Now, you may be seeing the complexity on the code side of things. I'm targeting mostly bars in my particular project so it's going to be very safe to assume that I will have a considerable number of businesses with hours such as "11:00 AM - 2:00 AM (the next day)". That would indeed be 2 entries into both the HoursOfOperations table as well as the Business2HoursMap table. E.g. a bar that is open from 11:00 AM - 2:00 AM will have 2 references to the HoursOfOperations table 660 - 1440 (11:00 AM - Midnight) and 0 - 120 (Midnight - 2:00 AM). Those references would be reflected into the actual days in the Business2HoursMap table as 2 entries in our simplistic case, 1 entry = all days Hours reference #1, another all days reference #2. Hope that makes sense, it's been a long day.
Overriding on special days / holidays / whatever.
Overrides are by nature, date based, not day of week based. I think this is where some of the approaches try to shove the proverbial round peg into a square hole. We need another table.
HoursID | BusinessID | Day | Month | Year
1 | 2 | 1 | 1 | NULL
This can certainly get more complex if you needed something like "on every second Tuesday, this company goes fishing for 4 hours". However, what this will allow us to do quite easily is 1 - one-off overrides, and 2 - reasonable recurring overrides. E.g. if Year IS NULL, then every year on New Year's Day this weirdo bar is open from 9:00 AM to 5:00 PM, keeping in line with our above data examples; if Year were set, it would apply only to 2013. If Month is NULL, it applies on the first day of every month. Again, this won't handle every scheduling scenario by NULL columns alone, but theoretically you could handle just about anything by relying on a long sequence of absolute dates if needed.
Again, I would cache this table on a rolling day basis. I just can't realistically see the rows for this table in a single-day snapshot being very large, at least for my needs. I would check this table first as it is well, an override and would save a query against the much larger Business2HoursMap table on the storage-side.
Interesting problem. I'm really surprised this is the first time I've really needed to think this through. As always, very keen on different insights, approaches or flaws in my approach.
I think I'd personally go for a start + end time, as it would make everything more flexible. A good question would be: what's the chance that the block size would change at a certain point? Then pick the solution that best fits your situation (if it's liable to change I'd definitely go for the timespans).
You could store them as a timespan, and use segments in your application. That way you have the easy input using blocks, while keeping the flexibility to change in your datastore.
To add to what Johnathan Holland said, I would allow for multiple entries for the same day.
I would also allow for decimal time, or another column for minutes.
Why? Many restaurants and other businesses around the world have lunch and/or afternoon breaks. Also, many restaurants close at odd times that are not 15-minute increments: two that I know of near my house close at 9:40 PM on Sundays and at 1:40 AM, respectively.
There is also the issue of holiday hours, such as stores closing early on Thanksgiving Day, so you need to have a calendar-based override.
Perhaps what can be done is a datetime-open / datetime-close pair, such as this:
businessID | datetime | type
==========================================
1 | 10/1/2008 10:30:00 AM | 1
1 | 10/1/2008 02:45:00 PM | 0
1 | 10/1/2008 05:15:00 PM | 1
1 | 10/2/2008 02:00:00 AM | 0
1 | 10/2/2008 10:30:00 AM | 1
etc. (type: 1 being open and 0 closed)
And have all the days precalculated one or two years in advance. Note that you would only have three columns (int, datetime, bit), so the data consumption should be minimal.
This will also allow you to modify specific dates for odd hours for special days, as they become known.
It also takes care of crossing over midnight, as well as 12/24 hour conversions.
It is also timezone agnostic. If you store start time and duration, when you calculate the end time, is your machine going to give you the TZ adjusted time? Is that what you want? More code.
As far as querying for open/closed status: query the date-time in question,
select top 1 type from thehours where datetimefield <= somedatetime and businessID = somebusinessid order by datetimefield desc
then look at "type". if one, it's open, if 0, it's closed.
PS: I was in retail for 10 years. So I am familiar with the small business crazy-hours problems.
The segment blocks are better, just make sure you give the user an easy way to set them. Click and drag is good.
Any other system (like ranges) is going to be really annoying when you cross the midnight boundary.
As for how you store them, in C++ bitfields would probably be best. In most other languages an array might be better (lots of wasted space, but it would run faster and be easier to comprehend).
I would think a little about those edge cases right now, because they are going to inform whether you have a base configuration plus overlays, or completely static storage of opening times, or whatever.
There are so many exceptions - and on a regular basis (like snow days, irregular holidays like Easter, Good Friday), that if this is expected to be a reliable representation of reality (as opposed to a good guess), you'll need to address it pretty soon in the architecture.
How about something like this:
Store Hours Table
Business_id (int)
Start_Time (time)
End_Time (time)
Condition varchar/string
Open bit
'Condition' is a lambda expression (text for a 'where' clause). Build the query dynamically. So for a particular business you select all of the open/close times
Let Query1 = select count(*) from store_hours where #t between start_time and end_time and open = true and business_id = #id and (.. dynamically built expression)
Let Query2 = select count(*) from store_hours where #t between start_time and end_time and open = false and business_id = #id and (.. dynamically built expression)
So in the end you want something like:
select cast(Query1 as bit) & ~cast(Query2 as bit)
If the result of the last query is 1 then the store is open at time t, otherwise it is closed.
Now you just need a friendly interface that can generate your where clauses (lambda expressions) for you.
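In concrete MySQL the two counts could be folded into a single pass (a sketch; the dynamically built condition from above is elided, and names follow the schema in this answer):

SELECT COALESCE(SUM(`open` = 1), 0) > 0
   AND COALESCE(SUM(`open` = 0), 0) = 0 AS is_open
FROM store_hours
WHERE @t BETWEEN start_time AND end_time
  AND business_id = @id;   -- plus the dynamically built condition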
The only other corner case that I can think of is what happens if a store is open from say 7am to 2am on one date but closes at 11pm on the following date. Your system should be able to handle that as well by smartly splitting up the times between the two days.
There is surely no need to conserve memory here, but perhaps a need for clean and comprehensible code. "Bit twiddling" is not, IMHO, the way to go.
We need a set container here, which holds any number of unique items and can determine quickly and easily whether an item is a member or not. The setup requires care, but in routine use a single line of simply understood code determines whether you are open or closed.
Concept:
Assign an index number to every 15-minute block, starting at, say, midnight Sunday.
Initialize:
Insert into the set the index number of every 15-minute block during which you are open (assuming you are open fewer hours than you are closed).
Use:
Take the time of interest, subtract midnight of the previous Sunday, convert to minutes, and divide by 15. If this number is present in the set, you are open.
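Translated to MySQL terms (a sketch), the set could simply be a table with one row per open 15-minute slot, and the membership test is a primary-key lookup:

CREATE TABLE open_slots (
  business_id INT      NOT NULL,
  slot        SMALLINT NOT NULL,  -- (minutes since midnight Sunday) DIV 15, 0 .. 671
  PRIMARY KEY (business_id, slot)
);

SELECT EXISTS (
  SELECT 1 FROM open_slots
  WHERE business_id = @id
    AND slot = (DAYOFWEEK(NOW()) - 1) * 96
              + HOUR(NOW()) * 4
              + MINUTE(NOW()) DIV 15
) AS is_open;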