Multiple Date Fields database design - ms-access

I'm designing an Access .accdb for project management. The project contract stipulates a number of milestones for each project, each with an associated date. The exact number of milestones depends on an "either/or" case of project size, with a maximum of six.
My employer would like to track a [Forecast] date, [Actual] date and [Paid] date for each milestone, on top of the contractual milestone date itself, meaning a large project ends up with 24 dates associated with it, often duplicated (if a project runs to time, all four dates will be identical).
Currently, I have tblMilestones, which has a FK linking to tblProject and a record for each Milestone, with the 4 associated dates as fields in the record and a field to mark the milestone as complete or current.
I feel like we're collecting, storing and entering a lot of pretty pointless data - especially the [Forecast] date, for which we collect data from our project managers (not the most reliable of data anyway). Once the milestone is complete and the [Actual] date is entered, the [Forecast] date is pretty meaningless.
I'd rather have the contract date in one table, entered when a new project is added; a reporting table for the changeable forecast date; the [Actual] date set when the user marks a milestone as complete; and the [Paid] date drawn from transaction records.
Is this a better design approach? The db is small - less than 50 projects, so part of me thinks I'd just be making things more complicated than they need to be, especially in terms of the extra UI required.

Take a page out of dimensional data warehouse design and store dates in their own table with a DateID:
DateID  DateValue
------  ----------
1       2000-01-01
...     ...
9999    2012-12-31
Then turn all your date fields--Forecast, Actual, Paid, etc.--into foreign key references to the date table's DateID field.
To populate the dates table you can go two ways:
Use some VBA to generate a large set of dates, say 2005-01-01 to 2100-12-31, and insert them into the dates table as a one-time operation.
Whenever someone types in a new date, check the dates table to see if it already exists, and if not, insert it.
Whichever way you do it, you'll obviously need an index on DateValue.
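A minimal DDL sketch of that layout in Access SQL (table, field and constraint names here are illustrative, not from the question, and Access executes DDL one statement at a time; Access SQL has no comment syntax, so the notes are in prose):

CREATE TABLE tblDates (
    DateID COUNTER CONSTRAINT pkDates PRIMARY KEY,
    DateValue DATETIME NOT NULL
);

CREATE UNIQUE INDEX idxDateValue ON tblDates (DateValue);

ALTER TABLE tblMilestones
    ADD COLUMN ForecastDateID LONG
    CONSTRAINT fkForecastDate REFERENCES tblDates (DateID);

The same ALTER TABLE pattern would be repeated for the Actual and Paid date fields.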
Taking a step back from the actual question, I'm realising that you're trying to fit two different uses into the same database--regular transactional use (as your project management app) and analytical use (tracking several different dates for your milestones--in other words, the milestone completion date is a Slowly Changing Dimension). You might want to consider splitting up these two uses into a regular transactional database and a data warehouse for analysis, and setting up an ETL process to move the data between them.
This way you can track only a milestone completion date and a payment date in your transactional database, while the data warehouse captures changes to the completion date over time, allowing you to do analysis and forecasting on that data without bogging down the performance of the transactional (application) database.
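For illustration only, a type 2 milestone dimension on the warehouse side might look something like this (a sketch in generic SQL, every name invented; each change to a completion date closes the current row and opens a new one):

CREATE TABLE DimMilestone (
    MilestoneKey   INT PRIMARY KEY,  -- surrogate key
    MilestoneID    INT,              -- business key from the application db
    CompletionDate DATE,
    ValidFrom      DATE,             -- when this version became current
    ValidTo        DATE,             -- far-future date marks the open row
    IsCurrent      BOOLEAN
);

Questions like "what did we believe the completion date was on day X" then become simple range filters on ValidFrom/ValidTo.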

Related

How to store recent usage frequency in MySQL

I'm working on the Product Catalog module of an Invoicing application.
When the user creates a new invoice the product name field should be an autocomplete field which shows the most recently used products from the product catalog.
How can I store this "usage recency/frequency" in the database?
I'm thinking about adding a new field recency which would be increased by 1 every time the product was used, and decreased by 1/(count of all products) when another product is used, then using this recency field for ordering; but it doesn't seem like the best solution to me.
Can you tell me the best practice for this kind of problem?
Solution for the recency calculation:
Create a new column in the products table, named last_used_on for example. Its data type should be TIMESTAMP (MySQL's representation of Unix time). A short sketch follows the resource list below.
Advantages:
Timestamps contain both date and time parts.
They make possible very precise calculations and comparisons in regard to dates and times.
They let you format the saved values in the date-time format of your choice.
You can convert from any date-time format into a timestamp.
In regard to your autocomplete fields, timestamps let you filter the products list as you wish: for example, to display all products used since [date-time], to fetch all products used between [date-time-1] and [date-time-2], or to get the products used only on Mondays, at 1:37:12 PM, in the last two years, two months and three days (that's how flexible timestamps are).
Resources:
Unix-Time
The DATE, DATETIME, and TIMESTAMP Types
How should unix timestamps be stored in int columns?
How to convert human date to unix timestamp in Mysql?
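A minimal sketch of the recency approach in MySQL (the id, search term and LIMIT are illustrative):

-- add the column and refresh it each time a product is invoiced
ALTER TABLE products ADD COLUMN last_used_on TIMESTAMP NULL;

UPDATE products SET last_used_on = NOW() WHERE id = 42;

-- autocomplete: most recently used matches first
SELECT id, name
FROM products
WHERE name LIKE 'cha%'
ORDER BY last_used_on DESC
LIMIT 10;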
Solution for the usage rate calculation:
Well, actually, you are not speaking about a frequency calculation, but about a rate - even though one can argue that frequency is a rate, too.
Frequency implies using the time as the reference unit and it's measured in Hertz (Hz = [1/second]). For example, let's say you want to query how many times a product was used in the last year.
A rate, on the other hand, is a comparison, a relation between two related units. Like for example the exchange rate USD/EUR - they are both currencies. If the comparison takes place between two terms of the same type, then the result is a number without measurement units: a percentage. Like: 50 apples / 273 apples = 0.1832 = 18.32%
That said, I suppose you tried to calculate the usage rate: the number of usages of a product in relation with the number of usages of all products. Like, for a product: usage rate of the product = 17 usages of the product / 112 total usages = 0.1517... = 15.17%. And in the autocomplete you'd want to display the products with a usage rate bigger than a given percentage (like 9% for example).
This is easy to implement. In the products table add a column usages of type int or bigint and simply increment its value each time a product is used. Then, when you want to fetch the most used products, just apply a filter like in this SQL statement:
-- usage_rate is each product's share of all usages, as a percentage;
-- HAVING (rather than WHERE) lets us reference the column alias, and the
-- GROUP BY id is unnecessary since id is already the primary key
SELECT
    id,
    name,
    usages * 100 / (SELECT SUM(usages) FROM products) AS usage_rate
FROM products
HAVING usage_rate > 9
ORDER BY usage_rate DESC;
In the end, recency, frequency and rate are three different things.
Good luck.
To allow for future flexibility, I'd suggest the following additional (*) table to store the entire history of product usage by all users:
Name: product_usage
Columns:
id - internal surrogate auto-incrementing primary key
product_id (int) - foreign key to product identifier
user_id (int) - foreign key to user identifier
timestamp (datetime) - date/time the product was used
This would allow the query to be fine-tuned as necessary. E.g. you may decide to only order by past usage for the logged-in user. Or perhaps total usage within a particular timeframe would be more relevant. Such a table may also have the dual purpose of auditing - e.g. to report on the most popular or unpopular products amongst all users.
(*) assuming something similar doesn't already exist in your database schema
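A sketch of that table in MySQL (the users table is an assumption, and timestamp is backtick-quoted since it is a keyword):

CREATE TABLE product_usage (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    product_id  INT NOT NULL,
    user_id     INT NOT NULL,
    `timestamp` DATETIME NOT NULL,
    FOREIGN KEY (product_id) REFERENCES products (id),
    FOREIGN KEY (user_id) REFERENCES users (id)
);

-- one user's most recently used products
SELECT product_id, MAX(`timestamp`) AS last_used
FROM product_usage
WHERE user_id = 7
GROUP BY product_id
ORDER BY last_used DESC
LIMIT 10;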
Your problem is related to many other web-scale search applications, such as showing spell corrections, related searches, or "trending" topics. You recognized correctly that both recency and frequency are important criteria in determining "popular" suggestions. In practice, it is desirable to compromise between the two: recency alone will suffer from random fluctuations, but you also don't want to use only frequency, since some products might have been purchased a lot in the past but their popularity is declining (or they might have gone out of stock or been replaced by successor models).
A very simple but effective implementation that is typically used in these scenarios is exponential smoothing. First of all, most of the time it suffices to update popularities at fixed intervals (say, once each day). Set a decay parameter α (say, .95) that tells you how much yesterday's orders count compared to today's. Similarly, orders from two days ago will be worth α*α ≈ .9 times as much as today's, and so on. To calibrate this parameter, note that the value decays to one half after log(.5)/log(α) days (about 14 days for α = .95).
The implementation only requires a single additional field per product, orders_decayed. Then, all you have to do is to update this value each night with the total daily orders:
orders_decayed = α * orders_decayed + (1-α) * orders_today.
You can sort your applicable suggestions according to this value.
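A hedged sketch of that nightly job in MySQL, assuming orders/order_items tables with an ordered_at column (all names invented for illustration):

-- alpha = 0.95: yesterday's smoothed value keeps 95% of its weight
UPDATE products p
SET p.orders_decayed = 0.95 * p.orders_decayed
    + 0.05 * (SELECT COUNT(*)
              FROM order_items oi
              JOIN orders o ON o.id = oi.order_id
              WHERE oi.product_id = p.id
                AND o.ordered_at >= CURDATE() - INTERVAL 1 DAY
                AND o.ordered_at < CURDATE());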
To have an individual user experience, you should not rely on a field in the product table, but rather on the history of the user.
The occurrences of the product in past invoices created by the user would be a good starting point. The advantage is that you don't need to add fields or tables for this functionality. You simply rely on data that is already present anyway.
Since it is an auto-complete field, maybe past usage is not really relevant. Display n search results as the user types. If you feel that results are better if you include recency in the calculation of the order, go with it.
Now, the implementation may differ depending on how and when products should be displayed, and on whether you need user-specific or application-wide (overall) usage frequency. But in both cases I would suggest a history table, which you can later use for other analysis too.
You could design your history table with at least the columns below:
Id | ProductId | LastUsed (timestamp) | UserId
Now you can create a view which queries this table for a specific time range (product frequency over the last week, month or year, for example) and gives you the highest-selling products for that range. The same approach serves user-specific frequency by adding a condition to filter by UserId; see the sketch below.
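A sketch of such a view in MySQL (column names follow the layout above; the table name is invented):

CREATE VIEW product_frequency_last_month AS
SELECT ProductId, COUNT(*) AS uses
FROM product_history
WHERE LastUsed >= NOW() - INTERVAL 1 MONTH
GROUP BY ProductId;

-- highest-frequency products for the period
SELECT ProductId, uses
FROM product_frequency_last_month
ORDER BY uses DESC
LIMIT 10;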
I'm thinking about adding a new field recency which would be increased by 1 every time the product was used, and decreased by 1/(count of all products) when another product is used, then using this recency field for ordering; but it doesn't seem like the best solution to me.
Yes, it is not good practice to add a column for this and update it on every use. Imagine this is a highly anticipated product that people love to buy: if 1,000 people (or more) request it at the same time, every request updates the same row, and to maintain concurrency the database has to lock that row for each update, which will hurt your database and application performance. Inserting a new row instead avoids that contention.
The other possible solution is to use your existing invoice table, since it will already have all the product- and user-specific information, and create a view to get frequently used products as I mentioned above.
Please note that this is another option to achieve what you are expecting, but I would personally recommend the history table instead.
The scenario
When the user creates a new invoice the product name field should be an autocomplete field which shows the most recently used products from the product catalogue.
Your suggested solution
How can I store this "usage recency/frequency" in the database?
If it is a web application, don't store it in a database on your server; each user has different choices.
Store it in the user's browser as a cookie or in localStorage, because that will improve the user experience.
If you still want to store it in a MySQL table, do the following:
Create a column recency as said in the question.
Each time the item is used, increase the count by 1, as said in the question.
Don't decrease it when other items get used.
To get the most recently used item, query:
SELECT * FROM products WHERE recency = (SELECT MAX(recency) FROM products);
Side note
Use the database only if you want to show the most recently used products independently of the user.
As you aren't certain which measure to choose, and it's rather a user-experience problem, I'd advise you to compute a number of measures and provide users an option to choose the one they prefer. For example, the set of available measures could include most popular product last week, last month, last 3 months, last year, and overall total. For the sake of performance I'd prefer to store those statistics in a separate table which is refreshed by a scheduled job running, say, every 3 hours; see the sketch below.
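A hedged sketch of that pre-aggregated table and job in MySQL (all names invented; CREATE EVENT requires the event scheduler to be enabled, and the product_usage history table sketched earlier is reused here as the source):

CREATE TABLE product_stats (
    product_id INT NOT NULL,
    period     VARCHAR(20) NOT NULL,  -- 'last_week', 'last_month', ...
    uses       INT NOT NULL,
    PRIMARY KEY (product_id, period)
);

CREATE EVENT refresh_product_stats
ON SCHEDULE EVERY 3 HOUR
DO
    REPLACE INTO product_stats (product_id, period, uses)
    SELECT product_id, 'last_week', COUNT(*)
    FROM product_usage
    WHERE `timestamp` >= NOW() - INTERVAL 7 DAY
    GROUP BY product_id;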

Need Help in Creating History Table in Access

In MS Access, I need to create a history table off a select query that is used for reporting. I don't want an append table, as I need the select's data for reporting.
The answer that works best: you do want to use an append query.
Instead of garbaging up your database with lots of history tables, it's better to have one history table with a unique key to differentiate the multiple history reports.
Usually a "Time Stamp" field is a good primary key. Where each record in the report gets the same time stamp.
Also, you can have other key fields depending on the type of report it is. You may want a version field, or a re-try field. You also may want a final copy field. Having these fields will allow you to go back and delete garbage reports or updated reports or bad attempt reports.
Also having solid date fields will allow you to discriminate between daily reports, weekly reports, or monthly reports. (Let alone if you have to worry about Fiscal year, or retail calendars, etc.)
The good thing about having a single table is, you can always go back into your history table and pull out lots of historical data for other types of report comparisons... all in one query instead of trying to tie multiple tables together (mostly with hard to figure out names).
Do yourself and future programmers who will have to deal with your code a favor and put all the history in one table. Especially since one of those future programmers may be you. You'll be thanking yourself.
Oh... and to get the data for the reporting, you use your primary key to pull out that data. Or... you can have a staging table for your report and then you append the staging table data to the History table (with all the proper key info).
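A minimal sketch of that pattern in Access SQL (all names invented; Access SQL has no comment syntax, so the notes go here instead): the report's select query feeds an append query that stamps every row with the same run time, and reporting then filters the history table on that stamp.

INSERT INTO tblReportHistory (RunStamp, CustomerID, Amount)
SELECT Now() AS RunStamp, CustomerID, Amount
FROM qrySelectForReport;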

Storing and Parsing Historical Data Using A Step Approach

I've noticed two main concepts for storing and parsing historical data in a database.
Making a carbon copy of the de-normalized data that coincides with a specific date
Keeping a version history for each table.
I'm wondering if it might work to keep an audit table of what was changed in order to get a query of what the data was at a given time:
For example you have a company with many employees. Throughout time employees come and go.
tbl_employee would have id and name
tbl_employee_audit would have id, employee_id, hired_or_left, datetime
You would then have to take the current list of employees and step backwards through the audit table in order to get to a specific point in time. This would also take into account when someone leaves and then gets hired again.
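For instance, reconstructing the employee list as of a given date might look like this sketch (generic SQL; I've named the datetime column happened_at here, and the cutoff date is arbitrary):

-- employees whose latest audit event on or before the cutoff was a hire
SELECT e.id, e.name
FROM tbl_employee e
JOIN tbl_employee_audit a ON a.employee_id = e.id
WHERE a.hired_or_left = 'hired'
  AND a.happened_at = (SELECT MAX(a2.happened_at)
                       FROM tbl_employee_audit a2
                       WHERE a2.employee_id = e.id
                         AND a2.happened_at <= '2020-06-01');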
This is a pretty simple example, but with a more complex one do you think this will work? Would it be too taxing from a processing standpoint?

How to work with ETL from old OLTP to new DW, order dates and customer age

We don't have any existing data warehouse, but we have customers (in OLTP) that have been with us many years and made purchases. How can I populate a customer dimension and then "replay" all the age updates that have occurred over the years, so that the type 2 dimension will have all the updates for those customers?
I want to populate the fact table with sales and refer to the DimCustomerFK, but when our clients query the data I want those customers to have the correct age; if I don't make any changes, the customer will have the same age now as 10 years back when he placed the first order.
Any ideas on how this can be done?
Interesting problem Patrik.
Some options:-
1) Design SQL to parse through your customer / transaction OLTP data to create a daily flat file of customer updates. You will end up with many thousands of fairly small files (obviously depending on the number of customers you have and the date range). Name them Customeryyyymmdd.csv. Then create an ETL suite to read in the flat files in forward date order and apply the type 2 changes, in order, to the DWH.
2) Build a very complex SQL query (I'm waving my hands around here as I don't know your data structures, so I couldn't say how complex this would be) that creates an ordered customer change list that you can pass through an ETL SCD component record by record.
Either seems logically feasible given what you have said; hopefully that gives you some ideas to consider towards a more concrete solution.
g/l
Mark.

Database Historization

We have a requirement in our application where we need to store references for later access.
Example: A user can commit an invoice at a point in time, and all references this invoice contains (customer address, calculated amount of money, product descriptions) and calculations should be stored over time.
We need to hold the references somehow, but what if, e.g., the product name changes? So somehow we need to copy everything so it's documented for later and not affected by future changes. Even when products are deleted, they need to be reviewable later while the invoice is stored.
What is the best practice here regarding database design? And what is the most flexible approach, e.g. when the user wants to edit his invoice later and restore it from the db?
Thank you!
Here is one way to do it:
Essentially, we never modify or delete the existing data. We "modify" it by creating a new version. We "delete" it by setting the DELETED flag.
For example:
If a product's price changes, we insert a new row into PRODUCT_VERSION, while old orders stay connected to the old PRODUCT_VERSION and the old price.
When a buyer changes the address, we simply insert a new row in CUSTOMER_VERSION and link new orders to that, while keeping the old orders linked to the old version.
If a product is deleted, we don't really delete it - we simply set the PRODUCT.DELETED flag, so all the orders historically made for that product stay in the database.
If a customer is deleted (e.g. because (s)he requested to be unregistered), set the CUSTOMER.DELETED flag.
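A hedged DDL sketch of this versioning layout (generic SQL; the column names are guesses based on the description above):

CREATE TABLE PRODUCT (
    PRODUCT_ID INT PRIMARY KEY,
    DELETED    BOOLEAN NOT NULL DEFAULT FALSE
);

CREATE TABLE PRODUCT_VERSION (
    PRODUCT_ID INT NOT NULL,            -- identifying FK to PRODUCT
    VERSION_NO INT NOT NULL,
    NAME       VARCHAR(100) NOT NULL,
    PRICE      DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (PRODUCT_ID, VERSION_NO),
    FOREIGN KEY (PRODUCT_ID) REFERENCES PRODUCT (PRODUCT_ID)
);

CREATE TABLE ORDER_ITEM (
    ORDER_ID   INT NOT NULL,
    PRODUCT_ID INT NOT NULL,
    VERSION_NO INT NOT NULL,            -- pins the exact version sold
    QTY        INT NOT NULL,
    PRIMARY KEY (ORDER_ID, PRODUCT_ID),
    FOREIGN KEY (PRODUCT_ID, VERSION_NO)
        REFERENCES PRODUCT_VERSION (PRODUCT_ID, VERSION_NO)
);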
Caveats:
If the product name needs to be unique, that can't be enforced declaratively in the model above. You'll either need to "promote" the NAME from PRODUCT_VERSION to PRODUCT, make it a key there and give up the ability to "evolve" a product's name, or enforce uniqueness on only the latest PRODUCT_VERSION (probably through triggers).
There is a potential problem with the customer's privacy. If a customer is deleted from the system, it may be desirable to physically remove its data from the database and just setting CUSTOMER.DELETED won't do that. If that's a concern, either blank-out the privacy-sensitive data in all the customer's versions, or alternatively disconnect existing orders from the real customer and reconnect them to a special "anonymous" customer, then physically delete all the customer versions.
This model uses a lot of identifying relationships. This leads to "fat" foreign keys and could be a bit of a storage problem since MySQL doesn't support leading-edge index compression (unlike, say, Oracle), but on the other hand InnoDB always clusters the data on PK and this clustering can be beneficial for performance. Also, JOINs are less necessary.
An equivalent model can also be built with non-identifying relationships and surrogate keys.
You could add a column in the product table indicating whether or not it is being sold. Then when the product is "deleted" you just set the flag so that it is no longer available as a new product, but you retain the data for future lookups.
To deal with name changes, you should use IDs to refer to products rather than using the name directly.
You've opened up an eternal debate between the purist and practical approach.
From a normalization standpoint of your database, you "should" keep all the relevant data. In other words, say a product name changes, save the date of the change so that you could go back in time and rebuild your invoice with that product name, and all other data as it existed that day.
A "de"normalized approach is to view that invoice as a "moment in time", recording in the relevant tables data as it actually was that day. This approach lets you pull up that invoice without any dependancies at all, but you could never recreate that invoice from scratch.
The problem you're facing is, as I'm sure you know, a result of Database Normalization. One of the approaches to resolving this can be taken from Business Intelligence techniques - archiving the data in a de-normalized state in a Data Warehouse.
Normalized data:
Orders table
OrderId
CustomerId
Customers Table
CustomerId
Firstname
etc
Items table
ItemId
ItemName
ItemPrice
OrderDetails Table
ItemDetailId
OrderId
ItemId
ItemQty
etc
When queried and stored de-normalized, the data warehouse table looks like
OrderId
CustomerId
CustomerName
CustomerAddress
(other Customer Fields)
ItemDetailId
ItemId
ItemName
ItemPrice
(Other OrderDetail and Item Fields)
Typically, there is either some sort of scheduled job that pulls data from the normalized tables into the Data Warehouse on a scheduled basis, OR, if your design allows, it could be done when an order reaches a certain status (such as shipped). It could be that the records are stored at each change of status (with a field called OrderStatus tracking the current status), so the fully de-normalized data is available for each step of the order/fulfillment process. When and how to archive the data into the warehouse will vary based on your needs.
There is a lot of overhead involved in the above, but the other common approach I'm aware of carries even MORE overhead.
The other approach would be to make the tables read-only. If a customer wants to change their address, you don't edit their existing address, you insert a new record.
So if my address is AddressId 12 when I first order on your site in January, then I move on July 4, I get a new AddressId tied to my account. (Say AddressId 123123, because your site is very successful and has attracted a ton of customers.)
Orders I placed before July 4 would have AddressId 12 associated with them, and orders placed on or after July 4 have AddressId 123123.
Repeat that pattern with every table that needs to retain historical data; a sketch follows.
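A minimal sketch of the insert-only pattern (generic SQL; ids and values are illustrative):

-- the move: insert a new address row instead of updating the old one
INSERT INTO addresses (address_id, customer_id, street, city)
VALUES (123123, 42, '1 New Street', 'Springfield');

-- new orders carry the new AddressId; old orders keep AddressId 12
INSERT INTO orders (order_id, customer_id, address_id, placed_on)
VALUES (9001, 42, 123123, '2012-07-05');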
I do have a third approach, but it's a difficult one to search for. I use it in one app only, and it actually works out pretty well in this single instance, which had some pretty specific business needs for reconstructing the data exactly as it was at a specific point in time. I wouldn't use it unless I had similar business needs.
At a specific status, serialize the data into an XML document, or some other document you can use to reconstruct the data. This allows you to save the data as it was at the time it was serialized, retaining the original table structure and relations.
When you have time-sensitive data, you use things like the product and customer tables as lookup tables and store the information directly in your Orders/OrderDetails tables.
So the order table might contain the customer name and address, and the details would contain all relevant information about the product, especially the price (you never want to rely on the product table for price information beyond the initial lookup at the time of the order).
This is NOT denormalizing; the data changes over time, but you need the historical value, so you must store it at the time the record is created or you will lose data integrity. You don't want your financial reports to suddenly indicate you sold 30% more last year because of price updates. That's not what you sold.
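For example, a sketch of capturing the price at order time (generic SQL; all names invented):

-- copy the current price into the order line; later price updates
-- to products will not rewrite history
INSERT INTO order_details (order_id, product_id, qty, unit_price)
SELECT 1001, p.id, 3, p.price
FROM products p
WHERE p.id = 42;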