MS Access Database Task Costing - ms-access

I have a theory question. I have an Access database and I want to track cost by task. Currently I have a task tracker table that stores the user's Hours|HourlyRate and Overtime|OvertimeRate, among other things (work order no, project no, etc.). I don't think this is the best way to store the data, because users could look at the table and see each other's rates. Up to now that didn't matter much, but I'm about to give this database to more users. I was thinking of moving the rate data into a separate table linked to the ID of the task table and not giving users access to that table, but then I couldn't do an After Update event, as the user won't have write access to it. Either that, or store the rates in a separate database with a start and end date for each rate. For instance:
Ed | Rate $0.01 | StartDate 01/01/1999 | EndDate 12/31/1999
Ed | Rate $0.02 | StartDate 01/01/2000 | EndDate 12/31/2000
This way I can store the costing data in a separate database that the users don't have access to, and just calculate the cost information whenever I need it based on the date and the unique user ID. I was wondering what solutions others have come up with in MS Access for this type of situation.
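One way to make that calculation concrete: keep the rates in a table the users can't open (in a secured back end or a separate database) and join to it on user and date whenever a cost figure is needed, so no rate is ever written to the task table. The query below is only a sketch; TaskTracker, UserRates, WorkDate and the other names are placeholders for whatever the schema actually uses:

SELECT t.WorkOrderNo, t.ProjectNo,
       SUM(t.Hours * r.HourlyRate + t.OvertimeHours * r.OvertimeRate) AS TaskCost
FROM TaskTracker AS t INNER JOIN UserRates AS r ON r.UserID = t.UserID
WHERE t.WorkDate BETWEEN r.StartDate AND r.EndDate
GROUP BY t.WorkOrderNo, t.ProjectNo;

Because the rate is looked up at query time, there is nothing for an After Update event to write, which sidesteps the permissions problem described above.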

Related

Add interested users to table in mysql

So I am working on a booking system where I am posting small available jobs for the kids in the community. I am not looking for a direct booking system in the sense that the user can just press the "booking button" and directly get the job. The approach I want to take is that you can submit interest and then the poster of the job can accept one of the applicants.
So I have a few tables going on, but the essential ones for this question are these two.
users
| id | name | age | ...
jobs
| id | date | salary |
What I am looking for, explained in its simplest form, is that I want multiple user IDs to be stored in a column so that I can later display/control the users connected to the job in question.
I would very much appreciate a solution, or just as much a tip on how I would go about solving the problem.
(I am using a MySQL database, if that adds any value to the question.)
Best regards.
That is an n:m relation. A user can be interested in multiple jobs and a job can be of interest to multiple users. You should have a third table, user_jobs, for this, where you store one record per user interested in a job.
Something like
user_jobs
userid
jobid
date
status
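A minimal sketch of that junction table in MySQL (the column types, composite key and status default are assumptions; adjust them to match your users and jobs tables):

CREATE TABLE user_jobs (
    userid INT NOT NULL,
    jobid  INT NOT NULL,
    date   DATE NOT NULL,
    status VARCHAR(20) NOT NULL DEFAULT 'interested',
    PRIMARY KEY (userid, jobid),
    FOREIGN KEY (userid) REFERENCES users (id),
    FOREIGN KEY (jobid) REFERENCES jobs (id)
);

-- everyone who has submitted interest in job 42
SELECT u.id, u.name
FROM users AS u
JOIN user_jobs AS uj ON uj.userid = u.id
WHERE uj.jobid = 42;

When the poster accepts an applicant you would just update that row's status, rather than moving data around.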

Organizing monthly account extracts for personal use MS Access

I'm a bit of a newbie with databases and database design, but I'm hoping someone can point me in the right direction. I currently have 14 monthly loan extracts, each of which contains all accounts, their status, balance and customer contact info as of month end. Not knowing what to do, I imported each of the monthly files into Access, with each table acting more like a tab from an Excel workbook. Laugh away - I now know that's not how it's supposed to work.
I've done my homework and I understand how to split up part of my data into Customer and Account tables, but what do I do with the account balances? My thought is to create a Balances table, create a relationship to the Accounts table and create columns for each month. This seems logical, but is it the best way?
99% of my analysis involves trend reporting and other ad hoc tasks - tracking the total balances by product type over time given other criteria, such as credit score or age. My intended use is to create queries to select the data I need and connect to it via Get & Transform in Excel for final manipulation and report writing.
This also begs the question "how normalized should my new database be?" Each monthly extract is cumulative, so a good 75% of my data is redundant contact info already, but how normalized should I go?
Sorry for ranting, but if anyone has any experience in setting up their own historical database, or could point me in a direction that will get me on track, I would appreciate it.
Best practice for transactional systems is close to what you expect:
1. Create a Customer table
2. Create an Account table
3. Create an Account Balance table
4. Create relationships from the Account to Customer, and from the Account Balance to the Account table.
You can create a column for each month, provided you have Year as part of the key of the Account Balance table. Even better would be to have the key for the Account Balance be Account ID and Date.
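A minimal sketch of that normalized layout in generic SQL (table names, column names and types are illustrative; adjust the data types for Access):

CREATE TABLE Customer (
    CustomerID   INTEGER PRIMARY KEY,
    CustomerName VARCHAR(100)
);

CREATE TABLE Account (
    AccountID   INTEGER PRIMARY KEY,
    CustomerID  INTEGER NOT NULL REFERENCES Customer (CustomerID),
    ProductType VARCHAR(50)
);

CREATE TABLE AccountBalance (
    AccountID   INTEGER NOT NULL REFERENCES Account (AccountID),
    BalanceDate DATE    NOT NULL,
    Balance     DECIMAL(12,2),
    PRIMARY KEY (AccountID, BalanceDate)
);

Each monthly extract then becomes one row per account in AccountBalance, keyed by the month-end date, rather than a new column or a new table.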
However, since you are performing analytics over the data, a de-normalized approach is not only acceptable -- it is preferable. So yes, you can (and perhaps should, based upon your use cases) put all the data into one big flat table and then compile your analytics.

Managing Historical and Current Records with SQL

I want to keep track of each User's current balance and balance history using the Django ORM. I imagine 2 tables (User and History) with a one-to-many between User and History representing a user's entire history, and a one-to-one between User and History for easy access to the current balance:
History
ID | User (FK to User) | Delta | Balance | Timestamp
User
ID | Name | Employee | Year | Balance (FK to History)
1) Does this seem reasonable given that I'm using the Django ORM? I think with raw SQL or another ORM, I could give history a start and stop date, then easily get the latest with SELECT * FROM History WHERE user_id=[id] AND stop IS NULL;.
2) Should History have a balance column?
3) Should User have a balance column (I could always compute the balance on the fly)? If so, should it be a "cached" decimal value? Or should it be a foreign key to the latest balance?
A strictly normal approach would say that neither table should contain a balance column, but that users' balances should be calculated when required from the sum of all their history. However, you may find that using such a schema results in unacceptable performance, in which case caching the balance would be sensible:
if you're mostly interested in the current balance, then there's little reason to cache balances in the History table (just cache the current balance in the User table alone);
on the other hand, if you might be interested in arbitrary historical balances, then storing historical balances in the History table would make sense (and then there'd be little point in also storing the current balance in the User table, since that could easily be discovered from the most recent History record).
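To make the two options concrete in SQL (column names follow the tables sketched in the question; the LIMIT syntax assumes a backend such as MySQL, PostgreSQL or SQLite, and the equivalent Django ORM calls would use aggregation and ordering):

-- strictly normalized: derive the current balance from the deltas
SELECT SUM(delta) AS current_balance
FROM history
WHERE user_id = 1;

-- with a cached Balance column on History, the current balance is just
-- the balance on the most recent row
SELECT balance
FROM history
WHERE user_id = 1
ORDER BY timestamp DESC
LIMIT 1;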
But perhaps it's not worth worrying about caching right now? Keep in mind the mantra "normalise until it hurts; denormalise until it works" as well as Knuth's famous maxim "premature optimisation is the root of all evil".

Multiple Date Fields database design

I'm designing an Access .accdb for project management. The project contract stipulates a number of milestones for each project, each with an associated date. The exact number of milestones depends on an "either/or" case of project size, but the maximum is six.
My employer would like to track a [Forecast] date, an [Actual] date and a [Paid] date for each milestone, meaning a large project ends up with 24 dates associated with it, often duplicated (if a project runs to time, all four dates will be identical).
Currently, I have tblMilestones, which has a FK linking to tblProject and a record for each Milestone, with the 4 associated dates as fields in the record and a field to mark the milestone as complete or current.
I feel like we're collecting, storing and entering a lot of pretty pointless data - especially the [Forecast] date, for which we collect data from our project managers (not the most reliable data anyway). Once the milestone is complete and the [Actual] date is entered, the [Forecast] date is pretty meaningless.
I'd rather have the contract date in one table, entered when a new project is added, a reporting table for the changeable forecast date, set the Actual date when the user marks the milestone as complete, and draw the paid date from transaction records.
Is this a better design approach? The db is small - less than 50 projects, so part of me thinks I'd just be making things more complicated than they need to be, especially in terms of the extra UI required.
Take a page out of dimensional data warehouse design and store dates in their own table with a DateID:
DateID DateValue
------ ----------
1 2000-01-01
... ...
9999 2012-12-31
Then turn all your date fields--Forecast, Actual, Paid, etc.--into foreign key references to the date table's DateID field.
To populate the dates table you can go two ways:
Use some VBA to generate a large set of dates, say 2005-01-01 to 2100-12-31, and insert them into the dates table as a one-time operation.
Whenever someone types in a new date, check the dates table to see if it already exists, and if not, insert it.
Whichever way you do it, you'll obviously need an index on DateValue.
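For illustration, the dimension approach might look like this in generic SQL (names and types are placeholders; adjust the data types and constraint syntax for Access):

CREATE TABLE Dates (
    DateID    INTEGER PRIMARY KEY,
    DateValue DATE NOT NULL
);
CREATE UNIQUE INDEX idxDatesValue ON Dates (DateValue);

CREATE TABLE tblMilestones (
    MilestoneID    INTEGER PRIMARY KEY,
    ProjectID      INTEGER NOT NULL REFERENCES tblProject (ProjectID),
    ForecastDateID INTEGER REFERENCES Dates (DateID),
    ActualDateID   INTEGER REFERENCES Dates (DateID),
    PaidDateID     INTEGER REFERENCES Dates (DateID)
);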
Taking a step back from the actual question, I'm realising that you're trying to fit two different uses into the same database--regular transactional use (as your project management app) and analytical use (tracking several different dates for your milestones--in other words, the milestone completion date is a Slowly Changing Dimension). You might want to consider splitting up these two uses into a regular transactional database and a data warehouse for analysis, and setting up an ETL process to move the data between them.
This way you can track only a milestone completion date and a payment date in your transactional database and the data warehouse will capture changes to the completion date over time. And allow you to do analysis and forecasting on that without bogging down the performance of the transactional (application) database.

MySQL Table Summary Cascade

I'm designing a statistics tracking system for a sales organization that manages 300+ remote sales locations around the world. The system receives daily reports on sales figures (raw dollar values, and info-stats such as how many of X item were sold, etc.).
I'm using MAMP to build the system.
I'm planning on storing these figures in one big MySQL table, so each row is one day's statistics from one location. Here is a sample:
------------------------------------------------------------------
| LocationID | Date | Sales$ | Item1Sold | Item2Sold | Item3Sold |
------------------------------------------------------------------
| Hawaii | 3/4 | 100 | 2 | 3 | 4 |
| Turkey | 3/4 | 200 | 1 | 5 | 9 |
------------------------------------------------------------------
Because the organization will potentially receive a statistics update from each of 300 locations on a daily basis, I am estimating that within a month the table will have 9,000 records and within a year around 108,000. MySQL table partitioning based on the year should therefore keep queries in the 100,000 record range, which I think will allow steady performance over time.
(If anyone sees a problem with the theories in my above 'background data', feel free to mention them as I have no experience with building a large-scale database and this was simply what I have gathered with searching around the net.)
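For what it's worth, the yearly partitioning could be declared roughly like this in MySQL (the table and column names here are just assumptions standing in for the real schema; note that the partitioning column has to appear in every unique key, which is why it sits in the primary key):

CREATE TABLE daily_figures (
    location_id INT  NOT NULL,
    stat_date   DATE NOT NULL,
    sales       DECIMAL(12,2),
    item1_sold  INT,
    item2_sold  INT,
    item3_sold  INT,
    PRIMARY KEY (location_id, stat_date)
)
PARTITION BY RANGE (YEAR(stat_date)) (
    PARTITION p2012 VALUES LESS THAN (2013),
    PARTITION p2013 VALUES LESS THAN (2014),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);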
Now, on the front end of this system, it is web-based and has a primary focus on PHP. I plan on using the YUI framework I found online to display graph information.
What the organization needs to see is daily/weekly graphs of the sales figures of their remote locations, and whatever 'breakdown' statistics such as items sold (so you can "drill down" into a monetary graph and see what percentage of that income came from item X).
So if I have the statistics by LocationID, it's a fairly simple matter to organize this information by continent. If the system needs to display a graph of the sales figures for all locations in Europe, I can do a Query that JOINs a Dimension Table for the LocationID that gives its "continent" category and thereby sum (by date) all of those figures and display them on the graph. Or, to display weekly information, sum all of the daily reports in a given week and return them to my JS graph object as a JSON array, voila. Pretty simple stuff as far as I can see.
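A sketch of that continent roll-up, reusing the assumed names from the partitioning example above plus an assumed locations dimension table that carries a continent column:

-- daily totals for Europe
SELECT d.stat_date, SUM(d.sales) AS total_sales
FROM daily_figures AS d
JOIN locations AS l ON l.location_id = d.location_id
WHERE l.continent = 'Europe'
GROUP BY d.stat_date;

-- the weekly version just groups by the week instead of the day
SELECT YEARWEEK(d.stat_date, 3) AS yearweek, SUM(d.sales) AS total_sales
FROM daily_figures AS d
JOIN locations AS l ON l.location_id = d.location_id
WHERE l.continent = 'Europe'
GROUP BY YEARWEEK(d.stat_date, 3);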
Now, my thought was to create "summary" tables of these common queries. When the user wants to pull up the last 3 months of sales for Africa, and the query has to go all the way down to the daily level and with various WHERE and JOIN clauses, sum up the appropriate LocationID's figures on a weekly basis, and then display to the user...well it just seemed more efficient to have a less granular table. Such a table would need to be automatically updated by new daily reports into the main table.
Here's the sort of hierarchy of data that would then need to exist:
1) Daily Figures by Location
2) Daily Figures by Continent based on Daily Figures by Location
3) Daily Figures for Planet based on Daily Figures by Continent
4) Weekly Figures by Location based on Daily Figures by Location
5) Weekly Figures By Continent based on Weekly Figures by Location
6) Weekly Figures for Planet based on Weekly Figures by Continent
So we have a kind of tree here, with the most granular information at the bottom (in one table, admittedly) and a series of less and less granular tables so that it is easier to fetch the data for long-term queries (partitioning the Daily Figures table by year will be useless if it receives queries for 3 years of weekly figures for the planet).
Now, first question: is this necessary at all? Is there a better way to achieve broad-scale query efficiency in the scenario I'm describing?
Assuming that there is no particularly better way to do this, how to go about this?
I discovered MySQL Triggers, which to me would seem capable of 'cascading the updates' as it were. After an INSERT into the Daily Figures table, a trigger could theoretically read the information of the inserted record and, based on its values, call an UPDATE on the appropriate record of the higher-level table. I.e., $100 made in Georgia on April 12th would prompt the United States table's 'April 10th-April 17th' record to UPDATE with a SUM of all of the daily records in that range, which would of course see the newly entered $100 and the new value would be correct.
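As a rough sketch of that kind of trigger, here is one variant that maintains a weekly-by-location summary (names are assumptions; instead of re-summing the whole week, it simply adds the new row's amount via an upsert, which requires a unique key on (location_id, yearweek) in the summary table):

CREATE TRIGGER trg_daily_figures_ai
AFTER INSERT ON daily_figures
FOR EACH ROW
    INSERT INTO weekly_location_figures (location_id, yearweek, sales)
    VALUES (NEW.location_id, YEARWEEK(NEW.stat_date, 3), NEW.sales)
    ON DUPLICATE KEY UPDATE sales = sales + NEW.sales;

A continent-level roll-up done the same way would need the trigger to join to the location-to-continent mapping, which is exactly where the maintenance concern below comes in.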
Okay, so that's theoretically possible, but it seems too hard-coded. I want to build the system so that the organization can add/remove locations and set which continent they are in, which would mean that the triggers would have to be reconfigured to include that LocationID. The inability to make multiple triggers for a given command and table means that I would have to either store the trigger data separately or extract it from the trigger object, and then parse in/out the particular rule being added or removed, or keep an external array that I handled with PHP before this step, or...basically, a ton of annoying work.
While MySQL triggers initially seemed like my salvation, the more I look into how tricky it will be to implement them in the way that I need the more it seems like I am totally off the mark in how I am going about this, so I wanted to get some feedback from more experienced database people.
While I would appreciate intelligent answers with technical advice on how to accomplish what I'm trying to do, I will more deeply appreciate wise answers that explain the correct action (even if it's what I'm doing) and why it is correct.