Microsoft Access - Percentage Complete

Basically I would like to create a form/report that shows a user what percentage of a job is complete (I have done that), but then, from the percentage, work out how much of the "quote" is completed (e.g. Quote = £100, Percentage = 50% complete, so the job would now be at £50 complete).
The next stage would then be working out the total of ALL the quotes (e.g. £100,000) and gathering the total completed amount (e.g. £40,000, as some jobs may have different percentages complete). I can then compare the expected amount, based on the dates the jobs are due, with the actual completed amount at the moment.
I'm not saying it's an easy or hard task to do, but if someone were willing to help that would be excellent.

As you already have the percentage calculated (say 50%, which is 0.5), all you need to do is multiply:
QuoteCompleted = QuoteTotal * CalculatedPercentage
or, as a ControlSource:
=[Quote]*[TextboxHoldingCalculatedPercentage]
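For the second part of the question (comparing the total quoted amount with the total completed so far), the same multiplication just gets summed over every job; in Access that would typically be a totals query summing [Quote]*[PercentComplete] (field names assumed). As a minimal sketch of the arithmetic only, here in Python with made-up figures:

# Hypothetical data: one (quote_total, fraction_complete) pair per job.
jobs = [
    (100.0, 0.50),   # £100 quote, 50% complete -> £50 earned
    (250.0, 0.20),   # £250 quote, 20% complete -> £50 earned
    (400.0, 1.00),   # £400 quote, finished     -> £400 earned
]

total_quoted = sum(quote for quote, _ in jobs)
total_completed = sum(quote * pct for quote, pct in jobs)

print(f"Total quoted:     {total_quoted:,.2f}")
print(f"Total completed:  {total_completed:,.2f}")
print(f"Overall progress: {total_completed / total_quoted:.0%}")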

Related

AS3 - Massive Numbers/Integers, Beyond MAX_VALUE

Can anyone help me write a class, e.g. BigNumber.as (or BigInt.as) which will:
Allow for really really big numbers/integers.
Include a method to express a number in format "1.54 Million", "1.98 Vigintillion" and so on...
Allow the maximum number to stop only at the last number word (e.g. Million, Vigintillion, etc) in the defined list. (e.g. list built from here: https://en.wikipedia.org/wiki/Names_of_large_numbers under Standard dictionary numbers [Short scale])
I had an idea to have a class which contains 2 Number values ("value" and "timesMaxedOut"). When "value" >= Number.MAX_VALUE, it would increment "timesMaxedOut" by 1 and reset "value" to the amount by which it went over.
The problem? It seems if you hit or surpass "MAX_VALUE" then the Number will reset to 0. I'm also sure it would then be difficult to properly multiply or divide numbers with this approach, as it would need to take into account "timesMaxedOut" just for the calculations to work correctly.
My goal is to write a game which would allow players to reach really big numbers, and play indefinitely essentially, but AS3 lacks very large number support it seems.
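For what it's worth, the naming requirement ("1.54 Million", "1.98 Vigintillion", ...) is independent of how the magnitude is stored, and the counter idea in the question maps onto it directly if you count groups of a thousand instead of multiples of MAX_VALUE. A rough sketch of that formatting logic, written here in Python rather than AS3 and using only a short illustrative suffix list (the full short-scale list from the Wikipedia page would slot in the same way):

# Illustrative short-scale names; extend this list from the Wikipedia article.
NAMES = ["", "Thousand", "Million", "Billion", "Trillion", "Quadrillion"]

def format_big(value: float) -> str:
    """Express a number as e.g. '1.54 Million'."""
    if value < 1000:
        return f"{value:g}"
    group = 0
    while value >= 1000 and group < len(NAMES) - 1:
        value /= 1000.0
        group += 1
    return f"{value:.2f} {NAMES[group]}"

print(format_big(1_540_000))   # 1.54 Million
print(format_big(2.5e12))      # 2.50 Trillion

Storing the number as a (mantissa, thousands-group) pair rather than a raw Number keeps you far away from Number.MAX_VALUE, which is essentially the value/timesMaxedOut idea with a base of 1000 instead of a base of MAX_VALUE.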

SSRS Format to display as percent

I've gone through quite a few examples on here and I apologize if I'm asking a repeat question; as far as I can tell, I am not.
I have an SSRS report that shows gross sales for certain aspects of our sales departments. They are broken down, by row, into "cost, gross profit, gross profit %, order count, total sales." The columns are the aspects of our sales: web sales, phone sales, etc.
In the tablix I can format a text box to display the results as numbers, but as you can see, I also have Percentage and Count in there. I don't know how to format those within the context of the original text box format. So I know I have everything that shows under there as a number already, but how do I handle getting the percentage to show as a percentage and the count to show as a count?
For example, all the percentages currently show as "$0.35" and various other numbers that follow that form. The counts currently appear as currency too.
I've used an example I found on here, "=Iif ( Me.Value = Floor ( Me.Value ) , "0%" , "0.00%" )", but all that did was make everything that showed up in that column "0.00%". I am fairly new to SSRS and have been cramming consistently for the past two weeks, but I just cannot find help on this. Thank you in advance for anything you can offer.
Update: =IIF(Fields!LVS_Web.Value=0.00, "0%", format(Fields!LVS_Web.Value, "P"))
That worked... to a degree, but now everything is a percent. I'm thinking ELSE here, but I don't know how ELSE goes in; I've not once seen the word ELSE.
Update 2: The thing that I've noticed is that in the statement, where it says "=0.00, "0%"", that doesn't even really apply. I just put that there because I'm new to this and I needed an argument involved. I took the 0% and changed it to N under the condition that the number was < .99, hoping I would just catch all of the decimals that fell below the value of 1. Like "$.23", which later became 23.45%, so I COULD do that, but what I don't understand is that it made everything else "N" instead of a number. Why is that? Why doesn't it make everything else "P"?
I'm losing my damned mind.
There is also the fact that this information is being pulled from a stored procedure. I don't really know too much about those quite yet; I get assigned simple tasks every so often as a stepping stone for learning. I don't really know what the query was, and I couldn't edit it if I wanted to. This can be done with expression formatting, but my expression is too broad, and I get mixed results using greater-than or less-than comparisons; it's probably not the wisest thing to use since these numbers are not set in stone. My day is almost done, I've made very very little progress, but I had a good lunch. So success.
So I provided my own answer for this problem, and it works. Thanks, me. Thanks to all that tried to help me and did help as well. I appreciate the effort strangers will put out for each other.
I've had a new problem develop: I need to display a time relative to the data being pulled. I can put NOW in there and get today's date, but if someone is pulling information from FEB, they may be a little off-put by the current date. I'll probably get this figured out soon, but if anyone can help in the meantime, I would appreciate it.
A standard principle is to separate data from display: use the Value property to store the data in its native data type and use the Format property to display it how you want. So rather than using an expression that formats the Value property, such as =Format(Fields!SomeField.Value, "0.00%"), leave the Value as =Fields!SomeField.Value and set the Format property to P2.
This is especially important when exporting your report to Excel, because if you have the right data type for your data it will export to Excel as the right data type. If you use the Format function, it will export as text, making sorting and formulas not work properly.
The easiest way to control the formatting is to use the standard numeric formats. Click on the cell or range of cells that you want to have a certain format and set the Format property. These formats are a format specifier letter followed by an optional digit for precision (the number of decimal places). Some useful ones are:
C Currency with 2 decimal places (by default)
N4 Number with 4 decimal places
P0 Percentage with no decimal places
See the documentation on standard numeric formats for the full list. Format the number cells as numbers and the percents as percents - you don't need to try to make one format string fit every cell.
These standard numeric formats also respect regional settings. You should set your report's Language property to =User!Language to use the user's regional settings rather than the report server's.
If the number is already multiplied by 100, e.g. 9.5 should be shown as 9.5%, then use the format:
0.00\%
9.5 -> 9.5%
0.34 -> 0.34%
This way you can use the standard number formatting and just add the % to the end. The \ escapes the %, preventing the multiply-by-100 that the % format specifier would otherwise apply (which would make 9.5 show as 950%).
=iif(Fields!Metric.Value = "Gross Profit %",
Format(Fields!LVS_Web.Value,"P"),
iif(Fields!Metric.Value = "Order Count",
Format(Fields!LVS_Web.Value,"G4"),
Format(Fields!LVS_Web.Value,"C")))
This is what saved me and did what I wanted. There is another error, but it's my boss's fault, so now I get to laugh at him. Thanks everyone.
Source:
https://technet.microsoft.com/en-us/library/bb630415(v=sql.100).aspx
This is simple to use:
Percent of (the sum of line item totals for the current scope) / (the sum of line item totals for the "Sales" dataset).
This value is formatted using FormatPercent, specifying one decimal place.
="Percentage contributing to all sales: " & FormatPercent(Sum(Fields!LineTotal.Value)/Sum(Fields!LineTotal.Value,"Sales"),1)

How can a Fourier transform ever be used for extrapolation?

Please excuse a clueless newbie question.
Since a discrete Fourier Transform on a fixed interval is treated as repeating indefinitely, how can it ever be used to extrapolate a time series? What follows the end of the interval will always be identical to the beginning.
Even a simple least square fit would at least give a trend.
How can all that cycle information in a FT be useless for extrapolation?
How? By changing your initial assumption.
One does not need to assume that the input to a DFT repeats indefinitely, exactly periodic in the aperture width. Assuming that the input is a rectangular window upon a longer stationary sequence, which may or may not be periodic within the DFT aperture, is also a valid assumption, and one commonly used to interpolate/estimate "between bin" spectra.
(e.g. if the DFT result looks exactly like offset samples of a Sinc function corresponding to the window width, one could assume that this is the result of a rectangular window upon a single low-degree-of-freedom oscillator, or extreme luck, or an alien intelligence that just happens to order all N bins in just such an interesting pattern. Occam's razor may or may not suggest that the former is the better assumption, depending on your model of the input.)
Extending interpolated "between bin" or estimated non-periodic-in-aperture spectra (e.g. after deconvolving the assumed Sinc distortion caused by the rectangular window) beyond the end of the DFT aperture/window may allow extrapolating data not identical with the beginning of the DFT aperture/window.
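As a toy illustration of that idea (not anything claimed by the answer above), the sketch below fits the few strongest DFT bins of a windowed signal and then evaluates that sinusoidal model past the end of the window. It is written in Python with numpy; the signal, bin count and sample counts are made up, and it only works to the extent that the "few dominant, stationary sinusoids" assumption holds:

import numpy as np

# Made-up signal: two sinusoids plus a little noise, observed for N samples.
N, N_future = 256, 64
n = np.arange(N + N_future)
x = 1.0 * np.sin(2 * np.pi * 5 * n / N) + 0.5 * np.sin(2 * np.pi * 12 * n / N)
observed = x[:N] + 0.05 * np.random.randn(N)

# DFT of the observed window; keep only the K strongest positive-frequency bins.
X = np.fft.rfft(observed)
K = 2
bins = np.argsort(np.abs(X))[-K:]

# Rebuild a sinusoidal model from those bins and evaluate it beyond the window.
model = np.zeros(N + N_future)
for k in bins:
    amp = 2 * np.abs(X[k]) / N if k not in (0, N // 2) else np.abs(X[k]) / N
    model += amp * np.cos(2 * np.pi * k * n / N + np.angle(X[k]))

err = np.abs(model[N:] - x[N:]).max()
print(f"max extrapolation error over the next {N_future} samples: {err:.3f}")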

How to detect local maxima and curve windows correctly in semi complex scenarios?

I have a series of data and need to detect peak values in the series within a certain number of readings (window size) and excluding a certain level of background "noise." I also need to capture the starting and stopping points of the appreciable curves (ie, when it starts ticking up and then when it stops ticking down).
The data are high precision floats.
Here's a quick sketch that captures the most common scenarios that I'm up against visually:
One method I attempted was to pass a window of size X along the curve going backwards to detect the peaks. It started off working well, but I missed a lot of conditions initially not anticipated. Another method I started to work out was a growing window that would discover the longer duration curves. Yet another approach used a more calculus based approach that watches for some velocity / gradient aspects. None seemed to hit the sweet spot, probably due to my lack of experience in statistical analysis.
Perhaps I need to use some kind of statistical analysis package to cover my bases, versus writing my own algorithm? Or would there be an efficient method for tackling this directly in SQL with some kind of local-max technique? I'm simply not sure how to approach this efficiently. With each method I try, it seems that I keep missing various thresholds, detecting too many peak values, or not capturing entire events (reporting a peak datapoint too early in the reading process).
Ultimately this is implemented in Ruby and so if you could advise as to the most efficient and correct way to approach this problem with Ruby that would be appreciated, however I'm open to a language agnostic algorithmic approach as well. Or is there a certain library that would address the various issues I'm up against in this scenario of detecting the maximum peaks?
My idea is simple: after you get your window of interest, you need to find all the peaks in that window. You can just compare each value with its neighbours; after that you will know where the peaks occur and can decide which is the best peak.
I wrote a simple example in MATLAB to show the idea!
My example uses a waveform from an audio file :-)
waveFile = 'Chick_eco.wav';
[y, fs, nbits] = wavread(waveFile);   % use audioread in newer MATLAB versions
subplot(2,2,1); plot(y); legend('Original signal');

% Take a 100-sample window starting at sample 15000
startIndex = 15000;
WindowSize = 100;
endIndex = startIndex + WindowSize - 1;
frame = y(startIndex:endIndex);
nframe = length(frame)

% Find the peaks: a sample is a peak if it is positive,
% greater than its left neighbour and >= its right neighbour
peaks = zeros(nframe, 1);
for k = 2 : nframe - 1
    y1 = frame(k - 1);
    y2 = frame(k);
    y3 = frame(k + 1);
    if (y2 > 0) && (y2 > y1) && (y2 >= y3)
        peaks(k) = frame(k);
    end
end

peaks2 = peaks;
peaks2(peaks2 <= 0) = nan;            % hide the non-peaks when plotting

subplot(2,2,2); plot(frame); legend('Get Window Length = 100');
subplot(2,2,3); plot(peaks); legend('Where are the PEAKS');
subplot(2,2,4); plot(frame); legend('Peaks in the Window');
hold on; plot(peaks2, '*');

% Print the position and value of every peak found
for j = 1 : nframe
    if (peaks(j) > 0)
        fprintf('Local=%i\n', j);
        fprintf('Value=%i\n', peaks(j));
    end
end

% Where the local maxima occur
[maxivalue, maxi] = max(peaks)
You can see all the peaks and where they occur:
Local=37
Value=3.266296e-001
Local=51
Value=4.333496e-002
Local=65
Value=5.049438e-001
Local=80
Value=4.286804e-001
Local=84
Value=3.110046e-001
I'll propose a couple of different ideas. One is to use discrete wavelets, the other is to use the geographer's concept of prominence.
Wavelets: Apply some sort of wavelet decomposition to your data. There are multiple choices, with Daubechies wavelets being the most widely used. You want the low frequency peaks. Zero out the high frequency wavelet elements, reconstruct your data, and look for local extrema.
Prominence: Those noisy peaks and valleys are of key interest to geographers. They want to know exactly which of a mountain's multiple little peaks is tallest, and the exact location of the lowest point in the valley. Find the local minima and maxima in your data set. You should have a sequence of min/max/min/max/.../min. (You might want to add arbitrary end points that are lower than your global minimum.) Consider a min/max/min sequence. Classify each of these triples by the difference between the max and the larger of the two minima. Make a reduced sequence that replaces the smallest of these triples with the smaller of the two minima. Iterate until you get down to a single min/max/min triple. In your example, you want the next layer down, the min/max/min/max/min sequence.
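This isn't Ruby, but if a statistics package ends up being acceptable for prototyping, prominence-based peak picking is already packaged up in Python's scipy; a minimal sketch on made-up data (the prominence and width thresholds are placeholders you would tune to your noise level):

import numpy as np
from scipy.signal import find_peaks

# Made-up data: one broad bump plus noise, standing in for the real readings.
x = np.linspace(0, 10, 500)
y = np.exp(-((x - 5) ** 2)) + 0.05 * np.random.randn(500)

# Keep only peaks that rise well above their surroundings (prominence)
# and persist for a minimum number of samples (width).
peaks, props = find_peaks(y, prominence=0.3, width=10)

for p, left, right in zip(peaks, props["left_bases"], props["right_bases"]):
    print(f"peak at index {p}, rough window {left}..{right}")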
Note: I'm going to describe the algorithmic steps as if each pass were distinct. Obviously, in a specific implementation, you can combine steps where it makes sense for your application. For the purposes of my explanation, it makes the text a little more clear.
I'm going to make some assumptions about your problem:
The windows of interest (the signals that you are looking for) cover a fraction of the entire data space (i.e., it's not one long signal).
The windows have significant scope (i.e., they aren't one pixel wide on your picture).
The windows have a minimum peak of interest (i.e., even if the signal exceeds the background noise, the peak must have an additional signal excess of the background).
The windows will never overlap (i.e., each can be examined as a distinct sub-problem out of context of the rest of the signal).
Given those, you can first look through your data stream for a set of windows of interest. You can do this by making a first pass through the data: moving from left to right, look for noise threshold crossing points. If the signal was below the noise floor and exceeds it on the next sample, that's a candidate starting point for a window (vice versa for the candidate end point).
Now make a pass through your candidate windows: compare the scope and contents of each window with the values defined above. To use your picture as an example, the small peaks on the left of the image barely exceed the noise floor and do so for too short a time. However, the window in the center of the screen clearly has a wide time extent and a significant max value. Keep the windows that meet your minimum criteria, discard those that are trivial.
Now examine your remaining windows in detail (remember, they can be treated individually). The peak is easy to find: pass through the window and keep the local max. With respect to the leading and trailing edges of the signal, you can see in the picture that you have a window that's slightly larger than the actual point at which the signal exceeds the noise floor. In this case, you can use a finite difference approximation to calculate the first derivative of the signal. You know that the leading edge will be somewhat to the left of the window on the chart: look for a point at which the first derivative exceeds a positive noise floor of its own (the slope turns upwards sharply). Do the same for the trailing edge (which will always be to the right of the window).
Result: a set of time windows, the leading and trailing edges of the signals, and the peak that occurred in each window.
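A compact sketch of the windowing and filtering passes described above, in Python on made-up data (the noise floor, minimum width and minimum peak-excess values are placeholders to tune; the derivative-based edge refinement is left out for brevity):

import numpy as np

def find_events(y, noise_floor, min_width, min_excess):
    """Return (start, end, peak_index) for each window that clears the filters."""
    above = y > noise_floor
    events = []
    start = None
    for i, flag in enumerate(above):
        if flag and start is None:            # upward crossing: candidate start
            start = i
        elif not flag and start is not None:  # downward crossing: candidate end
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(y)))

    results = []
    for s, e in events:
        window = y[s:e]
        if (e - s) >= min_width and window.max() >= noise_floor + min_excess:
            results.append((s, e, s + int(np.argmax(window))))
    return results

# Made-up signal: a short noise blip and one real event.
y = np.zeros(300)
y[40:45] += 0.6                      # too narrow: should be rejected
y[120:200] += np.hanning(80) * 3.0   # the real event
y += 0.1 * np.random.randn(300)

for start, end, peak in find_events(y, noise_floor=0.5, min_width=20, min_excess=1.0):
    print(f"event from {start} to {end}, peak at {peak}")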
It looks like the definition of a window is the range of x over which y is above the threshold. So use that to determine the size of the window. Within that, locate the largest value, thus finding the peak.
If that fails, then what additional criteria do you have for defining a region of interest? You may need to nail down your implicit assumptions to more than 'that looks like a peak to me'.

Determining edge weights given a list of walks in a graph

These questions regard a set of data with lists of tasks performed in succession and the total time required to complete them. I've been wondering whether it would be possible to determine useful things about the tasks' lengths, either as they are or with some initial guesstimation based on appropriate domain knowledge. I've come to think graph theory would be the way to approach this problem in the abstract, and have a decent basic grasp of the stuff, but I'm unable to know for certain whether I'm on the right track. Furthermore, I think it's a pretty interesting question to crack. So here we go:
Is it possible to determine the weights of edges in a directed weighted graph, given a list of walks in that graph with the lengths (summed weights) of said walks? I recognize the amount and quality of permutations on the routes taken by the walks will dictate the quality of any possible answer, but let's assume all possible walks and their lengths are given. If a definite answer isn't possible, what kind of things can be concluded about the graph? How would you arrive at those conclusions?
What if there were several similar walks with possibly differing lengths given? Can you calculate a decent average (or other illustrative measure) for each edge, given enough permutations on different routes to take? How will discounting some permutations from the available data set affect the calculation's accuracy?
Finally, what if you had a set of initial guesses as to the weights and had to refine those using the walks given? Would that improve upon your guesstimation ability, and how could you apply the extra information?
EDIT: Clarification on the difficulties of a plain linear algebraic approach. Consider the following set of walks:
a = 5
b = 4
b + c = 5
a + b + c = 8
A matrix equation with these values is unsolvable, but we'd still like to estimate the terms. There might be some helpful initial data available, such as in scenario 3, and in any case we can apply knowledge of the real world - such as that the length of a task can't be negative. I'd like to know if you have ideas on how to ensure we get reasonable estimations, and that we also know what we don't know - e.g. when there's not enough data to tell a from b.
Seems like an application of linear algebra.
You have a set of linear equations which you need to solve. The variables being the lengths of the tasks (or edge weights).
For instance if the tasks lengths were t1, t2, t3 for 3 tasks.
And you are given
t1 + t2 = 2 (task 1 and 2 take 2 hours)
t1 + t2 + t3 = 7 (all 3 tasks take 7 hours)
t2 + t3 = 6 (tasks 2 and 3 take 6 hours)
Solving gives t1 = 1, t2 = 1, t3 = 5.
You can use any linear algebra technique (e.g. http://en.wikipedia.org/wiki/Gaussian_elimination) to solve these, which will tell you whether there is a unique solution, no solution or an infinite number of solutions (there are no other possibilities).
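For the solvable case this is one call in most environments; a quick check of the three-task example above, here in Python with numpy (assumed to be available):

import numpy as np

# Rows: t1 + t2 = 2,  t1 + t2 + t3 = 7,  t2 + t3 = 6
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
b = np.array([2, 7, 6], dtype=float)

print(np.linalg.solve(A, b))   # -> [1. 1. 5.]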
If you find that the linear equations do not have a solution, you can try adding a very small random number to some of the task weights/coefficients of the matrix and try solving it again. (I believe this falls under perturbation theory.) Matrices are notorious for radically changing behavior with small changes in the values, so this will likely give you an approximate answer reasonably quickly.
Or maybe you can try introducing some 'slack' task in each walk (i.e add more variables) and try to pick the solution to the new equations where the slack tasks satisfy some linear constraints (like 0 < s_i < 0.0001 and minimize sum of s_i), using Linear Programming Techniques.
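When the system is inconsistent or underdetermined, as in the a/b/c example from the question's edit, a least-squares fit still gives a usable estimate, and a non-negativity constraint captures the "task lengths can't be negative" knowledge. A sketch in Python, assuming numpy and scipy:

import numpy as np
from scipy.optimize import nnls

# Walks from the question: a=5, b=4, b+c=5, a+b+c=8; unknowns ordered (a, b, c).
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
lengths = np.array([5, 4, 5, 8], dtype=float)

# Plain least squares: minimises the squared error in the walk lengths.
est, residual, rank, _ = np.linalg.lstsq(A, lengths, rcond=None)
print("least squares:", est)

# Non-negative least squares: enforces the no-negative-task-length constraint.
est_nn, _ = nnls(A, lengths)
print("non-negative :", est_nn)

The residual tells you how badly the walks disagree with any single assignment, and a rank lower than the number of edges flags the "can't tell a from b" situation from the question.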
Assume you have an unlimited number of arbitrary characters to represent each edge (a, b, c, d, etc.).
w is a list of all the walks, each in the form 0,a,b,c,d,e followed by its length (the 0 will be explained later).
i = 1
if #w[i] ~= 1 then
    replace the first edge symbol of w[i], everywhere it appears in w, with (the LENGTH of w[i]) minus all the other values in w[i]
repeat forever.
Example:
0,a,b,c,d,e 50
0,a,c,b,e 20
0,c,e 10
So:
a is the first. Replace all instances of "a" with 50,-b,-c,-d,-e (that is, a = 50 - b - c - d - e).
New data:
50, 50
50,-d, 20
0,c,e 10
And repeat until one value is left, and you're finished! Alternatively, the first number can simply be subtracted from the length of each walk.
I'd forget about graphs and treat the lists of tasks as vectors - every task represented as a component with value equal to its cost (time to complete, in this case).
If tasks are in different orders initially, that's where to use domain knowledge to bring them to a canonical form, and to assign multipliers if domain knowledge tells you that the ratio of costs will be substantially influenced by ordering / timing. Timing is an implicit initial ordering, but you may have to make it a function of time just for adjustment factors (say driving at lunch time vs driving at midnight). The function might be tabular/discrete. In general it's always much easier to evaluate ratios and relative biases (the hardness of doing something). You may need a functional language to do repeated rewrites of your vectors until there's nothing more that domain knowledge and rules can change.
With canonical vectors, consider just the presence and absence of each task (just 0|1 for this iteration) and look for minimal diffs - single-task diffs first - since those will provide estimates with a small number of variables. Keep doing this recursively, be ready to backtrack, and have a heuristic rule for the goodness or quality of the estimates so far. Keep track of good "rounds" that you backtracked from.
When you reach a minimal irreducible state - you can't make any more diffs and all vectors have the same remaining tasks - you can do some basic statistics like variance, mean and median, look for big outliers, and look for ways to improve the initial domain-knowledge-based estimates that led to the canonical form. If you find a lot of them and can infer new rules, take them in and start the whole process from the start.
Yes, this can cost a lot :-)