Return value of first argument of condition without recalculation - google-apps-script

I know I can solve this by using a separate, hidden cell, but I'd like a clean solution that uses only 1 cell and 1 formula.
For this example I need to get a random number between X.00 and Y.00 (with decimals), not lower than 0.00 and not higher than 9.00. To keep things simple, though, I am only using 1 condition here, the one avoiding values less than 0.00.
I will use RANDBETWEEN(). The X and Y are supplied by 2 other cells on the dummy file below: B3 and D3.
The reason the random number can sometimes be less than 0.00 or above 9.00 is that I need the random result to be within +/- 1 of (X+Y)/2.
Also, the X and Y values will vary. Sometimes X will be 3.54, sometimes 8.99, and the same goes for Y. When X and Y are both low numbers like 0.5, the RANDBETWEEN() function might output a negative number; when that happens I need the output to be 0.00. The same applies for high values: if both X and Y are close to 9.00 the result might be something like 9.35, and in that case I'd need the output to be 9.00. But in the formula below I am only handling the below-zero case to keep it simple.
So the problem I am unable to resolve is that I need to get the value of the first argument of the IF() condition without recalculating it. If I refer to the 3rd argument for FALSE, it recalculates. My formula is this:
=IF(
  (RANDBETWEEN(
    ((((B3*100)+(D3*100))/2)-100),
    ((((B3*100)+(D3*100))/2)+100)) / 100) < 0,
  0,
  (RANDBETWEEN(
    ((((B3*100)+(D3*100))/2)-100),
    ((((B3*100)+(D3*100))/2)+100)) / 100))
This checks whether the first RANDBETWEEN() result is less than 0.00; if it is, it displays 0.00, but if not, it recalculates RANDBETWEEN() again, and that is the problem, because the recalculated value might itself be less than 0.00.
My question is: is there a way to return the value of the first argument of the condition without recalculating RANDBETWEEN() and without using a separate cell?
If not possible I would also welcome any solution using custom GAS functions.
My dummy file:
https://docs.google.com/spreadsheets/d/15YtgUVqDTuC-raMNJiN-YG4j3URaorXdPiwTy_Kb_K0/edit
(click checkbox to reset and again to recalculate the random number).

Wrap your RANDBETWEEN within a MIN and a MAX:
=MAX(
  MIN(
    RANDBETWEEN(((((D3*100)+(B3*100))/2)-100), ((((D3*100)+(B3*100))/2)+100)) / 100,
    9
  ),
  0
) * A1
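
Since you also welcome custom GAS functions, here is a minimal sketch of one (the function name CLAMPEDRAND is my own placeholder; adjust the bounds as needed):

/**
 * Returns a random value within +/- 1 of (x + y) / 2,
 * clamped to the range [0, 9] and rounded to 2 decimals.
 */
function CLAMPEDRAND(x, y) {
  var mid = (x + y) / 2;
  var raw = mid - 1 + Math.random() * 2; // uniform in [mid - 1, mid + 1]
  var clamped = Math.min(Math.max(raw, 0), 9);
  return Math.round(clamped * 100) / 100;
}

You would call it as =CLAMPEDRAND(B3, D3). One caveat: Sheets caches custom function results and only recalculates when an argument changes, so to keep your checkbox-reset workflow you can pass the checkbox cell as an extra, unused argument to force a refresh.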

Related

Octave out of bound error while trying to calculate mean value of a vector

I generated random values using the following function:
P = floor(6*rand(1,30)+1)
Then, using T = find(P==5), I got the indices where the outcome is 5 and stored them in T. The output was:
T =
10 11 13 14 15 29
Now I want to calculate the mean value of T using mean(T), but it gives me the following error:
error: mean(29): out of bound 1 (dimensions are 1x1) (note: variable 'mean' shadows function)
What I am trying to do is model the outcomes of rolling a fair die and record the times at which I get a 5. Then I want to take the mean value of all those times.
Although you don't explicitly say so in your question, it looks like you wrote
mean = mean(T);
When I tried that, it worked the first time I ran the code, but the second and subsequent times it gave the same error that you got. What seems to be happening is that the first time you run the script it calculates the mean of T, which is a scalar (i.e. it has dimensions 1x1), and stores it in a variable called mean, which then also has dimensions 1x1.
The second time you run it, the variable mean is still present in the environment, so instead of calling the function mean() Octave tries to index the variable called mean using the vector T as the indices. The variable mean only has one element, whose index is 1, so any element of T whose value is different from 1 is out of bounds. If you call your variable something other than mean, such as, say, mu:
mu = mean(T);
then it should work as intended. A less satisfactory solution would be to write clear all at the top of your script, so that the variable mean is only created after the function mean() has been called.
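
For completeness, a minimal Octave snippet putting the fix together (same variable names as in the question):

P = floor(6*rand(1,30) + 1);  % simulate 30 rolls of a fair die
T = find(P == 5);             % indices of the rolls that came up 5
mu = mean(T);                 % mu does not shadow the mean() function
disp(mu)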

Function that will not return 0

I am writing a formula to use as a decay multiplier on a given value.
The problem is the following: I have a processing window of, let's say, 10 days, and this window is computed anew every day. I need to decay a certain parameter by a factor reflecting the number of days an id has been present. Currently what I do is (previousWinSize - (start of the current window - start of the previous window)) / previousWinSize.
In this case, if my previous window size is 10 and the difference in the days of processing is two, (10-2)/10 gives me 0.8 to multiply my variable by, decaying 0.2 of it.
However, if I have a 3-day window and again 2 days of difference, (3-2)/3 gives a value close to 0, which cuts more than I would like.
I am looking for a formula that would scale better when the numbers are small and would not produce a huge decay factor.
Thank you in advance.
I recommend making use of a sigmoid function, e.g. f(x) = 1 / (1 + e^(-a(x - b))).
You can take the output of your function (i.e. a number between 0 and 1 based on the difference in days of processing) and feed it into the sigmoid. If you set up the a (slope) and b (inflection point) parameters properly you can, for example, ensure that the lowest decay multiplier you get is ~0.5 when your original equation returns a number close to 0.
I've graphed the example I stated above here:
https://www.desmos.com/calculator/nqemuexjhg
(This is based on: https://www.desmos.com/calculator/rna4aqta0c)
I think you do have two edge cases with this method though. When your equation returns 0 the sigmoid isn't exactly going to give you 0.5 (which you may not even want to begin with), you'll end up getting something that's close to 0.5. In this scenario what you may start to see is your values drifting if you keep applying the sigmoid. The same is true for when your equation returns 1. After putting it through the sigmoid you won't get 1, you'll get something close to 1.
What I think I'd do in such a scenario is have some sort of check before the sigmoid gets applied, e.g.:
if (x == 0)
    y = 0;
else if (x == 1)
    y = 1;
else
    y = sigmoid(x);
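
To make that concrete, here is a small Java sketch of the whole pipeline. The parameter values a = 6 and b = 0 are illustrative assumptions, chosen so that sigmoid(0) = 0.5 and sigmoid(1) is close to 1, matching the ~0.5 floor described above:

// Decay pipeline sketch: the original ratio fed through a sigmoid.
// a = 6, b = 0 are illustrative: sigmoid(0) = 0.5, sigmoid(1) ~ 0.998.
public class DecayFactor {
    static final double A = 6.0; // slope
    static final double B = 0.0; // inflection point

    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-A * (x - B)));
    }

    static double decayMultiplier(double prevWinSize, double dayDiff) {
        double x = (prevWinSize - dayDiff) / prevWinSize; // original formula
        if (x == 0.0) return 0.0; // edge cases bypass the sigmoid
        if (x == 1.0) return 1.0;
        return sigmoid(x);
    }

    public static void main(String[] args) {
        System.out.println(decayMultiplier(10, 2)); // 0.8  -> ~0.99
        System.out.println(decayMultiplier(3, 2));  // 0.33 -> ~0.88
    }
}

With these parameters the 3-day window from the question decays only ~0.12 of the value instead of ~0.67, which is the gentler behaviour for small windows you are after.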
Sources / Possible further reading:
https://en.wikipedia.org/wiki/Sigmoid_function

Octave FWHM calculation

I am having a problem calculating the FWHM of my data, because the fwhm function in the signal package gives a value 100 times bigger than I expected.
What I did is this: based on the Gaussian distribution function (you can find it on Wikipedia) I produced some data. In this function you can give a specific sigma (RMS) value (FWHM = sigma*2.355). Here is the script I wrote to understand the situation:
pkg load signal  % fwhm() comes from the signal package
x = 10:0.01:40;
x0 = 25;
sigma = 0.25;
y = (1/(sigma*sqrt(2*pi)))*exp(-((x-x0).^2)/(2*sigma^2));
z = fwhm(y)/2.355;
plot(x,y)
When I compared the results, the output of the fwhm function (24.999) is 100 times bigger than the sigma (0.25) I used in the function.
If you have any idea it will be very helpful.
Thanks in advance.
Your z is 100 times bigger because your step in x is 1/100 (0.01). If you use fwhm(y), it is assumed that the step size in x is 1; if not, you have to specify the x values.
In your case you should do:
z=fwhm(x, y)/2.355
z = 0.24999
which matches your sigma.
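
Equivalently, since your x grid is uniform with spacing 0.01, you could convert the index-based result to x units yourself (a sketch using the same data as above):

z = fwhm(y) * 0.01 / 2.355   % index units -> x units via the 0.01 step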

Microsoft Access - Decimal Scale stuck at 0

I have a calculated field in my table called C; it is the result of A - B = C. A and B are number fields (Single, Fixed). I am having trouble setting up C as a calculated Decimal field.
The precision / decimal places seem to work perfectly; I can modify them freely. But no matter what I do to "Scale", it always seems to return to "0". I need it to be 2, since the data in my reports is rounding off at the wrong places, giving me whole numbers.
As you can see, "Scale = 0"; no matter what I set this number to, it always reverts to "0". Why is that?
You can’t change the scale in a calculated field, because it takes the values and settings from the calculation.
So the fact of a scale of 0 should not matter. If the resulting number needs decimal places, it will (should) have the decimal value; the setting is IGNORED.
I mean, if the calculation is:
2 x 3 = 6
Then you get 6.
If you have 4 / 3 = 1.3333
Then, in your case you get:
1.33333333333333
And you WILL get the above EVEN if the scale = 0. So the scale setting is neither used nor available in a calculated field.
You are certainly free to round or format the above result. In fact, you could (should) consider using the Round() function in the actual calculation. So use something like:
Round([Field1] / [Field2],4)
And you thus get:
1.3333
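
For the field in your question, that would mean defining the calculated field's expression as something like (assuming your fields really are named A and B):

Round([A]-[B], 2)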

Testing large input range scenarios with JUnit

I am pondering how best to develop a JUnit test for a function that calculates a number of points and values in time based on a number of inputs. The purpose of the method is to calculate a series of points in time given a series of gradient/value pairs, i.e.
Gradient 1 to Value 1, Gradient 2 to Value 2, Gradient 3 to Value 3, and so on...
Given a starting point in time and a starting value, the function calculates the point in time at which each Value is reached (in the gradient/value pairs) up until a target value is reached. This is essentially to plot a line on a graph, with the x-axis having date values and the y-axis having numeric values.
The method to test takes the following inputs:
StartTime (Date)
StartValue (Double)
TargetValue (Double)
GradientValuePairs (ArrayList<GradientValuePair>)
EnsurePointEvery5Minutes (Boolean)
Where GradientValuePair is like:
class GradientValuePair {
    Double gradient; // Gradient up to Target
    Double target;
    ...
}
The output from this method is essentially an ArrayList<DatePoint> - a profile - with:
class DatePoint {
    Date date;
    Double value;
    ...
}
The EnsurePointEvery5Minutes parameter basically adds a date point every 5 minutes to the calculated profile, which is then returned by the method.
To ensure the test has worked, I will need to check that each date and value matches what is expected, by either:
Iterating through the array with an array of what is expected.
Storing minute/second offsets from the StartTime with the expected value in some sort of structure.
Now the difficult part for me is deciding how to write the TestCase. I want to test a broad/diverse range of inputs so that:
StartTime will cover 30 minutes, i.e. in the range 2012-03-08 00:00 to 2012-03-08 00:30.
StartValue will be in the range of 0 to 1000.
TargetValue will be in the range of StartValue to 1000.
GradientValuePairs will require around 10 different arrays to be tested.
EnsurePointEvery5Minutes will be tested with both true and false.
Now given the number of different input sets will be something like:
30 * (0 to 1000 * 0 to 1000 = 500500) * 10 * 2 = 300,300,000 different test input sets per GradientValuePairs input
Or call me crazy for wanting to do this; maybe the tests are too diverse for this instance.
I am wondering if anybody has any advice for testing scenarios like this. I can't think of any other way to do this than implementing my own algorithm to calculate the expected output before each call to the method I am testing - but then who is to say that the algorithm I implement for the test is correct?
If I understand correctly, you are proposing to test every possible combination of numeric inputs. That is almost never required of unit tests, as it would be essentially equivalent to testing whether the Java math library works for all numbers and all operations. Generally what you will do is try to identify edge conditions and write tests for those. These would include things like 0s, negatives, numeric overflow, and combinations of inputs whose intermediate computations result in the same things. Then, of course, you would want to test a handful of normal, vanilla cases as well that are not edge cases.
So, short answer: no, you should not need to test 300M+ input sets.
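
To illustrate, here is a minimal JUnit 4 sketch of the edge-condition approach. The ProfileCalculator.calculate(...) name, the pair(...) helper, and the asserted final values are hypothetical placeholders for the method and behaviour described in the question:

import static org.junit.Assert.*;
import java.util.ArrayList;
import java.util.Date;
import org.junit.Test;

public class ProfileCalculatorTest {

    // Edge case: StartValue already equals TargetValue, so no climb is needed
    // and the profile should end (and start) at that value.
    @Test
    public void startValueEqualToTargetValue() {
        ArrayList<GradientValuePair> pairs = new ArrayList<GradientValuePair>();
        pairs.add(pair(1.0, 1000.0));
        ArrayList<DatePoint> profile = ProfileCalculator.calculate(
                new Date(), 500.0, 500.0, pairs, false);
        assertEquals(500.0, profile.get(profile.size() - 1).value, 1e-9);
    }

    // Vanilla case: a single gradient that climbs from 0 to the target.
    @Test
    public void singleGradientReachesTarget() {
        ArrayList<GradientValuePair> pairs = new ArrayList<GradientValuePair>();
        pairs.add(pair(2.0, 1000.0));
        ArrayList<DatePoint> profile = ProfileCalculator.calculate(
                new Date(), 0.0, 1000.0, pairs, true);
        assertEquals(1000.0, profile.get(profile.size() - 1).value, 1e-9);
    }

    // Hypothetical helper to build a pair; adjust to the real constructor.
    private static GradientValuePair pair(double gradient, double target) {
        GradientValuePair p = new GradientValuePair();
        p.gradient = gradient;
        p.target = target;
        return p;
    }
}

Add a test per edge condition (zero start, start equal to target, overflow-sized values, empty pair list, both values of EnsurePointEvery5Minutes) plus a few vanilla cases, and you cover the behaviour with a dozen tests rather than 300M+.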