I was thinking about an elementary question in numerical analysis.
When discretizing an ordinary differential equation, it is well known that a second-order method is more accurate than a first-order method, since the truncation error is O(dx^2) for the second-order method and O(dx) for the first-order method. This is true when 0 < dx < 1.
But what if dx > 1? For example, if the domain is 0 to 10000 and the mesh has 1000 points, then dx = 10. In this case, is the second-order method less accurate than the first-order method, since dx^2 = 100 while dx = 10? We can encounter this when dealing with large-scale problems, such as climate modeling (where a cloud can be several kilometers across).
A second-order method being "more accurate" than a first-order method is not a claim that dx^2 < dx for some particular value of dx. It's a statement about the asymptotic rate of convergence as dx tends to zero.
Additionally, comparing dx^2 to dx directly doesn't make sense, because dx isn't a unitless quantity, it's a length. So you're trying to compare an area to a length, which is meaningless.
In big-O notation, if a quantity converges with O(dx^2), then that typically means that the error is of the form e = a2 dx^2 + a3 dx^3 + ... The leading coefficient a2 is in the units of X/meters^2, where X is whatever units your error is in, and maybe you use some other length instead of meters. Similarly, for a first order solution, the error is in the form b1 dx + b2 dx^2 + ..., where b1 is in units of X/meters.
So if you decide you can neglect the non-leading terms (which you probably can't for large values of dx), the comparison isn't between dx^2 and dx, it's between a2 dx^2 and b1 dx. There is obviously a crossover between these two error terms, but it's not at dx = 1; it's at dx = b1/a2. If your discretization is that coarse, you're probably not in the asymptotic regime in which you can ignore higher-order terms, and your solution is probably very inaccurate anyway.
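The asymptotic behaviour is easy to observe numerically. Here is a minimal sketch (the test equation, step counts, and function names are my own illustration): forward Euler (first order) versus Heun's method (second order) on y' = -y, y(0) = 1, comparing the global error at t = 1 against the exact value e^-1.

```python
import math

def euler_error(n):
    dx, y = 1.0 / n, 1.0
    for _ in range(n):
        y += dx * (-y)             # first-order update
    return abs(y - math.exp(-1.0))

def heun_error(n):
    dx, y = 1.0 / n, 1.0
    for _ in range(n):
        k1 = -y                    # slope at the start of the step
        k2 = -(y + dx * k1)        # slope at the predicted endpoint
        y += dx * (k1 + k2) / 2.0  # second-order (trapezoidal) update
    return abs(y - math.exp(-1.0))

# In the asymptotic regime (small dx) the second-order method wins,
# and halving dx cuts its error by ~4x rather than ~2x.
print(euler_error(100), heun_error(100))
print(euler_error(200), heun_error(200))
```

Halving dx roughly halves the Euler error but quarters the Heun error, which is exactly the a2 dx^2 versus b1 dx behaviour described above.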
I have a partially constrained (in the parameters) minimisation problem which I am currently solving using Octave's fminunc function, but with constraints being applied within the objective function itself by use of if statements to produce a realmax cost if any constraint is violated.
However, the problem could also be solved by using fmincon with upper and lower bounds of the parameters being explicitly provided as constraints. I could also, probably, use other more 'complicated' functions such as sqp to solve the problem too.
The problem I'm solving is: find values a through f such that
C1 - a = C2 * K
C3 + b = C4 * K
C5 - c = C6 * K
C7 - d = C8 * K
C9 + e = C10 * K
C11 - f = C12 * K
where all the C1, C2... are known, different valued constants 0 < C < 1
where K = 1 - a + b - c - d + e - f
and K > 0
and where a, c, d and f are individually subject to constraints on their values
and where, theoretically, the minimum cost of the objective function should be 0.
What is the best approach to take? Is hacking the use of fminunc somehow going to lead to unpredictable/pathological solutions? Is it better to use "the right tool for the job" and use a function specifically meant for constrained minimisation?
This might sound like it is a philosophical question but my concern is actually to do with accuracy of solutions and, to a lesser degree, ease of programming and computational efficiency.
The documentation available via help fminunc provides part of the answer.
The algorithm used by fminunc is a gradient search which depends on the objective function being differentiable. If the function has discontinuities it may be better to use a derivative-free algorithm such as fminsearch.
Returning realmax will mess up the gradient calculation; in the worst case it will flood the computation with NaNs and Infs.
You can set the "Display" option of the optimizer to "iter" to find out whether your program converges only because it never touches the boundaries, or whether fminunc has some failsafes of its own.
Also note fminsearch: it is a simplex (Nelder-Mead) method that can optimize arbitrary functions without derivatives.
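To see concretely why a constraint-aware optimizer beats the realmax hack, here is a toy 1-D sketch in Python (this is my own illustrative problem, not the poster's): minimise (x - 2)^2 subject to x <= 1. A projected-gradient loop stands in for what bound-constrained solvers like fmincon do internally, while a huge-penalty objective shows how the discontinuity wrecks any finite-difference gradient estimate.

```python
def grad(x):
    # gradient of the smooth objective (x - 2)^2
    return 2.0 * (x - 2.0)

def projected_gradient(x0, step=0.1, iters=200):
    # bound-aware optimisation: clip each step back onto the feasible set x <= 1
    x = x0
    for _ in range(iters):
        x = min(x - step * grad(x), 1.0)
    return x

def penalised(x, big=1e12):
    # the "realmax hack": a discontinuous cliff at the boundary
    return (x - 2.0) ** 2 if x <= 1.0 else big

h = 1e-6
fd_gradient_at_boundary = (penalised(1.0 + h) - penalised(1.0 - h)) / (2 * h)
print(projected_gradient(0.0))    # lands exactly on the bound x = 1
print(fd_gradient_at_boundary)    # enormous, so a gradient search goes haywire
```

The projected iteration converges cleanly to the constrained minimum, whereas the penalised objective hands the optimizer an effectively infinite gradient at the boundary, which is exactly the failure mode described above.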
It is a beginner question, but I have a dataset of two features, house sizes and number of bedrooms, and I'm working in Octave. I'm trying to do feature scaling, but in the design matrix I added a column of ones (for theta0). When I apply mean normalisation, (x - mean(x)) / std(x), the mean of the column of ones is obviously 1, since there is only a 1 in every row, so the intercept column gets set to 0:
mu = mean(X)
mu =
1.0000 2000.6809 3.1702
votric = X - mu
votric =
0.00000 103.31915 -0.17021
0.00000 -400.68085 -0.17021
0.00000 399.31915 -0.17021
0.00000 -584.68085 -1.17021
0.00000 999.31915 0.82979
So shouldn't the first column be left out of the mean normalisation?
Yes, you're supposed to normalise the original dataset over all observations first, and only then add the bias term (i.e. the 'column of ones').
The point of normalisation is to allow the various features to be compared on an equal basis, which speeds up optimization algorithms significantly.
The bias (i.e. column of ones) is technically not part of the features. It is just a mathematical convenience to allow us to use a single matrix multiplication to obtain our result in a computationally and notationally efficient manner.
In other words, instead of saying Y = bias + weight1 * X1 + weight2 * X2 etc, you create an imaginary X0 = 1, and denote the bias as weight0, which then allows you to express it in a vectorised fashion as follows: Y = weights * X
"Normalising" the bias term does not make sense, because clearly that would make X0 = 0, the effect of which would be that you would then discard the effect of the bias term altogether. So yes, normalise first, and only then add 'ones' to the normalised features.
PS. I'm going out on a limb here and guessing that you're coming from Andrew Ng's machine learning course on Coursera. You will see in ex1_multi.m that this is indeed what he's doing in his code (line 52).
% Scale features and set them to zero mean
fprintf('Normalizing Features ...\n');
[X mu sigma] = featureNormalize(X);
% Add intercept term to X
X = [ones(m, 1) X];
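The same order of operations can be sketched in pure Python (the data values here are made up for illustration): normalise the raw features first, and only then prepend the column of ones.

```python
# toy dataset: [house size, number of bedrooms] per row
X = [[2104.0, 3.0],
     [1600.0, 3.0],
     [2400.0, 3.0],
     [1416.0, 2.0],
     [3000.0, 4.0]]

def feature_normalize(X):
    cols = list(zip(*X))
    mu = [sum(c) / len(c) for c in cols]
    # sample standard deviation (n - 1 denominator), matching Octave's default std()
    sigma = [(sum((v - m) ** 2 for v in c) / (len(c) - 1)) ** 0.5
             for c, m in zip(cols, mu)]
    return [[(v - m) / s for v, m, s in zip(row, mu, sigma)] for row in X]

X_norm = feature_normalize(X)
# only now add the intercept column, so it is left untouched
X_design = [[1.0] + row for row in X_norm]
print(X_design)
```

Each feature column of X_norm now has mean 0 and standard deviation 1, while the bias column stays a column of ones.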
Effectively what I'm looking for is a function f(x) that outputs into a range that is pre-defined. Calling f(f(x)) should be valid as well. The function should be cyclical, so calling f(f(...(x))) where the number of calls is equal to the size of the range should give you the original number, and f(x) should not be time dependent and will always give the same output.
While I can see that taking a list of all possible values and shuffling it would give me something close to what I want, I'd much prefer it if I could simply plug values into the function one at a time so that I do not have to compute the entire range all at once.
I've looked into Minimal Perfect Hash Functions but haven't been able to find one that doesn't use external libraries. I'm okay with using them, but would prefer to not do so.
If an actual range is necessary to help answer my question, I don't think it would need to be bigger than [0, 2^24-1], but the starting and ending values don't matter too much.
You might want to take a look at a Linear Congruential Generator. You will want a full-period generator (say, m = 2^24), which means the parameters must satisfy the Hull-Dobell theorem.
Calling f(f(x)) should be valid as well.
should work
the number of calls is equal to the size of the range should give you the original number
Yes: for an LCG with parameters satisfying the Hull-Dobell theorem you'll get the full period covered exactly once, and the (m+1)-th call will put you back where you started.
The period of such an LCG is exactly equal to m.
should not be time dependent and will always give the same output
An LCG is an O(1) algorithm and is 100% reproducible.
An LCG is reversible as well, via the extended Euclidean algorithm; check Reversible pseudo-random sequence generator for details.
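As a concrete sketch (with a deliberately small modulus so the full orbit can be checked quickly; the parameter picks are mine, not canonical), the Hull-Dobell conditions for a full-period LCG x -> (a*x + c) mod m are: c coprime to m, a - 1 divisible by every prime factor of m, and a - 1 divisible by 4 if m is. For m = 2^k that means any odd c and a = 1 (mod 4); the same shape of parameters works for m = 2**24.

```python
m, a, c = 2 ** 8, 4 * 13 + 1, 15   # a = 53 (= 1 mod 4), c = 15 (odd)

def f(x):
    # one step of the LCG; full period by the Hull-Dobell theorem
    return (a * x + c) % m

# walk the whole orbit starting from 0: every value in [0, m) appears
# exactly once, and the m-th application returns the starting point
x, seen = 0, set()
for _ in range(m):
    seen.add(x)
    x = f(x)
print(len(seen) == m, x == 0)   # True True
```

Every value in the range is visited exactly once before the sequence wraps, which is precisely the "cyclical bijection" the question asks for.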
Minimal perfect hash functions are overkill; all you've asked for is a function f that is
bijective, and
"cyclical" (i.e. f^N = identity, where N is the size of the range)
For a permutation to be cyclical in that way, its order must divide N (it could equal N, but that's just a special case of dividing N). Which in turn means the LCM of the orders of its sub-cycles must divide N. One way to achieve that is to have a single cycle of order N. For power-of-two N, it's also easy to have lots of small cycles of some other power-of-two order. General permutations do not necessarily satisfy the cycle requirement: they are of course bijective, but the LCM of the orders of their sub-cycles may exceed N.
In the following I will leave all reduction modulo N implicit. Without loss of generality I will assume the range starts at 0 and goes up to N-1, where N is the size of the range.
The only thing I can immediately think of for general N is f(x) = x + c where gcd(c, N) == 1. The GCD condition ensures there is only one cycle, which necessarily has order N.
For power-of-two N I have more inspiration:
f(x) = cx where c is odd. Bijective because gcd(c, N) == 1, so c has a modular multiplicative inverse. Also c^N = 1, because φ(N) = N/2 (since N is a power of two), so c^φ(N) = 1 (Euler's theorem), and φ(N) divides N.
f(x) = x XOR c where c < N. Trivially bijective and trivially cycles with a period of 2, which divides N.
f(x) = clmul(x, c) where c is odd and clmul is carry-less multiplication. Bijective because any odd c has a carry-less multiplicative inverse. It has some power-of-two cycle length (less than N), so the cycle length divides N. I don't know why though. This is a weird one, but it has decent special cases such as x ^ (x << k). By symmetry, the "mirrored" version x ^ (x >> k) also works.
f(x) = x >>> k where >>> is bit-rotation of the w-bit word (w = log2 N). Obviously bijective, and the cycle length is w / gcd(k, w), which divides N whenever w is itself a power of two (as it is for 8-, 16- and 32-bit words), regardless of what k is.
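These claims are easy to spot-check empirically. A small Python sketch for N = 2^8 (the constants 37, 171 and 0x5A are arbitrary picks of mine): for each construction, find the order of the permutation, i.e. the smallest e > 0 with f^e = identity, and check it divides N.

```python
N = 2 ** 8

def order(f):
    # smallest e > 0 such that applying f e times is the identity on [0, N)
    e, xs = 1, [f(x) for x in range(N)]
    while xs != list(range(N)):
        xs = [f(x) for x in xs]
        e += 1
    return e

add_c = lambda x: (x + 37) % N                 # gcd(37, N) == 1: one cycle of order N
mul_c = lambda x: (171 * x) % N                # odd multiplier: order divides phi(N)
xor_c = lambda x: x ^ 0x5A                     # XOR with a constant: order 2
rot   = lambda x: ((x >> 3) | (x << 5)) % N    # rotate an 8-bit word right by 3

for f in (add_c, mul_c, xor_c, rot):
    assert N % order(f) == 0                   # the order divides N, so f^N = id
print("all cycle lengths divide", N)
```

The addition map gives one big cycle of order exactly N, while the others decompose into many short cycles whose lengths are all powers of two.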
This is a follow-up to Testing for floating-point value equality: Is there a standard name for the “precision” constant?.
There is a very similar question Double.Epsilon for equality, greater than, less than, less than or equal to, greater than or equal to.
It is well known that an equality test for two floating-point values x and y should look more like this (rather than a straightforward =):
abs( x - y ) < epsilon , where epsilon is some very small value.
How to choose a value for epsilon?
It would obviously be preferable to choose for epsilon as small a value as possible, to get the highest-possible precision for the equality check.
As an example, the .NET framework offers a constant System.Double.Epsilon (= 4.94066 × 10^-324), which represents the smallest positive System.Double value that is greater than zero.
However, it turns out that this particular value can't be reliably used as epsilon, since:
0 + System.Double.Epsilon ≠ 0
1 + System.Double.Epsilon = 1 (!)
which is, if I understand correctly, because that constant is less than machine epsilon.
→ Is this correct?
→ Does this also mean that I can reliably use epsilon := machine epsilon for equality tests?
Removed these two questions, as they are already adequately answered by the second SO question linked-to above.
The linked-to Wikipedia article says that for 64-bit floating-point numbers (ie. the double type in many languages), machine epsilon is equal to:
2^-53, or approx. 0.000000000000000111 (a number with 15 zeroes after the decimal point)
→ Does it follow from this that all 64-bit floating point values are guaranteed to be accurate to 14 (if not 15) digits?
How to choose a value for epsilon?
Short answer: you take a small value which fits your application's needs.
Long answer: nobody can know which calculations your application does or how accurate you expect your results to be. Since rounding errors accumulate, machine epsilon will almost always be far too small a tolerance, so you have to choose your own value. Depending on your needs, 0.01 may be sufficient, or maybe 0.00000000000001 or less will be.
The question is, do you really want/need to do equality tests on floating point values? Maybe you should redesign your algorithms.
In the past when I have had to use an epsilon value it's been very much bigger than the machine epsilon value.
Although it was for 32-bit floats (rather than 64-bit doubles), we found that an epsilon value of 10^-6 was needed for most (if not all) calculated values in our particular application.
The value of epsilon you choose depends on the scale of your numbers. If you are dealing with the very large (say 10^10), then you might need a larger value of epsilon, as your significant digits don't stretch very far into the fractional part (if at all). If you are dealing with the very small (say 10^-10), then obviously you need an epsilon value that's smaller than this.
You need to do some experimentation, performing your calculations and checking the differences between your output values. Only when you know the range of your potential answers will you be able to decide on a suitable value for your application.
The sad truth is: There is no appropriate epsilon for floating-point comparisons. Use another approach for floating-point equality tests if you don't want to run into serious bugs.
Approximate floating-point comparison is an amazingly tricky field, and the abs(x - y) < eps approach works only for a very limited range of values, mainly because of the absolute difference not taking into account the magnitude of the compared values, but also due to the significant digit cancellation occurring in the subtraction of two floating-point values with different exponents.
There are better approaches, using relative differences or ULPs, but they have their own shortcomings and pitfalls. Read Bruce Dawson's excellent article Comparing Floating Point Numbers, 2012 Edition for a great introduction into how tricky floating-point comparisons really are -- a must-read for anyone doing floating-point programming IMHO! I'm sure countless thousands of man-years have been spent finding out the subtle bugs due to naive floating-point comparisons.
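A minimal sketch of the relative-difference idea in Python (the helper name nearly_equal is my own): compare |x - y| against a tolerance scaled by the magnitude of the operands rather than a fixed epsilon. Python's standard library already ships such a comparison as math.isclose, which uses rel_tol * max(|x|, |y|) plus an absolute floor abs_tol for values near zero.

```python
import math

def nearly_equal(x, y, rel_tol=1e-9, abs_tol=0.0):
    # relative comparison: the tolerance scales with the magnitudes of x and y
    return abs(x - y) <= max(rel_tol * max(abs(x), abs(y)), abs_tol)

# 0.1 + 0.2 != 0.3 exactly in binary floating point, but relatively close:
print(0.1 + 0.2 == 0.3)               # False
print(nearly_equal(0.1 + 0.2, 0.3))   # True
print(math.isclose(0.1 + 0.2, 0.3))   # True (stdlib equivalent)
```

Even this has pitfalls near zero (hence the abs_tol escape hatch), which is one of the shortcomings Dawson's article goes into.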
I also have questions regarding what would be the correct procedure. However I believe one should do:
abs(x - y) <= 0.5 * eps * max(abs(x), abs(y))
instead of:
abs(x - y) < eps
The reason for this arises from the definition of the machine epsilon. Using python code:
import numpy as np
real = np.float64
eps = np.finfo(real).eps
## Let's find the machine epsilon empirically
x, dx = real(1), real(1)
while x + dx != x:
    dx /= real(2)
print("eps = %e dx = %e eps*x/2 = %e" % (eps, dx, eps * x / real(2)))
Which gives: eps = 2.220446e-16 dx = 1.110223e-16 eps*x/2 = 1.110223e-16
## Now for x = 16
x, dx = real(16), real(1)
while x + dx != x:
    dx /= real(2)
print("eps = %e dx = %e eps*x/2 = %e" % (eps, dx, eps * x / real(2)))
Which now gives: eps = 2.220446e-16 dx = 1.776357e-15 eps*x/2 = 1.776357e-15
## For x not equal to 2**n
x, dx = real(36), real(1)
while x + dx != x:
    dx /= real(2)
print("eps = %e dx = %e eps*x/2 = %e" % (eps, dx, eps * x / real(2)))
Which returns: eps = 2.220446e-16 dx = 3.552714e-15 eps*x/2 = 3.996803e-15
However, despite the difference between dx and eps*x/2, we see that dx <= eps*x/2,
thus it serves the purpose for equality tests, checking for tolerances when testing for convergence in numerical procedures, etc.
Such is similar to what is in:
www.ibiblio.org/pub/languages/fortran/ch1-8.html#02,
however if someone knows of better procedures or if something here is incorrect, please do say.
The number of combinations of k items which can be retrieved from N items is described by the following formula.
c = N! / (k! * (N - k)!)
An example would be how many combinations of 6 Balls can be drawn from a drum of 48 Balls in a lottery draw.
Optimize this formula to run with the smallest O time complexity
This question was inspired by the new WolframAlpha math engine, the fact that it can calculate extremely large combinations very quickly (for example, the link below), and a subsequent discussion on the topic on another forum.
http://www97.wolframalpha.com/input/?i=20000000+Choose+15000000
I'll post some info/links from that discussion after some people take a stab at the solution.
Any language is acceptable.
Python: O(min(k, n-k)^2)
def choose(n, k):
    k = min(k, n - k)
    p = q = 1
    for i in range(k):
        p *= n - i
        q *= 1 + i
    return p // q  # exact: k! always divides the product of k consecutive integers
Analysis:
The sizes of p and q increase linearly inside the loop, if n-i and 1+i can be considered to have constant size.
The cost of each multiplication then also increases linearly.
The sum over all iterations becomes an arithmetic series over k.
My conclusion: O(k^2)
If rewritten to use floating-point numbers, the multiplications will be atomic operations, but we will lose a lot of precision. It even overflows for choose(20000000, 15000000). (Not a big surprise, since the result would be around 0.2119620413 × 10^4884378.)
def choose(n, k):
    k = min(k, n - k)
    result = 1.0
    for i in range(k):
        result *= (n - i) / (1 + i)
    return result
Notice that WolframAlpha returns a "Decimal Approximation". If you don't need absolute precision, you could do the same thing by calculating the factorials with Stirling's Approximation.
Now, Stirling's approximation requires the evaluation of (n/e)^n, where e is the base of the natural logarithm, which will be by far the slowest operation. But this can be done using the techniques outlined in another stackoverflow post.
If you use double precision and repeated squaring to accomplish the exponentiation, the operations will be:
3 evaluations of a Stirling approximation, each requiring O(log n) multiplications and one square root evaluation.
2 multiplications
1 division
The number of operations could probably be reduced with a bit of cleverness, but the total time complexity is going to be O(log n) with this approach. Pretty manageable.
EDIT: There's also bound to be a lot of academic literature on this topic, given how common this calculation is. A good university library could help you track it down.
EDIT2: Also, as pointed out in another response, the values will easily overflow a double, so a floating point type with very extended precision will need to be used for even moderately large values of k and n.
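One way to sidestep the overflow entirely (a sketch of my own, not from the discussion above) is to work with the logarithm of the result: math.lgamma in the Python standard library returns ln Γ(x), and C(n, k) = Γ(n+1) / (Γ(k+1) Γ(n-k+1)), so log10 of the binomial coefficient is a sum of three lgamma calls. Splitting that into a mantissa and a decimal exponent recovers the WolframAlpha-style decimal approximation in O(1) floating-point operations.

```python
import math

def log10_choose(n, k):
    # log10 of C(n, k) via the log-gamma function; never overflows
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(10)

def choose_approx(n, k):
    # split into mantissa in [1, 10) and integer decimal exponent
    lg = log10_choose(n, k)
    exponent = math.floor(lg)
    return 10 ** (lg - exponent), exponent

mantissa, exponent = choose_approx(20000000, 15000000)
print(f"{mantissa:.6f}e{exponent}")   # roughly 2.087655e4884377, consistent with WolframAlpha
```

The accuracy is limited by double precision (roughly 15 significant digits in the log), but the magnitude can be astronomically large without any extended-precision arithmetic.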
I'd solve it in Mathematica:
Binomial[n, k]
Man, that was easy...
Python: approximation in O(1) ?
Using Python's decimal implementation to calculate an approximation. Since it does not use any explicit loop, and the numbers are limited in size, I think it will execute in O(1).
from decimal import Decimal
ln = lambda z: z.ln()
exp = lambda z: z.exp()
sinh = lambda z: (exp(z) - exp(-z))/2
sqrt = lambda z: z.sqrt()
pi = Decimal('3.1415926535897932384626433832795')
e = Decimal('2.7182818284590452353602874713527')
# Stirling's approximation of the gamma function.
# Simplification by Robert H. Windschitl.
# Source: http://en.wikipedia.org/wiki/Stirling%27s_approximation
gamma = lambda z: sqrt(2*pi/z) * (z/e*sqrt(z*sinh(1/z)+1/(810*z**6)))**z
def choose(n, k):
    n = Decimal(str(n))
    k = Decimal(str(k))
    return gamma(n+1) / gamma(k+1) / gamma(n-k+1)
Example:
>>> choose(20000000,15000000)
Decimal('2.087655025913799812289651991E+4884377')
>>> choose(130202807,65101404)
Decimal('1.867575060806365854276707374E+39194946')
Any higher, and it will overflow. The exponent seems to be limited to 40000000.
Given a reasonable bound on the values of n and k, calculate the results in advance and use a lookup table.
It's dodging the issue in some fashion (you're offloading the calculation), but it's a useful technique if you're having to determine large numbers of values.
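A sketch of that precomputation in Python (my own illustration): build Pascal's triangle once up to some maximum N using only additions, after which every C(n, k) lookup is O(1). Memory is the price, so this only pays off for modest N and many queries.

```python
def build_pascal(max_n):
    # table[n][k] == C(n, k); each row comes from pairwise sums of the previous row
    table = [[1]]
    for n in range(1, max_n + 1):
        prev = table[-1]
        row = [1] + [prev[i - 1] + prev[i] for i in range(1, n)] + [1]
        table.append(row)
    return table

table = build_pascal(48)
print(table[48][6])   # 12271512 combinations for a 6-from-48 lottery draw
```

Since Python integers are arbitrary precision, the table entries never overflow, unlike the fixed-width versions elsewhere in this thread.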
MATLAB:
The cheater's way (using the built-in function NCHOOSEK): 13 characters, O(?)
nchoosek(N,k)
My solution: 36 characters, O(min(k,N-k))
a=min(k,N-k);
prod(N-a+1:N)/prod(1:a)
I know this is a really old question, but I struggled with this problem for a long while until I found a really simple solution written in VB 6; after porting it to C#, here is the result:
public int NChooseK(int n, int k)
{
    var result = 1;
    for (var i = 1; i <= k; i++)
    {
        // After each iteration, result == C(n - k + i, i), so the division is exact.
        // Use long (or BigInteger) instead of int if larger inputs are needed.
        result *= n - (k - i);
        result /= i;
    }
    return result;
}
The final code is so simple you won't believe it will work until you run it.
Also, the original article gives some nice explanation on how he reached the final algorithm.