Approximate function following interpolation

How do you approximate a function for interpolated points? I use the natural cubic spline to interpolate points as follows for n = 500 points:
t=[0; 3; 6; 9]
z=[0; 6; 6; 0]
plot(t,z,'ro')
ti=linspace(0,9,500)
zn = natcubicspline(t,z,ti)
yn = line(ti,zn)
Is there any way to approximate a function for these n-many points (for large n)? Or is there a way to treat the interpolated points as if they were a function, i.e. to find the gradient of the zn vector? Because zn is a vector of constants, this wouldn't necessarily be helpful.
Update: In particular, my data appears to form a 2nd degree polynomial, so I went ahead and used the following Matlab function to fit my data:
p = polyfit(transpose(ti),zn,2)
which yields coefficient estimates for a 2nd-degree polynomial. It does fit the data, but with high error values, and I have to multiply this coefficient vector by a vector [1 z z^2] to get the correct polynomial. Is there any way to streamline this?
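A minimal sketch of both ideas, assuming the ti, zn, and p from the snippets above are in the workspace: gradient gives a numerical derivative of the interpolated values, and polyval evaluates the fitted coefficient vector directly, so the [1 z z^2] multiplication isn't needed by hand.
dz = gradient(zn(:), ti(:));   % numerical derivative of the interpolated curve
zfit = polyval(p, ti(:));      % evaluate the fitted 2nd-degree polynomial at ti
fitErr = norm(zn(:) - zfit);   % rough measure of the fit error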

How to plot a 2d Function in MATLAB

I am trying to plot a simple equation in MATLAB.
The equation is
z = x^2 - y^2, for -3 <= x <= 3, -3 <= y <= 3.
The current code that I have is
x = -3:3;
y = -3:3;
z = (x.^2) - (y.^2);
plot(z)
The result is a flat line at zero (plot omitted).
Please help me in this case because I am not sure if the code and graph is correct. Thank you very much.
This is not a piecewise function. A piecewise function is a function defined by multiple sub-functions, where each sub-function applies to a different interval of the domain. There is only one function here, and it takes two arrays of the same length. The calculation yields a vector of zeros because of the particular input arrays. If you change either one of the vectors, that is "x" or "y", you will see a nonzero plot. Your code works as expected.
There is a lot going wrong here. Let's start at the beginning:
x = -3:3;
y = -3:3;
If we evaluate these, they both return a vector of integers:
x =
-3 -2 -1 0 1 2 3
This means that the grid on which the function is evaluated is going to be very coarse. To alleviate this you can define a step size, e.g. x = -3:0.1:3, or use linspace, in which case you set the number of samples, e.g. x = linspace(-3, 3, 500). Now consider the next line:
z = (x.^2) - (y.^2);
If we evaluate this we get
z =
0 0 0 0 0 0 0
and you plot this vector with the 2d-plotting function
plot(z)
which perfectly explains why you get a straight line. This is because arithmetic operators like minus (-) just subtract values entry-wise. You, however, want to evaluate z for each possible pair of values of x and y. To do this, and to get a nice plot later, you should use meshgrid to build the grid and a plotting function like mesh to plot it. So I'd recommend using
[X,Y] = meshgrid(x,y);
to create the grid and then evaluate the function on the grid as follows
Z = X.^2 - Y.^2;
and finally plot your function with
mesh(X,Y,Z);
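Putting those pieces together, a minimal end-to-end version might look like this (500 samples per axis, as suggested above; surf would also work in place of mesh):
x = linspace(-3, 3, 500);    % fine sampling of the x range
y = linspace(-3, 3, 500);    % fine sampling of the y range
[X, Y] = meshgrid(x, y);     % all pairs (x, y) as 2D grids
Z = X.^2 - Y.^2;             % evaluate the function on the grid
mesh(X, Y, Z);               % surface plot of z = x^2 - y^2
xlabel('x'); ylabel('y'); zlabel('z');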

How to generate random numbers from log-normal distribution with a given mean and SD in SAS?

Wenping (Wendy) Zhang points out
that the SAS RAND function "basically gives 'standard' distributions".
The author describes an interesting SAS %rndnmb macro to generate data from “non-standard” distributions. Unfortunately, the code is unavailable. So I dared to do it by myself.
If I understand correctly, Wikipedia says that y is drawn from the log-normal distribution if
y = exp(mu + sigma * Z).
The following formulas connect the mean and variance of the non-logarithmized sample values:
mu = ln((mean^2)/sqrt(variance + mean^2))
and
sigma = sqrt(ln(1 + (variance)/(mean^2))).
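(For reference, these come from the standard log-normal moment formulas, which give the mean and variance of y in terms of the mu and sigma of the underlying normal:
mean = exp(mu + (sigma^2)/2)
variance = (exp(sigma^2) - 1) * exp(2*mu + sigma^2)
Solving this pair of equations for mu and sigma yields exactly the two expressions above.)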
If that is correct, my y will be drawn from the log-normal distribution when
Z is drawn from the standard normal distribution with mu' = 0, sigma' = 1.
Finally, is it correct that y is from the log-normal distribution with the given mean and variance if
y = exp(ln((mean^2)/sqrt(variance + mean^2)) + sqrt(ln(1 + (variance)/(mean^2))) * Z)
?
My SAS code is:
/* I use StdDev**2 instead of variance here. */
DATA nonStLogNorm;
nonStLN = exp(log((mean**2)/sqrt(StdDev**2 + mean**2)) +
sqrt(log(1 + (StdDev**2)/(mean**2))) * rand('UNIFORM'));
RUN;
References:
RAND function by Rick Wicklin:
http://blogs.sas.com/content/iml/2013/07/10/stop-using-ranuni/
http://blogs.sas.com/content/iml/2011/08/24/how-to-generate-random-numbers-in-sas/
What you need is the inverse cumulative distribution function. This is the function that is the inverse of the normalized integral of the distribution over the entire domain. So at 0% is your most negative possible value and at 100% your most positive. Practically, though, you would clamp to something like 0.01% and 99.99%, as otherwise you'll end up at infinity for a lot of distributions.
Then from there you only need to draw a random number in the range (0,1) and plug it into the function. Remember to clamp it!
double CDF = 0.5 + 0.5*erf((ln(x) - center)/(sqrt(2)*sigma))
so
double x = exp(inverf((CDF - 0.5)*2.0)*sqrt(2)*sigma + center);
should give you the requested distribution. inverf is the inverse of the erf function. It is a common function but not in math.h typically.
I wrote a SIMD-based random number generator that needed to do distributions. It worked fine there, so the above should work, assuming I didn't flub up something while typing.
As requested how to clamp:
//This is how I do it with my Random class where the first argument
//is the min value and the second is the max
double CDF = Random::Range(0.0001,0.9999); //Depends on what you are using to random
//How you get there from Random Ints
unsigned int RandomNumber = rand();
//Convert number to range [0,1]
double CDF = (double)RandomNumber/(double)RAND_MAX;
//now clamp it to a min, max of your choosing
CDF = CDF*(max - min) + min;
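To sanity-check the formulas, here is the same inverse-CDF pipeline sketched in MATLAB (chosen only because the snippets above are pseudocode; erfinv is MATLAB's inverse error function, and center/sigma are example values for the parameters of the underlying normal):
center = 0.5;                            % example mu of the underlying normal
sigma  = 0.25;                           % example sigma of the underlying normal
u = rand(1, 100000);                     % uniform draws in (0,1)
u = min(max(u, 0.0001), 0.9999);         % clamp away from 0 and 1
x = exp(erfinv(2*u - 1)*sqrt(2)*sigma + center);   % invert the log-normal CDF
[mean(log(x)) std(log(x))]               % should be close to [center sigma]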
If you want Z to be drawn from the standard normal distribution, shouldn't you obtain it by calling RAND('NORMAL') rather than RAND('UNIFORM')?

Translation from Complex-FFT to Finite-Field-FFT

Good afternoon!
I am trying to develop an NTT algorithm based on the naive recursive FFT implementation I already have.
Consider the following code (coefficients' length, let it be m, is an exact power of two):
/// <summary>
/// Calculates the result of the recursive Number Theoretic Transform.
/// </summary>
/// <param name="coefficients"></param>
/// <returns></returns>
private static BigInteger[] Recursive_NTT_Skeleton(
IList<BigInteger> coefficients,
IList<BigInteger> rootsOfUnity,
int step,
int offset)
{
// Calculate the length of vectors at the current step of recursion.
// -
int n = coefficients.Count / step - offset / step;
if (n == 1)
{
return new BigInteger[] { coefficients[offset] };
}
BigInteger[] results = new BigInteger[n];
IList<BigInteger> resultEvens =
Recursive_NTT_Skeleton(coefficients, rootsOfUnity, step * 2, offset);
IList<BigInteger> resultOdds =
Recursive_NTT_Skeleton(coefficients, rootsOfUnity, step * 2, offset + step);
for (int k = 0; k < n / 2; k++)
{
BigInteger bfly = (rootsOfUnity[k * step] * resultOdds[k]) % NTT_MODULUS;
results[k] = (resultEvens[k] + bfly) % NTT_MODULUS;
results[k + n / 2] = (resultEvens[k] - bfly) % NTT_MODULUS;
}
return results;
}
It worked for complex FFT (replace BigInteger with a complex numeric type (I had my own)). It doesn't work here even though I changed the procedure of finding the primitive roots of unity appropriately.
Supposedly, the problem is this: the rootsOfUnity parameter passed in originally contained only the first half of the m-th complex roots of unity, in this order:
omega^0 = 1, omega^1, omega^2, ..., omega^(n/2)
It was enough, because on these three lines of code:
BigInteger bfly = (rootsOfUnity[k * step] * resultOdds[k]) % NTT_MODULUS;
results[k] = (resultEvens[k] + bfly) % NTT_MODULUS;
results[k + n / 2] = (resultEvens[k] - bfly) % NTT_MODULUS;
I originally made use of the fact that at any level of recursion (for any n and i), the complex roots of unity satisfy -omega^(i) = omega^(i + n/2).
However, that property obviously doesn't hold in finite fields. But is there any analogue of it which would allow me to still compute only the first half of the roots?
Or should I extend the cycle from n/2 to n and pre-compute all the m-th roots of unity?
Maybe there are other problems with this code?..
Thank you very much in advance!
I recently wanted to implement NTT for fast multiplication instead of DFFT too. I read a lot of confusing things, different letters everywhere and no simple solution, and my finite-fields knowledge is rusty, but today I finally got it right (after 2 days of trying and drawing analogies with the DFT coefficients), so here are my insights for NTT:
Computation
X(i) = sum(j=0..n-1) of ( Wn^(i*j)*x(j) );
where X[] is the NTT-transformed x[] of size n and Wn is the NTT basis. All computations are integer modular arithmetic mod p; no complex numbers anywhere.
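For completeness, the inverse transform has the same shape; using the INTT basis iWn and the scaling constant Rn defined under "Important values" below, it is
x(i) = Rn * ( sum(j=0..n-1) of ( iWn^(i*j)*X(j) ) ) mod p
which is exactly what the INTT code further down computes.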
Important values
Wn = r ^ L mod p is the basis for NTT
Wn = r ^ (p-1-L) mod p is the basis for INTT
Rn = n ^ (p-2) mod p is the scaling multiplicative constant for INTT, ~(1/n) (see the note after this list)
p is a prime such that p mod n == 1 and p > max'
max is the max value of x[i] for NTT or X[i] for INTT
r = <1,p)
L = <1,p) and also divides p-1
r,L must be chosen so that r^(L*i) mod p == 1 if i=0 or i=n
r,L must be chosen so that r^(L*i) mod p != 1 if 0 < i < n
max' is the sub-result max value and depends on n and the type of computation. For a single (I)NTT it is max' = n*max, but for a convolution of two n-sized vectors it is max' = n*max*max, etc. See Implementing FFT over finite fields for more info about it.
the working combination of r,L,p is different for different n
this is important: you have to recompute or select the parameters from a table before each NTT layer (n is always half that of the previous recursion level).
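A quick note on the Rn line above: since p is prime, Fermat's little theorem gives n^(p-1) mod p == 1, so
Rn = n^(p-2) mod p
is just the modular inverse of n, i.e. exactly the 1/n scaling the inverse transform needs.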
Here is my C++ code that finds the r,L,p parameters (it needs modular arithmetics, which is not included; you can replace it with (a+b)%c,(a-b)%c,(a*b)%c,... but in that case beware of overflows, especially for modpow and modmul). The code is not optimized yet; there are ways to speed it up considerably. Also, the prime table is fairly limited, so either use SoE or any other algorithm to obtain primes up to max' in order to work safely.
DWORD _arithmetics_primes[]=
{
2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,
179,181,191,193,197,199,211,223,227,229,233,239,241,251,257,263,269,271,277,281,283,293,307,311,313,317,331,337,347,349,353,359,367,373,379,383,389,397,401,409,
419,421,431,433,439,443,449,457,461,463,467,479,487,491,499,503,509,521,523,541,547,557,563,569,571,577,587,593,599,601,607,613,617,619,631,641,643,647,653,659,
661,673,677,683,691,701,709,719,727,733,739,743,751,757,761,769,773,787,797,809,811,821,823,827,829,839,853,857,859,863,877,881,883,887,907,911,919,929,937,941,
947,953,967,971,977,983,991,997,1009,1013,1019,1021,1031,1033,1039,1049,1051,1061,1063,1069,1087,1091,1093,1097,1103,1109,1117,1123,1129,1151,
0}; // the table ends with 0; the more primes there are, the bigger numbers and n can be used
// compute NTT consts W=r^L%p for n
int i,j,k,n=16;
long w,W,iW,p,r,L,l,e;
long max=81*n; // edit1: max num for NTT for my multiplication purposes
for (e=1,j=0;e;j++) // find prime p that p%n=1 AND p>max ... 9*9=81
{
p=_arithmetics_primes[j];
if (!p) break;
if ((p>max)&&(p%n==1))
for (r=2;r<p;r++) // check all r
{
for (l=1;l<p;l++)// all l that divide p-1
{
L=(p-1);
if (L%l!=0) continue;
L/=l;
W=modpow(r,L,p);
e=0;
for (w=1,i=0;i<=n;i++,w=modmul(w,W,p))
{
if ((i==0) &&(w!=1)) { e=1; break; }
if ((i==n) &&(w!=1)) { e=1; break; }
if ((i>0)&&(i<n)&&(w==1)) { e=1; break; }
}
if (!e) break;
}
if (!e) break;
}
}
if (e) { error; } // error no combination r,l,p for n found
W=modpow(r, L,p); // Wn for NTT
iW=modpow(r,p-1-L,p); // Wn for INTT
and here are my slow NTT and INTT implementations (I haven't got to the fast NTT/INTT yet); they are both tested successfully with Schönhage–Strassen multiplication.
//---------------------------------------------------------------------------
void NTT(long *dst,long *src,long n,long m,long w)
{
long i,j,wj,wi,a,n2=n>>1;
for (wj=1,j=0;j<n;j++)
{
a=0;
for (wi=1,i=0;i<n;i++)
{
a=modadd(a,modmul(wi,src[i],m),m);
wi=modmul(wi,wj,m);
}
dst[j]=a;
wj=modmul(wj,w,m);
}
}
//---------------------------------------------------------------------------
void INTT(long *dst,long *src,long n,long m,long w)
{
long i,j,wi=1,wj=1,rN,a,n2=n>>1;
rN=modpow(n,m-2,m);
for (wj=1,j=0;j<n;j++)
{
a=0;
for (wi=1,i=0;i<n;i++)
{
a=modadd(a,modmul(wi,src[i],m),m);
wi=modmul(wi,wj,m);
}
dst[j]=modmul(a,rN,m);
wj=modmul(wj,w,m);
}
}
//---------------------------------------------------------------------------
dst is destination array
src is source array
n is array size
m is modulus (p)
w is basis (Wn)
hope this helps someone. If I forgot something, please write ...
[edit1: fast NTT/INTT]
Finally I managed to get the fast NTT/INTT to work. It was a little bit more tricky than the normal FFT:
//---------------------------------------------------------------------------
void _NFTT(long *dst,long *src,long n,long m,long w)
{
if (n<=1) { if (n==1) dst[0]=src[0]; return; }
long i,j,a0,a1,n2=n>>1,w2=modmul(w,w,m);
// reorder even,odd
for (i=0,j=0;i<n2;i++,j+=2) dst[i]=src[j];
for ( j=1;i<n ;i++,j+=2) dst[i]=src[j];
// recursion
_NFTT(src ,dst ,n2,m,w2); // even
_NFTT(src+n2,dst+n2,n2,m,w2); // odd
// restore results
for (w2=1,i=0,j=n2;i<n2;i++,j++,w2=modmul(w2,w,m))
{
a0=src[i];
a1=modmul(src[j],w2,m);
dst[i]=modadd(a0,a1,m);
dst[j]=modsub(a0,a1,m);
}
}
//---------------------------------------------------------------------------
void _INFTT(long *dst,long *src,long n,long m,long w)
{
long i,rN;
rN=modpow(n,m-2,m);
_NFTT(dst,src,n,m,w);
for (i=0;i<n;i++) dst[i]=modmul(dst[i],rN,m);
}
//---------------------------------------------------------------------------
[edit3]
I have optimized my code (3x faster than the code above), but I still was not satisfied with it, so I started a new question about it. There I have optimized my code even further (about 40x faster than the code above), so it is now almost the same speed as an FFT on floating point of the same bit size. The link to it is here:
Modular arithmetics and NTT (finite field DFT) optimizations
To turn the Cooley-Tukey (complex) FFT into a modular arithmetic approach, i.e. NTT, you must replace the complex definition of omega. For the approach to be purely recursive, you also need to recalculate omega at each level based on the current signal size. This is possible because the minimum suitable modulus decreases as we move down the call tree, so the modulus used at the root is suitable for the lower layers. Additionally, since we are using the same modulus, the same generator may be used as we move down the call tree. Also, for the inverse transform, you should take an additional step: take the recalculated omega a and instead use b = a ^ -1 as omega (via the inverse modulo operation). Specifically, b = invMod(a, N) s.t. b * a == 1 (mod N), where N is the chosen prime modulus.
Rewriting an expression involving omega by exploiting periodicity still works in the modular arithmetic realm. You also need to find a way to determine the modulus (a prime) for the problem and a valid generator.
We note that your code works, though it is not a MWE. We extended it using common sense and got a correct result for a polynomial multiplication application. You just have to provide correct values of omega raised to certain powers.
While your code works, though, like many other sources you double the spacing for each level. This does not lead to recursion that is as clean; it does, however, turn out to be identical to recalculating omega based on the current signal size, because the power in the omega definition is inversely proportional to the signal size. To reiterate: halving the signal size is like squaring omega, which is like doubling the powers of omega (which is what one would do when doubling the spacing). The nice thing about the approach that recalculates omega is that each subproblem is more cleanly complete in its own right.
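For concreteness, the identities being relied on here, for omega a primitive n-th root of unity modulo a prime p (n even), are:
omega_(n/2) = omega_n^2 (mod p), i.e. halving the signal size squares omega, and
omega_n^(n/2) = -1 (mod p), so -omega^(i) = omega^(i + n/2) still holds, just as in the complex case.
The second identity holds because omega^(n/2) squares to 1, it cannot itself be 1 (omega has order n), and in the field of integers mod p the only square roots of 1 are 1 and -1.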
There is a paper that shows some of the math for the modular approach: Baktir and Sunar, 2006; see the reference at the end of this post.
You do not need to extend the cycle from n / 2 to n.
So, yes, some sources which say to just drop in a different omega definition for the modular arithmetic approach are sweeping many details under the rug.
Another issue is that the signal size must be large enough to avoid overflow in the resulting time-domain signal when performing convolution. Additionally, it is useful to have fast implementations of exponentiation modulo the prime, even when the power is quite large.
References
Baktir and Sunar - Achieving efficient polynomial multiplication in Fermat fields using the fast Fourier transform (2006)
You must make sure that roots of unity actually exist. In R there are only 2 roots of unity: 1 and -1, since only for these can x^n = 1 hold.
In C you have infinitely many roots of unity: w = exp(2*pi*i/N) is a primitive N-th root of unity, and all w^k for 0 <= k < N are N-th roots of unity as well.
Now to your problem: you have to make sure the ring you're working in offers the same property: enough roots of unity.
Schönhage and Strassen (http://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strassen_algorithm) use integers modulo 2^N+1. This ring has enough roots of unity: 2^N == -1 is a 2nd root of unity, 2^(N/2) is a 4th root of unity, and so on. Furthermore, these roots of unity have the advantage that they are powers of two, so multiplying by them can be implemented as binary shifts (with a modulo operation afterwards, which comes down to an add/subtract).
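A tiny worked instance of that structure: take N = 4, i.e. work modulo 2^4 + 1 = 17. Then 2^4 = 16 == -1 (mod 17) and 2^8 = 256 == 1 (mod 17), so 2 is an 8-th root of unity, 2^2 = 4 is a 4-th root, and 2^4 = 16 is a square root of unity; multiplying by any of these powers of two is a binary shift followed by the reduction mod 17.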
I think QuickMul (http://www.cs.nyu.edu/exact/doc/qmul.ps) works modulo 2^N-1.

Matlab - how to make this work for both scalars and vectors

Suppose function g takes a function f as a parameter, and inside g we have something like
x = t*feval(f, u);
however, f can be either scalar-valued or vector-valued. If it is vector valued, we want x to be a vector as well, i.e. the feval statement to return the whole vector returned by f. How do we make this work for both scalar and vector cases?
As far as I can tell, what you are asking for is already the default behavior in MATLAB.
This means that if f returns a scalar, x will be a scalar and if it returns a vector x will be a vector.
In your example, this holds as long as t is also a scalar - otherwise the result will depend on how t*[output of f] is evaluated.
Example
function o1 = f(N)
o1 = zeros(1,N);
end
Here f returns a scalar if N=1 and a vector for N>1.
Calling your code gives
x=feval('f', 1); % Returns x = 0
x=feval('f', 4); % Returns x = [0 0 0 0]
If the output of feval(f,u) can be either a scalar or a vector, and you want the result x to be the same (i.e. a scalar or a vector of the same length and dimension), then it will depend on what t is:
If t is a scalar, then what you have is fine. You can use either of the operators * or .* to perform the multiplication.
If t is a vector of the same length and dimension as the result from feval(f,u), then use the .* operator to perform element-wise multiplication.
If t is a vector of the same length but with a different dimension than the result from feval(f,u) (i.e. one is a row vector and one is a column vector), then you have to make the dimensions match by transposing one or the other with the .' operator.
If t is a different length than the result of feval(f,u), then you can't do element-wise multiplication. A short sketch of the first three cases follows below.
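Here, the f and the t values are made up purely for illustration:
f = @(u) u.^2;                  % hypothetical f: scalar in -> scalar out, vector in -> vector out
u = [1 2 3];                    % example input
t = 2;            x = t*feval(f, u);      % scalar t: * (or .*) works, x = [2 8 18]
t = [10 20 30];   x = t.*feval(f, u);     % same length and dimension: element-wise .*
t = [10; 20; 30]; x = t.'.*feval(f, u);   % same length, different dimension: transpose one of them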

summing function handles in matlab

Hi
I am trying to sum two function handles, but it doesn't work.
for example:
y1=@(x)(x*x);
y2=@(x)(x*x+3*x);
y3=y1+y2
The error I receive is "??? Undefined function or method 'plus' for input arguments of type 'function_handle'."
This is just a small example, in reality I actually need to iteratively sum about 500 functions that are dependent on each other.
EDIT
The solution by Clement J. indeed works, but I couldn't manage to generalize this into a loop and ran into a problem. I have the function s=@(x,y,z)((1-exp(-x*y)-z)*exp(-x*y)); and I have a vector v that contains 536 data points and another vector w that also contains 536 data points. My goal is to sum up s(v(i),y,w(i)) for i=1...536, thus getting one function of the variable y which is the sum of 536 functions. The syntax I tried in order to do this is:
sum=@(y)(s(v(1),y,w(1)));
for i=2:536
sum=@(y)(sum+s(v(i),y,w(i)))
end
The solution proposed by Fyodor Soikin works.
>> y3=@(x)(y1(x) + y2(x))
y3 =
@(x) (y1 (x) + y2 (x))
If you want to do it on multiple functions you can use intermediate variables:
>> f1 = y1;
>> f2 = y2;
>> y3=@(x)(f1(x) + f2(x))
EDIT after the comment:
I'm not sure I understand the problem. Can you define your vectors v and w like that, outside the function:
v = [5 4]; % your 536 data points
w = [4 5];
y = 8;
s=@(y)((1-exp(-v*y)-w).*exp(-v*y))
s_sum = sum(s(y))
Note the dot in the multiplication to do it element-wise.
I think the most succinct solution is given in the comment by Mikhail. I'll flesh it out in more detail...
First, you will want to modify your anonymous function s so that it can operate on vector inputs of the same size as well as scalar inputs (as suggested by Clement J.) by using element-wise arithmetic operators as follows:
s = @(x,y,z) (1-exp(-x.*y)-z).*exp(-x.*y); %# Note the periods
Then, assuming that you have vectors v and w defined in the given workspace, you can create a new function sy that, for a given scalar value of y, will sum across s evaluated at each set of values in v and w:
sy = @(y) sum(s(v,y,w));
If you want to evaluate this function using an array of values for y, you can add a call to the function ARRAYFUN like so:
sy = @(y) arrayfun(@(yi) sum(s(v,yi,w)),y);
Note that the values for v and w that will be used in the function sy will be fixed to what they were when the function was created. In other words, changing v and w in the workspace will not change the values used by sy. Note also that I didn't name the new anonymous function sum, since there is already a built-in function with that name.
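A small usage sketch of sy, assuming v and w already hold the 536-point vectors from the question:
yRange = linspace(0, 1, 100);   % example values of y to evaluate the summed function at
plot(yRange, sy(yRange));       % works on a vector of y values thanks to the ARRAYFUN wrapper
xlabel('y'); ylabel('sum over i of s(v(i),y,w(i))');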