Retrieving coefficients of polynomial from DFT using inverse DFT - fft

I am trying to multiply two polynomials using DFT and I don't know how to get the last bit from the DFT of their multiplication.
So there's p(x) = x - 4, with DFT (its values at the 4th roots of unity) -3, i-4, -5, -i-4.
And q(x) = x^2 - 1, with DFT 0, -2, 0, -2.
degree(pq) = 3, so 4 sample points suffice.
So we use the 4th roots of unity 1, i, -1, -i.
The DFT for pq, i.e. the pointwise product of the two DFTs above, is 0, 8-2i, 0, 8+2i.
Could someone please tell me how to get the coefficients for pq now from its dft?
Thanks!

The first thing to understand is that multiplying two polynomials is the same as convolving the coefficients.
octave:1> p = [0 0 1 -4];   % x - 4 in descending powers, zero-padded
octave:2> q = [0 1 0 -1];   % x^2 - 1
octave:3> conv(p, q)
ans =
   0   0   0   1  -4  -1   4
That is, pq(x) = x^3 - 4x^2 - x + 4.
Secondly, understand the conditions under which circular convolution is equivalent to linear convolution: the transform length N must be at least deg(p) + deg(q) + 1 (here 1 + 2 + 1 = 4), otherwise the high-order coefficients wrap around onto the low-order ones.
(Also, your sample values are consistent with evaluating at w^k with w = i; note that fft uses w = e^(-2*pi*i/N), so it returns the same values in conjugate order.)
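To answer the actual question: apply the inverse DFT to the product samples. Here is a minimal Octave sketch of the whole pipeline (my own; it stores coefficients in ascending order, the reverse of conv's convention, so that fft directly evaluates the polynomial at the roots of unity):
% Coefficients in ascending order [c0 c1 c2 c3], zero-padded to length
% >= deg(p) + deg(q) + 1 = 4, so circular convolution equals linear convolution.
p = [-4 1 0 0];                     % p(x) = x - 4
q = [-1 0 1 0];                     % q(x) = x^2 - 1
c = real(ifft(fft(p) .* fft(q)))    % -> [4 -1 -4 1]: pq(x) = 4 - x - 4x^2 + x^3
By hand, the inverse DFT is c_n = (1/4) * sum_k X_k * w^(-k*n). Applied to your samples 0, 8-2i, 0, 8+2i with w = i (the convention you evaluated with), it gives 4, -1, -4, 1, i.e. pq(x) = x^3 - 4x^2 - x + 4, matching conv above.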

Related

Fourier transform, coefficients issue. How to gather function back?

I have an output 2D array C(kx,ky) of Fourier coefficients, and the problem is how to get the function f(x,y) back from those coefficients.
f(x,y) is defined on the [0, 2*pi] x [0, 2*pi] domain in a Fourier basis with 64 points in each direction. My coefficient array has size (32, 63): the 32 covers only the non-negative wave numbers k_x >= 0, and the 63 covers the wave numbers from -k_y to k_y.
To get the coefficients for negative k_x < 0, I apply the conjugate-symmetry operation to C(kx,ky), i.e. np.conjugate(C) in Python.
Getting back f(x,y):
# 1st part, for k_x >= 0:
for i in range(len(kx)):
    for j in range(len(ky)):
        summode = summode + c[i, j] * np.exp(1j * (kx[i] * x + ky[j] * y))
# 2nd part, for k_x < 0:
for i in range(1, len(kx)):
    for j in range(len(ky)):
        summode = summode + np.conjugate(c[i, j]) * np.exp(1j * (-kx[i] * x + ky[j] * y))
In the end, the function I obtain does not match the original. Any ideas?
Thanks in advance!
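One thing to double-check, assuming f is real-valued: the Hermitian symmetry couples both wave numbers, not just k_x:
C(-kx, -ky) = conj(C(kx, ky))
so the mirrored sum should pair np.conjugate(c[i,j]) with np.exp(1j*(-kx[i]*x - ky[j]*y)), i.e. the sign of ky flips as well.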

Using linear approximation to perform addition and subtraction | error barrier

I'm attempting my first solo project after taking an introductory machine learning course: using linear approximation to predict the outcome of adding or subtracting two numbers.
I have 3 features: the first number, a subtraction/addition flag (0 = subtract, 1 = add), and the second number.
So my input looks something like this:
3 0 1
4 1 2
3 0 3
With corresponding output like this:
2
6
0
I have (I think) successfully implemented the linear regression algorithm, as the squared error does gradually decrease; but with 100 training examples, whose outputs range from 0 to 50, the squared error flattens out at around 685.6 after about 400 iterations.
Graph: squared error vs. iterations.
To fix this, I have tried training on a larger dataset, removing regularization, and normalizing the input values.
I know that one way to fix high bias is to add complexity to the approximation, but I want to maximize performance at this level of complexity first. Is it possible to get any further with a purely linear model?
My linear approximation code in Octave:
% Iterate
for i = 1 : iter
    % hypothesis
    h = X * Theta;
    % regularization: zero out the bias term so it is not penalized
    regTheta = Theta;
    regTheta(1) = 0;
    % cost calc
    J(i, 2) = (1 / (2 * m)) * (sum((h - y) .^ 2) + lambda * sum(regTheta .^ 2));
    % theta calc (the regularization term belongs inside the gradient,
    % scaled by alpha / m, rather than being added as a scalar)
    Theta = Theta - (alpha / m) * (((h - y)' * X)' + lambda * regTheta);
end
Note: I'm setting lambda to 0, so the regularization terms above have no effect.
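To the closing question: the plateau is expected with these three raw features, because the target is y = a + (2s - 1)*b = a - b + 2*s*b, which contains the product s*b and is therefore not a linear function of (a, s, b). A small Octave sketch (my suggestion, not from the original post) showing that a single extra interaction column makes the target exactly representable:
% Toy rows from the question: columns are a, s (0 = subtract, 1 = add), b
A = [3; 4; 3];  S = [0; 1; 0];  B = [1; 2; 3];
y = [2; 6; 0];
X = [ones(3, 1), A, S, B, S .* B];   % bias, raw features, interaction s*b
Theta = X \ y;                       % one exact solution is [0; 1; 0; -1; 2]
norm(X * Theta - y)                  % -> 0 (up to round-off)
With that column added, the squared error can be driven to zero, so the 685.6 floor disappears.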

Converting from fractional to decimal representation in Octave

I'm getting the following warning:
warning: Using rat() heuristics for double-precision input (is this what you wanted?)
and the resulting calculation uses the rational representation when I would like the decimal form. How can I force the computation to convert the rational result to a decimal representation?
Here is the code:
pkg load symbolic
syms a b c d real
C = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 0, 1; 0, 0, 1, 0]
H = (1/sqrt(2))*[1, 1; 1, -1]
I = [1, 0; 0, 1]
X = [a, b, c, d]
s = kron(H, I)
s*C*X'
The rational representation can be converted to a floating-point one using vpa. vpa(x,n) evaluates x to at least n significant digits. If you want to use the current value of digits, you can omit n.
vpa(s*C*X.',4)
% Above line evaluates the result to at least 4 significant digits
Also note that ' is not the transpose; it is the complex conjugate transpose. Use .' when you mean a plain transpose. That's why I made that replacement in the code above.
Regarding the warning message, it can be turned off by:
warning ('off', 'OctSymPy:sym:rationalapprox');
You can turn it on again by replacing off with on in the above code.
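As an aside, here is a sketch of how to avoid the warning at its source: the only inexact number in the script is 1/sqrt(2), so building it symbolically means rat() is never invoked (this assumes the symbolic package overloads kron for sym matrices):
pkg load symbolic
syms a b c d real
C = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 0, 1; 0, 0, 1, 0];
H = [1, 1; 1, -1] / sqrt(sym(2));   % sqrt(sym(2)) stays exact, no rational approximation
s = kron(H, sym(eye(2)));
vpa(s * C * [a, b, c, d].', 4)      % decimal output to at least 4 significant digits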

Compare two linear regression models in MATLAB

I want to compare the performance of two models using the F statistic. Here is a reproducible example and the expected results:
load carbig
tbl = table(Acceleration,Cylinders,Horsepower,MPG);
% Testing both models separately
mdl1 = fitlm(tbl,'MPG~1+Acceleration+Cylinders+Horsepower');
mdl2 = fitlm(tbl,'MPG~1+Acceleration');
% Comparing both models using the F-test and p-value
numerator = (mdl2.SSE-mdl1.SSE)/(mdl1.NumCoefficients-mdl2.NumCoefficients);
denominator = mdl1.SSE/mdl1.DFE;
F = numerator/denominator;
p = 1-fcdf(F,mdl1.NumCoefficients-mdl2.NumCoefficients,mdl1.DFE);
We end up with F = 298.75 and p = 0, indicating mdl1 is significantly better than mdl2, as assessed by the F statistic.
Is there any way to obtain the F and p values without running fitlm twice and repeating all the computation?
I tried coefTest, as suggested by @Glen_b, but the function is poorly documented and the results are not the ones I expect.
[p,F] = coefTest(mdl1); % p = 0, F = 262.508 (this tests mdl1 against the constant model)
[p,F] = coefTest(mdl1,[0,0,1,1]); % p = 0, F = 57.662 (not sure what this is testing)
[p,F] = coefTest(mdl1,[1,1,0,0]); % p = 0, F = 486.810 (idem)
I believe I should run the test with a different null hypothesis (C) using the form [p,F] = coefTest(mdl1,H,C), but I don't really know how to set that up, and there's no example.
This answer is in regards to comparing two linear regression models where one model is a restricted version of the other.
Short answer:
To do an F-test of the restriction that the 3rd and 4th elements of your estimated coefficient vector b are zero:
[p, F] = coefTest(mdl1, [0, 0, 1, 0; 0, 0, 0, 1]);
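This is the same restriction the two-model comparison in the question tests, so it should reproduce F = 298.75 and the same p-value: for ordinary least squares, the SSE-based F-test and the Wald test of R*b = 0 are algebraically equivalent.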
Further explanation:
Let b be our estimated coefficient vector. Linear restrictions on b are typically written in matrix form as R*b = r. The restriction that the 3rd and 4th elements of b are zero would be written:
[0, 0, 1, 0; 0, 0, 0, 1] * b = [0; 0]
The matrix [0, 0, 1, 0; 0, 0, 0, 1] is what coefTest calls the H matrix in the docs.
P = coefTest(M,H), with H a numeric matrix having one column for each
coefficient, performs an F test that H*B=0, where B represents the
coefficient vector.
Long version
Sometimes with these econometric routines it's nice just to write things out yourself, so you know what's really going on.
Remove rows with NaN because they just add unrelated complexity:
tbl_dirty = table(Acceleration,Cylinders,Horsepower,MPG);
tbl = tbl_dirty(~any(ismissing(tbl_dirty),2),:);
Do the estimation etc...
n = height(tbl); % number of observations
y = tbl.MPG;
X = [ones(n, 1), tbl.Acceleration, tbl.Cylinders, tbl.Horsepower];
k = size(X,2); % number of variables (including constant)
b = X \ y; % estimate b with least squares
u = y - X * b; % calculates residuals
s2 = u' * u / (n - k); % estimate variance of error term (assuming homoskedasticity, independent observations)
BCOV = inv(X'*X) * s2; % get covariance matrix of b assuming homoskedasticity of error term etc...
bse = diag(BCOV).^.5; % standard errors
R = [0, 0, 1, 0;
0, 0, 0, 1];
r = [0; 0]; % Testing restriction: R * b = r
num_restrictions = size(R, 1);
F = (R*b - r)' * ((R * BCOV * R') \ (R*b - r)) / num_restrictions; % F-stat (see Hayashi for reference)
Fp = 1 - fcdf(F, num_restrictions, n - k); % F p-val
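As a quick sanity check (my addition, reusing the variables defined above), the hand-rolled statistic should agree with coefTest:
mdl1 = fitlm(tbl, 'MPG~1+Acceleration+Cylinders+Horsepower');
[p_ct, F_ct] = coefTest(mdl1, [0, 0, 1, 0; 0, 0, 0, 1]);
fprintf('manual:   F = %.3f, p = %.4f\n', F, Fp);
fprintf('coefTest: F = %.3f, p = %.4f\n', F_ct, p_ct);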
For reference, see p. 65 of Hayashi's book Econometrics.
No, there is not.
fitlm fits an arbitrary model; in your case, a regression model with an intercept and either one or three regressors. It might seem that the model with three regressors could reuse information from the model with one regressor, but this is only true under certain restrictions on the model, and even then the overlap is limited.
fitlm is a very general framework that can be used for arbitrary models. Doing multiple regressions at the same time, with sharing of information between them, would get quite complex and is not implemented.
It is possible to implement this yourself for these two specific models. Usually such a linear regression is solved using the normal equations:
Beta = (X' * X)^-1 * X' * y
where X is the data with the variables as columns and y is the target variable. In this case you could reuse the part of X' * X that only involves the columns of the smaller regression: the variation in Acceleration. But since adding 2 new variables adds 8 values to that matrix, you only save about 1/9 of the work of building it, and the heaviest part, the inversion, cannot be shared at all. The time saved is therefore very small.
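For illustration, a sketch of what that reuse would look like (my own, assuming the column order [intercept, Acceleration, Cylinders, Horsepower] from the previous answer):
G  = X' * X;                      % Gram matrix of the full design
Xy = X' * y;
b_full  = G \ Xy;                 % big model: all four columns
b_small = G(1:2, 1:2) \ Xy(1:2);  % small model reuses the top-left block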
In short, just do two separate regressions.

Can someone explain the behavior of the functions mkpp and ppval?

If I do the following in MATLAB:
ppval(mkpp(1:2, [1 0 0 0]),1.5)
ans = 0.12500
This should construct the polynomial f(x) = x^3 and evaluate it at x = 1.5. So why does it give me 0.125 instead of 1.5^3 = 3.375? Now, if I change the domain defined in the first argument to mkpp, I get this:
> ppval(mkpp([1 1.5 2], [[1 0 0 0]; [1 0 0 0]]), 1.5)
ans = 0
So without changing the function, I change the answer. Awesome.
Can anyone explain what's going on here? How does changing the first argument to mkpp change the result I get?
The function mkpp shifts each polynomial piece so that x = 0 corresponds to the start of its break interval. In your first example, the polynomial x^3 is shifted to the range [1 2], so if you want to evaluate it as if on the unshifted range [0 1], you have to do the following:
>> pp = mkpp(1:2,[1 0 0 0]); %# Your polynomial
>> ppval(pp,1.5+pp.breaks(1)) %# Shift evaluation point by the range start
ans =
3.3750 %# The answer you expect
In your second example, you have one polynomial x^3 shifted to the range [1 1.5] and another polynomial x^3 shifted to the range of [1.5 2]. Evaluating the piecewise polynomial at x = 1.5 gives you a value of zero, occurring at the start of the second polynomial.
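You can check the local-coordinate convention directly: within a break interval starting at breaks(i), ppval evaluates the coefficients in the shifted variable x - breaks(i):
pp = mkpp(1:2, [1 0 0 0]);    %# x^3 on the break interval [1 2]
ppval(pp, 1.5)                %# 0.1250
polyval([1 0 0 0], 1.5 - 1)   %# 0.1250: same value, evaluated at x - breaks(1)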
It may help to visualize the polynomials you are making as follows:
x = linspace(0,3,100); %# A vector of x values
pp1 = mkpp([1 2],[1 0 0 0]); %# Your first piecewise polynomial
pp2 = mkpp([1 1.5 2],[1 0 0 0; 1 0 0 0]); %# Your second piecewise polynomial
subplot(1,2,1); %# Make a subplot
plot(x,ppval(pp1,x)); %# Evaluate and plot pp1 at all x
title('First Example'); %# Add a title
subplot(1,2,2); %# Make another subplot
plot(x,ppval(pp2,x)); %# Evaluate and plot pp2 at all x
axis([0 3 -1 8]) %# Adjust the axes ranges
title('Second Example'); %# Add a title