Using GNU Octave FFT functions - fft

I'm playing with Octave's fft functions, and I can't really figure out how to scale their output. I use the following (very short) code to approximate a function:
function y = f(x)
y = x .^ 2;
endfunction;
X=[-4096:4095]/64;
Y = f(X);
# plot(X, Y);
F = fft(Y);
S = [0:2047]/2048;
function points = approximate(input, count)
size = size(input)(2);
fourier = [fft(input)(1:count) zeros(1, size-count)];
points = ifft(fourier);
endfunction;
Y = f(X); plot(X, Y, X, approximate(Y, 10));
Basically, what it does is take a function, compute its values over an interval, FFT them, keep only a few harmonics, and inverse-FFT the result. Yet I get a plot that is vertically compressed (the vertical scale of the output is wrong). Any ideas?

You are throwing out the second half of the transform. The transform is Hermitian symmetric for real-valued inputs and you have to keep those lines. Try this:
function points = approximate(inp, count)
fourier = fft(inp);
fourier((count+1):(length(fourier)-count+1)) = 0;
points = real(ifft(fourier)); % max(imag(ifft(fourier))) should be around eps(real(...))
endfunction;
The inverse transform will invariably have some tiny imaginary part due to numerical computation error, hence the real extraction.
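As a quick sanity check of that symmetry: for a real input vector, bin k of the transform is the complex conjugate of bin N-k+2, which you can verify directly, e.g.:
v = rand(1, 8);
F = fft(v);
max(abs(F(2:end) - conj(F(end:-1:2))))  % should be on the order of eps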
Note that input and size are names of built-in functions in Octave; clobbering them with your own variables is a good way to get really weird bugs down the road!
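For example, with the data from the question, the corrected function can be used like this, and both curves should now share the same vertical scale:
X = [-4096:4095]/64;
Y = X .^ 2;
plot(X, Y, X, approximate(Y, 10));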

You are probably doing it wrong. You remove all the "negative" frequencies in your code. You should keep both the positive and the negative low frequencies. Here is the code in Python and the result; the plot has the right scale.
(plot: http://files.droplr.com/files/35740123/XUl90.fft.png)
The code:
from __future__ import division
import numpy as np
from numpy.fft import fft, ifft
from pylab import plot

def approximate(signal, cutoff):
    fourier = fft(signal)
    # remove all frequencies except the DC term plus `cutoff` positive and `cutoff` negative ones:
    fourier[1 + cutoff:-cutoff] = 0
    return ifft(fourier).real  # the imaginary part is only numerical noise

def quad(x):
    return x**2

X = np.arange(-4096, 4096) / 64
Y = quad(X)
plot(X, Y)
plot(X, approximate(Y, 3))
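Since the question uses Octave, here is a rough translation of the same idea back to Octave (a sketch: it keeps the DC bin plus cutoff positive and cutoff negative frequencies, mirroring the Python slice above):
function points = approximate(signal, cutoff)
  fourier = fft(signal);
  % zero everything except the DC bin, the first `cutoff` positive
  % frequencies and the last `cutoff` negative frequencies
  fourier((2+cutoff):(end-cutoff)) = 0;
  points = real(ifft(fourier));
endfunction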

Related

How to plot a 2d Function in MATLAB

I am trying to plot a simple equation in MATLAB.
The equation is
z = x^2 - y^2, for -3 <= x <= 3, -3 <= y <= 3.
The current code that I have is
x = -3:3;
y = -3:3;
z = (x.^2) - (y.^2);
plot(z)
The result is a flat line at zero.
Please help me in this case because I am not sure if the code and graph is correct. Thank you very much.
This is not a piecewise function. A piecewise function is a function defined by multiple sub-functions, where each sub-function applies to a different interval of the domain. There is only one function here, and it takes two arrays of the same length. The calculation yields a vector of zeros because of the particular input arrays: if you change either of the vectors "x" or "y", you will see a nonzero plot. Element-wise, your code works as expected.
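For example, shifting one of the vectors makes the element-wise result nonzero:
x = -3:3;
y = (-3:3) + 1;         % shift y by one
z = (x.^2) - (y.^2)     % -> 5  3  1 -1 -3 -5 -7
plot(z)                 % no longer a flat line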
There is a lot going wrong here: Let's start at the beginning:
x = -3:3;
y = -3:3;
If we evaluate these, they will both return a vector of integers:
x =
-3 -2 -1 0 1 2 3
This means that the grid on which the function is evaluated is going to be very coarse. To alleviate this you can define a step size, e.g. x = -3:0.1:3, or use linspace, in which case you set the number of samples, e.g. x = linspace(-3, 3, 500). Now consider the next line:
z = (x.^2) - (y.^2);
If we evaluate this we get
z =
0 0 0 0 0 0 0
and you plot this vector with the 2d-plotting function
plot(z)
which perfectly explains why you get a straight line: arithmetic operators like minus (-) work entry-wise, so each element of x is combined only with the corresponding element of y. You, however, want to evaluate z for each possible pair of values of x and y. To do this, and to get a nice plot later, you should use meshgrid to create the grid and a plotting function like mesh to plot it. So I'd recommend using
[X,Y] = meshgrid(x,y);
to create the grid and then evaluate the function on the grid as follows
Z = X.^2 - Y.^2;
and finally plot your function with
mesh(X,Y,Z);
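Putting those pieces together, a complete version could look like this (the sample counts are arbitrary):
x = linspace(-3, 3, 200);    % fine sampling of the x-range
y = linspace(-3, 3, 200);    % fine sampling of the y-range
[X, Y] = meshgrid(x, y);     % all combinations of x and y
Z = X.^2 - Y.^2;             % evaluate z on the grid
mesh(X, Y, Z);               % 3d surface plot
xlabel('x'); ylabel('y'); zlabel('z');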

Is there a limit on the number of degrees of freedom with the lm_feasible algorithm? If so, what is the limit?

I am developing finite element software that minimizes the energy of a mechanical structure. Using Octave and its optim package, I run into a strange issue: the lm_feasible algorithm does not calculate at all when I use more than 300 degrees of freedom (DoF). Another algorithm (sqp) performs the calculation but doesn't work well once I make the structure more complex and move outside my test case.
Is there a limit on the number of DoF with the lm_feasible algorithm?
If so, what is the maximum number of DoF?
To give an overview and general idea of how the code works:
[x,y] = geometryGenerator()
U = zeros(length(x)*2,1);
U(1:2:end-1) = x;
U(2:2:end) = y;
%Non geometric argument are not optimised, and fixed during calculation
fct = @(U) complexFunctionOfEnergyIWrap(U(1:2:end-1), U(2:2:end), variousMaterialPropertiesAndOtherArgs)
para = optimset("f_equc_idx",contEq,"lb",lb,"ub",ub,"objf_grad",dEne,"objf_hessian",d2Ene,"MaxIter",1000);
[U,eneFinale,cvg,outp] = nonlin_min(fct,U,para)
Full example:
clear
pkg load optim
function [x,y] = geometryGenerator(r,elts = 100)
teta = linspace(0,pi,elts);
x = r * cos(teta);
y = r * sin(teta);
endfunction
function ene = complexFunctionOfEnergyIWrap (x,y,E,P, X,Y)
ene = 0;
for i = 1:length(x)-1
ene += E*(x(i)/X(i))^4+ E*(y(i)/Y(i))^4- P *(x(i)^2+(x(i+1)^2)-x(i)*x(i+1))*abs(y(i)-y(i+1));
endfor
endfunction
[x,y] = geometryGenerator(5,100)
%Little distance from axis to avoid division by zero
x +=1e-6;
y +=1e-6;
%Saving initial geometry
X = x;
Y = y;
%Vectorisation of the function
%% Initial vector
U = zeros(length(x)*2,1);
U(1:2:end-1) = linspace(min(x),max(x),length(x));
U(2:2:end) = linspace(min(y),max(y),length(y));
%%Constraints
Aeq = zeros(3,length(U));
%%% Blocked bottom
Aeq(1,1) = 1;
Aeq(2,2) = 1;
%%% Sliding top
Aeq(3,end-1) = 1;
%%%Initial condition
beq = zeros(3,1);
beq(1) = U(1);
beq(2) = U(2);
beq(3) = U(end-1);
contEq = @(U) Aeq * U - beq;
%Parameter
Mat = 0.2e9;
pressure = 50;
%% Vectorized function. Non geometric argument are not optimised, and fixed during calculation
fct = @(U) complexFunctionOfEnergyIWrap(U(1:2:end-1), U(2:2:end), Mat, pressure, X, Y)
para = optimset("Algorithm","lm_feasible","f_equc_idx",contEq,"MaxIter",1000);
[U,eneFinale,cvg,outp] = nonlin_min(fct,U,para)
xFinal = U(1:2:end-1);
yFinal = U(2:2:end);
plot(x,y,';Initial geo;',xFinal,yFinal,'--x;Final geo;')
The finite element method is typically formulated as the optimality criterion of a minimization problem, which is equivalent to the virtual work principle (see books like Hughes or Bathe). The virtual work principle yields a set of linear (or nonlinear) equations which can be solved more efficiently (e.g. with fsolve).
If for some reason you must solve the problem as an optimization problem, then, if you are considering linear elasticity, your strain energy is quadratic, so you could use the qp Octave function.
Using sparse matrices could also be helpful.
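For reference, qp minimizes 0.5*x'*H*x + q'*x subject to A*x = b (plus optional bounds). Here is a minimal sketch of the call pattern, with a toy H and q standing in for whatever stiffness-like matrix and load vector your assembled energy would produce:
% toy problem: closest point to [1; 2; 3] whose entries sum to 1
H = eye(3);            % quadratic term (in your case: assembled, preferably sparse)
q = -[1; 2; 3];        % linear term
A = [1 1 1];           % equality constraint A*x = b
b = 1;
[x, obj, info] = qp(zeros(3,1), H, q, A, b)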

Why is the code that I have written for Andrew Ng's course not accepted?

Andrew Ng's course on Coursera, which is Stanford's Machine Learning course, features programming assignments that deal with implementing the algorithms taught in class. The goal of this assignment is to implement linear regression through gradient descent with an input set of X, y, theta, alpha (the learning rate), and a number of iterations.
I implemented this solution in Octave, the prescribed language in the course.
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
m = length(y);
J_history = zeros(num_iters, 1);
numJ = size(theta, 1);
for iter = 1:num_iters
for i = 1:m
for j = 1:numJ
temp = theta(j) - alpha /m * X(i, j) * (((X * theta)(i, 1)) - y(i, 1));
theta(j) = temp
end
prediction = X * theta;
J_history(iter, 1) = computeCost(X,y,theta)
end
end
On the other hand, here is the cost function:
function J = computeCost(X, y, theta)
m = length(y);
J = 0;
prediction = X * theta;
error = (prediction - y).^2;
J = 1/(2 * m) .* sum(error);
end
This does not pass the submit() function, which simply validates the result against an unknown test case.
I have checked other questions on StackOverflow but I really don't get it. :)
Thank you very much!
Your gradient seems to be correct, and as already pointed out in the answer given by @Kasinath P, it is likely that the problem is that the code is too slow. You just need to vectorize it. In MATLAB/Octave, you usually need to avoid for loops (note that although you have parfor in MATLAB, it is not yet available in Octave). So it is always better, performance-wise, to write something like A*x instead of iterating over each row of A with a for loop. You can read about vectorization here.
If I understand correctly, X is a matrix of size m*numJ, where m is the number of examples and numJ is the number of features (or the dimension of the space where each point lies). In that case, you can rewrite your cost function as
(1/(2*m)) * (X*theta-y)'*(X*theta-y);%since ||v||_2^2=v'*v for any vector v in Euclidean space
Now, we know from basic matrix calculus that for any two vectors s and v that are functions from R^{num_J} to R^m, the Jacobian of s^{t}v is given by
s^{t}Jacobian(v)+v^{t}*Jacobian(s) %this Jacobian will have size 1*num_J.
Applying that to your cost function, we obtain
jacobian=(1/m)*(theta'*X'-y')*X;
So if you just replace
for i = 1:m
for j = 1:numJ
%%% theta(j) updates
end
end
with
%note that the gradient is the transpose of the Jacobian we've computed
theta-=alpha*(1/m)*X'*(X*theta-y)
you should see a great increase in performance.
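Put together, a fully vectorized version of the function could look like this (keeping the original signature and the J_history bookkeeping):
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y);
  J_history = zeros(num_iters, 1);
  for iter = 1:num_iters
    % simultaneous update of all components of theta
    theta = theta - alpha * (1/m) * X' * (X * theta - y);
    J_history(iter) = computeCost(X, y, theta);
  end
end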
Your computeCost code is correct. It is better to follow the vectorized implementation of gradient descent: you are just iterating, which is slow and prone to error. The course wants you to do the vectorized implementation because it is simple and handy at the same time. I know this because I did that after sweating a lot.
Vectorization is good :)

Explanation of a Hough accumulator that does not match image

I was having fun with image processing and Hough transforms in Octave, but the results are not the expected ones.
Here is my edges image:
and here is my hough accumulator (x-axis is angle in deg, y-axis is radius):
I feel like I am missing the horizontal streaks but there is no local maximum in the accumulator for the 0/180 angle values.
Also, for the vertical streaks, the value of the radius should be equal to the x value of the edge's image, but instead the values of r are very high:
For example: the first vertical line on the left of the image has the equation x = 20 (approx.), so with r = x*cos(theta) + y*sin(theta) and theta = 0 I would expect r to be about 20.
The overall resulting lines detected do not match the edges at all:
Accumulator with detected maxima:
Resulting lines:
As you can see, the maxima of the accumulator are detected satisfactorily, but the resulting lines' radius values are too high and theta values are missing.
It almost looks like the hough transform accumulator does not correspond to the image...
Can someone help me figure out why and how to correct it?
Here is my code:
function [r, theta] = findScratches (img, edge)
hough = houghtf(edge,"line", pi*[0:360]/180);
threshHough = hough>.5*max(hough(:));
[r, theta] = find(threshHough>0);
%deg to rad for the trig functions
theta = theta/180*pi;
%according to octave doc r range is 2*diagonal
%-> bring it down to 1*diagonal or all lines are out of the picture
r = r/2;
%coefficients of the line y=ax+b
a = -cos(theta)./sin(theta);
b = r./sin(theta);
x = 1:size(img,2);
y = a * x + b;
figure(1)
imagesc(edge);
colormap gray;
hold on;
for i=1:size(y,1)
axis ij;
plot(y(i,:),x,'r','linewidth',1);
end
hold off;
endfunction
Thank you in advance.
You're definitely on the right track. Blurring the accumulator image would help before looking for the hotspots. Also, why not do a quick erode and dilate before doing the hough transform?
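A sketch of what that could look like, assuming the Octave image package (which also provides houghtf) is loaded; the structuring element and Gaussian size are just guesses to tune:
pkg load image
se = ones(3);                                    % small structuring element
edge_clean = imdilate(imerode(edge, se), se);    % opening: erode then dilate to drop speckles
hough = houghtf(edge_clean, "line", pi*[0:360]/180);
hough_blur = imfilter(hough, fspecial("gaussian", 7, 2));   % blur the accumulator
threshHough = hough_blur > 0.5*max(hough_blur(:));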
I had the same issue - detected lines had the correct slope but were shifted. The problem is that the r returned by the find(threshHough>0) function call is in the interval of [0,2*diag] while the Hough transform operates with values of r from the interval of [-diag,diag]. Therefore if you change the line
r=r/2
to
r=r-size(hough,1)/2
you will get the correct offset.
Let's define a vector of angles (in radians):
angles=pi*[0:360]/180
You should not perform this operation: theta = theta/180*pi.
Replace it with theta = angles(theta), where the theta returned by find contains indices into that angle vector.
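Combining this with the r offset from the previous answer, the relevant part of findScratches would become (a sketch, keeping the explicit angle vector from the question):
angles = pi*[0:360]/180;
hough = houghtf(edge, "line", angles);
threshHough = hough > 0.5*max(hough(:));
[r, theta] = find(threshHough > 0);
r = r - size(hough,1)/2;       % map row index back to the [-diag, diag] range
theta = angles(theta)(:);      % map column index to the actual angle in radians
a = -cos(theta)./sin(theta);   % line coefficients as before
b = r./sin(theta);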
Someone above suggested adjusting r to the -diag to +diag range with
r=r-size(hough,1)/2
This worked well for me. However, another difference was that I computed the Hough transform with the default angles, -90 to +90 degrees. The theta indices then range from 1 to 181, so they need to be shifted by -91 and then converted to radians:
theta = (theta-91)*pi/180;
With the above two changes, the rest of the code works OK.

How to compute Fourier coefficients with MATLAB

I'm trying to compute the Fourier coefficients for a waveform using MATLAB. The coefficients can be computed using the standard formulas
a_m = (2/T) * integral from 0 to T of f(t)*cos(m*omega*t) dt,  b_m = (2/T) * integral from 0 to T of f(t)*sin(m*omega*t) dt.
T is chosen to be 1, which gives omega = 2*pi.
However, I'm having issues performing the integrals. The functions are a triangle wave (which can be generated using sawtooth(t,0.5), if I'm not mistaken) as well as a square wave.
I've tried with the following code (For the triangle wave):
function [ a0,am,bm ] = test( numTerms )
bm = zeros(1,numTerms);
w=2*pi;
for i = 1:numTerms
f1 = @(t) sawtooth(t,0.5).*cos(i*w*t);
f2 = @(t) sawtooth(t,0.5).*sin(i*w*t);
am(i) = 2*quad(f1,0,1);
bm(i) = 2*quad(f2,0,1);
end
end
However, it's not getting anywhere near the values I need. The b_m coefficients given for a triangle wave are supposed to be 1/m^2 and -1/m^2 for odd m, alternating and beginning with the positive term.
The major issue for me is that I don't quite understand how integrals work in MATLAB, and I'm not sure whether the approach I've chosen works.
Edit:
To clarify, this is the form in which I'm looking to write the function once the coefficients have been determined: f(t) = a0/2 + sum over m of (a_m*cos(m*omega*t) + b_m*sin(m*omega*t)).
Here's an attempt using fft:
function [ a0,am,bm ] = test( numTerms )
T=2*pi;
w=1;
t = [0:0.1:2];
f = fft(sawtooth(t,0.5));
am = real(f);
bm = imag(f);
func = num2str(f(1));
for i = 1:numTerms
func = strcat(func,'+',num2str(am(i)),'*cos(',num2str(i*w),'*t)','+',num2str(bm(i)),'*sin(',num2str(i*w),'*t)');
end
y = inline(func);
plot(t,y(t));
end
Looks to me like your problem is what sawtooth returns. The MathWorks documentation says:
sawtooth(t,width) generates a modified triangle wave where width, a scalar parameter between 0 and 1, determines the point between 0 and 2π at which the maximum occurs. The function increases from -1 to 1 on the interval 0 to 2πwidth, then decreases linearly from 1 to -1 on the interval 2πwidth to 2π. Thus a parameter of 0.5 specifies a standard triangle wave, symmetric about time instant π with peak-to-peak amplitude of 1. sawtooth(t,1) is equivalent to sawtooth(t).
So I'm guessing that's part of your problem.
After you responded I looked into it some more. Looks to me like it's the quad function; not very accurate! I recast the problem like this:
function [ a0,am,bm ] = sotest( t, numTerms )
bm = zeros(1,numTerms);
am = zeros(1,numTerms);
% 2L = 1
L = 0.5;
for ii = 1:numTerms
am(ii) = (1/L)*quadl(@(x) aCos(x,ii,L),0,2*L);
bm(ii) = (1/L)*quadl(@(x) aSin(x,ii,L),0,2*L);
end
ii = 0;
a0 = (1/L)*trapz( t, t.*cos((ii*pi*t)/L) );
% now let's test it
y = ones(size(t))*(a0/2);
for ii=1:numTerms
y = y + am(ii)*cos(ii*2*pi*t);
y = y + bm(ii)*sin(ii*2*pi*t);
end
figure; plot( t, y);
end
function a = aCos(t,n,L)
a = t.*cos((n*pi*t)/L);
end
function b = aSin(t,n,L)
b = t.*sin((n*pi*t)/L);
end
And then I called it like:
[ a0,am,bm ] = sotest( t, 100 );
and I got:
Sweetness!!!
All I really changed was from quad to quadl. I figured that out by using trapz which worked great until the time vector I was using didn't have enough resolution, which led me to believe it was a numerical issue rather than something fundamental. Hope this helps!
To troubleshoot your code I would plot the functions you are using and investigate how the quad function samples them. You might be undersampling them, so make sure your minimum step size is smaller than the period of the function by at least a factor of 10.
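For example, plotting one of the integrands at a much finer resolution than the integration routine uses makes it easy to see whether oscillations are being missed (i and w as in the question):
i = 5; w = 2*pi;
tt = linspace(0, 1, 1000);                     % much finer sampling than quad's default
plot(tt, sawtooth(tt, 0.5).*cos(i*w*tt));      % the integrand passed to quad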
I would suggest using the FFT that is built into MATLAB. Not only is the FFT the most efficient method to compute a spectrum (it is n*log(n) dependent on the length n of the array, whereas the integral is n^2 dependent), it will also automatically give you the frequency points that are supported by your (equally spaced) time data. If you compute the integral yourself (which might be needed if the data points are not equally spaced), you might calculate frequency points that are not resolved (spaced more closely than 1 over the total time span, i.e. beyond the 'Fourier limit').
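As a sketch of that approach, the trigonometric coefficients can be read off the FFT of one period sampled at N equally spaced points (this assumes a real signal of period T = 1, matching the question):
N = 1024;                      % samples per period
t = (0:N-1)/N;                 % one period, T = 1
x = sawtooth(2*pi*t, 0.5);     % triangle wave with period 1
F = fft(x)/N;                  % normalized DFT
a0 = 2*real(F(1));             % so that f ~ a0/2 + sum_m (am*cos + bm*sin)
m  = 1:10;
am = 2*real(F(m+1));           % cosine coefficients for harmonics 1..10
bm = -2*imag(F(m+1));          % sine coefficients for harmonics 1..10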