Steepest Descent Algorithm in Octave

I'm having trouble implementing this algorithm in Octave, even though the pseudocode for it looks really simple. The book covering this algorithm only spends one page on it, so there isn't a lot of information about it; I'm just going to post the pseudocode:
Compute r = b - Ax and p = Ar
Until convergence, Do:
    a <- (r,r) / (p,r)
    x <- x + a r
    r <- r - a p
    Compute p := Ar
End Do
Here is my attempt at implementing this in Octave. I use an example in the book to test the program:
A = [5,2,-1;3,7,3;1,-4,6];
b = [2;-1;1];
x0 = [0;0;0];
Tol = 0.00001;
x=x0;
r = b-A*x;
p = A*r;
while true,
    a = (r')*(r)/((p)*(r'));
    disp(a);
    x = x + a * r;
    r = r - a * p;
    p = A*r;
    if norm(r) < Tol,
        break
    end
end
When I run this, I get an error saying that the first matrix I'm dividing is 1x1 and the second matrix is 3x3, so I'm not able to do that, and I understand why. I thought about using the ./ operator instead, but to my understanding that does not yield the result I'm looking for, and this example should be amenable to division. Did I mess up my implementation, or is my understanding of this algorithm wrong? I wasn't sure whether to post this here or on math.stackexchange, but I tried here.

My first thought is the error message: you have (r') * (r) / ((p) * (r')); should the denominator be (p) * (r') or (p') * (r)? (Notice where the ' marks are.)
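For reference, a minimal sketch of the corrected update in Python/NumPy: the denominator is the scalar inner product r'*p, not the outer product p*r'. Note that classical steepest descent is only guaranteed to converge for a symmetric positive-definite A, so this sketch uses a small SPD example rather than the (nonsymmetric) matrix from the question.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, maxit=10000):
    """Steepest descent following the book's pseudocode."""
    x = x0.astype(float)
    r = b - A @ x          # residual
    p = A @ r
    for _ in range(maxit):
        a = (r @ r) / (p @ r)   # scalar step length (r,r)/(p,r)
        x = x + a * r
        r = r - a * p
        p = A @ r
        if np.linalg.norm(r) < tol:
            break
    return x

# Small symmetric positive-definite example (chosen for illustration)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
```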

Removing DC component for matrix in chuncks in octave

I'm new to Octave, and if this has been asked and answered then I'm sorry, but I have no idea what the phrase is for what I'm looking for.
I'm trying to remove the DC component from a large matrix, but in chunks, as I need to do calculations on each chunk.
What I've got so far:
r = dlmread('test.csv',';',0,0);
x = r(:,2);
y = r(:,3); % we work on the 3rd column
d = 1;
while d <= (length(y) - 256)
    e = y(d:d+256);
    avg = sum(e) / length(e);
    k(d:d+256) = e - avg; % this is the part I need help with: how to get the chunk with the right value into the matrix
    d += 256;
endwhile
% to check the result I like to see it
plot(x, k, '.');
If I change the line to:
k(d:d+256) = e - 1024;
it works perfectly.
I know there is something like an element-wise operation, but if I use e .- avg I get this:
warning: the '.-' operator was deprecated in version 7
and it still doesn't do what I expect.
I must be missing something, any suggestions?
GNU Octave, version 7.2.0 on Linux(Manjaro).
Never mind, the code works as expected.
The result (k) got corrupted because the chosen chunk size was too small for my signal. Changing 256 to 4096 gave me a better result.
+ and - are always element-wise. Beware that d:d+256 is 257 elements, not 256. So if you then increment d by 256, consecutive chunks overlap by one point.
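For what it's worth, the per-chunk mean subtraction can be sketched in Python/NumPy like this, using half-open slices so consecutive chunks neither overlap nor skip samples (the signal here is synthetic):

```python
import numpy as np

def remove_dc_in_chunks(y, chunk=4096):
    """Subtract each chunk's own mean (its DC component) from the signal."""
    k = np.empty_like(y, dtype=float)
    for d in range(0, len(y), chunk):
        e = y[d:d + chunk]          # half-open slice: exactly `chunk` samples
        k[d:d + chunk] = e - e.mean()
    return k

# Synthetic signal: a sine wave riding on a DC offset of 1024
t = np.arange(8192)
y = 1024 + np.sin(2 * np.pi * t / 512)
k = remove_dc_in_chunks(y, chunk=4096)
```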

Plotting a 2D graph in Octave: getting a nonconformant arguments error, and the graph isn't coming out correctly

I'm making a graph by plotting energy over the overall distance traveled. I used the equation E/D = F (Energy/Distance = Force) to try to get values to create a 2D line graph. However, I'm getting errors such as "nonconformant arguments", one of my variables is seemingly turned into 0, and the vector lengths aren't matching. Here's the code:
% Declaring all the variables for the drag equation
p = 1.23;
v = 0:30;
C = 0.32;
A = 3.61;
D = 100000;
% This next line of code uses the variables above in order to get the force.
Fd = (p*(v.^2)*C*A)/2
% This force is then used to calculate the energy used to overcome the drag force
E = Fd*D
kWh = (E/3.6e+6);
Dist = (D/1000);
x = 0:Dist
y = 0:kWh
plot(x,y)
xlabel('x, Distance( km )')
ylabel('y, Energy Used Per Hour ( kWh )')
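No answer was posted in the thread, but one likely cause can be sketched in Python: Fd (and hence kWh) is a vector with one value per speed, while 0:Dist builds a 101-element vector, and 0:kWh likely collapses to a single element (the colon operator takes the first element of a vector operand, and kWh(1) is 0), so the plotted vectors cannot match. Plotting the computed energy against the speed vector itself keeps the lengths consistent; the variable names follow the question, but the choice of axes here is an assumption about the intent.

```python
import numpy as np

# Same constants as in the question
p = 1.23               # air density
v = np.arange(0, 31)   # speeds 0..30
C = 0.32               # drag coefficient
A = 3.61               # frontal area
D = 100000.0           # distance in metres

Fd = (p * v**2 * C * A) / 2   # drag force at each speed (a 31-element vector)
E = Fd * D                    # energy to overcome drag over distance D
kWh = E / 3.6e6               # convert joules to kWh

# kWh has one value per speed, so plot it against v, not against 0:Dist:
# import matplotlib.pyplot as plt; plt.plot(v, kWh)
```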

SPSS syntax of a quadratic term with interaction

What does the syntax of a regression with a quadratic term and an interaction look like in SPSS? In R, the code would be:
fit <- lm(c ~ a*b + a*I(b^2), dat)
or
fit <- lm(c ~ a*(b + I(b^2)), dat)
Thanks for help.
Using REGRESSION you need to actually make the variables in the SPSS data file before submitting the command. So if your variables were named the same:
COMPUTE ab = a*b. /*Interaction*/.
COMPUTE bsq = b**2. /*squared term*/.
COMPUTE absq = a*bsq. /*Interaction with squared term*/.
Then these can be placed on the right hand side of your regression equation.
REGRESSION VARIABLES=a,b,bsq,absq,c
/DEPENDENT=c
/METHOD=ENTER a,b,bsq,absq.
I thought you could only use factor variables in the interactions, but I was wrong: you can use continuous variables as well (sorry!). Here is an example using MIXED (you still need to make the separate variables if using REGRESSION).
INPUT PROGRAM.
LOOP Case = 1 TO 200000.
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
COMPUTE a = RV.BERNOULLI(0.5).
COMPUTE b = RV.NORMAL(0,1).
COMPUTE ab = a*b /*Interaction*/.
COMPUTE bsq = b**2 /*squared term*/.
COMPUTE absq = a*bsq /*Interaction with squared term*/.
COMPUTE c = 0.5 + 0.2*a + 0.1*b -0.05*ab + .03*bsq -.001*absq + RV.NORMAL(0,1).
VARIABLE LEVEL a (NOMINAL).
RECODE a (0 = 2)(ELSE = COPY).
MIXED c BY a WITH b bsq
/FIXED = a b b*b a*b
/PRINT SOLUTION.
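The same "build the product columns yourself, then regress" idea can be sketched outside SPSS. Here is a small NumPy version using synthetic data with the same coefficients as the MIXED example (minus the noise term, so the fit recovers them exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
a = rng.integers(0, 2, n).astype(float)   # binary predictor
b = rng.normal(0, 1, n)

# Construct the derived columns, as the COMPUTE statements do
ab   = a * b        # interaction
bsq  = b**2         # squared term
absq = a * bsq      # interaction with squared term

# Outcome with known coefficients (noise omitted so recovery is exact)
c = 0.5 + 0.2*a + 0.1*b - 0.05*ab + 0.03*bsq - 0.001*absq

# Ordinary least squares on [1, a, b, ab, bsq, absq]
X = np.column_stack([np.ones(n), a, b, ab, bsq, absq])
beta, *_ = np.linalg.lstsq(X, c, rcond=None)
```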

Solving two coupled non-linear second-order differential equations numerically

I have encountered the following system of differential equations in Lagrangian mechanics. Can you suggest a numerical method, with relevant links and references, for solving it? Also, is there a short implementation in Matlab or Mathematica?
m x (ẏ)² + m g cos(y) - M g - (M - m)(ẍ) = 0
g sin(y) + 2(ẋ)(ẏ) + x(ÿ) = 0
where ẋ = dx/dt and ẏ = dy/dt, and the double dot indicates a second derivative with respect to time.
You can create a vector Y = (x y u v)' so that
dx/dt = u
dy/dt = v
du/dt = d²x/dt²
dv/dt = d²y/dt²
It is possible to isolate the second derivatives from the equations, so you get
d²x/dt² = (m*g*cos(y) + m*x*v² - M*g)/(M-m)
d²y/dt² = -(g*sin(y) - 2*u*v)/x
Now, you can try to solve it using standard ODE solvers, such as Runge-Kutta methods. Matlab has a set of solvers, such as ode23. I didn't test the following, but it would be something like this:
function f = F(t, Y)
    % m, g and M must be defined in scope
    x = Y(1); y = Y(2); u = Y(3); v = Y(4);
    f = zeros(4,1);
    f(1) = u;
    f(2) = v;
    f(3) = (m*g*cos(y) + m*x*v*v - M*g)/(M-m);
    f(4) = -(g*sin(y) - 2*u*v)/x;
end
[T,Y] = ode23(@F, time_period, Y0);
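If Matlab isn't at hand, the same first-order system can be integrated with a hand-rolled classical RK4 step in plain Python. The parameter values and initial conditions below are made up for illustration (x must start away from zero, since it appears in a denominator):

```python
import math

# Assumed parameters and initial state (for illustration only)
m, M, g = 1.0, 2.0, 9.81

def F(Y):
    """Right-hand side of the first-order system Y = (x, y, u, v)."""
    x, y, u, v = Y
    return [
        u,
        v,
        (m*g*math.cos(y) + m*x*v*v - M*g) / (M - m),
        -(g*math.sin(y) - 2*u*v) / x,
    ]

def rk4(F, Y0, t_end, dt):
    """Classical fourth-order Runge-Kutta integration."""
    Y, t = list(Y0), 0.0
    while t < t_end:
        k1 = F(Y)
        k2 = F([y + 0.5*dt*k for y, k in zip(Y, k1)])
        k3 = F([y + 0.5*dt*k for y, k in zip(Y, k2)])
        k4 = F([y + dt*k for y, k in zip(Y, k3)])
        Y = [y + dt*(a + 2*b + 2*c + d)/6
             for y, a, b, c, d in zip(Y, k1, k2, k3, k4)]
        t += dt
    return Y

Y_final = rk4(F, [1.0, 0.5, 0.0, 0.0], t_end=0.2, dt=1e-3)
```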

How to use Newton-Raphson method in matlab to find an equation root?

I am a new user of MATLAB. I want to find the value that makes f(x) = 0, using the Newton-Raphson method. I have tried to write code for it, but it seems difficult to implement the Newton-Raphson method. This is what I have so far:
function x = newton(x0, tolerance)
    tolerance = 1.e-10;
    format short e;
    Params = load('saved_data.mat');
    theta = pi/2;
    zeta = cos(theta);
    I = eye(Params.n,Params.n);
    Q = zeta*I-Params.p*Params.p';
    % T is a matrix(5,5)
    Mroot = Params.M.^(1/2); %optimization
    T = Mroot*Q*Mroot;
    % Find the eigenvalues
    E = real(eig(T));
    % Find the negative eigenvalues
    % Find the smallest negative eigenvalue
    gamma = min(E);
    % Now solve for lambda
    M_inv = inv(Params.M); %optimization
    zm = Params.zm;
    x = x0;
    err = (x - xPrev)/x;
    while abs(err) > tolerance
        xPrev = x;
        x = xPrev - f(xPrev)./dfdx(xPrev);
        % stop criterion: (f(x) - 0) < tolerance
        err = f(x);
    end
    % stop criterion: change of x < tolerance % err = x - xPrev;
end
The above function is used like so:
% Calculate the functions
Winv = inv(M_inv+x.*Q);
f = @(x)( zm'*M_inv*Winv*M_inv*zm);
dfdx = @(x)(-zm'*M_inv*Winv*Q*M_inv*zm);
x0 = (-1/gamma)/2;
xRoot = newton(x0,1e-10);
The question isn't particularly clear. However, do you need to implement the root finding yourself? If not then just use Matlab's built in function fzero (not based on Newton-Raphson).
If you do need your own implementation of the Newton-Raphson method then I suggest using one of the answers to Newton Raphsons method in Matlab? as your starting point.
Edit: The following isn't answering your question, but is just a note on coding style.
It is useful to split your program up into reusable chunks. In this case your root finding should be separated from your function construction. I recommend writing your Newton-Raphson method in a separate file and calling it from the script where you define your function and its derivative. Your source would then look something like:
% Define the function (and its derivative) to perform root finding on:
Params = load('saved_data.mat');
theta = pi/2;
zeta = cos(theta);
I = eye(Params.n,Params.n);
Q = zeta*I-Params.p*Params.p';
Mroot = Params.M.^(1/2);
T = Mroot*Q*Mroot; %T is a matrix(5,5)
E = real(eig(T)); % Find the eigen-values
gamma = min(E); % Find the smallest negative eigen value
% Now solve for lambda (what is lambda?)
M_inv = inv(Params.M);
zm = Params.zm;
Winv = inv(M_inv+x.*Q);
f = @(x)( zm'*M_inv*Winv*M_inv*zm);
dfdx = @(x)(-zm'*M_inv*Winv*Q*M_inv*zm);
x0 = (-1./gamma)/2.;
xRoot = newton(f, dfdx, x0, 1e-10);
In newton.m you would have your implementation of the Newton-Raphson method, which takes as arguments the function handles you define (f and dfdx). Using your code given in the question, this would look something like
function root = newton(f, df, x0, tol)
    root = x0;   % Initial guess for the root
    MAXIT = 20;  % Maximum number of iterations
    for j = 1:MAXIT
        dx = f(root) / df(root);
        root = root - dx;
        % Stop criterion:
        if abs(dx) < tol
            return
        end
    end
    % Raise error if maximum number of iterations reached.
    error('newton: maximum number of allowed iterations exceeded.')
end
Notice that I avoided using an infinite loop.
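The same structure ports directly to other languages. Here is a minimal Python version of the loop above, tried on the hypothetical test function f(x) = x² - 2, whose positive root is √2:

```python
def newton(f, df, x0, tol=1e-10, maxit=20):
    """Newton-Raphson with a bounded iteration count, as in the answer."""
    root = x0
    for _ in range(maxit):
        dx = f(root) / df(root)   # Newton step
        root = root - dx
        if abs(dx) < tol:         # stop when the step is small enough
            return root
    raise RuntimeError('newton: maximum number of allowed iterations exceeded.')

root = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, x0=1.0)
```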