Polygon offsetting in GNU Octave

I have a simple, non-self-intersecting polygonal chain and want to create a second polygonal chain that runs parallel to it at a fixed distance.
I think this topic is called polygon offsetting or buffering (searching for it turns up, for example, An algorithm for inflating/deflating (offsetting, buffering) polygons).
MATLAB has bufferm and polybuffer, but neither of them is implemented in GNU Octave.
I've started my own implementation:
close all
rotm = @(a) [cos(a) -sin(a); sin(a) cos(a)];
h = 3; # distance from existing polygon
p = [1 5 18.7 21 34 34;
36.1 36.1 42.1 22.5 16.0 13];
dp = diff(p, [], 2);
a = atan2 (dp (2, :), dp(1, :));
da = diff (a);
horiz = abs (da) < 16 * eps;
f = 2 * h./sin(da).*sin(da/2);
f(horiz) = h;
f = [h f h];
r = a(1:end-1) + diff(a)/2;
r = pi/2 + [a(1) r a(end)];
p2 = zeros(size(p));
for k=1:columns(p)
p2(:,k) = p(:,k) + rotm(r(k)) * [f(k); 0];
line ([p(1, k);p2(1,k)], [p(2, k);p2(2,k)], "color", "magenta");
endfor
line (p(1, :), p(2, :), "color", "green");
line (p2(1, :), p2(2, :), "color", "red");
axis equal
grid on
but at that point I really think there might be an easier way to do this.
Is there an easier way or some already implemented function which might help?
(btw, I haven't vectorized the code yet)

This is not as simple as it may initially seem. For example, offsetting complex polygons involves collisions between the offsets:
Image from CGAL manual, Chap.16: 2D Straight Skeleton and Polygon Offsetting
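For the simple, non-self-intersecting chain in the question, the per-vertex loop above can be vectorized along these lines (a sketch only; it uses the same miter formula as the question's code and does not handle the collisions mentioned above):
h = 3;                                   # offset distance
p = [1 5 18.7 21 34 34;
     36.1 36.1 42.1 22.5 16.0 13];
dp = diff (p, [], 2);
a = atan2 (dp(2,:), dp(1,:));            # segment angles
da = diff (a);                           # turning angle at interior vertices
f = 2 * h ./ sin (da) .* sin (da/2);     # miter length h/cos(da/2)
f(abs (da) < 16*eps) = h;                # straight-through vertices
f = [h f h];                             # end vertices use the plain distance
r = pi/2 + [a(1), a(1:end-1) + da/2, a(end)];   # offset directions
p2 = p + [f .* cos(r); f .* sin(r)];     # offset chain, no loop needed
line (p(1,:),  p(2,:),  "color", "green");
line (p2(1,:), p2(2,:), "color", "red");
axis equal; grid on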

Related

Non-linear fit in GNU Octave

I have a problem performing a non-linear fit in GNU Octave. Basically, I need to perform a global fit with some shared parameters while keeping others fixed.
The following code works perfectly in MATLAB, but Octave returns an error:
error: operator *: nonconformant arguments (op1 is 34x1, op2 is 4x1)
Attached are my code and the data to play with:
clear
close all
clc
pkg load optim
D = dlmread('hd', ';'); % raw data
bkg = D(1,2:end); % 4 sensors bkg
x = D(2:end,1); % input signal
Y = D(2:end,2:end); % 4 sensors response
W = 1./Y; % weights
b0 = [7 .04 .01 .1 .5 2 1]; % educated guess to start the fit
%% model function
F = @(b) ((bkg + (b(1) - bkg).*(1-exp(-(b(2:5).*x).^b(6))).^b(7)) - Y) .* W;
opts = optimset("Display", "iter");
lb = [5 .001 .001 .001 .001 .01 1];
ub = [];
[b, resnorm, residual, exitflag, output, lambda, Jacob] = ...
lsqnonlin(F,b0,lb,ub,opts)
To give more info: in the array b0, the parameters b0(1), b0(6) and b0(7) are shared among the 4 datasets, while b0(2:5) are specific to each dataset.
Thank you for your help and suggestions! ;)
Raw data:
0,0.3105,0.31342,0.31183,0.31117
0.013229,0.329,0.3295,0.332,0.372
0.013229,0.328,0.33,0.33,0.373
0.021324,0.33,0.3305,0.33633,0.399
0.021324,0.325,0.3265,0.333,0.397
0.037763,0.33,0.3255,0.34467,0.461
0.037763,0.327,0.3285,0.347,0.456
0.069405,0.338,0.3265,0.36533,0.587
0.069405,0.3395,0.329,0.36667,0.589
0.12991,0.357,0.3385,0.41333,0.831
0.12991,0.358,0.3385,0.41433,0.837
0.25368,0.393,0.347,0.501,1.302
0.25368,0.3915,0.3515,0.498,1.278
0.51227,0.458,0.3735,0.668,2.098
0.51227,0.47,0.3815,0.68467,2.124
1.0137,0.61,0.4175,1.008,3.357
1.0137,0.599,0.422,1,3.318
2.0162,0.89,0.5335,1.645,5.006
2.0162,0.872,0.5325,1.619,4.938
4.0192,1.411,0.716,2.674,6.595
4.0192,1.418,0.7205,2.691,6.766
8.0315,2.34,1.118,4.195,7.176
8.0315,2.33,1.126,4.161,6.74
16.04,3.759,1.751,5.9,7.174
16.04,3.762,1.748,5.911,7.151
32.102,5.418,2.942,7.164,7.149
32.102,5.406,2.941,7.164,7.175
64.142,7.016,4.478,7.174,7.176
64.142,7.018,4.402,7.175,7.175
128.32,7.176,6.078,7.175,7.176
128.32,7.175,6.107,7.175,7.173
255.72,7.165,7.162,7.165,7.165
255.72,7.165,7.164,7.166,7.166
511.71,7.165,7.165,7.165,7.165
511.71,7.165,7.165,7.166,7.164
Given the function definition above, if you call F(b0) in the command window you get a 34x4 matrix, which is correct, since the variable Y has the same size.
That way I can (in theory) compute the standard lsqnonlin objective, the sum of (fit - measured)^2.
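As a concrete check of that objective, the residual matrix can be flattened and squared directly (a sketch using F and b0 from above; R0 and resnorm0 are illustrative names):
R0 = F(b0);                % 34x4 weighted residual matrix
resnorm0 = sum(R0(:).^2);  % scalar value lsqnonlin tries to minimize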

Why am I getting this error? I just want to plot my equations in Octave

%z = damping ratio, z < 1
%wn = natural frequency in rad/sec
%wd = frequency of damped oscillations
%x_0 = amp
%phi = initial phase
%t = time
%%
z = 0.6943;
wn = 50;
wd = sqrt(1-(z^2))*wn;
x_0 = 42;
phi = pi/12;
t = linspace(0,100,1000);
x = x_0*exp(-z*wn*t)*sin(phi+(wd*t));
plot(t,x);
error: operator *: nonconformant arguments (op1 is 1x1000, op2 is 1x1000)
error: called from
/home/koustubhjain/Documents/Damped_Oscialltion_(z<1).m at line 14 column 3
I am completely new to Octave/MATLAB; I just want to plot my equations and get a graph of them. Did I do something wrong with the multiplication? Please, can someone help?
Also, the curve I am trying to plot should look like a sinusoid with decreasing amplitude; that's what my teacher told me. But if I replace the multiplication signs with .*, all I get is something like a straight line.
The curve decays to 0, and the range of t is too wide to see anything. Try plotting t from 0 to 0.5 (instead of from 0 to 100) and you will see your curve.
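A minimal sketch of that suggestion (element-wise products as in the question's follow-up, with only the time range changed):
t = linspace(0, 0.5, 1000);                   % shorter window so the decay is visible
x = x_0.*exp(-z*wn*t).*sin(phi+(wd*t));       % element-wise products
plot(t, x);
grid on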

Lines not being drawn in Octave

I am instructed to draw a circle from points that are transformed by a matrix. I cannot seem to get a line drawn between all the points.
c = cos(pi/8)
s = sin(pi/8)
A = [c -s; s c]
xy = [1;0]
axis('square')
for i = 1:17
xy = A * xy;
plot(xy(1, :), xy(2,:), 'r', 'linewidth', 2);
hold on
endfor
When I run the code I get this
How would I get the lines to be drawn between all the points?
Thanks
You need to calculate all of the points before you plot. If you plot only one point at a time, it has nowhere to connect a line to.
c = cos(pi/8)
s = sin(pi/8)
A = [c -s; s c]
xy = zeros(2,17); %// preallocate the matrix
xy(:,1) = [1;0]
for i = 2:17
xy(:,i) = A * xy(:,i-1);
endfor
plot(xy(1, :), xy(2,:), 'r', 'linewidth', 2);
axis('square') %// goes *after* the plot (thanks @Andy)

Updating parameters in a function being called by Octave's fsolve

I am working on modeling the motion of a single actuated leg in Octave. The leg has 3 points: a stationary hip (point A), a foot (point B) that moves along a known path, and a knee (point C) whose location and angle I am trying to solve for.
Using the code below I can successfully solve for the knee's XYZ position and relevant angles for a single value of the parameters s0 and Theta_H.
Now I want to be able to loop through multiple s0 and Theta_H values and run the solver. My problem is that I can't figure out how to pass new values for those variables into the equations function.
The reason this is tricky is that the function signature required by Octave's fsolve prevents passing inputs other than the unknowns into the function. I've tried updating a global variable as an indexer, but to do that I would need to clear all workspace variables, which causes other problems.
Any ideas on how to update the parameters in this function while still being able to input it into fsolve would be really appreciated!
The code below calls the solver:
global AC = 150; % length of the thigh limb
global CB = 150; % length of the shin limb
global lspan = 75; % width span of the foot touch down wrt the hip
global bob = 10; % height of the hip joint off the ground during a step
inits = [ .75; 2.35; 37; 0; 125]; % initial guesses at horizontal step position
% x(1): hip joint - guessing a 45 deg (.75 rad) angle for hip joint
% x(2): knee joint - guessing a 135 deg (2.35 rad) angle (wrt to vert)
% x(3): X position of the knee joint - guessing middle of the leg span in mm
% x(4): Y position of the knee joint - know it is 0 mm at the horizontal step position
% x(5): Z position of the knee joint - guessing the height to be ~80% of the height of a limb
[x, fval, info] = fsolve(@Rug_Bug_Leg, inits); % when running fsolve for the first time, you often have to remove the output suppression
The code below shows the function containing the system of equations to be solved by Octave's fsolve function:
function y = Rug_Bug_Leg(x)
global AC;
global CB;
global lspan;
global bob;
s0 = 0; % fore/aft (Y) position of the foot during the step. Trying to iterate this
Theta_H = 0; % hip angle during the step. Trying to iterate this
y = zeros(6,1); % zeros for left side of each equation
% First set of equations, Joint C wrt to Joint A
y(1) = -1*x(3)+AC*sin(x(1))*cos(Theta_H);
y(2) = -1*x(4)+AC*sin(x(1))*sin(Theta_H);
y(3) = -1*bob - x(5)+AC*cos(x(1));
% Second set of equations, Joint B wrt to Joint C
y(4) = x(3)-lspan +CB*sin(x(2))*cos(Theta_H);
y(5) = x(4) - s0 +sin(x(2))*sin(Theta_H);
y(6) = x(5) + bob + CB*cos(x(2));
endfunction
You can definitely do that!
All you need to do is create a function that returns a function.
First have your Rug_Bug_Leg function take s0 and Theta_H as inputs:
function y = Rug_Bug_Leg(x, s0, Theta_H)
% ...
endfunction
Then, you can write a "wrapper" function around Rug_Bug_Leg like this:
rbl = @(s0, Theta_H) @(x) Rug_Bug_Leg(x, s0, Theta_H)
Now, if you call rbl with some values (s0,Theta_H), it will return a function that takes x as input and returns Rug_Bug_Leg(x,s0,Theta_H).
For instance, rbl(0,0) returns the function:
@(x) Rug_Bug_Leg(x,0,0)
Here's a sample usage:
for s0=1:10
for Theta_H=1:10
[x, fval, info] = fsolve( rbl(s0,Theta_H), inits );
endfor
endfor
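If the solution for every (s0, Theta_H) pair needs to be kept, it can be collected inside the loops (a sketch; X_all is an illustrative name, not part of the original code):
n_s0 = 10; n_th = 10;
X_all = zeros(5, n_s0, n_th);             % one 5-element solution per pair
for i = 1:n_s0
  for j = 1:n_th
    [x, fval, info] = fsolve(rbl(i, j), inits);
    X_all(:, i, j) = x;
  endfor
endfor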

Compare two linear regression models in MATLAB

I want to compare the performance of two models using the F statistic. Here is a reproducible example and the expected results:
load carbig
tbl = table(Acceleration,Cylinders,Horsepower,MPG);
% Testing both models separately
mdl1 = fitlm(tbl,'MPG~1+Acceleration+Cylinders+Horsepower');
mdl2 = fitlm(tbl,'MPG~1+Acceleration');
% Comparing both models using the F-test and p-value
numerator = (mdl2.SSE-mdl1.SSE)/(mdl1.NumCoefficients-mdl2.NumCoefficients);
denominator = mdl1.SSE/mdl1.DFE;
F = numerator/denominator;
p = 1-fcdf(F,mdl1.NumCoefficients-mdl2.NumCoefficients,mdl1.DFE);
We end up with F = 298.75 and p = 0, indicating mdl1 is significantly better than mdl2, as assessed by the F statistic.
Is there any way to obtain the F and p values without running fitlm twice and doing all the computation?
I tried to run coefTest, as suggested by @Glen_b; however, the function is poorly documented and the results are not the ones I'm expecting.
[p,F] = coefTest(mdl1); % p = 0, F = 262.508 (this F test mdl1 vs constant mdl)
[p,F] = coefTest(mdl1,[0,0,1,1]); % p = 0, F = 57.662 (not sure what this is testing)
[p,F] = coefTest(mdl1,[1,1,0,0]); % p = 0, F = 486.810 (idem)
I believe I should carry out the test with a different null hypothesis (C) using the function [p,F] = coefTest(mdl1,H,C), but I don't really know how to do it and there's no example.
This answer is in regards to comparing two linear regression models where one model is a restricted version of the other.
Short answer:
To do an F-test on the restriction that the 3rd and 4th elements of your estimated coefficient vector b are zero:
[p, F] = coefTest(mdl1, [0, 0, 1, 0; 0, 0, 0, 1]);
Further explanation:
Let b be our estimated coefficient vector. Linear restrictions on b are typically written in matrix form: R*b = r. The restriction that the 3rd and 4th elements of b are zero would be written:
[0, 0, 1, 0;          [0;
 0, 0, 0, 1] * b  =    0]
The matrix [0, 0, 1, 0; 0, 0, 0, 1] is what coefTest calls the H matrix in the docs.
P = coefTest(M,H), with H a numeric matrix having one column for each
coefficient, performs an F test that H*B=0, where B represents the
coefficient vector.
Long version
Sometimes with these econometric routines, it's nice to just write things out yourself so you know what's really going on.
Remove rows with NaN because they just add unrelated complexity:
tbl_dirty = table(Acceleration,Cylinders,Horsepower,MPG);
tbl = tbl_dirty(~any(ismissing(tbl_dirty),2),:);
Do the estimation etc...
n = height(tbl); % number of observations
y = tbl.MPG;
X = [ones(n, 1), tbl.Acceleration, tbl.Cylinders, tbl.Horsepower];
k = size(X,2); % number of variables (including constant)
b = X \ y; % estimate b with least squares
u = y - X * b; % calculates residuals
s2 = u' * u / (n - k); % estimate variance of error term (assuming homoskedasticity, independent observations)
BCOV = inv(X'*X) * s2; % get covariance matrix of b assuming homoskedasticity of error term etc...
bse = diag(BCOV).^.5; % standard errors
R = [0, 0, 1, 0;
0, 0, 0, 1];
r = [0; 0]; % Testing restriction: R * b = r
num_restrictions = size(R, 1);
F = (R*b - r)'*inv(R * BCOV * R')*(R*b - r) / num_restrictions; % F-stat (see Hayashi for reference)
Fp = 1 - fcdf(F, num_restrictions, n - k); % F p-val
For reference, see p. 65 of Hayashi's book Econometrics.
No, there is not.
fitlm fits an arbitrary model; in your case, a regression model with an intercept and either one or three regressors. It might seem that the model with three regressors could reuse information from the model with one regressor, but this is only true if there are some restrictions on the model, and even then the overlapping information is limited.
fitlm is a very general framework that can be used for arbitrary models. Doing multiple regressions at the same time while sharing information can thus get quite complex and is not implemented.
It is possible to implement this yourself for these two specific models. Usually such a linear regression is solved using the covariance matrix:
Beta = (X'X)^-1 X'y
where X is the data with the variables as columns and y is the target variable. In this case you could reuse the part of the covariance matrix that only needs the columns from the smaller regression: the variation in Acceleration (see the sketch below). Since adding 2 new variables adds 8 values to the covariance matrix, you only save about 1/9 of the time. Furthermore, the heaviest part is the inversion, so the time improvement is very small.
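A minimal sketch of that reuse (illustrative only; it assumes the NaN rows were removed as in the other answer, so tbl and n are as defined there):
X = [ones(n,1), tbl.Acceleration, tbl.Cylinders, tbl.Horsepower];  % full design matrix
y = tbl.MPG;
XtX = X' * X;                        % shared by both models
Xty = X' * y;
b_full  = XtX \ Xty;                 % intercept + Acceleration + Cylinders + Horsepower
b_small = XtX(1:2,1:2) \ Xty(1:2);   % reuses only the intercept/Acceleration sub-block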
In short, just do two separate regressions.