Matlab plotting the shifted logistic function

I would like to plot the shifted logistic function as shown from Wolfram Alpha.
In particular, I would like the function to be of the form
y = exp(x - t) / (1 + exp(x - t))
where t > 0. In the link, for example, t is 6. I had originally tried the following:
x = 0:.1:12;
y = exp(x - 6) ./ (1 + exp(x - 6));
plot(x, y);
axis([0 6 0 1])
However, this is not the same as the result from Wolfram Alpha. Here is an export of my plot.
I do not understand what the difference is between what I am trying to do here and plotting shifted sine and cosine functions (which works using the same technique).
I am not completely new to Matlab but I do not usually use it in this way.
Edit: My values for x in the code should have been from 0 to 12.

fplot takes as inputs a function handle and a range to plot over:
>> fplot(@(x) exp(x-6) ./ (1 + exp(x-6)), [0 12])
The beauty of fplot in this case is you don't need to spend time calculating y-values beforehand; you could also extract values from the graph after the fact if you want (by getting the line handle's XData and YData properties).
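For example, a minimal sketch of that idea (assuming a MATLAB version where fplot returns a handle to the plotted line; in older releases you can grab it with get(gca, 'Children') instead):
h = fplot(@(x) exp(x - 6) ./ (1 + exp(x - 6)), [0 12]);  % keep the handle of the plotted line
xs = get(h, 'XData');  % x-values fplot actually sampled
ys = get(h, 'YData');  % corresponding y-values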

Your input to Wolfram Alpha is incorrect. It is interpreted as e*(x - 6)/(1 + e*(x - 6)). Use plot y = exp(x - 6) / (1 + exp(x - 6)) for x from 0 to 12 in Wolfram Alpha (see here) for the same results as in MATLAB. Also use axis([0 12 0 1]) (or no axis statement at all on a new plot) to see the full results in MATLAB.
In reply to your comment: use y = exp(1)*(x - 6) ./ (1 + exp(1)*(x - 6)); to do in MATLAB what you were doing in Wolfram Alpha.

Related

Non-linear fit in GNU Octave

I have a problem in performing a non linear fit with Gnu Octave. Basically I need to perform a global fit with some shared parameters, while keeping others fixed.
The following code works perfectly in Matlab, but Octave returns an error
error: operator *: nonconformant arguments (op1 is 34x1, op2 is 4x1)
Attached are my code and the data to play with:
clear
close all
clc
pkg load optim
D = dlmread('hd', ';'); % raw data
bkg = D(1,2:end); % 4 sensors bkg
x = D(2:end,1); % input signal
Y = D(2:end,2:end); % 4 sensors response
W = 1./Y; % weights
b0 = [7 .04 .01 .1 .5 2 1]; % educated guess to start the fit
%% model function
F = @(b) ((bkg + (b(1) - bkg).*(1-exp(-(b(2:5).*x).^b(6))).^b(7)) - Y) .* W;
opts = optimset("Display", "iter");
lb = [5 .001 .001 .001 .001 .01 1];
ub = [];
[b, resnorm, residual, exitflag, output, lambda, Jacob] = ...
lsqnonlin(F,b0,lb,ub,opts)
To give more info: in the array b0, b0(1), b0(6) and b0(7) are shared among the 4 datasets, while b0(2:5) are specific to each dataset.
Thank you for your help and suggestions! ;)
Raw data:
0,0.3105,0.31342,0.31183,0.31117
0.013229,0.329,0.3295,0.332,0.372
0.013229,0.328,0.33,0.33,0.373
0.021324,0.33,0.3305,0.33633,0.399
0.021324,0.325,0.3265,0.333,0.397
0.037763,0.33,0.3255,0.34467,0.461
0.037763,0.327,0.3285,0.347,0.456
0.069405,0.338,0.3265,0.36533,0.587
0.069405,0.3395,0.329,0.36667,0.589
0.12991,0.357,0.3385,0.41333,0.831
0.12991,0.358,0.3385,0.41433,0.837
0.25368,0.393,0.347,0.501,1.302
0.25368,0.3915,0.3515,0.498,1.278
0.51227,0.458,0.3735,0.668,2.098
0.51227,0.47,0.3815,0.68467,2.124
1.0137,0.61,0.4175,1.008,3.357
1.0137,0.599,0.422,1,3.318
2.0162,0.89,0.5335,1.645,5.006
2.0162,0.872,0.5325,1.619,4.938
4.0192,1.411,0.716,2.674,6.595
4.0192,1.418,0.7205,2.691,6.766
8.0315,2.34,1.118,4.195,7.176
8.0315,2.33,1.126,4.161,6.74
16.04,3.759,1.751,5.9,7.174
16.04,3.762,1.748,5.911,7.151
32.102,5.418,2.942,7.164,7.149
32.102,5.406,2.941,7.164,7.175
64.142,7.016,4.478,7.174,7.176
64.142,7.018,4.402,7.175,7.175
128.32,7.176,6.078,7.175,7.176
128.32,7.175,6.107,7.175,7.173
255.72,7.165,7.162,7.165,7.165
255.72,7.165,7.164,7.166,7.166
511.71,7.165,7.165,7.165,7.165
511.71,7.165,7.165,7.166,7.164
Given the function definition above, if you call F(b0) in the command window you get a 34x4 matrix, which is correct, since the variable Y has the same size.
That way I can (in theory) compute the standard quantity minimized by lsqnonlin, (fit - measured)^2.
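One thing worth checking (an assumption on my part, since the full call stack isn't shown): Octave's lsqnonlin may pass the parameter vector to F as a column, in which case b(2:5) is 4x1 and b(2:5).*x no longer broadcasts against the 34x1 column x the way it does with the 1x7 row b0. A sketch of a shape-independent version, which also returns the residuals as a single column in case matrix-valued residuals are not handled the way MATLAB handles them (the helper name bRow is just for illustration):
bRow = @(b) reshape(b(2:5), 1, []);  % force the 4 per-sensor parameters into a row
F = @(b) reshape(((bkg + (b(1) - bkg).*(1 - exp(-(x .* bRow(b)).^b(6))).^b(7)) - Y) .* W, [], 1);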

Using linear approximation to perform addition and subtraction | error barrier

I'm attempting my first solo project, after taking an introductory course to machine learning, where I'm trying to use linear approximation to predict the outcome of addition/subtraction of two numbers.
I have 3 features: first number, subtraction/addition (0 or 1), and second number.
So my input looks something like this:
3 0 1
4 1 2
3 0 3
With corresponding output like this:
2
6
0
I have (I think) successfully implemented the linear regression algorithm, as the squared error does gradually decrease, but with 100 training values ranging from 0 to 50, the squared error flattens out at around 685.6 after about 400 iterations.
Graph: Squared Error vs Iterations
To fix this, I have tried using a larger dataset for training, getting rid of regularization, and normalizing the input values.
I know that one of the steps to fix high bias is to add complexity to the approximation, but I want to maximize the performance at this particular level. Is it possible to go any further on this level?
My linear approximation code in Octave:
% Iterate
for i = 1 : iter
  % hypothesis
  h = X * Theta;
  % reg theta prep
  regTheta = Theta;
  regTheta(:, 1) = 0;
  % cost calc
  J(i, 2) = (1 / (2 * m)) * (sum((h - y) .^ 2) + lambda * sum(sum(regTheta .^ 2, 1), 2));
  % theta calc
  Theta = Theta - (alpha / m) * ((h - y)' * X)' + lambda * sum(sum(regTheta, 1), 2);
end
Note: I'm using 0 for lambda, so as to ignore regularization.
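For reference, the regularized update is usually written with the penalty folded into the gradient and scaled by alpha/m, rather than added as a separate term; a minimal sketch of that form, reusing the variable names from the code above (with lambda = 0 it reduces to plain gradient descent):
h = X * Theta;                                        % hypothesis
grad = (1 / m) * (X' * (h - y) + lambda * regTheta);  % gradient of the regularized cost
Theta = Theta - alpha * grad;                         % descent step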

find point of intersection between two arcs

I'm creating a game for kids. The game involves creating a triangle using 3 lines. My approach is to draw two arcs (semicircles) from the two end points of a base line, but I couldn't figure out how to find the point of intersection of those two arcs. I've searched for it but only found the point of intersection between two straight lines. Is there a method to find this point of intersection? Below is the figure of the two arcs drawn from each end of the baseline.
Assume the centers of the circles are (x1, y1) and (x2, y2) and the radii are R1 and R2. Let the ends of the base be A and B and the target point be T. We know that AT = R1 and BT = R2. IMHO the simplest trick to find T is to notice that the difference of the squares of the distances is a known constant (R1^2 - R2^2), and it is easy to see that the locus of points meeting this condition is a straight line perpendicular to the base. The circle equations are:
(x - x1)^2 + (y-y1)^2 = R1^2
(x - x2)^2 + (y-y2)^2 = R2^2
If we subtract one from another we'll get:
(x2 - x1)(2*x - x1 - x2) + (y2 - y1)(2*y - y1 - y2) = R1^2 - R2^2
Let x0 = (x1 + x2)/2 and y0 = (y1 + y2)/2 be the coordinates of the midpoint of the base. Let also the length of the base be L and its projections be dx = x2 - x1 and dy = y2 - y1 (i.e. L^2 = dx^2 + dy^2), and let Q = R1^2 - R2^2. So we can see that
2 * (dx * (x-x0) + dy*(y-y0)) = Q
So the line of all (x, y) pairs with R1^2 - R2^2 = Q = const is a straight line orthogonal to the base (because the coefficients of x and y are exactly dx and dy, i.e. the line's normal points along the base).
Let's find the point C on the base that is its intersection with that line. That is easy: C splits the base so that the difference of the squares of the two pieces is Q, and it is easy to find out that it is the point at a distance L/2 + Q/(2*L) from A and L/2 - Q/(2*L) from B. Applying Pythagoras to the right triangle ACT, we now find that
TC^2 = R1^2 - (L/2 + Q/(2*L))^2
Substituting back Q and simplifying a bit we can find that
TC^2 = (2*L^2*R1^2 + 2*L^2*R2^2 + 2*R1^2*R2^2 - L^4 - R1^4 - R2^4) / (4*L^2)
So let's
a = (R1^2 - R2^2)/(2*L)
b = sqrt(2*L^2*R1^2 + 2*L^2*R2^2 + 2*R1^2*R2^2 - L^4 - R1^4 - R2^4) / (2*L)
Note that formula for b can also be written in a different form:
b = sqrt[(R1+R2+L)*(-R1+R2+L)*(R1-R2+L)*(R1+R2-L)] / (2*L)
which looks quite similar to the Heron's formula. And this is not a surprise because b is effectively the length of the height to the base AB from T in the triangle ABT so its length is 2*S/L where S is the area of the triangle. And the triangle ABT obviously has sides of lengths L, R1 and R2 respectively.
To find the target T we move by a along the base from its midpoint and by b in the perpendicular direction. So the coordinates of T, calculated from the middle of the segment, are:
Xt = x0 + a * dx/L ∓ b * dy/L
Yt = y0 + a * dy/L ± b * dx/L
Here the upper signs go together and the lower signs go together, giving the two solutions: one on either side of the base line.
Partial case: if R1 = R2 = R, then a = 0 and b = sqrt(R^2 - (L/2)^2), which makes obvious sense: T lies on the perpendicular bisector of the segment at a distance sqrt(R^2 - (L/2)^2) from its midpoint.
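A minimal Octave/MATLAB sketch of these formulas (the function name and argument layout are just for illustration):
function [T1, T2] = arc_intersection(x1, y1, x2, y2, R1, R2)
  % intersection points of the circles centered at A = (x1,y1) and B = (x2,y2)
  % with radii R1 and R2, following the a/b construction above
  dx = x2 - x1;  dy = y2 - y1;
  L  = sqrt(dx^2 + dy^2);
  x0 = (x1 + x2) / 2;  y0 = (y1 + y2) / 2;
  a  = (R1^2 - R2^2) / (2 * L);
  b2 = (R1 + R2 + L) * (-R1 + R2 + L) * (R1 - R2 + L) * (R1 + R2 - L);
  if b2 < 0
    error('the arcs do not intersect');
  end
  b = sqrt(b2) / (2 * L);
  T1 = [x0 + a*dx/L - b*dy/L,  y0 + a*dy/L + b*dx/L];  % one side of the base
  T2 = [x0 + a*dx/L + b*dy/L,  y0 + a*dy/L - b*dx/L];  % the other side
end
For example, arc_intersection(0, 0, 4, 0, 3, 3) returns the two apex points of the isosceles triangle with base 4 and sides 3.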
Hope this helps.
While you have not stated it clearly, I assume that you have points with coordinates (A.X, A.Y) and (B.X, B.Y) and the lengths of two sides LenA and LenB, and need to find the coordinates of point C.
So you can make equation system exploiting circle equation:
(C.X - A.X)^2 + (C.Y - A.Y)^2 = LenA^2
(C.X - B.X)^2 + (C.Y - B.Y)^2 = LenB^2
and solve it for unknowns C.X, C.Y.
Note that it is worthwhile to subtract A's coordinates from all the others, make and solve the simpler system (the first equation becomes C'.X^2 + C'.Y^2 = LenA^2), and then add A's coordinates back again.
So I actually needed this to design a hopper to lift grapes during the wine harvest. I tried to work it out myself, but the algebra is horrible, so I had a look on the web. In the end I did it myself, but introduced some intermediate variables (which I calculate in Excel - this should also work for the OP, since the goal was a calculated solution). In fairness this is really much the same as the previous solutions, but hopefully a little clearer.
Problem:
What are the coordinates of a point P(Xp,Yp) distance Lq from point Q(Xq,Yq) and distance Lr from point R(Xr,Yr)?
Let us first map the problem onto a new coordinate system where Q is the origin, so Q' = (0,0); let (x,y) = P' = (Xp-Xq, Yp-Yq) and let (a,b) = R' = (Xr-Xq, Yr-Yq).
We may now write:
x^2 + y^2 = Lq^2 -(1)
(x-a)^2 + (y-b)^2 = Lr^2 -(2)
Expanding 2:
x^2 - 2ax + a^2 + y^2 - 2by + b^2 = Lr^2
Subtracting 1 and rearranging
2by = -2ax + a^2 + b^2 - Lr^2 + Lq^2
For convenience, let c = a^2 + b^2 + Lq^2 - Lr^2 (these are all known constants, so c may be easily computed), thus we obtain:
y = -ax/b + c/(2b)
Substituting into 1 we obtain:
x^2 + (-(a/b)*x + c/(2b))^2 = Lq^2
Multiply the entire equation by b^2 and gather terms:
(a^2 + b^2)*x^2 - ac*x + c^2/4 - Lq^2*b^2 = 0
Let A = a^2 + b^2, B = -ac, and C = c^2/4 - Lq^2*b^2
Use the general solution for a quadratic
x = (-B ± SQRT(B^2 - 4AC)) / (2A)
Substitute back into 1 to get:
y = ±SQRT(Lq^2 - x^2)
(choosing the sign that also satisfies 2; this avoids computational difficulties where b = 0)
Map back to original coordinate system
P = (x+Xq, y + Yq)
Hope this helps, sorry about the formatting, I had this all pretty in Word, but lost it

which python regression function to use for linear regression curve

I'm trying to replicate a function in Python and was able to code the following using multiple columns in a dataframe, but I was wondering if there is a Python regression function that would do this more effectively. Here is the link to the description of the function. Sorry in advance, not really a stats guy. :)
http://tlc.thinkorswim.com/center/reference/thinkScript/Functions/Statistical/Inertia.html
It states that it is the linear regression curve, using the least-squares method to approximate data for each set of bars.
input y = close;
input n = 20;
def x = x[1] + 1; # previous value + 1
def a = (n * Sum(x * y, n) - Sum(x, n) * Sum(y, n) ) / ( n * Sum(Sqr(x), n) -Sqr(Sum(x, n)));
def b = (Sum(Sqr(x), n) * Sum(y, n) - Sum(x, n) * Sum(x * y, n) ) / ( n * Sum(Sqr(x), n) - Sqr(Sum(x, n)));
plot InertiaTS = a * x + b;
Thanks
Updated
Here are the pandas columns and the function. I first defined the xValue and ysValue columns, and then the following, which is the raw calculation:
n = 10
Sx  = df['xValue'].rolling(n, min_periods=n).sum()
Sy  = df['ysValue'].rolling(n, min_periods=n).sum()
Sxy = (df['xValue'] * df['ysValue']).rolling(n, min_periods=n).sum()
Sxx = (df['xValue'] ** 2).rolling(n, min_periods=n).sum()
a = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)    # slope
b = (Sxx * Sy - Sx * Sxy) / (n * Sxx - Sx ** 2)  # intercept
df['ind1'] = a * df['xValue'] + b
It's not really clear whether you are just looking for a way to perform regression in Python or you want to code the algorithm yourself.
If you want a package to do the regression, you can look at scikit-learn.
Using,
from sklearn import linear_model
linear_model.LinearRegression()
If you want to code your own algorithm, you can look at gradient descent. You can look at a video by Andrew Ng on Coursera - https://www.coursera.org/learn/machine-learning/lecture/GFFPB/gradient-descent-intuition. It's fairly intuitive to code the algorithm; the steps are as follows:
i. Define a cost function - this is based on OLS (ordinary least squares) and, for a single training example, looks like:
J = 1/2 * (h(x) - y)^2
ii. Take the partial derivative of the cost function with respect to each parameter theta_j. Here x is the input vector comprised of n features, one for each j.
iii. Update the parameter vector using gradient descent:
theta = theta - alpha * (partial derivative)
You can find the details in Andrew Ng's lecture notes: http://cs229.stanford.edu/notes/cs229-notes1.pdf
sorry, it's difficult to put latex on SO

Subscript indices must be real positive integers or logicals

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
% theta = GRADIENTDESENT(X, y, theta, alpha, num_iters) updates theta by
% taking num_iters gradient steps with learning rate alpha
% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
% ====================== YOUR CODE HERE ======================
% Instructions: Perform a single gradient step on the parameter vector
% theta.
%
% Hint: While debugging, it can be useful to print out the values
% of the cost function (computeCost) and gradient here.
%
hypothesis = x*theta;
theta_0 = theta(1) - alpha(1/m)*sum((hypothesis-y)*x);
theta_1 = theta(2) - alpha(1/m)*sum((hypothesis-y)*x);
theta(1) = theta_0;
theta(2) = theta_1;
% ============================================================
% Save the cost J in every iteration
J_history(iter) = computeCost(X, y, theta);
end
end
I keep getting this error
error: gradientDescent: subscript indices must be either positive integers less than 2^31 or logicals
on this line, right in between the first theta and the =
theta_0 = theta(1) - alpha(1/m)*sum((hypothesis-y)*x);
I'm very new to octave so please go easy on me, and
thank you in advance.
This is from the coursera course on Machine Learning from Week 2
99% sure your error is on the line pointed out by topsig, where you have alpha(1/m)
It would help if you gave an example of input values to your function and what you hoped to see as an output, but I'm assuming from your comment
% taking num_iters gradient steps with learning rate alpha
that alpha is a constant, not a function. As such, you have the expression alpha(1/m) without any operator in between; Octave sees this as you indexing alpha with the value of 1/m.
i.e., if you had an array
x = [3 4 5]
x*(2) = [6 8 10] %% two times each element in the array
x(2) = [4] %% second element in the array
What you did doesn't seem to make sense, since m = length(y) outputs a scalar, so 1/m is generally not a valid index:
x = [3 4 5]; m = 3;
x*(1/m) = x*(1/3) = [1.0000 1.3333 1.6667] %% each element divided by 3
x(1/m) = ___error___ %% the 1/3 element in the array makes no sense
Note that for certain errors Octave indicates the location of the error at the assignment operator (the equals sign near the start of the line). If it points there, you usually have to look elsewhere in the line for the actual error. Here, it was yelling at you for trying to apply a non-integer subscript (1/m).
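So a sketch of what the update presumably should look like (assuming the usual two-parameter setup from that exercise, where the first column of X is all ones and the second column is the single feature):
hypothesis = X * theta;  % note the capital X (the function argument); lowercase x is not defined here
theta_0 = theta(1) - alpha * (1/m) * sum((hypothesis - y) .* X(:,1));
theta_1 = theta(2) - alpha * (1/m) * sum((hypothesis - y) .* X(:,2));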