Julia scoping: why does this function modify a global variable?

I'm a relative newcomer to Julia, and so far I am a fan of it. But coming from years of R programming, some scoping rules leave me perplexed.
Let's take this function. This behaves exactly as I would expect.
function foo1(x)
    y = x
    t = 1
    while t < 1000
        t += 1
        y += 1
    end
    return 42
end
var = 0;
foo1(var)
# 42
var
# 0
But when doing something similar on an array, it acts as a mutating function (it modifies its argument in the global scope!)
function foo2(x)
    y = x
    t = 1
    while t < 1000
        t += 1
        y[1] += 1
    end
    return 42
end
var = zeros(1);
foo2(var)
# 42
var
# 999.0
I realize I can fix this by changing the first line to y = copy(x), but what is the reason for such a (dangerous?) behaviour in the first place?

I would write an answer to this, but I think John Myles White has already done it better than I ever could so I'll just link to his blogpost:
https://www.juliabloggers.com/values-vs-bindings-the-map-is-not-the-territory-3/
In short, x = 1 and x[1] = 1 are very different operations. The first one is assignment—i.e. changing the binding of the variable x—while the second is syntactic sugar for calling the setindex! function, which, in the case of arrays, assigns to a location in the array. Assignment only changes which variables refer to which objects and never modifies any objects. Mutation only modifies objects and never changes which variables refer to which objects. This answer has a bit more detail about the distinction: Creating copies in Julia with = operator.
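A minimal Julia sketch of the distinction (the variable names are just for illustration):
a = [0.0]
b = a            # assignment: b is bound to the same array as a
b[1] = 1.0       # mutation: sugar for setindex!(b, 1.0, 1); modifies the shared array
a[1]             # 1.0 -- a sees the change, because a and b are the same object
c = copy(a)      # copy creates a new array with the same contents
c[1] = 2.0       # mutating c leaves a untouched
a[1]             # still 1.0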

Related

Schwefel function trying to find global minimum with three variables, but I am receiving an error from the function

I am writing a Schwefel function with three variables x1, x2 and x3, with x in (-400, 400), and I am trying to find the global minimum of the Schwefel function. Can anybody tell me what's wrong with the function code?
function output = objective_function(in)
    x1 = in(1);
    x2 = in(2);
    x3 = in(3);
    output=(-x1.*sin(sqrt(mod(x1)))+(-x2.*sin(sqrt(mod(x2)))+(-x3.*sin(sqrt(mod(x3)));
    ouput=[F1 F2];
Current Problems:
1: Mismatched delimiters in line:
output=(-x1.*sin(sqrt(mod(x1)))+(-x2.*sin(sqrt(mod(x2)))+(-x3.*sin(sqrt(mod(x3)));
2: Modulus requires a second argument. The second argument passed to the mod() function needs to be the divisor: mod() can only return the remainder if it knows both the number being divided (the dividend) and the number dividing it (the divisor). A call to mod() follows the form:
mod(dividend,divisor);
Aside:
mod() is often used accidentally in place of abs() (see the sketch after this list).
Modulus → mod(): returns the remainder after division.
Example: mod(10,3) = 1 → 10/3 = 3 with remainder 1
Absolute value → abs(): returns the absolute value/magnitude of the number.
Example: abs(-10) = 10 or abs(1 + 1i) = 1.4142
3: Variables F1 and F2 are not defined or initialized before being used. The variable called ouput may be a typo.
ouput=[F1 F2];
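If what you are after is the usual Schwefel objective, which uses abs() rather than mod(), the function body might look like the sketch below (the name schwefel_3var and the exact form are assumptions, not taken from your post):
function output = schwefel_3var(in)
    x1 = in(1);
    x2 = in(2);
    x3 = in(3);
    % The standard Schwefel terms use the absolute value, not the modulus
    output = -x1.*sin(sqrt(abs(x1))) - x2.*sin(sqrt(abs(x2))) - x3.*sin(sqrt(abs(x3)));
end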
Playground Script:
Not sure exactly what the function is supposed to do, but here is a script that you can modify to meet your needs. This gets rid of the errors but might need to be reconfigured to suit your exact functionality and output equation.
in = [1 2 3];
[output] = objective_function(in);
function [output] = objective_function(in)
    x1 = in(1);
    x2 = in(2);
    x3 = in(3);
    % Splitting into terms will help with debugging bracket balancing issues
    Divisor = 2;
    Term_1 = (-x1.*sin(sqrt(mod(x1,Divisor))));
    Term_2 = (-x2.*sin(sqrt(mod(x2,Divisor))));
    Term_3 = (-x3.*sin(sqrt(mod(x3,Divisor))));
    output = Term_1 + Term_2 + Term_3;
end
Ran using MATLAB R2019b

MaxFunEval option in fmincon

I am trying to use the fmincon function in MATLAB. I am getting a warning about an algorithm change when evaluating my function (warning shown at the end of the post). I wanted to use fminsearch, but I have 2 constraints I need to enforce. It doesn't make sense for MATLAB to change algorithms to evaluate my function because my constraints are very simple. I have provided the constraints and the relevant piece of code:
Constraints:
theta(0) + theta(1) < 1
theta(0), theta(1), theta(2), theta(3) > 0
% Solve MLE using fmincon
ret_1000 = returns(1:1000);
A = [1 1 0 0];
b = [.99999];
lb = [0; 0; 0; 0];
ub = [1; 1; 1; 1];
Aeq = [];
beq = [];
noncoln = [];
init_guess = [.2;.5; long_term_sigma; initial_sigma];
%option = optimset('FunValCheck', 1000);
options = optimset('fmincon');
options = optimset(options, 'MaxFunEvals', 10000);
[x, maxim] = fmincon(@(theta)Log_likeli(theta, ret_1000), init_guess, A, b, Aeq, beq, lb, ub, noncoln, options);
Warning:
Warning: The default trust-region-reflective algorithm does not solve problems with the constraints you
have specified. FMINCON will use the active-set algorithm instead. For information on applicable
algorithms, see Choosing the Algorithm in the documentation.
> In fmincon at 486
In GARCH_loglikeli at 30
Local minimum possible. Constraints satisfied.
fmincon stopped because the predicted change in the objective function
is less than the selected value of the function tolerance and constraints
are satisfied to within the selected value of the constraint tolerance.
<stopping criteria details>
No active inequalities.
All MATLAB variables are double by default. You can force a double using double(variableName), and you can get the type of a variable using class(variableName). I would use class on all your variables to make sure they are what you expect. I don't have fmincon, but I tried a variant of your code with fminsearch and it worked like a charm:
op = optimset('fminsearch');
op = optimset(op,'MaxFunEvals',1000,'MaxIter',1000);
a = sqrt(2);
banana = @(x)100*(x(2)-x(1)^2)^2+(a-x(1))^2;
[x,fval] = fminsearch(banana, [-1.2, 1],op)
Looking at the MATLAB documentation, I think your input arguments are not correct:
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
I think you need:
% Let's be ultra specific to solve this syntax issue
fun = @(theta) Log_likeli(theta, ret_1000);
x0 = init_guess;
% A is defined as A
% b is defined as b
Aeq = [];
beq = [];
% lb is defined as lb
% ub is not defined; not sure if that's going to be an issue
% (having lower but not upper bounds probably isn't),
% but it seemed worth a mention
ub = [];
nonlcon = [];
% options is defined as options
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
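If the algorithm-change warning itself bothers you, you can also pick the algorithm explicitly in the options instead of letting fmincon fall back on its own choice. A minimal sketch reusing the variables above; note the exact option interface depends on your MATLAB release (newer releases use optimoptions rather than optimset):
% Choose an algorithm that supports your constraints up front
options = optimset(options, 'Algorithm', 'active-set');   % or 'interior-point'
x = fmincon(fun, x0, A, b, Aeq, beq, lb, ub, nonlcon, options)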

Pointer to MATLAB function?

So I have a for-loop in MATLAB, where either a vector x will be put through one function, say, cos(x).^2, or a different choice, say, sin(x).^2 + 9.*x. The user will select which of those functions he wants to use before the for-loop.
My question is: I don't want the loop to check what the user selected on every iteration. Is there a way to use a pointer to a function (user defined, or otherwise) that every iteration will use automatically?
This is inside a script by the way, not a function.
Thanks
You can use function_handles. For your example (to run on all available functions using a loop):
x = 1:10; % list of input values
functionList = {@(x) cos(x).^2, @(x) sin(x).^2 + 9*x}; % function handle cell-array
for i = 1:length(functionList)
    functionOut{i} = functionList{i}(x); % output of each function applied to x
end
You can try something like the following:
userChoice = 2;
switch userChoice
    case 1
        myFun = @(x) sin(x).^2 + 9.*x;
    case 2
        myFun = @(x) cos(x).^2;
end
for k = 1:10
    x(k,:) = myFun(rand(1,10));
end

Maximize function with fminsearch

Within my daily work, I have got to maximize a particular function making use of fminsearch; the code is:
clc
clear all
close all
f = @(x,c,k) -(x(2)/c)^3*(((exp(-(x(1)/c)^k)-exp(-(x(2)/c)^k))/((x(2)/c)^k-(x(1)/c)^k))-exp(-(x(3)/c)^k))^2;
c = 10.1;
k = 2.3;
X = fminsearch(@(x) f(x,c,k),[4,10,20]);
It works fine, as I expect, but now an issue is coming up: I need to bound x within certain limits, as:
4 < x(1) < 5
10 < x(2) < 15
20 < x(3) < 30
To achieve the proper results, I should use the Optimization Toolbox, which I unfortunately do not have at hand.
Is there any way to get the same analysis by making use of only fminsearch?
Well, not using fminsearch directly, but if you are willing to download fminsearchbnd from the file exchange, then yes. fminsearchbnd does a bound constrained minimization of a general objective function, as an overlay on fminsearch. It calls fminsearch for you, applying bounds to the problem.
Essentially the idea is to transform your problem so that fminsearch sees an unconstrained problem, while the solution it returns respects your bounds. It is totally transparent: you call fminsearchbnd with a function, a starting point in the parameter space, and a set of lower and upper bounds.
For example, minimizing the Rosenbrock function with fminsearch returns the minimum at [1,1]. But if we apply lower bounds of 2 on each variable, then fminsearchbnd finds the bound-constrained solution at [2,4].
rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;
fminsearch(rosen,[3 3]) % unconstrained
ans =
1.0000 1.0000
fminsearchbnd(rosen,[3 3],[2 2],[]) % constrained
ans =
2.0000 4.0000
If you have no constraints on a variable, then supply -inf or inf as the corresponding bound.
fminsearchbnd(rosen,[3 3],[-inf 2],[])
ans =
1.4137 2
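Applied to the bounds in your question, the call might look like this (a sketch that assumes fminsearchbnd is on the path and reuses f, c and k from your code; the starting point is just an interior guess):
c = 10.1;
k = 2.3;
lb = [4 10 20];          % from 4 < x(1) < 5, 10 < x(2) < 15, 20 < x(3) < 30
ub = [5 15 30];
x0 = [4.5, 12, 25];      % interior starting point
X = fminsearchbnd(@(x) f(x,c,k), x0, lb, ub);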
Andrey has the right idea, and the smoother way of providing a penalty isn't hard: just add the distance to the equation.
To keep using the anonymous function:
f = @(x,c,k, Xmin, Xmax) -(x(2)/c)^3*(((exp(-(x(1)/c)^k)-exp(-(x(2)/c)^k))/((x(2)/c)^k-(x(1)/c)^k))-exp(-(x(3)/c)^k))^2 ...
    + (x < Xmin)*(Xmin' - x' + 10000) + (x > Xmax)*(x' - Xmax' + 10000);
The most naive way to bound x would be to give a huge penalty to any x that is not in the range.
For example:
function res = f(x,c,k)
    if x(1)>5 || x(1)<4
        penalty = 1000000000000;
    else
        penalty = 0;
    end
    res = penalty - (x(2)/c)^3*(((exp(-(x(1)/c)^k)-exp(-(x(2)/c)^k))/((x(2)/c)^k-(x(1)/c)^k))-exp(-(x(3)/c)^k))^2;
end
You can improve this approach by giving the penalty in a smoother way.
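For instance, a quadratic penalty on the amount of constraint violation grows smoothly from zero at the boundary instead of jumping; here is a sketch of that idea (the bounds come from the question, and the function name f_penalized is just for illustration):
function res = f_penalized(x,c,k)
    lb = [4 10 20];                          % lower bounds from the question
    ub = [5 15 30];                          % upper bounds from the question
    viol = max(0, lb - x) + max(0, x - ub);  % per-component violation, zero inside the bounds
    penalty = 1e6 * sum(viol.^2);            % quadratic, so it increases smoothly outside the box
    res = penalty - (x(2)/c)^3*(((exp(-(x(1)/c)^k)-exp(-(x(2)/c)^k))/((x(2)/c)^k-(x(1)/c)^k))-exp(-(x(3)/c)^k))^2;
end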

MatLab - nargout

I am learning MATLAB on my own, and I have this assignment in my book which I don't quite understand. Basically I am writing a function that will calculate sine through the use of a Taylor series. My code so far is as follows:
function y = sine_series(x,n)
%SINE_SERIES: computes sin(x) from series expansion
%   x may be entered as a vector to allow for multiple calculations simultaneously
if n <= 0
    error('Input must be positive')
end
j = length(x);
k = [1:n];
y = ones(j,1);
for i = 1:j
    y(i) = sum((-1).^(k-1).*(x(i).^(2*k-1))./(factorial(2*k-1)));
end
The book is now asking me to include an optional output err which will calculate the difference between sin(x) and y. The book hints that I may use nargout to accomplish this, but there are no examples in the book on how to use this, and reading the MATLAB help on the subject did not make me any wiser.
If anyone can please help me understand this, I would really appreciate it!
nargout returns the number of output arguments the function was called with. Depending on the value of nargout, you can assign entries to the variable-length output argument varargout. For your code this would look like:
function [y, varargout] = sine_series(x,n)
%SINE_SERIES: computes sin(x) from series expansion
%   x may be entered as a vector to allow for multiple calculations simultaneously
if n <= 0
    error('Input must be positive')
end
j = length(x);
k = [1:n];
y = ones(j,1);
for i = 1:j
    y(i) = sum((-1).^(k-1).*(x(i).^(2*k-1))./(factorial(2*k-1)));
end
if nargout == 2
    varargout{1} = sin(x)' - y;
end
Compare the output of
[y] = sine_series(rand(1,10),3)
and
[y err] = sine_series(rand(1,10),3)
to see the difference.
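An equivalent pattern that avoids varargout is to name the optional output and compute it only when the caller asks for it; a sketch along the same lines (not from the original answer):
function [y, err] = sine_series(x, n)
%SINE_SERIES: computes sin(x) from series expansion, with an optional error output
if n <= 0
    error('Input must be positive')
end
k = 1:n;
y = ones(length(x), 1);
for i = 1:length(x)
    y(i) = sum((-1).^(k-1).*(x(i).^(2*k-1))./(factorial(2*k-1)));
end
if nargout > 1
    err = sin(x)' - y;   % only computed when a second output is requested
end
Called as y = sine_series(x, n), the error is never computed; called as [y, err] = sine_series(x, n), it is.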