I have a problem performing a nonlinear fit with GNU Octave. Basically, I need to perform a global fit with some shared parameters while keeping others fixed.
The following code works perfectly in Matlab, but Octave returns an error:
error: operator *: nonconformant arguments (op1 is 34x1, op2 is 4x1)
Attached are my code and the data to play with:
clear
close all
clc
pkg load optim
D = dlmread('hd', ';'); % raw data
bkg = D(1,2:end); % 4 sensors bkg
x = D(2:end,1); % input signal
Y = D(2:end,2:end); % 4 sensors response
W = 1./Y; % weights
b0 = [7 .04 .01 .1 .5 2 1]; % educated guess for start the fit
%% model function
F = @(b) ((bkg + (b(1) - bkg).*(1-exp(-(b(2:5).*x).^b(6))).^b(7)) - Y) .* W;
opts = optimset("Display", "iter");
lb = [5 .001 .001 .001 .001 .01 1];
ub = [];
[b, resnorm, residual, exitflag, output, lambda, Jacob] = ...
lsqnonlin(F,b0,lb,ub,opts)
To give more info: in the array b0, the parameters b0(1), b0(6) and b0(7) are shared among the 4 datasets, while b0(2:5) are specific to each dataset.
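To make that sharing explicit, here is a small illustration (not part of the original script; model_col is a name I made up) of how the model for sensor j uses the shared parameters b(1), b(6), b(7) and the per-sensor rate b(1+j):
% Illustration only: the model evaluated one sensor (column) at a time.
% b(1), b(6), b(7) are shared; b(1+j) is specific to sensor j (j = 1..4).
model_col = @(b, j) bkg(j) + (b(1) - bkg(j)).*(1 - exp(-(b(1+j).*x).^b(6))).^b(7);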
Thank you for your help and suggestions! ;)
Raw data:
0,0.3105,0.31342,0.31183,0.31117
0.013229,0.329,0.3295,0.332,0.372
0.013229,0.328,0.33,0.33,0.373
0.021324,0.33,0.3305,0.33633,0.399
0.021324,0.325,0.3265,0.333,0.397
0.037763,0.33,0.3255,0.34467,0.461
0.037763,0.327,0.3285,0.347,0.456
0.069405,0.338,0.3265,0.36533,0.587
0.069405,0.3395,0.329,0.36667,0.589
0.12991,0.357,0.3385,0.41333,0.831
0.12991,0.358,0.3385,0.41433,0.837
0.25368,0.393,0.347,0.501,1.302
0.25368,0.3915,0.3515,0.498,1.278
0.51227,0.458,0.3735,0.668,2.098
0.51227,0.47,0.3815,0.68467,2.124
1.0137,0.61,0.4175,1.008,3.357
1.0137,0.599,0.422,1,3.318
2.0162,0.89,0.5335,1.645,5.006
2.0162,0.872,0.5325,1.619,4.938
4.0192,1.411,0.716,2.674,6.595
4.0192,1.418,0.7205,2.691,6.766
8.0315,2.34,1.118,4.195,7.176
8.0315,2.33,1.126,4.161,6.74
16.04,3.759,1.751,5.9,7.174
16.04,3.762,1.748,5.911,7.151
32.102,5.418,2.942,7.164,7.149
32.102,5.406,2.941,7.164,7.175
64.142,7.016,4.478,7.174,7.176
64.142,7.018,4.402,7.175,7.175
128.32,7.176,6.078,7.175,7.176
128.32,7.175,6.107,7.175,7.173
255.72,7.165,7.162,7.165,7.165
255.72,7.165,7.164,7.166,7.166
511.71,7.165,7.165,7.165,7.165
511.71,7.165,7.165,7.166,7.164
Given the function definition above, if you call F(b0) in the command window you get a 34x4 matrix, which is correct since the variable Y has the same size.
That way I can (in theory) compute the standard lsqnonlin objective, (fit - measured)^2.
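One thing worth trying (this is an assumption on my part: Octave's lsqnonlin from the optim package may expect the model function to return a column vector of residuals, whereas Matlab flattens a matrix of residuals for you) is to reshape the 34x4 residual matrix into a single column:
% Sketch only: same model as F above, but the residuals are returned as a 136x1 vector.
Fv = @(b) reshape(((bkg + (b(1) - bkg).*(1-exp(-(b(2:5).*x).^b(6))).^b(7)) - Y) .* W, [], 1);
[b, resnorm, residual, exitflag, output] = lsqnonlin(Fv, b0, lb, ub, opts)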
I have solved a differential equation with a neural net. I leave the code below with an example. I want to be able to compute the first derivative of this neural net with respect to its input "x" and evaluate this derivative for any "x".
1- Notice that I compute der = discretization.derivative. Is that the derivative of the neural net with respect to "x"? With this expression, if I type [first(der(phi, u, [x], 0.00001, 1, res.minimizer)) for x in xs] I get something that might be the derivative, but I cannot find a way to extract it into an array, let alone plot it. How can I evaluate this derivative at any point, let's say for all points in the array defined below as "xs"? Below, in the Update, I give a more straightforward approach I took to try to compute the derivative (but I still did not succeed).
2- Is there any other way to take the derivative of the neural net with respect to x?
I am new to Julia, so I am struggling a bit with how to manipulate the data types. Thanks for any suggestions!
Update: I found a way to see the symbolic expression for the neural net by doing the following:
predict(x) = first(phi(x,res.minimizer))
df(x) = gradient(predict, x)[1]
After running the two lines of code, type predict(x) or df(x) in the REPL and it will print the full neural net with the weights and biases of the solution. However, I cannot evaluate the gradient; it throws an error. How can I evaluate the gradient of my function predict(x) with respect to x?
The original code creating the neural net and solving the equation:
using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, Optim, DiffEqFlux
import ModelingToolkit: Interval, infimum, supremum
@parameters x
@variables u(..)
Dx = Differential(x)
a = 0.5
eq = Dx(u(x)) ~ -log(x*a)
# Initial and boundary conditions
bcs = [u(0.) ~ 0.01]
# Space and time domains
domains = [x ∈ Interval(0.01,1.0)]
# Neural network
n = 15
chain = FastChain(FastDense(1,n,tanh),FastDense(n,1))
discretization = PhysicsInformedNN(chain, QuasiRandomTraining(100))
@named pde_system = PDESystem(eq,bcs,domains,[x],[u(x)])
prob = discretize(pde_system,discretization)
const losses = []
cb = function (p,l)
push!(losses, l)
if length(losses)%100==0
println("Current loss after $(length(losses)) iterations: $(losses[end])")
end
return false
end
res = GalacticOptim.solve(prob, ADAM(0.01); cb = cb, maxiters=300)
prob = remake(prob,u0=res.minimizer)
res = GalacticOptim.solve(prob,BFGS(); cb = cb, maxiters=1000)
phi = discretization.phi
der = discretization.derivative
using Plots
analytic_sol_func(x) = (1.0+log(1/a))*x-x*log(x)
dx = 0.05
xs = LinRange(0.01,1.0,50)
u_real = [analytic_sol_func(x) for x in xs]
u_predict = [first(phi(x,res.minimizer)) for x in xs]
x_plot = collect(xs)
xconst = analytic_sol_func(1)*ones(size(xs))
plot(x_plot ,u_real,title = "Solution",linewidth=3)
plot!(x_plot ,u_predict,line =:dashdot,linewidth=2)
The solution I found consists of differentiating the approximation with the help of ForwardDiff.
So if the neural network approximation to the unknown function is called "funcres", then we take its derivative with respect to x as shown below.
using ForwardDiff
funcres(x) = first(phi(x,res.minimizer))
dxu = ForwardDiff.derivative.(funcres, Array(x_plot))
display(plot(x_plot,dxu,title = "Derivative",linewidth=3))
Here is my problem. I'm trying to solve a system of two differential equations with the two functions below. The part of the code that gives me some trouble is the variable "rho": "rho" is a function whose values are given in a file and that I tried to load into the variable rho.
function [xdot]=f2(x,t)
# Parameters of the equations
t=[1:1:35926];
x = dlmread('data_txt.txt');
rho=x(:,4);
beta = 0.68*10^-2;
g = 1.5;
l = 1.6;
# Definition of the system of 2 ODE's :
xdot(1) = ((rho-beta)/g)*x(1) + l*x(2);
xdot(2) = (beta/g)*x(1)-l*x(2);
endfunction
# Main script (resolution_effective2.m):
# Initial conditions for the two variables :
x0 = [0;1];
# Definition of the time-vector -> (initial time,temporal mesh,final time) :
t = linspace (1, 10, 10000);
# Resolution with the lsode routine :
x = lsode ("f2", x0, t);
# Plot of the two curves :
plot (t,x);
When I run my code, I get the following error:
>> resolution_effective2
error: f2: A(I) = X: X must have the same size as I
error: called from
f2 at line 34 column 9
resolution_effective2 at line 8 column 3
error: lsode: evaluation of user-supplied function failed
error: called from
resolution_effective2 at line 8 column 3
error: lsode: inconsistent sizes for state and derivative vectors
error: called from
resolution_effective2 at line 8 column 3
I know that my error comes from a size mismatch between some variables, but I have been looking for it for days now and I don't see it. Could someone give and explain an effective correction?
Thank you
The error might come from your function f2. Its first argument x should have the same dimension as x0 since x0 is the initial condition of x.
In your case, whatever you intend to be the first argument of f2 is ignored, since inside the function you overwrite it with x = dlmread('data_txt.txt'); this seems to be a mistake.
Then, xdot(1) = ((rho-beta)/g)*x(1) + l*x(2); will be a problem since rho is a vector.
You need to check the dimensions of x and rho.
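Here is a minimal sketch of how f2 could be rewritten, assuming rho is tabulated against the time grid t = 1:35926 you build in your function and that you want its value at the current integration time t (the file name and the column index come from your post; the interpolation is my assumption, and the arguments x and t are no longer clobbered):
function xdot = f2 (x, t)
  # Read the tabulated data once and keep it between calls.
  persistent t_data rho_data
  if (isempty (rho_data))
    D = dlmread ('data_txt.txt');
    t_data   = 1:rows (D);    # assumed time grid of the table
    rho_data = D(:,4);        # 4th column holds rho
  endif
  # Evaluate rho at the current time, so it is a scalar here.
  rho  = interp1 (t_data, rho_data, t, 'linear', 'extrap');
  beta = 0.68e-2;
  g    = 1.5;
  l    = 1.6;
  xdot = zeros (2, 1);
  xdot(1) = ((rho - beta)/g)*x(1) + l*x(2);
  xdot(2) = (beta/g)*x(1) - l*x(2);
endfunction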
I can't make matrices with variables in them for some reason. I get the following message:
>>> A= [a b ;(-1-a) (1-b); (1+a) b]
error: horizontal dimensions mismatch (2x3 vs 1x1)
Why is that? Please show me the correct way if I'm wrong.
In Matlab you first need to assign a variable before you can use it,
a = 1;
b = a+1;
This will thus give an error,
clear;
b = a+1; % ERROR! Undefined function or variable 'a'
Matlab never accepts unassigned variables. This is because, at the lowest level, you do not have a; you have machine code which is assigned the value of a. This is handled by the JIT compiler in Matlab, so you do not need to worry about it, though.
If you want to use something like the variables you have in maths, you need to express this specifically to Matlab. The object is called a sym, and the syntax that defines the sym x for a variable x is,
syms x;
That said, you can define a vector or a matrix as,
syms a b x y; % Assign the syms
A = [x y]; % Vector
B = [a b; (-1-a) (1-b); (1+a) b]; % Matrix.
The size of a matrix can be found with size(M), or for dimension n with size(M,n). You can calculate the matrix product M3 = M1*M2 if and only if M1 has size m x n and M2 has size n x p; the size of M3 will then be m x p. This also means that the operation A^N = A * A * ... is only allowed when m = n, that is, when the matrix is square. This can be verified in Matlab by the comparison,
syms a b
A = [a,1;56,b]
if size(A,1) == size(A,2)
disp(['A is a square matrix of size ', num2str(size(A,1))]);
else
disp('A is not square');
end
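For the multiplication rule itself, here is a quick numeric check (plain numeric matrices, no syms needed; the sizes are just an example):
M1 = rand(2,3);   % size 2x3
M2 = rand(3,4);   % size 3x4
M3 = M1*M2;       % allowed: inner dimensions match, size(M3) is [2 4]
% M2*M1 would throw an error: inner dimensions 4 and 2 do not match.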
These are the basic rules for assigning variables in Matlab as well as for matrix multiplication. Further, a Google search on the error error: 'x' undefined only gives me Octave hits. Are you using Octave? In that case I cannot guarantee that you can use sym objects or that the syntax is correct.
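For what it's worth, Octave does have a symbolic package (built on SymPy) that provides syms with essentially the same syntax; a minimal sketch, assuming the package is installed:
pkg load symbolic                    % Octave-Forge symbolic package (requires SymPy)
syms a b
A = [a b; (-1-a) (1-b); (1+a) b]     % 3x2 symbolic matrix, as in the question
size(A)                              % returns [3 2]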