Kriging spherical model not working in QGIS 3.24 (SagaGIS 7.8.2)

My work is based on QGIS 3.24, and the algorithm I am trying to make work is "Ordinary Kriging" from SagaGIS 7.8.2.
As some may recall, SagaGIS used to offer different models (equations) for fitting the variogram (VAR_MODEL). In my case I have to use the spherical model, which used to look like this:
a + b * ifelse(x > c, 1, 1.5 * x / c - 0.5 * x^3 / c^3)
where a is the nugget, b is the difference between sill and nugget, and c is the range.
After a few unsuccessful attempts I understood that QGIS doesn't like the power expressed with the "^" symbol, so I worked around it by substituting a simple multiplication (x^3 -> x * x * x). So my model now looks like this:
a + b * ifelse(x > c, 1, 1.5 * x / c - 0.5 * x * x * x / (c * c * c))
Now, I noted that the ifelse statement, which sounds R-ish to me, is not working, but no particular error is raised. Removing the ifelse wrapper with its test and true branch (in our case the ifelse(x>c,1, part) makes the formula run, but the results are clearly absurd.
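For reference, this is the piecewise behaviour the formula is meant to express, written out as a minimal Octave sketch (not SAGA syntax; a, b, c as defined above, x is the lag distance):
function gamma = spherical_model (x, a, b, c)
  % a = nugget, b = sill - nugget, c = range
  gamma = zeros (size (x));
  inside = (x <= c);                       % lags within the range
  gamma(inside) = a + b * (1.5 * x(inside) / c - 0.5 * (x(inside) / c) .^ 3);
  gamma(~inside) = a + b;                  % beyond the range: the sill (nugget + partial sill)
endfunction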
Can anyone help me out with this problem? Is there any workaround?
Davide

Related

Removing DC component from a matrix in chunks in Octave

I'm new to Octave, and if this has been asked and answered then I'm sorry, but I have no idea what the phrase is for what I'm looking for.
I'm trying to remove the DC component from a large matrix, but in chunks, as I need to do calculations on each chunk.
What I've got so far:
r = dlmread('test.csv',';',0,0);
x = r(:,2);
y = r(:,3); % we work on the 3rd column
d = 1
while d <= (length(y) - 256)
e = y(d:d+256);
avg = sum(e) / length(e);
k(d:d+256) = e - avg; % this is the part I need help with, how to get the chunk with the right value into the matrix
d += 256;
endwhile
% to check the result I like to see it
plot(x, k, '.');
if I change the line into:
k(d:d+256) = e - 1024;
it works perfectly.
I know there is something like an element-wise operation, but if I use e .- avg I get this:
warning: the '.-' operator was deprecated in version 7
and it still doesn't do what I expect.
I must be missing something, any suggestions?
GNU Octave, version 7.2.0 on Linux(Manjaro).
Never mind, the code works as expected.
The result (k) got corrupted because the chosen chunk size was too small for my signal. Changing 256 to 4096 got me a better result.
+ and - are always element-wise. Beware that d:d+256 is 257 elements, not 256. So if you then increment d by 256, you have one overlapping point.
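For illustration, a minimal sketch of the same loop with non-overlapping 256-sample chunks (assuming y is loaded as above; any trailing partial chunk is left untouched, as in the original):
chunk = 256;
d = 1;
k = zeros (size (y));
while d + chunk - 1 <= length (y)
  e = y(d : d + chunk - 1);              % exactly 256 samples
  k(d : d + chunk - 1) = e - mean (e);   % subtract this chunk's DC component
  d += chunk;                            % step to the next non-overlapping chunk
endwhile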

Octave: get function_handle from vector function with constants

I've tried to get a vector function like
syms x
fn=function_handle([x^2;1])
Output is @(x) [x.^2;1]
That of course leads to an error when calling fn with vector arguments
(Dimensions mismatch)
Is there a way to avoid the issue?
I've tried fn=function_handle([x^2;1+0*x])
but the code optimization - or whatever - deletes the 0*x term.
Any suggestions?
If you think about it, what function_handle does here is reasonable, since the scenario you require here cannot be reliably predicted in advance. So I don't think there is an obvious option to change its behaviour.
You could deal with this in a couple of ways.
One way is to treat the function handle as unvectorised, and rely on external vectorization, e.g.
f = function_handle([x^2; 1]);
[arrayfun( f, [1,2,3,4,5], 'uniformoutput', false ){:}]
Alternatively, you could introduce a symbolic helper constant c, and call f appropriately. You can also create a wrapper function that uses an appropriate default constant. Examples:
f = function_handle([x^2; c], 'vars', [x,c]);
f( [1,2,3,4,5], [1,1,1,1,1] )
g = @(x) f( x, ones(size(x)) );
g([1,2,3,4,5])
or
f = function_handle([x^2; (x+c)/x], 'vars', [x,c]);
f([1,2,3,4,5], 0)
g = @(x) f( x, 0 )
g([1,2,3,4,5])
Thank you.
Today I'm happy with a solution I've found.
I turn the array fcn into a cell fcn:
f_h_Cell=@(x, y) {x .* y, 0}
nf = @(x) mifCell2Mat (f_h_Cell (x (size (x) (1) * 0 / 2 + 1:size (x) (1) * 1 / 2,{':'} (ones (1, ndims (x) - 1)){:}), x (size (x) (1) * 1 / 2 + 1:size (x) (1) * 2 / 2, {':'} (ones (1, ndims (x) - 1)) {:})))
and then:
function res=mifCell2Mat(resCell)
resCell=transpose(resCell);
[~,idx]=max(cellfun(@numel,resCell));
refSize=(size(resCell{idx}));
resCell=cellfun(@(x) x+zeros(refSize),resCell,'uniformoutput',false);
res=cell2mat(resCell);
endfunction
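For illustration, a minimal call of this helper with hypothetical inputs (using the f_h_Cell defined above):
f_h_Cell = @(x, y) {x .* y, 0};
out = f_h_Cell ([1; 2; 3], [4; 5; 6]);   % {[4; 10; 18], 0}
mifCell2Mat (out)                        % pads the scalar 0 to [0; 0; 0] and stacks: ans = [4; 10; 18; 0; 0; 0]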
This is all automated by calling the following function:
f=fcn(name,domain,parms,fcn);
so a simple f.nf([x;y;z]) call gives the result.
Of course it doesn't work if there are numels between 1 and, say, size=[10,10], e.g. size=[10,1], but so what ... In most cases it works for me (until now: always).
Oh, while reading my code just here, I've found a little bug:
refSize=(size(resCell{idx}));
must of course change to
refSize=(size(resCell{idx(1)}));
because there can be more than one maximal size in idx, so I've picked the first. I first test for constant outDims, so that this workaround only occurs if there are constants. In the other cases (if all outDims contain domain vars) a simple anonymous function of a matrix handle appears to the user:
f_h_Mat=@(x, y) [x .* y; x]
nf=@(x) f_h_Mat (x (size (x) (1) * 0 / 2 + 1:size (x) (1) * 1 / 2, {':'} (ones (1, ndims (x) - 1)) {:}), x (size (x) (1) * 1 / 2 + 1:size (x) (1) * 2 / 2, {':'} (ones (1, ndims (x) - 1)) {:}))

Using linear approximation to perform addition and subtraction | error barrier

I'm attempting my first solo project after taking an introductory course on machine learning, where I'm trying to use linear approximation to predict the outcome of addition/subtraction of two numbers.
I have 3 features: first number, subtraction/addition (0 or 1), and second number.
So my input looks something like this:
3 0 1
4 1 2
3 0 3
With corresponding output like this:
2
6
0
I have (I think) successfully implemented the linear regression algorithm, as the squared error does gradually decrease, but with 100 training values ranging from 0 to 50, the squared error flattens out at around 685.6 after about 400 iterations.
Graph: Squared Error vs Iterations.
To fix this, I have tried using a larger dataset for training, getting rid of regularization, and normalizing the input values.
I know that one of the steps to fix high bias is to add complexity to the approximation, but I want to maximize the performance at this particular level. Is it possible to go any further on this level?
My linear approximation code in Octave:
% Iterate
for i = 1 : iter
% hypothesis
h = X * Theta;
% reg theta prep
regTheta = Theta;
regTheta(:, 1) = 0;
% cost calc
J(i, 2) = (1 / (2 * m)) * (sum((h - y) .^ 2) + lambda * sum(sum(regTheta .^ 2,1),2));
% theta calc (gradient descent update; the regularization term uses regTheta)
Theta = Theta - (alpha / m) * ((h - y)' * X)' - (alpha * lambda / m) * regTheta;
end
Note: I'm using 0 for lambda, so as to ignore regularization.

Which Python regression function to use for linear regression curve

I'm trying to replicate a function in Python and was able to code the following using multiple columns in a dataframe, but was wondering if there is a Python regression function that would do this more effectively. Here is the link to the description of the function. Sorry in advance, not really a stats guy. :)
http://tlc.thinkorswim.com/center/reference/thinkScript/Functions/Statistical/Inertia.html
It states that it's the linear regression curve using the least-squares method to approximate data for each set of bars.
input y = close;
input n = 20;
def x = x[1] + 1; # previous value + 1
def a = (n * Sum(x * y, n) - Sum(x, n) * Sum(y, n) ) / ( n * Sum(Sqr(x), n) -Sqr(Sum(x, n)));
def b = (Sum(Sqr(x), n) * Sum(y, n) - Sum(x, n) * Sum(x * y, n) ) / ( n * Sum(Sqr(x), n) - Sqr(Sum(x, n)));
plot InertiaTS = a * x + b;
Thanks
Updated
Here are the pandas columns and the function. I first defined the xValue and ysValue columns and then the following, which is the raw calculation:
df['ind1']= ((10 * (df['xValue']*df['ysValue']).rolling(10, min_periods=10).sum() - df['xValue'].rolling(10, min_periods=10).sum()*df['ysValue'].rolling(10, min_periods=10).sum())/ (10 * (df['xValue'] ** 2).rolling(10, min_periods=10).sum() - (df['xValue'].rolling(10, min_periods=10).sum())**2)) * df['xValue'] + (((df['xValue'] ** 2).rolling(10, min_periods=10).sum()*df['ysValue'].rolling(10, min_periods=10).sum() - df['xValue'].rolling(10, min_periods=10).sum()*(df['xValue']*df['ysValue']).rolling(10, min_periods=10).sum())/(10 * (df['xValue'] ** 2).rolling(10, min_periods=10).sum() - (df['xValue'].rolling(10, min_periods=10).sum())**2))
It's not really clear whether you are just looking for a way to perform regression in Python or want to code the algorithm yourself.
If you want a package to do the regression, you can look at scikit-learn, using:
from sklearn import linear_model
linear_model.LinearRegression()
If you want to code your own algorithm, you can look at gradient descent. There is a video by Andrew Ng on Coursera: https://www.coursera.org/learn/machine-learning/lecture/GFFPB/gradient-descent-intuition. It's fairly intuitive to code the algorithm; the steps are as follows:
i. Define a cost function. This is based on OLS (ordinary least squares) and looks like:
J = 1/2 * (h(x) - y)^2
ii. Take the partial derivative of the cost function with respect to each feature j. Here X is the input vector comprising n features, one of which is j.
iii. Update the parameter vector using gradient descent:
theta = theta - alpha * (partial derivative)
You can find the details in Andrew Ng's notes: http://cs229.stanford.edu/notes/cs229-notes1.pdf
Sorry, it's difficult to put LaTeX on SO.
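Putting the three steps together as a minimal Octave-style sketch (matching the linked course; X, y, alpha and num_iters are assumed to be defined, with a column of ones in X for the intercept):
m = rows (X);                    % number of training examples
theta = zeros (columns (X), 1);  % one parameter per feature
for it = 1:num_iters
  h = X * theta;                            % hypothesis / predictions
  J = (1 / (2 * m)) * sum ((h - y) .^ 2);   % i. OLS cost (track this to check convergence)
  grad = (1 / m) * (X' * (h - y));          % ii. partial derivatives w.r.t. each theta(j)
  theta = theta - alpha * grad;             % iii. gradient descent update
end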

Matlab plotting the shifted logistic function

I would like to plot the shifted logistic function as shown from Wolfram Alpha.
In particular, I would like the function to be of the form
y = exp(x - t) / (1 + exp(x - t))
where t > 0. In the link, for example, t is 6. I had originally tried the following:
x = 0:.1:12;
y = exp(x - 6) ./ (1 + exp(x - 6));
plot(x, y);
axis([0 6 0 1])
However, this is not the same as the result from Wolfram Alpha. Here is an export of my plot.
I do not understand what the difference is between what I am trying to do here vs. plotting shifted sin and cosine functions (which works using the same technique).
I am not completely new to Matlab but I do not usually use it in this way.
Edit: My values for x in the code should have been from 0 to 12.
fplot takes as inputs a function handle and a range to plot over:
>> fplot(@(x) exp(x-6) ./ (1 + exp(x-6)), [0 12])
The beauty of fplot in this case is you don't need to spend time calculating y-values beforehand; you could also extract values from the graph after the fact if you want (by getting the line handle's XData and YData properties).
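For example (a small sketch, assuming a MATLAB/Octave version where fplot returns a handle to the plotted line):
h = fplot(@(x) exp(x - 6) ./ (1 + exp(x - 6)), [0 12]);
xd = get(h, 'XData');   % the x-values fplot chose to sample
yd = get(h, 'YData');   % the corresponding y-values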
Your input to Wolfram Alpha is incorrect. It is interpreted as e*(x-6)/(1+e*(x-6)). Use plot y = exp(x - 6) / (1 + exp(x - 6)) for x from 0 to 12 in Wolfram Alpha (see here) for the same results as in MATLAB. Also use axis([0 12 0 1]) (or no axis statement at all on a new plot) to see the full results in MATLAB.
In reply to your comment: use y = exp(1)*(x - 6) ./ (1 + exp(1)*(x - 6)); to do in MATLAB what you were doing in Wolfram Alpha.