2D heatmap from x,y data / Octave

I am trying to create a 2D heatmap from these two parameters:
diffQ = discharge(2:lgth) - discharge(1:lgth-1);
difft = time(2:lgth) - time(1:lgth-1);
My former colleague wrote code for just one parameter:
diff = discharge(2:lgth) - discharge(1:lgth-1);
bin = 1;
idiff = int32(diff);
mx = max(idiff);
mn = min(idiff);
msh = ones(((mx-mn)/bin)+2);
for i = 1:lgth-2
  y = ((idiff(i)-mn)/bin)+1;
  x = (((idiff(i+1)-mn)/bin)+1);
  msh(x,y) = msh(x,y) + 1;
end
mshl = log(log(log(msh)+1)+1);
newplot();
surface(mshl,'EdgeColor','none');
colormap(heatMapA);
but I can't figure out how to adapt the plot creation to my case.
Thanks a lot
Kozina
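A minimal sketch of how the same binning idea could be applied to the pair (diffQ, difft): the two series line up index by index, so each pair is counted in one cell of a 2D histogram and plotted the same way as before. The bin widths and the colormap ('hot' here; heatMapA in the original appears to be a custom map) are assumptions:
binQ = 1; bint = 1;                        % bin widths, assumed 1 as in the original
iQ = int32(diffQ);
it = int32(difft);
mnQ = min(iQ); mxQ = max(iQ);
mnt = min(it); mxt = max(it);
msh = ones(double((mxt-mnt)/bint)+2, double((mxQ-mnQ)/binQ)+2);
for i = 1:numel(iQ)                        % diffQ and difft are paired element-wise
  row = double((it(i)-mnt)/bint) + 1;      % row index from the time difference
  col = double((iQ(i)-mnQ)/binQ) + 1;      % column index from the discharge difference
  msh(row,col) = msh(row,col) + 1;
end
mshl = log(log(log(msh)+1)+1);             % same triple-log scaling as the original
newplot();
surface(mshl, 'EdgeColor', 'none');
colormap('hot');                           % replace with heatMapA if that custom map is available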

stargazer(): Create covariate labels with exponents?

I want to use stargazer() to create a table of the coefficients of three regressions. I have to rename the independent variables of the regressions, and the variable names include exponents. Is there a way to add exponents to the covariate.labels argument of the stargazer() function?
I tried the following code, but it did not work:
lb1 = expression(paste("D"^"AfD"))
lb2 = expression(paste("D"^"SPD-Reg."))
lb3 = expression(paste("D"^"CDU-Reg."))
lb4 = expression(paste("D"^"GroKo"))
lb5 = expression(paste("D"^"CDU-Amtszeit"))
lb6 = expression(paste("D"^"SPD-Amtszeit"))
label1 = c(lb1,lb2,lb3,lb4,lb5,lb6)
stargazer(MRBE, MLBE, RPBE, type = "html", covariate.labels = c(lb1, lb2, lb3, lb4, lb5, lb6),
          title = "Saisonbereinigung", out = "table1.html", dep.var.labels = c("Reg1", "Reg2", "Reg3"))
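A sketch of one commonly suggested workaround (untested against these exact models): covariate.labels expects plain character strings rather than plotmath expression() objects, and since the output is HTML, the exponents can be written with <sup> tags:
# Hedged sketch: character labels with HTML superscripts instead of expression()
lb1 <- "D<sup>AfD</sup>"
lb2 <- "D<sup>SPD-Reg.</sup>"
lb3 <- "D<sup>CDU-Reg.</sup>"
lb4 <- "D<sup>GroKo</sup>"
lb5 <- "D<sup>CDU-Amtszeit</sup>"
lb6 <- "D<sup>SPD-Amtszeit</sup>"

stargazer(MRBE, MLBE, RPBE, type = "html",
          covariate.labels = c(lb1, lb2, lb3, lb4, lb5, lb6),
          title = "Saisonbereinigung", out = "table1.html",
          dep.var.labels = c("Reg1", "Reg2", "Reg3"))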

Applying Periodic Boundary Conditions for u(x,t) array

So I think I applied the periodic boundary conditions incorrectly. This is the Lax-Wendroff method:
import numpy as np

def LW_hflux_eq(a, c, delt, delx, u0, flux):
    # L is assumed to be defined in the enclosing scope
    x = np.linspace(0, L, round(L/delx))
    t = np.linspace(0, (L/a)/4, round(((L/a)/4)/delt))
    u_arr = np.zeros((len(x), len(t)+1))
    # Initial condition
    u_arr[:, 0] = u0
    countx = np.arange(1, len(x)-1)
    countt = np.arange(0, len(t))
    # Lax-Wendroff (no limiter)
    for l in countt:
        for j in countx:
            u_arr[j, l+1] = u_arr[j, l] - c*(u_arr[j, l] + (((1-c)/2)*(u_arr[j+1, l] - u_arr[j, l])*flux) - (u_arr[j-1, l] + ((1-c)/2)*(u_arr[j, l] - u_arr[j-1, l])*flux))
        u_arr[-1, l+1] = u_arr[-2, l]
        u_arr[0, l+1] = u_arr[-1, l]
    return u_arr
The last two lines before the return are the PBC. Am I doing this correctly? I am getting weird errors down the road when applying this function and am trying to find the root cause.
Thanks!
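For comparison, here is a minimal sketch (not the exact scheme above, which carries the extra flux factor) of the textbook Lax-Wendroff update for u_t + a*u_x = 0 with genuinely periodic boundaries. With periodicity, every point is updated from wrapped neighbours taken at the old time level, so no separate boundary copy is needed; the two boundary lines in the posted loop copy old-time values instead, which lags the edges by one step and may be the source of the odd behaviour.
import numpy as np

def lw_periodic_step(u, c):
    # One Lax-Wendroff step for u_t + a*u_x = 0 with c = a*delt/delx,
    # using periodic wrap-around for the neighbours.
    up = np.roll(u, -1)   # u_{j+1}, the last point wraps to the first
    um = np.roll(u, 1)    # u_{j-1}, the first point wraps to the last
    return u - 0.5*c*(up - um) + 0.5*c**2*(up - 2.0*u + um)
Note also that with np.linspace(0, L, ...) the grid contains both x = 0 and x = L; for an L-periodic solution those two points represent the same physical location and should stay equal after each step.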

Meta-model of a field function in OpenTURNS 1.16rc1

After updating OpenTURNS from 1.15 to 1.16rc1 I have the following issue when building the meta-model of the field function. To reduce the computational burden I use:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-KolmogorovSamplingSize", 1)
algo = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos)
algo.run()
metaModel = ot.PointToFieldConnection(postProcessing, algo.getResult().getMetaModel())
The "FittingTest-KolmogorovSamplingSize" was removed from OpenTurns 1.16rc1 and when I try to replace the fitting test with:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMaximumSamplingSize", 10)
or with:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMinimumSamplingSize", 1)
the code freezes. Is there any solution for this?
The proposed solution is simply to use another distribution to model your data; you could have used any other multivariate continuous distribution of the proper dimension. IMO it is not a valid answer, as the distribution has no link to your data.
After inspection, it appears that the problem has nothing to do with the Lilliefors test. In OT 1.15 we were using this test (under the wrong name of Kolmogorov) to automatically select a distribution suited to the input sample, but we switched to a more sophisticated selection algorithm (see MetaModelAlgorithm::BuildDistribution). It is based on a first pass using the raw Kolmogorov test (thus ignoring the fact that parameters have been estimated), after which an information-based criterion is used to select the most relevant model (AIC, AICC or BIC, depending on the value of the "MetaModelAlgorithm-ModelSelectionCriterion" key in ResourceMap). The problem is caused by the TrapezoidalFactory class during the Kolmogorov phase. I will provide a fix ASAP in OpenTURNS master. In the meantime, I have adapted the proposed solution into something better suited to your data:
degree = 6
dimension_xi_X = 3
dimension_xi_Y = 450
enumerateFunction = ot.HyperbolicAnisotropicEnumerateFunction(dimension_xi_X, 0.8)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.StandardDistributionPolynomialFactory(ot.HistogramFactory().build(sample_X[:, i]))
     for i in range(dimension_xi_X)],
    enumerateFunction)
basisSize = enumerateFunction.getStrataCumulatedCardinal(degree)
#basis = ot.OrthogonalProductPolynomialFactory(
#    [ot.HermiteFactory()] * dimension_xi_X, enumerateFunction)
#basisSize = 450  # enumerateFunction.getStrataCumulatedCardinal(degree)
adaptive = ot.FixedStrategy(basis, basisSize)
projection = ot.LeastSquaresStrategy(
    ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
ot.ResourceMap.SetAsScalar("LeastSquaresMetaModelSelection-ErrorThreshold", 1.0e-7)
algo_chaos = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos,
                                         basis.getMeasure(), adaptive, projection)
algo_chaos.run()
result_chaos = algo_chaos.getResult()
meta_model = result_chaos.getMetaModel()
metaModel = ot.PointToFieldConnection(postProcessing, algo_chaos.getResult().getMetaModel())
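As a side note, the model-selection criterion mentioned above can be switched through ResourceMap if needed; a one-line illustration (the "BIC" value here is only an example, not a recommendation):
# Hedged illustration: choose the information criterion used by BuildDistribution
ot.ResourceMap.SetAsString("MetaModelAlgorithm-ModelSelectionCriterion", "BIC")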
I also implemented a quick and dirty estimator of the L2-error:
# Meta-model validation
iMax = 5
# Input values
sample_X_validation = ot.Sample(np.array(month_1_parameters_MSE.iloc[:iMax, 0:3]))
print("sample size=", sample_X_validation.getSize())
# sample_X = ot.Sample(month_1_parameters_MSE[['Rseries','Rsh','Isc']])
# Output values
#month_1_simulated.iloc[0:1].transpose()
Field = ot.Field(mesh, np.array(month_1_simulated.iloc[0:1]).transpose())
sample_Y_validation = ot.ProcessSample(1, Field)
for k in range(1, iMax):
    sample_Y_validation.add(np.array(month_1_simulated.iloc[k:k+1]).transpose())

graph = sample_Y_validation.drawMarginal(0)
graph.setColors(['red'])
drawables = graph.getDrawables()
graph2 = metaModel(sample_X_validation).drawMarginal(0)
graph2.setColors(['blue'])
drawables = graph2.getDrawables()
graph.add(graph2)
graph.setTitle('Model/Metamodel Validation')
graph.setXTitle(r'$t$')
graph.setYTitle(r'$z$')
drawables = graph.getDrawables()

L2_error = 0.0
for i in range(iMax):
    L2_error = (drawables[i].getData()[:, 1] - drawables[iMax+i].getData()[:, 1]).computeRawMoment(2)[0]
print("L2_error=", L2_error)
You get an error of 79.488 with the previous answer and 1.3994 with the new proposal. Here is a graphical comparison.
[Figure: comparison between the test data and the previous answer]
[Figure: comparison between the test data and the new proposal]
The solution is to use:
degree = 1
dimension_xi_X = 3
dimension_xi_Y = 450
enumerateFunction = ot.LinearEnumerateFunction(dimension_xi_X)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.HermiteFactory()] * dimension_xi_X, enumerateFunction)
basisSize = 450  # enumerateFunction.getStrataCumulatedCardinal(degree)
adaptive = ot.FixedStrategy(basis, basisSize)
projection = ot.LeastSquaresStrategy(
    ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
ot.ResourceMap.SetAsScalar("LeastSquaresMetaModelSelection-ErrorThreshold", 1.0e-7)
algo_chaos = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos,
                                         basis.getMeasure(), adaptive, projection)
algo_chaos.run()
result_chaos = algo_chaos.getResult()
meta_model = result_chaos.getMetaModel()
metaModel1 = ot.PointToFieldConnection(postProcessing, algo_chaos.getResult().getMetaModel())

Octave function terminates inexplicably

I am implementing an interpolating function in Octave, and I have the following in a Sublime Text file:
function root = HermiteInterp(x, y, yp)
  Q = zeros(2*length(x), 2*length(x));
  disp(Q);
  z = zeros(1, 2*length(x));
  new_y = zeros(1, 2*length(x));
  for i = 1:length(x)
    z((2*i)-1) = x(i);
    z(2*i) = x(i);
    new_y((2*i)-1) = y(i);
    new_y(2*i) = y(i);
    y_prime(2*i) = yp(i);
  end
  y_transpose = transpose(new_y);
  disp(y_transpose);
  yp_transpose = transpose(y_prime);
  append_to_Q = [y_transpose, Q];
  disp('test1');
  disp('test2');
Yet the function never makes it to the display statement. What's causing this?
The function most likely does reach the display statements: disp(Q) prints a large matrix, so the later output simply scrolls out of view. Use the arrow keys to scroll up and down in the GUI.
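If the flood of output from disp(Q) is the culprit, a hedged suggestion while debugging:
more on          % enable Octave's pager so long output is paged instead of scrolling past
% disp(Q);       % or simply comment out the large matrix dump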

Writing functions in R - calling external functions from libraries

So I am trying to take a bit of code that I use for interactive selection and identification. It works outside of a function but gives an error when I try to run it as a stand-alone function.
my.identify <- function(data)
{
  # allows you to create a polygon by clicking on the map
  region = locator(type = "o")
  n = length(region$x)
  p = Polygon(cbind(region$x, region$y)[c(1:n, 1), ])
  ps = Polygons(list(p), ID = "region")
  sps = SpatialPolygons(list(ps))
  # returns all data that overlaps the new polygon sps
  a = data[!is.na(overlay(data, sps)), ]  # here is the problem
  return(a)
}
Basically it doesn't want to run the overlay function (a function from the sp package). The error says it cannot find an inherited method:
Error in function (classes, fdef, mtable) : unable to find an
inherited method for function "overlay", for signature "matrix",
"SpatialPolygons"
Any ideas? I'm new to writing functions, so hopefully it will be easy.
This should work. overlay is deprecated and over should be used instead. The catch is that all objects should be Spatial*.
library(sp)  # provides Polygon, SpatialPolygons, SpatialPoints and over

xy <- data.frame(x = runif(40, min = -200, max = 200),
                 y = runif(40, min = -200, max = 200))
plot(xy)

my.identify <- function(data) {
  # allows you to create a polygon by clicking on the map
  region = locator(type = "o")
  n = length(region$x)
  p = Polygon(cbind(region$x, region$y)[c(1:n, 1), ])
  ps = Polygons(list(p), ID = "region")
  sps = SpatialPolygons(list(ps))
  # returns all data that overlaps the new polygon sps
  a = data[!is.na(over(SpatialPoints(data), sps)), ]
  return(a)
}

ident <- my.identify(xy)
points(ident, pch = 16)
You need to add a call to load the package in your function:
my.identify <- function(data)
{
  require('sp')  ## call to load the sp package for use in a stand-alone function
  # allows you to create a polygon by clicking on the map
  region = locator(type = "o")
  n = length(region$x)
  p = Polygon(cbind(region$x, region$y)[c(1:n, 1), ])
  ps = Polygons(list(p), ID = "region")
  sps = SpatialPolygons(list(ps))
  # returns all data that overlaps the new polygon sps
  a = data[!is.na(overlay(data, sps)), ]
  return(a)
}