Custom Simulink discrete-time integrator block for Bogacki-Shampine integration

I am trying to create my own discrete-time integrator in Simulink using the Bogacki-Shampine rule. The general formula for the rule (when the integrand is a function of time only) is:
y(n+1) = y(n) + (h/9)*(2*s1 + 3*s2 + 4*s3)
where:
s1 = x(n)
s2 = x(n+h/2)
s3 = x(n+3h/4)
which is equivalent to:
y(n) = y(n-1) + (h/9)*(2*s1 + 3*s2 + 4*s3)
where:
s1 = x(n-1)
s2 = x(n-h/2)
s3 = x(n-h/4)
Then I compared the results with the standard integrator block using the ode3 (Bogacki-Shampine) solver. The results were close, but not as close as I expected.
I am also not sure that I built this integrator correctly: since Bogacki-Shampine is a third-order method, I thought I would need three unit delays, but two were enough.
How can I improve this, or build another one that gives more accurate results?
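For what it's worth, here is a minimal MATLAB sketch of the recursion above, using an example input x(t) = cos(t) (my assumption, not from the question) so the exact integral sin(t) is available for comparison:

% Bogacki-Shampine update for y' = x(t), with x a function of time only
x = @(t) cos(t);          % example input signal (assumption)
h = 0.01;                 % step size
t = 0:h:10;
y = zeros(size(t));
for n = 1:numel(t)-1
    s1 = x(t(n));         % stage 1: current sample
    s2 = x(t(n) + h/2);   % stage 2: half a step ahead
    s3 = x(t(n) + 3*h/4); % stage 3: three quarters of a step ahead
    y(n+1) = y(n) + (h/9)*(2*s1 + 3*s2 + 4*s3);
end
max(abs(y - sin(t)))      % error against the exact integral of cos

As a quadrature rule, the weights 2/9, 3/9, 4/9 at the nodes 0, 1/2, 3/4 integrate polynomials up to degree two exactly, so the scheme is third-order accurate and halving h should shrink the error by roughly a factor of eight.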

Related

anova_test not returning Mauchly's for three-way within-subject ANOVA

I am using a data set called sleep (found here: https://drive.google.com/file/d/15ZnsWtzbPpUBQN9qr-KZCnyX-0CYJHL5/view) to run a three-way within-subject ANOVA comparing Performance based on Stimulation, Deprivation, and Time. I have successfully done this before using anova_test from rstatix. I want to look at the sphericity output, but it doesn't appear. I have gotten it to come up with other three-way within-subject datasets, so I'm not sure why this is happening. Here is my code:
anova_test(data = sleep, dv = Performance, wid = Subject, within = c(Stimulation, Deprivation, Time))
I also tried to save it to an object and use get_anova_table, but that didn't look any different.
sleep_aov <- anova_test(data = sleep, dv = Performance, wid = Subject, within = c(Stimulation, Deprivation, Time))
get_anova_table(sleep_aov, correction = "GG")
This is an ideal dataset I pulled from the internet, so I'm starting to think the data had a W of 1 (perfect sphericity) and so rstatix is skipping this output. Is this something anova_test does?
Here also is my code using a dataset that does return Mauchly's:
weight_loss_long <- pivot_longer(data = weightloss, cols = c(t1, t2, t3), names_to = "time", values_to = "loss")
weight_loss_long$time <- factor(weight_loss_long$time)
anova_test(data = weight_loss_long, dv = loss, wid = id, within = c(diet, exercises, time))
Not an expert at all, but it might be because your factors have only two levels.
From anova_summary() help:
"Value
return an object of class anova_test: a data frame containing the ANOVA table for independent measures ANOVA. However, for repeated/mixed measures ANOVA, a list containing the following components is returned:
ANOVA: a data frame containing ANOVA results
Mauchly's Test for Sphericity: If any within-Ss variables with more than 2 levels are present, a data frame containing the results of Mauchly's test for Sphericity. Only reported for effects that have more than 2 levels because sphericity necessarily holds for effects with only 2 levels.
Sphericity Corrections: If any within-Ss variables are present, a data frame containing the Greenhouse-Geisser and Huynh-Feldt epsilon values, and corresponding corrected p-values. "
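Based on that help text, a quick way to confirm this is to count the levels of each within-subject factor (a sketch, assuming the sleep data frame from the question):

# Mauchly's test is only reported for effects with more than 2 levels,
# so if every factor below shows 2, no sphericity output is expected.
sapply(sleep[, c("Stimulation", "Deprivation", "Time")],
       function(f) nlevels(as.factor(f)))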

Meta-model of the field function in OpenTURNS 1.16rc1

After updating OpenTURNS from 1.15 to 1.16rc1, I have the following issue with building the meta-model of the field function. To reduce the computational burden I used:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-KolmogorovSamplingSize", 1)
algo = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos)
algo.run()
metaModel = ot.PointToFieldConnection(postProcessing, algo.getResult().getMetaModel())
The "FittingTest-KolmogorovSamplingSize" was removed from OpenTurns 1.16rc1 and when I try to replace the fitting test with:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMaximumSamplingSize", 10)
Or with
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMinimumSamplingSize", 1)
The code is freezing. Is there any solution for this?
The proposed solution is simply to use another distribution to model your data. You could have used any other multivariate continuous distribution of proper dimension. IMO it is not a valid answer as the distribution has no link to your data.
After inspection, it appears that the problem has nothing to do with Lilliefors's test. In OT 1.15 we were using this test (under the wrong name of Kolmogorov) to automatically select a distribution suited to the input sample, but we switched to a more sophisticated selection algorithm (see MetaModelAlgorithm::BuildDistribution). It is based on a first pass using the raw Kolmogorov test (thus ignoring the fact that parameters have been estimated), then an information-based criterion is used to select the most relevant model (AIC, AICC or BIC, depending on the value of the "MetaModelAlgorithm-ModelSelectionCriterion" key in ResourceMap). The problem is caused by the TrapezoidalFactory class during the Kolmogorov phase. I will provide a fix ASAP in the OpenTURNS master branch. In the meantime, I have adapted the proposed solution to something better suited to your data:
degree = 6
dimension_xi_X = 3
dimension_xi_Y = 450
enumerateFunction = ot.HyperbolicAnisotropicEnumerateFunction(dimension_xi_X, 0.8)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.StandardDistributionPolynomialFactory(ot.HistogramFactory().build(sample_X[:, i]))
     for i in range(dimension_xi_X)], enumerateFunction)
basisSize = enumerateFunction.getStrataCumulatedCardinal(degree)
# Alternative from the previous answer:
# basis = ot.OrthogonalProductPolynomialFactory(
#     [ot.HermiteFactory()] * dimension_xi_X, enumerateFunction)
# basisSize = 450  # enumerateFunction.getStrataCumulatedCardinal(degree)
adaptive = ot.FixedStrategy(basis, basisSize)
projection = ot.LeastSquaresStrategy(
    ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
ot.ResourceMap.SetAsScalar("LeastSquaresMetaModelSelection-ErrorThreshold", 1.0e-7)
algo_chaos = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos,
                                         basis.getMeasure(), adaptive, projection)
algo_chaos.run()
result_chaos = algo_chaos.getResult()
meta_model = result_chaos.getMetaModel()
metaModel = ot.PointToFieldConnection(postProcessing, result_chaos.getMetaModel())
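As a side note, the information criterion mentioned above can be switched through the same ResourceMap mechanism (a sketch; "BIC" is just one of the three values listed above):

import openturns as ot

# Criterion used by MetaModelAlgorithm::BuildDistribution to rank
# candidate distributions: "AIC", "AICC" or "BIC"
ot.ResourceMap.SetAsString("MetaModelAlgorithm-ModelSelectionCriterion", "BIC")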
I also implemented a quick and dirty estimator of the L2-error:
# Metamodel validation
iMax = 5
# Input values
sample_X_validation = ot.Sample(np.array(month_1_parameters_MSE.iloc[:iMax, 0:3]))
print("sample size=", sample_X_validation.getSize())
# Output values
Field = ot.Field(mesh, np.array(month_1_simulated.iloc[0:1]).transpose())
sample_Y_validation = ot.ProcessSample(1, Field)
for k in range(1, iMax):
    sample_Y_validation.add(np.array(month_1_simulated.iloc[k:k+1]).transpose())
# Draw the model trajectories (red) and the metamodel ones (blue)
graph = sample_Y_validation.drawMarginal(0)
graph.setColors(['red'])
graph2 = metaModel(sample_X_validation).drawMarginal(0)
graph2.setColors(['blue'])
graph.add(graph2)
graph.setTitle('Model/Metamodel Validation')
graph.setXTitle(r'$t$')
graph.setYTitle(r'$z$')
drawables = graph.getDrawables()
L2_error = 0.0
for i in range(iMax):
    L2_error = (drawables[i].getData()[:, 1] - drawables[iMax + i].getData()[:, 1]).computeRawMoment(2)[0]
print("L2_error=", L2_error)
You get an error of 79.488 with the previous answer and 1.3994 with the new proposal. Here is a graphical comparison.
[Figure: comparison between the test data and the previous answer]
[Figure: comparison between the test data and the new proposal]
The solution is to use:
degree = 1
dimension_xi_X = 3
dimension_xi_Y = 450
enumerateFunction = ot.LinearEnumerateFunction(dimension_xi_X)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.HermiteFactory()] * dimension_xi_X, enumerateFunction)
basisSize = 450  # enumerateFunction.getStrataCumulatedCardinal(degree)
adaptive = ot.FixedStrategy(basis, basisSize)
projection = ot.LeastSquaresStrategy(
    ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
ot.ResourceMap.SetAsScalar("LeastSquaresMetaModelSelection-ErrorThreshold", 1.0e-7)
algo_chaos = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos,
                                         basis.getMeasure(), adaptive, projection)
algo_chaos.run()
result_chaos = algo_chaos.getResult()
meta_model = result_chaos.getMetaModel()
metaModel1 = ot.PointToFieldConnection(postProcessing, result_chaos.getMetaModel())

plotting interaction from mixed model lme4 with CI bands

I have the following mixed effects model:
p1 <- lmer(log(price) ~ year*loca + (1|author), data = df)
'year' is continuous
'loca' is a categorical variable with 2 levels
I am trying to plot the significant interaction from this model.
The following code (using the visreg package) plots the lines for each of the two 'loca' levels, but it does not produce a 95% confidence band:
visreg(p1, "year", by = "loca", overlay = T,
       line = list(lty = 1, col = c("grey", "black")),
       points = list(cex = 1, pch = 19, col = c("grey", "black")),
       type = "conditional", axes = T)
Then I tried the following code, which lets me plot the lines, but with no data points on top and no CIs:
visreg(p1, "year", by = "loca", overlay = T,
       line = list(lty = 1, col = c("grey60", "black")),
       points = list(cex = 1, pch = 19, col = c("grey", "black")),
       type = "conditional", trans = exp,
       fill.par = list(col = c("grey80", "grey70")))
I get CI bands when I use type = 'contrast' rather than type = 'conditional'; however, that doesn't work when I try to back-transform the price with trans = exp, as above.
Overall, I need to be able to plot the interaction with the following attributes:
confidence bands
back-transformed points
a predicted line (one for each level of 'loca')
More than happy to try other methods, but I can't seem to find any that work so far.
Help much appreciated!
One possibility is to use the effects package:
library(effects)
eff.p1 <- effect("year*loca", p1, KR=T)
Then you can either plot it directly with what the package provides and customize it from there:
plot(eff.p1)
or take what effect() produces and turn it into a nicer ggplot:
eff.p1 <- as.data.frame(eff.p1)
library(ggplot2)
ggplot(eff.p1, aes(year, linetype = factor(loca), color = factor(loca))) +
  geom_line(aes(y = fit, group = factor(loca)), size = 1.2) +
  geom_line(aes(y = lower, group = factor(loca)), linetype = 3) +
  geom_line(aes(y = upper, group = factor(loca)), linetype = 3) +
  xlab("year") +
  ylab("Marginal Effects on Log Price") +
  scale_colour_discrete("") +
  scale_linetype_discrete("") +
  labs(color = 'loca') +
  theme_minimal()
I can't really try the code without the data, but I think it should work.
This should do the trick:
install.packages("sjPlot")
library(sjPlot)
plot_model(p1, type = "int", terms = c("year", "loca"), ci.lvl = 0.95)
Although it comes out with some warnings about labels, testing on my data it does the back-transformation automatically and seems to work fine. Customising should be easy, because I believe sjPlot uses ggplot.
EDIT: @Daniel points out that alternative options, which allow more customization, are plot_model(type = "pred", ...) or plot_model(type = "eff", ...).
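For completeness, a sketch of those two alternatives (the model p1 and the variable names are from the question):

library(sjPlot)
# Predicted values for the year-by-loca interaction, with 95% CI ribbons
plot_model(p1, type = "pred", terms = c("year", "loca"), ci.lvl = 0.95)
# Marginal effects (computed via the effects package)
plot_model(p1, type = "eff", terms = c("year", "loca"))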

Ruby on Rails optimization of some code

I have some simple code that uses the minmax algorithm to locate birds. Everything works, but I find my programming poor and I believe there is a better solution. I'm not that experienced in RoR, but if somebody knows a better way to achieve the same result, I'm grateful ;).
There are two parts I hate: the four lists I had to create to determine the max or min value for the different combinations (the core of the min-max algorithm), and the very ugly SQL hack.
Thanks!
def index
  # fetch all our birds
  @birds = Bird.all
  # loop over the birds
  @birds.each do |bird|
    @fixed = Node.where("d7type = 'f'")
    xminmax = []
    xmaxmin = []
    yminmax = []
    ymaxmin = []
    @fixed.each do |fixed|
      rss = Log.find_by_sql("SELECT logs.fixed_mac, AVG(logs.blinker_rss) AS avg_rss FROM logs
        WHERE logs.blinker_mac = '#{bird.d7_mac}' AND logs.fixed_mac = '#{fixed.d7_mac}' ORDER BY logs.id DESC LIMIT 30")
      converted_rss = calculate_distance_rss(rss[0].attributes["avg_rss"])
      xminmax.push(fixed.xpos + converted_rss)
      xmaxmin.push(fixed.xpos - converted_rss)
      yminmax.push(fixed.ypos + converted_rss)
      ymaxmin.push(fixed.ypos - converted_rss)
    end
    pos = { x: (xminmax.min + xmaxmin.max) / 2, y: (yminmax.min + ymaxmin.max) / 2 }
    puts pos
  end
end
A few things you could do to start with. First (assuming birds could be a large table), change Bird.all to:
Bird.find_each do |bird|
  ... code ...
end
It's a more efficient way to loop over many table records.
Second: take @fixed = Node.where("d7type = 'f'") out of the each loop, since its query doesn't depend on any loop variables. Put it above the loop so it doesn't execute on every iteration.
Third (not so much an optimization as just safer code): your Log.find_by_sql looks simple enough to use Active Record instead; you can change it to:
rss = Log.select('fixed_mac, AVG(logs.blinker_rss) AS avg_rss, blinker_mac').
          where(blinker_mac: bird.d7_mac, fixed_mac: fixed.d7_mac).
          order('id DESC').limit(30)
converted_rss = calculate_distance_rss(rss.first.avg_rss)
Everything else looks fine.
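Putting those three suggestions together, the action could look roughly like this (a sketch; the model and helper names are taken from the question):

def index
  # Query the fixed nodes once, outside the loop
  fixed_nodes = Node.where(d7type: 'f')

  # find_each loads the birds in batches instead of all at once
  Bird.find_each do |bird|
    xminmax, xmaxmin, yminmax, ymaxmin = [], [], [], []

    fixed_nodes.each do |fixed|
      rss = Log.select('fixed_mac, AVG(logs.blinker_rss) AS avg_rss, blinker_mac').
                where(blinker_mac: bird.d7_mac, fixed_mac: fixed.d7_mac).
                order('id DESC').limit(30)
      converted_rss = calculate_distance_rss(rss.first.avg_rss)

      xminmax << fixed.xpos + converted_rss
      xmaxmin << fixed.xpos - converted_rss
      yminmax << fixed.ypos + converted_rss
      ymaxmin << fixed.ypos - converted_rss
    end

    pos = { x: (xminmax.min + xmaxmin.max) / 2, y: (yminmax.min + ymaxmin.max) / 2 }
    puts pos
  end
end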

How do I set a function to a variable in MATLAB

As a homework assignment, I'm writing code that uses the bisection method to calculate the root of a function of one variable within a range. I created a user function that does the calculations, but one of the inputs of the function is supposed to be "fun", which is set equal to the function being solved.
Here is my code, before I go on:
function [ Ts ] = BisectionRoot( fun, a, b, TolMax )
%This function finds the value of Ts by finding the root of a given function
%within a given range to a given tolerance, using the Bisection Method.
Fa = fun(a);
Fb = fun(b);
if Fa * Fb > 0
    disp('Error: The function has no roots in between the given bounds')
else
    % repeat the bisection step until the tolerance is met
    while true
        xNS = (a + b)/2;
        toli = abs((b - a)/2);
        FxNS = fun(xNS);   % MATLAB is case-sensitive, so fun(xns) would fail
        if FxNS == 0
            Ts = xNS;
            break
        end
        if toli < TolMax
            Ts = xNS;
            break
        end
        if fun(a) * FxNS < 0
            b = xNS;
        else
            a = xNS;
        end
    end
end
Ts
end
The input arguments are defined by our teacher, so I can't mess with them. We're supposed to set those variables in the command window before running the function. That way, we can use the program later on for other things. (Even though I think fzero() can be used to do this)
My problem is that I'm not sure how to set fun to something, and then use that in a way that I can do fun(a) or fun(b). In our book they do something they call defining f(x) as an anonymous function. They do this for an example problem:
F = @ (x) 8-4.5*(x-sin(x))
But when I try doing that, I get the error, Error: Unexpected MATLAB operator.
If you guys want to try running the program to test your solutions before posting (hopefully my program works!) you can use these variables from an example in the book:
fun = 8 - 4.5*(x - sin(x))
a = 2
b = 3
TolMax = .001
The answer they get in the book using those values is 2.430664.
I'm sure the answer to this is incredibly easy and straightforward, but for some reason, I can't find a way to do it! Thank you for your help.
To get you going, it looks like your example is missing some syntax. Instead of either of these (from your question):
fun = 8 - 4.5*(x - sin(x)) % missing the function handle declaration symbol "@"
F = @ (x) 8-4.5*(x-sin9(x)) % unless you have defined it, there is no function "sin9"
Use
fun = @(x) 8 - 4.5*(x - sin(x))
Then you would call your function like this:
fun = @(x) 8 - 4.5*(x - sin(x));
a = 2;
b = 3;
TolMax = .001;
root = BisectionRoot( fun,a,b,TolMax );
To debug (which you will need to do), use the debugger.
The command dbstop if error stops execution and opens the file at the point of the problem, letting you examine the variable values and function stack.
Clicking on the "-" marks in the editor creates a break point, forcing the function to pause execution at that point, again so that you can examine the contents. Note that you can step through the code line by line using the debug buttons at the top of the editor.
dbquit quits debug mode
dbclear all clears all break points
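Put together, a typical debugging session for the function above might look like this (a sketch using the example values from the question):

dbstop if error                           % pause and open the editor on any error
root = BisectionRoot(fun, a, b, TolMax);  % run the function under test
% ...inspect variables, step line by line with the debug buttons...
dbquit        % quit debug mode
dbclear all   % clear all break points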