Modelica and event generation in functions

I try to understand when events are generated in Modelica. In the context of functions I noticed behaviour I didn't expect: functions appear to suppress event generation.
I am surprised, as this is, to my knowledge, not explicitly stated in the Modelica specification.
For example, if I run this model in OMEdit of OpenModelica 1.17.0
model timeEventTest
  Real z(start=0);
  Real dummy(start=0);
equation
  der(z) = dummy;
algorithm
  if time > 10 then
    dummy := 1;
  else
    dummy := -1;
  end if;
end timeEventTest;
I get the following output in the solver window of OMEdit
### STATISTICS ###
timer
events
1 state events
0 time events
solver: dassl
46 steps taken
46 calls of functionODE
44 evaluations of jacobian
0 error test failures
0 convergence test failures
0.000122251s time of jacobian evaluation
The simulation finished successfully.
Apart from the fact that the solver (I used dassl) interpreted the event at time=10 as a state event rather than a time event, the behaviour is as expected.
However, if I instead run the (mathematically identical) model
model timeEventTest2
  Real z(start=0);
equation
  der(z) = myfunc(time-10);
end timeEventTest2;
with myfunc defined as
function myfunc
  input Real x;
  output Real y;
algorithm
  if x > 0 then
    y := 1;
  else
    y := -1;
  end if;
end myfunc;
I obtain the following output in OMEdit
### STATISTICS ###
timer
events
0 state events
0 time events
solver: dassl
52 steps taken
79 calls of functionODE
63 evaluations of jacobian
13 error test failures
0 convergence test failures
0.000185296s time of jacobian evaluation
The simulation finished successfully.
Not only is the event at time = 10 NOT detected, the solver even got into some trouble, as indicated by the error test failures.
This is a trivial example; however, I can imagine that the apparent suppression of events by functions may result in major problems in larger models.
What did I miss here? Can I enforce the strict triggering of events within functions?
Some built-in functions also trigger events, e.g. div and mod (curiously, sign and abs don't).
Edit: Obviously, you need to run the examples at least to a time > 10s. I ran the simulations to 20s.

Functions in Modelica normally do not generate events.
See 8.5 Events and Synchronization in the Modelica Spec.
All equations and assignment statements within when-clauses and all assignment statements within function classes are implicitly treated with noEvent, i.e., relations within the scope of these operators never induce state or time events.
But it's possible to change this behavior:
Add the annotation GenerateEvents=true to the function.
However, this alone seems to be insufficient in some Modelica simulators (tested with OpenModelica v1.16.5 and Dymola 2021x).
To make it work in OpenModelica and Dymola, you either have to add the Inline annotation as well, or you have to assign the function output in a single line.
So if you rewrite your function as follows, you will get a state event:
function myfunc
  input Real x;
  output Real y;
algorithm
  y := if x > 0 then 1 else -1;
  annotation (GenerateEvents=true);
end myfunc;
or by additionally adding the Inline annotation:
function myfunc
  input Real x;
  output Real y;
algorithm
  if x > 0 then
    y := 1;
  else
    y := -1;
  end if;
  annotation (GenerateEvents=true, Inline=true);
end myfunc;
State event vs time event
To turn the state event in timeEventTest into a time event, change the if-condition to
if time >= 10 then
This is also covered in chapter 8.5 Events and Synchronization of the Modelica Spec. Only the following two cases trigger time events:
time >= discrete expression
time < discrete expression

Related

How does this recursive function in Julia work?

This code in Julia:
function seq(n)
    if n < 2
        return BigInt(2)
    else
        return 1/(3 - seq(n-1))
    end
end
# and then run
[seq(n) for n=1:10]
replicates the recursive sequence U_n = 1/(3 - U_(n-1)) with U_1 = 2, and it works. But can someone explain to me how it works? For every n, does it calculate every term before it, or does the "return" store the result somewhere so it can be reused later without recalculating every term for each n?
It's just a normal recursive function: it calls itself however many times it needs to in order to compute the result. It terminates because every call chain eventually reaches the base case. There is no implicit caching of results or anything like that—it recomputes the same result however many times the function is called. If you want to remember previously calculated values, you can use the Memoize package to automatically "memoize" return values. Here's a terser version of the unmemoized function:
julia> seq(n) = n < 2 ? BigFloat(2) : 1/(3-seq(n-1))
seq (generic function with 1 method)
julia> seq(1) # trigger compilation
2.0
julia> @time [seq(n) for n=1:100];
0.001152 seconds (20.00 k allocations: 1.069 MiB)
julia> @time [seq(n) for n=1:100];
0.001365 seconds (20.00 k allocations: 1.069 MiB)
I changed it to fit on a single line and to return BigFloat(2) instead of BigInt(2) since the function returns BigFloat for larger inputs because of the division operation. Note that the second timing is no faster than the first (slower, in fact, probably because garbage collection kicks in during the second but not the first). Here's the same thing but with memoization:
julia> using Memoize
julia> @memoize seqm(n) = n < 2 ? BigFloat(2) : 1/(3-seqm(n-1))
seqm (generic function with 1 method)
julia> seqm(1) # trigger compilation
2.0
julia> @time [seqm(n) for n=1:100];
0.000071 seconds (799 allocations: 36.750 KiB)
julia> @time [seqm(n) for n=1:100];
0.000011 seconds (201 allocations: 4.000 KiB)
The first timing is significantly faster than the unmemoized version even though the memoization cache is empty at the start because the same computation is done many times and memoization avoids doing it all but the first time. The second timing is even faster because now all 100 computed values are already cached and can just be returned.
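The same recompute-versus-cache behaviour can be sketched in Python (an illustration only, not the Julia code above): functools.lru_cache plays the role of @memoize, Fraction stands in for BigFloat to keep the arithmetic exact, and a call counter makes the repeated work visible.

```python
from fractions import Fraction
from functools import lru_cache

calls = 0

def seq(n):
    """Unmemoized: recomputes the whole chain on every call."""
    global calls
    calls += 1
    return Fraction(2) if n < 2 else 1 / (3 - seq(n - 1))

[seq(n) for n in range(1, 101)]
unmemoized_calls = calls  # seq(n) costs n calls, so 1 + 2 + ... + 100 = 5050

calls = 0

@lru_cache(maxsize=None)
def seqm(n):
    """Memoized: each value is computed once, then served from the cache."""
    global calls
    calls += 1
    return Fraction(2) if n < 2 else 1 / (3 - seqm(n - 1))

[seqm(n) for n in range(1, 101)]
memoized_calls = calls  # only 100 distinct arguments ever reach the body
```

The counters show exactly why the memoized version is faster even on a cold cache: the unmemoized version enters the function body 5050 times for the first 100 terms, the memoized one only 100 times.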

solve an integral equation using Maple

I encountered some problems while trying to solve an integral equation using Maple. There are 4 functions that are important. Here is my code defining the functions and trying to solve the problem:
"T is the starting point of the problem where it's given."
T := proc (t) options operator, arrow; sqrt(t)/(sqrt(t)+sqrt(1-t))^2 end proc;
"Now I solved for the inverse of function T."
V := proc (t) options operator, arrow; ([solve(t = T(y), y, useassumptions)], [0 <= t and t <= 1]) end proc;
"Only the first solution to the above is the correct inverse as we can see from the plot below."
sol := [allvalues(V(t))]; plot([t, T(t), op(1, sol), op(2, sol)], t = 0 .. 1, legend = [typeset("Curve: ", "t"), typeset("Curve: ", "T(t)"), typeset("Curve: ", "V(t)"), typeset("Curve: ", "V2(t)")]);
"Now I define the first solution as my inverse function called V1."
V1 := proc (t) options operator, arrow; evalf(op(1, sol)) end proc
"As the problem required, I have to find the derivative of V1. I call it dV."
dV := proc (t) options operator, arrow; diff(V1(t), t) end proc
"Then define a new function called F"
F := proc (t) options operator, arrow; -10*ln(1-t*(1-exp(-1))) end proc
"With V1(t), F(t), dV(t) defined, we define a new function U."
U := proc (t,lambda) options operator, arrow; piecewise(t <= .17215, min(IVF(V1(t)), max(0, 12-(1/4)/(lambda^2*dV(t)^2))), .17215 <= t, min(IVF(V1(t)), max(0, 12-(1/4)*.7865291304*lambda^-2))) end proc;
"Next I will be trying to find the value of lambda, such that the"
solve(int(U(t,lambda)*dV(t),t=0..1)= R,lambda)
"where the R will be a real number, let's say 2.93 for now."
I think the code works fine all the way up to the last step, where I have to solve the integration; that step fails and I couldn't figure out why.
I also tried to make progress by expanding U by hand, i.e. U(t) will be 0 if t <= 0.17215 and 12-(1/4)/(lambda^2*dV(t)^2) <= 0, or if t >= 0.17215 and 12-(1/4)*0.7865291304*lambda^(-2) <= 0, and so forth. But I had problems solving the inequalities. For example, solve(12-1/4*lambda^(-2)*dV(t)^(-2) <= 0, t) runs indefinitely.
Appreciate your input!
I'm guessing that by IVF(V1(t)) you really meant F(V(t)) .
It seemed to me that computation of your dV(t) (and perhaps V(t) sometimes) for very small t might exhibit some numerical difficulty.
And I was worried that using a narrower range of integration like, say, t=1.0e-2 .. 1.0, might mess with some significant contribution to the integral from that portion.
So below I've constructed V and dV so that they raise Digits (only) for greater working precision on (only) their own body of calculations.
And I've switched it from symbolic integration to numeric/float integration. And I've added the epsilon option for that, in case you need to control the accuracy tolerance more finely.
Some of these details help with the speed issues you mentioned above. And some of those details slow the computation down a bit, while perhaps giving a greater sense of robustness of it.
The plotting command calls below are just there for illustration. Comment them out if you please.
I used Maple 2018.0 for these. If you have difficulties then please report your particular version used.
restart;
T:=sqrt(x)/(sqrt(x)+sqrt(1-x))^2:
Vexpr:=solve(t=T,x,explicit)[1]:
Vexpr:=simplify(combine(Vexpr)) assuming t>0, t<1:
The procedure for computing V is constructed next. It's constructed in this subtle way for two reasons:
1) It internally raises working precision Digits to avoid round-off error.
2) It creates (and throws away) an empty list []. The effect of this is that it will not run under fast hardware-precision evalhf mode. This is key later on, during the numeric integration.
V:=subs(__dummy=Vexpr,
  proc(t)
    if not type(t,numeric) then
      return 'procname'(t);
    end if;
    [];
    Digits:=100;
    evalf(__dummy);
  end proc):
V(0.25);
0.2037579200498004992002294012453548811286405373\
653346413644953624868320875151070347969077227713469370
V(1e-9);
1.0000000040000000119998606410199120814521886485524\
22617659574502917213370846924673012122642160567565e-18
Let's plot T and V,
plot([T, V(x)], x=0..1);
Now let's create a procedure for the derivative of V, using the same technique so that it too raises Digits (only internal to itself) and will not run under evalhf.
dV:=subs(__dummy=diff(Vexpr,t),
  proc(t)
    if not type(t,numeric) then
      return 'procname'(t);
    end if;
    [];
    Digits:=100;
    evalf(__dummy);
  end proc):
Note that calling dV(t) for unknown symbol t makes it return unevaluated. This is convenient later on.
dV(t);
dV(t)
dV(0.25);
2.44017580084567947538393626436824494366329948208270464559139762\
2347525580165201957710520046760103982
dV(1e-15);
-5.1961525404198771358909147606209930290335590838862019038834313\
73611362758951860627325613490378754702
evalhf(dV(1e-15)); # I want this to error-out.
Error, unable to evaluate expression to hardware floats: []
This is your F.
F:=t->-10*ln(1-t*(1-exp(-1))):
Now create the procedure U. This too returns unevaluated when either of its arguments is not an actual number. Note that lambda is its second parameter.
U:=proc(t,lambda)
  if not ( type(t,numeric) and
           type(lambda,numeric) ) then
    return 'procname'(args);
  end if;
  piecewise(t <= .17215,
            min(F(V(t)), max(0, 12-(1/4)/(lambda^2*dV(t)^2))),
            t >= .17215,
            min(F(V(t)), max(0, 12-(1/4)*.7865291304*lambda^(-2))));
end proc:
Let's try a particular value of t=0.2 and lambda=2.5.
evalf(U(0.2, 2.5));
.6774805135
Let's plot it. This takes a little while. Note that it doesn't seem to blow up for small t.
plot3d([Re,Im](U(t,lambda)),
t=0.0001 .. 1.0, lambda=-2.0 .. 2.0,
color=[blue,red]);
Let's also plot dV. This takes a while, since dV raises Digits. Note that it appears accurate (not blowing up) for small t.
plot(dV, 0.0 .. 1.0, color=[blue,red], view=-1..4);
Now let's construct a procedure H which computes the integral numerically.
Note that lambda is its first parameter, which it passes as the second argument to U. This relates to your followup Comment.
The second parameter of H is passed to evalf(Int(...)) to specify its accuracy tolerance.
Note that the range of integration is 0.001 to 1.0 (for t). You can try a lower end-point closer to 0.0, but it will take even longer to compute (and may not succeed).
There is a complicated interplay between the lower end-point of numeric integration, and accuracy tolerance eps, and the numeric behavior for U and V and dV for very small t.
(The working precision Digits is set to 15 inside H. That's just to allow the method _d01ajc to be used. It's still the case that V and dV will raise working precision for their own computations, at each numeric invocation of the integrand. Also, I'm using unapply and a forced choice of numeric integration method as a tricky way to prevent evalf(Int(...)) from wasting a lot of time poking at the integrand to find singularities. You don't need to completely understand all these details. I'm just explaining that I had reasons for this complicated set-up.)
H:=proc(lambda,eps)
  local res,t;
  Digits:=15;
  res:=evalf(Int(unapply(U(t,lambda)*dV(t),t),
                 1e-3..1,
                 epsilon=eps, method=_d01ajc) - R);
  if type(res,numeric) then
    res;
  else
    Float(undefined);
  end if;
end proc:
Let's try calling H for a specific R. Note that it's not fast.
R:=2.93;
R := 2.93
H(0.1,1e-3);
-2.93
H(0.2,1e-5);
0.97247264305333
Let's plot H for a fixed integration tolerance. This takes quite a while.
[edited] I originally plotted t->H(t,1e-3) here. But your followup Comment showed a misunderstanding of the purpose of the first parameter of that operator. The t in t->H(t,1e-3) is just a dummy name and DOES NOT relate to the t parameter of V and dV, the first parameter of U, or the integration variable. What matters is that this dummy is passed as the first argument to H, which in turn passes it as the second argument to U, where it is used for lambda. Since there was confusion earlier, I'll use the dummy name LAMBDA now.
plot(LAMBDA->H(LAMBDA,1e-3), 0.13 .. 0.17 , adaptive=false, numpoints=15);
Now let's try and find the value of that root (intercept).
[edited] Here too I was earlier using the dummy name t for the first argument of H. It's just a dummy name and does not relate to the t parameter of V and dV, the first parameter of U, or the variable of integration. So I'll change it below to LAMBDA to lessen confusion.
This is not fast. I'm minimizing the absolute value because numeric accuracy issues make this problematic for numeric root-finder fsolve.
Osol:=Optimization:-Minimize('abs(H(LAMBDA,1e-3))', LAMBDA=0.15..0.17);
Osol := [1.43561000000000e-8, [LAMBDA = 0.157846355888300]]
We can pick off the numeric value of lambda (which "solved" the integration).
rhs(Osol[2,1]);
0.157846355888300
If we call H using this value then we'll get a residual (similar to the first entry in Osol, but without absolute value being taken).
H(rhs(Osol[2,1]), 1e-3);
-1.435610e-8
Now you have the tools to try and obtain the root more accurately. To do so you may experiment with:
a tighter (smaller) eps, the accuracy tolerance of the numeric integration;
a lower end-point of the integration, closer to 0.0;
even higher Digits forced within the procedures V and dV.
That will require patience, since the computations take a while, and since the numeric integration can fail (taking forever) when those three aspects are not working properly together.
If I edit the body of procedure H and change the integration range to instead be 1e-4 .. 1.0, then I get the following result (specifying 1e-5 for eps in the call to H).
Osol:=Optimization:-Minimize('abs(H(LAMBDA,1e-5))', LAMBDA=0.15..0.17);
Osol := [1.31500690000000e-7, [LAMBDA = 0.157842787382700]]
So it's plausible that this approximate root lambda is accurate to perhaps 4 or 5 decimal places.
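The overall pattern used above, a numeric quadrature nested inside a scalar root-search on the residual, can be sketched in plain Python. Everything in this sketch is a stand-in: the toy integrand lambda*t replaces U(t,lambda)*dV(t), a trapezoid rule replaces evalf(Int(...)), and bisection replaces Optimization:-Minimize, so the exact root of the toy problem is simply 2*R.

```python
def trapezoid(f, a, b, n=1000):
    """Composite trapezoid rule: a crude stand-in for evalf(Int(...))."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

R = 2.93

def H(lam):
    """Residual of the integral equation, like the Maple H(lambda, eps)."""
    # Toy integrand: lambda * t over [0, 1], so the exact root is lambda = 2*R.
    return trapezoid(lambda t: lam * t, 0.0, 1.0) - R

def bisect(f, lo, hi, tol=1e-12):
    """Bisection on the residual; assumes f(lo) and f(hi) bracket the root."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if (fm < 0) == (flo < 0):
            lo, flo = mid, fm
        else:
            hi = mid
    return 0.5 * (lo + hi)

root = bisect(H, 0.0, 10.0)  # exact answer for the toy problem: 2*R = 5.86
```

The real problem is harder only because the integrand there is expensive and numerically delicate near t = 0; the nesting of quadrature inside root-finding is the same.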
Good luck.

Flash Player gray exclamation error

This was driving me mad for some time; today I managed to reproduce this problem, so one of the causes is division by 0.
e.g.
var end:Number = 1024/0,
    size:Number = 1000,
    a:Array = [];
for (var i:int = 0; i < end; i++){
    a.push(size);
}
After dividing by 0 (I don't know how it's possible, but anyway) the value of end becomes Infinity and sneaks into the loop, so Flash Player stops execution of the script with the exclamation mark.
I discovered that one of the components in Flex, after a state change in the layout, passed its width of 0 on as one of the parameters used to construct the loop. How can I avoid this behavior of the Flex component?
This is expected behavior in most, if not all, languages. I just tested AS3, JavaScript, and PHP. AS3 and JS give you Infinity, and PHP gives you false (PHP's go-to result when a low-level error occurs). You should still avoid dividing by 0, because the result is almost never what you want. Most languages do not prevent a user from dividing by 0, since for floating-point numbers Infinity is an actual value and (per IEEE 754) the defined result of dividing a positive number by 0.
Your error occurs because you are using Infinity as your loop bound, which the counter will never reach, so the loop never terminates.
Instead, use a conditional to handle the possibility of the value being 0. So something like:
function calculateRatio(x:Number, y:Number):Number {
    if (y == 0) {
        return 1; // or whatever value indicates an error
    }
    return x / y;
}
or
function calculateRatio(x:Number, y:Number):Number {
    return x / (y == 0 ? 1 : y); // substitute 1 when y is 0 so the division is always defined
}
Just make sure that any time you perform division, you never divide by 0. In your case, the loop runs an infinite number of times. Flash is single-threaded and will freeze completely while any script execution is happening. Generally this happens so quickly that the end user doesn't notice it, but in your case it will freeze until the end of time. Fortunately, Flash ends script execution after 15 s or 30 s, depending on the runtime version, so it should error out at some point.
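The same hazard is easy to reproduce outside Flash. This Python sketch (illustrative only; note that Python raises an exception on float division by zero, so the Infinity is constructed explicitly where AS3's 1024/0 would produce it) shows why a finiteness guard on the loop bound is the robust fix:

```python
import math

def safe_loop_count(total: float, divisor: float) -> int:
    """Return how many iterations the loop would run, guarding a non-finite bound."""
    # AS3's 1024/0 silently yields Infinity; Python raises, so we model it explicitly.
    end = total / divisor if divisor != 0 else math.inf
    if not math.isfinite(end):
        return 0  # refuse to loop on Infinity/NaN instead of hanging
    count = 0
    i = 0
    while i < end:
        count += 1
        i += 1
    return count
```

With the guard, a zero divisor coming in from a layout-dependent width simply produces zero iterations instead of an endless loop.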
Division by 0 is not a problem as such here, 0 is passed as a parameter, as a result of other operations at run-time, and I can not be sure, that 0 will not be passed as parameter.
What's boggling my mind is that division by 0 should be flagged in the first place, but it isn't. Then gets result of infinity, ok from mathematical point of view, but then I get infinite loop. I would normally expect that executing the script longer than e.g. 15 sec. throw an error and the script stops, but instead what I have is exclamation sign without any information for the developer, and that sucks..

input elements one by one from another function without a for loop

I have a function that calculates an array of numbers (randparam) that I want to input element by element into another function that does a simulation.
For example
function [randparam] = arraycode
    code
    randparam = results of code
    % randparam is now a 1x1001 vector.
end
next I want to input randparam 1 by 1 into my simulation function
function simulation
    x = constant + constant * randparam + constant
    return
end
What makes this difficult for me is the return command in the simulation function: it calculates only one step of the equation for x above, returns the result to another function (call it integrator), and then the integrator function calls the simulation function again to calculate x.
so the integrator function might look like
function integrator (x)
    y = simulation(x) * 5
    u = y+10
    yy = simulation(x) * 10 + u
end
As you can see, the integrator function calls the simulation function twice, which creates two problems for me:
1. If I create a for loop in the simulation function where I input element by element using something like:
for i = 1:100
    x = constant + constant * randparam(i) + constant
    return
end
then every time my integrator function calls my simulation function again, my for loop starts at 1 all over again.
2. If I somehow kept i in my base workspace so that the for loop in my simulation function would know to step up from 1, then my y and yy lines would have different x inputs, because by the time simulation is called the second time for yy, i would already have become i+1 thanks to the call for y.
Is there a way to avoid for loops in this scenario? One potential solution to problem number two is to duplicate the script under a different name and have its for loop use a different variable, but that seems rather inefficient.
Hope I made this clear.
thanks.
First, if you generically want to apply the same function to each element of an array and there isn't already a built-in vectorized way to do it, you could use arrayfun (although often a simple for loop is faster and more readable):
%# randparam is a 1x1001 vector.
%# next I want to input randparam 1 by 1 into my simulation function
function simulation
    x = constant + constant * randparam + constant
    return
end
(Note: ask yourself what this function can possibly be doing, since it isn't returning a value and MATLAB doesn't pass by reference.) This is what arrayfun is for: applying a function to each element of an array (or vector, in this case). Again, you should make sure in your case that it makes sense to do this, rather than an explicit loop.
function simulation(input_val)
    % your stuff
end

sim_results = arrayfun( @simulation, randparam);
Of course, the way you've written it, the line
x = constant + constant*randparam + constant;
can (and will) be done vectorized - if you give it a vector or matrix, a vector or matrix will be the result.
Second it seems that you're not clear on the "scope" of function variables in MATLAB. If you call a function, a clean workspace is created. So x from one function isn't automatically available within another function you call. Variables also go out of scope at the end of a function, so using x within a function doesn't change/overwrite a variable x that exists outside that function. And multiple invocations of a function each have their own workspace.
What's wrong with a loop at the integrator level?
function integrator (x)
    for i = 1:length(x)
        y = simulation(x(i)) * 5
        u = y+10
        yy = simulation(x(i)) * 10 + u
    end
end
And pass your entire randparam into integrator? It's not clear from your question whether you want simulation to return the same value when given the same input, whether you want it to step twice with the same input, or whether you want a fresh input on every call. It is also not clear whether simulation keeps any internal state. The way you've written the example, simulation depends only on the input value, not on any previous inputs or outputs, which would make it trivial to vectorize. If we're all missing the boat, please edit your question with more refined example code.
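If simulation really must hand out one element of randparam per call rather than being vectorized, the usual fix is to keep the index in a closure or persistent state instead of the base workspace. Here's a Python sketch of the idea (the names and the constant value are made up for illustration; in MATLAB you would reach for a persistent variable or a nested-function handle):

```python
def make_simulation(randparam, constant=2.0):
    """Return a stateful step function; each call consumes the next element."""
    it = iter(randparam)

    def simulation():
        # constant + constant * randparam(i) + constant, advancing i on each call
        return constant + constant * next(it) + constant

    return simulation

def integrator(step):
    """Calls the stepper twice, like the y/yy lines in the question."""
    y = step() * 5
    u = y + 10
    yy = step() * 10 + u
    return y, u, yy

step = make_simulation([1.0, 2.0, 3.0, 4.0])
first = integrator(step)   # consumes elements 1.0 and 2.0
second = integrator(step)  # consumes elements 3.0 and 4.0
```

Note that the two calls inside integrator deliberately consume different elements, which is exactly the behavior question 2 describes; if y and yy should instead see the same input, the stepper would need to be advanced once per integrator call, not once per simulation call.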

Generate (Poisson?) random variable in real-time

I have a program running in real time with a variable framerate, e.g. it can be 15 fps or 60 fps. I want an event to happen, on average, once every 5 seconds. Each frame, I want to call a function that takes the time since the last frame as input and returns True, on average, once every 5 seconds of elapsed time. I figure it has something to do with the Poisson distribution... how would I do this?
It really depends on what distribution you want to use; all you specified was the mean. As your title suggests, a Poisson distribution should suit your needs nicely, so let's go with that.
If a Poisson distribution is what you want, you can generate samples pretty easily using the cumulative distribution function. Just follow the pseudocode here: Generating Poisson RVs, with 5 seconds being your value for lambda. Let's call this function Poisson_RN().
The algorithm at this point is pretty simple.
global float next_time = current_time()

boolean function foo()
    if (next_time < current_time())
        next_time = current_time() + Poisson_RN();
        return true;
    return false;
A random variable that generates true/false outcomes in fixed proportions with independent trials gives a geometric distribution for the number of trials until the next true. At a fixed framerate, generate true in each frame with probability 1/(5*fps) (more generally, with probability dt/5 for a frame lasting dt seconds), and in the long run you will get an average of one true per 5 seconds.
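Putting the two answers together: the waiting time between events of a Poisson process with a mean of one event per 5 seconds is exponentially distributed with mean 5 s, which gives a framerate-independent per-frame check. A Python sketch (the class and names are just illustrative):

```python
import random

class EventTimer:
    """Fires on average once per mean_interval seconds of elapsed time."""
    def __init__(self, mean_interval: float):
        self.mean_interval = mean_interval
        self.t = 0.0
        # Exponential inter-arrival time; expovariate takes the rate = 1/mean.
        self.next_time = random.expovariate(1.0 / mean_interval)

    def tick(self, dt: float) -> bool:
        """Call once per frame with that frame's duration; True when the event fires."""
        self.t += dt
        if self.t >= self.next_time:
            # Schedule the next event relative to now.
            self.next_time = self.t + random.expovariate(1.0 / self.mean_interval)
            return True
        return False
```

Because tick only compares accumulated time against the scheduled event time, it behaves the same whether the caller runs at 15 fps, 60 fps, or a framerate that varies from frame to frame.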