Octave - Why does input in a script only work once?

I can't be the first one to run into this, but searching didn't help me. This is an exercise from an edX MOOC. Here is the script:
% distance_traveled asks the user for speed and time traveled.
% It calculates and displays the distance from this.
clear all; % a naive try, doesn't make a difference
speed = input (['Enter speed in m/s :']);
time = input (['Enter time traveled in s :']);
distance_traveled = speed * time;
disp(['Traveling for ', num2str(time), 'seconds at a speed of ', num2str(speed), 'm/s results in ', num2str(distance_traveled), ' Meters travelled.'])
>> distance_traveled
Enter speed in m/s :10
Enter time traveled in s :10
Traveling for 10seconds at a speed of 10m/s results in 100 Meters travelled.
>> distance_traveled
distance_traveled = 100
Edit: I forgot to state the problem: this works only once. When I rerun the script I am not prompted for new values; the old ones are reused. Typing 'clear all' in the command window does the trick.
I've tried adding 'clear all;' to the script, but this doesn't help.
How is this done, and why is it so hard to find?
Have a nice weekend,
Stephan
Edit 2: D'oh! The result variable in the script and the script itself have the same name: distance_traveled.
I've changed it to just distance in the script and everything works fine.
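For anyone landing here later: once the script has run, the workspace variable distance_traveled shadows the script of the same name, so typing distance_traveled at the prompt just echoes the variable instead of rerunning the script. A minimal sketch of the fixed script:
% distance_traveled.m - result variable renamed so it no longer shadows the script name
speed = input ('Enter speed in m/s: ');
time = input ('Enter time traveled in s: ');
distance = speed * time;
disp (['Traveling for ', num2str(time), ' seconds at a speed of ', ...
       num2str(speed), ' m/s results in ', num2str(distance), ' meters travelled.'])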

Related

What's the proper use of the output property in Octave?

I am not sure what the use of the output value is when using fminunc.
>> options = optimset('GradObj', 'on', 'MaxIter', '1');
>> initialTheta = zeros(2,1);
>> [optTheta, functionVal, exitFlag, output, grad, hessian] = fminunc(@CostFunc, initialTheta, options);
>> output
output =
  scalar structure containing the fields:
    iterations = 11
    successful = 10
    funcCount = 21
Even when I set the maximum number of iterations to 1, it still reports iterations = 11. Could anyone please explain why this is happening?
Please help me with the grad and hessian outputs too, i.e. what they are used for.
Given we don't have the full code, I think the easiest way for you to understand exactly what is happening is to set a breakpoint in fminunc.m itself and follow the logic of the code. This is one of the nice things about working with Octave: the source code is provided and you can inspect it freely (in fact there's often useful information in Octave source code, such as references to the papers the implementation relied on).
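For example (a minimal sketch of the workflow; dbstop with no line number breaks at the function's first executable line):
>> dbstop fminunc
>> [optTheta, functionVal] = fminunc (@CostFunc, initialTheta, options);
debug> dbstep    % step through, watching niter, maxiter, info, ...
debug> dbcont    % or continue to the end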
From a quick look, it doesn't seem like fminunc expects a maxiter of 1. Have a look at line 211:
211 while (niter < maxiter && nfev < maxfev && ! info)
Since niter is initialised just before (at line 176) with the value of 1, in theory this loop will never be entered if your maxiter is 1, which defeats the whole point of the optimization.
There are other interesting things happening in there too, e.g. the inner while loop starting at line 272:
272 while (! suc && niter <= maxiter && nfev < maxfev && ! info)
This uses short-circuit evaluation: it first checks whether the previous iteration was "unsuccessful" before checking whether the number of iterations is less than maxiter.
In other words, if the previous iteration was successful, you don't get to run the inner loop at all, and you never get to increment niter.
What flags an iteration as "successful" seems to be defined by the ratio of "actual vs predicted reduction", as per the following (non-consecutive) lines:
286 actred = (fval - fval1) / (abs (fval1) + abs (fval));
...
295 prered = -t/(abs (fval) + abs (fval + t));
296 ratio = actred / prered;
...
321 if (ratio >= 1e-4)
322 ## Successful iteration.
...
326 nsuciter += 1;
...
328 endif
329
330 niter += 1;
In other words, it seems like fminunc will count your iterations toward maxiter regardless of whether they have been "successful" or "unsuccessful", with the exception that it does not like to "end" the algorithm on a "successful" turn (since the success condition needs to be fulfilled before the maxiter condition is even checked).
Obviously this is an academic point here, since you shouldn't even be entering this inner loop if you can't make it past the outer loop in the first place.
I cannot really know exactly what is going on without knowing your specific code, but you should be able to follow easily if you run your code with a breakpoint at fminunc. The maths behind that implementation may be complex, but the code itself seems fairly simple and straightforward enough to follow.
Good luck!

Monitor a variable for changes in Lua

How do I create a function that monitors a variable for changes?
I am looking for some sort of operator to monitor a variable for changes and for the direction of change. Way back, I used VB to test whether a variable had changed; the tests were called trigger+ve, trigger-ve, and udptrigger.
My goal is to create a text-to-speech script that acts like a virtual flight instructor in a flight simulator. If the target altitude is 2000 feet and the student deviates 200 feet too high, the voice should say: "You are too high, descend to 2000 feet." For this case, I would like to monitor the variable alt_delta, which is the difference between the target altitude (target_alt) and the current indicated altitude (alt_ind). I have the code to find alt_delta, but now I want to monitor the newly created delta variable. So a trigger+ve event will occur if alt_delta moves from 198 feet to 202 feet. In this case the trigger is satisfied, and the code will send an event to play the sound.
I have created three functions that will test the three types of triggers:
function triggerpos (var_test, value_test)
-- var_test is the variable to monitor, like "alt_delta"
-- value_test is the trigger point, like 200
function triggerneg (var_test, value_test)
function monitor (var_test)
So in the above example with altitude delta, the code would look like:
function monitor_alt ()
  if triggerpos("alt_delta", 200) and target_alt > alt_ind then
    XPLMSpeakString("Check your altitude. You are too low. Climb to " ..
      target_alt .. " feet.")
  elseif triggerneg("alt_delta", -200) and target_alt < alt_ind then
    XPLMSpeakString("Check your altitude. You are too high. Descend to " ..
      target_alt .. " feet.")
  end
end
For the monitor (as an example), I need to check the state of my scenario logic. This is an integer, so it is a whole number. For example:
monitor("scenario_state") and scenario_state == 998
basically just asks whether scenario_state has changed, and if so, whether it changed to a value of 998. I don't care about which direction it changed, just that it DID change, and to what value.
I searched the web for answers, but the code is all in C++ or Java, and I can't understand it. Please help... I'm a newbie... Sorry.
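One way to implement the three functions (a sketch, assuming the monitored variables are globals, as they usually are in X-Plane Lua scripts, and that each trigger is polled once per frame):
-- Each trigger keeps its own cache of previous values, so several
-- triggers can watch the same variable without interfering.
local function make_trigger(test)
  local prev = {}
  return function (var_test, value_test)
    local cur = _G[var_test]            -- look the variable up by name
    local old = prev[var_test]
    prev[var_test] = cur
    if old == nil then return false end -- no history yet on the first poll
    return test(old, cur, value_test)
  end
end

-- fires once when the variable rises through value_test (e.g. 198 -> 202 for 200)
triggerpos = make_trigger(function (old, cur, v) return old < v and cur >= v end)
-- fires once when the variable falls through value_test
triggerneg = make_trigger(function (old, cur, v) return old > v and cur <= v end)
-- fires on any change; inspect the new value in your own condition afterwards
monitor = make_trigger(function (old, cur, _) return old ~= cur end)
With that in place, monitor("scenario_state") and scenario_state == 998 reads exactly as described: it is true only on the poll where scenario_state changed, and the second condition then checks what it changed to.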

How Self & Total time are calculated from Google Chrome's devtools profile report (.cpuprofile file)

I'm writing a tool to parse and extract some data from a .cpuprofile file (the file produced when I save a profile report), and I'm having some trouble with the precision of the Self & Total time calculations. The time depends on the value of the hitCount field, but: when hitCount is small (<300), the coefficient between hitCount and Self time is ~1.033; as hitCount grows, the coefficient also grows.
So when hitCount = 3585, k is 1.057. When hitCount = 7265, k = 1.066.
Currently I'm using 1.035 as the coefficient, chosen to minimize the error on my sample data, but I'm not happy with the approximation. I'm not familiar enough with the Chromium code base to go and figure it out directly in the source code.
So how do I get the Self time for a function call from its hitCount value?
Basically it's:
sampleDuration = totalRecordingDuration / totalHitCount
nodeSelfTime = nodeHitCount * sampleDuration
You can find it here:
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/Source/devtools/front_end/sdk/CPUProfileDataModel.js&sq=package:chromium&type=cs&l=31
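If you want to reproduce that outside devtools, here is a rough sketch (my code, not the devtools source; it assumes the legacy .cpuprofile layout: a head node tree with per-node hitCount and children, and startTime/endTime in seconds):
// Walk the profile tree, applying fn to every node.
function walk(node, fn) {
  fn(node);
  (node.children || []).forEach(function (child) { walk(child, fn); });
}

var profile = JSON.parse(require('fs').readFileSync('report.cpuprofile', 'utf8'));

// sampleDuration = totalRecordingDuration / totalHitCount
var totalHits = 0;
walk(profile.head, function (n) { totalHits += n.hitCount; });
var sampleDuration = (profile.endTime - profile.startTime) * 1000 / totalHits; // ms

// nodeSelfTime = nodeHitCount * sampleDuration
walk(profile.head, function (n) { n.selfTime = n.hitCount * sampleDuration; });

// Total time is a node's self time plus the total time of its children.
function totalTime(node) {
  return node.selfTime + (node.children || []).reduce(
    function (sum, child) { return sum + totalTime(child); }, 0);
}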

WebAudio: how does timeConstant in setTargetAtTime work?

I want to rapidly fade out an oscillator in order to remove the pop/hiss I get from simply stopping it. Chris Wilson proposed the technique of calling setTargetAtTime on the gain.
Now I don't quite grasp its last parameter, timeConstant:
What's its unit? Seconds?
What do I have to put in there to get to the target value in 1 ms?
That Chris Wilson guy, such a trouble. :)
setTargetAtTime is an exponential falloff. The parameter is a time constant:
"timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value 1 - 1/e (around 63.2%) given a step input response (transition from 0 to 1 value)."
So, for every "timeconstant" length of time, the level will drop by a bit under two thirds (presuming the gain was 1 and you're setting a target of 0). At some point the falloff becomes so close to zero that it's below the noise threshold, and you don't need to worry about it. It won't ever "get to the target value" - it successively approximates it, although of course at some point the difference falls below the precision you can represent in a float.
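Concretely, the curve the spec defines is v(t) = V1 + (V0 - V1) * exp(-(t - T0) / timeConstant), where V0 is the value when the ramp starts at time T0 and V1 is the target value.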
I'd suggest experimenting a bit, but here's a quick guess to get you started:
// only setting this up as a var to multiply it later - you can hardcode.
// initial value is 1 millisecond - experiment with this value if it's not fading
// quickly enough.
var timeConstant = 0.001;
gain = ctx.createGain();
// connect up the node in place here
gain.gain.setTargetAtTime(0, ctx.currentTime, timeConstant);
// by the math, 8x TC takes you to e^-8, about 0.03% of the original level
// - more than enough to smooth the envelope off.
myBufferSourceNode.stop( ctx.currentTime + (8 * timeConstant) );
Though I realize this might not be technically correct (given the exponential nature of the time constant), I've been using this formula to "convert" from seconds to "time constant":
function secondsToTimeConstant( sec ){
return ( sec * 2 ) / 10;
}
...this was just trial and error, but it has more or less been working out for me.
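For what it's worth, that rule of thumb works out to timeConstant = sec / 5, i.e. it treats the fade as "done" after 5 time constants. Since e^-5 ≈ 0.007, the level really is within about 0.7% of the target after sec seconds, which is why the trial-and-error result holds up.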

Temperature Scale in SA

First, this is not a question about temperature iteration counts or automatically optimized scheduling. It's about how the magnitude of the data relates to the scaling inside the exponential.
I'm using the classic formula:
if(delta < 0 || exp(-delta/tK) > random()) { // new state }
The input to the exp function is negative because delta/tK is positive, so the exp result is always less than 1. The random function also returns a value in the 0 to 1 range.
My test data is in the range 1 to 20, and the delta values are below 20. I pick a start temperature equal to the initial computed temperature of the system and linearly ramp down to 1.
In order to get SA to work, I have to scale tK. The working version uses:
exp(-delta/(tK * .001)) > random()
So how does the magnitude of tK relate to the magnitude of delta? I found the scaling factor by trial and error, and I don't understand why it's needed. To my understanding, as long as delta > tK and the step size and number of iterations are reasonable, it should work. In my test case, if I leave out the extra scale the temperature of the system does not decrease.
The various online sources I've looked at say nothing about working with real data. Sometimes they include the Boltzmann constant as a scale, but since I'm not simulating a physical particle system that doesn't help. Examples (typically with pseudocode) use values like 100 or 1000000.
So what am I missing? Is scaling another value that I must set by trial and error? It's bugging me because I don't just want to get this test case running, I want to understand the algorithm, and magic constants mean I don't know what's going on.
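To put rough numbers on the magnitudes (my arithmetic, not from the original post): the test exp(-delta/tK) > random() accepts a worsening move with probability exp(-delta/tK), so
delta = 10, tK = 10        -> exp(-1)    ≈ 0.37 (about 37% of uphill moves accepted)
delta = 10, tK = 10 * .001 -> exp(-1000) ≈ 0    (uphill moves essentially never accepted)
With tK on the same order as delta the search stays hot, while the .001 scale turns it into a nearly greedy descent; the temperature only means something relative to the typical delta.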
Classical SA has 2 parameters: the startingTemperature and the cooldownSchedule (= what you call scaling).
Configuring 2+ parameters is annoying, so in OptaPlanner's implementation I automatically calculate the cooldownSchedule based on the timeGradient (a double going from 0.0 to 1.0 over the solver time). This works well. As a guideline for the startingTemperature, I use the maximum score diff of a single move. For more information, see the docs.
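A rough sketch of that idea (my illustration, not OptaPlanner's actual code; the linear ramp is just one possible cooldown):
import java.util.Random;

class SimulatedAnnealingAcceptor {
    // startingTemperature: a good guideline is the maximum score diff of a single move
    static boolean accept(double delta, double timeGradient,
                          double startingTemperature, Random random) {
        // timeGradient runs from 0.0 (start of solving) to 1.0 (time is up)
        double temperature = startingTemperature * (1.0 - timeGradient);
        if (delta < 0) {
            return true;              // always accept improving moves
        }
        if (temperature <= 0.0) {
            return false;             // fully cooled down: behave greedily
        }
        return Math.exp(-delta / temperature) > random.nextDouble();
    }
}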