WebAudio: how does timeConstant in setTargetAtTime work?

I want to rapidly fade out an oscillator in order to remove the pop/hiss I get from simply stopping it. Chris Wilson proposed the technique of calling setTargetAtTime on the gain.
Now I don't quite grasp its last parameter, 'timeConstant':
What's its unit? Seconds?
What do I have to put in there to get to the target value in 1 ms?

That Chris Wilson guy, such a trouble. :)
setTargetAtTime is an exponential falloff. The parameter is a time constant:
"timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value 1 - 1/e (around 63.2%) given a step input response (transition from 0 to 1 value)."
So, for every "timeConstant" length of time, the level will drop by about 63.2% of its remaining distance to the target (presuming the gain was 1, and you're setting a target of 0). At some point, the falloff becomes so close to zero that it's below the noise threshold, and you don't need to worry about this. It won't ever "get to the target value" - it successively approximates it, although of course at some point the difference falls below the precision you can represent in a float.
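For reference, the curve setTargetAtTime follows is the one the Web Audio spec defines: starting from value V0 at start time T0 and decaying toward target V1,
v(t) = V1 + (V0 - V1) * exp( -(t - T0) / timeConstant )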
I'd suggest experimenting a bit, but here's a quick guess to get you started:
// only setting this up as a var to multiply it later - you can hardcode.
// initial value is 1 millisecond - experiment with this value if it's not fading
// quickly enough.
var timeConstant = 0.001;
var gain = ctx.createGain();
// connect up the node in place here
gain.gain.setTargetAtTime(0, ctx.currentTime, timeConstant);
// after 8 time constants the level is down to e^-8, roughly 0.03% of the
// original level - more than enough to smooth the envelope off.
myBufferSourceNode.stop( ctx.currentTime + (8 * timeConstant) );

Though I realize this might not be technically correct (given the exponential nature of the time constant), I've been using this formula to "convert" from seconds to a time constant:
function secondsToTimeConstant( sec ){
    return ( sec * 2 ) / 10;
}
...this was just trial and error, but it has more or less been working out for me
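For what it's worth, there is a principled version of that conversion: since the curve decays as exp(-t/timeConstant), the time constant that gets you within a fraction epsilon of the target after sec seconds is sec / ln(1/epsilon). A minimal sketch (the epsilon parameter is my own addition):
// Pick a time constant so the curve is within `epsilon` of the target
// after `sec` seconds, e.g. epsilon = 0.01 for "within 1%".
function secondsToTimeConstantExact( sec, epsilon ){
    return sec / Math.log( 1 / epsilon );
}
// For epsilon = 0.01 this gives sec / 4.6 - close to the trial-and-error
// ( sec * 2 ) / 10 = sec / 5 above, which explains why that works.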

Issue with delta time server-side

How can I make a server-side game run the same on every machine? When I use the server's delta time it behaves differently on every computer/phone.
Would something called a 'fixed timestep' help me?
Yes, a fixed timestep can help you. But simple delta-scaled movement can also help.
A fixed timestep is commonly used with physics, because physics sometimes needs to update more often (120-200 Hz) than the game's render method.
However, you can still use a fixed timestep without physics.
You just need to interpolate your game objects with
lerp(oldValue, newValue, accumulator / timestep);
In your case, small frame rate differences probably cause the unexpected results.
To avoid that, you should make movement depend on delta (see the sketch after this answer):
player.x += 5 * 60 * delta; // I assume your game is 60 fps
instead of
player.x += 5;
Then the only remaining difference between machines is the delta itself, and that's negligible: the delta difference between 60 and 58 fps is only ~0.0006 secs.
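For completeness, here is a minimal sketch of the fixed-timestep-plus-interpolation loop described above (plain JavaScript; update(), render(), and now() are placeholders for your own game functions and clock):
var timestep = 1 / 120;   // fixed update rate - assuming 120 Hz physics here
var accumulator = 0;
var last = now();         // now() = your clock, in seconds

function frame() {
    var current = now();
    accumulator += current - last;
    last = current;
    while (accumulator >= timestep) {
        update(timestep);      // advance the simulation in constant-size steps
        accumulator -= timestep;
    }
    // render between the last two states for smoothness:
    // lerp(oldValue, newValue, accumulator / timestep)
    render(accumulator / timestep);
}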

Indirect Kalman Filter for Inertial Navigation System

I'm trying to implement an Inertial Navigation System using an Indirect Kalman Filter. I've found many publications and theses on this topic, but not much example code. For my implementation I'm using the Master Thesis available at the following link:
https://fenix.tecnico.ulisboa.pt/downloadFile/395137332405/dissertacao.pdf
As reported on page 47, the measured values from inertial sensors equal the true values plus a series of other terms (bias, scale factors, ...).
For my question, let's consider only bias.
So:
Wmeas = Wtrue + BiasW (Gyro meas)
Ameas = Atrue + BiasA (Accelerometer meas)
Therefore,
when I propagate the Mechanization equations (equations 3-29, 3-37 and 3-41)
I should use the "true" values, or better:
Wmeas - BiasW
Ameas - BiasA
where BiasW and BiasA are the last available estimation of the bias. Right?
Concerning the update phase of the EKF,
if the measurement equation is
dzV = VelGPS_est - VelGPS_meas
the H matrix should have an identity block in the columns corresponding to the velocity error state variables dx(VEL), and 0 elsewhere. Right? In code, I mean something like the sketch below.
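(A JavaScript-style illustration, assuming the error state is ordered [dPOS(3); dVEL(3); ...] to match dx(1:3) and dx(4:6) in my snippet:)
// Build H for a velocity-only GPS update: a 3 x stateSize matrix with an
// identity block over the velocity error states and zeros elsewhere.
function makeH(stateSize) {
    var H = [];
    for (var row = 0; row < 3; row++) {
        H.push(new Array(stateSize).fill(0));
        H[row][3 + row] = 1;  // columns 4..6 in 1-based (MATLAB) indexing
    }
    return H;
}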
That said, I'm not sure how I have to propagate the state variables after the update phase.
The propagation of the state variables should be (in my opinion):
POSk|k = POSk|k-1 + dx(POS);
VELk|k = VELk|k-1 + dx(VEL);
...
But this didn't work. Therefore I tried:
POSk|k = POSk|k-1 - dx(POS);
VELk|k = VELk|k-1 - dx(VEL);
but that didn't work either... I tried both solutions, even though in my opinion the "+" should be used. Since neither works (I must have some other error elsewhere), I would like to ask if you have any suggestions.
You can see a snippet of code at the following link: http://pastebin.com/aGhKh2ck.
Thanks.
The difficulty you're running into is the difference between the theory and the practice. Taking your code from the snippet instead of the symbolic version in the question:
% Apply corrections
Pned = Pned + dx(1:3);
Vned = Vned + dx(4:6);
In theory when you use the Indirect form you are freely integrating the IMU (the process called Mechanization in that paper) and occasionally running the IKF to update its correction. In theory the unchecked double integration of the accelerometer produces large (or for cheap MEMS IMUs, enormous) error values in Pned and Vned. That, in turn, causes the IKF to produce correspondingly large values of dx(1:6) as time evolves and the unchecked IMU integration runs farther and farther away from the truth. In theory you then sample your position at any time as Pned +/- dx(1:3) (the sign isn't important -- you can set that up either way). The important part here is that you are not modifying Pned from the IKF, because both are running independent of each other and you add them together when you need the answer.
In practice you do not want to take the difference between two enormous double values, because you will lose precision (many of the bits of the significand are needed to represent the enormous part instead of the precision you want). You have grasped that in practice you want to recursively update Pned on each update. However, when you diverge from the theory this way, you have to take the corresponding (and somewhat unobvious) step of zeroing out your correction value from the IKF state vector. In other words, after you do Pned = Pned + dx(1:3) you have "used" the correction, and you need to balance the equation with dx(1:3) = dx(1:3) - dx(1:3) (simplified: dx(1:3) = 0) so that you don't inadvertently integrate the correction over time.
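Concretely, the end of each update should look something like this sketch (JavaScript-style for illustration; in the MATLAB snippet the reset is just dx(1:6) = 0):
// Apply the IKF corrections to the externally integrated states...
for (var i = 0; i < 3; i++) {
    Pned[i] += dx[i];      // dx(1:3) in the MATLAB snippet
    Vned[i] += dx[3 + i];  // dx(4:6)
}
// ...then zero the used corrections so they are not integrated again.
for (var j = 0; j < 6; j++) {
    dx[j] = 0;
}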
Why does this work? Why doesn't it mess up the rest of the filter? As it turns out, the KF process covariance P does not actually depend on the state x. It depends on the update function and the process noise Q and so on. So the filter doesn't care what the data is. (Now that's a simplification, because often Q and R include rotation terms, and R might vary based on other state variables, etc, but in those cases you are actually using state from outside the filter (the cumulative position and orientation) not the raw correction values, which have no meaning by themselves).

Temperature Scale in SA

First, this is not a question about temperature iteration counts or automatically optimized scheduling. It's about how the data magnitude relates to the scaling of the exponentiation.
I'm using the classic formula:
if(delta < 0 || exp(-delta/tK) > random()) { // new state }
The input to the exp function is negative because delta/tK is positive, so the exp result is always less than 1. The random function also returns a value in the 0 to 1 range.
My test data is in the range 1 to 20, and the delta values are below 20. I pick a start temperature equal to the initial computed temperature of the system and linearly ramp down to 1.
In order to get SA to work, I have to scale tK. The working version uses:
exp(-delta/(tK * .001)) > random()
So how does the magnitude of tK relate to the magnitude of delta? I found the scaling factor by trial and error, and I don't understand why it's needed. To my understanding, as long as delta > tK and the step size and number of iterations are reasonable, it should work. In my test case, if I leave out the extra scale the temperature of the system does not decrease.
The various online sources I've looked at say nothing about working with real data. Sometimes they include the Boltzmann constant as a scale, but since I'm not simulating a physical particle system that doesn't help. Examples (typically with pseudocode) use values like 100 or 1000000.
So what am I missing? Is scaling another value that I must set by trial and error? It's bugging me because I don't just want to get this test case running, I want to understand the algorithm, and magic constants mean I don't know what's going on.
Classical SA has 2 parameters: startingTemperature and cooldownSchedule (= what you call scaling).
Configuring 2+ parameters is annoying, so in OptaPlanner's implementation, I automatically calculate the cooldownSchedule based on the timeGradient (which is a double going from 0.0 to 1.0 during the solver time). This works well. As a guideline for the startingTemperature, I use the maximum score diff of a single move. For more information, see the docs.
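To make that guideline concrete, here's a minimal sketch of the acceptance test with the starting temperature tied to the data's magnitude; maxMoveDelta and the linear cooldown are assumptions for illustration, not OptaPlanner's actual schedule:
var startingTemperature = maxMoveDelta;  // e.g. ~20 for deltas in the 1..20 range
function accept(delta, timeGradient) {   // timeGradient runs 0.0 -> 1.0
    if (delta < 0) return true;          // always accept improvements
    var tK = startingTemperature * (1 - timeGradient);  // linear cooldown (assumed)
    if (tK <= 0) return false;
    // with tK on the same scale as delta, exp(-delta / tK) starts around e^-1
    // for the worst move and shrinks toward 0 as the system cools
    return Math.exp(-delta / tK) > Math.random();
}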

Possibility of flash Math.random() returning 1

We all know good old Math.random(). It returns a random floating point number between 0 and 1.
What I can't seem to find any evidence about is whether zero and one are exclusive or inclusive.
I know that if they are inclusive, the probability of hitting either one of these values is seriously low.
But I can't help but wonder whether I should waste an if statement looking for it or not.
In my current scenario zero is not a problem, but one is.
var __rand:uint = Math.floor( Math.random() * myArray.length );
var result:String = myArray[__rand];
If the 1 in Math.random() is exclusive, then I know the result will NEVER be 1, and therefore __rand could never equal myArray.length and will always be below it. But I wasn't sure whether I should account for it in some performance-critical code.
PS: The code above is NOT the performance critical code, just an example
Basically, just 2 simple questions:
1) Is returning one possible or impossible?
2) If it is possible, is it worth accounting for?
As per the docs:
Returns a pseudo-random number n, where 0 <= n < 1. The number
returned is calculated in an undisclosed manner, and is
"pseudo-random" because the calculation inevitably contains some
element of non-randomness.
So it can be 0 but not 1. You don't have to worry about index out of bounds.
By the way, if this were really performance-critical code, you would be better off casting the value to int or uint rather than using Math.floor (see this performance test), as in the sketch below.
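(A JavaScript illustration of the cast approach; ActionScript's uint() conversion behaves the same way for this purpose:)
// Truncates toward zero like a uint cast; safe because Math.random() < 1
// guarantees the product is always strictly below myArray.length.
var __rand = (Math.random() * myArray.length) | 0;
var result = myArray[__rand];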
Math.random() will return a number between 0 (inclusive) and 1 (exclusive). It will never return 1.

What does "step" mean in stepSimulation and what do its parameters mean in Bullet Physics?

What does the term "step" mean in Bullet Physics?
What do the function stepSimulation() and its parameters mean?
I have read the documentation but could not get much out of it.
Any valid explanation would be of great help.
I know I'm late, but I thought the accepted answer was only marginally better than the documentation's description.
timeStep: The amount of seconds, not milliseconds, passed since the last call to stepSimulation.
maxSubSteps: Should generally stay at one so Bullet interpolates current values on its own. A value of zero implies a variable tick rate, meaning Bullet advances the simulation exactly timeStep seconds instead of interpolating. This feature is buggy and not recommended. A value greater than one must always satisfy the equation timeStep < maxSubSteps * fixedTimeStep or you're losing time in the simulation.
fixedTimeStep: Inversely proportional to the simulation's resolution. Resolution increases as this value decreases. Keep in mind that a higher resolution means it takes more CPU.
btDynamicsWorld::stepSimulation(
btScalar timeStep,
int maxSubSteps=1,
btScalar fixedTimeStep=btScalar(1.)/btScalar(60.));
timeStep - the time that has passed since the last simulation step.
Internally, the simulation is advanced in constant-size steps of fixedTimeStep.
fixedTimeStep ≈ 0.01666666 = 1/60
If timeStep is 0.1, it will run 6 (timeStep / fixedTimeStep) internal simulation steps.
To make movements glide smoothly, Bullet interpolates the final step results according to the remainder of the division (timeStep / fixedTimeStep).
timeStep - the amount of time in seconds to step the simulation by. Typically you're going to be passing it the time since you last called it.
maxSubSteps - the maximum number of steps that Bullet is allowed to take each time you call it.
fixedTimeStep - regulates the resolution of the simulation. If your balls penetrate your walls instead of colliding with them, try decreasing it. A typical call is sketched below.
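(A sketch of a typical per-frame call, written JavaScript-style as in the ammo.js port of Bullet; getDeltaTimeSeconds() is a placeholder for your own clock:)
var maxSubSteps = 10;        // headroom so slow frames don't lose simulation time
var fixedTimeStep = 1 / 60;  // the simulation resolution
function onFrame() {
    var dt = getDeltaTimeSeconds();  // seconds since the last frame
    // dt < maxSubSteps * fixedTimeStep must hold, or simulation time is lost
    dynamicsWorld.stepSimulation(dt, maxSubSteps, fixedTimeStep);
}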
Here I would like to address the issue in Proxy's answer about the special meaning of the value 1 for maxSubSteps. There is only one special value, and that is 0; you most likely don't want to use it, because then the simulation will run with a non-constant time step. All other values behave the same. Let's have a look at the actual code:
if (maxSubSteps)
{
m_localTime += timeStep;
...
if (m_localTime >= fixedTimeStep)
{
numSimulationSubSteps = int(m_localTime / fixedTimeStep);
m_localTime -= numSimulationSubSteps * fixedTimeStep;
}
}
...
if (numSimulationSubSteps)
{
//clamp the number of substeps, to prevent simulation grinding spiralling down to a halt
int clampedSimulationSteps = (numSimulationSubSteps > maxSubSteps) ? maxSubSteps : numSimulationSubSteps;
...
for (int i = 0; i < clampedSimulationSteps; i++)
{
internalSingleStepSimulation(fixedTimeStep);
synchronizeMotionStates();
}
}
So, there is nothing special about maxSubSteps equal to 1. You should really abide by the formula timeStep < maxSubSteps * fixedTimeStep if you don't want to lose time.
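To see what the C++ above boils down to, here's a small JavaScript-style sketch of the same bookkeeping (names are mine; internalSingleStepSimulation stands in for Bullet's internals):
var localTime = 0;
function step(timeStep, maxSubSteps, fixedTimeStep) {
    localTime += timeStep;
    var subSteps = Math.floor(localTime / fixedTimeStep);
    localTime -= subSteps * fixedTimeStep;  // remainder is used for interpolation
    var clamped = Math.min(subSteps, maxSubSteps);
    // if subSteps > maxSubSteps, (subSteps - clamped) * fixedTimeStep seconds
    // of simulation are silently dropped - hence the formula above
    for (var i = 0; i < clamped; i++) {
        internalSingleStepSimulation(fixedTimeStep);
    }
}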