What does "step" mean in stepSimulation and what do its parameters mean in Bullet Physics?

What does the term "step" mean in Bullet Physics?
What do the function stepSimulation() and its parameters mean?
I have read the documentation but I could not get a clear answer from it.
Any valid explanation would be of great help.

I know I'm late, but I thought the accepted answer was only marginally better than the documentation's description.
timeStep: The amount of time, in seconds (not milliseconds), that has passed since the last call to stepSimulation.
maxSubSteps: Should generally stay at one, so Bullet interpolates the current values on its own. A value of zero implies a variable tick rate: Bullet advances the simulation exactly timeStep seconds instead of interpolating. This feature is buggy and not recommended. A value greater than one must always satisfy the inequality timeStep < maxSubSteps * fixedTimeStep, or you're losing time in the simulation.
fixedTimeStep: Inversely proportional to the simulation's resolution: the resolution increases as this value decreases. Keep in mind that a higher resolution costs more CPU time.
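To make this concrete, here is a minimal sketch of a render loop driving stepSimulation (my sketch, not from the original answers; it assumes an already-initialized btDiscreteDynamicsWorld* named world and a render() function, and uses Bullet's btClock timer utility, though any frame timer works):
btClock clock; // measures real time elapsed since the previous frame
bool running = true; // your application's main-loop flag
while (running)
{
    // Seconds (not milliseconds!) since the last call.
    btScalar frameSeconds = btScalar(clock.getTimeMicroseconds()) / btScalar(1000000);
    clock.reset();
    // Up to 7 substeps of 1/60 s each, so any frame shorter than
    // 7 * (1/60) ~ 0.117 s satisfies timeStep < maxSubSteps * fixedTimeStep.
    world->stepSimulation(frameSeconds, 7, btScalar(1.) / btScalar(60.));
    render();
}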

btDynamicsWorld::stepSimulation(
    btScalar timeStep,
    int maxSubSteps = 1,
    btScalar fixedTimeStep = btScalar(1.) / btScalar(60.));
timeStep - the time that has passed since the last simulation step.
Internally, the simulation is advanced in constant increments of fixedTimeStep seconds:
fixedTimeStep ≈ 0.0166666 = 1/60
If timeStep is 0.1, it will include 6 (timeStep / fixedTimeStep) internal simulation steps.
To make movements glide smoothly, Bullet interpolates the final step results according to the remainder of the division timeStep / fixedTimeStep.

timeStep - the amount of time in seconds to step the simulation by. Typically you'll pass it the time since you last called it.
maxSubSteps - the maximum number of substeps Bullet is allowed to take each time you call it.
fixedTimeStep - regulates the resolution of the simulation. If your balls penetrate your walls instead of colliding with them, try decreasing it.
Here I would like to address the claim in Proxy's answer that the value 1 has special meaning for maxSubSteps. There is only one special value, 0, and you most likely don't want to use it, because the simulation then runs with a non-constant time step. All other values behave the same way. Let's have a look at the actual code:
if (maxSubSteps)
{
    m_localTime += timeStep;
    ...
    if (m_localTime >= fixedTimeStep)
    {
        numSimulationSubSteps = int(m_localTime / fixedTimeStep);
        m_localTime -= numSimulationSubSteps * fixedTimeStep;
    }
}
...
if (numSimulationSubSteps)
{
    //clamp the number of substeps, to prevent simulation grinding spiralling down to a halt
    int clampedSimulationSteps = (numSimulationSubSteps > maxSubSteps) ? maxSubSteps : numSimulationSubSteps;
    ...
    for (int i = 0; i < clampedSimulationSteps; i++)
    {
        internalSingleStepSimulation(fixedTimeStep);
        synchronizeMotionStates();
    }
}
So, there is nothing special about maxSubSteps equal to 1. You should really abide by the formula timeStep < maxSubSteps * fixedTimeStep if you don't want to lose simulation time.
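A worked example with fixedTimeStep = 1/60: if a frame takes 0.1 s and maxSubSteps is 1, the code above computes 6 substeps but clamps them to 1, and since m_localTime was already reduced by the full 6 * fixedTimeStep, the other 5 substeps' worth of time is silently dropped. With maxSubSteps = 7 you satisfy 0.1 < 7 * (1/60) ≈ 0.117, all 6 substeps run, and no time is lost.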

Evaluate function at constant speed relative to arc length

I'm implementing a realtime graphics engine (C++ / OpenGL) that moves a vehicle over time along a specified course that is described by a polynomial function. The function itself was programmatically generated outside the application and is of a high order (I believe >25), so I can't really post it here (I don't think it matters anyway). During runtime the function does not change, so it's easy to calculate the first and second derivatives once to have them available quickly later on.
My problem is that I have to move along the curve with a constant speed (say 10 units per second), so my function parameter is not equal to the time directly, since the arc length between two points x1 and x2 differs dependent on the function values. For example the difference f(a+1) - f(a) may be way larger or smaller than f(b+1) - f(b), depending on how the function looks at points a and b.
I don't need a 100% accurate solution, since the movement is only visual and will not be processed any further, so any approximation is OK as well. Also please keep in mind that the whole thing has to be calculated at runtime each frame (60fps), so solving huge equations with complex math may be out of the question, depending on computation time.
I'm a little lost on where to start, so even any train of thought would be highly appreciated!
Since the criterion was not to have an exact solution, but a visually appealing approximation, there were multiple possible solutions to try out.
The first approach (suggested by Alnitak in the comments and later answered by coproc) I implemented, which is approximating the actual arclength integral by tiny iterations. This version worked really well most of the time, but was not reliable at really steep angles and used too many iterations at flat angles. As coproc already pointed out in the answer, a possible solution would be to base dx on the second derivative.
All these adjustments could be made; however, I need a runtime-friendly algorithm. With this one it is hard to predict the number of iterations, which is why I was not happy with it.
The second approach (also inspired by Alnitak) utilizes the first derivative by "pushing" the vehicle along the calculated slope (which is equal to the derivative at the current x value). The function for calculating the next x value is really compact and fast. Visually there is no obvious inaccuracy and the result is always consistent. (That's why I chose it.)
float current_x = ...; // stores the current x value
float f(float x) { ... }
float f_derv(float x) { ... }

void calc_next_x(float units_per_second, float time_delta) {
    float arc_length = units_per_second * time_delta;
    float derv_squared = f_derv(current_x) * f_derv(current_x);
    current_x += arc_length / sqrt(derv_squared + 1);
}
This approach, however, will probably only be accurate enough at high frame rates (mine is >60fps), since the object is always pushed along a straight line whose length depends on the frame time.
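For reference, the formula in calc_next_x comes straight from the arc length element ds = sqrt(1 + f'(x)^2) dx: solving for dx with ds = arc_length gives dx = arc_length / sqrt(f'(x)^2 + 1), i.e. the step in x that would cover the desired distance if the curve were locally straight.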
Given the constant speed and the time between frames the desired arc length between frames can be computed. So the following function should do the job:
#include <cmath>

typedef double (*Function)(double);

double moveOnArc(Function f, const double xStart, const double desiredArcLength, const double dx = 1e-2)
{
    double arcLength = 0.;
    double fPrev = f(xStart);
    double x = xStart;
    double dx2 = dx*dx;
    while (arcLength < desiredArcLength)
    {
        x += dx;
        double fx = f(x);
        double dfx = fx - fPrev;
        arcLength += std::sqrt(dx2 + dfx*dfx);
        fPrev = fx;
    }
    return x;
}
Since you say that accuracy is not a top criterion, with an appropriately chosen dx the above function might work right away. Of course, it could be improved by adjusting dx automatically (e.g. based on the second derivative) or by refining the endpoint with a binary search.
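To illustrate how this could be driven per frame (a sketch; course() is a hypothetical stand-in for the generated polynomial, not part of any real API):
// Hypothetical stand-in for the externally generated course polynomial.
double course(double x) { return 0.05 * x * x; }

double currentX = 0.0; // parameter position along the course

void updateVehicle(double unitsPerSecond, double frameDelta)
{
    // Desired arc length this frame, e.g. 10 units/s * (1/60) s at 60 fps.
    const double desiredArc = unitsPerSecond * frameDelta;
    currentX = moveOnArc(course, currentX, desiredArc);
}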

Statistical method to know when enough performance test iterations have been performed

I'm doing some performance/load testing of a service. Imagine the test function like this:
bytesPerSecond = test(filesize: 10MB, concurrency: 5)
Using this, I'll populate a table of results for different sizes and levels of concurrency. There are other variables too, but you get the idea.
The test function spins up concurrency requests and tracks throughput. This rate starts off at zero, then spikes and dips until it eventually stabilises on the 'true' value.
However, it can take a while for this stability to occur, and there are a lot of combinations of input to evaluate.
How can the test function decide when it's performed enough samples? By enough, I suppose I mean that the result isn't going to change beyond some margin if testing continues.
I remember reading an article about this a while ago (from one of the jsperf authors) that discussed a robust method, but I cannot find the article any more.
One simple method would be to compute the standard deviation over a sliding window of values. Is there a better approach?
IIUC, you're describing the classic problem of estimating the confidence interval of the mean with unknown variance. That is, suppose you have n results, x1, ..., xn, where each xi is a sample from some process about which you don't know much: not the mean, not the variance, and not the distribution's shape. For some required confidence level, you'd like to know whether n is large enough that, with high probability, the true mean lies within some interval around your sample mean.
(Note that with relatively-weak conditions, the Central Limit Theorem guarantees that the sample mean will converge to a normal distribution, but to apply it directly you would need the variance.)
So, in this case, the classic solution to determine if n is large enough, is as follows:
Start by calculating the sample mean μ = (1/n) · Σi xi, and the normalized sample variance s² = Σi (xi − μ)² / (n − 1).
Depending on the size of n:
If n > 30, the confidence interval is approximated as μ ± z_{α/2} · (s / √n), where z_{α/2} is the standard normal quantile for your chosen confidence level α.
If n < 30, the confidence interval is approximated as μ ± t_{α/2} · (s / √n), where t_{α/2} comes from Student's t distribution with n − 1 degrees of freedom (see a t table).
If the resulting interval is tight enough, stop. Otherwise, increase n.
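As a rough sketch of that stopping test (my code, not from the answer; it hardcodes a 95% confidence level, so z_{α/2} ≈ 1.96, and only uses the z approximation, i.e. it assumes n > 30):
#include <cmath>
#include <vector>

// True when the 95% confidence interval around the sample mean is
// narrower than +/- margin, i.e. when sampling can stop.
bool enoughSamples(const std::vector<double>& xs, double margin)
{
    const double n = static_cast<double>(xs.size());
    if (n <= 30.0) return false; // the z approximation needs n > 30

    double mean = 0.0;
    for (double x : xs) mean += x;
    mean /= n;

    double s2 = 0.0; // normalized sample variance
    for (double x : xs) s2 += (x - mean) * (x - mean);
    s2 /= (n - 1.0);

    const double z = 1.96; // z_{alpha/2} for 95% confidence
    return z * std::sqrt(s2 / n) < margin;
}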
Stability means rate of change (derivative) is zero or close to zero.
The test function spins up concurrency requests and tracks throughput.
This rate starts off at zero, then spikes and dips until it eventually
stabilises on the 'true' value.
I would track your past throughput values, for example the last X values or so. From these values I would calculate the rate of change (the derivative of your throughput). If the derivative is close to zero, then the test is stable and I would stop it.
How do you find X? Instead of a constant value such as 10, choosing a value based on the maximum number of tests may be more suitable, for example:
X = max(10, max_test_count * 0.01)
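A minimal sketch of that idea (the window size and flatness threshold are illustrative choices of mine, not values from the answer):
#include <cmath>
#include <deque>

// Feed in each new throughput sample; returns true once the average
// rate of change across the last `window` samples (window >= 2) is
// within `epsilon` of zero.
bool isStable(std::deque<double>& history, double sample,
              std::size_t window, double epsilon)
{
    history.push_back(sample);
    if (history.size() > window) history.pop_front();
    if (history.size() < window) return false; // not enough data yet

    // Discrete derivative averaged over the window.
    const double slope = (history.back() - history.front())
                         / static_cast<double>(window - 1);
    return std::fabs(slope) < epsilon;
}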

WebAudio: how does timeConstant in setTargetAtTime work?

I want to rapidly fade out an oscillator in order to remove the pop/hiss I get from simply stopping it. Chris Wilson proposed the technique of calling setTargetAtTime on the gain.
Now I don't quite grasp its last parameter, timeConstant:
What's its unit? Seconds?
What do I have to put in there to reach the target value in 1 ms?
That Chris Wilson guy, such a trouble. :)
setTargetAtTime is an exponential falloff. The parameter is a time constant:
"timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value 1 - 1/e (around 63.2%) given a step input response (transition from 0 to 1 value)."
So, for every "timeConstant" length of time, the level will drop by a bit under two-thirds (63.2%, presuming the gain was 1 and you're setting a target of 0). At some point the falloff gets so close to zero that it's below the noise threshold, and you don't need to worry about it. It won't ever "get to the target value" - it approaches it asymptotically, although of course at some point the difference falls below the precision you can represent in a float.
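In formula form (this is the curve the Web Audio spec prescribes for setTargetAtTime, with T0 the start time, V0 the value at T0 and V1 the target):
v(t) = V1 + (V0 - V1) * e^(-(t - T0) / timeConstant)
So the unit of timeConstant is seconds. To close all but a fraction ε of the gap to the target after T seconds, you need timeConstant = T / ln(1/ε); for example, to be within 1% of the target 1 ms later, timeConstant ≈ 0.001 / ln(100) ≈ 0.000217 s.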
I'd suggest experimenting a bit, but here's a quick guess to get you started:
// only setting this up as a var to multiply it later - you can hardcode.
// initial value is 1 millisecond - experiment with this value if it's not fading
// quickly enough.
var timeConstant = 0.001;
gain = ctx.createGain();
// connect up the node in place here
gain.gain.setTargetAtTime(0, ctx.currentTime, timeConstant);
// 8x TC leaves e^-8, i.e. about 0.03% of the original level
// - more than enough to smooth the envelope off.
myBufferSourceNode.stop( ctx.currentTime + (8 * timeConstant) );
Though I realize this might not be technically correct (given the exponential nature of the time constant), I've been using this formula to "convert" from seconds to a time constant:
function secondsToTimeConstant( sec ){
    return ( sec * 2 ) / 10;
}
...this was just via trial and error, but it more or less has been working out for me
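For what it's worth, there is a reason this trial-and-error formula behaves sensibly: if you call the fade "done" once all but e^-5 ≈ 0.7% of the gap to the target is closed, then a fade lasting sec seconds needs a time constant of sec / 5, which is exactly (sec * 2) / 10.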

Temperature Scale in SA

First, this is not a question about temperature iteration counts or automatically optimized scheduling. It's about how the magnitude of the data relates to the scaling of the exponentiation.
I'm using the classic formula:
if(delta < 0 || exp(-delta/tK) > random()) { // new state }
The input to the exp function is negative because delta/tK is positive, so the exp result is always less than 1. The random function also returns a value in the 0 to 1 range.
My test data is in the range 1 to 20, and the delta values are below 20. I pick a start temperature equal to the initial computed temperature of the system and linearly ramp down to 1.
In order to get SA to work, I have to scale tK. The working version uses:
exp(-delta/(tK * .001)) > random()
So how does the magnitude of tK relate to the magnitude of delta? I found the scaling factor by trial and error, and I don't understand why it's needed. To my understanding, as long as delta > tK and the step size and number of iterations are reasonable, it should work. In my test case, if I leave out the extra scale the temperature of the system does not decrease.
The various online sources I've looked at say nothing about working with real data. Sometimes they include the Boltzmann constant as a scale, but since I'm not simulating a physical particle system that doesn't help. Examples (typically with pseudocode) use values like 100 or 1000000.
So what am I missing? Is scaling another value that I must set by trial and error? It's bugging me because I don't just want to get this test case running, I want to understand the algorithm, and magic constants mean I don't know what's going on.
Classical SA has 2 parameters: startingTemperature and cooldownSchedule (= what you call scaling).
Configuring 2+ parameters is annoying, so in OptaPlanner's implementation I automatically calculate the cooldownSchedule based on the timeGradient (a double going from 0.0 to 1.0 over the solver time). This works well. As a guideline for the startingTemperature, I use the maximum score diff of a single move. For more information, see the docs.
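To see why the magnitude of tK relative to delta matters, some worked numbers (mine, not from either post). The acceptance probability depends only on the ratio delta/tK. For delta = 10:
tK = 20: exp(-10/20) = exp(-0.5) ≈ 0.61, so uphill moves are usually accepted
tK = 1: exp(-10/1) = exp(-10) ≈ 0.000045, so uphill moves are almost never accepted
So the temperature needs to be on the same order of magnitude as your typical deltas, not merely satisfy "delta > tK"; when the magnitude of the data changes, a scale factor on tK is exactly what compensates.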

Possibility of flash Math.random() returning 1

We all know good old Math.random(). It returns a random floating point number between 0 and 1.
What I can't seem to find any evidence about is if zero or one is exclusive or inclusive.
I know that if they are inclusive, the probability of hitting either one of these values is seriously low.
But I can't help but wonder whether I should waste an if statement looking for it or not.
In my current scenario zero is not a problem, but one is.
var __rand:uint = Math.floor( Math.random() * myArray.length );
var result:String = myArray[__rand];
If the 1 in Math.random() is exclusive, then I know the result will NEVER be 1, and therefore __rand can never equal myArray.length and will always be below it. But I wasn't sure if I should account for it in some performance-critical code.
PS: The code above is NOT the performance critical code, just an example
Basically, just 2 simple questions:
1) Is returning one possible or impossible?
2) If possible, is it worth accounting for it?
As per the docs:
Returns a pseudo-random number n, where 0 <= n < 1. The number
returned is calculated in an undisclosed manner, and is
"pseudo-random" because the calculation inevitably contains some
element of non-randomness.
So it can be 0 but not 1. You don't have to worry about index out of bounds.
By the way, if this was really performance-critical code, you would be better off casting the value to int or uint, e.g. uint(Math.random() * myArray.length), rather than using Math.floor (see this performance test).
Math.random() will return a number between 0 (inclusive) and 1 (exclusive). It will never return 1.