I have a program running in real time with a variable framerate; it might run at 15 fps or at 60 fps. I want an event to happen, on average, once every 5 seconds. Each frame, I want to call a function which takes the time since the last frame as input and returns True, on average, once every 5 seconds of elapsed time. I figure this has something to do with a Poisson distribution. How would I do this?
It really depends on what distribution you want; all you specified was the mean. I would, as you said, expect a Poisson distribution to suit your needs nicely, but you also put "uniform random variable" in the title, which is a different distribution. Anyway, let's go with the former.
So if a Poisson distribution is what you want, you can generate samples fairly easily using the cumulative distribution function. Just follow the pseudocode here: Generating Poisson RVs, with 5 seconds as your value for lambda. Let's call this function Poisson_RN().
The algorithm at this point is pretty simple.
global float next_time = current_time()

boolean function foo()
    if (next_time < current_time())
        next_time = current_time() + Poisson_RN();
        return true;
    return false;
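For reference, a minimal C++ sketch of this scheme, with a 5-second mean. One hedge: the pseudocode names a Poisson sample, but the gaps between events of a Poisson process are exponentially distributed, so the sketch below draws the next gap from std::exponential_distribution instead:

#include <chrono>
#include <random>

static std::mt19937 rng{std::random_device{}()};
// Gaps of a Poisson process are exponential; rate 1/5 gives a 5-second mean.
static std::exponential_distribution<double> gap(1.0 / 5.0);

static double current_time() {
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}

static double next_time = current_time() + gap(rng);

bool foo() {
    if (next_time < current_time()) {
        next_time = current_time() + gap(rng);
        return true;
    }
    return false;
}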
A random variable that generates true/false outcomes in fixed proportions with independent trials is a Bernoulli random variable (the number of frames between two trues then follows a geometric distribution). Each frame, generate true with probability 1/(5*fps), which is dt/5 for a frame lasting dt seconds, and in the long run you will get an average of one true per 5 seconds.
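A minimal sketch of such a per-frame trial, written against the frame's dt so it also holds at a variable framerate:

#include <random>

// Fire with probability dt/5 each frame; over time this averages out
// to one true per 5 seconds regardless of the framerate.
bool fire_event(double dt_seconds) {
    static std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < dt_seconds / 5.0;   // equals 1/(5*fps) at a fixed framerate
}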
All examples that I can find on the Internet just visualize the result array of the function computeSpectrum, but I am tasked with something else.
I generate a music note, and by analyzing the result array I need to be able to say which note is playing. I figured out that I need to set the second parameter of the call, FFTMode, to true, so that it returns sound frequencies. I thought it should then return only one non-zero value, which I could use to determine what note I generated using Math.sin, but that is not the case.

Can somebody suggest a way to accomplish this task? Using soundMixer.computeSpectrum is a requirement, because I am going to analyze more complex sounds later.
The FFT transforms your signal window into a set of discrete basis frequencies (bins), so unless 440 Hz falls exactly on one of them you will obtain more than just one nonzero value! Even for a single sine wave you can obtain two peaks, because the spectrum of a real signal is mirrored. Here is an example:

As you can see, when the tone sits exactly on a bin frequency the FFT response is a single peak, but for nearby frequencies there are more peaks.

Depending on the shape of the signal you can also obtain a continuous spectrum with peaks instead of discrete values.

The frequency of the i-th bin is f(i) = i * samplerate / N, where i = {0, 1, 2, ..., (N/2)-1} is the bin index (the first bin is the DC offset, so it is not a frequency) and N is the number of samples passed to the FFT.

So if you want to detect harmonics (multiples of a single fundamental frequency), set the samplerate and N so that samplerate/N is that fundamental frequency or a divisor of it. That way each harmonic sine wave produces just one peak, which eases up the computations.
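To make this concrete, here is a hedged C++ sketch; dominantFrequency and nearestMidiNote are hypothetical helpers for illustration, not part of any Flash or FFT API:

#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper: given the first N/2 magnitudes of an N-point FFT,
// return the frequency of the strongest bin, f(i) = i * samplerate / N.
double dominantFrequency(const std::vector<double>& magnitude,
                         double samplerate, std::size_t N) {
    std::size_t peak = 1;                          // skip bin 0 (DC offset)
    for (std::size_t i = 2; i < magnitude.size(); ++i)
        if (magnitude[i] > magnitude[peak]) peak = i;
    return peak * samplerate / N;
}

// Map a frequency to the nearest MIDI note number (A4 = 440 Hz = note 69).
int nearestMidiNote(double freq) {
    return static_cast<int>(std::lround(12.0 * std::log2(freq / 440.0) + 69.0));
}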
First, this is not a question about temperature iteration counts or automatically optimized scheduling. It's about how the magnitude of the data relates to the scaling of the exponentiation.
I'm using the classic formula:
if(delta < 0 || exp(-delta/tK) > random()) { // new state }
The input to the exp function is negative because delta/tK is positive, so the exp result is always less than 1. The random function also returns a value in the 0 to 1 range.
My test data is in the range 1 to 20, and the delta values are below 20. I pick a start temperature equal to the initial computed temperature of the system and linearly ramp down to 1.
In order to get SA to work, I have to scale tK. The working version uses:
exp(-delta/(tK * .001)) > random()
So how does the magnitude of tK relate to the magnitude of delta? I found the scaling factor by trial and error, and I don't understand why it's needed. To my understanding, as long as delta > tK and the step size and number of iterations are reasonable, it should work. In my test case, if I leave out the extra scale the temperature of the system does not decrease.
The various online sources I've looked at say nothing about working with real data. Sometimes they include the Boltzmann constant as a scale, but since I'm not simulating a physical particle system that doesn't help. Examples (typically with pseudocode) use values like 100 or 1000000.
So what am I missing? Is scaling another value that I must set by trial and error? It's bugging me because I don't just want to get this test case running, I want to understand the algorithm, and magic constants mean I don't know what's going on.
Classical SA has 2 parameters: startingTemperature and cooldownSchedule (= what you call scaling).

Configuring 2+ parameters is annoying, so in OptaPlanner's implementation I automatically calculate the cooldownSchedule based on the timeGradient (a double going from 0.0 to 1.0 during the solver time). This works well. As a guideline for the startingTemperature, I use the maximum score diff of a single move. For more information, see the docs.
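For illustration, a hedged sketch of how those two parameters might fit together; the linear cooldown below is an assumption for readability, not OptaPlanner's actual schedule:

#include <cmath>
#include <random>

// delta: score worsening of the candidate move (> 0 means worse).
// maxMoveDelta: assumed starting temperature, e.g. the maximum score
// diff a single move can cause (the guideline above).
// timeGradient: 0.0 at the start of solving, 1.0 at the end.
bool acceptMove(double delta, double maxMoveDelta, double timeGradient,
                std::mt19937& rng) {
    if (delta < 0) return true;                       // improving move
    double tK = maxMoveDelta * (1.0 - timeGradient);  // assumed linear cooldown
    if (tK <= 0.0) return false;                      // fully cooled: greedy
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return std::exp(-delta / tK) > u(rng);
}

Tying the starting temperature to the data's own delta magnitude is what removes the magic constant: the exponent stays in a useful range no matter how large or small your scores are.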
What does the term "step" mean in Bullet physics?

What do the function stepSimulation() and its parameters mean?

I have read the documentation but could not get much out of it.

Any valid explanation would be of great help.
I know I'm late, but I thought the accepted answer was only marginally better than the documentation's description.
timeStep: The number of seconds, not milliseconds, that have passed since the last call to stepSimulation.

maxSubSteps: Should generally stay at one, so Bullet interpolates current values on its own. A value of zero implies a variable tick rate, meaning Bullet advances the simulation by exactly timeStep seconds instead of interpolating. This feature is buggy and not recommended. A value greater than one must always satisfy the inequality timeStep < maxSubSteps * fixedTimeStep or you're losing time in the simulation.

fixedTimeStep: Inversely proportional to the simulation's resolution: the resolution increases as this value decreases. Keep in mind that a higher resolution costs more CPU.
btDynamicsWorld::stepSimulation(
    btScalar timeStep,
    int maxSubSteps = 1,
    btScalar fixedTimeStep = btScalar(1.) / btScalar(60.));
timeStep - the time that has passed since the last simulation step.

Internally, the simulation advances in constant increments of fixedTimeStep:

fixedTimeStep ~ 0.01666666 = 1/60

If timeStep is 0.1, Bullet will run 6 internal simulation steps (timeStep / fixedTimeStep).

To make movements glide smoothly, Bullet interpolates the final step results according to the remainder of the division timeStep / fixedTimeStep.
timeStep - the amount of time in seconds to step the simulation by. Typically you're going to be passing it the time since you last called it.
maxSubSteps - the maximum number of steps that Bullet is allowed to take each time you call it.
fixedTimeStep - regulates the resolution of the simulation. If your balls penetrate your walls instead of colliding with them, try decreasing it.
Here I would like to address the claim in Proxy's answer about the special meaning of the value 1 for maxSubSteps. There is only one special value, 0, and you most likely don't want to use it, because the simulation then runs with a non-constant time step. All other values behave the same way. Let's have a look at the actual code:
if (maxSubSteps)
{
    m_localTime += timeStep;
    ...
    if (m_localTime >= fixedTimeStep)
    {
        numSimulationSubSteps = int(m_localTime / fixedTimeStep);
        m_localTime -= numSimulationSubSteps * fixedTimeStep;
    }
}
...
if (numSimulationSubSteps)
{
    //clamp the number of substeps, to prevent simulation grinding spiralling down to a halt
    int clampedSimulationSteps = (numSimulationSubSteps > maxSubSteps) ? maxSubSteps : numSimulationSubSteps;
    ...
    for (int i = 0; i < clampedSimulationSteps; i++)
    {
        internalSingleStepSimulation(fixedTimeStep);
        synchronizeMotionStates();
    }
}
So, there is nothing special about maxSubSteps equal to 1. You should really abide by the formula timeStep < maxSubSteps * fixedTimeStep if you don't want to lose time.
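Putting it together, a minimal game-loop sketch, assuming an already initialized btDiscreteDynamicsWorld* named world:

#include <btBulletDynamicsCommon.h>
#include <chrono>

void runLoop(btDiscreteDynamicsWorld* world, bool& running) {
    using clock = std::chrono::steady_clock;
    auto last = clock::now();
    while (running) {
        auto now = clock::now();
        float dt = std::chrono::duration<float>(now - last).count();
        last = now;
        // At 15 fps, dt is about 0.066 s; with 1/60 s substeps,
        // maxSubSteps = 5 keeps timeStep < maxSubSteps * fixedTimeStep.
        world->stepSimulation(dt, 5, 1.0f / 60.0f);
        // ... render, poll input, update `running` ...
    }
}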
I have a list of documents, each having a relevance score for a search query. I need older documents to have their relevance score dampened, to try to introduce their date into the ranking process. I already tried fiddling with functions such as 1/(1+date_difference), but the reciprocal function discriminates too strongly between recent dates that are close together.
I was thinking of a mathematical function with range (0..1) and domain (0..x) to scale their score, where the x-axis is the age of a document. It's best to explain what I further need from the function with an image:
Decaying behavior is often modeled well by an exponential function (many decaying processes in nature follow it, too). You would use 2 positive parameters A and B and get

y(x) = A exp(-B x)

Since you want a y-range of [0,1], set A = 1. Smaller values of B give slower decay.
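As a tiny sketch (the age unit is whatever B is tuned against):

#include <cmath>

// y(x) = exp(-B * x): 1.0 for a brand-new document, decaying with age.
// B is a tuning constant; smaller B means slower decay.
double expDecay(double age, double B) {
    return std::exp(-B * age);
}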
If a simple 1/(1+x) decreases too quickly too soon, a sigmoid such as the falling curve 1/(1+e^x) or one built from the error function might be better suited to your purpose. Map the dates so that the current date lands somewhere in the negative numbers, and you get a value that stays near 1 for some configurable time and then decreases towards a base value.
log((x+1) - age_of_document)

where the base of the logarithm is (x+1). Note that x is as per your diagram: the "threshold". If the age of the document is greater than x, the score goes negative. Multiply by the maximum possible score to introduce scaling.

E.g. domain = (0,10) with a maximum score of 10: 10 * log(11 - age) / log(11)
A bit late, but as thiton says, you might want to use a sigmoid function instead, since it has a "floor" value for your long-tail data points. E.g.:

0.8/(1 + 5^(x-3)) + 0.2. You can adjust the constant 5 to control the slope of the curve and the 3 to shift its midpoint. The 0.2 is where the floor will be.
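A sketch of that curve with its constants named; the defaults mirror the example values above:

#include <cmath>

// 0.8 / (1 + 5^(x - 3)) + 0.2: stays near 1.0 for young documents,
// then drops around the midpoint and levels off at the floor.
double sigmoidDecay(double age, double steepness = 5.0,
                    double midpoint = 3.0, double floorValue = 0.2) {
    return (1.0 - floorValue)
           / (1.0 + std::pow(steepness, age - midpoint)) + floorValue;
}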
I want a function which takes, as input, the number of seconds elapsed since the last time it was called, and returns true or false for whether an event should have happened in that time period. I want it to fire, on average, once per X seconds of time passed, say 5 seconds. I am also interested in whether it is possible to do this without any state, which the answer to this question used.

I guess to be fully accurate it would have to return an integer for the number of events that should have happened, in case it is only called once every 10*X seconds or so, so bonus points for that!
It sounds like you're describing a Poisson process, where the number of events in a given time interval follows the Poisson distribution with parameter lambda = 1/X.

The way to use the expression on the latter page is as follows, for a given value of lambda and a window length t (a code sketch follows after the notes below):
1. Calculate a random number between zero and one; call this p.
2. Calculate Pr(k=0) (i.e., exp(-lambda*t) * (lambda*t)**0 / factorial(0)).
3. If this number is bigger than p, then the number of simulated events is 0. END.
4. Otherwise, calculate Pr(k=1) and add it to Pr(k=0).
5. If this cumulative sum is bigger than p, then the answer is 1. END.
6. ...and so on.
Note that, yes, this can end up with more than one event in a time period, if t is large compared with 1/lambda (i.e., X). If t is always going to be small compared to 1/lambda, then you are very unlikely to get more than one event in the period, and the algorithm simplifies considerably (if p < exp(-lambda*t), then 0 events, else 1).
Note 2: there is no guarantee that you will get at least one event per interval X. It's just that it'll average out to that.
(the above is rather off the top of my head; test your implementation carefully)
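For concreteness, a minimal C++ sketch of the procedure above:

#include <cmath>
#include <random>

// Number of events in a window of t seconds, for a process with rate
// lambda = 1/X, by walking the Poisson CDF until it exceeds p.
int eventsInWindow(double t, double lambda, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double p = uniform(rng);
    double mean = lambda * t;
    double prob = std::exp(-mean);   // Pr(k = 0)
    double cumulative = prob;
    int k = 0;
    while (cumulative <= p) {        // "...and so on"
        ++k;
        prob *= mean / k;            // Pr(k) = Pr(k-1) * mean / k
        cumulative += prob;
    }
    return k;
}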
Assume some event type happens on average once per 10 seconds, and you want to print a simulated list of timestamps at which the events happened.

A good method would be to generate a random integer in the range [0,9] every second; if it is 0, fire the event for that second. Of course you can control the resolution: generate a random integer in the range [0,99] every 0.1 seconds, and if it comes up 0, fire the event for that decisecond.
Assuming there is no dependency between events, there is no need to keep state.
To find out how many times the event happens in a given timeslice, just generate enough random integers, according to the required resolution.
Edit
You should use a high resolution (at least 20 draws per expected event period) for the simulation to be valid.
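A short sketch of this discrete method at 0.1-second resolution (mean of one event per 10 seconds, so each tick fires with probability 1/100):

#include <random>
#include <vector>

std::vector<double> simulateTimestamps(double totalSeconds) {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> d(0, 99);   // [0,99], fire on 0
    std::vector<double> timestamps;
    for (double tick = 0.0; tick < totalSeconds; tick += 0.1)
        if (d(rng) == 0)
            timestamps.push_back(tick);
    return timestamps;
}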