Coder's block: How to fire a timer at intervals, compensating for early/late firing - language-agnostic

I'm having a silly-yet-serious case of coder's block. Please help me work through it so my brain stops hurting and refusing to answer my questions.
I want to fire a timer at intervals up to a final time. For example, if t = 0, my goal is 100, and my interval is 20, I want to fire at 0, 20, 40, 60, 80, and 100.
The timer is not precise, and may fire early or late. If it first fires at 22, I want to fire again in 18. If it first fires at 19, I want to fire in 21. All I know when the timer fires is the current time, goal time, and firing interval. What do I do?
Edit: Sorry, I wasn't too specific about what the heck I'm actually asking. I'm trying to figure out what kind of math (probably involving taking the modulus of something) needs to be done to calculate the delay until the next firing. Ideally, I also want the timer to be matched to the end time, so if I start the timer initially at 47, it schedules itself to fire at 60 and not at 67, and the last firing will still be at 100.

If the primitive functionality you have is "schedule X to fire once at time T", then your procedure handling X should know the time T0 at which it was supposed to fire (the time T1 at which it actually fired is not needed) as well as the desired firing interval DT, and should schedule itself for time T0+DT. If the primitive is "fire D from now", then it should schedule for D = T0+DT-T1; if that's negative, it needs to schedule itself again immediately, but record that the scheduled time and the "was supposed to fire at" time differ, so it can keep compensating on subsequent firings.
Somebody already mentioned that .NET's Timer does this for you; so does Python's sched stdlib module; so, I'm sure, do many other languages / frameworks / libraries. But in the end you can build it yourself if needed on top of either of the single-scheduling primitives above (one taking an absolute time, one taking a relative delta from now), as long as you keep track of desired as well as actual firing times!
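For the "fire D from now" flavour, a minimal sketch in Python; threading.Timer here is just a stand-in for whatever single-shot primitive you actually have, and the constants are the example values from the question:

import threading
import time

START, INTERVAL, END = 0.0, 20.0, 100.0   # example values from the question
t0 = time.monotonic()                     # map the series onto a monotonic clock

def fire(target):
    # `target` is the time this firing was *supposed* to happen.
    now = time.monotonic() - t0
    print(f"scheduled for {target:g}, actually fired at {now:.3f}")
    next_target = target + INTERVAL
    if next_target > END:
        return                            # series finished
    # Compensate: measure the delay from the intended time, not the actual one.
    delay = max(0.0, next_target - now)
    threading.Timer(delay, fire, args=(next_target,)).start()

fire(START)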

I would use the system clock to check your interval. For example if you know that your interval is every 20 minutes, fire off the first interval, check what the time was, and adjust the next interval start time.

If your language/platform's underlying timers don't do what you want, then it's usually best to implement timers in terms of "target times", meaning the absolute time at which you want the timer to fire next. If your platform asks for an "absolute time", then you give it the target time. If it asks for a "relative time" (or, like sleep, a duration), then it is of course target_time - current_time.
The quick way to calculate each target time in turn is:
When you first set up the timer, calculate the "interval" (which might have to be a floating-point value, assuming that won't cripple performance) and also the "target time" of the first timer fire (again, you might need fractions). Record both, and set your underlying timer mechanism, whatever that is.
When the timer fires, work out the next target time by adding the interval to the previous target time.
The problem with that approach is that you might get some very tiny accumulating errors as you add the interval to the target time (or not so tiny, if you haven't used floats).
So the longer and more accurate way is to store the very first start time, the target finishing time, and the number of firings (n). Then you recalculate the target time for each new firing in turn, which makes sure that you don't get cumulative rounding errors. The formula for that is:
target(k) = start + ((target_end - start) * k) / n
Or if you prefer:
target(k) = (k/n) * target_end + (1 - k/n) * start
Where the firings of the timer are k=1, 2, 3, ... n. I was going to make it 0-based, then realised that was daft ;-)
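As a quick illustration of that formula, a sketch in Python (the helper name is mine):

def make_targets(start, end, n):
    # Recompute each target from the endpoints so rounding errors don't accumulate.
    return [start + (end - start) * k / n for k in range(1, n + 1)]

# e.g. make_targets(0, 100, 5) -> [20.0, 40.0, 60.0, 80.0, 100.0]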
The last thing you have to wrestle with when implementing timers is the difference between "wall clock" time, and real elapsed time as measured by your hardware clock. Wall clock time can suddenly jump forwards or backwards (either by an hour if your wall clock is affected by daylight savings, or by any amount if the system's clock is adjusted or corrected). Real time always increases (as long as it doesn't wrap). Which you want your timer to respect depends on the intended purpose. If you want to know when your last bus leaves, you want a timer firing daily according to wall clock time, but most commonly you care about real time elapsed. A good timer API has options for these kinds of things.

Build a table listing the desired fire times, say 10:00, 10:20, 10:40, 11:00, and 11:20.
If your timer function takes an absolute time, the rest is trivial. Set it to fire at each of the desired times. If for whatever reason you can only set one timer at a time, okay, set it to fire at the first desired time. When that event happens, set it to fire again at the next time in the table, without regard to what time it is now. Each time through, pick up the next time until you're done.
If your timer function only accepts an interval, no big deal either. Find the difference between the desired time and the current time, and set it to fire at that interval. For example, if the first time is 10:00 and it's now 9:23, set it to fire in 10:00 minus 9:23 equals 37 minutes. Then when that happens, set the interval to the next desired time minus the current time. If it really fired at 10:02, then the interval is 10:20 minus 10:02 equals 18 minutes. Etc.
You probably should check for the possibility that the next fire time has already passed. If the process can take longer than the interval you might run past it, and even if not, the system might have been down. If a fire time is missed, you may want to do catch up runs, or just skip it and go to the next desired time, depending on the details of your app.
If you can't keep the entire table -- like it goes on to infinity -- then just keep the next fire time. Each time through the process, add a fixed amount to the next fire time, without regard to when the current process ran. Then calculate the interval based on the current time. Like if you have a desired interval of 20 minutes going on forever starting at 10:00, and it's now 9:23, you set the first interval to 37 minutes. Say that actually happens at 9:59. You set the next fire time to 10:00 plus 20 minutes equals 10:20, i.e. base it on the goal time rather than the actual time. Then calculate the interval to the next fire time based on the current time, i.e. 10:20 minus 9:59 equals 21 minutes. Etc.
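A minimal sketch of that last approach in Python (the function and parameter names are mine; time.sleep stands in for whatever wait primitive you have):

import time

def run_periodic(task, interval, first_fire=None):
    # Keep only the next target time; advance it by the goal time,
    # not by when the task actually ran.
    next_fire = time.monotonic() if first_fire is None else first_fire
    while True:
        delay = next_fire - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        task()
        next_fire += interval
        # If next_fire is already in the past here, decide whether to run
        # catch-up firings immediately or skip ahead to the next future slot.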

Related

Moving blocks down y axis in pygame tetris clone [duplicate]

I am writing a Tetris program with PyGame, and came across a funny problem.
Before I ask the question, here is the pseudo-code:
while True:
    # In this part, the human controls the block to go left, right, or speed down
    if a key is pressed and the block isn't touching the floor:
        if the key is K-left:
            move piece left one step
        if the key is K-right:
            move piece right one step
        if the key is K-down:
            move piece down one step
    # This part of the code makes the piece fall by itself
    if the block isn't touching the floor:
        move block down one step
    # This part makes the while loop wait 0.4 seconds so that the block does not
    # move down so quickly
    wait 0.4 seconds
The problem is that, because of the "wait 0.4 seconds" part of the code, the part that the human controls can only move every 0.4 seconds. I would like it so that the block moves as fast as the human can press the key, while at the same time, the block dropping every 0.4 seconds. How could I arrange the code so that it will do that? Thanks!
The main problem I see here is that you are limiting your framerate using a wait of 0.4 seconds.
You should not limit framerate, but instead, you should limit how fast your block falls.
If I remember correctly, there was a formula you could use to do just that. It was based on the amount of time elapsed since the last frame. It looked like:
fraction of a second elapsed since last frame * distance you want your block to move in a second
This way, you can keep your mainloop intact, and the move processing will happen at every frame.
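A minimal pygame-flavoured sketch of that idea (the fall speed, structure and names here are illustrative, not the asker's actual code):

import pygame

pygame.init()
screen = pygame.display.set_mode((320, 640))
clock = pygame.time.Clock()
FALL_SPEED = 2.5        # rows per second the piece should fall (illustrative)
fall_progress = 0.0

running = True
while running:
    dt = clock.tick(60) / 1000.0        # seconds elapsed since the last frame
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            pass                        # handle left/right/down immediately, every frame

    # Accumulate fractional movement; drop one row whenever a whole row is due.
    fall_progress += FALL_SPEED * dt
    while fall_progress >= 1.0:
        fall_progress -= 1.0
        # move the block down one step here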
You could also do...
...
# This part of the code makes the piece fall by itself
if the block isn't touching the floor and
        the block hasn't automatically moved in the last 0.4 seconds:
    move block down one step
...
Just realize you'll be doing a lot of polling if the user hasn't struck any keys.
You may try asking gamedev.stackexchange.com instead. Check the site for Game Loops, and check out other example pygame projects to see how they're doing it. Having a good game loop is essential and will take care of things for you such as user inputs and a consistent frame rate.
Edit: https://gamedev.stackexchange.com/questions/651/tips-for-writing-the-main-game-loop
When doing games you should always try to do something like this:
while not finished:
    events = get_events()   # get the user input
    # update the world based on the time that elapsed and the events
    world.update(events, dt)
    world.draw()            # render the world
    sleep(1/30s)            # go to next frame
The sleep time should be variable so that it takes into consideration the amount of time spent drawing and calculating the world updates.
The world update method would look something like this:
def update(self, events, dt):
    self.move(events)       # interpret user action
    self.elapsed += dt
    if self.elapsed > ADVANCE_TIME:
        self.piece.advance()
        self.elapsed = 0
The other way of implementing this (so you don't redraw too much) is to have events fired when the user orders a piece to be moved or when ADVANCE_TIME passes. In each event handler you would then update the world and redraw.
This is assuming you want the pieces to move one step at a time and not continuously. In any case, the change for continuous movement is pretty trivial.
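For the variable sleep mentioned above, one possible sketch in Python (the frame budget and helper name are mine):

import time

FRAME_TIME = 1 / 30                     # target frame duration, in seconds

def frame_sleep(frame_start):
    # Sleep only for whatever is left of the frame budget after updating
    # and drawing, so the loop ticks at a steady rate.
    remaining = FRAME_TIME - (time.monotonic() - frame_start)
    if remaining > 0:
        time.sleep(remaining)

At the top of each pass through the loop you would record frame_start = time.monotonic(), and call frame_sleep(frame_start) at the bottom instead of a fixed sleep.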

Increase Timer interval

I have a timer that calls the function 'bottleCreate' every 500 milliseconds. But I want that time to increase during the game (so that the bottles are created faster and the game gets more difficult). I don't know how to change that variable inside new Timer. Thanks
interval=500;
var my_timer=new Timer(interval);
my_timer.addEventListener(TimerEvent.TIMER, bottleCreate);
my_timer.start();
You want the game to get faster, so the variable needs to decrease, because less time between function calls will make it faster.
According to the Documentation of the Timer Class you can use the delay variable to change the interval speed.
So, to make it faster, you could simply write
my_timer.delay -= 50;
Each time you do this, the function will be called 50 ms sooner.
Be aware though, going beneath 20ms will cause problems, according to the Documentation.
Furthermore, each time you manipulate the delay variable, the timer will restart completely, with the same repeat count you use at initialization.

Smooth 60fps frame rate independent motion in AS3

I'm having trouble achieving frame rate independent motion in AS3 at 60fps. Every frame I measure the time since the previous frame, add it to an accumulator, and if the accumulator is greater than my target time, which is 16.666ms (60fps), a frame is simulated.
The problem is that the AS3 getTimer() only returns a time in milliseconds.
The delta times I get are often 16ms for the first frame, 16ms for the second, then 18ms for the third, and this pattern repeats. This averages out to 16.666. But in the first frame it is lower than the target time (16 < 16.666), so no frame is simulated. In the second frame the accumulator is higher than the target time, but slightly less than double it, so one frame is simulated. For the third frame 18ms pushes the accumulator over double the target time, so two frames are simulated.
So I'm getting this very jerky motion where no frames are rendered, then one, then two, then none, then one, then two, and this continues.
How would I get around this?
Wow... I thought I was the only one who found that out.
Yes, the timer class in AS3 is not accurate. It will only trigger every ~16ms which causes MAJOR issues at times.
If you want to see 2048 actions in 1000ms: FlashText Markup Language
(To test this, you'll need a method which takes 1ms to execute - just for even stats)
Notice the difference:
CORRECT: 1000ms | timer=0 == 1000 actions
AS3: 1000ms | timer=0 == 62.5 actions
I was able to write a class that works like this:
CORRECT: 1000ms | timer=0 == 1000 actions
RESHAPE TIMER: 1000ms | timer=0 == 1024 actions
NOTE:
it DOES NOT fire an action every ms
it DOES catch up to actions between the 16ms interval
an ACTION (the way I use it) is a method call, each method call can have its own body
The methodology I used to create this was catch-up... basically the first timer event will fire at 16ms... we know we have a full 16ms worth of code time to fire our own actions before the timer class fires again - that's where you inject sub-actions...
The highest I was able to produce was 2048 actions in 1000ms... more than 2 actions per ms.
Now... back to your problem
There is NO WAY to trigger a 0ms timer event. Based on my solution, if you want to by-pass the first 16ms of lag before the timer fires... you can dispatch an event which will fire within 2ms depending on the current system processes.
Possible Solution
If you take my approach for throwing your own actions within the 16ms, then you can build your own timer class. Use events for times under 16ms, when fired... fire 15 more - lol. Basically, you can create your own deltas between the 16ms.
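The catch-up idea, sketched in Python rather than AS3 (the rate, tick size, and names here are illustrative assumptions):

import time

RATE = 1000                 # desired actions per second
TICK = 0.016                # the coarse timer resolution we are stuck with (~16 ms)

def action():
    pass                    # whatever work one "action" represents

start = time.monotonic()
fired = 0
while fired < RATE:                                   # run roughly one second's worth
    time.sleep(TICK)                                  # the coarse timer tick
    due = min(RATE, int((time.monotonic() - start) * RATE))
    while fired < due:                                # catch up on everything owed
        action()
        fired += 1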

Zabbix trigger expression - detect a drop and stay in problem state

I have this trigger that fires upon a match of the rule below:
{monitoring:test.item.change(0)}<-100
When my graph goes down by over 100 units, an event gets created. The event should switch to OK status when the graph goes back up. The graph has different average values at different times of day and, besides, the item is a trapper value, which does not support flexible intervals. My problem is this: when the graph falls by over 100 units, let's say from 300 to 10, a PROBLEM situation is created. At the next interval, if the value is still low (e.g. 13), Zabbix creates an OK event, because although the value is still low, the expression does not return true, since the graph hasn't gone down by a further 100 units. Any ideas on how I could fix this? I have been trying to use
{{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100}
but Zabbix wouldn't take that expression. This is supposed to compare the last value of test.item to the average value of the past 30 minutes and raise an alert when the difference exceeds 100.
This, I believe, would sort out my problem situation of a false OK status when the graph remains at a low value.
EDIT: I think I have cracked it. Zabbix has accepted the below expression:
{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100
I think you'll soon realize that expression won't solve your targeted behavior and will keep on flapping between PROBLEM and OK.
You have just shifted the 'did a -100 change occur' check from 'the last and the previous-to-last' values to 'the last and the average of the last half hour'.
Checking if either there was an abrupt change OR
if the value is still too low will probably better mimic your expected scenario,
{monitoring:test.item.last(0)}>100 | {monitoring:test.item.max(#2)}<20
max(#2)<20 checks if the maximum of the last 2 values is below 20.
EDIT: After reading your comment maybe this approach (after some tweaking for your expected values) will better serve you,
({monitoring:test.item.avg(1800)}<10 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>20) | ({monitoring:test.item.avg(1800)}>100 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100)
This way, you'll better fit your trigger for the different volumes during the day.

Generate random variable in real-time without state

I want a function which takes, as input, the number of seconds elapsed since the last time it was called, and returns true or false for whether an event should have happened in that time period. I want it such that it will fire, on average, once per X time passed, say 5 seconds. I also am interested if it's possible to do without any state, which the answer from this question used.
I guess to be fully accurate it would have to return an integer for the number of events that should've happened, in the case of it being called once every 10*X times or something like that, so bonus points for that!
It sounds like you're describing a Poisson process, where the number of events in a given time interval follows a Poisson distribution with rate parameter lambda=1/X (i.e. mean lambda*t for an interval of length t).
The way to use the expression on the latter page is as follows, for a given value of lambda, and the parameter value of t:
Calculate a random number between zero and one; call this p
Calculate Pr(k=0) (ie, exp(-lambda*t) * (lambda*t)**0 / factorial(0))
If this number is bigger than p, then the number of simulated events is 0. END
Otherwise, calculate Pr(k=1) and add it to Pr(k=0).
If this number is bigger than p, then the answer is 1. END
...and so on.
Note that, yes, this can end up with more than one event in a time period, if t is large compared with 1/lambda (ie X). If t is always going to be small compared to 1/lambda, then you are very unlikely to get more than one event in the period, and so the algorithm is simplified considerably (if p < exp(-lambda*t), then 0, else 1).
Note 2: there is no guarantee that you will get at least one event per interval X. It's just that it'll average out to that.
(the above is rather off the top of my head; test your implementation carefully)
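A small Python sketch of that cumulative-probability walk (the function name and signature are mine; it returns the simulated number of events for the elapsed period):

import math
import random

def events_elapsed(dt, mean_interval):
    # Events occur on average once per `mean_interval` seconds; `dt` is the
    # time elapsed since the last call. Stateless apart from its arguments.
    lam = dt / mean_interval            # this is lambda*t from the text above
    p = random.random()
    term = math.exp(-lam)               # Pr(k = 0)
    k, cumulative = 0, term
    while p >= cumulative and term > 0.0:
        k += 1
        term *= lam / k                 # Pr(k) from Pr(k-1), no big factorials
        cumulative += term
    return k

For the question's example, events_elapsed(seconds_since_last_call, 5) will usually return 0 or 1 for short gaps, and larger counts only when the gap is much longer than 5 seconds.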
Assume some event type happens on average once per 10 seconds, and you want to print a simulated list of timestamps at which the events happened.
A good method would be to generate a random integer in the range [0,9] every second. If it is 0, fire the event for this second. Of course you can control the resolution: you can generate a random integer in the range [0,99] every 0.1 seconds, and if it comes up 0, fire the event for this decisecond.
Assuming there is no dependency between events, there is no need to keep state.
To find out how many times the event happens in a given timeslice - just generate enough random integers - according to the required resolution.
Edit
You should use high resolution (at least 20 randoms per period of one event) for the simulation to be valid.
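That approach in a couple of lines of Python (purely illustrative):

import random

def fire_this_tick(mean_interval, tick):
    # Called once per `tick` seconds; returns True with probability
    # tick / mean_interval, so events average once per `mean_interval`.
    return random.randrange(int(mean_interval / tick)) == 0

# e.g. poll every 0.1 s for a once-per-10-seconds event:
# fire_this_tick(10, 0.1) is True on about 1 call in 100.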