AS3: Synchronize Timer event to actual time?

I plan to use a timer event to fire every second (for a clock application).
I may be wrong, but I assume that there will probably be a (very slight) sync issue with the actual system time. For example the timer event might fire when the actual system time milliseconds are at 500 instead of 0 (meaning the seconds will be partially 'out of phase' if you will).
Is there a way to either synchronize the timer event to the real time or get some kind of system time event to fire when a second ticks in AS3?
Also if I set a Timer to fire every 1000 milliseconds, is that guaranteed or can there be some offset based on the application load?
These are probably negligible issues but I'm just curious.
Thanks.

You can get the current time using Date. If you really wanted to, you could try to control your timer jitter by aligning it with what Date returns. I'm not certain this would even be an issue (certainly not if your application isn't kept running for long periods of time, and even then I'm not certain the error would build up too quickly).
Note that the OS timer is usually only accurate to within a few milliseconds, so you may need to do something else if you need that kind of accuracy.

I just thought of a simple way to at least increase the accuracy of the timer. By simply reducing the Timer interval, the maximum margin of error gets reduced.
So setting the Timer to fire every 100ms instead of 1000ms would cause the maximum error to be 99ms instead of 999ms. So depending on the accuracy/performance required, these values can be tweaked.
If instead the timer frequency must remain 1 second, we can create a temporary initializing timer that fires at very quick intervals (~1ms). At each tick we keep track of the current time and the previous time. If the seconds value changed (the real clock ticked over), we start the main 1000ms Timer at that instant. This ensures that the 1-second timer starts when the real-time seconds change, with an error in single-digit milliseconds instead of three-digit milliseconds.
Of course, again, there's going to be some lag from when the time change was detected and when the timer gets started. But at least the accuracy can be increased while retaining a Timer of 1s.
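A rough sketch of that idea, assuming a plain Timer-based setup (the names alignTimer, mainTimer and onSecond are illustrative, not from any actual code):

import flash.utils.Timer;
import flash.events.TimerEvent;

// Hunt for the wall-clock second boundary with a fast timer, then hand
// over to the real 1000 ms Timer so it ticks roughly "in phase".
var mainTimer:Timer = new Timer(1000);
var alignTimer:Timer = new Timer(1);
var lastSecond:int = new Date().seconds;

alignTimer.addEventListener(TimerEvent.TIMER, onAlignTick);
mainTimer.addEventListener(TimerEvent.TIMER, onSecond);
alignTimer.start();

function onAlignTick(e:TimerEvent):void {
    var now:Date = new Date();
    if (now.seconds != lastSecond) {   // the system clock just rolled over
        alignTimer.stop();
        mainTimer.start();             // the 1 s timer now starts on the boundary
        onSecond(null);                // handle the second that just began
    }
    lastSecond = now.seconds;
}

function onSecond(e:TimerEvent):void {
    trace(new Date().toTimeString());  // update the clock display here
}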
I will try this out some time this week and report back.

Unfortunately I don't think Timer is guaranteed to stay at the same rate. It's very likely it will drift over a period of time (e.g. if you set it for 1000ms, it may fire every 990ms, which is a very small difference, but over time it adds up). I think you should do as you said: fire every 100ms or so, and then check the Date object to determine whether a second has passed yet.
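For what it's worth, a minimal sketch of that poll-and-compare idea (all names are illustrative):

import flash.utils.Timer;
import flash.events.TimerEvent;

// Poll every 100 ms and only act when the wall-clock second has changed.
var poller:Timer = new Timer(100);
var lastSecond:int = -1;

poller.addEventListener(TimerEvent.TIMER, onPoll);
poller.start();

function onPoll(e:TimerEvent):void {
    var s:int = new Date().seconds;
    if (s != lastSecond) {     // a new wall-clock second has begun
        lastSecond = s;
        updateClock();         // your per-second handler
    }
}

function updateClock():void {
    trace(new Date().toTimeString());
}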


Why do I see spikes in response time when a thread group is ending its execution?

My test goes for 3 hours.
Two particular thread groups (Ultimate Thread Group) out of 10 are set up so that each of them generates load in 3 sets. Both thread groups follow an identical load generation pattern and both run for a little less than 2 hours, as shown in the following picture, while the rest of the thread groups continue to execute for the remaining time.
But why do I see spikes in response times when these three sets are ending?
However, the response time remains low over the overall duration of the test.
Similar spikes are seen in another thread group at the end of the test.
I have tried increasing the shutdown time of the thread groups from 10 seconds to 30 seconds, but that hasn't helped so far. Going through the details in JMeter, it is clear that the spikes in response time appear only when the load starts to go down or the threads of those two particular thread groups are finishing.
I am using JMeter 5.0.
You are likely seeing the effect of forcibly shut-down threads that are still in flight at the end of the test run. See
https://groups.google.com/forum/#!topic/jmeter-plugins/XAsUHsrJEDw
If possible, consider adding a rampdown to your test plan.
Given the very high response times of hundreds of seconds, this is likely an artifact of the threads being shut down before all the responses came back. Given the charts, I suggest using a 30-60 second shutdown time to ensure enough padding.
I've noticed this as well using v5.1.1 with the 'Ultimate Thread Group' OR the 'Standard Thread Group' when using a schedule duration.
It occurs when using a Transaction Controller with 'Generate parent Sample' selected.
Un-checking this appears to resolve the problem. This, however, is not ideal, as I end up with far too many sampler results (which is the reason for checking 'Generate parent Sample' in the first place: to get aggregated transaction results only).
[Screenshots: response times spiking at the end of the test; Transaction Controller with 'Generate parent Sample' checked]

Ensure auto_increment value ordering in MySQL

I have multiple threads writing events into a MySQL table events.
The table has a tracking_no column configured as auto_increment, used to enforce an ordering of the events.
Different readers consume from events: they poll the table regularly for new events and keep the tracking number of the last-consumed event, so that each poll fetches only the events after it.
It turns out that the current implementation leaves the chance of missing some events.
This is what's happening:
Thread-1 begins an "insert" transaction; it takes the next value from the auto_increment column (1) but takes a while to complete.
Thread-2 begins an "insert" transaction; it takes the next auto_increment value (2) and completes the write before Thread-1.
Reader polls and asks for all events with tracking_no greater than 0; it gets event 2 because Thread-1 is still lagging behind.
The event gets consumed and the Reader updates its tracking status to 2.
Thread-1 completes the insert; event 1 appears in the table.
Reader polls again for all events after 2, and even though event 1 was inserted, it will never be picked up.
It seems this could be solved by changing the auto_increment strategy to lock the entire table until a transaction completes, but we would like to avoid that if possible.
I can think of two possible approaches.
1) If your event inserts are guaranteed to succeed (i.e., you never roll back an event insert, and therefore there are never any persistent gaps in your tracking_no), then you can rewrite your readers so that they keep track of the last contiguous event seen -- that is, the last event successfully processed.
The reader queries the event store, starts processing the events in order, and then stops if a gap is found. The remaining events are discarded. The next query uses the sequence number of the last successfully processed event.
Rollback makes a mess of this, though - scenarios with concurrent writes can leave persistent gaps in the stream, which would cause your readers to block.
2) You could rewrite your query with an upper bound on event time. See MySQL create time and update time timestamp for the mechanics of setting up timestamp columns.
The idea then is that your readers query for all events with a higher sequence number than the last successfully processed event, but with a timestamp less than now() - some reasonable SLA interval.
It generally doesn't matter if the projections of an event stream are a little bit behind in time. So you leverage this, reading events in the past, which protects you from writes in the present that haven't completed yet.
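A sketch of such a reader query, assuming the timestamp column is called created_at and that a 5-second lag is an acceptable SLA interval (both are assumptions, not details from the question; :last_processed_tracking_no is a placeholder parameter):

-- Only read events that are both newer than the last processed one and
-- old enough that any in-flight insert transactions should have committed.
SELECT *
FROM events
WHERE tracking_no > :last_processed_tracking_no
  AND created_at < NOW() - INTERVAL 5 SECOND
ORDER BY tracking_no;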
That doesn't work for the domain model, though -- if you are loading an event stream to prepare for a write, working from a stream that is a measurable interval in the past isn't going to be much fun. The good news is that the writers know which version of the object they are currently working on, and therefore where in the sequence their generated events belong. So you track the version in the schema, and use that for conflict detection.
Note: it's not entirely clear to me that the sequence numbers should be used for ordering at all. See https://stackoverflow.com/a/9985219/54734
Synthetic keys (IDs) are meaningless anyway. Their order is not significant, their only property of significance is uniqueness. You can't meaningfully measure how "far apart" two IDs are, nor can you meaningfully say if one is greater or less than another.
So this may be a case of having the wrong problem.

stepSimulation parameters in Bullet Physics

I use Bullet for physics simulation and don't care about real-time simulation - it's ok if one minute of model time lasts two hours in real time. I am trying to call a callback every fixed amount of time in model time, but realized that I don't understand how StepSimulation works.
The documentation of StepSimulation() isn't that clear. I would strongly appreciate if someone explained what its parameters are for.
timeStep is simply the amount of time to step the simulation by. Typically you're going to be passing it the time since you last called it.
Why is it so? Why does everyone use the time passed since last simulation for this parameter? What will happen if we make this parameter fixed - say 0.1?
The third parameter is the size of that internal step. The second parameter is the maximum number of steps that Bullet is allowed to take each time you call it. It's important that timeStep < maxSubSteps * fixedTimeStep, otherwise you are losing time.
What exactly is internal step? Why is its size inversely proportional to resolution? Is it basically inverse frequency?
P.S. This somewhat duplicates an existing question, but its answer doesn't clarify what I would like to know.
Why is it so? Why does everyone use the time passed since last simulation for this parameter? What will happen if we make this parameter fixed - say 0.1?
It is so because the physics needs to know how much time has passed or it will run too fast or slow and will appear wrong to the user.
You can absolutely fix the parameter for your application.
For example
stepSimulation(btScalar(1.)/btScalar(60.), 1, btScalar(1.)/btScalar(60.));
Will step your physics world on by 1/60th of a second. In a game this would result in it running faster/slower depending on how far out the actual frame rate is from the 1/60th of a second we are passing.
What exactly is internal step? Why is its size inversely proportional to resolution? Is it basically inverse frequency?
It is the duration that Bullet simulates at a time. For example, if you pass 1/60 as the fixed step and 1/6 of a second has passed since the last step, then Bullet will do 10 internal steps rather than one large 1/6 step. This is so that the simulation produces consistent results; a varying timestep will not.
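To make that concrete, here is a minimal sketch of a typical fixed-timestep call; it assumes you already have a btDiscreteDynamicsWorld* called world and some measurement of how much time the last frame took:

#include <btBulletDynamicsCommon.h>

void advance(btDiscreteDynamicsWorld* world, btScalar frameSeconds)
{
    // frameSeconds can vary (e.g. 1/6 s); Bullet slices it into fixed
    // 1/60 s internal steps, performing at most maxSubSteps of them.
    const int      maxSubSteps = 10;
    const btScalar fixedStep   = btScalar(1.) / btScalar(60.);
    world->stepSimulation(frameSeconds, maxSubSteps, fixedStep);
}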
This is a great article on why you need a fixed physics timestep and what happens when you don't: http://gafferongames.com/game-physics/fix-your-timestep/

Reliably select from a database table at fixed time intervals

I have a fairly 'active' CDR table and I want to select records from it every 5 minutes or so, covering the last 5 minutes. The problem is that its IDs are SHA hashes generated from a few of the other columns, so all I have to lean on is a timestamp field, which I filter by date to select the time window of records I want.
The next problem is that I obviously cannot guarantee my script will run precisely on the second every time, or that the server's wall clock will be correct (which doesn't matter much), and most importantly there will almost certainly be more than one record per second - say 3 rows at '2013-08-08 14:57:05', with one more inserted before that second expires.
By the time I run the query for records BETWEEN '2013-08-08 14:57:05' AND '2013-08-08 15:02:05', more records for '2013-08-08 14:57:05' will have appeared that my previous poll did not see, and I would have missed them.
Essentially:
imprecise wall clock time
no sequential IDs
multiple records per second
query execution time
unreliable frequency of running the query
All of these are preventing me from getting a valid set of rows in a specified rolling time window. Any suggestions for how I can work around them?
If you are using the same clock then I see no reason why things would go wrong. One solution you could consider is a datetime table: each run, you update the start and stop times in it based on the server time, so that as rows are added they are guaranteed to fall within that timeframe.
You could do it by hardcoding, but this way the start and stop points are explicitly stored in the database for the query to use.
I would use cron to handle the intervals and timing, not as the source of the time itself, but just so you are not locking up the database by checking constantly.
I probably haven't got all the details, but to answer your question title, "Reliably select from a database table at fixed time intervals"...
I don't think you can even hope for a query to run at a "second-precise" time.
One key problem with that approach is that you will have to deal with concurrent access and locking. You might be able to send the query at a fixed time, but the query might wait on the DB server for several seconds (or execute against a fairly outdated snapshot of the database), especially in your case since the table is apparently "busy".
As a suggestion, if I were you, I would spend some time looking at message queue systems (like http://www.rabbitmq.com/, just to cite one, without implying it is necessarily "your" solution). Those kinds of tools are probably better suited to your needs.

Computing estimated times of file copies / movements?

Inspired by this xkcd cartoon, I wondered: what exactly is the best mechanism for providing the user with an estimate of a file copy / move?
The alt text of the xkcd comic reads as follows:
They could say "the connection is probably lost," but it's more fun to do naive time-averaging to give you hope that if you wait around for 1,163 hours, it will finally finish.
Ignoring the joke, is that really how it's done in Windows? How about in other OSes? Is there a better way?
Have a look at my answer to a similar question (and the other answers there) on how the remaining time is estimated in Windows Explorer.
In my opinion, there is only one way to get good estimates:
Calculate the exact number of bytes to be copied before you begin the copy process
Recalculate your estimate regularly (every 1, 5 or 10 seconds, YMMV) based on the current transfer speed
The current transfer speed can fluctuate heavily when you are copying over a network, so use an average, for example based on the number of bytes transferred since your last estimate.
Note that the first point may require quite some work, if you are copying many files. That is probably why the guys from Microsoft decided to go without it. You need to decide yourself if the additional overhead created by that calculation is worth giving your user a better estimate.
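As a rough sketch of that "bytes since the last estimate" averaging (C#, with illustrative names; an assumed structure, not code from the answer):

using System;

class IntervalSpeedEstimator
{
    private long lastBytes;
    private DateTime lastTime = DateTime.UtcNow;

    // Call every few seconds with the running byte count; returns the
    // transfer speed over just the last interval, in bytes per second.
    public double Update(long bytesCopiedSoFar)
    {
        DateTime now = DateTime.UtcNow;
        double seconds = (now - lastTime).TotalSeconds;
        double speed = seconds > 0 ? (bytesCopiedSoFar - lastBytes) / seconds : 0;
        lastBytes = bytesCopiedSoFar;
        lastTime = now;
        return speed;
    }
}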
I've done something similar to estimate when a queue will be empty, given that items are being dequeued faster than they are being enqueued. I used linear regression over the most recent N readings of (time,queue size).
This gives better results than a naive
(bytes_copied_so_far / elapsed_time) * bytes_left_to_copy
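A minimal sketch of that regression idea, again in C# with illustrative names, assuming (elapsed seconds, bytes copied) samples are fed in as the copy progresses:

using System;
using System.Collections.Generic;
using System.Linq;

class RegressionEstimator
{
    private readonly Queue<(double T, double Bytes)> samples = new Queue<(double T, double Bytes)>();
    private const int MaxSamples = 20;   // keep only the most recent N readings

    public void AddSample(double elapsedSeconds, double bytesCopied)
    {
        samples.Enqueue((elapsedSeconds, bytesCopied));
        if (samples.Count > MaxSamples) samples.Dequeue();
    }

    // Predicted total elapsed time (in seconds) at which the fitted line
    // reaches totalBytes, or null if there is no usable trend yet.
    public double? PredictCompletion(double totalBytes)
    {
        if (samples.Count < 2) return null;
        double n = samples.Count;
        double sumT = samples.Sum(s => s.T);
        double sumB = samples.Sum(s => s.Bytes);
        double sumTT = samples.Sum(s => s.T * s.T);
        double sumTB = samples.Sum(s => s.T * s.Bytes);

        double denom = n * sumTT - sumT * sumT;
        if (denom == 0) return null;                  // all samples at the same time
        double slope = (n * sumTB - sumT * sumB) / denom;
        if (slope <= 0) return null;                  // stalled or no progress
        double intercept = (sumB - slope * sumT) / n;
        return (totalBytes - intercept) / slope;      // t where the fit hits the total
    }
}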
Start a global timer that fires, say, every 1000 milliseconds and updates a total elapsed time counter. Let's call this variable "elapsedTime"
While the file is being copied, update some local variable with the amount already copied. Let's call this variable "totalCopied"
In the timer event that is periodically raised, divide totalCopied by elapsedTime to give the number of bytes copied per timer interval (in this case, 1000ms). Let's call this variable "bytesPerSec"
Divide the total file size by bytesPerSec to obtain the total number of seconds theoretically required to copy this file. Let's call this variable totalTime
Subtract elapsedTime from totalTime and you have a somewhat accurate estimate of the remaining copy time.
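Put together, those steps amount to something like the following sketch (C#, using a Stopwatch in place of the 1000 ms timer and counter; all names are illustrative):

using System;
using System.Diagnostics;

class CopyEstimator
{
    private readonly Stopwatch timer = Stopwatch.StartNew();

    // totalCopied and totalSize are in bytes.
    public TimeSpan? EstimateRemaining(long totalCopied, long totalSize)
    {
        double elapsedTime = timer.Elapsed.TotalSeconds;
        if (totalCopied <= 0 || elapsedTime <= 0) return null;   // nothing measured yet

        double bytesPerSec = totalCopied / elapsedTime;          // bytes copied per second
        double totalTime = totalSize / bytesPerSec;              // theoretical total copy time
        return TimeSpan.FromSeconds(totalTime - elapsedTime);    // remaining = total - elapsed
    }
}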
I think dialogs should just admit their limitations. It's not annoying because it's failing to give a useful time estimate, it's annoying because it's authoritatively offering an estimate that's obvious nonsense.
So, estimate however you like, based on current rate or average rate so far, rolling averages discarding outliers, or whatever. Depends on the operation and the typical durations of events which delay it, so you might have different algorithms when you know the file copy involves a network drive. But until your estimate has been fairly consistent for a period of time equal to the lesser of 30 seconds or 10% of the estimated time, display "oh dear, there seems to be some kind of holdup" when it's massively slowed, or just ignore it if it's massively sped up.
For example, dialog messages taken at 1-second intervals when a connection briefly stalls:
remaining: 60 seconds // estimate is 60 seconds
remaining: 59 seconds // estimate is 59 seconds
remaining: delayed [was 59 seconds] // estimate is 12 hours
remaining: delayed [was 59 seconds] // estimate is infinity
remaining: delayed [was 59 seconds] // got data: estimate is 59 seconds
// six seconds later
remaining: 53 seconds // estimate is 53 seconds
Most of all I would never display seconds (only hours and minutes). I think it's really frustrating when you sit there and wait for a minute while the timer jumps between 10 and 20 seconds. And always display real information like: xxx/yyyy MB copied.
I would also include something like this:
if timeLeft > 5h --> Inform user that this might not work properly
if timeLeft > 10h --> Inform user that there might be better ways to move the file
if timeLeft > 24h --> Abort and check for problems
I would also inform the user if the estimated time varies too much
And if it's not too complicated, there should be an auto-check function that checks if the process is still alive and working properly every 1-10 minutes (depending on the application).
Speaking about network file copies, the best thing is to calculate the file size to be transferred, the network latency, etc. An approach that I used once was:
Measure the connection speed: ping and calculate the round-trip time for 15 KB packets.
Take the file size and work out, theoretically, how long it would take if it were broken into 15 KB packets at that connection speed.
Recalculate the connection speed after the transfer has started and adjust the estimated time accordingly.
I've been pondering on this one myself. I have a copy routine - via a Windows Explorer style interface - which allows the transfer of selected files from an Android Device, to a PC.
At the start, I know the total size of the file(s) to be copied, and since I am using C#.NET, I use a Stopwatch to get the elapsed time; while the copy is in progress, I keep a running total of the bytes copied so far.
I haven't actually tested it yet, but the best way seems to be this -
estimated = elapsed * ((totalSize - copiedSoFar) / copiedSoFar)
I never saw it the way you guys are explaining it - by transferred bytes and total bytes.
The "experience" always made a lot more sense (not that it's good or accurate) if you assume it instead uses the size of each file and the file count. That would explain why the estimate swings so wildly.
If you are transferring large files first, the estimate runs long, even with a stable connection. It is as if it naively assumes all files are the average size of those transferred so far, and then guesses on the assumption that this average file size will hold for the rest of the transfer.
This, and the other ways, all get worse when the connection 'speed' varies...