How much time has passed since my scheduleOnce() call? - cocos2d-x

I want to know how much time has passed since I called scheduleOnce(), and in some situations I want to reduce the remaining time before it fires. How can I do that?
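For what it's worth, one way to do this (a minimal sketch, assuming a cocos2d-x 3.x Node subclass; the key name, the members, and the onFired() callback are illustrative, not part of the engine) is to record when you scheduled, compute the elapsed time yourself, and reschedule the shortened remainder:

    #include "cocos2d.h"
    #include <algorithm>
    #include <chrono>

    class MyNode : public cocos2d::Node {
        std::chrono::steady_clock::time_point _startTime;  // when scheduleOnce() was called
        float _delay = 0.0f;                               // the delay we asked for

    public:
        void startOnce(float delay) {
            _delay = delay;
            _startTime = std::chrono::steady_clock::now();
            this->scheduleOnce([this](float) { onFired(); }, delay, "once_key");
        }

        // Seconds elapsed since startOnce() was called.
        float elapsedSinceScheduled() const {
            return std::chrono::duration<float>(
                std::chrono::steady_clock::now() - _startTime).count();
        }

        // Shrink whatever time remains (e.g. factor = 0.5f halves it) by
        // cancelling the pending callback and rescheduling the shorter remainder.
        void reduceRemaining(float factor) {
            float remaining = std::max(0.0f, _delay - elapsedSinceScheduled());
            this->unschedule("once_key");
            _delay = remaining * factor;
            _startTime = std::chrono::steady_clock::now();
            this->scheduleOnce([this](float) { onFired(); }, _delay, "once_key");
        }

        void onFired() { /* the one-shot action goes here */ }
    };

As far as I know, the scheduler doesn't expose the remaining time of a one-shot callback, so tracking it yourself and rescheduling is the straightforward option.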

Related

brownie TransactionReceipt.wait(n): which value avoids front-running with geth?

I am new to these Ethereum topics, and I was wondering how confirmations work and why the value can change.
My other question: if I have my own geth node, what value should I put to avoid front-running? If I mine my own transaction, that should be enough, so I understand the value should be 1?
Thanks for your comments.
My other question: if I have my own geth node, what value should I put to avoid front-running? If I mine my own transaction, that should be enough, so I understand the value should be 1?
You are unlikely to be able to mine your own transaction unless you have considerable hashing power, worth millions of dollars. Also, the confirmation value has nothing to do with front-running.
To avoid front-running, set a slippage parameter on your trades instead.

stepSimulation parameters in Bullet Physics

I use Bullet for physics simulation and don't care about real-time simulation - it's fine if one minute of model time takes two hours of real time. I am trying to invoke a callback at fixed intervals of model time, but I realized that I don't understand how stepSimulation() works.
The documentation of stepSimulation() isn't that clear. I would really appreciate it if someone explained what its parameters are for.
timeStep is simply the amount of time to step the simulation by. Typically you're going to be passing it the time since you last called it.
Why is it so? Why does everyone use the time passed since last simulation for this parameter? What will happen if we make this parameter fixed - say 0.1?
The third parameter is the size of that internal step. The second parameter is the maximum number of steps that Bullet is allowed to take each time you call it. It's important that timeStep < maxSubSteps * fixedTimeStep, otherwise you are losing time.
What exactly is internal step? Why is its size inversely proportional to resolution? Is it basically inverse frequency?
P.S. This somewhat duplicates an existing question, but the answer there doesn't clarify what I would like to know.
Why is it so? Why does everyone use the time passed since last simulation for this parameter? What will happen if we make this parameter fixed - say 0.1?
It is so because the physics engine needs to know how much real time has passed; otherwise the simulation will run too fast or too slow and will appear wrong to the user.
You can absolutely fix the parameter for your application.
For example:
    world->stepSimulation(btScalar(1.) / btScalar(60.), 1, btScalar(1.) / btScalar(60.));
This will step your physics world forward by exactly 1/60th of a second per call. In a game this would result in the simulation running faster or slower depending on how far the actual frame rate is from the 1/60th of a second we are passing.
What exactly is internal step? Why is its size inversely proportional to resolution? Is it basically inverse frequency?
It is the duration that Bullet will simulate at a time. For example, if you pass 1/60 and 1/6 of a second has passed since the last step, then Bullet will do 10 internal steps rather than one large 1/6 step. This is so that it produces the same results every time; a varying timestep will not.
This is a great article on why you need a fixed physics timestep and what happens when you don't: http://gafferongames.com/game-physics/fix-your-timestep/
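To make the parameters concrete, here is a minimal sketch of the usual variable-frame-rate loop (assuming an already-created btDiscreteDynamicsWorld named world; the clock choice and loop details are illustrative):

    #include <btBulletDynamicsCommon.h>
    #include <chrono>

    void runLoop(btDiscreteDynamicsWorld* world) {
        using clock = std::chrono::steady_clock;
        const btScalar fixedTimeStep = btScalar(1.) / btScalar(60.);  // internal step size
        const int maxSubSteps = 10;  // keep timeStep < maxSubSteps * fixedTimeStep

        auto last = clock::now();
        for (;;) {
            auto now = clock::now();
            // Real time elapsed since the previous call - this is the first parameter.
            btScalar timeStep = std::chrono::duration<btScalar>(now - last).count();
            last = now;
            // Bullet runs floor(timeStep / fixedTimeStep) internal steps (clamped to
            // maxSubSteps), carries any remainder over to the next call, and uses the
            // remainder to interpolate motion states for rendering.
            world->stepSimulation(timeStep, maxSubSteps, fixedTimeStep);
            // ... render, fire your model-time callbacks, etc.
        }
    }

So with fixedTimeStep = 1/60, an elapsed time of 1/6 s becomes 10 identical internal steps, which is why the results stay reproducible.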

Is there any way to get the last N changes for a user using drive.changes.list in one API call?

We are trying to get the last N changes for a user, and currently do so by getting the largestChangeId, then subtracting a constant from it and requesting changes from that point.
As an example, we typically make API calls with changestamp = largestChangeId - 300 and maxResults set to 300.
We've seen anywhere from half a dozen to 180 changes come back across our user base with these parameters.
One issue we're running into is that the number of changes we get back is rather unpredictable, with huge jumps in change stamps for some users, so we have to choose between two rather unpalatable scenarios to get the last N changes:
Request lots of changes, which can lead to slow API calls simply because there are lots of changes.
Request a small set of changes and seek back progressively in smaller batches, which is also slow because it requires multiple API calls.
Our goal is to get the last ~30 changes for a user as fast as possible.
As a workaround, we currently maintain per-user state in our application to tune the maximum number of changes we request up or down, based on the results we got for that user the last time around. However, this is somewhat fragile, because the rate at which change IDs grow for a given user can vary over time.
So my question is as follows:
Is there a way to efficiently get the last N changes for a user, specifically in one API call?
ID generation is very complex; it's impossible to calculate the ID of a user's nth-latest change :) The changes list actually has no feature that'd be appropriate for your use case. In my own personal opinion, the changes list should support reverse chronological order; I'm going to discuss it with the rest of the team.
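For the adaptive workaround described in the question, the per-user state might look something like this (a hypothetical sketch; the struct, field names, and smoothing constants are all illustrative, not part of the Drive API):

    #include <cstdint>

    // Per-user estimate of how many change IDs separate two real changes,
    // used to size the changestamp window for "give me the last N changes".
    struct ChangeWindow {
        double idsPerChange = 10.0;  // running estimate, seeded with a guess

        // How far back from largestChangeId to start, to likely catch N changes.
        std::int64_t windowFor(int wantedChanges) const {
            return static_cast<std::int64_t>(wantedChanges * idsPerChange * 1.5);  // 50% slack
        }

        // Feed back what a request actually returned.
        void update(std::int64_t idsScanned, int changesSeen) {
            if (changesSeen > 0) {
                double observed = static_cast<double>(idsScanned) / changesSeen;
                idsPerChange = 0.8 * idsPerChange + 0.2 * observed;  // EWMA smoothing
            } else {
                idsPerChange *= 2.0;  // saw nothing: widen the window next time
            }
        }
    };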

ScriptDb operational limits

I have a script that generates about 20,000 small objects with about 8 simple properties. My desire was to toss these objects into ScriptDb for later processing of the data.
What I'm experiencing, though, is that even with a saveBatch() operation the process takes much longer than desired and then silently stops. By too long, I mean it often exceeds the 5-minute execution limit, without throwing any error. The script runs so long that I haven't attempted to check a mutation result to see what didn't make it, but from a check after execution it appears that most objects do not.
So, though I'm quite certain that my collection of objects is below the storage size limit, is there a lesser-known limit or throttle on access that is causing my problems? Is the number of objects the culprit here? Should I instead be attempting to save one big object that's a collection of the smaller ones?
I think it's the amount of data you're writing. I know you can store 20,000 small objects; you just can't write that much in 5 minutes. Write 1,000, then quit. Write the next thousand, and so on. Run your function 20 times and the data is loaded. If you need this to happen more often or automatically, use ScriptApp triggers. The general shape of the chunked approach is sketched below.
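A generic sketch of that resumable batching (in C++ here to match the other examples; saveBatch is a hypothetical stand-in for the real storage call, and in Apps Script you would persist the cursor between trigger runs):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Process at most batchSize records per run and return the new cursor,
    // so the next run (the next trigger) picks up where this one stopped.
    template <typename Record, typename SaveBatchFn>
    std::size_t writeNextBatch(const std::vector<Record>& all, std::size_t cursor,
                               std::size_t batchSize, SaveBatchFn saveBatch) {
        std::size_t n = std::min(batchSize, all.size() - cursor);
        if (n > 0) {
            saveBatch(&all[cursor], n);  // hypothetical storage call
        }
        return cursor + n;  // persist this value between runs
    }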

AS3: Synchronize Timer event to actual time?

I plan to use a timer event to fire every second (for a clock application).
I may be wrong, but I assume there will probably be a (very slight) sync issue with the actual system time. For example, the timer event might fire when the system time's milliseconds are at 500 instead of 0 (meaning the seconds will be partially 'out of phase', if you will).
Is there a way to either synchronize the timer event to the real time, or to get some kind of system-time event to fire when a second ticks in AS3?
Also, if I set a Timer to fire every 1000 milliseconds, is that guaranteed, or can there be some offset based on the application load?
These are probably negligible issues but I'm just curious.
Thanks.
You can get the current time using Date. If you really wanted to, you could try to control your timer jitter by aligning it with what Date returns. I'm not certain this would even be an issue (certainly not if your application isn't kept running for long periods of time, and even then I'm not certain the error would build up too quickly).
Note that the OS clock is usually only accurate to within a few milliseconds, so you may need to do something else if you need that kind of accuracy.
I just thought of a simple way to at least increase the accuracy of the timer: reducing the Timer interval reduces the maximum margin of error.
So setting the Timer to fire every 100 ms instead of 1000 ms would cause the maximum error to be 99 ms instead of 999 ms. Depending on the accuracy and performance required, these values can be tweaked.
If instead the timer frequency must remain 1 second, we can create a temporary bootstrap timer that fires at very short intervals (~1 ms). On each tick we keep track of the current time and the previous time; when the seconds value changes (the real clock has ticked over), we start the main 1000 ms Timer at that instant. This ensures that the 1-second timer starts when the real-time seconds change, with an error in the single-digit milliseconds instead of three digits.
Of course, again, there will be some lag between when the time change is detected and when the timer gets started. But at least the accuracy can be increased while retaining a 1-second Timer.
I will try this out some time this week and report back.
Unfortunately, I don't think Timer is guaranteed to stay at the same rate. It's very likely to drift over time (e.g. if you set it for 1000 ms, it may fire every 990 ms; a very small difference, but over time it adds up). I think you should do as you said: fire every 100 ms or so, and then check the Date object to determine whether one second has passed yet.
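That check-the-clock approach is language-agnostic; here is the idea as a small C++ sketch (the 10 ms poll interval is arbitrary, and in AS3 the sleep loop would be a fast Timer plus a Date comparison):

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Drift-free one-second ticker: instead of trusting a 1000 ms timer,
    // poll the wall clock at a short interval and fire when the second changes.
    int main() {
        using namespace std::chrono;
        auto lastSecond = duration_cast<seconds>(system_clock::now().time_since_epoch());
        for (;;) {
            std::this_thread::sleep_for(milliseconds(10));  // the "fast timer"
            auto nowSecond = duration_cast<seconds>(system_clock::now().time_since_epoch());
            if (nowSecond != lastSecond) {  // the wall-clock second just ticked over
                lastSecond = nowSecond;
                std::printf("tick at %lld\n", static_cast<long long>(nowSecond.count()));
            }
        }
    }

Because each tick is derived from the wall clock rather than from an accumulating interval, any per-fire timer error cannot build up over time.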