Add a 5 min timer between execution of each function - google-apps-script

Suppose I have a function as given below. I am running into the "Exceeded maximum execution time" error. I believe this is because functions a, b and c execute one after the other without giving the Google server any breathing space. To provide that breathing space, so the maximum-execution error doesn't occur, I would like a 5-minute delay between the execution of each of the functions a, b and c.
function main(){
a();
b();
c();
}
The above function should work like this:
function main(){
a();
// rest for 5 minutes before executing the next function
b();
// rest for 5 minutes before executing the next function
c();
}
I can't figure out how to achieve this.

Your assumption is wrong. That error message is not related to how busy Google Apps Script is, but to the total execution time being exceeded by your main function.
In other words, your main function is taking more time than is allowed for a single execution in Apps Script, regardless of what you are doing in that function.
The solution, then, is not to add sleep or idle time, since that will only make the situation worse.
The only ways to work around the time limit are:
Create a trigger that runs every N minutes and develop a system to track what to execute next (if something is not already running). You could use the built-in cache or even a Google Sheets tab to hold that internal state.
Programmatically create and remove triggers, so that when function A starts you can remove the trigger that launched it (to avoid repeated executions) and, when it finishes, create a new trigger to run the next step. Once again, you need a way to maintain the internal state; a sketch of this approach is shown below.
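A minimal sketch of the second option, assuming a(), b() and c() are defined elsewhere; the wrapper names runA/runB/runC and the helper scheduleNext are just for illustration:

function runA() {
  a();
  scheduleNext('runB');
}

function runB() {
  b();
  scheduleNext('runC');
}

function runC() {
  c();
}

// Schedule the next step as a time-based trigger roughly 5 minutes from now,
// so no single execution has to run all three steps within the quota.
function scheduleNext(handlerName) {
  // Clean up any stale triggers pointing at the same handler first.
  ScriptApp.getProjectTriggers()
    .filter(function (t) { return t.getHandlerFunction() === handlerName; })
    .forEach(function (t) { ScriptApp.deleteTrigger(t); });

  ScriptApp.newTrigger(handlerName)
    .timeBased()
    .after(5 * 60 * 1000) // ~5 minutes
    .create();
}

Each step then stays well under the per-execution limit, and the "breathing space" comes for free from the trigger delay.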


Is it possible to design Arinc653 scheduler with Azure-RTOS?

I want to design a scheduler that works in an ARINC 653 manner, just for experimental purposes.
Is it possible to manipulate the scheduler in this way?
I know there is time-slicing in ThreadX, but all the examples I've encountered use TX_NO_TIME_SLICE (and my attempts with time-slicing did not work either).
Besides, I'm not sure whether a time-slice makes the thread wait until its deadline is met, or puts it to sleep so that other threads get to run.
In short: an ARINC 653 scheduler defines a constant major frame in which each 'thread' has a fixed amount of running time, and the major frame repeats endlessly. If a thread is assigned, say, 3 ms within a major frame and it finishes its job in 1 ms, the kernel still waits 2 ms before switching to the next 'thread'.
You can use time slicing to limit the amount of time each thread runs: https://learn.microsoft.com/en-us/azure/rtos/threadx/chapter4#tx_thread_create
I understand that the characteristic of the ARINC 653 scheduler that you want to emulate is time partitioning. The ThreadX scheduling policy is based on priority, preemption threshold and time-slicing.
You can emulate time partitioning with ThreadX. To achieve that you can use a timer to suspend/resume the threads of each frame. Timers execute in a different context than threads; they are lightweight and not affected by thread priorities. By default ThreadX uses a timer thread, set to the highest priority, to execute timer expirations, but for better performance you can compile ThreadX to run the timers inside an IRQ (define the option TX_TIMER_PROCESS_IN_ISR).
An example:
Threads thd1,thd2,thd3 belong to frame A
Threads thd4,thd5,thd6 belong to frame B
Timer tm1 is triggered once every frame change
Pseudo code for tm1:
VOID tm1(ULONG input)   /* timer expiration callback; thd1..thd6 are pointers to the TX_THREAD control blocks */
{
    static int in_frame_b = 0;   /* toggles between frame A and frame B on every expiration */
    in_frame_b = !in_frame_b;
    if (in_frame_b)
    {
        /* switch to frame B: stop the frame-A threads, start the frame-B threads */
        tx_thread_suspend(thd1);
        tx_thread_suspend(thd2);
        tx_thread_suspend(thd3);
        tx_thread_resume(thd4);
        tx_thread_resume(thd5);
        tx_thread_resume(thd6);
    }
    else
    {
        /* switch back to frame A */
        tx_thread_suspend(thd4);
        tx_thread_suspend(thd5);
        tx_thread_suspend(thd6);
        tx_thread_resume(thd1);
        tx_thread_resume(thd2);
        tx_thread_resume(thd3);
    }
}
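For completeness, a sketch of how tm1 might be registered at initialization (FRAME_TICKS is a placeholder for the frame length expressed in timer ticks, and frame_timer is just an illustrative name):

TX_TIMER frame_timer;

UINT status = tx_timer_create(&frame_timer,
                              "frame switch",
                              tm1,               /* expiration callback above */
                              0,                 /* expiration input, unused here */
                              FRAME_TICKS,       /* first expiration */
                              FRAME_TICKS,       /* then once per frame */
                              TX_AUTO_ACTIVATE);

With TX_TIMER_PROCESS_IN_ISR defined, the callback runs in the timer interrupt rather than in the system timer thread.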

How do I wait for a random amount of time before executing the next action in Puppeteer?

I would love to be able to wait for a random amount of time (let's say a number between 5-12 seconds, chosen at random each time) before executing my next action in Puppeteer, in order to make the behaviour seem more authentic/real world user-like.
I'm aware of how to do it in plain Javascript (as detailed in the Mozilla docs here), but can't seem to get it working in Puppeteer using the waitFor call (which I assume is what I'm supposed to use?).
Any help would be greatly appreciated! :)
You can use vanilla JS to randomly wait between 5 and 12 seconds between actions.
await page.waitFor((Math.floor(Math.random() * (12 - 5 + 1)) + 5) * 1000)
Where:
5 is the lower bound (in seconds)
12 is the upper bound (inclusive)
1000 converts seconds to milliseconds
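If your Puppeteer version no longer has page.waitFor (it has been deprecated in newer releases), a plain Promise-based delay does the same job. A minimal sketch; randomDelay and the '#next' selector are only illustrative:

// Resolve after a random delay between minSec and maxSec seconds (inclusive).
const randomDelay = (minSec, maxSec) =>
  new Promise(resolve =>
    setTimeout(resolve, (Math.floor(Math.random() * (maxSec - minSec + 1)) + minSec) * 1000));

// Usage inside an async Puppeteer script:
await randomDelay(5, 12);   // pause 5-12 seconds before the next action
await page.click('#next');  // placeholder selector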
(PS: If your question is actually about waiting 5-12 seconds at random before every action, then you should wrap your actions in a helper class, which is a different issue until you update your question.)

Why Gas Used By Txn is different when invoking the same function in the same smart contract?

I have seen some transactions on etherscan.io, but I have found that even when invoking the same function in the same smart contract, the gas used by the transactions differs. I suspect the input data may be the cause. Is that right?
The input data might be different, but the state stored in the smart contract might also be different (and change, e.g., the number of times a loop iterates). Also, storing nonzero data in a state variable that previously held zero data, or vice versa, will change the gas usage. For example, a simple function that toggles a boolean variable will not use the same amount of gas on any two consecutive calls, as illustrated below.
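A minimal Solidity sketch of that toggle case (not from the question's contract; the contract name is illustrative):

pragma solidity ^0.8.0;

// Writing a nonzero value into a storage slot that held zero costs more gas
// than writing zero back, so consecutive calls to flip() report different gas used.
contract Toggle {
    bool public flag;

    function flip() public {
        flag = !flag;
    }
}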
Check out https://ethereum.stackexchange.com/ for future questions like this!
Each time you invoke a function in a contract that requires a state change in the block, it costs some amount of gas, so every time you call a different (or the same) function that requires a state change, you will see that gas being deducted, along with its transaction id. This is why you see different gas used for the same function.
More about gas and transactions at the link below: http://solidity.readthedocs.io/en/develop/introduction-to-smart-contracts.html

Increase Timer interval

I have a timer that calls the function 'bottleCreate' every 500 milliseconds. I want that interval to change during the game so that bottles are created faster and faster and the game gets more difficult, but I don't know how to change that value once the Timer has been created. Thanks.
var interval:int = 500;
var my_timer=new Timer(interval);
my_timer.addEventListener(TimerEvent.TIMER, bottleCreate);
my_timer.start();
You want the game to get faster, so the variable needs to decrease, because less time between function calls will make it faster.
According to the Documentation of the Timer Class you can use the delay variable to change the interval speed.
So, to make it faster, you could simply write
my_timer.delay -= 50;
Each time you do this, the function call will be called 50 ms faster.
Be aware though, going beneath 20ms will cause problems, according to the Documentation.
Furthermore, each time you change the delay property while the timer is running, the timer restarts at the same repeatCount iteration.
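A minimal sketch of doing this inside the listener itself (assuming bottleCreate is the TimerEvent handler from your code; the 100 ms floor is an arbitrary choice):

function bottleCreate(e:TimerEvent):void {
    // ... create a bottle here ...
    if (my_timer.delay > 100) {
        my_timer.delay -= 50;   // speed up; changing delay restarts the timer
    }
}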

Looping inside cuda code

I ran some CUDA code that updates an array of floats. I have a wrapper function like the one discussed in this question: How can I compile CUDA code and then link it to a C++ project?
Inside my CUDA function I create a for loop like this...
int tid = threadIdx.x;
for(int i=0;i<X;i++)
{
//code here
}
Now the issue is that if X is equal to 100, everything works just fine, but if X is equal to 1000000, my vector does not get updated (almost as if the code inside the for loop never executes).
On the other hand, if I call the CUDA function in a for loop inside the wrapper function, it works fine (though, for some reason, it is significantly slower than simply doing the same work on the CPU), like this...
for(int i=0;i<1000000;i++)
{
update<<<NumObjects,1>>>(dev_a, NumObjects);
}
Does anyone know why I can loop a million times in the wrapper function but not simply call the CUDA "update" function once and then inside that function start a for loop of a million?
You should be using cudaThreadSynchronize and cudaGetLastError after running this to see whether there was an error. I imagine that in the first case the kernel timed out. That happens when a kernel takes too long to complete; the card simply gives up on it.
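A minimal sketch of that check around the launch from the question (update, dev_a and NumObjects are taken from the question; cudaDeviceSynchronize is the current name for cudaThreadSynchronize):

// Inside the host wrapper, right after the launch:
update<<<NumObjects, 1>>>(dev_a, NumObjects);

cudaError_t launchErr = cudaGetLastError();        // launch/configuration errors
cudaError_t syncErr   = cudaDeviceSynchronize();   // errors raised while the kernel ran, e.g. a timeout

if (launchErr != cudaSuccess)
    fprintf(stderr, "launch failed: %s\n", cudaGetErrorString(launchErr));
if (syncErr != cudaSuccess)
    fprintf(stderr, "kernel failed: %s\n", cudaGetErrorString(syncErr));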
The second thing, the reason it takes much longer to execute, is because there is a set overhead time for each kernel launch. When you had the loop inside the kernel, you experienced this overhead once and ran the loop. Now you're experiencing it X times. The overhead is fairly small, but large enough that as much of the loop should be put inside the kernel as possible.
If X is particularly large, you might look into running as much of the loop in the kernel as possible until it completes in a safe amount of time, and then loop over these kernels.
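For instance, a sketch of that chunking idea (update_chunk and CHUNK are hypothetical; the real kernel would take a start offset and an iteration count so each launch stays short):

// Split the million iterations across several shorter kernel launches.
const int CHUNK = 10000;                      // placeholder chunk size
for (int start = 0; start < X; start += CHUNK) {
    int count = (X - start < CHUNK) ? (X - start) : CHUNK;
    update_chunk<<<NumObjects, 1>>>(dev_a, NumObjects, start, count);
    cudaDeviceSynchronize();                  // keep each launch well under the watchdog limit
}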