"Exceeded maximum execution time" when concurrent Google Calendar Trigger are running - google-apps-script

I have a Google Calendar Trigger which fires whenever a change in a user's calendar is detected. The trigger sends some data to a 3rd-party system and runs some logic. It is important that concurrent triggers do not send data at the same time.
Therefore I am using the Lock Service, which prevents exactly this.
var lock = LockService.getScriptLock();
try {
  // Wait up to 30 seconds for other executions of this section to release the lock.
  lock.waitLock(30000);
} catch (e) {
  Logger.log('Could not obtain lock after 30 seconds.');
  return; // Give up rather than run unprotected.
}
// This can take a few seconds.
doSomeStuff();
lock.releaseLock();
// END - lock ends here
return;
The problem is that the execution of a trigger can sometimes take up to 10 seconds, and several triggers may fire at once, each having to wait its turn to hold the lock.
This means the maximum execution time can easily be exceeded.
In my opinion this cannot be solved with Google Apps Script alone. To handle this, I guess the best way would be to have some kind of queue that the trigger writes to, and then Google Cloud Functions takes the data from this queue and runs the logic.
How to solve this?
The only queue-like resource I could find in the Google world was Cloud Tasks, but I am not sure it is the best fit (it also needs a lot of setup work) when there are 10K users, each with a calendar trigger running.
Another idea was that every user trigger writes its data to a database like Firestore, and in the backend Google Cloud Functions read and delete the data to run the logic. In this case the database would also act as a simplified queue.
In both cases the actual logic, doSomeStuff(), has to run in Google Cloud Functions. However, since there are 10K users and every user can fire multiple triggers, I want full control over the number of Cloud Functions running at the same time.
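For illustration, a minimal sketch of what the trigger side could look like, assuming a hypothetical HTTP endpoint (QUEUE_ENDPOINT, e.g. a Cloud Function that enqueues the payload); the endpoint URL and the payload shape are assumptions, not part of the actual setup:

var QUEUE_ENDPOINT = 'https://example.com/enqueue'; // hypothetical enqueue endpoint

function onCalendarChange(e) {
  // Hand the event off immediately instead of doing the heavy work here.
  var payload = {
    calendarId: e.calendarId,
    receivedAt: new Date().toISOString()
  };
  UrlFetchApp.fetch(QUEUE_ENDPOINT, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload)
  });
}

With such a split the trigger finishes in well under a second, so the lock and the execution-time quota stop being a concern on the Apps Script side.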
Summary
So in my head the solution is something like this:
Trigger writes to
-> queue (DB? Cloud Tasks?)
-> backend function watches the queue (Cloud Functions? Compute Engine?)
-> starts (a capped number of) Cloud Functions to dequeue and run the actual logic
Questions:
Is such a "complex" structure involving these resources necessary?
If so, what are the best Google Cloud resources to achieve this?

Related

Programmatically create additional cloud task queues from pub sub triggered cloud functions

I am using Cloud Functions to schedule Cloud Tasks; at the scheduled time the Cloud Task triggers an HTTP endpoint. As of now I have created a single queue with the following configuration:
Max dispatches per second: 500
Max concurrent dispatches: 1000
Max attempts: 5
The Cloud Function is Pub/Sub-triggered. In a single second Pub/Sub may receive 10,000 messages; the Cloud Function scales accordingly and will create 10,000 tasks.
Question:
If the scaled Cloud Functions have to create more tasks and assign them to different queues, how should a Cloud Function decide when to create new queues and how to assign tasks to them, taking cold- and warm-queue behaviour into account to avoid latency?
I read through this official doc, but it is not so clear for dummies https://cloud.google.com/tasks/docs/manage-cloud-task-scaling#queue
Back to your original question: if your process is time-sensitive and you need to trigger more than 500 requests at the same time, you need to create additional queues (as mentioned in the documentation).
To dispatch the messages across several queues, you need to define the number of queues and a sharding key. If you have a numerical ID, you can use modulo X (where X is the number of queues) as the key and pick the corresponding queue name. You can also use a hash of your data.
In your process, add the task to the queue if it exists; otherwise create the queue first and then add the task. In any case, you can't have more than 1000 queues.
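A minimal sketch of that sharding pattern in a Node.js Cloud Function using the @google-cloud/tasks client; the project ID, location, shard count, and worker URL are placeholder assumptions:

const {CloudTasksClient} = require('@google-cloud/tasks');

const client = new CloudTasksClient();
const PROJECT = 'my-project';   // placeholder project ID
const LOCATION = 'us-central1'; // placeholder queue location
const NUM_QUEUES = 10;          // placeholder shard count (hard limit: 1000 queues)

async function enqueue(userId, payload) {
  // Shard by numerical ID so load spreads evenly across the queues.
  const queueName = 'work-queue-' + (userId % NUM_QUEUES);
  const parent = client.queuePath(PROJECT, LOCATION, queueName);
  const task = {
    httpRequest: {
      httpMethod: 'POST',
      url: 'https://example.com/worker', // placeholder worker endpoint
      headers: {'Content-Type': 'application/json'},
      body: Buffer.from(JSON.stringify(payload)).toString('base64')
    }
  };
  const [response] = await client.createTask({parent, task});
  return response.name;
}

This sketch assumes the shard queues already exist; creating a missing queue on the fly would need an extra createQueue call, guarded against the 1000-queue limit.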

Azure - Trigger which copies multiple blobs

I am currently working on a Service Bus trigger (using C#) which copies and moves related blobs to another blob storage account and Azure Data Lake. After copying, the function has to emit a notification to trigger further processing tasks. Therefore, I need to know when the copy/move task has finished.
My first approach was to use an Azure Function which copies all these files. However, Azure Functions have a processing time limit of 10 minutes (when set manually), and therefore this does not seem to be the right solution. I considered calling azCopy or StartCopyAsync() to perform an asynchronous copy, but as far as I understand, the function's processing time will be as long as azCopy takes. To get around the time limit I could use WebJobs instead, but there are also other technologies like Logic Apps, Durable Functions, Batch jobs, etc., which makes me unsure about choosing the right technology for this problem. The function won't be called every second, but it might copy large amounts of data. Does anybody have an idea?
I just found out that Azure Functions only have a time limit on the Consumption plan. If there is no better solution for the blob-copy task, I'll go with Azure Functions.
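For reference, a minimal sketch of a server-side copy in a Node.js function using @azure/storage-blob; the connection string, container, and blob names are placeholders. The copy runs on the storage service itself, so the function only blocks if it explicitly polls for completion:

const {BlobServiceClient} = require('@azure/storage-blob');

async function copyBlob(connectionString, sourceUrl, targetContainer, targetName) {
  const service = BlobServiceClient.fromConnectionString(connectionString);
  const target = service.getContainerClient(targetContainer).getBlobClient(targetName);

  // Kicks off a server-side copy; the storage service does the transfer.
  const poller = await target.beginCopyFromURL(sourceUrl);

  // Optional: wait for completion before emitting the notification.
  const result = await poller.pollUntilDone();
  return result.copyStatus; // 'success' once the copy has finished
}

Whether waiting inside the function is acceptable depends on the plan's time limit; on plans without a limit, polling like this is the simplest way to know when to emit the notification.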

Change queue time of Google spreadsheet app script trigger

When I create a daily time-based trigger for the Google Apps Script associated with my Google spreadsheet, I am prompted to select an execution time within an hour-long window, and it appears that a cron wrapper randomly assigns an exact execution time within that hour-long interval.
Because my application's specific use case has several data dependencies which may not be completed early in the hour, I was forced to divide my application into several stages, with separate triggers each delayed by an hour, to ensure that the required data would be available.
For example, the trigger time initially assigned to my script was 6:03AM, but the data, which usually arrived at 5:57AM, occasionally did not arrive until 6:10AM, and the script had nothing to process for that day. As a blunt-force solution, I deleted the 6-7AM trigger and re-created it to execute in the 7-8AM time slot to ensure the required data was available. This required that the second stage of the script be moved to 8-9AM, resulting in script results that could be delayed by as much as 2-3 hours.
To improve this situation, I am contemplating integrating the two script processing stages and creating a more accurate script execution trigger time, say 6:30AM to be safe. Does anyone know:
Is it possible, other than by observing daily processing, to discover the exact trigger execution time that has been assigned, and
If randomly assigned, can script triggers be created and deleted until an acceptably precise execution time is obtained?
Thanks in advance for any guidance provided.
If accuracy is paramount, you can forgo Apps Script triggers altogether and leverage a 3rd-party tool instead.
I'd recommend using cron-job.org. This service can create cron jobs that make POST requests to a URL endpoint you specify, and you can schedule times accurate to the minute. To use it with Apps Script, implement a doPost() to handle POST requests and deploy your script as a Web App. You then create a cron job using the service and pass it the web app's URL as the endpoint.
The cron job will fire at the scheduled time, and you can perform any requisite operations inside doPost() in response to the incoming POST request.
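A minimal sketch of such a handler, assuming the cron job sends a JSON body (the payload shape and doWork() are placeholders):

function doPost(e) {
  // e.postData.contents holds the raw POST body sent by the cron job.
  var params = e.postData ? JSON.parse(e.postData.contents) : {};
  doWork(params); // placeholder for the actual scheduled processing
  return ContentService.createTextOutput('OK');
}

Note that the web app must be deployed so the cron service can reach it (e.g. accessible to anyone, ideally with a shared-secret check inside doPost()).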
Thank you to random parts and Dimu Designs for the guidance. Based upon experimentation, here are the answers to my questions:
Is it possible, other than by observing daily processing, to discover the exact trigger execution time that has been assigned? Answer: No way except by observing the random trigger time assigned within the requested hour window.
If randomly assigned, can script triggers be created and deleted until an acceptably precise execution time is obtained? Answer: Yes. I adjusted my script's assigned execution time by observing a trigger's execution time (via email message timestamp), and deleting, recreating, and observing the randomly assigned trigger execution time until I got an acceptable minute within the requested hour window.

How long will a Google Apps Script continue to run unattended

I have written a standalone Script, which is stored in my Drive account and I have set up a trigger to run it at set intervals (several times daily, if it's relevant).
If I don't log in to my account and the Script doesn't experience any unhandled errors or exceed quotas, how long will it continue running unattended?
Indefinitely?
Once you've set up a trigger as you've described, the script will continue to run indefinitely, even if there are errors or exceeded quotas. (It will still launch, but may fail again or be killed immediately for exceeding quota.)
For this reason, you should ensure that you have set notifications appropriately.
In Understanding Triggers, the behaviour of time-based triggers is explained, but it does not explicitly state that triggers run indefinitely. On the other hand, it does not say that they stop running - implying that they don't. (I have triggers that have been running daily for years.)
There have been reported instances of triggers misfiring (Issue 2708, 2746, 2547), as well as a personal favourite - scripts that continue to trigger after deletion (Issue 143).

Google Apps Script Service Using Too Much Computer Time for One Day

So I got the error message in the subject on the one script I had running yesterday, and I assume I will get a similar message today.
I have improved the script (which has a trigger to run once per minute) so it functions more along the lines of how it is supposed to; however, the error message got me thinking about what sorts of functions or parts of a program might be asking for more service time than others.
For example, I have had to use multiple sleep calls in my Google Apps Script to allow the data import to run, and again for the worksheet changes/copy-paste calls to process. Do all those sleep calls count against me in terms of service time used?
I would ask on the community's behalf that this be left as an open-ended question not specific to the sleep function: what parts of a script demand service time, and which do not (if any)?
Every call to a service (Spreadsheet, Calendar or whatever) takes more time than regular JavaScript operations.
For example, if you have to modify 10 cells in a spreadsheet, calling range.setValue() 10 times takes far more time than putting all the data in an array and then updating the spreadsheet in one go using range.setValues().
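A quick sketch of the difference (the sheet and data here are placeholders):

var sheet = SpreadsheetApp.getActiveSheet();
var data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// Slow: one Spreadsheet service call per cell.
for (var i = 0; i < data.length; i++) {
  sheet.getRange(i + 1, 1).setValue(data[i]);
}

// Fast: build the values in plain JavaScript, then make one service call.
var values = data.map(function (v) { return [v]; });
sheet.getRange(1, 1, values.length, 1).setValues(values);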
If you can paste pieces of your code, the community will be able to offer more advice on how to improve your script.
The limit is on CPU time used in time-based triggers, and I believe those sleep calls are counted against your limit. I'd encourage you to find ways to avoid the sleep calls, or schedule your script to run less often.
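A sketch of both suggestions ('myJob' is a placeholder handler name): SpreadsheetApp.flush() forces pending spreadsheet changes to apply, which often removes the need to sleep, and the trigger itself can be recreated at a lower frequency (everyMinutes() accepts 1, 5, 10, 15, or 30):

function myJob() {
  // ... spreadsheet edits ...
  SpreadsheetApp.flush(); // apply pending changes instead of sleeping
}

// One-off helper: replace the every-minute trigger with an every-5-minutes one.
function reschedule() {
  ScriptApp.getProjectTriggers().forEach(function (t) {
    if (t.getHandlerFunction() === 'myJob') ScriptApp.deleteTrigger(t);
  });
  ScriptApp.newTrigger('myJob').timeBased().everyMinutes(5).create();
}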