Understanding GAS Trigger Total Runtime Quota

I remember reading that Total Triggers Runtime is 1 hour for Consumer type users. I don't think I really understand what that means.
Let's say I programmatically create a trigger to run every 10 minutes, like so ...
ScriptApp.newTrigger("myFunction")
.timeBased()
.everyMinutes(10)
.create();
... and let it run around the clock.
myFunction does something that is not very time-consuming, like appending a couple of rows to a spreadsheet.
My question is, when am I going to hit the said '1 hour' limit?

As you can see under the Apps Script quotas, those times refer to the total execution time of the function(s) being run on a trigger.
When you go to https://script.google.com/u/0/home/executions, you can see all your executions, including the executions for a particular trigger.
So, if you sum the durations of all the executions of all the functions of type Trigger within the last 24 hours, that total cannot exceed the Triggers total runtime quota.
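For the every-10-minutes example above, that works out to 144 runs per day; if each append takes about 2 seconds, that's under 5 minutes of total runtime, far below the quota. If you want a rough running total of your own, here is a minimal sketch (the property key and the daily reset function are assumptions for illustration, not part of the original question):
function myFunction() {
  var start = Date.now();
  // ... the actual work: append a couple of rows to a spreadsheet ...
  // Accumulate a rough daily-runtime total in Script Properties.
  var props = PropertiesService.getScriptProperties();
  var used = Number(props.getProperty('runtimeMsToday') || 0);
  props.setProperty('runtimeMsToday', String(used + (Date.now() - start)));
}

// A separate daily trigger could reset the counter:
function resetRuntimeCounter() {
  PropertiesService.getScriptProperties().setProperty('runtimeMsToday', '0');
}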

Related

How is the time to re-execute a Google Apps Script calculated?

Let us suppose that I have a script running every 10 minutes, and now I add a line of code
Utilities.sleep(30000)
in between. Will it now keep running every 10 minutes, or every 15 minutes?
If a timed trigger is set to fire every 10 minutes, that is what it will do; it does not depend on how long the function takes to execute. In principle, you could have a 5-minute sleep inside a function that's triggered to run every minute. Except that will quickly run into problems:
Total trigger-based execution time limit: 90 minutes per day
"There are too many scripts running simultaneously for this Google user account" (how many is "too many" is not documented as far as I know).

How would you expire (or update) a MySQL record with precision to the expiry time?

Last year I was working on a project for university where one feature necessitated the expiry of records in the database with almost to-the-second precision (i.e. exactly x minutes/hours after creation). I say 'almost' because a few seconds probably wouldn't have meant the end of the world for me, although I can imagine that in something like an auction site, this probably would be important (I'm sure these types of sites use different measures, but just as an example).
I did research on MySQL events and did end up using them, although now that I think back on it, I'm wondering if there is a better way to do what I did (which wasn't all that precise or efficient). There are three methods I can think of using events to achieve this - I want to know whether these methods would be effective and efficient, or if there is some better way:
1. Schedule an event to run every second and update expired records. I imagine that this would cause issues as the number of records increases and takes longer than a second to execute, and might even interfere with normal database operations. Correct me if I'm wrong.
2. Schedule an event that runs every half-hour or so (could be any time interval, really), updating expired records. At the same time, impose selection criteria when querying the database to only return records whose expiration date has not yet passed, so that any records that expired since the last event execution are not retrieved. While this would be accurate at the time of retrieval, it defeats the purpose of having the event in the first place, and I'd assume the extra selection criteria would slow down the select query. In my project last year, I used this method, and the event updating the records was really only for backend logging purposes.
3. At insert, have a trigger that creates a dynamic event specific to the record that will expire it precisely when it should expire. After the expiry, delete the event. I feel like this would be a great method of doing it, but I'm not too sure whether having so many events running at once would impact the performance of the database (imagine a database that has even 60 inserts an hour - that's 60 events all running simultaneously for just one hour. Over time, depending on how long the expiration is, this would add up).
I'm sure there are more ways you could do this - maybe using a separate script that runs externally to the RDBMS is an option - but these are the ones I was thinking about. If anyone has any insight as to how you might expire a record with precision, please let me know.
Also, despite the fact that I actually did use it in the past, I don't really like method 2, because while it works for the expiration of records, it doesn't really help me if, instead of expiring a record at a precise time, I wanted to make it active at a certain time (i.e. a scheduled post on a blog site). So for this reason, if you have a method that would work to update a record at a precise time, regardless of what that update does (expire or post), I'd be happy to hear it.
Regarding option 3 (at insert, a trigger creates a per-record event that expires the record precisely on time, then the event is deleted):
If you know the expiry time at insert, just put it in the table:
library_record - id, ..., create_at, expire_at
And query live records with the condition:
expire_at > NOW()
Same with publishing:
library_record - id, ..., create_at, publish_at, expire_at
Where:
publish_at <= NOW() AND expire_at > NOW()
You can set publish_at = create_at for immediate publication or just drop create_at if you don't need it.
Each of these, with the correct indexing, will have performance comparable to an is_live = 1 flag in the table and save you a lot of event related headache.
Also, you will be able to see exactly why a record isn't live, and when it expired or should be published. You can also easily query things such as records that expire soon and send reminders.
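Put together, a minimal sketch of the suggested schema and query (the column types and the index name are assumptions):
CREATE TABLE library_record (
  id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  -- ... other columns ...
  create_at  DATETIME NOT NULL,
  publish_at DATETIME NOT NULL,
  expire_at  DATETIME NOT NULL,
  KEY idx_window (publish_at, expire_at)  -- supports the live-record condition
);

-- Fetch only the records that are currently live:
SELECT *
FROM library_record
WHERE publish_at <= NOW()
  AND expire_at > NOW();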

Google Apps Script for counting Matrix

I have a spreadsheet with a matrix set up to count how many times a student has had a lesson with a particular tutor.
The matrix works fine with this formula:
=ARRAYFORMULA(SUM(IF(TERM4!$B$6:$B$2398=B$1,IF(TERM4!$C$6:$C$2398=$A2,1,IF(TERM4!$D$6:$D$2398=$A2,1,FALSE()))),FALSE()))
however due to the number of students/tutors the matrix is 7000 cells, slowing the working sheet down considerably.
Is there a better way to do this? Can I run a Google Apps Script on a trigger (e.g. once a week) to count the matrix, so the formulas are not slowing the sheet down?
I would also like the formula to return a blank rather than a 0 if the result is FALSE.
Thanks for your help!
Yes, it's possible to do this with GAS. The only part that gets a little complex is that if your script takes over 5 minutes, it won't process all the rows. To avoid that, process the data in chunks (say 100 rows at a time) and use Script Properties to remember which spreadsheet and row you last processed, as sketched below. Each trigger run will then process as much as it can until all spreadsheets and rows are processed.
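A minimal sketch of that chunked approach (the sheet name, ranges, and property key here are assumptions for illustration):
var CHUNK_SIZE = 100;

function processChunk() {
  var props = PropertiesService.getScriptProperties();
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('TERM4');
  var startRow = Number(props.getProperty('lastRow') || 6); // data starts at row 6
  var lastRow = sheet.getLastRow();
  if (startRow > lastRow) return; // everything has been processed

  var numRows = Math.min(CHUNK_SIZE, lastRow - startRow + 1);
  var values = sheet.getRange(startRow, 2, numRows, 3).getValues(); // columns B:D
  // ... tally tutor/student pairs from `values` into the matrix here ...

  props.setProperty('lastRow', String(startRow + numRows)); // resume here next run
}
A time-based trigger can then call processChunk every few minutes until the counter reaches the last row.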

Need to increase Timeout for URLFetch.Fetch(url)

I have a Keynote API that I use to provide Week to Date, Month to Date, and Year to Date performance metrics. The Week to Date call responds quickly, the Month to Date call sometimes times out, and the Year to Date call always times out due to server-side calculations.
Is there any way to increase the Timeout for my Google Script?
var prods = Utilities.jsonParse(UrlFetchApp.fetch(json_api_url).getContentText());
There are ways to get Google Apps Script to handle time-outs when performing repeated operations, but they don't apply in this case. You've got a single operation that just takes a long time.
There is no way to increase the timeout value for UrlFetchApp in Google Apps Script.
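For the Month to Date call that only sometimes times out, a retry with backoff may help; it cannot help the Year to Date call that always exceeds the limit. A minimal sketch (the function name and attempt count are assumptions):
function fetchWithRetry(url, maxAttempts) {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return UrlFetchApp.fetch(url);
    } catch (e) {
      if (attempt === maxAttempts) throw e; // give up after the last attempt
      Utilities.sleep(1000 * Math.pow(2, attempt)); // exponential backoff
    }
  }
}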

GAS performance slower than other server-side JavaScript

Working on a Google Sites site, which takes data from a spreadsheet and builds several charts dynamically, I noticed that Google Apps Script works quite slowly. I profiled the code and optimized it by using the Cache Service where possible. After optimization, the charting code takes approx. 3 seconds (2759 ms is one of the fastest times I have ever seen) to draw 11 charts from 127 rows. And this time is for the case when all data are already in the cache. The first execution, which fetches data from the spreadsheet and places it in the cache, takes around 10 seconds. The profiled code spent significant time (tens of milliseconds) even in simple places. To measure GAS performance, I wrote a very simple procedure and executed it in the GAS environment, as a deployed web application, and in the Caja Playground. I also submitted an issue to the GAS issue tracker.
Eric Koleda reasonably mentioned that it is not correct to compare server code with code running on a client. I rewrote the benchmark code, and here are the results. The details and explanations follow.
Engine          |List To Map|Adjust|Quick Sort|Sort|Complete|
GAS             |        138|   196|       155|  38|     570|
rhino-1.6.5     |         67|    44|        31|   9|     346|
spidermonkey-1.7|         40|    36|        11|   5|     104|
GAS - a row containing the execution times of the different functions run on the GAS engine. All times are in milliseconds. The GAS execution times drift within quite wide limits; the table shows the fastest times I observed across 5-10 executions. The worst Complete time I have seen was 1194 ms. The source code is here. The results are here.
rhino-1.6.5 and spidermonkey-1.7 - rows containing the execution times of the same functions as GAS, but executed on the corresponding JavaScript engines using ideone.com. The code and times for these engines are here and here.
The benchmark code contains a few functions.
List To Map [listToMap] - a function which converts a list of objects to a map with a compound key. It is taken from the site script and takes approx. 9.2% (256 of 2759 ms) of the charting code's time.
Adjust [adjustData_] - a function which converts all date columns in a matrix to text in a predefined format, transposes the matrix, and converts rows from the [[[a], [1]], [[b], [2]]] form to the [[a, 1], [b, 2]] one. It is also taken from the script and consumes approx. 30.7% (857 of 2759 ms).
Sort - the standard Array.sort function, included in the test to see how fast standard functions work.
Quick Sort [quick_sort] - a quick sort function taken from here. It is added to the benchmark for comparison with the Array.sort execution time.
Complete [test] - a function which calls the functions above and prepares the test data. This time is not the sum of the times in a row.
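For context, here is a hedged sketch of what a listToMap-style helper might look like (the actual benchmark source is linked above; the compound-key construction is an assumption):
function listToMap(list, keyFields) {
  var map = {};
  for (var i = 0; i < list.length; i++) {
    var record = list[i];
    // Build a compound key by joining the selected fields.
    var key = keyFields.map(function (f) { return record[f]; }).join('|');
    map[key] = record;
  }
  return map;
}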
Conclusion: The GAS function execution times drift. The GAS Complete function is 1.6 times slower than the slowest competitor. The GAS standard Array.sort function is 4 times slower than the slowest of the two other engines. The List To Map and Adjust functions together are 3 times slower (334 ms vs 111 ms) than the slowest competitor. These functions take 39.2% (1113 of 2759 ms) of the charting function's time. I did not expect these functions to work so slowly. It is possible to optimize them, for instance by using the cache. Let's assume that after optimization these functions' execution time drops to 0 ms. In that case the charting function execution would be 1646 ms.
Wishes: If the GAS team could optimize their engine to the speed of the slowest competitor, one could expect the execution time to drop to 1 second or less. It would also be great to optimize the time needed to fetch data from a spreadsheet. I understand that spreadsheets are not designed to handle large amounts of data, but in any case it would improve overall performance.
I've been able to replicate this performance, and I'll post updates on the issue as I receive them.