I have a large script project that I've been working on for a couple of years that our company uses to track production in a manufacturing environment. Typically, the doGet function that loads the web interface for the tracking tool executes in 5-15 seconds and is very snappy and responsive. However, since yesterday morning that function has been taking 60-90 seconds per execution, and occasionally the web app doesn't open at all (even though I don't see a failure in the log for the doGet function). I've been on vacation since last week and I'm the only developer with access to the code, so nothing in the code base has changed, and the underlying data in the Google Sheet doesn't seem to have had any major shifts either.
I've narrowed things down to see that the reads from and writes to Google Sheets are the main source of the slowdown. I'm reading the data in a batch with getValues(), but a single call to that function on ~850 rows x 9 columns is now taking almost 20 seconds, whereas the entire doGet function (which includes 3-4 getValues calls) ran in less time than that as of a few days ago.
I'm completely at a loss for how to debug this issue. Here are a few lines from the beginning of my doGet function, in case it helps. There is more to the function than this, but the timestamps on the Logger statements tell me that this getValues call is running far too slowly.
var ss = SpreadsheetApp.openById("SPREADSHEETIDHERE");
var pst = ss.getSheetByName("Panel Status Tracker");
Logger.log("Start Panel Data Get");
var panelData = pst.getRange(9, 1, pst.getLastRow() - 8, 8).getValues();
Logger.log("End Panel Data Get");
TIA!
These symptoms would usually suggest that the number of rows in the sheet has increased. Remove any unneeded blank rows at the bottom of the sheet and see if that helps.
If blank rows keep reappearing, chances are that an erroneous array formula somewhere in the sheet is causing runaway expansion.
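If you want to check and prune this from a script, here's a minimal sketch (reusing the sheet name from the question; the guard matters because deleteRows will throw if asked to remove every row in the sheet):
function trimBlankRows() {
  var ss = SpreadsheetApp.openById("SPREADSHEETIDHERE");
  var sheet = ss.getSheetByName("Panel Status Tracker");
  var lastRow = sheet.getLastRow(); // last row that actually has content
  var maxRows = sheet.getMaxRows(); // total rows in the grid, blank or not
  if (lastRow > 0 && maxRows > lastRow) {
    sheet.deleteRows(lastRow + 1, maxRows - lastRow); // drop the trailing blanks
  }
}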
Try adding console.time('sec1') and console.timeEnd('sec1') around various sections of your code to figure out which section takes the most time. Once you've identified the slow section, narrow it down to the exact line by adding subsections within that section. See the console.time() reference for details.
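Applied to the snippet above, that might look like this (the label is arbitrary):
console.time('panelDataGet');
var panelData = pst.getRange(9, 1, pst.getLastRow() - 8, 8).getValues();
console.timeEnd('panelDataGet'); // logs the elapsed time for this section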
I've been having an issue with a Google Apps Script that kept throwing the following error whenever the code was triggered onChange. No such errors were returned when I ran the code manually.
Exception: Service Documents failed while accessing document with id XXXX
The error was happening on the same line of code each time. I've since found that this is a relatively common error, perhaps happening because my script is quite lengthy and not as efficient as it should be (I'm very much an Apps Script and JS novice). Interestingly, I have a very similar script (with perhaps 20% fewer variables) running in another file that executes as expected every time.
I had no naming or scoping issues with the variable being used to open the document, but I thought that wrapping the troublesome part of the script in a self-invoking function within the larger function might minimise hoisting and improve the efficiency of the script - at least enough to let it run consistently without errors. My understanding of JS is that when a script starts, variable declarations are hoisted within their scope, so by creating a self-invoking function I could reduce the hoisting within the main function (which contains in excess of 100 variables) and therefore reduce the initial demand.
So far the script does appear to be running more quickly and avoiding the error I'd previously been seeing - the last run took just over 63 seconds, whereas the previous successful manual run without the self-invoking function took just under 103 seconds.
I do believe this can be an intermittent error and I'm trying to find a robust, longer term fix without having to rewrite all of my code.
I have detailed the self-invoking code below with any IDs redacted. The part that was causing the error was the line "var docFinal = DocumentApp.openById(docFinalId);".
Do you think this could be a genuine fix or has the code started to work coincidently because of the intermittent nature of this error?
var docTempId = "XXXX";//Template File Id
var docFinalId = "XXXX"; //Final File Id
var sheetId = "XXXX";
(function () { // self-invoked within larger function to minimise hoisting
var docTemp = DocumentApp.openById(docTempId);
var docFinal = DocumentApp.openById(docFinalId);
docFinal.getBody().clear();
var templateParagraphs = docTemp.getBody().getParagraphs();
createMailMerge(Company,Branch,PropertyID,PropertyAddress,ApplicantName,Dateofbirth1,EmailAddress,PhoneNumber,Nationality,PassportNumber,NationalInsuranceNumber,Currentresidentialstatus,LeaseRequested,RentOfferedPCM,TotalNumberofAdultOccupants,Proposedleasecommencementdate1,TotalRentPayers,RentPayer2Name,RentPayer2Phone,RentPayer2Email,RentPayer3Name,RentPayer3Phone,RentPayer3Email,Relationshipwithadultoccupants,Numberofchildren,Ageofchildchildren,Currenteconomicstatus,Applicantoccupation,Applicantemployedorselfemployed,ApplicantDeclaredIncome,SelfEmployedDocuments,EmploymentPartorFullTime,EmploymentContractorPermanent,MainEmploymentCompany,Mainemploymentaddress,Mainemploymentstartdate1,Mainemploymemtpensionpayrollnumber,MainEmploymentManager,ManagerEmail,ManagerPhoneNumber,ApplicantPaymentType,ApplicantHourlyRate,Applicantprimaryaveragehourspermonth,Applicantsalary,ReceivesHousingBenefit,HousingBenefitAmount,Anyadditionalincome,Typeofadditonalincome,Secondemploymentcompany,Rolewithin2ndCompany,ndEmployeraddress,ndEmploymentstartdate1,ndEmployerpensionpayrollnumber,ndEmploymentContact,ndEmploymentEmail,ndEmploymentphonenumber,Additionalincomeamount,Additionalincomedetails,TotalDeclaredGrossIncome,Applicansavingsdeclared,MostRecentAddress,DateStartedlivingincurrentaddress1,Liveanywhereelseinlast3years,Applicant2ndresidingaddress,Applicant2ndaddressmoveindate1,Applicant2ndaddressmoveoutdate1,Applicantadditionaladdressdeclared,Applicantadditionaladdressdetails,Applicantadditonaladdressmoveindate1,Applicantadditionaladdressmoveoutdate1,Applicantpreviouslandlordreference,Landlordreferenceaddress,referencefromlandlordoragent,LandlordAgentName,lengthoftimeatproperty,LandlordAgentphonenumber,LandlordAgentemailaddress,Previouslandlordreferencepreventionreason,Anypets,Petdetails,Applicantsmoke,Applicantsmokeinside,Applicantadversecredit,Adversecreditdetails,Applicantprovidecurrentaccount,Applicantcurrentaccountname,Applicantcurrentaccountbank,Applicantcurrentaccountnumber,Applicantcurrentaccountsortcode,UKbasedguarantor,GuarantorName,GuarantorEmail,GuarantorPhoneNo,Noguarantorreason,NextofKinName,NextofKinrelationship,NextofKinEmail,NoNextofKinPhoneNo,Applicantadditionalinfo,Applicantdocuments,Applicantaccurateinformationdeclaration,Applicantaccepttermsandconditions,submittedatt,Token,maidenname,ApplicantReferencingChoice,Applicantcanprovide,Applicantdocumentlink,ApplicantacceptsHomelet,ApplicantallowsHomelettocontactreferences,ApplicanthappyforHomelet,templateParagraphs,docFinal);
docFinal.saveAndClose();
createPDF(); // calls the next function
})();
The script has just failed again. After working on a number of occasions, which hadn't been happening before, it has now failed and returned the same error as before. I can therefore only assume that creating a self-invoking function within the main function has made no material difference to the efficiency of the script.
The error you are receiving does not seem to be the expected one.
I think the best solution in this situation is to file a bug on Google's Issue Tracker by using the template here.
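In the meantime, since the error is intermittent, wrapping the failing openById call in a bounded retry is a common stopgap. A minimal sketch (the attempt count and delays are arbitrary, and this works around the error rather than fixing its cause):
function openDocWithRetry(docId, maxAttempts) {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return DocumentApp.openById(docId);
    } catch (e) {
      if (attempt === maxAttempts) throw e; // out of retries, surface the error
      Utilities.sleep(1000 * attempt);      // wait a little longer each time
    }
  }
}
// usage: var docFinal = openDocWithRetry(docFinalId, 3);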
I have a function which I wish to tie to a daily time trigger in Google Apps Script. The function is supposed to take all e-mails in my Inbox marked as read that are older than 14 days and archive them. Here is the code, which I got from here
function batchArchiveA() {
  var batchSize = 100; // moveThreadsToArchive processes up to 100 threads at once
  var threads = GmailApp.search('label:"inbox" is:read older_than:14d -label:"Delete me"');
  for (var j = 0; j < threads.length; j += batchSize) {
    Logger.log("Thread " + j);
    GmailApp.moveThreadsToArchive(threads.slice(j, j + batchSize));
  }
}
I have run this function manually a few times to test it out. However, none of the changes seem to be reflected when I open my Inbox in Gmail. I still have 890+ e-mails in my inbox dating back to 2012 (plus more under the Promotions, Updates, etc. categories).
Thing is, the execution output initially reported no errors, and I could see that a lot of threads were being loaded and then dealt with in the loop. However, now when I run the script, there are no threads loaded. The search simply returns an empty array and the function exits.
I'm just curious what I am doing wrong. I've looked at the Google Developers reference for GmailApp, but there's not really much to go on as far as debugging is concerned. And presumably, since the search no longer returns anything, the previous runs actually did work... if they did archive all the threads older than 14 days, then the search would indeed no longer find them.
Any ideas why I'm not seeing the e-mails gone from my inbox when I load up Gmail?
Okay, so it turns out it was working, but my inbox was just so clogged up that I didn't notice a difference - I believe the search only grabs a maximum number of threads at a time (which must be something like 400-500).
I set it up in a loop that keeps running as long as the search returns a non-empty array, meaning it will work through all the threads it can (or, as at the moment, until the maximum execution time limit is hit!). I set the function to run on a regular basis and my inbox has already shrunk to almost nothing!
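For anyone hitting the same thing, here's a sketch of that loop (query reused from the snippet above; the per-search cap and the 100-thread limit per moveThreadsToArchive call are why both loops are needed):
function batchArchiveAll() {
  var batchSize = 100; // moveThreadsToArchive processes up to 100 threads per call
  var query = 'label:"inbox" is:read older_than:14d -label:"Delete me"';
  var threads = GmailApp.search(query);
  while (threads.length > 0) { // keep going until the search comes back empty
    for (var j = 0; j < threads.length; j += batchSize) {
      GmailApp.moveThreadsToArchive(threads.slice(j, j + batchSize));
    }
    threads = GmailApp.search(query); // refill; the search caps what it returns
  }
  // note: a very large backlog may still hit the script execution time limit
}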
In the last week or so we got a report of a user missing files in the file list in our app. We were a bit confused at first because they said they only had a couple of files that matched our query string, but with a bit of work we were able to reproduce their issue by adding a large number of files to our own Google Drive. Previously we had been assuming people would have fewer than 100 files and hadn't been doing paging, to avoid multiple files.list requests.
After switching to paging, we noticed that one of our test accounts was sending hundreds and hundreds of files.list requests, and most of the responses did not contain any files but did contain a nextPageToken. I'll update as soon as I can get a screenshot - but the client was sending enough requests to heat the computer up and drain the battery fairly quickly.
We also found that the exact query, even when it matches the same files, can have a drastic effect on the number of requests needed to retrieve our full file list. For example, switching '=' to 'contains' in the query param significantly reduces the number of requests made, but we don't see any guarantee that this is a reasonable and generalizable solution.
Is this the intended behavior? Is there anything we can do to reduce the number of requests that we are sending?
We're using the following code to retrieve files created by our app, and it is causing the issue.
runLoad: function (pageToken) {
    gapi.client.drive.files.list({
        'maxResults': 999,
        'pageToken': pageToken,
        'q': "trashed=false and mimeType='" + mime + "'"
    }).execute(function (results) {
        this.filePageRequests++;
        if (results.error || !results.nextPageToken || this.filePageRequests >= MAX_FILE_PAGE_REQUESTS) {
            this.isLoading(false);
        } else {
            this.runLoad(results.nextPageToken);
        }
    }.bind(this));
}
It is, but probably shouldn't be, the correct behaviour.
It generally occurs when using the drive.file scope. What (I think) is happening is that the API layer is fetching all files, and then removing those that are outside of the current scope/query, and returning the remainder to your client app. In theory, a particular page of files could have no files in-scope, and so the returned array is empty.
As you've seen, it's a horribly inefficient way of doing it, but that seems to be the way it is. You simply have to keep following the next page link until it's null.
As to "Is there anything we can do to reduce the number of requests that we are sending?"
You're already setting max results to 999, which is the obvious step. Just be aware that I have seen this value trigger internal errors (timeouts?) which manifest themselves as 500 errors. You might want to sacrifice efficiency for reliability and stick to the default of 100, which seems to be better tested.
I don't know if the code you posted is your actual code or just a simplified illustration, but you need to make sure you are dealing with 401 errors (auth expiry) and 500 errors (sometimes recoverable with a retry).
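A sketch of what that might look like, layered onto the snippet from the question (the retry count and backoff delays are illustrative; re-authorization on 401 is left as a comment since it depends on your auth flow):
runLoad: function (pageToken, attempt) {
    attempt = attempt || 0;
    gapi.client.drive.files.list({
        'maxResults': 100, // the better-tested default page size
        'pageToken': pageToken,
        'q': "trashed=false and mimeType='" + mime + "'"
    }).execute(function (results) {
        if (results.error) {
            if (results.error.code >= 500 && attempt < 3) {
                // transient server error: retry the same page with backoff
                setTimeout(this.runLoad.bind(this, pageToken, attempt + 1),
                           1000 * Math.pow(2, attempt));
                return;
            }
            // a 401 here means the token expired: re-authorize, then retry
            this.isLoading(false);
            return;
        }
        this.filePageRequests++;
        if (!results.nextPageToken || this.filePageRequests >= MAX_FILE_PAGE_REQUESTS) {
            this.isLoading(false);
        } else {
            this.runLoad(results.nextPageToken);
        }
    }.bind(this));
}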
I use the following timed trigger on a Google Spreadsheet. It runs every ten minutes:
function timedTriggerWatchFiles(rootFolder) {
  // make an array with the names of all the child folders
  var folders = DriveApp.getFoldersByName(rootFolder); // look the folder up once
  var childFoldersA = [];
  if (folders.hasNext()) {
    var childFolders = folders.next().getFolders();
    while (childFolders.hasNext()) {
      childFoldersA.push(childFolders.next().getName());
    }
  }
  // run watchFiles for each child folder
  for (var i = 0; i < childFoldersA.length; i++) {
    watchFiles(rootFolder, childFoldersA[i]);
  }
}
function timedTrigger() {
timedTriggerWatchFiles("folder");
}
At least once a day I get a 'failure report' in my Inbox, saying:
We're sorry, a server error occurred. Please wait a bit
and try again. (line 242, file "Code")
The execution log gives the following message:
[14-01-22 17:29:38:363 CET] Execution failed: We're sorry,
a server error occurred. Please wait a bit and try again.
(line 242, file "Code") [37.016 seconds total runtime]
The lines are always different, but they always contain a call to hasNext(). This is line 242:
while (childFolders.hasNext()) {
What am I doing wrong here? My script works as it is supposed to work. I just don't understand why I am receiving the error messages.
In the Drive SDK documentation there is a section about exponential backoff that you should check out. You could try the GASRetry library to simplify this in Apps Script.
@Greg's answer points to all the right resources for working with APIs in Google Apps Script.
Although it may appear that access to services like ScriptDB, DriveApp, SpreadsheetApp, etc. is native to Apps Script, in fact there are access limitations for aggressive use. The simplest of solutions is Utilities.sleep(1000);.
However, if you have a script that is running tightly against the 600-second limit, or you need to ensure that every request is made as quickly as possible, then the resources in @Greg's answer will be of great assistance.
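A minimal backoff wrapper along those lines, in case you don't want to pull in a library (the attempt count, base delay and jitter are arbitrary):
function callWithBackoff(fn, maxAttempts) {
  for (var attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return fn();
    } catch (e) {
      if (attempt === maxAttempts - 1) throw e; // out of retries
      // wait 1s, 2s, 4s, ... plus up to 1s of random jitter before retrying
      Utilities.sleep(Math.pow(2, attempt) * 1000 + Math.round(Math.random() * 1000));
    }
  }
}
// usage, wrapping the call that fails intermittently:
// var folders = callWithBackoff(function () {
//   return DriveApp.getFoldersByName(rootFolder);
// }, 5);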