I am trying to set up caching for a spreadsheet custom function, but the results seem inconsistent/unexpected. Sometimes I get the cached result, sometimes it refreshes the data. I've set the expiration to 10 seconds, yet when I refresh within 10 seconds, sometimes it grabs new data and sometimes it serves the cache. Even after waiting more than 10 seconds since the last call, I sometimes still get the cached result. Why is there so much inconsistency in the spreadsheet function (or am I just doing something wrong)? When I call the function directly from the script editor it seems much more consistent, but I still occasionally get unexpected results.
function getStackOverflow() {
  var cache = CacheService.getPublicCache();
  var cached = cache.get("stackoverflow");
  if (cached != null) {
    Logger.log('this is cached');
    return 'this is cached version';
  }
  // Fetch the data and create an object.
  var result = UrlFetchApp.fetch('http://api.stackoverflow.com/1.1/tags/google-apps-script/top-answerers/all-time');
  var json = JSON.parse(result.getContentText()).top_users;
  var rows = [], data;
  for (var i = 0; i < json.length; i++) {
    data = json[i].user;
    rows.push(data.display_name);
  }
  Logger.log("This is a refresh");
  // Cache the result for 10 seconds.
  cache.put("stackoverflow", JSON.stringify(rows), 10);
  return rows;
}
You can't use custom functions like that. It's documented: custom functions must be deterministic, always producing the same output for the same input (in your case none, since you are passing no parameters). The spreadsheet remembers the values for each input set, effectively a second layer of cache that you have no control over.
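A common workaround is to give the function a parameter whose value you change whenever you want a real recalculation, so the spreadsheet sees a new input set instead of replaying its memoized result. A minimal sketch below; getScriptCache() is the current equivalent of the deprecated getPublicCache(), and the refreshToken parameter is purely illustrative:
// Call from a cell as =GETSTACKOVERFLOW(A1) and change A1 whenever
// you want to force a genuine refresh.
function GETSTACKOVERFLOW(refreshToken) {
  var cache = CacheService.getScriptCache();
  var cached = cache.get("stackoverflow");
  if (cached != null) {
    return JSON.parse(cached); // served from CacheService
  }
  var result = UrlFetchApp.fetch('http://api.stackoverflow.com/1.1/tags/google-apps-script/top-answerers/all-time');
  var json = JSON.parse(result.getContentText()).top_users;
  var rows = json.map(function(u) { return u.user.display_name; });
  cache.put("stackoverflow", JSON.stringify(rows), 10); // 10-second expiration
  return rows;
}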
I have a problem with my function in Google Sheets. Every day I get this error: "Exception: Service invoked too many times for one day: urlfetch." I have about 1000 URLs in the document. I looked for a solution on Google and found some topics recommending adding a cache to the function, but I don't know how to do it. Does somebody have any idea? My function:
function ImportCeny(url, HTMLClass) {
  var html = '';
  var fetchedUrl = UrlFetchApp.fetch(url, {muteHttpExceptions: true});
  if (fetchedUrl) {
    html = fetchedUrl.getContentText();
  }
  // Grace period to avoid the call limit
  Utilities.sleep(1000);
  var priceposition = html.search(HTMLClass);
  return html.slice(priceposition, priceposition + 70).match(/(-\d+|\d+)(,\d+)*(\.\d+)*/g);
}
You may try adding a randomly generated number, for example 6 digits, to the URL as a query parameter each time before calling UrlFetchApp, i.e.:
url = url + '?t=' + Math.floor(Math.random() * 1000000); // e.g. url + '?t=458796'
You can certainly use Utilities.sleep() to force the program to pause for some time before making the next call. However, using the built-in Cache service (you can see the docs here) is much more suitable, as it is specifically designed for these scenarios.
So, if you wanted the content to live for one second, you could replace:
Utilities.sleep(1000); // in milliseconds
with
var cache = CacheService.getScriptCache(); // create a cache instance
cache.put("my_content", html, 1); // expiration in seconds
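Note, though, that put() alone only stores the page; to actually avoid repeated fetches you must also read from the cache before fetching. A minimal sketch of how ImportCeny could be restructured, assuming the URL itself works as the cache key and that a 6-hour expiration (the service maximum) is an acceptable staleness; CacheService values are capped at about 100KB, so very large pages won't cache:
function ImportCenyCached(url, HTMLClass) {
  var cache = CacheService.getScriptCache();
  var html = cache.get(url); // reuse the page if we fetched it recently
  if (html == null) {
    var fetchedUrl = UrlFetchApp.fetch(url, {muteHttpExceptions: true});
    html = fetchedUrl.getContentText();
    cache.put(url, html, 21600); // cache for 6 hours (assumed acceptable)
  }
  var priceposition = html.search(HTMLClass);
  return html.slice(priceposition, priceposition + 70).match(/(-\d+|\d+)(,\d+)*(\.\d+)*/g);
}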
Update: I have updated my code with some of the suggestions, as well as a feature that allows for easy multiple markers, and changed arrayOfData into a 2D array of strings. It has similar runtimes, if not slightly slower - 50pg avg: 12.499s, 100pg avg: 21.688s, per page avg: 0.233s.
I am writing a script that takes some data and a template and performs a 'mail merge' type function into another document. The general idea for this is easy and I can do it no problem.
However, I am currently 'mail merging' many rows (150-300+) of ~5 columns of data into predefined fields of a single page template (certificates) into a single document. The result is a single Google Document with 150 - 300 pages of the certificate pages. The alternative is to generate many documents and, somehow, combine them.
Is This a Good/Efficient Way of Doing This?
It took me a while to put together this example from the documentation alone, as I couldn't find anything online. I feel like there should be a simpler way to do this but cannot find anything close to it (i.e. appending a Body to a Body). Is this the best way to get this functionality right now?
Edit: What about using the bytes from the Body's Blob? I'm not experienced with this, but would it work faster? Though then the issue becomes replacing text without generating many Documents before converting to Blobs.
*Note: I know Code Review exists, but they don't seem to have many users who understand Google Apps Script well enough to offer improvement. There is a much bigger community here! Please excuse it this time.
Here Is My Code (Updated Feb 23, 2018 @ 3:00 PM PST)
Essentially it takes each child element of the Body, replaces some fields, then detects its type and appends it using the appropriate append function.
/* Document Generation Statistics:
* 50 elements replaced:
* [12.482 seconds total runtime]
* [13.272 seconds total runtime]
* [12.069 seconds total runtime]
* [12.719 seconds total runtime]
* [11.951 seconds total runtime]
*
* 100 elements replaced:
* [22.265 seconds total runtime]
* [21.111 seconds total runtime]
*/
var TEMPLATE_ID = "Document_ID";

function createCerts() {
  createOneDocumentFromTemplate(
    [
      ['John', 'Doe'], ['Jane', 'Doe'], ['Jack', 'Turner'], ['Jordan', 'Bell'], ['Lacy', 'Kim']
    ],
    ["<<First>>", "<<Last>>"]);
}

function createOneDocumentFromTemplate(arrayOfData, arrayOfMarkers) {
  var file = DriveApp.getFileById(TEMPLATE_ID).makeCopy("Certificates");
  var doc = DocumentApp.openById(file.getId());
  var body = doc.getBody();
  var fixed = body.copy(); // pristine copy of the template body
  body.clear();
  var copy;
  for (var j = 0; j < arrayOfData.length; j++) {
    var item = arrayOfData[j];
    copy = fixed.copy();
    for (var i = 1; i < copy.getNumChildren() - 1; i++) {
      // Replace every marker in the working copy, then append the child
      // to the output body using the append method for its element type.
      for (var k = 0; k < arrayOfMarkers.length; k++) {
        copy.replaceText(arrayOfMarkers[k], item[k]);
      }
      switch (copy.getChild(i).getType()) {
        case DocumentApp.ElementType.PARAGRAPH:
          body.appendParagraph(copy.getChild(i).asParagraph().copy());
          break;
        case DocumentApp.ElementType.LIST_ITEM:
          body.appendListItem(copy.getChild(i).asListItem().copy());
          break;
        case DocumentApp.ElementType.TABLE:
          body.appendTable(copy.getChild(i).asTable().copy());
          break;
      }
    }
  }
  doc.saveAndClose();
  return doc;
}
This is more of a Code Review question, but no, as written, I don't see any way to make it more efficient. I run a similar script for creating documents at work, though mine creates separate PDF files to share with the user rather than creating something we would print. It may save you time and effort to look into an AddOn like docAppender (if you're coming from a form) or autoCrat.
A couple of suggestions:
I'm more of a for loop person because it's easier to log errors on particular rows with the indexing variable. It's also more efficient if you're pulling from a spreadsheet where some rows could be skipped (already merged, let's say). Using forEach gives more readable code and is good if you always want to go over the entire array, but is less flexible with conditionals. Using a for loop will also allow you to set a row as merged with a boolean variable in the last column.
The other thing I can suggest would be to use some kind of time-based test to stop execution before the script times out, especially if you're merging hundreds of rows of data.
/**
 * Limit script execution to 4.5 minutes to avoid execution timeouts.
 * @param {Date} starttime - Date object captured before the loop
 * @return {Boolean}
 */
function isTimeUp_(starttime) {
  var now = new Date();
  return now.getTime() - starttime.getTime() > 270000; // 4.5 minutes
}
Then, inside your function (note that you can't break out of a forEach() callback, which is another reason to prefer a plain for loop here):
var starttime = new Date();
for (var i = 0; i < replace.length; i++) {
  // include this check somewhere before you begin writing data
  if (isTimeUp_(starttime)) {
    Logger.log("Time up, finished on row " + i);
    break;
  }
  ... // rest of your loop body
}
I have run into a problem while dealing with the bucket.get() API of Couchbase. I need to check whether a set of document IDs is already stored in the Couchbase server; if not, I need to do some XML parsing.
var policy_bucket = cluster.openBucket('ss_policy_db');

function someFun() {
  for (var i = 0; i < Policies.length; i++) {
    var Profile = Policies[i];
    var polID = Profile.get('id');
    var ret = retrievePolicyNew(polID);
    // do some action on the basis of ret
  }
}

function retrievePolicyNew(id) {
  var result = policy_bucket.get(id.toString()); // TypeError: Second argument needs to be an object or callback.
  console.log(result);
  // return -1 if we find the ID
}
The problem with bucket.get() is that it is asynchronous (I don't know how to make a synchronous call), and I don't want to handle a callback for every ID lookup. Is there any other way to search for a list of IDs in Couchbase? It would be great if someone could point me to a synchronous call API, as that would solve a lot of my other problems too; it doesn't look very good to wrap even a tiny lookup in a callback.
The database holds very little data, so performance is not an issue here.
You should be able to use this in a synchronous manner. I think the code sample you provide above is incomplete, and somewhere you're calling CouchbaseBucket.async() or something similar. In any case, the docs are pretty clear that get() takes a string and returns a JsonDocument.
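That said, if you're on the Node.js SDK (where the TypeError in the question comes from), get() has no synchronous form and always reports through a callback. A minimal sketch of the callback style, assuming the 2.x Node.js SDK:
var couchbase = require('couchbase');
var cluster = new couchbase.Cluster('couchbase://localhost');
var policy_bucket = cluster.openBucket('ss_policy_db');

// Look up one policy ID; existence is reported through the callback.
function retrievePolicyNew(id, done) {
  policy_bucket.get(id.toString(), function(err, result) {
    if (err && err.code === couchbase.errors.keyNotFound) {
      return done(null, false); // document is absent
    }
    if (err) {
      return done(err); // some other failure
    }
    console.log(result.value);
    done(null, true); // document exists
  });
}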
I've searched online and I've looked at the Class Calendar API reference, found here:
https://developers.google.com/apps-script/reference/calendar/calendar
I notice from running a script I've created that the elements of CalendarEvent[] returned by getEvents(startTime,endTime) seem to be in chronological order. Is this always true?
Essentially, am I guaranteed that the following code
events[i].getStartTime().getTime() <= events[i+1].getStartTime().getTime()
will always be true for 0 <= i < (events.length - 1)?
I'm interested in this because I'm creating a script, which merges two (or more) distinct calendars into one and also returns all time slots which are either unallocated (i.e. no event scheduled) or overlap more than one event. Knowing that the elements within a CalendarEvent[] are chronologically ordered makes this task significantly easier (and computationally less expensive).
TIA for any assistance,
S
From my experience, yes, it has always been in this order.
Though I checked the docs and they don't mention anything about it.
So, to be safe, you can either use the Advanced Calendar service to sort by date https://developers.google.com/google-apps/calendar/v3/reference/events/list
or use vanilla JavaScript to sort them.
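For the vanilla JavaScript route, a one-line comparator is enough (a minimal sketch; startTime and endTime are whatever range you already use):
var events = CalendarApp.getDefaultCalendar().getEvents(startTime, endTime);
// Sort ascending by event start time.
events.sort(function(a, b) {
  return a.getStartTime().getTime() - b.getStartTime().getTime();
});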
My take on this is no, the array doesn't guarantee it will be ordered. The documentation only says:
An event will be returned if it starts during the time range, ends during the time range, or encompasses the time range. If no time zone is specified, the time values are interpreted in the context of the script's time zone, which may be different from the calendar's time zone.
If the ordering isn't guaranteed, relying on it may break your handling. It's still best to implement a sort.
I was having this problem as well. Instead of going with the overkill Calendar Advanced Service, I wrote a simple sorter for arrays of CalendarEvent objects.
// copy this to the bottom of your script, then call it on your array of CalendarEvent objects that you got from the CalendarApp
//
// ex:
// var sortedEvents = sortArrayOfCalendarEventsChronologically(events);
// or
// events = sortArrayOfCalendarEventsChronologically(events);
function sortArrayOfCalendarEventsChronologically(array) {
  if (!array || array.length == 0) {
    return [];
  }
  var temp = [];
  for (var i = 0; i < array.length; i++) {
    var startTimeMilli = array[i].getStartTime().getTime();
    // Find the first event in temp that starts later and insert before it;
    // if none does, append at the end.
    var insertAt = temp.length;
    for (var j = 0; j < temp.length; j++) {
      if (startTimeMilli < temp[j].getStartTime().getTime()) {
        insertAt = j;
        break;
      }
    }
    temp.splice(insertAt, 0, array[i]);
  }
  return temp;
}
https://gist.github.com/xd1936/0d2b2222c068e4cbbbfc3a84edf8f696
I am attempting to retrieve a list of all folders and their respective IDs using Google Apps Script. Currently, I write the results to an array, which is then posted to a spreadsheet every 5000 records. Unfortunately, the script reaches the execution limit (5 minutes) before completion. How can I work around this? And would I have more success making RESTful API calls over HTTP than using Apps Script?
I've noted the following:
Code already follows Google's bulk-write best practices.
Slow execution is a result of Apps Script indexing Drive slowly.
Results appear to follow a consistent indexing pattern:
Multiple runs produce results in the same order.
It is unknown how items are re-indexed upon addition, which prevents meaningful caching between runs.
Deltas aren't reliable unless the indexing method is identified.
Looked into Drive caching:
Still required to loop through the FolderIterator object.
Theoretical performance would be even worse, imo (correct?).
Code is below:
function LogAllFolders() {
  var ss_index = 1;
  var idx = 0;
  var folder;
  var data = new Array(5000);
  for (var i = 0; i < 5000; i++) {
    data[i] = new Array(2);
  }
  var ss = SpreadsheetApp.create("FolderInv2", 1, 2).getSheets()[0];
  var root = DriveApp.getFolders();
  while (root.hasNext()) {
    folder = root.next();
    data[idx][0] = folder.getName();
    data[idx][1] = folder.getId();
    idx++;
    if ((ss_index % 5000) == 0) {
      // Bulk-write the filled batch, then reuse the buffer.
      ss.insertRowsAfter(ss.getLastRow() + 1, 5000);
      ss.getRange(ss.getLastRow() + 1, 1, 5000, 2).setValues(data);
      SpreadsheetApp.flush();
      idx = 0;
    }
    ss_index++;
  }
  // Write out any remaining rows from the last, partially filled batch.
  if (idx > 0) {
    ss.insertRowsAfter(ss.getLastRow() + 1, idx);
    ss.getRange(ss.getLastRow() + 1, 1, idx, 2).setValues(data.slice(0, idx));
  }
}
I would first collect all the folder IDs you want to process, then save the last folder ID (or array index) you've processed to your project properties, run the job as a CRON (time-driven trigger) every five minutes, and resume from the saved folder ID or index on each run.
I guess when it's done, remove the CRON trigger programmatically.
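A minimal sketch of that resume pattern, using DriveApp's continuation tokens rather than a saved index (the property key, spreadsheet ID, and 4.5-minute budget are all illustrative):
var TIME_BUDGET_MS = 4.5 * 60 * 1000; // stay safely under the execution limit

// Run this from a time-driven trigger every 5 minutes.
function logAllFoldersResumable() {
  var props = PropertiesService.getScriptProperties();
  var token = props.getProperty('FOLDER_ITERATOR_TOKEN');
  var folders = token ? DriveApp.continueFolderIterator(token)
                      : DriveApp.getFolders();
  var start = Date.now();
  var rows = [];
  while (folders.hasNext()) {
    if (Date.now() - start > TIME_BUDGET_MS) {
      // Out of time: remember where we stopped and exit cleanly.
      props.setProperty('FOLDER_ITERATOR_TOKEN', folders.getContinuationToken());
      writeRows_(rows);
      return;
    }
    var folder = folders.next();
    rows.push([folder.getName(), folder.getId()]);
  }
  writeRows_(rows);
  props.deleteProperty('FOLDER_ITERATOR_TOKEN'); // finished; next run starts fresh
}

// Append a batch of [name, id] rows to the log sheet.
function writeRows_(rows) {
  if (rows.length == 0) return;
  var sheet = SpreadsheetApp.openById('SPREADSHEET_ID').getSheets()[0];
  sheet.getRange(sheet.getLastRow() + 1, 1, rows.length, 2).setValues(rows);
}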