I am getting the following error in my Sheets Add-on:
Service invoked too many times for one day: urlfetch
I'm aware of the limits here, but how can I tell whether I'm hitting the "URLFetch calls" limit of 100,000 or the "URLFetch data received" limit of 100MB? They are two very different issues: if I'm hitting the first one, I must be making requests unintentionally somewhere, because there's no way I'm intentionally making the call 100k times a day. It is possible I'm hitting the 100MB limit, but the way the error is phrased makes me think it's the first. Is there any way to know for sure which one I'm hitting?
I have run into that too. I only have 1,000 rows going out to a web service. The data changed neither in my sheet nor in the service, but at some point today most of my cells show #Error with this cause.
I feel like it's going out to re-fetch the results way too often. Is there not some caching that can be employed?
UPDATE (long overdue): adding a cache was exactly what was needed. I implemented a fetch(url) function that uses a cache and thereby avoids the duplicate calls.
function fetch(url) {
  // Cache responses so repeated calls for the same URL don't count against the UrlFetch quota.
  // Note: CacheService values are limited to ~100KB each, and 21600 seconds (6 hours) is the maximum expiration.
  var cache = CacheService.getScriptCache();
  var result = cache.get(url);
  if (!result) {
    var response = UrlFetchApp.fetch(url);
    result = response.getContentText();
    cache.put(url, result, 21600);
  }
  return result;
}
You currently cannot tell which one you're hitting.
Perhaps run a counter in your script.
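For example, one rough way to keep such a counter with PropertiesService (the property key naming here is just an illustration):
function countedFetch(url) {
  var props = PropertiesService.getScriptProperties();
  // One counter per calendar day, e.g. 'urlfetch_count_2024-01-31'.
  var key = 'urlfetch_count_' + new Date().toISOString().slice(0, 10);
  var count = Number(props.getProperty(key) || 0) + 1;
  props.setProperty(key, String(count));
  Logger.log('UrlFetch call #' + count + ' today');
  return UrlFetchApp.fetch(url);
}
Routing every fetch through a wrapper like this at least tells you whether your call count is anywhere near 100,000.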
The documentation on this is not very extensive. I have read it, but I can't picture the use cases for LockService in Apps Script or when to apply it. There are three different locks.
var lock = LockService.getScriptLock(); // BEGIN - acquire the lock here
try {
  lock.waitLock(30000); // wait up to 30 seconds for other executions of this code section to release the lock, then proceed
} catch (e) {
  console.log('Could not obtain lock after 30 seconds.');
  return HtmlService.createHtmlOutput("<b>Server busy, please try again after some time.</b>");
  // If this is server-side code called asynchronously, return an error code instead
  // and display the appropriate message on the client side:
  //return "Error: Server busy, try again later... Sorry :(";
}
// Then my stuff here
// finally, release the lock so queued executions can proceed
lock.releaseLock();
I am writing form data from my published web app. The web app is accessed with a username and password, which means it is only used by specific users. The form data is saved to a Google Sheet where a Google Form previously used to save responses.
The problem is that when two users submit form data, sometimes one user's data gets replaced by another's in the same row. To prevent that I wanted to implement getScriptLock(), a document lock, or a user lock, but one problem or another always seems to remain. Some locks prevent another user from submitting data (the web app reports the submission succeeded, but nothing is actually logged in the sheet). Very frustrating. Which of the LockService locks do you think would serve my purpose?
- prevents concurrent access
- ensures data inputs are executed on a first-come-first-served basis
- (similar to above) avoids data inputs getting mixed up
- allows setting a 'queue time'
Are you writing the responses to sheet by something like getRange(sheet.getLastRow(), 1, 1, cols).setValues(values)?
I think appendRow(values) ensures they are inserted one after another.
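For example, a minimal sketch of a web-app handler that combines a script lock with appendRow (this assumes the script is bound to the spreadsheet; the 'Responses' sheet name and form parameters are placeholders):
function doPost(e) {
  var lock = LockService.getScriptLock();
  lock.waitLock(30000); // serialize concurrent submissions
  try {
    var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Responses');
    // appendRow writes to the next free row in a single call, so two submissions
    // arriving at nearly the same time cannot land on the same row.
    sheet.appendRow([new Date(), e.parameter.name, e.parameter.answer]);
  } finally {
    lock.releaseLock();
  }
  return ContentService.createTextOutput('OK');
}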
I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the number I expect to see from getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers of requests for different endpoints within a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code on Google's docs. I have caching and backoff implemented.
Looking for any ideas or if anyone has experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the request is for semantic type detection, it will include sampleExtraction: true.
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
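A minimal sketch of short-circuiting that request inside getData, assuming the flag arrives under request.scriptParams as described for community connectors (getFields() being the usual helper that builds the connector's field definitions):
function getData(request) {
  var requestedFields = getFields().forIds(
      request.fields.map(function(field) { return field.name; }));

  // When Data Studio is only probing for semantic types, return a tiny
  // placeholder sample instead of hitting the API, to avoid burning quota.
  if (request.scriptParams && request.scriptParams.sampleExtraction) {
    return {
      schema: requestedFields.build(),
      rows: [{values: requestedFields.asArray().map(function() { return ''; })}]
    };
  }

  // ...normal fetch-and-cache path here...
}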
If the GDS report includes multiple widgets with different dimensions/metrics configuration then GDS might fire multiple getData calls for each of them.
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom connector retrieves data from third-party services via API calls, and that data is agnostic to the request.fields property sent by GDS, then those API calls are multiplied by N+1 (where N is the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's getData request (typically requesting more fields than the search filters) will be the only one allowed to query the API endpoint. Before starting to do so, it stores the cache key "cache_{hashOfReportParameters}_building" with the value true.
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve the API responses, paginating in a loop, and buffer the results.
Once finished, it deletes the cache key "cache_{hashOfReportParameters}_building" and caches the final merged results it buffered so far under "cache_{hashOfReportParameters}_final".
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a small delay for requests that look like search filters / widgets that are after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash over all of the moving parts of the report (the date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints).
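One hedged way to build such a hash is with Utilities.computeDigest; exactly which request properties you fold in depends on your connector's configParams:
function hashOfReportParameters(request) {
  // Serialize everything that can change the underlying data set.
  var parts = JSON.stringify({
    dateRange: request.dateRange,
    configParams: request.configParams
  });
  var digest = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, parts);
  // Convert the signed byte array into a hex string usable as a cache-key fragment.
  return digest.map(function(b) {
    var v = (b < 0 ? b + 256 : b).toString(16);
    return v.length === 1 ? '0' + v : v;
  }).join('');
}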
Now the best part, as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" -- and in case that fails, it's always a good idea to have a backup plan, which is to allow the call to traverse the API again. We have encountered roughly a 2% error rate retrieving data we cached.
With the cached result (or buffered API responses), you just transform your response as per the schema GDS needs (which differs between graphs and filters).
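As a small sketch of that final transform, assuming the cached payload is an array of flat objects keyed by field id:
function toGetDataRows(cachedRecords, requestedFields) {
  var fieldIds = requestedFields.asArray().map(function(f) { return f.getId(); });
  // Build one {values: [...]} row per record, in the order the fields were requested.
  return cachedRecords.map(function(record) {
    return {
      values: fieldIds.map(function(id) { return record[id]; })
    };
  });
}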
As you start implementing this, you'll notice yet another problem: the Apps Script cache is limited to a maximum of about 100KB per key. There is, however, no limit on the number of keys you can cache, and fortunately others have run into similar needs in the past and came up with a smart solution: splitting the one big chunk you need cached into multiple cache keys, and gluing them back together into one object when retrieving it.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
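As a rough sketch of the split/join idea (this is not the linked EnhancedCache implementation; the key naming and the 90KB chunk size are arbitrary choices):
var CHUNK_SIZE = 90 * 1024; // stay safely under the ~100KB per-key limit

function putLarge(cache, key, value, ttlSeconds) {
  var chunks = [];
  for (var i = 0; i < value.length; i += CHUNK_SIZE) {
    chunks.push(value.substring(i, i + CHUNK_SIZE));
  }
  // Store the chunk count so we know how many pieces to reassemble later.
  cache.put(key + '_count', String(chunks.length), ttlSeconds);
  chunks.forEach(function(chunk, idx) {
    cache.put(key + '_' + idx, chunk, ttlSeconds);
  });
}

function getLarge(cache, key) {
  var count = cache.get(key + '_count');
  if (!count) return null;
  var parts = [];
  for (var i = 0; i < Number(count); i++) {
    var piece = cache.get(key + '_' + i);
    if (piece === null) return null; // a chunk expired; treat the whole entry as a miss
    parts.push(piece);
  }
  return parts.join('');
}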
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general to avoid round trips and server load for no good reason if near-realtime is good enough for your needs.
In the last week or so we got a report of a user missing files in the file list in our app. We were a bit confused at first because they said they only had a couple of files matching our query string, but with a bit of work we were able to reproduce the issue by adding a large number of files to our Google Drive. Previously we had been assuming people would have fewer than 100 files and hadn't been doing paging, to avoid multiple files.list requests.
After switching to paging, we noticed that one of our test accounts was sending hundreds and hundreds of files.list requests, and most of the responses did not contain any files but did contain a nextPageToken. I'll update as soon as I can get a screenshot, but the client was sending enough requests to heat the computer up and drain the battery fairly quickly.
We also found that the choice of query, even though it matches the same files, can have a drastic effect on the number of requests needed to retrieve our full file list. For example, switching '=' to 'contains' in the query param significantly reduces the number of requests made, but we don't see any guarantee that this is a reasonable and generalizable solution.
Is this the intended behavior? Is there anything we can do to reduce the number of requests that we are sending?
We're using the following code, which is causing the issue, to retrieve files created by our app.
runLoad: function(pageToken)
{
    gapi.client.drive.files.list(
    {
        'maxResults': 999,
        'pageToken': pageToken,
        'q': "trashed=false and mimeType='" + mime + "'"
    }).execute(function (results)
    {
        this.filePageRequests++;
        if (results.error || !results.nextPageToken || this.filePageRequests >= MAX_FILE_PAGE_REQUESTS)
        {
            this.isLoading(false);
        }
        else
        {
            this.runLoad(results.nextPageToken);
        }
    }.bind(this));
}
It is the intended behaviour, though it probably shouldn't be.
It generally occurs when using the drive.file scope. What (I think) is happening is that the API layer is fetching all files, and then removing those that are outside of the current scope/query, and returning the remainder to your client app. In theory, a particular page of files could have no files in-scope, and so the returned array is empty.
As you've seen, it's a horribly inefficient way of doing it, but that seems to be the way it is. You simply have to keep following the next page link until it's null.
As to "Is there anything we can do to reduce the number of requests that we are sending?"
You're already setting max results to 999 which is the obvious step. Just be aware that I have seen this value trigger internal errors (timeouts?) which manifest themselves as 500 errors. You might want to sacrifice efficiency for reliability and stick to the default of 100 which seems to be better tested.
I don't know if the code you posted is your actual code, or just a simplified illustration, but you need to make sure you are dealing with 401 errors (auth expiry) and 500 errors (sometimes recoverable with a retry)
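For instance, a rough sketch of wrapping the list call with a retry; the retry count and delays are arbitrary, and how you refresh the token on a 401 depends on your sign-in flow:
function listWithRetry(params, attempt, callback)
{
    gapi.client.drive.files.list(params).execute(function (results)
    {
        if (results.error && results.error.code >= 500 && attempt < 5)
        {
            // Transient server error: back off exponentially and try again.
            setTimeout(function () { listWithRetry(params, attempt + 1, callback); },
                       Math.pow(2, attempt) * 1000);
        }
        else
        {
            // 401s (expired auth) would be handled here by reauthorizing before retrying;
            // for this sketch we simply hand the result (or error) back to the caller.
            callback(results);
        }
    });
}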
I have a UI where the same image URL could be requested by several UIImageViews at varying times. Obviously if a request from one of them has finished then returning the cached version works as expected. However, especially with slower networks, I'd like to be able to piggy-back requests for an image URL onto any currently running/waiting HTTP request for the same URL.
On an HTTP server this is called request combining, and I'd love to do the same in the client: combine the different requests for the same URL into a single request, and then call back separately to each of the callers. The requests for that URL don't happen to start at the same time.
What's the best way to accomplish this?
I think re-writing UIImageView+AFNetworking might be the easiest way:
check the af_sharedImageRequestOperationQueue to see if it has an operation with the same request
if I do already have an operation in the queue or running then add myself to some list of callbacks/blocks to be called on success/failure
if I don't have the operation, then create it as normal
in setCompletionBlockWithSuccess, call each of the blocks in turn.
Any simpler alternatives?
I encountered a similar problem and decided that your way was the most straightforward. One added bit of complexity is that these downloads require special credentials and so must go through their own operation queue. Here's the code from my UIImageView category to check whether a particular URL is inflight:
NSUInteger foundOperation = [[ConnectionManager sharedConnectionManager].operationQueue.operations indexOfObjectPassingTest:^BOOL(AFHTTPRequestOperation *obj, NSUInteger idx, BOOL *stop) {
    BOOL URLAlreadyInFlight = [obj.request.URL.absoluteString isEqualToString:URL.absoluteString];
    if (URLAlreadyInFlight) {
        NSBlockOperation *updateUIOperation = [NSBlockOperation blockOperationWithBlock:^{
            [[NSOperationQueue mainQueue] addOperationWithBlock:^{
                self.image = [[ImageCache sharedImageCache] cachedImageForURL:URL];
            }];
        }];
        // Makes updating the UI dependent on the completion of the matching operation.
        [updateUIOperation addDependency:obj];
    }
    return URLAlreadyInFlight;
}];
Were you able to come up with a better solution?
EDIT: Well, it looks like my method of updating the UI just can't work, as the operation's completion blocks are run asynchronously, so the operation finishes before the blocks are run. However, I was able to modify the image cache to be able to add callbacks for when certain URLs are cached, which seems to work correctly. So this method will properly detect when certain URLs are in flight and be able to take action with that knowledge.
I'm looking for examples of a pattern where a daemon script running within a Google Apps for Business domain can parse incoming email messages. Some messages will contain a call to yet another GAScript that could, for example, change the ACL setting of a specific document.
I'm assuming someone else has already implemented this pattern but not sure how I go about finding examples.
thx
You can find script examples in the Apps Script user guide and tutorials. You may also search for related discussions on the forum. But I don't think there's one that fits you exactly; all the code is out there for sure, but not in a single script.
It's possible that someone wrote such a script and never published it, since it's somewhat straightforward to do and everyone's usage is different. For instance, how do you plan on marking your emails (the ones you've already read, executed, etc.)? It may be nice to use a Gmail filter to help you out, putting the "command" emails under a label right away, and having the script just remove the label (and possibly set another one). Point is, it can differ a lot.
Also, I think it's easier if you can keep all the functions in the same script project, possibly just in different files, as calling different scripts is much more complicated.
Anyway, here's how I'd start it:
//set a time-driven trigger to run this function on the desired frequency
function monitorEmails() {
  var label = GmailApp.getUserLabelByName('command');
  var doneLabel = GmailApp.getUserLabelByName('executed');
  var cmds = label.getThreads();
  var max = Math.min(cmds.length, 5);
  for( var i = 0; i < max; ++i ) {
    var email = cmds[i].getMessages()[0];
    var functionName = email.getBody();
    //you may need to do extra parsing here, depending on your usage
    var ret = undefined;
    try {
      ret = this[functionName]();
    } catch(err) {
      ret = err;
    }
    //replying the function return value to the email
    //this may make sense or not
    if( ret !== undefined )
      email.reply(ret);
    cmds[i].removeLabel(label).addLabel(doneLabel);
  }
}
ps: I have not tested this code
You can create a Google App Engine app that will be triggered by an incoming email message sent to a special address for the app. The message is converted to an HTTP POST which your app receives.
More details here:
https://developers.google.com/appengine/docs/python/mail/receivingmail
I haven't tried this myself yet but will be doing so in the next few days.
There are two ways. First, you can use Google Pub/Sub and handle incoming notifications in your Apps Script endpoint. The second is to use the googleapis npm package inside your Apps Script code; an example here. Hope it helps.
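For the first option, a hedged sketch of what the Apps Script web-app endpoint might look like when it receives a Pub/Sub push message (this assumes the deployed web app URL is registered as the push endpoint; for Gmail notifications the decoded payload carries emailAddress and historyId):
function doPost(e) {
  // Pub/Sub push delivers a JSON body like {"message": {"data": "<base64>", ...}, "subscription": "..."}.
  var body = JSON.parse(e.postData.contents);
  var decoded = Utilities.newBlob(Utilities.base64Decode(body.message.data)).getDataAsString();
  var notification = JSON.parse(decoded);
  Logger.log('New Gmail history id: ' + notification.historyId);
  // ...look up what actually changed via the Gmail API from here...
  return ContentService.createTextOutput('OK');
}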
These are the steps:
made a project in the Google Cloud console (https://console.cloud.google.com/cloudpubsub/topicList?project=testmabs)
made a Pub/Sub topic
made a subscription pointing at the webhook URL
added that URL to the sites I own, I guess? I think I had to do DNS verification to confirm I own it, and the error when trying to add the subscription was too vague to figure out that that was what I had to do
added permission on the topic for "gmail-api-push@system.gserviceaccount.com" as publisher (I also added ....apps.googleusercontent.com and youtrackapiuser.caps@gmail.com, but I don't think I needed them)
created OAuth client info and downloaded it from the credentials section of the Google console (oauthtrash.json)