I'm getting inconsistent behaviour when sending POST requests to a Google Apps Script deployed as a web app.
I have a desktop app sending POST calls to a GAS web app. These calls can vary greatly in cadence, from one every several minutes to bursts of dozens per second.
In my tests I have found requests that are seemingly lost, and requests that don't progress through the web app's internal logic (as if script instances get cut off or interrupted), while others work flawlessly. There is no evident pattern.
However, while experimenting, I found that if I space out the calls, adding a pause between requests, everything normalizes.
Are there established, documented limits for this? Is introducing these artificial intervals between calls my only option? I have not found information about this on the GAS quotas page.
Any help and ideas would be appreciated.
Confirming in the answer: there is no evident or documented per-minute limit on the number of requests to a GAS web app.
The issue I'm experiencing is related to concurrency. Even when they come from the same source, fast-paced requests can produce concurrency issues when accessing storage services like Cache or Properties.
This should be handled using the Lock Service, for example:
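A minimal sketch of that pattern, assuming the web app's doPost does a read-modify-write against Script Properties; the 30-second wait and the 'count' property are arbitrary choices for illustration:

    // Serialize access to PropertiesService across concurrent doPost invocations.
    function doPost(e) {
      var lock = LockService.getScriptLock();
      try {
        lock.waitLock(30000); // wait up to 30 s for other instances to finish
      } catch (err) {
        // Could not obtain the lock in time; tell the client to retry later.
        return ContentService.createTextOutput('BUSY');
      }
      try {
        var props = PropertiesService.getScriptProperties();
        var count = Number(props.getProperty('count') || 0);
        props.setProperty('count', String(count + 1)); // read-modify-write is now safe
        return ContentService.createTextOutput('OK ' + (count + 1));
      } finally {
        lock.releaseLock(); // always release, even if the handler throws
      }
    }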
Related
We use a custom script to retrieve data from the Bookeo API with UrlFetchApp.fetch.
Everything went well, but today we got the following error: "Service invoked too many times for one day: urlfetch".
We are aware of the limitation of 20,000 calls/day as mentioned here: https://developers.google.com/apps-script/guides/services/quotas, but we don't think we come close to that (maybe 1,000-1,500/day max).
The portion of the code where the error happens is
var responseBooking = UrlFetchApp.fetch(urlBooking);
So I'm sure it's related to a quota issue.
The weird thing is that it works about 1 time out of every 5-6 tries.
My questions are:
Has Google changed its quota limits? (I didn't see any communication about it.)
Is there a way to see how many calls were made for each service?
Is there some sort of chat for technical support for Google Apps Script?
Answer(s):
Has Google changed its quota limits? (I didn't see any communication about it.)
No.
Is there a way to see how many calls were made for each service?
No.
Is there some sort of chat for technical support for Google Apps Script?
No.
More Information:
Aside from the 20,000 calls/day limit, there are also limits which restrict the number of calls in short periods of time.
The quota works based on a rolling average of service invocations. You have a quota of 20,000 per day, but if you exceed the rate of ~0.231 calls per second (20,000/86,400) for a sustained period of time, you can still trigger an error.
You can rectify this by waiting for a while so that your average rate of invocations goes down. I would also suggest adding some form of exponential backoff to your code to stop this from happening again in the future; a sketch follows.
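For illustration, a minimal Apps Script sketch of exponential backoff wrapped around UrlFetchApp.fetch; the retry count, base delay, and jitter values are arbitrary choices:

    // Retry UrlFetchApp.fetch with exponential backoff plus random jitter.
    function fetchWithBackoff(url, maxRetries) {
      var delayMs = 1000; // initial pause; doubles after each failure
      for (var attempt = 0; attempt <= maxRetries; attempt++) {
        try {
          return UrlFetchApp.fetch(url);
        } catch (err) {
          if (attempt === maxRetries) {
            throw err; // out of retries; surface the quota error
          }
          Utilities.sleep(delayMs + Math.round(Math.random() * 1000));
          delayMs *= 2; // back off harder each time
        }
      }
    }

    // Usage, matching the failing line from the question:
    var responseBooking = fetchWithBackoff(urlBooking, 5);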
References:
Quotas for Google Services | Apps Script | Google Developers
Exponential backoff - Wikipedia
I have been looking everywhere for a solution to this problem.
At my work, we are trying to integrate Maximo with another system via the other system's REST API (which returns JSON responses). I am able to make this integration work on a small scale; however, the API takes upwards of 5 seconds to respond per request. Currently, I have defined this system as a JSON Resource, and I copy daily "snapshots" of the non-persistent data to a persistent attribute using an automation script. The requests all run in sequence, which is slow for 5 assets in testing and will definitely not scale to thousands of calls a day.
Assume that the API of the external system cannot be modified in any way. Is there a way to query this API in a non-blocking fashion? I'd imagine that if I could send a request, then send the next, and so on without waiting for a reply before proceeding, this would solve the problem.
I looked into Invocation and Publishing Channels, and also Enterprise Services, and it seems like Enterprise Services along with JMS Queues might be what I need; however, the documentation says that these only support queuing incoming data... and I can't see how that solves my problem.
Any help? I am completely stuck on this.
Thank you!
I had to do something that sounds similar, once. I tried JSON Resources, but they didn't work for me. I ended up using the examples in Maximo 7.6 Scripting Features to do it. The first code sample in that document is a library script for making HTTP/S calls using out-of-the-Maximo-box libraries, and other examples in that document use IBM's JSONObject and JSONArray classes (also available out of the Maximo box) to parse responses.
To get things running concurrently / multithreaded, you could configure a cron task to call your automation script, configure multiple instances on various schedules to call the same script, and use the instance args or some other mechanism to prevent collisions. A sketch of the HTTP call itself is below.
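A minimal sketch of what such a library script's HTTP call might look like, assuming a JavaScript (Nashorn) automation script and the JVM's standard java.net classes; fetchJson and the endpoint argument are hypothetical, and the parse call assumes the IBM JSON classes that document mentions:

    // Hypothetical helper for an automation script: GET a JSON endpoint
    // using out-of-the-box JVM classes, then parse with IBM's JSON4J.
    function fetchJson(endpoint) {
      var conn = new java.net.URL(endpoint).openConnection();
      conn.setRequestMethod("GET");
      conn.setConnectTimeout(10000); // fail fast instead of hanging the cron instance
      conn.setReadTimeout(10000);
      conn.setRequestProperty("Accept", "application/json");

      var reader = new java.io.BufferedReader(
          new java.io.InputStreamReader(conn.getInputStream(), "UTF-8"));
      var body = "";
      var line;
      while ((line = reader.readLine()) !== null) {
        body += line;
      }
      reader.close();

      // JSONObject/JSONArray ship with Maximo, per the scripting guide.
      return com.ibm.json.java.JSONObject.parse(body);
    }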
I am encountering some very long response times from Exchange Online when calling it via the EWS Managed API 2.0 in C#. I suspect I am being throttled, but I cannot find anything that lets me prove this in the Admin portal for my O365 account. I have seen in some search results that PowerShell can show messages indicating "micro delays" have been applied, but I'm stuck in C#/EWS, so my question is: is there anything coming back in the responses to my EWS calls that can identify whether these micro delays have been applied? BTW, response times are very close to the 100-second timeout, which is killing my code.
Thx,
Paul
100 seconds isn't a micro delay; micro-delays are milliseconds (capped at 500 ms) and are aimed more at delaying a large volume of requests. (E.g. if an app is going to make 100 sequential requests, micro-delays would spread the load of those requests out over a greater time by penalizing the app more and more, which lowers the resource load on the server.) One request taking 100 seconds to fulfill probably has more to do with the request itself, e.g. overuse of search filters or an overcomplex search, which may also impact throttling. Or, if you're using batching, each request within the batch could have a micro-delay applied.
EWS doesn't return metrics of throttle usage (the newer REST API gives a little more information back in this regard). What you need is access to the EWS logs, which have that information. Each request the EWS Managed API makes has a client request ID to help correlate the request to a log entry; there is more detail at https://msdn.microsoft.com/en-us/library/office/dn720380(v=exchg.150).aspx
The Google Apps Script quotas page shows that we can make 20,000 URL Fetch calls per day. That looks quite ambiguous to me. Inside a script, you can use UrlFetchApp to make GET/POST requests to external URLs. But what if we are calling a deployed Google Apps Script from an external, non-script client (e.g. a web browser or mobile device)?
Does that imply we can only call the script (with a URL like abc/exec) 20,000 times a day from outside of Google Apps Script, where 20,000 is the total number of times the script is called across all client devices?
I don't see the relationship between fetching a URL from within a script and running a web app from a browser. I have never seen any mention of a limit on how many times a web app can be called, but there are probably limits on the total processing time a script can use. The quota dashboard specifies the maximum processing time used by triggers; it does not, however, specify a limit on processing time consumed by a human user.
If Google does not specify it, that means either they don't care or they don't want us to know... in both cases the result is the same: we have no way to get the info.
That said, I have never encountered any issue with an app being called too often, even though I know some of them are heavily used at times.
Was your question purely rhetorical, or did you experience a real situation?
In Salesforce I have created a future method that makes a Google Maps geocode callout. I know there is a limit of 2,500 requests per day but our instance has made no more than 100 requests today. Maybe the same number of requests yesterday. Yet the response code is 620 G_GEO_TOO_MANY_QUERIES.
Could it be that Google is seeing the IP address of the Salesforce instance and aggregating all of these requests as coming from one location, so that other companies sharing the address are causing my instance to hit this limit?
If not, can anyone suggest another cause?
This discussion suggests that it is the shared origin from Salesforce that is causing the problem.
This only makes sense, though, if you are doing the geocode lookup from the server and not from a client. If you did it from the client, it would use the client's IP, and you would be dealing with each local client's lookup limits (see below).
If you are doing it on the server, you might also want to check whether what you are doing is actually legal and not breaking Google's ToS. This discussion gives you some background on that, and also a solution if you need to do this on the server (buy a licence).
To be complete, G_GEO_TOO_MANY_QUERIES can mean one of two things:
You exceeded the daily limit (too many in a day)
You exceeded the speed limit (too many requests in too short a period)
Google has not specified an exact rate limit as far as I know, but they don't seem to like automated lookups. If you look at various libraries and plugins, you'll see that all of them force a delay between requests, and they often add a little randomness to the delay. You could experiment with this to see whether it makes any difference; a sketch of the pattern follows.
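A minimal JavaScript sketch of that delay-plus-jitter pattern; geocodeAddress, the base delay, and the jitter range are hypothetical placeholders:

    // Space out lookups with a fixed base delay plus random jitter.
    var BASE_DELAY_MS = 200; // fixed pause between requests
    var JITTER_MS = 150;     // random extra, so requests aren't machine-timed

    function sleep(ms) {
      return new Promise(function (resolve) { setTimeout(resolve, ms); });
    }

    async function geocodeAll(addresses) {
      var results = [];
      for (var i = 0; i < addresses.length; i++) {
        results.push(await geocodeAddress(addresses[i])); // one lookup at a time
        await sleep(BASE_DELAY_MS + Math.random() * JITTER_MS);
      }
      return results;
    }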
Did this end up working for you? We're hitting a server-side 620 and all configuration looks OK... we have a premier license and upgraded to 250k requests per day.