What are the specific quotas for Properties Service in Google Apps for an Add-On?

We have an Add-On that uses DocumentProperties and UserProperties for persistence. We've recently had some users complain about receiving the error: "Service invoked too many times for one day: properties". I spent some time optimizing our usage of the Properties Service when developing the Add-On to ensure this doesn't happen, but it still seems to happen in some situations.
According to https://developers.google.com/apps-script/guides/services/quotas, there is a quota of 50,000 "set" calls per day per user or document. There does not appear to be a limit on the number of reads.
Here's how our app utilizes properties:
We only set properties when values change. Even with very heavy usage, I still can't imagine more than 500 property "set" calls per day per user. When we set properties, we also write to the Cache Service with a 6-hour timeout.
We read the properties when the user is using the add-on, and also every 10 seconds while the add-on is idling. That comes out to 8640 reads per document per day. However, we use the Cache Service for reads, so very few of those reads should hit the Properties Service. I did discover a bug, present when the most recent bug report came in, where we don't re-write to the cache after it expires until an explicit change is made. By my calculations, that leads to 6 hours with 1 read, and then 18 hours of 6 reads/min * 60 min/hr, or 6480 reads. Factor in some very heavy usage and we're still at about 7000 reads per day per document. The user claims he had two copies of the document open, so 14000 reads. That's still far below the 50,000 quota, not to mention that the quota seems to be for setting properties, not reading them.
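To illustrate, here's a minimal sketch of the pattern (the STATE key and function names are illustrative, not our actual code), including the fix for the bug above: re-populating the cache on a miss.

```javascript
var CACHE_SECONDS = 21600; // 6 hours, the CacheService maximum

// Read path: serve from CacheService and fall back to PropertiesService,
// re-populating the cache on a miss so an expired entry doesn't keep
// hitting the Properties Service.
function loadState() {
  var cache = CacheService.getDocumentCache();
  var cached = cache.get('STATE');
  if (cached !== null) {
    return JSON.parse(cached);
  }
  var stored = PropertiesService.getDocumentProperties().getProperty('STATE');
  if (stored !== null) {
    cache.put('STATE', stored, CACHE_SECONDS); // the re-write that was missing
  }
  return stored === null ? null : JSON.parse(stored);
}

// Write path: one "set" call per actual change, mirrored into the cache.
function saveState(state) {
  var json = JSON.stringify(state);
  PropertiesService.getDocumentProperties().setProperty('STATE', json);
  CacheService.getDocumentCache().put('STATE', json, CACHE_SECONDS);
}
```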
Would anyone from Google be able to offer some insight into any limits on Properties reads, or give any advice on how to avoid this situation going forward?
Thanks much!

This 50,000 writes per day quota is shared across all of a user's scripts. This means that if a particular script makes heavy use of writes, it can negatively impact other scripts installed by the same user.
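If writes are the bottleneck, one mitigation is batching: Properties.setProperties() writes several keys in one call, or related values can be packed into a single JSON property so that one logical change costs one set call. A minimal sketch (the key names are illustrative; whether setProperties counts as a single quota call is not documented, so the JSON-packing variant is the conservative option):

```javascript
// Write several keys with one call instead of one setProperty per key.
function saveSettings(settings) {
  // settings is e.g. { theme: 'dark', fontSize: '12', locale: 'en' }
  PropertiesService.getUserProperties().setProperties(settings);
}

// Or pack related values into a single JSON property.
function savePacked(settings) {
  PropertiesService.getUserProperties()
      .setProperty('SETTINGS', JSON.stringify(settings));
}
```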

Related

Quotas for Google Services of GAS

I am currently developing with GAS, and the limit on the number of executions has become a problem. My account is a Google Workspace account.
What if the total number of executions by the trigger exceeds the limit?
And can't we increase that limit?
And what does "simultaneous executions" mean? Does it mean that the same script is being called at the same time? I couldn't understand it from the documentation.
The quotas are here. To my knowledge, you cannot go beyond these quotas using Apps Script.
Simultaneous executions are maybe best explained with an example. Imagine you have a function which takes 30 seconds to run. This function could run 30 times within the same time frame, but not 31. If you try to execute it a 31st time while the other 30 executions are still running / have not finished, then your 31st execution will throw an error. I have an API which handles document generation, and since this takes quite some time, I add a delay between calls so that some executions can finish first.
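To illustrate the throttling idea, a minimal Apps Script sketch; generateDocument is a hypothetical stand-in for the slow call:

```javascript
// Space out long-running calls so earlier executions can finish before
// new ones start, staying under the simultaneous-execution limit.
function processBatch(items) {
  items.forEach(function (item) {
    generateDocument(item); // hypothetical long-running call
    Utilities.sleep(2000);  // pause 2 seconds between calls
  });
}
```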

N2D/C2 quota for europe

I am a Google Cloud user who has built a cloud HPC system. The web application is very demanding in terms of hardware resources (it's quite common for me to have 4 or 5 N1 instances allocated with 96 cores each, for a total of more than 400 cores).
I found that in certain zones it's currently possible to use N2D and C2 instances; the former offer more CPU and the latter are dedicated to compute workloads. Unfortunately, I can't use either of these instance types because, for some reason, I have trouble increasing the N2D_CPUS and C2_CPUS quotas above the default value of 24 (which is nothing considering my needs).
Can anyone help me?
The only way to increase the quota is to submit a Quota Increase request. Once you submit the request, you should receive an email saying that the request has been submitted and that it is being reviewed.

How often does Google Cloud Preemptible instances preempt (roughly)?

I see that Google Cloud may terminate preemptible instances at any time, but have any unofficial, independent studies been reported, showing "preempt rates" (number of VMs preempted per hour), perhaps sampled in several different regions?
Given how little information I'm finding (as with similar questions), even anecdotes such as: "Looking back the past 6 months, I generally see 3% - 5% instances preempt per hour in uswest1" would be useful (I presume this can be monitored similarly to instance count metrics in AWS).
Clients occasionally want to shove their existing, non-fault-tolerant code into the cloud for "cheap" (despite best practices), and without an expected rate of failure they're often blindsided by the cheapness of preemptible instances. So I'd like to share some typical experiences of the GCP community, even if people's experiences vary, to help convey safe expectations.
Considering the requests for "unofficial, independent studies" and "even anecdotes", and the remark that "clients occasionally want to shove their existing, non-fault-tolerant code in the cloud for 'cheap'", it ought to be said that no architect or sysadmin in their right mind would place production workloads with a defined SLA into an execution environment without an SLA. Hence the topic is rather speculative.
For those who are keen, Google provides a preemption rate expectation:
"For reference, we've observed from historical data that the average preemption rate varies between 5% and 15% per day per project, on a seven-day average, occasionally spiking higher depending on time and zone. Keep in mind that this is an observation only: Preemptible instances have no guarantees or SLAs for preemption rates or preemption distributions."
Besides that, there is an interesting edutainment approach to the task of "how to make the inapplicable applicable".

Azure service bus queue design

Each user in my application has a Gmail account. The application needs to stay in sync with incoming emails: for each user, every 1 minute, the application should ask Gmail's servers if there is something new. 99% of the time nothing is new.
From what I know, Gmail doesn't provide webhooks.
In order to reduce the load on my servers, and especially on the DB, I want to use a Service Bus queue in the following manner.
queue properties:
receive mode: PEEK_AND_LOCK
lock time: 1 minute
max delivery count: X
flow:
a queue listener receives message A from the queue and processes it
if nothing is new, the listener does not delete the message from the queue
the message is delivered again after the lock time (1 minute) expires
Basically, instead of sending new messages to the queue again and again to be processed, we just leave them in the queue and rely on the redelivery mechanism (see the sketch below).
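To make the flow concrete, here is a minimal sketch of such a listener, assuming the @azure/service-bus JavaScript SDK; checkGmail and processNewMail are hypothetical helpers:

```javascript
const { ServiceBusClient } = require('@azure/service-bus');

async function pollOnce() {
  const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION);
  const receiver = sbClient.createReceiver('gmail-poll', { receiveMode: 'peekLock' });

  const messages = await receiver.receiveMessages(10);
  for (const message of messages) {
    const hasNewMail = await checkGmail(message.body.userId); // hypothetical helper
    if (hasNewMail) {
      await processNewMail(message.body.userId); // hypothetical helper
    }
    // Deliberately neither complete nor abandon the message: when the
    // one-minute lock expires, the broker makes it visible again, so the
    // redelivery mechanism acts as a per-user one-minute timer.
  }
  await sbClient.close();
}
```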
We are expecting many users in the near future (100,000-500,000), which means many messages in the queue at any given moment that need to be processed each minute.
Let's assume the messages are very small and total less than 5 GB altogether.
I am assuming that the redelivery mechanism is intended mainly for error handling, so I wonder whether our design is reasonable and whether the queue is apt for that task, or if there are any other suggestions for achieving our goal.
Thanks
You are trying to use the Service Bus queue as a scheduler, which it really is not. As a result, the SB queue will be your main bottleneck. With your architecture and expected number of users, you will quickly find yourself blocked by the limitations of the Service Bus queue. For example, you get a maximum of only 100 concurrent connections, which means only 100 listeners (assuming the long-polling method).
Another issue might be the max delivery count property of the SB queue. Even if you set it to int.MaxValue now, there is no guarantee that the Azure team will not limit it in the future.
An alternative solution might be to implement your own scheduler worker role (using existing popular tools, like Quartz.NET for example). Then you can experiment: you can host N jobs (which actually make Gmail API requests) in one worker role, each job running every X minute(s), and each job handling M users concurrently. The worker role can easily be scaled if the number of users increases. The numbers N and M depend on the worker role configuration and can be determined in practice. If applicable, just to save some resources, X can be made variable, for example based on the time of day (maybe you don't need to check emails as often at night). Hope it helps.
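As a rough illustration of the scheduler idea, shown here in JavaScript with node-cron standing in for Quartz.NET (fetchUserBatch and checkGmail are hypothetical helpers):

```javascript
const cron = require('node-cron');

const USERS_PER_JOB = 500; // "M": users handled per job run, tuned in practice

// One of "N" such jobs; each owns a slice of the user base and fires
// every X minutes (here X = 1).
cron.schedule('*/1 * * * *', async () => {
  const users = await fetchUserBatch(0, USERS_PER_JOB);     // hypothetical data access
  await Promise.all(users.map((user) => checkGmail(user))); // hypothetical Gmail poll
});
```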

Google Compute Engine Quotas silently changing?

I've been using GCE for several weeks with free credits and have repeatedly found that the quota values keep changing. The default CPU quota is 24 per region, but, depending on what other APIs I enable and in what order, that can silently change to 2 or 8. And today when I went to use it, the CPU quota had again changed from 24 to 2 even though I hadn't changed which APIs were enabled. Disabling then re-enabling Compute Engine put the quota back to 24, but that is not a very satisfactory solution. This seems like a bug to me. Anyone else have this problem and perhaps a solution? I know about the quota increase request form, but it says that if I request an increase then that is the end of my free credits.
The GCE free trial has some limitations, such as only 2 concurrent cores at a time, so if for some reason you were able to raise the quota to 24 cores, it's expected that it will go back to 2 cores.