Chrome storage API quota

I read that there is a quota limit when using the chrome.storage.sync API:
QUOTA_BYTES: 102,400
The maximum total amount (in bytes) of data that can be stored in sync storage, as measured by the JSON stringification of every value plus every key's length. Updates that would cause this limit to be exceeded fail immediately and set runtime.lastError.
The question is: will every user have 102,400 bytes, or is the quota global across all users?

Just for future reference: the quota is per user, not global across all users of the extension.
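
If it helps, here is a minimal sketch (TypeScript, assuming a Chrome extension context with the "storage" permission and the @types/chrome typings; the key name is a placeholder) of checking usage against the quota and detecting a failed write:

```typescript
// QUOTA_BYTES is exposed on the API itself (102,400 bytes).
console.log('quota:', chrome.storage.sync.QUOTA_BYTES);

// How much of this user's quota is already in use?
chrome.storage.sync.getBytesInUse(null, (bytesInUse) => {
  console.log(`using ${bytesInUse} of ${chrome.storage.sync.QUOTA_BYTES} bytes`);
});

// A write that would exceed the quota fails and sets runtime.lastError.
chrome.storage.sync.set({ someKey: 'someValue' }, () => {
  if (chrome.runtime.lastError) {
    console.warn('sync write failed:', chrome.runtime.lastError.message);
  }
});
```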

Instance Group : Exceeded limit 'QUOTA_FOR_INSTANCES' on resource 'us-instance-group-1'. Limit: 8.0

I am using GCE in Asia-Southeast (zone b) with 2 vCPUs and 10 GB of RAM for my website.
I am trying to test CDN and load balancing, so I was partway through creating an instance group in the US, but it threw an error no matter what I tried.
Instance Group : Exceeded limit 'QUOTA_FOR_INSTANCES' on resource 'us-instance-group-1'. Limit: 8.0
https://prnt.sc/tzyyrk
The documentation at https://cloud.google.com/compute/quotas led me to think it could be due to the zone I chose, so I tried multiple zones in different regions, and even single zones, but it didn't let me create one no matter how I selected them (I can't say I tried every combination, but almost all).
I chose an instance template with the lowest spec: n1-standard with CentOS 7 and a 20 GB standard disk.
Under this project, I have the following four service accounts associated with it:
Compute Engine default service account, Google APIs Service Agent, Compute Engine Service Agent, and myself as Owner.
I went to IAM & Admin > Quotas, and everything there is green/checked.
Is it because I am building this with the free $300 credit?
How do I check which zones are available so I can create the instance group there?
What could be the reason? What did I do wrong?
Thank you
It seems to be caused by the value you're setting for Maximum number of instances.
For example, when you create an instance group, you set the Minimum number of instances and the Maximum number of instances. Even if you set the minimum to 1 instance and leave the default value for the Maximum number of instances (which is 10), creation will always fail, because a pre-condition check requires that the Maximum number of instances never exceeds the quota for the region.
I reproduced this by setting Maximum number of instances to a value greater than my quota limit.
I suggest changing the value of Maximum number of instances to 3 and checking whether you can deploy the instance group.
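
For example, a sketch with the gcloud CLI (the template name and zone are placeholders) that keeps the maximum below an 8-instance quota:

```
# Create a small managed instance group (template and zone are placeholders)
gcloud compute instance-groups managed create us-instance-group-1 \
    --template=my-template --size=1 --zone=us-central1-a

# If autoscaling is enabled, cap the maximum below the quota (3 < 8)
gcloud compute instance-groups managed set-autoscaling us-instance-group-1 \
    --min-num-replicas=1 --max-num-replicas=3 --zone=us-central1-a
```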

What is the max number of users per room on ejabberd?

We are using ejabberd_16.01-0_amd64.deb and we want to set the maximum number of users per room to 10000. According to the docs (https://docs.ejabberd.im/admin/configuration/#modmuc):
max_users: Number: This option defines at the service level, the maximum number of users allowed per room. It can be lowered in each room configuration but cannot be increased in individual room configuration. The default value is 200.
On the other hand, https://github.com/processone/ejabberd/blob/master/src/mod_muc_room.erl#L58 says it could also be 5000.
We have tried 10000, but it didn't work (of course, values lower than 200 did work).
Can anyone please advise us what to do?
OK, we tried setting max users per room to 5000 and that worked.
It looks like I misunderstood what the doc (quoted above) says: the max users per room limit is set globally, and it can only be lowered per room; it cannot be increased over the global maximum.
Note: we would expect the server to log an error, or at least a warning, explaining why the value 10000 can't be set, but we couldn't find anything.
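
For reference, a sketch of the relevant ejabberd.yml fragment (assuming the YAML config format used by ejabberd 16.x; the host line is shown only for context):

```yaml
modules:
  mod_muc:
    host: "conference.@HOST@"
    max_users: 5000   # service-level cap; 10000 was silently rejected, 5000 worked
```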

Google Maps API key limits

Will my API key be blocked for the rest of the day when I reach the daily request limit? I don't want to buy a billing plan or get unexpected charges.
https://developers.google.com/maps/faq#usage_exceed
You won't be charged, but your API calls will return an error message.
It is my impression that as you approach your daily quota limit, the system starts giving errors. For example, during the last 4 hours of the day the errors gradually increase, with perhaps 100% errors during the last hour.
Most days the total number of requests has stayed within my quota. One day it went over: 2,645 versus my 2,500 daily quota.
I have tried to spread the error misery evenly around the world by limiting accesses to 3 per 100 seconds. This may be working, but I have not seen any errors shown on the graphs as being due to exceeding the 100-second quotas, which is surprising since I am often exceeding 0.04 requests per second (5-minute average).
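
For illustration, a minimal sketch of that kind of client-side throttle (TypeScript; sequential callers assumed, and the request itself is a placeholder, not Google's API):

```typescript
// Allow at most `limit` calls per `windowMs`, delaying the rest.
function makeThrottle(limit: number, windowMs: number) {
  const stamps: number[] = []; // start times of recent calls
  return async function throttled<T>(fn: () => Promise<T>): Promise<T> {
    const now = Date.now();
    // Forget calls that have left the rolling window.
    while (stamps.length > 0 && now - stamps[0] >= windowMs) stamps.shift();
    if (stamps.length >= limit) {
      // Wait until the oldest call in the window expires.
      const waitMs = windowMs - (now - stamps[0]);
      await new Promise<void>((resolve) => setTimeout(resolve, waitMs));
      stamps.shift();
    }
    stamps.push(Date.now());
    return fn();
  };
}

// e.g. no more than 3 geocoding requests per 100 seconds
const throttle = makeThrottle(3, 100_000);
// await throttle(() => fetch('https://maps.googleapis.com/maps/api/geocode/json?...'));
```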

Maximum number of users in MySQL

In order to limit per-user resources, I tend to create a new DB user on each registration. The thing is, there can be millions of users signing up, and I have no idea what the maximum number of database users is that can be created in MySQL.
How many users can be created on MySQL?
There is no hard-coded limit on the number of users in a MySQL database. User accounts are stored as rows in tables, consuming some variable amount of memory and disk space. Although you could in theory add an unlimited number of users, you will hit resource boundaries first: disk space and memory use will grow, and the processing time to add new users will take too long.
The exact limit depends on the configuration settings of the MySQL database.
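
As an illustration of that point (the account name and password are placeholders): MySQL accounts are just rows in the mysql.user grant table, so you can count them like any other rows.

```sql
-- One account per application user, as in the question (placeholder credentials)
CREATE USER 'app_user_42'@'localhost' IDENTIFIED BY 'some-password';

-- Accounts are ordinary rows in a system table; counting them shows the "limit"
-- is really just table growth, not a hard cap.
SELECT COUNT(*) FROM mysql.user;
```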

Couchbase Metadata overhead warning

I have a Couchbase (v 2.0.1) cluster with the following specifications:
5 Nodes
1 Bucket
16 GB RAM per node (80 GB total)
200 GB disk per node (1 TB total)
Currently I have 201,000,000 documents in this bucket and only 200 GB of disk in use.
I'm getting the following warning every minute for every node:
Metadata overhead warning. Over 51% of RAM allocated to bucket "my-bucket" on node "my-node" is taken up by keys and metadata.
The Couchbase documentation states the following:
Indicates that a bucket is now using more than 50% of the allocated RAM for storing metadata and keys, reducing the amount of RAM available for data values.
I understand that this could be a helpful indicator that I may need to add nodes to my cluster, but I think this should not be necessary given the amount of resources available to the bucket.
General Bucket Analytics: [screenshot]
How could I know what is generating so much metadata?
Is there any way to configure the tolerance percentage?
Every document has its metadata and key stored in memory. The metadata is 56 bytes. Add that to your average key size and multiply the result by your document count to arrive at the total bytes for metadata and keys in memory. So the RAM required is affected by the document count, your key size, and the number of copies (replica count + 1). You can find details at http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#memory-quota. The specific formula there is:
(documents_num) * (metadata_per_document + ID_size) * (no_of_copies)
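
Plugging in the numbers from the question as a rough sketch (the 40-byte average key and single replica are assumptions, not given above):

```typescript
// (documents_num) * (metadata_per_document + ID_size) * (no_of_copies)
const documentsNum = 201_000_000;  // from the question
const metadataPerDocument = 56;    // bytes, per the Couchbase docs
const idSize = 40;                 // bytes; assumed average key length
const noOfCopies = 1 + 1;          // active + 1 replica (assumed)

const metadataBytes = documentsNum * (metadataPerDocument + idSize) * noOfCopies;
console.log(`${(metadataBytes / 1024 ** 3).toFixed(1)} GiB for keys and metadata`);
// ~35.9 GiB, i.e. roughly 48% of the cluster's 80 GB RAM before any document
// values are stored -- and since the bucket's RAM quota is usually less than
// the full node RAM, slightly longer keys easily push it past the 50% warning.
```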
You can get details about the user data and metadata being used by your cluster from the console (or via the REST or command line interfaces). Look at the 'VBUCKET RESOURCES' section; the specific values of interest are 'user data in RAM' and 'metadata in RAM'. From your screenshot, you are definitely running up against your memory capacity. You are over the low water mark, so the system will eject inactive replica documents from memory. If you cross the high water mark, the system will start ejecting active documents from memory until it reaches the low water mark, and any requests for ejected documents will then require a background disk fetch. From your screenshot, you already have less than 5% of your active documents in memory.
It is possible to change the metadata warning threshold in the 2.5.1 release. There is a script you can use located at https://gist.github.com/fprimex/11368614, or you can simply take the curl command from the script and plug in the right values for your cluster. As far as I know, this will not work prior to 2.5.1.
Please keep in mind that while these alerts (max overhead and max disk usage) are now tunable, they are there for a reason. Hitting either of these alerts (especially in production) at the default values is a major cause for concern and should be dealt with as soon as possible by increasing RAM and/or disk on every node, or adding nodes. The values are tunable for special cases. Even in development/testing scenarios, your nodes' performance may be significantly impaired if you are hitting these alerts. For example, don't draw conclusions about benchmark results if your nodes' RAM is over 50% consumed by metadata.