Google Compute Engine: API rate limit exceeded

I set it up more than 5 days ago, and the error is this:
Error
API rate limit exceeded. Rate limit may take several minutes to update if Google Compute Engine has just been enabled, or if this is the first time you use Google Compute Engine.
Every menu displays the error, and I can't turn on the API.

I had the same issue. After setting up a new project and activating the Compute Engine API, I wasn't able to set anything up because of the API rate limit error.
The main issue, I guess, is the Courtesy Limit for the API, which I wasn't able to change from N/A to an actual value.
These steps helped me activate the API limit:
Turn OFF the Compute Engine API
Go to https://appengine.google.com -> Application Settings -> Cloud Integration -> Create Cloud Integration
Turn ON the Compute Engine API; it will automatically set the Courtesy Limit to 250,000 requests/day
After that, you will be able to use Compute Engine instances.
I can't guarantee 100% that this will work in your case, but it helped me.
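On current projects, a rough CLI equivalent of that off/on toggle is available through gcloud (the project ID below is a placeholder; the App Engine Cloud Integration step above reflects the older console):
# Turn the Compute Engine API off, then back on, for the project
gcloud services disable compute.googleapis.com --project=my-project
gcloud services enable compute.googleapis.com --project=my-project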

In my case, my trial had ended and I had to upgrade. There was a message at the top prompting me to upgrade. Click it, refresh, and start it up again.

Related

Trying to create a local copy of our Google Drive with rclone, bringing down all the files, but constantly hitting rate limits

As the title states, I'm trying to create a local copy of our entire Google Drive. We are currently using it as a file storage service, which is obviously not the best use case, but to migrate elsewhere I of course need to get all the files; the entire drive is around 800 GB.
I am using rclone, specifically the copy command, to copy the files FROM Google Drive TO the local server; however, I am constantly running into user rate limit errors.
I am also authenticating with a Google service account, which I believe should provide higher usage limits.
2021/11/22 07:39:50 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=, userRateLimitExceeded)
But I don't really understand, since according to my usage I'm not even coming close. I'm just wondering what exactly I can do to increase my rate limit (even if that means paying), or whether there is some other solution to this issue? Thanks
Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota.
The user rate limit is the number of requests your user is making per second; it's basically flood protection, and you are flooding the server. It is unclear exactly how Google calculates this beyond the stated 100 requests per user per second. If you are getting the error, there is really nothing you can do besides slow down your requests. It's also unclear from your question how you are running these requests.
If you could include the code, we could see how the requests are being performed. However, as you state, you are using something called rclone, so there is no way of knowing how that works.
Your only option is to slow your requests down, if you have any control over that through this third-party application. If not, you may want to contact the owner of the product for direction on how to fix it.
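That said, rclone does expose client-side throttling, so slowing down is possible without touching any code. A minimal sketch using rclone's documented flags (the remote name and destination path are placeholders):
# Cap rclone at ~5 API calls/second and reduce parallelism
rclone copy gdrive: /mnt/drive-backup \
  --tpslimit 5 \
  --tpslimit-burst 10 \
  --transfers 4 \
  --checkers 4 \
  --drive-pacer-min-sleep 200ms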

Which Compute Engine quotas need to be updated to run Dataflow with 50 workers (IN_USE_ADDRESSES, CPUS, CPUS_ALL_REGIONS, ...)?

We are using a private GCP account and we would like to process 30 GB of data and do NLP processing using spaCy. We wanted to use more workers and decided to start with a maximum number of workers of 80, as shown below. We submitted our job and ran into some of the GCP standard user quotas:
QUOTA_EXCEEDED: Quota 'IN_USE_ADDRESSES' exceeded. Limit: 8.0 in region XXX
So I decided to request a new quota of 50 for IN_USE_ADDRESSES in a region (it took me a few iterations to find a region that could accept this request). We submitted a new job and got new quota issues:
QUOTA_EXCEEDED: Quota 'CPUS' exceeded. Limit: 24.0 in region XXX
QUOTA_EXCEEDED: Quota 'CPUS_ALL_REGIONS' exceeded. Limit: 32.0 globally
My question is: if I want to use, for example, 50 workers in one region, which quotas do I need to change? The doc https://cloud.google.com/dataflow/quotas doesn't seem to be up to date, since it only says "To use 10 Compute Engine instances, you'll need 10 in-use IP addresses." As you can see above, that is not enough, and other quotas need to be changed as well. Is there some doc, blog, or other post where this is documented and explained? Just for one region there are 49 Compute Engine quotas that can be changed!
I would suggest that you start using private IPs instead of public IP addresses. This would help you in two ways:
You can bypass some of the IP-address-related quotas, as they apply to public IP addresses.
You can reduce costs significantly by eliminating network egress charges, as the VMs would not be communicating with each other over the public internet. You can find more details in this excellent article [1].
To start using private IPs, follow the instructions mentioned here [2]; a sketch of the relevant options is shown below.
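For illustration, launching a Beam Python pipeline on Dataflow with private IPs looks roughly like this (region, subnetwork, and script name are placeholders; the flags are the documented Beam/Dataflow pipeline options):
# Workers get no public IPs; the subnet needs Private Google Access
python my_pipeline.py \
  --runner=DataflowRunner \
  --region=us-central1 \
  --max_num_workers=50 \
  --no_use_public_ips \
  --subnetwork=regions/us-central1/subnetworks/my-subnet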
Apart from this, you would need to take care of the following quotas:
CPUs
You can increase the quota for a given region by setting the CPUs quota under Compute Engine appropriately.
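To size this: Dataflow batch workers default to n1-standard-1 (1 vCPU), so 50 workers need at least 50 regional CPUs; with n1-standard-4 workers it would be 200, and CPUS_ALL_REGIONS must cover the same total. One common way to check current usage from the CLI (the region is a placeholder):
# Per-region quotas: CPUS, IN_USE_ADDRESSES, DISKS_TOTAL_GB, ...
gcloud compute regions describe us-central1 \
  --flatten="quotas[]" \
  --format="table(quotas.metric,quotas.usage,quotas.limit)"
# Global quotas: CPUS_ALL_REGIONS, ...
gcloud compute project-info describe \
  --flatten="quotas[]" \
  --format="table(quotas.metric,quotas.usage,quotas.limit)"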
Persistent Disk
By default each VM needs 250 GB of storage, so for 100 instances it would be around 25 TB. Please check the disk size of the workers you are using and set the Persistent Disk quota under Compute Engine appropriately.
The default disk size is 25 GB for Cloud Dataflow Shuffle batch pipelines.
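Applied to this question's 50 workers, that is 50 × 250 GB = 12,500 GB of regional Persistent Disk quota with the default disk, or 50 × 25 GB = 1,250 GB if the job uses Dataflow Shuffle (the Beam --disk_size_gb option can also lower the per-worker size).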
Managed Instance Groups
You would need to make sure that you have enough quota in the region, as Dataflow needs the following:
One Instance Group per Cloud Dataflow job
One Managed Instance Group per Cloud Dataflow job
One Instance Template per Cloud Dataflow job
Once you review these quotas, you should be all set to run the job.
1 - https://medium.com/@harshithdwivedi/how-disabling-external-ips-helped-us-cut-down-over-80-of-our-cloud-dataflow-costs-259d25aebe74
2 - https://cloud.google.com/dataflow/docs/guides/specifying-networks

What is running on my Google Compute Engine?

There is a lot of activity on my Google Compute Engine API. It's less than 1 request per second, which probably keeps me in the free tier, but how do I figure out what is running and whether I should stop it?
I have some Pub/Sub topics and a Cloud Function that copies data into a Datastore database. But even if I am not publishing any data (for days), I still see activity on the Compute Engine API. Can I disable it, or will that stop my Cloud Functions?
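One way to see who is actually calling the Compute Engine API is to query the audit logs; a sketch, assuming Admin Activity audit logs are available (they are on by default) and using standard gcloud commands:
# List recent Compute Engine API calls with their caller and method
gcloud logging read \
  'protoPayload.serviceName="compute.googleapis.com"' \
  --limit=20 \
  --format="table(timestamp,protoPayload.authenticationInfo.principalEmail,protoPayload.methodName)"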

Why are instances constantly created and deleted in my project(s)?

For some reason I see under "Operations" in my "Compute Engine" the following:
I would like to know/understand why this is happening. What are these gae-default-* VMs (assuming they are actually VMs)? What are they actually doing?
If you know a lot about GAE and Compute Engine, please consider taking a look at this question as well: "Deploying a GWT application to Google Compute Engine - What is happening here?"
The CPU is being utilized as well, even though there shouldn't be anything running:
If I manually delete those VMs, they simply reappear.
GAE stands for Google App Engine. It looks like you have some App Engine services configured. If you use the flexible environment, App Engine manages GCE instances on your behalf. You should be able to find the running services in the web console.
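A quick way to confirm this from the CLI (the service and version names are placeholders; these are standard gcloud commands):
# List App Engine services and versions, then their backing instances
gcloud app services list
gcloud app versions list
gcloud app instances list --service=default --version=VERSION_ID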

Concurrent migrations of Google Compute Engine instances

The Google Compute Engine guide says that Google may migrate a VM in order to do maintenance:
https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options
By default, standard instances are set to live migrate, where Google Compute Engine automatically migrates your instance away from an infrastructure maintenance event, and your instance remains running during the migration. Your instance might experience a short period of decreased performance, although generally most instances should not notice any difference.
There is a disruption during migration.
Is it possible that Google decides to migrate all instances within a zone at the same time? Is there a maximum to a number of concurrent migrations?
Q: Is there a disruption during migration?
A: Yes, there is a small period of time when the instance is not running on either the old host or the new one. Here [1] you can see how the process works.
Q: Is it possible that Google decides to migrate all instances within a zone at the same time?
A: It is very unlikely that this scenario would happen, as it would imply that all of your project's Google Compute Engine instances are on the same physical host.
Q: Is there a maximum to a number of concurrent migrations?
A: I don't know the answer to that question, but I have forwarded it to the appropriate team, so maybe they can answer it.
You can find more about the live migration procedure here [2].
[1] https://cloud.google.com/compute/docs/instances/live-migration#how_does_the_live_migration_process_work
[2] https://cloud.google.com/compute/docs/instances/live-migration
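If the brief disruption matters for a particular workload, the maintenance behavior can be set per instance with the documented scheduling flags (instance and zone names are placeholders):
# Keep the default live-migration behavior explicitly
gcloud compute instances set-scheduling my-instance \
  --zone=us-central1-a --maintenance-policy=MIGRATE
# Or stop/restart instead of live-migrating
gcloud compute instances set-scheduling my-instance \
  --zone=us-central1-a --maintenance-policy=TERMINATE --restart-on-failure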