Just noticed I have thousands of anonymous requests hitting all of the Compute Engine API list endpoints. I have no instances running and I'm only using Firebase and Cloud Build, Source, and Registry. Please see the attached screenshot of the API metrics report.
Any reason for this?
[Screenshot: Compute Engine API metrics]
On the backend there are certain API calls needed to make sure that your project is healthy; these "Anonymous" requests come from an account used by the backend service to perform health checks.
Anonymous API calls (these could be just Compute Engine "list" calls) don't imply that you have enabled anything on your side. A lot of different sections in the Console make calls to the Compute Engine API, and there's no easy way to figure out which section made them, but they are expected.
These kinds of "Anonymous" Compute Engine API calls are part of the internal monitoring tools needed to make sure that your project is healthy, and they are triggered at random times. These metrics may disappear and come back throughout the life of the project.
Related
I want to know how much each one of my Cloud Functions is costing (a breakdown of the total Google Cloud Functions cost shown in my Billing Console). I know I could probably estimate it by looking at the metrics and configuration of each function, but it would be a hassle to do that one by one (I have a lot of functions).
I couldn't find any way of doing it using the Billing Console alone; it only shows the total cost.
If there is a tool/script for this, that would also be appreciated.
You need to add labels to each of your functions (more about labels) and export your billing data to BigQuery.
Then you will be able to find the cost of each Cloud Functions label in BigQuery. Create a dashboard to view the summary, with Data Studio for example.
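For example, once the export is flowing, a query along these lines aggregates cost per label value. A minimal sketch using the BigQuery Python client; the export table name and the label key "function" are placeholders you'd replace with your own:

```python
# Sums Cloud Functions cost per label value from the billing export.
# Table name and label key below are hypothetical placeholders.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()
query = """
SELECT l.value AS function_name, SUM(cost) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,
     UNNEST(labels) AS l
WHERE service.description = 'Cloud Functions'
  AND l.key = 'function'
GROUP BY function_name
ORDER BY total_cost DESC
"""
for row in client.query(query).result():
    print(f"{row.function_name}: ${row.total_cost:.2f}")
```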
I am unaware of any tool in Billing that would help with what you are after, and I don't think Google's infrastructure is set up for it.
The Cloud Function Pricing document details how pricing is broken down.
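If a rough estimate is acceptable, that breakdown can be applied by hand per function. A minimal sketch with entirely hypothetical usage numbers; the rates and memory/CPU tiers are taken from the pricing page and should be checked against current values:

```python
# Rough per-function cost estimate from its metrics. All usage numbers
# are hypothetical; verify the rates against the current pricing page.
INVOCATIONS = 5_000_000           # from the function's metrics
AVG_RUNTIME_S = 0.3               # average execution time, seconds
MEMORY_GB = 256 / 1024            # 256 MB tier
CPU_GHZ = 400 / 1000              # CPU bundled with the 256 MB tier

PRICE_PER_M_INVOCATIONS = 0.40    # USD per million invocations
PRICE_PER_GB_SECOND = 0.0000025   # USD
PRICE_PER_GHZ_SECOND = 0.0000100  # USD

compute_seconds = INVOCATIONS * AVG_RUNTIME_S
cost = (INVOCATIONS / 1e6 * PRICE_PER_M_INVOCATIONS
        + compute_seconds * MEMORY_GB * PRICE_PER_GB_SECOND
        + compute_seconds * CPU_GHZ * PRICE_PER_GHZ_SECOND)
print(f"Estimated cost: ${cost:.2f}")  # excludes free tier and egress
```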
But understand that your Cloud Functions are actually deployed inside a single (Kubernetes) container and run inside GCP. The GCP infrastructure is able to monitor that container's CPU use and to meter individual calls to your Cloud Functions, but I strongly doubt that it monitors the CPU use of each and every invocation of a function (consider how complex that would be in a project with high load/concurrency; how would that even work?).
Your question is specific to one particular approach to solving a problem. I suggest you explain what the underlying problem is, to see if there might be a different solution than "Google Billing breaking up Cloud Function costs".
We have a crawling engine that we are trialling on Google Kubernetes Engine.
It appears that we are severely network bound when it comes to making requests outside the Google network.
We have been in touch with an architect at Google, who thought that perhaps there was some rate limiting being applied inside the Google data centre. He mentioned that I should raise a support ticket with Google to investigate. Raising a ticket involves subscribing to a support plan (which I am not ready to do until the network issues are addressed) [a bit of a catch-22].
Looking at the network documentation (https://cloud.google.com/network-tiers/?hl=en_US), it seems that rates might be severely limited. I'm not sure that I'm reading this right, but are we saying a 6 Mbps network?
I'm reaching out to the community / Google to see if what we are seeing is expected, whether there is any rate limiting, and what options there are to increase raw throughput.
You can raise a ticket with Google using the public issue tracker free of charge. In this case, since it's possibly an issue on the Cloud side of things, raising a ticket in this manner will get a Google engineer looking into it.
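Before filing, it may help to attach a concrete measurement. A minimal sketch that estimates raw egress throughput from inside a pod, assuming you host a reasonably large test file somewhere outside Google's network (the URL is a placeholder):

```python
# Crude egress throughput check: time a download of a known-size file
# hosted outside Google's network. URL below is a placeholder.
import time
import urllib.request

URL = "https://example.com/100MB.bin"  # hypothetical external test file

start = time.time()
data = urllib.request.urlopen(URL).read()
elapsed = time.time() - start
mbps = len(data) * 8 / elapsed / 1e6
print(f"Downloaded {len(data)} bytes in {elapsed:.1f}s -> {mbps:.1f} Mbit/s")
```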
According to Kubernetes documentation,
If you are using GCE, you can configure your cluster so that the number of nodes will be automatically scaled based on:
CPU and memory utilization.
Amount of CPU and memory requested by the pods (called also reservation).
Is this actually true?
I am running mainly Jobs on my cluster, and would like to spin up new instances to service them on demand. CPU usage doesn't work well as a scaling metric for this workload.
From Google's GKE documentation, however, this only appears to be possible by using Cloud Monitoring metrics, relying on a separate service that you then have to customize. This seems like a perplexing gap in basic functionality that Kubernetes itself claims to support.
Is there any simpler way to achieve the very simple goal of having the GCE instance group autoscale based on the CPU requirements that I'm quite explicitly specifying in my GKE Jobs?
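For concreteness, the requests I mean are the ones on the Job's pod template. A minimal sketch using the Kubernetes Python client; the image name and resource values are hypothetical:

```python
# Creates a Job whose pod explicitly requests CPU and memory; these are
# the "reservations" I'd like the node pool to scale on.
from kubernetes import client, config

config.load_kube_config()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="crawl-job"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="worker",
                        image="gcr.io/my-project/worker:latest",  # placeholder
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "500m", "memory": "512Mi"},
                        ),
                    )
                ],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```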
The disclaimer at the bottom of that section explains why it won't work by default in GKE:
Note that autoscaling will work properly only if node metrics are accessible in Google Cloud Monitoring. To make the metrics accessible, you need to create your cluster with KUBE_ENABLE_CLUSTER_MONITORING equal to google or googleinfluxdb (googleinfluxdb is the default value). Please also make sure that you have Google Cloud Monitoring API enabled in Google Developer Console.
You might be able to get it working by standing up a heapster instance in your cluster configured with --sink=gcm (like this), but I think it was more of an older proof of concept than a well-maintained, production-grade configuration.
The community is working hard on a better, more-fully-supported version of node autoscaling in the upcoming 1.3 release.
I've been struggling with a GCE issue for a while and I would like to ask for some help. In the Developers Console I see a large number of API requests that I don't know the origin of. I'm pretty sure that I'm not running any services/jobs that could burn through the API quota. I see many errors as well. All my VM instances and other resources are working fine, but the issue concerns me. I've linked a few screenshots from the console showing what's happening. I would really appreciate any help!
Thanks!
https://lh4.googleusercontent.com/-7_HaZLZxvF0/VC14TMVCKoI/AAAAAAAAE6Q/0b8NvjxttMQ/s1600/01.png
https://lh5.googleusercontent.com/-TdXJu2VQ7qA/VC14mcy2AOI/AAAAAAAAE6g/O8VPcoRJpfc/s1600/03.png
It seems like you're using the Google Compute Engine API. When using gcloud compute commands or the Google Compute Engine console tool, you're making requests to the API. Also check if you have an app that uses a service account to make requests to GCE. You can visit this link for more information.
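As an illustration, even a small one-off script like the following counts as Compute Engine API traffic in those graphs. A minimal sketch using the Google API Python client; the project and zone are placeholders:

```python
# A single "list instances" call like this is one Compute Engine API
# request; scripts or cron jobs doing this in a loop add up quickly.
from googleapiclient import discovery  # pip install google-api-python-client

compute = discovery.build('compute', 'v1')
response = compute.instances().list(project='my-project',
                                    zone='us-central1-a').execute()
for instance in response.get('items', []):
    print(instance['name'])
```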
I'm developing an HTML5 multiplayer game. Google has made a couple of these lately, but hasn't released any information on how they were built.
I want the connection between the clients and the server to be sockets; not the old long polling hack.
The storage should be NoSQL / Google Datastore.
The framework should be in Python or JS.
Now, I can't use websockets with Google App Engine, which means I have to use Google Compute Engine (GCE). How much of the service should I run on Compute Engine: 100%, or only the sockets, with the rest of the backend on App Engine? The latter seems like a good way to do it, but GCE is in Europe and App Engine doesn't support this location yet, which means GCE would have to talk back and forth across the Atlantic.
I could, on the other hand, develop the whole solution on GCE, but what storage and developer library should I use? I could use the new Google Cloud Datastore, but if I understand it correctly, it's a low-level API for talking to the Datastore. I like how NDB is high level, with models, and takes care of caching. And for the solution, should I use Node.js, Django, or something else?
Running your web frontends on App Engine while managing the websocket connections on Compute Engine is similar to what Google did for recent Chrome web experiments (see the end of this blog post):
Check out the amazing World Wide Maze Chrome Experiment, developed by
the Chrome team in Japan. This game converts any web site of your
choice into an interactive, three dimensional maze, navigated remotely
via your smartphone. Compute Engine virtual machines run Node.js to
manage the game state and synchronization with the mobile device,
while Google App Engine hosts the game’s web UI. This application
provides an excellent example of the new kinds of rich, high
performance back end services enabled by Google Cloud Platform.
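The experiment above used Node.js for the socket tier, but the same pattern works in Python on a Compute Engine instance. A minimal sketch using the third-party websockets package (the message format and port are assumptions, and a recent version of the package is assumed):

```python
# Relays game-state updates between connected clients over websockets.
import asyncio
import json

import websockets  # pip install websockets (recent versions)

CLIENTS = set()

async def handler(ws):
    # Register the new player connection.
    CLIENTS.add(ws)
    try:
        async for message in ws:
            state = json.loads(message)  # hypothetical game-state payload
            # Relay each update to every other connected client.
            for other in list(CLIENTS):
                if other is not ws:
                    await other.send(json.dumps(state))
    finally:
        CLIENTS.discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```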
You should also be able to create App Engine applications in Europe after filling in the following form or signing up for a premier account.
Google Cloud Datastore allows you to share your data between App Engine (using NDB if you use Python) and Compute Engine (using the low-level API).
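For instance, a model defined with NDB on App Engine is an ordinary Datastore kind underneath, so entities written like this are also reachable from Compute Engine through the Cloud Datastore API. A minimal sketch with a hypothetical PlayerScore kind:

```python
# High-level NDB model on App Engine (Python); the kind and its fields
# are hypothetical examples for a game leaderboard.
from google.appengine.ext import ndb

class PlayerScore(ndb.Model):
    player = ndb.StringProperty(required=True)
    score = ndb.IntegerProperty(default=0)
    updated = ndb.DateTimeProperty(auto_now=True)

# NDB transparently caches reads and writes for you.
PlayerScore(player="alice", score=120).put()
top_ten = PlayerScore.query().order(-PlayerScore.score).fetch(10)
```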
You can follow this issue about NDB support for Google Cloud Datastore.