GKE network-bound Kubernetes nodes? - google-compute-engine

We have a crawling engine that we are trialling on Google Kubernetes Engine.
It appears that we are severely network bound when it comes to making requests outside the Google network.
We have been in touch with an architect at Google, who thought that perhaps there was some rate limiting being applied inside the Google data centre. He mentioned that I should raise a support ticket with Google to investigate. Raising a ticket involves subscribing to a support plan (which I am not ready to do until the network issues are addressed) [a bit of a catch-22].
Looking at the network documentation: https://cloud.google.com/network-tiers/?hl=en_US it seems that rates might be severely limited. I'm not sure that I'm reading this right, but are we really looking at a 6 Mbps network?
I'm reaching out to the community / Google to see if what we are seeing is expected, whether there is any rate limiting, and what options there are to increase raw throughput.

You can raise a ticket with Google using the public issue tracker free of charge. In this case, since it's possibly an issue on the Cloud side of things, raising a ticket in this manner will get a Google Engineer looking into this.
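Before filing anything, it can also help to quantify the egress throughput you are actually seeing from inside the cluster. The sketch below is one rough way to do that; the test URL is a placeholder for a large file hosted outside Google's network (for example on your own origin), not anything specific to GKE. Run it from a pod on the affected node pool and compare the number against what you expect.

```python
# Rough egress-throughput probe to run from inside a GKE pod.
# TEST_URL is a placeholder; point it at a large file hosted outside
# Google's network to measure external egress bandwidth.
import time
import urllib.request

TEST_URL = "https://example.com/large-test-file.bin"  # placeholder

def measure_throughput(url: str) -> float:
    """Download the file and return the observed throughput in Mbps."""
    start = time.monotonic()
    total_bytes = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(1024 * 1024)  # read 1 MiB at a time
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    print(f"Observed throughput: {measure_throughput(TEST_URL):.1f} Mbps")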

Related

Google Compute API Anonymous Requests

Just noticed I have thousands of anonymous requests hitting all of the Compute Engine API list endpoints. I have no instances running and I'm only using Firebase and Cloud Build, Source, and Registry. Please see the attached screenshot of the API metrics report.
Any reason for this?
[Screenshot: Compute Engine API metrics]
On the backend, there are certain API calls needed to make sure that your project is healthy; these "Anonymous" requests represent an account used by the backend service making health checks.
Anonymous API calls (these could be just Compute Engine "list" calls) don't imply that you have enabled anything on your side. A lot of different sections in the Console make calls to the Compute Engine API and there's no easy way to figure out which section made the calls, but they are expected.
These kinds of "Anonymous" Compute Engine API calls are part of the internal monitoring tools needed to make sure that your project is healthy and are triggered at random. These metrics might eventually disappear and come back throughout the project's life.
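If you want to see the calls themselves rather than rely on the metrics chart, one option is to query the Cloud Audit Logs for Compute Engine "list" calls. The sketch below assumes the google-cloud-logging client library and that audit logging is enabled for the Compute Engine API in your project (Data Access logs are off by default), so it may return nothing depending on your configuration.

```python
# Sketch: list recent audit-log entries for Compute Engine "list" calls.
# Requires: pip install google-cloud-logging, application-default credentials,
# and audit logging enabled for the Compute Engine API.
from google.cloud import logging

client = logging.Client()

log_filter = (
    'protoPayload.serviceName="compute.googleapis.com" '
    'AND protoPayload.methodName:"list"'
)

count = 0
for entry in client.list_entries(filter_=log_filter):
    # Audit entries carry a dict payload with the method and caller identity.
    payload = entry.payload if isinstance(entry.payload, dict) else {}
    print(entry.timestamp,
          payload.get("methodName"),
          payload.get("authenticationInfo", {}).get("principalEmail"))
    count += 1
    if count >= 20:  # only inspect a handful of recent entries
        break
```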

SQL Injection from Compute Engine

We have a web application that occasionally receives web requests, from Google virtual servers (Compute Engine), that we detect as attempts to inject SQL code.
I was asked to find a way to identify who is responsible for those machines, so that we can take the corresponding legal action on our part, or at least confirm that Google shuts down those servers.
What I need is a way to communicate with Google, by email or chat, but I haven't found any information about how to do so.
EDIT 1:
I have tried to contact Google to explain what information I am looking for, but the only contact available in my case is the billing department, which could not confirm that they would give me that information if I bought a technical assistance package. In any case, I understand that this package is for reviewing requirements of applications that you own, whereas I am looking for legal information.
What was recommended to me was to submit the corresponding report at
https://support.google.com/code/contact/cloud_platform_report?hl=en
but I have not received a response for weeks.
I am disappointed in Google, especially because of the importance of computer security.
I will keep searching for information.
You can find all the information about technical support, phone support and chat support in your Google Cloud console. Also, this documentation shows the different support options available based on your support role or package.

I failed to start my VM instance (through the web browser); it is giving a resource unavailability error

I failed to start my instance (through the web browser); it gave me the error:
"The zone 'projects/XXXXX/zones/us-central1-f' does not have enough resources available to fulfill the request. Try a different zone, or try again later."
Can anyone suggest a resolution?
The shared error message means that you're experiencing a temporary resource stock-out issue in that particular zone. I would like to point you to the post made by "Paul Nash" on 4/18/17, which thoroughly explains the resource stock-out issue on Google Cloud Platform (GCP).
As a workaround, I would recommend trying a different zone, or trying again later if you need resources in that same zone, as these issues are expected to be transient.
We also recommend deploying and balancing your workload across multiple zones or regions to reduce the likelihood of an outage. For more details, please review the documentation, which outlines how to build resilient and scalable architectures on Google Cloud Platform.
Again, we want to offer our sincerest apologies. We are working hard to resolve this and make this an exceptionally rare event.
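If your workload is flexible about where it runs, one way to apply the "try a different zone" advice is to attempt the instance creation across a list of candidate zones and stop at the first one that has capacity. The sketch below shells out to the gcloud CLI; the instance name, machine type, and zone list are placeholders, not anything tied to this error.

```python
# Sketch: try creating an instance in several zones until one succeeds.
# Assumes the gcloud CLI is installed and authenticated; values are placeholders.
import subprocess

INSTANCE_NAME = "crawler-1"            # placeholder
MACHINE_TYPE = "e2-standard-4"         # placeholder
CANDIDATE_ZONES = ["us-central1-f", "us-central1-a", "us-central1-b"]

def create_in_first_available_zone():
    """Return the zone where creation succeeded, or None if all failed."""
    for zone in CANDIDATE_ZONES:
        result = subprocess.run(
            ["gcloud", "compute", "instances", "create", INSTANCE_NAME,
             "--zone", zone, "--machine-type", MACHINE_TYPE],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            print(f"Created {INSTANCE_NAME} in {zone}")
            return zone
        print(f"Creation failed in {zone}; trying the next zone.")
    return None

if __name__ == "__main__":
    if create_in_first_available_zone() is None:
        print("No candidate zone had capacity; try again later.")
```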

Load testing an application that uses Google Maps

Our client has implemented Google Maps in their applications and we are working with them on a large-scale load test. Our concern is that Google may interpret this test as a denial-of-service attack and shut out the application. With this in mind, I have three questions:
Is this an issue? Meaning, is Google likely to lock out our application during a test that might have 50,000 simultaneous users?
If this is an issue, is there anyone we can chat with to get "pre-approval" of the apps during the testing period to make sure this doesn't happen?
Alternatively, does Google offer a version of their API for testing purposes? (I could not find any information in the documentation)
Please note that we are also exploring other solutions (excluding the calls from the app, stubbing out the API, etc).
Thanks in advance for any help!
Running the load test on the page that implements Google Maps may result in a bill, or in having Maps turned off, if you reach the daily limit of requests.
https://developers.google.com/maps/pricing-and-plans/
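On the "excluding the calls from the app" idea mentioned in the question, one pattern is to gate the Maps loader behind a server-side flag so that load-test traffic never requests the Google Maps JavaScript API at all. A minimal sketch, assuming a Flask app and a hypothetical LOAD_TEST_MODE environment variable (neither is part of Google's API):

```python
# Sketch: keep the Google Maps loader out of pages served to load-test traffic.
# LOAD_TEST_MODE and the page template are hypothetical, not a Google API.
import os
from flask import Flask, render_template_string

app = Flask(__name__)
LOAD_TEST_MODE = os.environ.get("LOAD_TEST_MODE") == "1"

PAGE = """
<html><body>
  <div id="map"></div>
  {% if include_maps %}
    <script async
      src="https://maps.googleapis.com/maps/api/js?key={{ api_key }}&callback=initMap">
    </script>
  {% else %}
    <script>/* Maps stubbed out during load testing */</script>
  {% endif %}
</body></html>
"""

@app.route("/")
def index():
    # When the flag is set, the page renders without the Maps script tag,
    # so no requests ever reach Google's servers during the test.
    return render_template_string(
        PAGE,
        include_maps=not LOAD_TEST_MODE,
        api_key=os.environ.get("MAPS_API_KEY", ""),
    )
```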

How do the quotas of the Google Drive SDK work?

I am starting to develop a Windows-like Google Drive client for Linux.
I have some problems that I am still solving, but one non-technical question is worrying me.
The Drive SDK has a request limit. I want to open my app up to everyone like other options do (for example gdrive), but the request limit will prevent general availability.
I would need to put in a personal ID, but I suppose that is not the way to publish the app.
How do other options solve this problem?
Google Drive apps have a "courtesy limit" of 10 million requests per day, I believe.
I cannot imagine a situation in the near future where you will run into issues.
If so, this is what is often referred to in the world of software development as "a good problem to have".
Google will no doubt allow you to scale if your app provides value to users and needs the bandwidth.
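Even well under the courtesy limit, a published client will occasionally hit per-user rate limits, so it is worth handling rate-limit responses with exponential backoff rather than treating them as fatal. A minimal sketch, assuming the google-api-python-client library and credentials obtained elsewhere (creds here is a placeholder):

```python
# Sketch: call the Drive API with exponential backoff on rate-limit errors.
# Assumes google-api-python-client is installed and `creds` holds valid
# OAuth credentials obtained elsewhere (placeholder in this sketch).
import random
import time

from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

def list_files_with_backoff(creds, max_retries: int = 5):
    """List Drive files, retrying rate-limited requests with backoff."""
    service = build("drive", "v3", credentials=creds)
    for attempt in range(max_retries):
        try:
            return service.files().list(pageSize=100).execute()
        except HttpError as err:
            # 403 (rate limit exceeded) and 429 are the retryable cases here;
            # other errors are re-raised immediately.
            if err.resp.status in (403, 429) and attempt < max_retries - 1:
                time.sleep((2 ** attempt) + random.random())
                continue
            raise
```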