gRPC Compute Engine tutorial - google-compute-engine

I'm trying out GCE for gRPC. In the past, I've set up ECS (EC2 + NLB + ALB), but since I had weird behavior with the gRPC server behind the NLB, my team decided to try out GCE, which seems to be better suited for gRPC.
I'm following the tutorial here.
I need to determine the following:
To determine the ENDPOINTS_SERVICE_NAME you can either:
After deploying the Endpoints configuration, go to the Endpoints page
in the Cloud Console. The list of possible ENDPOINTS_SERVICE_NAME are
shown under the Service name column.
I'm at that point, since I've just finished uploading with gcloud and enabled the required services.
There is no such "Service name" column; the closest thing I've got is this label:
Service name: bookstore.endpoints.grpc-research.cloud.goog in the "Endpoints" section
I don't know if the documentation is out of date, or I'm at the wrong place, or I'm missing something else.

The column is Service name. In the UI it is located under Endpoints > Services, in the Service name column, and it's the same value you have shared in your post.
In my case it is function1.xxxxxxx.a.run.app.
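If the Console layout changes again, the list of service names can also be pulled programmatically. Below is a minimal sketch (not part of the tutorial) using the Service Management API through google-api-python-client; it assumes Application Default Credentials are available and uses the project ID from your service name as an example:

# List the managed services for a project; their names are the possible
# ENDPOINTS_SERVICE_NAME values. "grpc-research" is the example project ID.
from googleapiclient.discovery import build

servicemanagement = build("servicemanagement", "v1")
response = servicemanagement.services().list(
    producerProjectId="grpc-research"
).execute()

for service in response.get("services", []):
    print(service["serviceName"])  # e.g. bookstore.endpoints.grpc-research.cloud.goog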

Related

Functions triggered by Event Hub

We have an existing solution where an Event Hub ingests real-time event data from a source. We are using a SAS key for authentication and authorization. For additional security we have whitelisted the IPs of the source on the Event Hub. We also have a Databricks instance within a VNET reading from this Event Hub. The VNET has been whitelisted on the Event Hub as well.
We now have a new requirement to read off the Event Hub using Azure Functions. The problem is that, since we have enabled IP whitelisting on the Event Hub, we need to whitelist the IPs of the functions as well, and we can't figure out which of the functions' IPs to whitelist on the Event Hub.
The documentation says that the outbound IPs remain mostly the same but can change for the Consumption plan, which is what we intend to use.
Does that mean the only other solution is that we need to whitelist the entire Azure region where our functions are hosted, using the list in the Azure service IPs link?
Any other suggestions what we can try?
Does that mean the only other solution is that we need to whitelist
the entire Azure region where our functions are hosted? Any other
suggestions what we can try?
Yes, if you don't know the outbound IP addresses of the Azure Function app, add the IP ranges for its region to the whitelist. You can get those here.
A more realistic option: you can put your function app in an Azure VNET and let the VNET access the Event Hub. However, this requires an App Service plan or a Premium plan function.
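For completeness, the function app's current and possible outbound IP addresses can also be read from the ARM API (keeping in mind that on the Consumption plan they may still change over time, which is the original concern). A minimal sketch assuming the azure-identity and azure-mgmt-web packages, with placeholder resource names:

# Read the outbound IPs of a function app (function apps share the App Service
# ARM surface). The subscription, resource group and app names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

credential = DefaultAzureCredential()
client = WebSiteManagementClient(credential, subscription_id="<subscription-id>")

site = client.web_apps.get("my-resource-group", "my-function-app")
print("Current outbound IPs: ", site.outbound_ip_addresses)
print("Possible outbound IPs:", site.possible_outbound_ip_addresses)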

How to assign multiple service account credentials to Google Cloud Functions?

I have three service accounts:
App engine default service account
Datastore service account
Alert Center API service account
My Cloud Function uses Firestore in Datastore mode for bookkeeping and invokes the Alert Center API.
One can assign only one service account while deploying a Cloud Function.
Is there a way, similar to AWS, where one can create multiple inline policies and assign them to the default service account?
P.S. I tried creating a custom service account, but Datastore roles are not supported. Also, I do not want to store credentials in environment variables or upload a credentials file with the source code.
You're looking at service accounts a bit backwards.
Granted, I see how the naming can lead you in this direction. "Service" in this case doesn't refer to the service being offered, but rather to the non-human entities (i.e. apps, machines, etc - called services in this case) trying to access that offered service. From Understanding service accounts:
A service account is a special type of Google account that belongs to
your application or a virtual machine (VM), instead of to an
individual end user. Your application assumes the identity of the
service account to call Google APIs, so that the users aren't
directly involved.
So you shouldn't be looking at service accounts from the offered service perspective - i.e. Datastore or Alert Center API, but rather from their "users" perspective - your CF in this case.
That single service account assigned to a particular CF is simply identifying that CF (as opposed to some other CF, app, machine, user, etc) when accessing a certain service.
If you want that CF to be able to access a certain Google service you need to give that CF's service account the proper role(s) and/or permissions to do that.
For accessing the Datastore you'd be looking at these Permissions and Roles. If the datastore that your CFs need to access is in the same GCP project, the default CF service account - which is the same as the GAE app's one from that project - already has access to the Datastore (of course, if you're OK with using the default service account).
I didn't use the Alert Center API, but apparently it uses OAuth 2.0, so you probably should go through Service accounts.
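To make the "no credentials in environment variables or source" part concrete: inside the function you simply use the client library with no explicit credentials, and it authenticates as whatever service account the function was deployed with (the default one unless you choose another). A minimal sketch assuming the Python runtime and the google-cloud-datastore package; the kind and key names are illustrative only:

# Cloud Function talking to Firestore in Datastore mode; the client picks up
# the function's runtime service account via Application Default Credentials.
from google.cloud import datastore

client = datastore.Client()  # authenticates as the function's service account

def record_invocation(request):
    key = client.key("Bookkeeping", "last_run")   # illustrative kind/name
    entity = datastore.Entity(key=key)
    entity.update({"source": "cloud-function"})
    client.put(entity)
    return "ok"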

Google Cloud Function sending a call to an app hosted on GKE

I would like to load data into my DB hosted on GKE using a Cloud Function (small ETL needs; a Cloud Function would be great for that case).
I'm working in the same region. My GKE cluster has an internal load balancer exposing an internal IP.
The method called works perfectly when invoked from App Engine, but when invoking it from a Cloud Function I get a connection error: "can't find client at IP".
Is this possible?
If so, what would be the procedure?
Many thanks!!
Gab
We just released this feature to Beta. You can get started by following our docs:
https://cloud.google.com/functions/docs/connecting-vpc
https://cloud.google.com/appengine/docs/standard/python/connecting-vpc
https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
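As an illustration of the function side once a Serverless VPC Access connector is attached to the function (the deployment steps are in the docs above), here is a minimal sketch; the internal load balancer IP, the path, and the use of the requests package are assumptions, not values from the question:

import requests

INTERNAL_LB = "http://10.128.0.42"  # example internal IP exposed by the GKE ILB

def load_data(request):
    # Reachable only because the function egresses through the VPC connector.
    resp = requests.post(INTERNAL_LB + "/ingest", json={"rows": []}, timeout=30)
    resp.raise_for_status()
    return "loaded: {}".format(resp.status_code)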
This is not currently possible as of today.
https://issuetracker.google.com/issues/36859738
Thanks for your feedback.
You are totally right. At the moment the instances are only able to receive such requests via the external IP [1].
I have filed a feature request on your behalf so that this functionality might be considered for future deployments. I cannot guarantee it will be implemented or provide an ETA. Nevertheless, rest assured that your feedback is always taken seriously.
We also reached out to our Google Cloud representative, who confirmed this was a highly requested feature that was being looked at, but was unable to provide an ETA as to when it would be released.

How to get the number of PCF instances running in Java code?

I have an app that uses Spring REST and is deployed on PCF. Now, inside the code, I have to get the number of PCF instances currently running. Can anyone help?
Before I answer this - why do you want to know? It's an anti-pattern for cloud native apps to know about their peers; they should each be working in total isolation.
You can discover this by looking up application details by GUID in the CloudController. You can get your current app's GUID in the VCAP_APPLICATION environment variable.
https://apidocs.cloudfoundry.org/245/apps/get_app_summary.html
In order to hit the CloudController your app will need to know the system domain of your Cloud Foundry (e.g. api.mycf.com) and credentials that allow it to make that request.
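The asker's app is Java/Spring, but the HTTP flow is the same in any language; here is a minimal Python sketch of the lookup described above. The system domain matches the example in the answer, and the way the OAuth token is obtained is a placeholder - in practice it would come from UAA using credentials allowed to read the app:

import json
import os
import requests

vcap = json.loads(os.environ["VCAP_APPLICATION"])
app_guid = vcap["application_id"]

api_url = "https://api.mycf.com"       # Cloud Foundry system domain (example)
token = os.environ["CF_ACCESS_TOKEN"]  # placeholder for a UAA-issued token

summary = requests.get(
    api_url + "/v2/apps/" + app_guid + "/summary",
    headers={"Authorization": "bearer " + token},
).json()

print(summary["instances"])          # desired instance count
print(summary["running_instances"])  # instances currently running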

Making an HTTP API request to Amazon Elastic Beanstalk

I'm trying to make an HTTP GET request to
https://elasticbeanstalk.us-east-1.amazonaws.com/?ApplicationName=MyApplicationName&Operation=DescribeEnvironments
and getting
<?xml version="1.0" standalone="no"?>
<ErrorResponse xmlns="http://elasticbeanstalk.amazonaws.com/docs/2010-12-01/">
<Error>
<Type>Sender</Type>
<Code>InvalidClientTokenId</Code>
<Message>No account found for the given parameters</Message>
</Error>
<RequestId>ca83cbc7-f22a-11e3-8380-3bbf7df037f3</RequestId>
</ErrorResponse>
I've tried setting my key and secret as username and password for basic HTTP auth, but clearly this doesn't work.
So how do I add my key and secret to my remote request?
For most AWS usage scenarios it is highly recommended to use one of the many AWS SDKs to ease working with the APIs via higher-level abstractions - these SDKs also take care of the required and slightly complex request signing. An explanation of the (usually several) options for providing your AWS credentials can be found in the respective SDK documentation:
The AWS SDKs provide functions that wrap an API and take care of many of the connection details, such as calculating signatures, handling request retries, and error handling. The SDKs also contain sample code, tutorials, and other resources to help you get started writing applications that call AWS. Calling the wrapper functions in an SDK can greatly simplify the process of writing an AWS application.
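For example, the DescribeEnvironments call from the question could look like this with the AWS SDK for Python (boto3); a minimal sketch assuming credentials are configured in one of the standard ways (environment variables, ~/.aws/credentials, or an instance role), so no manual signing is needed:

import boto3

client = boto3.client("elasticbeanstalk", region_name="us-east-1")
response = client.describe_environments(ApplicationName="MyApplicationName")

for env in response["Environments"]:
    print(env["EnvironmentName"], env["Status"])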
If you really have a need to use the AWS APIs via REST directly, Signing AWS API Requests will guide you through the required steps; see e.g. the section Components of an AWS Signature 4 Request within the Signature Version 4 Signing Process for the one that applies to AWS Elastic Beanstalk.
Please note that several services augment that documentation with a tailored one, see e.g. Signing and Authenticating REST Requests for the Amazon S3 variation.