Integrating Google Cloud alerts with the Opsgenie alerting system

For example: I have a GCP alert set up for high CPU usage on a VM, and it currently sends an email to my team.
I'm exploring a way to send those GCP alerts to Opsgenie, so that I can configure an escalation workflow and Opsgenie can alert the relevant team member.
I'm stuck because Opsgenie has no direct integration with GCP alerts/alarms.
How can I do this? If the API is the only way, can you shed some light on the integration?

On the Opsgenie side: create an integration of type "Google Stackdriver": https://support.atlassian.com/opsgenie/docs/integrate-opsgenie-with-google-stackdriver/
On the GCP Cloud Monitoring side: send your alert to the notification channel you have webhooked to Opsgenie, i.e. enable the Opsgenie notification channel on your alerting policy (a sketch is below).
Done. :)
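For reference, here is a minimal sketch of the Cloud Monitoring side, assuming the google-cloud-monitoring Python client and a placeholder for the webhook URL you copy from the Opsgenie integration page. The same channel can be created in the console under Monitoring > Alerting > Notification channels.

```python
# Sketch: create a webhook notification channel pointing at the Opsgenie
# integration URL, then attach it to your alerting policy.
# The project ID and webhook URL below are placeholders.
from google.cloud import monitoring_v3

PROJECT = "projects/<your-project-id>"
# Copy this URL from the "Google Stackdriver" integration page in Opsgenie.
OPSGENIE_WEBHOOK_URL = "<opsgenie-integration-webhook-url>"

client = monitoring_v3.NotificationChannelServiceClient()

channel = monitoring_v3.NotificationChannel(
    {
        "type": "webhook_tokenauth",   # plain webhook channel (no basic auth)
        "display_name": "Opsgenie",
        "labels": {"url": OPSGENIE_WEBHOOK_URL},
    }
)

created = client.create_notification_channel(
    name=PROJECT, notification_channel=channel
)
# Reference created.name in the notification_channels list of your alerting policy.
print(created.name)
```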

Related

Authenticating to a Cloud Function from a Pub/Sub push subscription

We are using Pub/Sub for queuing, with a push subscription pointing at an HTTP-triggered Cloud Function. According to the documentation, Cloud Run and App Engine will both authenticate requests from Pub/Sub, but Cloud Functions isn't listed. We have used other Google services, such as Cloud Scheduler, to invoke functions that require authentication, but have not had any luck doing so with Pub/Sub.
My question is: does Cloud Functions support authentication from Pub/Sub via a service account set on the subscription, or is the function required to read and validate the JWT itself?
You need a few things:
A service account with the role roles/cloudfunctions.invoker
Tick "Enable authentication" on the push subscription
Select your service account
Add the Cloud Function URL (as shown in the Cloud Function details) in the audience field. That is the missing part in Ricco's answer.
EDIT 1
Pub/Sub needs authorization to generate a token for that service account. Check the first step of this; it shows how to grant the Pub/Sub service agent service account the Token Creator role.
Pub/Sub subscriptions support service account authentication for "push" subscriptions.
To use it, specify the Cloud Function endpoint, enable authentication, and add a service account to be used to send requests to the Cloud Function. Make sure that the service account has the appropriate permissions to access both Pub/Sub and Cloud Functions (a sketch of creating such a subscription programmatically follows below).
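A minimal sketch of creating such an authenticated push subscription with the google-cloud-pubsub Python client; the project, topic, subscription, function URL and service account names are placeholders.

```python
# Sketch: push subscription that attaches an OIDC token to each push request.
# Prerequisites (see the steps above): the service account needs
# roles/cloudfunctions.invoker on the function, and the Pub/Sub service agent
# needs roles/iam.serviceAccountTokenCreator to mint tokens for it.
from google.cloud import pubsub_v1

PROJECT_ID = "<project-id>"
TOPIC_ID = "<topic-id>"
SUBSCRIPTION_ID = "<subscription-id>"
FUNCTION_URL = "https://<region>-<project-id>.cloudfunctions.net/<function-name>"
SERVICE_ACCOUNT = "<invoker-sa>@<project-id>.iam.gserviceaccount.com"

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path(PROJECT_ID, TOPIC_ID)
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

push_config = pubsub_v1.types.PushConfig(
    push_endpoint=FUNCTION_URL,
    oidc_token=pubsub_v1.types.PushConfig.OidcToken(
        service_account_email=SERVICE_ACCOUNT,
        audience=FUNCTION_URL,   # the "missing part": audience = the function URL
    ),
)

with subscriber:
    subscriber.create_subscription(
        request={
            "name": subscription_path,
            "topic": topic_path,
            "push_config": push_config,
        }
    )
```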

Securing a free App Service API behind Consumption-tier API Management

I have created a .NET Core API and deployed it as an App Service in Azure. On top of that, I have an instance of Azure API Management (APIM). Now I want the API to be accessible only through the APIM.
During the free testing phase, I restricted access to the API to the IP of the APIM. Since I do not expect my API to have high traffic, and to save costs, I have now switched to the Free (App Service) and Consumption (APIM) tiers.
Because my APIM uses the Consumption tier, there is no static IP that I could use to restrict access to the API.
Because my App Service uses the Free plan, neither VNet integration nor incoming client certificates are available.
Is there a way to secure a free App Service API behind an APIM instance on the Consumption tier, other than implementing the protection myself?
You have a few options with the Consumption SKU in mind:
Basic auth: make APIM send a well-known secret and check for that secret in the API App (a sketch of the check is below).
Client certificate authentication: make APIM use a client certificate to connect to the API App and check for it there.
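The backend in the question is a .NET Core API, so purely to illustrate the shape of the shared-secret check, here is a minimal sketch. Flask and the X-Apim-Key header name are assumptions; on the APIM side you would add the matching header with a set-header policy.

```python
# Sketch: reject any request that does not carry the secret header added by APIM.
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
# Same value configured in the APIM set-header policy.
APIM_SECRET = os.environ["APIM_SHARED_SECRET"]

@app.before_request
def require_apim_secret():
    supplied = request.headers.get("X-Apim-Key", "")
    # Constant-time comparison to avoid leaking the secret via timing.
    if not hmac.compare_digest(supplied, APIM_SECRET):
        abort(401)

@app.route("/ping")
def ping():
    return {"status": "ok"}
```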

How to protect the Backend API against calls other than Azure API Management

I have an ASP.NET Core REST API Service hosted on an Azure Web App. I own its source code and I can change it if required.
I am planning to publish REST API Service with Azure API Management.
I am adding Azure AD authentication to the Azure API Management front, so the API Management front is secured. All the steps are described here.
All good so far. Here is the question (or challenge?):
Considering that my backend REST API Service is hosted on Azure and publicly accessible, how do I protect it against request calls other than API Management calls?
How does the backend service know the identity and AAD group claims of the incoming call, and how does it access those claims?
A link to a code sample or online documentation would be a great help.
Update
While there are some overlaps with the following question:
How to prevent direct access to API hosted in Azure app service
... part of this question is still outstanding:
How does the backend service know the identity and AAD group claims of the incoming call, and how does it access those claims?
You can enable a static IP restriction on your Web App to allow incoming traffic only from the public-facing VIP of your APIM service (keep in mind that in some specific scenarios the VIP may change, and you will then need to update the allow list again).
Clients ==> AAD ==> APIM service VIP <==> (APIM VIP allowed) Web App
https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions
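If you prefer to set that restriction programmatically rather than in the portal's Access restrictions blade, a rough sketch with the azure-mgmt-web Python SDK might look like the following. Resource names and the VIP are placeholders, and the exact model/method names should be verified against the SDK version you use.

```python
# Sketch: allow only the APIM VIP to reach the backend Web App.
# Once any Allow rule is present, other traffic is implicitly denied.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import IpSecurityRestriction, SiteConfigResource

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
WEBAPP_NAME = "<backend-webapp>"
APIM_VIP = "203.0.113.10"  # placeholder: the public VIP shown on the APIM overview blade

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

client.web_apps.update_configuration(
    RESOURCE_GROUP,
    WEBAPP_NAME,
    SiteConfigResource(
        ip_security_restrictions=[
            IpSecurityRestriction(
                ip_address=f"{APIM_VIP}/32",
                action="Allow",
                priority=100,
                name="allow-apim-vip",
            )
        ]
    ),
)
```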

.NET Core console application using MailKit does not send emails when deployed as an Azure WebJob

I am using " Mailkit " to send emails from a dot net core console application (using smtp). It works when i run the application in my local machine and I receive emails.
However, when i deployed as an azure webjob, It does not send emails. I could see the job completes successfully and no errors logged. BUT i do not receive emails
I am just wondering how it works in local machine and it does not while run in azure.
Update:
Based on comments below, trying to be specific. can someone confirm if you were able to use Mailkit and an organizational smtp server(no public domains) & email account to send emails from an azure webjob ? or else please suggest what should be a working setup. thanks!
No public domain means that the SMTP server can't be reached over the internet, which is why the WebJob running in Azure can't deliver the mail even though your local machine on the internal network can. As a workaround, you could have the WebJob save the mail information to Azure Queue Storage, and write a new application that you publish to your internal network. That new application would read the messages from the queue and send the mail from inside your internal network.
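The WebJob in the question is a .NET console app, but just to illustrate the hand-off pattern, here is a sketch with the azure-storage-queue SDK; the queue name, environment variable and message fields are placeholders.

```python
# Sketch: queue hand-off between the cloud side and the internal network.
import json
import os

from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"], "outgoing-mail"
)

# Producer side (runs in Azure, e.g. the WebJob): enqueue the mail instead of sending it.
queue.send_message(json.dumps({
    "to": "team@example.org",
    "subject": "High CPU usage",
    "body": "CPU usage exceeded the threshold.",
}))

# Consumer side (runs inside the internal network): dequeue and hand to the SMTP server.
for message in queue.receive_messages():
    mail = json.loads(message.content)
    # ... send via the organizational SMTP server here (e.g. smtplib / MailKit) ...
    queue.delete_message(message)
```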

Google Cloud SQL monitoring

I am wondering what the current/existing possibilities are for monitoring MySQL instances on Google Cloud SQL. What I need is:
define alerts and trigger actions (sent via HTTP) for a set of MySQL metrics
retrieve metrics via HTTP
There is not much I can see in the Google Cloud SQL dashboard for my MySQL instance. When I go to Monitoring -> Dashboard & alerts, I see that it is disabled due to the Stackdriver migration. Am I doing something wrong here, or is what I am looking for not possible at the moment?
Although the dashboard & alerts are not yet available, there is an API to access monitoring statistics: Cloud Monitoring. You would need to build your own alerting/trigger system on top of it at the moment (a sketch of retrieving a Cloud SQL metric is below).
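For the "retrieve metrics via HTTP" part, a minimal sketch with the google-cloud-monitoring Python client, reading one of the Cloud SQL metrics (CPU utilization) for the last hour; the project ID is a placeholder.

```python
# Sketch: read Cloud SQL CPU utilization for the last hour via the
# Cloud Monitoring API. Alerting/triggering would be built on top of this.
import time

from google.cloud import monitoring_v3

PROJECT = "projects/<your-project-id>"

client = monitoring_v3.MetricServiceClient()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": now},
        "start_time": {"seconds": now - 3600},
    }
)

results = client.list_time_series(
    request={
        "name": PROJECT,
        "filter": 'metric.type = "cloudsql.googleapis.com/database/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    for point in series.points:
        print(point.interval.end_time, point.value.double_value)
```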