Azure Functions triggered by Event Hub

We have an existing solution in which an Event Hub ingests real-time event data from a source. We are using a SAS key for authentication and authorization, and for additional security we have whitelisted the source's IPs on the Event Hub. We also have a Databricks instance inside a VNET reading from this Event Hub; the VNET has been whitelisted on the Event Hub as well.
We now have a new requirement to read from the Event Hub using Azure Functions. The problem is that, since we have enabled IP whitelisting on the Event Hub, we need to whitelist the IPs of the functions as well, and we can't figure out which outbound IPs to whitelist on the Event Hub.
The documentation says that the outbound IPs remain mostly the same but can change on the Consumption plan, which is what we intend to use.
Does that mean the only other solution is to whitelist the entire Azure region where our functions are hosted, using the list in the Azure service IPs link?
Any other suggestions on what we can try?

Yes, if you don't know the outbound IP addresses of the Azure Function app, you will have to add the IP ranges of the whole region to the whitelist. You can get those here.
A more realistic option: you can put your function app inside an Azure VNET and whitelist that VNET on the Event Hub. However, this requires the function to run on an App Service plan or a Premium plan.
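
For reference, once the networking is sorted out, reading from the Event Hub in the function itself is straightforward. Below is a minimal, hypothetical sketch of an Event Hub-triggered Azure Function in Java; the hub name, the connection app setting, and the consumer group are placeholders you would replace with your own.

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.EventHubTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;

public class EventHubReader {
    // "EventHubConnection" is an app setting assumed to hold the
    // Event Hub's SAS connection string; the hub and consumer group
    // names below are placeholders.
    @FunctionName("ReadFromEventHub")
    public void run(
            @EventHubTrigger(
                name = "message",
                eventHubName = "my-event-hub",
                connection = "EventHubConnection",
                consumerGroup = "$Default")
            String message,
            final ExecutionContext context) {
        context.getLogger().info("Event received: " + message);
    }
}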

Related

How to authenticate the client that invokes a Google Cloud Function in Java

I have a Google Cloud Function written in Java.
A client will invoke the function using its HTTP trigger URL.
But that is not secure. I have gone through some docs saying that you should pass a token or client ID and then verify it on the server side.
Can anyone explain that in detail, and please provide a code example if there is one?
My doubt is how to authenticate the client while it invokes the function using the HTTP trigger.
This page explains quite well all the options you have to authenticate a requester on Cloud Functions.
If you have users, the best way is to use Firebase Auth (or Google Cloud Identity Platform, which is simply a more advanced version of Firebase Auth with more features).
However, you need to grant every user the cloudfunctions.invoker role to allow them to invoke the Cloud Function, which can be difficult. You can also perform the check on your side, but in that case you remove Google's security (filter) layer and have to check all the traffic yourself (not really safe, in terms of billing and in case of attack).
The last solution, API keys, is not recommended, especially for users. But for machine-to-machine it's sometimes the only option. However, there isn't an out-of-the-box solution, and for this I wrote an article that explains how to create a Cloud Endpoint (or now a Cloud API Gateway, the serverless evolution of Cloud Endpoints with ESPv2) that accepts API keys.
With this last solution, if you change your security definition, you can also accept OAuth2 tokens coming from Firebase Auth (or Cloud Identity Platform), but this time you don't need to grant all the users an IAM role on your Cloud Function. The token only needs to be valid, and it's the Cloud Endpoint's service account that performs the call (and thus needs to be authorized on the Cloud Function).
In addition, because you can accept OAuth2 tokens, you can also accept non-Google tokens, and thus keep your users in any OAuth2-compliant IdP (Keycloak, Okta, ...).
You could use an external OAuth server like Keycloak (https://github.com/keycloak/keycloak), or use something like JSON Web Tokens -- https://jwt.io/ -- which are available for various languages and suitable for microservices.
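Since a code example was requested: the sketch below shows one possible way, assuming Firebase Auth and the Firebase Admin SDK, to verify a bearer ID token inside a Java HTTP-triggered Cloud Function. The client would send the token in the Authorization header; class and variable names are illustrative, not an official sample.

import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;
import com.google.firebase.FirebaseApp;
import com.google.firebase.auth.FirebaseAuth;
import com.google.firebase.auth.FirebaseToken;

public class AuthenticatedFunction implements HttpFunction {
    static {
        // Uses the function's default credentials; no key file required.
        FirebaseApp.initializeApp();
    }

    @Override
    public void service(HttpRequest request, HttpResponse response) throws Exception {
        // The client is expected to send "Authorization: Bearer <ID token>".
        String header = request.getFirstHeader("Authorization").orElse("");
        if (!header.startsWith("Bearer ")) {
            response.setStatusCode(401);
            return;
        }
        String idToken = header.substring("Bearer ".length());
        try {
            // Checks the token's signature, expiry and issuer.
            FirebaseToken decoded = FirebaseAuth.getInstance().verifyIdToken(idToken);
            response.getWriter().write("Hello, " + decoded.getUid());
        } catch (Exception e) {
            // Invalid or expired token: reject the request.
            response.setStatusCode(403);
        }
    }
}

Note that this application-level check does not replace the IAM layer described above; if the function allows unauthenticated invocations, you still receive (and pay for) the rejected requests.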

How to assign multiple service account credentials to Google Cloud Functions?

I have three service accounts:
App engine default service account
Datastore service account
Alert Center API service account
My cloud function uses Firestore in Datastore mode for bookkeeping and invokes the Alert Center API.
One can assign only one service account when deploying a cloud function.
Is there a way, similar to AWS, to create multiple inline policies and assign them to the default service account?
P.S. I tried creating a custom service account, but Datastore roles are not supported. Also, I do not want to store credentials in environment variables or upload a credentials file with the source code.
You're looking at service accounts a bit backwards.
Granted, I see how the naming can lead you in this direction. "Service" in this case doesn't refer to the service being offered, but rather to the non-human entities (i.e. apps, machines, etc - called services in this case) trying to access that offered service. From Understanding service accounts:
A service account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved.
So you shouldn't be looking at service accounts from the offered service perspective - i.e. Datastore or Alert Center API, but rather from their "users" perspective - your CF in this case.
That single service account assigned to a particular CF is simply identifying that CF (as opposed to some other CF, app, machine, user, etc) when accessing a certain service.
If you want that CF to be able to access a certain Google service you need to give that CF's service account the proper role(s) and/or permissions to do that.
For accessing the Datastore you'd be looking at these Permissions and Roles. If the datastore that your CFs need to access is in the same GCP project, the default CF service account - which is the same as the GAE app's service account for that project - already has access to the Datastore (assuming, of course, that you're OK with using the default service account).
I haven't used the Alert Center API, but apparently it uses OAuth 2.0, so you should probably go through Service accounts.
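To make that concrete, here is a minimal, hypothetical Java sketch: inside the CF, the Google client libraries pick up Application Default Credentials, which resolve to the single service account assigned to the function, so the same identity is used for Datastore, the Alert Center API, or anything else you've granted it roles for. The kind and property names are made up.

import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;

public class Bookkeeping {
    public static void main(String[] args) {
        // Application Default Credentials resolve to the one service
        // account assigned to the Cloud Function; no key file or
        // environment-variable credentials are needed.
        Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

        // Record a bookkeeping entry (kind/property names are illustrative).
        Key key = datastore.newKeyFactory().setKind("Run").newKey("last");
        Entity entry = Entity.newBuilder(key).set("status", "ok").build();
        datastore.put(entry);
    }
}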

Whitelist IBM Cloud function location

Hi, does anyone know what I can use to whitelist IBM Cloud Functions locations? I wrote a function that makes REST API calls to a server, but the server needs to whitelist incoming requests. E.g., if I select "US South" as the location for my IBM Cloud function, what IP/domain/hostname etc. does it appear as, so that I can whitelist it?
Thank you.
I recommend having a look at IBM Cloud's Statica service, which allows you to access restricted resources behind firewalls and whitelisted services using a static IP, regardless of where your app is running or the number of instances.
https://console.bluemix.net/catalog/services/statica
Does this help?

Azure API Management - User Metadata

I am using Azure API Management to provide an API gateway for some APIs. To set up a policy for a particular API, I have used a Property (Named Value) to store user metadata, which I then assign to a variable in the incoming request body. When adding a new user I need to add metadata for the new user to that JSON. The property value has now grown beyond the size limit, so I cannot add any more to it. I am wondering what the best way is to store my large metadata so that it remains accessible in an API Management policy.
Update 1:
I have switched the authentication process from Azure to Auth0, so I can add the user metadata to Auth0's app_metadata and then, in the Azure policies, validate the JWT from Auth0 and obtain the token claim (app_metadata), as explained in this article. This solves the large user-metadata (JSON) issue, but it doesn't solve the problem for other, unrelated user metadata stored in other Properties (Named Values); moreover, the API gateway inbound policies are growing into a huge bunch of logic that is not easy to manage and maintain.
At this stage I am looking for a way to handle all the API gateway inbound policies in a better, more manageable environment, i.e. C#. My two cents is to implement the API gateway inbound policy logic in a new .NET API and call that new API from the existing API gateway inbound policies, so that it acts as a bridge between the Azure API gateway and the existing API. However, I'm still not sure whether this is achievable, and whether the existing API can be called directly from the new API or has to be called via the Azure API gateway in some way.
At this point you have to either store it across multiple variables or hardcode it directly in the policy.
After more research I ended up with this solution, which basically suggests storing the user metadata in Azure Cosmos DB and calling the Cosmos API from the API Management policy to access the metadata; the Cosmos API call can also be cached in the policy.

Accessing an IP-restricted URI from an Azure Function (PowerShell)

I'd like to know, given a PowerShell function such as:
# Download an XML document from the IP-restricted endpoint.
$url = "http://AnIPrestrictedURL"
[xml]$xml = (New-Object System.Net.WebClient).DownloadString($url)
with $url being an IP-restricted URL, what are my options (if any) to make this work? Is a VPN an option, or some other method? Would ExpressRoute (cost not being a factor) or a point-to-site VPN work for this?
EDIT: To make this clearer: I have control of the IP-restricted server, so in theory I could allow access to it via VPN / ExpressRoute, presumably without the IP restriction? That is the point of the question.
Perhaps I should have asked "how can I use an Azure Function with a service I do not want to be publicly accessible?" If it matters, the end service is Solr.
While the list of outbound IPs is provided for an Azure Web App (and is stable), the equivalent does not exist for Azure Function Apps. The reason is that they are scaled out very dynamically and can end up running across many scale units, each with a different set of outbound IPs.
So generally, you cannot make assumptions about outbound IPs when using Azure Functions.