Pipelining API calls using Azure APIM Policies

I'm somewhat new to Azure APIM and am trying to figure out a solution to a scenario that I've been tasked to solve using APIM policies. Here's how the workflow is supposed to work:
System A makes a REST call to the APIM gateway.
This triggers APIM to call REST API endpoint B to get a value x.
Finally, APIM needs to relay the original call received from system A to system C such that the header information from System A's call and value x from endpoint B's call are included.
Thus, is there a way to accomplish this using only a combination of Azure APIM policies?
Thanks,

1 & 2. You can connect System A to System B via the Azure APIM gateway. You can connect APIM with System B following the steps suggested in this answer: Azure Api management for connecting to application
Once connected, the REST API URL will be ready, and you can call it from System A to set up the connection.
3. All the parameters (headers/payload) sent to the API by the calling service (System A) will be passed on to System B as-is unless you change them.
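As for doing the intermediate call purely in policy: a send-request policy can call endpoint B and store its response in a context variable, and a set-header policy can then attach value x before the request is forwarded on. A minimal sketch, assuming hypothetical URLs and that B returns JSON with the value in a field named x:

<policies>
    <inbound>
        <base />
        <!-- Call endpoint B and capture its response in a context variable -->
        <send-request mode="new" response-variable-name="endpointBResponse" timeout="20" ignore-error="false">
            <set-url>https://endpoint-b.example.com/api/value</set-url>
            <set-method>GET</set-method>
        </send-request>
        <!-- Extract value x from B's JSON body and attach it as a header -->
        <set-header name="X-Value-From-B" exists-action="override">
            <value>@(((IResponse)context.Variables["endpointBResponse"]).Body.As<JObject>()["x"].ToString())</value>
        </set-header>
        <!-- Forward the original request (System A's headers intact) to system C -->
        <set-backend-service base-url="https://system-c.example.com" />
    </inbound>
    <backend>
        <base />
    </backend>
</policies>

Since incoming headers are forwarded by default, only the extracted value needs to be added explicitly.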

Related

Can Azure API Management acquire access tokens from B2C?

We want to make our APIs available to external systems.
Our APIs are protected by "Access tokens" using OAUTH2 and Azure AD B2C as an Identity Provider.
Unfortunately, B2C does not support the "Client Credential Flow", so external systems cannot get tokens from B2C by passing their client id and their secret.
We are thinking of fronting the APIs with Azure API Management, and providing the external systems with Subscription Keys. Then once we verify the subscription key in API Management, we want to acquire an Access Token to call our back-end.
Is this possible? It seems not, because the Client Credentials flow is missing. However, I've seen videos from APIM experts claiming that it is possible. Am I missing something? Does APIM get special treatment?
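One pattern those demos may be showing is APIM acquiring the token itself in policy: after the subscription key is verified, a send-request policy calls a token endpoint and the resulting token is attached to the back-end call. A sketch, assuming a token endpoint that does support client credentials (the Azure AD endpoint behind the tenant is a commonly cited option) and placeholder values throughout:

<inbound>
    <base />
    <!-- Request a token for the back-end; tenant, client id/secret, and scope are placeholders -->
    <send-request mode="new" response-variable-name="tokenResponse" timeout="20" ignore-error="false">
        <set-url>https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token</set-url>
        <set-method>POST</set-method>
        <set-header name="Content-Type" exists-action="override">
            <value>application/x-www-form-urlencoded</value>
        </set-header>
        <set-body>grant_type=client_credentials&amp;client_id={client-id}&amp;client_secret={client-secret}&amp;scope={scope}</set-body>
    </send-request>
    <!-- Attach the acquired token to the call to the back-end -->
    <set-header name="Authorization" exists-action="override">
        <value>@("Bearer " + (string)((IResponse)context.Variables["tokenResponse"]).Body.As<JObject>()["access_token"])</value>
    </set-header>
</inbound>

In practice you would cache the token (e.g. with cache-store-value) instead of requesting a new one per call.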

Is it possible to serve a WebSocket gateway with the same route in Azure API Management

We are using Azure API Management, which supports WebSocket, but we need 2 different endpoint routes for it, because you can't create routes to your API as both:
http(s)://{base_url} and ws(s)://{base_url};
you must differentiate them using a suffix, like:
http(s)://{base_url}
ws(s)://{base_url}/{suffix}
or
http(s)://{base_url}/{suffix}
ws(s)://{base_url}
How can we configure the same endpoints?
During a WebSocket passthrough, the client application establishes a WebSocket connection with the API Management Gateway.
Check the steps for adding a WebSocket API to APIM here.
Make sure to observe the following limitations:
WebSocket APIs are not supported yet in the Consumption tier.
WebSocket APIs are not supported yet in the self-hosted gateway.
Azure CLI, PowerShell, and SDK currently do not support management operations on WebSocket APIs.
Refer to this SO thread on including two endpoints for the same URL in the backend; thanks to Hury for the great explanation. Although it is for a Function App, the process is similar.

API Management to forward client certificate

I am trying to achieve the following scenario but am ending up with a 403 response.
Client -> sends Cert A -> API Management -> Forwards Cert A -> Backend API (Azure Api App) -> Authenticates the certificate.
Is there is a way to configure API management to forward the incoming certificate to the backend API?
I tried various transformation policies on the incoming request but none of the options worked.
Please suggest.
This is technically not possible, since a client certificate's private key is never transmitted over the wire, so there is no way APIM could use it to authenticate to the backend. Even more so since there is no affinity between the client connection and the backend connection in APIM. Your best option is to send the client certificate information in a custom header. You can use the set-header policy to set it at the APIM level, along with policy expressions to extract the client certificate information from the request.
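A sketch of that approach, copying the thumbprint and subject of the presented certificate into custom headers (the header names here are made up for illustration):

<inbound>
    <base />
    <!-- context.Request.Certificate is the client certificate presented to APIM -->
    <set-header name="X-Client-Cert-Thumbprint" exists-action="override">
        <value>@(context.Request.Certificate?.Thumbprint ?? "none")</value>
    </set-header>
    <set-header name="X-Client-Cert-Subject" exists-action="override">
        <value>@(context.Request.Certificate?.SubjectName.Name ?? "none")</value>
    </set-header>
</inbound>

The back-end has to trust these headers, so pair this with a network restriction or a shared secret so that only APIM can reach it.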
With the new authentication-certificate policy (learn.microsoft.com) you may provide the certificate as a byte[] coming from a separate send-request response variable and use it as follows:
<authentication-certificate body="@(context.Variables.GetValueOrDefault<byte[]>("byteCertificate"))" password="optional-certificate-password" />
You could store the password as a secret named value or even get it from Key Vault by using this snippet:
github.com/Azure/api-management-policy-snippets
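A sketch of the send-request half that fills the byteCertificate variable, assuming the certificate is stored as a secret in a hypothetical Key Vault (the snippet in the repo above follows the same shape):

<send-request mode="new" response-variable-name="certResponse" timeout="20" ignore-error="false">
    <set-url>https://example-vault.vault.azure.net/secrets/client-cert?api-version=7.4</set-url>
    <set-method>GET</set-method>
    <!-- APIM's managed identity needs access to the vault's secrets -->
    <authentication-managed-identity resource="https://vault.azure.net" />
</send-request>
<set-variable name="byteCertificate" value="@(Convert.FromBase64String((string)((IResponse)context.Variables["certResponse"]).Body.As<JObject>()["value"]))" />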

How to protect the Backend API against calls other than Azure API Management

I have an ASP.NET Core REST API Service hosted on an Azure Web App. I own its source code and I can change it if required.
I am planning to publish REST API Service with Azure API Management.
I am adding Azure AD authentication to the Azure API Management front, so the API Management front is secured. All the steps are described here.
All good so far. Here is the question (or challenge?):
Considering that my backend REST API Service is hosted on Azure and publicly accessible, how do I protect it against the request calls other than the API Management Calls?
How does the backend service know the identity and AAD group claims of the incoming call, and how does it access those claims?
A link to a code sample or online documentation would be a great help.
Update
While there are some overlaps with the following question:
How to prevent direct access to API hosted in Azure app service
... part of this question is still outstanding:
How does the backend service know the identity and AAD group claims of the incoming call, and how does it access those claims?
You can enable static IP restrictions on your Web App to only allow incoming traffic from the VIP of your APIM service (keep in mind that in some specific scenarios the VIP may change, and you will need to update the allow-list again).
Clients ==> AAD ==> VIP APIM Service <==> (VIP APIM allowed) Web App
https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions
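For the outstanding claims part: APIM forwards the original Authorization header to the back-end by default, so the back-end can validate the JWT itself and read the identity and group claims from it. Alternatively, APIM can validate the token and copy individual claims into headers. A sketch, assuming a validate-jwt setup like the linked walkthrough; the tenant, audience, and header name are placeholders:

<inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
        <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
        <audiences>
            <audience>{backend-api-app-id}</audience>
        </audiences>
    </validate-jwt>
    <!-- Copy the caller's object id claim into a header for the back-end -->
    <set-header name="X-Caller-Object-Id" exists-action="override">
        <value>@{
            var jwt = context.Request.Headers.GetValueOrDefault("Authorization", "").Split(' ').Last().AsJwt();
            return (jwt != null && jwt.Claims.ContainsKey("oid")) ? jwt.Claims["oid"][0] : "none";
        }</value>
    </set-header>
</inbound>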

AWS SQS to receive message from outside of AWS

My company has a messaging system which sends real-time messages in JSON format; it is not built on AWS and will not have any VPN connection to AWS.
Our team is trying to use AWS SQS to receive these messages, which will then have DynamoDB process the JSON messages into TSV, then load them into RDS.
However, as per the FAQ, SQS can only receive messages from within AWS.
https://aws.amazon.com/sqs/faqs/
Q: Who can perform operations on a message queue?
Only an AWS account owner (or an AWS account that the account owner has delegated rights to) can perform operations on an Amazon SQS message queue.
In order to use SQS, one way I can think of is to create a public-facing EC2 instance, which receives messages and passes them on to SQS.
My questions here are:
is my idea correct?
if it's correct, can you share any details on how to build an application on this EC2 instance to achieve that functionality? (I have no experience in application development; your insights are really appreciated!)
are there any easier/better options in AWS that can achieve the goal of receiving messages in my use case?
is my idea correct?
No, it isn't.
You're misinterpreting the (admittedly somewhat unclear) information in the FAQ.
SQS is accessible and usable from anywhere on the Internet. Its only exposed interface is HTTP(S). In fact, from inside EC2, SQS is not accessible unless the EC2 instance actually has outbound access to the Internet.
The point being made in the documentation is not that you need to be "inside" AWS to use queues, but rather that you need to be in possession of an authorized set of AWS credentials in order to work with queues.¹
If you have an AWS account, you have credentials, and you can use SQS. There is no requirement that you access the queue from "inside" AWS.
Choose the endpoint closest to your servers (for lowest latency) and you should find it open and accessible, from anywhere.
¹Queues can be configured to allow anonymous access after they are created. (Don't do it; I'm just saying it is possible.) This section of the FAQ seems to be referring to a subset of operations, such as creating queues.
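To make this concrete, here is a minimal sketch of sending a message to a queue from any machine on the Internet; the queue URL and region are placeholders, and credentials come from the usual boto3 sources (environment variables, ~/.aws, or an IAM role):

import json

import boto3  # pip install boto3

# Hypothetical queue URL -- replace with your own.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

sqs = boto3.client("sqs", region_name="us-east-1")

# Works from anywhere with Internet access; no VPN or EC2 instance required.
response = sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"event": "example", "value": 42}),
)
print(response["MessageId"])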
I was not able to write to SQS from an external service. I found some partial explanations but got stuck at the role creation.
The alternative I found is using AWS services Lambda + API Gateway to write to SQS.
This tutorial was extremely helpful, explaining all the steps in great details:
https://startupnextdoor.com/adding-to-sqs-queue-using-aws-lambda-and-a-serverless-api-endpoint/
You can access SQS from anywhere once you have the proper permissions, via an access key & secret key or an IAM role.
SQS is not specific to a VPC.
It is clear that you are trying to do this:
Take messages from your company's messaging system and send them to SQS.
Your method (using EC2 as a bridge) is not wrong; however, you don't need EC2 to connect to SQS.
All AWS services can be accessed using the AWS API (e.g. Python boto3, etc.) from the Internet, as long as you provide the correct credentials. So you can put your "middleware" anywhere, as long as it is able to establish a connection to the said services.
So there are lots more options available to you, e.g. triggering directly from your messaging system, using AWS Lambda, etc.
Thanks for sharing the information and your insights with me!
I have tested the solution below, which works for my use case:
created an endpoint in AWS API Gateway, which is able to receive messages from the company messaging system, a system that does not carry AWS credentials
created a Lambda function triggered by API Gateway, so once a message arrives, Lambda will digest the JSON message, convert it to TSV, and then load it into RDS
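A minimal sketch of the Lambda side of that setup, assuming an API Gateway proxy integration and a flat JSON message (the field handling is illustrative; the RDS load is out of scope here):

import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) delivers the POST body as a string.
    message = json.loads(event["body"])

    # Convert the flat JSON object to a single TSV line, with keys sorted
    # for a stable column order.
    keys = sorted(message)
    tsv_line = "\t".join(str(message[k]) for k in keys)

    # From here the line can be loaded into RDS (e.g. via a DB driver or an
    # intermediate S3/SQS stage) -- omitted in this sketch.
    return {
        "statusCode": 200,
        "body": json.dumps({"columns": keys, "tsv": tsv_line}),
    }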