Add HSTS to an Azure API Management Service - azure-api-management

On Azure, I created a new API Management service and connected all my APIs behind it.
After a penetration test, the security company reported only one vulnerability: "No HSTS Header observed".
The HTTP Strict Transport Security (HSTS) policy defines a time frame during which a browser must connect to the web server via HTTPS. Without a Strict Transport Security policy, the browser may connect to the application over unencrypted HTTP. The application does not specify any HSTS configuration.
Potential Impact
If the web application mixes HTTP and HTTPS, an attacker can manipulate pages in the unsecured area of the application or change redirection targets so that the switch to the secured page is never performed, or is performed in a way that leaves the attacker between client and server.
If there is no HTTP server, an attacker on the same network could simulate an HTTP server and use social engineering to get the user to click on a prepared URL.
So, my question is: how can I apply this policy across my APIs?

There is no dedicated HSTS setting in Azure API Management; you can only apply the predefined inbound and outbound policies.
You can define/implement HSTS in your API itself if you are using ASP.NET Core.
There we use app.UseHsts(); to apply the HSTS policy in the API.
For a detailed, in-depth explanation, refer to the documentation.
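A minimal sketch of what this can look like in an ASP.NET Core Program.cs; the max-age and other option values below are illustrative choices, not something APIM or the pentest finding prescribes:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Optional: tune the header that UseHsts() emits. These values are illustrative;
// pick a max-age that suits your own rollout.
builder.Services.AddHsts(options =>
{
    options.MaxAge = TimeSpan.FromDays(365);
    options.IncludeSubDomains = true;
});

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    // Adds "Strict-Transport-Security: max-age=31536000; includeSubDomains"
    // to HTTPS responses, so browsers refuse plain HTTP for the configured period.
    app.UseHsts();
}

// Redirect any plain-HTTP requests to HTTPS so clients actually receive the header.
app.UseHttpsRedirection();

app.MapGet("/health", () => "OK");

app.Run();
```

Note that the HSTS middleware skips localhost by default, so you will only see the header when the app runs on a real host over HTTPS.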

Related

How to access secured API Management APIs linked to an Azure Static Web Application for local development with swa cli

How do I call the secured, API Management-linked APIs configured in the Azure portal when developing locally using the SWA CLI? Everything I found in the SWA configuration is meant for Functions as APIs, not for APIM.
https://learn.microsoft.com/en-us/azure/static-web-apps/apis-api-management
When adding API Management APIs to an Azure Static Web App, an automatic proxy product is created on APIM that secures access to the API for this app via the /api prefix on the static web app domain. I did not see any mention of how this works for local development, i.e. how to pass the user claims from the SWA emulator to the API via that proxy.
I was trying to do this recently and I don't think it's possible. My solution was to add a proxy to my dev server (in my case Vite) that forwards all requests on the /api route to the API Management URL, setting the necessary subscription key header.

Is it possible to serve a WebSocket gateway on the same route in Azure API Management?

We are using Azure API Management, which supports WebSocket, but we need two different endpoint routes for it, because you can't create both of these routes to your API:
http(s)://{base_url} and ws(s)://{base_url},
you must differentiate them by adding a suffix, like:
http(s)://{base_url}
ws(s)://{base_url}/{suffix}
or
http(s)://{base_url}/{suffix}
ws(s)://{base_url}
How can we configure the same endpoints?
During the WebSocket passthrough, the client application establishes a WebSocket connection with the API Management gateway.
Check the steps for adding a WebSocket API to APIM here.
Make sure to keep the following limitations in mind:
WebSocket APIs are not supported yet in the Consumption tier.
WebSocket APIs are not supported yet in the self-hosted gateway.
Azure CLI, PowerShell, and the SDK currently do not support management operations for WebSocket APIs.
Refer to this SO thread on including two endpoints for the same backend URL; thanks to Hury for the great explanation. Although it covers a Functions app, the process is similar.
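To make the passthrough concrete, here is a small hypothetical client sketch; the gateway host, the /ws-suffix path, and the subscription key header are placeholders for whatever your APIM instance actually exposes (passing the key as a query parameter is an alternative):

```csharp
using System.Net.WebSockets;
using System.Text;

// Hypothetical values: replace with your APIM gateway host, the API's suffix,
// and a valid subscription key for the product the API belongs to.
var uri = new Uri("wss://my-apim-instance.azure-api.net/ws-suffix");
const string subscriptionKey = "<subscription-key>";

using var client = new ClientWebSocket();
// Assumption: the subscription key is accepted as a header on the upgrade request.
client.Options.SetRequestHeader("Ocp-Apim-Subscription-Key", subscriptionKey);

// The gateway accepts the handshake and opens its own connection to the backend ws(s) service.
await client.ConnectAsync(uri, CancellationToken.None);

// Send one text frame and close; frames are forwarded through the gateway.
var payload = new ArraySegment<byte>(Encoding.UTF8.GetBytes("ping"));
await client.SendAsync(payload, WebSocketMessageType.Text, endOfMessage: true, CancellationToken.None);
await client.CloseAsync(WebSocketCloseStatus.NormalClosure, "done", CancellationToken.None);
```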

How to protect the Backend API against calls other than Azure API Management

I have an ASP.NET Core REST API Service hosted on an Azure Web App. I own its source code and I can change it if required.
I am planning to publish the REST API service with Azure API Management.
I am adding Azure AD authentication in front of Azure API Management, so the API Management front end is secured. All the steps are described here.
All good so far. Here is the question (or challenge?):
Considering that my backend REST API service is hosted on Azure and publicly accessible, how do I protect it against calls other than the API Management calls?
How does the backend service know the identity and AAD group claims of the incoming call, and how does it access those claims?
A link to a code sample or online documentation would be a great help.
Update
While there is some overlap with the following question:
How to prevent direct access to API hosted in Azure app service
... part of this question is still outstanding:
How does the backend service know the identity and AAD group claims of the incoming call, and how does it access those claims?
You can enable a static IP restriction on your Web App so that it only allows incoming traffic from the public VIP of your APIM service (keep in mind that in some specific scenarios the VIP may change, and you will then need to update the whitelist again).
Clients ==> AAD==> VIP APIM Service <==> (VIP APIM allowed) Web App
https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions
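For the claims part of the question, one possible approach (a sketch, not the only option) is to have APIM forward the caller's Azure AD access token to the backend and validate it there with the JwtBearer middleware, then read the group claims from the authenticated user. The tenant ID and audience below are placeholders, the Microsoft.AspNetCore.Authentication.JwtBearer package is assumed, and the "groups" claim only appears if the app registration is configured to emit group claims:

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

// Placeholders: use your own tenant ID and the backend API's App ID URI / client ID.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://login.microsoftonline.com/<tenant-id>/v2.0";
        options.Audience = "api://<backend-app-client-id>";
    });
builder.Services.AddAuthorization();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// Example endpoint that inspects the identity and AAD "groups" claims of the incoming call.
app.MapGet("/whoami", (HttpContext ctx) =>
{
    var name = ctx.User.Identity?.Name ?? "unknown";
    var groups = ctx.User.FindAll("groups").Select(c => c.Value).ToArray();
    return Results.Ok(new { name, groups });
}).RequireAuthorization();

app.Run();
```

Combined with the IP restriction above, this gives you both network-level filtering and per-call identity information in the backend.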

Fiware: How to restrict user access to a specific entity in the Orion Context Broker API using keystone & keypass

First of all, I'm using the Telefonica implementations of Identity Manager, Authorization PDP and PEP Proxy, instead of the Fiware reference implementations which are Keyrock, AuthZForce and Wilma PEP Proxy. The source code and reference documentation of each component can be found in the following GitHub repos:
Telefonica keystone-spassword:
GitHub /telefonicaid/fiware-keystone-spassword
Telefonica keypass:
GitHub /telefonicaid/fiware-keypass
Telefonica PEP-Proxy:
GitHub /telefonicaid/fiware-pep-steelskin
Besides, I'm working with my own in-house installation of the components, NOT Fi-Lab. In addition to the security components, I have an IoT Agent-UL instance and an Orion Context Broker instance.
Starting from that configuration, I've created a domain in keystone (Fiware-Service) and a project inside the domain (Fiware-ServicePath). I then have one device connected to the platform, sending data to the IoT Agent behind the PEP Proxy. The whole device message is represented as a single Entity in Orion Context Broker.
So, the question is:
How can I restrict a specific keystone user to accessing only the entity associated with this device, at the level of the Orion Context Broker API?
I know that I can allow/deny user access to a specific API via keystone Roles and XACML Policies, but that implies that I would have to create one Policy per User-Device pair.
I could use some help with this, to know whether I'm on the right track.
I do not think access control can be applied to Orion without the Security GEs. Each GE has a specific purpose, and access control is not one of Orion's purposes.
As stated in the Security Considerations section of the Orion documentation:
Orion doesn't provide "native" authentication nor any authorization mechanisms to enforce access control. However, authentication/authorization can be achieved using the access control framework provided by FIWARE GEs.
Also, there is something related in another link:
Orion itself has no security. It's designed to be run behind a proxy server which provides security and access control. Used within the FIWARE Lab, they run another service built on node.js, "PEP Proxy Wilma", in front of it. Wilma checks that you have obtained a token from the FIWARE Lab and put it in the headers.
Besides, the link below can endorse my opinion about Orion and access control:
Fiware-Orion: Access control on a per subscription basis
My opinion is that you are on the right track using the other security components.
Regarding "create one Policy per User-Device pair", as you mention, maybe it would be better to think in terms of "group policies" instead.

CAS server with SAML2

I'm starting to work with CAS at my company. This is totally new for me, so I had to read a lot of documents and how-tos to get an idea of how CAS works.
So, we have to provide a single sign-on service on our server to a company with two different applications. One of those uses SAML2.
My CAS server is now working against a MySQL database, so I'll have the users of those two apps in my database to provide the authentication service.
What I'm not clear about is SAML. All the tutorials I've read about integrating SAML2 with CAS 4.0.0 use Google Accounts, and I don't know why. I have some SAML2 configuration in an XML file in my CAS directories, but I don't know how to verify whether it's working or not.
If you are going to authenticate both of the applications against your single database, CAS is enough and SAML is not required. With SAML you can connect to an external application (one that supports SAML); both sides might have their own internal authentication, but they communicate with each other through the SAML2 protocol/agreement.
CAS is ideal if you want to set up web single sign-on across different web applications (exclusively for a single institution) that all use the same authentication (DB, LDAP or whatever). With this, authentication is centralized for all these different applications.
For users from another external institution to use your web application, SAML would be the choice, provided the external application also supports SAML.