Setting custom Request Headers through nginx ingress controller - kubernetes-ingress

I have a kubernetes cluster using nginx controller to proxy requests to the backend. There is an LB in the front.
LB <-> Nginx Ingress <-> WLS in K8s
When I terminate SSL at the LB and the backend sends a redirect, the Location header starts with http://. However, WebLogic recognizes the WL-PROXY-SSL request header and will send an https redirect instead.
I am trying to set this request header on the Nginx Ingress controller for specific URL patterns only.
Tried using
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header WL-PROXY-SSL: "true";
It didn't work.
Even tried ....
more_set_headers "WL-PROXY-SSL: true";
nginx.org/location-snippets: |
  proxy_set_header "WL-PROXY-SSL: true";
Also tried the custom-headers ConfigMap, but it sets the header for all resources. And while I see the entry in nginx.conf, it does not take effect, even with the global custom-headers ConfigMap.
Is there any good example of adding this header to the request?
Thanks in advance.
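For reference, a sketch of a snippet that should work with the community ingress-nginx controller: proxy_set_header takes the header name and value as two separate arguments, so the colon after WL-PROXY-SSL must go (and the controller must have allow-snippet-annotations enabled for snippets to apply at all).

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header WL-PROXY-SSL "true";
```

Because annotations apply per-Ingress resource, one way to scope the header to specific URL patterns is to define a separate Ingress covering just those paths and attach the snippet annotation only there.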

Related

Nginx ingress is not able to get real IP with Cloudflare DNS (no proxy)

I am trying to get the client IP (or real IP).
I am using a third-tier cloud provider with basic services, and there is an LB in front.
My current nginx-ingress controller config is:
data:
  allow-snippet-annotations: "true"
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
  real_ip_recursive: "on"
  real-ip-header: "X-Real-IP"
  use-proxy-protocol: "false"
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
And yes, I already turned on the following in the Service:
externalTrafficPolicy: Local
My ingress resource has this:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-For $remote_addr";
However, I have toggled many of the options above and I still cannot retrieve the client IP: Nginx ingress always gives me the node's private IPv4 as both the X-Forwarded-For header and $remote_addr. Note that if I turn on the Cloudflare "Proxied" option it works fine; if I turn it off, it returns the private IPv4 of the Kubernetes node. Because I have other DNS management in other DNS tools, using the Cloudflare proxy is not always an option for me.
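One thing worth checking: with the Cloudflare proxy off, there is no HTTP proxy adding X-Forwarded-For, so real-IP recovery depends entirely on the load balancer in front of the controller. A hedged sketch, assuming the cloud LB supports the PROXY protocol (the key name is from the ingress-nginx ConfigMap):

```yaml
data:
  # Requires PROXY protocol to also be enabled on the cloud LB;
  # if only one side enables it, nginx will fail to parse connections.
  use-proxy-protocol: "true"
```

If the LB does not support PROXY protocol, the remaining option is usually externalTrafficPolicy: Local (already set here) combined with an LB that preserves the client source IP at the TCP level.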

Insecure forms and proxy servers?

I have a web application running on Apache Tomcat 8.5 that is proxied behind NGINX, i.e. I am using NGINX to offload SSL and serve static images etc. The app has been working reliably for years.
Now the Chrome 87 update is causing a warning ("The information that you're about to submit is not secure") on every form submission. I've gone through the code with a fine-tooth comb and I can't figure out what could be triggering it.
The user gets to NGINX on https and the certificate is valid. NGINX forwards the request to Tomcat on port 8080. See config below.
The forms are submitted to the Tomcat server over HTTP, but NGINX should prevent the browser from knowing that. It's https as far as the browser knows...
All tags are written as relative links or implied to be the same URL. e.g.
<form action="/login/login.do" method="post"> or <form method="post">.
Can anyone please point out something to look for? Am I missing a header or something?
Thanks in advance
from NGINX conf.d/site.conf:
location ~ \.(do|jsp)$ {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
}
Seems like there was a change in Chrome 87 to give warnings for mixed forms, so that is probably why those errors are appearing.
Perhaps there are some stray absolute links within your application which are still http, and are not being automatically converted when proxied by nginx?
If you are sure all your content is served over https, you can try enabling this header Content-Security-Policy: upgrade-insecure-requests (more info here) to force browsers to upgrade insecure connections automatically.
I had a similar issue, and in my case it was the response from my app server being a redirect to a different scheme (http) than the one used by the client (https).
If it's your case as well, adding this to your location definition should do the trick. Assuming your app/app server respects this header, then it should respond with the proper scheme (https) on the Location header.
proxy_set_header X-Forwarded-Proto $scheme;
For completeness, excerpt for X-Forwarded-Proto from MDN docs:
The X-Forwarded-Proto (XFP) header is a de-facto standard header for identifying the protocol (HTTP or HTTPS) that a client used to connect to your proxy or load balancer.
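Combining that suggestion with the location block from the question, a sketch (the extra header line is the only change):

```nginx
location ~ \.(do|jsp)$ {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # Tell Tomcat the original scheme so redirects and generated
    # absolute URLs stay on https
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Note that Tomcat only honors X-Forwarded-Proto if a RemoteIpValve is configured in server.xml with protocolHeader="X-Forwarded-Proto"; otherwise the header is ignored.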

Ingress with and without host

It is really getting hard to understand and debug ingress rules. Can anyone share a good reference?
The question is: how does the ingress work without specifying a host?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: http
Upon assigning a host (e.g. host: aws-dns-name.org) it doesn't work.
Upon changing the path to path: /v1/ it also doesn't work :(.
How can I debug/check whether the mapping is done correctly?
Additionally, when should one use extensions/v1beta1 vs networking.k8s.io/v1beta1?
There is pretty good documentation available here for getting started. It may not cover all aspects but it does answer your questions. Ingress controller is basically a reverse proxy and follows similar ideas.
The snippet you have shared is called a single-backend or single-service ingress. The / path is the default. Since it's the only entry, every request on the exposed port will be served by the tied service.
Host entry; host: aws-dns-name.org should work as long as your DNS is resolving aws-dns-name.org to the IP of a node in the cluster or the LB fronting the cluster. Do a ping to that DNS entry and see if it's resolving to the target IP correctly. Try curl -H 'Host: aws-dns-name.org' IP_Address to verify if ingress responding correctly. NGINX is using Host header to decide which backend service to use. If you are sending traffic to IP with a different Host entry, it will not connect to the right service and will serve default-backend.
If you are doing path-based routing, which can be combined with host-based routing as well, NGINX will route to the correct backend service based on the intercepted path. However, just like any other reverse proxy, it will send the request to the specified path (http://service:80/v1/). Your application may not be listening on the /v1/ path, so you will end up with a 404. Use the rewrite-target annotation to let NGINX know that you are serving at /.
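For illustration, a path-based rule with rewrite-target along these lines (the my-app service and the /v1 prefix are assumptions, not from the question):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: aws-dns-name.org
    http:
      paths:
      - path: /v1(/|$)(.*)
        backend:
          serviceName: my-app
          servicePort: http
```

With this, a request to aws-dns-name.org/v1/foo reaches the backend service as /foo, so the application can keep serving at its root.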
API resource versions do change in K8s and can be hard to keep up with. The correct apiVersion now is networking.k8s.io/v1beta1 (networking.k8s.io/v1 starting with 1.19); the old version still works, but it will eventually stop working. I have seen cluster upgrades break applications because somebody forgot to update the API version.

SPA applications (Vue, React, Angular) not working properly behind Nginx ingress controller on Kubernetes

We are using AKS (Azure Kubernetes Service) for managed Kubernetes clusters and for the biggest part we are happy with the benefit the platform brings but we face some issues as well.
On AKS, if you create a Service of type LoadBalancer, it automatically creates a new dynamic IP address (an Azure resource) and assigns it to the Service. This is not ideal if you want to whitelist IP addresses and simply does not scale, hence we switched to the Nginx ingress controller (no particular reason for choosing Nginx). We have a lot of apps (APIs, SPAs), one ingress controller for the whole cluster, and a separate cluster per environment (QA/Staging/Prod etc.), so we need to manage routing somehow, and the ingress path parameter felt like the way to go. Example:
http://region.azurecloud.com/students/
http://region.azurecloud.com/courses/
where students and courses are the ingress paths and then you can add /api/student for example to access a particular API. The result would be http://region.azurecloud.com/students/api/student/1 which is not perfect but does the job for now.
This is what the ingress looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: students-api-ingress
  namespace: university
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: region.azurecloud.com
    http:
      paths:
      - path: /students(/|$)(.*)
        backend:
          serviceName: students-api-service
          servicePort: 8001
This, however, does not work very well with SPA applications such as React, Vue, or Angular; we face the same problem regardless of technology. They are hosted behind Nginx in Docker, so this is what the Dockerfile looks like:
# build environment
FROM node:12.2.0-alpine as build
WORKDIR /app
COPY package*.json /app/
RUN npm install --silent
COPY . /app
RUN npm run build
# production environment
FROM nginx:1.16.0-alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And here is the nginx.conf file:
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html =404;
        index index.html index.htm;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'GET, POST, DELETE, PUT, PATCH, OPTIONS';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Vary' 'Origin';
        }
        add_header 'Access-Control-Allow-Origin' "$http_origin" always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, PATCH, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
    }
    include /etc/nginx/extra-conf.d/*.conf;
}
The problem comes when assets such as .js files or images are requested by the application. It builds the URL in the format ingress.host/asset.name, such as http://region.azurecloud.com/2342424ewfwer.js, instead of including the ingress path as well, which would look like http://region.azurecloud.com/spa/2342424ewfwer.js, and the result is a 404 Not Found error for all assets.
The applications work properly if the ingress path is just set to / without any rewrite annotations, but this is a problem because you cannot have multiple applications using the base ingress host. One solution is to use a separate ingress controller for each SPA application, but that brings us back to the initial issue with load balancers: a separate load balancer and IP address for each SPA app, which is exactly what we want to avoid here.
I guess I am not the only person hosting SPA applications behind the nginx ingress controller on Kubernetes, but all similar topics I managed to find ended pretty much nowhere, with no clear solution or with suggestions that did not work for us. I wonder where the problem comes from (the nginx web server or the ingress controller?) and whether ingress controllers are generally the way to go for managing application routing on Kubernetes. I would appreciate any help or advice on this.
Thank you,
R
The way I usually deal with this for SPAs is to have different hostnames for each SPA. For example, in a non-production cluster having two SPAs named student-portal and teacher-portal, I would create DNS records for student-portal.mydomain.com, teacher-portal.mydomain.com pointing to the public IP of the cluster load balancer.
Include the domain name in the rules of the ingress resource.
I find this is the most efficient way and avoids needing to deal with each SPA framework individually.
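A sketch of such a host-based ingress, assuming hypothetical student-portal-service and teacher-portal-service backends:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spa-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: student-portal.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: student-portal-service
          servicePort: 80
  - host: teacher-portal.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: teacher-portal-service
          servicePort: 80
```

Because each SPA is served from /, no rewrite-target is needed and the asset URLs the bundler generates resolve correctly, while both hosts still share one ingress controller and one public IP.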

Google cloud compute - forward http to https

I'm on Google Cloud Compute Engine with a Go web server (no Apache or nginx). I want to forward all HTTP requests to HTTPS. My Go code calls ListenAndServe on port 8080, and the binary serves HTTPS on port 3000. This was accomplished using the command below.
gcloud compute forwarding-rules create pgurus --global \
  --address xxx.xxx.xxx.xxxx --ip-protocol TCP --ports=3000 \
  --target-http-proxy TARGET_HTTP_PROXY
Thanks in advance!
You can send back a 301 response when you receive an HTTP request. The Google Cloud load balancer sets the X-Forwarded-Proto HTTP header to either http or https. See this answer for details:
https://serverfault.com/a/735223
The HTTP response status code 301 Moved Permanently is used for permanent URL redirection, meaning current links or records using the URL that the response is received for should be updated. The new URL should be provided in the Location field included with the response.