Nginx ingress unable to get real IP with Cloudflare DNS (no proxy) - kubernetes-ingress

I am trying to get the Client IP (or Real IP).
I am using a third-tier cloud provider with basic services, and there is an LB in front.
My current nginx-ingress controller config is:
data:
  allow-snippet-annotations: "true"
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
  real_ip_recursive: "on"
  real-ip-header: "X-Real-IP"
  use-proxy-protocol: "false"
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
And yes, I already set the following on the Service:
externalTrafficPolicy: Local
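For reference, a minimal sketch of where that setting sits in the controller Service (the name and ports here are placeholders, not taken from the cluster above):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller  # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # keeps the client source IP visible to the pods
  ports:
  - name: http
    port: 80
    targetPort: 80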
My ingress resource has this:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-For $remote_addr";
However, I have tried toggling many of the options above on and off, and I could not retrieve the client IP when a request passes in; Nginx ingress always gives me the node's private IPv4 as the X-Forwarded-For header and $remote_addr. Note that if I turn on Cloudflare's Proxied option, it works fine; if I turn the Proxied option off, it returns the private IPv4 of the k8s node. Because I manage other DNS records in other DNS tools, using Cloudflare proxied is not always an option for me.
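For anyone experimenting with this: with use-forwarded-headers enabled, ingress-nginx only trusts forwarded headers coming from addresses matched by proxy-real-ip-cidr, so one thing worth checking is whether the LB's address range is covered. A sketch (the CIDR below is a placeholder, not the asker's actual range):

data:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  # placeholder CIDR: replace with the address range of the LB fronting the cluster
  proxy-real-ip-cidr: "10.0.0.0/8"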

Related

Setting custom Request Headers through nginx ingress controller

I have a Kubernetes cluster using the nginx controller to proxy requests to the backend. There is an LB in front.
LB <-> Nginx Ingress <-> WLS in K8s
When I terminate SSL at the LB and the backend sends a redirect, the redirect's Location starts with http. However, WebLogic recognizes the WL-PROXY-SSL request header and will then send an https redirect.
I am trying to set the request header on the Nginx Ingress controller for specific URL patterns only.
Tried using:
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header WL-PROXY-SSL: "true";
It didn't work.
Even tried:
more_set_headers "WL-PROXY-SSL: true";
nginx.org/location-snippets: |
  proxy_set_header "WL-PROXY-SSL: true";
Also tried the custom-headers module, but it sets the header for all resources. While I see the entry in nginx.conf, it is not taking effect even with the global custom-headers ConfigMap.
Is there any good example of adding this header to the request ?
Thanks in advance.
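One observation on the snippets above: nginx's proxy_set_header directive takes the header name and value as separate arguments, with no colon after the name, so a variant worth trying is the following (a sketch, not verified against this WebLogic setup):

nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header WL-PROXY-SSL "true";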

Ingress with and without host

It is really getting hard to understand and debug the rules for ingress. Can anyone share a good reference?
The question is: how does the ingress work without specifying the host?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  name: my-app
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: http
        path: /
Upon assigning a host (e.g. host: aws-dns-name.org) it doesn't work.
Upon changing the path to path: /v1/ it also doesn't work :( .
How can I debug/check whether the mapping is correctly done?
Additionally, when should one use extensions/v1beta1 versus networking.k8s.io/v1beta1?
There is pretty good documentation available for getting started. It may not cover all aspects, but it does answer your questions. An ingress controller is basically a reverse proxy and follows similar ideas.
The snippet you have shared is called a single-backend or single-service ingress. The / path would be the default. Since it's the only entry, every request on the exposed port will be served by the tied service.
The host entry (host: aws-dns-name.org) should work as long as your DNS resolves aws-dns-name.org to the IP of a node in the cluster or the LB fronting the cluster. Do a ping to that DNS entry and see if it resolves to the target IP correctly. Try curl -H 'Host: aws-dns-name.org' IP_Address to verify that the ingress responds correctly. NGINX uses the Host header to decide which backend service to use; if you send traffic to the IP with a different Host entry, it will not connect to the right service and will serve the default backend.
If you are doing path-based routing, which can be combined with host-based routing as well, NGINX will route to the correct backend service based on the intercepted path. However, just like any other reverse proxy, it will send the request to the specified path (http://service:80/v1/). Your application may not be listening on the /v1/ path, so you will end up with a 404. Use the rewrite-target annotation to let NGINX know that you are serving at /.
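For illustration, a sketch of such a path-based rule with the rewrite-target annotation (the capture-group path and names here are illustrative, not taken from the question):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /v1(/|$)(.*)
        backend:
          serviceName: my-app
          servicePort: http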
API resource versions do change in K8s and can be hard to keep up with. The correct apiVersion now is networking.k8s.io/v1beta1 (networking.k8s.io/v1 starting with 1.19); the old version still works but will eventually stop working. I have seen cluster upgrades break applications because somebody forgot to update the API version.
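For reference, the same single-service rule under the networking.k8s.io/v1 schema looks roughly like this (note the required pathType field and the restructured backend; a sketch):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              name: http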

SPA applications (Vue, React, Angular) not working properly behind Nginx ingress controller on Kubernetes

We are using AKS (Azure Kubernetes Service) for managed Kubernetes clusters, and for the most part we are happy with the benefits the platform brings, but we face some issues as well.
On AKS, if you host a service of LoadBalancer type, it automatically creates a new dynamic IP address (an Azure resource) and assigns it to the service. This is not optimal if you want to whitelist IP addresses, and it simply does not make sense for us, hence we switched to the Nginx ingress controller (no particular reason for choosing Nginx). We have a lot of apps - APIs and SPAs - one ingress controller for the whole cluster, and a separate cluster per environment (QA/Sta/Prod etc.). So we need to manage routing somehow, and the ingress path parameter felt like the way to go. Example:
http://region.azurecloud.com/students/
http://region.azurecloud.com/courses/
where students and courses are the ingress paths, and then you can add /api/student, for example, to access a particular API. The result would be http://region.azurecloud.com/students/api/student/1, which is not perfect but does the job for now.
This is what the ingress looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: students-api-ingress
  namespace: university
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: https://region.azurecloud.com
    http:
      paths:
      - backend:
          serviceName: students-api-service
          servicePort: 8001
        path: /students(/|$)(.*)
This, however, does not work very well with SPA applications such as React, Vue or Angular. We face the same problem regardless of the technology. They are hosted behind Nginx in Docker, so this is what the Dockerfile looks like:
# build environment
FROM node:12.2.0-alpine as build
WORKDIR /app
COPY package*.json /app/
RUN npm install --silent
COPY . /app
RUN npm run build
# production environment
FROM nginx:1.16.0-alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And here is the nginx.conf file:
server {
listen 80;
location / {
root /usr/share/nginx/html;
try_files $uri $uri/ /index.html =404;
index index.html index.htm;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin: $http_origin');
add_header 'Access-Control-Allow-Origin: GET, POST, DELETE, PUT, PATCH, OPTIONS');
add_header 'Access-Control-Allow-Credentials: true');
add_header 'Vary: Origin');
}
add_header 'Access-Control-Allow-Origin' "$http_origin" always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, PATCH, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
}
include /etc/nginx/extra-conf.d/*.conf;
}
The problem comes when assets such as .js files or images are accessed by the application. It builds the URL in the format ingress.host/asset.name, such as http://region.azurecloud.com/2342424ewfwer.js, instead of including the ingress path as well, which would look like http://region.azurecloud.com/spa/2342424ewfwer.js,
and the result is a 404 Not Found error for all assets.
The applications work properly if the ingress path is just set to / without any rewrite annotations, but this is a problem because you cannot have multiple applications using the base ingress host. One solution is to use a separate ingress controller for each SPA application, but this brings us back to the initial issue with load balancers: a separate load balancer and IP address for each SPA app, which is what we want to avoid here.
I guess I am not the only person hosting SPA applications behind the nginx ingress controller on Kubernetes, but all the similar topics I managed to find ended pretty much nowhere, with no clear solution of what should be done, or the suggestions did not work for us. I wonder where the problem comes from - the nginx web server or the ingress controller - and whether ingress controllers are generally the way to go for managing application routing on Kubernetes. I would appreciate any help or advice on this.
Thank you,
R
The way I usually deal with this for SPAs is to have a different hostname for each SPA. For example, in a non-production cluster with two SPAs named student-portal and teacher-portal, I would create DNS records for student-portal.mydomain.com and teacher-portal.mydomain.com pointing to the public IP of the cluster load balancer.
Include the domain name in the rules of the ingress resource.
I find this is the most efficient way and avoids needing to deal with each SPA framework individually.
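For illustration, a per-SPA host rule might look roughly like this (the hostname and service name are placeholders; the path stays /, so no rewrite annotation is needed):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: student-portal-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: student-portal.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: student-portal-service
          servicePort: 80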

How to enable CORS with ingress without using nginx?

I'm trying to set up a RESTful API application with Kubernetes. I have a barebones setup with a cluster, a static IP address, the app deployed with an exposed service of type NodePort, and an ingress configured with a managed certificate for SSL. I need to enable CORS and I am not yet using nginx. Is it possible, or do I need to install nginx instead of the default gce class?
Here is my ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "artsdasta-static-ip"
    networking.gke.io/managed-certificates: artsdata-certificate
    ingress.kubernetes.io/enable-cors: "true"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80
To check I am using curl as follows:
curl -H "Access-Control-Request-Method: GET" -H "Origin: http://localhost" --head http://db.artsdata.ca
I am expecting the response to include Access-Control-Allow-*
Currently the CORS mechanism is not supported by the GCP L7 load balancer, therefore the ingress-gce ingress controller does not contain an appropriate annotation to accomplish this functionality; see the related Stack thread.
If you consider replacing the native GCP Ingress class with the Nginx Ingress Controller in order to enable cross-origin requests, then you need to include at least two annotations in the original Ingress resource definition:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
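Applied to the Ingress above, that would look roughly like this (a sketch, assuming the same service):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80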
I've found a great guideline in the GCP community tutorials that explains the Nginx Ingress Controller implementation procedure in GKE.
There are also other L7 proxy frameworks available on the market that can handle CORS requests, like Traefik, Skipper, etc.

OpenShift Hazelcast

Is it possible to open a port for Hazelcast on OpenShift? No matter what port I try, I get the same exception:
SocketException: Permission denied
I am not trying to open the port to the world. I just want to open a port so the gears can use Hazelcast. It seems like this should be possible.
You'll probably have to use an HTTP tunnel to connect Hazelcast; not a nice solution, but I prototyped it some time ago: https://github.com/noctarius/https-tunnel-openshift-hazelcast
Anyhow, gears imply OpenShift V2, right? I never tried it with V2, but if you get the chance, there's support for V3 (and V3.1) - http://blog.hazelcast.com/openshift/
What cartridge type do you use?
You can bind to any port from 15000 to 35530 internally, but other gears won't be able to access it.
From my experience - I had to open the public proxy port for other members of the cluster to join.
For example, the Vert.x cartridge uses Hazelcast for clustering and has some additional public proxy ports open (see https://github.com/vert-x/openshift-cartridge/blob/master/metadata/manifest.yml).
Endpoints:
  - Private-IP-Name: IP
    Private-Port-Name: PORT
    Private-Port: 8080
    Public-Port-Name: PROXY_PORT
    Mappings:
      - Frontend: ""
        Backend: ""
        Options: { "websocket": 1 }
  - Private-IP-Name: IP
    Private-Port-Name: HAZELCAST_PORT
    Private-Port: 5701
    Public-Port-Name: HAZELCAST_PROXY_PORT
  - Private-IP-Name: IP
    Private-Port-Name: CLUSTER_PORT
    Private-Port: 9123
    Public-Port-Name: CLUSTER_PROXY_PORT
(see https://access.redhat.com/documentation/en-US/OpenShift_Online/2.0/html/Cartridge_Specification_Guide/chap-Exposing_Services.html).
On OpenShift, you should only bind websockets to either port 8000 or 8443.
See:
https://developers.openshift.com/en/managing-port-binding-routing.html
https://blog.openshift.com/paas-websockets/