Path (AKS Application Gateway Ingress Controller) - kubernetes-ingress

I have a cluster in AKS with AGIC... the routing has been changed so that the URL appears as, for example, table.system.com/login or table.system.com/box.
If I go to table.system.com it automatically redirects me to table.system.com/login without any problem, but if I press F5 there it throws a "404 Not Found", and if I type table.system.com/login directly into the address bar I get the same "404 Not Found" error (which is an external request and goes through the ingress).
I thought a path of / with pathType "Prefix" was supposed to match everything after the / of my URL (/* doesn't work).
The same thing happens with all the internal links: if I navigate from within the web app there is no problem, but if I refresh the page or type the URL manually, I get the 404 error.
my ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-ingress
  namespace: test
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: "cert-system.cl"
spec:
  rules:
    - host: table.system.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-service
                port:
                  number: 80
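If the 404 on a refresh turns out to come from the frontend pod itself rather than from Application Gateway (easy to check by port-forwarding to test-service and requesting /login directly), the usual fix for a single-page app is a fallback to index.html in the pod's web server. A minimal sketch, assuming the frontend is served by nginx inside the pod; the ConfigMap name and mount are hypothetical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-nginx-conf   # hypothetical name
  namespace: test
data:
  default.conf: |
    server {
      listen 80;
      root /usr/share/nginx/html;
      location / {
        # serve the requested file if it exists, otherwise fall back to the SPA entry point
        try_files $uri $uri/ /index.html;
      }
    }

Mounted at /etc/nginx/conf.d/ in the frontend container, this makes deep links such as /login return the SPA shell instead of a 404 on a hard refresh.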

Related

Azure Application Gateway Ingress Configuration - retain same URL post moving to root level

My ingress configuration is set up as below, and the expected URL to access the site is company.com/product. It is a Tomcat-based application which used to be extracted from product.war. Now that I have moved it to the ROOT level, I am unable to retain the same URL. I would like both
company.com and company.com/product to show as company.com/product in the browser address bar. Currently, the application loads as company.com/login/login.jsp, while the expectation is that it loads company.com/product/login/login.jsp. This login-page redirection is handled by the application.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    appgw.ingress.kubernetes.io/backend-hostname: company.com
    appgw.ingress.kubernetes.io/backend-protocol: https
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/ingress.class: azure/application-gateway
    #appgw.ingress.kubernetes.io/backend-path-prefix: product
    #nginx.ingress.kubernetes.io/rewrite-target: /product
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* "/" {
        rewrite / https://company.com/product permanent;
      }
  generation: 3
  labels:
    app.kubernetes.io/managed-by: Helm
  name: website-ingress
  namespace: product
spec:
  rules:
    - host: company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: product-service
                port:
                  number: 443
  tls:
    - hosts:
        - company.com
      secretName: product-sslcert
I have tried multiple approaches like specifying backend-path-prefix, rewrite-target, configuration-snippet, and server-snippet, but nothing has worked so far.
I also resorted to moving the application from product.war to ROOT.war, as my attempts to set up a redirect from / to /product did not succeed either. I was under the impression that moving the application to the root level would allow me more customizations.
I took a different approach to solving this problem. Instead of using the ingress, I decided to retain the product.war in webapps and updated the ROOT folder to only have an index.jsp file with this content -
<% response.sendRedirect("/product"); %>
With this change, I am getting the expected results i.e., company.com now redirects to company.com/product

Path based routing fails to match properly with Traefik Ingress in Kubernetes

I have a Kubernetes ingress for an application where I'm using path-based routing.
The cluster is running on Google Cloud Kubernetes Engine and my ingress controller is Traefik v2.4.
Some of my links are:
https://www.kwetter.org/ -> Homepage (Frontend)
https://www.kwetter.org/profile -> Profile page (Frontend)
https://www.kwetter.org/messages -> Messages page (Frontend)
https://www.kwetter.org/api/auth/connect -> OAuth endpoints (IdentityServer)
https://www.kwetter.org/api/auth/users -> User endpoints (IdentityServer)
The logic I want is for anything matching the path /* to go to the frontend, and anything matching /api/auth/* to be routed to the identity server.
However, only exact paths are routed: https://www.kwetter.org/ works, https://www.kwetter.org/profile doesn't.
Same for the other service: https://www.kwetter.org/api/auth works, https://www.kwetter.org/api/auth/users doesn't.
My ingress looks like this:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: traefik-ingress
  annotations:
    networking.gke.io/managed-certificates: kwetter-certificate
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
spec:
  rules:
    - host: kwetter.org
      http:
        paths:
          - path: /
            backend:
              serviceName: kwetter-web-app
              servicePort: 80
          - path: /api/auth
            pathType: Prefix
            backend:
              serviceName: kwetter-identity-server
              servicePort: 80
    - host: www.kwetter.org
      http:
        paths:
          - path: /
            backend:
              serviceName: kwetter-web-app
              servicePort: 80
          - path: /api/auth
            pathType: Prefix
            backend:
              serviceName: kwetter-identity-server
              servicePort: 80
The page loads fine for the frontend, but the static files return a 404 with the Traefik message "response 404 (backend NotFound), service rules for the path non-existent". The full URL is https://kwetter.org/static/js/2.2217857e.chunk.js and, with pathType: Prefix, this should match the "/" path.
Can anybody tell me where I'm going wrong?
Edit for Solution:
I have tried the rewrite-target based solution, but it conflicted with my API controllers at the service they reached.
Eventually I just tried to put a star in the path:
path: /*
path: /api/auth/*
This solved the whole routing issue; I didn't know this was even possible.
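For reference, the paths section that ended up working looks roughly like this (a sketch based on the ingress above, with the wildcard paths substituted in):

paths:
  - path: /*
    backend:
      serviceName: kwetter-web-app
      servicePort: 80
  - path: /api/auth/*
    backend:
      serviceName: kwetter-identity-server
      servicePort: 80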
The logic that I want is to have anything matching the path /* going to the frontend, and anything matching /api/auth/* to be routed to identity server.
You need to use regular expressions and the rewrite-target annotation in your .yaml files. Look at this example Ingress .yaml file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
    - host: rewrite.bar.com
      http:
        paths:
          - backend:
              serviceName: http-svc
              servicePort: 80
            path: /something(/|$)(.*)
In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.
For example, the ingress definition above will result in the following rewrites:
rewrite.bar.com/something rewrites to rewrite.bar.com/
rewrite.bar.com/something/ rewrites to rewrite.bar.com/
rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
You can find more information about the rewrite-target annotation here.
You can find similar tips in the Traefik documentation.
# Replace path with regex
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-replacepathregex
spec:
  replacePathRegex:
    regex: ^/foo/(.*)
    replacement: /bar/$1
Note that in this case the YAML looks slightly different. If you want to build regular expressions for Traefik, you can test them here.
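To actually apply such a Middleware, it has to be referenced from a Traefik route, for example an IngressRoute. A minimal sketch, with a hypothetical host and backend service name:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: test-ingressroute
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`example.com`) && PathPrefix(`/foo`)
      kind: Rule
      middlewares:
        - name: test-replacepathregex   # the Middleware defined above
      services:
        - name: my-service              # hypothetical backend service
          port: 80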

Nginx ingress returns 502 after POST with redirect

My hosting provider is DigitalOcean. The main page (i.e. /) requires the user to be authenticated. If the user is not authenticated, they are redirected to the identity server. Once the user enters their credentials, a POST request is sent to the application as the last step of the OAuth flow.
The application receives this request and handles it correctly, as verified by the logs it produces. It then redirects to the main page https://ui.example.com (302 status code + some cookies).
However, the user sees a 502 error message issued by the gateway.
The Ingress configuration is very simple (and it works for GET requests):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rie-ui-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - ui.example.com
      secretName: rie-ui-prd-tls
  rules:
    - host: ui.example.com
      http:
        paths:
          - backend:
              serviceName: rie-ui-svc
              servicePort: 9000
I'm wondering what could be wrong with this configuration?
UPDATE 1: The following log message was found in the log stream of the Ingress controller:
2019/11/20 05:33:09 [error] 1465#1465: *813467 upstream sent too big header while reading response header from upstream, client: 10.131.18.136, server: ui.example.com, request: "POST /signin-oidc HTTP/2.0", upstream: "http://10.244.1.228:9000/signin-oidc", host: "ui.example.com"
I have the impression that this is something to fix in the Nginx ingress controller settings?
ANSWER:
After studying the documentation, the following changes were made to the Ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rie-ui-ingress
  annotations:
    ...
    nginx.ingress.kubernetes.io/proxy-buffer-size: "64k"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
These settings increase buffer sizes. More details can be found on the following page.
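If you prefer to raise the buffers for every ingress handled by the controller rather than per Ingress, the equivalent keys can go in the controller's ConfigMap; a sketch, assuming a recent ingress-nginx release and its default ConfigMap name and namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on how the controller was installed
  namespace: ingress-nginx
data:
  proxy-buffer-size: "64k"
  proxy-buffers-number: "8"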

how to make ingress nginx return 200 on each request

I want my ingress nginx to return 200 on every request.
If I had access to the nginx configuration, I would have done something like this:
location = /health {
  return 200;
}
But I'm not sure how to do it in the ingress configuration YAML.
Consider that the Kubernetes Ingress object, when using the Nginx controller, is mostly meant to do routing rather than serve requests by itself. What actually serves the requests is the backend deployed in the cluster, and that is the part returning the status codes, not the ingress.
The controller has a feature somewhat similar to what you want, but for errors only. It only makes the ingress add some headers so that a backend can interpret them and return some non-standard code response.
It might be possible to make those respond 200 if you modify this backend. However, I find it less disruptive and more straightforward to simply catch all the incoming requests in the ingress and route them to a custom Nginx backend that always responds 200 (you already have the Nginx configuration for that):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
  labels:
    app: all-good
  name: happy-ingress
spec:
  rules:
    - host: "*"
      http:
        paths:
          - path: /(.*)
            backend:
              serviceName: ok-status-test
              servicePort: 80
With this approach, you can even add non-200 backends to it and match them using regex, so the ingress is fully reusable.
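The catch-all backend referenced above (ok-status-test) could be as simple as a stock nginx image with a one-line server block; a sketch of the ConfigMap side, with a hypothetical name, mounted into the ok-status-test pods at /etc/nginx/conf.d/:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ok-status-conf   # hypothetical name
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # answer every request with an empty 200
        return 200;
      }
    }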

Unhealthy Ingress services

I am trying to deploy an application on GKE. So far I have created two services and two deployments, one for the front end and one for the back end of the app.
I created an Ingress resource using the "gce" controller and mapped the services as shown:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    part: ingress
  name: my-irool-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: my-ip
spec:
  backend:
    serviceName: client-svc
    servicePort: 3000
  rules:
    - http:
        paths:
          - path: /back
            backend:
              serviceName: back-svc
              servicePort: 9000
    - http:
        paths:
          - path: /back/*
            backend:
              serviceName: back-svc
              servicePort: 9000
It worked almost fine (not all the routes were mapped correctly, but it worked). I made changes to the code (only the application code), rebuilt the images and recreated the services, but the ingress seemed angry with the modifications I added and all my services ended up in an unhealthy state.
This is the front service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
    part: front
  name: client
  namespace: default
spec:
  type: NodePort
  ports:
    - nodePort: 32585
      port: 3000
      protocol: TCP
  selector:
    app: app
    part: front
When I do a describe, I get nothing besides the fact that my services are unhealthy.
And at the moment of creation I keep getting:
Warning GCE 6m loadbalancer-controller
googleapi: Error 409: The resource
'[project/idproject]/global/healthChecks/k8s-be-32585--17c7......01'
already exists, alreadyExists
My question is:
What is wrong with the code shown above? Should I map all the services to port 80 (the default ingress port) so it could work?
What are readinessProbe and livenessProbe? Should I add them, or is mapping the services to the default backend enough?
For your first question, deleting and re-creating the ingress may resolve the issue. For the second question, you can review the full steps of configuring Liveness and Readiness probes here. Furthermore, as defined here (as an example for a pod):
livenessProbe: Indicates whether the Container is running. If the
liveness probe fails, the kubelet kills the Container, and the
Container is subjected to its restart policy. If a Container does not
provide a liveness probe, the default state is Success.
And readinessProbe: Indicates whether the Container is ready to
service requests. If the readiness probe fails, the endpoints
controller removes the Pod’s IP address from the endpoints of all
Services that match the Pod. The default state of readiness before the
initial delay is Failure. If a Container does not provide a readiness
probe, the default state is Success.
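As a concrete illustration for the second question, here is a minimal sketch of what HTTP probes could look like on the front-end container; the image, probe path and timings are assumptions based on the Service above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  labels:
    app: app
    part: front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
      part: front
  template:
    metadata:
      labels:
        app: app
        part: front
    spec:
      containers:
        - name: front
          image: gcr.io/my-project/front:latest   # placeholder image
          ports:
            - containerPort: 3000
          readinessProbe:            # the GCE load balancer also expects a 200 on its health-check path
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 10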