kubernetes-dashboard broken via AWS ALB ingress - kubernetes-ingress

I need some help getting kubernetes-dashboard working properly with an AWS ALB ingress. I have successfully deployed kubernetes-dashboard using the helm chart, and everything works correctly when accessing via kubectl proxy or kubectl port-forward. However, I get just a blank screen when accessing via an AWS ALB.
Not sure if this is relevant, but I've noticed that the <body><kd-root> section is empty when accessed via the ALB, but non-empty when accessing via the other methods (e.g., port-forwarding). I'm wondering if I'm missing some key configuration parameter that makes this all work.
<!--
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Kubernetes Dashboard</title>
<link rel="icon"
type="image/png"
href="assets/images/kubernetes-logo.png" />
<meta name="viewport"
content="width=device-width">
<link rel="stylesheet" href="styles.c3ed2dcd657a389ecc4d.css"></head>
<body>
<kd-root></kd-root>
<script src="runtime.6304db2809b97aa812ee.js" defer></script><script src="polyfills-es5.8f06d415489cadffc1de.js" nomodule defer></script><script src="polyfills.36db5820637aca3bd1e6.js" defer></script><script src="scripts.e296fd4cf14eea7ea0bd.js" defer></script><script src="main.17bd8ead409f8f047d6a.js" defer></script></body>
</html>
I'm using
Kubernetes 1.18 (AWS EKS)
kubernetes-dashboard 2.0.4
kubernetes-dashboard helm chart 2.8.1
chrome 85.0.4183.121
firefox 81.0.2
I'm using a NodePort service. Here's my ingress resource (created by the helm chart).
$ kc get ingress kubernetes-dashboard -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internal
    kubernetes.io/ingress.class: alb
    meta.helm.sh/release-name: kubernetes-dashboard
    meta.helm.sh/release-namespace: default
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    service.alpha.kubernetes.io/app-protocols: '{"https":"HTTPS"}'
  creationTimestamp: "2020-10-13T21:45:48Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.0.4
    helm.sh/chart: kubernetes-dashboard-2.8.1
  name: kubernetes-dashboard
  namespace: default
  resourceVersion: "21742506"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/kubernetes-dashboard
  uid: bda0ce1d-b112-45db-9fa4-c220e3e0e691
spec:
  rules:
  - host: dashboard.my-domain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
status:
  loadBalancer:
    ingress:
    - hostname: long-amazon-alb-url.us-east-1.elb.amazonaws.com

Never mind, I found the problem. I had my ALB rules set to allow / only, instead of /*, which is what I really wanted.
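For anyone hitting the same symptom, here is a rough sketch of the corrected rule, assuming the v1 aws-alb-ingress-controller semantics where the Ingress path is used verbatim as the ALB listener rule's path pattern. With path: / only the bare index document matched, so the hashed JS/CSS bundles presumably never loaded and <kd-root> stayed empty:
apiVersion: extensions/v1beta1
kind: Ingress
# ...same metadata/annotations as above...
spec:
  rules:
  - host: dashboard.my-domain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /*   # "/" matches only the root document; "/*" also routes the asset requests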

Related

Azure Application Gateway Ingress Configuration - retain same URL post moving to root level

My ingress configuration is set up as shown below, and the expected URL to access the site is company.com/product. It is a Tomcat-based application that used to be deployed from product.war. Now that I have moved it to the ROOT level, I am unable to retain the same URL. I would like both
company.com and company.com/product to show as company.com/product in the browser address bar. Currently, the application loads as company.com/login/login.jsp, while the expectation is to load company.com/product/login/login.jsp. This login-page redirection is handled by the application.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    appgw.ingress.kubernetes.io/backend-hostname: company.com
    appgw.ingress.kubernetes.io/backend-protocol: https
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/ingress.class: azure/application-gateway
    #appgw.ingress.kubernetes.io/backend-path-prefix: product
    #nginx.ingress.kubernetes.io/rewrite-target: /product
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* "/" {
        rewrite / https://company.com/product permanent;
      }
  generation: 3
  labels:
    app.kubernetes.io/managed-by: Helm
  name: website-ingress
  namespace: product
spec:
  rules:
  - host: company.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: product-service
            port:
              number: 443
  tls:
  - hosts:
    - company.com
    secretName: product-sslcert
I have tried multiple approaches, like specifying backend-path-prefix, rewrite-target, configuration-snippet, and server-snippet, but nothing has worked so far.
I also resorted to moving the application from product.war to ROOT.war, as my attempts to set up a redirect from / to /product did not succeed either. I was under the impression that moving the application to the root level would allow me more customization.
I took a different approach to solving this problem. Instead of using the ingress, I decided to retain the product.war in webapps and updated the ROOT folder to only have an index.jsp file with this content -
<% response.sendRedirect("/product"); %>
With this change, I am getting the expected result, i.e., company.com now redirects to company.com/product.

How to use Traefik+MetalLB to Expose Kubernetes API (apiserver)

I have microk8s running on my Raspberry Pi, and I'm hoping to use a Traefik IngressRoute to expose the Kubernetes API (apiserver) on my subdomain.
Below is my IngressRoute:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kube-api
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`kubernetes.mydomain.com`)
      kind: Rule
      services:
        - kind: Service
          name: kubernetes
          port: 16443 # have also tried 443
  tls:
    secretName: kubernetes.mydomain.com
This works fine for my other services + IngressRoutes, but not with the API.
For the Kubernetes API, I'm only able to see that my certificate was successfully generated, but the page just displays 'Internal Server Error'.
Please let me know what additional information I can provide and I will gladly do so!
This issue was because Traefik was trying to connect to kube-apiserver over HTTPS and failing to verify its serving certificate.
I had to use a ServersTransport to allow insecure (skip-verify) communication between Traefik and kube-apiserver. This is not a security concern, as the client's communication with Traefik is still verified over SSL.
The way to do this can be found at the very bottom of this page:
https://doc.traefik.io/traefik/v2.4/routing/providers/kubernetes-crd/#kind-serverstransport
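For reference, here is a minimal sketch of that setup (the ServersTransport name is a placeholder of mine, not something from the question):
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: kube-api-transport        # placeholder name
  namespace: default
spec:
  insecureSkipVerify: true        # skip verification of kube-apiserver's self-signed serving cert
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kube-api
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`kubernetes.mydomain.com`)
      kind: Rule
      services:
        - kind: Service
          name: kubernetes
          port: 16443
          serversTransport: kube-api-transport   # reference the transport here
  tls:
    secretName: kubernetes.mydomain.com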

How to set timeout for gloo ingress controller

I am replacing the nginx ingress with the Gloo ingress controller in a Kubernetes cluster and want to set a timeout for the response. There is an annotation for this in nginx:
nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
Is there anything similar to this in gloo-ingress-controller, or do I have to use a VirtualService for this?
The only annotation that you are supposed to use with Gloo is kubernetes.io/ingress.class: gloo, which is the standard way to mark an Ingress object as handled by a specific Ingress controller. This requirement goes away if you make Gloo the default Ingress controller for your cluster. Also, according to the documentation:
If you need more advanced routing capabilities, we encourage you to use Gloo VirtualServices by installing as glooctl install gateway. Gloo Gateway uses Kubernetes Custom Resources instead of Ingress Objects as the only way to configure Ingress' beyond their basic routing spec is to use lots of vendor-specific Kubernetes Annotations to your Kubernetes manifests.
So you are supposed to use a VirtualService in order to achieve your goal. You can see an example below:
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: 'default'
  namespace: 'gloo-system'
spec:
  virtualHost:
    domains:
      - '*'
    routes:
      - matchers:
          - prefix: '/petstore'
        routeAction:
          single:
            upstream:
              name: 'default-petstore-8080'
              namespace: 'gloo-system'
        options:
          timeout: '20s'
          retries:
            retryOn: 'connect-failure'
            numRetries: 3
            perTryTimeout: '5s'
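For completeness, the plain Ingress route mentioned above would look roughly like the sketch below (the resource and service names are illustrative). Note that a plain Ingress has nowhere to express a response timeout, which is why a VirtualService like the one above is needed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: petstore-ingress                # illustrative name
  annotations:
    kubernetes.io/ingress.class: gloo   # the one annotation Gloo honors
spec:
  rules:
  - http:
      paths:
      - path: /petstore
        backend:
          serviceName: petstore         # illustrative service
          servicePort: 8080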
I hope this helps.

Static global IP on GKE using Nginx Ingress?

I am trying to set up an nginx ingress controller in my GKE cluster, and I'd like to use a static global IP address, but I am struggling to figure out how.
After a lot of research, most guides/Stack Overflow answers/blogs just say "use the kubernetes.io/ingress.global-static-ip-name annotation on your ingress resource"; however, that does not do anything.
Below is an example of my Ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"
    nginx.org/websocket-services: "ws-svc"
    kubernetes.io/ingress.global-static-ip-name: my-global-gce-ip
spec:
  tls:
  - secretName: my-secret
    hosts:
    - mysite.com
  rules:
  - host: mysite.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
The service always gets an ephemeral IP address, which is thrown away whenever I recreate the controller.
I suspect the issue at hand here is that the annotation only works for the GCE type of Ingress, not nginx (even though this is stated nowhere).
Next, I attempted setting the IP manually in my ingress resource as shown in this guide, yet when I look at the service created, the external IP address just shows as pending, which some GitHub issues suggest is due to the fact that it is a global and not a regional IP.
With all this in mind, is there any way to have a static global IP on a GKE cluster using an nginx ingress controller?
You have to set the static IP as loadBalancerIP on the nginx ingress controller's Service, not in the Ingress resource (as you did). As per the documentation, loadBalancerIP is the IP address to assign to the load balancer (if supported).
https://github.com/helm/charts/tree/master/stable/nginx-ingress
spec:
  ...
  externalTrafficPolicy: Cluster
  loadBalancerIP: [your static IP]
  sessionAffinity: None
  type: LoadBalancer
And make sure your IP is regional, not global. Only GCP load balancers (the GCP built-in ingress controller) support global IPs.
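As a rough sketch with the stable/nginx-ingress chart, that means reserving a regional address and passing it through the controller's Service values (the address name, region, and IP below are placeholders, not values from the question):
# Reserve a *regional* address first, for example:
#   gcloud compute addresses create nginx-ingress-ip --region us-central1
# Then reference it in the chart values:
controller:
  service:
    type: LoadBalancer
    loadBalancerIP: 35.200.10.10   # the reserved regional IP (placeholder)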

Unhealthy Ingress services

I am trying to deploy an application via GKE. So far, I have created two services and two deployments, one for the front and one for the back of the app.
I created an ingress resource using the "gce" controller and mapped the services as shown:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    part: ingress
  name: my-irool-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: my-ip
spec:
  backend:
    serviceName: client-svc
    servicePort: 3000
  rules:
  - http:
      paths:
      - path: /back
        backend:
          serviceName: back-svc
          servicePort: 9000
  - http:
      paths:
      - path: /back/*
        backend:
          serviceName: back-svc
          servicePort: 9000
It worked almost fine (not all the routes were mapped correctly, but it worked). I made modifications to the code (only the application code), rebuilt the images, and recreated the services, but the ingress did not seem to like the modifications I added, and
all my services became unhealthy.
This is the front service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
    part: front
  name: client
  namespace: default
spec:
  type: NodePort
  ports:
  - nodePort: 32585
    port: 3000
    protocol: TCP
  selector:
    app: app
    part: front
When I do a describe, I get nothing besides the fact that my services are unhealthy.
And at the moment of creation I keep getting:
Warning GCE 6m loadbalancer-controller
googleapi: Error 409: The resource
'[project/idproject]/global/healthChecks/k8s-be-32585--17c7......01'
already exists, alreadyExists
My questions are:
What is wrong with the code shown above? Should I map all the services to port 80 (the default ingress port) so it could work?
What are the readinessProbe and livenessProbe? Should I add them, or should mapping the services to the default backend be enough?
For your first question, deleting and re-creating the ingress may resolve the issue. For the second question, you can review the full steps of configuring Liveness and Readiness probes here. Furthermore, as defined here (as an example for a pod):
livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.
And readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
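As a rough illustration, probes for the front-end container could look something like the sketch below (the image, paths, and timings are assumptions to adapt, not values from the question); note that the GCE ingress controller typically derives its backend health check from the readiness probe's HTTP path:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client                                  # assumed deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
      part: front
  template:
    metadata:
      labels:
        app: app
        part: front
    spec:
      containers:
      - name: front
        image: gcr.io/my-project/front:latest   # placeholder image
        ports:
        - containerPort: 3000
        readinessProbe:
          httpGet:
            path: /                              # must return HTTP 200 for the backend to be marked healthy
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 20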