Kubernetes ingress same path multiple ports - kubernetes-ingress

After much googling and searching (even here), I'm not able to find a definitive answer to my question. So I hope someone here might be able to point me in the right direction.
I have a Kube Service definition that's already working for me, but right now I've exposed it with just a LoadBalancer. Here's my current Service YAML:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: namespace1
  labels:
    app: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-service
    tier: web
  ports:
  - name: proxy-port
    port: 8080
    targetPort: 8080
  - name: metrics-port
    port: 8082
    targetPort: 8082
  - name: admin-port
    port: 8092
    targetPort: 8092
  - name: grpc-port
    port: 50051
    targetPort: 50051
This is obviously only TCP load-balanced. What I want to do is secure this with Mutual TLS, so that the server will only accept connections from my client with the authorized certificate.
From all I can tell in Kube land, what I need for that is an Ingress definition. I've been researching all the docs I can find on kind: Ingress, and I can't seem to find anything that allows me to create a single Ingress with multiple ports on the same path!
Am I missing something here? Is there no way to create a K8s Ingress that simply has the same functionality as the above Service definition?

To my knowledge, you cannot use custom ports (e.g. 8080) for an HTTPS LoadBalancer backed by an Ingress controller (e.g. the NGINX HTTP(S) proxy): currently the port of an Ingress is implicitly :80 for HTTP and :443 for HTTPS, as the official doc reference for IngressRule explains.
I think the workaround would be to use a different host per service, as in this example Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: proxy.foo.com
    http:
      paths:
      - backend:
          serviceName: proxy-svc
          servicePort: 8080
  - host: metrics.foo.com
    http:
      paths:
      - backend:
          serviceName: metrics-svc
          servicePort: 8082
  - host: admin.foo.com
    http:
      paths:
      - backend:
          serviceName: admin-svc
          servicePort: 8092
  - host: grpc.foo.com
    http:
      paths:
      - backend:
          serviceName: grpc-svc
          servicePort: 50051

I faced the same situation, where we had to expose ports 80, 443 and 50051 on the same host. Using Traefik v2+ on K3s, this is how I solved it:
Apply this file to modify the Traefik config; this is with K3s. If you installed Traefik directly with the chart, add a values file with the same config as below.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      grpc:
        port: 50051
        protocol: TCP
        expose: true
        exposedPort: 50051
After it is applied, watch the Traefik service get updated with the new config.
If it's not working, make sure the ports you've set are free and not used by another service.
When this is done, you can create IngressRoute objects. Here I have one for gRPC (50051) and one for web (80/443).
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: db
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - kind: Rule
      match: Host(`lc1.nebula.global`)
      services:
        - name: db
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: db-grpc
spec:
  entryPoints:
    - grpc
  routes:
    - kind: Rule
      match: Host(`lc1.nebula.global`)
      services:
        - name: db-grpc
          port: 50051
EDIT: If you are running your own Traefik instance installed with the community Helm chart (not the one bundled with K3s), here is the equivalent config I have:
traefik:
  rbac:
    enabled: true
  ports:
    web:
      redirectTo: websecure
    websecure:
      tls:
        enabled: true
    grpc:
      port: 50051
      protocol: TCP
      expose: true
      exposedPort: 50051
  podAnnotations:
    prometheus.io/port: "8082"
    prometheus.io/scrape: "true"
  providers:
    kubernetesIngress:
      publishedService:
        enabled: true
  priorityClassName: "system-cluster-critical"
  tolerations:
    - key: "CriticalAddonsOnly"
      operator: "Exists"
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"
      effect: "NoSchedule"
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
  additionalArguments:
    - "--entrypoints.grpc.http2.maxconcurrentstreams=10000"
  ingress:
    enabled: false
    host: traefik.local
    annotations: {}
In my case, I also increased the max number of concurrent HTTP/2 streams for gRPC.

Related

How to expose MySQL database in kubernetes using Kong Gateway?

I installed Kong using the Helm chart. In values.yaml I specified two TCP ports in the stream section:
stream:
  - containerPort: 39019   # MongoDB
    servicePort: 39019
    protocol: TCP
    parameters:
      - ssl
  - containerPort: 43576   # MySQL
    servicePort: 43576
    protocol: TCP
    parameters:
      - ssl
My intention is to expose one port for MongoDB and another for MySQL.
After that, I created a TCPIngress file for both databases:
one for MySQL
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: tcp-mysql
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: global-file-log
spec:
  tls:
  - hosts:
    - s.mytest.domain
    secretName: s.mytest.domain-certificate
  rules:
  - host: s.mytest.domain
    port: 43576
    backend:
      serviceName: mysql
      servicePort: 3306
and the other for MongoDB
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: tcp-mongodb
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: global-file-log
spec:
  tls:
  - hosts:
    - s.mytest.domain
    secretName: s.mytest.domain-certificate
  rules:
  - host: s.mytest.domain
    port: 39019
    backend:
      serviceName: mongodb
      servicePort: 27017
With this configuration, MongoDB works perfectly, but MySQL doesn't.
I couldn't find a way to connect to MySQL using my test domain.
If I do a port-forward to my MySQL pod, it works as expected.
I'm new to Kong and Kubernetes in general. How can I trace what's going wrong, and how can I solve this?

Streamlit application reloads every 30secs when deployed on kubernetes

Hi, I have deployed a Streamlit application which acts as a UI for downloading data from our platform. After deploying it on Kubernetes, I observe that the application keeps reloading every 30 seconds, which is very annoying.
If I access the app by port-forwarding the service it works fine, but when going through nginx it has the above-mentioned problem.
Has anyone faced this issue?
I was looking into the Streamlit forum and saw the same issue as mine, but there is no clear solution:
Streamlit reruns all 30 seconds
I do see a websocket time of ~30s in the developer tools under the network section in the browser.
My k8s manifest file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: streamlit-deployment
  labels:
    app: streamlit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: streamlit
  template:
    metadata:
      labels:
        app: streamlit
    spec:
      containers:
      - name: streamlit
        image: <image>:<tag>
        imagePullPolicy: Always
        ports:
        - containerPort: 8501
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8501
            scheme: HTTP
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8501
            scheme: HTTP
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 745Mi
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: streamlit
  name: streamlit-service
spec:
  ports:
  - nodePort: 32640
    port: 8501
    protocol: TCP
    targetPort: 8501
  selector:
    app: streamlit
  sessionAffinity: None
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  labels:
    app: streamlit
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /webapp(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: streamlit-service
            port:
              number: 8501
I tried to add the following annotations to the nginx definition, but it didn't work:
nginx.ingress.kubernetes.io/connection-draining: "true"
nginx.ingress.kubernetes.io/connection-draining-timeout: "3000"
Also, I looked into the source code and was wondering if it's because of the Tornado settings in lib/streamlit/web/server/server.py, which has the websocket ping timeout set to 30 seconds:
"websocket_ping_timeout": 30,
I tried to set this to a higher value and build a new image for my deployment, but unfortunately it didn't help.
I'd really appreciate any leads.
Finally, we found out that the issue was due to the default timeout of the GCP load balancer, which was causing the websocket connection to re-initialize every 30 seconds. All we needed to do was change the timeout in the backend configuration of the load balancer. I hope it helps.
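If the load balancer is one provisioned by GKE, one declarative way to change that timeout is a BackendConfig attached to the backing Service. A minimal sketch, assuming GKE and a hypothetical resource name (the 3600-second value is only an example):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: streamlit-backendconfig   # hypothetical name
spec:
  # raise the backend timeout well above the ~30s websocket ping interval
  timeoutSec: 3600
It would be referenced from the existing streamlit-service by adding an annotation such as cloud.google.com/backend-config: '{"ports": {"8501": "streamlit-backendconfig"}}' to its metadata. Otherwise, the same timeout can be changed on the backend service directly in the Cloud Console.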

How to use nginx ingress to route traffic based on port

I'm currently working on deploying the ELK stack on a Kubernetes cluster. I was successfully able to use a ClusterIP service and nginx-ingress on minikube to route inbound HTTP traffic to Kibana (port 5601), and I need inputs on how I can route traffic based on the inbound port rather than the path.
Using the Ingress object declaration below, I was successfully able to connect to my Kibana deployment, but how can I access the other tools in the stack exposed on different ports (9200, 5044, 9600)?
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: ingress-service
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: kibana-service
          servicePort: 5601
Curl'ing the minikube IP on the default port 80 returns a valid response:
# curl http://<minikube-ip>/api/status
{"name":"kibana",....}
Note: I would not want to use NodePort, but would like to know if NodePort is the only way we can achieve the above.
As you already have minikube and the minikube ingress addon enabled:
$ minikube addons list | grep ingress
| ingress | minikube | enabled ✅ |
| ingress-dns | minikube | enabled ✅ |
Just as a reminder:
targetPort: is the port the container accepts traffic on (the port where the application runs inside the pod).
port: is the abstracted Service port, which can be any port other pods use to access the Service.
Please keep in mind that if your container is not listening on the port specified in targetPort, you will not be able to connect to the pod.
Also remember about firewall configuration to allow traffic.
As an example, I've used these YAMLs:
apiVersion: v1
kind: Service
metadata:
  name: service-one
spec:
  selector:
    key: application-1
  ports:
    - port: 81
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      key: application-1
  template:
    metadata:
      labels:
        key: application-1
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-two
spec:
  selector:
    key: application-2
  ports:
    - port: 82
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      key: application-2
  template:
    metadata:
      labels:
        key: application-2
    spec:
      containers:
      - name: hello2
        image: gcr.io/google-samples/hello-app:2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: service-one
          servicePort: 81
      - path: /hello2
        backend:
          serviceName: service-two
          servicePort: 82
service/service-one created
deployment.apps/deployment-1 created
service/service-two created
deployment.apps/deployment-2 created
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.networking.k8s.io/ingress created
Please keep in mind that, as the warning above says, Kubernetes will soon remove this apiVersion, so new Ingress objects should use networking.k8s.io/v1.
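For reference, a rough networking.k8s.io/v1 equivalent of the Ingress above would look like this (a sketch only; in v1, pathType is required and the backend moves under a service block):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: service-one
            port:
              number: 81
      - path: /hello2
        pathType: Prefix
        backend:
          service:
            name: service-two
            port:
              number: 82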
Below is the output of curling this configuration:
$ curl http://172.17.0.3/hello
Hello, world!
Version: 1.0.0
Hostname: deployment-1-77ddb77d56-2l4cp
minikube-ubuntu18:~$ curl http://172.17.0.3/hello2
Hello, world!
Version: 2.0.0
Hostname: deployment-2-fb984955c-5dvbx
You could use:
paths:
- path: /elasticsearch
  backend:
    serviceName: elasticsearch-service
    servicePort: 100
- path: /anotherservice
  backend:
    serviceName: another-service
    servicePort: 101
Where the Services would look like:
name: elasticsearch-service
...
ports:
  - port: 100
    targetPort: 9200
---
name: another-service
...
ports:
  - port: 101
    targetPort: 5044
However, if you need more advanced path handling, you can also use a rewrite (see the sketch below). You can also use a default backend to redirect to a specific service.
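For example, a rewrite strips the path prefix before the request reaches the backend. A minimal sketch with the nginx rewrite-target annotation, reusing the placeholder service from above:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-rewrite
  annotations:
    kubernetes.io/ingress.class: nginx
    # $2 refers to the second capture group of the path regex below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /elasticsearch(/|$)(.*)
        backend:
          serviceName: elasticsearch-service
          servicePort: 100
With this, a request to /elasticsearch/_cat/indices would reach the backend as /_cat/indices.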
More information about accessing Minikube can be found in the Minikube documentation.
Is this what you were looking for, or something different?

Error creating ingress path with GCE + ExternalName

I have an ExternalName service:
apiVersion: v1
kind: Service
metadata:
  name: external
  namespace: default
spec:
  externalName: my-site.com
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
  type: ExternalName
And an Ingress path:
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: external
          servicePort: 443
        path: /*
But saving the ingress returns:
Error during sync: error while evaluating the ingress spec: service "default/external" is type "ExternalName", expected "NodePort" or "LoadBalancer"
GCE ingress should support ExternalName services (or at least there isn't easily findable documentation suggesting otherwise) and that error is hard to track down.
GCE Ingresses do not support type: ExternalName because they are backed by a GCE load balancer as the providing infrastructure, and the GCE LB can't use an ExternalName Service as a backend.
I recommend posting this as a Feature Request on Google's Issue Tracker.

Different ingress in different Namespace in kubernetes

I have created two different namespaces for different environments: one is devops-qa and the other is devops-dev. I created two Ingresses, one in each namespace. After creating the Ingress for the QA environment in the devops-qa namespace, the rules written inside it work fine, meaning I am able to access the QA webpage. But the moment I create the Ingress for the dev environment in the devops-dev namespace, I am able to access the dev webpage but no longer the QA one. And when I delete the dev Ingress, I am again able to access the QA website.
Below are the Ingresses for both the dev and QA environments.
Dev Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: cafe-ingress-dev
  namespace: devops-dev
spec:
  tls:
  - hosts:
    - cafe-dev.example.com
    secretName: default-token-drk6n
  rules:
  - host: cafe-dev.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: miqpdev-svc
          servicePort: 80
QA Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: cafe-ingress-qa
  namespace: devops-qa
spec:
  tls:
  - hosts:
    - cafe-qa.example.com
    secretName: default-token-jdnqf
  rules:
  - host: cafe-qa.example.com
    http:
      paths:
      - path: /greentea
        backend:
          serviceName: greentea-svc
          servicePort: 80
      - path: /blackcoffee
        backend:
          serviceName: blackcoffee-svc
          servicePort: 80
The token secret mentioned in each Ingress file belongs to its own namespace, and the nginx ingress controller is running in the QA namespace.
How can I run both Ingresses and be able to reach all the websites deployed in both the dev and QA environments?
I actually solved my problem. I had done everything correctly; the only thing I had not done was map the hostnames to the same IP in Route53, and instead of accessing the website by hostname I was accessing it by IP. After accessing the website by hostname, I was able to reach it :)
Seems like you posted here and got your answer. The solution is to deploy a different Ingress for each namespace. However, deploying two Ingress controllers complicates matters because one instance has to run on non-standard ports (e.g. 8080, 8443).
I think this is better solved using DNS. Create the CNAME records cafe-qa.example.com and cafe-dev.example.com both pointing to cafe.example.com. Update each Ingress manifest accordingly. Using DNS is somewhat the standard way to separate the Dev/QA/Prod environments.
Had the same issue; here is a way to resolve it:
You just need to add the "--watch-namespace" argument to the ingress controller that sits behind the ingress Service you've linked to your Ingress resource. It will then be bound only to resources within the same namespace that the ingress Service and its pods belong to.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: my-namespace
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress-lb
  template:
    metadata:
      labels:
        name: nginx-ingress-lb
    spec:
      serviceAccountName: ingress-account
      containers:
        - args:
            - /nginx-ingress-controller
            - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
            - "--default-ssl-certificate=$(POD_NAMESPACE)/secret-tls"
            - "--watch-namespace=$(POD_NAMESPACE)"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          name: nginx-ingress-controller
          image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-namespace
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: https
  selector:
    name: nginx-ingress-lb
You can create the nginx ingress controller in the kube-system namespace instead of creating it in the QA namespace.