I installed Kong using the Helm chart. In values.yaml I specified two TCP ports in the stream section:
stream:
  - containerPort: 39019   # MongoDB
    servicePort: 39019
    protocol: TCP
    parameters:
    - ssl
  - containerPort: 43576   # MySQL
    servicePort: 43576
    protocol: TCP
    parameters:
    - ssl
My intention is to expose one port for MongoDB and another for MySQL.
After that, I created a TCPIngress resource for each database:
one for MySQL
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: tcp-mysql
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: global-file-log
spec:
  tls:
  - hosts:
    - s.mytest.domain
    secretName: s.mytest.domain-certificate
  rules:
  - host: s.mytest.domain
    port: 43576
    backend:
      serviceName: mysql
      servicePort: 3306
and the other for MongoDB
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: tcp-mongodb
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: global-file-log
spec:
  tls:
  - hosts:
    - s.mytest.domain
    secretName: s.mytest.domain-certificate
  rules:
  - host: s.mytest.domain
    port: 39019
    backend:
      serviceName: mongodb
      servicePort: 27017
With this configuration, MongoDB works perfectly, but MySQL doesn't.
I couldn't find a way to connect to MySQL using my test domain.
If I do a port-forward to my MySQL pod, it works as expected.
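(The port-forward I use looks roughly like this; the pod name is just a placeholder:)

  kubectl port-forward pod/<mysql-pod-name> 3306:3306
  mysql -h 127.0.0.1 -P 3306 -u root -p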
I'm new to Kong and Kubernetes in general. How can I trace what's going wrong, and how can I solve this?
Related
I'm currently working on deploying the ELK stack on a Kubernetes cluster. I was able to use a ClusterIP Service and nginx-ingress on minikube to route inbound HTTP traffic to Kibana (port 5601), and I need input on how to route traffic based on the inbound port rather than the path.
Using the Ingress object declaration below, I was able to connect to my Kibana deployment, but how can I access the other tools in the stack, which are exposed on different ports (9200, 5044, 9600)?
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: ingress-service
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: kibana-service
          servicePort: 5601
Curling the minikube IP on the default port 80 returns a valid response:
# curl http://<minikube-ip>/api/status
{"name":"kibana",....}
Note: I would prefer not to use NodePort, but I would like to know whether NodePort is the only way to achieve the above.
As you already have minikube and the minikube ingress addon enabled:
$ minikube addons list | grep ingress
| ingress | minikube | enabled ✅ |
| ingress-dns | minikube | enabled ✅ |
Just as a reminder:
targetPort: is the port the container accepts traffic on (the port where the application runs inside the pod).
port: is the abstracted Service port, which can be any port that other pods use to access the Service.
Please keep in mind that if your container is not listening on the port specified in targetPort, you will not be able to connect to the pod.
Also remember to configure your firewall to allow the traffic.
As an example, I've used these YAMLs:
apiVersion: v1
kind: Service
metadata:
  name: service-one
spec:
  selector:
    key: application-1
  ports:
  - port: 81
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      key: application-1
  template:
    metadata:
      labels:
        key: application-1
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-two
spec:
  selector:
    key: application-2
  ports:
  - port: 82
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      key: application-2
  template:
    metadata:
      labels:
        key: application-2
    spec:
      containers:
      - name: hello2
        image: gcr.io/google-samples/hello-app:2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: service-one
          servicePort: 81
      - path: /hello2
        backend:
          serviceName: service-two
          servicePort: 82
service/service-one created
deployment.apps/deployment-1 created
service/service-two created
deployment.apps/deployment-2 created
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.networking.k8s.io/ingress created
Please keep in mind that, as the warning above indicates, you will soon need to switch to the networking.k8s.io/v1 apiVersion.
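For reference, a rough sketch of the same Ingress in the networking.k8s.io/v1 API would look like this (pathType is mandatory there; the services and ports are the ones from the example above, and ingressClassName assumes the nginx class is registered as an IngressClass):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: ingress
  spec:
    ingressClassName: nginx
    rules:
    - http:
        paths:
        - path: /hello
          pathType: Prefix
          backend:
            service:
              name: service-one
              port:
                number: 81
        - path: /hello2
          pathType: Prefix
          backend:
            service:
              name: service-two
              port:
                number: 82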
Below is the output of this configuration:
$ curl http://172.17.0.3/hello
Hello, world!
Version: 1.0.0
Hostname: deployment-1-77ddb77d56-2l4cp
minikube-ubuntu18:~$ curl http://172.17.0.3/hello2
Hello, world!
Version: 2.0.0
Hostname: deployment-2-fb984955c-5dvbx
You could use:
paths:
- path: /elasticsearch
  backend:
    serviceName: elasticsearch-service
    servicePort: 100
- path: /anotherservice
  backend:
    serviceName: another-service
    servicePort: 101
Where the services would look like:
name: elasticsearch-service
...
ports:
- port: 100
  targetPort: 9200
---
name: another-service
...
ports:
- port: 101
  targetPort: 5044
However, if you need more advanced path handling, you can also use a rewrite (a sketch follows below), and you can use a default backend to redirect to a specific service.
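As a sketch of the rewrite idea (using the NGINX Ingress Controller's rewrite-target annotation and the hypothetical elasticsearch-service from above), a capture group can strip the path prefix before the request is proxied:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    name: rewrite-example
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/rewrite-target: /$2
  spec:
    rules:
    - http:
        paths:
        - path: /elasticsearch(/|$)(.*)
          backend:
            serviceName: elasticsearch-service
            servicePort: 100

With this, a request to /elasticsearch/_cluster/health would reach the backend as /_cluster/health.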
More information about accessing Minikube can be found in the Minikube documentation.
Is this what you were looking for, or something different?
I have the following deployment...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          subPath: "mysql"
          name: mysql-data
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: ROOT_PASSWORD
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-data-disk
This works great; I can access the DB like this...
kubectl exec -it mysql-deployment-<POD-ID> -- /bin/bash
Then I run...
mysql -u root -h localhost -p
And I can log into it. However, when I try to access it as a Service by using the following YAML...
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
I can see it by running kubectl describe service mysql-service, which gives...
Name: mysql-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mysql-service","namespace":"default"},"spec":{"ports":[{"port":33...
Selector: app=mysql
Type: ClusterIP
IP: 10.101.1.232
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 172.17.0.4:3306
Session Affinity: None
Events: <none>
and I get the IP by running kubectl cluster-info:
#kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
but when I try to connect using Oracle SQL Developer, it says it cannot connect.
How do I connect to the MySQL running on K8s?
A Service of type ClusterIP will not be accessible outside of the pod network.
If you don't have the LoadBalancer option, then you have to use either a Service of type NodePort or kubectl port-forward, as sketched below.
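A quick sketch of the port-forward option, using the service name from the question:

  kubectl port-forward service/mysql-service 3306:3306
  # then point the client (e.g. SQL Developer) at 127.0.0.1:3306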
You need your mysql service to be of type NodePort instead of ClusterIP to access it from outside Kubernetes.
Use the node port in your client config.
Example Service:
apiVersion: v1
kind: Service
metadata:
name: mysql-service
spec:
type: NodePort
selector:
app: mysql
ports:
- protocol: TCP
port: 3306
nodePort: 30036
targetPort: 3306
Then you can use port 30036 in your client.
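For example, with the mysql CLI (assuming 192.168.99.100 from your kubectl cluster-info output is a node address reachable from your machine):

  mysql -h 192.168.99.100 -P 30036 -u root -p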
After much googling and searching (even here), I'm not able to find a definitive answer to my question. So I hope someone here might be able to point me in the right direction.
I have a Kube Service definition that's already working for me, but right now I've simply exposed it with just a LoadBalancer. Here's my current Service yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: namespace1
  labels:
    app: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-service
    tier: web
  ports:
  - name: proxy-port
    port: 8080
    targetPort: 8080
  - name: metrics-port
    port: 8082
    targetPort: 8082
  - name: admin-port
    port: 8092
    targetPort: 8092
  - name: grpc-port
    port: 50051
    targetPort: 50051
This is obviously only TCP load-balanced. What I want to do is secure this with mutual TLS, so that the server will only accept connections from my client with an authorized certificate.
From all I can tell in Kube land, what I need for that is an Ingress definition. I've been researching all the docs I can find on kind: Ingress, and I can't seem to find anything that allows me to create a single Ingress with multiple ports on the same path!
Am I missing something here? Is there no way to create a K8s Ingress that simply has the same functionality as the above Service definition?
To my knowledge, you cannot use custom ports (e.g. 8080) for an HTTPS LoadBalancer backed by an Ingress controller (e.g. an NGINX HTTP(S) proxy), as currently the port of an Ingress is implicitly :80 for HTTP and :443 for HTTPS, as the official IngressRule reference explains.
I think the workaround would be to use a different host per service, as in this example Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: proxy.foo.com
    http:
      paths:
      - backend:
          serviceName: proxy-svc
          servicePort: 8080
  - host: metrics.foo.com
    http:
      paths:
      - backend:
          serviceName: metrics-svc
          servicePort: 8082
  - host: admin.foo.com
    http:
      paths:
      - backend:
          serviceName: admin-svc
          servicePort: 8092
  - host: grpc.foo.com
    http:
      paths:
      - backend:
          serviceName: grpc-svc
          servicePort: 50051
I faced the same situation, where we had to expose ports 80, 443, and 50051 on the same host. Using Traefik v2+ on K3s, this is how I solved it:
Apply this file to modify the Traefik config; this is with K3s. If you have installed Traefik directly with the chart, add a values file with the same config as below.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      grpc:
        port: 50051
        protocol: TCP
        expose: true
        exposedPort: 50051
Once that is done, watch the traefik Service get updated with the new config.
If it's not working, make sure the ports you've set are free and not used by another service.
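For example, on K3s the service lives in kube-system, so something like this should show the new 50051 port appear:

  kubectl -n kube-system get svc traefik -o wide -w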
When this is done, you can create the IngressRoute objects. Here I have one for gRPC (50051) and one for web (80/443).
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: db
spec:
  entryPoints:
    - web
    - websecure
  routes:
  - kind: Rule
    match: Host(`lc1.nebula.global`)
    services:
    - name: db
      port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: db-grpc
spec:
  entryPoints:
    - grpc
  routes:
  - kind: Rule
    match: Host(`lc1.nebula.global`)
    services:
    - name: db-grpc
      port: 50051
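As a quick check (assuming grpcurl is available and the gRPC server has reflection enabled; the grpc entrypoint above serves plaintext h2c, hence the -plaintext flag):

  grpcurl -plaintext lc1.nebula.global:50051 list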
EDIT: If you are running your own instance of K3s with the community Helm chart (not the one provided by K3s), here is the equivalent config I have:
traefik:
  rbac:
    enabled: true
  ports:
    web:
      redirectTo: websecure
    websecure:
      tls:
        enabled: true
    grpc:
      port: 50051
      protocol: TCP
      expose: true
      exposedPort: 50051
  podAnnotations:
    prometheus.io/port: "8082"
    prometheus.io/scrape: "true"
  providers:
    kubernetesIngress:
      publishedService:
        enabled: true
  priorityClassName: "system-cluster-critical"
  tolerations:
  - key: "CriticalAddonsOnly"
    operator: "Exists"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  additionalArguments:
  - "--entrypoints.grpc.http2.maxconcurrentstreams=10000"
  ingress:
    enabled: false
    host: traefik.local
    annotations: {}
In my case, I also increased the maximum number of concurrent HTTP/2 streams for gRPC.
I am new to Kubernetes. I am trying to deploy a microservices-based Spring Boot web application on Kubernetes. I have set up Kubernetes on OpenStack, and all Kubernetes services are running fine.
I followed https://github.com/fabric8io/gitcontroller/tree/master/vendor/k8s.io/kubernetes/examples/javaweb-tomcat-sidecar
to deploy the sample Spring Boot application from
https://www.mkyong.com/spring-boot/spring-boot-hello-world-example-jsp/
and I could see the web app at localhost:8080 and <node-ip>:8080.
But the application I am trying to deploy needs MySQL and RabbitMQ, so I created the MySQL and RabbitMQ services using the YAML file below (javaweb.yaml):
javaweb.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  # Expose the management HTTP port on each node
  name: rabbitmq-management
  labels:
    app: rabbitmq
spec:
  type: NodePort # Or LoadBalancer in production w/ proper security
  ports:
  - port: 15672
    name: http
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  # The required headless service for StatefulSets
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 4369
    name: epmd
  - port: 25672
    name: rabbitmq-dist
  clusterIP: None
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Pod
metadata:
  name: javaweb
spec:
  containers:
  - image: chakravarthych/sample:v1
    name: war
    volumeMounts:
    - mountPath: /app
      name: app-volume
  - image: mysql:5.7
    name: mysql
    ports:
    - protocol: TCP
      containerPort: 3306
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: root123
    command: ["sh","-c","service mysql start; tail -f /dev/null"]
  - image: rabbitmq:3.7-management
    name: rabbitmq
    ports:
    - name: http
      protocol: TCP
      containerPort: 15672
    - name: amqp
      protocol: TCP
      containerPort: 5672
    command: ["sh","-c","service rabbitmq-server start; tail -f /dev/null"]
  - image: tomcat:8.5.33
    name: tomcat
    volumeMounts:
    - mountPath: /usr/local/tomcat/webapps
      name: app-volume
    ports:
    - containerPort: 8080
      hostPort: 8080
    command: ["sh","-c","/usr/local/tomcat/bin/startup.sh; tail -f /dev/null"]
  volumes:
  - name: app-volume
    emptyDir: {}
When I try to access my application at localhost:8080 or <node-ip>:8080, I see a blank page.
The command kubectl get all -o wide gives me the output below:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/javaweb 4/4 Running 0 1h 192.168.9.123 kube-node1 <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d <none>
service/mysql NodePort 10.101.253.11 <none> 3306:30527/TCP 1h app=mysql
service/rabbitmq ClusterIP None <none> 5672/TCP,4369/TCP,25672/TCP 1h app=rabbitmq
service/rabbitmq-management NodePort 10.108.7.162 <none> 15672:30525/TCP 1h app=rabbitmq
which shows that MySQL and RabbitMQ are running.
My question is: how can I check whether my application has access to the MySQL and RabbitMQ services running in Kubernetes?
Note:
I could access RabbitMQ at 192.168.9.123:15672 only.
I could also log in to MySQL inside its Docker container.
Did you try a "kubectl log" on the springboot pod? You may have some indication of what is going wrong.
I have created two different namespaces for different environments: one is devops-qa and the other is devops-dev. I created two Ingresses in the different namespaces. After creating the Ingress for the QA env in the devops-qa namespace, the rules written inside it work fine, meaning I am able to access the QA webpage. The moment I create the Ingress for the dev env in the devops-dev namespace, I can access the dev webpage but can no longer access the QA one. When I delete the dev Ingress, I can access the QA website again.
Below are the Ingresses for both the dev and QA environments.
Dev Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: cafe-ingress-dev
  namespace: devops-dev
spec:
  tls:
  - hosts:
    - cafe-dev.example.com
    secretName: default-token-drk6n
  rules:
  - host: cafe-dev.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: miqpdev-svc
          servicePort: 80
QA Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: cafe-ingress-qa
  namespace: devops-qa
spec:
  tls:
  - hosts:
    - cafe-qa.example.com
    secretName: default-token-jdnqf
  rules:
  - host: cafe-qa.example.com
    http:
      paths:
      - path: /greentea
        backend:
          serviceName: greentea-svc
          servicePort: 80
      - path: /blackcoffee
        backend:
          serviceName: blackcoffee-svc
          servicePort: 80
The token mentioned in each Ingress file belongs to its namespace, and the nginx ingress controller is running in the QA namespace.
How can I run both Ingresses and be able to reach all the websites deployed in both the dev and QA environments?
I actually solved my problem. I did everything correctly; the only thing I did not do was map the hostnames to the same IP in Route 53. Instead of accessing the website by hostname, I was accessing it by IP. After accessing the website by hostname, I was able to reach it :)
It seems you posted here and got your answer. The solution there is to deploy a separate ingress controller for each namespace. However, deploying two controllers complicates matters, because one instance has to run on non-standard ports (e.g. 8080, 8443).
I think this is better solved using DNS. Create the CNAME records cafe-qa.example.com and cafe-dev.example.com, both pointing to cafe.example.com, and update each Ingress manifest accordingly. Using DNS is more or less the standard way to separate the dev/QA/prod environments.
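In zone-file form, those records would look roughly like this (example.com being the placeholder domain from the manifests):

  cafe-qa.example.com.   IN CNAME cafe.example.com.
  cafe-dev.example.com.  IN CNAME cafe.example.com.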
I had the same issue and found a way to resolve it:
You just need to add the "--watch-namespace" argument to the ingress controller that sits behind the ingress Service you've linked to your Ingress resource. It will then be bound only to the services within the same namespace that the ingress Service and its pods belong to.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: my-namespace
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress-lb
  template:
    metadata:
      labels:
        name: nginx-ingress-lb
    spec:
      serviceAccountName: ingress-account
      containers:
      - args:
        - /nginx-ingress-controller
        - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
        - "--default-ssl-certificate=$(POD_NAMESPACE)/secret-tls"
        - "--watch-namespace=$(POD_NAMESPACE)"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        name: nginx-ingress-controller
        image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-namespace
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: https
  selector:
    name: nginx-ingress-lb
You can also create the nginx ingress controller in the kube-system namespace instead of creating it in the QA namespace.
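For example, a sketch with the ingress-nginx Helm chart (assuming Helm is set up and the repo has not been added yet):

  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm repo update
  helm install nginx-ingress ingress-nginx/ingress-nginx --namespace kube-system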