How to access Kong admin API when Kong is deployed using Kong-Ingress-Controller - kubernetes-ingress

I have installed Kong on Minikube using the Kong Ingress Controller.
The Services are visible as shown below. Now I am trying to find the Kong admin API so that I can install the Konga dashboard on top of it, but I'm not sure where to find it.
I am not sure whether I am following the correct process.
Can anyone help me to access the Kong admin API?

use kubectl port-forward deployment/ingress-kong -n kong 8444:8444
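A minimal sketch of verifying the forwarded port, assuming the admin API is listening on 8444 with TLS inside the pod (hence -k to skip certificate verification; the /status path is the standard Kong admin endpoint, not something stated in this answer):
kubectl port-forward deployment/ingress-kong -n kong 8444:8444 &
# With the tunnel open, the admin API should answer locally
curl -k https://localhost:8444/
curl -k https://localhost:8444/status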

You can follow this response in a GitHub issue.
https://github.com/Kong/kubernetes-ingress-controller/issues/59#issuecomment-534380451
Expanding here in case the link becomes invalid: you need to create a Service to expose the admin API endpoint, and set the corresponding env var and ports in the Kong ingress controller Deployment.
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
  namespace: kong
spec:
  type: NodePort
  ports:
  - name: admin
    port: 8001
    protocol: TCP
    targetPort: 8001
  - name: admin-ssl
    port: 8444
    targetPort: 8444
    protocol: TCP
  selector:
    app: ingress-kong
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: ingress-kong
  name: ingress-kong
  namespace: kong
spec:
  ...
  template:
    ...
    spec:
      containers:
      - env:
        ...
        - name: KONG_ADMIN_LISTEN
          value: 0.0.0.0:8001,0.0.0.0:8444 ssl
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000,0.0.0.0:8443 ssl
        image: kong:1.3
        name: proxy
        ports:
        - containerPort: 8001
          name: admin
          protocol: TCP
        - containerPort: 8444
          name: admin-ssl
          protocol: TCP
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-ssl
          protocol: TCP
        - containerPort: 9542
          name: metrics
          protocol: TCP
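With the kong-admin NodePort Service above in place, one way to reach the plain-HTTP admin port from outside Minikube (a common Minikube pattern, not part of the linked answer) is:
# Print a reachable URL for the NodePort Service
minikube service kong-admin -n kong --url
# Or build the URL manually from the node IP and the allocated NodePort
curl http://$(minikube ip):$(kubectl -n kong get svc kong-admin -o jsonpath='{.spec.ports[?(@.name=="admin")].nodePort}')/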

Related

how to configure ingress to direct traffic to an https backend ... with istio ingress

Hi, I have deployed Elasticsearch in Kubernetes with a self-signed certificate. I want to expose the Elasticsearch URL; I am able to do it with an NGINX ingress but not with Istio. Can anyone explain how to do that?
This is the VirtualService:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: elasticsearch
  namespace: istio-system
spec:
  hosts:
  - elasticsearch.domain.com
  gateways:
  - monitor-gateway
  http:
  - match:
    - port: 443
    route:
    - destination:
        host: elasticsearch.monitor.svc.cluster.local
        port:
          number: 9200
The Gateway:
# Source: istio-ingress/templates/gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: monitor-gateway
  namespace: istio-system
  labels:
    app: istio-ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: istio-ingress
    app.kubernetes.io/version: 1.15.3
    helm.sh/chart: gateway-1.15.3
    istio: ingress
spec:
  selector:
    istio: ingress
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: https
      number: 443
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: tpc
      number: 15021
      protocol: TCP
I resolved this by adding the DestinationRule below:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  annotations:
  name: elasticsearch
  namespace: istio-system
spec:
  host: elasticsearch.monitor.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 9200
      tls:
        clientCertificate: /etc/istio/ingress/ca.cert
        mode: SIMPLE
        privateKey: /etc/istio/ingress/tls.key
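Not part of the original answer, but a quick sanity check that the three Istio resources exist where the gateway expects them:
kubectl -n istio-system get gateway monitor-gateway
kubectl -n istio-system get virtualservice elasticsearch
kubectl -n istio-system get destinationrule elasticsearch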

AWS EKS service ingress and ALB --no ADDRESS

I seem to be having an issue with the way the ports are set up in this manifest for a simple Go app. The app is configured to listen on port 3000.
This container runs fine on my local machine (localhost:3000), but I get no ADDRESS when I look at the Ingress (k get ingress ...).
I am getting an error logged in the AWS aws-load-balancer-controller log when I try to run this image on EKS:
controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"fiber-demo","namespace":"demo","error":"ingress: demo/fiber-demo: unable to find port 3000 on service demo/fiber-demo"
This is my k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fiber-demo
  template:
    metadata:
      labels:
        app: fiber-demo
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: fiber-demo
        image: 240195868935.dkr.ecr.us-east-2.amazonaws.com/fiber-demo:0.0.2
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  selector:
    app: fiber-demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fiber-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: fiber-demo
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: fiber-demo
          servicePort: 3000
Am I simply not able to specify a targetPort other than port 80 in the Service?
Am I simply not able to specify a targetPort other than port 80 in the Service?
backend.servicePort refers to the port exposed by the Service, not by the container.
...
backend:
  serviceName: fiber-demo # <-- ingress points at this Service, not the container
  servicePort: 80 # <-- the port exposed by the Service, not the port exposed by the container
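After changing servicePort to 80 and re-applying, the Ingress should pick up an ADDRESS once the ALB controller reconciles it; a quick check (standard kubectl, the manifest filename is hypothetical):
kubectl -n demo apply -f fiber-demo.yaml      # hypothetical filename for the manifest above
kubectl -n demo get ingress fiber-demo        # ADDRESS should now show the ALB hostname
kubectl -n demo describe ingress fiber-demo   # events will show any remaining reconcile errors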

MySQL router in kubernetes as a service

I want to deploy MySQL Router in Kubernetes working as a service.
My plan:
Deploy MySQL Router inside k8s and expose it as a service using a LoadBalancer (MetalLB).
Applications running inside k8s see the mysql-router service as their database.
MySQL Router forwards the application traffic to the external InnoDB cluster.
I tried to deploy using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-router
  namespace: mysql-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-router
  template:
    metadata:
      labels:
        app: mysql-router
        version: v1
    spec:
      containers:
      - name: mysql-router
        image: mysql/mysql-router
        env:
        - name: MYSQL_HOST
          value: "192.168.123.130"
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_USER
          value: "root"
        - name: MYSQL_PASSWORD
          value: "root#123"
        imagePullPolicy: Always
        ports:
        - containerPort: 6446
192.168.123.130 is the MySQL cluster master IP.
apiVersion: v1
kind: Service
metadata:
  name: mysql-router-service
  namespace: mysql-router
  labels:
    app: mysql-router
spec:
  selector:
    app: mysql-router
  ports:
  - protocol: TCP
    port: 6446
  type: LoadBalancer
  loadBalancerIP: 192.168.123.123
When I check mysql-router container logs, I see something like this:
Waiting for mysql server 192.168.123.130 (0/12)
Waiting for mysql server 192.168.123.130 (1/12)
Waiting for mysql server 192.168.123.130 (2/12)
....
After setting my external MySQL cluster info in the Deployment, I get the following errors:
Successfully contacted mysql server at 192.168.123.130. Checking for cluster state.
Can not connect to database. Exiting.
I cannot deploy mysql-router without specifying MYSQL_HOST. What am I missing here?
My ideal deployment
Of course you have to provide the MySQL host. You can do this with Kubernetes DNS, which is set up through Services.
MySQL Router is middleware that provides transparent routing between your application and any backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers.
Examples
For the examples below I use dynamic volume provisioning for the data with openebs-hostpath, and a StatefulSet for the MySQL server.
Deployment
MySQL Router:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-router
  namespace: mysql-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-router
  template:
    metadata:
      labels:
        app: mysql-router
        version: v1
    spec:
      containers:
      - name: mysql-router
        image: mysql/mysql-router
        env:
        - name: MYSQL_HOST
          value: "mariadb-galera.galera-cluster"
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_USER
          value: "root"
        - name: MYSQL_PASSWORD
          value: "root#123"
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
MySQL Server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: galera-cluster
  name: mariadb-galera
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: mariadb-galera
  serviceName: mariadb-galera
  template:
    metadata:
      labels:
        app: mariadb-galera
    spec:
      restartPolicy: Always
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      containers:
      - command:
        - bash
        - -ec
        - |
          # Bootstrap from the indicated node
          NODE_ID="${MY_POD_NAME#"mariadb-galera-"}"
          if [[ "$NODE_ID" -eq "0" ]]; then
              export MARIADB_GALERA_CLUSTER_BOOTSTRAP=yes
              export MARIADB_GALERA_FORCE_SAFETOBOOTSTRAP=no
          fi
          exec /opt/bitnami/scripts/mariadb-galera/entrypoint.sh /opt/bitnami/scripts/mariadb-galera/run.sh
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: BITNAMI_DEBUG
          value: "false"
        - name: MARIADB_GALERA_CLUSTER_NAME
          value: galera
        - name: MARIADB_GALERA_CLUSTER_ADDRESS
          value: gcomm://mariadb-galera.galera-cluster
        - name: MARIADB_ROOT_PASSWORD
          value: root#123
        - name: MARIADB_DATABASE
          value: my_database
        - name: MARIADB_GALERA_MARIABACKUP_USER
          value: mariabackup
        - name: MARIADB_GALERA_MARIABACKUP_PASSWORD
          value: root#123
        - name: MARIADB_ENABLE_LDAP
          value: "no"
        - name: MARIADB_ENABLE_TLS
          value: "no"
        image: docker.io/bitnami/mariadb-galera:10.4.13-debian-10-r23
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - bash
            - -ec
            - |
              exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
          failureThreshold: 3
          initialDelaySeconds: 120
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: mariadb-galera
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        - containerPort: 4567
          name: galera
          protocol: TCP
        - containerPort: 4568
          name: ist
          protocol: TCP
        - containerPort: 4444
          name: sst
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - bash
            - -ec
            - |
              exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /opt/bitnami/mariadb/.bootstrap
          name: previous-boot
        - mountPath: /bitnami/mariadb
          name: data
        - mountPath: /opt/bitnami/mariadb/conf
          name: mariadb-galera-config
      volumes:
      - emptyDir: {}
        name: previous-boot
      - configMap:
          defaultMode: 420
          name: my.cnf
        name: mariadb-galera-config
  volumeClaimTemplates:
  - apiVersion: v1
    metadata:
      name: data
    spec:
      storageClassName: openebs-hostpath
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
Services
MySQL Router Service
apiVersion: v1
kind: Service
metadata:
  name: mysql-router-service
  namespace: mysql-router
  labels:
    app: mysql-router
spec:
  selector:
    app: mysql-router
  ports:
  - protocol: TCP
    port: 3306
  type: LoadBalancer
  loadBalancerIP: 192.168.123.123
MySQL Service
apiVersion: v1
kind: Service
metadata:
  namespace: galera-cluster
  name: mariadb-galera
  labels:
    app: mariadb-galera
spec:
  type: ClusterIP
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mariadb-galera
---
apiVersion: v1
kind: Service
metadata:
  namespace: galera-cluster
  name: mariadb-galera-headless
  labels:
    app: mariadb-galera
spec:
  type: ClusterIP
  ports:
  - name: galera
    port: 4567
  - name: ist
    port: 4568
  - name: sst
    port: 4444
  selector:
    app: mariadb-galera
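Before relying on the cross-namespace name in MYSQL_HOST, it may help to confirm it resolves from the router's namespace; a throwaway debug pod (the image and pod name here are just an illustration) can check this:
# Verify that mariadb-galera.galera-cluster resolves from the mysql-router namespace
kubectl run -it --rm dns-check --image=busybox:1.36 --restart=Never -n mysql-router -- \
  nslookup mariadb-galera.galera-cluster.svc.cluster.local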
What you need is #1: communication from App1-x to MySQL Router, and #2: a VIP/LB from MySQL Router to the external MySQL instances.
Let's start with #2, the VIP for the MySQL instances. You will need a Service without a selector.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql-service
subsets:
- addresses:
  - ip: 192.168.123.130
  - ip: 192.168.123.131
  - ip: 192.168.123.132
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
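To confirm the selector-less Service actually picked up the manual Endpoints, a quick check (plain kubectl, nothing beyond what is defined above):
# The Service should list the three external MySQL IPs as its endpoints
kubectl get endpoints mysql-service -o wide
kubectl describe service mysql-service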
You don't need a LoadBalancer because you will only connect from inside the cluster, so use ClusterIP instead.
#1: Create the MySQL Router Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-router
  namespace: mysql-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-router
  template:
    metadata:
      labels:
        app: mysql-router
        version: v1
    spec:
      containers:
      - name: mysql-router
        image: mysql/mysql-router
        env:
        - name: MYSQL_HOST
          value: "mysql-service"
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_USER
          value: "root"
        - name: MYSQL_PASSWORD
          value: "root#123"
        imagePullPolicy: Always
        ports:
        - containerPort: 6446
To connect to the external MySQL instances through the VIP/ClusterIP, use the mysql-service Service. If the Deployment and the Service are in the same namespace, use mysql-service as the hostname, or put the ClusterIP from kubectl get service mysql-service there.
apiVersion: v1
kind: Service
metadata:
  name: mysql-router-service
  namespace: mysql-router
  labels:
    app: mysql-router
spec:
  selector:
    app: mysql-router
  ports:
  - name: mysql
    port: 6446
    protocol: TCP
    targetPort: 6446
  type: ClusterIP
Within the Kubernetes cluster you can connect to the mysql-router-service hostname in the same namespace, or to mysql-router-service.<namespace>.svc from other namespaces; from outside the Kubernetes cluster, use a NodePort or LoadBalancer.
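As a sanity check from inside the cluster, a throwaway MySQL client pod can connect through the router (the client image is just an illustration; use whatever credentials your router was bootstrapped with):
# Hypothetical one-off client pod hitting the router's 6446 port via the ClusterIP Service
kubectl run -it --rm mysql-client --image=mysql:8.0 --restart=Never -n mysql-router -- \
  mysql -h mysql-router-service -P 6446 -uroot -p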

Accessing TCP port using istio ingress gateway outside the cluster

I have my Gateway set up this way:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - hosts:
    - "bitcoin-testnet-zmq.my.net"
    port:
      number: 48832
      protocol: tcp
      name: bitcoin-zmq-testnet
  - hosts:
    - "*"
    port:
      number: 80
      protocol: http
      name: bitcoin-mainnet
The VirtualService is like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bitcoin-testnet-zmq
  namespace: dev
spec:
  hosts:
  - "bitcoin-testnet-zmq.my.net"
  gateways:
  - my-gateway
  tcp:
  - match:
    - port: 48832
    route:
    - destination:
        port:
          number: 48832
          name: bitcoin-zmq-testnet
        host: bitcoinrpc-testnet-dev-service
and my service is as follows
kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-testnet-dev-service
  namespace: dev
spec:
  selector:
    app: bitcoin-node-testnet
  ports:
  - name: bitcoin-testnet
    protocol: TCP
    port: 80
    targetPort: 18332
  - name: bitcoin-zmq-testnet
    protocol: TCP
    port: 48832
    targetPort: 48832
  type: NodePort
When I log in to a pod in the same namespace and run telnet bitcoinrpc-testnet-dev-service 48832, it can connect.
Also, I found that all the other HTTP services can be accessed correctly through the istio-gateway.
I don't see an issue with your configurations; that is exactly what an Istio Gateway is for, allowing external access to your services.
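One thing worth double-checking (my assumption, not something covered above): the Gateway resource only configures the Envoy listener, so port 48832 also has to be exposed on the istio-ingressgateway Kubernetes Service before external clients can reach it.
# List the ports exposed by the default ingress gateway Service
kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[*].port}'
# Then test from outside the cluster against the gateway's external IP
# (placeholder address; substitute the EXTERNAL-IP of the Service above)
telnet <gateway-external-ip> 48832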

One GCE ingress on GKE is causing a different GCE ingress to serve default backend

I am using external-DNS, for extra background.
I set up one service, deployment, and ingress for application "A," and it all works as expected; I can reach application A at the specified URL. Then I set up a similar thing for application "B," and now I can reach application B, but if I hit the URL specified for application A, I get the default backend - 404 message. I haven't seen this issue before; what is the problem? Below are the service, deployment, and ingress manifests for A and for B:
A:service:
apiVersion: v1
kind: Service
metadata:
  name: my-app-A
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 3000
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    run: my-app-A
  type: NodePort
A:deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-A
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: my-app-A
    spec:
      containers:
      - name: my-app-A
        image: this-is-my-docker-image
        imagePullPolicy: Always
        envFrom:
        - secretRef:
            name: my-app-A-secrets
        - configMapRef:
            name: my-app-A-configmap
        ports:
        - containerPort: 3000
A:ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-A
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "A.myurl.com"
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.allow-http: "true"
spec:
  rules:
  - host: "A.myurl.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-app-A
          servicePort: 80
  - host: "my-app-A-namespace.clusterbase.myurl.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-app-A
          servicePort: 80
For the manifests for B, replace all instances of "A" with "B", and replace external-dns.alpha.kubernetes.io/hostname: "A.myurl.com" with just external-dns.alpha.kubernetes.io/hostname: "myurl.com".
The problem was that the combined name of the namespace and ingress was too long, and the resources that get created in the background ended up with the same name, since they have a 64-character limit and the unique part was truncated. I filed a bug here that explains it in more detail.
https://github.com/kubernetes/ingress-gce/issues/537
You will hit this issue if the first 64 characters of <namespace>-<ingress> are not unique.
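As a quick way to see whether two ingresses collide like this (the names below are hypothetical), compare the truncated prefixes; if the two outputs match, the generated GCE resources will share a name:
# First 64 characters of "<namespace>-<ingress>" for each app
echo -n "team-production-really-long-namespace-frontend-ingress-for-my-app-A" | cut -c1-64
echo -n "team-production-really-long-namespace-frontend-ingress-for-my-app-B" | cut -c1-64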