Exposing AKS cluster application using ingress - kubernetes-ingress

I am trying to expose my application inside the AKS cluster using ingress:
It creates a service and an ingress but somehow does not assign an address to the ingress. What could be a possible reason for this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dockerdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dockerdemo
  template:
    metadata:
      labels:
        app: dockerdemo
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: dockerdemo
        image: devsecopsacademy/dockerapp:v3
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: dockerdemo-service
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: dockerdemo
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress15
  annotations:
    kubernetes.io/ingress.class: addon-http-application-rounting
spec:
  rules:
  - host: curefirsttestapp.cluster15-dns-c42b65ee.hcp.westeurope.azmk8s.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dockerdemo-service
            port:
              number: 80

Well, first make sure your application is up and functioning inside your K8s cluster by port-forwarding the service to your localhost:
kubectl -n $NAMESPACE port-forward svc/$SERVICE :$PORT
If the app is reachable and your calls return a 200 status, you can move on to the ingress part:
Make sure the ingress controller is installed and shows up among your services:
kubectl -n $NAMESPACE get svc
Add a DNS record in your DNS zone which maps your domain.com to the ingress controller's $EXTERNAL_IP.
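If you are not sure what that external IP is, one way to read it (assuming the controller is exposed through a LoadBalancer service that publishes an IP; the service name below is a placeholder for your controller's service) is:
kubectl -n $NAMESPACE get svc $INGRESS_CONTROLLER_SVC \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'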
Take a look at the ingress you created for your $SERVICE
kubectl -n $NAMESPACE get ingress
At this stage, if your application is running correctly and the ingress is set up properly, the app should be reachable through domain.com; otherwise further debugging is needed.
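To rule DNS out (or before the record has propagated), you can also test the ingress directly against the controller's external IP by overriding the Host header; domain.com and $EXTERNAL_IP below stand in for your own values:
curl -H "Host: domain.com" http://$EXTERNAL_IP/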

Make sure you have an ingress controller deployed. This is a load balancer service which can have either a public or private IP depending on your situation.
Make sure you have an ingress definition with a rule that points to your service. This is the metadata that tells your ingress controller how to route requests arriving at its IP address. The routing rules can also specify how paths are handled (prefix, exact, rewrites, and so on).
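For reference, a minimal ingress for the AKS HTTP application routing add-on could look like the sketch below (the ingress name is arbitrary; the host and service are taken from the manifests above). Note that the value of kubernetes.io/ingress.class has to exactly match the controller you have installed; for the add-on it is addon-http-application-routing.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dockerdemo-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: curefirsttestapp.cluster15-dns-c42b65ee.hcp.westeurope.azmk8s.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dockerdemo-service
            port:
              number: 80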

Related

Allow Azure Application Gateway to route all sub paths in AKS

I have AKS configured with Azure Application Gateway as my ingress.
I am trying to deploy a .net core Angular app to a path within the cluster. I would like to access the app on http://<cluster ip>/app1.
My kubernetes deployment (including ingress settings) is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: web-app-1
  labels:
    app: web-app-1
spec:
  containers:
  - image: "xxx.azurecr.io/web-app-1:latest"
    name: web-app-1
    imagePullPolicy: Always
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-1
spec:
  selector:
    app: web-app-1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-app-1
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /app1
        backend:
          serviceName: web-app-1
          servicePort: 80
In the Angular app itself, I have left <base href="/" /> in index.html. However, I have amended the build to now be ng build --base-href "/app1/"
Issue
When this is deployed and I browse to http://<cluster ip>/app1 then it loads the index.html file. However it returns a 404 for all the additional scripts e.g. 404 on http://<cluster ip>/app1/main-es2015.9ae13a2658e759db61f5.js
The issue could be with how I've configured Angular, but browsing to http://<cluster ip>/app1/index.html returns a 404 when I know it can be accessed just using /app1/.
I believe the issue is that Application Gateway is not routing requests properly for anything after /app1/. How can I get it to allow sub routes through (i.e. the scripts)?
Thanks
Got this working now. Looking at the 404 response headers showed they came from Kestrel, so the requests were reaching the .NET Core app and it needed configuring there as well. All the changes I made were:
Client:
Leave the base href as /, e.g. <base href="/" /> in index.html.
Add the base href to the build argument, e.g. ng build --base-href "/app1/"
In Configure of Startup.cs, add app.UsePathBase("/app1"); (I do this in the else branch of env.IsDevelopment().)
Application Gateway:
Change the path in the ingress rule to - path: /app1*. I didn't have the asterisk, so sub-paths were not being routed.
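For illustration, the paths section of the original ingress above would then become (only the asterisk is new):
paths:
- path: /app1*
  backend:
    serviceName: web-app-1
    servicePort: 80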
You could also do something like this
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: appgw-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        ....
Here requests matching /api/* are rewritten to the backend path prefix "/" before being forwarded. Specifically, this annotation:
appgw.ingress.kubernetes.io/backend-path-prefix: "/"

How to use nginx ingress to route traffic based on port

I'm currently working on deploying the ELK stack on a Kubernetes cluster. I was able to use a ClusterIP service and nginx-ingress on minikube to route inbound HTTP traffic to Kibana (port 5601). I need input on how I can route traffic based on the inbound port rather than the path.
Using the Ingress declaration below, I was able to connect to my Kibana deployment, but how can I access the other tools in the stack exposed on different ports (9200, 5044, 9600)?
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: ingress-service
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: kibana-service
          servicePort: 5601
curl-ing the minikube IP on the default port 80 returns a valid response:
# curl http://<minikube-ip>/api/status
{"name":"kibana",....}
Note: I would prefer not to use NodePort, but would like to know whether NodePort is the only way to achieve the above.
As you already have minikube and minikube ingress addon enabled:
$ minikube addons list | grep ingress
| ingress | minikube | enabled ✅ |
| ingress-dns | minikube | enabled ✅ |
Just as a reminder:
targetPort: is the port the container accepts traffic on (port where application runs inside the pod).
port: is the abstracted Service port, which can be any port other pods use to access the Service.
Please keep in mind that if your container is not listening on the port specified in targetPort, you will not be able to connect to the pod.
Also remember to configure your firewall to allow the traffic.
As an example, I've used these YAMLs:
apiVersion: v1
kind: Service
metadata:
  name: service-one
spec:
  selector:
    key: application-1
  ports:
  - port: 81
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      key: application-1
  template:
    metadata:
      labels:
        key: application-1
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-two
spec:
  selector:
    key: application-2
  ports:
  - port: 82
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      key: application-2
  template:
    metadata:
      labels:
        key: application-2
    spec:
      containers:
      - name: hello2
        image: gcr.io/google-samples/hello-app:2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: service-one
          servicePort: 81
      - path: /hello2
        backend:
          serviceName: service-two
          servicePort: 82
service/service-one created
deployment.apps/deployment-1 created
service/service-two created
deployment.apps/deployment-2 created
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.networking.k8s.io/ingress created
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Please keep in mind that, as the warning above says, this apiVersion is deprecated and you will soon need to switch to networking.k8s.io/v1.
Below is the output of this configuration:
$ curl http://172.17.0.3/hello
Hello, world!
Version: 1.0.0
Hostname: deployment-1-77ddb77d56-2l4cp
minikube-ubuntu18:~$ curl http://172.17.0.3/hello2
Hello, world!
Version: 2.0.0
Hostname: deployment-2-fb984955c-5dvbx
You could use:
paths:
- path: /elasticsearch
  backend:
    serviceName: elasticsearch-service
    servicePort: 100
- path: /anotherservice
  backend:
    serviceName: another-service
    servicePort: 101
Where the services would look like:
name: elasticsearch-service
...
ports:
- port: 100
  targetPort: 9200
---
name: another-service
...
ports:
- port: 101
  targetPort: 5044
However, if you need more advanced path configuration, you can also use a rewrite. You can also use a default backend to redirect to a specific service.
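As a rewrite sketch for the NGINX ingress controller (the ingress name is arbitrary, and the exact annotation syntax depends on your ingress-nginx version; recent versions use regex capture groups as below):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rewrite-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # strip the /elasticsearch prefix before forwarding to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /elasticsearch(/|$)(.*)
        backend:
          serviceName: elasticsearch-service
          servicePort: 100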
More information about accessing Minikube can be found in the Minikube documentation.
Is it what you were looking for or something different?

kubectl cannot access pod application

I have this pod specification:
apiVersion: v1
kind: Pod
metadata:
  name: wp
spec:
  containers:
  - image: wordpress:4.9-apache
    name: wordpress
    env:
    - name: WORDPRESS_DB_PASSWORD
      value: mysqlpwd
    - name: WORDPRESS_DB_HOST
      value: 127.0.0.1
  - image: mysql:5.7
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mysqlpwd
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    emptyDir: {}
I deployed it using:
kubectl create -f wordpress-pod.yaml
Now it is correctly deployed:
kubectl get pods
wp 2/2 Running 3 35h
Then when I do:
kubectl describe po/wp
Name: wp
Namespace: default
Priority: 0
Node: node3/192.168.50.12
Start Time: Mon, 13 Jan 2020 23:27:16 +0100
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.233.92.7
IPs:
IP: 10.233.92.7
Containers:
My issue is that I cannot access the app:
wget http://192.168.50.12:8080/wp-admin/install.php
Connecting to 192.168.50.12:8080... failed: Connection refused.
Neither does wget http://10.233.92.7:8080/wp-admin/install.php work.
Is there any issue in the pod description or the deployment process?
Thanks
With your current setup you would need to run wget http://10.233.92.7:8080/wp-admin/install.php from within the cluster (i.e. by kubectl exec-ing into another pod), because the 10.233.92.7 IP is valid only within the cluster.
You should create a service to expose your pod. Create a ClusterIP type service (the default) for access from within the cluster. If you want access from outside the cluster, i.e. from your desktop, then create a NodePort or LoadBalancer type service, as sketched below.
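A minimal NodePort sketch; note that the wp pod above has no labels, so you would first need to add one (app: wp here is an assumption) for the selector to match:
apiVersion: v1
kind: Service
metadata:
  name: wp-nodeport
spec:
  type: NodePort
  selector:
    app: wp          # add this label to the wp pod's metadata first
  ports:
  - port: 80         # the WordPress (apache) container listens on 80
    targetPort: 80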
Another way to access the application from your desktop is port forwarding. In this case you don't need to create a service.
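For example (the local port 8080 is arbitrary; 80 is the port the WordPress container listens on):
kubectl port-forward pod/wp 8080:80
# then browse to http://localhost:8080/wp-admin/install.php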
Here is a tutorial for accessing pods using a NodePort service. In this case your node needs to have a public IP.
The problem with your configuration is the lack of services that would allow external access to your WordPress.
There are a lot of materials explaining what the options are and how they are tightly coupled to the infrastructure that Kubernetes runs on.
Let me elaborate on 3 of them:
minikube
kubeadm
cloud provisioned (GKE, EKS, AKS)
The base of the WordPress configuration will be the same in each case.
Table of contents:
Running MySQL
Secret
PersistentVolumeClaim
Deployment
Service
Running WordPress
PersistentVolumeClaim
Deployment
Allowing external access
minikube
kubeadm
cloud provisioned (GKE)
There is a good tutorial on Kubernetes site: HERE!
Running MySQL
Secret
As per the official Kubernetes documentation:
Kubernetes secret objects let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
-- Kubernetes secrets
Below is an example YAML definition of a secret used for the MySQL password:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-password
type: Opaque
data:
  password: c3VwZXJoYXJkcGFzc3dvcmQK
Take a specific look at:
password: c3VwZXJoYXJkcGFzc3dvcmQK
This password is base64 encoded.
To create this password, invoke this command from your terminal:
$ echo "YOUR_PASSWORD" | base64
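Note that plain echo appends a trailing newline, which gets base64-encoded as well (that is why the example value above ends the way it does); if you want to avoid encoding the newline, use the -n flag:
$ echo -n "YOUR_PASSWORD" | base64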
Paste the output to the YAML definition and apply it with:
$ kubectl apply -f FILE_NAME.
You can check if it was created correctly with:
$ kubectl get secret mysql-password -o yaml
PersistentVolumeClaim
MySQL requires dedicated space for storing its data. There is official documentation explaining this: Persistent Volumes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
The above YAML will create a storage claim for MySQL. Apply it with the command:
$ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of a deployment from the official example and adjust it if there were any changes to names of the objects:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Take a specific look at the part below, which passes the secret password to the MySQL pod:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-password
      key: password
Apply it with command: $ kubectl apply -f FILE_NAME.
Service
What was missing in your configuration was service objects. These objects allow communication with other pods, external traffic, etc. Look at the example below:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
This definition will create an object which points to the MySQL pod.
It will create a DNS entry with the name wordpress-mysql resolving to the IP address of the pod.
It will not expose it to external traffic, as that's not needed.
Apply it with command: $ kubectl apply -f FILE_NAME.
Running WordPress
Persistent Volume Claim
Just like MySQL, WordPress requires dedicated space for storing its data. Create it with the example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Apply it with command: $ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of WordPress as in the example below:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
Take a specific look at:
- name: WORDPRESS_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-password
      key: password
This part passes the secret value to the deployment.
The definition below tells WordPress where MySQL is located:
- name: WORDPRESS_DB_HOST
  value: wordpress-mysql
Apply it with command: $ kubectl apply -f FILE_NAME.
Allowing external access
There are many different approaches for configuring external access to applications.
Minikube
Configuration could differ between different hypervisors.
For example Minikube can expose WordPress to external traffic with:
NodePort
apiVersion: v1
kind: Service
metadata:
  name: wordpress-nodeport
spec:
  type: NodePort
  selector:
    app: wordpress
    tier: frontend
  ports:
  - name: wordpress-port
    protocol: TCP
    port: 80
    targetPort: 80
After applying this definition you will need to enter the minikube IP address with the appropriate port in your web browser.
This port can be found with command:
$ kubectl get svc wordpress-nodeport
Output of above command:
wordpress-nodeport NodePort 10.76.9.15 <none> 80:30173/TCP 8s
In this case it is 30173.
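The minikube IP itself can be printed with:
$ minikube ip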
LoadBalancer
In this case it will create NodePort also!
apiVersion: v1
kind: Service
metadata:
  name: wordpress-loadbalancer
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
Ingress resource
Please refer to this link: Minikube: create-an-ingress-resource
Also you can refer to this Stack Overflow post
Kubeadm
With the Kubernetes clusters provided by kubeadm there are:
NodePort
The configuration process is the same as in minikube. The only difference is that the NodePort is opened on each and every node in the cluster. After that you can enter the IP address of any of the nodes with the appropriate port. Be aware that you will need to be in the same network, with no firewall blocking your access.
LoadBalancer
You can create the LoadBalancer object with the same YAML definition as in minikube. The problem is that with kubeadm provisioning on a bare-metal cluster, the LoadBalancer will not get an IP address. One of the options to fix that is MetalLB.
Ingress
Ingress resources share the same problem as LoadBalancer in kubeadm-provisioned infrastructure. As above, one of the options is MetalLB.
Cloud Provisioned
There are many options which are strictly related to the cloud that Kubernetes runs on. Below is an example of configuring an Ingress resource with the NGINX controller on GKE:
Apply both of the YAML definitions:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Apply NodePort definition from minikube
Create Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: wordpress-nodeport
          servicePort: wordpress-port
Apply it with command: $ kubectl apply -f FILE_NAME.
Check if Ingress resource got the address from cloud provider with command:
$ kubectl get ingress
The output should look like that:
NAME      HOSTS   ADDRESS         PORTS   AGE
ingress   *       XXX.XXX.XXX.X   80      26m
After entering the IP address from the above command in your browser, you should see the WordPress installation screen.
The cloud-provisioned example can also be used for kubeadm-provisioned clusters with MetalLB configured.

How to assign a static IP to a pod using Kubernetes on deployment

I am trying to assign a static IP address to a pod on deployment.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aws-test-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: aws-test-mysql
    spec:
      containers:
      - name: aws-test-mysql
        image: 461677341235123.dkr.ecr.us-east-1.amazonaws.com/aws-test-mysql
        securityContext:
          privileged: true
        ports:
        - containerPort: 3306
          hostIP: 172.20.32.50
          hostPort: 3306
        resources:
          requests:
            cpu: 100m
      imagePullSecrets:
      - name: ecrkey
As you can see when I describe my pod, it is created with another IP:
test-mbp1:aws test$ kubectl describe pods | grep IP
IP: 100.96.1.3
I'm trying to deploy a pod with a static IP using "kind: Deployment" and not as a service.
Is this possible?
A static IP cannot be assigned to a Pod because of the dynamic nature of kubernetes' IP layer.
Since you don't want to attach a Service (which is the best way imho), a close alternative is to convert the Deployment to a StatefulSet. This will give the Pod a static hostname which more-or-less fulfils your requirement.
The first replica of the StatefulSet will get a stable DNS name of the form aws-test-mysql-0.<headless-service>.<namespace>.svc.<cluster-domain>.
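A minimal sketch of such a conversion, assuming a headless Service named aws-test-mysql is created as the StatefulSet's governing service (serviceName is what produces the stable per-pod DNS name):
apiVersion: v1
kind: Service
metadata:
  name: aws-test-mysql
spec:
  clusterIP: None          # headless: gives each pod a stable DNS entry
  selector:
    app: aws-test-mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: aws-test-mysql
spec:
  serviceName: aws-test-mysql
  replicas: 1
  selector:
    matchLabels:
      app: aws-test-mysql
  template:
    metadata:
      labels:
        app: aws-test-mysql
    spec:
      containers:
      - name: aws-test-mysql
        image: 461677341235123.dkr.ecr.us-east-1.amazonaws.com/aws-test-mysql
        ports:
        - containerPort: 3306
      imagePullSecrets:
      - name: ecrkey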

Different ingress in different Namespace in kubernetes

I have created two different namespaces for different environments: one is devops-qa and the other is devops-dev. I created two ingresses in the different namespaces. After creating the ingress for the QA env in the devops-qa namespace, the rules written inside that ingress work fine, meaning I am able to access the webpage of the QA env. The moment I create the ingress for the dev env in the devops-dev namespace, I am able to access the webpage of the dev env but can no longer access the webpage of QA. And when I delete the dev ingress, I can again access the QA env website.
Below are the ingresses of both the dev and QA env.
Dev Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: cafe-ingress-dev
  namespace: devops-dev
spec:
  tls:
  - hosts:
    - cafe-dev.example.com
    secretName: default-token-drk6n
  rules:
  - host: cafe-dev.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: miqpdev-svc
          servicePort: 80
QA Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: cafe-ingress-qa
  namespace: devops-qa
spec:
  tls:
  - hosts:
    - cafe-qa.example.com
    secretName: default-token-jdnqf
  rules:
  - host: cafe-qa.example.com
    http:
      paths:
      - path: /greentea
        backend:
          serviceName: greentea-svc
          servicePort: 80
      - path: /blackcoffee
        backend:
          serviceName: blackcoffee-svc
          servicePort: 80
The token mentioned in each ingress file belongs to its namespace, and the nginx ingress controller is running in the QA namespace.
How can I run both ingresses and be able to reach all the websites deployed in both the dev and QA env?
I actually solved my problem. I had done everything correctly; the only thing I had not done was map the hostnames to the same IP in Route53. Instead of accessing the website via the hostname, I was accessing it via the IP. After accessing the website via the hostname, it worked :)
Seems like you posted here and got your answer. The solution is to deploy a separate ingress controller for each namespace. However, deploying two controllers complicates matters because one instance has to run on a non-standard port (e.g. 8080, 8443).
I think this is better solved using DNS. Create the CNAME records cafe-qa.example.com and cafe-dev.example.com both pointing to cafe.example.com. Update each Ingress manifest accordingly. Using DNS is somewhat the standard way to separate the Dev/QA/Prod environments.
Had the same issue, and found a way to resolve it:
You just need to add the "--watch-namespace" argument to the ingress controller that sits behind the ingress service you've linked to your ingress resource. It will then only watch resources within the namespace that the ingress service and its pods belong to.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: my-namespace
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress-lb
  template:
    metadata:
      labels:
        name: nginx-ingress-lb
    spec:
      serviceAccountName: ingress-account
      containers:
      - args:
        - /nginx-ingress-controller
        - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
        - "--default-ssl-certificate=$(POD_NAMESPACE)/secret-tls"
        - "--watch-namespace=$(POD_NAMESPACE)"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        name: nginx-ingress-controller
        image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-namespace
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: https
  selector:
    name: nginx-ingress-lb
You can create the nginx ingress controller in the kube-system namespace instead of creating it in the QA namespace.