AWS EKS ALB ingress controller is not working, and HOST is not populating - kubernetes-ingress

I am trying to implement a simple "hello world" on EKS with the ALB ingress controller.
My goal is to:
Create a cluster
Deploy an Ingress to access it using an ELB
The following things have been done:
Created an EKS cluster
Added the ALB ingress controller
C:\workspace\eks>kubectl get po -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
alb-ingress-controller-5f96d7df77-mdrw2   1/1     Running   0          4m1s
Created the application as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "2048-deployment"
  namespace: "2048-game"
  labels:
    app: "2048"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "2048"
  template:
    metadata:
      labels:
        app: "2048"
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: "2048"
        ports:
        - containerPort: 80
The Service is as follows:
apiVersion: v1
kind: Service
metadata:
  name: "service-2048"
  namespace: "2048-game"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "2048"
The Ingress is as below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "2048-ingress"
  namespace: "2048-game"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: 2048-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "service-2048"
          servicePort: 80
The output is as below. The ADDRESS is not being populated with the ELB address, and I am not able to access it from outside:
C:\sample>kubectl get ingress/2048-ingress -n 2048-game
NAME           HOSTS   ADDRESS   PORTS   AGE
2048-ingress   *                 80      71s
Update:
I found the following error in the alb-ingress-controller-5f96d7df77-mdrw2 logs.
I am not able to find out how to change this:
kubebuilder/controller "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due to failed to resolve 2 qualified subnet for ALB. Subnets must contains these tags: 'kubernetes.io/cluster/ascluster': ['shared' or 'owned'] and 'kubernetes.io/role/elb': ['' or '1']. See https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/config/#subnet-auto-discovery for more details. Resolved qualified subnets: '[]'" "controller"="alb-ingress-controller" "request"={"Namespace":"default","Name":"ingress-default-dev"}

The subnets where the EKS nodes reside should be tagged as described in the following:
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging

If your subnets are not tagged with kubernetes.io/cluster/<cluster-name>=shared etc.,
you can also try passing the subnets explicitly in the Ingress annotations, like below:
alb.ingress.kubernetes.io/subnets: subnet-xxxxxx, subnet-xxxxxx
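As a sketch of the tagging fix (the subnet IDs below are hypothetical placeholders; the cluster name ascluster is taken from the error log above), the tags can be added with the AWS CLI:
aws ec2 create-tags \
  --resources subnet-xxxxxx subnet-yyyyyy \
  --tags Key=kubernetes.io/cluster/ascluster,Value=shared Key=kubernetes.io/role/elb,Value=1
Once the tags are in place, the controller should resolve the subnets on its next reconcile and populate the ADDRESS field.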

Related

AWS EKS service ingress and ALB -- no ADDRESS

I seem to be having an issue with the way my ports are set up in this manifest, which is for a simple Go app. The app is configured to listen on port 3000.
This container runs fine on my local machine (localhost:3000), but I get no ADDRESS when I look at the Ingress (k get ingress ...).
I am getting an error logged in the AWS aws-load-balancer-controller log when I try to run this image on EKS:
controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"fiber-demo","namespace":"demo","error":"ingress: demo/fiber-demo: unable to find port 3000 on service demo/fiber-demo"
This is my k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fiber-demo
  template:
    metadata:
      labels:
        app: fiber-demo
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: fiber-demo
        image: 240195868935.dkr.ecr.us-east-2.amazonaws.com/fiber-demo:0.0.2
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  selector:
    app: fiber-demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fiber-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: fiber-demo
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: fiber-demo
          servicePort: 3000
Am I simply not able to specify a targetPort other than port 80 in the Service?
Am I simply not able to specify a targetPort other than port 80 in the Service?
backend.servicePort refers to the port exposed by the Service, not by the container.
...
backend:
serviceName: fiber-demo # <-- ingress for this service, not container
servicePort: 80 # <-- port which the service exposed, not the port which the container exposed.
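Putting it together, a corrected Ingress for the manifests above might look like this (a sketch in the same extensions/v1beta1 schema as the question; the ALB forwards to Service port 80, which the Service maps to containerPort 3000):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fiber-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: fiber-demo
          servicePort: 80 # matches spec.ports[].port on the Service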

Application Gateway Ingress Controller: I am not able to access the application through the Ingress controller, but I was able to access it through the LB External IP

I have created an Ingress for my deployment, and I am able to access it for some time; when I tried the same sometime later, I was not able to access the application. But I was able to access the application with the LoadBalancer External IP at the same time. Please can someone help here?
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lable
  name: label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: label
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: label
    spec:
      containers:
      - image: <Image Name>
        name: label
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: "UAT"
        - name: EnvironmentName
          value: "UAT"
        volumeMounts:
        - name: mst-storage
          mountPath: /home/appuser/.aspnet/DataProtection-Keys
      volumes:
      - name: mst-storage
        emptyDir: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: label
spec:
  selector:
    app: label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  type: LoadBalancer
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: label
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          service:
            name: label
            port:
              number: 80
        pathType: Prefix
      - path: /api/*
        backend:
          service:
            name: label
            port:
              number: 80
        pathType: Prefix
      - path: /
        backend:
          service:
            name: label
            port:
              number: 80
        pathType: Prefix
Are you still facing this issue? You can always check the Application Gateway Ingress Controller logs, which constantly monitor changes in the AKS pods and pass that information to the Application Gateway so that both components stay in sync. If there are any issues, those logs will clearly show the error information.
Example command:
kubectl logs ingress-appgw-deployment-* -n kube-system
Also, in the above Deployment YAML, can you kindly check the section under metadata -> labels? It is written as app: lable (is there a spelling mistake?)
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lable
Compare that with the Service YAML (spec -> selector -> app: label):
spec:
  selector:
    app: label
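A quick way to confirm such a mismatch (a sketch, assuming the Service named label lives in the default namespace) is to check whether the Service has any endpoints:
kubectl get endpoints label
If the ENDPOINTS column is empty, the Service selector does not match the pod labels, and the Application Gateway has no backends to route to.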

Problem with ALB Ingress Controller redirecting to the right path

I have set up an ALB (Application Load Balancer) using the Ingress controller (image docker.io/amazon/aws-alb-ingress-controller:v1.1.8) for my AWS EKS cluster (v1.20) running with a Fargate profile.
I can access my service using the load balancer link:
http://5e07dbe1-default-nginxingr-29e9-1260427999.us-east-1.elb.amazonaws.com/
I have 2 different services configured in my Ingress as shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-014b302d73097d083
    # alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:195725532069:certificate/b6a9e691-b807-4f10-a0bf-0449730ecdf4
    # alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    # alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # alb.ingress.kubernetes.io/load-balancer-attributes: "60"
    # alb.ingress.kubernetes.io/rewrite-target: /
  labels:
    app: nginx-ingress
spec:
  rules:
  - http:
      paths:
      # - path: /*
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: ssl-redirect
      #       port:
      #         number: use-annotation
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: "nginx-service"
            port:
              number: 80
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: "mydocker-svc"
            port:
              number: 8080
Now the problem is that if I put /foo at the end of the LB link, nothing happens and I get a 404 Not Found error.
Both my services are fine, with their respective pods running behind their respective Kubernetes NodePort Services, but they are not accessible through the Ingress. If I swap the path for the other service (nginx-service) from /foo to /*, I can then access that one, but it breaks the previous service (mydocker-svc).
Please let me know where I'm making the mistake so that I can fix this issue. Thank you.
ALB controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --cluster-name=eks-fargate-alb-demo
        - --aws-vpc-id=vpc-0dc46d370e38de475
        - --aws-region=us-east-1
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.8
      serviceAccountName: alb-ingress-controller
Nginx service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
mydocker-svc:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  name: mydocker-svc
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  type: NodePort
status:
  loadBalancer: {}
The target groups become unhealthy if the annotation alb.ingress.kubernetes.io/target-type: ip is missing from the Kubernetes NodePort Service.
You can try this one out; I am using it as a reference:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  annotations:
    # Ingress core settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health check settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    # Important note: health check path annotations need to be added at the service level if we plan to use multiple targets in a load balancer
    # alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  rules:
  - http:
      paths:
      - path: /app1/*
        backend:
          serviceName: app1-nginx-nodeport-service
          servicePort: 80
      - path: /app2/*
        backend:
          serviceName: app2-nginx-nodeport-service
          servicePort: 80
      - path: /*
        backend:
          serviceName: usermgmt-restapp-nodeport-service
          servicePort: 8095
Read more at: https://www.stacksimplify.com/aws-eks/aws-alb-ingress/kubernetes-aws-alb-ingress-context-path-based-routing/
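Adapting that pattern to the Services in the question, the rules might look like this (a sketch in the same extensions/v1beta1 style as the reference above; the /* catch-all must come last, since rules are matched in order):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - http:
      paths:
      - path: /foo/*
        backend:
          serviceName: nginx-service
          servicePort: 80
      - path: /*
        backend:
          serviceName: mydocker-svc
          servicePort: 8080
Note that an ALB path pattern of /foo/* matches subpaths of /foo/ but not /foo itself; if the app must also answer on /foo exactly, an additional /foo path entry may be needed.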

502 Bad Gateway on WorkerNode without the pod

I am trying to understand Nginx Ingress on a K8S cluster.
I set up the nginx-ingress controller based on the instructions here.
My cluster has 3 nodes
kubernetes-master, kubernetes-node1, kubernetes-node2
I have an IoT pod running with 1 replica (on kubernetes-node1). I have created a ClusterIP Service for accessing this pod via REST. Below is the manifest.
apiVersion: v1
kind: Namespace
metadata:
  name: myiotgarden
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ioteventshandler
  namespace: myiotgarden
  labels:
    app: ioteventshandler
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ioteventshandler
  template:
    metadata:
      labels:
        app: ioteventshandler
    spec:
      containers:
      - name: ioteventshandler
        image: 192.168.56.105:5000/ioteventshandler:latest
        resources:
          limits:
            memory: "1024M"
          requests:
            memory: "128M"
        imagePullPolicy: "Always"
---
apiVersion: v1
kind: Service
metadata:
  name: iotevents-service
  namespace: myiotgarden
  labels:
    app: iotevents-service
spec:
  selector:
    app: ioteventshandler
  ports:
  - port: 8080
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ioteventshandler-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  namespace: myiotgarden
spec:
  rules:
  - host: kubernetes-master
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
  - host: kubernetes-node1
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
  - host: kubernetes-node2
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
I have HAProxy running on the master node, and it has been configured with both knode1 and knode2 as worker nodes:
frontend http_front
    bind *:80
    stats uri /haproxy?stats
    default_backend http_back

backend http_back
    balance roundrobin
    server worker 192.168.56.207:80
    server worker 192.168.56.208:80
When there are 2 replicas of ioteventshandler running, the command below works fine:
curl -kL http://kubernetes-master/iotEvents/sonoff/sonoff11
I get back the response perfectly.
However, when I reduce the replicas to 1, the curl command intermittently returns 502 Bad Gateway. I am assuming this happens when HAProxy forwards the request to knode2, where a replica of ioteventshandler is not running.
Question:
Is there a way for the ingress controller on knode2 to forward the request to knode1? And how do we do it?
Thank you.

How to set INGRESS_HOST and INGRESS_PORT and access GATEWAY_URL

How do I set INGRESS_HOST and INGRESS_PORT for a sample YAML file whose Istio sidecar was created using automatic sidecar injection?
I am using a Windows 10 - Docker - Kubernetes - Istio configuration, with kubectl and istioctl installed at their respective versions.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v1
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v2
  labels:
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v2
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5010
I am getting 503 Service Temporarily Unavailable when trying to hit my sample service.
Please first verify that your selector labels are correct and that your Service is connected to the Deployment's pods.
You have 'version: v1' and 'version: v2' in the Deployment selectors, but not in the Service. That's why the Service is giving a 503 Unavailable; if the issue were in the pod or the Service, it would give a 502 Bad Gateway or something similar.
Istio traffic flows like:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5000 # <--- change
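For completeness, the optional destination-rule step in the flow above could look like this for the helloworld Service (a sketch; the subsets are based on the version labels in the question's Deployments):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2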
Welcome to SO, @Sreedhar!
How to set INGRESS_HOST and INGRESS_PORT
These two environment variables are not adjustable inside the manifest (static) files that you use to create Deployments/Pods on a K8S cluster. They serve just as placeholders to give end users easy access, from outside, to an application freshly deployed on an Istio-enabled Kubernetes cluster. The values of INGRESS_HOST/INGRESS_PORT are filled in from information that is auto-generated by the cluster during the creation of cluster resources and is available only in live objects.
You can read about where the Ingress gets its IP address from in the official documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
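For example, following the standard pattern from the Istio docs (this assumes the default istio-ingressgateway Service in the istio-system namespace, exposed as a LoadBalancer with an external IP; on local setups such as Docker Desktop, the NodePort variant from the docs applies instead):
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl http://$GATEWAY_URL/hello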
For the Bad Gateway issue, as suggested previously by @Harsh Manvar, you have specified an invalid port in the VirtualService (5010 instead of 5000).