AWS EKS Service, Ingress, and ALB: no ADDRESS

I seem to be having an issue with the way my ports are set up in this manifest, which deploys a simple Go app. The app is configured to listen on port 3000.
The container runs fine on my local machine (localhost:3000), but I get no ADDRESS when I look at the Ingress (k get ingress ...).
The AWS aws-load-balancer-controller logs the following error when I try to run this image on EKS:
controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"fiber-demo","namespace":"demo","error":"ingress: demo/fiber-demo: unable to find port 3000 on service demo/fiber-demo"
This is my k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fiber-demo
  template:
    metadata:
      labels:
        app: fiber-demo
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: fiber-demo
        image: 240195868935.dkr.ecr.us-east-2.amazonaws.com/fiber-demo:0.0.2
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  selector:
    app: fiber-demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fiber-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: fiber-demo
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: fiber-demo
          servicePort: 3000
Am I simply not able to specify a targetPort other than port 80 in the Service?

Am I simply not able to specify a targetPort other than port 80 in the Service?
backend.servicePort refers to the port exposed by the Service, not the port exposed by the container:
...
backend:
  serviceName: fiber-demo   # <-- the Ingress routes to this Service, not to the container
  servicePort: 80           # <-- the port the Service exposes, not the port the container exposes
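
For completeness, here is the Ingress from the question with only servicePort changed to match the Service (a minimal sketch; everything else is exactly as in the manifest above):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fiber-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: fiber-demo
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: fiber-demo
          servicePort: 80   # the Service's port; the Service forwards to targetPort 3000 on the container

With this change the controller can resolve port 80 on service demo/fiber-demo, the reconcile error goes away, and the Ingress should get an ADDRESS.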

Related

K3S: Can't access my web service. Is there a problem in my YAML files?

I deployed my web service on K3s and use DuckDNS for HTTPS. I can access my domain over https, but I can't access a specific URL of my web service.
Here are my YAML files:
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auto-trade-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auto-trade-api
  template:
    metadata:
      labels:
        app: auto-trade-api
    spec:
      containers:
      - name: auto-trade-api
        image: image
        args: ['yarn', 'start']
        resources:
          requests:
            cpu: '200m'
            memory: '200Mi'
          limits:
            cpu: '200m'
            memory: '200Mi'
        envFrom:
        - secretRef:
            name: auto-trade-api
        ports:
        - containerPort: 3001
      restartPolicy: Always
      imagePullSecrets:
      - name: regcred
Service
apiVersion: v1
kind: Service
metadata:
  name: auto-trade-api
spec:
  type: "NodePort"
  selector:
    app: auto-trade-api
  ports:
  - protocol: TCP
    port: 3001
    targetPort: 3001
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auto-trade-api
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: "letsencrypt"
spec:
  tls:
  - hosts:
    - mydomain
    secretName: auto-trade-api-tls
  rules:
  - host: mydomain
    http:
      paths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: auto-trade-api
            port:
              number: 3001
I tried accessing https://mydomain/users, a route I created, but it only displays "404 page not found".
I think I have connected each component correctly.
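
One thing worth checking (an observation, not part of the original thread): with networking.k8s.io/v1, a pathType: Prefix path is matched element by element against the request URL, so /* only matches requests whose first path segment is literally *; a spec-compliant controller such as Traefik will therefore never route /users through this rule. Note also that the nginx.ingress.kubernetes.io/rewrite-target annotation belongs to the NGINX ingress controller and is ignored when the ingress class is traefik. A minimal sketch of the rule without the wildcard:

rules:
- host: mydomain
  http:
    paths:
    - path: /          # plain prefix; matches /users, /users/1, etc.
      pathType: Prefix
      backend:
        service:
          name: auto-trade-api
          port:
            number: 3001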

Problem with ALB Ingress Controller redirecting to the right path

I have set up an ALB (Application Load Balancer) using the Ingress Controller (docker.io/amazon/aws-alb-ingress-controller:v1.1.8) for my AWS EKS cluster (v1.20) running with a Fargate profile.
I can access my service using the load balancer link:
http://5e07dbe1-default-nginxingr-29e9-1260427999.us-east-1.elb.amazonaws.com/
I have 2 different services configured in my Ingress, as shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-014b302d73097d083
    # alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:195725532069:certificate/b6a9e691-b807-4f10-a0bf-0449730ecdf4
    # alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    # alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # alb.ingress.kubernetes.io/load-balancer-attributes: "60"
    # alb.ingress.kubernetes.io/rewrite-target: /
  labels:
    app: nginx-ingress
spec:
  rules:
  - http:
      paths:
      # - path: /*
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: ssl-redirect
      #       port:
      #         number: use-annotation
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: "nginx-service"
            port:
              number: 80
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: "mydocker-svc"
            port:
              number: 8080
Now the problem: if I append /foo to the LB link, nothing happens and I get a 404 Not Found error.
Both services are fine, with their respective Pods running behind their respective Kubernetes NodePort services, but they are not reachable through the Ingress. If I give the other service (nginx-service) the /* path instead of /foo, I can then access it, but that breaks the first service (mydocker-svc).
Please let me know where my mistake is so that I can fix this issue. Thank you.
ALB controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --cluster-name=eks-fargate-alb-demo
        - --aws-vpc-id=vpc-0dc46d370e38de475
        - --aws-region=us-east-1
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.8
      serviceAccountName: alb-ingress-controller
Nginx service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
mydocker-svc:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  name: mydocker-svc
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  type: NodePort
status:
  loadBalancer: {}
TargetGroups become unhealthy if the alb.ingress.kubernetes.io/target-type: ip annotation is missing from the Kubernetes NodePort service.
You can try this out; it is the one I am using as a reference:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  annotations:
    # Ingress Core Settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    # Important note: health check path annotations need to be added at the service level if we are planning to use multiple targets in a load balancer
    # alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  rules:
  - http:
      paths:
      - path: /app1/*
        backend:
          serviceName: app1-nginx-nodeport-service
          servicePort: 80
      - path: /app2/*
        backend:
          serviceName: app2-nginx-nodeport-service
          servicePort: 80
      - path: /*
        backend:
          serviceName: usermgmt-restapp-nodeport-service
          servicePort: 8095
Read more at: https://www.stacksimplify.com/aws-eks/aws-alb-ingress/kubernetes-aws-alb-ingress-context-path-based-routing/
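
Note that the reference above uses the old extensions/v1beta1 API, where /* is passed to the ALB as a wildcard. Under networking.k8s.io/v1, as used in the question, /* has no wildcard meaning for pathType: Prefix. A common translation (a sketch only, reusing the service names from the question) is to keep wildcard paths under pathType: ImplementationSpecific so the ALB controller interprets them itself:

spec:
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix                     # matches /foo and /foo/...
        backend:
          service:
            name: nginx-service
            port:
              number: 80
      - path: /*
        pathType: ImplementationSpecific     # leaves the wildcard to the ALB controller
        backend:
          service:
            name: mydocker-svc
            port:
              number: 8080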

Reference nginx-ingress host name to internal service

After creating all the required resources and configuring the nginx-ingress controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-api
  strategy: {}
  template:
    metadata:
      labels:
        app: user-api
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: user-api
        image: doumeyi/user-api-amd64:1.0
        ports:
        - name: user-api
          containerPort: 3000
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: user-api
spec:
  selector:
    app: user-api
  ports:
  - name: user-api
    port: 3000
    targetPort: 3000
  type: LoadBalancer
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /user-api
        backend:
          serviceName: user-api
          servicePort: 3000
I can see the 404 Not Found page at example.com, but example.com/user-api does not show anything from my user-api service.
It seems nginx-ingress cannot resolve the host name to the internal service. How should I fix it?
If NGINX cannot find a route to your Pod, it should not respond with 404; it should return a 502 (Bad Gateway), AFAIK. I assume the 404 comes from your application.
NGINX-Ingress changed its rewrite behavior in 0.22, as mentioned here.
The Ingress resource should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /user-api(/|$)(.*)
        backend:
          serviceName: user-api
          servicePort: 3000
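
With this pattern, $2 in rewrite-target refers to the second capture group, so the /user-api prefix is stripped before the request reaches the Service. For example (assuming the user-api Service above):

/user-api/users  ->  /users
/user-api        ->  /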

502 Bad Gateway on WorkerNode without the pod

I am trying to understand Nginx-Ingress on a K8s cluster.
I set up the nginx-ingress controller based on the instructions here.
My cluster has 3 nodes: kubernetes-master, kubernetes-node1, and kubernetes-node2.
I have an IoT pod running with 1 replica (on kubernetes-node1) and have created a ClusterIP service for accessing this pod via REST. Below is the manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: myiotgarden
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ioteventshandler
  namespace: myiotgarden
  labels:
    app: ioteventshandler
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ioteventshandler
  template:
    metadata:
      labels:
        app: ioteventshandler
    spec:
      containers:
      - name: ioteventshandler
        image: 192.168.56.105:5000/ioteventshandler:latest
        resources:
          limits:
            memory: "1024M"
          requests:
            memory: "128M"
        imagePullPolicy: "Always"
---
apiVersion: v1
kind: Service
metadata:
  name: iotevents-service
  namespace: myiotgarden
  labels:
    app: iotevents-service
spec:
  selector:
    app: ioteventshandler
  ports:
  - port: 8080
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ioteventshandler-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  namespace: myiotgarden
spec:
  rules:
  - host: kubernetes-master
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
  - host: kubernetes-node1
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
  - host: kubernetes-node2
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
I have haproxy running on the kubernetes-master node; it is configured with both knode1 and knode2 as worker nodes:
frontend http_front
    bind *:80
    stats uri /haproxy?stats
    default_backend http_back

backend http_back
    balance roundrobin
    server worker 192.168.56.207:80
    server worker 192.168.56.208:80
When 2 replicas of ioteventshandler are running, the command below works fine:
curl -kL http://kubernetes-master/iotEvents/sonoff/sonoff11
I get back the response perfectly.
However, when I reduce the replicas to 1, the curl command intermittently returns 502 Bad Gateway. I assume this happens when haproxy forwards the request to knode2, where no replica of ioteventshandler is running.
Question:
Is there a way for the ingress controller on knode2 to forward the request to knode1? And how do we do that?
Thank you.

One GCE ingress on GKE is causing a different GCE ingress to serve default backend

I am using external-dns, for extra background.
I set up a Service, Deployment, and Ingress for application "A", and it all works as expected: I can reach application A at the specified URL. Then I set up the same thing for application "B". Now I can reach application B, but if I hit the URL specified for application A, I get the "default backend - 404" message. I haven't seen this issue before; what is the problem? Below are the Service, Deployment, and Ingress manifests for A and for B.
A:service:
apiVersion: v1
kind: Service
metadata:
  name: my-app-A
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 3000
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    run: my-app-A
  type: NodePort
A:deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-A
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: my-app-A
    spec:
      containers:
      - name: my-app-A
        image: this-is-my-docker-image
        imagePullPolicy: Always
        envFrom:
        - secretRef:
            name: my-app-A-secrets
        - configMapRef:
            name: my-app-A-configmap
        ports:
        - containerPort: 3000
A:ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-A
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "A.myurl.com"
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.allow-http: "true"
spec:
  rules:
  - host: "A.myurl.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-app-A
          servicePort: 80
  - host: "my-app-A-namespace.clusterbase.myurl.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-app-A
          servicePort: 80
For the manifests for B, replace all instances of "A" with "B", and replace external-dns.alpha.kubernetes.io/hostname: "A.myurl.com" with just external-dns.alpha.kubernetes.io/hostname: "myurl.com".
The problem was that the combined namespace and ingress names were too long: the resources created in the background ended up with the same name, since they have a 64-character limit and the unique part was truncated. I filed a bug that explains this in more detail:
https://github.com/kubernetes/ingress-gce/issues/537
You will hit this issue if the first 64 characters of <namespace>-<ingress> are not unique.
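
A quick way to check whether two Ingresses collide (a sketch; the names here are illustrative, taken from the example above, and should be replaced with your own <namespace>-<ingress> pairs):

# If any two lines of output are identical, those Ingresses collide.
for key in "my-app-A-namespace-my-app-A" "my-app-B-namespace-my-app-B"; do
  echo "${key:0:64}"
done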