502 Bad Gateway on WorkerNode without the pod - kubernetes-ingress

I am trying to understand Nginx-Ingress on a K8S cluster.
I set up the nginx-ingress controller based on the instructions here.
My cluster has 3 nodes:
kubernetes-master, kubernetes-node1, kubernetes-node2
I have an IoT pod running with 1 replica (on kubernetes-node1). I have created a ClusterIP service for accessing this pod via REST. Below is the manifest.
apiVersion: v1
kind: Namespace
metadata:
  name: myiotgarden
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ioteventshandler
  namespace: myiotgarden
  labels:
    app: ioteventshandler
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ioteventshandler
  template:
    metadata:
      labels:
        app: ioteventshandler
    spec:
      containers:
      - name: ioteventshandler
        image: 192.168.56.105:5000/ioteventshandler:latest
        resources:
          limits:
            memory: "1024M"
          requests:
            memory: "128M"
        imagePullPolicy: "Always"
---
apiVersion: v1
kind: Service
metadata:
  name: iotevents-service
  namespace: myiotgarden
  labels:
    app: iotevents-service
spec:
  selector:
    app: ioteventshandler
  ports:
  - port: 8080
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ioteventshandler-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  namespace: myiotgarden
spec:
  rules:
  - host: kubernetes-master
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
  - host: kubernetes-node1
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
  - host: kubernetes-node2
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
I have haproxy running on the kmaster node, configured with both knode1 and knode2 as worker nodes.
frontend http_front
    bind *:80
    stats uri /haproxy?stats
    default_backend http_back

backend http_back
    balance roundrobin
    server worker1 192.168.56.207:80
    server worker2 192.168.56.208:80
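As a side note, haproxy only takes a dead worker out of rotation if health checks are enabled; a minimal sketch, assuming the ingress controller answers HTTP on port 80 of each node (the /healthz path is an assumption and may instead be served on the controller's separate health port, e.g. 10254):
backend http_back
    balance roundrobin
    option httpchk GET /healthz
    server worker1 192.168.56.207:80 check
    server worker2 192.168.56.208:80 check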
When there are 2 replicas of the ioteventshandler running, the command below works fine:
curl -kL http://kubernetes-master/iotEvents/sonoff/sonoff11
I get back the response perfectly.
However, when I reduce the replicas to 1, the curl command intermittently returns 502 Bad Gateway. I am assuming this happens when haproxy forwards the request to knode2, where no replica of ioteventshandler is running.
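To confirm that assumption, one can check which node the single replica actually landed on (the namespace comes from the manifest above):
kubectl -n myiotgarden get pods -o wide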
Question:
Is there a way for the ingress controller on knode2 to forward the request to knode1? And how do we do it?
Thank you.

Related

AWS EKS service ingress and ALB --no ADDRESS

I seem to be having an issue with the way my ports are set up in this manifest, which is a simple Go app. The app is configured to listen on port 3000.
This container runs fine on my local machine (localhost:3000), but I get no ADDRESS when I look at the Ingress (k get ingress ...).
I am getting an error logged in the AWS aws-load-balancer-controller log when I try to run this image on EKS:
controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"fiber-demo","namespace":"demo","error":"ingress: demo/fiber-demo: unable to find port 3000 on service demo/fiber-demo"
This is my k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fiber-demo
  template:
    metadata:
      labels:
        app: fiber-demo
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: fiber-demo
        image: 240195868935.dkr.ecr.us-east-2.amazonaws.com/fiber-demo:0.0.2
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  selector:
    app: fiber-demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fiber-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: fiber-demo
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: fiber-demo
          servicePort: 3000
Am I simply not able to specify a targetPort other than port 80 in the Service?
backend.servicePort refers to the port exposed by the Service, not the container.
...
backend:
  serviceName: fiber-demo # <-- ingress for this Service, not the container
  servicePort: 80 # <-- the port the Service exposes, not the port the container exposes
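To double-check which port mapping the Service actually exposes before wiring it into the Ingress, something like this can be run (namespace and name taken from the manifests above):
kubectl -n demo get svc fiber-demo
The PORT(S) column should show 80/TCP, which is the value servicePort must reference.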

How can I expose my EKS microservices via the nginx ingress controller

I am trying to use microservices with my frontend application through the nginx ingress controller.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/aws/deploy.yaml
The above is the command we followed to deploy the nginx-controller.
reference - https://kubernetes.github.io/ingress-nginx/deploy/#aws
------ My deployment.yaml & service.yaml for integrations-api are as below -------
'''
apiVersion: apps/v1
kind: Deployment
metadata:
  name: integrations-api
  labels:
    app: integrations-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: integrations-api
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: integrations-api
    spec:
      containers:
      - image: "###imagepath####"
        imagePullPolicy: Always
        name: integrations-api
        ports:
        - containerPort: 8083
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: integrations-api
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp,http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "###certpath###"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: integrations-api
  ports:
  - name: http
    port: 80
    targetPort: 8083
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8083
    protocol: TCP
'''
------ My ingress.yaml looks like this --------
'''
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: integrations-api
          servicePort: 80
      - path: /
        backend:
          serviceName: user-api
          servicePort: 80
'''
In my Node.js integrations-api code we have added a test API path as below:
'''
app.get('/camps', (req, res) => {
  let obj = {}
  res.send(obj);
});
'''
When I visit the endpoint of the nginx-controller (here it is the load balancer endpoint) https://####NLB-endpoint###/camps
I get a response.
The same configuration (deployment.yaml, service.yaml & Node.js code) is written for the user-services API, but I am not getting a response for user-api at
https://####NLB-endpoint###/users
Note: when I shuffle the ingress file as below, I get a response for https://####NLB-endpoint###/users but not for https://####NLB-endpoint###/camps. It looks like the ingress only honors the path that is listed first.
'''
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: user-api
          servicePort: 80
      - path: /
        backend:
          serviceName: integrations-api
          servicePort: 80
'''
Any clue how I can fix this?
Thanks in advance. It would be a great help if someone could guide us on this.
This response may be too late. You are trying to create path-based routing across multiple microservices. In this case, you need to set a distinct path per service: for user-api, set /users as the path, and for integrations-api, specify another path like /camps. A sketch follows below.
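A minimal sketch of that idea, assuming each app already serves its own prefix (/camps and /users, as in the Node.js snippet above), so the rewrite-target annotation is dropped instead of rewriting every request to /:
'''
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /camps
        backend:
          serviceName: integrations-api
          servicePort: 80
      - path: /users
        backend:
          serviceName: user-api
          servicePort: 80
'''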

Problem with ALB Ingress Controller in redirecting to right path

I have set up an ALB (Application Load Balancer) using the Ingress Controller (version: docker.io/amazon/aws-alb-ingress-controller:v1.1.8) for my AWS EKS cluster (v1.20) running with a Fargate profile.
I can access my service using the load balancer link:-
http://5e07dbe1-default-nginxingr-29e9-1260427999.us-east-1.elb.amazonaws.com/
I have 2 different services configured in my Ingress as shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-014b302d73097d083
    # alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:195725532069:certificate/b6a9e691-b807-4f10-a0bf-0449730ecdf4
    # alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    # alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # alb.ingress.kubernetes.io/load-balancer-attributes: "60"
    # alb.ingress.kubernetes.io/rewrite-target: /
  labels:
    app: nginx-ingress
spec:
  rules:
  - http:
      paths:
      # - path: /*
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: ssl-redirect
      #       port:
      #         number: use-annotation
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: "nginx-service"
            port:
              number: 80
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: "mydocker-svc"
            port:
              number: 8080
Now the problem: if I put /foo at the end of the LB link, nothing happens and I get a 404 not found error.
Both of my services are fine, with their respective Pods running behind their respective Kubernetes NodePort services, but they are not accessible through the Ingress. If I swap the path for the other service (nginx-service) from /foo to /*, I can then access that one, but it breaks the previous service (mydocker-svc).
Please let me know where I made a mistake so that I can fix this issue. Thank you.
ALB Controller:-
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --cluster-name=eks-fargate-alb-demo
        - --aws-vpc-id=vpc-0dc46d370e38de475
        - --aws-region=us-east-1
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.8
      serviceAccountName: alb-ingress-controller
Nginx service:-
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
mydocker-svc:-
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  name: mydocker-svc
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  type: NodePort
status:
  loadBalancer: {}
TargetGroups become unhealthy if the annotation alb.ingress.kubernetes.io/target-type: ip is missing from the Kubernetes NodePort service.
You can try this one out; I am using it as a reference:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  annotations:
    # Ingress Core Settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    # Important Note: Need to add health check path annotations at the service level if we are planning to use multiple targets in a load balancer
    # alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  rules:
  - http:
      paths:
      - path: /app1/*
        backend:
          serviceName: app1-nginx-nodeport-service
          servicePort: 80
      - path: /app2/*
        backend:
          serviceName: app2-nginx-nodeport-service
          servicePort: 80
      - path: /*
        backend:
          serviceName: usermgmt-restapp-nodeport-service
          servicePort: 8095
Read more at: https://www.stacksimplify.com/aws-eks/aws-alb-ingress/kubernetes-aws-alb-ingress-context-path-based-routing/

Reference nginx-ingress host name to internal service

After creating all the required resources and configuring the nginx-ingress controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-api
  strategy: {}
  template:
    metadata:
      labels:
        app: user-api
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: user-api
        image: doumeyi/user-api-amd64:1.0
        ports:
        - name: user-api
          containerPort: 3000
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: user-api
spec:
  selector:
    app: user-api
  ports:
  - name: user-api
    port: 3000
    targetPort: 3000
  type: LoadBalancer
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /user-api
        backend:
          serviceName: user-api
          servicePort: 3000
I can see that example.com shows the 404 not found page, but example.com/user-api does not show any of the messages I built into the user-api service.
It seems the nginx-ingress cannot resolve the host name to the internal service; how should I fix it?
If NGINX cannot find a route to your Pod, it should not respond with 404; instead it should give a 502 (Bad Gateway), AFAIK. I assume the 404 is from your application.
NGINX-Ingress changed its rewrite behavior in 0.22, as mentioned here.
The ingress resource should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /user-api(/|$)(.*)
        backend:
          serviceName: user-api
          servicePort: 3000
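With the capture-group rewrite above, the prefix is stripped before the request reaches the pod; a hypothetical check (the placeholder <ingress-address> stands for the controller's external address):
# /user-api/users would be rewritten to /users inside the cluster
curl -H "Host: example.com" http://<ingress-address>/user-api/users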

How to set INGRESS_HOST and INGRESS_PORT and access GATEWAY_URL

How do I set INGRESS_HOST and INGRESS_PORT for a sample yaml file whose Istio resources were created using automatic sidecar injection?
I am using a Windows 10 - Docker - Kubernetes - Istio configuration, with kubectl and istioctl installed at their respective versions.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v1
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v2
  labels:
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v2
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5010
I am getting 503 Service Temporarily Unavailable when trying to hit my sample service.
Please first verify that your selector labels are correct and that your service is connected to the deployment's Pods.
You have 'version: v1' and 'version: v2' in the deployment selectors but not in the service. That is why the service is returning 503 Unavailable; if the issue were in the pod or the service, it would give a 502 Bad Gateway or similar.
Istio traffic flows like:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5000 # <--- change
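A quick way to verify that the Service is actually connected to the pods is to look at its endpoints; an empty list means the selector matches no pod labels (the default namespace is assumed):
kubectl get endpoints helloworld
kubectl get pods --show-labels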
Welcome to SO, @Sreedhar!
How to set INGRESS_HOST and INGRESS_PORT
These two environment variables are not adjustable inside the manifest files (static files) that you use to create Deployments/Pods on a K8S cluster. They serve just as placeholders to give end users easy access to the application deployed on an Istio-enabled Kubernetes cluster from outside. The values of INGRESS_HOST/INGRESS_PORT are filled in from information that is auto-generated by the cluster during creation of cluster resources and available only in live objects.
You can read where the ingress gets its IP address from in the official documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
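As a sketch of how these variables are typically filled in from the live istio-ingressgateway Service (following the Istio docs; the port name http2 is the usual name of the HTTP listener, and on a cluster without a LoadBalancer IP you would use a node address and node port instead):
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl -s "http://${GATEWAY_URL}/hello"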
For the Bad Gateway issue, as suggested previously by @Harsh Manvar, you have specified an invalid port in the VirtualService (5010 instead of 5000).