Application Gateway Ingress Controller: not able to access the application through the Ingress Controller, but able to access through the LoadBalancer external IP - kubernetes-ingress

I have created an Ingress Controller for my deployment. I was able to access the application for some time, but when I tried again later I could not reach it through the Ingress. At the same time, I could still access the application via the LoadBalancer external IP. Can someone please help here?
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lable
  name: label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: label
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: label
    spec:
      containers:
      - image: <Image Name>
        name: label
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: "UAT"
        - name: EnvironmentName
          value: "UAT"
        volumeMounts:
        - name: mst-storage
          mountPath: /home/appuser/.aspnet/DataProtection-Keys
      volumes:
      - name: mst-storage
        emptyDir: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: label
spec:
  selector:
    app: label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  type: LoadBalancer
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: label
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          service:
            name: label
            port:
              number: 80
        pathType: Prefix
      - path: /
        backend:
          service:
            name: label
            port:
              number: 80
        pathType: Prefix

Are you still facing this issue? You can always check the Application Gateway Ingress Controller logs: the controller constantly monitors changes in the AKS pods and passes that information to the Application Gateway so that both components stay in sync. If there are any issues, those logs will clearly show the error information.
Example command:
kubectl logs ingress-appgw-deployment-* -n kube-system
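Since the pod name suffix is generated, you can look up the exact name first and then inspect that pod's logs (a small sketch; <pod-name> is a placeholder for the name you find):
kubectl get pods -n kube-system | grep ingress-appgw
kubectl logs <pod-name> -n kube-system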
Also, in the above deployment YAML file, can you kindly check the section under metadata -> labels? It is written as app: lable (a spelling mistake?):
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lable
Compare that with the Service YAML (spec -> selector -> app: label):
spec:
  selector:
    app: label
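If that is indeed a typo, aligning the Deployment's metadata label with the rest would look like this (a minimal sketch of the corrected section):
kind: Deployment
metadata:
  labels:
    app: label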

Related

How can I expose my EKS microservices via the nginx ingress controller

I am trying to use microservices with my frontend application through the nginx ingress controller.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/aws/deploy.yaml
Above is the command we followed to deploy the nginx controller.
Reference - https://kubernetes.github.io/ingress-nginx/deploy/#aws
------ My deployment.yaml & service.yaml for integrations-api are as below -------
'''
apiVersion: apps/v1
kind: Deployment
metadata:
  name: integrations-api
  labels:
    app: integrations-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: integrations-api
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: integrations-api
    spec:
      containers:
      - image: "###imagepath####"
        imagePullPolicy: Always
        name: integrations-api
        ports:
        - containerPort: 8083
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: integrations-api
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp,http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "###certpath###"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: integrations-api
  ports:
  - name: http
    port: 80
    targetPort: 8083
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8083
    protocol: TCP
'''
------ My ingress.yaml looks like this --------
'''
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: integrations-api
          servicePort: 80
      - path: /
        backend:
          serviceName: user-api
          servicePort: 80
'''
In my Node.js integrations-api code, we have added a test API path as below:
'''
app.get('/camps', (req, res) => {
  let obj = {};
  res.send(obj);
});
'''
When I visit the nginx controller endpoint (here it is the load balancer endpoint) https://####NLB-endpoint###/camps,
I get a response.
The same configuration (deployment.yaml, service.yaml, and Node.js code) is written for the user-api service, but I am not getting a response for user-api at
https://####NLB-endpoint###/users
Note: when I shuffle the ingress file as below, I get a response for https://####NLB-endpoint###/users but not for https://####NLB-endpoint###/camps. It looks like the ingress is only honoring whichever path is listed first.
'''
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: user-api
          servicePort: 80
      - path: /
        backend:
          serviceName: integrations-api
          servicePort: 80
'''
Any clue how I can fix this?
Thanks in advance. It would be a great help if someone could guide us on this.
This response may be too late. You are trying to create path-based routing for multiple microservices. In that case, you need to set a distinct path per service: for user-api, set /users as the path, and for integrations-api, specify another path such as /camps.
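For illustration, a minimal sketch of such an ingress (assuming each app already serves its own prefix, /camps on integrations-api and /users on user-api, so the rewrite-target annotation is intentionally dropped):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /camps
        backend:
          serviceName: integrations-api
          servicePort: 80
      - path: /users
        backend:
          serviceName: user-api
          servicePort: 80
With distinct prefixes, nginx can route each request unambiguously instead of matching whichever duplicate path it evaluates first.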

Problem with ALB Ingress Controller redirecting to the right path

I have set up an ALB (Application Load Balancer) using the Ingress Controller (version docker.io/amazon/aws-alb-ingress-controller:v1.1.8) for my AWS EKS cluster (v1.20) running with a Fargate profile.
I can access my service using the load balancer link:
http://5e07dbe1-default-nginxingr-29e9-1260427999.us-east-1.elb.amazonaws.com/
I have 2 different services configured in my Ingress, as shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-014b302d73097d083
    # alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:195725532069:certificate/b6a9e691-b807-4f10-a0bf-0449730ecdf4
    # alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    # alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # alb.ingress.kubernetes.io/load-balancer-attributes: "60"
    # alb.ingress.kubernetes.io/rewrite-target: /
  labels:
    app: nginx-ingress
spec:
  rules:
  - http:
      paths:
      # - path: /*
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: ssl-redirect
      #       port:
      #         number: use-annotation
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: "nginx-service"
            port:
              number: 80
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: "mydocker-svc"
            port:
              number: 8080
Now the problem: if I put /foo at the end of the LB link, nothing happens and I get a 404 Not Found error.
Both my services are fine, with their respective Pods running behind their respective Kubernetes NodePort services, but they are not accessible through the Ingress. If I swap the path for the other service (nginx-service) from /foo to /*, I can then access that one, but it breaks my previous service (mydocker-svc).
Please let me know where I'm making the mistake so that I can fix this issue. Thank you.
ALB controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --cluster-name=eks-fargate-alb-demo
        - --aws-vpc-id=vpc-0dc46d370e38de475
        - --aws-region=us-east-1
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.8
      serviceAccountName: alb-ingress-controller
Nginx service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
mydocker-svc:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  name: mydocker-svc
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  type: NodePort
status:
  loadBalancer: {}
Target groups become unhealthy if an annotation like alb.ingress.kubernetes.io/target-type: ip is missing from the Kubernetes NodePort service.
You can try this out; below is one I am using as a reference:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  annotations:
    # Ingress core settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health check settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    # Important note: health check path annotations need to be added at the service level if we plan to use multiple targets in a load balancer
    # alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  rules:
  - http:
      paths:
      - path: /app1/*
        backend:
          serviceName: app1-nginx-nodeport-service
          servicePort: 80
      - path: /app2/*
        backend:
          serviceName: app2-nginx-nodeport-service
          servicePort: 80
      - path: /*
        backend:
          serviceName: usermgmt-restapp-nodeport-service
          servicePort: 8095
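As the note in the annotations says, when several targets share one ALB the health check path moves to the service level. A hedged sketch of what that could look like (service name and health check path are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
  - port: 80
    targetPort: 80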
Read more at: https://www.stacksimplify.com/aws-eks/aws-alb-ingress/kubernetes-aws-alb-ingress-context-path-based-routing/

AWS EKS ALB ingress controller is not working, and HOST is not populating

I am trying to implement a simple "hello world" on EKS with the ALB ingress controller.
My goal is to:
Create a cluster
Deploy an Ingress to access the app through an ELB
The following has been done:
Created an EKS cluster
Added the alb ingress controller
C:\workspace\eks>kubectl get po -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
alb-ingress-controller-5f96d7df77-mdrw2   1/1     Running   0          4m1s
Created the application as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "2048-deployment"
  namespace: "2048-game"
  labels:
    app: "2048"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "2048"
  template:
    metadata:
      labels:
        app: "2048"
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: "2048"
        ports:
        - containerPort: 80
The Service is as follows:
apiVersion: v1
kind: Service
metadata:
  name: "service-2048"
  namespace: "2048-game"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "2048"
The Ingress resource is as below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "2048-ingress"
  namespace: "2048-game"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: 2048-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "service-2048"
          servicePort: 80
The output is as below; the ADDRESS is not getting populated with an ELB host, and the app is not accessible from outside:
C:\sample>kubectl get ingress/2048-ingress -n 2048-game
NAME           HOSTS   ADDRESS   PORTS   AGE
2048-ingress   *                 80      71s
Update:
Found the following error in the alb-ingress-controller-5f96d7df77-mdrw2 logs.
Not able to find out how to fix this:
kubebuilder/controller "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due to failed to resolve 2 qualified subnet for ALB. Subnets must contains these tags: 'kubernetes.io/cluster/ascluster': ['shared' or 'owned'] and 'kubernetes.io/role/elb': ['' or '1']. See https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/config/#subnet-auto-discovery for more details. Resolved qualified subnets: '[]'" "controller"="alb-ingress-controller" "request"={"Namespace":"default","Name":"ingress-default-dev"}
The subnets where the EKS nodes reside should be tagged as described in
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging
If your subnets are not tagged with kubernetes.io/cluster/<cluster-name>=shared etc.,
you can also try passing the subnets in the ingress annotations, like below:
alb.ingress.kubernetes.io/subnets: subnet-xxxxxx, subnet-xxxxxx
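If you prefer fixing the tags themselves, a minimal AWS CLI sketch (the subnet IDs are placeholders; the cluster name ascluster is taken from the error message above):
aws ec2 create-tags --resources subnet-xxxxxx subnet-yyyyyy \
  --tags Key=kubernetes.io/cluster/ascluster,Value=shared Key=kubernetes.io/role/elb,Value=1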

Reference nginx-ingress host name to internal service

After creating all the required resources and configuring the nginx-ingress controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-api
  strategy: {}
  template:
    metadata:
      labels:
        app: user-api
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: user-api
        image: doumeyi/user-api-amd64:1.0
        ports:
        - name: user-api
          containerPort: 3000
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: user-api
spec:
  selector:
    app: user-api
  ports:
  - name: user-api
    port: 3000
    targetPort: 3000
  type: LoadBalancer
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /user-api
        backend:
          serviceName: user-api
          servicePort: 3000
I can see that example.com shows the 404 Not Found page, but example.com/user-api does not show any message from the user-api service I built.
It seems the nginx-ingress cannot resolve the host name to the internal service. How should I fix it?
If NGINX cannot find a route to your Pod, it should not respond with 404; instead it should give a 502 (Bad Gateway), AFAIK. I assume the 404 is from your application.
NGINX-Ingress changed this behavior in 0.22, as mentioned here.
The Ingress resource should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /user-api(/|$)(.*)
        backend:
          serviceName: user-api
          servicePort: 3000
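To sanity-check the rewrite, a request like this should work (a hedged example; <ingress-ip> stands for your nginx-ingress controller's external IP, and /hello is an illustrative route in user-api):
curl -H "Host: example.com" http://<ingress-ip>/user-api/hello
The second capture group in /user-api(/|$)(.*) becomes $2, so /user-api/hello is rewritten to /hello before it is proxied to the user-api service.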

How to set INGRESS_HOST and INGRESS_PORT and access GATEWAY_URL

How do I set INGRESS_HOST and INGRESS_PORT for a sample YAML file that had its Istio sidecar added via automatic sidecar injection?
I am using a Windows 10 - Docker - Kubernetes - Istio configuration, with kubectl and istioctl installed at their respective versions.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v1
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v2
  labels:
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v2
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5010
I am getting 503 Service Temporarily Unavailable when trying to hit my sample service.
Please first verify that your selector labels are correct and that your Service is connected to the Deployment's Pods.
You have 'version: v1' and 'version: v2' on the Deployments but not on the Service; that is why the Service returns 503 Unavailable. If the issue were in the Pod or Service itself, it would give a 502 Bad Gateway or something similar.
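A quick way to verify the Service-to-Pod wiring (a small sketch using the labels from the manifests above):
kubectl get endpoints helloworld          # an empty ENDPOINTS column means the selector matches no Pods
kubectl get pods -l app=helloworld --show-labels
If the endpoints list is populated, the 503 is more likely caused by the routing config below.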
Istio traffic flows like:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5000 # <--- change
Welcome to SO, @Sreedhar!
How to set INGRESS_HOST and INGRESS_PORT
These two environment variables are not adjustable inside the manifest files (static files) that you use to create Deployments/Pods on the K8S cluster. They serve just as placeholders to ease end-user access to the application just deployed on an Istio-enabled Kubernetes cluster from outside. The values of INGRESS_HOST/INGRESS_PORT are filled out based on information that is auto-generated by the cluster during creation of cluster resources and is available only in the live objects.
Where the ingress takes its IP address from, you can read in the official documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
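For completeness, the Istio docs derive these values from the live istio-ingressgateway Service, roughly as follows (a sketch assuming a LoadBalancer-type gateway service; on Docker Desktop the external IP is typically localhost):
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT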
For the Bad Gateway issue, as suggested previously by @Harsh Manvar, you have specified an invalid port in the VirtualService (5010 instead of 5000).