How to set INGRESS_HOST and INGRESS_PORT and access GATEWAY_URL - kubernetes-ingress

How do I set INGRESS_HOST and INGRESS_PORT for a sample YAML file whose Istio sidecar is created using automatic sidecar injection?
I am using a Windows 10 - Docker - Kubernetes - Istio setup, with kubectl and istioctl installed at their respective versions.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v1
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v2
  labels:
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v2
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5010
I am getting 503 Service Temporarily Unavailable when trying to hit my newly created sample service.

Please first verify that your selector labels are correct and that your Service is connected to the Deployment's Pods.
You have 'version: v1' and 'version: v2' on the Deployments but not on the Service; that's why the Service is returning 503 Unavailable. If the issue were in the Pod or Service itself, it would give a 502 Bad Gateway or something similar.
Istio traffic flows like:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5000 # <--- change
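Before applying the change, it can also help to confirm that the Service actually resolves to the Pods. A quick sanity check, using the resource names from the manifests above:
# pods selected by the Service's app=helloworld selector
kubectl get pods -l app=helloworld --show-labels
# an empty ENDPOINTS column here means the selector matches nothing
kubectl get endpoints helloworld
# compare the Service port with the containerPort (5000) of the pods
kubectl describe svc helloworld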

Welcome to SO, @Sreedhar!
How to set INGRESS_HOST and INGRESS_PORT
These two environment variables are not something you set inside the manifest files (static files) that you use to create Deployments and Pods on the K8S cluster. They serve just as placeholders to make it easier for end users to reach, from outside, the application just deployed on an Istio-enabled Kubernetes cluster. The values of INGRESS_HOST/INGRESS_PORT are filled in from information that is auto-generated by the cluster during creation of cluster resources and is available only in the live objects.
Where the ingress takes its IP address from is described in the official documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
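For reference, the Istio documentation derives these variables from the live istio-ingressgateway Service. A sketch of that approach (the port name and the LoadBalancer vs. NodePort variant depend on your Istio version and environment, so treat this as a starting point rather than the exact commands for your setup):
# external IP of the ingress gateway (LoadBalancer case)
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# port named http2 on the ingress gateway Service
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
# then hit the route defined in the VirtualService
curl http://$GATEWAY_URL/hello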
For the gateway error, as suggested previously by @Harsh Manvar, you have specified an invalid port in the VirtualService (5010 instead of 5000).

Related

AWS EKS service ingress and ALB --no ADDRESS

I seem to be having an issue with the way my ports are set up in this manifest, which is for a simple Go app. The app is configured to listen on port 3000.
This container runs fine on my local machine (localhost:3000), but I get no ADDRESS when I look at the Ingress (k get ingress ...).
I am getting an error logged in the AWS aws-load-balancer-controller log when I try to run this image on EKS:
controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"fiber-demo","namespace":"demo","error":"ingress: demo/fiber-demo: unable to find port 3000 on service demo/fiber-demo"
This is my k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: fiber-demo
namespace: demo
labels:
app: fiber-demo
spec:
replicas: 1
selector:
matchLabels:
app: fiber-demo
template:
metadata:
labels:
app: fiber-demo
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- arm64
containers:
- name: fiber-demo
image: 240195868935.dkr.ecr.us-east-2.amazonaws.com/fiber-demo:0.0.2
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: fiber-demo
namespace: demo
labels:
app: fiber-demo
spec:
selector:
app: fiber-demo
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: fiber-demo
namespace: demo
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
labels:
app: fiber-demo
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: fiber-demo
servicePort: 3000
Am I simply not able to specify a targetPort other than port 80 in the Service?
Am I simply not able to specify a targetPort other than port 80 in the Service?
backend.servicePort refers to the port exposed by the Service, not by the container.
...
backend:
  serviceName: fiber-demo # <-- the Ingress points at this Service, not at the container
  servicePort: 80         # <-- the port the Service exposes, not the port the container exposes
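To double-check which port the Service actually exposes versus the targetPort it forwards to, something like this helps (names taken from the manifests above):
# PORT(S) should show 80/TCP; the controller error mentions 3000 because
# the Ingress pointed at a service port that does not exist
kubectl -n demo get svc fiber-demo
kubectl -n demo describe svc fiber-demo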

Application Gateway Ingress Controller: I am not able to access the application through the Ingress controller, but was able to access it through the LB External IP

I have created an Ingress for my deployment and was able to access it for some time, but when I tried again a while later I could no longer access the application. However, I was able to access the application with the LoadBalancer External IP at the same time. Can someone please help here?
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: lable
name: label
spec:
replicas: 1
selector:
matchLabels:
app: label
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: label
spec:
containers:
- image: <Image Name>
name: label
env:
- name: ASPNETCORE_ENVIRONMENT
value: "UAT"
- name: EnvironmentName
value: "UAT"
volumeMounts:
- name: mst-storage
mountPath: /home/appuser/.aspnet/DataProtection-Keys
volumes:
- name: mst-storage
emptyDir: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: label
spec:
selector:
app: label
ports:
- protocol: TCP
port: 80
targetPort: 5000
type: LoadBalancer
status:
loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: label
annotations:
kubernetes.io/ingress.class: azure/application-gateway
spec:
rules:
- http:
paths:
- path: /api/*
backend:
service:
name: label
port:
number: 80
pathType: Prefix
- path: /api/*
backend:
service:
name: label
port:
number: 80
pathType: Prefix
- path: /
backend:
service:
name: label
port:
number: 80
pathType: Prefix
Are you still facing this issue? You can always check the Application Gateway Ingress Controller logs; the controller constantly monitors changes in the AKS Pods and passes that information to the Application Gateway so that both components stay in sync. If there are any issues, those logs will clearly show the error information.
Example Command:
kubectl logs ingress-appgw-deployment-* -n kube-system
Also, in the above Deployment YAML file, can you kindly check the section below under metadata -> labels? It is given as app: lable (is there a spelling mistake?)
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: lable
Compare that with the Service YAML file (spec -> selector -> app: label):
spec:
selector:
app: label
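Whether or not that spelling ends up mattering, it is worth confirming that the Service actually resolves to the Pods. A quick check, assuming the resource names from the manifests above:
# pods carrying the label the Service selects on
kubectl get pods -l app=label --show-labels
# an empty ENDPOINTS column here means the selector matches no pods
kubectl get endpoints label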

AWS EKS ALB ingress controller is not working, and HOST is not populating

I am trying to implement a simple "hello world" on EKS with the ALB ingress controller.
My goal is to:
Create a cluster
Deploy an Ingress to access using ELB
The following things have been done:
Created EKS cluster
added "alb ingress controller"
C:\workspace\eks>kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
alb-ingress-controller-5f96d7df77-mdrw2 1/1 Running 0 4m1s
Created application as below
apiVersion: apps/v1
kind: Deployment
metadata:
name: "2048-deployment"
namespace: "2048-game"
labels:
app: "2048"
spec:
replicas: 1
selector:
matchLabels:
app: "2048"
template:
metadata:
labels:
app: "2048"
spec:
containers:
- image: alexwhen/docker-2048
imagePullPolicy: Always
name: "2048"
ports:
- containerPort: 80
The Service is as follows:
apiVersion: v1
kind: Service
metadata:
name: "service-2048"
namespace: "2048-game"
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
type: NodePort
selector:
app: "2048"
The Ingress is as below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "2048-ingress"
namespace: "2048-game"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
labels:
app: 2048-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: "service-2048"
servicePort: 80
The output is as below; I am not getting the ELB host address and am not able to access it from outside.
C:\sample>kubectl get ingress/2048-ingress -n 2048-game
NAME HOSTS ADDRESS PORTS AGE
2048-ingress * 80 71s
Update:
Found the following error in the alb-ingress-controller-5f96d7df77-mdrw2 logs.
Not able to find out how to fix it.
kubebuilder/controller "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due to failed to resolve 2 qualified subnet for ALB. Subnets must contains these tags: 'kubernetes.io/cluster/ascluster': ['shared' or 'owned'] and 'kubernetes.io/role/elb': ['' or '1']. See https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/config/#subnet-auto-discovery for more details. Resolved qualified subnets: '[]'" "controller"="alb-ingress-controller" "request"={"Namespace":"default","Name":"ingress-default-dev"}
The subnets where the EKS nodes reside should be tagged as described here:
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging
If your subnets are not tagged with kubernetes.io/cluster/<cluster-name>=shared etc.,
you can also try passing the subnets in the Ingress annotations like below:
alb.ingress.kubernetes.io/subnets: subnet-xxxxxx, subnet-xxxxxx
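As a sketch of the tagging approach (placeholder subnet IDs and cluster name; adjust them to your VPC), the tags can be added with the AWS CLI:
# tag the subnets used by the worker nodes so the ALB ingress controller can discover them
aws ec2 create-tags --resources subnet-xxxxxx subnet-yyyyyy \
  --tags Key=kubernetes.io/cluster/<cluster-name>,Value=shared \
         Key=kubernetes.io/role/elb,Value=1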

502 Bad Gateway on WorkerNode without the pod

I am trying to understand the Nginx-Ingress on the K8S cluster.
I set up the nginx-ingress controller based on the instructions here
My cluster has 3 nodes
kubernetes-master, kubernetes-node1, kubernetes-node2
I have an IoT Pod running with 1 replica (on kubernetes-node1). I have created a ClusterIP Service for accessing this pod via REST. Below is the manifest.
apiVersion: v1
kind: Namespace
metadata:
name: myiotgarden
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ioteventshandler
namespace: myiotgarden
labels:
app: ioteventshandler
spec:
replicas: 2
selector:
matchLabels:
app: ioteventshandler
template:
metadata:
labels:
app: ioteventshandler
spec:
containers:
- name: ioteventshandler
image: 192.168.56.105:5000/ioteventshandler:latest
resources:
limits:
memory: "1024M"
requests:
memory: "128M"
imagePullPolicy: "Always"
---
apiVersion: v1
kind: Service
metadata:
name: iotevents-service
namespace: myiotgarden
labels:
app: iotevents-service
spec:
selector:
app: ioteventshandler
ports:
- port: 8080
protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ioteventshandler-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
namespace: myiotgarden
spec:
rules:
- host: kubernetes-master
http:
paths:
- path: /iotEvents/sonoff
backend:
serviceName: iotevents-service
servicePort: 8080
- host: kubernetes-node1
http:
paths:
- path: /iotEvents/sonoff
backend:
serviceName: iotevents-service
servicePort: 8080
- host: kubernetes-node2
http:
paths:
- path: /iotEvents/sonoff
backend:
serviceName: iotevents-service
servicePort: 8080
I have HAProxy running on the master node, configured with both knode1 and knode2 as worker nodes.
frontend http_front
bind *:80
stats uri /haproxy?stats
default_backend http_back
backend http_back
balance roundrobin
server worker 192.168.56.207:80
server worker 192.168.56.208:80
When there are 2 replicas of the ioteventshandler running, the below command works fine.
curl -kL http://kubernetes-master/iotEvents/sonoff/sonoff11
I get back the response perfectly.
However, when I reduce the replicas to 1, the curl command intermittently returns 502 Bad Gateway. I am assuming this happens when HAProxy forwards the request to knode2, where a replica of ioteventshandler is not running.
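One way to check that assumption, using the names from the manifest above, is to see where the single replica actually landed and whether the Service still has a backing endpoint:
# shows which node the remaining ioteventshandler replica is scheduled on
kubectl -n myiotgarden get pods -o wide
# shows the endpoints backing the Service
kubectl -n myiotgarden get endpoints iotevents-service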
Question:
Is there a way for the ingress controller on knode2 to forward the request to knode1? And how do we do that?
Thank you.

One GCE ingress on GKE is causing a different GCE ingress to serve default backend

I am using external-DNS, for extra background.
I set up one service, deployment, and ingress for application "A," and it all works as expected and I can reach application A at the specified URL. Then I set up a similar thing for application "B," and now I can reach application B, but if I hit the URL specified for application A, I get the default backend - 404 message. I haven't seen this issue before; what is the problem? Below are the service, deployment, and ingress manifests for A and for B:
A:service:
apiVersion: v1
kind: Service
metadata:
name: my-app-A
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: 3000
- name: http
port: 80
protocol: TCP
targetPort: 3000
selector:
run: my-app-A
type: NodePort
A:deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-app-A
spec:
replicas: 1
template:
metadata:
labels:
run: my-app-A
spec:
containers:
- name: my-app-A
image: this-is-my-docker-image
imagePullPolicy: Always
envFrom:
- secretRef:
name: my-app-A-secrets
- configMapRef:
name: my-app-A-configmap
ports:
- containerPort: 3000
A:ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-app-A
annotations:
external-dns.alpha.kubernetes.io/hostname: "A.myurl.com"
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.allow-http: "true"
spec:
rules:
- host: "A.myurl.com"
http:
paths:
- path: /*
backend:
serviceName: my-app-A
servicePort: 80
- host: "my-app-A-namespace.clusterbase.myurl.com"
http:
paths:
- path: /*
backend:
serviceName: my-app-A
servicePort: 80
For the manifests for B, replace all instances of "A" with "B", and replace external-dns.alpha.kubernetes.io/hostname: "A.myurl.com" with just external-dns.alpha.kubernetes.io/hostname: "myurl.com".
The problem was that the namespace+ingress name was too long, and the resources that get created in the background ended up with the same name, since they have a 64-character limit and the unique part was truncated. I filed a bug here that explains it in more detail.
https://github.com/kubernetes/ingress-gce/issues/537
You will hit this issue if the first 64 characters of <namespace>-<ingress> are not unique.
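A quick way to eyeball whether two ingresses would collide under that limit (placeholder names; substitute your own namespace and ingress names):
# if these two prefixes come out identical, the generated GCE resources will clash
echo -n "<namespace-A>-<ingress-A>" | cut -c1-64
echo -n "<namespace-B>-<ingress-B>" | cut -c1-64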