Why is my Ingress IP the same as the Minikube IP? I am not able to access the Minikube IP in my browser - kubernetes-ingress

I have created a Service of type LoadBalancer and tried accessing it using minikube tunnel; that works.
When I create an Ingress for the service, the address I get is the Minikube IP, not the tunnel IP.
My ingress controller Service is of type NodePort:
NAMESPACE              NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default                kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP                      18h
default                springboot                           NodePort    10.103.228.107   <none>        8090:32389/TCP               16h
ingress-nginx          ingress-nginx-controller             NodePort    10.98.92.81      <none>        80:31106/TCP,443:32307/TCP   17h
ingress-nginx          ingress-nginx-controller-admission   ClusterIP   10.99.224.119    <none>        443/TCP                      17h
kube-system            kube-dns                             ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       18h
kubernetes-dashboard   dashboard-metrics-scraper            ClusterIP   10.100.23.18     <none>        8000/TCP                     16h
kubernetes-dashboard   kubernetes-dashboard                 ClusterIP   10.98.172.252    <none>        80/TCP                       16h
I tunnel this using:
minikube service ingress-nginx-controller -n ingress-nginx --url
* Starting tunnel for service ingress-nginx-controller.
|---------------|--------------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|------------------------|
| ingress-nginx | ingress-nginx-controller | | http://127.0.0.1:58628 |
| | | | http://127.0.0.1:58629 |
|---------------|--------------------------|-------------|------------------------|
http://127.0.0.1:58628
http://127.0.0.1:58629
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
I get the URL http://127.0.0.1:58628.
I now apply this Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingresstest
spec:
  rules:
  - host: "ravi.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: springboot
            port:
              number: 8090
But the address exposed on the Ingress is the Minikube IP:
kubectl get ingress
NAME          CLASS    HOSTS      ADDRESS        PORTS   AGE
ingresstest   <none>   ravi.com   192.168.49.2   80      64m
I need the tunnel URL in the Ingress address instead.

Unfortunately, you cannot get the tunnel URL into your Ingress; the Ingress is working as expected.
You can enable the Minikube ingress addon with the command minikube addons enable ingress. After the addon is enabled, Minikube explicitly states: After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1". The tunnel creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP.
So you can install the ingress addon, but unfortunately it won't work the way you want.
You should also know that Minikube is mainly used for testing and learning purposes, so some of its features might not be ideal.
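For reference, a minimal sketch of that workflow, assuming the Ingress host ravi.com from the question (the curl call with an explicit Host header is my addition):
# enable the addon and start the tunnel (keep this terminal open)
minikube addons enable ingress
minikube tunnel

# in a second terminal: the Ingress now answers on 127.0.0.1,
# and the Host header makes the rule for ravi.com match
curl -H "Host: ravi.com" http://127.0.0.1/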

Related

Exposing AKS cluster application using ingress

I am trying to expose my application inside the AKS cluster using the ingress below.
It creates a service and an ingress, but somehow no address is assigned to the ingress. What could be a possible reason for this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dockerdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dockerdemo
  template:
    metadata:
      labels:
        app: dockerdemo
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: dockerdemo
        image: devsecopsacademy/dockerapp:v3
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: dockerdemo-service
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: dockerdemo
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress15
  annotations:
    kubernetes.io/ingress.class: addon-http-application-rounting
spec:
  rules:
  - host: curefirsttestapp.cluster15-dns-c42b65ee.hcp.westeurope.azmk8s.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dockerdemo-service
            port:
              number: 80
Well, first make sure your application is up and functioning inside your K8s cluster, using a port-forward to your localhost:
kubectl -n $NAMESPACE port-forward svc/$SERVICE :$PORT
If the app is reachable and your calls come back with a 200 status, you can move on to the ingress part:
Make sure the ingress controller is properly installed among your services:
kubectl -n $NAMESPACE get svc
Add a DNS record in your DNS zone which maps your domain.com to the ingress controller's $EXTERNAL_IP.
Take a look at the ingress you created for your $SERVICE:
kubectl -n $NAMESPACE get ingress
At this stage, if your application is running properly and the ingress is set up correctly, the app should be reachable through domain.com; otherwise further debugging is needed.
Make sure you have an ingress controller deployed. This is a load balancer service which can have either a public or a private IP depending on your situation.
Make sure you have an ingress definition with a rule that points to your service (see the sketch below). This is the metadata that tells your ingress controller how to route requests to its IP address. These routing rules can also specify how to handle paths (strip, exact, etc.).
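A minimal sketch of such a definition, reusing dockerdemo-service from the question; the ingressClassName and host are assumptions that must match your installed controller and DNS record:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dockerdemo-ingress             # hypothetical name for illustration
spec:
  ingressClassName: nginx              # assumption: must match your controller's class
  rules:
  - host: domain.com                   # assumption: the DNS record mentioned above
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dockerdemo-service   # the Service from the question
            port:
              number: 80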

AKS AGIC Application Gateway Ingress Controller Not Deploying

I created a new cluster, created an application gateway and then installed AGIC per the tutorial. I then configured the ingress controller with the following config:
# This file contains the essential configs for the ingress controller helm chart

# Verbosity level of the App Gateway Ingress Controller
verbosityLevel: 3

################################################################################
# Specify which application gateway the ingress controller will manage
#
appgw:
  subscriptionId: <<subscriptionid>>
  resourceGroup: experimental-cluster-rg
  name: experimental-cluster-ag
  usePrivateIP: false

  # Setting appgw.shared to "true" will create an AzureIngressProhibitedTarget CRD.
  # This prohibits AGIC from applying config for any host/path.
  # Use "kubectl get AzureIngressProhibitedTargets" to view and change this.
  shared: false

################################################################################
# Specify which kubernetes namespace the ingress controller will watch
# Default value is "default"
# Leaving this variable out or setting it to blank or empty string would
# result in Ingress Controller observing all accessible namespaces.
#
# kubernetes:
#   watchNamespace: <namespace>

################################################################################
# Specify the authentication with Azure Resource Manager
#
# Two authentication methods are available:
# - Option 1: AAD-Pod-Identity (https://github.com/Azure/aad-pod-identity)
# armAuth:
#   type: aadPodIdentity
#   identityResourceID: <identityResourceId>
#   identityClientID: <identityClientId>

## Alternatively you can use Service Principal credentials
armAuth:
  type: servicePrincipal
  secretJSON: <<hash>>

################################################################################
# Specify if the cluster is RBAC enabled or not
rbac:
  enabled: true
When I deploy the application and check the gateway, it appears to be updating the gateway through the ingress controller by creating its own settings. The problem seems to be that the application never gets exposed. I checked the health probe and it stated it was unhealthy due to a 404 status. I was unable to access the application directly by IP; I get a 404 or 502 depending on how I try to access the application.
I tried deploying both an nginx and an AGIC ingress, and the nginx one seems to work fine:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aks-seed-ingress-main
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    # appgw.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - agic-cluster.company.com
    - frontend.<ip0>.nip.io
    secretName: zigzypfxtls
  rules:
  - host: agic-cluster.company.com
    http:
      paths:
      - backend:
          serviceName: aks-seed
          servicePort: 80
        path: /
  - host: frontend.<ip0>.nip.io
    http:
      paths:
      - backend:
          serviceName: aks-seed
          servicePort: 80
        path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: aks-seed-ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - frontend.<ip>.nip.io
  rules:
  - host: frontend.<ip>.nip.io
    http:
      paths:
      - backend:
          serviceName: aks-seed # Modify
          servicePort: 80
        path: /
I am unsure what I am missing. I followed the tutorials as best I could, and the AGIC controller and application gateway appear to be communicating. However, the application is inaccessible through the AGIC controller but accessible through the nginx controller. I only installed the nginx controller afterwards to confirm there was no issue with the application itself.
I am facing the same issue. I followed the article below and deployed the resources:
https://learn.microsoft.com/en-us/azure/developer/terraform/create-k8s-cluster-with-aks-applicationgateway-ingress
The Azure ingress pod never reached the Ready state:
NAME                                        READY   STATUS    RESTARTS   AGE
aspnetapp                                   1/1     Running   0          25h
ingress-azure-1616064464-6694ff48f8-pptnp   0/1     Running   0          72s
$ helm list
NAME                       NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                             APP VERSION
ingress-azure-1616064464   default     1          2021-03-18 06:47:45.959459087 -0400 EDT   deployed   ingress-azure-1.4.0               1.4.0
myrelease                  default     1          2021-03-18 05:45:12.419235356 -0400 EDT   deployed   nginx-ingress-controller-7.4.10   0.44.0
From kubectl describe pod I see the message below:
$ kubectl describe pod ingress-azure-1616064464-6694ff48f8-pptnp
Name:       ingress-azure-1616064464-6694ff48f8-pptnp
Namespace:  default
Warning  Unhealthy  4s (x8 over 74s)  kubelet  Readiness probe failed: Get http://15.0.0.68:8123/health/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
$ kubectl get ingress
NAME                            CLASS    HOSTS              ADDRESS       PORTS   AGE
aspnetapp                       <none>   *                                80      10s
cafe-ingress-with-annotations   <none>   cafe.example.com   20.XX.XX.XX   80      63m
Check the health probes. When the health probes behind the ingress controller do not return a code within the accepted default range of 200-399, they will prevent you from accessing the app. Within the Ingress resource YAML (this is important), either change the health probe path from '/' to a proper health endpoint, or widen the accepted range of return codes to 200-500 (for testing purposes only).
Example YAML with health probes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/use-private-ip: "false"
    cert-manager.io/cluster-issuer: letsencrypt
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-500"
spec:
  tls:
  - hosts:
    - dev.mysite.com
    secretName: secret
  rules:
  - host: dev.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: srv-mysite
            port:
              number: 80
Please also check the permissions assigned to the identity. You might be missing the Managed Identity Operator role assignment; please verify it.
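A minimal sketch of verifying and adding that assignment with the Azure CLI; the client ID, subscription, and resource group are placeholders you must substitute:
# list the role assignments the AGIC identity currently has (placeholder ID)
az role assignment list --assignee <agic-identity-client-id> --output table

# grant the Managed Identity Operator role (placeholder scope)
az role assignment create \
  --role "Managed Identity Operator" \
  --assignee <agic-identity-client-id> \
  --scope /subscriptions/<subscription-id>/resourceGroups/<identity-resource-group>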

How do I capture the external IP address of the end user on Voyager/HAProxy/Kubernetes?

I would like to capture the external IP address of clients visiting my application. I am using Kubernetes on AWS/Kops. The ingress setup is Voyager-configured HAProxy, and I am using a LoadBalancer service.
I configured HAProxy through Voyager to add the X-Forwarded-For header by using the ingress.appscode.com/default-option: '{"forwardfor": "true"}' annotation.
The issue is that when I test, the header comes through with an internal IP address of one of my Kubernetes nodes, rather than my external IP as desired.
I'm not sure what load balancer Voyager is using under the covers; there's no associated pod, just one for the ingress controller.
kubectl describe svc voyager-my-app outputs
Name:                     <name>
Namespace:                <namespace>
Labels:                   origin=voyager
                          origin-api-group=voyager.appscode.com
                          origin-name=<origin-name>
Annotations:              ingress.appscode.com/last-applied-annotation-keys:
                          ingress.appscode.com/origin-api-schema: voyager.appscode.com/v1beta1
                          ingress.appscode.com/origin-name: <origin-name>
Selector:                 origin-api-group=voyager.appscode.com,origin-name=<origin-name>,origin=voyager
Type:                     LoadBalancer
IP:                       100.68.184.233
LoadBalancer Ingress:     <aws_url>
Port:                     tcp-443  443/TCP
TargetPort:               443/TCP
NodePort:                 tcp-443  32639/TCP
Endpoints:                100.96.3.204:443
Port:                     tcp-80  80/TCP
TargetPort:               80/TCP
NodePort:                 tcp-80  30263/TCP
Endpoints:                100.96.3.204:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Typically with Kubernetes ingresses, there are a couple of relevant settings:
xff_num_trusted_hops, which specifies the number of hops that are "trusted", i.e. internal. This way you can distinguish between internal and external IP addresses.
You'll also want to make sure you set externalTrafficPolicy: Local in your load balancer Service (you didn't specify what your LB is); with the default Cluster policy, traffic forwarded between nodes is source-NATed, so the client IP is lost. A sketch follows below.
Note I'm mostly familiar with Ambassador (built on Envoy Proxy), which does this by default.
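A minimal sketch of that Service setting, assuming the name and port of the Voyager service above; the selector is an assumption and must match the ingress controller pods:
apiVersion: v1
kind: Service
metadata:
  name: voyager-my-app              # placeholder: the Voyager-managed service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local      # keep the client source IP instead of SNAT
  selector:
    origin: voyager                 # assumption: must match the controller pods
  ports:
  - name: tcp-80
    port: 80
    targetPort: 80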

How to assign a static IP to a pod using Kubernetes on deployment

I am trying to assign a static IP address to a pod on deployment.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aws-test-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: aws-test-mysql
    spec:
      containers:
      - name: aws-test-mysql
        image: 461677341235123.dkr.ecr.us-east-1.amazonaws.com/aws-test-mysql
        securityContext:
          privileged: true
        ports:
        - containerPort: 3306
          hostIP: 172.20.32.50
          hostPort: 3306
        resources:
          requests:
            cpu: 100m
      imagePullSecrets:
      - name: ecrkey
As you can see when I describe my pod, it was created with another IP:
test-mbp1:aws test$ kubectl describe pods | grep IP
IP: 100.96.1.3
I'm trying to deploy a pod with a static IP via "kind: Deployment", not through a Service.
Is this possible?
A static IP cannot be assigned to a Pod because of the dynamic nature of Kubernetes' IP layer.
Since you don't want to attach a Service (which is the best way imho), a close alternative is to convert the Deployment to a StatefulSet, as sketched below. This will give the Pod a stable hostname, which more or less fulfils your requirement.
The first replica of the StatefulSet will be called aws-test-mysql-0 and, through the StatefulSet's headless Service, will be resolvable at aws-test-mysql-0.<service-name>.<namespace>.svc.<kubernetes.cluster.tld>.
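A minimal sketch of that conversion, reusing the image and label from the question; the headless Service name aws-test-mysql is an assumption:
apiVersion: v1
kind: Service
metadata:
  name: aws-test-mysql           # assumption: headless Service for stable DNS
spec:
  clusterIP: None                # headless: each pod gets its own DNS record
  selector:
    app: aws-test-mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: aws-test-mysql
spec:
  serviceName: aws-test-mysql    # must reference the headless Service above
  replicas: 1
  selector:
    matchLabels:
      app: aws-test-mysql
  template:
    metadata:
      labels:
        app: aws-test-mysql
    spec:
      containers:
      - name: aws-test-mysql
        image: 461677341235123.dkr.ecr.us-east-1.amazonaws.com/aws-test-mysql
        ports:
        - containerPort: 3306
      imagePullSecrets:
      - name: ecrkey
The first pod is then reachable at aws-test-mysql-0.aws-test-mysql.default.svc.cluster.local (assuming the default namespace).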

Kubernetes service dns resolution returning wrong IP

I have a simple MySQL pod sitting behind a MySQL service.
Additionally, I have another pod running a Python process that is trying to connect to the MySQL pod.
If I connect to the IP address of the MySQL pod manually from the Python pod, everything is A-OK. However, if I connect through the MySQL service, I get an error that I can't connect to MySQL.
grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe pod mysqlpod
Name:         mysqlpod
Namespace:    default
Node:         minikube/192.168.99.100
Start Time:   Fri, 20 Jan 2017 11:10:50 -0600
Labels:       <none>
Status:       Running
IP:           172.17.0.4
Controllers:  <none>
grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe service mysqlservice
Name:              mysqlservice
Namespace:         default
Labels:            <none>
Selector:          db=mysqllike
Type:              ClusterIP
IP:                None
Port:              <unset>  3306/TCP
Endpoints:         172.17.0.5:3306
Session Affinity:  None
No events.
grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe pod basic-python-model
Name:         basic-python-model
Namespace:    default
Node:         minikube/192.168.99.100
Start Time:   Fri, 20 Jan 2017 12:01:50 -0600
Labels:       db=mysqllike
Status:       Running
IP:           172.17.0.5
Controllers:  <none>
If I attach to my Python container and do an nslookup of the mysqlservice, I'm actually getting the wrong IP. As you saw above, the IP of the mysqlpod is 172.17.0.4, while nslookup mysqlservice resolves to 172.17.0.5.
grant@grant-Latitude-E7450:~/k8s/objects$ k8s exec -it basic-python-model bash
[root@basic-python-model /]# nslookup mysqlservice
Server:    10.0.0.10
Address:   10.0.0.10#53
Name:      mysqlservice.default.svc.cluster.local
Address:   172.17.0.5
I'm fairly new to kubernetes, but I've been banging my head on this issue for a few hours and I can't seem to understand what I'm doing wrong.
So this was exactly the correct behavior; I had just misconfigured my pods.
For future people who are stuck:
The selector defined in a Kubernetes Service must match the labels of the pod(s) you wish to serve. I.e., in my MySqlService.yaml file I have the name selector set to "mysqlpod":
apiVersion: v1
kind: Service
metadata:
  name: mysqlservice
spec:
  clusterIP: None
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysqlpod
Thus in my MySqlPod.yaml file I need an exactly matching label.
kind: Pod
apiVersion: v1
metadata:
  name: mysqlpod
  labels:
    name: mysqlpod
spec:
  ...
For anyone coming here again: please check @gnicholas' answer, but also make sure that clusterIP: None is correctly set.
I happened to indent clusterIP: None too much in the .yml file, so the field was ignored by Kubernetes; a regular clusterIP was then assigned, causing the wrong-IP issue.
Be aware that validation won't throw any error; it will silently ignore it. The sketch below illustrates the difference.
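For illustration, a sketch of the two indentations, reusing the Service from the answer above; per the report here, the misplaced field may be dropped without an error:
# WRONG: clusterIP is indented under the port entry and is silently ignored,
# so the Service gets a regular cluster IP assigned
apiVersion: v1
kind: Service
metadata:
  name: mysqlservice
spec:
  ports:
  - port: 3306
    clusterIP: None   # not a valid field here; may be dropped without an error

# RIGHT: clusterIP sits directly under spec, making the Service headless
apiVersion: v1
kind: Service
metadata:
  name: mysqlservice
spec:
  clusterIP: None     # headless Service: DNS resolves directly to pod IPs
  ports:
  - port: 3306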