How do I capture the external IP address of the end user on Voyager/HAProxy/Kubernetes? - kubernetes-ingress

I would like to capture the external IP address of clients visiting my application. I am using Kubernetes on AWS/Kops. The ingress setup is Voyager-configured HAProxy, and I am using the LoadBalancer service type.
I configured HAProxy through Voyager to add the X-Forwarded-For header by using the ingress.appscode.com/default-option: '{"forwardfor": "true"}' annotation.
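For reference, that annotation sits under metadata.annotations on the Voyager Ingress object, roughly like this (a sketch with placeholder names and hosts):
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: my-app                      # placeholder
  namespace: default                # placeholder
  annotations:
    # adds "option forwardfor" to the generated HAProxy config
    ingress.appscode.com/default-option: '{"forwardfor": "true"}'
spec:
  rules:
    - host: my-app.example.com      # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app-svc   # placeholder backend Service
              servicePort: 80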
The issue is that when I test, the header comes through with the internal IP address of one of my Kubernetes nodes rather than my external IP as desired.
I'm not sure what load balancer Voyager is using under the covers; there's no associated pod, just one for the ingress controller.
kubectl describe svc voyager-my-app outputs:
Name: <name>
Namespace: <namespace>
Labels: origin=voyager
origin-api-group=voyager.appscode.com
origin-name=<origin-name>
Annotations: ingress.appscode.com/last-applied-annotation-keys:
ingress.appscode.com/origin-api-schema: voyager.appscode.com/v1beta1
ingress.appscode.com/origin-name: <origin-name>
Selector: origin-api-group=voyager.appscode.com,origin-name=<origin-name>,origin=voyager
Type: LoadBalancer
IP: 100.68.184.233
LoadBalancer Ingress: <aws_url>
Port: tcp-443 443/TCP
TargetPort: 443/TCP
NodePort: tcp-443 32639/TCP
Endpoints: 100.96.3.204:443
Port: tcp-80 80/TCP
TargetPort: 80/TCP
NodePort: tcp-80 30263/TCP
Endpoints: 100.96.3.204:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Typically with Kubernetes ingresses, there are a couple of relevant settings:
- xff_num_trusted_hops, which specifies the number of hops that are "trusted", i.e., internal. This lets you distinguish between internal and external IP addresses.
- externalTrafficPolicy: Local, which you'll want to set on your load balancer Service (you didn't specify what your LB is); see the sketch below.
Note I'm mostly familiar with Ambassador (built on Envoy Proxy), which does this by default.
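As a rough sketch of the second point, assuming the Voyager-managed LoadBalancer Service from the describe output above (names are placeholders; the field itself is standard Kubernetes):
apiVersion: v1
kind: Service
metadata:
  name: voyager-my-app              # placeholder; Voyager generates and manages this Service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local      # keep traffic on the receiving node so the client source IP is preserved
  selector:
    origin: voyager
    origin-api-group: voyager.appscode.com
    origin-name: <origin-name>
  ports:
    - name: tcp-80
      port: 80
      targetPort: 80
    - name: tcp-443
      port: 443
      targetPort: 443
Your describe output shows External Traffic Policy: Cluster, which can SNAT traffic to a node IP before it reaches HAProxy; that matches the internal node address you're seeing in X-Forwarded-For. Since Voyager owns this Service, check the Voyager docs for the Ingress annotation that sets this field rather than editing the Service directly.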

Related

AKS AGIC Application Gateway Ingress Controller Not Deploying

I created a new cluster, created an application gateway and then installed AGIC per the tutorial. I then configured the ingress controller with the following config:
# This file contains the essential configs for the ingress controller helm chart

# Verbosity level of the App Gateway Ingress Controller
verbosityLevel: 3

################################################################################
# Specify which application gateway the ingress controller will manage
#
appgw:
  subscriptionId: <<subscriptionid>>
  resourceGroup: experimental-cluster-rg
  name: experimental-cluster-ag
  usePrivateIP: false

  # Setting appgw.shared to "true" will create an AzureIngressProhibitedTarget CRD.
  # This prohibits AGIC from applying config for any host/path.
  # Use "kubectl get AzureIngressProhibitedTargets" to view and change this.
  shared: false

################################################################################
# Specify which kubernetes namespace the ingress controller will watch
# Default value is "default"
# Leaving this variable out or setting it to blank or empty string would
# result in Ingress Controller observing all accessible namespaces.
#
# kubernetes:
#   watchNamespace: <namespace>

################################################################################
# Specify the authentication with Azure Resource Manager
#
# Two authentication methods are available:
# - Option 1: AAD-Pod-Identity (https://github.com/Azure/aad-pod-identity)
# armAuth:
#   type: aadPodIdentity
#   identityResourceID: <identityResourceId>
#   identityClientID: <identityClientId>

# Alternatively you can use Service Principal credentials
armAuth:
  type: servicePrincipal
  secretJSON: <<hash>>

################################################################################
# Specify if the cluster is RBAC enabled or not
rbac:
  enabled: true
When I deploy the application and check the gateway, the ingress controller appears to be updating the gateway and creating its own settings. The problem is that the application never gets exposed. I checked the health probe, and it reported the backend as unhealthy due to a 404 status. I was unable to access the application directly by IP; I get a 404 or 502 depending on how I try to access it.
I tried deploying both an NGINX and an AGIC ingress, and the NGINX one works fine:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aks-seed-ingress-main
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    # appgw.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - agic-cluster.company.com
        - frontend.<ip0>.nip.io
      secretName: zigzypfxtls
  rules:
    - host: agic-cluster.company.com
      http:
        paths:
          - backend:
              serviceName: aks-seed
              servicePort: 80
            path: /
    - host: frontend.<ip0>.nip.io
      http:
        paths:
          - backend:
              serviceName: aks-seed
              servicePort: 80
            path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: aks-seed-ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - frontend.<ip>.nip.io
  rules:
    - host: frontend.<ip>.nip.io
      http:
        paths:
          - backend:
              serviceName: aks-seed # Modify
              servicePort: 80
            path: /
I am unsure what I am missing. I followed the tutorials as best I could, and the AGIC controller and application gateway appear to be communicating. However, the application is inaccessible through the AGIC controller but accessible through the NGINX controller. I only installed the NGINX controller afterwards to confirm there was no issue with the application itself.
I am facing the same issue. I followed the article below and deployed the resources:
https://learn.microsoft.com/en-us/azure/developer/terraform/create-k8s-cluster-with-aks-applicationgateway-ingress
The Azure ingress pod never reached the Ready state:
NAME                                        READY   STATUS    RESTARTS   AGE
aspnetapp                                   1/1     Running   0          25h
ingress-azure-1616064464-6694ff48f8-pptnp   0/1     Running   0          72s
$ helm list
NAME                       NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                             APP VERSION
ingress-azure-1616064464   default     1          2021-03-18 06:47:45.959459087 -0400 EDT   deployed   ingress-azure-1.4.0               1.4.0
myrelease                  default     1          2021-03-18 05:45:12.419235356 -0400 EDT   deployed   nginx-ingress-controller-7.4.10   0.44.0
From kubectl describe pod I see the message below:
$ kubectl describe pod ingress-azure-1616064464-6694ff48f8-pptnp
Name: ingress-azure-1616064464-6694ff48f8-pptnp
Namespace: default
Warning Unhealthy 4s (x8 over 74s) kubelet Readiness probe failed: Get http://15.0.0.68:8123/health/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
$ kubectl get ingress
NAME                            CLASS    HOSTS              ADDRESS       PORTS   AGE
aspnetapp                       <none>   *                                80      10s
cafe-ingress-with-annotations   <none>   cafe.example.com   20.XX.XX.XX   80      63m
Check the health probes. When the health probe responses are not within the default accepted return code range of 200-399, Application Gateway marks the backend unhealthy and you will not be able to reach the app. Within the Ingress YAML (this is important), either change the health probe path from '/' to a proper health endpoint, or widen the accepted range of return codes to 200-500 (for testing purposes).
Example YAML with health probes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/use-private-ip: "false"
    cert-manager.io/cluster-issuer: letsencrypt
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-500"
spec:
  tls:
    - hosts:
        - dev.mysite.com
      secretName: secret
  rules:
    - host: dev.mysite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: srv-mysite
                port:
                  number: 80
Also check the permissions assigned to the identity; you might be missing the Managed Identity Operator role assignment.

Windows Jenkins Slave unable to connect to master hosted on Openshift instance

I'm unable to connect to the Jenkins master hosted on an OpenShift cluster. The connection terminates with the error below after handshaking:
may 23, 2020 2:05:55 PM hudson.remoting.jnlp.Main$CuiListener error
GRAVE: Failed to connect to jenkins-jnlp-poc:50000
java.io.IOException: Failed to connect to jenkins-jnlp-poc:50000
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:246)
at hudson.remoting.Engine.connectTcp(Engine.java:678)
at hudson.remoting.Engine.innerRun(Engine.java:556)
at hudson.remoting.Engine.run(Engine.java:488)
Caused by: java.net.ConnectException: Connection timed out: connect
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:204)
... 3 more
I added a route to the jenkins-jnlp service but I'm not able to expose the port. I've been trying to configure a NodePort but I couldn't achieve it yet. Any help will be welcome!
Thanks.
A Route will only work with HTTP/HTTPS traffic, so it will not help in this case; as you correctly noted, a NodePort is most likely what you want. Here is an example of a Service of type NodePort using port 32000:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp-poc-service
spec:
  selector:
    app: jenkins-jnlp-poc
  type: NodePort
  ports:
    - name: jnlp
      port: 50000
      targetPort: 50000
      nodePort: 32000
      protocol: TCP
Note that you may need to change multiple parts of the Service:
- The port and targetPort, which specify the port the Service "listens" on and where traffic is forwarded to (typically the port your container exposes)
- The selector, which determines which Pods are targeted (check which labels your Pods use and adjust accordingly)

Expose TCP Ports outside the cluster

How can a service that does not use HTTP/S be exposed in OpenShift 3.11 or 4.x?
I think Routes only support HTTP/S traffic.
I have read about using the ExternalIP configuration for services, but that makes operating the cluster more complicated and static compared to Routes/Ingress.
For example, the NGINX ingress controller allows it with special configuration: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
What are the options in Openshift 3.11 or 4.x?
Thank you.
There is a section in the official OpenShift documentation for this called Getting Traffic Into the Cluster.
The recommendation, in order or preference, is:
- If you have HTTP/HTTPS, use a router.
- If you have a TLS-encrypted protocol other than HTTPS (for example, TLS with the SNI header), use a router.
- Otherwise, use a Load Balancer, an External IP, or a NodePort.
NodePort exposes the Service on each Node's IP at a static port (30000-32767)[0].
You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
apiVersion: v1
kind: Service
metadata:
  name: nodeport
spec:
  type: NodePort
  ports:
    - name: "8080"
      protocol: "TCP"
      port: 8080
      targetPort: 80
      nodePort: 30000
  selector:
    labelName: targetname

Connection to MySQL (AWS RDS) in Istio

We have an issue where connecting to AWS RDS from inside the Istio service mesh results in upstream connect error or disconnect/reset before headers.
Our egress rule is as below:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: <our-namespace>
  name: rds-egress-rule-with
spec:
  destination:
    service: <RDS End point>
  ports:
    - port: 80
      protocol: http
    - port: 443
      protocol: https
    - port: 3306
      protocol: https
The connection works fine with a standalone MySQL instance on EC2. The connection to AWS RDS works fine without Istio. The problem only occurs inside the Istio service mesh.
We are using Istio with mutual TLS disabled.
The protocol in your EgressRule definition should be tcp, since MySQL traffic is neither HTTP nor HTTPS. The service field should contain the IP address, or a range of IP addresses in CIDR notation, of the RDS endpoint; see the sketch below.
Alternatively, you can use the --includeIPRanges flag of istioctl kube-inject to specify which IP ranges are handled by Istio. Istio will not interfere with IP addresses that are not included and will simply let that traffic pass through.
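A minimal sketch of the corrected rule, assuming the RDS endpoint resolves into a hypothetical VPC range of 172.31.0.0/16 (substitute your actual CIDR):
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: <our-namespace>
  name: rds-egress-rule-with
spec:
  destination:
    # the RDS endpoint's IP range in CIDR notation (hypothetical value)
    service: 172.31.0.0/16
  ports:
    - port: 3306
      protocol: tcp    # MySQL speaks plain TCP, not http/https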
References:
https://istio.io/latest/blog/2018/egress-tcp/
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services

Why does my kubernetes service endpoint IP change every time I update the pods?

I have a kubernetes service called staging that selects all app=jupiter pods. It exposes an HTTP service on port 1337. Here's the describe output:
$ kubectl describe service staging
Name: staging
Namespace: default
Labels: run=staging
Selector: app=jupiter
Type: NodePort
IP: 10.11.255.80
Port: <unnamed> 1337/TCP
NodePort: <unnamed> 30421/TCP
Endpoints: 10.8.0.21:1337
Session Affinity: None
No events.
But when I run kubectl rolling-update on the RC, which removes the one pod running the application and adds another, and then run describe again, I get:
$ kubectl describe service staging
Name: staging
Namespace: default
Labels: run=staging
Selector: app=jupiter
Type: NodePort
IP: 10.11.255.80
Port: <unnamed> 1337/TCP
NodePort: <unnamed> 30421/TCP
Endpoints: 10.8.0.22:1337
Session Affinity: None
No events.
Everything is the same, except for the Endpoint IP address. In fact, it goes up by 1 every time I do this. This is the one thing I expected not to change, since services are an abstraction over pods, so they shouldn't change when the pods change.
I know you can hardcode the endpoint address, so this is more of a curiosity.
Also, can anyone tell me what the IP field in the describe output is for?
IP is the address of your service, which remains constant over time. Endpoints is the collection of backend addresses across which requests to the service address are spread at a given point in time. That collection changes every time the set of pods comprising your service changes, as you've noticed when performing a rolling update on your replication controller (RC).
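For illustration, the Endpoints object that backs this Service would look roughly like this after the rolling update (a sketch based on the describe output above, not literal cluster output):
apiVersion: v1
kind: Endpoints
metadata:
  name: staging            # always shares the Service's name
subsets:
  - addresses:
      - ip: 10.8.0.22      # current pod IP; replaced whenever the backing pod is replaced
    ports:
      - port: 1337
        protocol: TCP
The Service IP (10.11.255.80 here) is the stable address to use; kube-proxy forwards connections to whatever pod addresses are currently listed in this object.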