EKS Ingress does not show created ALB Address - kubernetes-ingress

In my EKS cluster, I am using the AWS Load Balancer Controller to watch the cluster and create an ALB when Ingress resources are created (this is working correctly, configured through annotations).
I am trying to use External-DNS to update the Route 53 entry so that the hostname in the Ingress routes to the ALB created by the Load Balancer Controller. The ALB is created, but the ADDRESS field on the Ingress is empty; it should contain the DNS name of the ALB that gets created.
NAMESPACE   NAME           CLASS    HOSTS           ADDRESS   PORTS   AGE
test-ns     test-ingress   <none>   *.example.com             8080    1d
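A rough sketch of such an Ingress is below; the names and host match the output above, while the annotation values and the backend service are illustrative assumptions, not the actual manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: test-ns
  annotations:
    kubernetes.io/ingress.class: alb            # picked up by the AWS Load Balancer Controller
    alb.ingress.kubernetes.io/scheme: internal  # assumption; matches the internal ALB seen in the logs below
    alb.ingress.kubernetes.io/target-type: ip   # assumption
spec:
  rules:
    - host: "*.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-service              # hypothetical backend service
                port:
                  number: 8080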
I believe the empty ADDRESS field is causing external-dns to think that all entries are already synced, as the external-dns logs repeatedly show the following:
level=debug msg="No endpoints could be generated from service kube-system/core-dns"
level=debug msg="No endpoints could be generated from ingress test-ns/test-ingress"
level=debug msg="Refreshing zones list cache"
level=debug msg="Considering zone: /hostedzone/123123123123 (domain: example.com)"
level=info msg="Applying provider record filter for domains: [example.com]"
level=debug msg="Skipping endpoint *.example.com 300 IN CNAME internal-alb-testing.us-west-1.elb.amazonaws.com [] because owner id does not match, found: \"\", required: \"externaldns\""
level=debug msg="Refreshing zones list cache"
level=debug msg="Considering zone: /hostedzone/123123123123 (domain: example.com)"
level=info msg="All records are already up to date"
level=debug msg="Refreshing zones list cache"
level=debug msg="Considering zone: /hostedzone/123123123123 (domain: example.com)"
level=info msg="All records are already up to date"

The WAF settings on the ALB were blocking the connection, so I needed to add the flags below to the AWS Load Balancer Controller:
--enable-waf=false
--enable-wafv2=false
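These are container args on the controller Deployment. A hedged sketch of the relevant part of the aws-load-balancer-controller Deployment; the deployment and namespace names assume the standard install, so adjust them to yours:
# kubectl -n kube-system edit deployment aws-load-balancer-controller
spec:
  template:
    spec:
      containers:
        - name: aws-load-balancer-controller
          args:
            - --cluster-name=<ClusterName>   # existing arg, kept as-is
            - --enable-waf=false
            - --enable-wafv2=false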

Related

Nginx ingress is not able to get real IP with Cloudflare DNS (no proxy)

I am trying to get the Client IP (or Real IP).
I am using a third-tier cloud provider with basic services, and it includes a load balancer.
My current nginx-ingress controller config is:
data:
allow-snippet-annotations: "true"
ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
real_ip_recursive: "on"
real-ip-header: "X-Real-IP"
use-proxy-protocol: "false"
compute-full-forwarded-for: "true"
use-forwarded-headers: "true"
And yes, I already turned on the setting below in the service:
externalTrafficPolicy: Local
My ingress resource has this:
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "X-Forwarded-For $remote_addr";
However, I have tried many of the options above, on and off, and I could not retrieve the client IP when a request comes in; Nginx ingress always gives me the node's private IPv4 as the X-Forwarded-For header and remote_addr. Note that if I turn the Cloudflare proxy on, it works fine; if I turn the Proxied option off, it returns the private IPv4 of the k8s node. Because I manage DNS in other tools as well, using the Cloudflare proxy is not always an option for me.
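For reference, the settings above consolidated into a single ingress-nginx controller ConfigMap might look like the sketch below; the metadata name and namespace depend on how the controller was installed, and the keys are carried over from the question as written:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumption: default name from the standard install
  namespace: ingress-nginx         # assumption
data:
  allow-snippet-annotations: "true"
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
  real_ip_recursive: "on"          # key carried over exactly as written in the question
  real-ip-header: "X-Real-IP"
  use-proxy-protocol: "false"
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"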

ingress Failed build model due to couldn't auto-discover subnets: unable to discover at least one subnet

I am getting the error "ingress Failed build model due to couldn't auto-discover subnets: unable to discover at least one subnet" while deploying an ingress in EKS.
Steps already taken:
The cluster name is correct in the deployment file.
Below are the annotations I am using in the Ingress resource file:
annotations:
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
kubernetes.io/ingress.class: alb
kubernetes.io/role/internal-elb: 1
alb.ingress.kubernetes.io/subnets: subnet-xxxx, subnet-yyy, subnet-zzz
kubernetes.io/cluster/<ClusterName>: owned ---> (I am using correct cluster name)
Key point:
I am using private subnets in EKS; the subnets were created separately with the proper tags.
2. Below are the annotations I am using in the Ingress resource file
...
kubernetes.io/role/internal-elb: 1
...
kubernetes.io/cluster/<ClusterName>: owned ---> (I am using correct cluster name)
The above are tags, not annotations. Try tagging the 3 subnets from your question in the AWS console with kubernetes.io/role/internal-elb: 1 and kubernetes.io/cluster/<ClusterName>: owned, so that the LB controller can discover them.
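If you prefer the CLI over the console, tagging could look roughly like this sketch (subnet IDs copied from the question; replace <ClusterName> with the real cluster name):
aws ec2 create-tags \
  --resources subnet-xxxx subnet-yyy subnet-zzz \
  --tags Key=kubernetes.io/role/internal-elb,Value=1 \
         Key=kubernetes.io/cluster/<ClusterName>,Value=owned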

Pod level route restriction

EDITED:
I have a service running in OpenShift on 2 pods, let's call them P1 and P2.
The service does two things:
An API
We listen to Kafka messages from a topic and then process them.
Is there a way I can restrict all calls made to the API to only P1, and all Kafka processing to only P2?
My suggestion may not fit your requirements, but if each pod runs in its own project, it can be done as follows.
First, configure each pod's source IP statically using an egress IP at the project level; refer to Enabling Static IPs for External Project Traffic for more details.
$ oc patch netnamespace p1_project -p '{"egressIPs": ["1.1.1.1"]}'
$ oc patch netnamespace p2_project -p '{"egressIPs": ["2.2.2.2"]}'
After that, you can allow each pod's IP through a route-specific whitelist; refer to Route-specific IP Whitelists for more details.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: R1
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 1.1.1.1
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: R2
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 2.2.2.2
I hope it helps you.

route to application stopped working in OpenShift Online 3.9

I have an application running in OpenShift Online Starter, which has worked for the last 5 months: a single pod behind a service, with a route defined that does edge TLS termination.
Since Saturday, when trying to access the application, I get the error message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The pod is running: I can exec into it and check this, and I can port-forward to it and access it.
Checking the different components with oc:
$ oc get po -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP             NODE
taboo3-23-jt8l8   1/1     Running   0          1h    10.128.37.90   ip-172-31-30-113.ca-central-1.compute.internal
$ oc get svc
NAME     CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
taboo3   172.30.238.44   <none>        8080/TCP   151d
$ oc describe svc taboo3
Name:              taboo3
Namespace:         sothawo
Labels:            app=taboo3
Annotations:       openshift.io/generated-by=OpenShiftWebConsole
Selector:          deploymentconfig=taboo3
Type:              ClusterIP
IP:                172.30.238.44
Port:              8080-tcp 8080/TCP
Endpoints:         10.128.37.90:8080
Session Affinity:  None
Events:            <none>
$ oc get route
NAME     HOST/PORT                                                     PATH   SERVICES   PORT       TERMINATION     WILDCARD
taboo3   taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com           taboo3     8080-tcp   edge/Redirect   None
I tried to add a new route as well (with and without TLS, roughly as sketched below), but am getting the same error.
Does anybody have an idea what might be causing this and how to fix it?
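For reference, recreating such an edge route would look roughly like the sketch below; the service, hostname, and Redirect policy are taken from the oc get route output above, while the route name and exact flags are assumptions:
oc create route edge taboo3-new \
  --service=taboo3 \
  --hostname=taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com \
  --insecure-policy=Redirect
# "taboo3-new" is a hypothetical name to avoid clashing with the existing route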
Addition, April 17, 2018: I got an email from OpenShift Online support:
It looks like you may be affected by this bug.
So waiting for it to be resolved.
The problem has been resolved by OpenShift Online; the application is working again.

Openshift Origin registry: how to make it accessible?

We are setting up a test OpenShift Origin cloud, which we created using the openshift-ansible playbook. We are following the documentation at: https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html
We have not done anything special concerning the OpenShift registry or router.
We are pretty new to this topic, and we have been trying for some time to make the OpenShift registry accessible.
We have 3 hosts:
master (unschedulable)
node-1, which is set to the region 'infra' and runs the registry and router services
node-2 (other region)
Here are the services running in the default project:
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    172.30.78.66     <none>        5000/TCP                  3h
kubernetes         172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     3h
registry-console   172.30.190.63    <none>        9000/TCP                  3h
router             172.30.197.135   <none>        80/TCP,443/TCP,1936/TCP   3h
When we SSH directly into node-1, where the registry and router are running, we can access the registry without problems and push some images. Exactly what is described here: docs.openshift.org/latest/install_config/registry/accessing_registry.html
But we cannot access the registry from the other hosts (master or node-2), and we really do not understand how to make the registry accessible. We have of course read: docs.openshift.org/latest/install_config/registry/securing_and_exposing_registry.html#access-insecure-registry-by-exposing-route
We have used this command:
oc expose service docker-registry --hostname=<hostname> -n default
The documentation says: You must be able to resolve this name externally via DNS to the router’s IP address.
As the router does not have any EXTERNAL-IP address attached to it, we do not understand how to reach it.
Is there any oc or oadm command for exposing the router through an external-ip address?
Thanks a lot in advance
Emmanuel
Based on your stated configuration, I would expect the path to your OpenShift UI/API (openshift.yourdomain.com) to be routed to the same IP as your node-1, because that is where you are running the router.
If that is the case, then you would point the hostname you are passing via the command below to that same IP in DNS, or add it as a CNAME to that host.
oc expose service docker-registry --hostname=<hostname> -n default
In a larger setup with a dedicated set of load balancer (LB) nodes, you might have a specific A record for that set. You could then make the hostname a CNAME to that record.
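For example, if node-1's public IP were 203.0.113.10 (an illustrative address) and you exposed the registry as registry.yourdomain.com, the DNS records might look like either of these:
; point the registry hostname straight at the node running the router
registry.yourdomain.com.   IN  A      203.0.113.10
; or alias it to the node's existing DNS name
registry.yourdomain.com.   IN  CNAME  node-1.yourdomain.com.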