K8s Ingress redirecting to an HTTP app running in a VM outside the cluster - Not working

Setup
A VM running outside the k8s cluster hosts the web app on port 5601 (for the time being it is exposed via a public IP).
We access the app using the public IP, and it has a login page and other features. If we hit http://xx.xx.xx.xx:5601/, it redirects to http://xx.xx.xx.xx:5601/app/login, asks for a login, and works properly.
I have a k8s cluster with an ingress controller exposed via a LoadBalancer (LB) that is registered with Cloudflare.
I created a Service with no selectors and created the Endpoints exposing the above port, as shown below:
apiVersion: v1
kind: Endpoints
metadata:
subsets:
- addresses:
  - ip: 10.128.0.18
  ports:
  - name: opensearch
    port: 5601
    protocol: TCP
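For completeness, the selector-less Service paired with these Endpoints would look roughly like the sketch below; the name opensearch is an assumption, but whatever name is used must be identical on the Service and the Endpoints object so Kubernetes links them.
apiVersion: v1
kind: Service
metadata:
  name: opensearch            # assumed name; must match the Endpoints object's name
spec:
  ports:
  - name: opensearch
    protocol: TCP
    port: 5601
    targetPort: 5601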
I created the Ingress pointing to this Service, and I can see that its backend resolves to the VM IP and port.
When I hit this, the request reaches the VM, but then I get a "page not found" error from nginx for /app/login. My understanding is that the redirect does not go back to the VM; instead it comes back to the k8s ingress controller, which has nothing defined for that path.
So my question is: is it possible to expose an external service (a VM or a domain name) that is not a static page via a path (/opensearch)? In other words, a request to the k8s ingress hostname with a path like app.kohls.com/opensearch should be forwarded to VM:port and work properly, including login and everything else.
So here app.kohls.com/opensearch ==> VM:port, but then it redirects back to app.kohls.com/app/login, which does not exist. How do I make everything work against VM:port/app/login and the other paths once the redirect has happened?
I tried the following annotations for nginx:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/cors-allow-origin:
nginx.ingress.kubernetes.io/enable-cors: true
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: true
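Assembled on the Ingress, that would look roughly like the sketch below (the Ingress name, service name and regex path are assumptions reconstructed from the annotations and paths described above; CORS annotations omitted for brevity):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opensearch-ingress                       # assumed name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: app.kohls.com
    http:
      paths:
      - path: /opensearch/(.*)                   # assumed regex; strips the /opensearch prefix
        pathType: ImplementationSpecific
        backend:
          service:
            name: opensearch                     # the selector-less Service backed by the VM Endpoints
            port:
              number: 5601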
Is there any annotation that would make the traffic stay on the backend after the redirect?

Related

kubernetes - egress traffic - what is the source IP for receiving ingress traffic (how to check) - need to randomize

My Setup
GKE / EKS - Managed Kubernetes Cluster
As of now, due to business requirements, it is a k8s cluster with public endpoints.
This means I have a public endpoint for the API server, and the nodes also have external public IP addresses.
nginx ingress is deployed for route-based traffic and exposed as a LoadBalancer-type service.
The LoadBalancer is an internet-facing (external) Network Load Balancer with a public IP address (say 35.200.24.99).
What I want to understand is this:
If my Pod makes a call to an outside API, what source IP will the outside API see? Is it my LoadBalancer IP or the external IP of the node the Pod runs on?
If it receives the LB IP, is there a way to change this behavior so that the node IP is sent instead?
Also, is there any tool or way to check which source IP is used when a Pod makes a request to an outside API?
I could not try out much.
I tried curl requests against the nginx Pod running inside the cluster, but did not get the desired results, or could not figure them out.
If my Pod makes a call to the outside APIs, what will be the source IP that the outside API will receive? Is it my LoadBalancer IP or the Pod Node External IP Address?
If your Pod is sending the request and your cluster is public, the source IP will be the IP of the node on which the Pod is running/scheduled.
If it receives the LB IP, is there a way to change this behavior to send the Pod Node IP Address?
It won't get the LB IP; it will be the IP of the node the Pod is running on. If you want to manage a single outgoing IP, you can use a NAT gateway so that all traffic goes out through a single source IP.
Also, is there any tool or a way to simulate what the source IP is when the Pod makes a request to an outside API?
Exec into the Pod with kubectl exec -it <POD name> -- bash; once you are inside the Pod, run curl ifconfig.me and it will return the IP from which you are hitting the site. Mostly it will be the node's IP.
Consider ifconfig.me an outside API and you will get your result.
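Put together, the check looks like this (the pod name is a placeholder, and the container image is assumed to include bash and curl):
kubectl exec -it <POD name> -- bash
# inside the pod: ask an external service which source IP it sees
curl ifconfig.me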

How can I connect my domain (GoDaddy) to my EKS ALB

I have successfully installed an ingress into my EKS cluster along with all its dependencies, such as the AWS Load Balancer Controller, ServiceAccounts and the rest.
I have also applied the manifest below, which created an application load balancer I can see in my console with the right target; it points to the right service. My issue is that I bought my domain on GoDaddy. Previously I would create a classic load balancer, attach an Elastic IP to it, and add that EIP address to my GoDaddy DNS record; that would route my domain name to the classic ELB.
Now I am trying to route through an ALB.
Ingress Manifest
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing0-test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui0-service
            port:
              number: 80
The image below shows the ALB in the console.
I can also see the target has been picked up properly.
However, when I copy the DNS name of that ALB into the browser, I do not get anything back. I expect to see a site that one of my network load balancers points to through a pod deployment. I am able to paste the network load balancer's DNS name into the browser and see a site, so I expect that, since the ingress is deployed correctly, it should route to that same network load balancer through one of the ingress paths.
So, for instance, let's say the ingress DNS name is k8s-ingXXXXXXX.eu-west-1.elb.amazonaws.com ... I expect this to route me to example.com... Also, how do I map my GoDaddy domain name to this ingress?
You can only add an alias (A) record for the top-level domain. There are two possible ways to solve your issue:
Use the nginx ingress controller instead of the AWS Load Balancer Controller. The nginx ingress controller can create an NLB for your service, which will have a static IP (https://kubernetes.github.io/ingress-nginx/deploy/); see the sketch after these options.
Use a Route 53 hosted zone as the DNS server for your domain. Basically, you need to replace the GoDaddy nameservers with AWS Route 53's nameservers; then you can use an alias record (https://aws.amazon.com/premiumsupport/knowledge-center/route-53-create-alias-records/).
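For the first option, a minimal sketch of exposing the nginx ingress controller through an NLB-backed Service (the aws-load-balancer-type annotation is the one used in the ingress-nginx AWS deployment manifests; the name, namespace and selector below are assumptions based on the standard install):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller                 # assumed name from the standard install
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx        # assumed label from the standard install
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https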

Is it possible to create route for both HTTP and HTTPS with the same host for a service?

I have created a route for HTTP, and when creating another route with the same host, the status of the new route is "Rejected".
Is it possible to create routes for both HTTP and HTTPS with the same host for a service?
I am using the OpenShift sandbox.
I found the issue: it is because the host is already being used by the HTTP route. What I need to do is update the existing route with the TLS settings and set "Insecure traffic" to "Allow":
apiVersion: v1
kind: Route
spec:
  tls:
    insecureEdgeTerminationPolicy: Allow
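A fuller sketch of the updated Route (the route name, host and service name are placeholders, and edge termination is an assumption; the key fields are tls.termination and tls.insecureEdgeTerminationPolicy):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app-route                       # placeholder
spec:
  host: my-app.example.com                 # placeholder
  to:
    kind: Service
    name: my-app                           # placeholder
  tls:
    termination: edge                      # assumption: TLS terminated at the router
    insecureEdgeTerminationPolicy: Allow   # plain HTTP on the same host is allowed through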

Kubernetes ingress and client auth application

I have a Spring Boot application.
I secured it using Spring Security and X.509 authentication as described here.
So far, so good... everything works like a charm.
Now I need to deploy it as a Kubernetes app. Is it possible?
When I use K8s, the ingress controller "consumes" the certificate and it is missing in my app... Is that so? Can I configure it to leave the certificate intact so that I can find it in my HttpServletRequest attributes?
Thank you
Angelo
As pointed out in the comments:
Hello, have you considered to use service of type LoadBalancer to send the traffic to your Pods without any facilities to "consume" your certificate? Also if you are using nginx-ingress you could look on SSL passthrough: kubernetes.github.io/ingress-nginx/user-guide/tls/…
Having a connection between a Client and a Pod in Kubernetes without "consuming" the certificate can be done by either:
Service of type LoadBalancer
Ingress controller with a SSL Passthrough
Service of type LoadBalancer
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
-- Kubernetes.io: Service: LoadBalancer
You can configure a service that will expose your traffic externally on Layer 4 (TCP/UDP). The traffic will be routed to your desired workload (Deployment/StatefulSet).
Example:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 443
    targetPort: 443
  type: LoadBalancer
Ingress controller with a SSL Passthrough
You can also use an Ingress controller capable of SSL Passthrough. One of the controllers with this feature is ingress-nginx:
SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.
This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.
-- Kubernetes.github.io: Ingress nginx: User guide: TLS: SSL passthrough
Remember!
The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default.
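With that flag enabled on the controller, passthrough is requested per Ingress with the nginx.ingress.kubernetes.io/ssl-passthrough annotation; a minimal sketch (host and service name are placeholders for the Spring Boot app):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: x509-passthrough             # placeholder
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: app.example.com            # placeholder; passthrough routes on the SNI hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: springboot-app     # placeholder Service in front of the Spring Boot Pods
            port:
              number: 443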
More as a workaround, you can also look at the following (there is an example in the link):
Exposing TCP and UDP services
Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
-- Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp and udp services
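A sketch of such a ConfigMap (names and ports are placeholders; the controller must also be started with --tcp-services-configmap pointing at it):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "8443": "default/springboot-app:443"   # external port 8443 -> service springboot-app, port 443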

k8s nginx Ingress takes my node IP as Address

I have a 3-node k8s cluster in my virtual environment, which is VMware Fusion.
When I try to create a basic Ingress, it takes as its Address the IP of the one node that the nginx controller is running on.
But port 80 is not open on the nodes; I mean, it is not working:
curl: (7) Failed to connect to 172.16.242.133 port 80: Connection refused
What am I missing?
I installed the Nginx Ingress Controller.
I installed MetalLB and configured it. It works if I create a service with type: LoadBalancer; it gets an external IP and I can access it.
I deployed a basic app for testing.
I created a service for the app. I can access it via NodePort or ClusterIP; I tried both.
I created a basic Ingress to manage hosts and routing. But this is the step where I am stuck.
My questions:
1) Normally, what IP should the Ingress take as its Address? One of my nodes, or an external DHCP IP?
2) When I create a service with type: LoadBalancer, it gets an external IP. I can create a DNS record for this IP and clients can access it. What is wrong with that?
An ingress controller can be exposed with two service types: NodePort and LoadBalancer.
When using the NodePort service type, you should use the NodePort number instead of the default port 80. An explanation of this behavior is available in the nginx ingress documentation:
However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
So your curl should look like this:
curl 172.16.242.133:<node_port_number>
When you use MetalLB with the LoadBalancer service type, it takes external IPs from the configuration you specified when installing MetalLB in the cluster.
More information about using the nginx ingress controller with MetalLB is available in the nginx documentation:
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use, you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
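For reference, that pool is defined roughly like this in the legacy ConfigMap-based MetalLB configuration (the address range below is only an example and must be free addresses on your VMware Fusion network):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.242.200-172.16.242.220   # example range; must not overlap node or DHCP addresses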
My problem was:
I thought the Ingress takes the IP and we point DNS records at that IP, but that is not the case. Why the Ingress object has Address and Port fields, I do not know; just for information, I guess, but it is confusing for newbies.
Clients access the Ingress Controller, not the Ingress.
Actually, the Ingress Controller's Service manages the external IP or NodePort, so that is what we have to configure.
In my case nginx:
kubectl edit service/ingress-nginx-controller -n ingress-nginx
You can change the type to LoadBalancer and you will get an external IP once MetalLB is configured. Then define your Ingress objects, create DNS records, and you are ready.
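The relevant part of the edited Service is just the type field; a sketch of the change:
# excerpt of the ingress-nginx-controller Service after editing
spec:
  type: LoadBalancer   # MetalLB now assigns an external IP from its pool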