Is it ok to install multiple nginx-ingress controllers in a single Kubernetes namespace? - kubernetes-ingress

In AKS, we have a requirement to install two nginx controllers inside a single Kubernetes namespace. In fact, we need to assign each controller a dedicated IP and DNS address. Will this cause any conflicts between the controllers? Is there any best practice regarding having multiple nginx controllers in a namespace?

Your question can be answered with the help of the official documentation:
When running NGINX Ingress Controller, you have the following options with regards to which configuration resources it handles:
Cluster-wide Ingress Controller (default). The Ingress Controller handles configuration resources created in any namespace of the cluster. As NGINX is a high-performance load balancer capable of serving many applications at the same time, this option is used by default in our installation manifests and Helm chart.
Single-namespace Ingress Controller. You can configure the Ingress Controller to handle configuration resources only from a particular namespace, which is controlled through the -watch-namespace command-line argument. This can be useful if you want to use different NGINX Ingress Controllers for different applications, both in terms of isolation and/or operation.
Ingress Controller for Specific Ingress Class. This option works in conjunction with either of the options above. You can further customize which configuration resources are handled by the Ingress Controller by configuring the class of the Ingress Controller and using that class in your configuration resources. See the section Configuring Ingress Class.
You may deploy any number of ingress controllers within a cluster. When you create an ingress, you should annotate each ingress with the appropriate ingress.class to indicate which ingress controller should be used if more than one exists within your cluster.
The main idea is that the multiple Ingress controllers can co-exist and key off the ingress.class annotation.
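As a hedged sketch of that pattern (the class names, namespace, host and backend Service below are placeholders, not taken from the question): each controller is started with its own class, typically via an ingress-class command-line argument or the equivalent Helm value, and each Ingress resource then selects its controller. With the networking.k8s.io/v1 API this is done through spec.ingressClassName; older resources used the kubernetes.io/ingress.class annotation quoted above.

# Hypothetical Ingress handled only by the controller started with class "nginx-internal"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-internal
  namespace: shared-namespace          # placeholder namespace
spec:
  ingressClassName: nginx-internal     # class of the first controller (placeholder)
  rules:
  - host: app.internal.example.com     # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc              # placeholder backend Service
            port:
              number: 80

A second Ingress would set ingressClassName to whatever class the second controller was started with; since each controller is exposed through its own Service of type LoadBalancer, each one gets its own dedicated IP and DNS name without conflicting with the other.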

Related

Openshift Egress Routing

I want to allow/block traffic to a few endpoints in the egress network policy within OpenShift using a pod selector. Is it possible? As per the documentation, it's only possible at the namespace level.
It has to be done per pod, as per the requirement, so that each pod can access only specific endpoints.
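For what it's worth, a hedged sketch of how per-pod egress control can be expressed with a standard Kubernetes NetworkPolicy (which OpenShift also supports); the namespace, labels and CIDR are placeholders, and the cluster's network plugin must enforce egress policies:

# Hypothetical policy: only pods labeled app=billing may reach the given endpoint over TCP 443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress-per-pod
  namespace: my-app                    # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: billing                     # placeholder label; the policy applies only to these pods
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.10/32          # placeholder endpoint IP
    ports:
    - protocol: TCP
      port: 443

OpenShift's own EgressNetworkPolicy resource, by contrast, is scoped to a whole namespace, which matches the limitation mentioned in the question.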

k8s nginx Ingress take my node IP as Address

I have a 3-node k8s cluster in my virtual environment (VMware Fusion).
When I try to create a basic Ingress, it takes as its Address the IP of the one node where the nginx controller is running.
But port 80 is not open on the nodes, so it is not working:
curl: (7) Failed to connect to 172.16.242.133 port 80: Connection refused
What am I missing?
I installed the Nginx Ingress Controller.
I installed MetalLB and configured it. It works if I create a service with type: LoadBalancer; it gets an external IP and I can access it.
I deployed a basic app for testing.
I created a service for the app. I can access it via NodePort or ClusterIP; I tried both.
I created a basic Ingress to manage hosts and routing. This is the step where I am stuck.
My questions:
1-) Normally, what IP should the Ingress take as its Address? One of my nodes, or an external DHCP IP?
2-) When I create a service with type: LoadBalancer it gets an external IP. I can point DNS records at this IP and clients can access it. What is wrong with that?
The nginx ingress controller can be exposed with two service types: NodePort and LoadBalancer.
When using the NodePort service type, you should use the allocated NodePort number instead of the default port 80. An explanation of this behavior is available in the nginx ingress documentation:
However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
So your curl should look like this:
curl 172.16.242.133:<node_port_number>
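If you are not sure which port was allocated, the controller's Service shows it (the service name and namespace below follow the common ingress-nginx installation and may differ in your cluster):
kubectl get service ingress-nginx-controller -n ingress-nginx
The PORT(S) column will show something like 80:31234/TCP, where 31234 is the NodePort to use in the curl above.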
When you use MetalLB with the LoadBalancer service type, it takes external IPs from the configuration you specified when installing MetalLB in the cluster.
More information about using the nginx ingress controller with MetalLB is available in the nginx documentation.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
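A minimal sketch of such a ConfigMap for MetalLB's layer 2 mode (the namespace follows the standard metallb-system install, and the address range is a placeholder; it must be a free range in your network, outside the node and DHCP ranges):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system             # assumed MetalLB namespace
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.242.200-172.16.242.250   # placeholder pool, not overlapping node IPs or DHCP

Note that newer MetalLB releases configure this through IPAddressPool custom resources instead of the ConfigMap, so check the version you installed.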
My problem was:
I thought the Ingress takes the IP and we point DNS records at that IP, but that is not the case. Why the Ingress object has Address and Port fields I do not know; just for information, I guess, but it is confusing for newbies.
Clients access the Ingress Controller, not the Ingress.
It is actually the Ingress Controller's Service that exposes the external IP or NodePort, so that is what we have to configure.
In my case, nginx:
kubectl edit service/ingress-nginx-controller -n ingress-nginx
You can change the type to LoadBalancer, and you will get an external IP once MetalLB is configured. Then define your Ingress objects and create DNS records, and you are ready.
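Equivalently, instead of editing the Service interactively you can patch it (again, the service name and namespace follow the common ingress-nginx install and may differ in your setup):
kubectl patch service ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'
Once MetalLB assigns an address, kubectl get service -n ingress-nginx will show it under EXTERNAL-IP.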

Using HTTP Load Balancer with Kubernetes on Google Cloud Platform

I have followed the GKE tutorial for creating an HTTP Load Balancer using the beta Ingress type and it works fine when using the nginx image. My question is about why Ingress is even necessary.
I can create a container engine cluster and then create an HTTP Load Balancer that uses the Kubernetes-created instance group as the service backend, and everything seems to work fine. Why would I go through all of the hassle of using Ingress when using Kubernetes for only part of the process seems to work just fine?
While you can create an "unmanaged" HTTP Load Balancer by yourself, what happens when you add new deployments (pods with services) and want traffic to be routed to them as well (perhaps using URL maps)?
What happens when one of your services goes down for some reason and the new service allocates another node port?
The great thing about Ingress is that it manages the HTTP Load Balancer for you while keeping track of Kubernetes resources and updating the HTTP Load Balancer accordingly.
The ingress object serves two main purposes:
It is simpler to use for repeatable deployments than configuring the HTTP balancer yourself, because you can write a short declarative YAML file describing what you want your balancing to look like rather than a script of 7 gcloud commands (see the sketch after this answer).
It is (at least somewhat) portable across cloud providers.
If you are running on GKE and don't care about the second one, you can weigh the ease of use of the ingress object and the declarative syntax vs. the additional customization that you get from configuring the load balancer manually.
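For comparison, the declarative side can be as small as this; the names are placeholders and the API group shown is the current networking.k8s.io/v1 (the tutorial at the time used the beta API), but the idea is the same:

# Minimal illustrative Ingress: GKE's ingress controller provisions an HTTP Load Balancer and keeps it in sync.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:
    service:
      name: nginx                      # placeholder Service in front of the nginx pods from the tutorial
      port:
        number: 80

Adding new services or path-based rules (which GKE translates into URL maps) is then a matter of editing this object rather than re-running gcloud commands.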

Name a Kubernetes-generated Google Cloud ingress load balancer

I have multiple Kubernetes clusters that have Google-powered load balancers (ingress LBs).
So right now, to access my k8s cluster service(s), I just have to ping the public IP given by $ kubectl get service. Cool.
My problem is that sometimes I need to tear down/create clusters and reconfigure services, those services might also need SSL certificates very soon, and my clusters'/services' builds need to be easily reproducible too (for cloud devs!).
The question is simple: can I instead of having an ingress load balancer IP have an ingress load balancer hostname?
Something like ${LOAD_BALANCER_NAME}.${GOOGLE_PROJECT_NAME}.appspot.com would be uber awesome.
Kubernetes integration with Google Cloud DNS is a feature request for which there is no immediate timeline (it will happen, I cannot comment on when). You can, however, create DNS records with the static IP of a loadbalancer.
If I've understood your problem correctly, you're using an L4 loadbalancer (service.Type=LoadBalancer) and you want to be able to delete the service/nodes etc and continue using the same IP (because you have DNS records for it). In other words, you want a loadbalancer not tied to the service lifecycle. This is possible through an L7 loadbalancer [1] & [2], or by recreating the service with an existing IP [3].
Note that [1] divorces the loadbalancer from service lifetime, but if you take down your entire cluster you will lose the loadbalancer. [2] is tied to the Ingress resource, so if you delete your cluster and recreate it, start the loadbalancer controller pod, and recreate the same Ingress resource, it will use the existing loadbalancer. Also note that both [1] and [2] depend on a "beta" resource that will be released with kubernetes 1.1, I'd appreciate your feedback if you deploy them :)
[1] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
[2] https://github.com/kubernetes/contrib/pull/132
[3] github.com/kubernetes/kubernetes/issues/10323
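As a sketch of option [3], a Service can be pinned to a previously reserved static IP so that deleting and recreating it keeps the same address (the IP and names are placeholders; the address must already be reserved in your project):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 130.211.0.123        # placeholder; a static IP reserved ahead of time
  selector:
    app: my-app                        # placeholder pod label
  ports:
  - port: 80
    targetPort: 8080                   # placeholder container port

DNS records then point at the reserved address rather than at whatever ephemeral IP the cloud provider hands out.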

Access labels during runtime in kubernetes

Is there a way for my application to access the labels assigned to the pod/service at runtime?
Either via the client API or via ENV/variables passed to the Docker container?
The Downward API is designed to automatically expose information about the pod's configuration to the pod using environment variables. As of Kubernetes 1.0 it only exposes the pod's name and namespace. Adding labels to the Downward API is being discussed in #560 but isn't currently implemented.
In the meantime, your application can query the Kubernetes apiserver and introspect its own configuration to determine what labels have been set.
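A minimal sketch of the Downward API usage described above, exposing the pod's name and namespace as environment variables (the image and variable names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: app
    image: busybox                     # placeholder image
    command: ["sh", "-c", "echo $MY_POD_NAME $MY_POD_NAMESPACE && sleep 3600"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace

Later Kubernetes releases added support for exposing labels through the Downward API as well (via downwardAPI volumes and per-key env references), so the limitation above applies to the 1.0-era clusters this answer refers to.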