Nginx crashes while using SSL Passthrough - kubernetes-ingress

I have deployed an nginx ingress controller which works fine when there is no firewall. With the firewall (all egress blocked) the nginx controller seems to be stuck; it immediately starts working again when the firewall is removed. I'm not able to find any useful logs on the pod. My ingress controller config:
- --default-backend-service=kube-system/nginx-ingress-default-backend
- --election-id=ingress-controller-leader-apps
- --enable-ssl-passthrough
- --ingress-class=nginx-apps
- --configmap=kube-system/nginx-ingress-controller

It's working as designed: when you create a firewall rule blocking all egress connections, you are preventing everything behind it from talking to the outside world.
To achieve what you want, you need to use priorities on your firewall rules. With priorities you can create a rule that allows traffic to specific ports and blocks everything else.
Here you can find a document describing how to achieve that.
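Assuming GCP VPC firewall rules (the provider isn't named in the question), a minimal sketch of that priority-based approach could look like the following. The rule names, network, and ports are illustrative assumptions; lower priority numbers win, and the controller typically at least needs egress to the Kubernetes API server:
# Higher-priority rule: allow the egress the ingress controller needs (e.g. HTTPS and the API server)
gcloud compute firewall-rules create allow-ingress-egress \
    --network=default --direction=EGRESS --action=ALLOW \
    --rules=tcp:443,tcp:6443 --destination-ranges=0.0.0.0/0 --priority=1000

# Lower-priority rule: deny all other egress (higher number = lower priority)
gcloud compute firewall-rules create deny-all-egress \
    --network=default --direction=EGRESS --action=DENY \
    --rules=all --destination-ranges=0.0.0.0/0 --priority=65000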

Related

Route to application stopped working in OpenShift 4.6

I have an application running in OpenShift 4.6.
The pod is running; I can exec into it and check this, and I can port-forward to it and access it.
When trying to access the application through its route, I get the error message:
Application is not available The application is currently not serving
requests at this endpoint. It may not have been started or is still
starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and
that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL
path was typed correctly and that the route was created using the
desired path.
Route and path matches, but all pods are down. Make sure that the
resources exposed by this route (pods, services, deployment configs,
etc) have at least one pod running.
There could be multiple reasons for this. You don't really provide enough debugging details to get to the next steps. But I generally find it helps to work backwards through the request.
Can you access the pod via port-forward? You say you've already tested this, but I include it for completeness, and to make sure that you are verifying that you are serving the protocol you expect. If you have HTTPS passthrough on the route but you are serving plain HTTP from your pod, there will obviously be a problem.
Can you access the pod providing your service from outside the pod (but within the cluster)? e.g. create a debug pod and see if you can connect to your pod's IP and port with curl or some other client. If this doesn't work, you may not be exposing the ports of your pod correctly. Check the pod definition.
Can you access the service from outside the pod (but within the cluster)? e.g. from your debug pod, use the service directly. If this doesn't work, you may have the selector on your service wrong. Or some other problem with your service. Check the service definition.
Can you access the route from inside the cluster? e.g. from your debug pod, try to use the full route URL. If this doesn't work, you've narrowed it down to the route definition. Again, HTTPS vs HTTP can sometimes be a mistake here such as having HTTPS passthrough when your service doesn't support HTTPS. Check the route definition.
Finally, try accessing the route externally, which it sounds like you have already tried. But if you've narrowed it down such that your route works internally, you've determined that the problem is something in the external network. It doesn't sound like this is your problem, but it's something to keep in mind.
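As a rough sketch of working through those checks from a debug pod (the pod IP, service name, namespace, ports, and route hostname are placeholders, and the commands assume an image with curl, such as curlimages/curl):
# Start a throwaway debug pod with a shell and curl
oc run debug --rm -it --image=curlimages/curl -- sh

# 1. The pod directly, by its IP and container port
curl -v http://10.128.2.15:8080/

# 2. The service, by its cluster DNS name
curl -v http://my-app.my-namespace.svc.cluster.local:8080/

# 3. The route URL, still from inside the cluster (-k skips certificate checks)
curl -vk https://my-app.apps.example.com/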

HAProxy uses the old backend

I'm using HAProxy as a reverse proxy for HTTPS connections. I have some rules like this:
use_backend server2 if { req_ssl_sni -i test.domain.com }
Yesterday I changed the backend for one ACL, but somehow some (!!) clients still get the content of the old backend (which is still active for other ACLs). I can see two different results on the same machine in Chrome if one Chrome window runs in private mode.
Restarting and reloading didn't help.
It looks like I solved it. I think the problem was the passthrough via TCP. I changed the config to do the SSL termination in HAProxy and handle the routing via HTTP. It seems like the rules for TCP / req_ssl_sni were only applied to the first connection and bypassed for the following connections, even if the hostname changed.
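A minimal sketch of such a termination setup (the certificate path, backend names, and address are illustrative, not from the original config). With mode http the routing decision uses the Host header of every request, whereas req_ssl_sni only sees the SNI of the initial TLS handshake, so a reused connection is never re-routed:
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/domain.pem
    mode http
    # Route per request on the Host header
    use_backend server2 if { hdr(host) -i test.domain.com }
    default_backend server1

backend server2
    mode http
    server app2 192.0.2.12:8080 check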

Load Balancer not able to connect with backend

I have deployed a Spring Boot app on an OCI compute instance and it's coming up nicely. The compute instance is created with a public IP and has its security list updated to allow connections from the internet. But I wasn't able to hit the endpoint from the internet. For that reason, I thought of configuring a load balancer.
I created the load balancer in a separate subnet (10.0.1.0/24) with its own routing table and security list. I configured the LB's security list to send all protocol packets to the compute instance's CIDR (10.0.0.0/24) and configured the compute instance's security list to accept packets from the LB. I was expecting the LB to make a connection with the backend, but it's not.
I am able to hit the LB from the internet:
The LB's routing table has all IPs routed through the internet gateway. There is no routing defined for the compute instance's CIDR as it's in the VCN.
The LB has its own security list, which allows outgoing packets to the compute instance and incoming packets from the internet as below:
The compute instance's security list accepting packets from the LB:
Let me know if I am missing something here.
My internet gateway:
My backend set connection configuration from the LB:
The LB fails to make a connection with the backend, and there seems to be no logging info available:
The app is working fine if I access it from the compute node:
The LB has a health check that tests the connection to your service. If it fails, the LB will keep your backend out of rotation and show the critical health status you're seeing.
You can get to it by looking at the backend set and clicking the Update Health Check button.
Edit:
Ultimately I figured it out; you should run the following commands on your backend:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
Use the port that you configured your app to listen on.
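To verify, you can check that the firewall now lists the port and that the app is actually listening on it (8080 is just the example port from above; adjust to your app):
# Confirm the port is open in firewalld and something is listening on it
sudo firewall-cmd --list-ports
sudo ss -tlnp | grep 8080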
I used httpd instead of Spring, but I also did the following:
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
sudo restorecon -F -R -v /var/www/html
I'm not really too familiar with SELinux, but you may need to do something similar for your application.
Additionally, setting up a second host in the same subnet to log in to and test connecting to the other host will help troubleshooting, since it verifies whether your app is accessible at all from outside the host that it's on. Once it is, the LB should come up fine.
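For example, from that second host (the private IP and port are placeholders for your compute instance and app):
# Hit the app on the other host's private IP from within the same subnet
curl -v http://10.0.0.5:8080/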
TL;DR In my case it helped to switch the Security List rules from stateful to stateless on the 2 relevant subnets (where the load balancer was hosted and where the backends were located).
In our deployment I had a load balancer with a public IP located on one subnet, while the backend to this load balancer was on another subnet. Both subnets had one ingress and one egress rule - to allow everything (i.e. 0.0.0.0/0 and all ports allowed). The backends were still not reachable from the load balancer and the health checks were failing.
Even though, per the documentation, switching between stateful and stateless should not have an effect in my case, it solved my issue.

Which URL/IP to use when accessing Kubernetes Nodes on Rancher?

I am trying to expose services to the world outside the rancher clusters.
Api1.mydomain.com, api2.mydomain.com, and so on should be accessible.
Inside Rancher we have several clusters. I am trying to use one cluster specifically. It spans 3 nodes: node1cluster1, node2cluster1 and node3cluster1.
I have added ingress inside the rancher cluster, to forward service requests for api1.mydomain.com to a specific workload.
In our DNS I created an entry to forward api1.mydomain.com, but it doesn't work yet.
Which IP or URL should I enter in the DNS? Should it be rancher.mydomain.com, where the web GUI of Rancher runs? Should it be a single node of the cluster that has the ingress (node1cluster1)?
Both these options seem not ideal. What is the correct way to do this?
I am looking for a solution that exposes a full URL to the outside world. (Exposing ports is not an option as the company's DNS can't forward to them.)
Simple answer based on the inputs provided: Create a DNS entry with the IP address of Node1cluster1.
I am not sure how you installed the ingress controller, but by default it's deployed as a DaemonSet. So you can use either any one of the cluster nodes' IP addresses or all of the nodes' IP addresses in the DNS entry (don't expect DNS to load balance, though).
The other option is to have a load balancer in front with all the node IP addresses configured to actually distribute the traffic.
Another strategy that I have seen is to dedicate a handful of nodes to running Ingress, by using taints/tolerations, and not use them for scheduling regular workloads.
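A quick way to confirm how the controller is actually deployed and to collect node IPs for the DNS entry (the namespace and resource names depend on how it was installed):
# Is the ingress controller running as a DaemonSet or a Deployment, and where?
kubectl get daemonset,deployment -A | grep -i ingress

# Node IPs you could point api1.mydomain.com at (or put behind a load balancer)
kubectl get nodes -o wide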

Google Compute Engine - How to allow access from (only) other project instances?

With Google Compute Engine, how do I create a firewall rule so that only instances within the same project are allowed access? Access from other clusters (within the same project) should be allowed.
The scenario is to allow a GKE cluster to access a cluster of RethinkDB database servers that run on GCE instances.
"So that only instances within the same project are allowed access" to what?
I assume you don't mean access to the cluster's apiserver, since that IP should already be accessible from all your instances.
If you mean accessing a container in a cluster from an instance outside the cluster, you can create a firewall rule to be more permissive about allowing traffic within your GCE network. You can either be very permissive or a little more fine-grained when doing this:
Very permissive - just create a firewall rule that allows traffic from the source IP range 10.0.0.0/8 to all instances in your network (don't add any "target tags") on all the protocols and ports you care about (e.g. tcp:1-65535,udp:1-65535,icmp). The 10.0.0.0/8 range will cover all instances and containers in your network (and nothing outside of it). A gcloud sketch of this option follows after the next one.
Separate firewall per cluster - do the same thing as number one, but add the target tag that's on all nodes in the cluster. You can get this from looking at one of the instances' tags or by looking at the target tags on the firewalls that GKE created for your cluster when it was created. The benefit of this approach is that it will let everything in your network talk to your cluster without exposing anything else in your network that you don't want to open up quite so much.
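As a sketch of the very permissive option (the rule name is made up and the network is assumed to be default):
# Allow all traffic from 10.0.0.0/8 sources to every instance in the network
gcloud compute firewall-rules create allow-project-internal \
    --network=default \
    --source-ranges=10.0.0.0/8 \
    --allow=tcp:1-65535,udp:1-65535,icmp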
If you mean accessing a service from outside the cluster, then it's a little tougher since you need to run the kube-proxy on the instances outside the cluster and configure it to talk to the cluster's apiserver in order to route the service packets properly.
Turns out the problem was that I was accessing the RethinkDB instances via external IPs. For some reason, this causes the firewall rule with internal source IPs not to match. The solution was to access the instances via internal DNS names instead, in which case the firewall rule applies.
Furthermore, there is a default firewall rule already, default-allow-internal, which allows any traffic between instances on the same project. Therefore I do not need to create my own rule.
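For reference, a GCE internal (zonal) DNS name generally has the shape INSTANCE.ZONE.c.PROJECT_ID.internal; the instance name, zone, project, and port below are placeholders:
# Resolve and test the RethinkDB instance via its internal DNS name instead of the external IP
host rethinkdb-1.us-central1-a.c.my-project.internal
nc -vz rethinkdb-1.us-central1-a.c.my-project.internal 28015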