How to map requests to multiple ports in a pod in OpenShift v3? - openshift

I have a web app that does http and ws requests. I am trying to deploy it to Openshift v3. Hence, I need my requests to be mapped to ports 80 and 90 in the pod. However:
As mentioned in a related thread, it is not possible for a route to expose multiple ports, so I cannot just map requests to different services based on the port.
I tried setting one route mapping any port to a service with multiple ports, but I get a warning:
Route has no target port, but service has multiple ports. The route
will round robin traffic across all exposed ports on the service
I cannot use different routes for http and ws, because the session cookie obtained over http would not be attached to web socket requests.
Solutions (?):
In the related thread an Ingress Controller is suggested, but it seems that it can only be set up by a cluster administrator.
I could use two routes and set a separate cookie for each route, but this does not seem right: why should I have to use 2 cookies for 2 domains, when essentially there is a single domain with a single authentication?
Switch to token authentication?
So, what am I missing? What would be the optimal way to handle this?

If any websocket endpoints are under a unique sub URL path, you could add a second route which has a path definition for that sub URL path, and have requests under it routed to the alternate port. You will need a definition for the alternate port on the service in addition to the primary port, or a separate service for the alternate port. I would need to see your current service definition to be more specific; a rough sketch is below.

It is odd that you would be using ports 80 and 90 on the pod, as that would imply you are running the container as root, which is not normal practice on OpenShift because of the security risks of running any container as root on a container hosting platform.
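A minimal sketch of that layout, assuming the websocket endpoints live under /ws and the container listens on unprivileged ports 8080 and 8090 (all names, paths and port numbers here are illustrative, not taken from your setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - name: web            # primary HTTP port
    port: 80
    targetPort: 8080
  - name: websocket      # alternate port for ws traffic
    port: 90
    targetPort: 8090
---
apiVersion: v1
kind: Route
metadata:
  name: myapp-http
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
  port:
    targetPort: web      # all plain traffic goes to the primary port
---
apiVersion: v1
kind: Route
metadata:
  name: myapp-ws
spec:
  host: myapp.example.com
  path: /ws              # only requests under /ws match this route
  to:
    kind: Service
    name: myapp
  port:
    targetPort: websocket
```

Because both routes share the same host, a session cookie set over the first route is also sent on websocket requests under /ws.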

Related

Which URL/IP to use, when accessing Kubernetes Nodes on Rancher?

I am trying to expose services to the world outside the Rancher clusters.
api1.mydomain.com, api2.mydomain.com, and so on should be accessible.
Inside Rancher we have several clusters. I am trying to use one cluster specifically. It spans 3 nodes: node1cluster1, node2cluster1 and node3cluster1.
I have added an ingress inside the Rancher cluster to forward service requests for api1.mydomain.com to a specific workload.
On our DNS I entered api1.mydomain.com to be forwarded, but it does not work yet.
Which IP or URL should I enter in the DNS? Should it be rancher.mydomain.com, where the Rancher web GUI runs? Should it be a single node of the cluster that has the ingress (node1cluster1)?
Neither of these options seems ideal. What is the correct way to do this?
I am looking for a solution that exposes a full URL to the outside world. (Exposing ports is not an option, as the company's DNS can't forward to them.)
Simple answer based on the inputs provided: create a DNS entry with the IP address of node1cluster1.
I am not sure how you installed the ingress controller, but by default it is deployed as a DaemonSet, so you can use either any one of the cluster nodes' IP addresses or all of them. (Don't expect DNS to load balance, though.)
The other option is to put a load balancer in front, with all the node IP addresses configured, to actually distribute the traffic.
Another strategy I have seen is to dedicate a handful of nodes to running ingress, by using taints/tolerations, and not use them for scheduling regular workloads; a sketch of this pattern follows.
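A rough sketch of the taint/toleration pattern, assuming the dedicated nodes have been tainted and labelled beforehand (e.g. kubectl taint nodes node1cluster1 dedicated=ingress:NoSchedule and kubectl label nodes node1cluster1 role=ingress; all names and the image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-controller
spec:
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      nodeSelector:
        role: ingress        # run only on the dedicated nodes
      tolerations:
      - key: dedicated       # tolerate the taint that keeps other pods off
        operator: Equal
        value: ingress
        effect: NoSchedule
      containers:
      - name: controller
        image: ingress-controller:latest   # placeholder image
```

The taint keeps regular workloads off those nodes, while the toleration plus nodeSelector pins the ingress controller to exactly those nodes.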

GCE managed group (autoscaling) - Proxy/Load Balancer for both HTTP(S) and TCP requests

I have an autoscaling instance group. I need to set up a proxy/load balancer that takes requests and sends them to the instance group.
I thought of using a load balancer, but I need to handle both HTTP(S) and TCP requests.
Is there some way (or some workaround) to solve this?
EDIT: The problem is that from the TCP LB settings I can set the backend service (the managed group I need to use) for only one port.
For your use case, no single load balancing configuration available on Google Cloud Platform will serve the purpose. On the other hand, since you are using a managed instance group (autoscaling), it cannot be used as a backend for 2 different load balancers.
As per my understanding, the closest you can get is to use Network load balancing (TCP) and install an SSL certificate to handle HTTPS requests at the instance level.
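A rough sketch of that approach in Deployment Manager YAML, assuming the autoscaled instances are added to a target pool and TLS is terminated on the instances themselves (region, names and ports are illustrative):

```yaml
resources:
- name: web-pool                    # target pool the instance group feeds into
  type: compute.v1.targetPool
  properties:
    region: us-central1
- name: fr-http                     # TCP forwarding rule for plain HTTP
  type: compute.v1.forwardingRule
  properties:
    region: us-central1
    IPProtocol: TCP
    portRange: "80"
    target: $(ref.web-pool.selfLink)
- name: fr-https                    # second rule for HTTPS on the same pool
  type: compute.v1.forwardingRule
  properties:
    region: us-central1
    IPProtocol: TCP
    portRange: "443"
    target: $(ref.web-pool.selfLink)
```

Two forwarding rules can share one target pool, so both ports reach the same instances; the instances decrypt HTTPS themselves, since a network (TCP) load balancer does not terminate TLS.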

Service with multiple ports/protocols per route (e.g. HTTP and HTTPS) possible in OpenShift?

I am currently evaluating OpenShift for use in our company.
We have a web application in a container, which exposes both port 80 (http) and port 443 (https). Is it possible to run this container in OpenShift, using both ports over the SAME hostname? The OpenShift GUI lets me select only one port per service when I try to create a route, and either http or https, not both. My use case is that my application should be reachable on http://my-app as well as on https://my-app (in my opinion a quite common use case).
It is not possible to have multiple routes with the same hostname and path; only the first such route will be admitted to the router.
Routes with paths will work, as mentioned by @Graham.
You can put all 3 below in a single project without problem:
example.com
example.com/hello
example.com/world
They can have different protocols, but adding a duplicate route with a different protocol will not work.
Additionally, if you have a Project B, you won't be able to use the example.com host again there. So none of the below will be admitted to the router:
example.com
example.com/hello
example.com/world
example.com/path
Which makes sense, as you don't want someone else to use your domain.
What usually happens when you have https exposed is that all http traffic is redirected to https.
You can achieve this by creating an edge terminated route (over the UI) and selecting Insecure Traffic: Redirect. There is also an option to set it to Allow. A sketch of such a route is below.
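A minimal sketch of such a route, assuming a service named my-app (hostname and names are illustrative):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.example.com
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge                        # router terminates TLS
    insecureEdgeTerminationPolicy: Redirect  # http requests are redirected to https
```

With Redirect, http://my-app.example.com answers with a redirect to https://my-app.example.com; with Allow, the same route serves both protocols without redirecting.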
More documentation and yaml examples, if you would like to create the route from the command line: OpenShift Origin: Secured routes

OpenShift 3 communication between deployments

I'm just learning OSE 3. I'd like to deploy two Node.js web applications I have created, so I have set up a project with two Node.js deployments, each now running in its own pod.
My question is: how are they supposed to communicate? Say, for example, one application needs to redirect to the other, or include components from the other application.
Should I hardcode the route of each application in a configuration file or so?
Thanks!
For internal communication between the two services, you can use the name of the service as the host name when making connections. This is possible because the names of the services are added to an internal DNS server, so a host name lookup on the name will yield the correct IP for the service at that time. When the service has multiple pods, an internal load balancer will automatically route the request to one of the pods.
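For example, if the second application is exposed inside the cluster by a service like the sketch below, the first application can simply connect to http://app-b:8080 (names and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-b            # this name becomes the internal DNS host name
spec:
  selector:
    app: app-b           # matches the pods of the second deployment
  ports:
  - port: 8080
    targetPort: 8080
```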
For the question about redirects, that seems to suggest you have both services exposed publicly and want one service to return an HTTP response that redirects the HTTP client to a URL which falls to the other service. What the redirect URL needs to be will depend on how you are exposing the services; that is, whether each service is exposed under a different hostname, or you have used OpenShift's path-based routing to overlay one at a sub URL of the other under the same host.
Either way, you probably want to use an environment variable, passed in via the deployment configuration, to tell the service triggering the redirect what URL prefix it needs to redirect to. You would set this up manually, but it at least means you haven't hardwired the URL in your code.
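A sketch of that, as a fragment of the first application's deployment configuration (the variable name and URL are hypothetical, not an OpenShift convention):

```yaml
spec:
  template:
    spec:
      containers:
      - name: app-a
        env:
        - name: PEER_BASE_URL                # read by the app when building redirects
          value: "https://app-b.example.com"
```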
If you mean something else by redirect, you will need to explain better what you mean.

Google Cloud HTTP Load Balancer can't connect to my instance

I have created an HTTP load balancer to basically redirect from port 80 to port 8080. The server on my instance is running on port 8080.
I can connect to the server directly, but the LB is not able to connect to the instance: accessing the LB's IP directly fails, and the health check always fails too. The instance group the LB is using consists of just that single instance.
I read Google Compute Engine health checks failing
and the google-address-manager is running. However, when running ip route list table local there is no route for my LB. The user in the above question is using Network load balancing and not HTTP load balancing (as I am), so I don't know if that is related?
Or perhaps it's related to a firewall? I have added my LB's IP address to a firewall rule that allows tcp:8080.
Does anybody have any idea how I can fix this? I am not experienced with Debian nor GCP.
Should I just try to run the route add command referenced in the above question? If so, how come the google-address-manager is not adding the route?
Thank you in advance!
You need to make sure that the port mapping on your instance group is set to the correct port, 8080 in your case.
First, edit your instance group and change the port name and port number to 8080.
Then, navigate to your HTTP backend's settings and change the default port to the port name you configured on the instance group.
Finally, make sure that your firewall rules allow access on port 8080 from 0.0.0.0/0, or at least from the IP address range of the HTTP load balancer (130.211.0.0/22); a sketch of such a rule follows.
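A sketch of that firewall rule in Deployment Manager YAML, assuming the default network (the rule name is illustrative; the source range is the LB range quoted above):

```yaml
resources:
- name: allow-lb-8080
  type: compute.v1.firewall
  properties:
    network: global/networks/default
    sourceRanges:
    - 130.211.0.0/22       # range used by the HTTP load balancer
    allowed:
    - IPProtocol: tcp
      ports:
      - "8080"
```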
I had the same issue and fixed it by adding a firewall rule for the health checker (which is not the same IP as your LB!). See https://cloud.google.com/compute/docs/load-balancing/health-checks?hl=en_US#http_and_https_load_balancing for instructions.
In my case, I had not configured the HTTP health check correctly.
I used "/" as the path, but on my backend "/" redirects to a login page (HTTP 301), which itself responds with HTTP 200.
The health check does not follow redirects, and every HTTP response code != 200 is considered unhealthy (from Debugging Health Checks in Load Balancing on Google Compute Engine).
So I changed my path to "/login", which fixed my issue.
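A sketch of the corrected health check in the same Deployment Manager style (the resource name is illustrative):

```yaml
resources:
- name: http-health-check
  type: compute.v1.httpHealthCheck
  properties:
    requestPath: /login    # a path that returns 200 directly, with no redirect
    port: 8080             # the port the backend actually serves on
```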