Can the GLBC load balancer be used with gunicorn sync workers - gunicorn

I have a microservice run with gunicorn:
command: ["gunicorn", "--bind", "0.0.0.0:8000", "a.b:API", "--access-logfile=-","--error-logfile=-"]
I configured a readiness and liveness probe:
readinessProbe:
  httpGet:
    path: /health-check
    port: 8000
livenessProbe:
  httpGet:
    path: /health-check
    port: 8000
  initialDelaySeconds: 15
  periodSeconds: 10
  timeoutSeconds: 2
The microservice has a very low load (about 5 requests per minute) and there are 3 replicas of it.
Despite this, roughly every 2 hours the health check fails with a timeout while waiting for the response headers.
I found a related issue:
https://github.com/benoitc/gunicorn/issues/1194
but what puzzles me is that the health check does not go through the load balancer.
Could it be that the GLBC load balancer is somehow holding on to the connections and not releasing them?
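For context, a commonly suggested mitigation for sync workers sitting behind a load balancer that keeps connections alive is to run more than one worker, or threaded workers, so a single held connection cannot block the health check. This is only a sketch; the worker and thread counts below are illustrative, not taken from my setup:
command: ["gunicorn", "--bind", "0.0.0.0:8000",
          "--worker-class", "gthread", "--workers", "2", "--threads", "4",
          "--timeout", "30", "--keep-alive", "5",
          "a.b:API", "--access-logfile=-", "--error-logfile=-"]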

Related

OpenShift aPaaS v3 failed Liveness probes vs failed Readiness Probes

What will happen if Liveness probes fail for a pool of pods versus Readiness probes failing for a pool of pods?
There are a few more differences between liveness and readiness probes, but one of the main ones is this: a failed readiness probe removes the pod from the pool but does NOT restart it, whereas a failed liveness probe removes the pod from the pool and RESTARTS it.
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-vs-readiness
  name: liveness-vs-readiness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; touch /tmp/liveness; sleep 999999
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/liveness
      initialDelaySeconds: 5
      periodSeconds: 5
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Let's create this pod and see it in action: oc create -f liveness-vs-readiness.yaml
Below is the pod status while we perform actions inside the pod. The number in front of the name corresponds to the action performed inside the pod:
oc get pods -w
NAME                             READY     STATUS    RESTARTS   AGE
[1] liveness-vs-readiness-exec   1/1       Running   0          44s
[2] liveness-vs-readiness-exec   0/1       Running   0          1m
[3] liveness-vs-readiness-exec   1/1       Running   0          2m
[4] liveness-vs-readiness-exec   0/1       Running   1          3m
    liveness-vs-readiness-exec   1/1       Running   1          3m
Actions inside the container:
[root@default ~]# oc rsh liveness-vs-readiness-exec
# [1] we rsh to the pod and do nothing. Pod is healthy and live
# [2] we remove health probe file and see that pod goes to notReady state
# rm /tmp/healthy
#
# [3] we create health file. Pod goes into ready state without restart
# touch /tmp/healthy
#
# [4] we remove liveness file. Pod goes into notready state and is restarted just after that
# rm /tmp/liveness
# command terminated with exit code 137
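If you want to see why a particular probe failed, the pod's events can be inspected; a small example using the pod from the walkthrough above:
oc describe pod liveness-vs-readiness-exec
# the Events section at the bottom lists the readiness/liveness probe failures and the restart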

Windows Jenkins Slave unable to connect to master hosted on Openshift instance

Unable to connect the agent to a Jenkins master hosted on an OpenShift cluster. It terminates with the error below after handshaking:
may 23, 2020 2:05:55 PM hudson.remoting.jnlp.Main$CuiListener error
GRAVE: Failed to connect to jenkins-jnlp-poc:50000
java.io.IOException: Failed to connect to jenkins-jnlp-poc:50000
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:246)
at hudson.remoting.Engine.connectTcp(Engine.java:678)
at hudson.remoting.Engine.innerRun(Engine.java:556)
at hudson.remoting.Engine.run(Engine.java:488)
Caused by: java.net.ConnectException: Connection timed out: connect
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:204)
... 3 more
I added a route to the jenkins-jnlp service, but I'm not able to expose the port. I've been trying to configure a nodePort but couldn't get it working yet. Any help will be welcome!
Thanks.
A Route only works with HTTP/HTTPS traffic, so it will not work in this case; as you correctly noted, a NodePort is most likely what you want. Here is an example of a Service of type NodePort using port 32000:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp-poc-service
spec:
  selector:
    app: jenkins-jnlp-poc
  type: NodePort
  ports:
  - name: jnlp
    port: 50000
    targetPort: 50000
    nodePort: 32000
    protocol: TCP
Note that you may need to change multiple parts of the Service:
The port and targetPort, which specify the port the Service "listens" on and where traffic is forwarded to (typically the port your container exposes).
The selector, which determines which Pods are targeted (check which labels your Pods use and adjust accordingly).
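Before pointing the Windows agent at the NodePort, it can help to verify that the selector really matches the Jenkins pod's labels and that the port is reachable from the Windows host. A quick sketch; the node IP is a placeholder you have to fill in:
oc get pods --show-labels | grep jenkins
oc get svc jenkins-jnlp-poc-service
# from the Windows slave, check that a cluster node answers on the NodePort:
Test-NetConnection <node-ip> -Port 32000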

Is it possible to have hostname based routing for MySQL in kubernetes?

I have a scenario where I have multiple MySQL servers running in different namespaces in a single Kubernetes cluster. The MySQL servers belong to different departments.
My requirement is that I should be able to connect to each MySQL server using a hostname, i.e.,
mysqlServerA running in ServerA namespace should be reachable from outside the cluster using:
mysql -h mysqlServerA.mydomain.com -A
mysqlServerB running in ServerB namespace should be reachable from outside the cluster using:
mysql -h mysqlServerB.mydomain.com -A
and so on.
I have tried TCP-based routing using the ConfigMap of the NGINX Ingress controller, where I route traffic from clients to the different MySQL servers by assigning different port numbers:
for mysqlServerA:
mysql -h mysqlServer.mydomain.com -A -P 3301
for mysqlServerB:
mysql -h mysqlServer.mydomain.com -A -P 3302
This works perfectly, but I want to know whether hostname-based routing is possible, because I don't want a separate load balancer for each MySQL service.
Thanks
General info
I am routing traffic by assigning different port numbers
You are right; the reason for that is that connections to MySQL are made over TCP. That is why it is definitely not possible to have two simultaneous connections to two servers on the same IP:port.
Unlike HTTP, TCP has no headers that would allow distinguishing the host the traffic should be routed to. However, there are still at least two ways to achieve the functionality you'd like :) I'll describe them below.
I want to know if hostname based routing is possible or not
I don't want separate load balancer for each mysql service.
K8s offers a few ways to make a service reachable from outside the cluster (namely hostNetwork, hostPort, NodePort, LoadBalancer, Ingress).
The LoadBalancer is the simplest way to serve traffic on LoadBalancerIP:port; however, due to the TCP nature of the connection, you would need one LoadBalancer per MySQL instance.
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  type: LoadBalancer
  ports:
  - port: 3306
  selector:
    name: my-mysql
A NodePort also looks good, but it only lets clients connect if they know the port (which can be tedious for them); a sketch follows.
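For illustration only, a NodePort Service for one of the MySQL instances might look like this (the selector and the node port value are placeholders); clients would then have to connect with something like mysql -h anyNode.mydomain.com -P 30306:
kind: Service
apiVersion: v1
metadata:
  name: mysql-a
spec:
  type: NodePort
  selector:
    app: mysqlServerA      # placeholder, must match your pod labels
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306
    protocol: TCP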
Proposed solutions
External IPs
If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
In the Service spec, externalIPs can be specified along with any of the ServiceTypes. In the example below, mysql-1234-inst-1 can be accessed by clients on 1.2.3.4:3306 (externalIP:port) and mysql-4321-inst-1 on 4.3.2.1:3306.
$ cat stereo-mysql-3306.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-1234-inst-1
spec:
  selector:
    app: mysql-prod
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
  externalIPs:
  - 1.2.3.4
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-4321-inst-1
spec:
  selector:
    app: mysql-repl
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
  externalIPs:
  - 4.3.2.1
Note: you need to have 1.2.3.4 and 4.3.2.1 assigned to your Nodes (and to resolve mysqlServerA.mydomain.com / mysqlServerB.mydomain.com to these IPs as well). I've tested that solution on my GKE cluster and it works :).
With that config, all requests for mysqlServerA.mydomain.com:3306, which resolves to 1.2.3.4, are routed to the Endpoints of the service mysql-1234-inst-1 with the app: mysql-prod selector, while mysqlServerB.mydomain.com:3306 is served by app: mysql-repl.
Of course it is also possible to split that config across two namespaces (one namespace, one MySQL, one Service per namespace), as sketched below.
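A minimal sketch of that per-namespace split, reusing the ServerA / ServerB namespaces from the question (names lower-cased to satisfy Kubernetes naming rules; the selector is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: servera
spec:
  selector:
    app: mysql-server-a    # placeholder, must match your pod labels
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
  externalIPs:
  - 1.2.3.4
An analogous Service would go in the serverb namespace with externalIPs 4.3.2.1.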
ClusterIP+OpenVPN
Taking into consideration that your MySQL pods have ClusterIPs, it is also possible to spawn an additional VPN pod in the cluster and connect to the MySQL servers through it.
As a result, you establish a VPN connection and get access to all the cluster resources. This is a rather limited solution, since anyone who needs access to MySQL has to establish the VPN connection first.
A good practice is to add a bastion server on top of that solution; that server is then responsible for providing access to cluster services via the VPN.
Hope that helps.

Connection to MySQL (AWS RDS) in Istio

We have an issue where connecting to AWS RDS inside an Istio service mesh results in upstream connect error or disconnect/reset before headers.
Our Egress rule is as below
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: <our-namespace>
  name: rds-egress-rule-with
spec:
  destination:
    service: <RDS End point>
  ports:
  - port: 80
    protocol: http
  - port: 443
    protocol: https
  - port: 3306
    protocol: https
The connection to MySQL works fine with a standalone MySQL instance on EC2, and the connection to AWS RDS works fine without Istio. The problem only occurs inside the Istio service mesh.
We are using Istio with mutual TLS disabled.
The protocol in your EgressRule definition should be tcp, and the service should contain the IP address or a range of IP addresses in CIDR notation.
Alternatively, you can use the --includeIPRanges flag of istioctl kube-inject to specify which IP ranges are handled by Istio. Istio will not interfere with the not-included IP addresses and will simply let that traffic pass through.
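A corrected rule along those lines might look like the sketch below; the CIDR is a placeholder for the actual IP range of your RDS instance (for example, the VPC subnet range), which you need to look up yourself:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: <our-namespace>
  name: rds-egress-rule-with
spec:
  destination:
    service: 172.31.0.0/16   # placeholder CIDR for the RDS endpoint
  ports:
  - port: 3306
    protocol: tcp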
References:
https://istio.io/latest/blog/2018/egress-tcp/
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services

Why does my kubernetes service endpoint IP change every time I update the pods?

I have a kubernetes service called staging that selects all app=jupiter pods. It exposes an HTTP service on port 1337. Here's the describe output:
$ kubectl describe service staging
Name: staging
Namespace: default
Labels: run=staging
Selector: app=jupiter
Type: NodePort
IP: 10.11.255.80
Port: <unnamed> 1337/TCP
NodePort: <unnamed> 30421/TCP
Endpoints: 10.8.0.21:1337
Session Affinity: None
No events.
But when I run kubectl rolling-update on the RC, which removes the single pod running the application and adds another, and then run describe again, I get:
$ kubectl describe service staging
Name: staging
Namespace: default
Labels: run=staging
Selector: app=jupiter
Type: NodePort
IP: 10.11.255.80
Port: <unnamed> 1337/TCP
NodePort: <unnamed> 30421/TCP
Endpoints: 10.8.0.22:1337
Session Affinity: None
No events.
Everything is the same, except for the Endpoint IP address. In fact, it goes up by 1 every time I do this. This is the one thing I expected not to change, since services are an abstraction over pods, so they shouldn't change when the pods change.
I know you can hardcode the endpoint address, so this is more of a curiosity.
Also, can anyone tell me what the IP field in the describe output is for?
IP is the address of your service, which remains constant over time. Endpoints is the collection of backend addresses across which requests to the service address are spread at a given point in time. That collection changes every time the set of pods comprising your service changes, as you've noticed when performing a rolling update on your replication controller (RC).
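You can observe this directly: the Service IP stays put, while the Endpoints object tracks the current pod IPs. For example (service name taken from the question):
$ kubectl get endpoints staging        # lists the current backend IP:port pairs
$ kubectl get endpoints staging -w     # watch the backends change during a rolling update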