OpenShift not forwarding packets to pod

I'm trying to set up a pod which receives packets on port 1234 coming from external hosts. I confirmed via tcpdump that the packets are indeed arriving at the OpenShift cluster. Pod AAAA is already running and is supposed to receive the packets for port 1234 (routed or forwarded from the OpenShift master). We have already assigned an IP for the pod, so the docs below have been followed thoroughly to set up the externalIP, ports, etc. I suspect the issue is with the master-config, but I can't paste it here.
My question is: what configs need to be put in place in the master-config in order to route port 1234 packets to pod AAAA?
I have already tried the OpenShift docs below:
https://docs.openshift.com/container-platform/3.3/admin_guide/tcp_ingress_external_ports.html
https://docs.openshift.com/container-platform/3.3/dev_guide/getting_traffic_into_cluster.html#using-ingress-IP-self-service

First of all - you are only referring to a pod. I would recommend deploying your app as a Deployment instead. Please refer to this and this.
Additionally, in order to expose Deployments to the outside world in Kubernetes you have to create a Service. It can expose your app in a few different ways. Please read through this for the details. A minimal sketch of such a pair is shown below.
If you are using any standard app, you can usually find an example Deployment/Service by googling the name of the app and 'kubernetes'.
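For illustration, here is a minimal sketch of a Deployment plus a NodePort Service; the names, image and labels are hypothetical placeholders, not taken from your setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest   # placeholder image
        ports:
        - containerPort: 1234
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort             # one way to expose the app outside the cluster
  selector:
    app: myapp
  ports:
  - port: 1234
    targetPort: 1234
    protocol: TCP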

In your master config (/etc/origin/master/master-config.yaml), just add
servicesNodePortRange: "1234-1234"
kubernetesMasterConfig:
  apiServerArguments:
  controllerArguments:
  masterCount: 1
  masterIP: x.x.x.x
  podEvictionTimeout:
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments:
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: "1234-1234"
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
After that, restart the atomic-openshift-master service.
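On a single-master 3.x install the restart is typically just the following (assuming systemd; on a multi-master/HA setup the master is split into atomic-openshift-master-api and atomic-openshift-master-controllers services instead):
# systemctl restart atomic-openshift-master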
Then, create a second service for your deployment with the LoadBalancer type. Assuming your deployment config name is "myapp", create a new file similar to the one below:
--- "new-svc.yml" ----
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: myapp
template: myapp-template
name: myapp-ext
spec:
ports:
- name: myapp
nodePort: 1234
port: 1234
protocol: TCP
targetPort: 1234
selector:
name: myapp
sessionAffinity: None
type: LoadBalancer
After that, create the new service:
# oc create -f new-svc.yml
Finally, expose the new service "myapp-ext" by adding a route (1234 <-- 1234).
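If you go the route approach, a sketch of the command might look like the following (the route name and flags here are assumptions, not from the original answer). Note that OpenShift routes handle HTTP/TLS traffic; for raw TCP on port 1234 the NodePort created above is usually what external hosts should hit.
# oc expose service myapp-ext --name=myapp-ext-route --port=1234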

Related

Is there any way to control OpenShift routes for co-working with horizontal pod auto-scaler?

I'm using the Horizontal Pod Autoscaler to scale my pods in an OpenShift environment. I have a web application running in pods. As a pod scales up, I get an HTTP status code 404 error in the first few seconds of an HTTP request. Is this because the route is sending a request to a pod that is still in the process of being launched? If so, is there any way to prevent the error? I've tried setting router.openshift.io/haproxy.health.check.interval to a small value, but I still can't avoid this error.
It seems you did not configure your readiness checks correctly. Check the documentation on how to add readiness and liveness checks to your Deployment.
A readiness probe determines if a container is ready to accept service requests.
A liveness probe determines if a container is still running.
In newer versions of OpenShift / Kubernetes there is now also the startupProbe, which may help you in your case.
Here is an example of a Deployment with a liveness and a readiness probe:
kind: Deployment
apiVersion: apps/v1
...
spec:
  ...
  template:
    spec:
      containers:
      - name: example
        readinessProbe:
          tcpSocket:
            port: 8080
        livenessProbe:
          tcpSocket:
            port: 8080
        ...
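For a web application behind a route, an HTTP readiness check is often a better fit than a plain TCP check, because the pod only starts receiving traffic once the application actually answers requests. A sketch of the readinessProbe section, assuming a health endpoint at /healthz on port 8080 (path and timings are assumptions):
        readinessProbe:
          httpGet:
            path: /healthz        # hypothetical health endpoint
            port: 8080
          initialDelaySeconds: 5  # give the app time to start (assumed values)
          periodSeconds: 5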

Pod level route restriction

I have a service running in OpenShift on 2 pods, let's call them P1 and P2.
The service does two things:
An API
A Kafka consumer that listens to messages from a topic and processes them
Is there a way I can restrict all API calls to P1 only and all Kafka processing to P2 only?
My suggestion may not fit your requirements, but if each pod is running in its own project, then the following would be possible.
First, configure the pods' source IPs statically using an egress IP at the project level; refer to Enabling Static IPs for External Project Traffic for more details.
$ oc patch netnamespace p1_project -p '{"egressIPs": ["1.1.1.1"]}'
$ oc patch netnamespace p2_project -p '{"egressIPs": ["2.2.2.2"]}'
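To confirm the egress IPs were applied, something like the following should show them on the netnamespaces (a quick sanity check, not part of the original answer):
$ oc get netnamespace p1_project -o yaml | grep -A1 egressIPs
$ oc get netnamespace p2_project -o yaml | grep -A1 egressIPs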
After that, you can allow each pod's IP with a route-specific whitelist; refer to Route-specific IP Whitelists for more details.
kind: Route
metadata:
  name: R1
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 1.1.1.1
---
kind: Route
metadata:
  name: R2
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 2.2.2.2
I hope this helps.

Access a K8s service via DNS name from Cloud Function

I have a K8s cluster running with a few services in it. Because of K8s DNS, services within the cluster can talk to each other via HTTP requests with their name as the URL (e.g. http://foo-bar-svc). This is great because I don't need to use an IP address, which I'm assuming would change every time a pod gets redeployed.
Now I want a Cloud Function to be able to post a request to one of these services.
I've followed this guide and successfully created a VPC Connector.
From my Cloud Function, I can make an HTTP request to a service in my K8s cluster, but only if I use an explicit IP address.
How can I instead use one of the URLs that the K8s DNS can resolve?
The best way to expose a K8s service to incoming host-based requests is an Ingress.
You can define an Ingress resource linked to your service, for example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080
In this example we define the host foo.bar.com and, depending on the path (/foo or /bar), we route to the service behind it. Of course you can replace the paths with the prefix "/*" to route everything to one specific service.
Please refer the documentation: https://kubernetes.io/docs/concepts/services-networking/ingress/
But with this configuration you need to have a load balancer in front and a DNS entry aliased to it:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress?hl=en
And to be more resilient you can add an ingress controller (nginx, traefik, ...): https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
So the flow will be:
DNS server <-> client resolves DNS -> LB -> Ingress controller -> Service -> Pod -> Container.
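Once the DNS entry points at the load balancer, any client that can resolve the hostname (including the Cloud Function through the VPC Connector) can call the services by name. A quick check of the fan-out above might look like this; the hostname and ports are the placeholders from the example, not real values:
$ curl http://foo.bar.com/foo   # routed to service1:4200
$ curl http://foo.bar.com/bar   # routed to service2:8080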
I hope it helps.

Unhealthy Ingress services

I am trying to deploy an application via GKE. So far I have created two services and two deployments, one for the front end and one for the back end of the app.
I created an Ingress resource using the "gce" controller and mapped the services as shown:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    part: ingress
  name: my-irool-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: my-ip
spec:
  backend:
    serviceName: client-svc
    servicePort: 3000
  rules:
  - http:
      paths:
      - path: /back
        backend:
          serviceName: back-svc
          servicePort: 9000
  - http:
      paths:
      - path: /back/*
        backend:
          serviceName: back-svc
          servicePort: 9000
It worked almost fine (not all the routes were mapped correctly, but it worked). Then I made changes to the code (only the application code), rebuilt the images and recreated the services, but the Ingress did not seem to like the modifications and all my services ended up in an unhealthy state.
This is the front service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
    part: front
  name: client
  namespace: default
spec:
  type: NodePort
  ports:
  - nodePort: 32585
    port: 3000
    protocol: TCP
  selector:
    app: app
    part: front
When I do a describe, I get nothing besides the fact that my services are unhealthy.
And at the moment of creation I keep getting:
Warning GCE 6m loadbalancer-controller googleapi: Error 409: The resource '[project/idproject]/global/healthChecks/k8s-be-32585--17c7......01' already exists, alreadyExists
My questions are:
What is wrong with the code shown above? Should I map all the services to port 80 (the default ingress port) so it would work?
What are readinessProbe and livenessProbe? Should I add them, or is mapping the services to the default backend enough?
For your first question, deleting and re-creating the ingress may resolve the issue. For the second question, you can review the full steps of configuring Liveness and Readiness probes here. Furthermore, as defined here (as an example for a pod):
livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.
And readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
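For the GCE ingress specifically, the load balancer's health check usually follows the readiness probe of the backend pods, so adding one to the front-end Deployment's container spec typically clears the unhealthy state. A minimal sketch, assuming the front-end container serves HTTP on port 3000 and answers 200 on / (both assumptions):
        readinessProbe:
          httpGet:
            path: /               # must return 200 for the health check (assumed)
            port: 3000
          initialDelaySeconds: 10  # assumed timings
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 20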

route to application stopped working in OpenShift Online 3.9

I have an application running in OpenShift Online Starter which has worked for the last 5 months: a single pod behind a service, with a route defined that does edge TLS termination.
Since Saturday, when trying to access the application, I get the error message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The pod is running: I can exec into it and check this, and I can port-forward to it and access it.
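For reference, the port-forward check was presumably something like the following, with the pod name and port taken from the output below:
$ oc port-forward taboo3-23-jt8l8 8080:8080
$ curl http://localhost:8080/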
Checking the different components with oc:
$ oc get po -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP             NODE
taboo3-23-jt8l8   1/1       Running   0          1h        10.128.37.90   ip-172-31-30-113.ca-central-1.compute.internal
$ oc get svc
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
taboo3    172.30.238.44   <none>        8080/TCP   151d
$ oc describe svc taboo3
Name: taboo3
Namespace: sothawo
Labels: app=taboo3
Annotations: openshift.io/generated-by=OpenShiftWebConsole
Selector: deploymentconfig=taboo3
Type: ClusterIP
IP: 172.30.238.44
Port: 8080-tcp 8080/TCP
Endpoints: 10.128.37.90:8080
Session Affinity: None
Events: <none>
$ oc get route
NAME      HOST/PORT                                                     PATH      SERVICES   PORT       TERMINATION     WILDCARD
taboo3    taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com              taboo3     8080-tcp   edge/Redirect   None
I tried to add a new route as well (with and without TLS), but I am getting the same error.
Does anybody have an idea what might be causing this and how to fix it?
Addition, April 17, 2018: I got an email from OpenShift Online support:
It looks like you may be affected by this bug.
So waiting for it to be resolved.
The problem has been resolved by OpenShift Online; the application is working again.