Route to application stopped working in OpenShift Online 3.9

I have an application running in OpenShift Online Starter that has worked for the last 5 months: a single pod behind a service, with a route defined that does edge TLS termination.
Since Saturday, when trying to access the application, I get the error message
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The pod is running: I can exec into it to check this, and I can port-forward to it and access it.
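For reference, those checks look roughly like this (pod name taken from the oc output below; this assumes curl is available inside the image):
$ oc exec taboo3-23-jt8l8 -- curl -s http://localhost:8080/
$ oc port-forward taboo3-23-jt8l8 8080:8080
Both of these work, so the pod itself is healthy.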
Checking the different components with oc:
$ oc get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE
taboo3-23-jt8l8 1/1 Running 0 1h 10.128.37.90 ip-172-31-30-113.ca-central-1.compute.internal
$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
taboo3 172.30.238.44 <none> 8080/TCP 151d
$ oc describe svc taboo3
Name: taboo3
Namespace: sothawo
Labels: app=taboo3
Annotations: openshift.io/generated-by=OpenShiftWebConsole
Selector: deploymentconfig=taboo3
Type: ClusterIP
IP: 172.30.238.44
Port: 8080-tcp 8080/TCP
Endpoints: 10.128.37.90:8080
Session Affinity: None
Events: <none>
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
taboo3 taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com taboo3 8080-tcp edge/Redirect None
I tried adding a new route as well (with and without TLS), but I get the same error.
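For completeness, the extra route was created along these lines (the route name here is just an example):
$ oc create route edge taboo3-test --service=taboo3 --port=8080-tcp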
Does anybody have an idea what might be causing this and how to fix it?
Addition April 17, 2018: I got an email from OpenShift Online support:
It looks like you may be affected by this bug.
So I am waiting for it to be resolved.

The problem has been resolved by OpenShift Online; the application is working again.

Related

How to view the router pod in OpenShift

I have created a route for my service in OpenShift:
oc get routes
NAME HOST/PORT PATH SERVICES PORT
simplewebserver simpleweb.apps.devcluster.os.fly.com simplewebserver 9999
When I ran the command curl http://simpleweb.apps.devcluster.os.fly.com/world, it failed to access my web service. I suspect there is some problem with my route, but I could not see any route debug information.
My question is: how do I find the router pod in OpenShift, or how do I get some route activity information when I access the route?
You can check the router logs in the logs container of the router pods. In our OCP cluster, the router pods are in the openshift-ingress namespace.
oc get pods -n openshift-ingress
NAME READY STATUS RESTARTS AGE
router-default-5f9c4b6cb4-12121a 2/2 Running 0 40h
router-default-5f9c4b6cb4-12133a 2/2 Running 0 40h
To get the logs, use the command below:
oc -n openshift-ingress logs -f <router_pod_name> -c logs
Also make sure HAProxy access logging is enabled so you can see which URLs are being hit via the router.
https://access.redhat.com/solutions/3397701
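If access logging is not enabled yet, a minimal way to turn it on (assuming OCP 4.x, where the router is managed by the default IngressController) is to send the HAProxy access log to a sidecar container:
oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"logging":{"access":{"destination":{"type":"Container"}}}}}'
Once the router pods have rolled out again, the logs container referenced above will show one line per request.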
As there is limited information about your problem, here are a few things you can try.
Try to curl the route using the port:
curl -kv http://simpleweb.apps.devcluster.os.fly.com:9999
Access the logs of the pod for which the route was created. Check that the service simplewebserver is using the correct selector to route the traffic to the pod.
Run oc describe service simplewebserver to see the selectors being used.
Check if any network policy is blocking the external traffic.
Check if you can access the target pod through the service from within the same namespace. You can do that by rsh-ing into a pod and then accessing the service using:
curl -kv http://servicename.projectname.svc.cluster.local
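Concretely, with the names from your question (the project name and the service port 9999 are assumptions based on your route output), that check would look something like:
$ oc rsh <some_pod_in_the_project>
$ curl -kv http://simplewebserver.<your_project>.svc.cluster.local:9999/world
If this works but the route does not, the problem is in the route/router layer rather than in the service or pod.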

Pod level route restriction

EDITED:
I have a service running in OpenShift on 2 pods, let's call them P1 and P2.
The service does two things:
An API
Listening to Kafka messages from a topic and then processing them.
Is there a way I can restrict all calls made to the API to P1 only, and all Kafka processing to P2 only?
My suggestion may not fit your requirements, but if each pod is running in its own project, then the following is possible.
First, you should configure each pod's source IP statically using an egress IP at the project level; refer to Enabling Static IPs for External Project Traffic for more details.
$ oc patch netnamespace p1_project -p '{"egressIPs": ["1.1.1.1"]}'
$ oc patch netnamespace p2_project -p '{"egressIPs": ["2.2.2.2"]}'
After that, you can whitelist each pod's IP on the corresponding route; refer to Route-specific IP Whitelists for more details.
apiVersion: v1
kind: Route
metadata:
  name: R1
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 1.1.1.1
---
apiVersion: v1
kind: Route
metadata:
  name: R2
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 2.2.2.2
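Create both routes with oc create -f (or update existing ones with oc apply -f); after that, the router only admits requests to R1 from 1.1.1.1 and requests to R2 from 2.2.2.2.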
I hope this helps you.

OpenShift Origin registry: how to make it accessible?

We are setting up a test OpenShift Origin cloud, which we created using the openshift-ansible playbook. We are following the documentation at: https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html
We have not done anything special concerning the OpenShift registry or router.
We are pretty new to this topic, and we have been trying for a few days to make the OpenShift registry accessible.
We have 3 hosts:
master (unschedulable)
node-1, which is set to the region 'infra' and runs the registry and router services
node-2 (other region).
Here are the services running in the default project:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.78.66 <none> 5000/TCP 3h
kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP 3h
registry-console 172.30.190.63 <none> 9000/TCP 3h
router 172.30.197.135 <none> 80/TCP,443/TCP,1936/TCP 3h
When we SSH directly into node-1, where the registry and router are running, we can access the registry without problems and push images, exactly as described here: docs.openshift.org/latest/install_config/registry/accessing_registry.html
However, we cannot access the registry from the other hosts (master or node-2), and we really do not understand how to make the registry accessible. We have of course read: docs.openshift.org/latest/install_config/registry/securing_and_exposing_registry.html#access-insecure-registry-by-exposing-route
We have used this command:
oc expose service docker-registry --hostname=<hostname> -n default
The documentation says: You must be able to resolve this name externally via DNS to the router’s IP address.
As the router does not have any EXTERNAL-IP address attached to it, we do not understand how to reach it.
Is there any oc or oadm command for exposing the router through an external IP address?
Thanks a lot in advance
Emmanuel
Based on your stated configuration, I would expect the hostname of your OpenShift UI/API (openshift.yourdomain.com) to resolve to the same IP as your node-1, because that is where you are running the router.
If that is the case, then you would point the hostname you are passing via the command below at the same IP in DNS, or add it as a CNAME to that host.
oc expose service docker-registry --hostname=<hostname> -n default
In a larger setup with a dedicated set of load balancer (LB) nodes, you might have a specific A record for the set. You could then make the hostname a CNAME to that record.
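If you are unsure which IP node-1 answers on, one way to check (resource names here are the defaults of an advanced install) is to see which node the router pod landed on and use that node's IP for the DNS record:
$ oc get pods -n default -o wide | grep router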

OpenShift not forwarding packets to pod

I'm trying to set up a pod which receives packets on port 1234 from external hosts. I confirmed via tcpdump that the packets are indeed arriving at the OpenShift cluster. Pod AAAA is already running and is supposed to get the packets for port 1234 (routed or forwarded from the OpenShift master). We have already assigned an IP to the pod, and the docs below have been followed thoroughly to set up the externalIP, ports, etc. I suspect the issue is with the master config, but I can't paste it here.
My question is: what configuration needs to be put in place in the master config in order to route port 1234 packets to pod AAAA?
I have already tried the OpenShift docs below:
https://docs.openshift.com/container-platform/3.3/admin_guide/tcp_ingress_external_ports.html
https://docs.openshift.com/container-platform/3.3/dev_guide/getting_traffic_into_cluster.html#using-ingress-IP-self-service
First of all, you are only referring to a pod. I would recommend deploying your app as a Deployment instead. Please refer to this and this.
Additionally, in order to expose a Deployment to the outside world in Kubernetes, you have to establish a Service. It can expose your app in a few different ways. Please read through this for the details.
If you are using a standard app, you can usually find an example deployment/service by googling the name of the app and 'kubernetes'.
In your master config (/etc/origin/master/master-config.yaml), just add
servicesNodePortRange: "1234-1234"
kubernetesMasterConfig:
  apiServerArguments:
  controllerArguments:
  masterCount: 1
  masterIP: x.x.x.x
  podEvictionTimeout:
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments:
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: "1234-1234"
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
After that, restart the atomic-openshift-master service.
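On a single-master setup that is typically:
# systemctl restart atomic-openshift-master
(On an Origin install the unit may be called origin-master instead.)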
Then create a second service for your deployment, with the LoadBalancer type. Assuming your deployment config name is "myapp", create a new file similar to the one below:
--- "new-svc.yml" ----
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: myapp
template: myapp-template
name: myapp-ext
spec:
ports:
- name: myapp
nodePort: 1234
port: 1234
protocol: TCP
targetPort: 1234
selector:
name: myapp
sessionAffinity: None
type: LoadBalancer
After that, create the new service:
# oc create -f new-svc.yml
Finally, expose the new service "myapp-ext" by adding a route (1234 <- 1234).
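As a sketch, that last step would be:
# oc expose service myapp-ext --port=1234
Keep in mind that the default HAProxy router only proxies HTTP/HTTPS (TLS) traffic; for a raw TCP protocol on port 1234 it is the NodePort/LoadBalancer service above, not the route, that actually delivers the packets.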

OpenShift: Error pulling image from remote, secure docker registry using certificates

I use the all-in-one VM of OpenShift Origin.
I am trying to pull images from a private, secure registry using an Image Stream. This is the ImageStream definition:
apiVersion: v1
kind: ImageStream
metadata:
  name: my-image-stream
  annotations:
    description: Keeps track of changes in the application image
    name: my-image
spec:
  dockerImageRepository: "my.registry.net/myproject/my-image"
The repository is secured with a certificate. On my local machine, I have the certificates in /etc/docker/certs.d/my.registry.net, and I can log in with docker login my.registry.net.
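For reference, the layout Docker expects there is one directory per registry host, for example:
/etc/docker/certs.d/my.registry.net/ca.crt
with ca.crt being the CA that signed the registry certificate (client certificate/key pairs would sit next to it as .cert/.key files).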
When I run oc import-image, however, I get the following error:
The import completed with errors.
Name: my-image
Namespace: myproject
Created: About an hour ago
Labels: <none>
Description: Keeps track of changes in the application image
Annotations: openshift.io/image.dockerRepositoryCheck=2017-01-27T08:09:49Z
Docker Pull Spec: 172.30.53.244:5000/myproject/my-image
Unique Images: 0
Tags: 1
latest
tagged from my.registry.net/myproject/my-image
! error: Import failed (InternalError): Internal error occurred: Get https://my.registry.net/v2/: remote error: handshake failure
About an hour ago
I have copied the certificates to the Vagrant machine and restarted the Docker daemon, but the problem remains. I have not found any documentation on how to properly add the certificates, so I just put them in the usual Docker folder.
What is the appropriate way to make this work?
Update in response to rezie's answer:
There is no file /etc/origin/master/ca-bundle.crt on my Vagrant box. I found the following ca-bundle.crt files:
$ find / -iname ca-bundle.crt
/etc/pki/tls/certs/ca-bundle.crt
##multiple lines like
/var/lib/docker/devicemapper/mnt/something-hash-like/rootfs/etc/pki/tls/certs/ca-bundle.crt
/var/lib/origin/openshift.local.config/master/ca-bundle.crt
I appended the root certificate to /etc/pki/tls/certs/ca-bundle.crt and to /var/lib/origin/openshift.local.config/master/ca-bundle.crt, but that did not change anything.
Please note, however, that I do not need to have this root certificate in /etc/docker/certs.d/... in order to log in directly using docker login my.registry.net.
I cannot comment due to low karma, so I'll write an answer saying almost the same as rezie.
The error:
! error: Import failed (InternalError): Internal error occurred: Get https://my.registry.net/v2/: remote error: handshake failure
About an hour ago
comes from OpenShift, not from Docker, so adding the certificate to /etc/docker/certs.d/my.registry.net doesn't prevent the error from happening.
You should add the CA certificate at the OS level; my guess is that this step failed for some reason, so do it this way:
openssl s_client -connect my.registry.net:443 </dev/null |
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' \
> /etc/pki/ca-trust/source/anchors/my.registry.net.crt &&
update-ca-trust check && update-ca-trust extract
Finally, test if it worked by running
curl https://my.registry.net/v2
If it doesn't give you a certificate error and you still can't do the oc import, restart the atomic-openshift-master-api service.
Try appending your CA (the same one you said was used in the my.registry.net directory) to OpenShift's CA bundle (e.g. /etc/origin/master/ca-bundle.crt). Then restart the service and reattempt import-image (making sure that you do not include the --insecure flag).
For reference, check out this issue from the Origin project. As you've mentioned, there's currently no way to supply certificates along with the dockercfg secret, and the suggestion from that issue is to add the CA as a trusted root CA across all the hosts.
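Putting that suggestion into commands, a sketch (the CA filename is hypothetical; on the all-in-one VM the bundle lives under /var/lib/origin/openshift.local.config/master/ instead, as noted in the question):
# cat my-registry-ca.crt >> /etc/origin/master/ca-bundle.crt
# systemctl restart origin-master
$ oc import-image my-image-stream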