I have been struggling with this problem for a few days now. My Spring Boot application's actuator endpoints work locally, but when the app is deployed to OpenShift, they don't work.
application.properties
management.endpoints.web.exposure.include=info,health
management.health.probes.enabled=true
management.endpoint.health.enabled=true
management.endpoint.health.probes.enabled=true
build.gradle
implementation 'org.springframework.data:spring-data-commons:2.7.5'
implementation 'org.springframework.boot:spring-boot-starter-actuator:2.7.5'
implementation 'org.springframework.boot:spring-boot-actuator-autoconfigure:2.7.5'
With the above configuration, I can run the application locally and hit the endpoints below successfully:
http://localhost:8081/actuator/health/liveness
http://localhost:8081/actuator/health/readiness
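For reference, when these endpoints are healthy they answer with HTTP 200 and a small JSON body, typically something like:
{"status":"UP"}
That 200 response is what the Kubernetes/OpenShift probes look for.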
OpenShift Environment Variables:
MANAGEMENT_SECURITY_ENABLED=false
MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE=*
MANAGEMENT_ENDPOINTS_WEB_BASE_PATH=/actuator
MANAGEMENT_HEALTH_PROBES_ENABLED=true
MANAGEMENT_ENDPOINT_HEALTH_ENABLED=true
MANAGEMENT_ENDPOINT_HEALTH_PROBES_ENABLED=true
MANAGEMENT_ENDPOINTS_ENABLED_BY_DEFAULT=true
SPRINGDOC_SHOW_ACTUATOR=true
SERVER_SERVLET_CONTEXT_PATH=/proxy/bankroute/
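For comparison, Spring Boot's relaxed binding maps these environment variables back to properties roughly like this (a sketch of the equivalent application.properties, not a file that exists in the project):
management.endpoints.web.exposure.include=*
management.endpoints.web.base-path=/actuator
management.health.probes.enabled=true
management.endpoint.health.enabled=true
management.endpoint.health.probes.enabled=true
management.endpoints.enabled-by-default=true
springdoc.show-actuator=true
server.servlet.context-path=/proxy/bankroute/
# management.security.enabled is a Spring Boot 1.x property; it has no effect on Boot 2.x
management.security.enabled=false
Since no separate management.server.port is set, the actuator endpoints are served by the main servlet container and therefore sit under the context path, i.e. at /proxy/bankroute/actuator/... on the application port.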
With the above configuration the application starts up on OpenShift; however, I am not able to reach <htt..>/proxy/bankroute/actuator/.
I always get a 404 error.
OpenShift Health Checks Configuration:
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /proxy/bankroute/actuator/health/readiness
    port: 9080
    scheme: HTTP
  initialDelaySeconds: 15
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
When I add the OpenShift health check configuration above, the application doesn't start up at all.
What am I missing?
Related
I'm using the Horizontal Pod Autoscaler to scale my pods in an OpenShift environment. I have a web application running in pods. As the pods scale, I get an HTTP 404 error in the first few seconds of an HTTP request. Is this because the route is sending requests to a pod that is still in the process of being launched? If so, is there any way to prevent the error? I've tried setting router.openshift.io/haproxy.health.check.interval to a small value, but I still can't avoid this error.
It seems you did not configure your readiness checks correctly. Check the documentation on how to add readiness and liveness checks to your Deployment.
A readiness probe determines if a container is ready to accept service requests.
A liveness probe determines if a container is still running.
In newer versions of OpenShift / Kubernetes there is now also the startupProbe, which may help you in your case; a sketch of one follows the example below.
Here is an example of a Deployment with a liveness and a readiness probe:
kind: Deployment
apiVersion: apps/v1
...
spec:
  ...
  template:
    spec:
      containers:
        - name: example
          readinessProbe:
            tcpSocket:
              port: 8080
          livenessProbe:
            tcpSocket:
              port: 8080
          ...
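For the 404s right after scale-up, an HTTP readiness check against the application's own health endpoint is usually more meaningful than a plain TCP check, and a startupProbe can hold the other probes off while the app boots. A rough sketch of what goes on the container (the path and port are placeholders for your app):
readinessProbe:
  httpGet:
    path: /healthz          # whatever "ready" means for your app
    port: 8080
  periodSeconds: 5
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30      # allows up to 30 * 5s before the pod is restarted
  periodSeconds: 5
With a readiness probe like this, the router only sends traffic to a pod once the application actually answers, which is what prevents the 404s during scale-up.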
I am deploying a web application in an OpenShift cluster. I want to use OpenShift authentication to log in to the web application, but I couldn't find documentation on how to use OpenShift authentication for third-party apps deployed in OpenShift. Can anyone give some pointers here?
Here are two sites / repositories describing how to use the oauth-proxy as a sidecar container:
https://linuxera.org/oauth-proxy-secure-applications-openshift/
https://github.com/openshift/oauth-proxy/#using-this-proxy-with-openshift
The gist of it is that you'll need to add the openshift/oauth-proxy container to your Deployment as a sidecar and route your traffic through this additional container:
apiVersion: apps/v1
kind: Deployment
[..]
spec:
  [..]
  template:
    spec:
      containers:
        - <YOUR_APPLICATION_CONTAINER>
        - name: oauth-proxy
          args:
            - -provider=openshift
            - -https-address=:8888
            - -http-address=
            - -email-domain=*
            - -upstream=http://localhost:8080
            - -tls-cert=/etc/tls/private/tls.crt
            - -tls-key=/etc/tls/private/tls.key
            - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
            - -cookie-secret-file=/etc/proxy/secrets/session_secret
            - -openshift-service-account=reversewords
            - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            - -skip-auth-regex=^/metrics
          image: quay.io/openshift/oauth-proxy:4.6
          ports:
            - name: oauth-proxy
              containerPort: 8888
              protocol: TCP
You can find full examples in the linked repository or the linked tutorial.
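Traffic then has to reach the proxy port (8888 here) instead of the application port, so the Service and Route in front of the Deployment point at the sidecar. A minimal sketch with hypothetical names; the serving-cert annotation generates the tls.crt/tls.key that the proxy args above expect under /etc/tls/private:
apiVersion: v1
kind: Service
metadata:
  name: example-proxy                # hypothetical name
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: example-proxy-tls
spec:
  selector:
    app: example                     # must match your Deployment's pod labels
  ports:
    - name: oauth-proxy
      port: 8888
      targetPort: 8888
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-proxy
spec:
  to:
    kind: Service
    name: example-proxy
  port:
    targetPort: oauth-proxy
  tls:
    termination: reencrypt           # the proxy itself serves TLS on 8888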
1. What I've tried
I want to build an OCP cluster (actually a single-node, all-in-one cluster) like this blog post:
openshift.com/blog/revamped-openshift-all-in-one-aio-for-labs-and-fun
and I also referred to the official document: Installing bare metal.
So, what I have tried is this:
(I used VirtualBox to create four VMs)
- 1 bastion
- 1 dns
- 1 master
- 1 bootstrap
These VMs are in the same network.
First, I made ignition files to boot the master and bootstrap nodes.
The install-config.yaml that I used:
apiVersion: v1
baseDomain: hololy-local.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: test
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
I only changed baseDomain, the master's replica count, pullSecret, and sshKey.
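For reference, the ignition files are generated from this install-config.yaml with the installer binary, along these lines (the directory name is just an example; the step consumes install-config.yaml, so keep a backup copy):
$ openshift-install create ignition-configs --dir=./aio-install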
After making the ignition files, I started to boot the bootstrap node and the master node with the ISO file.
The bootstrap node was installed successfully, but a problem happens with the master node.
2. Details
Before starting the master node installation, I have to set up DNS, because unlike the bootstrap installation, the master node requests domain info during installation.
IP addresses:
dns: 192.168.56.114
master: 192.168.56.150
The DNS zone is like this:
And I started to set up the master node using these parameters:
coreos.inst.install_dev=sda
coreos.inst.image_url=http://192.168.56.114/rhcos438.x86_64.raw.gz
coreos.inst.ignition_url=http://192.168.56.114/master.ign
ip=192.168.56.150::192.168.56.254:255.255.255.0:core0.hololy-local.com:enp0s3:none nameserver=192.168.56.114
The installation finished successfully, but when it boots without the boot disk (.iso), an error comes out.
It seems to be trying to fetch the master configuration file from api-int.aio.hololy-local.com:22623, and it connects to the IP address that I wrote in the zone file.
But strangely, the connection is refused continuously.
Since I set a static IP during the RHCOS installation, a ping test to 192.168.56.150 works successfully.
I think port 22623 was blocked. But how can I open the port before the OS boots?
I don't know how to solve it.
Thanks.
I solved it.
One difference between the 3.11 and 4.x installations is whether a load balancer is necessary.
In 4.x a load balancer is required, so you should set one up.
In my situation, I set up the LB with nginx, and the sample is like this:
stream {
    upstream ocp_k8s_api {
        #round-robin;
        server 192.168.56.201:6443; #bootstrap
        server 192.168.56.202:6443; #master1
        server 192.168.56.203:6443; #master2
        server 192.168.56.204:6443; #master3
    }
    server {
        listen 6443;
        proxy_pass ocp_k8s_api;
    }
    upstream ocp_m_config {
        #round-robin;
        server 192.168.56.201:22623; #bootstrap
        server 192.168.56.202:22623; #master1
        server 192.168.56.203:22623; #master2
        server 192.168.56.204:22623; #master3
    }
    server {
        listen 22623;
        proxy_pass ocp_m_config;
    }
    upstream ocp_http {
        #round-robin;
        server 192.168.56.205:80; #worker1
        server 192.168.56.206:80; #worker2
    }
    server {
        listen 80;
        proxy_pass ocp_http;
    }
    upstream ocp_https {
        #round-robin;
        server 192.168.56.205:443; #worker1
        server 192.168.56.206:443; #worker2
    }
    server {
        listen 443;
        proxy_pass ocp_https;
    }
}
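For this to work, the cluster DNS must also point at the load balancer; the zone typically carries records roughly like these (the names depend on your cluster name and base domain, and <LB_IP> is a placeholder for the LB host):
api.<cluster_name>.<base_domain>.      IN  A  <LB_IP>
api-int.<cluster_name>.<base_domain>.  IN  A  <LB_IP>
*.apps.<cluster_name>.<base_domain>.   IN  A  <LB_IP>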
thanks.
I have an application running in OpenShift Online Starter, which has worked for the last 5 months: a single pod behind a service, with a route defined that does edge TLS termination.
Since Saturday, when trying to access the application, I get the error message
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The pod is running; I can exec into it and check this, and I can port-forward to it and access it.
Checking the different components with oc:
$ oc get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE
taboo3-23-jt8l8 1/1 Running 0 1h 10.128.37.90 ip-172-31-30-113.ca-central-1.compute.internal
$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
taboo3 172.30.238.44 <none> 8080/TCP 151d
$ oc describe svc taboo3
Name: taboo3
Namespace: sothawo
Labels: app=taboo3
Annotations: openshift.io/generated-by=OpenShiftWebConsole
Selector: deploymentconfig=taboo3
Type: ClusterIP
IP: 172.30.238.44
Port: 8080-tcp 8080/TCP
Endpoints: 10.128.37.90:8080
Session Affinity: None
Events: <none>
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
taboo3 taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com taboo3 8080-tcp edge/Redirect None
I tried to add a new route as well (with or without TLS), but I am getting the same error.
Does anybody have an idea what might be causing this and how to fix it?
Addition April 17, 2018: I got an email from OpenShift Online support:
It looks like you may be affected by this bug.
So I am waiting for it to be resolved.
The problem has been resolved by OpenShift Online; the application is working again.
I'm trying to set up a pod which receives packets on port 1234 coming from external hosts. I confirmed via tcpdump that the packets are indeed arriving at the OpenShift cluster. Now, I already have pod AAAA running, which is supposed to get the packets for port 1234 (routed or forwarded from the OpenShift master). We have already assigned an IP for the pod, and the docs below have been followed thoroughly to set up the externalIP, ports, etc. I suspect the issue is with the master config, but I can't paste it here.
My question is: what configuration needs to be put in place in the master config in order to route port 1234 packets to pod AAAA?
I have already tried the OpenShift docs below:
https://docs.openshift.com/container-platform/3.3/admin_guide/tcp_ingress_external_ports.html
https://docs.openshift.com/container-platform/3.3/dev_guide/getting_traffic_into_cluster.html#using-ingress-IP-self-service
First of all: you are only referring to a Pod. I would recommend deploying your app as a Deployment instead. Please refer to this and this.
Additionally, in order to expose Deployments to the outside world in Kubernetes you have to establish a Service. It can expose your app in a few different ways. Please read through this for the details.
If you are using any standard app, you can usually find an example Deployment/Service by googling the name of the app and 'kubernetes'.
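As a starting point, a minimal sketch of a Deployment plus a ClusterIP Service for a container listening on port 1234 might look like this (the image name and labels are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: <YOUR_IMAGE>
        ports:
        - containerPort: 1234
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 1234
    targetPort: 1234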
In your master config (/etc/origin/master/master-config.yaml), just add
servicesNodePortRange: "1234-1234"
kubernetesMasterConfig:
  apiServerArguments:
  controllerArguments:
  masterCount: 1
  masterIP: x.x.x.x
  podEvictionTimeout:
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments:
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: "1234-1234"
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
After that, restart the atomic-openshift-master service.
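On a single-master 3.x cluster that restart is typically something along these lines (exact service names vary between releases, so treat this as an example):
# systemctl restart atomic-openshift-master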
Then, create a second service for your deployment with the LoadBalancer type. Assuming your deployment config name is "myapp", create a new file similar to the one below:
--- "new-svc.yml" ----
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myapp
    template: myapp-template
  name: myapp-ext
spec:
  ports:
  - name: myapp
    nodePort: 1234
    port: 1234
    protocol: TCP
    targetPort: 1234
  selector:
    name: myapp
  sessionAffinity: None
  type: LoadBalancer
After that, create the new service:
#oc create -f new-svc.yml
Finally, expose the new service "myapp-ext" by adding a route (1234 <-- 1234).
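If the service speaks HTTP, that route can be created from the CLI, for example (the --port flag just pins the route's target port):
# oc expose svc/myapp-ext --port=1234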