I have deployed my web service in OpenShift (Tomcat), and every time I request my services it sometimes works and sometimes doesn't.
It was working perfectly before. The number of pods is 1, and there are no logs for the failures.
The error is:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
Output of oc describe routes:
Name: mysample
Namespace: enzen
Created: 12 days ago
Labels: app=mysample
Annotations: openshift.io/host.generated=true
Requested Host: mysample-enzen.193b.starter-ca-central-1.openshiftapps.com
exposed on router router (host elb.193b.starter-ca-central-1.openshiftapps.com) 12 days ago
Path: <none>
TLS Termination: <none>
Insecure Policy: <none>
Endpoint Port: 8080-tcp
Service: mysample
Weight: 100 (100%)
Endpoints: 10.128.18.210:8080
Output of oc describe services:
Name: mysample
Namespace: enzen
Labels: app=mysample
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=mysample,deploymentconfig=mysample
Type: ClusterIP
IP: 172.30.145.245
Port: 8080-tcp 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.128.18.210:8080
Session Affinity: None
Events: <none>
The initial thought is that the route is trying to spread load across multiple services (https://docs.openshift.com/container-platform/3.9/architecture/networking/routes.html#alternateBackends) and one of those services is down or unavailable. Typically in this case I would recreate the service and the route to verify that they are configured as expected. Perhaps you can share the configuration of the route, the service, and the pod?
oc describe routes
oc describe services
oc describe pods
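For reference, a route that spreads traffic across more than one service declares them under alternateBackends. A hedged sketch of what that looks like (the second service name here is purely hypothetical, not taken from your project):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mysample
spec:
  host: mysample-enzen.193b.starter-ca-central-1.openshiftapps.com
  port:
    targetPort: 8080-tcp
  to:
    kind: Service
    name: mysample            # primary backend
    weight: 50
  alternateBackends:          # any services listed here share the traffic
  - kind: Service
    name: mysample-canary     # hypothetical second service
    weight: 50

A route whose describe output shows a single service at Weight: 100 (100%) has no alternate backends in play.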
#### EDIT 10-22-18 ####
Adding the output from the Google doc, with the build pods redacted (as they are not relevant), for the benefit of additional readers. Nothing immediately jumps out as an app/config issue:
oc describe routes
Name: mysample
Namespace: enzen
Created: 12 days ago
Labels: app=mysample
Annotations: openshift.io/host.generated=true
Requested Host: mysample-enzen.193b.starter-ca-central-1.openshiftapps.com
exposed on router router (host elb.193b.starter-ca-central-1.openshiftapps.com) 12 days ago
Path: <none>
TLS Termination: <none>
Insecure Policy: <none>
Endpoint Port: 8080-tcp
Service: mysample
Weight: 100 (100%)
Endpoints: 10.128.18.210:8080
oc describe services
Name: mysample
Namespace: enzen
Labels: app=mysample
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=mysample,deploymentconfig=mysample
Type: ClusterIP
IP: 172.30.145.245
Port: 8080-tcp 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.128.18.210:8080
Session Affinity: None
Events: <none>
oc describe pods
Name: mysample-15-z85zt
Namespace: enzen
Priority: 0
PriorityClassName: <none>
Node: ip-172-31-29-189.ca-central-1.compute.internal/172.31.29.189
Start Time: Sun, 21 Oct 2018 20:55:36 +0530
Labels: app=mysample
deployment=mysample-15
deploymentconfig=mysample
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu, memory request for container mysample; cpu, memory limit for container mysample
openshift.io/deployment-config.latest-version=15
openshift.io/deployment-config.name=mysample
openshift.io/deployment.name=mysample-15
openshift.io/generated-by=OpenShiftNewApp
openshift.io/scc=restricted
Status: Running
IP: 10.128.18.210
Controlled By: ReplicationController/mysample-15
Containers:
mysample:
Container ID: cri-o://0cd20854571232b310ce22a282c8d5832908533d28d5d720537bbf3618b86c44
Image: docker-registry.default.svc:5000/enzen/mysample@sha256:adadeb7decf82b29699861171c58d7ae5f87ca6eeb1c10e5a1d525e4a0888ebc
Image ID: docker-registry.default.svc:5000/enzen/mysample@sha256:adadeb7decf82b29699861171c58d7ae5f87ca6eeb1c10e5a1d525e4a0888ebc
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 21 Oct 2018 20:55:40 +0530
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 512Mi
Requests:
cpu: 20m
memory: 256Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8xjb8 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-8xjb8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8xjb8
Optional: false
QoS Class: Burstable
Node-Selectors: type=compute
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
Events: <none>
Hi, I have deployed a Streamlit application which acts as a UI for downloading data from our platform. After deploying it on Kubernetes, I observe that the application keeps reloading every 30 seconds, which is very annoying.
If I access the app by port-forwarding the service it works fine, but going through nginx it has the above-mentioned problem.
Has anyone faced this issue?
I was looking in the Streamlit forum and saw the same issue as mine, but there is no clear solution:
Streamlit reruns all 30 seconds
I do see a websocket time of ~30s in the developer tools, under the Network section in the browser.
My k8s manifest file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: streamlit-deployment
  labels:
    app: streamlit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: streamlit
  template:
    metadata:
      labels:
        app: streamlit
    spec:
      containers:
      - name: streamlit
        image: <image>:<tag>
        imagePullPolicy: Always
        ports:
        - containerPort: 8501
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8501
            scheme: HTTP
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8501
            scheme: HTTP
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 745Mi
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: streamlit
  name: streamlit-service
spec:
  ports:
  - nodePort: 32640
    port: 8501
    protocol: TCP
    targetPort: 8501
  selector:
    app: streamlit
  sessionAffinity: None
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  labels:
    app: streamlit
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /webapp(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: streamlit-service
            port:
              number: 8501
I tried adding the following annotations to the nginx Ingress definition, but it didn't work:
nginx.ingress.kubernetes.io/connection-draining: "true"
nginx.ingress.kubernetes.io/connection-draining-timeout: "3000"
I also looked into the source code and was wondering if it's because of the Tornado settings in
lib/streamlit/web/server/server.py, which has the websocket ping timeout set to 30 seconds:
"websocket_ping_timeout": 30,
I tried setting this to a higher value and building a new image for my deployment, but unfortunately it didn't help.
I'd really appreciate any leads.
Finally, we found out that the issue was due to the default timeout of the GCP load balancer, which was causing the websocket connection to re-initialize every 30 seconds. All we needed to do was change the timeout in the backend configuration of the load balancer. I hope this helps.
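For readers who manage that load balancer through GKE Ingress, a hedged sketch of raising the backend timeout from Kubernetes with a BackendConfig (the object name and the 3600-second value are assumptions for illustration):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: streamlit-backendconfig    # hypothetical name
spec:
  timeoutSec: 3600                 # the default backend service timeout is 30s
---
apiVersion: v1
kind: Service
metadata:
  name: streamlit-service
  annotations:
    # attach the BackendConfig to this service
    cloud.google.com/backend-config: '{"default": "streamlit-backendconfig"}'
spec:
  type: NodePort
  ports:
  - port: 8501
    targetPort: 8501
  selector:
    app: streamlit

If the load balancer was created outside of GKE Ingress, the equivalent change is editing the timeout on the backend service in the GCP console, as described above.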
I am trying to expose my application inside the AKS cluster using an ingress:
It creates a service and an ingress, but somehow no address is assigned to the ingress. What could be a possible reason for this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dockerdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dockerdemo
  template:
    metadata:
      labels:
        app: dockerdemo
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: dockerdemo
        image: devsecopsacademy/dockerapp:v3
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: dockerdemo-service
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: dockerdemo
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress15
  annotations:
    kubernetes.io/ingress.class: addon-http-application-rounting
spec:
  rules:
  - host: curefirsttestapp.cluster15-dns-c42b65ee.hcp.westeurope.azmk8s.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dockerdemo-service
            port:
              number: 80
Well, first make sure your application is up and functioning inside your K8s cluster by using a port-forward to your localhost:
kubectl -n $NAMESPACE port-forward svc/$SERVICE :$PORT
If the app is reachable and your calls are getting back a 200 status, you can move on to the ingress part:
Make sure the ingress controller is properly installed among your services:
kubectl -n $NAMESPACE get svc
Add a DNS record in your DNS zone which maps your domain.com to the ingress controller's $EXTERNAL_IP.
Take a look at the ingress you created for your $SERVICE:
kubectl -n $NAMESPACE get ingress
At this stage, if your application is running properly and the ingress is set up correctly, the app should be reachable through domain.com; otherwise we'll need further debugging.
Make sure you have an ingress controller deployed. This is a load balancer service which can have either a public or a private IP depending on your situation.
Make sure you have an ingress definition with a rule that points to your service. This is the metadata that tells your ingress controller how to route requests to its IP address. These routing rules can also specify how paths are handled (strip, exact, etc.).
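For the second point, a minimal hedged sketch of an ingress with a rule pointing at the service from the question (the host is taken from the question; the ingress class value must exactly match what your controller watches, e.g. the AKS HTTP application routing add-on uses addon-http-application-routing):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress15
  annotations:
    # must match the class your ingress controller actually watches
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: curefirsttestapp.cluster15-dns-c42b65ee.hcp.westeurope.azmk8s.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dockerdemo-service
            port:
              number: 80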
I'm trying to follow this guide to set up a MySQL instance to connect to. The Kubernetes cluster is run on Minikube.
From the guide, I have this to set up my persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
I ran kubectl describe pods/mysql-c85f7f79c-kqjgp and got this:
Start Time: Wed, 29 Jan 2020 09:09:18 -0800
Labels: app=mysql
pod-template-hash=c85f7f79c
Annotations: <none>
Status: Running
IP: 172.17.0.13
IPs:
IP: 172.17.0.13
Controlled By: ReplicaSet/mysql-c85f7f79c
Containers:
mysql:
Container ID: docker://f583dad6d2d689365171a72a423699860854e7e065090bc7488ade2c293087d3
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:9527bae58991a173ad7d41c8309887a69cb8bd178234386acb28b51169d0b30e
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 29 Jan 2020 19:40:21 -0800
Finished: Wed, 29 Jan 2020 19:40:22 -0800
Ready: False
Restart Count: 7
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qchv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-5qchv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5qchv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/mysql-c85f7f79c-kqjgp to minikube
Normal Pulled 10h (x5 over 10h) kubelet, minikube Container image "mysql:5.6" already present on machine
Normal Created 10h (x5 over 10h) kubelet, minikube Created container mysql
Normal Started 10h (x5 over 10h) kubelet, minikube Started container mysql
Warning BackOff 2m15s (x50 over 10h) kubelet, minikube Back-off restarting failed container
When I get the logs via kubectl logs pods/mysql-c85f7f79c-kqjgp, I only see this:
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
Is there a better way to debug? Why are the logs empty?
I faced the same issue, and I solved it by increasing the MySQL container's resources from 128Mi to 512Mi. The following configuration works for me:
containers:
- name: cant-drupal-mysql
  image: mysql:5.7
  resources:
    limits:
      memory: "512Mi"
      cpu: "1500m"
Hmm really odd, I changed my mysql-deployment.yml to use MySQL 5.7 and it seems to have worked...
- image: mysql:5.7
Gonna take this as the solution until further notice/commentary.
Currently 5.8 works:
image: mysql:5.8
Update the above in your Deployment file.
Attempting to recreate all my assets from a fresh OpenShift (except for my PVCs), I deleted everything ($ oc delete all --all; oc delete configmap --all; oc delete secret -l namespace=visor). I did this so I could be certain my 'oc process -f template' did a complete job.
This deleted the glusterfs-dynamic Services that I didn't realize were required to mount PVCs (persistent volume claims).
Solution 1: Recreate the Services
I recreated the services so that they look just like similar glusterfs-dynamic services, but that's not enough; even if the IP address matches, the PVCs are still not mountable ('endpoints "glusterfs-dynamic-xxx" not found').
Solution 2: Copy data from old PVC to new PVC
To copy, I need to be able to access the PVC from a pod, but I can't mount the PVC... (a sketch of such a copy pod is below).
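For Solution 2, a hedged sketch of a throwaway helper pod that mounts both claims so the data can be copied across; the helper pod name, the image, and the PVC names are assumptions, and this only works once the claims are mountable again:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-copy-helper                            # hypothetical helper pod
  namespace: visor
spec:
  restartPolicy: Never
  containers:
  - name: copier
    image: registry.access.redhat.com/ubi8/ubi     # any image with cp works
    command: ["sh", "-c", "cp -a /old/. /new/ && sleep 3600"]
    volumeMounts:
    - name: old-data
      mountPath: /old
    - name: new-data
      mountPath: /new
  volumes:
  - name: old-data
    persistentVolumeClaim:
      claimName: prom-a-pvc                        # assumed name of the old claim
  - name: new-data
    persistentVolumeClaim:
      claimName: prom-a-pvc-new                    # hypothetical new claim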
Back to Solution 1, my attempt at recreating the service:
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      gluster.kubernetes.io/provisioned-for-pvc: prom-a-pvc
      namespace: visor
    name: glusterfs-dynamic-615a9bfa-57d9-11e9-b511-001a4a195f6a
    namespace: visor
  spec:
    ports:
    - port: 1
      protocol: TCP
      targetPort: 1
    sessionAffinity: None
    type: ClusterIP
I want to be able to mount my PVCs.
But instead I get this error:
MountVolume.NewMounter initialization failed for volume "pvc-89647bcb-6df4-11e9-bd79-001a4a195f6a" : endpoints "glusterfs-dynamic-89647bcb-6df4-11e9-bd79-001a4a195f6a" not found
Figured it out. Woot. There was also an "endpoints" asset that got deleted that I had to recreate:
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      gluster.kubernetes.io/provisioned-for-pvc: prom-a-pvc
      namespace: visor
    name: glusterfs-dynamic-615a9bfa-57d9-11e9-b511-001a4a195f6a
    namespace: visor
  spec:
    ports:
    - port: 1
      protocol: TCP
      targetPort: 1
    sessionAffinity: None
    type: ClusterIP
- apiVersion: v1
  kind: Endpoints
  metadata:
    labels:
      gluster.kubernetes.io/provisioned-for-pvc: prom-a-pvc
    name: glusterfs-dynamic-615a9bfa-57d9-11e9-b511-001a4a195f6a
    namespace: visor
  subsets:
  - addresses:
    - ip: 10.25.6.231
    - ip: 10.25.6.232
    - ip: 10.27.6.241
    - ip: 10.27.6.242
    - ip: 10.5.6.221
    - ip: 10.5.6.222
    ports:
    - port: 1
      protocol: TCP
I used these commands to see what I needed to include in my YAML:
$ oc get endpoints
$ oc edit endpoints glusterfs-dynamic-820de1e7-6df6-11e9-bd79-001a4a195f6a
Links:
I'm not alone: https://github.com/heketi/heketi/issues/757
Where I learned about the Endpoints object: https://lists.openshift.redhat.com/openshift-archives/users/2019-April/msg00005.html
I have a simple MySQL pod sitting behind a MySQL service.
Additionally, I have another pod running a Python process that is trying to connect to the MySQL pod.
If I try connecting to the IP address of the MySQL pod manually from the Python pod, everything is fine. However, if I try connecting to the MySQL service, I get an error that I can't connect to MySQL.
grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe pod mysqlpod
Name: mysqlpod
Namespace: default
Node: minikube/192.168.99.100
Start Time: Fri, 20 Jan 2017 11:10:50 -0600
Labels: <none>
Status: Running
IP: 172.17.0.4
Controllers: <none>
grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe service mysqlservice
Name: mysqlservice
Namespace: default
Labels: <none>
Selector: db=mysqllike
Type: ClusterIP
IP: None
Port: <unset> 3306/TCP
Endpoints: 172.17.0.5:3306
Session Affinity: None
No events.
grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe pod basic-python-model
Name: basic-python-model
Namespace: default
Node: minikube/192.168.99.100
Start Time: Fri, 20 Jan 2017 12:01:50 -0600
Labels: db=mysqllike
Status: Running
IP: 172.17.0.5
Controllers: <none>
If I attach to my python container and do an nslookup of the mysqlservice, then I'm actually getting the wrong IP. As you saw above the IP of the mysqlpod is 172.17.0.4 while nslookup mysqlservice is resolving to 172.17.0.5.
grant@grant-Latitude-E7450:~/k8s/objects$ k8s exec -it basic-python-model bash
[root@basic-python-model /]# nslookup mysqlservice
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: mysqlservice.default.svc.cluster.local
Address: 172.17.0.5
I'm fairly new to kubernetes, but I've been banging my head on this issue for a few hours and I can't seem to understand what I'm doing wrong.
So this was actually the correct behavior; I had just misconfigured my pods.
For future people who are stuck:
The selector defined in a Kubernetes service must match the label of the pod(s) you wish to serve. I.e., in my MySqlService.yaml file I have the name selector set to "mysqlpod":
apiVersion: v1
kind: Service
metadata:
  name: mysqlservice
spec:
  clusterIP: None
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysqlpod
Thus, in my MySqlPod.yaml file I need an exactly matching label:
kind: Pod
apiVersion: v1
metadata:
  name: mysqlpod
  labels:
    name: mysqlpod
spec:
  ...
For anyone coming here again, please check @gnicholas' answer, but also make sure that clusterIP: None is correctly set.
I happened to indent clusterIP: None too much in the .yml file, so the field was ignored by Kubernetes; a clusterIP was therefore assigned anyway, causing the wrong-IP issue.
Be aware that validation won't throw any error; the field will just be silently ignored.
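As a hedged illustration of what that looks like: in the first sketch below clusterIP sits directly under spec and the service is headless as intended; in the second it is over-indented into the port entry, gets dropped, and a cluster IP is assigned anyway.

# intended: headless service
apiVersion: v1
kind: Service
metadata:
  name: mysqlservice
spec:
  clusterIP: None
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysqlpod

# over-indented: clusterIP ends up nested under the port entry and is ignored
apiVersion: v1
kind: Service
metadata:
  name: mysqlservice
spec:
  ports:
  - port: 3306
    targetPort: 3306
    clusterIP: None
  selector:
    name: mysqlpod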