I have a pod and I am attempting to attach persistent MySQL storage to it. The deployment starts and, after waiting a while, it fails with the following error in the log:
--> Scaling up php-4 from 0 to 1, scaling down php-1 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
Scaling php-4 up to 1
--> FailedCreate: php-4 Error creating: pods "php-4-" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi
error: timed out waiting for "php-4" to be synced
If this is caused by limits, how can I deploy a new version of a pod with a new config if I can only run one at a time? Is there something that I am missing?
If you are at the limit on resources, a rolling deployment will not work, because the new pod cannot be created without exceeding the resource quota. If you want to run at the resource limits, you need to change the deployment strategy in the deployment config from Rolling to Recreate.
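For example, assuming the deployment config is named php (a placeholder; use your own name), the strategy can be switched with a patch like this:
oc patch dc/php -p '{"spec":{"strategy":{"type":"Recreate"}}}'
With Recreate, the old pod is scaled down before the new one is created, so the rollout never needs both pods to fit inside the quota at the same time.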
Related
I am deploying microservices in my OpenShift cluster, but out of 90 microservices nearly 10 are stuck in Init:0/1 status. Is there a way to troubleshoot the issue?
If you are using the web UI:
Go to the Developer tab and open your project.
There you should see the recent events in your project, where the errors related to the stuck 0/1 pods should appear. For me it was something like:
Error creating: pods "xxxx" is forbidden: exceeded quota: project-quota, requested: requests.memory=500Mi, used: requests.memory=750Mi, limited: requests.memory=1Gi
So my project was attempting to use 1.25Gi of memory when 1Gi was the limit.
In this case I went down to the project quotas section on my project screen
and saw something like this in YAML format:
spec:
  hard:
    file-share-dr-off.storageclass.storage.k8s.io/requests.storage: xGi
    file-share-dr-on.storageclass.storage.k8s.io/requests.storage: xGi
    limits.cpu: 'x'
    limits.memory: 1Gi
    pods: 'x'
    requests.cpu: 'x'
    requests.memory: 1Gi
    vsan.storageclass.storage.k8s.io/requests.storage: xGi
So I increased limits.memory and requests.memory to 2Gi for my project quota and hit save.
After that the pod errors were fixed, and the deployment went from 0/1 to 1/1 pods.
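The same change can also be made from the CLI; a minimal sketch, assuming the quota object is named project-quota as in the error message above:
oc patch resourcequota project-quota -p '{"spec":{"hard":{"limits.memory":"2Gi","requests.memory":"2Gi"}}}'
oc describe resourcequota project-quota
The describe output then shows the new hard limits next to the current usage.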
I'm trying to run WordPress using the linked Kubernetes example, and the only change I made was 20Gi to 5Gi, but when I run kubectl apply -k ., I get this error:
Error from server (Forbidden): error when creating ".": persistentvolumeclaims "wp-pv-claim" is forbidden: exceeded quota: storagequota, requested: requests.storage=5Gi, used: requests.storage=5Gi, limited: requests.storage=5Gi
I searched but did not find any answer related to my case (or maybe I'm just wrong).
Could you please answer these questions:
How do I solve the above issue?
If the volume's size is limited to 5Gi, does that mean the pod cannot grow bigger than 5Gi? I mean, if I exec into the pod and run a command like dd if=/dev/zero of=file bs=1M count=8000, should it create an 8GB file or not? Does this quota limit the whole pod, or only a specific path like /var/www/html?
Edit 1
kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app=wordpress
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: wordpress-mysql-6c479567b-vzpm5
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 4m (x222 over 59m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
I decided to summarize our conversation from the comments for better readability and visibility.
At first the issue seemed to be caused by the resource quota:
Error from server (Forbidden): error when creating ".": persistentvolumeclaims "wp-pv-claim" is forbidden: exceeded quota: storagequota, requested: requests.storage=5Gi, used: requests.storage=5Gi, limited: requests.storage=5Gi
It looked like there was already an existing PVC counted against the quota, so it wouldn't allow creating a new one.
OP removed the resource quota, although that was not necessary in this case, since the real issue was with the PVC.
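Before deleting a quota it is worth checking what it is actually charging; a quick sketch, assuming the names from the error message (quota storagequota, namespace default):
kubectl get resourcequota -n default
kubectl describe resourcequota storagequota -n default
The Used column in the describe output shows which resources (for example an existing PVC's requests.storage) are already counted against the quota.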
kubectl describe pvc mysql-pv-claim showed the following event:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 4m (x222 over 59m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Event message:
persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Since OP created the cluster with kubeadm, and kubeadm doesn't come with a predeployed storage provisioner out of the box, one needs to be added manually. (A storage provisioner is a controller that can create a volume and mount it.)
Each StorageClass has a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. Since there was no storage class in the cluster, OP decided to create one and picked the local storage class, but forgot that:
Local volumes do not currently support dynamic provisioning [...].
and
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported
This means that a local volume had to be created manually.
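A minimal sketch of what that static setup could look like; the StorageClass name, PV name, host path, and node name below are placeholders rather than values from the original thread:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes cannot be dynamically provisioned
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-local-pv
spec:
  capacity:
    storage: 5Gi                            # must cover the 5Gi requested by the PVC
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/mysql                  # directory must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1             # placeholder node name
The PVC then needs storageClassName: local-storage (and a size of at most 5Gi) so the controller can bind it to this PV.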
I'm trying to install the Tiller server into an OpenShift project.
Helm/tiller version: 2.9.0
My project name: paytiller
At step 3, executing this command (as mentioned in this document: https://www.openshift.com/blog/getting-started-helm-openshift):
oc rollout status deployment tiller
I get this error:
error: deployment "tiller" exceeded its progress deadline
I'm not clear on what the error message means, nor could I find any logs.
Any idea why this error occurs?
If this doesn't work, what are the other suggestions for templating in Openshift?
EDIT
oc get events
Events:
Type Reason Age From Message
---- ------ ---- ---- ---
Warning Failed 14m (x5493 over 21h) kubelet, example.com Error: ImagePullBackOff
Normal Pulling 9m (x255 over 21h) kubelet, example.com pulling image "gcr.io/kubernetes-helm/tiller:v2.9.0"
Normal BackOff 4m (x5537 over 21h) kubelet, example.com Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.9.0"
Thanks.
The issue was with permissions on our OpenShift platform. We didn't have access to pull from the public registry directly.
We added kubernetes-helm as a Docker image to our organization's repository, and then we were able to pull the image into the OpenShift project. It is working now. But we still didn't get any clue about the issue from the logs.
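For reference, the mirror-and-retag flow looks roughly like this; registry.example.org is a placeholder for the organization registry, and the deployment/container names depend on how Tiller was installed:
docker pull gcr.io/kubernetes-helm/tiller:v2.9.0
docker tag gcr.io/kubernetes-helm/tiller:v2.9.0 registry.example.org/kubernetes-helm/tiller:v2.9.0
docker push registry.example.org/kubernetes-helm/tiller:v2.9.0
oc set image deployment/tiller tiller=registry.example.org/kubernetes-helm/tiller:v2.9.0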
The status ImagePullBackOff tells you that the image gcr.io/kubernetes-helm/tiller:v2.9.0 could not be pulled from the container registry, so your OpenShift node cannot pull that image for some reason. This is often due to network proxies, a non-existent image (not the issue here), or other restrictions in the (corporate) network.
You can use oc describe pod <pod that shows ImagePullBackOff> to find out the more detailed error message that may help you further.
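Two quick checks usually narrow it down (the pod name is a placeholder):
oc describe pod <tiller-pod-name>                  # the Events section shows the exact pull error
docker pull gcr.io/kubernetes-helm/tiller:v2.9.0   # run on the node to test whether gcr.io is reachable at all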
Also, note that the blog post you linked is from 2017, which is very old. Here is a more current version: Build Kubernetes Operators from Helm Charts in 5 steps
I have an nginx image and I am able to push it to the OpenShift internal registry. However, when I try to use that image from the internal registry to create an app, I get an ImagePullBackOff error.
Below are the steps which I am following.
[root@artel1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nginx latest 231d40e811cd 4 weeks ago 126 MB
[root@artel1 ~]# docker tag 231d40e811cd docker-registry-default.router.default.svc.cluster.local/openshift/nginx
[root@artel1 ~]# docker push docker-registry-default.router.default.svc.cluster.local/openshift/nginx
[root@artel1 ~]# oc new-app --docker-image=docker-registry-default.router.default.svc.cluster.local/openshift/test-image
W1227 10:18:34.761105 33535 dockerimagelookup.go:233] Docker registry lookup failed: Get https://docker-registry-default.router.default.svc.cluster.local/v2/: x509: certificate signed by unknown authority
W1227 10:18:34.784988 33535 newapp.go:479] Could not find an image stream match for "docker-registry-default.router.default.svc.cluster.local/openshift/test-image:latest". Make sure that a Docker image with that tag is available on the node for the deployment to succeed.
--> Found Docker image 7809d84 (8 days old) from docker-registry-default.router.default.svc.cluster.local for "docker-registry-default.router.default.svc.cluster.local/openshift/test-image:latest"
OpenShift Node
--------------
This is a component of OpenShift and contains the software for individual nodes when using SDN.
Tags: openshift, node
* This image will be deployed in deployment config "test-image"
* Ports 53/tcp, 8443/tcp will be load balanced by service "test-image"
* Other containers can access this service through the hostname "test-image"
* WARNING: Image "docker-registry-default.router.default.svc.cluster.local/openshift/test-image:latest" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
deploymentconfig.apps.openshift.io "test-image" created
service "test-image" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/test-image'
Run 'oc status' to view your app.
Events logs
34s 47s 2 test-image-1-dzhmk.15e44d430e48ec8d Pod spec.containers{test-image} Normal Pulling kubelet, artel2.fyre.ibm.com pulling image "docker-registry-default.router.default.svc.cluster.local/openshift/test-image:latest"
34s 46s 2 test-image-1-dzhmk.15e44d4318ec7f53 Pod spec.containers{test-image} Warning Failed kubelet, artel2.fyre.ibm.com Failed to pull image "docker-registry-default.router.default.svc.cluster.local/openshift/test-image:latest": rpc error: code = Unknown desc = Error: image openshift/test-image:latest not found
34s 46s 2 test-image-1-dzhmk.15e44d4318ed5311 Pod spec.containers{test-image} Warning Failed kubelet, artel2.fyre.ibm.com Error: ErrImagePull
27s 46s 7 test-image-1-dzhmk.15e44d433c24e5c9 Pod Normal SandboxChanged kubelet, artel2.fyre.ibm.com Pod sandbox changed, it will be killed and re-created.
25s 43s 6 test-image-1-dzhmk.15e44d43dd6a7b57 Pod spec.containers{test-image} Warning Failed kubelet, artel2.fyre.ibm.com Error: ImagePullBackOff
25s 43s 6 test-image-1-dzhmk.15e44d43dd6a10d9 Pod spec.containers{test-image} Normal BackOff kubelet, artel2.fyre.ibm.com Back-off pulling image "docker-registry-default.router.default.svc.cluster.local/openshift/test-image:latest"
Pod status
[root@artel1 ~]# oc get po
NAME READY STATUS RESTARTS AGE
test-image-1-deploy 1/1 Running 0 3m
test-image-1-dzhmk 0/1 ImagePullBackOff 0 3m
Where exactly are things going wrong?
It looks like 'docker push' did not complete successfully; it should return 'Image successfully pushed'.
Try to log in to the internal registry first (see accessing_registry), and recheck the registry's service hostname, or use the service IP instead.
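A sketch of the login-then-push sequence, reusing the registry route from the question; the token comes from your current oc session:
docker login -u $(oc whoami) -p $(oc whoami -t) docker-registry-default.router.default.svc.cluster.local
docker tag docker.io/nginx docker-registry-default.router.default.svc.cluster.local/openshift/nginx
docker push docker-registry-default.router.default.svc.cluster.local/openshift/nginx
Also double-check that the image name given to oc new-app (openshift/test-image above) matches what was actually pushed (openshift/nginx).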
I am working with OpenShift Origin 3.9 and had an application (consisting of a service, pods, etc.) building and running alright.
However, now rebuilds fail with this error message:
Successfully built 1234567890ab
Pushing image docker-registry.default.svc:5000/my_project/my_app:latest ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Registry server Address:
Registry server User Name: serviceaccount
Registry server Email: serviceaccount@example.org
Registry server Password: <<non-empty>>
error: build error: Failed to push image:
After retrying 6 times, Push image still failed due to error:
Get https://docker-registry.default.svc:5000/v1/_ping: dial tcp 1.2.3.4:5000:
getsockopt: connection refused
I don't have admin privileges on that cluster, so it is unlikely that this is due to the nodes' DNS setup, as similar answers would suggest (e.g. here).
One possibly contributing cause could be that I had created a service account in the meantime (since the last successful build) and temporarily logged in with its API token. However, I am now logged in again with (an API token for) my full account (e.g. according to oc whoami).
This is how I am starting the rebuild:
oc login --token=$api_token
oc start-build --follow my_app
What could explain this error, and how can I further diagnose and overcome it, especially given that I don't have cluster admin rights?
The problem "somehow" went away after some days. Whether that was due to operator intervention or something else, I cannot tell.
You missed one step:
oc policy add-role-to-user system:image-builder <user_name>
Please follow this doc:
https://blog.openshift.com/remotely-push-pull-container-images-openshift/
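For example, assuming your username is developer (a placeholder) and using the my_project / my_app names from the build output above, the grant and a retried build would look like:
oc policy add-role-to-user system:image-builder developer -n my_project
oc start-build --follow my_app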