I am running a deploy workflow for Azure and getting the following error. Any idea what it is complaining about?
error: error validating "STDIN": error validating data: [ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "args" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "command" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "env" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "ports" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "volumeMounts" in io.k8s.api.core.v1.LocalObjectReference]; if you choose to ignore these errors, turn validation off with --validate=false
Error: Process completed with exit code 1.
It deployed the pod and the pod is stuck at this on AKS:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
view-app-dev-895f4c475-mrmtj 0/1 ImagePullBackOff 0 4h14m
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 32m (x45 over 3h57m) kubelet Pulling image " view:latest"
Normal BackOff 2m23s (x1031 over 3h57m) kubelet Back-off pulling image " view:latest"
The issue was that I had put imagePullSecrets in the wrong place in the manifest file; moving it fixed the problem.
imagePullSecrets should go below volumeMounts, not above. The validation error gives it away: the container fields (args, command, env, ports, volumeMounts) were being parsed as children of imagePullSecrets[0], meaning the indentation made them part of that entry instead of part of the container spec.
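For anyone hitting the same thing, here is a minimal sketch of where imagePullSecrets belongs: at the pod spec level, as a sibling of containers. The container name, image path and secret name below are placeholders, not taken from my actual manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: view-app-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: view-app
  template:
    metadata:
      labels:
        app: view-app
    spec:
      containers:
        - name: view-app                               # placeholder container name
          image: myregistry.azurecr.io/view:latest     # placeholder registry path
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config
              mountPath: /etc/view
      volumes:
        - name: config
          emptyDir: {}
      imagePullSecrets:                                # pod-spec level, sibling of containers
        - name: acr-secret                             # placeholder secret name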
Following this tutorial on installing Vault with Helm on OpenShift, I encountered the following error after executing the command:
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml
For the config:
my_values.yaml
echo '# Custom values for the Vault chart
global:
  # If deploying to OpenShift
  openshift: true
server:
  dev:
    enabled: true
  serviceAccount:
    create: true
    name: vault-sa
injector:
  enabled: true
authDelegator:
  enabled: true' > my_values.yaml
The error:
$ helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f values.yaml
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
What exactly is happening, and how can I reset this specific namespace to point to the right release namespace?
Have you by chance tried the exact same thing before? That is what the error is hinting at.
If we dissect the error, we get to the root of the problem:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists.
So something on the cluster already exists that you were trying to deploy via the helm chart.
Unable to continue with install:
Helm is aborting due to this failure
ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists
So the ClusterRole vault-agent-injector-clusterrole that the Helm chart is supposed to put onto the cluster already exists. ClusterRoles aren't namespace-specific, hence the namespace in the message is blank.
and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
The default behavior is to try to import existing resources that the chart requires, but that is not possible here because the existing ClusterRole is owned by a different release namespace (my-vault-0) than the one you are installing into (my-vault-1).
To fix this, remove the existing deployment of your chart and then give it another try; it should then work as expected.
Make sure all resources are gone. For this particular one you can check with kubectl get clusterroles
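A minimal cleanup sketch, assuming the leftover resources came from an earlier release that was also named vault and lived in the my-vault-0 namespace (the namespace is taken from the error message; the release name is an assumption):

# Remove the previous release that still owns the ClusterRole
helm uninstall vault -n my-vault-0

# If the release is already gone but the ClusterRole was left behind,
# delete it directly (ClusterRoles are cluster-scoped, so no namespace flag)
kubectl get clusterroles | grep vault-agent-injector
kubectl delete clusterrole vault-agent-injector-clusterrole

# Then retry the install
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml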
I'm trying to run WordPress on Kubernetes by following this tutorial (link), and the only change I made was reducing 20Gi to 5Gi, but when I run kubectl apply -k ., I get this error:
Error from server (Forbidden): error when creating ".": persistentvolumeclaims "wp-pv-claim" is forbidden: exceeded quota: storagequota, requested: requests.storage=5Gi, used: requests.storage=5Gi, limited: requests.storage=5Gi
I searched but did not find any related answer to mine (or even maybe I'm wrong).
Could you please answer me these questions:
How to solve the above issue?
If the volume's size is limited to 5Gi, does that mean the pod cannot grow beyond 5Gi? For example, if I exec into the pod and run a command like dd if=/dev/zero of=file bs=1M count=8000, will it create an 8G file or not? Do this quota and the volume limit apply to the whole pod, or only to a specific path like /var/www/html?
Edit 1
kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app=wordpress
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: wordpress-mysql-6c479567b-vzpm5
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 4m (x222 over 59m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
I decided to summarize our comments conversation for better readability and visibility.
The issue at first seemed to be caused by resourcequota.
Error from server (Forbidden): error when creating ".": persistentvolumeclaims "wp-pv-claim" is forbidden: exceeded quota: storagequota, requested: requests.storage=5Gi, used: requests.storage=5Gi, limited: requests.storage=5Gi
It looked like there was already an existing PVC consuming the quota, which is why the new one could not be created.
OP removed the resource quota, although that was not necessary in this case, since the real issue was with the PVC.
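For reference, the quota usage and the existing PVCs can be inspected like this (storagequota is the quota name from the error; the namespace is default, as shown in the kubectl describe output earlier):

# Show the quota and how much storage is already counted against it
kubectl describe resourcequota storagequota -n default

# List existing PVCs and their requested sizes in the same namespace
kubectl get pvc -n default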
kubectl describe pvc mysql-pv-claim showed the following event:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 4m (x222 over 59m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Event message:
persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Since OP created the cluster with kubeadm, and kubeadm doesn't come with a predeployed storage provisioner out of the box, one has to be added manually. (A storage provisioner is a controller that can create a volume and mount it.)
Each StorageClass has a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. Since there was no storage class in the cluster, OP decided to create one and picked the local storage class, but forgot that:
Local volumes do not currently support dynamic provisioning [...].
and
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported
This means that a local volume had to be created manually.
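A minimal sketch of what that looks like: a local StorageClass with no dynamic provisioner plus a hand-written PersistentVolume pointing at a directory on a specific node. The path /mnt/disks/wp-data and the node name node-1 are placeholders; the 5Gi size matches the PVC from the question.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes cannot be provisioned dynamically
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/wp-data                # placeholder: directory must exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1                    # placeholder node name

The PVCs then need storageClassName: local-storage (or the class has to be marked as default) so the controller can bind them to such manually created PVs.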
I'm trying to install the Tiller server into an OpenShift project.
Helm/tiller version: 2.9.0
My project name: paytiller
At step 3, when executing this command (as mentioned in this document: https://www.openshift.com/blog/getting-started-helm-openshift):
oc rollout status deployment tiller
I get this error:
error: deployment "tiller" exceeded its progress deadline
I'm not clear on what the error message means, nor could I find any logs.
Any idea why this error occurs?
If this doesn't work, what are other suggestions for templating in OpenShift?
EDIT
oc get events
Events:
Type Reason Age From Message
---- ------ ---- ---- ---
Warning Failed 14m (x5493 over 21h) kubelet, example.com Error: ImagePullBackOff
Normal Pulling 9m (x255 over 21h) kubelet, example.com pulling image "gcr.io/kubernetes-helm/tiller:v2.9.0"
Normal BackOff 4m (x5537 over 21h) kubelet, example.com Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.9.0"
Thanks.
The issue was with the permissions on our OpenShift platform. We didn't have access to download images directly from the public (open-source) registry.
We added the kubernetes-helm Tiller image to our organization's repository and were then able to pull the image into the OpenShift project. It is working now. But we still didn't get any clue about the issue from the logs.
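For anyone in the same situation, mirroring the image into an internal registry looks roughly like this. registry.example.org/helm is a placeholder for your organization's registry, and this assumes a machine that can reach both gcr.io and the internal registry.

# Pull the public image on a machine with internet access
docker pull gcr.io/kubernetes-helm/tiller:v2.9.0

# Re-tag it for the internal registry (placeholder hostname/path)
docker tag gcr.io/kubernetes-helm/tiller:v2.9.0 registry.example.org/helm/tiller:v2.9.0

# Push it and point the tiller deployment at that image instead of gcr.io
docker push registry.example.org/helm/tiller:v2.9.0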
The status ImagePullBackOff tells you that the image gcr.io/kubernetes-helm/tiller:v2.9.0 could not be pulled from the container registry, so your OpenShift node cannot reach or pull that image for some reason. This is often due to network proxies, a non-existent image (not the issue here), or other restrictions in a (corporate) network.
You can use oc describe pod <pod that shows ImagePullBackOff> to find the more detailed error message, which may help you further.
Also, note that the blog post you linked is from 2017, which is very old. Here is a more current version: Build Kubernetes Operators from Helm Charts in 5 steps.
I am running unbound in a FreeBSD 11.3 jail, and have noted some behaviour that seems strange (at least to me!)
When restarting the unbound service, it works error-free:
service unbound restart
# Stopping unbound.
# Waiting for PIDS: 80729.
# Obtaining a trust anchor...
# Starting unbound.
I have confirmed that it is all running a-ok and as expected.
However, when attempting to reload unbound (without a full restart) via unbound-control, it throws some config errors...
unbound-control -c /usr/local/etc/unbound/unbound.conf reload
# /usr/local/etc/unbound/mnt/config/unbound.conf:25: error: unknown keyword 'log-replies'
# /usr/local/etc/unbound/mnt/config/unbound.conf:25: error: stray ':'
# /usr/local/etc/unbound/mnt/config/unbound.conf:25: error: unknown keyword 'yes'
# /usr/local/etc/unbound/mnt/config/unbound.conf:27: error: unknown keyword 'log-tag-queryreply'
# /usr/local/etc/unbound/mnt/config/unbound.conf:27: error: stray ':'
# /usr/local/etc/unbound/mnt/config/unbound.conf:27: error: unknown keyword 'yes'
...
...
...
# read /usr/local/etc/unbound/unbound.conf failed: 20 errors in configuration file
# [1594189698] unbound-control[37432:0] fatal error: could not read config file
Does anyone know why a restart would work, yet a reload wouldn't? I have confirmed that the config being referenced is the same in both cases (by deliberately mis-formatting it to see if service unbound restart fails)
Thanks in advance :)
service unbound reload does work.
It doesn't really 'fix' whatever the underlying bug is, but it solves the problem for my use case.
Credit to #arrowd for the answer.
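The thread doesn't pin down the root cause, but two commands are useful here, assuming the ports-installed unbound is the one first on your PATH: unbound-checkconf validates the config against that installation, and the rc script is the reload path that actually worked.

# Validate the config with the installed unbound tools
unbound-checkconf /usr/local/etc/unbound/unbound.conf

# Reload through the rc script instead of calling unbound-control directly
service unbound reload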
While starting snmpd I am getting this error in /var/snmpd.log
getaddrinfo: start Temporary failure in name resolution
Error opening specified endpoint "start"
Server Exiting with code 1
For your info, I am using Fedora 14 and net-snmp-5.7.1.
Thanks in advance. Please help me.
Error opening specified endpoint "start" Server Exiting with code 1
means that some other process is already bound to port 161.
For example, run netstat -anp | grep 161 to find it, then stop that process and start snmpd again.
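A short sketch of that check; the PID is whatever netstat reports on your system, and on Fedora 14 the SysV service script starts snmpd:

# Find out which process is bound to the SNMP port (161)
netstat -anp | grep ':161'

# Stop the conflicting process using the PID reported by netstat,
# then start snmpd again
kill <PID>
service snmpd start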