kubelet Failed to pull image rpc error: code = Unknown desc = context deadline exceeded - k3s

My problem is that after 2m the pull stops and it tries again.
I can't pull the image in 2m, so I changed the kubelet runtime-request-timeout to 10m, but that value seems to be ignored.
What I did (I have a k3s cluster):
Edit/create the file /etc/rancher/k3s/config.yaml with this content:
kubelet-arg:
- "runtime-request-timeout=10m"
but this seems to be ignored (I restarted all machines). Am I doing something wrong?
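For reference, the same argument can also be passed directly on the k3s command line instead of through config.yaml (a sketch assuming the default systemd install; use k3s agent on worker nodes):
# in the ExecStart line of /etc/systemd/system/k3s.service, or when starting k3s by hand:
k3s server --kubelet-arg "runtime-request-timeout=10m"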
Type     Reason                      Age                  From               Message
----     ------                      ----                 ----               -------
Normal   Scheduled                   2m32s                default-scheduler  Successfully assigned default/image-f8dq4 to ubuntu2
Warning  Failed                      33s                  kubelet            Failed to pull image "privaterepo/image:1.0.0.0": rpc error: code = Unknown desc = context deadline exceeded
Warning  Failed                      33s                  kubelet            Error: ErrImagePull
Normal   BackOff                     33s                  kubelet            Back-off pulling image "privaterepo/image:1.0.0.0"
Warning  Failed                      33s                  kubelet            Error: ImagePullBackOff
Normal   FileSystemResizeSuccessful  32s (x2 over 2m32s)  kubelet            MountVolume.NodeExpandVolume succeeded for volume "pvc-b6152bdd-aef4-4927-929c-674226705ddf" ubuntu2
Normal   Pulling                     19s (x2 over 2m32s)  kubelet            Pulling image "privaterepo/image:1.0.0.0"
Later edit:
It seems that runtime-request-timeout is not for pulls.
Something to mention is that I changed the container runtime from containerd to docker. If I leave it as containerd, I do not have this issue. Any ideas?
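One way to check whether the kubelet actually picked up the new value is to read its effective configuration through the API server (a quick sketch; the node name is taken from the events above, and it assumes the API server can proxy to the node):
# dump the running kubelet configuration and look for the timeout field
kubectl get --raw "/api/v1/nodes/ubuntu2/proxy/configz" | grep -o '"runtimeRequestTimeout":"[^"]*"'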

It looks like you have not given the proper pull permission for the container image. Create a secret to grant that permission and reference it in the pod spec:
spec:
  containers:
  - name: queue-manager
    image: privaterepo/image:1.0.0.0
    imagePullPolicy: Always
    resources: {}
  imagePullSecrets:
  - name: mysecret
SECRET FILE
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: sdfsdfdsfdsf=
  password: dfsfsdfdsfd
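Note that imagePullSecrets expects a secret of type kubernetes.io/dockerconfigjson (a docker-registry secret) rather than a generic Opaque secret; a minimal sketch of creating one, with all values as placeholders:
kubectl create secret docker-registry mysecret \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>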

Related

Unable to run MySQL 8.0 on AKS, getting the errors [MY-012574] [InnoDB] Unable to lock ./#innodb_redo/#ib_redo0 and Unable to open

Unable to run MySQL 8.0 on AKS using a PVC; below are my manifest files.
I've just encountered a similar issue myself (when switching from mysql:5.5 to mysql:8).
The key to resolving it when using Azure Files appears to be setting the nobrl mount option on the StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mysql-sc-azurefile
provisioner: file.csi.azure.com
allowVolumeExpansion: true
mountOptions:
  - file_mode=0777
  - mfsymlinks
  - uid=999
  - dir_mode=0777
  - gid=999
  - actimeo=30
  - cache=strict
  - nobrl
parameters:
  skuName: Standard_LRS
Additionally, you may need to set the securityContext on the deployment to use 999 (to prevent mysql from attempting to switch the user at startup):
securityContext:
  runAsUser: 999
  runAsGroup: 999
  fsGroup: 999
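For completeness, a claim that picks up this StorageClass could look like the sketch below (the claim name and size are placeholders, not from the original answer):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: mysql-sc-azurefile
  resources:
    requests:
      storage: 5Gi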

Beanstalk puts the nginx conf file into the wrong directory

I followed the documentation to replace the default nginx.conf file, so the tree looks like this:
app_root/
├─ .platform/
│ ├─ nginx/
│ │ ├─ nginx.conf
│ │ ├─ conf.d/
│ │ │ ├─ my_custom_conf.conf
Although everything seems correct, after the deploy the configuration ends up being moved from .platform to /var/proxy/staging; the eb-engine.log, in fact, reports this:
[INFO] Running command /bin/sh -c cp -rp /var/app/staging/.platform/nginx/. /var/proxy/staging/nginx
This means that the real config file /etc/nginx/nginx.conf is still the default one.
You can find the answer in the AWS documentation:
/var/app/staging/ – Where application source code is processed during deployment.
/var/app/current/ – Where application source code runs after processing.
When a new code version is deployed, /var/app/staging will be used to run build commands and test the settings. It is also used to test the nginx config file. If the deployment goes through, the code in /staging will be moved to /current and the nginx config will be moved to /etc/nginx/nginx.conf.
/etc/nginx/nginx.conf is the active config file. You can see this by using nginx -t.
[ec2-user@ip-172-31-1-161 ~]$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
However, the content of /etc/nginx/nginx.conf is NOT the default config. Whatever you put in /.platform/nginx/nginx.conf of your app directory will end up in /etc/nginx/nginx.conf.
In eb-engine.log you can see the config check in staging. If successful, you will see the cp command that copies the file(s) to /etc/nginx:
2022/09/07 14:23:03.027695 [INFO] Running command /bin/sh -c /usr/sbin/nginx -t -c /var/proxy/staging/nginx/nginx.conf
2022/09/07 14:23:03.078615 [INFO] nginx: the configuration file /var/proxy/staging/nginx/nginx.conf syntax is ok
nginx: configuration file /var/proxy/staging/nginx/nginx.conf test is successful
2022/09/07 14:23:03.078683 [INFO] Running command /bin/sh -c cp -rp /var/proxy/staging/nginx/* /etc/nginx
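To double-check on the instance which configuration is actually live, something like the following should work (assuming the deployed bundle keeps its .platform directory under /var/app/current):
# dump the full configuration nginx is currently running with
sudo nginx -T | head -n 40
# compare the active file with the one shipped in the application bundle
sudo diff /etc/nginx/nginx.conf /var/app/current/.platform/nginx/nginx.conf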

Kubernetes DNS discovery fails

I am new to Kubernetes. I am doing the lab at: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
When I deploy WordPress, I always get this log:
WordPress not found in /var/www/html - copying now...
Complete! WordPress has been successfully copied to /var/www/html
Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 22
Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 22
MySQL Connection Error: (2002) php_network_getaddresses: getaddrinfo failed: Name or service not known
Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 22
Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 22
MySQL Connection Error: (2002) php_network_getaddresses: getaddrinfo failed: Name or service not known
Although the mysql pod is OK and both of them are in the default namespace.
The log output you attached (line 3) suggests that you did not specify the database connection parameters correctly in the WordPress deployment file. Check the deployment configuration and make sure the DB host name matches the name of the Service exposing your running mysql pod; it looks like the name currently configured is not resolvable.
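In that tutorial the MySQL Service is named wordpress-mysql, and the WordPress deployment points at it roughly like this (worth re-checking against your own manifests):
env:
- name: WORDPRESS_DB_HOST
  value: wordpress-mysql   # must match the name of the mysql Service
- name: WORDPRESS_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-pass
      key: password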

How to delete pending pods in kubernetes?

I have two pending pods which I cannot delete by any means. Could you help?
OS: CentOS 7.8
Docker: 1.13.1
Kubernetes: v1.20.1
[root@master-node ~]# k get pods --all-namespaces    (note: k = kubectl alias)
NAMESPACE   NAME                           READY   STATUS    RESTARTS   AGE
default     happy-panda-mariadb-master-0   0/1     Pending   0          11m
default     happy-panda-mariadb-slave-0    0/1     Pending   0          49m
default     whoami                         1/1     Running   0          5h13m
[root@master-node ~]# k describe pod/happy-panda-mariadb-master-0
Name: happy-panda-mariadb-master-0
Namespace: default
Priority: 0
Node: <none>
Labels: app=mariadb
chart=mariadb-7.3.14
component=master
controller-revision-hash=happy-panda-mariadb-master-7b55b457c9
release=happy-panda
statefulset.kubernetes.io/pod-name=happy-panda-mariadb-master-0
IPs: <none>
Controlled By: StatefulSet/happy-panda-mariadb-master
Containers:
mariadb:
Image: docker.io/bitnami/mariadb:10.3.22-debian-10-r27
Port: 3306/TCP
Host Port: 0/TCP
Liveness: exec [sh -c password_aux="${MARIADB_ROOT_PASSWORD:-}"
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-happy-panda-mariadb-master-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: happy-panda-mariadb-master
Optional: false
default-token-wpvgf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wpvgf
Optional: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 15m default-scheduler 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 15m default-scheduler 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
[root@master-node ~]# k get events
LAST SEEN TYPE REASON OBJECT MESSAGE
105s Normal FailedBinding persistentvolumeclaim/data-happy-panda-mariadb-master-0 no persistent volumes available for this claim and no storage class is set
105s Normal FailedBinding persistentvolumeclaim/data-happy-panda-mariadb-slave-0 no persistent volumes available for this claim and no storage class is set
65m Warning FailedScheduling pod/happy-panda-mariadb-master-0 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
I already tried to delete them in various ways, but nothing worked (I also tried to delete from the dashboard):
kubectl delete pod happy-panda-mariadb-master-0 --namespace="default"
k delete deployment mysql-1608901361
k delete pod/happy-panda-mariadb-master-0 -n default --grace-period 0 --force
Could you advise me on this?
kubectl delete rc <replication controller name>
Or you forgot to specify storageClassName: manual in the PersistentVolumeClaim.
You should delete the StatefulSet which controls the pods instead of deleting the pods directly. The reason the pods are not getting deleted is that the StatefulSet controller recreates them after you delete them.
kubectl delete statefulset happy-panda-mariadb-master
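If the claims themselves are no longer needed, they can be removed together with the controllers (assuming the slave StatefulSet follows the same naming as its pod):
# delete the controllers that keep recreating the Pending pods
kubectl delete statefulset happy-panda-mariadb-master happy-panda-mariadb-slave
# delete the unbound claims that no PersistentVolume can satisfy
kubectl delete pvc data-happy-panda-mariadb-master-0 data-happy-panda-mariadb-slave-0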

MountVolume.setup failed for volume "...": mount failed: exit status 32

I am using OpenShift, and one pod keeps pending because the NFS server cannot be mounted (the NFS server can be mounted manually from the command line, but it cannot be mounted from the Pod).
I have installed nfs-common, so that is not the root cause. I tried to install nfs-utils, but it failed with the error message:
E: Unable to locate package: nfs-utils.
I also tried libnfs12 and libnfs-utils, with the same result as nfs-utils. I also ran apt-get update and upgrade to work around the package-locating problem, but that did not help.
Here is the YAML file for connecting to the NFS server:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test01
  labels:
    disktype: baas
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /baas
    server: 9.111.140.47
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
After running "oc describe pod/mypod" on the pending Pod, this is the feedback:
Warning FailedMount 14s kubelet, localhost MountVolume.SetUp failed for volume "pv-test01" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/cluster-up/root/openshift.local.clusterup/openshift.local.volumes/pods/267db6f2-d875-11e9-80ba-005056bc3ce0/volumes/kubernetes.io~nfs/pv-test01 --scope -- mount -t nfs 9.111.140.47:/baas /var/lib/origin/cluster-up/root/openshift.local.clusterup/openshift.local.volumes/pods/267db6f2-d875-11e9-80ba-005056bc3ce0/volumes/kubernetes.io~nfs/pv-test01
Output: Running scope as unit run-28094.scope.
mount: wrong fs type, bad option, bad superblock on 9.111.140.47:/baas,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
So how can I mount the NFS server from the Pod? Should I keep trying to install nfs-utils? If yes, how can I install it?
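For what it's worth, the mount.<type> helper mentioned in that output has to be present on the node that runs the kubelet, not inside the Pod or its image; a quick check on the node might look like this (assuming a Debian/Ubuntu node, on RHEL/CentOS the package is nfs-utils):
# on the node itself, not in the container:
ls /sbin/mount.nfs || sudo apt-get install -y nfs-common
# RHEL/CentOS equivalent:
# sudo yum install -y nfs-utils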