Kubernetes MySQL pod stuck with CrashLoopBackOff - mysql

I'm trying to follow this guide to set up a MySQL instance to connect to. The Kubernetes cluster is running on Minikube.
From the guide, I have this to set up my persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
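For reference, the deployment part of that guide (the same manifest is quoted in full in the last question of this thread) is essentially a single mysql:5.6 container with MYSQL_ROOT_PASSWORD set and the claim mounted at /var/lib/mysql:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim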
I ran kubectl describe pods/mysql-c85f7f79c-kqjgp and got this:
Start Time: Wed, 29 Jan 2020 09:09:18 -0800
Labels: app=mysql
pod-template-hash=c85f7f79c
Annotations: <none>
Status: Running
IP: 172.17.0.13
IPs:
IP: 172.17.0.13
Controlled By: ReplicaSet/mysql-c85f7f79c
Containers:
mysql:
Container ID: docker://f583dad6d2d689365171a72a423699860854e7e065090bc7488ade2c293087d3
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:9527bae58991a173ad7d41c8309887a69cb8bd178234386acb28b51169d0b30e
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 29 Jan 2020 19:40:21 -0800
Finished: Wed, 29 Jan 2020 19:40:22 -0800
Ready: False
Restart Count: 7
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qchv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-5qchv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5qchv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/mysql-c85f7f79c-kqjgp to minikube
Normal Pulled 10h (x5 over 10h) kubelet, minikube Container image "mysql:5.6" already present on machine
Normal Created 10h (x5 over 10h) kubelet, minikube Created container mysql
Normal Started 10h (x5 over 10h) kubelet, minikube Started container mysql
Warning BackOff 2m15s (x50 over 10h) kubelet, minikube Back-off restarting failed container
When I get the logs via kubectl logs pods/mysql-c85f7f79c-kqjgp, I only see this:
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
Is there a better way to debug? Why are the logs empty?

I faced the same issue and solved it by increasing the mysql container's memory from 128Mi to 512Mi. The following configuration works for me.
containers:
  - name: cant-drupal-mysql
    image: mysql:5.7
    resources:
      limits:
        memory: "512Mi"
        cpu: "1500m"

Hmm really odd, I changed my mysql-deployment.yml to use MySQL 5.7 and it seems to have worked...
- image: mysql:5.7
Gonna take this as the solution until further notice/commentary.

Currently 5.8 works for me:
image: mysql:5.8
Update the image tag as above in the Deployment file.

Related

pod mysql mount azure files failed

I want to deploy MySQL in the cluster, and the pod needs persistent storage. Here are the YAML files and pictures:
The pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - image: mysql:5.7
      name: mypod
      ports:
        - containerPort: 3306
          protocol: TCP
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: THEROODPWD
      volumeMounts:
        - name: azure
          mountPath: /var/lib/mysql/
  volumes:
    - name: azure
      csi:
        driver: file.csi.azure.com
        volumeAttributes:
          secretName: williamstoragekey
          shareName: mysql
          mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30"
The secret:
kind: Secret
apiVersion: v1
metadata:
  name: williamstoragekey
  namespace: default
  uid: 0e164956-2e2c-47ef-adc3-ffed8089a4a7
  resourceVersion: '20976434'
  creationTimestamp: '2022-03-14T07:13:38Z'
  managedFields:
    - manager: kubectl-create
      operation: Update
      apiVersion: v1
      time: '2022-03-14T07:13:38Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:azurestorageaccountkey: {}
          f:azurestorageaccountname: {}
        f:type: {}
data:
  azurestorageaccountkey: >-
    TWYrMFRaQ0ZLQVIxcEtVMFJGNUdCSEFGcXVnREdYZDZ5V2JMd2hFeGJFeTd0Z1pmNWNFQjM2d3FQM1EwY0VkeExRckhKWVZMZnJvYlN0QlIzcEJOeHc9PQ==
  azurestorageaccountname: a3ViZXRlc3RpbmcwMDk=
type: Opaque
The pod events and the contents of the file share were attached as screenshots. Pod logs:
2022-03-15 01:07:34+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.37-1debian10 started.
2022-03-15 01:07:34+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-03-15 01:07:34+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.37-1debian10 started.
2022-03-15 01:07:35+00:00 [Note] [Entrypoint]: Initializing database files
2022-03-15T01:07:35.308618Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2022-03-15T01:07:35.321413Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2022-03-15T01:07:35.321452Z 0 [ERROR] Aborting
The files inside the MySQL file share were obviously created by MySQL itself, and the share was newly created and empty. Yet the pod has crashed and been restarted many times during startup.
https://learn.microsoft.com/en-us/answers/questions/770819/pod-mount-azure-files-failed.html
Posting the (answered) Microsoft Q&A thread discussion here, so it can benefit other community members reading this thread.

MySQL database in Azure cluster using Azure Files as PV won't start

I have an Azure Kubernetes cluster, but because of the limit on attached default volumes per node (8 at my node size), I had to find a different solution to provision volumes.
The solution was to use an Azure Files volume, and I followed this article https://learn.microsoft.com/en-us/azure/aks/azure-files-volume#mount-options, which works: I have a volume mounted.
But the problem is with the MySQL instance: it just won't start.
For testing purposes, I created a deployment with two simple DB containers, one using the default storage class volume and the second one using Azure Files.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-db
  labels:
    prj: test-db
spec:
  selector:
    matchLabels:
      prj: test-db
  template:
    metadata:
      labels:
        prj: test-db
    spec:
      containers:
        - name: db-default
          image: mysql:5.7.37
          imagePullPolicy: IfNotPresent
          args:
            - "--ignore-db-dir=lost+found"
          ports:
            - containerPort: 3306
              name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          volumeMounts:
            - name: default-pv
              mountPath: /var/lib/mysql
              subPath: test
        - name: db-azurefiles
          image: mysql:5.7.37
          imagePullPolicy: IfNotPresent
          args:
            - "--ignore-db-dir=lost+found"
            - "--initialize-insecure"
          ports:
            - containerPort: 3306
              name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          volumeMounts:
            - name: azurefile-pv
              mountPath: /var/lib/mysql
              subPath: test
      volumes:
        - name: default-pv
          persistentVolumeClaim:
            claimName: default-pvc
        - name: azurefile-pv
          persistentVolumeClaim:
            claimName: azurefile-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 200Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azure-file-store
  resources:
    requests:
      storage: 200Mi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - nosharesock
parameters:
  skuName: Standard_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
The one with default PV works without any problem, but the second one with Azure-files throws this error:
[Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.37-1debian10 started.
[Note] [Entrypoint]: Switching to dedicated user 'mysql'
[Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.37-1debian10 started.
[Note] [Entrypoint]: Initializing database files
[Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
[Warning] InnoDB: New log files created, LSN=45790
[Warning] InnoDB: Creating foreign key constraint system tables.
[Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: e86bdae0-979b-11ec-abbf-f66bf9455d85.
[Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
mysqld: Can't change permissions of the file 'ca-key.pem' (Errcode: 1 - Operation not permitted)
[ERROR] Could not set file permission for ca-key.pem
[ERROR] Aborting
Based on the error, it seems like the database can't write to the volume mount, but that's not (entirely) true. I mounted both volumes into another container so I could inspect the files; here is the output, and we can see that the database was able to write files to the volume:
-rwxrwxrwx 1 root root 56 Feb 27 07:07 auto.cnf
-rwxrwxrwx 1 root root 1680 Feb 27 07:07 ca-key.pem
-rwxrwxrwx 1 root root 215 Feb 27 07:07 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Feb 27 07:07 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Feb 27 07:07 ib_logfile1
-rwxrwxrwx 1 root root 12582912 Feb 27 07:07 ibdata1
Obviously some files are missing, but this output disproved my theory that MySQL can't write to the folder at all.
My guess is that MySQL can't work properly with the file system used by Azure Files.
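For reference, that kind of inspection can be done with a throwaway pod that mounts the same claim; a minimal sketch (the pod name and busybox image are illustrative, the claim name matches the manifest above, and the mysql data sits under the test subPath):
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
    - name: inspector
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: azurefile-pv
          mountPath: /mnt/azurefile
  volumes:
    - name: azurefile-pv
      persistentVolumeClaim:
        claimName: azurefile-pvc
Then kubectl exec -it pvc-inspector -- ls -ln /mnt/azurefile/test shows the numeric owner of whatever mysqld managed to create.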
What I tried:
different versions of MySQL (5.7.16, 5.7.24, 5.7.31, 5.7.37) and MariaDB (10.6)
testing different arguments for mysql
recreate the storage with NFS v3 enabled
create a custom Mysql image with added cifs-utils
testing different permissions, gid/uid, and other attributes of the container and also storage class
It appears that the permissions of volumes mounted this way are causing the issue.
If we modify your storage class to match the uid/gid of the mysql user, the pod can start:
apiVersion: storage.k8s.io/v1
kind: StorageClass
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=999
  - gid=999
  - mfsymlinks
  - cache=strict
  - nosharesock
parameters:
  skuName: Standard_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
The mount options permanently set the owner of every file in the mount, which doesn't work well for anything that wants to own the files it creates. Because everything is created 0777, anyone can read and write the directories; they just can't own them.
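A quick way to confirm the new mount options took effect is to list the data directory with numeric IDs; with uid=999,gid=999 in place, every entry should show up as 999:999, the mysql user in the official image (pod name is a placeholder):
kubectl exec -it <mysql-pod> -- ls -ln /var/lib/mysql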

mariadb crashes inside kubernetes pod with hostpath volume

I'm trying to move a number of Docker containers from a Linux server to a test Kubernetes-based deployment running on a different Linux machine, where I've installed Kubernetes as a k3s instance inside a Vagrant virtual machine.
One of these containers is a mariadb container instance, with a bind volume mapped.
This is the relevant portion of the docker-compose file I'm using:
academy-db:
  image: 'docker.io/bitnami/mariadb:10.3-debian-10'
  container_name: academy-db
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
    - MARIADB_USER=bn_moodle
    - MARIADB_DATABASE=bitnami_moodle
  volumes:
    - type: bind
      source: ./volumes/moodle/mariadb
      target: /bitnami/mariadb
  ports:
    - '3306:3306'
Note that this works correctly (the container is used by another application container, which connects to it and reads data from the db without problems).
I then tried to convert this to a kubernetes configuration, copying the volume folder to the destination machine and using the following kubernetes .yaml deployment files.
This includes a deployment .yaml, a persistent volume claim and a persistent volume, as well as a NodePort service to make the container accessible. For the data volume, I'm using a simple hostPath volume pointing to the contents copied from the docker-compose's bind mounts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: academy-db
spec:
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      containers:
        - env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: MARIADB_DATABASE
              value: bitnami_moodle
            - name: MARIADB_USER
              value: bn_moodle
          image: docker.io/bitnami/mariadb:10.3-debian-10
          name: academy-db
          ports:
            - containerPort: 3306
          resources: {}
          volumeMounts:
            - mountPath: /bitnami/mariadb
              name: academy-db-claim
      restartPolicy: Always
      volumes:
        - name: academy-db-claim
          persistentVolumeClaim:
            claimName: academy-db-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: academy-db-pv
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "<...full path to deployment folder on the server...>/volumes/moodle/mariadb"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: academy-db-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeName: academy-db-pv
---
apiVersion: v1
kind: Service
metadata:
  name: academy-db-service
spec:
  type: NodePort
  ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
  selector:
    name: academy-db
After applying the deployment, everything seems to work fine, in the sense that with kubectl get ... the pod and the volumes appear to be running correctly:
kubectl get pods
NAME READY STATUS RESTARTS AGE
academy-db-5547cdbc5-65k79 1/1 Running 9 15d
.
.
.
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
academy-db-pv 1Gi RWO Retain Bound default/academy-db-claim 15d
.
.
.
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
academy-db-claim Bound academy-db-pv 1Gi RWO 15d
.
.
.
This is the pod's log:
kubectl logs pod/academy-db-5547cdbc5-65k79
mariadb 10:32:05.66
mariadb 10:32:05.66 Welcome to the Bitnami mariadb container
mariadb 10:32:05.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb 10:32:05.66 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb 10:32:05.66
mariadb 10:32:05.67 INFO ==> ** Starting MariaDB setup **
mariadb 10:32:05.68 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 10:32:05.68 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb 10:32:05.69 INFO ==> Initializing mariadb database
mariadb 10:32:05.69 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO ==> Using persisted data
mariadb 10:32:05.71 INFO ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO ==> Starting mariadb in background
and the describe pod command:
Name: academy-db-5547cdbc5-65k79
Namespace: default
Priority: 0
Node: zdmp-kube/192.168.33.99
Start Time: Tue, 22 Dec 2020 13:33:43 +0000
Labels: name=academy-db
pod-template-hash=5547cdbc5
Annotations: <none>
Status: Running
IP: 10.42.0.237
IPs:
IP: 10.42.0.237
Controlled By: ReplicaSet/academy-db-5547cdbc5
Containers:
academy-db:
Container ID: containerd://68af105f15a1f503bbae8a83f1b0a38546a84d5e3188029f539b9c50257d2f9a
Image: docker.io/bitnami/mariadb:10.3-debian-10
Image ID: docker.io/bitnami/mariadb@sha256:1d8ca1757baf64758e7f13becc947b9479494128969af5c0abb0ef544bc08815
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 07 Jan 2021 10:32:05 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 07 Jan 2021 10:22:03 +0000
Finished: Thu, 07 Jan 2021 10:32:05 +0000
Ready: True
Restart Count: 9
Environment:
ALLOW_EMPTY_PASSWORD: yes
MARIADB_DATABASE: bitnami_moodle
MARIADB_USER: bn_moodle
MARIADB_PASSWORD: bitnami
Mounts:
/bitnami/mariadb from academy-db-claim (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-x28jh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
academy-db-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: academy-db-claim
ReadOnly: false
default-token-x28jh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-x28jh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 15d (x8 over 15d) kubelet Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
Normal Created 15d (x8 over 15d) kubelet Created container academy-db
Normal Started 15d (x8 over 15d) kubelet Started container academy-db
Normal SandboxChanged 18m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 8m14s (x2 over 18m) kubelet Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
Normal Created 8m14s (x2 over 18m) kubelet Created container academy-db
Normal Started 8m14s (x2 over 18m) kubelet Started container academy-db
Later, though, I noticed that the client application has problems connecting. After some investigation I've concluded that, though the pod is running, the mariadb process inside it may have crashed just after startup. If I enter the container with kubectl exec and try to run, for instance, the mysql client, I get:
kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- /bin/bash
I have no name!@academy-db-5547cdbc5-65k79:/$ mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)
Any idea what could be causing the problem, or how I can investigate the issue further? (Note: I'm not an expert in Kubernetes; I only started learning it recently.)
Edit: Following @Novo's comment, I tried deleting the volume folder and letting mariadb recreate it from scratch.
Now my pod doesn't even start, terminating in CrashLoopBackOff!
Comparing the pod logs, I notice that the previous mariadb log contained this message:
...
mariadb 10:32:05.69 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO ==> Using persisted data
mariadb 10:32:05.71 INFO ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO ==> Starting mariadb in background
Now it is replaced with:
...
mariadb 14:15:57.32 INFO ==> Updating 'my.cnf' with custom configuration
mariadb 14:15:57.32 INFO ==> Setting user option
mariadb 14:15:57.35 INFO ==> Installing database
Could it be that the issue is related to some access-rights problem with the volume folders on the host Vagrant machine?
By default, hostPath directories are created with permission 755, owned by the user and group of the kubelet. To use the directory, you can try adding the following to your deployment:
spec:
  securityContext:
    fsGroup: <gid>
Where <gid> is the group ID used by the process in your container.
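For the Bitnami MariaDB image the process runs as a non-root user, conventionally uid/gid 1001 (an assumption worth confirming by running id inside the container), so in the deployment above the securityContext would sit in the pod template, roughly:
  template:
    spec:
      securityContext:
        fsGroup: 1001   # assumed gid of the Bitnami MariaDB process; verify with `id` in the container
      containers:
        - name: academy-db
          image: docker.io/bitnami/mariadb:10.3-debian-10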
Also, you could fix the issue on the host itself by changing the permissions of the folder you want to mount into the container:
chown -R <uid>:<gid> /path/to/volume
where <uid> and <gid> are the user ID and group ID of your app, or, more permissively:
chmod -R 777 /path/to/volume
This should solve your issue.
But overall, a Deployment is not what you want to create in this case, because Deployments should not hold state. For stateful apps, Kubernetes has StatefulSets. Use one of those together with a volumeClaimTemplate plus spec.securityContext.fsGroup, and k3s will create the persistent volume and the persistent volume claim for you, using its default storage class, which is local storage on your node.
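A minimal sketch of that StatefulSet approach, carrying over the names and sizes from the manifests above (the fsGroup value is the same assumption as before; with k3s you can omit storageClassName and let the default local-path class provision the volume):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: academy-db
spec:
  serviceName: academy-db
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      securityContext:
        fsGroup: 1001   # assumed gid of the Bitnami MariaDB process
      containers:
        - name: academy-db
          image: docker.io/bitnami/mariadb:10.3-debian-10
          env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: MARIADB_DATABASE
              value: bitnami_moodle
            - name: MARIADB_USER
              value: bn_moodle
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /bitnami/mariadb
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi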

error during mysql deployment in kubernetes for FireFly III installation

I followed https://docs.firefly-iii.org/installation/k8n
I did some extra customization to the PVCs to make them bind to my NFS PVs.
All went smoothly so far, but the mysql pod won't come up and goes into a CrashLoopBackOff.
Describing the mysql pod:
~/kubernetes$ kubectl describe pod firefly-iii-mysql-67bfb68cf9-6gm9l
Name: firefly-iii-mysql-67bfb68cf9-6gm9l
Namespace: default
Priority: 0
Node: ommitted
Start Time: Fri, 30 Oct 2020 15:09:29 +0000
Labels: app=firefly-iii
pod-template-hash=67bfb68cf9
tier=mysql
Annotations: <none>
Status: Running
IP: 10.32.0.4
IPs:
IP: 10.32.0.4
Controlled By: ReplicaSet/firefly-iii-mysql-67bfb68cf9
Containers:
mysql:
Container ID: docker://7c0e5920d4e1cb3ce98308e0c02d4a98bc9926b828c50496b8d8b3486245dcb9
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:8875725ff152f77e47a563661ea010b4ca9cea42d9dda897fb565ca224e83de2
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 141
Started: Fri, 30 Oct 2020 15:25:48 +0000
Finished: Fri, 30 Oct 2020 15:25:50 +0000
Ready: False
Restart Count: 8
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'db_password' in secret 'firefly-iii-secrets-kkcdcb696c'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-k9zt4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-k9zt4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k9zt4
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 18m (x2 over 18m) default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Normal Scheduled 18m default-scheduler Successfully assigned default/firefly-iii-mysql-67bfb68cf9-6gm9l to domm-lnx-02
Normal Pulled 16m (x5 over 18m) kubelet Container image "mysql:5.6" already present on machine
Normal Created 16m (x5 over 18m) kubelet Created container mysql
Normal Started 16m (x5 over 18m) kubelet Started container mysql
Warning BackOff 3m16s (x73 over 18m) kubelet Back-off restarting failed container
From the logs I get this:
~/kubernetes$ kubectl logs firefly-iii-mysql-67bfb68cf9-6gm9l
2020-10-30 15:25:48+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.50-1debian9 started.
2020-10-30 15:25:49+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-10-30 15:25:49+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.50-1debian9 started.
2020-10-30 15:25:49+00:00 [Note] [Entrypoint]: Initializing database files
2020-10-30 15:25:50 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-10-30 15:25:50 0 [Note] Ignoring --secure-file-priv value as server is running with --bootstrap.
2020-10-30 15:25:50 0 [Note] /usr/sbin/mysqld (mysqld 5.6.50) starting as process 52 ...
2020-10-30 15:25:50 52 [Warning] Can't create test file /var/lib/mysql/firefly-iii-mysql-67bfb68cf9-6gm9l.lower-test
2020-10-30 15:25:50 52 [Warning] Can't create test file /var/lib/mysql/firefly-iii-mysql-67bfb68cf9-6gm9l.lower-test
/usr/sbin/mysqld: Can't change dir to '/var/lib/mysql/' (Errcode: 13 - Permission denied)
2020-10-30 15:25:50 52 [ERROR] Aborting
2020-10-30 15:25:50 52 [Note] Binlog end
2020-10-30 15:25:50 52 [Note] /usr/sbin/mysqld: Shutdown complete
The deployment YAML that creates the pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: firefly-iii-mysql
  labels:
    app: firefly-iii
spec:
  selector:
    matchLabels:
      app: firefly-iii
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: firefly-iii
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: firefly-iii-secrets
                  key: db_password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
I had some trouble with my PVCs at first, but that seems to be resolved now. The firefly-iii main pod works just fine with a similar PV/PVC setup. I had to recreate the whole deployment a couple of times, but now I am stuck with this permission error. I'm not sure how to fix it, and I could not find much useful information about it anywhere...
I hope someone here can point me in the right direction...
Figured it out. The problem was with the NFS permissions. I had to set them to "map all users to admin" on the NFS host. Now my deployment can actually edit files there... which is something MySQL likes ;)
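For anyone on a plain Linux NFS server rather than a NAS appliance, the rough equivalent of "map all users to admin" is squashing all client users to a single local account that owns the export, for example in /etc/exports (illustrative path, network, and anonuid/anongid; adjust to whatever owns the share):
/export/k8s  192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
Reload the exports with exportfs -ra afterwards.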

mysql container won't start on kubernetes when backed by NFS Dynamic provisioner

I'm having issues getting the mysql container to start properly. To sum it up: with the NFS dynamic provisioner, the mysql container won't start and throws the error mkdir: cannot create directory '/var/lib/mysql/': File exists, even though the NFS mount is present in the container and appears to be functioning properly.
I installed the dynamic NFS provisioner on my K8s cluster from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. The test claim and test pod from their instructions work.
Now to run mysql, I took the code snippets from here:
https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
kubectl apply -f mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: managed-nfs-storage   # <--- THIS MATCHES MY NFS STORAGECLASS
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
kubectl apply -f mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/mysql-pv-volume 20Gi RWO Retain Bound default/mysql-pv-claim managed-nfs-storage 5m16s
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/mysql-pv-claim Bound mysql-pv-volume 20Gi RWO managed-nfs-storage 5m27s
The PV was created automatically by the dynamic provisioner.
Then I get the error...
$ kubectl logs mysql-7d7fdd478f-l2m8h
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
mkdir: cannot create directory '/var/lib/mysql/': File exists
This error stops the container from starting...
I deleted the deployment and added command: [ "/bin/sh", "-c", "sleep 100000" ] so the container would start...
After getting into the container, I checked that the NFS mount is properly mounted and writable...
# df -h | grep mysql
nfs1.example.com:/k8/default-mysql-pv-claim-pvc-0808d1bd-69ca-4ff5-825a-b846b1133e3a 1.0T 1.6G 1023G 1% /var/lib/mysql
If I create a "local" PV like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
and then create the mysql deployment, the mysql pod starts up without issue.
So at this point, with dynamic provisioning (potentially just on NFS?) the mysql container doesn't work.
Anyone have any suggestions?
I'm not exactly sure what the cause of this is, so here are a few options.
First, you could try setting a securityContext, because the volume might be mounted without the proper permissions:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
    - name: sec-ctx-vol
      emptyDir: {}
  containers:
    - name: sec-ctx-demo
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
      securityContext:
        allowPrivilegeEscalation: false
You can find out the proper user and group IDs by running the id command inside the container, for example via kubectl exec -it <pod-name> -- bash.
Second, try using subPath
volumeMounts:
  - name: mysql-persistent-storage
    mountPath: "/var/lib/mysql"
    subPath: mysql
If that doesn't work, I would test the NFS share from another pod with an initContainer that creates a directory, as sketched below.
And I would redo the whole NFS setup, maybe using this guide.
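A sketch of that initContainer test (the pod name and busybox image are illustrative; the claim name is the one from the manifests above):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-write-test
spec:
  initContainers:
    - name: make-dir
      image: busybox
      command: ["sh", "-c", "mkdir -p /data/testdir && touch /data/testdir/ok"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /data
  containers:
    - name: check
      image: busybox
      command: ["sh", "-c", "ls -ln /data/testdir && sleep 3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /data
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: mysql-pv-claim
If the initContainer fails with the same mkdir or permission error, the problem is in the NFS export itself rather than in the mysql image.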