Kubernetes: Error when creating a StatefulSet with a MySQL container - mysql

Good morning,
I'm very new to Docker and Kubernetes, and I do not really know where to start looking for help. I created a database container with Docker and I want manage it and scale with Kubernetes. I started installing minikube in my machine, and tried to create a Deployment first and then a StatefulSet for a database container. But I have a problem with the StatefulSet when creating a Pod with a database (mariadb or mysql). When I use a Deployment the Pods are loaded and work fine. However, the same Pods are not working when using them in a StatefulSet, returning errors asking for the MYSQL constants. This is the Deployment, and I use the command kubectl create -f deployment.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mydb-deployment
spec:
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
        - name: mydb
          image: ignasiet/aravomysql
          ports:
            - containerPort: 3306
And when listing the deployments: kubectl get Deployments:
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mydb-deployment   1         1         1            1           2m
And the pods: kubectl get pods:
NAME                               READY   STATUS    RESTARTS   AGE
mydb-deployment-59c867c49d-4rslh   1/1     Running   0          50s
But since I want to create a persistent database, I tried to create a StatefulSet object with the same container, plus a persistent volume.
Thus, when creating the following StatefulSet with kubectl create -f statefulset.yaml:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
        - name: aravo-database
          image: ignasiet/aravomysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: volume-mydb
              mountPath: /var/lib/mysql
      volumes:
        - name: volume-mydb
          persistentVolumeClaim:
            claimName: config-mydb
With the service kubectl create -f service-db.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    name: mydb-pod
And the permission file kubectl create -f permissions.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-mydb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
The pods do not work. They give an error:
NAME                 READY   STATUS             RESTARTS   AGE
statefulset-mydb-0   0/1     CrashLoopBackOff   1          37s
And when analyzing the logs kubectl logs statefulset-mydb-0:
`error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD`
How is it possible that it asks for these variables when the container already has an initialization script and works perfectly? And why does it ask only when launching it as a StatefulSet, and not when launching the Deployment?
Thanks in advance.

I pulled your image ignasiet/aravomysql to try to figure out what went wrong. As it turns out, your image already has an initialized MySQL data directory at /var/lib/mysql:
$ docker run -it --rm --entrypoint=sh ignasiet/aravomysql:latest
# ls -al /var/lib/mysql
total 110616
drwxr-xr-x 1 mysql mysql 240 Nov 7 13:19 .
drwxr-xr-x 1 root root 52 Oct 29 18:19 ..
-rw-rw---- 1 root root 16384 Oct 29 18:18 aria_log.00000001
-rw-rw---- 1 root root 52 Oct 29 18:18 aria_log_control
-rw-rw---- 1 root root 1014 Oct 29 18:18 ib_buffer_pool
-rw-rw---- 1 root root 50331648 Oct 29 18:18 ib_logfile0
-rw-rw---- 1 root root 50331648 Oct 29 18:18 ib_logfile1
-rw-rw---- 1 root root 12582912 Oct 29 18:18 ibdata1
-rw-rw---- 1 root root 0 Oct 29 18:18 multi-master.info
drwx------ 1 root root 2696 Nov 7 13:19 mysql
drwx------ 1 root root 12 Nov 7 13:19 performance_schema
drwx------ 1 root root 48 Nov 7 13:19 yypy
However, when mounting a PersistentVolume or just a simple Docker volume to /var/lib/mysql, it's initially empty and therefore the script thinks your database is uninitialized. You can reproduce this issue with:
$ docker run -it --rm --mount type=tmpfs,destination=/var/lib/mysql ignasiet/aravomysql:latest
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
If you have a bunch of scripts you need to run to initialize the database, you have two options:
Create a Dockerfile based on the mysql Dockerfile, and add shell scripts or SQL scripts to /docker-entrypoint-initdb.d. More details available here under "Initializing a fresh instance".
Use the initContainers property in the PodTemplateSpec, something like:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
        - name: aravo-database
          image: ignasiet/aravomysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: volume-mydb
              mountPath: /var/lib/mysql
      initContainers:
        - name: aravo-database-init
          command:
            - /script/to/initialize/database
          image: ignasiet/aravomysql
          volumeMounts:
            - name: volume-mydb
              mountPath: /var/lib/mysql
      volumes:
        - name: volume-mydb
          persistentVolumeClaim:
            claimName: config-mydb

The issue you are facing is not specific to StatefulSets; it is caused by the persistent volume. If you use a StatefulSet without the persistent volume, you will not face this problem. Conversely, if you use a Deployment with a persistent volume, you will face this issue.
Why? OK, let me explain.
Setting one of the environment variables MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, or MYSQL_RANDOM_ROOT_PASSWORD is mandatory when creating a new database. Read the Environment Variables section here.
But if you initialize the database from a script, you are not required to provide them. Look at this line of docker-entrypoint.sh here. It checks whether there is already a database in the /var/lib/mysql directory. If there is none, it will try to create one, and if you don't provide any of the specified environment variables, it will give the error you are getting. But if it finds an existing database there, it will not try to create one and you will not see the error.
Now, the question is: you have already initialized the database, so why is it still complaining about the environment variables?
Here the persistent volume comes into play. As you have mounted the persistent volume at the /var/lib/mysql directory, that directory now points to your persistent volume, which is currently empty. So when your container runs the docker-entrypoint.sh script, it does not find any database in /var/lib/mysql, because the directory now points to the empty persistent volume instead of the original /var/lib/mysql directory of your Docker image, which contained the initialized database. So it tries to create a new database and complains because you haven't provided the MYSQL_ROOT_PASSWORD environment variable.
When you don't use a persistent volume, /var/lib/mysql points to the original directory, which contains the initialized database, so you don't see the error.
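If all you need is for the entrypoint to create a fresh, empty database on the new volume, then supplying one of those variables is enough. A minimal sketch of the container portion of your statefulset.yaml (the Secret name mysql-credentials and its key root-password are hypothetical examples, not something from your setup) could look like this:

      # Sketch only: add MYSQL_ROOT_PASSWORD to the existing container in statefulset.yaml.
      # The Secret "mysql-credentials" and its key "root-password" are hypothetical.
      containers:
        - name: aravo-database
          image: ignasiet/aravomysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials
                  key: root-password
          ports:
            - containerPort: 3306

A plain value: field would also work, but a Secret keeps the password out of the manifest. Note that this only gives you an empty database; it does not bring back the data baked into your image.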
So, how can you initialize the MySQL database properly?
In order to initialize MySQL from a script, you just need to put the script into /docker-entrypoint-initdb.d. Use a vanilla mysql image, put your initialization script into a volume, then mount the volume at the /docker-entrypoint-initdb.d directory. MySQL will be initialized.
Check this answer for details on how to initialize from script: https://stackoverflow.com/a/45682775/7695859
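As a rough sketch of that approach (the ConfigMap name, the SQL content, and the use of your existing PVC name below are illustrative assumptions, not a definitive implementation), the relevant pieces would look something like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mydb-init-scripts        # hypothetical name
data:
  init.sql: |
    -- hypothetical initialization script
    CREATE DATABASE IF NOT EXISTS mydb;

# ...and the relevant parts of the pod template: a vanilla mysql image, data on the PVC,
# init scripts mounted at /docker-entrypoint-initdb.d (they run only on first initialization).
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme          # example only
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
            - name: init-scripts
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: config-mydb
        - name: init-scripts
          configMap:
            name: mydb-init-scripts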

Related

MySQL database in Azure cluster using Azure Files as PV won't start

I have an Azure kubernetes cluster, but because of the limitation of attached default volumes per node (8 at my node size), I had to find a different solution to provision volumes.
The solution was to use an Azure Files volume. I followed this article https://learn.microsoft.com/en-us/azure/aks/azure-files-volume#mount-options, which works; I have a volume mounted.
But the problem is with the MySQL instance: it just won't start.
For testing purposes, I created a deployment with 2 simple DB containers, one of which uses the default storage class volume and the second uses Azure Files.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-db
  labels:
    prj: test-db
spec:
  selector:
    matchLabels:
      prj: test-db
  template:
    metadata:
      labels:
        prj: test-db
    spec:
      containers:
        - name: db-default
          image: mysql:5.7.37
          imagePullPolicy: IfNotPresent
          args:
            - "--ignore-db-dir=lost+found"
          ports:
            - containerPort: 3306
              name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          volumeMounts:
            - name: default-pv
              mountPath: /var/lib/mysql
              subPath: test
        - name: db-azurefiles
          image: mysql:5.7.37
          imagePullPolicy: IfNotPresent
          args:
            - "--ignore-db-dir=lost+found"
            - "--initialize-insecure"
          ports:
            - containerPort: 3306
              name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          volumeMounts:
            - name: azurefile-pv
              mountPath: /var/lib/mysql
              subPath: test
      volumes:
        - name: default-pv
          persistentVolumeClaim:
            claimName: default-pvc
        - name: azurefile-pv
          persistentVolumeClaim:
            claimName: azurefile-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 200Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azure-file-store
  resources:
    requests:
      storage: 200Mi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - nosharesock
parameters:
  skuName: Standard_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
The one with the default PV works without any problem, but the second one with Azure Files throws this error:
[Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.37-1debian10 started.
[Note] [Entrypoint]: Switching to dedicated user 'mysql'
[Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.37-1debian10 started.
[Note] [Entrypoint]: Initializing database files
[Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
[Warning] InnoDB: New log files created, LSN=45790
[Warning] InnoDB: Creating foreign key constraint system tables.
[Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: e86bdae0-979b-11ec-abbf-f66bf9455d85.
[Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
mysqld: Can't change permissions of the file 'ca-key.pem' (Errcode: 1 - Operation not permitted)
[ERROR] Could not set file permission for ca-key.pem
[ERROR] Aborting
Based on the error, it seems like the database can't write to the volume mount, but that's not (entirely) true. I mounted both of those volumes into another container to be able to inspect the files; here is the output, and we can see that the database was able to write files to the volume:
-rwxrwxrwx 1 root root 56 Feb 27 07:07 auto.cnf
-rwxrwxrwx 1 root root 1680 Feb 27 07:07 ca-key.pem
-rwxrwxrwx 1 root root 215 Feb 27 07:07 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Feb 27 07:07 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Feb 27 07:07 ib_logfile1
-rwxrwxrwx 1 root root 12582912 Feb 27 07:07 ibdata1
Obviously, some files are missing, but this output disproved my thought that MySQL can't write to the folder.
My guess is that MySQL can't work properly with the file system used by Azure Files.
What I tried:
different versions of MySQL (5.7.16, 5.7.24, 5.7.31, 5.7.37) and MariaDB (10.6)
testing different arguments for mysql
recreate the storage with NFS v3 enabled
create a custom Mysql image with added cifs-utils
testing different permissions, gid/uid, and other attributes of the container and also storage class
It appears that the permissions of volumes mounted this way are causing the issue.
If we modify your storage class to match the uid/gid of the mysql user, the pod can start:
apiVersion: storage.k8s.io/v1
kind: StorageClass
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=999
  - gid=999
  - mfsymlinks
  - cache=strict
  - nosharesock
parameters:
  skuName: Standard_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
The mount options permanently set the owner of the files contained in the mount, which doesn't work well for anything that wants to own the files it creates. Because everything is created with mode 0777, anyone can read and write the directories; they just can't own them.

MySQL Dump Importing Speed Super Slow for my Kubernetes MySQL Pod

I am migrating my MySQL database from a bare-metal setup to Kubernetes. I exported a MySQL dump of around 8.9 GB and uploaded it to my Kubernetes master node. The dump is imported using the command
kubectl exec -it [podname] -n [namespace] -- mysql -u [db user] -p[password] [db name] < [name of the dump].sql
The insert speed is super slow, so I imported the tables one by one to observe the behavior. A 1.8 GB dump takes more than 5 hours to complete.
A SELECT itself takes 0.013 seconds to select 1000 entries, but an INSERT INTO for a batch of data can take 72 to 120 seconds.
I searched the internet and found reports that importing a MySQL dump into a container is slow.
Has anyone experienced the same? Can you give me some clues to speed up the import of the dump?
Some details of my cluster
MySQL Pod version: MySQL version 5.7
Kubernetes Version: v1.20.9
File System: btrfs
The MySQL Pod is deployed using a Deployment, and the database is stored on the PVC defined below.
YAML config of the Pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqldb01
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: mysqldb01
  template:
    metadata:
      labels:
        app: mysqldb01
    spec:
      schedulerName: stork
      containers:
        - name: mysql
          image: mysql:5.7
          imagePullPolicy: "Always"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: xxxxxxxxxxxxxxxxxxxxxxxxx
          args:
            - --lower_case_table_names=1
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysql-data
            - name: mysql-custom-config
              mountPath: /etc/mysql/mysql.conf.d/custom.my.cnf
              subPath: my.custom.conf
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: px-mysql-db01-pvc
        - name: mysql-custom-config
          configMap:
            name: mysql-custom-config
One master node and three worker nodes are installed with Rancher.
Kubernetes is installed using
curl https://releases.rancher.com/install-docker/20.10.sh | sh
Thanks in advance.

kubernetes mysql statefulset not taking new password, though password changes in env

I have created a StatefulSet of MySQL using the YAML below with this command:
kubectl apply -f mysql-statefulset.yaml
Yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
      name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "okaoka"
          ports:
            - containerPort: 3306
              name: db
          volumeMounts:
            - name: db-volume
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: db-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: standard
        resources:
          requests:
            storage: 1Gi
After that, 3 pods were created, and for each of them a PVC and a PV. I successfully entered one of the pods using:
kubectl exec -it mysql-sts-0 sh
and then logged in to MySQL using:
mysql -u root -p
After issuing this command, the prompt:
Enter password:
appeared, and I entered the password:
okaoka
and I could successfully log in. After that I exited the pod.
Then I deleted the StatefulSet (as expected, the PVCs and PVs were still there even after the deletion of the StatefulSet). After that I applied a new YAML, slightly changing the previous one: I changed the password in the YAML, giving a new password:
okaoka1234
and the rest of the YAML was the same as before. The YAML is given below; now, after applying this YAML (only the password changed) with:
kubectl apply -f mysql-statefulset.yaml
it successfully created the StatefulSet and 3 new pods (which bound to the previous PVCs and PVs, as expected).
Changed Yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
      name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "okaoka1234" # here is the change
          ports:
            - containerPort: 3306
              name: db
          volumeMounts:
            - name: db-volume
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: db-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: standard
        resources:
          requests:
            storage: 1Gi
Now the problem is when I again entered a pod using:
kubectl exec -it mysql-sts-0 sh
then used:
mysql -u root -p
and again the prompt:
Enter password:
appeared, and this time when I gave my new password:
okaoka1234
it gave access denied.
When I printed the env (inside the pod) using:
printenv
then I could see that:
MYSQL_ROOT_PASSWORD=okaoka1234
This means the environment variable changed and took the new password, but I could not log in with the new password.
The interesting thing is that I could log in with my previous password okaoka. I don't know why it is taking the previous password in this scenario and not the new one, which is even in the env (inside the pod). Can anybody explain the logic behind this?
Most probably, the image you are using in your StatefulSet uses the environment variable to initialize the password only when it creates the structure on the persisted storage (on its PVC) for the first time.
Given that the PVCs and PVs are the same as in the previous installation, that step is skipped and the database password is not updated, since the database structure is already found in the existing PVC.
After all, the root user is just a user of the database, and its password is stored in the database. Unless the image applies some particular functionality at startup in its entrypoint, it makes sense to me that the password remains the same.
What image are you using? The Docker Hub mysql image or a custom one?
Update
Given that you are using the mysql image from Docker Hub, let me quote a piece of the entrypoint (https://github.com/docker-library/mysql/blob/master/template/docker-entrypoint.sh):
# there's no database, so it needs to be initialized
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
	docker_verify_minimum_env
	# check dir permissions to reduce likelihood of half-initialized database
	ls /docker-entrypoint-initdb.d/ > /dev/null
	docker_init_database_dir "$@"
	mysql_note "Starting temporary server"
	docker_temp_server_start "$@"
	mysql_note "Temporary server started."
	docker_setup_db
	docker_process_init_files /docker-entrypoint-initdb.d/*
	mysql_expire_root_user
	mysql_note "Stopping temporary server"
	docker_temp_server_stop
	mysql_note "Temporary server stopped"
	echo
	mysql_note "MySQL init process done. Ready for start up."
	echo
fi
When the container starts, it performs some checks, and if no database is found (the database is expected to be on the path where the persisted PVC is mounted), a series of operations is performed: creating it, creating default users, and so on.
Only in this case is the root user created with the password specified in the environment (inside the function docker_setup_db).
Should a database already be available in the persisted path, which is your case since you let it mount the previous PVC, there is no initialization of the database; it already exists.
Everything in Kubernetes is working as expected; this is just the behaviour of the database and of the mysql image. The environment variable is used only for initialization, from what I can see in the entrypoint.
It is left to the root user to manually change the password, if desired, using a mysql client.

mariadb crashes inside kubernetes pod with hostpath volume

I'm trying to move a number of Docker containers on a Linux server to a test Kubernetes-based deployment running on a different Linux machine, where I've installed Kubernetes as a k3s instance inside a Vagrant virtual machine.
One of these containers is a mariadb container instance, with a bind volume mapped.
This is the relevant portion of the docker-compose I'm using:
academy-db:
  image: 'docker.io/bitnami/mariadb:10.3-debian-10'
  container_name: academy-db
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
    - MARIADB_USER=bn_moodle
    - MARIADB_DATABASE=bitnami_moodle
  volumes:
    - type: bind
      source: ./volumes/moodle/mariadb
      target: /bitnami/mariadb
  ports:
    - '3306:3306'
Note that this works correctly. (the container is used by another application container which connects to it and reads data from the db without problems).
I then tried to convert this to a kubernetes configuration, copying the volume folder to the destination machine and using the following kubernetes .yaml deployment files.
This includes a deployment .yaml, a persistent volume claim and a persistent volume, as well as a NodePort service to make the container accessible. For the data volume, I'm using a simple hostPath volume pointing to the contents copied from the docker-compose's bind mounts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: academy-db
spec:
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      containers:
        - env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: MARIADB_DATABASE
              value: bitnami_moodle
            - name: MARIADB_USER
              value: bn_moodle
          image: docker.io/bitnami/mariadb:10.3-debian-10
          name: academy-db
          ports:
            - containerPort: 3306
          resources: {}
          volumeMounts:
            - mountPath: /bitnami/mariadb
              name: academy-db-claim
      restartPolicy: Always
      volumes:
        - name: academy-db-claim
          persistentVolumeClaim:
            claimName: academy-db-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: academy-db-pv
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "<...full path to deployment folder on the server...>/volumes/moodle/mariadb"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: academy-db-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeName: academy-db-pv
---
apiVersion: v1
kind: Service
metadata:
  name: academy-db-service
spec:
  type: NodePort
  ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
  selector:
    name: academy-db
After applying the deployment, everything seems to work fine, in the sense that with kubectl get ... the pod and the volumes seem to be running correctly:
kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
academy-db-5547cdbc5-65k79   1/1     Running   9          15d
.
.
.
kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
academy-db-pv   1Gi        RWO            Retain           Bound    default/academy-db-claim                            15d
.
.
.
kubectl get pvc
NAME               STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
academy-db-claim   Bound    academy-db-pv   1Gi        RWO                           15d
.
.
.
This is the pod's log:
kubectl logs pod/academy-db-5547cdbc5-65k79
mariadb 10:32:05.66
mariadb 10:32:05.66 Welcome to the Bitnami mariadb container
mariadb 10:32:05.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb 10:32:05.66 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb 10:32:05.66
mariadb 10:32:05.67 INFO ==> ** Starting MariaDB setup **
mariadb 10:32:05.68 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 10:32:05.68 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb 10:32:05.69 INFO ==> Initializing mariadb database
mariadb 10:32:05.69 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO ==> Using persisted data
mariadb 10:32:05.71 INFO ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO ==> Starting mariadb in background
and the describe pod command:
Name:         academy-db-5547cdbc5-65k79
Namespace:    default
Priority:     0
Node:         zdmp-kube/192.168.33.99
Start Time:   Tue, 22 Dec 2020 13:33:43 +0000
Labels:       name=academy-db
              pod-template-hash=5547cdbc5
Annotations:  <none>
Status:       Running
IP:           10.42.0.237
IPs:
  IP:  10.42.0.237
Controlled By:  ReplicaSet/academy-db-5547cdbc5
Containers:
  academy-db:
    Container ID:   containerd://68af105f15a1f503bbae8a83f1b0a38546a84d5e3188029f539b9c50257d2f9a
    Image:          docker.io/bitnami/mariadb:10.3-debian-10
    Image ID:       docker.io/bitnami/mariadb@sha256:1d8ca1757baf64758e7f13becc947b9479494128969af5c0abb0ef544bc08815
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 07 Jan 2021 10:32:05 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 07 Jan 2021 10:22:03 +0000
      Finished:     Thu, 07 Jan 2021 10:32:05 +0000
    Ready:          True
    Restart Count:  9
    Environment:
      ALLOW_EMPTY_PASSWORD:  yes
      MARIADB_DATABASE:      bitnami_moodle
      MARIADB_USER:          bn_moodle
      MARIADB_PASSWORD:      bitnami
    Mounts:
      /bitnami/mariadb from academy-db-claim (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x28jh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  academy-db-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  academy-db-claim
    ReadOnly:   false
  default-token-x28jh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x28jh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason          Age                  From     Message
  ----    ------          ----                 ----     -------
  Normal  Pulled          15d (x8 over 15d)    kubelet  Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
  Normal  Created         15d (x8 over 15d)    kubelet  Created container academy-db
  Normal  Started         15d (x8 over 15d)    kubelet  Started container academy-db
  Normal  SandboxChanged  18m                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          8m14s (x2 over 18m)  kubelet  Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
  Normal  Created         8m14s (x2 over 18m)  kubelet  Created container academy-db
  Normal  Started         8m14s (x2 over 18m)  kubelet  Started container academy-db
Later, though, I noticed that the client application has problems connecting. After some investigation I concluded that, although the pod is running, the mariadb process running inside it may have crashed just after startup. If I enter the container with kubectl exec and try to run, for instance, the mysql client, I get:
kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- /bin/bash
I have no name!@academy-db-5547cdbc5-65k79:/$ mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)
Any idea what could cause the problem, or how I can investigate the issue further? (Note: I'm not an expert in Kubernetes; I only started learning it recently.)
Edit: Following @Novo's comment, I tried to delete the volume folder and let mariadb recreate the database from scratch.
Now my pod doesn't even start, terminating in CrashLoopBackOff!
By comparing the pod logs I noticed that in the previous mariadb log there was a message:
...
mariadb 10:32:05.69 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO ==> Using persisted data
mariadb 10:32:05.71 INFO ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO ==> Starting mariadb in background
Now replaced with
...
mariadb 14:15:57.32 INFO ==> Updating 'my.cnf' with custom configuration
mariadb 14:15:57.32 INFO ==> Setting user option
mariadb 14:15:57.35 INFO ==> Installing database
Could it be that the issue is related to some access rights problem with the volume folders on the host Vagrant machine?
By default, hostPath directories are created with permission 755, owned by the user and group of the kubelet. To use the directory, you can try adding the following to your deployment:
spec:
  securityContext:
    fsGroup: <gid>
Where gid is the group used by the process in your container.
Also, you could fix the issue on the host itself by changing the permissions of the folder you want to mount into the container:
chown -R <uid>:<gid> /path/to/volume
where uid and gid are the userId and groupId from your app.
chmod -R 777 /path/to/volume
This should solve your issue.
But overall, a Deployment is not what you want to create in this case, because Deployments should not have state. For stateful apps there are StatefulSets in Kubernetes. Use those together with a volumeClaimTemplate plus spec.securityContext.fsGroup (a sketch follows below), and k3s will create the persistent volume and the persistent volume claim for you, using its default storage class, which is local storage (on your node).
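A minimal sketch of that approach, assuming the Bitnami mariadb image from your manifest (Bitnami images typically run as a non-root user with uid/gid 1001; the names and storage size below are illustrative, not a definitive implementation):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: academy-db
spec:
  serviceName: academy-db-service   # reusing the existing service name for illustration
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      securityContext:
        fsGroup: 1001              # assumed gid of the Bitnami mariadb user; adjust if different
      containers:
        - name: academy-db
          image: docker.io/bitnami/mariadb:10.3-debian-10
          env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: MARIADB_DATABASE
              value: bitnami_moodle
            - name: MARIADB_USER
              value: bn_moodle
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /bitnami/mariadb
  volumeClaimTemplates:            # k3s provisions the PV/PVC via its default storage class
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi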

import mysql data to kubernetes pod

Does anyone know how to import the data inside my dump.sql file into a Kubernetes pod, either:
Directly, the same way as you would with Docker containers:
docker exec -i container_name mysql -uroot --password=secret database < Dump.sql
Or by using the data stored in an existing Docker container volume and passing it to the pod.
Just in case other people are searching for this:
kubectl -n namespace exec -i my_sql_pod_name -- mysql -u user -ppassword < my_local_dump.sql
To answer your specific question:
You can kubectl exec into your container in order to run commands inside it. You may need to first ensure that the container has access to the file, by perhaps storing it in a location that the cluster can access (network?) and then using wget/curl within the container to make it available. One may even open up an interactive session with kubectl exec.
However, the ways to do this in increasing measure of generality would be:
Create a service that lets you access the mysql instance running on the pod from outside the cluster and connect your local mysql client to it.
If you are executing this initialization operation every time such a mysql pod is being started, it could be stored on a persistent volume and you could execute the script within your pod when you start up.
If you have several pieces of data that you typically need to copy over when starting the pod, look at init containers for fetching that data (a sketch follows after this list).
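For illustration only, a rough sketch of the init-container idea: an init container downloads the dump into a shared emptyDir, which the MySQL container mounts at /docker-entrypoint-initdb.d so the entrypoint imports it on first initialization. The fetcher image, URL, and names here are hypothetical assumptions, not a prescribed setup.

    spec:
      initContainers:
        - name: fetch-dump
          image: curlimages/curl:8.5.0        # hypothetical fetcher image
          command: ["sh", "-c", "curl -fsSL -o /initdb/dump.sql http://example.com/dump.sql"]
          volumeMounts:
            - name: initdb
              mountPath: /initdb
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: secret                   # example only
          volumeMounts:
            - name: initdb
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: initdb
          emptyDir: {}

Remember that the entrypoint only runs the scripts in /docker-entrypoint-initdb.d when the data directory is empty, i.e. on the first start against a fresh volume.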
TL;DR
Use a ConfigMap, and mount that ConfigMap into the /docker-entrypoint-initdb.d folder.
Code
MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer to "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: ebs-mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script
MySQL ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS usermgmt;
    CREATE DATABASE usermgmt;
Reference:
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/04-mysql-deployment.yml
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/03-UserManagement-ConfigMap.yml