MySQL statefulset deployment with persistence volume not working

Environment:
Using Helm v3 charts for deployment.
Using the Bitnami MySQL chart from the following source: https://artifacthub.io/packages/helm/bitnami/mysql
The username and password are generated randomly for every deployment.
A PersistentVolume is created with the "standard" storage class.
The entire Kubernetes cluster runs on bare metal.
When deploying for the first time, the installation succeeds and all the MySQL database files are generated correctly in the data directory. After an uninstall, a subsequent install does not work, because the MySQL data directory still contains database and config files for the previously generated credentials.
For now, after an uninstall, we manually clear the files from the node before installing again (a cleanup sketch follows the manifests below).
We are looking for a way to:
Clear only the config files and keep the database files intact.
On deleting the persistent volume, clear all the files from the data directory.
Alter the DB to use a new username and password while keeping the existing files as they are.
PersistentVolume and PersistentVolumeClaim YAML:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv
  labels:
    type: local
    name: mysql-pv
spec:
  storageClassName: standard
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  labels:
    name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      name: mysql-pv
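A cleanup sketch for the uninstall case described above. Assumptions: the Helm release is named mysql and the chart's PVC carries the usual app.kubernetes.io/instance label (check with kubectl get pvc --show-labels); note that deleting a hostPath PV never removes the files on the node, so that last step stays manual.

helm uninstall mysql
# Helm leaves PVCs created by the StatefulSet behind by default; remove them explicitly
kubectl delete pvc -l app.kubernetes.io/instance=mysql
# release the statically created PV as well
kubectl delete pv mysql-pv
# the data under the hostPath (/tmp/ in the manifest above) still has to be cleared on the node itself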

Related

Error while creating a PersistentVolume with MySQL on Kubernetes

I'm trying to create a Kubernetes persistent volume to connect it to a Spring Boot application, but I get this error in the pod that uses the persistent volume:
mysqld: Table 'mysql.plugin' doesn't exist.
Secrets are created for both the user and the admin, and there is also a ConfigMap to map the Spring Boot image to the MySQL service. Here is my deployment file:
# Define a 'Service' To Expose mysql to Other Services
apiVersion: v1
kind: Service
metadata:
  name: mysql # DNS name
  labels:
    app: mysql
    tier: database
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector: # mysql Pod Should contain same labels
    app: mysql
    tier: database
  clusterIP: None # We Use DNS, Thus ClusterIP is not relevant
---
# Define a 'Persistent Volume Claim' (PVC) for Mysql Storage, dynamically provisioned by cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim # name of PVC essential for identifying the storage data
  labels:
    app: mysql
    tier: database
spec:
  accessModes:
    - ReadWriteOnce # This specifies the mode of the claim that we are trying to create.
  resources:
    requests:
      storage: 1Gi # This will tell kubernetes about the amount of space we are trying to claim.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
# Configure 'Deployment' of mysql server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
    tier: database
spec:
  selector: # mysql Pod Should contain same labels
    matchLabels:
      app: mysql
      tier: database
  strategy:
    type: Recreate
  template:
    metadata:
      labels: # Must match 'Service' and 'Deployment' selectors
        app: mysql
        tier: database
    spec:
      containers:
        - image: mysql:latest # image from docker-hub
          args:
            - "--ignore-db-dir"
            - "lost+found" # Workaround for https://github.com/docker-library/mysql/issues/186
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD # Setting Root Password of mysql From a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-admin # Name of the 'Secret'
                  key: password # 'key' inside the Secret which contains required 'value'
            - name: MYSQL_USER # Setting USER username on mysql From a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: MYSQL_PASSWORD # Setting USER Password on mysql From a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
            - name: MYSQL_DATABASE # Setting Database Name from a 'ConfigMap'
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts: # Mounting volume obtained from Persistent Volume Claim
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql # This is the path in the container on which the mounting will take place.
      volumes:
        - name: mysql-persistent-storage # Obtaining 'volume' from PVC
          persistentVolumeClaim:
            claimName: mysql-pv-claim
I'm using the latest MySQL image from Docker Hub. Is there any configuration in my file that I must change?
First of all, by using the latest mysql container you are actually running mysql:8.
The --ignore-db-dir flag does not exist in version 8, so you should see this error in the logs:
2020-09-21 0 [ERROR] [MY-000068] [Server] unknown option '--ignore-db-dir'.
2020-09-21 0 [ERROR] [MY-010119] [Server] Aborting
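A trimmed sketch of the container section with the two args lines removed; everything else is taken unchanged from the Deployment in the question:

containers:
  - image: mysql:latest
    name: mysql
    # args with "--ignore-db-dir" removed; MySQL 8 no longer recognises the option
    env:
      - name: MYSQL_ROOT_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-admin
            key: password
      # ... remaining env vars, ports and volumeMounts unchanged ...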
After removing this flag, mysql stops crashing, but another problem appears.
The hostPath volume does not work as expected. When I ran:
$ kubectl get pvc
NAME             STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-xxxx   1Gi        RWO            standard       4m16s
you can see that STORAGECLASS is set to standard, and this causes Kubernetes to dynamically provision a PV instead of using the one already created.
$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON   AGE
pvc-xxxx         1Gi        RWO            Delete           Bound       default/mysql-pv-claim   standard                6m31s
task-pv-volume   1Gi        RWO            Retain           Available                                                    6m31s
As you can see, the task-pv-volume status is Available, which means that nothing is using it.
To use it, you need to set storageClassName in the PersistentVolumeClaim to an empty string ("") and possibly add volumeName, like the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
    tier: database
spec:
  storageClassName: ""
  volumeName: task-pv-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
After doing this, the hostPath volume will work as expected, but there is still a problem with the lost+found directory.
My solution would be to create another directory inside /mnt/data/, e.g. /mnt/data/mysqldata/, and use /mnt/data/mysqldata/ as the hostPath in the PersistentVolume object.
Another solution would be to use an older version of MySQL that still supports the --ignore-db-dir flag.
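A sketch of that PV change, keeping everything else from the manifest above (the subdirectory has to exist on the node first):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/mysqldata"  # dedicated subdirectory, so the filesystem's lost+found stays out of the datadir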

How to create a MySQL container with initial data in Kubernetes?

I want to set up initial data (a script file which creates the database and a table) in the MySQL container. I have another pod which talks to the MySQL pod and inserts data into the table. If I delete the MySQL pod, another pod is created, but the previously inserted data is lost. I don't want to lose the data that was inserted before deleting the pod. How can I accomplish this?
I have created a PV and a PVC, but the data is still lost after deleting the pod.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/path to script file/script"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
This is the relevant part of the deployment.yaml:
volumeMounts:
  - name: mysql-initdb
    mountPath: /docker-entrypoint-initdb.d
volumes:
  - name: mysql-initdb
    persistentVolumeClaim:
      claimName: mysql-initdb-pv-claim
Write a script and attach it to a postStart hook. Verify that the database is present and online, then run the SQL commands that create the required data.
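A minimal sketch of that approach, to be added to the MySQL container spec. The directory matches the mount in the question; the retry loop, the root credentials and the file name init.sql are illustrative assumptions, not the answer's exact code.

lifecycle:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - |
          # wait until mysqld accepts connections, then load the mounted script
          until mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "SELECT 1" >/dev/null 2>&1; do
            sleep 2
          done
          # init.sql is a placeholder for the actual file mounted into the directory
          mysql -uroot -p"$MYSQL_ROOT_PASSWORD" < /docker-entrypoint-initdb.d/init.sql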

What's the difference between various ways of using NFS storage?

My customer asked me whether there is any difference between using NFS in the following ways:
Method 1: define the PV like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqldb-volume
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/export/dbvol
    server: master.lab.example.com
Method 2: mount the NFS export on the local file system at /home/myapp/dir1, and define the PV like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqldb-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/myapp/dir1
The pod will run an openjdk image which writes output to a file on NFS. It seems both should work; is there any difference?
Best regards
Lan
Doing it with hostPath requires intervention on the node: the NFS share has to be pre-mounted before you spin up the pod(s), so the dependency lives outside of Kubernetes (i.e. if NFS is unavailable, pod startup is not held back; the pod starts without the mounted content, which is very bad).
By design, the direct NFS PV is much cleaner and more obvious, but in the end both will generally work.
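For illustration, this is the kind of node-side setup Method 2 depends on; a sketch only, with the export and mount point taken from the question:

# on every node that may run the pod, before the pod is scheduled:
mount -t nfs master.lab.example.com:/var/export/dbvol /home/myapp/dir1
# or persist it in /etc/fstab:
# master.lab.example.com:/var/export/dbvol  /home/myapp/dir1  nfs  defaults  0 0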

Can't Share a Persistent Volume Claim for an EBS Volume between Apps

Is it possible to share a single persistent volume claim (PVC) between two apps (each using a pod)?
I read: Share persistent volume claims amongst containers in Kubernetes/OpenShift but didn't quite get the answer.
I tried adding a PHP app and a MySQL app (with persistent storage) within the same project. I deleted the original persistent volume (PV) and created a new one with ReadWriteMany mode. I set the root password of the MySQL database, and the database works.
Then I added storage to the PHP app using the same persistent volume claim with a different subPath. I found that I can't turn on both apps: after I turn one on, the next one gets stuck at creating the container.
MySQL .yaml from the deployment step in OpenShift:
...
template:
  metadata:
    creationTimestamp: null
    labels:
      name: mysql
  spec:
    volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql
    containers:
      - name: mysql
        ...
        volumeMounts:
          - name: mysql-data
            mountPath: /var/lib/mysql/data
            subPath: mysql/data
        ...
        terminationMessagePath: /dev/termination-log
        imagePullPolicy: IfNotPresent
    restartPolicy: Always
    terminationGracePeriodSeconds: 30
    dnsPolicy: ClusterFirst
PHP .yaml from the deployment step:
template:
  metadata:
    creationTimestamp: null
    labels:
      app: wiki2
      deploymentconfig: wiki2
  spec:
    volumes:
      - name: volume-959bo  # <<----
        persistentVolumeClaim:
          claimName: mysql
    containers:
      - name: wiki2
        ...
        volumeMounts:
          - name: volume-959bo
            mountPath: /opt/app-root/src/w/images
            subPath: wiki/images
        terminationMessagePath: /dev/termination-log
        imagePullPolicy: Always
    restartPolicy: Always
    terminationGracePeriodSeconds: 30
    dnsPolicy: ClusterFirst
    securityContext: {}
The volume mount names are different, but that shouldn't prevent the two pods from sharing the PVC. Or is the problem that they can't both mount the same volume at the same time? I can't get the termination log at /dev because if the volume can't be mounted, the pod doesn't start and there is no log.
The PVC's .yaml (oc get pvc -o yaml)
apiVersion: v1
items:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
        pv.kubernetes.io/bind-completed: "yes"
        pv.kubernetes.io/bound-by-controller: "yes"
        volume.beta.kubernetes.io/storage-class: ebs
        volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
      creationTimestamp: YYYY-MM-DDTHH:MM:SSZ
      name: mysql
      namespace: abcdefghi
      resourceVersion: "123456789"
      selfLink: /api/v1/namespaces/abcdefghi/persistentvolumeclaims/mysql
      uid: ________-____-____-____-____________
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      volumeName: pvc-________-____-____-____-____________
    status:
      accessModes:
        - ReadWriteMany
      capacity:
        storage: 1Gi
      phase: Bound
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""
Suspicious Entries from oc get events
Warning FailedMount {controller-manager }
Failed to attach volume "pvc-________-____-____-____-____________"
on node "ip-172-__-__-___.xx-xxxx-x.compute.internal"
with:
Error attaching EBS volume "vol-000a00a00000000a0" to instance
"i-1111b1b11b1111111": VolumeInUse: vol-000a00a00000000a0 is
already attached to an instance
Warning FailedMount {kubelet ip-172-__-__-___.xx-xxxx-x.compute.internal}
Unable to mount volumes for pod "the pod for php app":
timeout expired waiting for volumes to attach/mount for pod "the pod".
list of unattached/unmounted volumes=
[volume-959bo default-token-xxxxx]
I tried to:
turn on the MySQL app first, and then try to turn on the PHP app
found the PHP app can't start
turn off both apps
turn on the PHP app first, and then try to turn on the MySQL app
found the MySQL app can't start
The strange thing is that the event log never says it can't mount the volume for the MySQL app.
The remaining volume to mount is either default-token-xxxxx or volume-959bo (the volume name in the PHP app), but never mysql-data (the volume name in the MySQL app).
So the error seems to be caused by the underlying storage you are using, in this case EBS. The OpenShift docs actually specifically state that this is the case for block storage, see here.
An EBS volume can only be attached to one node at a time, so two pods scheduled on different nodes can never both mount it, regardless of the access mode requested on the PVC.
I know this will work for both NFS and GlusterFS storage, and I have done this in numerous projects using those storage types, but unfortunately, in your case it's not supported.
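If an RWX-capable backend such as NFS is available, a PV along these lines can genuinely be shared by both pods; the server and export path below are placeholders, not values from the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany          # NFS actually supports shared read-write access
  nfs:
    server: nfs.example.com  # placeholder
    path: /exports/shared    # placeholder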

OpenShift nodes storage configuration

I'm looking for some info about best-practice storage requirements for OpenShift nodes that will run containers, but I didn't find any clear answer.
My questions would be:
- Is any shared storage mandatory for all nodes?
- Can I control the directory where images will be placed?
- Must NFS directories that will be accessed by containers already be mounted on the node server?
I've been looking for information about this and these are my conclusions:
If you need persistent storage, for example for a DB, a Jenkins master, or any kind of data you want to keep across container restarts, then you have to make that storage available on the nodes that can run the containers which require it.
Mount on the nodes any of these:
NFS, HostPath (single-node testing only, and of course already mounted), GlusterFS, Ceph, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disk, iSCSI, Fibre Channel
Create persistent volumes in OpenShift.
OpenShift NFS example, creating a file.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /tmp
    server: 172.17.0.2
Create it from the file:
oc create -f file.yaml
Create a claim for the storage; claims will look for available persistent volumes with the required capacity.
A claim is then used by pods.
For example, let's claim 1Gi; later we will associate the claim with a pod.
Create nfs-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Create it from the file:
oc create -f nfs-claim.yaml
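Optionally, verify that the claim has bound to the volume before wiring it into a pod (a plain status check using the names defined above):

oc get pv pv0003
oc get pvc nfs-claim1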
Create a pod with the storage volume and the claim.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-pod
  labels:
    name: nginx-nfs-pod
spec:
  containers:
    - name: nginx-nfs-pod
      image: fedora/nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfsvol
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfsvol
      persistentVolumeClaim:
        claimName: nfs-claim1
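And create the pod from that file as before (the file name here is an assumption):

oc create -f nginx-nfs-pod.yaml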
Some extra options, like SELinux settings, may be required, but they are well explained here: https://docs.openshift.org/latest/install_config/storage_examples/shared_storage.html
Is any shared storage mandatory for all nodes?
No, shared storage is not mandatory, but it is highly recommended, as most applications will require some stateful storage, which can only really be obtained with a shared storage provider. The options for such storage providers are listed at https://docs.openshift.org/latest/install_config/persistent_storage/index.html.