Error while creating a PersistentVolume with MySQL on Kubernetes - mysql

I'm trying to create a Kubernetes persistent volume to use with a Spring Boot application, but the pod that uses the persistent volume fails with this error:
mysqld: Table 'mysql.plugin' doesn't exist.
Secrets are created for both the user and the admin, and a ConfigMap maps the Spring Boot image to the MySQL service. Here is my deployment file:
# Define a 'Service' To Expose mysql to Other Services
apiVersion: v1
kind: Service
metadata:
  name: mysql # DNS name
  labels:
    app: mysql
    tier: database
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector: # mysql Pod Should contain same labels
    app: mysql
    tier: database
  clusterIP: None # We Use DNS, Thus ClusterIP is not relevant
---
# Define a 'Persistent Volume Claim' (PVC) for Mysql Storage, dynamically provisioned by cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim # name of PVC essential for identifying the storage data
  labels:
    app: mysql
    tier: database
spec:
  accessModes:
    - ReadWriteOnce # This specifies the mode of the claim that we are trying to create.
  resources:
    requests:
      storage: 1Gi # This will tell kubernetes about the amount of space we are trying to claim.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
# Configure 'Deployment' of mysql server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
    tier: database
spec:
  selector: # mysql Pod Should contain same labels
    matchLabels:
      app: mysql
      tier: database
  strategy:
    type: Recreate
  template:
    metadata:
      labels: # Must match 'Service' and 'Deployment' selectors
        app: mysql
        tier: database
    spec:
      containers:
        - image: mysql:latest # image from docker-hub
          args:
            - "--ignore-db-dir"
            - "lost+found" # Workaround for https://github.com/docker-library/mysql/issues/186
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD # Setting Root Password of mysql From a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-admin # Name of the 'Secret'
                  key: password # 'key' inside the Secret which contains required 'value'
            - name: MYSQL_USER # Setting USER username on mysql From a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: MYSQL_PASSWORD # Setting USER Password on mysql From a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
            - name: MYSQL_DATABASE # Setting Database Name from a 'ConfigMap'
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts: # Mounting volume obtained from Persistent Volume Claim
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql # This is the path in the container on which the mounting will take place.
      volumes:
        - name: mysql-persistent-storage # Obtaining 'volume' from PVC
          persistentVolumeClaim:
            claimName: mysql-pv-claim
I'm using the latest MySQL image from Docker Hub. Is there any configuration in my file that I must change?

First of all, by using the latest mysql image you are running mysql:8.
The --ignore-db-dir flag does not exist in version 8, so you should see this error in the logs:
2020-09-21 0 [ERROR] [MY-000068] [Server] unknown option '--ignore-db-dir'.
2020-09-21 0 [ERROR] [MY-010119] [Server] Aborting
After removing this flag, mysql stops crashing, but another problem appears.
The hostPath volume does not work as expected. When I ran:
$ kubectl get pvc
NAME             STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-xxxx   1Gi        RWO            standard       4m16s
you can see that STORAGECLASS is set to standard, which causes Kubernetes to dynamically provision a new PV instead of using the one you already created.
# kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON   AGE
pvc-xxxx         1Gi        RWO            Delete           Bound       default/mysql-pv-claim   standard                6m31s
task-pv-volume   1Gi        RWO            Retain           Available                                                    6m31s
As you can see, the task-pv-volume status is Available, which means that nothing is using it.
To use it, you need to set the PersistentVolumeClaim's storageClassName to an empty string ("") and, ideally, also add volumeName, like the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
    tier: database
spec:
  storageClassName: ""
  volumeName: task-pv-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
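Once the claim is created with these settings, it should bind to the pre-created volume instead of a dynamically provisioned one. You can verify this with the usual kubectl commands (the claim's VOLUME column should now show task-pv-volume):
$ kubectl get pvc mysql-pv-claim
$ kubectl get pv task-pv-volume   # STATUS should change from Available to Bound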
After doing this, the hostPath volume will work as expected, but there is still a problem with the lost+found directory.
My solution would be to create another directory inside /mnt/data/, e.g. /mnt/data/mysqldata/, and use /mnt/data/mysqldata/ as the hostPath in the PersistentVolume object.
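For example, the PersistentVolume from the question would then look like this (just a sketch, assuming the /mnt/data/mysqldata/ directory already exists on the node):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/mysqldata" # subdirectory instead of the volume root, so no lost+found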
Another solution would be to use an older version of mysql that still supports the --ignore-db-dir flag.

Related

Helm + K8S - access the persistent volume

I am using K8s with Helm 3.
I am also using a MySQL 5.7 database.
I created a MySQL pod whose data persists: the database is created the first time the pod starts, and even if the pod goes down and comes up again, the data is not lost.
I am using PersistentVolume and PersistentVolumeClaim.
Here are the YAML files:
Mysql pod:
apiVersion: v1
kind: Pod
metadata:
  name: myproject-db
  namespace: {{ .Release.Namespace }}
  labels:
    name: myproject-db
    app: myproject-db
spec:
  hostname: myproject-db
  subdomain: {{ include "k8s.db.subdomain" . }}
  containers:
    - name: myproject-db
      image: mysql:5.7
      imagePullPolicy: IfNotPresent
      env:
        - name: MYSQL_DATABASE
          value: test
        - name: MYSQL_ROOT_PASSWORD
          value: "12345"
      ports:
        - name: mysql
          protocol: TCP
          containerPort: 3306
      resources:
        requests:
          cpu: 200m
          memory: 500Mi
        limits:
          cpu: 500m
          memory: 600Mi
      volumeMounts:
        - name: mysql-persistence-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-persistence-storage
      persistentVolumeClaim:
        claimName: mysql-pvc
Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
    name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/mysql"
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  volumeName: mysql-pv
After running:
helm install myproject myproject/
the database is created and can be used.
If I add records and then stop and remove the database pod, no records are lost, even if I restart the pod by running helm delete myproject and then helm install myproject myproject/ again.
I see that the database data is kept when the MySQL pod is restarted.
How can I access the MySQL data?
I see no files in /var/lib/mysql nor in /data/mysql - how can I access those two paths?
Also, how can I permanently delete the data? (Can I delete the files themselves, or only via kubectl commands?)
Thanks.
You are using a hostPath volume, so the data is persisted directly on your node.
You can SSH to that node and delete the files there.
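For example (a rough sketch; how you access the node depends on your environment, and /data/mysql is the hostPath from the PersistentVolume above):
# find out which node the pod is running on
kubectl get pod myproject-db -o wide
# then, on that node (e.g. via ssh), inspect or remove the data
ls -la /data/mysql
rm -rf /data/mysql/*   # permanently deletes the database files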

Problem connecting to Apache after scaling up my stateful database on Kubernetes

Hi!
I have deployed Keyrock, Apache, and MySQL to Kubernetes.
After I used the HPA and my stateful database scaled up, I can't log in to my simple site.
This is my MySQL StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7.21
          imagePullPolicy: Always
          resources:
            requests:
              memory: 50Mi #50
              cpu: 50m
            limits:
              memory: 500Mi #220?
              cpu: 400m #65
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
              subPath: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret # MARK P
                  key: password
            - name: MYSQL_ROOT_HOST
              valueFrom:
                secretKeyRef:
                  name: mysql-secret # MARK P
                  key: host
  volumeClaimTemplates:
    - metadata:
        name: mysql-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard #?manual
        resources:
          requests:
            storage: 5Gi
And its headless service:
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql #x-app #
spec:
  ports:
    - name: mysql
      port: 3306
  clusterIP: None
  selector:
    app: mysql #x-app
Can anyone help me?
I'm using GKE. Keyrock and Apache are Deployments and MySQL is a StatefulSet.
Thank you!
You can't just scale up a standalone database. Using an HPA works for stateless applications, but not for stateful applications like a database.
Increasing the replica count of your StatefulSet will just create another pod with a new MySQL instance. This new replica isn't aware of the data in your old replica. Basically, you now have two completely different databases. That's why you can't log in after scaling up: when your request gets routed to the new replica, that instance does not have the user info that you created in the old replica.
In this case, you should deploy your database in clustered mode. Then you can take advantage of horizontal scaling.
I recommend using a database operator like mysql/mysql-operator, presslabs/mysql-operator, or KubeDB to manage your database in Kubernetes. Of these operators, KubeDB has an autoscaling feature; I am not sure whether the other operators provide it.
Disclosure: I am one of the developers of the KubeDB operator.

mysql container won't start on kubernetes when backed by NFS Dynamic provisioner

I'm having issues getting the mysql container to start properly. To sum it up: with the NFS dynamic provisioner, the mysql container won't start and throws the error mkdir: cannot create directory '/var/lib/mysql/': File exists, even though the NFS mount is present in the container and appears to be functioning properly.
I installed the dynamic NFS provisioner on my K8s cluster from here: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. The test claim and test pod shown in the instructions work.
Now, to run mysql, I took the code snippets from here:
https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
kubectl apply -f mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: managed-nfs-storage # <--- THIS MATCHES MY NFS STORAGECLASS
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
kubectl apply -f mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
kubectl get pv,pvc
NAME                               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS          REASON   AGE
persistentvolume/mysql-pv-volume   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   managed-nfs-storage            5m16s

NAMESPACE   NAME                                   STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS          AGE
default     persistentvolumeclaim/mysql-pv-claim   Bound    mysql-pv-volume   20Gi       RWO            managed-nfs-storage   5m27s
The PV was created automatically by the dynamic provisioner.
Then I get the error:
$ kubectl logs mysql-7d7fdd478f-l2m8h
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
mkdir: cannot create directory '/var/lib/mysql/': File exists
This error stops the container from starting.
I deleted the deployment and added command: [ "/bin/sh", "-c", "sleep 100000" ] so the container would start.
After getting into the container, I checked that the NFS mount is properly mounted and writable:
# df -h | grep mysql
nfs1.example.com:/k8/default-mysql-pv-claim-pvc-0808d1bd-69ca-4ff5-825a-b846b1133e3a 1.0T 1.6G 1023G 1% /var/lib/mysql
If I create a "local" pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
and create the mysql deployment, the mysql pod starts up without issue.
So at this point, with dynamic provisioning (potentially just on NFS?), the mysql container doesn't work.
Does anyone have any suggestions?
I'm not exactly sure what the cause of this is, so here are a few options.
First, you could try setting a securityContext, because the volume might be mounted without proper permissions.
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
    - name: sec-ctx-vol
      emptyDir: {}
  containers:
    - name: sec-ctx-demo
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
      securityContext:
        allowPrivilegeEscalation: false
You can find out the proper user and group IDs by running id inside the container,
for example via kubectl exec -it <pod-name> -- bash.
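Or, as a one-liner (just the standard id command run through kubectl exec; the pod name is a placeholder):
kubectl exec -it <pod-name> -- id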
Second, try using subPath
volumeMounts:
  - name: mysql-persistent-storage
    mountPath: "/var/lib/mysql"
    subPath: mysql
If that doesn't work, I would test the NFS share on another pod with an initContainer that creates a directory, as sketched below.
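A rough sketch of such a test pod, reusing the same claim (the pod name and the test directory are only illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-write-test
spec:
  initContainers:
    - name: mkdir-test
      image: busybox
      command: ["sh", "-c", "mkdir -p /data/testdir && ls -la /data"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /data
  containers:
    - name: sleep
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /data
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: mysql-pv-claim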
And I would redo the whole NFS setup, maybe using this guide.

Kubernetes MySQL image persistent volume is non-empty during init

I am working through the persistent disks tutorial found here, but creating the workload as a StatefulSet instead of a Deployment.
When I apply the YAML file on GKE, the database fails to start; looking at the logs, I see the following error:
[ERROR] --initialize specified but the data directory has files in it. Aborting.
Is it possible to inspect the created volume to see what is in the directory? Otherwise, what am I doing wrong that is causing the disk to be non-empty?
Thanks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datalayer-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: datalayer-svc
  labels:
    app: myapplication
spec:
  ports:
    - port: 80
      name: dbadmin
  clusterIP: None
  selector:
    app: database
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: datalayer
spec:
  selector:
    matchLabels:
      app: myapplication
  serviceName: "datalayer-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: database
          image: mysql:5.7.22
          env:
            - name: "MYSQL_ROOT_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysql-root-password
                  key: password
            - name: "MYSQL_DATABASE"
              value: "appdatabase"
            - name: "MYSQL_USER"
              value: "app_user"
            - name: "MYSQL_PASSWORD"
              value: "app_password"
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: datalayer-pv
              mountPath: /var/lib/mysql
      volumes:
        - name: datalayer-pv
          persistentVolumeClaim:
            claimName: datalayer-pvc
This issue could be caused by the lost+found directory on the filesystem of the PersistentVolume.
I was able to verify this by adding a k8s.gcr.io/busybox container (in the PVC, set accessModes: [ReadWriteMany], or comment out the database container):
- name: init
  image: "k8s.gcr.io/busybox"
  command: ["/bin/sh", "-c", "ls -l /var/lib/mysql"]
  volumeMounts:
    - name: database
      mountPath: "/var/lib/mysql"
There are a few potential workarounds...
Most preferable is to use a subPath on the volumeMounts object. This uses a subdirectory of the PersistentVolume, which should be empty at creation time, instead of the volume root:
volumeMounts:
  - name: database
    mountPath: "/var/lib/mysql"
    subPath: mysql
Less preferable workarounds include:
Use a one-time container to rm -rf /var/lib/mysql/lost+found (not a great solution, because the directory is managed by the filesystem and is likely to re-appear)
Use mysql:5 image, and add args: ["--ignore-db-dir=lost+found"] to the container (this option was removed in mysql 8)
Use mariadb image instead of mysql
More details might be available at docker-library/mysql issues: #69 and #186
You would usually check whether your volumes were mounted with:
kubectl get pods # gets you all the pods on the default namespace
# and
kubectl describe pod <pod-created-by-your-statefulset>
Then you can use these commands to check on your PVs and PVCs:
kubectl get pv # gets all the PVs on the default namespace
kubectl get pvc # same for PVCs
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name>
Then you can go to the GCP console, under Disks, and see if your disks got created.
You can add the ignore-db-dir option for lost+found:
containers:
  - image: mysql:5.7
    name: mysql
    args:
      - "--ignore-db-dir=lost+found"
I use an init container to remove that directory (I'm using Postgres in my case):
initContainers:
  - name: busybox
    image: busybox:latest
    args: ["rm", "-rf", "/var/lib/postgresql/data/lost+found"]
    volumeMounts:
      - name: nfsv-db
        mountPath: /var/lib/postgresql/data

MySQL k8s Pod with mounted volume on AWS failing

I tried MySQL with a mounted volume to obtain persistence for /var/lib/mysql.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
kubectl logs output below:
sh-3.2# kubectl logs acds-catchup-db-6f7d4b6c5b-2jwds
Initializing database
2018-04-16T09:23:18.020740Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-04-16T09:23:18.022014Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2018-04-16T09:23:18.022037Z 0 [ERROR] Aborting
On investigating, I found a directory named lost+found created in the mounted volume.
Can anyone help me with this issue?
That directory is always present in the root path of the volume, so you cannot just remove it.
You have two options to fix it:
Add the --ignore-db-dir=lost+found option to the MySQL configuration. I think that is the better way.
Set the database directory to a path inside the volume, not to the root path of the volume. You can specify the database directory with the --datadir= option.
Here is an example of the args settings for the first option:
spec:
  containers:
    - name: mysql
      image: mysql:5.6
      args: ["--ignore-db-dir=lost+found"]