Kubernetes MySQL image persistent volume is non-empty during init

I am working through the persistent disks tutorial found here, while also creating it as a StatefulSet instead of a Deployment.
When I apply the YAML file to GKE the database fails to start; the logs show the following error:
[ERROR] --initialize specified but the data directory has files in it. Aborting.
Is it possible to inspect the created volume to see what is in the directory? Otherwise, what am I doing wrong that is causing the disk to be non-empty?
Thanks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datalayer-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: datalayer-svc
  labels:
    app: myapplication
spec:
  ports:
  - port: 80
    name: dbadmin
  clusterIP: None
  selector:
    app: database
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: datalayer
spec:
  selector:
    matchLabels:
      app: myapplication
  serviceName: "datalayer-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: database
        image: mysql:5.7.22
        env:
        - name: "MYSQL_ROOT_PASSWORD"
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password
        - name: "MYSQL_DATABASE"
          value: "appdatabase"
        - name: "MYSQL_USER"
          value: "app_user"
        - name: "MYSQL_PASSWORD"
          value: "app_password"
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: datalayer-pv
          mountPath: /var/lib/mysql
      volumes:
      - name: datalayer-pv
        persistentVolumeClaim:
          claimName: datalayer-pvc

This issue could be caused by the lost+found directory on the filesystem of the PersistentVolume.
I was able to verify this by adding a k8s.gcr.io/busybox container (in the PVC, set accessModes: [ReadWriteMany], OR comment out the database container):
- name: init
  image: "k8s.gcr.io/busybox"
  command: ["/bin/sh","-c","ls -l /var/lib/mysql"]
  volumeMounts:
  - name: database
    mountPath: "/var/lib/mysql"
There are a few potential workarounds...
Most preferable is to use a subPath on the volumeMounts object. This uses a subdirectory of the PersistentVolume, which should be empty at creation time, instead of the volume root:
volumeMounts:
- name: database
  mountPath: "/var/lib/mysql"
  subPath: mysql
Less preferable workarounds include:
Use a one-time container to rm -rf /var/lib/mysql/lost+found (not a great solution, because the directory is managed by the filesystem and is likely to re-appear)
Use mysql:5 image, and add args: ["--ignore-db-dir=lost+found"] to the container (this option was removed in mysql 8)
Use mariadb image instead of mysql
More details might be available at docker-library/mysql issues: #69 and #186

You would usually see if your volumes were mounted with:
kubectl get pods # gets you all the pods on the default namespace
# and
kubectl describe pod <pod-created-by-your-statefulset>
Then you can use these commands to check on your PVs and PVCs:
kubectl get pv # lists all the PVs (these are cluster-scoped)
kubectl get pvc # lists the PVCs in the default namespace
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name>
Then you can go to the Disks page in the GCP console and check whether your disks were created.
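If you prefer the command line, the same information is available via gcloud; a quick sketch (the disk name is the one shown in the PV description, and the zone placeholder is yours to fill in):
gcloud compute disks list                                # GCE persistent disks backing dynamically provisioned PVs
gcloud compute disks describe <disk-name> --zone <zone>  # size, zone, and attachment status of a single disk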

You can add the ignore-db-dir option for lost+found:
containers:
- image: mysql:5.7
  name: mysql
  args:
  - "--ignore-db-dir=lost+found"

I use an init container to remove that directory (I'm using Postgres in my case):
initContainers:
- name: busybox
  image: busybox:latest
  args: ["rm", "-rf", "/var/lib/postgresql/data/lost+found"]
  volumeMounts:
  - name: nfsv-db
    mountPath: /var/lib/postgresql/data
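The same approach can be adapted to the MySQL StatefulSet from the question; a sketch that reuses the datalayer-pv volume name and the /var/lib/mysql mount path (keep in mind the note above that the filesystem may recreate lost+found):
initContainers:
- name: cleanup-lost-found
  image: busybox:latest
  command: ["sh", "-c", "rm -rf /var/lib/mysql/lost+found"]
  volumeMounts:
  - name: datalayer-pv
    mountPath: /var/lib/mysql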

Related

Helm + K8S - access the persistent volume

I am using K8S with Helm 3.
I am also using a MySQL 5.7 database.
I created a MySQL pod whose data persists: it is created the first time the pod is created, and even if the pod goes down and comes back up, the data is not lost.
I am using PersistentVolume and PersistentVolumeClaim.
Here are the YAML files:
Mysql pod:
apiVersion: v1
kind: Pod
metadata:
  name: myproject-db
  namespace: {{ .Release.Namespace }}
  labels:
    name: myproject-db
    app: myproject-db
spec:
  hostname: myproject-db
  subdomain: {{ include "k8s.db.subdomain" . }}
  containers:
  - name: myproject-db
    image: mysql:5.7
    imagePullPolicy: IfNotPresent
    env:
    - name: MYSQL_DATABASE
      value: test
    - name: MYSQL_ROOT_PASSWORD
      value: "12345"
    ports:
    - name: mysql
      protocol: TCP
      containerPort: 3306
    resources:
      requests:
        cpu: 200m
        memory: 500Mi
      limits:
        cpu: 500m
        memory: 600Mi
    volumeMounts:
    - name: mysql-persistence-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-persistence-storage
    persistentVolumeClaim:
      claimName: mysql-pvc
Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
    name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/data/mysql"
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  volumeName: mysql-pv
After running:
helm install myproject myproject/
the database is created and can be used.
If I add records and then stop and remove the database pod, there is no loss of DB records - even if I restart the pod by running helm delete myproject and then helm install myproject myproject/ again, the DB data is kept when the MySQL pod is restarted.
How can I access the data of MySQL?
I see no files in /var/lib/mysql nor in /data/mysql - how can I access those two paths?
Also, how can I permanently delete the data? (Can I delete the files themselves, or only via kubectl commands?)
Thanks.
You are using a hostPath volume, so the data is persisted directly on the node's filesystem.
You can SSH to your node and delete those files.
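For example, a sketch (substitute your own pod and node names):
kubectl exec -it myproject-db -- ls -la /var/lib/mysql   # the data as seen inside the pod
kubectl get pod myproject-db -o wide                     # shows which node the pod runs on
ssh <node>                                               # then, on the node:
ls -la /data/mysql                                       # the hostPath directory backing the PV
To wipe the data permanently, delete the pod and the PVC and then remove /data/mysql on the node.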

Add users to a Kubernetes deployment of MySQL with secrets

My overall goal is to create MySQL users (besides root) automatically after the deployment in Kubernetes.
I found the following resources:
How to create mysql users and database during deployment of mysql in kubernetes?
Add another user to MySQL in Kubernetes
People suggested that .sql scripts can be mounted to docker-entrypoint-initdb.d with a ConfigMap to create these users. In order to do that, I would have to put the passwords of these users in the script in plain text. This is a potential security issue, so I want to store the MySQL usernames and passwords as Kubernetes Secrets.
This is my ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  labels:
    app: mysql-image-db
data:
  initdb.sql: |-
    CREATE USER '<user>'@'%' IDENTIFIED BY '<password>';
How can I access the associated Kubernetes secrets within this ConfigMap?
I am finally able to provide a solution to my own question. Since PjoterS made me aware that you can mount Secrets into a Pod as a volume, I came up with the following solution.
This is the ConfigMap for the user creation script:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-init-script
  labels:
    app: mysql-image-db
data:
  init-user.sh: |-
    #!/bin/bash
    sleep 30s
    mysql -u root -p"$(cat /etc/mysql/credentials/root_password)" -e \
      "CREATE USER '$(cat /etc/mysql/credentials/user_1)'@'%' IDENTIFIED BY '$(cat /etc/mysql/credentials/password_1)';"
To make this work, I needed to mount the ConfigMap and the Secret as volumes of my Deployment and add a postStart lifecycle hook to execute the user creation script.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-image-db
spec:
  selector:
    matchLabels:
      app: mysql-image-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-image-db
    spec:
      containers:
      - image: mysql:8.0
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: root_password
              name: mysql-user-credentials
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-volume
          mountPath: /var/lib/mysql
        - name: mysql-config-volume
          mountPath: /etc/mysql/conf.d
        - name: mysql-init-script-volume
          mountPath: /etc/mysql/init
        - name: mysql-credentials-volume
          mountPath: /etc/mysql/credentials
        lifecycle:
          postStart:
            exec:
              command: ["/bin/bash", "-c", "/etc/mysql/init/init-user.sh"]
      volumes:
      - name: mysql-persistent-volume
        persistentVolumeClaim:
          claimName: mysql-volume-claim
      - name: mysql-config-volume
        configMap:
          name: mysql-config
      - name: mysql-init-script-volume
        configMap:
          name: mysql-init-script
          defaultMode: 0777
      - name: mysql-credentials-volume
        secret:
          secretName: mysql-user-credentials
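For reference, the mysql-user-credentials Secret referenced above needs to provide the keys the script reads; a minimal sketch with placeholder values:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-user-credentials
  labels:
    app: mysql-image-db
type: Opaque
stringData:
  root_password: <root password>
  user_1: <username for the new user>
  password_1: <password for the new user>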

Use an external standalone-full.xml in the RHPAM KIE pod on OpenShift

I just want to use a customized standalone-full.xml in the RHPAM KIE Server pod running in OpenShift. I have created the ConfigMap from the file but am not sure how to set it.
I created the ConfigMap like this:
oc create configmap my-config --from-file=standalone-full.xml
And edited the DeploymentConfig of the RHPAM KIE Server:
volumeMounts:
- name: config-volume
  mountPath: /opt/eap/standalone/configuration
volumes:
- name: config-volume
  configMap:
    name: my-config
It starts a new container with status ContainerCreating and then fails with an error (scaling down from 1 to 0).
Am I setting the ConfigMap correctly?
You can mount the ConfigMap in a pod as a volume. Here's a good example: just add a 'volumes' block (to specify the ConfigMap as a volume) and a 'volumeMounts' block (to specify the mount point) in the pod's spec:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
  restartPolicy: Never
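One caveat for the KIE Server case: a ConfigMap volume mounted over /opt/eap/standalone/configuration hides every other file that normally lives in that directory, which may be why the container fails to come up. A subPath mount of just the one file avoids that; a sketch, assuming the ConfigMap key is standalone-full.xml:
volumeMounts:
- name: config-volume
  mountPath: /opt/eap/standalone/configuration/standalone-full.xml
  subPath: standalone-full.xml
volumes:
- name: config-volume
  configMap:
    name: my-config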

mysql container won't start on kubernetes when backed by NFS Dynamic provisioner

I'm having issues getting the mysql container to start properly. To sum it up, with the NFS dynamic provisioner the mysql container won't start and throws the error mkdir: cannot create directory '/var/lib/mysql/': File exists, even though the NFS mount is present in the container and appears to be functioning properly.
I installed the dynamic NFS provisioner on my K8s cluster from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. The test claim and test pod shown in the instructions work.
Now to run mysql, I took the code snippets from here:
https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
kubectl apply -f mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: managed-nfs-storage # <--- THIS MATCHES MY NFS STORAGECLASS
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
kubectl apply -f mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/mysql-pv-volume 20Gi RWO Retain Bound default/mysql-pv-claim managed-nfs-storage 5m16s
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/mysql-pv-claim Bound mysql-pv-volume 20Gi RWO managed-nfs-storage 5m27s
The PV was created automatically by the dynamic provisioner.
Get the error...
$ kubectl logs mysql-7d7fdd478f-l2m8h
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
mkdir: cannot create directory '/var/lib/mysql/': File exists
This error stops the container from starting...
I went and deleted the deployment and added command: [ "/bin/sh", "-c", "sleep 100000" ] so the container would start...
After getting into the container, I checked that the NFS mount is properly mounted and writable...
# df -h | grep mysql
nfs1.example.com:/k8/default-mysql-pv-claim-pvc-0808d1bd-69ca-4ff5-825a-b846b1133e3a 1.0T 1.6G 1023G 1% /var/lib/mysql
If I create a "local" pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
and create the mysql deployment, the mysql pod starts up without issue.
So at this point, with dynamic provisioning (potentially just on NFS?) the mysql container doesn't work.
Anyone have any suggestions?
I'm not exactly sure what the cause of this is, so here are a few options.
First, you could try setting a securityContext, because the volume might be mounted without proper permissions:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
You can find out the proper user and group IDs by running id inside the container, for example via kubectl exec -it <pod-name> -- bash.
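Applied to the mysql deployment from the question, a minimal sketch could look like this (999 is the uid/gid the official mysql image uses for its mysql user, but confirm it with id inside the container; also note that not every volume plugin honors fsGroup):
spec:
  template:
    spec:
      securityContext:
        fsGroup: 999   # group of the mysql user; lets it write to the mounted volume where fsGroup is applied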
Second, try using subPath
volumeMounts:
- name: mysql-persistent-storage
  mountPath: "/var/lib/mysql"
  subPath: mysql
If that doesn't work, I would test the NFS share from another pod with an init container that creates a directory (a sketch of such a test pod is shown below).
And I would consider redoing the whole NFS setup, maybe using this guide.
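A minimal test pod along those lines; a sketch with illustrative names that claims the same mysql-pv-claim PVC, so run it while the mysql deployment is scaled down:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-write-test
spec:
  initContainers:
  - name: mkdir-test
    image: busybox
    command: ["sh", "-c", "mkdir -p /data/testdir && touch /data/testdir/ok"]
    volumeMounts:
    - name: test-vol
      mountPath: /data
  containers:
  - name: inspect
    image: busybox
    command: ["sh", "-c", "ls -la /data && sleep 3600"]
    volumeMounts:
    - name: test-vol
      mountPath: /data
  volumes:
  - name: test-vol
    persistentVolumeClaim:
      claimName: mysql-pv-claim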

Can't connect to mysql in kubernetes

I have deployed a MySQL database in Kubernetes and exposed it via a service. When my application tries to connect to that database, the connection keeps being refused. I also get the same result when I try to access it locally. The Kubernetes node runs in minikube.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql_db
        imagePullPolicy: Never
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: "/var/lib/mysql"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
And here's my yaml for persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Users/Work/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After this, running minikube service list gives:
default | mysql-service | http://192.168.99.101:31613
However, I cannot access the database either from my application or from my local machine.
What am I missing, or did I misconfigure something?
EDIT:
I do not define any environment variables here, since the Docker image already contains a running MySQL DB and some scripts are also run within the image.
MySQL must not have started; confirm it by checking the logs:
kubectl get pods | grep mysql
kubectl logs -f $POD_ID
Remember that you have to specify the environment variables MYSQL_DATABASE and MYSQL_ROOT_PASSWORD for mysql to start. If you don't want to set a password for root, specify the corresponding option instead (for example MYSQL_ALLOW_EMPTY_PASSWORD). Here is an example of a mysql yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql_db
        imagePullPolicy: Never
        env:
        - name: MYSQL_DATABASE
          value: main_db
        - name: MYSQL_ROOT_PASSWORD
          value: s4cur4p4ss
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: "/var/lib/mysql"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Ok, I figured it out. After looking through the logs I noticed the error Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied).
I had to add this to my Dockerfile when building the image:
RUN usermod -u 1000 mysql
After rebuilding the image everything started working. Thank you guys.
I thought I was connecting to my DB server correctly, but I was wrong. My DB deployment was online (tested with kubectl exec -it xxxx -- bash and then mysql -u root --password=$MYSQL_ROOT_PASSWORD) but that wasn't the problem.
I made the simple mistake of getting my service and deployment labels confused. My DB service used a different label than what my Joomla configMap had specified as the MySQL host.
To summarize, the DB service yaml was
metadata:
  labels:
    app: fnjoomlaopencart-db-service
and the Joomla configMap yaml needed
data:
  # point to the DB service
  MYSQL_HOST: fnjoomlaopencart-db-service
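A quick way to catch this kind of mismatch is to list the Services and confirm that the name the app uses as MYSQL_HOST exists and has endpoints behind it:
kubectl get svc                                      # the NAME column is what MYSQL_HOST must reference
kubectl get endpoints fnjoomlaopencart-db-service    # an empty ENDPOINTS column means the selector matches no pods
kubectl get pods --show-labels                       # compare these labels with the Service's spec.selector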