How to create a MySQL container with initial data in Kubernetes? - mysql

I want to load initial data (a script file that creates a database and a table) into the MySQL container. Another pod talks to the MySQL pod and inserts data into the table. If I delete the MySQL pod, a new pod is created, but the previously inserted data is lost. I don't want to lose the data that was inserted before the pod was deleted. How can I accomplish this?
I have created a PV and a PVC, but the data is still lost after deleting the pod.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/path to script file/script"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
This is the relevant part of the deployment.yaml:
volumeMounts:
  - name: mysql-initdb
    mountPath: /docker-entrypoint-initdb.d
volumes:
  - name: mysql-initdb
    persistentVolumeClaim:
      claimName: mysql-initdb-pv-claim
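Note that this only mounts the init script; for inserted rows to survive pod deletion, the MySQL data directory itself also needs a persistent volume. A minimal sketch of the extra mount (the claim name mysql-data-pv-claim is hypothetical; it would be backed by a ReadWriteOnce PV sized for the data, unlike the tiny read-only one above):
volumeMounts:
  - name: mysql-initdb
    mountPath: /docker-entrypoint-initdb.d
  - name: mysql-data
    mountPath: /var/lib/mysql            # where MySQL keeps its databases
volumes:
  - name: mysql-initdb
    persistentVolumeClaim:
      claimName: mysql-initdb-pv-claim
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-data-pv-claim     # hypothetical claim for the data directory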

Write a script and attach it to a postStart hook. Verify that the database is present and online, then run the SQL commands that create the required data.
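For example, a minimal sketch of such a hook on the MySQL container (the script path, host, and password variable are assumptions, not taken from the question):
lifecycle:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - |
          # Wait until mysqld answers, then load the init script (hypothetical file name).
          until mysqladmin ping -h 127.0.0.1 --silent; do sleep 2; done
          mysql -h 127.0.0.1 -uroot -p"$MYSQL_ROOT_PASSWORD" < /docker-entrypoint-initdb.d/init.sql
Note that the official mysql image also runs scripts placed in /docker-entrypoint-initdb.d automatically, but only on the very first start with an empty data directory.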

Related

Helm + K8S - access the persistent volume

I am using K8S with Helm 3.
I am also using a MySQL 5.7 database.
I created a MySQL pod whose database persists: the data is created the first time the pod comes up, and even when the pod goes down and comes up again, it is not lost.
I am using PersistentVolume and PersistentVolumeClaim.
Here are the YAML files:
Mysql pod:
apiVersion: v1
kind: Pod
metadata:
  name: myproject-db
  namespace: {{ .Release.Namespace }}
  labels:
    name: myproject-db
    app: myproject-db
spec:
  hostname: myproject-db
  subdomain: {{ include "k8s.db.subdomain" . }}
  containers:
    - name: myproject-db
      image: mysql:5.7
      imagePullPolicy: IfNotPresent
      env:
        - name: MYSQL_DATABASE
          value: test
        - name: MYSQL_ROOT_PASSWORD
          value: "12345"
      ports:
        - name: mysql
          protocol: TCP
          containerPort: 3306
      resources:
        requests:
          cpu: 200m
          memory: 500Mi
        limits:
          cpu: 500m
          memory: 600Mi
      volumeMounts:
        - name: mysql-persistence-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-persistence-storage
      persistentVolumeClaim:
        claimName: mysql-pvc
Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
    name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/mysql"
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  volumeName: mysql-pv
After running:
helm install myproject myproject/
the database is created and can be used.
If I add records and then stop and remove the database pod, no DB records are lost, even if I restart the pod by running helm delete myproject and then helm install myproject myproject/ again.
I can see that the DB data is kept when the MySQL pod is restarted.
How can I access the MySQL data?
I see no files in /var/lib/mysql nor in /data/mysql - how can I access those two paths?
Also, how can I permanently delete the data? (Can I delete the files themselves, or only via kubectl commands?)
Thanks.
You are using a hostPath volume, so the data is persisted directly on your node.
You can SSH to the node and delete those files.
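For example, something along these lines (the node address is whatever your cluster uses; the paths come from the PV above):
kubectl get pod myproject-db -o wide   # shows which node the pod is scheduled on
ssh <node-address>                     # or however you normally reach that node
ls -l /data/mysql                      # the hostPath directory holding the MySQL files
rm -rf /data/mysql/*                   # permanently deletes the database files
kubectl delete pvc mysql-pvc           # optionally remove the claim and volume objects too
kubectl delete pv mysql-pv
Inside the pod, /var/lib/mysql is the same data seen through the mount, so kubectl exec -it myproject-db -- ls /var/lib/mysql is another way to look at it.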

MySQL StatefulSet deployment with persistent volume not working

Environment:
Using Helm v3 charts for deployment.
Using the Bitnami MySQL chart from the following source: https://artifacthub.io/packages/helm/bitnami/mysql
The username and password are generated randomly for every deployment.
A PersistentVolume of the "standard" storage class is created.
The entire k8s cluster runs on bare metal.
When deploying for the first time, the installation succeeds and all the MySQL DB files are generated in the data directory. Uninstalling and then installing again does not work, because the MySQL data directory still holds DB and config files for the previously generated credentials.
For now, after uninstalling, we manually clear the files from the node before installing again.
Looking for guidance on how to:
Clear only the config files and keep the DB files intact.
On deleting the persistent volume, clear all files from the data directory.
Alter the DB with a new username and password so that the existing files can be used as they are (see the values sketch after this list).
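One way to avoid the credential mismatch on a reinstall is to pin the credentials instead of letting the chart generate them randomly. A hedged sketch of the chart values, assuming a recent bitnami/mysql chart that exposes the auth.* values (the names and passwords below are placeholders, not taken from the question):
# values.yaml, passed with: helm install my-release bitnami/mysql -f values.yaml
auth:
  rootPassword: "fixed-root-password"   # placeholder; keep real credentials in a Secret
  username: "app"
  password: "fixed-app-password"
  database: "appdb"
With fixed credentials, a reinstall can reuse the existing data directory without the previously-generated-credentials mismatch.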
PersistentVolume and PersistentVolumeClaim k8s YAML:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv
  labels:
    type: local
    name: mysql-pv
spec:
  storageClassName: standard
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  labels:
    name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      name: mysql-pv

Error while creating a PersistentVolume with MySQL on Kubernetes

I'm trying to create a Kubernetes persistent volume to use with a Spring Boot application, but a pod using the persistent volume shows this error:
mysqld: Table 'mysql.plugin' doesn't exist.
A Secret is created for both the user and the admin, and a ConfigMap maps the Spring Boot image to the MySQL service. Here is my deployment file:
# Define a 'Service' to expose MySQL to other services
apiVersion: v1
kind: Service
metadata:
  name: mysql              # DNS name
  labels:
    app: mysql
    tier: database
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:                # the MySQL pod should carry the same labels
    app: mysql
    tier: database
  clusterIP: None          # we use DNS, thus clusterIP is not relevant
---
# Define a 'PersistentVolumeClaim' (PVC) for MySQL storage, dynamically provisioned by the cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim     # name of the PVC, essential for identifying the storage data
  labels:
    app: mysql
    tier: database
spec:
  accessModes:
    - ReadWriteOnce        # the access mode of the claim we are trying to create
  resources:
    requests:
      storage: 1Gi         # the amount of space we are trying to claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
# Configure the 'Deployment' of the MySQL server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
    tier: database
spec:
  selector:                # the MySQL pod should carry the same labels
    matchLabels:
      app: mysql
      tier: database
  strategy:
    type: Recreate
  template:
    metadata:
      labels:              # must match 'Service' and 'Deployment' selectors
        app: mysql
        tier: database
    spec:
      containers:
        - image: mysql:latest     # image from Docker Hub
          args:
            - "--ignore-db-dir"
            - "lost+found"        # workaround for https://github.com/docker-library/mysql/issues/186
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD   # root password of MySQL, from a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-admin          # name of the 'Secret'
                  key: password           # key inside the Secret which contains the required value
            - name: MYSQL_USER            # MySQL username, from a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: MYSQL_PASSWORD        # MySQL user password, from a 'Secret'
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
            - name: MYSQL_DATABASE        # database name, from a 'ConfigMap'
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:                   # mount the volume obtained from the PersistentVolumeClaim
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql   # path in the container where the mount takes place
      volumes:
        - name: mysql-persistent-storage  # volume backed by the PVC
          persistentVolumeClaim:
            claimName: mysql-pv-claim
I'm using the latest MySQL Docker image. Is there any configuration in my file that I must change?
First of all, by using the latest mysql container you are actually using mysql:8.
The --ignore-db-dir flag does not exist in version 8, so you should see this error in the logs:
2020-09-21 0 [ERROR] [MY-000068] [Server] unknown option '--ignore-db-dir'.
2020-09-21 0 [ERROR] [MY-010119] [Server] Aborting
After removing this flag, mysql stops crashing, but there appears to be another problem.
The hostPath volume does not work as expected. When I ran:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-xxxx 1Gi RWO standard 4m16s
you can see that STORAGECLASS is set to standard, which causes k8s to dynamically provision a new PV instead of using the one that was already created.
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-xxxx 1Gi RWO Delete Bound default/mysql-pv-claim standard 6m31s
task-pv-volume 1Gi RWO Retain Available 6m31s
As you can see, the task-pv-volume status is Available, which means that nothing is using it.
To use it you need to set storageClassName in the PersistentVolumeClaim to an empty string ("") and optionally add volumeName, like the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
    tier: database
spec:
  storageClassName: ""
  volumeName: task-pv-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
After doing this, the hostPath volume works as expected, but there is still a problem with the lost+found directory.
My solution would be to create a subdirectory inside /mnt/data/, e.g. /mnt/data/mysqldata/, and use /mnt/data/mysqldata/ as the hostPath in the PersistentVolume object.
Another solution would be to use an older version of MySQL that still supports the --ignore-db-dir flag.
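In manifest form, the only change is the hostPath of the PersistentVolume, for example:
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/mysqldata"   # dedicated subdirectory, so lost+found at the filesystem root stays out of MySQL's datadir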

Can't connect to mysql in kubernetes

I have deployed a MySQL database in Kubernetes and exposed it via a service. When my application tries to connect to that database, the connection keeps being refused. I get the same result when I try to access it locally. The Kubernetes node is run in minikube.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql_db
          imagePullPolicy: Never
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
And here's my yaml for persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Users/Work/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After this, running minikube service list gives me:
default | mysql-service | http://192.168.99.101:31613
However, I cannot access the database from my application or from my local machine.
What am I missing or did I misconfigure something?
EDIT:
I do not define any env variables here, since the Docker image already contains a running MySQL DB and some scripts are also run inside the image.
MySQL probably did not start; confirm it by checking the logs: kubectl get pods | grep mysql; kubectl logs -f $POD_ID. Remember that you have to specify the environment variables MYSQL_DATABASE and MYSQL_ROOT_PASSWORD for MySQL to start. If you don't want to set a root password, set the corresponding variable (e.g. MYSQL_ALLOW_EMPTY_PASSWORD) instead. Here is an example MySQL YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql_db
          imagePullPolicy: Never
          env:
            - name: MYSQL_DATABASE
              value: main_db
            - name: MYSQL_ROOT_PASSWORD
              value: s4cur4p4ss
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Ok, I figured it out. After looking through the logs I noticed the error Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied).
I had to add this to my docker image when building:
RUN usermod -u 1000 mysql
After rebuilding the image everything started working. Thank you guys.
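If rebuilding the image is not an option, one alternative sketch is an initContainer that fixes ownership of the data directory before MySQL starts (this assumes the official image's mysql user with uid/gid 999; adjust to whatever user your image actually runs as):
initContainers:
  - name: fix-permissions
    image: busybox:1.36
    command: ["sh", "-c", "chown -R 999:999 /var/lib/mysql"]   # make the mounted data dir writable by the mysql user
    volumeMounts:
      - name: mysql-persistent-storage
        mountPath: /var/lib/mysql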
I thought I was connecting to my DB server correctly, but I was wrong. My DB deployment was online (tested with kubectl exec -it xxxx -- bash and then mysql -u root --password=$MYSQL_ROOT_PASSWORD) but that wasn't the problem.
I made the simple mistake of getting my Service and Deployment labels confused. My DB Service used a different label than what my Joomla ConfigMap had specified as the MySQL host.
To summarize, the DB Service YAML was:
metadata:
  labels:
    app: fnjoomlaopencart-db-service
and the Joomla configMap yaml needed
data:
  # point to the DB service
  MYSQL_HOST: fnjoomlaopencart-db-service
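What MYSQL_HOST actually has to match is the Service's metadata.name, since that is the DNS name the ConfigMap points to; a sketch of a matching Service (the pod selector label here is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: fnjoomlaopencart-db-service   # this DNS name is what MYSQL_HOST must point to
  labels:
    app: fnjoomlaopencart-db-service
spec:
  selector:
    app: fnjoomlaopencart-db          # must match the DB pod's labels (hypothetical)
  ports:
    - port: 3306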

How to Install Mysql in Kubernetes with GCE

I have tried multiple times to install MySQL with Kubernetes 1.8 on Google Container Engine by following the tutorial from the Kubernetes docs. The PV, PVC, and Service are created successfully, but the pod always gives me the error:
PersistentVolumeClaim is not bound: "mysql-pv-claim" (repeated 3 times)
When I run kubectl get pvc it shows as bound successfully. I don't know where I went wrong.
Here is my deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
From the docs you linked:
You need to either have a dynamic PersistentVolume provisioner with a default StorageClass, or statically provision PersistentVolumes yourself to satisfy the PersistentVolumeClaims used here.
You need to define a default StorageClass for GCE. Something like:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a, us-central1-b
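Note that a StorageClass only becomes the cluster default when it carries the default-class annotation; a minimal sketch reusing the class above:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
With a default class in place, the PVC from the question, which does not set storageClassName, is provisioned dynamically.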
I just found out that the problem was the nodes I used.
I had used 3 micro-type nodes, which could not give the MySQL instance enough resources; when I upgraded them to small or standard nodes, it worked as it should.