mysql service pending in kubernetes - mysql

I created a .yaml file to deploy a MySQL service on Kubernetes for my internal application, but it's unreachable. I can reach the application and also phpMyAdmin, but the database connection doesn't work. The mysql pod is stuck in Pending status.
.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: Never
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: flaskapi-cred
                  key: db_root_password
          ports:
            - containerPort: 3306
              name: db-container
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  ports:
    - port: 3306
      protocol: TCP
      name: mysql
  selector:
    app: db
  type: LoadBalancer
kubectl get all output is:
NAME                                         READY   STATUS    RESTARTS   AGE
pod/flaskapi-deployment-59bcb745ff-gl8xn     1/1     Running   0          117s
pod/mysql-99fb77bf4-sbhlj                    0/1     Pending   0          118s
pod/phpmyadmin-deployment-5fc964bf9d-dk59t   1/1     Running   0          118s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/flaskapi-deployment     1/1     1            1           117s
deployment.apps/mysql                   0/1     1            0           118s
deployment.apps/phpmyadmin-deployment   1/1     1            1           118s
I already did docker pull mysql.
Edit: here is the kubectl describe output for the mysql pod:
Name:           mysql-99fb77bf4-sbhlj
Namespace:      z2
Priority:       0
Node:           <none>
Labels:         app=db
                pod-template-hash=99fb77bf4
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mysql-99fb77bf4
Containers:
  mysql:
    Image:      mysql
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'db_root_password' in secret 'flaskapi-secrets'>  Optional: false
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gmbnd (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-gmbnd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gmbnd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m44s  default-scheduler  persistentvolumeclaim "mysql-pv-claim" not found
  Warning  FailedScheduling  3m44s  default-scheduler  persistentvolumeclaim "mysql-pv-claim" not found

You are missing the volume to attach to the pod/deployment. The PVC is required because your deployment configuration references it.
You can see it clearly in the events: persistentvolumeclaim "mysql-pv-claim" not found.
You can apply the YAML below and try again.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
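For example, assuming you save the manifest above as mysql-pv.yaml: a PVC is namespaced and your deployment lives in the cust namespace, so apply the claim there and check that it binds before expecting the pod to schedule:
kubectl apply -f mysql-pv.yaml -n cust     # the PV is cluster-scoped; the PVC lands in cust
kubectl get pvc mysql-pv-claim -n cust     # wait for STATUS to become Bound
kubectl get pods -n cust -l app=db         # the mysql pod should then leave Pending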

As the error message clearly shows, persistentvolumeclaim "mysql-pv-claim" not found, you need to provision a PersistentVolumeClaim (PVC).
There is static and dynamic provisioning, but I'll explain static provisioning here as it is relatively easy to understand and set up. You need to create a PersistentVolume (PV) which the PVC will bind to. There are various types of volumes, which you can read about here.
Which type of volume you create is your choice, depending on your environment and needs. A simple example is the hostPath volume type.
Create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  namespace: cust
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # The configuration here specifies that the volume is at /tmp/data on the cluster's node
    path: "/tmp/data"
And then create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: cust
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: mysql-pv-volume
Once the PVC is successfully created, your deployment shall go through.
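To confirm it, a quick check like the following (the claim and namespace names are taken from the manifests above) should show the claim bound to the volume and the pod scheduled:
kubectl describe pvc mysql-pv-claim -n cust    # Status: Bound, Volume: mysql-pv-volume
kubectl get pods -n cust                       # the mysql pod should move out of Pending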

Related

Mysql deployment deleting my database in kubernetes

I created a mysql deployment that other pods connect to. I access it remotely to create the database and tables, but I saw that at some point in the mysql lifecycle my database gets deleted. How can I keep it from being deleted?
I thought about creating a static pod, but I don't know if that solves my problem. My structure follows below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: rafaelribeirosouza86/shopping:myql
          name: mysql
          imagePullPolicy: Always
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          # secret:
          #   secretName: mysql-pass
          #   items:
          #     - key: password
          persistentVolumeClaim:
            claimName: mysql-pv-claim
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  # clusterIP: None
  ports:
    - port: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
Does anyone have an idea how I can resolve this?

Kubectl get pod shows ErrImageNeverPull mysql

Following this documentation, I am trying to launch mysql with Kubernetes:
deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kazi-db
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: Never
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
mysql-storage.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
service.yml:
apiVersion: v1
kind: Service
metadata:
  name: kazi-db
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
db-secret.yml:
apiVersion: v1
kind: Secret
metadata:
  name: kazi-db
type: kubernetes.io/basic-auth
stringData:
  password: xcvas
I have applied everything with kubectl apply -f ...
The problem appears when I call kubectl get pod:
kazi-db-758b978ccc-7m29n 0/1 ErrImageNeverPull 0 4m48s
I am using Docker Desktop with its integrated Kubernetes.
That may be because 1) imagePullPolicy is set to Never, and 2) the image mysql:5.6 does not seem to be present on the worker node where this pod got scheduled.
The following are the two possible options:
Perform a manual pull of the image mysql:5.6 on all worker nodes using
docker pull mysql:5.6
or change imagePullPolicy to IfNotPresent.
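For the second option, only the pull policy on the mysql container in deployment.yml changes; a minimal sketch of the relevant fragment:
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          # IfNotPresent uses a locally cached image when available and pulls from the
          # registry otherwise, so the pod no longer fails with ErrImageNeverPull.
          imagePullPolicy: IfNotPresent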

Kubernetes Persistent Volumes With EBS (in EC2 instance)

Currently I am trying to get volume persistence for my MySQL database using Kubernetes with kubeadm.
The environment is based on an Amazon EC2 instance using EBS storage disks.
As you can see below, a storage class, a persistent volume and a persistent volume claim have been implemented in order to have MySQL persistence.
However, an error occurs when I try to deploy the mysql pod (see the events below).
mysql-pv.yml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 5Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
mysql.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7.30
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: MYPASSWORD
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 31306
  selector:
    app: mysql
This is my mysql pod description:
Name:           mysql-5c9788fc65-jq2nh
Namespace:      default
Priority:       0
Node:           ip-172-31-31-210/172.31.31.210
Start Time:     Sat, 23 May 2020 12:19:24 +0000
Labels:         app=mysql
                pod-template-hash=5c9788fc65
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mysql-5c9788fc65
Containers:
  mysql:
    Container ID:
    Image:          mysql:5.7.30
    Image ID:
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      MYSQL_ROOT_PASSWORD:  MYPASS
    Mounts:
      /data/ from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cshk2 (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-cshk2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cshk2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Here is the error I get:
Events:
  Type     Reason       Age        From                       Message
  ----     ------       ----       ----                       -------
  Normal   Scheduled    <unknown>  default-scheduler          Successfully assigned default/mysql-5c9788fc65-jq2nh to ip-172-31-31-210
  Warning  FailedMount  39m        kubelet, ip-172-31-31-210  MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv
Output: Running scope as unit: run-r11fefbbda1d241c2985931d3adaaa969.scope
mount: /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 does not exist.
  Warning  FailedMount  39m        kubelet, ip-172-31-31-210  MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Can someone help me?
Check the state of the PV and PVC to see whether the PVC is in Bound state:
kubectl describe pvc mysql-pv-claim
kubectl describe pv mysql-pv
Do you have the EBS CSI driver installed?
Another possible reason: I think you missed adding the --cloud-provider=aws option, which is required by the CCM for the nodes. Check out the similar issue.
The following link has all the IAM permissions and a working example of how to create and mount an EBS volume in Kubernetes, from the docs on cluster configuration for using EBS.
With a kubeadm setup, the configuration is defined in /var/lib/kubelet/config.yaml and /var/lib/kubelet/kubeadm-flags.env.
If the cluster was deployed using kubeadm, define the environment variable on all nodes in kubeadm-flags.env.
To resolve this manually, add the --cloud-provider=aws flag to kubeadm-flags.env and restart the services, which will resolve the issue:
systemctl daemon-reload && systemctl restart kubelet
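For illustration, the flag ends up in the KUBELET_KUBEADM_ARGS line of /var/lib/kubelet/kubeadm-flags.env on each node; the other flags below are only placeholders, so keep whatever your cluster already has there:
# /var/lib/kubelet/kubeadm-flags.env (sketch; merge --cloud-provider=aws into your existing flags)
KUBELET_KUBEADM_ARGS="--cloud-provider=aws --network-plugin=cni"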
Or provide the following configuration for kubeadm; change openstack to aws in your case.
Check the following blog for a better understanding.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
    - name: cloud
      hostPath: "/etc/kubernetes/cloud.conf"
      mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
    - name: cloud
      hostPath: "/etc/kubernetes/cloud.conf"
      mountPath: "/etc/kubernetes/cloud.conf"
Here is some additional information:
kubectl describe pvc mysql-pv-claim :
Name: mysql-pv-claim
Namespace: default
StorageClass: standard
Status: Bound
Volume: mysql-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-5c9788fc65-jq2nh
Events: <none>
kubectl describe pv mysql-pv :
Name: mysql-pv
Labels: type=amazonEBS
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: standard
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-06212746d87534157
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
lsblk :
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 18M 1 loop /snap/amazon-ssm-agent/1566
loop1 7:1 0 93.9M 1 loop /snap/core/9066
loop2 7:2 0 93.8M 1 loop /snap/core/8935
nvme0n1 259:0 0 10G 0 disk
nvme1n1 259:1 0 15G 0 disk
└─nvme1n1p1 259:2 0 15G 0 part /
I want to use nvme0n1.
I don't have the kubelet or kube-controller-manager log files.

Kubernetes: mysql pod failed to open log file /var/log/pods/

I am following the official tutorial here to run a stateful mysql pod on a Kubernetes cluster that is already running on GCP. I used the exact same commands to first create the persistent volume and persistent volume claim, and then deployed the contents of the mysql yaml file as per the documentation. The mysql pod is not running and is in RunContainerError state. Checking the logs of this mysql pod shows:
failed to open log file "/var/log/pods/045cea87-6408-11e9-84d3-42010aa001c3/mysql/2.log": open /var/log/pods/045cea87-6408-11e9-84d3-42010aa001c3/mysql/2.log: no such file or directory
Update: As asked by @Matthew in the comments, the result of kubectl describe pods -l app=mysql is provided here:
Name:               mysql-fb75876c6-tk6ml
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-mycluster-default-pool-b1c1d316-xv4v/10.160.0.13
Start Time:         Tue, 23 Apr 2019 13:36:04 +0530
Labels:             app=mysql
                    pod-template-hash=963143272
Annotations:        kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container mysql
Status:             Running
IP:                 10.52.0.7
Controlled By:      ReplicaSet/mysql-fb75876c6
Containers:
  mysql:
    Container ID:   docker://451ec5bf67f60269493b894004120b627d9a05f38e37cb50e9f283e58dbe6e56
    Image:          mysql:5.6
    Image ID:       docker-pullable://mysql@sha256:5ab881bc5abe2ac734d9fb53d76d984cc04031159152ab42edcabbd377cc0859
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
      Exit Code:    128
      Started:      Tue, 23 Apr 2019 13:36:18 +0530
      Finished:     Tue, 23 Apr 2019 13:36:18 +0530
    Ready:          False
    Restart Count:  1
    Requests:
      cpu:  100m
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-jpkzg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jpkzg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age               From                                               Message
  ----     ------     ----              ----                                               -------
  Normal   Scheduled  32s               default-scheduler                                  Successfully assigned default/mysql-fb75876c6-tk6ml to gke-mycluster-default-pool-b1c1d316-xv4v
  Normal   Pulling    31s               kubelet, gke-mycluster-default-pool-b1c1d316-xv4v  pulling image "mysql:5.6"
  Normal   Pulled     22s               kubelet, gke-mycluster-default-pool-b1c1d316-xv4v  Successfully pulled image "mysql:5.6"
  Normal   Pulled     4s (x2 over 18s)  kubelet, gke-mycluster-default-pool-b1c1d316-xv4v  Container image "mysql:5.6" already present on machine
  Normal   Created    3s (x3 over 18s)  kubelet, gke-mycluster-default-pool-b1c1d316-xv4v  Created container
  Warning  Failed     3s (x3 over 18s)  kubelet, gke-mycluster-default-pool-b1c1d316-xv4v  Error: failed to start container "mysql": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
As asked by @Hanx:
Result of kubectl describe pv mysql-pv-volume
Name: mysql-pv-volume
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"mysql-pv-volume","namespace":""},"spec":{"a...
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/data
HostPathType:
Events: <none>
Result of kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"default"},"spec":{"accessModes":["R...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
Events: <none>
mysql-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
This is because you do not need to create those volumes and storage classes on GKE. Those yaml files are completely valid if you wanted to use minikube or kubeadm, but not in the case of GKE, which can take care of some of those manual steps on its own.
You can use this official guide to run mysql on GKE, or just use the files below, which I edited and tested on GKE.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
And the mysql Service and Deployment:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-volumeclaim
Make sure you read the linked guide as it explains the GKE specific topics there.
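To verify that GKE's default storage class dynamically provisioned the disk, a quick check with standard kubectl commands (the names come from the manifests above) is enough:
kubectl get pvc mysql-volumeclaim    # STATUS should be Bound once the persistent disk exists
kubectl get pods -l app=mysql        # the mysql pod should reach Running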

How to use PersistentVolume for MySQL data in Kubernetes

I am developing a database environment on Minikube.
I'd like to persist MySQL data with the PersistentVolume feature of Kubernetes.
However, an error occurs and the MySQL server will not start up when the volume is mounted at /var/lib/mysql (the MySQL data directory).
kubernetes-config.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs001-pv
  labels:
    app: nfs001-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
  nfs:
    path: /share/mydata
    server: 192.168.99.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  selector:
    matchLabels:
      app: nfs001-pv
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      containers:
        - name: sk-app
          image: mysql:5.7
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mydata
      volumes:
        - name: mydata
          persistentVolumeClaim:
            claimName: nfs-claim
---
apiVersion: v1
kind: Service
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  type: NodePort
  ports:
    - port: 3306
      nodePort: 30001
  selector:
    app: sk-app
How can I launch it?
-- Postscript --
When I tried "kubectl logs", I got following error message.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
When I tried "kubectl describe xxx", I got following results.
kubectl describe pv:
Name: nfs001-pv
Labels: app=nfs001-pv
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/nfs-claim
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Gi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.99.1
Path: /share/mydata
ReadOnly: false
Events: <none>
kubectl describe pvc:
Name: nfs-claim
Namespace: default
StorageClass:
Status: Bound
Volume: nfs001-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 1Gi
Access Modes: RWX
Events: <none>
kubectl describe deployment:
Name: sk-app
Namespace: default
CreationTimestamp: Tue, 25 Sep 2018 14:22:34 +0900
Labels: app=sk-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sk-app
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sk-app
Containers:
sk-app:
Image: mysql:5.7
Port: 3306/TCP
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mydata (rw)
Volumes:
mydata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: sk-app-d58dddfb (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 23s deployment-controller Scaled up replica set sk-app-d58dddfb to 1
The volumes look good, so it looks like you just have a permission issue on the root of your NFS volume, which gets mounted as /var/lib/mysql in your container.
You can:
1) Mount that NFS volume using nfs mount commands and run:
chmod 777 .   # this gives rwx to anybody, so be mindful
2) Run an initContainer in your deployment, similar to this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      initContainers:
        - name: init-mysql
          image: busybox
          command: ['sh', '-c', 'chmod 777 /var/lib/mysql']
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mydata
      containers:
        - name: sk-app
          image: mysql:5.7
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mydata
      volumes:
        - name: mydata
          persistentVolumeClaim:
            claimName: nfs-claim