Kubernetes: mysql pod failed to open log file /var/log/pods/ - mysql

I am following the official tutorial here to run a stateful MySQL pod on a Kubernetes cluster that is already running on GCP. I used the exact same commands to first create the persistent volume and persistent volume claim, and then deployed the contents of the mysql YAML file as per the documentation. The mysql pod is not running and is in RunContainerError state. Checking the logs of this mysql pod shows:
failed to open log file "/var/log/pods/045cea87-6408-11e9-84d3-42010aa001c3/mysql/2.log": open /var/log/pods/045cea87-6408-11e9-84d3-42010aa001c3/mysql/2.log: no such file or directory
Update: As asked by @Matthew in the comments, the result of kubectl describe pods -l app=mysql is provided here:
Name: mysql-fb75876c6-tk6ml
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-mycluster-default-pool-b1c1d316-xv4v/10.160.0.13
Start Time: Tue, 23 Apr 2019 13:36:04 +0530
Labels: app=mysql
pod-template-hash=963143272
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container mysql
Status: Running
IP: 10.52.0.7
Controlled By: ReplicaSet/mysql-fb75876c6
Containers:
mysql:
Container ID: docker://451ec5bf67f60269493b894004120b627d9a05f38e37cb50e9f283e58dbe6e56
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:5ab881bc5abe2ac734d9fb53d76d984cc04031159152ab42edcabbd377cc0859
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: RunContainerError
Last State: Terminated
Reason: ContainerCannotRun
Message: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Exit Code: 128
Started: Tue, 23 Apr 2019 13:36:18 +0530
Finished: Tue, 23 Apr 2019 13:36:18 +0530
Ready: False
Restart Count: 1
Requests:
cpu: 100m
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-jpkzg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jpkzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned default/mysql-fb75876c6-tk6ml to gke-mycluster-default-pool-b1c1d316-xv4v
Normal Pulling 31s kubelet, gke-mycluster-default-pool-b1c1d316-xv4v pulling image "mysql:5.6"
Normal Pulled 22s kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Successfully pulled image "mysql:5.6"
Normal Pulled 4s (x2 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Container image "mysql:5.6" already present on machine
Normal Created 3s (x3 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Created container
Warning Failed 3s (x3 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Error: failed to start container "mysql": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
As asked by @Hanx:
Result of kubectl describe pv mysql-pv-volume
Name: mysql-pv-volume
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"mysql-pv-volume","namespace":""},"spec":{"a...
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/data
HostPathType:
Events: <none>
Result of kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"default"},"spec":{"accessModes":["R...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
Events: <none>
mysql-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

This is because on GKE you do not need to create those volumes and storage classes yourself; the hostPath /mnt/data cannot even be created on the node, as the events above show (mkdir /mnt/data: read-only file system). Those YAML files are completely valid if you want to use minikube or kubeadm, but not on GKE, which takes care of some of these manual steps on its own.
You can use this official guide to run MySQL on GKE, or just use the files below, which I have edited and tested on GKE.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
And mysql Deployment:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-volumeclaim
Make sure you read the linked guide, as it explains the GKE-specific topics.
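After applying these manifests, it is worth confirming that GKE dynamically provisioned the disk and that the pod starts. A minimal check, assuming the resource names used above (output will vary per cluster):
kubectl get pvc mysql-volumeclaim    # STATUS should be Bound, backed by an automatically provisioned GCE persistent disk
kubectl get pods -l app=mysql        # the mysql pod should reach Running once the volume is attached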

Related

Wordpress+Mysql deployment don't get IP address from another pool

My deployment is about Wordpress and MySQL. I have already defined a new IP pool and a new namespace, and I am trying to get my pods to receive an IP address from this newly defined pool, but they never get one.
My namespace file yaml
apiVersion: v1
kind: Namespace
metadata:
  name: produccion
  annotations:
    cni.projectcalico.org/ipv4pools: ippool
my pool code
calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ippool
spec:
  cidr: 192.169.0.0/24
  blockSize: 29
  ipipMode: Always
  natOutgoing: true
EOF
My mysql deployment is
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  namespace: produccion
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 31066
  selector:
    app: wordpress
    tier: mysql
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  namespace: produccion
  annotations:
    cni.projectcalico.org/ipv4pools: ippool
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: PASS
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: "/var/lib/mysql"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
My wordpress deployment
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: produccion
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
    nodePort: 30999
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: produccion
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress
        name: wordpress
        env:
        - name: WORDPRESS_DB_NAME
          value: wordpress
        - name: WORDPRESS_DB_HOST
          value: IP_Address:31066
        - name: WORDPRESS_DB_USER
          value: root
        - name: WORDPRESS_DB_PASSWORD
          value: PASS
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: "/var/www/html"
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-persistent-storage
Additionally, I also have two PV yaml files, one for each service (mysql and wordpress).
When I apply either deployment, the pods stay in ContainerCreating and the IP column stays at <none>.
produccion wordpress-mysql-74578f5d6d-knzzh 0/1 ContainerCreating 0 70m <none> dockerc8.tecsinfo-ec.com
If I describe this pod I get the following errors:
Normal Scheduled 88s default-scheduler Successfully assigned produccion/wordpress-mysql-74578f5d6d-65jvt to dockerc8.tecsinfo-ec.com
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cdb1460246562cac11a57073ab12489dc169cb72aa3371cb2e72489544812a9b" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "672a2c35c2bb99ebd5b7d180d4426184613c87f9bc606c15526c9d472b54bd6f" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "de4d7669206f568618a79098d564e76899779f94120bddcee080c75f81243a85" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
I have been following some guides from the Internet, like this one: https://www.projectcalico.org/calico-ipam-explained-and-enhanced/
but even that doesn't work in my lab.
I am pretty new to Kubernetes and I don't know what else to do or check.
Your error is due to invalid values in the YAML, according to the Project Calico documentation here: Using Kubernetes annotations
You will need to provide a list of IP pools as the value in your annotation instead of a single string. The following snippet should work for you.
cni.projectcalico.org/ipv4pools: "[\"ippool\"]"
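Applied to the namespace manifest from the question, that looks like this (same manifest, only the annotation value changes to a JSON list):
apiVersion: v1
kind: Namespace
metadata:
  name: produccion
  annotations:
    # the value must be a JSON list of pool names, not a bare string
    cni.projectcalico.org/ipv4pools: '["ippool"]'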

mysql service pending in kubernetes

I created a .yaml file to run a MySQL service on Kubernetes for my internal application, but the database is unreachable. I can reach the application and also phpmyadmin, but the database is not working properly: I'm stuck with Pending status on the mysql pod.
.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: mysql
        image: mysql
        imagePullPolicy: Never
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: flaskapi-cred
              key: db_root_password
        ports:
        - containerPort: 3306
          name: db-container
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  ports:
  - port: 3306
    protocol: TCP
    name: mysql
  selector:
    app: db
  type: LoadBalancer
kubectl get all output is:
NAME READY STATUS RESTARTS AGE
pod/flaskapi-deployment-59bcb745ff-gl8xn 1/1 Running 0 117s
pod/mysql-99fb77bf4-sbhlj 0/1 Pending 0 118s
pod/phpmyadmin-deployment-5fc964bf9d-dk59t 1/1 Running 0 118s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flaskapi-deployment 1/1 1 1 117s
deployment.apps/mysql 0/1 1 0 118s
deployment.apps/phpmyadmin-deployment 1/1 1 1 118s
I already did docker pull mysql.
Edit
Name: mysql-99fb77bf4-sbhlj
Namespace: z2
Priority: 0
Node: <none>
Labels: app=db
pod-template-hash=99fb77bf4
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-99fb77bf4
Containers:
mysql:
Image: mysql
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'db_root_password' in secret 'flaskapi-secrets'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gmbnd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-gmbnd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gmbnd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m44s default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Warning FailedScheduling 3m44s default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
You are missing the volume to attach to the pod/deployment. The PVC is required because your deployment configuration uses it.
You can see it clearly in the event: persistentvolumeclaim "mysql-pv-claim" not found
You can apply the YAML below and try. Note that your deployment runs in the cust namespace, so the PVC has to be created in that namespace as well.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  # PVCs are namespaced: this must match the namespace of the deployment (cust)
  namespace: cust
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
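A quick way to verify the claim binds before the pod schedules, assuming the manifest above is saved as mysql-pv.yaml (the file name is just an example):
kubectl apply -f mysql-pv.yaml
kubectl get pvc mysql-pv-claim -n cust    # STATUS should switch to Bound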
As the error message clearly shows: persistentvolumeclaim "mysql-pv-claim" not found. So you need to provision a PersistentVolumeClaim (PVC).
There is static & dynamic provisioning, but I'll explain static provisioning here, as it is relatively easy to understand and set up. You need to create a PersistentVolume (PV) which the PVC will use. There are various types of Volumes, which you can read about here.
Which type of Volume you create is your choice, depending on your environment and needs. A simple example is the hostPath volume type.
Create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  namespace: cust
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # The configuration here specifies that the volume is at /tmp/data on the cluster's Node
    path: "/tmp/data"
And then create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: cust
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: mysql-pv-volume
Once the PVC is successfully created and bound, your deployment will go through.
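A short way to watch that happen, using the names from the manifests above:
kubectl get pv mysql-pv-volume            # should report Bound once the claim matches
kubectl get pvc mysql-pv-claim -n cust    # STATUS should be Bound
kubectl get pods -n cust                  # the mysql pod should leave Pending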

WordPress + MySQL deployed in Kubernetes - MySQL Connection Error

A Kubernetes scenario with Wordpress + MySQL in a local environment.
The Wordpress pod is unable to connect to the MySQL database, with the following error in the Wordpress pod logs:
MySQL Connection Error: (1045) Access denied for user 'root'@'10.44.0.5' (using password: YES)
Warning: mysqli::mysqli(): (HY000/1045): Access denied for user 'root'@'10.44.0.5' (using password: YES) in - on line 22
Instructions taken from kubernetes.io at link. The only change I made was creating a Secret resource to store the password, referenced from both the MySQL and Wordpress containers.
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  namespace: default
data:
  # base64 of password123; note this value ends in "K" because it was encoded
  # with a trailing newline (echo without -n), so it decodes to "password123\n"
  password: cGFzc3dvcmQxMjMK
type: Opaque
Both pods are in the default namespace on node1, which is a worker node:
NAME READY STATUS RESTARTS AGE IP NODE
wordpress-554dfbbc47-hnr4n 0/1 Error 1 66s 10.44.0.5 node1
wordpress-mysql-5477cbdfbf-29w2r 1/1 Running 0 74s 10.44.0.4 node1
I have no MySQL skills, but if I get a bash shell in the MySQL container and execute:
# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Here the Service output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
wordpress LoadBalancer 10.107.114.255 192.168.1.83 80:32336/TCP
wordpress-mysql ClusterIP None <none> 3306/TCP
Some env variables from MySql Pod:
....
HOSTNAME=wordpress-mysql-5477cbdfbf-29w2r
MYSQL_MAJOR=5.6
MYSQL_ROOT_PASSWORD=password123
MYSQL_VERSION=5.6.50-1debian9
....
PersistentVolumes are working fine.
I'm quite stuck on how to proceed with troubleshooting. Help would be appreciated.
After testing different images for MySQL and Wordpress and reading the useful pages on hub.docker.com (mysql & wordpress), I got the web application stack working.
The configuration:
MySQL:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.7
        imagePullPolicy: IfNotPresent
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: root-pass
              key: password
        - name: MYSQL_DATABASE
          value: mysql
        - name: MYSQL_USER
          value: mysql
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      nodeSelector:
        storage: local
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Wordpress:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
  externalIPs:
  - 192.168.1.83
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress
        name: wordpress
        imagePullPolicy: IfNotPresent
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        - name: WORDPRESS_DB_USER
          value: mysql
        - name: WORDPRESS_DB_NAME
          value: mysql
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      nodeSelector:
        storage: local
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
Output of the PersistentVolumeClaims (kubectl get pvc):
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
mysql-pv-claim Bound persistent-volume-mysql 4Gi RWO local-storage
wp-pv-claim Bound persistent-volume-wordpress 2Gi RWO local-storage
Secrets:
apiVersion: v1
kind: Secret
metadata:
  name: root-pass
  namespace: default
data:
  password: cGFzc3dvcmQ=
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  namespace: default
data:
  password: cGFzc3dvcmQ=
type: Opaque
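For what it's worth, the same secrets can be created from literals, which avoids the trailing-newline pitfall of piping echo to base64 (both values above decode to "password"):
kubectl create secret generic root-pass --from-literal=password=password
kubectl create secret generic mysql-pass --from-literal=password=password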
Notes for my example configuration:
on node1 I created the directories /mysql/data & /wordpress/data (mount points for the mysql and wordpress containers).
image used for mysql -> mysql:5.7
image used for wordpress -> wordpress
added environment variables according to the documentation of the mysql and wordpress images.
Did you apply your secret? Is the secret available in your kube environment?
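A quick way to check, using the secret name from the question (the jsonpath/base64 pipe is just one way to inspect the stored value):
kubectl get secret mysql-pass -n default
kubectl get secret mysql-pass -o jsonpath='{.data.password}' | base64 --decode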

Kubernetes Persistent Volumes With EBS (in EC2 instance)

Currently I am trying to get volume persistence for my MySQL database using Kubernetes with kubeadm.
The environment is based on an Amazon EC2 instance using EBS storage disks.
As you can see below, a StorageClass, a PersistentVolume, and a PersistentVolumeClaim have been implemented in order to have MySQL persistence.
However, an error occurs when I try to deploy the mysql pod (see the events below).
mysql-pv.yml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 5Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
mysql.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7.30
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: MYPASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 31306
  selector:
    app: mysql
This is my mysql pod description:
Name: mysql-5c9788fc65-jq2nh
Namespace: default
Priority: 0
Node: ip-172-31-31-210/172.31.31.210
Start Time: Sat, 23 May 2020 12:19:24 +0000
Labels: app=mysql
pod-template-hash=5c9788fc65
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-5c9788fc65
Containers:
mysql:
Container ID:
Image: mysql:5.7.30
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: MYPASS
Mounts:
/data/ from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cshk2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-cshk2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-cshk2
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Here is the error I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler
Successfully assigned default/mysql-5c9788fc65-jq2nh to ip-172-31-31-210
Warning FailedMount 39m kubelet, ip-172-31-31-210 MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv
Output: Running scope as unit: run-r11fefbbda1d241c2985931d3adaaa969.scope
mount: /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 does not exist.
Warning FailedMount 39m kubelet, ip-172-31-31-210 MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Can someone help me?
Check the state of the PV and the PVC, to see whether the PVC is in Bound state or not:
kubectl describe pvc mysql-pv-claim
kubectl describe pv mysql-pv
Do you have the EBS CSI driver installed?
Another possible reason: I think you missed adding the --cloud-provider=aws option, which is required by the cloud controller manager (CCM) for the nodes. Check out this similar issue.
The following link has all the IAM permissions and a working example of how to create and mount an EBS volume in Kubernetes: docs on cluster configuration for using EBS.
With a kubeadm setup, the configuration is defined in /var/lib/kubelet/config.yaml and /var/lib/kubelet/kubeadm-flags.env.
If the cluster was deployed using kubeadm, define the environment variable on all nodes in kubeadm-flags.env.
To resolve this manually, add the --cloud-provider=aws flag to kubeadm-flags.env (an illustrative file is shown after the restart command) and restart the services, which will resolve the issue:
systemctl daemon-reload && systemctl restart kubelet
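For illustration only: the flag ends up inside KUBELET_KUBEADM_ARGS in /var/lib/kubelet/kubeadm-flags.env. The other flags here are placeholders; your generated file will contain whatever kubeadm produced for your cluster:
KUBELET_KUBEADM_ARGS="--cloud-provider=aws --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"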
Or provide the following configuration to kubeadm (the original example was written for OpenStack; it is changed to aws here).
Check the following blog for a better understanding.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
Here is some additional information:
kubectl describe pvc mysql-pv-claim :
Name: mysql-pv-claim
Namespace: default
StorageClass: standard
Status: Bound
Volume: mysql-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-5c9788fc65-jq2nh
Events: <none>
kubectl describe pv mysql-pv :
Name: mysql-pv
Labels: type=amazonEBS
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: standard
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-06212746d87534157
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
lsblk :
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 18M 1 loop /snap/amazon-ssm-agent/1566
loop1 7:1 0 93.9M 1 loop /snap/core/9066
loop2 7:2 0 93.8M 1 loop /snap/core/8935
nvme0n1 259:0 0 10G 0 disk
nvme1n1 259:1 0 15G 0 disk
└─nvme1n1p1 259:2 0 15G 0 part /
I want to use nvme0n1.
I don't have kubelet or kube-controller-manager log files.

How to use PersistentVolume for MySQL data in Kubernetes

I am developing a database environment on Minikube.
I'd like to persist MySQL data using the PersistentVolume feature of Kubernetes.
However, when the volume is mounted at /var/lib/mysql (the MySQL data directory), an error occurs when starting the MySQL server and it will not start up.
kubernetes-config.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs001-pv
  labels:
    app: nfs001-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
  nfs:
    path: /share/mydata
    server: 192.168.99.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  selector:
    matchLabels:
      app: nfs001-pv
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: nfs-claim
---
apiVersion: v1
kind: Service
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  type: NodePort
  ports:
  - port: 3306
    nodePort: 30001
  selector:
    app: sk-app
How can I launch it?
-- Postscript --
When I tried "kubectl logs", I got following error message.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
When I tried "kubectl describe xxx", I got following results.
kubectl describe pv:
Name: nfs001-pv
Labels: app=nfs001-pv
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/nfs-claim
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Gi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.99.1
Path: /share/mydata
ReadOnly: false
Events: <none>
kubectl describe pvc:
Name: nfs-claim
Namespace: default
StorageClass:
Status: Bound
Volume: nfs001-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 1Gi
Access Modes: RWX
Events: <none>
kubectl describe deployment:
Name: sk-app
Namespace: default
CreationTimestamp: Tue, 25 Sep 2018 14:22:34 +0900
Labels: app=sk-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sk-app
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sk-app
Containers:
sk-app:
Image: mysql:5.7
Port: 3306/TCP
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mydata (rw)
Volumes:
mydata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: sk-app-d58dddfb (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 23s deployment-controller Scaled up replica set sk-app-d58dddfb to 1
Volumes look good, so it seems you just have a permission issue on the root of your NFS volume, which gets mounted as /var/lib/mysql in your container.
You can:
1) Mount that NFS volume using regular nfs mount commands and run (see the sketch at the end of this answer):
chmod 777 .  # This gives rwx to anybody, so be mindful.
2) Run an initContainer in your deployment, similar to this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      initContainers:
      - name: init-mysql
        image: busybox
        command: ['sh', '-c', 'chmod 777 /var/lib/mysql']
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: nfs-claim
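For option 1, a minimal sketch assuming the NFS export from the question (192.168.99.1:/share/mydata) and a scratch mount point:
mkdir -p /mnt/nfs
mount -t nfs 192.168.99.1:/share/mydata /mnt/nfs
cd /mnt/nfs && chmod 777 .   # relax permissions so the mysql container can set up its data dir
umount /mnt/nfs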