How do I specify where persistent volume files are stored - MySQL

I'm starting to learn Kubernetes by creating a LAMP stack. For my MySQL database I have created a persistent volume and a persistent volume claim, but now I've run into a problem: I can't find the place where my PV files are stored.
This is my configuration for now:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/lamp/pvstorage"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And then at the end of my MySQL deployment file I have these lines:
        volumeMounts:
          - name: mysql-persistent-storage
            mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pvc
I was hoping that everything from the Pod's /var/lib/mysql would be backed up at the host's /lamp/pvstorage directory, but that doesn't seem to happen (the files are clearly persisted somewhere else, since the database is always recreated after deleting the MySQL Pod).
So my question is: what am I doing wrong, and how do I specify that location?

TL;DR
To access the files stored on a PV/PVC with a hostPath inside a minikube instance started with --driver=docker, you will need to log in to the host (the minikube instance) directly, for example with $ minikube ssh, and access the files from there: ls /SOME_PATH
Explanation
The topics of Persistent Volumes, Storage Classes, and the resources that handle the whole process are quite broad and need to be investigated on a case-by-case basis.
A good baseline can be gained from the official documentation and the documentation of the Kubernetes solution used:
Kubernetes.io: Docs: Concepts: Storage: Persistent Volumes
Minikube.sigs.k8s.io: Docs
More often than not, when using minikube (the exception being --driver=none), some kind of containerization/virtualization layer surrounding minikube is used to isolate the environment and allow for easier creation and destruction of the instance. This means that all of the resources are stored inside some kind of "box" (a VirtualBox VM, a Docker container, etc.).
As in this example, the files created by mysql were stored not on the host running minikube but inside the minikube instance itself. When you create minikube with --driver=docker, you create a Docker container that has all of the components inside of it. Furthermore, you are using a PersistentVolume with the following part of the .spec:
  hostPath:
    path: "/lamp/pvstorage"
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod.
-- Kubernetes.io: Docs: Concepts: Storage: Volumes: hostPath
Example
As an example of how you can access the files:
Follow the guide at:
Kubernetes.io: Docs: Tasks: Run application: Run single instance stateful application
Check that the mysql Pod is in the Running state ($ kubectl get pods)
Run:
$ minikube ssh
ls /mnt/data (or whatever path is specified in the hostPath)
docker@minikube:~$ ls /mnt/data/
auto.cnf ib_logfile0 ib_logfile1 ibdata1 mysql performance_schema
Additional resources:
Minikube.sigs.k8s.io: Docs: Drivers
Minikube.sigs.k8s.io: Docs: Commands: SSH
If your use case requires having a host directory mounted to a specific directory inside the minikube instance, you can opt for:
$ minikube mount SOURCE:DESTINATION
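For example, to expose a directory from the machine running minikube at the hostPath used in the question (both paths below are placeholders for illustration, adjust them to your setup):
$ minikube mount /home/user/lamp-data:/lamp/pvstorage
Note that the mount only exists while this command keeps running, so leave it open in a separate terminal.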

Related

How to mount persistent volume of specific directory path to default MySQL data dir in Openshift

From the OpenShift documentation https://docs.openshift.com/enterprise/3.1/using_images/db_images/mysql.html, it is stated that the default data directory of MySQL is set to /var/lib/mysql/data. How do I mount my specific persistent volume path to that MySQL data directory path in OpenShift? As far as I know, in Docker there is this command:
docker run -d -v myvol2:/var/lib/mysql/data mysql:latest
but is there an equivalent of this command in OpenShift?
Several configuration steps have to be done:
Create DeploymentConfig
Create PersistentVolume (see persistent volume)
Create PersistentVolumeClaim
Add volume and mount point to DeploymentConfig (see adding volumes)
Creating the new PersistentVolumeClaim and adding the mount point to the DeploymentConfig can be done with one command:
oc set volume dc mysql --add --name=mysql-volume -t pvc --claim-name=mysql-pvc --claim-size=1Gi --claim-mode='ReadWriteMany' --mount-path=/var/lib/mysql/data
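To verify the result (these checks are a suggestion, not part of the original steps), you can list the claim and the volumes attached to the DeploymentConfig:
oc get pvc mysql-pvc
oc set volume dc/mysql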
The PersistentVolume should be added through oc apply, e.g. an NFS PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /tmp
    server: 172.17.0.2
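Assuming the manifest above is saved as mysql-pv.yaml (the filename is arbitrary), apply it and confirm the PV exists with:
oc apply -f mysql-pv.yaml
oc get pv pv0001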

Can't Share a Persistent Volume Claim for an EBS Volume between Apps

Is it possible to share a single persistent volume claim (PVC) between two apps (each using a pod)?
I read: Share persistent volume claims amongst containers in Kubernetes/OpenShift but didn't quite get the answer.
I tried to add a PHP app and a MySQL app (with persistent storage) within the same project. I deleted the original persistent volume (PV) and created a new one with ReadWriteMany mode. I set the root password of the MySQL database, and the database works.
Then I added storage to the PHP app using the same persistent volume claim with a different subPath. I found that I can't turn on both apps: after I turn one on, when I try to turn on the other one, it gets stuck at creating the container.
MySQL .yaml of the deployment step at OpenShift:
...
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: mysql
    spec:
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql
      containers:
        - name: mysql
          ...
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql/data
              subPath: mysql/data
          ...
          terminationMessagePath: /dev/termination-log
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
PHP .yaml from deployment step:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: wiki2
        deploymentconfig: wiki2
    spec:
      volumes:
        - name: volume-959bo   <<----
          persistentVolumeClaim:
            claimName: mysql
      containers:
        - name: wiki2
          ...
          volumeMounts:
            - name: volume-959bo
              mountPath: /opt/app-root/src/w/images
              subPath: wiki/images
          terminationMessagePath: /dev/termination-log
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
The volume mount names are different, but that shouldn't prevent the two pods from sharing the PVC. Or is the problem that they can't both mount the same volume at the same time? I can't get the termination log at /dev because if the pod can't mount the volume, it doesn't start, so I can't get the log.
The PVC's .yaml (oc get pvc -o yaml)
apiVersion: v1
items:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
        pv.kubernetes.io/bind-completed: "yes"
        pv.kubernetes.io/bound-by-controller: "yes"
        volume.beta.kubernetes.io/storage-class: ebs
        volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
      creationTimestamp: YYYY-MM-DDTHH:MM:SSZ
      name: mysql
      namespace: abcdefghi
      resourceVersion: "123456789"
      selfLink: /api/v1/namespaces/abcdefghi/persistentvolumeclaims/mysql
      uid: ________-____-____-____-____________
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      volumeName: pvc-________-____-____-____-____________
    status:
      accessModes:
        - ReadWriteMany
      capacity:
        storage: 1Gi
      phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Suspicious Entries from oc get events
Warning FailedMount {controller-manager }
Failed to attach volume "pvc-________-____-____-____-____________"
on node "ip-172-__-__-___.xx-xxxx-x.compute.internal"
with:
Error attaching EBS volume "vol-000a00a00000000a0" to instance
"i-1111b1b11b1111111": VolumeInUse: vol-000a00a00000000a0 is
already attached to an instance
Warning FailedMount {kubelet ip-172-__-__-___.xx-xxxx-x.compute.internal}
Unable to mount volumes for pod "the pod for php app":
timeout expired waiting for volumes to attach/mount for pod "the pod".
list of unattached/unmounted volumes=
[volume-959bo default-token-xxxxx]
I tried to:
- turn on the MySQL app first, and then try to turn on the PHP app: the PHP app can't start
- turn off both apps
- turn on the PHP app first, and then try to turn on the MySQL app: the MySQL app can't start
The strange thing is that the event log never says it can't mount the volume for the MySQL app.
The remaining volumes to mount are either default-token-xxxxx or volume-959bo (the volume name in the PHP app), but never mysql-data (the volume name in the MySQL app).
So the error seems to be caused by the underlying storage you are using, in this case EBS. The OpenShift docs actually specifically state that this is the case for block storage; see here.
I know this will work for both NFS and GlusterFS storage, and I have done this in numerous projects using these storage types, but unfortunately, in your case it's not supported.
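If staying on EBS is a requirement, one possible workaround (a sketch only; the claim name is made up for illustration, and it means the two apps no longer share files) is to give the PHP app its own claim, so that each pod attaches its own volume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wiki2-images   # hypothetical separate claim for the PHP app
spec:
  accessModes:
    - ReadWriteOnce    # EBS can only be attached read-write to a single node
  resources:
    requests:
      storage: 1Gi
The wiki2 deployment would then reference claimName: wiki2-images instead of mysql.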

Unable to run mysql pod in kubernetes with external volume

I have a Google Cloud Container Engine setup. I wanted to spin up a MySQL pod with an external volume.
ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mysql
  name: mysql-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - image: mysql
          name: mysql
          ports:
            - name: mysql
              containerPort: 3306
              hostPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          gcePersistentDisk:
            pdName: mysql-1-disk
            fsType: ext4
When I run the RC without the external volume, MySQL works fine. It breaks with the error below when I try to attach the volume.
Kubernetes POD Error:
Warning FailedSyncError syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 20s restarting failed container=mysql pod=mysql-controller-4hhqs_default(eb34ff46-8784-11e6-8f12-42010af00162)"
Disk (External Volume):
mysql-1-disk is the Google Cloud disk. I tried creating the disk both as a blank disk and from an image (Ubuntu). Both failed with the same error.
The error messages on mounting persistent disks are really not descriptive, from my perspective. Use a blank disk, based on your configuration file.
Some things to check:
Is the pdName exactly the same as in your GCE environment?
Is the disk in the same availability zone (e.g. europe-west1-c) as your cluster? Otherwise it can't be mounted (a quick check is sketched below).
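A quick way to verify both points from your workstation (the zone below is only an example) is:
$ gcloud compute disks describe mysql-1-disk --zone europe-west1-c
and then compare the zone field in the output with the zone of your cluster's nodes.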
Hope this helps.
The problem that you face may be caused by using an RC, not a Pod, to interact with the Persistent Disk.
As it's mentioned in documentation:
A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
Using a PD on a pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.
In this case, I suggest you run MySQL with Persistent Disks, defining the disk connection in the Pod configuration file. You may find a sample configuration here.
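A minimal sketch of such a Pod, reusing the image and disk name from the question (the root password value and everything else here is assumed or trimmed for brevity):
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root_password"   # placeholder only, use a Secret in practice
      ports:
        - containerPort: 3306
      volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-persistent-storage
      gcePersistentDisk:
        pdName: mysql-1-disk   # the pre-existing GCE disk from the question
        fsType: ext4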

Kubernetes + MySQL : Creating custom database and user in a Kubernetes container

I am trying to create a Django + MySQL app using Google Container Engine and Kubernetes. Following the docs for the official MySQL Docker image and the Kubernetes docs for creating a MySQL container, I have created the following replication controller:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - image: mysql:5.6.33
          name: mysql
          env:
            # Root password is compulsory
            - name: "MYSQL_ROOT_PASSWORD"
              value: "root_password"
            - name: "MYSQL_DATABASE"
              value: "custom_db"
            - name: "MYSQL_USER"
              value: "custom_user"
            - name: "MYSQL_PASSWORD"
              value: "custom_password"
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            # This name must match the volumes.name below.
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          gcePersistentDisk:
            # This disk must already exist.
            pdName: mysql-disk
            fsType: ext4
According to the docs, by passing the environment variables MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD, a new user will be created with that password and given rights to the newly created database. But this does not happen. When I SSH into that container, the root password is set, but neither the user nor the database is created.
I have tested this by running locally and passing the same environment variables like this
docker run -d --name some-mysql \
-e MYSQL_USER="custom_user" \
-e MYSQL_DATABASE="custom_db" \
-e MYSQL_ROOT_PASSWORD="root_password" \
-e MYSQL_PASSWORD="custom_password" \
mysql
When I SSH into that container, the database and users are created and everything works fine.
I am not sure what I am doing wrong here. Could anyone please point out my mistake? I have been at this the whole day.
EDIT: 20-Sep-2016
As requested by @Julien Du Bois:
The disk is created. It appears in the cloud console, and when I run the describe command I get the following output.
Command : gcloud compute disks describe mysql-disk
Result:
creationTimestamp: '2016-09-16T01:06:23.380-07:00'
id: '4673615691045542160'
kind: compute#disk
lastAttachTimestamp: '2016-09-19T06:11:23.297-07:00'
lastDetachTimestamp: '2016-09-19T05:48:14.320-07:00'
name: mysql-disk
selfLink: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/disks/mysql-disk
sizeGb: '20'
status: READY
type: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/diskTypes/pd-standard
users:
- https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/instances/gke-cluster-1-default-pool-e0f09576-zvh5
zone: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>
I referred to a lot of tutorials and Google Cloud examples. To run the MySQL Docker container locally, my main reference was the official image page on Docker Hub:
https://hub.docker.com/_/mysql/
This works for me, and locally the created container has a new database and a user with the right privileges.
For Kubernetes, my main reference was the following:
https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
I am just trying to connect to it using a Django container.
I was facing the same issue when using volumes and mounting them to MySQL pods.
As mentioned in the documentation of mysql's docker image:
When you start the mysql image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the docker run command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
So after spinning my wheels, I managed to solve the problem by changing the hostPath of the volume I was creating from "/data/mysql-pv-volume" to "/var/lib/mysql".
Here is a code snippet that might help create the volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete  # for development purposes only
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mysql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Hope that helped.
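Assuming the snippet above is saved as mysql-volumes.yaml (the filename is just an example), it can be applied and verified with:
kubectl apply -f mysql-volumes.yaml
kubectl get pv,pvc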
You set mysql-disk in your deployment and the disk you have is custom-disk. Change pdName to custom-disk and it will work.

Openshift nodes storage configuration

I'm looking for some info about best-practice requirements for OpenShift storage on nodes which will run containers, but I didn't find any clear solution.
My questions would be:
- Is any shared storage mandatory for all nodes?
- Can I control the directory where images will be placed?
- Must NFS directories that will be accessed by containers already be mounted on the node server?
I've been looking for information about this and these are my conclusions:
If you need persistent storage, for example for a DB, a Jenkins master, or any kind of data you want to keep across container restarts, then you have to mount the storage on the nodes that can run the containers requiring that persistent storage.
Mount on the nodes any of these:
NFS, hostPath (single-node testing only, of course, and already mounted), GlusterFS, Ceph, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disk, iSCSI, Fibre Channel
Create persistent volumes in OpenShift.
OpenShift NFS example, creating a file.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /tmp
    server: 172.17.0.2
Create the resource from the file created:
oc create -f file.yaml
Create a claim for the storage; claims will search for available persistent volumes with the required capacity.
A claim will then be used by pods.
For example, let's claim 1Gi; later we will associate the claim with a pod.
Create nfs-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Create the claim from the file created:
oc create -f nfs-claim.yaml
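Optionally, confirm that the claim has found a matching persistent volume before using it in a pod (an extra check, not part of the original steps):
oc get pvc nfs-claim1
The STATUS column should show Bound.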
Create a pod with the storage volume and the claim:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-pod
  labels:
    name: nginx-nfs-pod
spec:
  containers:
    - name: nginx-nfs-pod
      image: fedora/nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfsvol
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfsvol
      persistentVolumeClaim:
        claimName: nfs-claim1
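Assuming the pod definition above is saved as nginx-nfs-pod.yaml (the filename is chosen here only for illustration), create and check it with:
oc create -f nginx-nfs-pod.yaml
oc get pod nginx-nfs-pod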
Some extra options like SELinux settings may be required, but they are well explained here: https://docs.openshift.org/latest/install_config/storage_examples/shared_storage.html
Is any shared storage mandatory for all nodes?
No, shared storage is not mandatory, but it is highly recommended, as most applications will require some stateful storage, which can only really be obtained with a shared storage provider. The following page lists options for such storage providers: https://docs.openshift.org/latest/install_config/persistent_storage/index.html