I have a Google Container Engine cluster set up. I want to spin up a MySQL pod with an external volume.
ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mysql
  name: mysql-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - image: mysql
        name: mysql
        ports:
        - name: mysql
          containerPort: 3306
          hostPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        gcePersistentDisk:
          pdName: mysql-1-disk
          fsType: ext4
When I run the RC without the external volume, MySQL works fine. It breaks with the error below when I try to attach the volume.
Kubernetes POD Error:
Warning FailedSyncError syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 20s restarting failed container=mysql pod=mysql-controller-4hhqs_default(eb34ff46-8784-11e6-8f12-42010af00162)"
Disk (external volume):
mysql-1-disk is the Google Cloud disk. I tried creating the disk both as a blank disk and from the ubuntu image. Both failed with the same error.
From my perspective, the error messages on mounting persistent disks are really not descriptive. Given your configuration file (fsType: ext4), use a blank disk rather than a disk created from an image.
Some things to check:
Is the pdName exactly the same as the disk name in your GCE environment?
Is the disk in the same availability zone (e.g. europe-west1-c) as your cluster? Otherwise it can't be mounted. You can verify both with gcloud, as shown below.
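For example (a sketch only; mysql-1-disk is the disk name from the question and the zone is a placeholder), you can compare the disk's zone with your cluster's zone:

$ gcloud compute disks describe mysql-1-disk --zone=europe-west1-c
$ gcloud container clusters list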
Hope this helps.
The problem you face may be caused by using an RC, rather than a Pod, to interact with the persistent disk.
As mentioned in the documentation:
A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
Using a PD on a pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.
In this case, I suggest running MySQL with persistent disks by defining the disk connection in a Pod configuration file. You can find a sample configuration here.
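A minimal sketch of such a Pod, using the disk and labels from the question (the MYSQL_ROOT_PASSWORD value is only a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "changeme"   # placeholder; prefer a Secret in practice
    ports:
    - name: mysql
      containerPort: 3306
    volumeMounts:
    - name: mysql-persistent-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-persistent-storage
    gcePersistentDisk:
      pdName: mysql-1-disk
      fsType: ext4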
I have a container image that I want to run in minikube. My container image has MySQL, Redis, and some other components needed to run my application. I have an external application that needs to connect to this MySQL server. The container image is written such that it starts the MySQL and Redis servers on startup.
I thought I could access my MySQL server from outside if I ran the container image inside the Docker daemon in minikube after setting the environment as below:
eval $(minikube docker-env)
But this didn't help, since the MySQL server is not accessible on port 3306.
I tried a second method: creating a pod. I created the deployment using the YAML file below:
apiVersion: v1
kind: Service
metadata:
  name: c-service
spec:
  selector:
    app: c-app
  ports:
  - protocol: "TCP"
    port: 3306
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: c-app
spec:
  selector:
    matchLabels:
      app: c-app
  replicas: 3
  template:
    metadata:
      labels:
        app: c-app
    spec:
      containers:
      - name: c-app
        image: c-app-latest:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
After creating the deployment, the containers start creating. As I said before, my image is designed to start the MySQL and Redis servers on startup.
In Docker Desktop, running the same image starts the servers, opens the ports, and stays up, and I can also perform operations in the container terminal.
In minikube, it starts the servers, and once they have started, minikube marks the pod's status as Completed and tries to restart it. It didn't open any ports. It just starts and restarts again and again until it eventually ends up in a CrashLoopBackOff error.
How I did it before:
I have a container image that I previously ran in Docker Desktop. It ran fine there: it starts all the servers, establishes a connection with my external app, and also lets me interact with the container terminal.
My current requirements:
I want to run the same image in minikube that I ran in Docker Desktop. On running the image, it should open ports for external connections (say port 3306 for the MySQL connection) and I should be able to interact with the container through:
kubectl exec -it <pod> -- bin/bash
More importantly, I don't want the pod restarting again and again. It should start once, start all the servers, open the ports, and stay up.
Sorry for the long post. Can anyone please help me with this?
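(For illustration only: a common pattern when an image's startup script launches its services and then exits, which makes Kubernetes treat the container as finished, is to keep the main process in the foreground. The script name /start-servers.sh below is hypothetical.)

apiVersion: v1
kind: Pod
metadata:
  name: c-app-test
spec:
  containers:
  - name: c-app
    image: c-app-latest:latest
    imagePullPolicy: Never
    # Hypothetical startup script; the key point is that the container's
    # main process must not exit after launching MySQL and Redis.
    command: ["/bin/sh", "-c"]
    args: ["/start-servers.sh && tail -f /dev/null"]
    ports:
    - containerPort: 3306
    - containerPort: 6379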
I'm starting to learn Kubernetes by creating a LAMP stack. For my MySQL database I have created a persistent volume and a persistent volume claim, but now I've run into a problem: I can't find the place where my PV's files are stored.
This is my configuration for now:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/lamp/pvstorage"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And then at the end of my MySQL deployment file I have these lines:
volumeMounts:
- name: mysql-persistent-storage
  mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
  persistentVolumeClaim:
    claimName: mysql-pvc
I was hoping that everything from the Pod's /var/lib/mysql would be backed up to the host's /lamp/pvstorage directory, but that doesn't seem to happen (the files are backed up somewhere else from what I can see; the database is always recreated after deleting the MySQL Pod).
So my question is: what am I doing wrong, and how do I specify that location?
TL;DR
To access the files stored on a PVC backed by a hostPath within a minikube instance that uses --driver=docker, you will need to log in directly to the host (minikube), for example with $ minikube ssh, and access the files directly: ls /SOME_PATH
Explanation
The topics of Persistent Volumes, Storage Classes and the resources that handle the whole process are quite broad and need to be investigated on a case-by-case basis.
A good source of baseline knowledge is the official documentation and the documentation of the Kubernetes solution used:
Kubernetes.io: Docs: Concepts: Storage: Persistent Volumes
Minikube.sigs.k8s.io: Docs
More often than not, when using minikube (see the exception with --driver=none), some kind of containerization/virtualization (the layer encircling minikube) is used to isolate the environment and also allow for easier creation and destruction of the instance. This means all of the resources are stored inside some "box" (a VirtualBox VM, a Docker container, etc.).
As in this example, the files created by mysql were stored not on the host running minikube but inside the minikube instance itself. When you create minikube with --driver=docker, you create a Docker container that has all of the components inside it. Furthermore, the PersistentVolume uses the following part of the .spec:
hostPath:
  path: "/lamp/pvstorage"
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod.
-- Kubernetes.io: Docs: Concepts: Storage: Volumes: hostPath
Example
As an example of how you can access the files:
Follow the guide at:
Kubernetes.io: Docs: Tasks: Run application: Run single instance stateful application
Check if mysql is in Running state ($ kubectl get pods)
Run:
$ minikube ssh
ls /mnt/data (or whatever path is specified in the hostPath)
docker#minikube:~$ ls /mnt/data/
auto.cnf ib_logfile0 ib_logfile1 ibdata1 mysql performance_schema
Additional resources:
Minikube.sigs.k8s.io: Docs: Drivers
Minikube.sigs.k8s.io: Docs: Commands: SSH
If there is a use case for having a host directory mounted into the minikube instance at a specific directory, you can opt for:
$ minikube mount SOURCE:DESTINATION
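For example, to make the question's hostPath visible inside the minikube instance (the paths are an assumption based on that configuration), you could run the following in a separate terminal and leave it running:

$ minikube mount /lamp/pvstorage:/lamp/pvstorage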
I'm trying to run a basic MySQL 8 pod in Kubernetes. I did a basic deployment without any resource limits whatsoever. What I do notice is that the memory consumption is high: I have a nearly empty database (I think there are at most 100 rows of basic data) and the pod is consuming 750M of memory.
Is there anything that you can do about this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:8.0
        resources:
        env:
        - name: MYSQL_DATABASE
          value: mydb
        - name: MYSQL_USER
          value: myuser
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: secret
              key: "DATABASE_PASSWORD"
        ports:
        - containerPort: 3306
          name: transport
          protocol: TCP
        volumeMounts:
        - name: db
          mountPath: /var/lib/mysql
          subPath: mysql
      volumes:
      - name: db
        persistentVolumeClaim:
          claimName: db
Best
Pim
I've just run a Docker container with MySQL and it consumes ~400M of RAM with an empty database that is not even being queried by any application.
docker run -itd -e MYSQL_ROOT_PASSWORD=password mysql
docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
a779ef705921 happy_lumiere 2.22% 373.4MiB / 1.943GiB 18.77% 1.05kB / 0B 0B / 0B 38
According to this assessment it seems to me that 750M is not that much.
You can check out these Best Practices for Configuring Optimal MySQL Memory Usage.
In the future, please add more information about your environment (local, kubeadm, minikube, cloud) and scenario. That would make it easier to reproduce or troubleshoot.
I would not say that 750M of RAM is high consumption. As a test, I deployed MySQL 8.0 on my GKE cluster using Helm, based on this chart. I only changed the default MySQL image version to 8.0.
$ helm install sql stable/mysql
Pure instance of MySQL on my cluster without any limits, requests or data:
$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
sql-mysql-6c9489d5b9-m8zmh 8m 376Mi
So if MySQL works with actual data it's normal to consume more resources.
I would say it's normal behaviour.
For general resource use in MySQL, you can check How MySQL Uses Memory.
However, if you are working in a local environment with limited resources, you should specify limits in your YAMLs.
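For illustration, a resources block for the db container from the question might look like this (the request and limit values are arbitrary examples, not recommendations):

containers:
- name: db
  image: mysql:8.0
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"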
What I did to reduce the consumption, mainly, is disable the performance schema. Further, I reduced the default connection limit so that memory is not reserved for connections that are not being used.
It does mean that the server can fall over if there are more users, but it comes down to a trade-off between:
needing more cluster RAM, or failing to run due to RAM exhaustion
hitting the max connection limit
Proper monitoring of both will do the trick.
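As an illustration, both changes can be passed to the official mysql image as server arguments in the container spec (the values below are arbitrary examples, not the poster's exact settings):

containers:
- name: db
  image: mysql:8.0
  args:
  - "--performance-schema=OFF"
  - "--max-connections=50"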
Is it possible to share a single persistent volume claim (PVC) between two apps (each using a pod)?
I read: Share persistent volume claims amongst containers in Kubernetes/OpenShift but didn't quite get the answer.
I tried to add a PHP app and a MySQL app (with persistent storage) within the same project. I deleted the original persistent volume (PV) and created a new one with the read-write-many access mode. I set the root password of the MySQL database, and the database works.
Then I added storage to the PHP app using the same persistent volume claim with a different subPath. I found that I can't turn on both apps: after I turn one on, when I try to turn on the other, it gets stuck at creating the container.
MySQL .yaml of the deployment step at openshift:
...
template:
  metadata:
    creationTimestamp: null
    labels:
      name: mysql
  spec:
    volumes:
    - name: mysql-data
      persistentVolumeClaim:
        claimName: mysql
    containers:
    - name: mysql
      ...
      volumeMounts:
      - name: mysql-data
        mountPath: /var/lib/mysql/data
        subPath: mysql/data
      ...
      terminationMessagePath: /dev/termination-log
      imagePullPolicy: IfNotPresent
    restartPolicy: Always
    terminationGracePeriodSeconds: 30
    dnsPolicy: ClusterFirst
PHP .yaml from deployment step:
template:
  metadata:
    creationTimestamp: null
    labels:
      app: wiki2
      deploymentconfig: wiki2
  spec:
    volumes:
    - name: volume-959bo    <<----
      persistentVolumeClaim:
        claimName: mysql
    containers:
    - name: wiki2
      ...
      volumeMounts:
      - name: volume-959bo
        mountPath: /opt/app-root/src/w/images
        subPath: wiki/images
      terminationMessagePath: /dev/termination-log
      imagePullPolicy: Always
    restartPolicy: Always
    terminationGracePeriodSeconds: 30
    dnsPolicy: ClusterFirst
    securityContext: {}
The volume mount names are different, but that shouldn't prevent the two pods from sharing the PVC. Or is the problem that they can't both mount the same volume at the same time? I can't get the termination log at /dev because if the volume can't be mounted, the pod doesn't start, so I can't get the log.
The PVC's .yaml (oc get pvc -o yaml)
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-class: ebs
      volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    creationTimestamp: YYYY-MM-DDTHH:MM:SSZ
    name: mysql
    namespace: abcdefghi
    resourceVersion: "123456789"
    selfLink: /api/v1/namespaces/abcdefghi/persistentvolumeclaims/mysql
    uid: ________-____-____-____-____________
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
    volumeName: pvc-________-____-____-____-____________
  status:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 1Gi
    phase: Bound
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""
Suspicious Entries from oc get events
Warning FailedMount {controller-manager }
Failed to attach volume "pvc-________-____-____-____-____________"
on node "ip-172-__-__-___.xx-xxxx-x.compute.internal"
with:
Error attaching EBS volume "vol-000a00a00000000a0" to instance
"i-1111b1b11b1111111": VolumeInUse: vol-000a00a00000000a0 is
already attached to an instance
Warning FailedMount {kubelet ip-172-__-__-___.xx-xxxx-x.compute.internal}
Unable to mount volumes for pod "the pod for php app":
timeout expired waiting for volumes to attach/mount for pod "the pod".
list of unattached/unmounted volumes=
[volume-959bo default-token-xxxxx]
I tried to:
turn on the MySQL app first, and then turn on the PHP app (the PHP app can't start)
turn off both apps
turn on the PHP app first, and then turn on the MySQL app (the MySQL app can't start)
The strange thing is that the event log never says it can't mount the volume for the MySQL app.
The remaining volume to mount is either default-token-xxxxx or volume-959bo (the volume name in the PHP app), but never mysql-data (the volume name in the MySQL app).
So the error seems to be caused by the underlying storage you are using, in this case EBS. The OpenShift docs actually specifically state that this is the case for block storage; see here.
I know this will work for both NFS and GlusterFS storage, and I have done this in numerous projects using these storage types, but unfortunately, in your case it's not supported.
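For reference, a ReadWriteMany PersistentVolume backed by NFS might look roughly like this (the server and path are placeholders); a matching ReadWriteMany PVC could then bind to it and be mounted by both pods:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com    # placeholder NFS server
    path: /exports/shared      # placeholder export path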
I'm looking for some info about best-practice requirements for OpenShift storage on nodes that will run containers, but I didn't find any clear answer.
My questions would be:
- is any shared storage mandatory for all nodes?
- can I control the directory where images will be placed?
- must the NFS directories that will be accessed by containers already be mounted on the node server?
I've been looking for information about this and these are my conclusions:
If you need persistent storage, for example for a DB, a Jenkins master, or any kind of storage you want to keep across container restarts, then you have to mount the storage on the nodes that can run the containers requiring that persistent storage.
Mount on the nodes any of these:
NFS, HostPath (single-node testing only, of course, and already mounted), GlusterFS, Ceph, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disk, iSCSI, Fibre Channel
Create persistent volumes in OpenShift.
OpenShift NFS example, creating the file.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /tmp
    server: 172.17.0.2
Create it from the file:
oc create -f file.yaml
Create a claim from the datastore; claims will search for available persistent volumes with the required capacity. The claim will then be used by pods.
For example, let's claim 1 GB; later we will associate the claim with a pod.
Create nfs-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Create it from the file:
oc create -f nfs-claim.yaml
Create a pod with the storage volume and the claim.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-pod
  labels:
    name: nginx-nfs-pod
spec:
  containers:
  - name: nginx-nfs-pod
    image: fedora/nginx
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: nfsvol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-claim1
Some extra options like SELinux settings may be required, but they are well explained here (https://docs.openshift.org/latest/install_config/storage_examples/shared_storage.html).
Is any shared storage mandatory for all nodes?
No, shared storage is not mandatory, but it is highly recommended, as most applications will require some stateful storage, which can really only be obtained through a shared storage provider. The options for such storage providers are listed at https://docs.openshift.org/latest/install_config/persistent_storage/index.html.