I have tried multiple times to install MySQL with Kubernetes 1.8 on Google Container Engine by following the tutorial from the Kubernetes page. The PV, PVC and the Service are created successfully, but the Pod always gives me the error
PersistentVolumeClaim is not bound: "mysql-pv-claim" (repeated 3 times)
When I run kubectl get pvc, the claim shows as Bound. I don't know where I went wrong.
Here is my deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
From the docs you linked:
You need to either have a dynamic PersistentVolume provisioner with a default StorageClass, or statically provision PersistentVolumes yourself to satisfy the PersistentVolumeClaims used here.
You need to define a default StorageClass for GCE. Something like:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a, us-central1-b
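Note that the PersistentVolumeClaim above has no storageClassName, so it is only bound automatically if some StorageClass is marked as the cluster default. A sketch of how to mark the class above as the default (you can verify afterwards with kubectl get storageclass, where the default class is shown with a "(default)" suffix):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a, us-central1-b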
I just found out that the problem was the nodes I used.
I was using three micro-type machines, which could not provide enough resources for the MySQL instance; when I upgraded them to small or standard machines, it worked as it should.
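If someone hits the same issue, the pod events and node capacity usually make it obvious (the pod name below is a placeholder):
kubectl describe pod mysql-<pod-id>              # look for FailedScheduling / "Insufficient cpu" or "Insufficient memory" events
kubectl describe nodes | grep -A 6 Allocatable   # compare allocatable resources against the pod's requests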
Related
I'm creating a MySQL pod in Kubernetes. I ran kubectl describe svc mysql to see the endpoints, but even though I know the endpoints I can't connect remotely with HeidiSQL. I'll post my YAMLs below in case anyone can help.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: rafaelribeirosouza86/shopping:myql
        name: mysql
        imagePullPolicy: Always
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        # secret:
        #   secretName: mysql-pass
        #   items:
        #   - key: password
        persistentVolumeClaim:
          claimName: mysql-pv-claim
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  # clusterIP: None
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
I want remote access to MySQL in Kubernetes so I can create the tables.
Just to add to @Jerry's answer, there are two ways you can do this.
FIRST
kubectl exec docs
kubectl exec -it <podname> -n namespace -- <command> -<arguments-as-per-command>
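For example, to open a MySQL shell inside the pod (the pod name below is a placeholder):
kubectl exec -it mysql-5d7444fcb8-xxxxx -n default -- mysql -u root -p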
SECOND
If this is temporary and you want to use a client on your local machine, you can forward the pod's port to your local machine:
kubectl port-forward
$ kubectl port-forward <pod-name> <your-local-port>:<port-on-pod>
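For example, to reach the MySQL pod from HeidiSQL on your own machine, you could forward local port 3306 to the pod's port 3306 (the pod name is a placeholder) and then point HeidiSQL at 127.0.0.1:3306:
$ kubectl port-forward mysql-5d7444fcb8-xxxxx 3306:3306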
I am using K8s with Helm 3 and a MySQL 5.7 database.
I created a MySQL pod whose data persists: the database is created the first time the pod starts, and even if the pod goes down and comes up again, the data is not lost.
I am using a PersistentVolume and a PersistentVolumeClaim.
Here are the YAML files:
MySQL pod:
apiVersion: v1
kind: Pod
metadata:
  name: myproject-db
  namespace: {{ .Release.Namespace }}
  labels:
    name: myproject-db
    app: myproject-db
spec:
  hostname: myproject-db
  subdomain: {{ include "k8s.db.subdomain" . }}
  containers:
  - name: myproject-db
    image: mysql:5.7
    imagePullPolicy: IfNotPresent
    env:
    - name: MYSQL_DATABASE
      value: test
    - name: MYSQL_ROOT_PASSWORD
      value: "12345"
    ports:
    - name: mysql
      protocol: TCP
      containerPort: 3306
    resources:
      requests:
        cpu: 200m
        memory: 500Mi
      limits:
        cpu: 500m
        memory: 600Mi
    volumeMounts:
    - name: mysql-persistence-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-persistence-storage
    persistentVolumeClaim:
      claimName: mysql-pvc
Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
    name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/data/mysql"
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  volumeName: mysql-pv
After running:
helm install myproject myproject/
the database is created and can be used.
If I add records and then stop and remove the database pod, no DB records are lost. Even when I delete the release with helm delete myproject and run helm install myproject myproject/ again, I see that the DB data is kept when the MySQL pod is recreated.
How can I access the data of mysql?
I see no files in /var/lib/mysql, nor in /data/mysql. How can I access those two paths?
Also, how can I permanently delete the data? (Can I delete the files themselves, or only via kubectl commands?)
Thanks.
You are using a hostPath volume, so the data is persisted directly on the node's filesystem.
You can SSH to the node and delete those files.
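A rough sequence for that, using the paths from the manifests above (node and pod names are placeholders):
kubectl get pod myproject-db -o wide   # the NODE column shows which node holds the hostPath
ssh <that-node>
ls /data/mysql                         # the hostPath from mysql-pv; /var/lib/mysql inside the pod maps here
sudo rm -rf /data/mysql/*              # permanently deletes the database files (delete the pod first)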
I'm setting up a K8s cluster and started with the database, using Kustomize for that purpose. I'm running the Kubernetes that comes with Docker Desktop for Windows 10.
When I run kubectl apply -k ./, the mysql pods are running.
Then I use kubectl exec -it mysql -- bash to get inside the container.
Once there, I try to connect to the MySQL server with mysql -u root -p, and all I get is
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Whether I use the secretGenerator in kustomization.yaml or put the root password directly in the deployment definition, I can't log in to MySQL.
I'm using the mysql image from Docker Hub, so nothing fancy.
I also did a test running the container directly with Docker, e.g.
docker run -d --env MYSQL_ROOT_PASSWORD=dummy --name mysql-test -p 3306:3306 mysql:5.6
With the container set up like this, I can log in to the MySQL database without a problem.
I don't understand why the same image behaves differently when run in Kubernetes than when run directly in Docker.
Do you have any ideas?
My yaml files look like this:
storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
mysql-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  local:
    path: "/c/kubernetes/mysql-storage/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    name: mysql-backend
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        imagePullPolicy: IfNotPresent
        # envFrom:
        # - secretRef:
        #     name: mysql-credentials
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: dummy
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
kustomization.yaml
resources:
- storage.yaml
- mysql-persistent-volume.yaml
- mysql-deployment.yaml
generatorOptions:
  disableNameSuffixHash: true
secretGenerator:
- name: mysql-credentials
  literals:
  - MYSQL_ROOT_PASSWORD=dummy
  # - MYSQL_ALLOW_EMPTY_PASSWORD=yes
I have deployed a MySQL database in Kubernetes and exposed it via a Service. When my application tries to connect to that database, the connection keeps being refused. I get the same result when I try to access it locally. The Kubernetes node runs in Minikube.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql_db
        imagePullPolicy: Never
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: "/var/lib/mysql"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
And here's my yaml for persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Users/Work/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After this, running minikube service list gives me:
default | mysql-service | http://192.168.99.101:31613
However, I cannot access the database either from my application or from my local machine.
What am I missing or did I misconfigure something?
EDIT:
I do not define any env variables here because the image run by Docker already contains a configured MySQL DB, and some scripts are run inside the Docker image too.
MySQL must not have started; confirm this by checking the logs: kubectl get pods | grep mysql; kubectl logs -f $POD_ID. Remember that you have to specify the environment variables MYSQL_DATABASE and MYSQL_ROOT_PASSWORD for MySQL to start. If you don't want to set a password for root, specify the corresponding variable (MYSQL_ALLOW_EMPTY_PASSWORD) instead. Here is an example of a MySQL YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql_db
        imagePullPolicy: Never
        env:
        - name: MYSQL_DATABASE
          value: main_db
        - name: MYSQL_ROOT_PASSWORD
          value: s4cur4p4ss
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: "/var/lib/mysql"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
OK, I figured it out. Looking through the logs, I noticed the error Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied).
I had to add this to my Docker image when building:
RUN usermod -u 1000 mysql
After rebuilding the image everything started working. Thank you guys.
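For reference, a minimal sketch of where that line could go, assuming the custom mysql_db image is based on the official mysql image:
FROM mysql:5.6
# give the mysql user a UID that is allowed to write to the mounted volume
RUN usermod -u 1000 mysql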
I thought I was connecting to my DB server correctly, but I was wrong. My DB deployment was online (tested with kubectl exec -it xxxx -- bash and then mysql -u root --password=$MYSQL_ROOT_PASSWORD), but that wasn't the problem.
I made the simple mistake of getting my Service and Deployment labels confused. My DB Service used a different label than what my Joomla ConfigMap had specified as the MySQL host.
To summarize, the DB service yaml was
metadata:
  labels:
    app: fnjoomlaopencart-db-service
and the Joomla configMap yaml needed
data:
  # point to the DB service
  MYSQL_HOST: fnjoomlaopencart-db-service
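If you want to double-check this kind of mismatch, comparing the Services against the configured MySQL host is a quick sanity check (names below are the ones from this setup):
kubectl get svc                                    # check the Service that MYSQL_HOST should point at
kubectl describe svc fnjoomlaopencart-db-service   # Endpoints should list the DB pod's IP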
I am currently running MySQL, WordPress, and my custom Node.js + Express application in Kubernetes pods in the same cluster. Everything works quite well, but my problem is that all the data is reset if I have to re-run the deployments, services and persistent volumes.
I have configured WordPress quite extensively and would like to save all the data and restore it after redeploying everything. How can I do this, or am I thinking about it wrong? I am using the mysql:5.6 and wordpress:4.8-apache images.
I also want to share my configuration with my other team members so they don't have to configure WordPress again.
This is my mysql-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: hidden
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
This is the wordpress-deploy.yaml:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          value: hidden
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
How can I do this, or am I thinking about it wrong?
It might be better to shift your configuration mindset from working directly on running container instances to configuring container images/manifests. You have several approaches there; just some pointers:
Create your own Dockerfile based on the images you referenced and bundle the configuration files inside it. This is a viable approach if the configuration is more or less static and can be handled with env vars or infrequent rebuilds of the Docker images, but it requires a Docker registry to work with k8s. In this approach you would add all changed files to the Docker build context and then COPY them to the appropriate places.
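A rough sketch of what that could look like for the wordpress image (the config file name and target path here are purely illustrative):
FROM wordpress:4.8-apache
# bundle a customized PHP config file into the image
COPY my-custom.ini /usr/local/etc/php/conf.d/my-custom.ini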
Create ConfigMaps and mount them into the container filesystem as config files wherever a change is required. This way you can still use the base images you reference directly, and the changes are limited to Kubernetes manifests instead of rebuilding Docker images. The approach here is to identify all changed files in the container, create Kubernetes ConfigMaps out of them, and finally mount them appropriately. I don't know exactly which things you are changing, but here is an example of how you can place an nginx config in a ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cm-nginx-example
data:
  nginx.conf: |
    server {
      listen 80;
      ...
      # actual config here
      ...
    }
and then mount it in the container at the appropriate place like so:
...
containers:
- name: nginx-example
  image: nginx
  ports:
  - containerPort: 80
  volumeMounts:
  - mountPath: /etc/nginx/conf.d
    name: nginx-conf
volumes:
- name: nginx-conf
  configMap:
    name: cm-nginx-example
    items:
    - key: nginx.conf
      path: nginx.conf
...
Mount persistent volumes (or subPaths of them) at the places where you need config files and keep the configuration on those persistent volumes.
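A rough illustration of that third option, reusing the wordpress volume from the manifests above (the mount path and subPath are illustrative):
      volumeMounts:
      - name: wordpress-persistent-storage
        mountPath: /var/www/html/wp-content
        subPath: wp-content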
Personally, I'd probably opt for ConfigMaps, since you can easily share and edit those alongside your k8s deployments, and the configuration details are not lost as some mystical 'extensive work' but can be reviewed, tweaked and stored in a version control system for tracking...