Kubernetes - Create database in MySQL after pod start-up [duplicate] - mysql

I want to seed initial data into the MySQL container.
In docker-compose.yml, the following volumes create the initial data when the container starts:
volumes:
- db:/var/lib/mysql
- "./docker/mysql/conf.d:/etc/mysql/conf.d"
- "./docker/mysql/init.d:/docker-entrypoint-initdb.d"
However, how can I create this initial data on Kubernetes when the pod starts?

According to the MySQL Docker image README, the part relevant to data initialization on container start-up is to ensure all your initialization files are mounted into the container's /docker-entrypoint-initdb.d folder.
You can define your initial data in a ConfigMap, and mount the corresponding volume in your pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
      ports:
        - containerPort: 3306
      volumeMounts:
        - name: mysql-initdb
          mountPath: /docker-entrypoint-initdb.d
  volumes:
    - name: mysql-initdb
      configMap:
        name: mysql-initdb-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-initdb-config
data:
  initdb.sql: |
    CREATE TABLE friends (id INT, name VARCHAR(256), age INT, gender VARCHAR(3));
    INSERT INTO friends VALUES (1, 'John Smith', 32, 'm');
    INSERT INTO friends VALUES (2, 'Lilian Worksmith', 29, 'f');
    INSERT INTO friends VALUES (3, 'Michael Rupert', 27, 'm');
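A quick way to check that the script actually ran (note: the mysql image also needs a root password variable such as MYSQL_ROOT_PASSWORD set on the container, which the Pod above omits, and it only executes the init scripts on the very first start, while the data directory is empty):
# the file name is a placeholder for wherever you saved the manifests above
kubectl apply -f mysql-initdb.yaml
# the image's entrypoint logs every script it runs, so this is a quick sanity check
kubectl logs mysql | grep docker-entrypoint-initdb.d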

First: create a persistent volume that contains your SQL scripts
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/path/to/initdb/sql/scripts"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
Note: this assumes that you have your SQL scripts in /path/to/initdb/sql/scripts.
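Before moving on, it can help to confirm that the claim actually bound to the volume; a small sketch (the manifest file names are placeholders):
kubectl apply -f mysql-initdb-pv.yaml -f mysql-initdb-pvc.yaml
kubectl get pvc mysql-initdb-pv-claim   # STATUS should show Bound before you create the Deployment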
Second: mount the volume to /docker-entrypoint-initdb.d
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /docker-entrypoint-initdb.d
              name: mysql-initdb
      volumes:
        - name: mysql-initdb
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
That's it.
Note: this applies to PostgreSQL too.

You need to create a PV and PVC like this, then deploy the MySQL database:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sfg-dev-mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sfg-dev-mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Create the Secret and ConfigMap:
kubectl create secret generic mysql-secret --from-literal=mysql-root-password=kube1234 --from-literal=mysql-user=testadm --from-literal=mysql-password=kube1234
kubectl create configmap db --from-literal=mysql-database=database
mysql deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sfg-dev-mysql-db
  labels:
    app: sfg-dev-mysql
spec:
  selector:
    matchLabels:
      app: sfg-dev-mysql
      tier: db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: sfg-dev-mysql
        tier: db
    spec:
      containers:
        - image: mysql:8.0.2
          name: mysql
          env:
            - name: MYSQL_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: db
                  key: mysql-database
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: mysql-root-password
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: mysql-user
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: mysql-password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: sfg-dev-mysql-pv-claim

Related

Mysql deployment deleting my database in kubernetes

I created a MySQL deployment that other pods connect to. I access it remotely to create the database and tables, but I noticed that at some point in the MySQL pod's lifecycle my database gets deleted. How can I keep it from being deleted?
I thought about creating a static pod, but I don't know if that solves my problem. My manifests are below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: rafaelribeirosouza86/shopping:myql
          name: mysql
          imagePullPolicy: Always
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          # secret:
          #   secretName: mysql-pass
          #   items:
          #     - key: password
          persistentVolumeClaim:
            claimName: mysql-pv-claim
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  # clusterIP: None
  ports:
    - port: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
Does anyone have an idea how I can resolve this?

How to make My Sql Pod to save data in Persistent Volume

I started to use Kubernetes to understand concepts like pods, objects and so on. I started to learn about Persistent Volumes and Persistent Volume Claims. From my understanding, if I save data from a MySQL pod to a persistent volume, the data is kept even if I delete the MySQL pod, because it is saved on the volume. But I don't think it works in my case...
I have a Spring Boot pod that saves data in the MySQL pod. The data is saved and I can retrieve it, but when I restart, delete or replace my pods, that saved data is lost, so I think I messed something up. Can you give me a hint, please? Thanks...
Below are my Kubernetes files:
MySQL Deployment and Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels: # must match Service and Deployment labels
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          args:
            - "--ignore-db-dir=lost+found"
          name: mysql # name of the db container
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret # name of the Secret object
                  key: password # which value from inside the secret to take
            - name: MYSQL_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: username
            - name: MYSQL_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts: # mount volume obtained from PVC
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql # mount point inside the container
      volumes:
        - name: mysql-persistent-storage # obtaining volume from PVC
          persistentVolumeClaim:
            claimName: mysql-pv-claim # can use the same claim in different pods
---
apiVersion: v1
kind: Service
metadata:
  name: mysql # DNS name
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector: # mysql pod should have the same label
    app: mysql
  clusterIP: None # we use DNS (headless service)
Persistent Volume and Persistent Volume Claim files:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim # name of our PVC
  labels:
    app: mysql
spec:
  volumeName: host-pv # claim the volume created with this name
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1 # version of our PV
kind: PersistentVolume # kind of object we are going to create
metadata:
  name: host-pv # name of our PV
spec: # spec of our PV
  capacity: # size
    storage: 4Gi
  volumeMode: Filesystem # storage type: Filesystem or Block
  storageClassName: standard
  accessModes:
    - ReadWriteOnce # can be mounted by multiple pods, but only from a single node
    # - ReadOnlyMany # read-only on multiple nodes
    # - WriteOnlyMany # only for multiple nodes, not for the hostPath type
  hostPath: # type of PV
    path: "/mnt/data"
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Retain
My Spring Boot K8s file:
apiVersion: v1
kind: Service
metadata:
  name: book-service
spec:
  selector:
    app: book-example
  ports:
    - protocol: 'TCP'
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: book-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: book-example
  template:
    metadata:
      labels:
        app: book-example
    spec:
      containers:
        - name: book-container
          image: cinevacineva/kubernetes_book_pv:latest
          imagePullPolicy: Always
          # ports:
          #   - containerPort: 8080
          env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: host
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
# & minikube -p minikube docker-env | Invoke-Expression links the docker images we create with minikube, so we no longer have to push them
...if I save data from a MySQL pod to a persistent volume, the data is kept even if I delete the MySQL pod, because it is saved on the volume. But I don't think it works in my case...
Your previous data will not be available when the pod switches nodes. To use hostPath you don't really need a PVC/PV. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      nodeSelector: # <-- make sure your pod runs on the same node
        <node label>: <value unique to the mysql node>
      volumes: # <-- mount the data path on the node, no pvc/pv required.
        - name: mysql-persistent-storage
          hostPath:
            path: /mnt/data
            type: DirectoryOrCreate
      containers:
        - name: mysql
          ...
          volumeMounts: # <-- let mysql write to it
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
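For illustration only (the label key mysql-data=present and the node name are made-up placeholders, not part of the original answer), the nodeSelector above could be wired up like this:
# label the node that actually holds /mnt/data
kubectl label nodes <your-node-name> mysql-data=present
# and reference that label in the Deployment:
#   nodeSelector:
#     mysql-data: present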

Kubectl get pod shows ErrImageNeverPull mysql

According to this documentation, I am trying to launch MySQL with Kubernetes:
deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kazi-db
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: Never
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
mysql-storage.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
service.yml:
apiVersion: v1
kind: Service
metadata:
  name: kazi-db
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
db-secret.yml:
apiVersion: v1
kind: Secret
metadata:
  name: kazi-db
type: kubernetes.io/basic-auth
stringData:
  password: xcvas
I have applied them all with kubectl apply -f ...
The problem appears when I call kubectl get pod:
kazi-db-758b978ccc-7m29n 0/1 ErrImageNeverPull 0 4m48s
I have Docker with integrated Kubernetes.
Maybe that's because (1) imagePullPolicy is set to "Never" and (2) the image mysql:5.6 does not seem to be present on the worker node where this pod got scheduled.
The following are two possible options:
Perform a manual pull of the mysql:5.6 image on all worker nodes using
docker pull mysql:5.6
Change imagePullPolicy to IfNotPresent, as sketched below.
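A minimal sketch of the second option, assuming the deployment.yml from the question (only the relevant lines are shown):
spec:
  template:
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: IfNotPresent # pull from the registry only if the image is missing locally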

mount file (.sql) from minikube/host to deployment (MySQL)

I'm trying to mount a file from the host running the minikube cluster (with Hyper-V) and pass it into the MySQL container with a deployment YAML. I tried adding the file to the minikube VM (over ssh) and then mounting it into the deployment with a PV and claim, and I also tried to mount it from the localhost running minikube (my computer), but I still don't see the file.
Current configuration: on the Hyper-V VM running minikube I have a folder named data, and inside this folder is the file I want to transfer to the container (pod).
PV Yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqlvolume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data
claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: sqlvolume
  name: sqlvolume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
deployment.yaml (MySQL)
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
    kompose.version: 1.24.0 (7c629530)
  labels:
    io.kompose.service: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
        kompose.version: 1.24.0 (7c629530)
      creationTimestamp: null
      labels:
        io.kompose.service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: crud
            - name: MYSQL_ROOT_PASSWORD
              value: root
          image: mysql
          name: mysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /data
              name: sqlvolume
          # resources:
          #   requests:
          #     memory: "64Mi"
          #     cpu: "250m"
          #   limits:
          #     memory: "128Mi"
          #     cpu: "500m"
      hostname: mysql
      restartPolicy: Always
      volumes:
        - name: sqlvolume
          persistentVolumeClaim:
            claimName: sqlvolume
status: {}
I don't mind how it is achieved: I have Hyper-V minikube running on my computer and I want to transfer the file mysql.sql from the host (or from the PV I created) to the pod.
How can I achieve that?
You can try with a hostPath type PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/<file_name>"
PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  volumeName: "pv-volume"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Deployment ( changed pvc name )
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
    kompose.version: 1.24.0 (7c629530)
  labels:
    io.kompose.service: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
        kompose.version: 1.24.0 (7c629530)
      creationTimestamp: null
      labels:
        io.kompose.service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: crud
            - name: MYSQL_ROOT_PASSWORD
              value: root
          image: mysql
          name: mysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /data
              name: sqlvolume
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
      hostname: mysql
      restartPolicy: Always
      volumes:
        - name: sqlvolume
          persistentVolumeClaim:
            claimName: pv-claim
status: {}
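As a side note (a hedged sketch, not part of the original answer): with a hostPath volume the file still has to exist inside the minikube VM, so you may need to copy or mount it there first, for example:
# copy the file from the Windows host into the minikube VM (the /data target path is an assumption)
minikube cp mysql.sql /data/mysql.sql
# or keep a host folder mounted into the VM for as long as the command runs
minikube mount C:\path\to\sql-folder:/data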

How can I deploy multi wordpress in kubernetes?

I'm trying to deploy WordPress/MySQL in Kubernetes.
I want MySQL and WordPress to use different volumes. I'm trying to use NFS for WordPress and hostPath for MySQL.
But WordPress and MySQL are not connecting, and I don't know why. I'd appreciate your help.
here's my code:
Mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      ports:
        - containerPort: 3306
          protocol: TCP
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: qwer1234
      volumeMounts:
        - name: mysql-volume
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-volume
      persistentVolumeClaim:
        claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  labels:
    app: mysql
spec:
  type: ClusterIP
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: mysql-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /vol/mysql
wordpress.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql:3306
            - name: WORDPRESS_DB_PASSWORD
              value: P#ssw0rd
          volumeMounts:
            - mountPath: /nfs-volume/html
              name: wordpress-pv
          ports:
            - protocol: TCP
              containerPort: 80
      volumes:
        - name: wordpress-pv
          persistentVolumeClaim:
            claimName: wordpress-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: wordpress-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.201.11
    path: /nfs-volume
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-svc
  labels:
    app: wordpress
spec:
  type: LoadBalancer
  selector:
    app: wordpress
  ports:
    - protocol: TCP
      port: 80
You have provided the port number at the end of the environment variable; please try without it. Instead of
env:
  - name: WORDPRESS_DB_HOST
    value: mysql:3306
use the Service name on its own:
env:
  - name: WORDPRESS_DB_HOST
    value: mysql-svc
(the value has to be the DNS name of your MySQL Service, which in your manifests is mysql-svc).
You can check the example at: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
If you read the documentation of the Docker image, they also provide the host name without the port as the value.
Also, in the WordPress environment you have to pass the MySQL root password, which you are currently passing incorrectly:
- name: WORDPRESS_DB_PASSWORD
  value: P#ssw0rd
Instead it should be
- name: WORDPRESS_DB_PASSWORD
  value: qwer1234
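Putting both fixes together, the WordPress container's env section would look roughly like this (a sketch based on the manifests above; mysql-svc and qwer1234 come from the question's Mysql.yaml):
env:
  - name: WORDPRESS_DB_HOST
    value: mysql-svc    # DNS name of the MySQL Service, without the port
  - name: WORDPRESS_DB_PASSWORD
    value: qwer1234     # must match MYSQL_ROOT_PASSWORD in the MySQL pod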