Add users to a Kubernetes deployment of MySQL with secrets

My overall goal is to create additional MySQL users (besides root) automatically after deploying MySQL in Kubernetes.
I found the following resources:
How to create mysql users and database during deployment of mysql in kubernetes?
Add another user to MySQL in Kubernetes
People suggested that .sql scripts can be mounted into docker-entrypoint-initdb.d via a ConfigMap to create these users. To do that, I would have to put the users' passwords in the script in plain text, which is a potential security issue. I therefore want to store the MySQL usernames and passwords as Kubernetes Secrets.
This is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  labels:
    app: mysql-image-db
data:
  initdb.sql: |-
    CREATE USER <user>@'%' IDENTIFIED BY <password>;
How can I access the associated Kubernetes secrets within this ConfigMap?

I am finally able to provide a solution to my own question. Since PjoterS made me aware that you can mount Secrets into a Pod as a volume, I came up with the following solution.
This is the ConfigMap for the user creation script:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-init-script
  labels:
    app: mysql-image-db
data:
  init-user.sh: |-
    #!/bin/bash
    sleep 30s
    mysql -u root -p"$(cat /etc/mysql/credentials/root_password)" -e \
    "CREATE USER '$(cat /etc/mysql/credentials/user_1)'@'%' IDENTIFIED BY '$(cat /etc/mysql/credentials/password_1)';"
To make this work, I needed to mount the ConfigMap and the Secret as volumes of my Deployment and add a postStart lifecycle hook to execute the user creation script.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-image-db
spec:
  selector:
    matchLabels:
      app: mysql-image-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-image-db
    spec:
      containers:
        - image: mysql:8.0
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: root_password
                  name: mysql-user-credentials
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-volume
              mountPath: /var/lib/mysql
            - name: mysql-config-volume
              mountPath: /etc/mysql/conf.d
            - name: mysql-init-script-volume
              mountPath: /etc/mysql/init
            - name: mysql-credentials-volume
              mountPath: /etc/mysql/credentials
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "/etc/mysql/init/init-user.sh"]
      volumes:
        - name: mysql-persistent-volume
          persistentVolumeClaim:
            claimName: mysql-volume-claim
        - name: mysql-config-volume
          configMap:
            name: mysql-config
        - name: mysql-init-script-volume
          configMap:
            name: mysql-init-script
            defaultMode: 0777
        - name: mysql-credentials-volume
          secret:
            secretName: mysql-user-credentials
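To check that the postStart hook actually created the user, you can exec into the pod and list the accounts (the pod name below is a placeholder):
kubectl exec -it <mysql-pod-name> -- bash -c \
  'mysql -u root -p"$(cat /etc/mysql/credentials/root_password)" -e "SELECT user, host FROM mysql.user;"'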


MySql databases deleted on new deployment in kubernetes [closed]

I have created a MySQL container in a Kubernetes pod by using a YAML deployment file.
I am able to execute MySQL queries on that container and have created a few databases and tables with data in them.
But when I make some changes in other project code not related to the Kubernetes files and deploy it, the pod is recreated and all my previous databases are deleted from the MySQL container. I have also given a mount path to mount the PVC to my container.
So my question is: how do I store the databases permanently, so that when the pod is recreated the databases are not deleted and can still be accessed through the newly created pod?
It's always best practice to run a single container inside a single pod.
You have to use a PVC and PV to store the data, so that the data persists even if the pod restarts or the YAML definition of the pod is updated.
For example, here is one for a MySQL database:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Ref : https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
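The Deployment above references a claim named mysql-pv-claim; a minimal claim to go with it (the 20Gi size is only an example) could look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi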
Is the PVC mounted on the MySQL data directory, i.e. /var/lib/mysql? If you are using a different data directory, use the appropriate mount path for the PVC where MySQL stores its data. You can find a sample spec for a MySQL deployment here:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          livenessProbe:
            tcpSocket:
              port: 3306
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim

Use External standalone-full.xml by the RHPAM kie pod in Openshift

I just want to use a customized standalone-full.xml in the RHPAM KIE Server pod which is running in OpenShift. I have created the ConfigMap from the file but I'm not sure how to set it.
I created the ConfigMap like this:
oc create configmap my-config --from-file=standalone-full.xml
And edited the DeploymentConfig of the RHPAM KIE Server:
volumeMounts:
  - name: config-volume
    mountPath: /opt/eap/standalone/configuration
volumes:
  - name: config-volume
    configMap:
      name: my-config
It starts a new container with status ContainerCreating and then fails with an error (scaling down from 1 to 0).
Am I setting the ConfigMap correctly?
You can mount the ConfigMap in a pod as a volume. Here's a good example: just add a 'volumes' block (to declare the ConfigMap as a volume) and a 'volumeMounts' block (to specify the mount point) in the pod's spec:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
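Note that mounting the ConfigMap directly over /opt/eap/standalone/configuration replaces the entire directory and hides the other configuration files the server needs, which may be why the container fails to start. A sketch of an alternative (not verified against RHPAM) is to mount only the single file using subPath:
volumeMounts:
  - name: config-volume
    mountPath: /opt/eap/standalone/configuration/standalone-full.xml
    subPath: standalone-full.xml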

Mysql access denied when it runs in Kubernetes pod

I'm setting up a K8s cluster and I started with the database. I used Kustomize for that purpose. I use the Kubernetes that comes with Docker Desktop for Windows 10.
When I run kubectl apply -k ./, the mysql pods are running.
Then I use kubectl exec -it mysql -- bash to get inside the container.
Once in there, I try to connect to the MySQL service with mysql -u root -p and all I get is
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
It doesn't matter whether I use secretGenerator in kustomization.yaml or put the root password directly in the deployment definition, I can't log in to MySQL.
I'm using mysql image from docker hub, so nothing fancy.
I also did a test with running the container directly by docker, e.g.
docker run -d --env MYSQL_ROOT_PASSWORD=dummy --name mysql-test -p 3306:3306 mysql:5.6
With the container set up like this, I can log in to the MySQL database without a problem.
I don't understand why the same image behaves differently when run in Kubernetes.
Maybe you have any ideas?
My yaml files look like this:
storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
mysql-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/c/kubernetes/mysql-storage/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      protocol: TCP
      name: mysql-backend
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - secretRef:
          #       name: mysql-credentials
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dummy
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
kustomization.yaml
resources:
  - storage.yaml
  - mysql-persistent-volume.yaml
  - mysql-deployment.yaml
generatorOptions:
  disableNameSuffixHash: true
secretGenerator:
  - name: mysql-credentials
    literals:
      - MYSQL_ROOT_PASSWORD=dummy
      # - MYSQL_ALLOW_EMPTY_PASSWORD=yes
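For reference, the generated Secret can be inspected like this as a sanity check (disableNameSuffixHash keeps its name stable):
kubectl get secret mysql-credentials -o jsonpath='{.data.MYSQL_ROOT_PASSWORD}' | base64 -d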

Can't connect to mysql in kubernetes

I have deployed a MySQL database in Kubernetes and exposed it via a service. When my application tries to connect to that database, the connection keeps being refused. I get the same when I try to access it locally. The Kubernetes node runs in minikube.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql_db
          imagePullPolicy: Never
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
And here's my yaml for persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Users/Work/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After this, running minikube service list gives me:
default | mysql-service | http://192.168.99.101:31613
However, I cannot access the database from my application or from my local machine.
What am I missing, or did I misconfigure something?
EDIT:
I do not define any environment variables here, since the image already contains a running MySQL database and some scripts are also run within the Docker image.
MySQL must not have started; confirm it by checking the logs: kubectl get pods | grep mysql; kubectl logs -f $POD_ID. Remember that you have to specify the environment variables MYSQL_DATABASE and MYSQL_ROOT_PASSWORD for MySQL to start. If you don't want to set a password for root, specify the respective variable instead. Here I am giving you an example of a MySQL YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql_db
          imagePullPolicy: Never
          env:
            - name: MYSQL_DATABASE
              value: main_db
            - name: MYSQL_ROOT_PASSWORD
              value: s4cur4p4ss
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
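If you deliberately want root to have no password, the official mysql image also accepts MYSQL_ALLOW_EMPTY_PASSWORD as an alternative to MYSQL_ROOT_PASSWORD (only sensible for local testing):
env:
  - name: MYSQL_ALLOW_EMPTY_PASSWORD
    value: "yes"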
Ok, I figured it out. After looking through the logs I noticed the error Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied).
I had to add this to my docker image when building:
RUN usermod -u 1000 mysql
After rebuilding the image everything started working. Thank you guys.
I thought I was connecting to my DB server correctly, but I was wrong. My DB deployment was online (tested with kubectl exec -it xxxx -- bash and then mysql -u root --password=$MYSQL_ROOT_PASSWORD) but that wasn't the problem.
I made the simple mistake of getting my service and deployment labels confused. My DB service used a different label than what my Joomla ConfigMap had specified as the MySQL host.
To summarize, the DB service yaml was
metadata:
  labels:
    app: fnjoomlaopencart-db-service
and the Joomla configMap yaml needed
data:
  # point to the DB service
  MYSQL_HOST: fnjoomlaopencart-db-service
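To double-check which hostname the application should use, you can try to resolve the value you put in MYSQL_HOST from a throwaway pod in the same namespace (a quick diagnostic sketch, assuming busybox's nslookup is available):
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup fnjoomlaopencart-db-service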

Kubernetes MySql image persistent volume is non empty during init

I am working through the persistent disks tutorial found here while also creating it as a StatefulSet instead of a deployment.
When I apply the YAML file on GKE, the database fails to start; looking at the logs, it shows the following error:
[ERROR] --initialize specified but the data directory has files in it. Aborting.
Is it possible to inspect the volume created to see what is in the directory? Otherwise, what am I doing wrong that is causing the disk to be non empty?
Thanks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datalayer-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: datalayer-svc
  labels:
    app: myapplication
spec:
  ports:
    - port: 80
      name: dbadmin
  clusterIP: None
  selector:
    app: database
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: datalayer
spec:
  selector:
    matchLabels:
      app: myapplication
  serviceName: "datalayer-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: database
          image: mysql:5.7.22
          env:
            - name: "MYSQL_ROOT_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysql-root-password
                  key: password
            - name: "MYSQL_DATABASE"
              value: "appdatabase"
            - name: "MYSQL_USER"
              value: "app_user"
            - name: "MYSQL_PASSWORD"
              value: "app_password"
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: datalayer-pv
              mountPath: /var/lib/mysql
      volumes:
        - name: datalayer-pv
          persistentVolumeClaim:
            claimName: datalayer-pvc
This issue could be caused by the lost+found directory on the filesystem of the PersistentVolume.
I was able to verify this by adding a k8s.gcr.io/busybox container (in the PVC, set accessModes: [ReadWriteMany], or comment out the database container):
- name: init
  image: "k8s.gcr.io/busybox"
  command: ["/bin/sh","-c","ls -l /var/lib/mysql"]
  volumeMounts:
    - name: database
      mountPath: "/var/lib/mysql"
There are a few potential workarounds...
Most preferable is to use a subPath on the volumeMounts object. This uses a subdirectory of the PersistentVolume, which should be empty at creation time, instead of the volume root:
volumeMounts:
  - name: database
    mountPath: "/var/lib/mysql"
    subPath: mysql
Less preferable workarounds include:
Use a one-time container to rm -rf /var/lib/mysql/lost+found (not a great solution, because the directory is managed by the filesystem and is likely to re-appear)
Use mysql:5 image, and add args: ["--ignore-db-dir=lost+found"] to the container (this option was removed in mysql 8)
Use mariadb image instead of mysql
More details might be available at docker-library/mysql issues: #69 and #186
You would usually see if your volumes were mounted with:
kubectl get pods # gets you all the pods on the default namespace
# and
kubectl describe pod <pod-created-by-your-statefulset>
Then you can use these commands to check on your PVs and PVCs:
kubectl get pv # gets all the PVs on the default namespace
kubectl get pvc # same for PVCs
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name>
Then you can go to the GCP console, under Disks, and see if your disks got created.
You can add the --ignore-db-dir=lost+found argument:
containers:
  - image: mysql:5.7
    name: mysql
    args:
      - "--ignore-db-dir=lost+found"
I use an init container to remove that directory (I'm using Postgres in my case):
initContainers:
  - name: busybox
    image: busybox:latest
    args: ["rm", "-rf", "/var/lib/postgresql/data/lost+found"]
    volumeMounts:
      - name: nfsv-db
        mountPath: /var/lib/postgresql/data