I wanted to create a MySQL container in Kubernetes with strict mode disabled by default. I know how to disable strict mode in Docker; I tried to use the same approach in Kubernetes, but it produces an error log.
Docker:
docker container run -t -d --name hello-wordl mysql --sql-mode=""
Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-db
  labels:
    app: db
spec:
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      name: my-db
      labels:
        app: db
    spec:
      containers:
      - name: my-db
        image: mariadb
        imagePullPolicy: Always
        args: ["--sql-mode=\"\""]
Error:
2021-10-29 08:20:57+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.2.40+maria~bionic started.
2021-10-29 08:20:57+00:00 [ERROR] [Entrypoint]: mysqld failed while attempting to check config
command was: mysqld --sql-mode="" --verbose --help --log-bin-index=/tmp/tmp.i8yL5kgKoq
2021-10-29 8:20:57 140254859638464 [ERROR] mysqld: Error while setting value '""' to 'sql_mode'
Based on the error you're getting, MySQL is reading the escaped double quotes as the value of sql_mode. You should omit them:
args: ["--sql-mode="]
It seems you have to use a ConfigMap for this.
ConfigMap manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  labels:
    app: mysql
data:
  my.cnf: |-
    [client]
    default-character-set=utf8mb4
    [mysql]
    default-character-set=utf8mb4
    [mysqld]
    max_connections = 2000
    secure_file_priv=/var/lib/mysql
    sql_mode=STRICT_TRANS_TABLES
Deployment manifest (volumes section):
volumeMounts:
- name: data
  mountPath: /var/lib/mysql
- name: config
  mountPath: /etc/mysql/conf.d/my.cnf
  subPath: my.cnf
volumes:
- name: config
  configMap:
    name: mysql-config
This will replace the default my.cnf in the MySQL container; if you need to set more variables, it is better to include them in the ConfigMap.
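To actually disable strict mode, as the original question asks, the relevant ConfigMap line would presumably set an empty value (or a mode list without STRICT_TRANS_TABLES):
[mysqld]
sql_mode=""
Setting it in the config file also sidesteps the args quoting problem from the first answer.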
I'm working on setting up a MySQL instance in a K8s cluster with TLS support for client connections.
For that I have set up cert-manager to issue a self-signed cert. I can see ca.crt, tls.key, and tls.crt created successfully in the secret within my mysql namespace. I followed this article: https://www.jetstack.io/blog/securing-mysql-with-cert-manager/
Now, to use this cert, my plan is to place the certs in the /var/lib/mysql directory and update the mysql.cnf file using a ConfigMap. Here is how the mysql.yaml pod spec looks.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  mysql.cnf: |-
    [mysqld]
    ssl-ca=/var/lib/mysql/ca.crt
    ssl-cert=/var/lib/mysql/tls.crt
    ssl-key=/var/lib/mysql/tls.key
    require_secure_transport=ON
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  # securityContext:
  #   runAsUser: 0
  containers:
  - image: mysql:5.7
    name: mysql
    resources: {}
    env:
    # Use secret in real usage
    - name: MYSQL_ROOT_PASSWORD
      value: password
    ports:
    - containerPort: 3306
      name: mysql
    volumeMounts:
    - name: mysql-cert-secret
      #mountPath: /app/ca.crt
      mountPath: /var/lib/mysql/ca.crt
      subPath: ca.crt
    - name: mysql-cert-secret
      mountPath: /var/lib/mysql/tls.crt
      #mountPath: /app/tls.crt
      subPath: tls.crt
    - name: mysql-cert-secret
      mountPath: /var/lib/mysql/tls.key
      #mountPath: /app/tls.key
      subPath: tls.key
    - name: config-map-mysqlconf
      mountPath: /etc/mysql/mysql.conf
  volumes:
  - name: mysql-cert-secret
    secret:
      secretName: mysql-server-tls
  - name: config-map-mysqlconf
    configMap:
      name: mysql-config
If I update the mount path to, say, /app/ca.crt, then mounting works and I can see the certs when I open a shell in the container. But for /var/lib/mysql/* I get the following error.
Error image
I tried using the securityContext, but it didn't help, since the directory is accessible by both the root and mysql users. Any help would be greatly appreciated. If there is a better way to get this done, I'm happy to try that as well.
This is all done locally using KinD cluster.
Thank you
MySQL stores DB files in /var/lib/mysql by default, and the entrypoint will certainly attempt to set the ownership of that directory to the mysql user.
Any attempt to update a secret volume will result in an error rather than a successful change, because secrets are read-only projections into the Pod's filesystem. I think that's the reason the article you followed never suggests using the directory /var/lib/mysql.
If you still want to attempt this, you can try changing the default DB storage location to something other than /var/lib/mysql in /etc/my.cnf, or even the defaultMode of that volumeMount. But I'm not sure whether it will work or whether there won't be other issues.
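If the certs don't strictly need to live in the datadir, a simpler sketch is to mount the secret at a dedicated read-only path and point the config there (the /etc/mysql/certs path is an assumption; the secret and ConfigMap names are from the question):
mysql.cnf: |-
  [mysqld]
  ssl-ca=/etc/mysql/certs/ca.crt
  ssl-cert=/etc/mysql/certs/tls.crt
  ssl-key=/etc/mysql/certs/tls.key
  require_secure_transport=ON
And in the pod spec, a single mount instead of three subPath mounts:
volumeMounts:
- name: mysql-cert-secret
  mountPath: /etc/mysql/certs
  readOnly: true
This avoids both the entrypoint's chown on /var/lib/mysql and the read-only secret projection issue; you may still need to adjust the secret volume's defaultMode so the mysql user can read the key.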
I have created a MySQL StatefulSet using the YAML below, applied with this command:
kubectl apply -f mysql-statefulset.yaml
YAML:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka"
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
After that, 3 pods were created, along with a PVC and PV for each of them. I successfully entered one of the pods using:
kubectl exec -it mysql-sts-0 sh
and then logged in to MySQL using:
mysql -u root -p
After giving this command, a
Enter password:
prompt came and I entered the password:
okaoka
and could successfully log in. After that I exited from the pod.
Then I deleted the StatefulSet (as expected, the PVC and PV were still there even after the deletion of the StatefulSet). After that I applied a new YAML, slightly changing the previous one: I changed the password, giving the new password:
okaoka1234
and the rest of the YAML was the same as before. The YAML is given below. Now, after applying this YAML (with only the password changed) by:
kubectl apply -f mysql-statefulset.yaml
it successfully created the StatefulSet and 3 new pods (which bound to the previous PVCs and PVs, as expected).
Changed YAML:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka1234" # here is the change
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
Now the problem is, when I again entered a pod using:
kubectl exec -it mysql-sts-0 sh
and then used:
mysql -u root -p
again the
Enter password:
prompt came, and this time when I gave my new password:
okaoka1234
it gave access denied.
When I printed the environment (inside the pod) using:
printenv
I could see that:
MYSQL_ROOT_PASSWORD=okaoka1234
which means the environment variable changed and took the new password, but I could not log in with the new password.
The interesting thing is that I could log in with my previous password okaoka. I don't know why it is taking the previous password in this scenario and not the new one, which is even in the env (inside the pod). Can anybody explain the logic behind this?
Most probably, the image that you are using in your StatefulSet uses the environment variable only to initialize the password, when it creates the structure on the persisted storage (on its PVC) for the first time.
Given that the PVC and PV are the same as in the previous installation, that step is skipped and the database password is not updated, since the database structure is already found in the existing PVC.
After all, the root user is just a user of the database; its password is stored in the database. Unless the image applies some particular functionality at start in its entrypoint, it makes sense to me that the password remains the same.
What image are you using? The docker hub mysql image or a custom one?
Update
Given that you are using the mysql image from Docker Hub, let me quote a piece of the entrypoint (https://github.com/docker-library/mysql/blob/master/template/docker-entrypoint.sh):
# there's no database, so it needs to be initialized
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
    docker_verify_minimum_env
    # check dir permissions to reduce likelihood of half-initialized database
    ls /docker-entrypoint-initdb.d/ > /dev/null
    docker_init_database_dir "$@"
    mysql_note "Starting temporary server"
    docker_temp_server_start "$@"
    mysql_note "Temporary server started."
    docker_setup_db
    docker_process_init_files /docker-entrypoint-initdb.d/*
    mysql_expire_root_user
    mysql_note "Stopping temporary server"
    docker_temp_server_stop
    mysql_note "Temporary server stopped"
    echo
    mysql_note "MySQL init process done. Ready for start up."
    echo
fi
When the container starts, it makes some checks, and if no database is found (the database is expected to be on the path where the persisted PVC is mounted), a series of operations is performed: creating it, creating default users, and so on.
Only in this case is the root user created with the password specified in the environment (inside the function docker_setup_db).
Should a database already be available in the persisted path, which is your case since you let it mount the previous PVC, there is no initialization of the database; it already exists.
Everything in Kubernetes is working as expected; this is just the behaviour of the database and of the mysql image. The environment variable is used only for initialization, from what I can see in the entrypoint.
It is left to the root user to manually change the password, if desired, by using a mysql client.
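For example, a one-liner from outside the pod (the official image creates both 'root'@'localhost' and 'root'@'%' by default; adjust the hosts if yours differ):
kubectl exec -it mysql-sts-0 -- mysql -u root -pokaoka -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'okaoka1234'; ALTER USER 'root'@'%' IDENTIFIED BY 'okaoka1234';"
Note that with replicas: 3 each pod has its own PVC and therefore its own copy of the user table, so the change has to be applied to every pod.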
I don't want the mysql pod to lose all its MySQL data when I restart the computer.
I should be able to store the data on my machine, so when I reboot my computer and the mysql pod starts again, the databases are still there.
Here are my YAMLs:
storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  storageClassName: local-storage
  local:
    path: "C:\\mysql-volume" # 2 \ for escape characters, right?
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
  #hostPath:
  #  path: /mysql-volume
  #  type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: local
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - protocol: TCP
    port: 3306
    nodePort: 30001
  selector:
    app: mysql
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql-custom-img-here
        imagePullPolicy: IfNotPresent
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: mysql-root-password
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: mysql-user
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: mysql-password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
After trying that, the first error I got was:
MountVolume.NewMounter initialization failed for volume "mysql-pv-volume" : path "C:\\mysql-volume" does not exist
Since I'm using Windows, I guess that's the correct path, right? I'm using 2 backslashes as an escape character. Maybe the problem is in the path, but I'm not sure. If it is, how can I give my local path on my Windows machine?
Then I changed local: path: to /opt and the following error appeared:
initialize specified but the data directory has files in it.
log:
2020-09-24 12:53:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.31-1debian10 started.
2020-09-24 12:53:00+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-09-24 12:53:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.31-1debian10 started.
2020-09-24 12:53:00+00:00 [Note] [Entrypoint]: Initializing database files
2020-09-24T12:53:00.271130Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-09-24T12:53:00.271954Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2020-09-24T12:53:00.271981Z 0 [ERROR] Aborting
but if I change mountPath: /var/lib/mysql to, for example, mountPath: /var/lib/mysql-test,
it works, but not as expected (i.e., not saving the data after rebooting the computer).
Even after removing the PV, PVC, and MySQL deployment/service, that same error keeps appearing.
I even removed the volumes using the docker command and changed my custom MySQL image to plain 'mysql:5.7' just in case, but the same initialize specified but the data directory has files in it. error appears.
How does that happen, even when I remove the pod? mountPath is the container path, so the data should disappear.
And how can I give my local path in the persistentVolume?
Thanks for your time!
Edit: forgot to tell that I already saw this: How to create a mysql kubernetes service with a locally mounted data volume?
I searched a lot, but no luck.
I finally solved the problem...
The problem of initialize specified but the data directory has files in it. was answered by @Jakub.
The MountVolume.NewMounter initialization failed for volume "mysql-pv-volume" : path "C:\\mysql-volume" does not exist .... I can't even believe the time spent because of this silly problem...
The correct path is: path: /c/mysql-volume. After that, all worked as expected!
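For clarity, this is presumably the relevant part of the PersistentVolume with the working path (everything else unchanged):
spec:
  storageClassName: local-storage
  local:
    path: /c/mysql-volume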
I am posting this as a community wiki answer for better visibility.
If you have a problem with initialize specified but the data directory has files in it, then there is a GitHub issue which will help you.
TL;DR
Use --ignore-db-dir=lost+found in your container.
Use an older version of mysql, for example mysql:5.6.
There are answers on GitHub provided by @alexpls and @aklinkert.
I had this issue with Kubernetes and MySQL 5.7.15 as well. Adding the suggestion from @yosifki to my container's definition got things working.
Here's an extract of my working YAML definition:
name: mysql-master
image: mysql:5.7
args:
- "--ignore-db-dir=lost+found"
The exact same configuration works with MySQL version 5.6 as well.
I'm having issues getting the mysql container to start properly. To sum it up: with the NFS dynamic provisioner, the mysql container won't start and throws an error of mkdir: cannot create directory '/var/lib/mysql/': File exists, even though the NFS mount is in the container and appears to be functioning properly.
I installed the dynamic NFS provisioner on my K8s cluster from here: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. The test claim and test pod they show in the instructions work.
Now to run mysql, I took the code snippets from here:
https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
kubectl apply -f mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: managed-nfs-storage # <--- THIS MATCHES MY NFS STORAGECLASS
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
kubectl apply -f mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/mysql-pv-volume 20Gi RWO Retain Bound default/mysql-pv-claim managed-nfs-storage 5m16s
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/mysql-pv-claim Bound mysql-pv-volume 20Gi RWO managed-nfs-storage 5m27s
The PV was created automatically by the dynamic provisioner.
Get the error...
$ kubectl logs mysql-7d7fdd478f-l2m8h
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
mkdir: cannot create directory '/var/lib/mysql/': File exists
This error stops the container from starting...
I went and deleted the deployment and added command: [ "/bin/sh", "-c", "sleep 100000" ] so the container would start...
After getting into the container, I checked the NFS mount is properly mounted and is writable...
# df -h | grep mysql
nfs1.example.com:/k8/default-mysql-pv-claim-pvc-0808d1bd-69ca-4ff5-825a-b846b1133e3a 1.0T 1.6G 1023G 1% /var/lib/mysql
If I create a "local" pv
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
and created the mysql deployment, the mysql pod starts up without issue.
So at this point, with dynamic provisioning (potentially just on NFS?), the mysql container doesn't work.
Anyone have any suggestions?
I'm not exactly sure what the cause of this is, so here are a few options.
First, you could try setting a securityContext, because the volume might be mounted without proper permissions.
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
You can find out the proper user and group IDs by running the id command inside the container,
or just use kubectl exec -it <pod-name> bash.
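For example, non-interactively:
kubectl exec -it <pod-name> -- id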
Second, try using subPath
volumeMounts:
- name: mysql-persistent-storage
  mountPath: "/var/lib/mysql"
  subPath: mysql
If that won't work, I would test the NFS on another pod with an initContainer that creates a directory; a sketch follows below.
And I would redo the whole nfs maybe using this guide.
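A minimal sketch of such a test pod, under the assumption that the same PVC is reused (all names here are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-write-test
spec:
  initContainers:
  - name: mkdir-test
    image: busybox
    # try to create a directory on the NFS volume to verify write access
    command: ["sh", "-c", "mkdir -p /data/testdir && touch /data/testdir/ok"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  containers:
  - name: check
    image: busybox
    # keep the pod alive so the result can be inspected with kubectl exec
    command: ["sh", "-c", "ls -la /data && sleep 3600"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: mysql-pv-claim
If the initContainer fails, the problem is the NFS export or its permissions rather than the MySQL image.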
I tried MySQL with a mounted volume to obtain persistence of /var/lib/mysql.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
kubectl logs shows the following:
sh-3.2# kubectl logs acds-catchup-db-6f7d4b6c5b-2jwds
Initializing database
2018-04-16T09:23:18.020740Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-04-16T09:23:18.022014Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2018-04-16T09:23:18.022037Z 0 [ERROR] Aborting
On investigating, I found a directory named lost+found created in the mounted volume.
Can anyone help me with this issue?
That directory is always present in the root path of the volume, so you cannot just remove it.
You have 2 options to fix it:
Add the --ignore-db-dir=lost+found option to the MySQL configuration. I think that is the better way.
Set the database directory to a path inside the volume, not the root path of the volume. You can specify the database directory with the --datadir= option; a sketch follows after the args example below.
Here is an example of the args setting for the first option:
spec:
  containers:
  - name: mysql
    image: mysql:5.6
    args: ["--ignore-db-dir=lost+found"]