Background:
Currently we're using Docker and Docker Compose for our services. We have externalized the configuration for different environments into files that define environment variables read by the application. For example, a prod.env file:
ENV_VAR_ONE=Something Prod
ENV_VAR_TWO=Something else Prod
and a test.env file:
ENV_VAR_ONE=Something Test
ENV_VAR_TWO=Something else Test
Thus we can simply use the prod.env or test.env file when starting the container:
docker run --env-file prod.env <image>
Our application then picks up its configuration based on the environment variables defined in prod.env.
Questions:
Is there a way to provide environment variables from a file in Kubernetes (for example when defining a pod) instead of hardcoding them like this:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: mysql-pod
  name: mysql-pod
spec:
  containers:
  - env:
    - name: MYSQL_USER
      value: mysql
    - name: MYSQL_PASSWORD
      value: mysql
    - name: MYSQL_DATABASE
      value: sample
    - name: MYSQL_ROOT_PASSWORD
      value: supersecret
    image: "mysql:latest"
    name: mysql
    ports:
    - containerPort: 3306
If this is not possible, what is the suggested approach?
You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not.
In your Pod definition specify that the container should pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
labels:
context: docker-k8s-lab
name: mysql-pod
name: mysql-pod
spec:
containers:
- image: "mysql:latest"
name: mysql
ports:
- containerPort: 3306
envFrom:
- secretRef:
name: mysql-secret
Note that this syntax is only available in Kubernetes 1.6 or later. On an earlier version of Kubernetes you will have to specify each value manually, e.g.:
env:
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
(Note that env takes an array as its value.)
And repeat this for every value.
Whichever approach you use, you can now define two different Secrets, one for production and one for dev.
dev-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: mysql-secret
type: Opaque
data:
MYSQL_USER: bXlzcWwK
MYSQL_PASSWORD: bXlzcWwK
MYSQL_DATABASE: c2FtcGxlCg==
MYSQL_ROOT_PASSWORD: c3VwZXJzZWNyZXQK
prod-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: mysql-secret
type: Opaque
data:
MYSQL_USER: am9obgo=
MYSQL_PASSWORD: c2VjdXJlCg==
MYSQL_DATABASE: cHJvZC1kYgo=
MYSQL_ROOT_PASSWORD: cm9vdHkK
And deploy the correct secret to the correct Kubernetes cluster:
kubectl config use-context dev
kubectl create -f dev-secret.yaml
kubectl config use-context prod
kubectl create -f prod-secret.yaml
Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
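To confirm the wiring, you can inspect the environment of the running container (a quick check, assuming the Pod and Secret names used above):
kubectl exec mysql-pod -- env | grep MYSQL_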
An update to Kubernetes (v1.6) now allows what you asked for (years ago).
You can now use envFrom like this in your YAML file:
containers:
- name: django
image: image/name
envFrom:
- secretRef:
name: prod-secrets
Here prod-secrets is your Secret; you can create it with:
kubectl create secret generic prod-secrets --from-env-file=prod/env.txt
where the txt file contains key-value pairs:
DB_USER=username_here
DB_PASSWORD=password_here
The docs still lack examples; I had to search really hard in these places:
The Secrets docs: search for --from-file, which shows that this option is available.
The equivalent ConfigMap docs show an example of how to use it.
Note: there's a difference between --from-file and --from-env-file when creating secret as described in the comments below.
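For completeness, the non-sensitive equivalent with a ConfigMap looks very similar; this is a minimal sketch assuming a ConfigMap named prod-config created from the same env file:
kubectl create configmap prod-config --from-env-file=prod/env.txt
# and in the container spec:
envFrom:
- configMapRef:
    name: prod-config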
When defining a pod for Kubernetes using a YAML file, there's no direct way to specify a different file containing environment variables for a container. The Kubernetes project says they will improve this area in the future (see Kubernetes docs).
In the meantime, I suggest using a provisioning tool and making the pod YAML a template. For example, using Ansible your pod YAML file would look like:
file my-pod.yaml.template:
apiVersion: v1
kind: Pod
...
spec:
containers:
...
env:
- name: MYSQL_ROOT_PASSWORD
value: {{ mysql_root_password }}
...
Then your Ansible playbook can specify the variable mysql_root_password somewhere convenient, and substitute it when creating the resource, for example:
file my-playbook.yaml:
- hosts: my_hosts
vars_files:
- my-env-vars-{{ deploy_to }}.yaml
tasks:
- name: create pod YAML from template
template: src=my-pod.yaml.template dest=my-pod.yaml
- name: create pod in Kubernetes
command: kubectl create -f my-pod.yaml
file my-env-vars-prod.yaml:
mysql_root_password: supersecret
file my-env-vars-test.yaml:
mysql_root_password: notsosecret
Now you create the pod resource by running, for example:
ansible-playbook -e deploy_to=test my-playbook.yaml
This works for me:
file env-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: env-secret
type: Opaque
stringData:
.env: |-
APP_NAME=Laravel
APP_ENV=local
and in the deployment.yaml or pod.yaml:
spec:
...
volumeMounts:
- name: foo
mountPath: "/var/www/html/.env"
subPath: .env
volumes:
- name: foo
secret:
secretName: env-secret
I smashed my head against this for two hours before finding a very simple solution in the docs that minimizes my (and hopefully your) pain.
Keep env.prod, env.dev as you have them.
Use a one-liner to import them into a ConfigMap:
kubectl create configmap my-dev-config --from-env-file=env.dev
kubectl create configmap my-prod-config --from-env-file=env.prod
You can see the result (for instant gratification):
# You can also save this to disk
kubectl get configmap my-dev-config -o yaml
As a Rubyist, I personally find this solution the DRYest, as you have a single point of maintenance (the env bash file, which is also compatible with Python/Ruby libraries, etc.) and you then YAMLize it in a single execution.
Note that you need to keep your env file clean (I had a lot of comments which prevented this from working, so I had to prepend a cat config.original | egrep -v "^#" | tee config.cleaned step), but this doesn't change the complexity substantially.
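For reference, the comment-stripping step described above would look something like this (a sketch; config.original and config.cleaned are just the placeholder file names from the paragraph above):
cat config.original | egrep -v "^#" | tee config.cleaned
kubectl create configmap my-dev-config --from-env-file=config.cleaned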
It's all documented here
This is an old question, but let me describe my answer for future beginners.
You can use kustomize configMapGenerator.
configMapGenerator:
- name: example
env: dev.env
and then reference this ConfigMap (example) in the pod definition, as shown in the sketch below.
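A minimal sketch of how the pieces fit together (file and image names are illustrative; newer kustomize versions prefer the plural envs list over the singular env field):
# kustomization.yaml
resources:
- pod.yaml
configMapGenerator:
- name: example
  envs:
  - dev.env

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: example-image
    envFrom:
    - configMapRef:
        name: example  # kustomize appends a content hash and rewrites this reference
Build and apply it with kubectl apply -k . (or kustomize build .).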
This is an old question, but it has a lot of viewers, so I'll add my answer.
The best way to separate the configuration from the Kubernetes implementation is to use Helm. Each Helm chart can have a values.yaml file, and we can easily use those values in the Helm templates. If we have a multi-component topology, we can create an umbrella Helm chart whose parent values file can also override the children's values files.
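As a rough sketch of that idea (the chart layout, value names, and release name are assumptions, not a definitive setup), the environment-specific values file feeds a templated ConfigMap:
# values-prod.yaml
env:
  ENV_VAR_ONE: "Something Prod"
  ENV_VAR_TWO: "Something else Prod"

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-env
data:
{{- range $key, $value := .Values.env }}
  {{ $key }}: {{ $value | quote }}
{{- end }}
You would then install per environment with something like helm install my-app ./chart -f values-prod.yaml.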
You can reference ConfigMap and Secret values by specifying them in your container as environment variables.
Let your deployment be mongo.yml, as follows:
...
kind: Deployment
...
containers:
...
env:
- name: DB_URL
valueFrom:
configMapKeyRef:
name: mongo-config
key: mongo-url
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-password
where mongo-secret is used for sensitive data, e.g. passwords or certificates:
apiVersion: v1
kind: Secret
metadata:
name: mongo-secret
type: Opaque
data:
mongo-user: bW9uZ291c2Vy
mongo-password: bW9uZ29wYXNzd29yZA==
and mongo-config is used for non-sensitive data
apiVersion: v1
kind: ConfigMap
metadata:
name: mongo-config
data:
mongo-url: mongo-service
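For reference, the base64 values above decode to mongouser and mongopassword; the same objects can also be created imperatively (a sketch using the names from the manifests):
kubectl create secret generic mongo-secret \
  --from-literal=mongo-user=mongouser \
  --from-literal=mongo-password=mongopassword
kubectl create configmap mongo-config --from-literal=mongo-url=mongo-service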
You could try these steps:
This command will store your prod.env file in a Secret, under the key .env (so that it can later be mounted as a .env file):
kubectl create secret generic env-prod --from-file=.env=prod.env
Then you can reference this Secret in your deployment.yaml like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-identity
labels:
app.kubernetes.io/name: api-identity
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: api-identity
template:
metadata:
labels:
app.kubernetes.io/name: api-identity
spec:
imagePullSecrets:
- name: docker-registry-credential
containers:
- name: api-identity
image: "api-identity:test"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8005
protocol: TCP
volumeMounts:
- name: env-file-vol
mountPath: "/var/www/html/.env"
subPath: .env
volumes:
- name: env-file-vol
secret:
secretName: env-prod
Currently I have an InitContainer that creates two directories.
initContainers:
- name: files-init
image: quay.io/quay/busybox
command:
- "/bin/mkdir"
args:
- "-p"
- "/files/test1"
- "/files/test2"
volumeMounts:
- mountPath: /files
name: files
Right now it only creates two directories, but in the future there will be more, and it will become harder to read and manage.
Is it possible to use an input file, for example named files.txt, with the following lines:
/files/test1
/files/test2
and have that read out with something like this?
command:
- "/bin/mkdir"
args:
- "-p $files.txt"
Can anyone help me with this? Do I need to use ConfigMaps for this, and if so, how?
Use a ConfigMap to load files into a pod. Below is a sample; also refer to the Kubernetes documentation for other options.
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo
volumes:
- name: foo
configMap:
name: myconfigmap
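To drive the mkdir calls from a file, as the question asks, one option (a sketch that reuses the names from the question and the answer above) is to put files.txt into the ConfigMap and have the init container read it with xargs:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap
data:
  files.txt: |
    /files/test1
    /files/test2

# init container in the pod spec:
initContainers:
- name: files-init
  image: quay.io/quay/busybox
  command: ["/bin/sh", "-c", "xargs mkdir -p < /etc/foo/files.txt"]
  volumeMounts:
  - mountPath: /files
    name: files
  - mountPath: /etc/foo
    name: foo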
My overall goal is to create MySQL users (in addition to root) automatically after the deployment in Kubernetes.
I found the following resources:
How to create mysql users and database during deployment of mysql in kubernetes?
Add another user to MySQL in Kubernetes
People suggested that .sql scripts can be mounted to docker-entrypoint-initdb.d with a ConfigMap to create these users. In order to do that, I have to put the password of these users in this script in plain text. This is a potential security issue. Thus, I want to store MySQL usernames and passwords as Kubernetes Secrets.
This is my ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config
labels:
app: mysql-image-db
data:
initdb.sql: |-
CREATE USER '<user>'@'%' IDENTIFIED BY '<password>';
How can I access the associated Kubernetes secrets within this ConfigMap?
I am finally able to provide a solution to my own question. Since PjoterS made me aware that you can mount Secrets into a Pod as a volume, I came up with the following solution.
This is the ConfigMap for the user creation script:
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-init-script
labels:
app: mysql-image-db
data:
init-user.sh: |-
#!/bin/bash
sleep 30s
mysql -u root -p"$(cat /etc/mysql/credentials/root_password)" -e \
"CREATE USER '$(cat /etc/mysql/credentials/user_1)'#'%' IDENTIFIED BY '$(cat /etc/mysql/credentials/password_1)';"
To make this work, I needed to mount the ConfigMap and the Secret as volumes in my Deployment and add a postStart lifecycle hook to execute the user creation script.
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-image-db
spec:
selector:
matchLabels:
app: mysql-image-db
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-image-db
spec:
containers:
- image: mysql:8.0
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: root_password
name: mysql-user-credentials
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-volume
mountPath: /var/lib/mysql
- name: mysql-config-volume
mountPath: /etc/mysql/conf.d
- name: mysql-init-script-volume
mountPath: /etc/mysql/init
- name: mysql-credentials-volume
mountPath: /etc/mysql/credentials
lifecycle:
postStart:
exec:
command: ["/bin/bash", "-c", "/etc/mysql/init/init-user.sh"]
volumes:
- name: mysql-persistent-volume
persistentVolumeClaim:
claimName: mysql-volume-claim
- name: mysql-config-volume
configMap:
name: mysql-config
- name: mysql-init-script-volume
configMap:
name: mysql-init-script
defaultMode: 0777
- name: mysql-credentials-volume
secret:
secretName: mysql-user-credentials
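For completeness, the mysql-user-credentials Secret referenced above could be created along these lines (the key names match the paths the script reads; the values are placeholders):
kubectl create secret generic mysql-user-credentials \
  --from-literal=root_password=<root-password> \
  --from-literal=user_1=<username> \
  --from-literal=password_1=<password>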
I just want to use a customized standalone-full.xml in the RHPAM KIE Server pod running in OpenShift. I have created the ConfigMap from the file but am not sure how to set it.
I created the ConfigMap like this:
oc create configmap my-config --from-file=standalone-full.xml
and edited the DeploymentConfig of the RHPAM KIE Server:
volumeMounts:
- name: config-volume
mountPath: /opt/eap/standalone/configuration
volumes:
- name: config-volume
configMap:
name: my-config
It starts a new container with status ContainerCreating and then fails with an error (scaling down 1 to 0).
Am I setting the ConfigMap correctly?
You can mount the configmap in a pod as a volume. Here's a good example: just add 'volumes' (to specify configmap as a volume) and 'volumeMounts' (to specify mount point) blocks in the pod's spec:
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: special-config
restartPolicy: Never
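One caveat for the KIE Server case above: a ConfigMap volume mounted at /opt/eap/standalone/configuration replaces the entire directory, hiding the other files the server expects there, which can cause exactly this kind of startup failure. A subPath mount of just the one file avoids that; a sketch, assuming the ConfigMap key is standalone-full.xml:
volumeMounts:
- name: config-volume
  mountPath: /opt/eap/standalone/configuration/standalone-full.xml
  subPath: standalone-full.xml
volumes:
- name: config-volume
  configMap:
    name: my-config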
I have this pod specification:
apiVersion: v1
kind: Pod
metadata:
name: wp
spec:
containers:
- image: wordpress:4.9-apache
name: wordpress
env:
- name: WORDPRESS_DB_PASSWORD
value: mysqlpwd
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- image: mysql:5.7
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: mysqlpwd
volumeMounts:
- name: data
mountPath: /var/lib/mysql
volumes:
- name: data
emptyDir: {}
I deployed it using:
kubectl create -f wordpress-pod.yaml
Now it is correctly deployed:
kubectl get pods
wp 2/2 Running 3 35h
Then when I do:
kubectl describe po/wp
Name: wp
Namespace: default
Priority: 0
Node: node3/192.168.50.12
Start Time: Mon, 13 Jan 2020 23:27:16 +0100
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.233.92.7
IPs:
IP: 10.233.92.7
Containers:
My issue is that I cannot access the app:
wget http://192.168.50.12:8080/wp-admin/install.php
Connecting to 192.168.50.12:8080... failed: Connection refused.
Neither does wget http://10.233.92.7:8080/wp-admin/install.php work.
Is there any issue in the pod description or deployment process?
Thanks
With your current setup you need to run wget http://10.233.92.7:8080/wp-admin/install.php from within the cluster, i.e. by performing kubectl exec into another pod, because the 10.233.92.7 IP is valid only within the cluster.
You should create a Service to expose your pod. Create a ClusterIP type Service (the default) for access from within the cluster. If you want access from outside the cluster, i.e. from your desktop, then create a NodePort or LoadBalancer type Service.
Another way to access the application from your desktop is port forwarding; in that case you don't need to create a Service (see the sketch below).
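A minimal port-forwarding sketch for this Pod, assuming the wordpress container listens on port 80 (the default for the wordpress:4.9-apache image):
kubectl port-forward pod/wp 8080:80
# in another terminal:
wget http://localhost:8080/wp-admin/install.php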
Here is a tutorial for accessing pods using a NodePort service. In this case your node needs to have a public IP.
The problem with your configuration is the lack of a Service that would allow external access to your WordPress.
There are a lot of materials explaining what the options are and how they are tightly coupled to the infrastructure that Kubernetes runs on.
Let me elaborate on 3 of them:
minikube
kubeadm
cloud provisioned (GKE, EKS, AKS)
The base of the WordPress configuration will be the same in each case.
Table of contents:
Running MySQL
Secret
PersistentVolumeClaim
Deployment
Service
Running WordPress
PersistentVolumeClaim
Deployment
Allowing external access
minikube
kubeadm
cloud provisioned (GKE)
There is a good tutorial on the Kubernetes site: HERE!
Running MySQL
Secret
As per the official Kubernetes documentation:
Kubernetes secret objects let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
-- Kubernetes secrets
Example below is a YAML definition of a secret used for MySQL password:
apiVersion: v1
kind: Secret
metadata:
name: mysql-password
type: Opaque
data:
password: c3VwZXJoYXJkcGFzc3dvcmQK
Take a specific look at:
password: c3VwZXJoYXJkcGFzc3dvcmQK
This password is base64 encoded.
To create this password, invoke this command from your terminal:
$ echo "YOUR_PASSWORD" | base64
Note that plain echo appends a trailing newline, which gets base64-encoded along with the password; use echo -n if you don't want the newline included in the Secret.
Paste the output into the YAML definition and apply it with:
$ kubectl apply -f FILE_NAME.
You can check if it was created correctly with:
$ kubectl get secret mysql-password -o yaml
PersistentVolumeClaim
MySQL requires dedicated space for storing its data. There is official documentation explaining this: Persistent Volumes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
The above YAML will create a storage claim for MySQL. Apply it with the command:
$ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of a deployment from the official example and adjust it if there were any changes to the names of the objects:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
Take a specific look at the part below, which passes the secret password to the MySQL pod:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: password
Apply it with command: $ kubectl apply -f FILE_NAME.
Service
What was missing in your configuration was Service objects. These objects allow communication with other pods, external traffic, etc. Look at the example below:
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
This definition will create an object that points to the MySQL pod.
It will create a DNS entry named wordpress-mysql that resolves to the pod's IP address.
It will not be exposed to external traffic, as that is not needed.
Apply it with command: $ kubectl apply -f FILE_NAME.
Running WordPress
PersistentVolumeClaim
Like MySQL, WordPress requires dedicated space for storing its data. Create it with the example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
Apply it with command: $ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition for WordPress as in the example below:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:4.8-apache
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: password
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim
Take a specific look at:
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: password
This part passes the secret value to the deployment.
The definition below tells WordPress where MySQL is located:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
Apply it with command: $ kubectl apply -f FILE_NAME.
Allowing external access
There are many different approaches for configuring external access to applications.
Minikube
Configuration could differ between different hypervisors.
For example Minikube can expose WordPress to external traffic with:
NodePort
apiVersion: v1
kind: Service
metadata:
name: wordpress-nodeport
spec:
type: NodePort
selector:
app: wordpress
tier: frontend
ports:
- name: wordpress-port
protocol: TCP
port: 80
targetPort: 80
After applying this definition, you will need to enter the minikube IP address with the appropriate port in your web browser.
The port can be found with:
$ kubectl get svc wordpress-nodeport
Output of above command:
wordpress-nodeport NodePort 10.76.9.15 <none> 80:30173/TCP 8s
In this case it is 30173.
LoadBalancer
In this case it will also create a NodePort!
apiVersion: v1
kind: Service
metadata:
name: wordpress-loadbalancer
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: LoadBalancer
Ingress resource
Please refer to this link: Minikube: create-an-ingress-resource
Also you can refer to this Stack Overflow post
Kubeadm
With Kubernetes clusters provisioned by kubeadm, the options are:
NodePort
The configuration process is the same as in minikube. The only difference is that it will create a NodePort on each and every node in the cluster. After that, you can enter the IP address of any of the nodes with the appropriate port. Be aware that you will need to be on the same network, without a firewall blocking your access.
LoadBalancer
You can create a LoadBalancer object with the same YAML definition as in minikube. The problem is that with kubeadm provisioning on a bare-metal cluster, the LoadBalancer will not get an IP address. One of the options here is MetalLB.
Ingress
Ingress resources share the same problem as LoadBalancer in kubeadm-provisioned infrastructure. As above, one of the options is MetalLB.
Cloud Provisioned
There are many options, which are closely tied to the cloud that Kubernetes runs on. Below is an example of configuring an Ingress resource with the NGINX controller on GKE:
Apply both of the YAML definitions:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Apply the NodePort definition from the minikube section.
Create an Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: wordpress-nodeport
servicePort: wordpress-port
Apply it with command: $ kubectl apply -f FILE_NAME.
Check whether the Ingress resource got an address from the cloud provider with:
$ kubectl get ingress
The output should look like that:
NAME HOSTS ADDRESS PORTS AGE
ingress * XXX.XXX.XXX.X 80 26m
After entering the IP address from the above command, you should get the WordPress setup page.
The cloud-provisioned example can also be used for kubeadm-provisioned clusters with MetalLB configured.
I am working through the persistent disks tutorial found here while also creating it as a StatefulSet instead of a deployment.
When I apply the YAML file on GKE, the database fails to start; looking at the logs, it shows the following error:
[ERROR] --initialize specified but the data directory has files in it. Aborting.
Is it possible to inspect the volume created to see what is in the directory? Otherwise, what am I doing wrong that is causing the disk to be non empty?
Thanks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: datalayer-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
name: datalayer-svc
labels:
app: myapplication
spec:
ports:
- port: 80
name: dbadmin
clusterIP: None
selector:
app: database
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: datalayer
spec:
selector:
matchLabels:
app: myapplication
serviceName: "datalayer-svc"
replicas: 1
template:
metadata:
labels:
app: myapplication
spec:
terminationGracePeriodSeconds: 10
containers:
- name: database
image: mysql:5.7.22
env:
- name: "MYSQL_ROOT_PASSWORD"
valueFrom:
secretKeyRef:
name: mysql-root-password
key: password
- name: "MYSQL_DATABASE"
value: "appdatabase"
- name: "MYSQL_USER"
value: "app_user"
- name: "MYSQL_PASSWORD"
value: "app_password"
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: datalayer-pv
mountPath: /var/lib/mysql
volumes:
- name: datalayer-pv
persistentVolumeClaim:
claimName: datalayer-pvc
This issue could be caused by the lost+found directory on the filesystem of the PersistentVolume.
I was able to verify this by adding a k8s.gcr.io/busybox container (in PVC set accessModes: [ReadWriteMany], OR comment out the database container):
- name: init
image: "k8s.gcr.io/busybox"
command: ["/bin/sh","-c","ls -l /var/lib/mysql"]
volumeMounts:
- name: database
mountPath: "/var/lib/mysql"
There are a few potential workarounds...
Most preferable is to use a subPath on the volumeMounts object. This uses a subdirectory of the PersistentVolume, which should be empty at creation time, instead of the volume root:
volumeMounts:
- name: database
mountPath: "/var/lib/mysql"
subPath: mysql
Less preferable workarounds include:
Use a one-time container to rm -rf /var/lib/mysql/lost+found (not a great solution, because the directory is managed by the filesystem and is likely to re-appear)
Use mysql:5 image, and add args: ["--ignore-db-dir=lost+found"] to the container (this option was removed in mysql 8)
Use mariadb image instead of mysql
More details might be available at docker-library/mysql issues: #69 and #186
You would usually check whether your volumes were mounted with:
kubectl get pods # gets you all the pods on the default namespace
# and
kubectl describe pod <pod-created-by-your-statefulset>
Then you can use these commands to check on your PVs and PVCs:
kubectl get pv # gets all the PVs on the default namespace
kubectl get pvc # same for PVCs
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name>
Then you can go to the GCP console, under Disks, and see if your disks got created.
You can add the --ignore-db-dir argument for lost+found:
containers:
- image: mysql:5.7
name: mysql
args:
- "--ignore-db-dir=lost+found"
I use an init container to remove that directory (I'm using Postgres in my case):
initContainers:
- name: busybox
image: busybox:latest
args: ["rm", "-rf", "/var/lib/postgresql/data/lost+found"]
volumeMounts:
- name: nfsv-db
mountPath: /var/lib/postgresql/data