Add another user to MySQL in Kubernetes - mysql

Here is my MySQL Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: abc-def-my-mysql
  namespace: abc-sk-test
  labels:
    project: abc
    ca: my
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: abc-def-my-mysql
        project: abc
        ca: my
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        args: ["--default-authentication-plugin=mysql_native_password", "--ignore-db-dir=lost+found"]
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: MYSQL_DATABASE
          value: "my_abc"
        - name: MYSQL_USER
          value: "test_user"
        - name: MYSQL_PASSWORD
          value: "12345"
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: abc-def-my-mysql-storage
      volumes:
      - name: abc-def-my-mysql-storage
        persistentVolumeClaim:
          claimName: abc-def-my-mysql-pvc
I would like to add another user to MySQL so real users can connect to it, instead of using "test_user". How can I add another user? Is it like adding any other environment variable to the above config?

Mount a "create user script" into container's /docker-entrypoint-initdb.d directory. it will be executed once, at first pod start.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql
    .....
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "root"
    .....
    volumeMounts:
    - name: mysql-initdb
      mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: mysql-initdb
    configMap:
      name: initdb
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb
data:
  initdb.sql: |-
    CREATE USER 'first_user'@'%' IDENTIFIED BY '111' ;
    CREATE USER 'second_user'@'%' IDENTIFIED BY '222' ;
Test:
kubectl exec -it <PODNAME> -- mysql -uroot -p -e 'SELECT user, host FROM mysql.user;'
+-------------+------+
| user        | host |
+-------------+------+
| first_user  | %    |
| second_user | %    |
| root        | %    |
+-------------+------+
See "Initializing a fresh instance" in the MySQL Docker Hub image documentation:
When a container is started for the first time, a new database with
the specified name will be created and initialized with the provided
configuration variables. Furthermore, it will execute files with
extensions .sh, .sql and .sql.gz that are found in
/docker-entrypoint-initdb.d. Files will be executed in alphabetical
order.
You can easily populate your mysql services by mounting a SQL
dump into that directory and provide custom images with contributed
data. SQL files will be imported by default to the database specified
by the MYSQL_DATABASE variable.

Depending on the user life-cycle, you can create users either at container startup, through MySQL's Docker init scripts mounted at /docker-entrypoint-initdb.d, or from the CLI after the server has started.
With a startup script, you may have to take care of multiple containers running the same script, e.g. by making user creation idempotent (see the sketch below).
The CLI option may be more suitable if the DB server is long-lived and you will get requests to create more users after the server has been created.
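For reference, a minimal sketch of both options, assuming the database (my_abc) and the mysql:5.6 image from the question; the user name and password are placeholders:
-- initdb.sql, mounted under /docker-entrypoint-initdb.d (runs only on first init).
-- On MySQL 5.6, GRANT ... IDENTIFIED BY creates the user if it does not exist yet;
-- on 5.7+ you could use CREATE USER IF NOT EXISTS instead.
GRANT ALL PRIVILEGES ON my_abc.* TO 'real_user'@'%' IDENTIFIED BY 'change-me';
FLUSH PRIVILEGES;
Or, on a running server, create the user from the CLI:
kubectl exec -it <PODNAME> -- mysql -uroot -p \
  -e "GRANT ALL PRIVILEGES ON my_abc.* TO 'real_user'@'%' IDENTIFIED BY 'change-me'; FLUSH PRIVILEGES;"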

Related

how to overcome the readonly filesystem error for a secret that is mounted in mysql image

I'm working on setting up a mysql instance in K8s cluster with TLS support for the client connection.
For that I have setup a cert-manager to issue the self-signed cert. I can see ca.crt, tls.key, tls.crt created in the secrets within my mysql namespace successfully. I followed the following article https://www.jetstack.io/blog/securing-mysql-with-cert-manager/
Now to use this cert, my plan is to place the cert in the /var/lib/mysql directory and update the mysql.conf file using config map. Here is how the mysql.yaml pod spec looks.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  mysql.cnf: |-
    [mysqld]
    ssl-ca=/var/lib/mysql/ca.crt
    ssl-cert=/var/lib/mysql/tls.crt
    ssl-key=/var/lib/mysql/tls.key
    require_secure_transport=ON
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  # securityContext:
  #   runAsUser: 0
  containers:
  - image: mysql:5.7
    name: mysql
    resources: {}
    env:
    # Use secret in real usage
    - name: MYSQL_ROOT_PASSWORD
      value: password
    ports:
    - containerPort: 3306
      name: mysql
    volumeMounts:
    - name: mysql-cert-secret
      #mountPath: /app/ca.crt
      mountPath: /var/lib/mysql/ca.crt
      subPath: ca.crt
    - name: mysql-cert-secret
      mountPath: /var/lib/mysql/tls.crt
      #mountPath: /app/tls.crt
      subPath: tls.crt
    - name: mysql-cert-secret
      mountPath: /var/lib/mysql/tls.key
      #mountPath: /app/tls.key
      subPath: tls.key
    - name: config-map-mysqlconf
      mountPath: /etc/mysql/mysql.conf
  volumes:
  - name: mysql-cert-secret
    secret:
      secretName: mysql-server-tls
  - name: config-map-mysqlconf
    configMap:
      name: mysql-config
If I update the mount path to, say, /app/ca.crt, then mounting works and I can see the certs when I open a shell in the container. But for the /var/lib/mysql/* mounts I get the following error.
[error screenshot]
I tried using the securityContext but it didn't help since the directory is accessible by both root and mysql user. Any help would be greatly appreciated. If there is a better way to get this done, I'm happy to try that as well.
This is all done locally using KinD cluster.
Thank you
MySQL stores its DB files in /var/lib/mysql by default, and the image's entrypoint will attempt to set the ownership of that directory to the mysql user.
Any attempt to update a secret volume will result in an error rather than a successful change, as secret volumes are read-only projections into the Pod's filesystem. I think that's why the article you followed never suggests using /var/lib/mysql as the mount directory.
If you still want to attempt this, you can try changing the default DB storage location to something other than /var/lib/mysql in /etc/my.cnf, or adjusting the defaultMode of that volume. But I'm not sure it will work or that there won't be other issues. A simpler alternative is to mount the certificates somewhere outside the data directory.
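For illustration, a minimal sketch of that alternative, keeping the data directory untouched by mounting the whole secret under a separate path (the /etc/mysql/tls location is my assumption, not from the article):
# ConfigMap: point MySQL at certs outside the data directory
  mysql.cnf: |-
    [mysqld]
    ssl-ca=/etc/mysql/tls/ca.crt
    ssl-cert=/etc/mysql/tls/tls.crt
    ssl-key=/etc/mysql/tls/tls.key
    require_secure_transport=ON
# Pod spec: one read-only mount for the whole secret instead of three subPath mounts
    volumeMounts:
    - name: mysql-cert-secret
      mountPath: /etc/mysql/tls
      readOnly: true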

kubernetes mysql statefulset not taking new password, though password changes in env

I have created a StatefulSet of MySQL using the below YAML with this command:
kubectl apply -f mysql-statefulset.yaml
Yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka"
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
After that, 3 pods were created, and for each of them a PVC and PV. I successfully entered one of the pods using:
kubectl exec -it mysql-sts-0 sh
and then logged in to MySQL using:
mysql -u root -p
After giving this command, the prompt:
Enter password:
came and I entered the password:
okaoka
and could successfully log in. After that I exited the pod.
Then I deleted the StatefulSet (as expected, the PVCs and PVs were still there even after the deletion). After that I applied a new YAML, slightly changing the previous one: I changed the password, giving a new password:
okaoka1234
The rest of the YAML was the same as before. The YAML is given below; after applying it (only the password changed) with:
kubectl apply -f mysql-statefulset.yaml
it successfully created the StatefulSet and 3 new pods (which bound to the previous PVCs and PVs, as expected).
Changed Yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka1234" # here is the change
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
Now the problem is when I again entered a pod using:
kubectl exec -it mysql-sts-0 sh
then used:
mysql -u root -p
and again the:
Enter password:
came and this time when I gave my new password:
okaoka1234
it gave access denied.
When I printed the env (inside the pod) using:
printenv
then I could see that:
MYSQL_ROOT_PASSWORD=okaoka1234
which means the environment variable changed and took the new password, but I could not log in with the new password.
The interesting thing is that I could log in by giving my previous password okaoka. I don't know why it accepts the previous password in this scenario and not the new one, which is even in the env inside the pod. Can anybody explain the logic behind this?
Most probably, the image that you are using in your StatefulSet uses the environment variable only to initialize the password when it creates the database structure on the persisted storage (its PVC) for the first time.
Given that the PVCs and PVs are the same as in the previous installation, that step is skipped and the database password is not updated, since the database structure is already present in the existing PVC.
After all, the root user is just a user of the database; its password is stored in the database. Unless the image applies some particular logic at startup in its entrypoint, it makes sense that the password remains the same.
What image are you using? The docker hub mysql image or a custom one?
Update
Given the fact that you are using the mysql image on docker hub, let me quote a piece of the entrypoint (https://github.com/docker-library/mysql/blob/master/template/docker-entrypoint.sh)
# there's no database, so it needs to be initialized
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
  docker_verify_minimum_env
  # check dir permissions to reduce likelihood of half-initialized database
  ls /docker-entrypoint-initdb.d/ > /dev/null
  docker_init_database_dir "$@"
  mysql_note "Starting temporary server"
  docker_temp_server_start "$@"
  mysql_note "Temporary server started."
  docker_setup_db
  docker_process_init_files /docker-entrypoint-initdb.d/*
  mysql_expire_root_user
  mysql_note "Stopping temporary server"
  docker_temp_server_stop
  mysql_note "Temporary server stopped"
  echo
  mysql_note "MySQL init process done. Ready for start up."
  echo
fi
When the container starts, it makes some checks, and if no database is found (the database is expected to be on the path where the persisted PVC is mounted), a series of operations is performed: creating it, creating the default users and so on.
Only in this case is the root user created with the password specified in the environment (inside the function docker_setup_db).
Should a database already be available in the persisted path, which is your case since you let it mount the previous PVC, there is no initialization: the database already exists.
Everything in Kubernetes is working as expected, this is just the behaviour of the database and of the mysql image. The environment variable is used only for initialization, from what I can see in the entrypoint.
It is left to the root user to manually change the password, if desired, using a mysql client (see the sketch below).
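For example, a rough sketch of rotating the password manually in one of the pods (the 'root'@'%' account is what the official image creates by default; with 3 replicas and no replication configured, each pod has its own data volume, so this would have to be repeated per pod):
kubectl exec -it mysql-sts-0 -- mysql -uroot -pokaoka \
  -e "ALTER USER 'root'@'%' IDENTIFIED BY 'okaoka1234';"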

import mysql data to kubernetes pod

Does anyone know how to import the data inside my dump.sql file into a Kubernetes pod, either:
Directly, the same way as you would with Docker containers:
docker exec -i container_name mysql -uroot --password=secret database < Dump.sql
Or by using the data stored in an existing Docker container volume and passing it to the pod.
Just in case other people are searching for this:
kubectl -n namespace exec -i my_sql_pod_name -- mysql -u user -ppassword < my_local_dump.sql
To answer your specific question:
You can kubectl exec into your container in order to run commands inside it. You may need to first ensure that the container has access to the file, by perhaps storing it in a location that the cluster can access (network?) and then using wget/curl within the container to make it available. One may even open up an interactive session with kubectl exec.
However, the ways to do this in increasing measure of generality would be:
Create a service that lets you access the mysql instance running on the pod from outside the cluster and connect your local mysql client to it (a quick variant of this is sketched after this list).
If you are executing this initialization operation every time such a mysql pod is being started, it could be stored on a persistent volume and you could execute the script within your pod when you start up.
If you have several pieces of data that you typically need to copy over when starting the pod, look at init containers for fetching that data.
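As a rough sketch of the first option, using kubectl port-forward in place of a full Service (pod name, credentials and database name are placeholders):
kubectl port-forward pod/my_sql_pod_name 3306:3306 &
mysql -h 127.0.0.1 -P 3306 -u root -p my_database < Dump.sql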
TL;DR
Use a ConfigMap and mount that ConfigMap into the /docker-entrypoint-initdb.d folder.
Code
MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: dbpassword11
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: usermanagement-dbcreation-script
          mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer to "Initializing a fresh instance"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: ebs-mysql-pv-claim
      - name: usermanagement-dbcreation-script
        configMap:
          name: usermanagement-dbcreation-script
MySQL ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS usermgmt;
    CREATE DATABASE usermgmt;
Reference:
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/04-mysql-deployment.yml
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/03-UserManagement-ConfigMap.yml

Kubernetes + MySQL : Creating custom database and user in a Kubernetes container

I am trying to create a Django + MySQL app using Google Container Engine and Kubernetes. Following the docs from the official MySQL Docker image and the Kubernetes docs for creating a MySQL container, I have created the following replication controller:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - image: mysql:5.6.33
        name: mysql
        env:
        # Root password is compulsory
        - name: "MYSQL_ROOT_PASSWORD"
          value: "root_password"
        - name: "MYSQL_DATABASE"
          value: "custom_db"
        - name: "MYSQL_USER"
          value: "custom_user"
        - name: "MYSQL_PASSWORD"
          value: "custom_password"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        # This name must match the volumes.name below.
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        gcePersistentDisk:
          # This disk must already exist.
          pdName: mysql-disk
          fsType: ext4
According to the docs, by passing the environment variables MYSQL_DATABASE, MYSQL_USER and MYSQL_PASSWORD, a new user will be created with that password and granted rights on the newly created database. But this does not happen. When I SSH into that container, the root password is set, but neither the user nor the database is created.
I have tested this by running locally and passing the same environment variables like this
docker run -d --name some-mysql \
-e MYSQL_USER="custom_user" \
-e MYSQL_DATABASE="custom_db" \
-e MYSQL_ROOT_PASSWORD="root_password" \
-e MYSQL_PASSWORD="custom_password" \
mysql
When I SSH into that container, the database and users are created and everything works fine.
I am not sure what I am doing wrong here. Could anyone please point out my mistake. I have been at this the whole day.
EDIT: 20-sept-2016
As Requested
@Julien Du Bois
The disk is created. It appears in the cloud console, and when I run the describe command I get the following output.
Command : gcloud compute disks describe mysql-disk
Result:
creationTimestamp: '2016-09-16T01:06:23.380-07:00'
id: '4673615691045542160'
kind: compute#disk
lastAttachTimestamp: '2016-09-19T06:11:23.297-07:00'
lastDetachTimestamp: '2016-09-19T05:48:14.320-07:00'
name: mysql-disk
selfLink: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/disks/mysql-disk
sizeGb: '20'
status: READY
type: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/diskTypes/pd-standard
users:
- https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/instances/gke-cluster-1-default-pool-e0f09576-zvh5
zone: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>
I referred to a lot of tutorials and Google Cloud examples. To run the MySQL Docker container locally, my main reference was the official image page on Docker Hub:
https://hub.docker.com/_/mysql/
This works for me, and locally the created container has a new database and a user with the right privileges.
For Kubernetes, my main reference was the following:
https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
I am just trying to connect to it from a Django container.
I was facing the same issue when I was using volumes and mounting them to mysql pods.
As mentioned in the documentation of mysql's docker image:
When you start the mysql image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the docker run command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
So after spinning my wheels, I managed to solve the problem by changing the hostPath of the volume I was creating from "/data/mysql-pv-volume" to "/var/lib/mysql".
Here is a code snippet that might help create the volumes
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete # For development purposes only
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/var/lib/mysql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Hope that helped.
You set mysql-disk in your deployment and the disk you have is custom-disk. Change pdName to custom-disk and it will work.

Kubernetes equivalent of env-file in Docker

Background:
Currently we're using Docker and Docker Compose for our services. We have externalized the configuration for different environments into files that define environment variables read by the application. For example a prod.env file:
ENV_VAR_ONE=Something Prod
ENV_VAR_TWO=Something else Prod
and a test.env file:
ENV_VAR_ONE=Something Test
ENV_VAR_TWO=Something else Test
Thus we can simply use the prod.env or test.env file when starting the container:
docker run --env-file prod.env <image>
Our application then picks up its configuration based on the environment variables defined in prod.env.
Questions:
Is there a way to provide environment variables from a file in Kubernetes (for example when defining a pod) instead of hardcoding them like this:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: mysql-pod
  name: mysql-pod
spec:
  containers:
  - env:
    - name: MYSQL_USER
      value: mysql
    - name: MYSQL_PASSWORD
      value: mysql
    - name: MYSQL_DATABASE
      value: sample
    - name: MYSQL_ROOT_PASSWORD
      value: supersecret
    image: "mysql:latest"
    name: mysql
    ports:
    - containerPort: 3306
If this is not possible, what is the suggested approach?
You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not.
In your Pod definition specify that the container should pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: mysql-pod
  name: mysql-pod
spec:
  containers:
  - image: "mysql:latest"
    name: mysql
    ports:
    - containerPort: 3306
    envFrom:
    - secretRef:
        name: mysql-secret
Note that this syntax is only available in Kubernetes 1.6 or later. On an earlier version of Kubernetes you will have to specify each value manually, e.g.:
env:
- name: MYSQL_USER
  valueFrom:
    secretKeyRef:
      name: mysql-secret
      key: MYSQL_USER
(Note that env takes an array as its value.)
And repeating for every value.
Whichever approach you use, you can now define two different Secrets, one for production and one for dev.
dev-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_USER: bXlzcWwK
  MYSQL_PASSWORD: bXlzcWwK
  MYSQL_DATABASE: c2FtcGxlCg==
  MYSQL_ROOT_PASSWORD: c3VwZXJzZWNyZXQK
prod-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_USER: am9obgo=
  MYSQL_PASSWORD: c2VjdXJlCg==
  MYSQL_DATABASE: cHJvZC1kYgo=
  MYSQL_ROOT_PASSWORD: cm9vdHkK
And deploy the correct secret to the correct Kubernetes cluster:
kubectl config use-context dev
kubectl create -f dev-secret.yaml
kubectl config use-context prod
kubectl create -f prod-secret.yaml
Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
A newer Kubernetes release (v1.6) allows what you asked for (years ago).
You can now use envFrom like this in your YAML file:
containers:
- name: django
  image: image/name
  envFrom:
  - secretRef:
      name: prod-secrets
Where prod-secrets is your secret; you can create it with:
kubectl create secret generic prod-secrets --from-env-file=prod/env.txt
where the txt file content is key-value pairs:
DB_USER=username_here
DB_PASSWORD=password_here
The docs still lack examples; I had to search really hard in these places:
Secrets docs, search for --from-file - shows that this option is available.
The equivalent ConfigMap docs show an example of how to use it.
Note: there's a difference between --from-file and --from-env-file when creating a secret: --from-file stores the whole file as the value of a single key (named after the file), while --from-env-file turns each KEY=VALUE line into its own key in the secret.
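A quick illustration of that difference (names are examples only):
# --from-env-file: each KEY=VALUE line becomes its own key in the secret
kubectl create secret generic prod-secrets --from-env-file=prod/env.txt
# --from-file: the whole file becomes the value of a single key named "env.txt"
kubectl create secret generic prod-file-secret --from-file=prod/env.txt
kubectl get secret prod-secrets -o yaml        # keys: DB_USER, DB_PASSWORD, ...
kubectl get secret prod-file-secret -o yaml    # key: env.txt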
When defining a pod for Kubernetes using a YAML file, there's no direct way to specify a different file containing environment variables for a container. The Kubernetes project says they will improve this area in the future (see Kubernetes docs).
In the meantime, I suggest using a provisioning tool and making the pod YAML a template. For example, using Ansible your pod YAML file would look like:
file my-pod.yaml.template:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  ...
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: {{ mysql_root_password }}
...
Then your Ansible playbook can specify the variable mysql_root_password somewhere convenient, and substitute it when creating the resource, for example:
file my-playbook.yaml:
- hosts: my_hosts
  vars_files:
  - my-env-vars-{{ deploy_to }}.yaml
  tasks:
  - name: create pod YAML from template
    template: src=my-pod.yaml.template dest=my-pod.yaml
  - name: create pod in Kubernetes
    command: kubectl create -f my-pod.yaml
file my-env-vars-prod.yaml:
mysql_root_password: supersecret
file my-env-vars-test.yaml:
mysql_root_password: notsosecret
Now you create the pod resource by running, for example:
ansible-playbook -e deploy_to=test my-playbook.yaml
This works for me:
file env-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: env-secret
type: Opaque
stringData:
  .env: |-
    APP_NAME=Laravel
    APP_ENV=local
and into the deployment.yaml or pod.yaml
spec:
  ...
    volumeMounts:
    - name: foo
      mountPath: "/var/www/html/.env"
      subPath: .env
  volumes:
  - name: foo
    secret:
      secretName: env-secret
I smashed my head upon this for 2 hours. I found in the docs a very simple solution to minimize my (and hopefully your) pain.
Keep env.prod and env.dev as you have them.
Use a one-liner to import each into a ConfigMap:
kubectl create configmap my-dev-config --from-env-file=env.dev
kubectl create configmap my-prod-config --from-env-file=env.prod
You can see the result (for instant gratification):
# You can also save this to disk
kubectl get configmap my-dev-config -o yaml
As a Rubyist, I personally find this solution the DRYest, as you have a single point to maintain (the env bash file, which is compatible with Python/Ruby dotenv libraries, ...) and then you YAMLize it in a single execution.
Note that you need to keep your env file clean (I had a lot of comments which prevented this from working, so I had to prepend a cat config.original | egrep -v "^#" | tee config.cleaned), but this doesn't change the complexity substantially.
It's all documented here
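To consume the generated ConfigMap, something like this sketch should work (container name and image are placeholders, not from the answer):
spec:
  containers:
  - name: app
    image: my-image            # placeholder
    envFrom:
    - configMapRef:
        name: my-dev-config    # created by the one-liner above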
This is an old question, but let me describe my answer for future beginners.
You can use kustomize configMapGenerator.
configMapGenerator:
- name: example
env: dev.env
and refer to this ConfigMap (example) in the pod definition, for instance as sketched below.
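For illustration, a rough sketch of that wiring (pod name and image are placeholders); note that kustomize appends a content hash to the generated ConfigMap's name and rewrites the reference automatically:
# kustomization.yaml
resources:
- pod.yaml
configMapGenerator:
- name: example
  env: dev.env
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-image
    envFrom:
    - configMapRef:
        name: example    # rewritten by kustomize to example-<hash>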
This is an old question, but it has a lot of viewers, so I'll add my answer.
The best way to separate the configuration from the K8s implementation is to use Helm. Each Helm chart can have a values.yaml file, and we can easily use those values in the Helm templates. If we have a multi-component topology, we can create an umbrella Helm chart whose parent values file can also override the children's values files. A short sketch follows.
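For illustration only, a minimal sketch of the idea (chart layout, key names and values are my assumptions, not from the answer):
# values.yaml
mysql:
  rootPassword: supersecret
# templates/mysql-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: {{ .Values.mysql.rootPassword | quote }}
# per-environment override at install time:
#   helm install mysql ./mychart -f values-prod.yaml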
You can reference K8s values by specifying them in your container as environment variables.
Let your deployment be mongo.yml, as follows:
--
kind: Deployment
--
--
    containers:
    --
      env:
      - name: DB_URL
        valueFrom:
          configMapKeyRef:
            name: mongo-config
            key: mongo-url
      - name: MONGO_INITDB_ROOT_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mongo-secret
            key: mongo-password
where mongo-secret is used for sensitive data, e.g. passwords or certificates:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
and mongo-config is used for non-sensitive data
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
You could try these steps:
This command will put your prod.env file into a secret:
kubectl create secret generic env-prod --from-file=prod.env=prod.env
Then, you could reference this file in your deployment.yaml like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-identity
  labels:
    app.kubernetes.io/name: api-identity
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: api-identity
  template:
    metadata:
      labels:
        app.kubernetes.io/name: api-identity
    spec:
      imagePullSecrets:
      - name: docker-registry-credential
      containers:
      - name: api-identity
        image: "api-identity:test"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8005
          protocol: TCP
        volumeMounts:
        - name: env-file-vol
          mountPath: "/var/www/html/.env"
          subPath: .env
      volumes:
      - name: env-file-vol
        secret:
          secretName: env-prod