I have the following YAML file that creates a Kubernetes Secret for a MySQL database:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  key: MYSQL_KEY
type: Opaque
data:
  mysql-root-password: 11111
  mysql-user: a
  mysql-password: 11111
But when I try to deploy it I get the following error:
- Error from server (BadRequest): error when creating "STDIN": Secret in version "v1" cannot be handled as a Secret: json: cannot unmarshal number into Go struct field Secret.data of type []uint8
What is the problem and how can I fix it?
EDIT: The reason I added the key: MYSQL_KEY field is that previously I could deploy MySQL on the Kubernetes cluster using the secret created by this command:
kubectl create secret generic mysql-secret --from-literal MYSQL_KEY=11111
And I wanted to produce the exact same output. I also need a MYSQL_KEY key to be able to connect to this pod from inside other pods, like the auth pod whose deployment file you can see below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: auth
          env:
            - name: MYSQL_URI
              value: 'mysql://auth-mysql-srv:3306/users_auth'
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_KEY
As mentioned in my comment already, your .yaml is not a valid manifest for the Secret kind: you are passing incorrect values in the data section.
When creating Secrets via YAML you have to pass base64-encoded strings explicitly; unlike the CLI command, it does not encode them for you.
Number values such as 11111 are not accepted; more generally, only base64-encoded string values are acceptable.
metadata.key is an unknown field in the Secret kind. You have to remove it; I'm not sure what exactly you want to achieve with it, so I can't recommend anything other than removing it.
Fix for the data section
First, encode your values in base64 and then use those base64-encoded values in the manifest.
➜ secrets-error git:(main) ✗ echo "11111" | base64
MTExMTEK
➜ secrets-error git:(main) ✗ echo "a" | base64
YQo=
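A small aside: echo appends a trailing newline, which gets base64-encoded as well. If you want to encode the exact string without the newline, you can use echo -n instead:
echo -n "11111" | base64   # MTExMTE=
echo -n "a" | base64       # YQ==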
Update the .yaml file with the correct data values and remove metadata.key:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  mysql-root-password: MTExMTEK
  mysql-user: YQo=
  mysql-password: MTExMTEK
Edited:
You should have the base64 binary installed on your machine to use the commands above; otherwise you can also use kubectl or any online base64 converter to do it for you.
With kubectl:
# To just generate the YAML, if you only want to create the secret via a manifest:
kubectl create secret generic mysql-secret --from-literal=mysql-root-password="11111" --from-literal=mysql-password="11111" --from-literal=mysql-user="a" -o yaml --dry-run=client
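If you also want to create the secret in the cluster in one step, a common pattern is to pipe that dry-run output straight into kubectl apply:
kubectl create secret generic mysql-secret --from-literal=mysql-root-password="11111" --from-literal=mysql-password="11111" --from-literal=mysql-user="a" --dry-run=client -o yaml | kubectl apply -f -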
I'm trying to pass credentials to a Helm chart: https://artifacthub.io/packages/helm/bitnami/mysql
I use the Secrets Store CSI Driver with the AWS provider. Credentials are passed to my deployment without any problems via a volume and environment variables, but when I try to import these credentials using the auth.existingSecret parameter, I'm not authenticated to the database and my database instance is not provisioned.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
  namespace: default
spec:
  provider: aws
  secretObjects:
    - secretName: api-token
      type: Opaque
      data:
        - objectName: secret-token
          key: mysql-root-password
        - objectName: secret-token
          key: mysql-replication-password
        - objectName: secret-token
          key: mysql-password
  parameters:
    objects: |
      - objectName: prod/zajac/mysql
        objectType: secretsmanager
        objectAlias: secret-token
I want to emphasize that I provided 3 different values so as not to duplicate the replication and root passwords, so there's no problem with replication. In the auth.existingSecret parameter I provide the name of the secret, "api-token", which is created before the database chart is installed.
My Java application is running in several pods in OpenShift, and I want to print the pod name in the application logs for business purposes. Is there any way to do so? Thanks.
You should be able to expose the Pod name to the application using the Kubernetes "Downward API". This can either be done by exposing an environment variable with the Pod name, or mounting a file that contains the name.
Here's the docs for doing so with an environment variable: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api
Here's a trimmed down version of the example on that page, to highlight just the Pod name:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c" ]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_POD_NAME;
            sleep 10;
          done;
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
As you can see from the docs, there's a bunch of other context that you can expose also.
The equivalent docs for mounting a volume file can be found here: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api
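For reference, here's a similarly trimmed-down sketch of the volume approach from those docs, exposing the Pod name as a file the application can read (the mount path and file name are just illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      # print the pod name from the mounted file, then idle
      command: ["sh", "-c", "cat /etc/podinfo/pod_name; sleep 3600"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "pod_name"
            fieldRef:
              fieldPath: metadata.name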
I am trying to create a Django + MySQL app using Google Container Engine and Kubernetes. Following the docs for the official MySQL Docker image and the Kubernetes docs for creating a MySQL container, I have created the following replication controller:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - image: mysql:5.6.33
          name: mysql
          env:
            # Root password is compulsory
            - name: "MYSQL_ROOT_PASSWORD"
              value: "root_password"
            - name: "MYSQL_DATABASE"
              value: "custom_db"
            - name: "MYSQL_USER"
              value: "custom_user"
            - name: "MYSQL_PASSWORD"
              value: "custom_password"
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            # This name must match the volumes.name below.
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          gcePersistentDisk:
            # This disk must already exist.
            pdName: mysql-disk
            fsType: ext4
According to the docs, when the environment variables MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD are passed, a new user is created with that password and granted rights to the newly created database. But this does not happen. When I SSH into that container, the root password is set, but neither the user nor the database is created.
I have tested this by running locally and passing the same environment variables like this:
docker run -d --name some-mysql \
-e MYSQL_USER="custom_user" \
-e MYSQL_DATABASE="custom_db" \
-e MYSQL_ROOT_PASSWORD="root_password" \
-e MYSQL_PASSWORD="custom_password" \
mysql
When I SSH into that container, the database and users are created and everything works fine.
I am not sure what I am doing wrong here. Could anyone please point out my mistake? I have been at this the whole day.
EDIT: 20-sept-2016
As requested:
@Julien Du Bois
The disk is created. It appears in the cloud console, and when I run the describe command I get the following output:
Command : gcloud compute disks describe mysql-disk
Result:
creationTimestamp: '2016-09-16T01:06:23.380-07:00'
id: '4673615691045542160'
kind: compute#disk
lastAttachTimestamp: '2016-09-19T06:11:23.297-07:00'
lastDetachTimestamp: '2016-09-19T05:48:14.320-07:00'
name: mysql-disk
selfLink: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/disks/mysql-disk
sizeGb: '20'
status: READY
type: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/diskTypes/pd-standard
users:
- https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/instances/gke-cluster-1-default-pool-e0f09576-zvh5
zone: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>
I referred to a lot of tutorials and Google Cloud examples. To run the MySQL Docker container locally, my main reference was the official image page on Docker Hub:
https://hub.docker.com/_/mysql/
This works for me, and locally the created container has the new database and a user with the right privileges.
For Kubernetes, my main reference was the following:
https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
I am just trying to connect to it using the Django container.
I was facing the same issue when I was using volumes and mounting them to mysql pods.
As mentioned in the documentation of mysql's docker image:
When you start the mysql image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the docker run command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
So after spinning my wheels, I managed to solve the problem by changing the hostPath of the volume I was creating from "/data/mysql-pv-volume" to "/var/lib/mysql".
Here is a code snippet that might help create the volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete  # For development purposes only
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mysql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
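For completeness, the MySQL pod then mounts the claim rather than a host path or disk directly; a minimal sketch of the relevant part of the pod spec, assuming the claim name above (image tag is just illustrative):
spec:
  containers:
    - name: mysql
      image: mysql:5.6
      volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql   # where mysqld keeps its data inside the container
  volumes:
    - name: mysql-persistent-storage
      persistentVolumeClaim:
        claimName: mysql-pv-claim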
Hope that helped.
You set mysql-disk in your deployment and the disk you have is custom-disk. Change pdName to custom-disk and it will work.
Background:
Currently we're using Docker and Docker Compose for our services. We have externalized the configuration for different environments into files that define environment variables read by the application. For example a prod.env file:
ENV_VAR_ONE=Something Prod
ENV_VAR_TWO=Something else Prod
and a test.env file:
ENV_VAR_ONE=Something Test
ENV_VAR_TWO=Something else Test
Thus we can simply use the prod.env or test.env file when starting the container:
docker run --env-file prod.env <image>
Our application then picks up its configuration based on the environment variables defined in prod.env.
Questions:
Is there a way to provide environment variables from a file in Kubernetes (for example when defining a pod) instead of hardcoding them like this:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: mysql-pod
  name: mysql-pod
spec:
  containers:
    - env:
        - name: MYSQL_USER
          value: mysql
        - name: MYSQL_PASSWORD
          value: mysql
        - name: MYSQL_DATABASE
          value: sample
        - name: MYSQL_ROOT_PASSWORD
          value: supersecret
      image: "mysql:latest"
      name: mysql
      ports:
        - containerPort: 3306
If this is not possible, what is the suggested approach?
You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not.
In your Pod definition specify that the container should pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: mysql-pod
  name: mysql-pod
spec:
  containers:
    - image: "mysql:latest"
      name: mysql
      ports:
        - containerPort: 3306
      envFrom:
        - secretRef:
            name: mysql-secret
Note that this syntax is only available in Kubernetes 1.6 or later. On an earlier version of Kubernetes you will have to specify each value manually, e.g.:
env:
  - name: MYSQL_USER
    valueFrom:
      secretKeyRef:
        name: mysql-secret
        key: MYSQL_USER
(Note that env takes an array as its value.)
And repeating for every value.
Whichever approach you use, you can now define two different Secrets, one for production and one for dev.
dev-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_USER: bXlzcWwK
  MYSQL_PASSWORD: bXlzcWwK
  MYSQL_DATABASE: c2FtcGxlCg==
  MYSQL_ROOT_PASSWORD: c3VwZXJzZWNyZXQK
prod-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_USER: am9obgo=
  MYSQL_PASSWORD: c2VjdXJlCg==
  MYSQL_DATABASE: cHJvZC1kYgo=
  MYSQL_ROOT_PASSWORD: cm9vdHkK
And deploy the correct secret to the correct Kubernetes cluster:
kubectl config use-context dev
kubectl create -f dev-secret.yaml
kubectl config use-context prod
kubectl create -f prod-secret.yaml
Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
A new update to Kubernetes (v1.6) allows what you asked for (years ago).
You can now use envFrom like this in your YAML file:
containers:
  - name: django
    image: image/name
    envFrom:
      - secretRef:
          name: prod-secrets
Where prod-secrets is your secret; you can create it with:
kubectl create secret generic prod-secrets --from-env-file=prod/env.txt
Where the txt file content is key-value pairs:
DB_USER=username_here
DB_PASSWORD=password_here
The docs still lack examples; I had to search really hard in these places:
The Secrets docs (search for --from-file) show that this option is available.
The equivalent ConfigMap docs show an example of how to use it.
Note: there's a difference between --from-file and --from-env-file when creating secret as described in the comments below.
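In short: --from-env-file turns each KEY=value line of the file into its own key in the secret (which is what envFrom expects), while --from-file puts the whole file under a single key named after the file, which is better suited to mounting the secret as a file. A quick illustration (the second secret name is just illustrative):
kubectl create secret generic prod-secrets --from-env-file=prod/env.txt
kubectl create secret generic prod-env-file --from-file=prod/env.txt   # one key: env.txt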
When defining a pod for Kubernetes using a YAML file, there's no direct way to specify a different file containing environment variables for a container. The Kubernetes project says they will improve this area in the future (see Kubernetes docs).
In the meantime, I suggest using a provisioning tool and making the pod YAML a template. For example, using Ansible your pod YAML file would look like:
file my-pod.yaml.template:
apiVersion: v1
kind: Pod
...
spec:
  containers:
    ...
    env:
      - name: MYSQL_ROOT_PASSWORD
        value: {{ mysql_root_password }}
...
Then your Ansible playbook can specify the variable mysql_root_password somewhere convenient, and substitute it when creating the resource, for example:
file my-playbook.yaml:
- hosts: my_hosts
  vars_files:
    - my-env-vars-{{ deploy_to }}.yaml
  tasks:
    - name: create pod YAML from template
      template: src=my-pod.yaml.template dest=my-pod.yaml
    - name: create pod in Kubernetes
      command: kubectl create -f my-pod.yaml
file my-env-vars-prod.yaml:
mysql_root_password: supersecret
file my-env-vars-test.yaml:
mysql_root_password: notsosecret
Now you create the pod resource by running, for example:
ansible-playbook -e deploy_to=test my-playbook.yaml
This works for me:
file env-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: env-secret
type: Opaque
stringData:
  .env: |-
    APP_NAME=Laravel
    APP_ENV=local
and in the deployment.yaml or pod.yaml:
spec:
  ...
      volumeMounts:
        - name: foo
          mountPath: "/var/www/html/.env"
          subPath: .env
  volumes:
    - name: foo
      secret:
        secretName: env-secret
I smashed my head upon this for 2 hours. I found in the docs a very simple solution to minimize my (and hopefully your) pain.
Keep env.prod and env.dev as you have them.
Use a one-liner to import those into YAML:
kubectl create configmap my-dev-config --from-env-file=env.dev
kubectl create configmap my-prod-config --from-env-file=env.prod
You can see the result (for instant gratification):
# You can also save this to disk
kubectl get configmap my-dev-config -o yaml
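To actually consume the generated ConfigMap in a pod, a minimal sketch (assuming the my-dev-config map created above; the container name and image are placeholders):
spec:
  containers:
    - name: app
      image: your-image
      envFrom:
        - configMapRef:
            name: my-dev-config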
As a Rubyist, I personally find this solution the DRYest, as you have a single point of maintenance (the ENV bash file, which is compatible with Python/Ruby libraries, ...) and you then YAMLize it in a single execution.
Note that you need to keep your ENV file clean (I had a lot of comments which prevented this from working, so I had to prepend a cat config.original | egrep -v "^#" | tee config.cleaned), but this doesn't change the complexity substantially.
It's all documented here.
This is an old question, but let me describe my answer for future beginners.
You can use kustomize configMapGenerator.
configMapGenerator:
  - name: example
    env: dev.env
and refer to this ConfigMap (example) in the pod definition.
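A minimal kustomization.yaml sketch (assuming dev.env and pod.yaml sit next to it; the file names are illustrative):
resources:
  - pod.yaml
configMapGenerator:
  - name: example
    env: dev.env
Kustomize appends a content hash to the generated ConfigMap's name and rewrites any configMapRef/configMapKeyRef that points at example in the listed resources, so the pod definition can simply reference example.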
This is an old question, but it has a lot of viewers, so I'll add my answer.
The best way to separate configuration from the K8s implementation is to use Helm. Each Helm package can have a values.yaml file, and we can easily use those values in the Helm chart. If we have a multi-component topology, we can create an umbrella Helm package, and the parent values file can also override the children's values files.
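As a rough sketch (the names here are illustrative, not from the question), a values.yaml entry such as:
mysql:
  rootPassword: supersecret
can be consumed in a chart template like:
env:
  - name: MYSQL_ROOT_PASSWORD
    value: {{ .Values.mysql.rootPassword | quote }}
and overridden per environment with helm install -f prod-values.yaml or --set mysql.rootPassword=...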
You can reference ConfigMap and Secret values in your container as environment variables.
Let your deployment be mongo.yml as follows:
--
kind: Deployment
--
--
containers:
--
  env:
    - name: DB_URL
      valueFrom:
        configMapKeyRef:
          name: mongo-config
          key: mongo-url
    - name: MONGO_INITDB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mongo-secret
          key: mongo-password
Where mongo-secret is used for sensitive data, e.g. passwords or certificates:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
and mongo-config is used for non-sensitive data
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
You could try these steps:
This command will put your prod.env file into a secret:
kubectl create secret generic env-prod --from-file=prod.env=prod.env
Then you can reference this file in your deployment.yaml like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-identity
  labels:
    app.kubernetes.io/name: api-identity
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: api-identity
  template:
    metadata:
      labels:
        app.kubernetes.io/name: api-identity
    spec:
      imagePullSecrets:
        - name: docker-registry-credential
      containers:
        - name: api-identity
          image: "api-identity:test"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8005
              protocol: TCP
          volumeMounts:
            - name: env-file-vol
              mountPath: "/var/www/html/.env"
              subPath: .env
      volumes:
        - name: env-file-vol
          secret:
            secretName: env-prod