Use an external standalone-full.xml in the RHPAM KIE Server pod on OpenShift

I want to use a customized standalone-full.xml in the RHPAM KIE Server pod running in OpenShift. I created a ConfigMap from the file, but I'm not sure how to wire it in. I created the ConfigMap like this:
oc create configmap my-config --from-file=standalone-full.xml
and edited the DeploymentConfig of the RHPAM KIE Server:
volumeMounts:
- name: config-volume
mountPath: /opt/eap/standalone/configuration
volumes:
- name: config-volume
configMap:
name: my-config
It starts a new container with status ContainerCreating and then fails with an error (scaling down 1 to 0).
Am I setting the ConfigMap correctly?

You can mount the ConfigMap into a pod as a volume. Here's a good example: just add a 'volumes' block (to declare the ConfigMap as a volume) and a 'volumeMounts' block (to specify the mount point) in the pod's spec:
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: special-config
restartPolicy: Never
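In your case, note that mounting the ConfigMap directly over /opt/eap/standalone/configuration hides every other file EAP expects in that directory, which is a likely reason the pod fails to start. A sketch of mounting only the one file with subPath (volume and ConfigMap names taken from your snippet; note that subPath mounts don't pick up later ConfigMap updates):
volumeMounts:
- name: config-volume
  mountPath: /opt/eap/standalone/configuration/standalone-full.xml
  subPath: standalone-full.xml   # mount just this key so the rest of the directory stays intact
volumes:
- name: config-volume
  configMap:
    name: my-config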

Related

Add users to a Kubernetes deployment of MySQL with secrets

My overall goal is to create MySQL users (despite root) automatically after the deployment in Kubernetes.
I found the following resources:
How to create mysql users and database during deployment of mysql in kubernetes?
Add another user to MySQL in Kubernetes
People suggested that .sql scripts can be mounted to docker-entrypoint-initdb.d with a ConfigMap to create these users. In order to do that, I have to put the password of these users in this script in plain text. This is a potential security issue. Thus, I want to store MySQL usernames and passwords as Kubernetes Secrets.
This is my ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config
labels:
app: mysql-image-db
data:
initdb.sql: |-
CREATE USER '<user>'@'%' IDENTIFIED BY '<password>';
How can I access the associated Kubernetes secrets within this ConfigMap?
I am finally able to provide a solution to my own question. Since PjoterS made me aware that you can mount Secrets into a Pod as a volume, I came up with the following solution.
This is the ConfigMap for the user creation script:
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-init-script
labels:
app: mysql-image-db
data:
init-user.sh: |-
#!/bin/bash
sleep 30s
mysql -u root -p"$(cat /etc/mysql/credentials/root_password)" -e \
"CREATE USER '$(cat /etc/mysql/credentials/user_1)'#'%' IDENTIFIED BY '$(cat /etc/mysql/credentials/password_1)';"
To make this work, I mounted the ConfigMap and the Secret as volumes of my Deployment and added a postStart lifecycle hook to execute the user creation script.
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-image-db
spec:
selector:
matchLabels:
app: mysql-image-db
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-image-db
spec:
containers:
- image: mysql:8.0
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: root_password
name: mysql-user-credentials
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-volume
mountPath: /var/lib/mysql
- name: mysql-config-volume
mountPath: /etc/mysql/conf.d
- name: mysql-init-script-volume
mountPath: /etc/mysql/init
- name: mysql-credentials-volume
mountPath: /etc/mysql/credentials
lifecycle:
postStart:
exec:
command: ["/bin/bash", "-c", "/etc/mysql/init/init-user.sh"]
volumes:
- name: mysql-persistent-volume
persistentVolumeClaim:
claimName: mysql-volume-claim
- name: mysql-config-volume
configMap:
name: mysql-config
- name: mysql-init-script-volume
configMap:
name: mysql-init-script
defaultMode: 0777
- name: mysql-credentials-volume
secret:
secretName: mysql-user-credentials
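For completeness, the mysql-user-credentials Secret referenced above can be created from literals; a minimal sketch with placeholder values, using the keys the init script reads (root_password, user_1, password_1):
kubectl create secret generic mysql-user-credentials \
  --from-literal=root_password='<root-password>' \
  --from-literal=user_1='<username>' \
  --from-literal=password_1='<password>'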

mysql container won't start on kubernetes when backed by NFS Dynamic provisioner

I'm having issues getting the mysql container to start properly. To sum it up: with the NFS dynamic provisioner the mysql container won't start and throws the error mkdir: cannot create directory '/var/lib/mysql/': File exists, even though the NFS mount is present in the container and appears to be functioning properly.
I installed the dynamic NFS provisioner on my K8s cluster from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. The test claim and test pod shown in the instructions work.
Now to run mysql, I took the code snippets from here:
https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
kubectl apply -f mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: managed-nfs-storage # this matches my NFS StorageClass
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
kubectl apply -f mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/mysql-pv-volume 20Gi RWO Retain Bound default/mysql-pv-claim managed-nfs-storage 5m16s
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/mysql-pv-claim Bound mysql-pv-volume 20Gi RWO managed-nfs-storage 5m27s
The PV was created automatically by the dynamic provisioner.
Then I get the error...
$ kubectl logs mysql-7d7fdd478f-l2m8h
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-03-05 18:26:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
mkdir: cannot create directory '/var/lib/mysql/': File exists
This error stops the container from starting...
I deleted the deployment and added command: [ "/bin/sh", "-c", "sleep 100000" ] so the container would start...
After getting into the container, I checked that the NFS mount is properly mounted and writable...
# df -h | grep mysql
nfs1.example.com:/k8/default-mysql-pv-claim-pvc-0808d1bd-69ca-4ff5-825a-b846b1133e3a 1.0T 1.6G 1023G 1% /var/lib/mysql
If I create a "local" pv
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
and create the mysql deployment, the mysql pod starts up without issue.
So at this point, with dynamic provisioning (potentially just on NFS?) the mysql container doesn't work.
Anyone have any suggestions?
I'm not exactly sure what the cause of this is, so here are a few options.
First, you could try setting a securityContext, because the volume might be mounted without the proper permissions.
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: busybox
command: [ "sh", "-c", "sleep 1h" ]
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
securityContext:
allowPrivilegeEscalation: false
You can find out the proper user and group IDs by running id inside the container, for example via kubectl exec -it <pod-name> -- bash.
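For example, running id directly (the output shown is only a sample; the actual IDs depend on the image):
kubectl exec -it <pod-name> -- id
# uid=999(mysql) gid=999(mysql) groups=999(mysql)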
Second, try using subPath
volumeMounts:
- name: mysql-persistent-storage
mountPath: "/var/lib/mysql"
subPath: mysql
If that doesn't work, I would test the NFS share from another pod with an initContainer that creates a directory, as sketched below.
And I would redo the whole NFS setup, maybe using this guide.
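A sketch of such a test pod, reusing the mysql-pv-claim PVC from above (pod and container names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-write-test
spec:
  restartPolicy: Never
  initContainers:
  - name: mkdir-test
    image: busybox
    # try to create a directory on the NFS-backed volume and list the contents
    command: ["sh", "-c", "mkdir -p /data/testdir && ls -la /data"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  containers:
  - name: done
    image: busybox
    command: ["sh", "-c", "echo init container succeeded"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: mysql-pv-claim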

Kubernetes MySql image persistent volume is non empty during init

I am working through the persistent disks tutorial found here while also creating it as a StatefulSet instead of a deployment.
When I run the yaml file into GKE the database fails to start, looking at the logs it has the following error.
[ERROR] --initialize specified but the data directory has files in it. Aborting.
Is it possible to inspect the volume created to see what is in the directory? Otherwise, what am I doing wrong that is causing the disk to be non empty?
Thanks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: datalayer-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
name: datalayer-svc
labels:
app: myapplication
spec:
ports:
- port: 80
name: dbadmin
clusterIP: None
selector:
app: database
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: datalayer
spec:
selector:
matchLabels:
app: myapplication
serviceName: "datalayer-svc"
replicas: 1
template:
metadata:
labels:
app: myapplication
spec:
terminationGracePeriodSeconds: 10
containers:
- name: database
image: mysql:5.7.22
env:
- name: "MYSQL_ROOT_PASSWORD"
valueFrom:
secretKeyRef:
name: mysql-root-password
key: password
- name: "MYSQL_DATABASE"
value: "appdatabase"
- name: "MYSQL_USER"
value: "app_user"
- name: "MYSQL_PASSWORD"
value: "app_password"
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: datalayer-pv
mountPath: /var/lib/mysql
volumes:
- name: datalayer-pv
persistentVolumeClaim:
claimName: datalayer-pvc
This issue could be caused by the lost+found directory on the filesystem of the PersistentVolume.
I was able to verify this by adding a k8s.gcr.io/busybox container (in the PVC set accessModes: [ReadWriteMany], or comment out the database container):
- name: init
image: "k8s.gcr.io/busybox"
command: ["/bin/sh","-c","ls -l /var/lib/mysql"]
volumeMounts:
- name: database
mountPath: "/var/lib/mysql"
There are a few potential workarounds...
Most preferable is to use a subPath on the volumeMounts object. This uses a subdirectory of the PersistentVolume, which should be empty at creation time, instead of the volume root:
volumeMounts:
- name: database
mountPath: "/var/lib/mysql"
subPath: mysql
Less preferable workarounds include:
Use a one-time container to rm -rf /var/lib/mysql/lost+found (not a great solution, because the directory is managed by the filesystem and is likely to re-appear)
Use mysql:5 image, and add args: ["--ignore-db-dir=lost+found"] to the container (this option was removed in mysql 8)
Use mariadb image instead of mysql
More details might be available at docker-library/mysql issues: #69 and #186
You would usually see if your volumes were mounted with:
kubectl get pods # gets you all the pods on the default namespace
# and
kubectl describe pod <pod-created-by-your-statefulset>
Then you can use these commands to check on your PVs and PVCs:
kubectl get pv # gets all the PVs on the default namespace
kubectl get pvc # same for PVCs
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name>
Then you can go to the GCP console, under Disks, and see whether your disks got created.
You can add the --ignore-db-dir option for lost+found:
containers:
- image: mysql:5.7
name: mysql
args:
- "--ignore-db-dir=lost+found"
I use an init container to remove that lost+found directory (I'm using Postgres in my case):
initContainers:
- name: busybox
image: busybox:latest
args: ["rm", "-rf", "/var/lib/postgresql/data/lost+found"]
volumeMounts:
- name: nfsv-db
mountPath: /var/lib/postgresql/data

Restoring wordpress and mysql data to kubernetes volume

I am currently running mysql, wordpress and my custom node.js + express application on kubernetes pods in the same cluster. Everything is working quite well but my problem is that all the data will be reset if I have to rerun the deployments, services and persistent volumes.
I have configured wordpress quite extensively and would like to save all the data and insert it again after redeploying everything. How can I do this, or am I thinking about it wrong? I am using the mysql:5.6 and wordpress:4.8-apache images.
I also want to transfer my configuration to my other team members so they don't have to configure wordpress again.
This is my mysql-deploy.yaml
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: hidden
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
This is the wordpress-deploy.yaml:
apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:4.8-apache
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
value: hidden
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim
How can I do this, or am I thinking about it wrong?
It might be better to shift your configuration mindset from working directly on running container instances to configuring container images and manifests. There are several approaches; here are some pointers:
Create your own Dockerfile based on the images you referenced and bundle the configuration files inside. This is a viable approach if the configuration is more or less static and can be handled with env vars or infrequent image builds, but it requires a Docker registry to work with k8s. In this approach you add all changed files to the Docker build context and then COPY them to the appropriate places.
Create ConfigMaps and mount them on the container filesystem as config files wherever a change is required. This way you can still use the base images you reference directly, and changes are limited to Kubernetes manifests instead of rebuilding Docker images. The approach here is to identify all the changed files in the container, create Kubernetes ConfigMaps from them, and mount them appropriately. I don't know exactly which things you are changing, but here is an example of how you can place an nginx config in a ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
name: cm-nginx-example
data:
nginx.conf: |
server {
listen 80;
...
# actual config here
...
}
and then mount it in container in appropriate place like so:
...
containers:
- name: nginx-example
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: /etc/nginx/conf.d
name: nginx-conf
volumes:
- name: nginx-conf
configMap:
name: cm-nginx-example
items:
- key: nginx.conf
path: nginx.conf
...
Mount persistent volumes (or subPaths) where you need configs and keep the configuration on persistent volumes.
Personally, I'd probably opt for ConfigMaps, since you can easily share and edit those alongside your k8s deployments, and the configuration details aren't locked away as some mystical 'extensive work' but can be reviewed, tweaked and stored in a version control system for tracking.
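If you go the ConfigMap route and the files you changed live only inside a running container, one possible workflow is to copy them out and turn them into a ConfigMap (the file path and names below are only an example):
# copy a modified file out of the running WordPress pod
kubectl cp <wordpress-pod>:/var/www/html/wp-config.php ./wp-config.php
# create a ConfigMap from it
kubectl create configmap wp-config --from-file=wp-config.php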

Kubernetes equivalent of env-file in Docker

Background:
Currently we're using Docker and Docker Compose for our services. We have externalized the configuration for different environments into files that define environment variables read by the application. For example a prod.env file:
ENV_VAR_ONE=Something Prod
ENV_VAR_TWO=Something else Prod
and a test.env file:
ENV_VAR_ONE=Something Test
ENV_VAR_TWO=Something else Test
Thus we can simply use the prod.env or test.env file when starting the container:
docker run --env-file prod.env <image>
Our application then picks up its configuration based on the environment variables defined in prod.env.
Questions:
Is there a way to provide environment variables from a file in Kubernetes (for example when defining a pod) instead of hardcoding them like this:
apiVersion: v1
kind: Pod
metadata:
labels:
context: docker-k8s-lab
name: mysql-pod
name: mysql-pod
spec:
containers:
-
env:
-
name: MYSQL_USER
value: mysql
-
name: MYSQL_PASSWORD
value: mysql
-
name: MYSQL_DATABASE
value: sample
-
name: MYSQL_ROOT_PASSWORD
value: supersecret
image: "mysql:latest"
name: mysql
ports:
-
containerPort: 3306
If this is not possible, what is the suggested approach?
You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not.
In your Pod definition specify that the container should pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
labels:
context: docker-k8s-lab
name: mysql-pod
name: mysql-pod
spec:
containers:
- image: "mysql:latest"
name: mysql
ports:
- containerPort: 3306
envFrom:
- secretRef:
name: mysql-secret
Note that this syntax is only available in Kubernetes 1.6 or later. On an earlier version of Kubernetes you will have to specify each value manually, e.g.:
env:
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
(Note that env takes an array as its value.)
And repeat for every value.
Whichever approach you use, you can now define two different Secrets, one for production and one for dev.
dev-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: mysql-secret
type: Opaque
data:
MYSQL_USER: bXlzcWwK
MYSQL_PASSWORD: bXlzcWwK
MYSQL_DATABASE: c2FtcGxlCg==
MYSQL_ROOT_PASSWORD: c3VwZXJzZWNyZXQK
prod-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: mysql-secret
type: Opaque
data:
MYSQL_USER: am9obgo=
MYSQL_PASSWORD: c2VjdXJlCg==
MYSQL_DATABASE: cHJvZC1kYgo=
MYSQL_ROOT_PASSWORD: cm9vdHkK
And deploy the correct secret to the correct Kubernetes cluster:
kubectl config use-context dev
kubectl create -f dev-secret.yaml
kubectl config use-context prod
kubectl create -f prod-secret.yaml
Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
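You can verify the result by checking the environment inside the running pod, for example:
kubectl exec -it mysql-pod -- env | grep MYSQL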
A newer Kubernetes release (v1.6) allows what you asked for (years ago).
You can now use envFrom like this in your YAML file:
containers:
- name: django
image: image/name
envFrom:
- secretRef:
name: prod-secrets
Where prod-secrets is your Secret; you can create it with:
kubectl create secret generic prod-secrets --from-env-file=prod/env.txt
where the txt file's content is key=value pairs:
DB_USER=username_here
DB_PASSWORD=password_here
The docs still lack examples; I had to search hard in these places:
Secrets docs: search for --from-file; it shows that this option is available.
The equivalent ConfigMap docs show an example of how to use it.
Note: there's a difference between --from-file and --from-env-file when creating secret as described in the comments below.
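A minimal illustration of that difference, using the prod.env file from the question (the Secret names here are just examples):
kubectl create secret generic prod-config --from-env-file=prod.env
# -> one key per line: ENV_VAR_ONE and ENV_VAR_TWO, each holding its own value
kubectl create secret generic prod-config-file --from-file=prod.env
# -> a single key named "prod.env" whose value is the entire file content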
When defining a pod for Kubernetes using a YAML file, there's no direct way to specify a different file containing environment variables for a container. The Kubernetes project says they will improve this area in the future (see Kubernetes docs).
In the meantime, I suggest using a provisioning tool and making the pod YAML a template. For example, using Ansible your pod YAML file would look like:
file my-pod.yaml.template:
apiVersion: v1
kind: Pod
...
spec:
containers:
...
env:
- name: MYSQL_ROOT_PASSWORD
value: {{ mysql_root_pasword }}
...
Then your Ansible playbook can specify the variable mysql_root_password somewhere convenient, and substitute it when creating the resource, for example:
file my-playbook.yaml:
- hosts: my_hosts
vars_files:
- my-env-vars-{{ deploy_to }}.yaml
tasks:
- name: create pod YAML from template
template: src=my-pod.yaml.template dest=my-pod.yaml
- name: create pod in Kubernetes
command: kubectl create -f my-pod.yaml
file my-env-vars-prod.yaml:
mysql_root_password: supersecret
file my-env-vars-test.yaml:
mysql_root_password: notsosecret
Now you create the pod resource by running, for example:
ansible-playbook -e deploy_to=test my-playbook.yaml
This works for me:
file env-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: env-secret
type: Opaque
stringData:
.env: |-
APP_NAME=Laravel
APP_ENV=local
and into the deployment.yaml or pod.yaml
spec:
...
volumeMounts:
- name: foo
mountPath: "/var/www/html/.env"
subPath: .env
volumes:
- name: foo
secret:
secretName: env-secret
I smashed my head upon this for two hours. I found in the docs a very simple solution to minimize my (and hopefully your) pain.
Keep env.prod and env.dev as you have them.
Use a one-liner to import those into YAML:
kubectl create configmap my-dev-config --from-env-file=env.dev
kubectl create configmap my-prod-config --from-env-file=env.prod
You can see the result (for instant gratification):
# You can also save this to disk
kubectl get configmap my-dev-config -o yaml
As a Rubyist, I personally find this solution the DRYest, as you have a single point to maintain (the env file, which is also compatible with Python/Ruby dotenv libraries) and then you YAMLize it in a single execution.
Note that you need to keep your env file clean (I had a lot of comments, which prevented this from working, so I had to prepend a cat config.original | egrep -v "^#" | tee config.cleaned), but this doesn't change the complexity substantially.
It's all documented here
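To actually consume one of those ConfigMaps as environment variables, you can reference it with envFrom in the container spec; a minimal sketch (container name and image are illustrative):
containers:
- name: my-app
  image: my-app:latest        # illustrative image
  envFrom:
  - configMapRef:
      name: my-dev-config     # the ConfigMap created above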
This is an old question, but let me describe my answer for future beginners.
You can use kustomize configMapGenerator.
configMapGenerator:
- name: example
env: dev.env
and refer to this ConfigMap (named example) in the pod definition, for instance:
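A sketch of referencing it via envFrom (container name and image are illustrative; kustomize rewrites the name to include the generated hash suffix for resources in the same kustomization):
containers:
- name: app
  image: my-app:latest        # illustrative image
  envFrom:
  - configMapRef:
      name: example           # rewritten by kustomize to the generated name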
This is an old question, but it has a lot of viewers, so I'll add my answer.
The best way to separate configuration from the K8s implementation is to use Helm. Each Helm package can have a values.yaml file, and we can easily use those values in the Helm chart. If we have a multi-component topology, we can create an umbrella Helm chart whose parent values file can also override the children's values files.
You can reference K8s values by specifying them in your container as environment variables.
Let your deployment be mongo.yml as follows:
--
kind: Deployment
--
--
containers:
--
env:
- name: DB_URL
valueFrom:
configMapKeyRef:
name: mongo-config
key: mongo-url
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-password
Where mongo-secret is used for sensitive data, e.g. passwords or certificates:
apiVersion: v1
kind: Secret
metadata:
name: mongo-secret
type: Opaque
data:
mongo-user: bW9uZ291c2Vy
mongo-password: bW9uZ29wYXNzd29yZA==
and mongo-config is used for non-sensitive data
apiVersion: v1
kind: ConfigMap
metadata:
name: mongo-config
data:
mongo-url: mongo-service
You could try these steps:
This command will put your prod.env file into a Secret:
kubectl create secret generic env-prod --from-file=prod.env=prod.env
Then you can reference this file in your deployment.yaml like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-identity
labels:
app.kubernetes.io/name: api-identity
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: api-identity
template:
metadata:
labels:
app.kubernetes.io/name: api-identity
spec:
imagePullSecrets:
- name: docker-registry-credential
containers:
- name: api-identity
image: "api-identity:test"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8005
protocol: TCP
volumeMounts:
- name: env-file-vol
mountPath: "/var/www/html/.env"
subPath: .env
volumes:
- name: env-file-vol
secret:
secretName: env-prod
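You can then check that the file landed in the container, for example:
kubectl exec -it <api-identity-pod> -- cat /var/www/html/.env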