I have created a ConfigMap named configmap-setenv from a setenv.sh file. I want to consume this ConfigMap in one of my ReplicationControllers.
apiVersion: v1
kind: ReplicationController
metadata:
  name: sample-registrationweb-rc
spec:
  replicas: 1
  selector:
    app: "JWS"
    role: "FO-registrationweb"
    tier: "app"
  template:
    metadata:
      labels:
        app: "JWS"
        role: "FO-registrationweb"
        tier: "app"
    spec:
      containers:
      - name: jws
        image: samplejws/demo:v1
        imagePullPolicy: Always
        ports:
        - name: jws
          containerPort: 8080
        resources:
          requests:
            cpu: 1000m
            memory: 100Mi
          limits:
            cpu: 2000m
            memory: 7629Mi
        volumeMounts:
        - mountPath: /opt/soft/jws-3.0/tomcat8/bin
          name: tomcatbin
      volumes:
      - name: data
        emptyDir: {}
      - configMap:
        name: tomcatbin
          name: configmap-setenv
          items:
          - key: setenv.sh
            path: setenv.sh
I am getting the below error when creating the ReplicationController.
error validating "registartion-rc.yaml": error validating data: found invalid field configMap for v1.Volume; if you choose to ignore these errors, turn validation off with --validate=false
You have a syntax error. A newer version of kubectl would give you a more specific error:
yaml: line 40: mapping values are not allowed in this context
The ConfigMap volume definition should look like this:
volumes:
- name: tomcatbin
  configMap:
    name: configmap-setenv
    items:
    - key: setenv.sh
      path: setenv.sh
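One caveat with this setup: a ConfigMap volume mounted at /opt/soft/jws-3.0/tomcat8/bin replaces the contents of that directory, so every other Tomcat script in bin disappears from the container. A minimal sketch that projects only setenv.sh into the existing directory using subPath instead:

volumeMounts:
- name: tomcatbin
  mountPath: /opt/soft/jws-3.0/tomcat8/bin/setenv.sh
  subPath: setenv.sh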
I am trying to run the Kubernetes WordPress sample on OpenShift. I already tried it on Minikube and it worked. However, when I deploy it to the OpenShift sandbox using oc (with oc apply -k ./), I get this error inside the WordPress pod:
MySQL Connection Error: (1130) Host '10.128.4.18' is not allowed to connect to this MySQL server
Warning: mysqli::mysqli(): (HY000/1130): Host '10.128.4.18' is not allowed to connect to this MySQL server in - on line 22
MySQL Connection Error: (1130) Host '10.128.4.18' is not allowed to connect to this MySQL server
Warning: mysqli::mysqli(): (HY000/1130): Host '10.128.4.18' is not allowed to connect to this MySQL
Here are my files:
kustomization.yaml:
secretGenerator:
- name: mysql-pass
  literals:
  - password=#MyPass1000
resources:
- mysql-deployment.yaml
- wordpress-deployment.yaml
mysql-deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: docker.io/library/mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        resources:
          requests:
            cpu: "250m"
            memory: "750Mi"
          limits:
            cpu: "500m"
            memory: "1000Mi"
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
wordpress-deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: docker.io/library/wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        resources:
          requests:
            cpu: "250m"
            memory: "250Mi"
          limits:
            cpu: "500m"
            memory: "500Mi"
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
Here's the output of oc get pods:
NAME                              READY   STATUS             RESTARTS        AGE
wordpress-5994c89c98-jmwpp        0/1     CrashLoopBackOff   6 (3m22s ago)   12m
wordpress-mysql-969ddcd5c-j2m46   1/1     Running            0               12m
I was using this image to run my application with docker-compose. However, when I run the same image on a Kubernetes cluster, I get the error
[ERROR] Could not open file '/opt/bitnami/mysql/logs/mysqld.log' for error logging: Permission denied
Here's my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: common-db
  name: common-db
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: common-db
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: common-db
    spec:
      containers:
      - env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        - name: MYSQL_DATABASE
          value: "common-development"
        - name: MYSQL_REPLICATION_MODE
          value: "master"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "repl_password"
        - name: MYSQL_REPLICATION_USER
          value: "repl_user"
        image: bitnami/mysql:5.7
        imagePullPolicy: ""
        name: common-db
        ports:
        - containerPort: 3306
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: 512Mi
            cpu: 500m
          limits:
            memory: 512Mi
            cpu: 500m
        volumeMounts:
        - name: common-db-initdb
          mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
      volumes:
      - name: common-db-initdb
        configMap:
          name: common-db-config
      serviceAccountName: ""
status: {}
The ConfigMap holds the my.cnf config data. Any pointers on where I could be going wrong, especially since the same image works under docker-compose?
Try changing the file permissions using an init container; in the official Bitnami Helm chart they also update file permissions and manage the security context this way.
Helm chart: https://github.com/bitnami/charts/blob/master/bitnami/mysql/templates/master-statefulset.yaml
UPDATE:
initContainers:
- command:
  - /bin/bash
  - -ec
  - |
    chown -R 1001:1001 /bitnami/mysql
  image: docker.io/bitnami/minideb:buster
  imagePullPolicy: Always
  name: volume-permissions
  resources: {}
  securityContext:
    runAsUser: 0
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /bitnami/mysql
    name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
  fsGroup: 1001
  runAsUser: 1001
serviceAccount: mysql
You may need to use subPath. See the Kubernetes documentation for the details of how subPath works.
volumeMounts:
- name: common-db-initdb
  mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
  subPath: my_custom.cnf
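For completeness, the ConfigMap referenced above can be created straight from the file; a minimal sketch, assuming my_custom.cnf sits in the current directory:

kubectl create configmap common-db-config --from-file=my_custom.cnf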
Also, you can easily install Bitnami MySQL using its Helm chart.
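A hedged sketch of the chart route (the release name my-mysql is an assumption):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mysql bitnami/mysql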
I'm new to Kubernetes (using Minikube) and I want to deploy a Spring Boot app which uses MySQL to store data.
I'm running my app inside a pod with 2 containers (one for my app and one for MySQL). The app itself works as expected, but my data are lost once I restart the pod (with a scale --replicas=0; scale --replicas=1, for example).
I'm using a PersistentVolumeClaim, but the data still aren't persisted, so I'm surely missing something important.
Here's my configuration file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: esse-deployment-1
  labels:
    app: esse-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: esse-1
  template:
    metadata:
      labels:
        app: esse-1
    spec:
      containers:
      - image: mysql:5.7
        name: esse-datasource
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        volumeMounts:
        - name: mysql-persistent-storage-esse-1
          mountPath: /home/esse-1/data/mysql
      - image: esse-application
        name: esse-app
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: ESSE_DATABASE_USERNAME
          value: root
        - name: ESSE_DATABASE_PASSWORD
          value: root
        - name: ESSE_APPLICATION_CONTEXT
          value: /esse-1
      volumes:
      - name: mysql-persistent-storage-esse-1
        persistentVolumeClaim:
          claimName: mysql-persistent-volume-claim-esse-1
---
apiVersion: v1
kind: Service
metadata:
  name: esse-service-1
  labels:
    app: esse-1
spec:
  selector:
    app: esse-1
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-persistent-volume-claim-esse-1
  labels:
    app: esse-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
You need to mount the persistent volume at the directory where MySQL actually writes the database data (adjust the mountPath for the container). In this case that is /var/lib/mysql.
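A minimal sketch of that fix, changing only the mountPath on the esse-datasource container from the Deployment above:

volumeMounts:
- name: mysql-persistent-storage-esse-1
  mountPath: /var/lib/mysql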
I'm having some difficulties deploying an OpenShift template, specifically with attaching a persistent volume. The template is meant to deploy Jira and a MySQL database for persistence. I have the following persistent volume configuration deployed:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqlpv0003
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/nfs/mysql
    server: 192.168.0.171
  persistentVolumeReclaimPolicy: Retain
Here 192.168.0.171 is a valid, working NFS server. My aim is to use this persistent volume as storage for the MySQL server. The template I'm trying to deploy is as follows:
---
apiVersion: v1
kind: Template
labels:
  app: jira-persistent
  template: jira-persistent
message: |-
  The following service(s) have been created in your project: ${NAME}, ${DATABASE_SERVICE_NAME}.
metadata:
  annotations:
    description: Deploys an instance of Jira, backed by a mysql database
    iconClass: icon-perl
    openshift.io/display-name: Jira + Mysql
    openshift.io/documentation-url: https://github.com/sclorg/dancer-ex
    openshift.io/long-description: Deploys an instance of Jira, backed by a mysql database
    openshift.io/provider-display-name: ABXY Games, Inc.
    openshift.io/support-url: abxygames.com
    tags: quickstart,JIRA
    template.openshift.io/bindable: 'false'
  name: jira-persistent
objects:
# Database secrets
- apiVersion: v1
  kind: Secret
  metadata:
    name: "${NAME}"
  stringData:
    database-password: "${DATABASE_PASSWORD}"
    database-user: "${DATABASE_USER}"
    keybase: "${SECRET_KEY_BASE}"
# application service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes and load balances the application pods
      service.alpha.openshift.io/dependencies: '[{"name": "${DATABASE_SERVICE_NAME}",
        "kind": "Service"}]'
    name: "${NAME}"
  spec:
    ports:
    - name: web
      port: 8080
      targetPort: 8080
    selector:
      name: "${NAME}"
# application route
- apiVersion: v1
  kind: Route
  metadata:
    name: "${NAME}"
  spec:
    host: "${APPLICATION_DOMAIN}"
    to:
      kind: Service
      name: "${NAME}"
# application image
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      description: Keeps track of changes in the application image
    name: "${NAME}"
# Application buildconfig
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      description: Defines how to build the application
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${NAME}"
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: "${NAME}:latest"
    source:
      contextDir: "${CONTEXT_DIR}"
      git:
        ref: "${SOURCE_REPOSITORY_REF}"
        uri: "${SOURCE_REPOSITORY_URL}"
      type: Git
    strategy:
      dockerStrategy:
        env:
        - name: CPAN_MIRROR
          value: "${CPAN_MIRROR}"
        dockerfilePath: Dockerfile
      type: Source
    triggers:
    - type: ImageChange
    - type: ConfigChange
    - github:
        secret: "${GITHUB_WEBHOOK_SECRET}"
      type: GitHub
# application deployConfig
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: Defines how to deploy the application server
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${NAME}"
  spec:
    replicas: 1
    selector:
      name: "${NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${NAME}"
        name: "${NAME}"
      spec:
        containers:
        - env:
          - name: DATABASE_SERVICE_NAME
            value: "${DATABASE_SERVICE_NAME}"
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${NAME}"
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${NAME}"
          - name: MYSQL_DATABASE
            value: "${DATABASE_NAME}"
          - name: SECRET_KEY_BASE
            valueFrom:
              secretKeyRef:
                key: keybase
                name: "${NAME}"
          - name: PERL_APACHE2_RELOAD
            value: "${PERL_APACHE2_RELOAD}"
          image: " "
          livenessProbe:
            httpGet:
              path: "/"
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 3
          name: jira-mysql-persistent
          ports:
          - containerPort: 8080
          readinessProbe:
            httpGet:
              path: "/"
              port: 8080
            initialDelaySeconds: 3
            timeoutSeconds: 3
          resources:
            limits:
              memory: "${MEMORY_LIMIT}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - jira-mysql-persistent
        from:
          kind: ImageStreamTag
          name: "${NAME}:latest"
      type: ImageChange
    - type: ConfigChange
# database persistentvolumeclaim
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: "${VOLUME_CAPACITY}"
# database service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes the database server
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    ports:
    - name: mysql
      port: 3306
      targetPort: 3306
    selector:
      name: "${DATABASE_SERVICE_NAME}"
# database deployment config
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: Defines how to deploy the database
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    replicas: 1
    selector:
      name: "${DATABASE_SERVICE_NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${DATABASE_SERVICE_NAME}"
        name: "${DATABASE_SERVICE_NAME}"
      spec:
        containers:
        - env:
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${NAME}"
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${NAME}"
          - name: MYSQL_DATABASE
            value: "${DATABASE_NAME}"
          image: " "
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 3306
            timeoutSeconds: 1
          name: mysql
          ports:
          - containerPort: 3306
          readinessProbe:
            exec:
              command:
              - "/bin/sh"
              - "-i"
              - "-c"
              - MYSQL_PWD='${DATABASE_PASSWORD}' mysql -h 127.0.0.1 -u ${DATABASE_USER}
                -D ${DATABASE_NAME} -e 'SELECT 1'
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            limits:
              memory: "${MEMORY_MYSQL_LIMIT}"
          volumeMounts:
          - mountPath: "/var/lib/mysql/data"
            name: "${DATABASE_SERVICE_NAME}-data"
        volumes:
        - name: "${DATABASE_SERVICE_NAME}-data"
          persistentVolumeClaim:
            claimName: "${DATABASE_SERVICE_NAME}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mysql
        from:
          kind: ImageStreamTag
          name: mysql:5.7
          namespace: "${NAMESPACE}"
      type: ImageChange
    - type: ConfigChange
parameters:
- description: The name assigned to all of the frontend objects defined in this template.
  displayName: Name
  name: NAME
  required: true
  value: jira-persistent
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  required: true
  value: openshift
- description: Maximum amount of memory the JIRA container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: Maximum amount of memory the MySQL container can use.
  displayName: Memory Limit (MySQL)
  name: MEMORY_MYSQL_LIMIT
  required: true
  value: 512Mi
- description: Volume space available for data, e.g. 512Mi, 2Gi
  displayName: Volume Capacity
  name: VOLUME_CAPACITY
  required: true
  value: 1Gi
- description: The URL of the repository with your application source code.
  displayName: Git Repository URL
  name: SOURCE_REPOSITORY_URL
  required: true
  value: https://github.com/stpork/jira.git
- description: Set this to a branch name, tag or other ref of your repository if you
    are not using the default branch.
  displayName: Git Reference
  name: SOURCE_REPOSITORY_REF
- description: Set this to the relative path to your project if it is not in the root
    of your repository.
  displayName: Context Directory
  name: CONTEXT_DIR
- description: The exposed hostname that will route to the jira service, if left
    blank a value will be defaulted.
  displayName: Application Hostname
  name: APPLICATION_DOMAIN
  value: ''
- description: Github trigger secret. A difficult to guess string encoded as part
    of the webhook URL. Not encrypted.
  displayName: GitHub Webhook Secret
  from: "[a-zA-Z0-9]{40}"
  generate: expression
  name: GITHUB_WEBHOOK_SECRET
- displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: database
- displayName: Database Username
  from: user[A-Z0-9]{3}
  generate: expression
  name: DATABASE_USER
- displayName: Database Password
  from: "[a-zA-Z0-9]{8}"
  generate: expression
  name: DATABASE_PASSWORD
- displayName: Database Name
  name: DATABASE_NAME
  required: true
  value: sampledb
- description: Set this to "true" to enable automatic reloading of modified Perl modules.
  displayName: Perl Module Reload
  name: PERL_APACHE2_RELOAD
  value: ''
- description: Your secret key for verifying the integrity of signed cookies.
  displayName: Secret Key
  from: "[a-z0-9]{127}"
  generate: expression
  name: SECRET_KEY_BASE
- description: The custom CPAN mirror URL
  displayName: Custom CPAN Mirror URL
  name: CPAN_MIRROR
  value: ''
When run, the deployment for the MySQL server eventually fails with the following error:
Unable to mount volumes for pod
"database-1-qvv86_test3(54f01c55-6885-11e9-bc42-3a342852673a)":
timeout expired waiting for volumes to attach or mount for pod
"test3"/"database-1-qvv86". list of unmounted volumes=[database-data
default-token-8hjgv]. list of unattached volumes=[database-data
default-token-8hjgv]
The persistent volume claim binds to the persistent volume successfully, but as far as I can tell the pod is not attaching to that volume. The template is deployed into a fresh project, the PV is freshly created, and the NFS share is empty. I can't see any errors in how the pod references the persistent volume claim. I'm not sure why this error occurs; I'm just learning templates and am clearly missing something. Does anyone see what I'm missing?
The issue was in my NFS permissions. Here is the working content of my /etc/exports file:
/var/nfs *(rw,root_squash,no_wdelay)
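After editing /etc/exports, the NFS server needs to re-read its export table before clients see the change; a short sketch, run on the NFS server:

sudo exportfs -ra          # re-export everything in /etc/exports
showmount -e localhost     # verify the share and its options are visible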
I am developing a database environment on Minikube.
I'd like to persist MySQL data using the PersistentVolume feature of Kubernetes.
However, the MySQL server errors out and will not start up when the volume is mounted at /var/lib/mysql (the MySQL data directory).
kubernetes-config.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs001-pv
  labels:
    app: nfs001-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
  nfs:
    path: /share/mydata
    server: 192.168.99.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  selector:
    matchLabels:
      app: nfs001-pv
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: nfs-claim
---
apiVersion: v1
kind: Service
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  type: NodePort
  ports:
  - port: 3306
    nodePort: 30001
  selector:
    app: sk-app
How can I launch it?
-- Postscript --
When I tried "kubectl logs", I got the following error message:
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
When I tried "kubectl describe xxx", I got the following results:
kubectl describe pv:
Name: nfs001-pv
Labels: app=nfs001-pv
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/nfs-claim
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Gi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.99.1
Path: /share/mydata
ReadOnly: false
Events: <none>
kubectl describe pvc:
Name: nfs-claim
Namespace: default
StorageClass:
Status: Bound
Volume: nfs001-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 1Gi
Access Modes: RWX
Events: <none>
kubectl describe deployment:
Name: sk-app
Namespace: default
CreationTimestamp: Tue, 25 Sep 2018 14:22:34 +0900
Labels: app=sk-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sk-app
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sk-app
Containers:
sk-app:
Image: mysql:5.7
Port: 3306/TCP
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mydata (rw)
Volumes:
mydata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-claim
ReadOnly: false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   sk-app-d58dddfb (1/1 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---  ----                   -------
  Normal  ScalingReplicaSet  23s  deployment-controller  Scaled up replica set sk-app-d58dddfb to 1
Volumes look good, so it looks like you just have a permission issue on the root of your NFS volume, which gets mounted as /var/lib/mysql in your container.
You can:
1) Mount that NFS volume manually with the nfs mount commands and run:
chmod 777 .  # this gives rwx to everybody, so be mindful
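A hedged sketch of that manual route, run from any machine that can reach the NFS server (the /mnt/mydata mount point is an assumption):

sudo mkdir -p /mnt/mydata
sudo mount -t nfs 192.168.99.1:/share/mydata /mnt/mydata
sudo chmod 777 /mnt/mydata   # opens the export root to everybody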
2) Run an initContainer in your deployment, similar to this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      initContainers:
      - name: init-mysql
        image: busybox
        command: ['sh', '-c', 'chmod 777 /var/lib/mysql']
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: nfs-claim
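If you would rather not open the directory to everybody, a tighter variant is to chown it to the MySQL user instead; a sketch assuming the official mysql image, which runs as uid/gid 999 (worth verifying for your tag), and an NFS export that does not squash root:

initContainers:
- name: init-mysql
  image: busybox
  # 999:999 is the mysql uid/gid in the official mysql image (assumption, verify for your tag)
  command: ['sh', '-c', 'chown -R 999:999 /var/lib/mysql']
  volumeMounts:
  - mountPath: /var/lib/mysql
    name: mydata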