I'm using OpenShift 4.7 and have this custom SCC (the goal is to have read-only access to some directories on the host node):
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
apiVersion: security.openshift.io/v1
fsGroup:
  type: RunAsAny
groups:
- system:cluster-admins
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: 'test scc'
  name: test-access
priority: 15
readOnlyRootFilesystem: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
- 'hostPath'
- 'secret'
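Note that listing `system:cluster-admins` under `groups` does not by itself let an ordinary workload use this SCC; the pod's service account needs the grant too. A minimal sketch (assuming the `ubuntu-test` service account and namespace used in the deployment that follows) is to add a `users` entry to the SCC:

```yaml
# Hypothetical addition to the SCC above, granting it to the workload's service account
users:
- system:serviceaccount:ubuntu-test:ubuntu-test
```

Equivalently, `oc adm policy add-scc-to-user test-access -z ubuntu-test -n ubuntu-test` achieves the same grant.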
and here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-test
  namespace: ubuntu-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu-test
  template:
    metadata:
      labels:
        app: ubuntu-test
    spec:
      serviceAccountName: ubuntu-test
      containers:
      - name: ubuntu-test
        image: ubuntu:latest
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          limits:
            cpu: 100m
            memory: 256Mi
        volumeMounts:
        - name: docker
          readOnly: true
          mountPath: /var/lib/docker/containers
        - name: containers
          readOnly: true
          mountPath: /var/log/containers
        - name: pods
          readOnly: true
          mountPath: /var/log/pods
      volumes:
      - name: docker
        hostPath:
          path: /var/lib/docker/containers
          type: ''
      - name: containers
        hostPath:
          path: /var/log/containers
          type: ''
      - name: pods
        hostPath:
          path: /var/log/pods
          type: ''
But when I rsh into the container, I can't access the mounted hostPath directories:
root@ubuntu-test-6b4fcb5bd7-fnc6f:/# ls /var/log/pods
ls: cannot open directory '/var/log/pods': Permission denied
As I check the permissions, everything seems fine:
drwxr-xr-x. 44 root root 8192 Oct 12 14:30 pods
Using SELinux can solve this problem. Reference article: https://zhimin-wen.medium.com/selinux-policy-for-openshift-containers-40baa1c86aa5
In addition, you can refer to the SELinux object classes and permissions to control create, delete, and modify operations on the mounted directory: https://selinuxproject.org/page/ObjectClassesPerms
In OpenShift 4, if you use hostPath as the backing data volume while SELinux is enabled, you need to configure an SELinux policy; by default, the host directories must be labeled container_file_t for containers to read them.
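A hedged sketch of the pod-level change this implies (the SCC above already sets `seLinuxContext: RunAsAny`, so the pod may choose its own label; `spc_t` is the stock "super privileged container" type, and the referenced article shows how to build a narrower custom type instead):

```yaml
# Sketch only: spc_t is very broad; prefer a custom SELinux type for production
spec:
  securityContext:
    seLinuxOptions:
      type: spc_t
```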
I'm working on migrating our existing Docker containers into OpenShift, and am running into an issue trying to get two of our containers into a single pod. We're using Spring Cloud Config Server for our services, with a Gitea backend. I'd like to have these in a single pod so that the Java server and Git server are always tied together.
I'm able to get to each of the containers individually via the associated routes, but the config server is unable to reach the git server, and vice versa. I get a 404 when the config server tries to clone the git repo. I've tried using gitea-${INSTANCE_IDENTIFIER} (INSTANCE_IDENTIFIER just being a generated value to tie all of the objects together at a glance), gitea-${INSTANCE_IDENTIFIER}.myproject.svc, and gitea-${INSTANCE_IDENTIFIER}.myproject.svc.cluster.local, as well as the full url for the route that gets created, and nothing works.
Here is my template, with a few things removed (...) for security:
apiVersion: v1
kind: Template
metadata:
  name: configuration-template
  annotations:
    description: 'Configuration containers template'
    iconClass: 'fa fa-gear'
    tags: 'git, Spring Cloud Configuration'
objects:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: 'gitea-config-${INSTANCE_IDENTIFIER}'
    labels:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
      gitea: '${GITEA_VERSION}'
      configuration: '${CONFIGURATION_VERSION}'
  data:
    local-docker.ini: |
      APP_NAME = Git Server
      RUN_USER = git
      RUN_MODE = prod
      [repository]
      ROOT = /home/git/data/git/repositories
      [repository.upload]
      TEMP_PATH = /home/git/data/gitea/uploads
      [server]
      APP_DATA_PATH = /home/git/data/gitea
      HTTP_PORT = 8443
      DISABLE_SSH = true
      SSH_PORT = 22
      LFS_START_SERVER = false
      OFFLINE_MODE = false
      PROTOCOL = https
      CERT_FILE = /var/run/secrets/service-cert/tls.crt
      KEY_FILE = /var/run/secrets/service-cert/tls.key
      REDIRECT_OTHER_PORT = true
      PORT_TO_REDIRECT = 8080
      [database]
      PATH = /home/git/data/gitea/gitea.db
      DB_TYPE = sqlite3
      NAME = gitea
      USER = gitea
      PASSWD = XXXX
      [session]
      PROVIDER_CONFIG = /home/git/data/gitea/sessions
      PROVIDER = file
      [picture]
      AVATAR_UPLOAD_PATH = /home/git/data/gitea/avatars
      DISABLE_GRAVATAR = false
      ENABLE_FEDERATED_AVATAR = false
      [attachment]
      PATH = /home/git/data/gitea/attachments
      [log]
      ROOT_PATH = /home/git/data/gitea/log
      MODE = file
      LEVEL = Info
      [mailer]
      ENABLED = false
      [service]
      REGISTER_EMAIL_CONFIRM = false
      ENABLE_NOTIFY_MAIL = false
      DISABLE_REGISTRATION = false
      ENABLE_CAPTCHA = false
      REQUIRE_SIGNIN_VIEW = false
      DEFAULT_KEEP_EMAIL_PRIVATE = false
      NO_REPLY_ADDRESS = noreply.example.org
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: 'gitea-${INSTANCE_IDENTIFIER}'
    labels:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
      gitea: '${GITEA_VERSION}'
      configuration: '${CONFIGURATION_VERSION}'
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
- apiVersion: v1
  kind: Route
  metadata:
    name: 'gitea-${INSTANCE_IDENTIFIER}'
    labels:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
      gitea: '${GITEA_VERSION}'
      configuration: '${CONFIGURATION_VERSION}'
  spec:
    port:
      targetPort: 'https'
    tls:
      termination: 'passthrough'
    to:
      kind: Service
      name: 'gitea-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
  kind: Service
  metadata:
    name: 'gitea-${INSTANCE_IDENTIFIER}'
    labels:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
      gitea: '${GITEA_VERSION}'
      configuration: '${CONFIGURATION_VERSION}'
    annotations:
      service.alpha.openshift.io/serving-cert-secret-name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
  spec:
    type: ClusterIP
    ports:
    - name: 'https'
      port: 443
      targetPort: 8443
    selector:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
  kind: Route
  metadata:
    name: 'configuration-${INSTANCE_IDENTIFIER}'
    labels:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
      gitea: '${GITEA_VERSION}'
      configuration: '${CONFIGURATION_VERSION}'
  spec:
    port:
      targetPort: 'https'
    tls:
      termination: 'passthrough'
    to:
      kind: Service
      name: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
  kind: Service
  metadata:
    name: 'configuration-${INSTANCE_IDENTIFIER}'
    labels:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
      gitea: '${GITEA_VERSION}'
      configuration: '${CONFIGURATION_VERSION}'
    annotations:
      service.alpha.openshift.io/serving-cert-secret-name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
  spec:
    type: ClusterIP
    ports:
    - name: 'https'
      port: 443
      targetPort: 8105
    selector:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: 'gitea-${INSTANCE_IDENTIFIER}'
    labels:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
      gitea: '${GITEA_VERSION}'
      configuration: '${CONFIGURATION_VERSION}'
  spec:
    selector:
      app: 'configuration-${INSTANCE_IDENTIFIER}'
    replicas: 1
    template:
      metadata:
        labels:
          app: 'configuration-${INSTANCE_IDENTIFIER}'
          gitea: '${GITEA_VERSION}'
      spec:
        initContainers:
        - name: pem-to-keystore
          image: nginx
          env:
          - name: keyfile
            value: /var/run/secrets/openshift.io/services_serving_certs/tls.key
          - name: crtfile
            value: /var/run/secrets/openshift.io/services_serving_certs/tls.crt
          - name: keystore_pkcs12
            value: /var/run/secrets/java.io/keystores/keystore.pkcs12
          - name: password
            value: '${STORE_PASSWORD}'
          command: ['sh']
          args: ['-c', "openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password pass:$password -name 'server certificate'"]
          volumeMounts:
          - mountPath: /var/run/secrets/java.io/keystores
            name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
          - mountPath: /var/run/secrets/openshift.io/services_serving_certs
            name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
        - name: pem-to-truststore
          image: openjdk:alpine
          env:
          - name: ca_bundle
            value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - name: truststore_jks
            value: /var/run/secrets/java.io/keystores/truststore.jks
          - name: password
            value: '${STORE_PASSWORD}'
          command: ['/bin/sh']
          args: ["-c",
            "keytool -noprompt -importkeystore -srckeystore $JAVA_HOME/jre/lib/security/cacerts -srcstoretype JKS -destkeystore $truststore_jks -storepass $password -srcstorepass changeit && cd /var/run/secrets/java.io/keystores/ && awk '/-----BEGIN CERTIFICATE-----/{filename=\"crt-\"NR}; {print >filename}' $ca_bundle && for file in crt-*; do keytool -import -noprompt -keystore $truststore_jks -file $file -storepass $password -alias service-$file; done && rm crt-*"]
          volumeMounts:
          - mountPath: /var/run/secrets/java.io/keystores
            name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
        containers:
        - name: 'gitea-${INSTANCE_IDENTIFIER}'
          image: '...'
          command: ['sh',
            '-c',
            'tar xf /app/gitea/gitea-data.tar.gz -C /home/git/data && cp /app/config/local-docker.ini /home/git/config/local-docker.ini && gitea web --config /home/git/config/local-docker.ini']
          ports:
          - containerPort: 8443
            protocol: TCP
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: '/home/git/data'
            name: 'gitea-data-${INSTANCE_IDENTIFIER}'
            readOnly: false
          - mountPath: '/app/config'
            name: 'gitea-config-${INSTANCE_IDENTIFIER}'
            readOnly: false
          - mountPath: '/var/run/secrets/service-cert'
            name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
        - name: 'configuration-${INSTANCE_IDENTIFIER}'
          image: '...'
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          command: [
            "java",
            "-Djava.security.egd=file:/dev/./urandom",
            "-Dspring.profiles.active=terraform",
            "-Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks",
            "-Djavax.net.ssl.trustStoreType=JKS",
            "-Djavax.net.ssl.trustStorePassword=${STORE_PASSWORD}",
            "-Dserver.ssl.key-store=/var/run/secrets/java.io/keystores/keystore.pkcs12",
            "-Dserver.ssl.key-store-password=${STORE_PASSWORD}",
            "-Dserver.ssl.key-store-type=PKCS12",
            "-Dserver.ssl.trust-store=/var/run/secrets/java.io/keystores/truststore.jks",
            "-Dserver.ssl.trust-store-password=${STORE_PASSWORD}",
            "-Dserver.ssl.trust-store-type=JKS",
            "-Dspring.cloud.config.server.git.uri=https://gitea-${INSTANCE_IDENTIFIER}.svc.cluster.local/org/centralrepo.git",
            "-jar",
            "/app.jar"
          ]
          ports:
          - containerPort: 8105
            protocol: TCP
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: '/var/run/secrets/java.io/keystores'
            name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
            readOnly: true
          - mountPath: 'target/centralrepo'
            name: 'configuration-${INSTANCE_IDENTIFIER}'
            readOnly: false
        volumes:
        - name: 'gitea-data-${INSTANCE_IDENTIFIER}'
          persistentVolumeClaim:
            claimName: 'gitea-${INSTANCE_IDENTIFIER}'
        - name: 'gitea-config-${INSTANCE_IDENTIFIER}'
          configMap:
            defaultMode: 0660
            name: 'gitea-config-${INSTANCE_IDENTIFIER}'
        - name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
          secret:
            defaultMode: 0640
            secretName: 'gitea-certs-${INSTANCE_IDENTIFIER}'
        - name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
          emptyDir:
        - name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
          secret:
            defaultMode: 0640
            secretName: 'configuration-certs-${INSTANCE_IDENTIFIER}'
        - name: 'configuration-${INSTANCE_IDENTIFIER}'
          emptyDir:
            defaultMode: 660
        restartPolicy: Always
        terminationGracePeriodSeconds: 62
        dnsPolicy: ClusterFirst
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
        maxSurge: 1
parameters:
- name: GITEA_VERSION
  displayName: Gitea Image Version
  description: The version of the gitea image.
  required: true
- name: CONFIGURATION_VERSION
  displayName: Configuration Service Image Version
  description: The version of the configuration service image.
  required: true
- name: INSTANCE_IDENTIFIER
  description: Provides an identifier to tie all objects in the deployment together.
  generate: expression
  from: "[a-z0-9]{6}"
- name: STORE_PASSWORD
  generate: expression
  from: "[a-zA-Z0-9]{25}"
I was using this image to run my application with docker-compose. However, when I run the same image on a Kubernetes cluster I get the following error:
[ERROR] Could not open file '/opt/bitnami/mysql/logs/mysqld.log' for error logging: Permission denied
Here's my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: common-db
  name: common-db
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: common-db
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: common-db
    spec:
      containers:
      - env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        - name: MYSQL_DATABASE
          value: "common-development"
        - name: MYSQL_REPLICATION_MODE
          value: "master"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "repl_password"
        - name: MYSQL_REPLICATION_USER
          value: "repl_user"
        image: bitnami/mysql:5.7
        imagePullPolicy: ""
        name: common-db
        ports:
        - containerPort: 3306
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: 512Mi
            cpu: 500m
          limits:
            memory: 512Mi
            cpu: 500m
        volumeMounts:
        - name: common-db-initdb
          mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
      volumes:
      - name: common-db-initdb
        configMap:
          name: common-db-config
      serviceAccountName: ""
status: {}
The config map holds the my.cnf config data. Any pointers on where I could be going wrong? Especially since the same image works in docker-compose?
Try changing the file permissions using an init container. In the official Bitnami Helm chart they also update file permissions and manage the security context this way.
Helm chart: https://github.com/bitnami/charts/blob/master/bitnami/mysql/templates/master-statefulset.yaml
UPDATE :
initContainers:
- command:
  - /bin/bash
  - -ec
  - |
    chown -R 1001:1001 /bitnami/mysql
  image: docker.io/bitnami/minideb:buster
  imagePullPolicy: Always
  name: volume-permissions
  resources: {}
  securityContext:
    runAsUser: 0
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /bitnami/mysql
    name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
  fsGroup: 1001
  runAsUser: 1001
serviceAccount: mysql
You may need to use subPath: without it, the configMap volume is mounted as a directory and shadows everything under the target path, whereas with subPath only the single file is projected. See the Kubernetes volumes documentation for details.
volumeMounts:
- name: common-db-initdb
  mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
  subPath: my_custom.cnf
Alternatively, you can easily install Bitnami MySQL using its Helm chart.
I am following the official tutorial here to run a stateful mysql pod on a Kubernetes cluster that is already running on GCP. I have used the exact same commands to first create the persistent volume and persistent volume claim, and then deployed the contents of the mysql yaml file as per the documentation. The mysql pod is not running and is in RunContainerError state. Checking the logs of this mysql pod shows:
failed to open log file "/var/log/pods/045cea87-6408-11e9-84d3-42010aa001c3/mysql/2.log": open /var/log/pods/045cea87-6408-11e9-84d3-42010aa001c3/mysql/2.log: no such file or directory
Update: As asked by @Matthew in the comments, the result of kubectl describe pods -l app=mysql is provided here:
Name: mysql-fb75876c6-tk6ml
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-mycluster-default-pool-b1c1d316-xv4v/10.160.0.13
Start Time: Tue, 23 Apr 2019 13:36:04 +0530
Labels: app=mysql
pod-template-hash=963143272
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container mysql
Status: Running
IP: 10.52.0.7
Controlled By: ReplicaSet/mysql-fb75876c6
Containers:
mysql:
Container ID: docker://451ec5bf67f60269493b894004120b627d9a05f38e37cb50e9f283e58dbe6e56
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:5ab881bc5abe2ac734d9fb53d76d984cc04031159152ab42edcabbd377cc0859
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: RunContainerError
Last State: Terminated
Reason: ContainerCannotRun
Message: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Exit Code: 128
Started: Tue, 23 Apr 2019 13:36:18 +0530
Finished: Tue, 23 Apr 2019 13:36:18 +0530
Ready: False
Restart Count: 1
Requests:
cpu: 100m
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-jpkzg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jpkzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned default/mysql-fb75876c6-tk6ml to gke-mycluster-default-pool-b1c1d316-xv4v
Normal Pulling 31s kubelet, gke-mycluster-default-pool-b1c1d316-xv4v pulling image "mysql:5.6"
Normal Pulled 22s kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Successfully pulled image "mysql:5.6"
Normal Pulled 4s (x2 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Container image "mysql:5.6" already present on machine
Normal Created 3s (x3 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Created container
Warning Failed 3s (x3 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Error: failed to start container "mysql": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
As asked by @Hanx:
Result of kubectl describe pv mysql-pv-volume
Name: mysql-pv-volume
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"mysql-pv-volume","namespace":""},"spec":{"a...
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/data
HostPathType:
Events: <none>
Result of kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"default"},"spec":{"accessModes":["R...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
Events: <none>
mysql-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
This is because you do not need to create those volumes and StorageClasses on GKE. Those yaml files are completely valid if you wanted to use minikube or kubeadm, but not on GKE, which takes care of some of those manual steps on its own. Since the PVC below omits storageClassName, GKE's default StorageClass dynamically provisions a persistent disk for it.
You can use this official guide to run mysql on GKE, or just use the files below, edited by me and tested on GKE.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
And mysql Deployment:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-volumeclaim
Make sure you read the linked guide as it explains the GKE specific topics there.
I'm having some difficulty deploying an OpenShift template, specifically with attaching a persistent volume. The template is meant to deploy Jira and a MySQL database for persistence. I have the following persistent volume configuration deployed:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqlpv0003
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/nfs/mysql
    server: 192.168.0.171
  persistentVolumeReclaimPolicy: Retain
Where 192.168.0.171 is a valid, working NFS server. My aim is to use this persistent volume as storage for the MySQL server. The template I'm trying to deploy is as follows:
---
apiVersion: v1
kind: Template
labels:
  app: jira-persistent
  template: jira-persistent
message: |-
  The following service(s) have been created in your project: ${NAME}, ${DATABASE_SERVICE_NAME}.
metadata:
  annotations:
    description: Deploys an instance of Jira, backed by a mysql database
    iconClass: icon-perl
    openshift.io/display-name: Jira + Mysql
    openshift.io/documentation-url: https://github.com/sclorg/dancer-ex
    openshift.io/long-description: Deploys an instance of Jira, backed by a mysql database
    openshift.io/provider-display-name: ABXY Games, Inc.
    openshift.io/support-url: abxygames.com
    tags: quickstart,JIRA
    template.openshift.io/bindable: 'false'
  name: jira-persistent
objects:
# Database secrets
- apiVersion: v1
  kind: Secret
  metadata:
    name: "${NAME}"
  stringData:
    database-password: "${DATABASE_PASSWORD}"
    database-user: "${DATABASE_USER}"
    keybase: "${SECRET_KEY_BASE}"
# application service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes and load balances the application pods
      service.alpha.openshift.io/dependencies: '[{"name": "${DATABASE_SERVICE_NAME}",
        "kind": "Service"}]'
    name: "${NAME}"
  spec:
    ports:
    - name: web
      port: 8080
      targetPort: 8080
    selector:
      name: "${NAME}"
# application route
- apiVersion: v1
  kind: Route
  metadata:
    name: "${NAME}"
  spec:
    host: "${APPLICATION_DOMAIN}"
    to:
      kind: Service
      name: "${NAME}"
# application image
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      description: Keeps track of changes in the application image
    name: "${NAME}"
# Application buildconfig
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      description: Defines how to build the application
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${NAME}"
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: "${NAME}:latest"
    source:
      contextDir: "${CONTEXT_DIR}"
      git:
        ref: "${SOURCE_REPOSITORY_REF}"
        uri: "${SOURCE_REPOSITORY_URL}"
      type: Git
    strategy:
      dockerStrategy:
        env:
        - name: CPAN_MIRROR
          value: "${CPAN_MIRROR}"
        dockerfilePath: Dockerfile
      type: Source
    triggers:
    - type: ImageChange
    - type: ConfigChange
    - github:
        secret: "${GITHUB_WEBHOOK_SECRET}"
      type: GitHub
# application deployConfig
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: Defines how to deploy the application server
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${NAME}"
  spec:
    replicas: 1
    selector:
      name: "${NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${NAME}"
        name: "${NAME}"
      spec:
        containers:
        - env:
          - name: DATABASE_SERVICE_NAME
            value: "${DATABASE_SERVICE_NAME}"
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${NAME}"
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${NAME}"
          - name: MYSQL_DATABASE
            value: "${DATABASE_NAME}"
          - name: SECRET_KEY_BASE
            valueFrom:
              secretKeyRef:
                key: keybase
                name: "${NAME}"
          - name: PERL_APACHE2_RELOAD
            value: "${PERL_APACHE2_RELOAD}"
          image: " "
          livenessProbe:
            httpGet:
              path: "/"
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 3
          name: jira-mysql-persistent
          ports:
          - containerPort: 8080
          readinessProbe:
            httpGet:
              path: "/"
              port: 8080
            initialDelaySeconds: 3
            timeoutSeconds: 3
          resources:
            limits:
              memory: "${MEMORY_LIMIT}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - jira-mysql-persistent
        from:
          kind: ImageStreamTag
          name: "${NAME}:latest"
      type: ImageChange
    - type: ConfigChange
# database persistentvolumeclaim
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: "${VOLUME_CAPACITY}"
# database service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes the database server
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    ports:
    - name: mysql
      port: 3306
      targetPort: 3306
    selector:
      name: "${DATABASE_SERVICE_NAME}"
# database deployment config
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: Defines how to deploy the database
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    replicas: 1
    selector:
      name: "${DATABASE_SERVICE_NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${DATABASE_SERVICE_NAME}"
        name: "${DATABASE_SERVICE_NAME}"
      spec:
        containers:
        - env:
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${NAME}"
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${NAME}"
          - name: MYSQL_DATABASE
            value: "${DATABASE_NAME}"
          image: " "
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 3306
            timeoutSeconds: 1
          name: mysql
          ports:
          - containerPort: 3306
          readinessProbe:
            exec:
              command:
              - "/bin/sh"
              - "-i"
              - "-c"
              - MYSQL_PWD='${DATABASE_PASSWORD}' mysql -h 127.0.0.1 -u ${DATABASE_USER}
                -D ${DATABASE_NAME} -e 'SELECT 1'
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            limits:
              memory: "${MEMORY_MYSQL_LIMIT}"
          volumeMounts:
          - mountPath: "/var/lib/mysql/data"
            name: "${DATABASE_SERVICE_NAME}-data"
        volumes:
        - name: "${DATABASE_SERVICE_NAME}-data"
          persistentVolumeClaim:
            claimName: "${DATABASE_SERVICE_NAME}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mysql
        from:
          kind: ImageStreamTag
          name: mysql:5.7
          namespace: "${NAMESPACE}"
      type: ImageChange
    - type: ConfigChange
parameters:
- description: The name assigned to all of the frontend objects defined in this template.
  displayName: Name
  name: NAME
  required: true
  value: jira-persistent
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  required: true
  value: openshift
- description: Maximum amount of memory the JIRA container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: Maximum amount of memory the MySQL container can use.
  displayName: Memory Limit (MySQL)
  name: MEMORY_MYSQL_LIMIT
  required: true
  value: 512Mi
- description: Volume space available for data, e.g. 512Mi, 2Gi
  displayName: Volume Capacity
  name: VOLUME_CAPACITY
  required: true
  value: 1Gi
- description: The URL of the repository with your application source code.
  displayName: Git Repository URL
  name: SOURCE_REPOSITORY_URL
  required: true
  value: https://github.com/stpork/jira.git
- description: Set this to a branch name, tag or other ref of your repository if you
    are not using the default branch.
  displayName: Git Reference
  name: SOURCE_REPOSITORY_REF
- description: Set this to the relative path to your project if it is not in the root
    of your repository.
  displayName: Context Directory
  name: CONTEXT_DIR
- description: The exposed hostname that will route to the jira service, if left
    blank a value will be defaulted.
  displayName: Application Hostname
  name: APPLICATION_DOMAIN
  value: ''
- description: Github trigger secret. A difficult to guess string encoded as part
    of the webhook URL. Not encrypted.
  displayName: GitHub Webhook Secret
  from: "[a-zA-Z0-9]{40}"
  generate: expression
  name: GITHUB_WEBHOOK_SECRET
- displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: database
- displayName: Database Username
  from: user[A-Z0-9]{3}
  generate: expression
  name: DATABASE_USER
- displayName: Database Password
  from: "[a-zA-Z0-9]{8}"
  generate: expression
  name: DATABASE_PASSWORD
- displayName: Database Name
  name: DATABASE_NAME
  required: true
  value: sampledb
- description: Set this to "true" to enable automatic reloading of modified Perl modules.
  displayName: Perl Module Reload
  name: PERL_APACHE2_RELOAD
  value: ''
- description: Your secret key for verifying the integrity of signed cookies.
  displayName: Secret Key
  from: "[a-z0-9]{127}"
  generate: expression
  name: SECRET_KEY_BASE
- description: The custom CPAN mirror URL
  displayName: Custom CPAN Mirror URL
  name: CPAN_MIRROR
  value: ''
When run, the deployment for the MYSQL server eventually fails with the following error:
Unable to mount volumes for pod
"database-1-qvv86_test3(54f01c55-6885-11e9-bc42-3a342852673a)":
timeout expired waiting for volumes to attach or mount for pod
"test3"/"database-1-qvv86". list of unmounted volumes=[database-data
default-token-8hjgv]. list of unattached volumes=[database-data
default-token-8hjgv]
The persistent volume claim is attaching to the persistent volume successfully, but as far as I can tell the pod is not attaching to that volume. The template is being deployed in a fresh project, and the PV is freshly created and the nfs is empty. I can't see any errors with how the pod is referencing the persistent volume claim. I'm not sure why this error is occurring, but I'm just learning templates and am clearly missing something. Does anyone see what I'm missing?
The issue was in my NFS permissions. Here is the working content of my /etc/exports file:
/var/nfs *(rw,root_squash,no_wdelay)
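For reference, the options on that export line follow standard exports(5) semantics: `rw` permits writes, `root_squash` maps the client's root user to the anonymous NFS user, and `no_wdelay` disables write batching. A commented sketch (the narrower export path and client wildcard here are assumptions; adapt them to your setup):

```
# /etc/exports
/var/nfs/mysql  *(rw,root_squash,no_wdelay)
```

After editing, re-export the shares with `exportfs -ra` on the NFS server.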
I am developing a database environment on Minikube.
I'd like to persist MySQL data using the PersistentVolume feature of Kubernetes.
However, when the volume is mounted at /var/lib/mysql (the MySQL data directory), the MySQL server errors on startup and will not come up.
kubernetes-config.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs001-pv
  labels:
    app: nfs001-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
  nfs:
    path: /share/mydata
    server: 192.168.99.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  selector:
    matchLabels:
      app: nfs001-pv
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: nfs-claim
---
apiVersion: v1
kind: Service
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  type: NodePort
  ports:
  - port: 3306
    nodePort: 30001
  selector:
    app: sk-app
How can I launch it?
-- Postscript --
When I tried "kubectl logs", I got the following error message.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
When I tried "kubectl describe xxx", I got the following results.
kubectl describe pv:
Name: nfs001-pv
Labels: app=nfs001-pv
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/nfs-claim
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Gi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.99.1
Path: /share/mydata
ReadOnly: false
Events: <none>
kubectl describe pvc:
Name: nfs-claim
Namespace: default
StorageClass:
Status: Bound
Volume: nfs001-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 1Gi
Access Modes: RWX
Events: <none>
kubectl describe deployment:
Name: sk-app
Namespace: default
CreationTimestamp: Tue, 25 Sep 2018 14:22:34 +0900
Labels: app=sk-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sk-app
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sk-app
Containers:
sk-app:
Image: mysql:5.7
Port: 3306/TCP
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mydata (rw)
Volumes:
mydata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: sk-app-d58dddfb (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 23s deployment-controller Scaled up replica set sk-app-d58dddfb to 1
The volumes look good, so it seems you just have a permission issue on the root of your NFS volume, which gets mounted as /var/lib/mysql in your container.
You can:
1) Mount that NFS volume with the usual nfs mount commands and run:
chmod 777 .  # this gives rwx to everybody, so be mindful
2) Run an initContainer in your deployment, similar to this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      initContainers:
      - name: init-mysql
        image: busybox
        command: ['sh', '-c', 'chmod 777 /var/lib/mysql']
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: nfs-claim
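The effect of that init-mysql container can be sanity-checked locally; this sketch simulates the same chmod on a scratch directory standing in for the NFS-backed /var/lib/mysql mount:

```shell
# Simulate the initContainer's permission fix on a temporary directory
dir=$(mktemp -d)
chmod 777 "$dir"        # same command the init container runs against /var/lib/mysql
stat -c '%a' "$dir"     # show the resulting mode (777)
rm -rf "$dir"
```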