How to connect a Cloud SQL instance to a SQL cluster? - MySQL

I have the deployment YAML file on the cluster, the connection name of the SQL instance, and its public IP address. What should I add, and where, in order to connect the instance and the cluster? I want to be able to add something to the SQL cluster and have it automatically saved to the instance, and vice versa.
this is the deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp:
generation: 1
labels:
app: mysql
name: mysql
namespace: default
resourceVersion: "1420"
selfLink:
uid:
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: mysql
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: mysql
spec:
containers:
- env:
- name:
valueFrom:
secretKeyRef:
key:
name: mysql
image: mysql:5.6
imagePullPolicy: IfNotPresent
name: mysql
ports:
- containerPort: 3306
name: mysql
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/mysql
name:
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name:
persistentVolumeClaim:
claimName:
status:
availableReplicas: 1
conditions:
- lastTransitionTime:
lastUpdateTime:
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime:
lastUpdateTime:
message: ReplicaSet "mysql" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1

You should use the Cloud SQL proxy and add it as a sidecar to the application that makes queries to the Cloud SQL instance. Google documents this as a suggested best practice.
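A minimal sketch of that sidecar, added to the pod spec of the Deployment that runs your application (not the MySQL Deployment above). The <PROJECT>:<REGION>:<INSTANCE> placeholder is the instance connection name you already have; the cloudsql-sa-key Secret holding a service-account key is an assumption for illustration:
spec:
  template:
    spec:
      containers:
      - name: app                          # your application container; point it at 127.0.0.1:3306
        image: your-app-image              # placeholder
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
        command:
          - /cloud_sql_proxy
          - -instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:3306
          - -credential_file=/secrets/service_account.json
        volumeMounts:
          - name: cloudsql-sa-key          # assumed Secret containing the service-account key file
            mountPath: /secrets
            readOnly: true
      volumes:
        - name: cloudsql-sa-key
          secret:
            secretName: cloudsql-sa-key
With the proxy running next to the application, the app connects to 127.0.0.1:3306 and reads and writes the Cloud SQL instance directly, so there is a single source of truth instead of two databases to keep in sync, and the public IP does not need to be exposed.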

Related

Unable to deploy Keycloak (9.0.0) deployment on Minishift (1.34.0): keycloak-add-user.json (Permission denied)

I am unable to launch Keycloak (9.0.0) on Minishift (v1.34.0+f5db7cb) and am getting a CrashLoopBackOff error. This Deployment will be integrated with a Postgres deployment.
Keycloak Pod logs:
/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json (Permission denied)
Here is the yaml file which I deployed through the console (oc apply -f):
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.16.0 (0c01309)
creationTimestamp: null
labels:
io.kompose.service: keycloak
name: keycloak
spec:
selector:
matchLabels:
io.kompose.service: keycloak
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: keycloak
spec:
containers:
- env:
- name: DB_ADDR
value: postgres
- name: DB_DATABASE
value: keycloak
- name: DB_PASSWORD
value: password
- name: DB_SCHEMA
value: public
- name: DB_USER
value: keycloak
- name: DB_VENDOR
value: POSTGRES
- name: KEYCLOAK_LOGLEVEL
value: DEBUG
- name: KEYCLOAK_PASSWORD
value: Pa55w0rd
- name: KEYCLOAK_USER
value: admin
image: localhost:5000/keycloak
name: keycloak
ports:
- containerPort: 8080
- containerPort: 8443
resources: {}
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.16.0 (0c01309)
creationTimestamp: null
labels:
io.kompose.service: keycloak
name: keycloak
spec:
ports:
- name: "8880"
port: 8880
targetPort: 8080
- name: "8888"
port: 8888
targetPort: 8443
type: LoadBalancer
selector:
io.kompose.service: keycloak
Is there any way to resolve this? Thanks in advance!
keycloak-add-user.json is generated by the KEYCLOAK_HOME/bin/add-user-keycloak.sh utility. On startup, the Keycloak server checks for the presence of this file and, if it is found, the specified user is added.
In turn, the Keycloak pod checks during startup whether the user-creation variables KEYCLOAK_USER and KEYCLOAK_PASSWORD are set, and if they are, the add-user-keycloak.sh utility is called with those parameters to create the user.
So in your case you should make the /opt/jboss/keycloak/standalone/configuration directory writable.
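One way to get there on Minishift (my assumption, not spelled out in the answer above) is to let the pod run as the image's own user instead of the random UID OpenShift assigns, then recreate the pod; rebuilding the image with that directory made group-writable is the alternative.
oc adm policy add-scc-to-user anyuid -z default      # relaxes security; acceptable on a local Minishift
oc delete pod -l io.kompose.service=keycloak         # the Deployment recreates the pod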

WordPress + MySQL deployed in Kubernetes - MySQL Connection Error

A Kubernetes scenario with WordPress + MySQL in a local environment.
The WordPress Pod is unable to connect to the MySQL database, with the following error in the WordPress Pod logs:
MySQL Connection Error: (1045) Access denied for user 'root'@'10.44.0.5' (using password: YES)
Warning: mysqli::mysqli(): (HY000/1045): Access denied for user 'root'@'10.44.0.5' (using password: YES) in - on line 22
The instructions are taken from the kubernetes.io WordPress/MySQL tutorial. The only change I made was creating a Secret resource to store the password, referenced by both the MySQL and WordPress containers.
apiVersion: v1
kind: Secret
metadata:
name: mysql-pass
namespace: default
data:
password: cGFzc3dvcmQxMjMK --> that is base64 of password123
type: Opaque
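A detail worth checking here (my observation, not part of the original post): cGFzc3dvcmQxMjMK is the base64 of password123 followed by a trailing newline, because a plain echo appends one. The decoded root password is then "password123\n", which would explain why logging in interactively with password123 is denied. Encoding without the newline avoids this:
echo password123 | base64        # cGFzc3dvcmQxMjMK  -> decodes to "password123\n"
echo -n password123 | base64     # cGFzc3dvcmQxMjM=  -> decodes to "password123"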
Both Pods are in the default namespace, on node1, which is a worker node:
NAME READY STATUS RESTARTS AGE IP NODE
wordpress-554dfbbc47-hnr4n 0/1 Error 1 66s 10.44.0.5 node1
wordpress-mysql-5477cbdfbf-29w2r 1/1 Running 0 74s 10.44.0.4 node1
I have no MySQL skills, but if I get a bash shell in the MySQL container and execute:
# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Here is the Service output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
wordpress LoadBalancer 10.107.114.255 192.168.1.83 80:32336/TCP
wordpress-mysql ClusterIP None <none> 3306/TCP
Some environment variables from the MySQL Pod:
....
HOSTNAME=wordpress-mysql-5477cbdfbf-29w2r
MYSQL_MAJOR=5.6
MYSQL_ROOT_PASSWORD=password123
MYSQL_VERSION=5.6.50-1debian9
....
The PersistentVolumes are working fine.
I'm quite stuck on how to go ahead with troubleshooting. Help would be appreciated.
After testing different images for MySQL and WordPress, and reading the useful documentation on hub.docker.com for mysql and wordpress, I got the web application stack working.
The configuration:
MySQL:
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
requests:
storage: 1Gi
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.7
imagePullPolicy: IfNotPresent
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: root-pass
key: password
- name: MYSQL_DATABASE
value: mysql
- name: MYSQL_USER
value: mysql
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
nodeSelector:
storage: local
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
Wordpress:
apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: LoadBalancer
externalIPs:
- 192.168.1.83
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
requests:
storage: 1Gi
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress
name: wordpress
imagePullPolicy: IfNotPresent
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
- name: WORDPRESS_DB_USER
value: mysql
- name: WORDPRESS_DB_NAME
value: mysql
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
nodeSelector:
storage: local
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim
Output of the PersistentVolumeClaims:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
mysql-pv-claim Bound persistent-volume-mysql 4Gi RWO local-storage
wp-pv-claim Bound persistent-volume-wordpress 2Gi RWO local-storage
Secrets:
apiVersion: v1
kind: Secret
metadata:
name: root-pass
namespace: default
data:
password: cGFzc3dvcmQ=
type: Opaque
apiVersion: v1
kind: Secret
metadata:
name: mysql-pass
namespace: default
data:
password: cGFzc3dvcmQ=
type: Opaque
Notes for my example configuration:
on node1 I created the directories /mysql/data and /wordpress/data (mount points for the mysql and wordpress containers).
image used for mysql -> mysql:5.7
image used for wordpress -> wordpress
environment variables added according to the documentation of mysql and wordpress.
Did you apply your Secret? Is it available in the cluster for the pods to read?
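For example, you can check what the containers will actually receive from the Secret (od -c makes a stray trailing newline visible):
kubectl get secret mysql-pass -n default -o jsonpath='{.data.password}' | base64 -d | od -c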

MySQL router in kubernetes as a service

I want to deploy MySQL Router in Kubernetes, working as a service.
My plan:
Deploy MySQL Router inside Kubernetes and expose it as a service using a LoadBalancer (MetalLB).
Applications running inside Kubernetes see the mysql-router service as their database.
MySQL Router sends the application traffic to the external InnoDB cluster.
I tried to deploy using:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-router
namespace: mysql-router
spec:
replicas: 1
selector:
matchLabels:
app: mysql-router
template:
metadata:
labels:
app: mysql-router
version: v1
spec:
containers:
- name: mysql-router
image: mysql/mysql-router
env:
- name: MYSQL_HOST
value: "192.168.123.130"
- name: MYSQL_PORT
value: "3306"
- name: MYSQL_USER
value: "root"
- name: MYSQL_PASSWORD
value: "root#123"
imagePullPolicy: Always
ports:
- containerPort: 6446
192.168.123.130 is the MySQL cluster master IP.
apiVersion: v1
kind: Service
metadata:
name: mysql-router-service
namespace: mysql-router
labels:
app: mysql-router
spec:
selector:
app: mysql-router
ports:
- protocol: TCP
port: 6446
type: LoadBalancer
loadBalancerIP: 192.168.123.123
When I check mysql-router container logs, I see something like this:
Waiting for mysql server 192.168.123.130 (0/12)
Waiting for mysql server 192.168.123.130 (1/12)
Waiting for mysql server 192.168.123.130 (2/12)
....
After setting my external MySQL cluster info in the deployment, I get the following errors:
Successfully contacted mysql server at 192.168.123.130. Checking for cluster state.
Can not connect to database. Exiting.
I cannot deploy mysql-router without specifying MYSQL_HOST. What am I missing here?
My ideal deployment: (architecture diagram from the original post not shown)
Of course you have to provide the MySQL host. You can do this with Kubernetes DNS, which is set up by the Services within the cluster.
MySQL Router is middleware that provides transparent routing between your application and any backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers.
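For instance, instead of a fixed IP, the router's MYSQL_HOST can point at a Service DNS name (the service and namespace names here match the examples below; the .svc.cluster.local suffix assumes the default cluster domain):
env:
  - name: MYSQL_HOST
    # <service>.<namespace> works from any namespace; the short service name alone
    # works from within the same namespace
    value: "mariadb-galera.galera-cluster.svc.cluster.local"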
Examples
For the examples below I use dynamic volume provisioning for data with openebs-hostpath, and a StatefulSet for the MySQL server.
Deployment
MySQL Router :
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-router
namespace: mysql-router
spec:
replicas: 1
selector:
matchLabels:
app: mysql-router
template:
metadata:
labels:
app: mysql-router
version: v1
spec:
containers:
- name: mysql-router
image: mysql/mysql-router
env:
- name: MYSQL_HOST
value: "mariadb-galera.galera-cluster"
- name: MYSQL_PORT
value: "3306"
- name: MYSQL_USER
value: "root"
- name: MYSQL_PASSWORD
value: "root#123"
imagePullPolicy: Always
ports:
- containerPort: 3306
MySQL Server
apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: galera-cluster
name: mariadb-galera
spec:
podManagementPolicy: OrderedReady
replicas: 1
selector:
matchLabels:
app: mariadb-galera
serviceName: mariadb-galera
template:
metadata:
labels:
app: mariadb-galera
spec:
restartPolicy: Always
securityContext:
fsGroup: 1001
runAsUser: 1001
containers:
- command:
- bash
- -ec
- |
# Bootstrap from the indicated node
NODE_ID="${MY_POD_NAME#"mariadb-galera-"}"
if [[ "$NODE_ID" -eq "0" ]]; then
export MARIADB_GALERA_CLUSTER_BOOTSTRAP=yes
export MARIADB_GALERA_FORCE_SAFETOBOOTSTRAP=no
fi
exec /opt/bitnami/scripts/mariadb-galera/entrypoint.sh /opt/bitnami/scripts/mariadb-galera/run.sh
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: BITNAMI_DEBUG
value: "false"
- name: MARIADB_GALERA_CLUSTER_NAME
value: galera
- name: MARIADB_GALERA_CLUSTER_ADDRESS
value: gcomm://mariadb-galera.galera-cluster
- name: MARIADB_ROOT_PASSWORD
value: root#123
- name: MARIADB_DATABASE
value: my_database
- name: MARIADB_GALERA_MARIABACKUP_USER
value: mariabackup
- name: MARIADB_GALERA_MARIABACKUP_PASSWORD
value: root#123
- name: MARIADB_ENABLE_LDAP
value: "no"
- name: MARIADB_ENABLE_TLS
value: "no"
image: docker.io/bitnami/mariadb-galera:10.4.13-debian-10-r23
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- bash
- -ec
- |
exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
failureThreshold: 3
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: mariadb-galera
ports:
- containerPort: 3306
name: mysql
protocol: TCP
- containerPort: 4567
name: galera
protocol: TCP
- containerPort: 4568
name: ist
protocol: TCP
- containerPort: 4444
name: sst
protocol: TCP
readinessProbe:
exec:
command:
- bash
- -ec
- |
exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
volumeMounts:
- mountPath: /opt/bitnami/mariadb/.bootstrap
name: previous-boot
- mountPath: /bitnami/mariadb
name: data
- mountPath: /opt/bitnami/mariadb/conf
name: mariadb-galera-config
volumes:
- emptyDir: {}
name: previous-boot
- configMap:
defaultMode: 420
name: my.cnf
name: mariadb-galera-config
volumeClaimTemplates:
- apiVersion: v1
metadata:
name: data
spec:
storageClassName: openebs-hostpath
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
Services
MySQL Router Service
apiVersion: v1
kind: Service
metadata:
name: mysql-router-service
namespace: mysql-router
labels:
app: mysql-router
spec:
selector:
app: mysql-router
ports:
- protocol: TCP
port: 3306
type: LoadBalancer
loadBalancerIP: 192.168.123.123
MySQL Service
apiVersion: v1
kind: Service
metadata:
namespace: galera-cluster
name: mariadb-galera
labels:
app: mariadb-galera
spec:
type: ClusterIP
ports:
- name: mysql
port: 3306
selector:
app: mariadb-galera
---
apiVersion: v1
kind: Service
metadata:
namespace: galera-cluster
name: mariadb-galera-headless
labels:
app: mariadb-galera
spec:
type: ClusterIP
ports:
- name: galera
port: 4567
- name: ist
port: 4568
- name: sst
port: 4444
selector:
app: mariadb-galera
What you need is (1) communication from your applications (App1...AppX) to MySQL Router, and (2) a VIP/LB from MySQL Router to the external MySQL instances.
Let's start with (2), the configuration of a VIP for the MySQL instances. You will need a Service without a selector.
apiVersion: v1
kind: Service
metadata:
name: mysql-service
spec:
ports:
- name: mysql
port: 3306
protocol: TCP
targetPort: 3306
sessionAffinity: None
type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
name: mysql-service
subsets:
- addresses:
- ip: 192.168.123.130
- ip: 192.168.123.131
- ip: 192.168.123.132
ports:
- name: mysql
port: 3306
protocol: TCP
You don't need a LoadBalancer because you will connect only from inside the cluster, so use ClusterIP instead.
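Because the Service has no selector, Kubernetes uses the Endpoints object you created instead of discovering pods, so traffic sent to mysql-service is forwarded to the external instances; you can verify this with:
kubectl get endpoints mysql-service     # should list 192.168.123.130/131/132 on port 3306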
Now for (1): create the MySQL Router Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-router
namespace: mysql-router
spec:
replicas: 1
selector:
matchLabels:
app: mysql-router
template:
metadata:
labels:
app: mysql-router
version: v1
spec:
containers:
- name: mysql-router
image: mysql/mysql-router
env:
- name: MYSQL_HOST
value: "mysql-service"
- name: MYSQL_PORT
value: "3306"
- name: MYSQL_USER
value: "root"
- name: MYSQL_PASSWORD
value: "root#123"
imagePullPolicy: Always
ports:
- containerPort: 6446
To connect to the external MySQL instances through the VIP/ClusterIP, use the mysql-service Service: if the Deployment and the Service are in the same namespace, use mysql-service as the hostname, or put there the ClusterIP shown by kubectl get service mysql-service.
apiVersion: v1
kind: Service
metadata:
name: mysql-router-service
namespace: mysql-router
labels:
app: mysql-router
spec:
selector:
app: mysql-router
ports:
- name: mysql
port: 6446
protocol: TCP
targetPort: 6446
type: ClusterIP
Within the Kubernetes cluster you can connect to the mysql-router-service hostname from the same namespace, or to mysql-router-service.<namespace>.svc from another namespace; from outside the Kubernetes cluster, use a NodePort or LoadBalancer instead.
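For example, from a pod that has the mysql client installed (service name, namespace and port 6446 as defined in the manifests above):
mysql -h mysql-router-service -P 6446 -u root -p                      # same namespace
mysql -h mysql-router-service.mysql-router.svc -P 6446 -u root -p     # other namespaces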

Mysql container not starting up on Kubernetes

I was using this image to run my application with docker-compose. However, when I run the same on a Kubernetes cluster I get the error:
Here's my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.21.0 ()
creationTimestamp: null
labels:
io.kompose.service: common-db
name: common-db
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: common-db
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.21.0 ()
creationTimestamp: null
labels:
io.kompose.service: common-db
spec:
containers:
- env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
- name: MYSQL_DATABASE
value: "common-development"
- name: MYSQL_REPLICATION_MODE
value: "master"
- name: MYSQL_REPLICATION_PASSWORD
value: "repl_password"
- name: MYSQL_REPLICATION_USER
value: "repl_user"
image: bitnami/mysql:5.7
imagePullPolicy: ""
name: common-db
ports:
- containerPort: 3306
securityContext:
runAsUser: 0
resources:
requests:
memory: 512Mi
cpu: 500m
limits:
memory: 512Mi
cpu: 500m
volumeMounts:
- name: common-db-initdb
mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
volumes:
- name: common-db-initdb
configMap:
name: common-db-config
serviceAccountName: ""
status: {}
The ConfigMap holds the my.cnf configuration data. Any pointers on where I could be going wrong, especially given that the same image works in docker-compose?
Try changing the file permissions using an init container; in the official Bitnami Helm chart they also update file permissions and manage the security context.
Helm chart: https://github.com/bitnami/charts/blob/master/bitnami/mysql/templates/master-statefulset.yaml
UPDATE:
initContainers:
- command:
- /bin/bash
- -ec
- |
chown -R 1001:1001 /bitnami/mysql
image: docker.io/bitnami/minideb:buster
imagePullPolicy: Always
name: volume-permissions
resources: {}
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/mysql
name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
runAsUser: 1001
serviceAccount: mysql
You may need to use subPath. See the Kubernetes documentation for details about subPath.
volumeMounts:
- name: common-db-initdb
mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
subPath: my_custom.cnf
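For completeness, a sketch of what the referenced ConfigMap could look like; the my_custom.cnf contents here are only an illustration, not taken from the original question. The subPath mount above places just that one key as a file instead of shadowing the whole conf directory:
apiVersion: v1
kind: ConfigMap
metadata:
  name: common-db-config
data:
  my_custom.cnf: |
    [mysqld]
    max_connections=250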
Also, you can easily install Bitnami MySQL using the Helm chart.
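For instance (the release name my-mysql is just an example):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mysql bitnami/mysql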

How to use PersistentVolume for MySQL data in Kubernetes

I am developing a database environment on Minikube.
I'd like to persist MySQL data using the PersistentVolume feature of Kubernetes.
However, when the volume is mounted at /var/lib/mysql (the MySQL data directory), an error occurs at startup and the MySQL server will not start.
kubernetes-config.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs001-pv
labels:
app: nfs001-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
nfs:
path: /share/mydata
server: 192.168.99.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: ""
selector:
matchLabels:
app: nfs001-pv
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: sk-app
labels:
app: sk-app
spec:
replicas: 1
selector:
matchLabels:
app: sk-app
template:
metadata:
labels:
app: sk-app
spec:
containers:
- name: sk-app
image: mysql:5.7
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
volumeMounts:
- mountPath: /var/lib/mysql
name: mydata
volumes:
- name: mydata
persistentVolumeClaim:
claimName: nfs-claim
---
apiVersion: v1
kind: Service
metadata:
name: sk-app
labels:
app: sk-app
spec:
type: NodePort
ports:
- port: 3306
nodePort: 30001
selector:
app: sk-app
How can I launch it?
-- Postscript --
When I tried "kubectl logs", I got following error message.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
When I tried "kubectl describe xxx", I got following results.
kubectl describe pv:
Name: nfs001-pv
Labels: app=nfs001-pv
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/nfs-claim
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Gi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.99.1
Path: /share/mydata
ReadOnly: false
Events: <none>
kubectl describe pvc:
Name: nfs-claim
Namespace: default
StorageClass:
Status: Bound
Volume: nfs001-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 1Gi
Access Modes: RWX
Events: <none>
kubectl describe deployment:
Name: sk-app
Namespace: default
CreationTimestamp: Tue, 25 Sep 2018 14:22:34 +0900
Labels: app=sk-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sk-app
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sk-app
Containers:
sk-app:
Image: mysql:5.7
Port: 3306/TCP
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mydata (rw)
Volumes:
mydata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: sk-app-d58dddfb (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 23s deployment-controller Scaled up replica set sk-app-d58dddfb to 1
The volumes look good, so it looks like you just have a permission issue on the root of your NFS volume, which gets mounted as /var/lib/mysql in your container.
You can:
1) Mount that NFS volume using NFS mount commands and run the following (see the mount sketch after the initContainer example below):
chmod 777 .   # this gives rwx to anybody, so be mindful
2) Run an initContainer in your deployment, similar to this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: sk-app
labels:
app: sk-app
spec:
replicas: 1
selector:
matchLabels:
app: sk-app
template:
metadata:
labels:
app: sk-app
spec:
initContainers:
- name: init-mysql
image: busybox
command: ['sh', '-c', 'chmod 777 /var/lib/mysql']
volumeMounts:
- mountPath: /var/lib/mysql
name: mydata
containers:
- name: sk-app
image: mysql:5.7
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
volumeMounts:
- mountPath: /var/lib/mysql
name: mydata
volumes:
- name: mydata
persistentVolumeClaim:
claimName: nfs-claim
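For option 1, a rough sketch, assuming you run it from a machine that can reach the NFS server (server and export path taken from the PersistentVolume above):
mount -t nfs 192.168.99.1:/share/mydata /mnt
chmod 777 /mnt        # gives rwx to anybody; chown to the mysql image's uid (typically 999) is a tighter alternative
umount /mnt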