MySQL container not starting up on Kubernetes

I was using this image (bitnami/mysql:5.7) to run my application with docker-compose. However, when I run the same setup on a Kubernetes cluster I get the error:
[ERROR] Could not open file '/opt/bitnami/mysql/logs/mysqld.log' for error logging: Permission denied
Here's my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: common-db
  name: common-db
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: common-db
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: common-db
    spec:
      containers:
      - env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        - name: MYSQL_DATABASE
          value: "common-development"
        - name: MYSQL_REPLICATION_MODE
          value: "master"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "repl_password"
        - name: MYSQL_REPLICATION_USER
          value: "repl_user"
        image: bitnami/mysql:5.7
        imagePullPolicy: ""
        name: common-db
        ports:
        - containerPort: 3306
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: 512Mi
            cpu: 500m
          limits:
            memory: 512Mi
            cpu: 500m
        volumeMounts:
        - name: common-db-initdb
          mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
      volumes:
      - name: common-db-initdb
        configMap:
          name: common-db-config
      serviceAccountName: ""
status: {}
The ConfigMap holds the my.cnf data. Any pointers on where I could be going wrong, especially since the same image works in docker-compose?

Try changing the file permissions using an init container. In the official Bitnami Helm chart they also update file permissions and manage the security context this way.
Helm chart: https://github.com/bitnami/charts/blob/master/bitnami/mysql/templates/master-statefulset.yaml
UPDATE:
initContainers:
- command:
  - /bin/bash
  - -ec
  - |
      chown -R 1001:1001 /bitnami/mysql
  image: docker.io/bitnami/minideb:buster
  imagePullPolicy: Always
  name: volume-permissions
  resources: {}
  securityContext:
    runAsUser: 0
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /bitnami/mysql
    name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
  fsGroup: 1001
  runAsUser: 1001
serviceAccount: mysql

You may need to use subPath. See the Kubernetes documentation on subPath for details.
volumeMounts:
- name: common-db-initdb
  mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
  subPath: my_custom.cnf
Also, you can easily install Bitnami MySQL using its Helm chart.
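For example, with Helm 3 (a sketch; the release name common-db is just an example):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install common-db bitnami/mysql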

Related

How to make a MySQL Pod save data in a Persistent Volume

I started to use Kubernetes to understand concepts like pods, objects and so on. I started to learn about Persistent Volumes and Persistent Volume Claims. From my understanding, if I save data from a MySQL pod to a persistent volume, the data is kept even if I delete the MySQL pod, since it lives on the volume, but I don't think it works in my case...
I have a Spring Boot pod that saves data to a MySQL pod. The data is saved and I can retrieve it, but when I restart, delete, or replace my pods, that saved data is lost, so I think I messed something up. Can you give me a hint, please? Thanks...
Below are my Kubernetes files:
MySQL pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels: # must match the Service and Deployment labels
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        args:
        - "--ignore-db-dir=lost+found"
        name: mysql # name of the db container
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret # name of the Secret object
              key: password # which value from inside the secret to take
        - name: MYSQL_ROOT_USER
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: username
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: name
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts: # mount the volume obtained from the PVC
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql # mount point inside the container
      volumes:
      - name: mysql-persistent-storage # volume obtained from the PVC
        persistentVolumeClaim:
          claimName: mysql-pv-claim # the same claim can be used in different pods
---
apiVersion: v1
kind: Service
metadata:
  name: mysql # DNS name
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector: # the mysql pod must carry the same label
    app: mysql
  clusterIP: None # headless service, we use DNS
Persistent Volume and Persistent Volume Claim files:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim # name of our PVC
  labels:
    app: mysql
spec:
  volumeName: host-pv # claim the volume created with this name
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1 # version of our PV
kind: PersistentVolume # kind of object we are going to create
metadata:
  name: host-pv # name of our PV
spec: # spec of our PV
  capacity: # size
    storage: 4Gi
  volumeMode: Filesystem # storage type: Filesystem or Block
  storageClassName: standard
  accessModes:
  - ReadWriteOnce # can be mounted by multiple pods, but only from a single node
  # - ReadOnlyMany  # read-only on multiple nodes
  # - ReadWriteMany # read-write on multiple nodes; not supported by the hostPath type
  hostPath: # type of PV
    path: "/mnt/data"
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Retain
My Spring Boot K8s file:
apiVersion: v1
kind: Service
metadata:
  name: book-service
spec:
  selector:
    app: book-example
  ports:
  - protocol: 'TCP'
    port: 8080
    targetPort: 8080
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: book-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: book-example
  template:
    metadata:
      labels:
        app: book-example
    spec:
      containers:
      - name: book-container
        image: cinevacineva/kubernetes_book_pv:latest
        imagePullPolicy: Always
        # ports:
        # - containerPort: 8080
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: host
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: name
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-user
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-user
              key: password
# & minikube -p minikube docker-env | Invoke-Expression  # points Docker at Minikube's daemon, so the images we build locally no longer need to be pushed
...if I save data from a MySQL pod to a persistent volume, the data is kept even if I delete the MySQL pod, since it lives on the volume, but I don't think it works in my case...
Your previous data will not be available when the pod is scheduled onto a different node. To use hostPath you don't really need a PVC/PV. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      nodeSelector: # <-- make sure your pod runs on the same node
        <node label>: <value unique to the mysql node>
      volumes: # <-- mount the data path on the node, no pvc/pv required
      - name: mysql-persistent-storage
        hostPath:
          path: /mnt/data
          type: DirectoryOrCreate
      containers:
      - name: mysql
        ...
        volumeMounts: # <-- let mysql write to it
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
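For example, you could pin the pod with a node label of your own (the node name and label here are hypothetical placeholders):
kubectl label nodes worker-1 mysql-data=enabled
and then use mysql-data: enabled as the nodeSelector entry above.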

Unable to deploy Keycloak (9.0.0) deployment on Minishift (1.34.0): keycloak-add-user.json (Permission denied)

I am unable to launch Keycloak (9.0.0) on Minishift (v1.34.0+f5db7cb) and am getting a CrashLoopBackOff error. This Deployment will be integrated with a Postgres deployment.
Keycloak Pod logs:
/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json (Permission denied)
Here is the yaml file which I deployed through the console (oc apply -f):
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: keycloak
  name: keycloak
spec:
  selector:
    matchLabels:
      io.kompose.service: keycloak
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: keycloak
    spec:
      containers:
      - env:
        - name: DB_ADDR
          value: postgres
        - name: DB_DATABASE
          value: keycloak
        - name: DB_PASSWORD
          value: password
        - name: DB_SCHEMA
          value: public
        - name: DB_USER
          value: keycloak
        - name: DB_VENDOR
          value: POSTGRES
        - name: KEYCLOAK_LOGLEVEL
          value: DEBUG
        - name: KEYCLOAK_PASSWORD
          value: Pa55w0rd
        - name: KEYCLOAK_USER
          value: admin
        image: localhost:5000/keycloak
        name: keycloak
        ports:
        - containerPort: 8080
        - containerPort: 8443
        resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: keycloak
  name: keycloak
spec:
  ports:
  - name: "8880"
    port: 8880
    targetPort: 8080
  - name: "8888"
    port: 8888
    targetPort: 8443
  type: LoadBalancer
  selector:
    io.kompose.service: keycloak
Is there any way to resolve this? Thanks in advance!
keycloak-add-user.json is generated by the KEYCLOAK_HOME/bin/add-user-keycloak.sh utility. On startup the Keycloak server checks for the presence of this file and, if it is found, the specified user is added.
In turn, the Keycloak pod checks during startup whether the user-creation variables KEYCLOAK_USER and KEYCLOAK_PASSWORD are set, and if they are, the add-user-keycloak.sh utility is called with those values to create the user.
So in your case you should make the /opt/jboss/keycloak/standalone/configuration directory writable.
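One way to do that without rebuilding the image is to copy the configuration into a writable emptyDir from an init container and mount it over the original path. A minimal sketch (volume and init-container names are illustrative; it assumes the image's configuration files are readable by the arbitrary UID Minishift assigns):
  template:
    ...
    spec:
      volumes:
      - name: keycloak-config
        emptyDir: {}
      initContainers:
      - name: copy-config
        image: localhost:5000/keycloak
        command: ['sh', '-c', 'cp -a /opt/jboss/keycloak/standalone/configuration/. /config/']
        volumeMounts:
        - name: keycloak-config
          mountPath: /config
      containers:
      - name: keycloak
        # ...existing image, env and ports stay as they are...
        volumeMounts:
        - name: keycloak-config
          mountPath: /opt/jboss/keycloak/standalone/configuration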

How to connect a Cloud SQL instance to a SQL cluster?

I have the deployment YAML file on the cluster, plus the connection name of the Cloud SQL instance and its public IP address. What should I add, and where, so that I can connect the instance and the cluster? I want anything added to the SQL cluster to be automatically saved to the instance and vice versa.
This is the deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp:
  generation: 1
  labels:
    app: mysql
  name: mysql
  namespace: default
  resourceVersion: "1420"
  selfLink:
  uid:
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: mysql
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mysql
    spec:
      containers:
      - env:
        - name:
          valueFrom:
            secretKeyRef:
              key:
              name: mysql
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        name: mysql
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/mysql
          name:
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name:
        persistentVolumeClaim:
          claimName:
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime:
    lastUpdateTime:
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime:
    lastUpdateTime:
    message: ReplicaSet "mysql" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
You should use the Cloud SQL proxy and add it as a sidecar to the application that makes queries against the Cloud SQL instance. Google has a suggested best practice for this in the Cloud SQL for MySQL documentation on connecting from GKE.
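A minimal sketch of such a sidecar (the image tag and instance connection name are examples/placeholders; the proxy also needs credentials, for example Workload Identity or a mounted service-account key, which are omitted here):
      containers:
      - name: my-app                  # the container that queries the database (example name)
        image: my-app:latest          # placeholder image
        env:
        - name: DB_HOST
          value: "127.0.0.1"          # the proxy listens on localhost inside the pod
        - name: DB_PORT
          value: "3306"
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
        - /cloud_sql_proxy
        - -instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:3306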

How to use PersistentVolume for MySQL data in Kubernetes

I am developing a database environment on Minikube.
I'd like to persist MySQL data with the PersistentVolume feature of Kubernetes.
However, an error occurs when starting the MySQL server, and it will not come up, if the volume is mounted at /var/lib/mysql (the MySQL data directory).
kubernetes-config.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs001-pv
  labels:
    app: nfs001-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
  - hard
  nfs:
    path: /share/mydata
    server: 192.168.99.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  selector:
    matchLabels:
      app: nfs001-pv
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: nfs-claim
---
apiVersion: v1
kind: Service
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  type: NodePort
  ports:
  - port: 3306
    nodePort: 30001
  selector:
    app: sk-app
How can I launch it?
-- Postscript --
When I tried "kubectl logs", I got following error message.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
When I tried "kubectl describe xxx", I got following results.
kubectl describe pv:
Name: nfs001-pv
Labels: app=nfs001-pv
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/nfs-claim
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Gi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.99.1
Path: /share/mydata
ReadOnly: false
Events: <none>
kubectl describe pvc:
Name: nfs-claim
Namespace: default
StorageClass:
Status: Bound
Volume: nfs001-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 1Gi
Access Modes: RWX
Events: <none>
kubectl describe deployment:
Name: sk-app
Namespace: default
CreationTimestamp: Tue, 25 Sep 2018 14:22:34 +0900
Labels: app=sk-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sk-app
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sk-app
Containers:
sk-app:
Image: mysql:5.7
Port: 3306/TCP
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mydata (rw)
Volumes:
mydata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: sk-app-d58dddfb (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 23s deployment-controller Scaled up replica set sk-app-d58dddfb to 1
The volumes look good, so it seems you just have a permission issue on the root of your NFS volume, which gets mounted as /var/lib/mysql in your container.
You can:
1) Mount that NFS volume using the nfs mount commands and run:
chmod 777 . # this gives rwx to anybody, so be mindful
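For example, from any machine that can reach the NFS server (a sketch; the local mount point is just a placeholder):
sudo mount -t nfs 192.168.99.1:/share/mydata /mnt/nfs-tmp
sudo chmod 777 /mnt/nfs-tmp   # or chown it to the mysql UID used by the image (typically 999)
sudo umount /mnt/nfs-tmp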
2) Run an initContainer in your deployment, similar to this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      initContainers:
      - name: init-mysql
        image: busybox
        command: ['sh', '-c', 'chmod 777 /var/lib/mysql']
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: nfs-claim

Not able to consume ConfigMap in one of my ReplicationControllers in Kubernetes

I have created a ConfigMap named configmap-setenv from a setenv.sh file. I just want to consume this ConfigMap in one of my ReplicationControllers.
apiVersion: v1
kind: ReplicationController
metadata:
  name: sample-registrationweb-rc
spec:
  replicas: 1
  selector:
    app: "JWS"
    role: "FO-registrationweb"
    tier: "app"
  template:
    metadata:
      labels:
        app: "JWS"
        role: "FO-registrationweb"
        tier: "app"
    spec:
      containers:
      - name: jws
        image: samplejws/demo:v1
        imagePullPolicy: Always
        ports:
        - name: jws
          containerPort: 8080
        resources:
          requests:
            cpu: 1000m
            memory: 100Mi
          limits:
            cpu: 2000m
            memory: 7629Mi
        volumeMounts:
        - mountPath: /opt/soft/jws-3.0/tomcat8/bin
          name: tomcatbin
      volumes:
      - name: data
        emptyDir: {}
      - configMap:
          name: tomcatbin
        name: configmap-setenv
        items:
        - key: setenv.sh
          path: setenv.sh
I am getting the below error during creation of the ReplicationController.
error validating "registartion-rc.yaml": error validating data: found invalid field configMap for v1.Volume; if you choose to ignore these errors, turn validation off with --validate=false
You have a syntax error. A newer version of kubectl would give you a more specific error:
yaml: line 40: mapping values are not allowed in this context
The ConfigMap volume should look like:
volumes:
- name: tomcatbin
  configMap:
    name: configmap-setenv
    items:
    - key: setenv.sh
      path: setenv.sh
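Note also that mounting this volume at /opt/soft/jws-3.0/tomcat8/bin will hide everything else in that directory. If you only want to overlay setenv.sh, a subPath mount (as in the subPath answer further up) is one option, for example:
volumeMounts:
- name: tomcatbin
  mountPath: /opt/soft/jws-3.0/tomcat8/bin/setenv.sh
  subPath: setenv.sh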