I followed this tutorial: https://medium.com/better-programming/kubernetes-a-detailed-example-of-deployment-of-a-stateful-application-de3de33c8632
I created a MySQL pod and a backend pod, but the application fails with com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
pod mysql: running
pod backend: CrashLoopBackOff
Dockerfile
FROM openjdk:14-ea-8-jdk-alpine3.10
ADD target/credit-0.0.1-SNAPSHOT.jar .
EXPOSE 8200
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-Dspring.profiles.active=container","-jar","/credit-0.0.1-SNAPSHOT.jar"]
credit-deployment.yml
# Define 'Service' to expose backend application deployment
apiVersion: v1
kind: Service
metadata:
  name: to-do-app-backend
spec:
  selector:              # backend application pod labels should match these
    app: to-do-app
    tier: backend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
  type: LoadBalancer     # use NodePort if you are not running Kubernetes on a cloud
---
# Configure 'Deployment' of backend application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: to-do-app-backend
  labels:
    app: to-do-app
    tier: backend
spec:
  replicas: 2            # Number of replicas of the back-end application to be deployed
  selector:
    matchLabels:         # backend application pod labels should match these
      app: to-do-app
      tier: backend
  template:
    metadata:
      labels:            # Must match 'Service' and 'Deployment' labels
        app: to-do-app
        tier: backend
    spec:
      containers:
        - name: to-do-app-backend
          image: gitim21/credit_repo:1.0   # docker image of backend application
          env:                             # Setting environment variables
            - name: DB_HOST                # Setting database host address from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf            # name of configMap
                  key: host
            - name: DB_NAME                # Setting database name from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf
                  key: name
            - name: DB_USERNAME            # Setting database username from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials     # Secret name
                  key: username
            - name: DB_PASSWORD            # Setting database password from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          ports:
            - containerPort: 8080
application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      idle-timeout: 10000
    platform: mysql
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    url: jdbc:mysql://${DB_HOST}/${DB_NAME}
  jpa:
    hibernate:
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
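For reference, when the JDBC URL omits the port the MySQL driver falls back to the default 3306. The running pod also carries a DB_PORT variable (visible in the describe output below), so a fully explicit URL would look like this:

spring:
  datasource:
    url: jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}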
I placed the application.yml file in the application's resources folder.
EDIT
Name: mysql-64c7df597c-s4gbt
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.8.160
Start Time: Thu, 12 Sep 2019 17:50:18 +0200
Labels: app=mysql
pod-template-hash=64c7df597c
tier=database
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/mysql-64c7df597c
Containers:
mysql:
Container ID: docker://514d3f5af76f5e7ac11f6bf6e36b44ee4012819dc1cef581829a6b5b2ce7c09e
Image: mysql:5.7
Image ID: docker-pullable://mysql@sha256:1a121f2e7590f949b9ede7809395f209dd9910e331e8372e6682ba4bebcc020b
Port: 3306/TCP
Host Port: 0/TCP
Args:
--ignore-db-dir=lost+found
State: Running
Started: Thu, 12 Sep 2019 17:50:19 +0200
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'db-root-credentials'> Optional: false
MYSQL_USER: <set to the key 'username' in secret 'db-credentials'> Optional: false
MYSQL_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
MYSQL_DATABASE: <set to the key 'name' of config map 'db-conf'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-rgsmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rgsmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49m default-scheduler Successfully assigned default/mysql-64c7df597c-s4gbt to minikube
Normal Pulled 49m kubelet, minikube Container image "mysql:5.7" already present on machine
Normal Created 49m kubelet, minikube Created container mysql
Normal Started 49m kubelet, minikube Started container mysql
Name: to-do-app-backend-8669b5467-hrr9q
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.8.160
Start Time: Thu, 12 Sep 2019 18:27:45 +0200
Labels: app=to-do-app
pod-template-hash=8669b5467
tier=backend
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/to-do-app-backend-8669b5467
Containers:
to-do-app-backend:
Container ID: docker://1eb8453939710aed7a93cddbd5046f49be3382858aa17d5943195207eaeb3065
Image: gitim21/credit_repo:1.0
Image ID: docker-pullable://gitim21/credit_repo@sha256:1fb2991394fc59f37068164c72263749d64cb5c9fe741021f476a65589f40876
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 12 Sep 2019 18:51:25 +0200
Finished: Thu, 12 Sep 2019 18:51:36 +0200
Ready: False
Restart Count: 9
Environment:
DB_HOST: <set to the key 'host' of config map 'db-conf'> Optional: false
DB_NAME: <set to the key 'name' of config map 'db-conf'> Optional: false
DB_USERNAME: <set to the key 'username' in secret 'db-credentials'> Optional: false
DB_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
DB_PORT: <set to the key 'port' in secret 'db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-rgsmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rgsmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/to-do-app-backend-8669b5467-hrr9q to minikube
Normal Pulled 23m (x5 over 25m) kubelet, minikube Container image "gitim21/credit_repo:1.0" already present on machine
Normal Created 23m (x5 over 25m) kubelet, minikube Created container to-do-app-backend
Normal Started 23m (x5 over 25m) kubelet, minikube Started container to-do-app-backend
Warning BackOff 50s (x104 over 25m) kubelet, minikube Back-off restarting failed container
First and foremost, make sure that you fulfill all the requirements described in the article.
When the Deployment objects (pods, services, etc.) are created, environment variables are injected from the ConfigMaps and Secrets that were created earlier. This deployment uses the image gitim21/credit_repo:1.0 (the article's equivalent is kubernetesdemo/to-do-app-backend, created in step one). Make sure you created the ConfigMap and Secret beforehand; otherwise delete the objects created by the deployment, create the ConfigMap and Secret, and then apply the deployment config file again.
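For reference, a minimal sketch of the ConfigMap and Secret this deployment expects could look like the one below. The object names and keys are taken from the env section of the deployment; the values are examples only, and host should be the name of your MySQL Service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-conf
data:
  host: mysql          # example: the name of the MySQL Service
  name: todo_db        # example database name
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: myuser       # example value
  password: mypassword   # example value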
Another possibility: if you get the error
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
it means that the DB isn't reachable at all. This can have one or more of the following causes:
1. IP address or hostname in the JDBC URL is wrong.
2. Hostname in the JDBC URL is not recognized by the local DNS server.
3. Port number is missing or wrong in the JDBC URL.
4. ~~DB server is down.~~
5. DB server doesn't accept TCP/IP connections.
6. DB server has run out of connections.
7. Something in between Java and the DB is blocking connections, e.g. a firewall or proxy.
I assume that if your mysql pod is running, the DB server itself is up, which is why point 4 (DB server is down) is struck through.
To address the others, work through the following advice (each item matches the cause with the same number; a quick in-cluster connectivity check is sketched after this list):
1. Verify the IP address or hostname and test it, e.g. with ping.
2. Refresh DNS, or use an IP address in the JDBC URL instead.
3. Check the port against the my.cnf of the MySQL DB.
4. Start the DB again.
5. Check whether mysqld was started without the --skip-networking option.
6. Restart the DB, and fix your code so that it closes connections in finally.
7. Disable the firewall and/or configure the firewall/proxy to allow/forward the port.
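To quickly narrow this down from inside the cluster, a throwaway pod can be used to test DNS resolution and an actual MySQL connection against the database Service (a sketch; replace mysql with whatever your db-conf host key points to):

# check that the service name resolves from inside the cluster
kubectl run nettest --rm -it --restart=Never --image=busybox -- nslookup mysql
# try an actual MySQL connection against the service
kubectl run dbtest --rm -it --restart=Never --image=mysql:5.7 -- mysql -h mysql -u root -p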
A similar error is discussed here: communication-error.
Related
I'm facing an issue while deploying a Spring API which should connect to a MySQL database.
I am deploying a standalone MySQL instance using the [bitnami helm chart][1] with the following values (the install command is sketched after the values):
primary:
service:
type: ClusterIP
persistence:
enabled: true
size: 3Gi
storageClass: ""
extraVolumes:
- name: mysql-passwords
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: mysql-spc
extraVolumeMounts:
- name: mysql-passwords
mountPath: "/vault/secrets"
readOnly: true
configuration: |-
[mysqld]
default_authentication_plugin=mysql_native_password
skip-name-resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mysql
plugin_dir=/opt/bitnami/mysql/lib/plugin
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
datadir=/bitnami/mysql/data
tmpdir=/opt/bitnami/mysql/tmp
max_allowed_packet=16M
bind-address=0.0.0.0
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
log-error=/opt/bitnami/mysql/logs/mysqld.log
character-set-server=UTF8
collation-server=utf8_general_ci
slow_query_log=0
slow_query_log_file=/opt/bitnami/mysql/logs/mysqld.log
long_query_time=10.0
[client]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
default-character-set=UTF8
plugin_dir=/opt/bitnami/mysql/lib/plugin
[manager]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
auth:
createDatabase: true
database: api-db
username: api
usePasswordFiles: true
customPasswordFiles:
root: /vault/secrets/db-root-pwd
user: /vault/secrets/db-pwd
replicator: /vault/secrets/db-replica-pwd
serviceAccount:
create: false
name: social-app
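For context, the chart is installed roughly like this (the release name social-mysql is an assumption on my part, derived from the JDBC host social-mysql.default.svc.cluster.local used by the API deployment below; the values above are assumed to be saved as values.yaml):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install social-mysql bitnami/mysql -f values.yaml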
I use the following deployment which runs a spring API (with Vault secret injection):
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: social-api
name: social-api
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
selector:
matchLabels:
app: social-api
template:
metadata:
labels:
app: social-api
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: 'social'
spec:
serviceAccountName: social-app
containers:
- image: quay.io/paulbarrie7/social-network-api
name: social-network-api
command:
- java
args:
- -jar
- "-DSPRING_DATASOURCE_URL=jdbc:mysql://social-mysql.default.svc.cluster.local/api-db?useSSL=false"
- "-DSPRING_DATASOURCE_USERNAME=api"
- "-DSPRING_DATASOURCE_PASSWORD=$(cat /secrets/db-pwd)"
- "-DJWT_SECRET=$(cat /secrets/jwt-secret)"
- "-DS3_BUCKET=$(cat /secrets/s3-bucket)"
- -Dlogging.level.root=DEBUG
- -Dspring.datasource.hikari.maximum-pool-size=5
- -Dlogging.level.com.zaxxer.hikari.HikariConfig=DEBUG
- -Dlogging.level.com.zaxxer.hikari=TRACE
- social-network-api-1.0-SNAPSHOT.jar
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 8080
volumeMounts:
- name: aws-credentials
mountPath: "/root/.aws"
readOnly: true
- name: java-secrets
mountPath: "/secrets"
readOnly: true
volumes:
- name: aws-credentials
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: aws-secret-spc
- name: java-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: java-spc
The credentials are OK: when I run an interactive mysql pod (see the sketch below) I can connect to the database. However, name resolution for the Spring API seems wrong, since I get the error:
java.sql.SQLException: Access denied for user 'api'@'10.24.0.194' (using password: YES)
which puzzles me, since 10.24.0.194 is the API pod's address and not the mysql pod or service address, and I can't work out why.
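The interactive check mentioned above can be reproduced with something along these lines (the image tag is an assumption; the host is the MySQL Service created by the chart):

kubectl run mysql-client --rm -it --restart=Never --image=mysql:8.0 -- \
  mysql -h social-mysql.default.svc.cluster.local -u api -p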
Any idea?
[1]: https://artifacthub.io/packages/helm/bitnami/mysql
Thanks to David's suggestion I managed to solve my problem.
There were actually two issues in my configs.
First, the secrets were indeed misinterpreted: the $(cat ...) expressions in args are not run through a shell, so they were passed to the JVM literally instead of being expanded. I therefore changed my command/args to:
command:
  - "/bin/sh"
  - "-c"
args:
  - |
    DB_USER=$(cat /secrets/db-user)
    DB_PWD=$(cat /secrets/db-pwd)
    JWT=$(cat /secrets/jwt-secret)
    BUCKET=$(cat /secrets/s3-bucket)
    java -jar \
      -DSPRING_DATASOURCE_URL=jdbc:mysql://social-mysql.default.svc.cluster.local/api-db?useSSL=false \
      "-DSPRING_DATASOURCE_USERNAME=$DB_USER" \
      "-DSPRING_DATASOURCE_PASSWORD=$DB_PWD" \
      "-DJWT_SECRET=$JWT" \
      "-DS3_BUCKET=$BUCKET" \
      -Dlogging.level.root=DEBUG \
      social-network-api-1.0-SNAPSHOT.jar
The memory resources were also set too low, so I changed them to:
resources:
limits:
cpu: 100m
memory: 400Mi
requests:
cpu: 100m
memory: 400Mi
While redeploying, the pod gets stuck as shown below, but when I scale the deployment down to 0 and back to 1 it works.
What might be the cause?
kubectl describe pod backendnew-6f9cbc5fb-v2nbc
Name: backendnew-6f9cbc5fb-v2nbc
Namespace: default
Priority: 0
Node:
Labels: pod-template-hash=6f9cbc5fb
workload.user.cattle.io/workloadselector=deployment-default-backendnew
Annotations: cattle.io/timestamp: 2021-09-27T11:55:38Z
field.cattle.io/ports:
[[{"containerPort":7080,"dnsName":"backendnew-nodeport","hostPort":7080,"kind":"NodePort","name":"port","protocol":"TCP","sourcePort":7080...
field.cattle.io/publicEndpoints: [{"addresses":["192.168.178.13"],"nodeId":"c-jq2bh:machine-7g9vs","port":7080,"protocol":"TCP"}]
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/backendnew-6f9cbc5fb
Containers:
backendnew:
Image: kub-repo.f1soft.com/fonepay/grpay-admin:8
Port: 7080/TCP
Host Port: 7080/TCP
Environment:
server.port: 7080
spring.profiles.active: PROD
Mounts:
/app/config from vol1 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rmt6g (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
vol1:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: configoriginal
Optional: false
default-token-rmt6g:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rmt6g
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
Warning FailedScheduling default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
I am trying to follow the tutorial Deploying Debezium using the new KafkaConnector resource.
Based on the tutorial, I am also using minikube, but with the docker driver. Basically I just follow it exactly, step by step.
However, for the step "Create the connector", after creating the connector with
cat <<EOF | kubectl -n kafka apply -f -
apiVersion: "kafka.strimzi.io/v1alpha1"
kind: "KafkaConnector"
metadata:
  name: "inventory-connector"
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
    database.server.id: "184054"
    database.server.name: "dbserver1"
    database.whitelist: "inventory"
    database.history.kafka.bootstrap.servers: "my-cluster-kafka-bootstrap:9092"
    database.history.kafka.topic: "schema-changes.inventory"
    include.schema.changes: "true"
EOF
and then checking it with
kubectl -n kafka get kctr inventory-connector -o yaml
I get the following error:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
creationTimestamp: "2021-09-29T18:20:11Z"
generation: 1
labels:
strimzi.io/cluster: my-connect-cluster
name: inventory-connector
namespace: kafka
resourceVersion: "12777"
uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
class: io.debezium.connector.mysql.MySqlConnector
config:
database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
database.history.kafka.topic: schema-changes.inventory
database.hostname: 192.168.49.2
database.password: ""
database.port: "3306"
database.server.id: "184054"
database.server.name: dbserver1
database.user: ""
database.whitelist: inventory
include.schema.changes: "true"
tasksMax: 1
status:
conditions:
- lastTransitionTime: "2021-09-29T18:20:11.548Z"
message: |-
PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
reason: ConnectRestException
status: "True"
type: NotReady
observedGeneration: 1
I tried to change
database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
to
database.user: "debezium"
database.password: "dbz"
directly and re-apply, based on the user and password info in "Secure the database credentials" step.
Also, based on the description in the tutorial
I’m using database.hostname: 192.168.99.1 as IP address for connecting to MySQL because I’m using minikube with the virtualbox VM driver If you’re using a different VM driver with minikube you might need a different IP address.
I am actually a little confused by the description above. MySQL in the demo is deployed in Docker, while the rest of the stack (Kafka etc.) is deployed in minikube. Why does the description about database.hostname talk about minikube instead of Docker?
Anyway, when I run minikube ip, I get 192.168.49.2. However, after changing database.hostname to 192.168.49.2 and running kubectl get kctr inventory-connector -o yaml -n kafka, I get:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
creationTimestamp: "2021-09-29T18:20:11Z"
generation: 1
labels:
strimzi.io/cluster: my-connect-cluster
name: inventory-connector
namespace: kafka
resourceVersion: "12777"
uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
class: io.debezium.connector.mysql.MySqlConnector
config:
database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
database.history.kafka.topic: schema-changes.inventory
database.hostname: 192.168.49.2
database.password: ""
database.port: "3306"
database.server.id: "184054"
database.server.name: dbserver1
database.user: ""
database.whitelist: inventory
include.schema.changes: "true"
tasksMax: 1
status:
conditions:
- lastTransitionTime: "2021-09-29T18:20:11.548Z"
message: |-
PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
reason: ConnectRestException
status: "True"
type: NotReady
observedGeneration: 1
I can access MySQL via localhost on the host machine, as it is running in Docker.
However, I still get the same error when I change database.hostname to localhost.
Any idea? Thanks!
The issue was that the Kafka Connect service running in minikube could not reach the MySQL instance running in Docker on the host.
Regarding how to reach the host's localhost from inside the Kubernetes cluster, I found How to access host's localhost from inside kubernetes cluster.
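One option from that thread (an assumption on my side: it requires a reasonably recent minikube, which exposes the host machine under a special DNS name when using the docker driver) would be to point the connector at the host alias instead of a raw IP:

config:
  database.hostname: host.minikube.internal
  database.port: "3306"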
However, I ended up deploying MySQL inside Kubernetes directly, with:
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
(Copied from https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
with
database.hostname: "mysql.default" # service `mysql` in namespace `default`
database.port: "3306"
database.user: "root"
database.password: "password"
Now when I run
kubectl -n kafka get kctr inventory-connector -o yaml
I get a new error saying the MySQL server is not configured to use a row-level binlog; however, this means the connector can reach MySQL now (the server-side fix is sketched after the output below).
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"mysql.default","database.password":"password","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"root","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
creationTimestamp: "2021-09-29T19:36:52Z"
generation: 1
labels:
strimzi.io/cluster: my-connect-cluster
name: inventory-connector
namespace: kafka
resourceVersion: "2918"
uid: 48bb46e1-42bb-4574-a3dc-221ae7d6a803
spec:
class: io.debezium.connector.mysql.MySqlConnector
config:
database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
database.history.kafka.topic: schema-changes.inventory
database.hostname: mysql.default
database.password: password
database.port: "3306"
database.server.id: "184054"
database.server.name: dbserver1
database.user: root
database.whitelist: inventory
include.schema.changes: "true"
tasksMax: 1
status:
conditions:
- lastTransitionTime: "2021-09-29T19:36:53.605Z"
status: "True"
type: Ready
connectorStatus:
connector:
state: UNASSIGNED
worker_id: 172.17.0.8:8083
name: inventory-connector
tasks:
- id: 0
state: FAILED
trace: "org.apache.kafka.connect.errors.ConnectException: The MySQL server is
not configured to use a row-level binlog, which is required for this connector
to work properly. Change the MySQL configuration to use a row-level binlog
and restart the connector.\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:207)\n\tat
io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:49)\n\tat
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
java.lang.Thread.run(Thread.java:748)\n"
worker_id: 172.17.0.8:8083
type: source
observedGeneration: 1
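As noted above, the remaining failure is a MySQL server configuration issue rather than a connectivity one: Debezium requires row-based binary logging. A minimal sketch of the relevant MySQL settings (the server-id and log file name here are placeholders; any unique, non-zero server-id will do) is:

[mysqld]
server-id        = 1
log_bin          = mysql-bin
binlog_format    = ROW
binlog_row_image = FULL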
Currently I am trying to set up volume persistence for my MySQL database on Kubernetes with kubeadm.
The environment is based on an Amazon EC2 instance using EBS storage disks.
As you can see below, a StorageClass, a PersistentVolume and a PersistentVolumeClaim have been created for MySQL persistence.
However, an error occurs when I try to deploy the mysql pod (the events are shown further down).
mysql-pv.yml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
mountOptions:
- debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv
labels:
type: amazonEBS
spec:
capacity:
storage: 5Gi
storageClassName: standard
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
volumeID: vol-ID
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
mysql.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.7.30
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: MYPASSWORD
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
type: NodePort
ports:
- port: 3306
targetPort: 3306
nodePort: 31306
selector:
app: mysql
This is my mysql pod description:
Name: mysql-5c9788fc65-jq2nh
Namespace: default
Priority: 0
Node: ip-172-31-31-210/172.31.31.210
Start Time: Sat, 23 May 2020 12:19:24 +0000
Labels: app=mysql
pod-template-hash=5c9788fc65
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-5c9788fc65
Containers:
mysql:
Container ID:
Image: mysql:5.7.30
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: MYPASS
Mounts:
/data/ from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cshk2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-cshk2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-cshk2
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Here is the error I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler
Successfully assigned default/mysql-5c9788fc65-jq2nh to ip-172-31-31-210
Warning FailedMount 39m kubelet, ip-172-31-31-210 MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv
Output: Running scope as unit: run-r11fefbbda1d241c2985931d3adaaa969.scope
mount: /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 does not exist.
Warning FailedMount 39m kubelet, ip-172-31-31-210 MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Can someone help me?
Check the state of the PV and PVC to see whether the PVC is actually in Bound state:
kubectl describe pvc mysql-pv-claim
kubectl describe pv mysql-pv
Do you have the EBS CSI driver installed?
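A quick way to verify is to list the registered CSI drivers and look for the EBS driver pods (the namespace and pod names may differ depending on how the driver was installed):

kubectl get csidrivers
kubectl get pods -n kube-system | grep -i ebs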
Another possible reason: I think you missed adding the --cloud-provider=aws option, which is required by the cloud controller manager (CCM) for the nodes. Check out the similar issue.
The docs on cluster configuration for using EBS list all the IAM permissions and include a working example of how to create and mount an EBS volume in Kubernetes.
With kubeadm, the kubelet configuration lives in /var/lib/kubelet/config.yaml and /var/lib/kubelet/kubeadm-flags.env.
If the cluster was deployed using kubeadm, define the environment variable on all nodes in kubeadm-flags.env.
To resolve this manually, add the --cloud-provider=aws flag to kubeadm-flags.env (an example of the file is sketched after the restart command) and restart the services, which will resolve the issue:
systemctl daemon-reload && systemctl restart kubelet
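For reference, after that change the file would look roughly like this; the other flags shown are only illustrative examples and whatever is already present on the node should be kept unchanged:

# /var/lib/kubelet/kubeadm-flags.env (existing flags kept, --cloud-provider=aws appended)
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --cloud-provider=aws"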
Alternatively, provide the following configuration for kubeadm (the original example used openstack; it is shown here with aws, as in your case).
Check the following blog for a better understanding.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
    - name: cloud
      hostPath: "/etc/kubernetes/cloud.conf"
      mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
    - name: cloud
      hostPath: "/etc/kubernetes/cloud.conf"
      mountPath: "/etc/kubernetes/cloud.conf"
Here is some additional information:
kubectl describe pvc mysql-pv-claim :
Name: mysql-pv-claim
Namespace: default
StorageClass: standard
Status: Bound
Volume: mysql-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-5c9788fc65-jq2nh
Events: <none>
kubectl describe pv mysql-pv :
Name: mysql-pv
Labels: type=amazonEBS
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: standard
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-06212746d87534157
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
lsblk :
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 18M 1 loop /snap/amazon-ssm-agent/1566
loop1 7:1 0 93.9M 1 loop /snap/core/9066
loop2 7:2 0 93.8M 1 loop /snap/core/8935
nvme0n1 259:0 0 10G 0 disk
nvme1n1 259:1 0 15G 0 disk
└─nvme1n1p1 259:2 0 15G 0 part /
I want to use nvme0n1.
I don't have kubelet or kube-controller-manager log files.
I am following the official tutorial here to run a stateful mysql pod on a Kubernetes cluster that is already running on GCP. I have used the exact same commands to first create the persistent volume and persistent volume claim, and then deployed the contents of the mysql yaml file as per the documentation. The mysql pod is not running and is in RunContainerError state. Checking the logs of this mysql pod shows:
failed to open log file "/var/log/pods/045cea87-6408-11e9-84d3-42010aa001c3/mysql/2.log": open /var/log/pods/045cea87-6408-11e9-84d3-42010aa001c3/mysql/2.log: no such file or directory
Update: As asked by @Matthew in the comments, the result of kubectl describe pods -l app=mysql is provided here:
Name: mysql-fb75876c6-tk6ml
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-mycluster-default-pool-b1c1d316-xv4v/10.160.0.13
Start Time: Tue, 23 Apr 2019 13:36:04 +0530
Labels: app=mysql
pod-template-hash=963143272
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container mysql
Status: Running
IP: 10.52.0.7
Controlled By: ReplicaSet/mysql-fb75876c6
Containers:
mysql:
Container ID: docker://451ec5bf67f60269493b894004120b627d9a05f38e37cb50e9f283e58dbe6e56
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:5ab881bc5abe2ac734d9fb53d76d984cc04031159152ab42edcabbd377cc0859
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: RunContainerError
Last State: Terminated
Reason: ContainerCannotRun
Message: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Exit Code: 128
Started: Tue, 23 Apr 2019 13:36:18 +0530
Finished: Tue, 23 Apr 2019 13:36:18 +0530
Ready: False
Restart Count: 1
Requests:
cpu: 100m
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-jpkzg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jpkzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned default/mysql-fb75876c6-tk6ml to gke-mycluster-default-pool-b1c1d316-xv4v
Normal Pulling 31s kubelet, gke-mycluster-default-pool-b1c1d316-xv4v pulling image "mysql:5.6"
Normal Pulled 22s kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Successfully pulled image "mysql:5.6"
Normal Pulled 4s (x2 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Container image "mysql:5.6" already present on machine
Normal Created 3s (x3 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Created container
Warning Failed 3s (x3 over 18s) kubelet, gke-mycluster-default-pool-b1c1d316-xv4v Error: failed to start container "mysql": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
As asked by @Hanx:
Result of kubectl describe pv mysql-pv-volume
Name: mysql-pv-volume
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"mysql-pv-volume","namespace":""},"spec":{"a...
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/data
HostPathType:
Events: <none>
Result of kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"default"},"spec":{"accessModes":["R...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
Events: <none>
mysql-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
This is because you do not need to create those PersistentVolumes and StorageClasses on GKE. Those yaml files are perfectly valid if you want to use minikube or kubeadm, but not on GKE, which takes care of some of those manual steps on its own: it provisions the disk dynamically when a PVC is created, whereas the hostPath volume pointing at /mnt/data fails because that path on the node's filesystem is read-only (as the mkdir /mnt/data: read-only file system error shows).
You can use this official guide to run MySQL on GKE, or just use the files below, which I edited and tested on GKE.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mysql-volumeclaim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
And the mysql Service and Deployment:
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-volumeclaim
Make sure you read the linked guide, as it explains the GKE-specific topics.
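Assuming the two manifests above are saved as mysql-volumeclaim.yaml and mysql.yaml (the file names are arbitrary), applying and verifying them looks like this:

kubectl apply -f mysql-volumeclaim.yaml
kubectl apply -f mysql.yaml
kubectl get pvc mysql-volumeclaim   # should become Bound once GKE provisions the disk
kubectl get pods -l app=mysql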