Cannot connect to mariadb on kubernetes when using secret - mysql

I'm hosting a MariaDB instance in a Kubernetes cluster on Google Kubernetes Engine, using the official mariadb image from Docker Hub (mariadb:10.5).
This is my YAML for the service and deployment:
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  ports:
  - port: 3306
  selector:
    app: mariadb
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - image: mariadb:10.5
        name: mariadb
        env:
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mariadb-secret
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-secret
              key: password
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-secret
              key: rootpassword
        - name: MYSQL_DATABASE
          value: test
        ports:
        - containerPort: 3306
          name: mariadb-port
        volumeMounts:
        - name: mariadb-volume
          mountPath: /var/lib/mysql
      volumes:
      - name: mariadb-volume
        persistentVolumeClaim:
          claimName: mariadb-pvc
As you can see, I'm using a Secret to configure the environment. The YAML for the Secret looks like this:
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
data:
  rootpassword: dGVzdHJvb3RwYXNzCg==
  username: dGVzdHVzZXIK
  password: dGVzdHBhc3MK
After applying this configuration everything seems fine, except that I cannot connect to the DB with the user and its password, neither from localhost nor from a remote host:
# mysql -u testuser -ptestpass
ERROR 1045 (28000): Access denied for user 'testuser'@'localhost' (using password: YES)
I can only connect using root and its password (same connection string). When I take a look at my users in MariaDB, they look like this:
+-----------+-------------+-------------------------------------------+
| Host      | User        | Password                                  |
+-----------+-------------+-------------------------------------------+
| localhost | mariadb.sys |                                           |
| localhost | root        | *293286706D5322A73D8D9B087BE8D14C950AB0FA |
| %         | root        | *293286706D5322A73D8D9B087BE8D14C950AB0FA |
| %         | testuser    | *B07683D91842E0B3FEE182C5182AB7E4F8B3972D |
+-----------+-------------+-------------------------------------------+
If I change my Secret to use stringData instead of data and use non-encoded strings, everything works as expected:
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
stringData:
  rootpassword: testrootpass
  username: testuser
  password: testpass
I use the following commands (on Mac OS) to generate the base64 encoded strings:
echo testuser | base64
echo testpass | base64
echo testrootpass | base64
What am I doing wrong here? I would like to use the base64-encoded strings instead of the normal strings.

You created all your values with:
$ echo "value" | base64
Instead, you should use:
$ echo -n "value" | base64
From the official man page of echo:
Echo the STRING(s) to standard output.
-n = do not output the trailing newline
TL;DR: You need to edit your Secret.yaml definition with new values:
$ echo -n "testuser" | base64
$ echo -n "testpass" | base64
$ echo -n "testrootpass" | base64
Following the above explanation, your Secret.yaml should look like this:
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
data:
  rootpassword: dGVzdHJvb3RwYXNz
  username: dGVzdHVzZXI=
  password: dGVzdHBhc3M=
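As a side note, you can avoid hand-encoding values entirely by letting kubectl generate the manifest for you; a small sketch using the same names as above:
# kubectl base64-encodes --from-literal values itself, without a trailing newline
kubectl create secret generic mariadb-secret \
  --from-literal=username=testuser \
  --from-literal=password=testpass \
  --from-literal=rootpassword=testrootpass \
  --dry-run=client -o yaml > mariadb-secret.yaml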
After that, you should be able to connect to your MariaDB like below:
$ mysql -u testuser -ptestpass
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 10.5.5-MariaDB-1:10.5.5+maria~focal mariadb.org binary distribution
<---->
MariaDB [(none)]>
$ mysql -u root -ptestrootpass
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.5.5-MariaDB-1:10.5.5+maria~focal mariadb.org binary distribution
<---->
MariaDB [(none)]>
Additional resources:
Stackoverflow.com: How to get into postgres in kubernetes with local dev minikube

Related

Kubernetes php container can't seem to connect to mysql service

Strange one, I have a PHP container with Symfony, and a MySQL pod that is attached to a service called mysql-service. I send the MySQL connection details in the env variables on the PHP deployment.
Weirdly, Symfony can't connect to the MySQL pod using the service name. When I describe the PHP pod it says:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
I can see it's the name resolution:
getaddrinfo failed: Temporary failure in name resolution
If I change the deployment config so the DB_HOST env variable points to another MySQL server on the local network, it connects just fine, and I know the MySQL user and pass are correct, as the MySQL deployment and the PHP deployment both use the MySQL secret file; I have also logged into the shell of the MySQL pod and connected to the database with the same user and pass, no problem.
It looks like it's something about the mysql-service itself.
PHP deployment (some of it is removed as not relevant to my problem):
containers:
- name: php
  lifecycle:
    postStart:
      exec:
        command: ["/bin/bash", "-c", "cd /usr/share/nginx/html && php bin/console cache:clear"]
  env:
  - name: APP_ENV
    value: "prod"
  - name: DB_NAME
    valueFrom:
      secretKeyRef:
        name: mysql-secret
        key: database
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: mysql-secret
        key: username
  - name: DB_PASS
    valueFrom:
      secretKeyRef:
        name: mysql-secret
        key: password
  - name: DB_HOST
    value: "mysql-service"
  - name: DB_PORT
    value: "3306"
  imagePullPolicy: Always
  image: php-image-here
  ports:
  - containerPort: 9000
The MySQL deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    deploy: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      deploy: mysql
  template:
    metadata:
      labels:
        deploy: mysql
    spec:
      containers:
      - name: mysql
        imagePullPolicy: Always
        image: mysql-image-here
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-volume
        env:
        - name: MYSQL_RANDOM_ROOT_PASSWORD
          value: "yes"
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: database
      volumes:
      - name: mysql-volume
        hostPath:
          path: /data/mysql
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    deploy: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
The service and the mysql pod are up and running:
NAME                                      READY   STATUS             RESTARTS        AGE
pod/backend-deployment-7d585fd8fd-9z5dp   1/2     CrashLoopBackOff   5 (2m54s ago)   7m9s
pod/mysql-deployment-7cb7999d98-blpzr     1/1     Running            0               50m

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/backend-service   NodePort    10.97.250.92     <none>        80:30002/TCP   38m
service/kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP        4d18h
service/mysql-service     ClusterIP   10.111.226.120   <none>        3306/TCP       50m
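A quick way to check whether the problem is on the Service side is to confirm that mysql-service has endpoints and that its name resolves from another pod; a sketch, assuming everything runs in the default namespace:
# Empty ENDPOINTS output means the Service selector does not match the pod labels
kubectl get endpoints mysql-service

# Resolve the Service name from a throwaway pod in the same namespace
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup mysql-service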

Unable to connect: Communications link failure

I am trying to follow the tutorial Deploying Debezium using the new KafkaConnector resource.
Based on the tutorial, I am also using minikube, but with the docker driver. I basically followed it exactly, step by step.
However, for the step "Create the connector", after creating the connector with
cat <<EOF | kubectl -n kafka apply -f -
apiVersion: "kafka.strimzi.io/v1alpha1"
kind: "KafkaConnector"
metadata:
  name: "inventory-connector"
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
    database.server.id: "184054"
    database.server.name: "dbserver1"
    database.whitelist: "inventory"
    database.history.kafka.bootstrap.servers: "my-cluster-kafka-bootstrap:9092"
    database.history.kafka.topic: "schema-changes.inventory"
    include.schema.changes: "true"
EOF
and checking it with
kubectl -n kafka get kctr inventory-connector -o yaml
I got this error:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T18:20:11Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "12777"
  uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: 192.168.49.2
    database.password: ""
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: ""
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T18:20:11.548Z"
    message: |-
      PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
      A value is required
      You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    reason: ConnectRestException
    status: "True"
    type: NotReady
  observedGeneration: 1
I tried to change
database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
to
database.user: "debezium"
database.password: "dbz"
directly and re-apply, based on the user and password info in the "Secure the database credentials" step.
Also, the tutorial gives this description:
I'm using database.hostname: 192.168.99.1 as IP address for connecting to MySQL because I'm using minikube with the virtualbox VM driver. If you're using a different VM driver with minikube you might need a different IP address.
I am actually a little confused by this description. MySQL in the demo is deployed in Docker, while the rest of the parts, like Kafka, are deployed in minikube. Why does the description about database.hostname mention minikube instead of Docker?
Anyway, when I run minikube ip, I get 192.168.49.2. However, after I change database.hostname to 192.168.49.2 and run kubectl get kctr inventory-connector -o yaml -n kafka, I get
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T18:20:11Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "12777"
  uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: 192.168.49.2
    database.password: ""
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: ""
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T18:20:11.548Z"
    message: |-
      PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
      A value is required
      You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    reason: ConnectRestException
    status: "True"
    type: NotReady
  observedGeneration: 1
I can access MySQL via localhost as it is hosted in Docker.
However, I still get the same error when I change database.hostname to localhost.
Any idea? Thanks!
The issue was that the service in minikube failed to communicate with the MySQL instance running in Docker.
Regarding how to access the host's localhost from inside the Kubernetes cluster, I found How to access host's localhost from inside kubernetes cluster.
However, I ended up deploying MySQL in Kubernetes directly with
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
(copied from https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
with
database.hostname: "mysql.default" # service `mysql` in namespace `default`
database.port: "3306"
database.user: "root"
database.password: "password"
Now when I run
kubectl -n kafka get kctr inventory-connector -o yaml
I get a new error saying MySQL does not use a row-level binlog; however, it means the connector can reach MySQL now.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"mysql.default","database.password":"password","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"root","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T19:36:52Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "2918"
  uid: 48bb46e1-42bb-4574-a3dc-221ae7d6a803
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: mysql.default
    database.password: password
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: root
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T19:36:53.605Z"
    status: "True"
    type: Ready
  connectorStatus:
    connector:
      state: UNASSIGNED
      worker_id: 172.17.0.8:8083
    name: inventory-connector
    tasks:
    - id: 0
      state: FAILED
      trace: "org.apache.kafka.connect.errors.ConnectException: The MySQL server is
        not configured to use a row-level binlog, which is required for this connector
        to work properly. Change the MySQL configuration to use a row-level binlog
        and restart the connector.\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:207)\n\tat
        io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:49)\n\tat
        org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)\n\tat
        org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat
        org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat
        java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
        java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
        java.lang.Thread.run(Thread.java:748)\n"
      worker_id: 172.17.0.8:8083
    type: source
  observedGeneration: 1
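For completeness, the row-level binlog can be enabled by passing the relevant flags to the mysql container; a sketch, assuming the single-container mysql Deployment from the k8s.io example above (the --server-id value is arbitrary but must differ from Debezium's database.server.id, and the root password "password" comes from that example manifest):
# Add binlog flags to the mysql container; the Deployment then recreates the pod
kubectl patch deployment mysql --type='json' -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args",
   "value": ["--log-bin=mysql-bin", "--binlog-format=ROW", "--server-id=1"]}
]'

# Verify from inside the new pod that ROW-format binary logging is active
kubectl exec -it deploy/mysql -- \
  mysql -uroot -ppassword -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"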

Can't run MySQL StatefulSet in Kubernetes

I've just started learning Kubernetes and was playing around on the Katacoda platform. I created a StatefulSet for MySQL. It is just a test, so I didn't declare any PVC or mount any volumes. Its declaration and the service's declaration in YAML:
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
  labels:
    run: mysql-sts-demo
spec:
  ports:
  - port: 3306
    name: db
  selector:
    run: mysql-sts-demo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts-demo
spec:
  serviceName: "mysql-headless"
  replicas: 1
  selector:
    matchLabels:
      run: mysql-sts-demo
  template:
    metadata:
      labels:
        run: mysql-sts-demo
    spec:
      containers:
      - name: mysql
        image: mysql:5.7.8
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: ROOT_PASSWORD
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: DBNAME
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: USER
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: PASSWORD
It creates those resources successfully, but when I type kubectl get statefulsets, my StatefulSet is always displayed as not ready. What may the issue be? Btw, I need it for a Spring Petclinic app which I declared and launched previously as a deployment.
Can you paste the logs for the StatefulSet,
or the output of kubectl get events and kubectl describe <your stateful-set name>?
Now, coming to secrets: can you check whether the secrets which you are using in your StatefulSet definition are already present, using kubectl get secrets? See the sketch below.
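For reference, a minimal set of commands for those checks, assuming the names used in the question (mysql-sts-demo and mysql-secrets):
# StatefulSet status, recent events, and the logs of its first pod
kubectl describe statefulset mysql-sts-demo
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl logs mysql-sts-demo-0

# Are the referenced Secrets present, and do the key names match?
kubectl get secrets
kubectl describe secret mysql-secrets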

Unable to create a MySQL database backup in GCP storage

We deploy a Laravel project in k8s (GCP) with a MySQL database. Now I want a periodic backup of this database with the help of a CronJob. I followed an article but I'm unable to create a backup file. As per the article, we need to create the storage bucket and service account in GCP.
The CronJob runs properly, but still there is no backup file in the storage bucket.
cronjob.yaml file
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backup-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup-container
            image: gcr.io/thereport/abcd
            env:
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: backup-configmap
                  key: db
            - name: GCS_BUCKET
              valueFrom:
                configMapKeyRef:
                  name: backup-configmap
                  key: gcs-bucket
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: backup
                  key: db_host
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: backup
                  key: username
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: backup
                  key: password
            - name: GCS_SA
              valueFrom:
                secretKeyRef:
                  name: backup
                  key: thereport-541be75e66dd.json
            args:
            - /bin/bash
            - -c
            - mysqldump --u root --p"root" homestead > trydata.sql; gcloud config set project thereport; gcloud auth activate-service-account --key-file backup; gsutil cp /trydata.sql gs://backup-buck
          restartPolicy: OnFailure
You don't copy the right file: mysqldump writes its output to trydata.sql, so the gsutil cp path has to point at that same file. Using a consistent absolute path (and valid mysqldump flags):
mysqldump -u root -p"root" homestead > /trydata.sql; gcloud config set project thereport; gcloud auth activate-service-account --key-file backup; gsutil cp /trydata.sql gs://backup-buck
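To confirm whether the job actually produced and uploaded a dump, check the job logs and the bucket contents; a sketch, assuming the names from the question:
# List the Jobs created by the CronJob and inspect the latest one's logs
kubectl get jobs
kubectl logs job/<latest-backup-cronjob-job-name>

# Check whether the dump landed in the bucket
gsutil ls gs://backup-buck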

Can't login mysql server deployed in k8s cluster

I am using k8s in mac-docker-desktop. I deploy a MySQL pod with the config below and run it with kubectl apply -f mysql.yaml:
# secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql
type: Opaque
data:
  # root
  mysql-root-password: cm9vdAo=
---
# configMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-conf
data:
  database: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
      - name: mysql
        persistentVolumeClaim:
          claimName: mysql
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql
              key: mysql-root-password
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mysql-conf
              key: database
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql
          mountPath: /var/lib/mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# services
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
After that, everything shows OK. Then I want to connect to the MySQL server with the node IP, but that fails. I also exec into the pod and cannot log in there either:
☁ gogs-k8s kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
blog-59fb8cbd44-frmtx    1/1     Running   0          37m
blog-59fb8cbd44-gdskp    1/1     Running   0          37m
blog-59fb8cbd44-qrs8f    1/1     Running   0          37m
mysql-6c794ccb7b-dz9f4   1/1     Running   0          31s
☁ gogs-k8s kubectl exec mysql-6c794ccb7b-dz9f4 -it bash
root@mysql-6c794ccb7b-dz9f4:/# ls
bin boot dev docker-entrypoint-initdb.d entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
root@mysql-6c794ccb7b-dz9f4:/# echo $MYSQL_ROOT_PASSWORD
root
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Is there any problem with my config file?
Probably you have an invalid base64-encoded password. Try this one:
data:
  mysql-root-password: cm9vdA==
As @Vasily Angapov pointed out, your base64 encoding is wrong.
When you do the following, you are encoding the string root followed by a trailing newline:
echo "root" | base64
Output:
cm9vdAo=
If you want to remove the newline character you should use the option -n:
echo -n "root" | base64
Output:
cm9vdA==
Even better is to do the following:
echo -n "root" | base64 -w 0
That way base64 will not insert new lines in longer outputs.
You can also verify that your encoding is right by decoding the encoded text:
echo "cm9vdA==" | base64 --decode
The output should not contain a trailing newline.
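To check a Secret that is already applied to the cluster for a stray trailing newline, decode the stored value and inspect it byte by byte; a sketch, assuming the Secret and key names from the question:
# A trailing \n in the od output means the password was encoded with the newline included
kubectl get secret mysql -o jsonpath='{.data.mysql-root-password}' | base64 --decode | od -c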