Can't login mysql server deployed in k8s cluster - mysql

I am using Kubernetes in Docker Desktop for Mac. I deploy a MySQL pod with the config below, applied with: kubectl apply -f mysql.yaml
# secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql
type: Opaque
data:
  # root
  mysql-root-password: cm9vdAo=
---
# configMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-conf
data:
  database: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
        - name: mysql
          persistentVolumeClaim:
            claimName: mysql
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: mysql-root-password
            - name: MYSQL_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: mysql-conf
                  key: database
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql
              mountPath: /var/lib/mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# services
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
After that, everything shows as OK. Then I try to connect to the MySQL server via the node IP, but it fails. So I exec into the pod and try to log in there, which fails too.
☁ gogs-k8s kubectl get pods
NAME READY STATUS RESTARTS AGE
blog-59fb8cbd44-frmtx 1/1 Running 0 37m
blog-59fb8cbd44-gdskp 1/1 Running 0 37m
blog-59fb8cbd44-qrs8f 1/1 Running 0 37m
mysql-6c794ccb7b-dz9f4 1/1 Running 0 31s
☁ gogs-k8s kubectl exec mysql-6c794ccb7b-dz9f4 -it bash
root@mysql-6c794ccb7b-dz9f4:/# ls
bin boot dev docker-entrypoint-initdb.d entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
root@mysql-6c794ccb7b-dz9f4:/# echo $MYSQL_ROOT_PASSWORD
root
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Are there any problems with my config file?

Probably you have an invalid base64-encoded password. Try this one:
data:
  pass: cm9vdA==

As @Vasily Angapov pointed out, your base64 encoding is wrong.
When you do the following, you are encoding the string root\n (with a trailing newline):
echo "root" | base64
Output:
cm9vdAo=
If you want to remove the newline character you should use the option -n:
echo -n "root" | base64
Output:
cm9vdA==
Even better is to do the following:
echo -n "root" | base64 -w 0
That way base64 will not insert new lines in longer outputs.
Also you can verify if your encoding is right by decoding the encoded text:
echo "cm9vdA==" | base64 --decode
The output should be root with no trailing newline.
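To make the newline issue concrete, here is a small plain-shell sketch (nothing project-specific assumed) comparing the two encodings:

```shell
# "root" is 4 bytes; echo appends a newline, so 5 bytes get encoded.
with_newline=$(echo "root" | base64)           # encodes "root\n" -> cm9vdAo=
without_newline=$(printf '%s' "root" | base64) # encodes "root"   -> cm9vdA==

echo "$with_newline"
echo "$without_newline"

# Decoding and counting bytes exposes the stray newline:
echo "$with_newline" | base64 --decode | wc -c     # 5 bytes
echo "$without_newline" | base64 --decode | wc -c  # 4 bytes
```

printf '%s' behaves like echo -n but is more portable across shells.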

Related

Spring in Kubernetes tries to reach DB at pod IP

I'm facing an issue while deploying a Spring API which should connect to a MySQL database.
I am deploying a standalone MySQL using the [bitnami helm chart][1] with the following values:
primary:
  service:
    type: ClusterIP
  persistence:
    enabled: true
    size: 3Gi
    storageClass: ""
  extraVolumes:
    - name: mysql-passwords
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: mysql-spc
  extraVolumeMounts:
    - name: mysql-passwords
      mountPath: "/vault/secrets"
      readOnly: true
  configuration: |-
    [mysqld]
    default_authentication_plugin=mysql_native_password
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mysql
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    port=3306
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    datadir=/bitnami/mysql/data
    tmpdir=/opt/bitnami/mysql/tmp
    max_allowed_packet=16M
    bind-address=0.0.0.0
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
    log-error=/opt/bitnami/mysql/logs/mysqld.log
    character-set-server=UTF8
    collation-server=utf8_general_ci
    slow_query_log=0
    slow_query_log_file=/opt/bitnami/mysql/logs/mysqld.log
    long_query_time=10.0
    [client]
    port=3306
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    [manager]
    port=3306
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
auth:
  createDatabase: true
  database: api-db
  username: api
  usePasswordFiles: true
  customPasswordFiles:
    root: /vault/secrets/db-root-pwd
    user: /vault/secrets/db-pwd
    replicator: /vault/secrets/db-replica-pwd
serviceAccount:
  create: false
  name: social-app
I use the following deployment, which runs a Spring API (with Vault secret injection):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: social-api
  name: social-api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: social-api
  template:
    metadata:
      labels:
        app: social-api
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: 'social'
    spec:
      serviceAccountName: social-app
      containers:
        - image: quay.io/paulbarrie7/social-network-api
          name: social-network-api
          command:
            - java
          args:
            - -jar
            - "-DSPRING_DATASOURCE_URL=jdbc:mysql://social-mysql.default.svc.cluster.local/api-db?useSSL=false"
            - "-DSPRING_DATASOURCE_USERNAME=api"
            - "-DSPRING_DATASOURCE_PASSWORD=$(cat /secrets/db-pwd)"
            - "-DJWT_SECRET=$(cat /secrets/jwt-secret)"
            - "-DS3_BUCKET=$(cat /secrets/s3-bucket)"
            - -Dlogging.level.root=DEBUG
            - -Dspring.datasource.hikari.maximum-pool-size=5
            - -Dlogging.level.com.zaxxer.hikari.HikariConfig=DEBUG
            - -Dlogging.level.com.zaxxer.hikari=TRACE
            - social-network-api-1.0-SNAPSHOT.jar
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: aws-credentials
              mountPath: "/root/.aws"
              readOnly: true
            - name: java-secrets
              mountPath: "/secrets"
              readOnly: true
      volumes:
        - name: aws-credentials
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aws-secret-spc
        - name: java-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: java-spc
The credentials are OK: when I run an interactive mysql pod I can connect to the database. However, the Spring API fails with the error:
java.sql.SQLException: Access denied for user 'api'@'10.24.0.194' (using password: YES)
which looks wrong, since 10.24.0.194 is the API pod address and not the mysql pod or service address, and I can't figure out why.
Any idea?
[1]: https://artifacthub.io/packages/helm/bitnami/mysql
Thanks to David's suggestion I succeeded in solving my problem.
There were actually two issues in my configs.
First, the secrets were indeed misinterpreted: the $(cat ...) substitutions were never expanded, because no shell runs exec-form args. So I changed my command/args to:
command:
  - "/bin/sh"
  - "-c"
args:
  - |
    DB_USER=$(cat /secrets/db-user)
    DB_PWD=$(cat /secrets/db-pwd)
    JWT=$(cat /secrets/jwt-secret)
    BUCKET=$(cat /secrets/s3-bucket)
    java -jar \
      -DSPRING_DATASOURCE_URL=jdbc:mysql://social-mysql.default.svc.cluster.local/api-db?useSSL=false \
      "-DSPRING_DATASOURCE_USERNAME=$DB_USER" \
      "-DSPRING_DATASOURCE_PASSWORD=$DB_PWD" \
      "-DJWT_SECRET=$JWT" \
      "-DS3_BUCKET=$BUCKET" \
      -Dlogging.level.root=DEBUG \
      social-network-api-1.0-SNAPSHOT.jar
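For context on why the exec-form args failed: Kubernetes passes args to the container process verbatim, so $(cat ...) is never expanded unless a shell interprets the string. A minimal sketch (the /tmp path is only for this demo, not the real /secrets mount):

```shell
# Simulate a mounted secret file (demo path only).
mkdir -p /tmp/secrets-demo
printf '%s' 's3cr3t' > /tmp/secrets-demo/db-pwd

# Exec-form args: the string reaches the process untouched, substitution intact.
literal='-DSPRING_DATASOURCE_PASSWORD=$(cat /tmp/secrets-demo/db-pwd)'
printf '%s\n' "$literal"   # prints the $(...) literally

# Wrapping the command in "sh -c" lets a shell expand the substitution first.
sh -c 'printf "%s\n" "-DSPRING_DATASOURCE_PASSWORD=$(cat /tmp/secrets-demo/db-pwd)"'
# prints -DSPRING_DATASOURCE_PASSWORD=s3cr3t
```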
The memory resources were also set too low, so I changed them to:
resources:
  limits:
    cpu: 100m
    memory: 400Mi
  requests:
    cpu: 100m
    memory: 400Mi

Kubernetes php container can't seem to connect to mysql service

Strange one: I have a PHP container with Symfony, and a MySQL pod behind a service called mysql-service. I pass the MySQL connection details as env variables in the PHP deployment.
Weirdly, Symfony can't connect to the MySQL pod using the service name. When I describe the PHP pod it says:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
I can see it's a name resolution problem:
getaddrinfo failed: Temporary failure in name resolution
If I change the deployment config so the DB_HOST env variable points at another MySQL server on the local network, it connects just fine, and I know the MySQL user and pass are correct: the MySQL deployment and the PHP deployment both use the same secret, and I have also logged into the shell of the MySQL pod and connected to the database with the same user and pass, no problem.
It looks like it's something about the mysql service itself.
PHP deployment (some of it removed as not relevant to the problem):
containers:
  - name: php
    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash", "-c", "cd /usr/share/nginx/html && php bin/console cache:clear"]
    env:
      - name: APP_ENV
        value: "prod"
      - name: DB_NAME
        valueFrom:
          secretKeyRef:
            name: mysql-secret
            key: database
      - name: DB_USER
        valueFrom:
          secretKeyRef:
            name: mysql-secret
            key: username
      - name: DB_PASS
        valueFrom:
          secretKeyRef:
            name: mysql-secret
            key: password
      - name: DB_HOST
        value: "mysql-service"
      - name: DB_PORT
        value: "3306"
    imagePullPolicy: Always
    image: php-image-here
    ports:
      - containerPort: 9000
the mysql deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    deploy: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      deploy: mysql
  template:
    metadata:
      labels:
        deploy: mysql
    spec:
      containers:
        - name: mysql
          imagePullPolicy: Always
          image: mysql-image-here
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysql-volume
          env:
            - name: MYSQL_RANDOM_ROOT_PASSWORD
              value: "yes"
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: database
      volumes:
        - name: mysql-volume
          hostPath:
            path: /data/mysql
            type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    deploy: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
The service and the mysql pod are up and running:
NAME READY STATUS RESTARTS AGE
pod/backend-deployment-7d585fd8fd-9z5dp 1/2 CrashLoopBackOff 5 (2m54s ago) 7m9s
pod/mysql-deployment-7cb7999d98-blpzr 1/1 Running 0 50m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend-service NodePort 10.97.250.92 <none> 80:30002/TCP 38m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d18h
service/mysql-service ClusterIP 10.111.226.120 <none> 3306/TCP 50m

Kubernetes Inject Env Variable with File in a Volume

Just for training purposes, I'm trying to inject these env variables into my WordPress and MySQL app with this ConfigMap, using a file in a volume.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wordpress-mysql
  namespace: ex2
data:
  wordpress.conf: |
    WORDPRESS_DB_HOST mysql
    WORDPRESS_DB_USER admin
    WORDPRESS_DB_PASSWORD "1234"
    WORDPRESS_DB_NAME wordpress
    WORDPRESS_DB_PREFIX wp_
  mysql.conf: |
    MYSQL_DATABASE wordpress
    MYSQL_USER admin
    MYSQL_PASSWORD "1234"
    MYSQL_RANDOM_ROOT_PASSWORD "1"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: ex2
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
  namespace: ex2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          volumeMounts:
            - name: config
              mountPath: "/etc/env"
              readOnly: true
          ports:
            - containerPort: 3306
              protocol: TCP
      volumes:
        - name: config
          configMap:
            name: wordpress-mysql
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
  namespace: ex2
spec:
  ports:
    - nodePort: 30999
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: wordpress
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
  namespace: ex2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress
          name: wordpress
          volumeMounts:
            - name: config
              mountPath: "/etc/env"
              readOnly: true
          ports:
            - containerPort: 80
              protocol: TCP
      volumes:
        - name: config
          configMap:
            name: wordpress-mysql
When I deploy the app the mysql pod fails with this error:
kubectl -n ex2 logs mysql-56ddd69598-ql229
2020-12-26 19:57:58+00:00 [ERROR] [Entrypoint]: Database is
uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
I don't understand, because I have specified everything in my ConfigMap. I have also tried using envFrom and single env variables, and that works just fine. I'm only having an issue with the file-in-a-volume approach.
@DavidMaze is correct; you're mixing two useful features.
Using test.yaml:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wordpress-mysql
data:
  wordpress.conf: |
    WORDPRESS_DB_HOST mysql
    WORDPRESS_DB_USER admin
    WORDPRESS_DB_PASSWORD "1234"
    WORDPRESS_DB_NAME wordpress
    WORDPRESS_DB_PREFIX wp_
  mysql.conf: |
    MYSQL_DATABASE wordpress
    MYSQL_USER admin
    MYSQL_PASSWORD "1234"
    MYSQL_RANDOM_ROOT_PASSWORD "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - image: busybox
          name: test
          args:
            - ash
            - -c
            - while true; do sleep 15s; done
          volumeMounts:
            - name: config
              mountPath: "/etc/env"
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: wordpress-mysql
Then:
kubectl apply --filename=./test.yaml
kubectl exec --stdin --tty deployment/test -- ls /etc/env
mysql.conf wordpress.conf
kubectl exec --stdin --tty deployment/test -- more /etc/env/mysql.conf
MYSQL_DATABASE wordpress
MYSQL_USER admin
MYSQL_PASSWORD "1234"
MYSQL_RANDOM_ROOT_PASSWORD "1"
NOTE: the files are missing the = between each variable and its value (and should probably include it), e.g. MYSQL_DATABASE=wordpress.
So, what you have is a ConfigMap that represents 2 files (mysql.conf and wordpress.conf) and, if you use e.g. busybox and mount the ConfigMap as a volume, you can see that it includes 2 files and that the files contain the configurations.
So, if you can run e.g. WordPress or MySQL and pass a configuration file to them, you're good. But what you probably want is to reference the ConfigMap entries as environment variables, per @DavidMaze's suggestion, i.e. run Pods with environment variables set from the ConfigMap entries:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
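As a sketch of that env-variable route (the map name mysql-env is hypothetical; the ConfigMap must hold one key per variable rather than whole files, and passwords would normally live in a Secret instead):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-env   # hypothetical name: one key per environment variable
data:
  MYSQL_DATABASE: wordpress
  MYSQL_USER: admin
  MYSQL_PASSWORD: "1234"
  MYSQL_RANDOM_ROOT_PASSWORD: "1"
---
# Container fragment: envFrom injects every key above as an env variable.
containers:
  - image: mysql:5.6
    name: mysql
    envFrom:
      - configMapRef:
          name: mysql-env
```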
I would really suggest not using a ConfigMap for WordPress. You can use the official repo directly, https://github.com/docker-library/wordpress/tree/master/php7.4/apache: it has a docker-entrypoint.sh that picks up the env values you set in the deployment.yaml directly, and if you use Vault that works perfectly too.

WordPress + MySQL deployed in Kubernetes - MySQL Connection Error

A Kubernetes scenario with Wordpress + Mysql in a local environment.
The WordPress pod is unable to connect to the MySQL database, with the following error in the WordPress pod logs:
MySQL Connection Error: (1045) Access denied for user 'root'@'10.44.0.5' (using password: YES)
Warning: mysqli::mysqli(): (HY000/1045): Access denied for user 'root'@'10.44.0.5' (using password: YES) in - on line 22
Instructions taken from kubernetes.io at link. The only change I made was creating a Secret resource to store the password, referenced by both the MySQL and WordPress containers.
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  namespace: default
data:
  password: cGFzc3dvcmQxMjMK # --> that is base64 of password123
type: Opaque
Both pods are in the default namespace, on node1, which is a worker node:
NAME READY STATUS RESTARTS AGE IP NODE
wordpress-554dfbbc47-hnr4n 0/1 Error 1 66s 10.44.0.5 node1
wordpress-mysql-5477cbdfbf-29w2r 1/1 Running 0 74s 10.44.0.4 node1
I have no MySQL skills, but if I get a bash shell in the MySQL container and execute:
# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Here the Service output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
wordpress LoadBalancer 10.107.114.255 192.168.1.83 80:32336/TCP
wordpress-mysql ClusterIP None <none> 3306/TCP
Some env variables from MySql Pod:
....
HOSTNAME=wordpress-mysql-5477cbdfbf-29w2r
MYSQL_MAJOR=5.6
MYSQL_ROOT_PASSWORD=password123
MYSQL_VERSION=5.6.50-1debian9
....
The PersistentVolumes are working fine.
I'm quite stuck on further troubleshooting. Help would be appreciated.
After testing different images for MySQL and WordPress and reading the useful links on hub.docker.com (mysql & wordpress), I got the web application stack working.
The configuration:
MySQL:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.7
          imagePullPolicy: IfNotPresent
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: root-pass
                  key: password
            - name: MYSQL_DATABASE
              value: mysql
            - name: MYSQL_USER
              value: mysql
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      nodeSelector:
        storage: local
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Wordpress:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
  externalIPs:
    - 192.168.1.83
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress
          name: wordpress
          imagePullPolicy: IfNotPresent
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: WORDPRESS_DB_USER
              value: mysql
            - name: WORDPRESS_DB_NAME
              value: mysql
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      nodeSelector:
        storage: local
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
Output PersitentVolume:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
mysql-pv-claim Bound persistent-volume-mysql 4Gi RWO local-storage
wp-pv-claim Bound persistent-volume-wordpress 2Gi RWO local-storage
Secrets:
apiVersion: v1
kind: Secret
metadata:
  name: root-pass
  namespace: default
data:
  password: cGFzc3dvcmQ=
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  namespace: default
data:
  password: cGFzc3dvcmQ=
type: Opaque
Notes for my example configuration:
on node1 created directories /mysql/data and /wordpress/data (mount points for the mysql and wordpress containers).
image used for mysql -> mysql:5.7
image used for wordpress -> wordpress
added environment variables according to the documentation of mysql and wordpress.
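A check worth adding to the notes above, since it likely explains the original failure: decode the old Secret value and count the bytes. cGFzc3dvcmQxMjMK decodes to password123 plus a trailing newline, so the MySQL entrypoint would have set the root password to that 12-byte string:

```shell
# "password123" is 11 bytes; the old encoded value decodes to 12.
echo 'cGFzc3dvcmQxMjMK' | base64 --decode | wc -c   # 12 (trailing "\n" included)

# Encoding without echo's newline yields a different value entirely:
printf '%s' 'password123' | base64                  # cGFzc3dvcmQxMjM=
```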
Did you apply your secret? Is your secret available in the cluster environment?

Cannot connect to mariadb on kubernetes when using secret

I'm hosting a mariadb in a kubernetes cluster on Google Kubernetes Engine. I'm using the official mariadb image from dockerhub (mariadb:10.5).
This is my yml for the service and deployment
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  ports:
    - port: 3306
  selector:
    app: mariadb
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - image: mariadb:10.5
          name: mariadb
          env:
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mariadb-secret
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-secret
                  key: password
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-secret
                  key: rootpassword
            - name: MYSQL_DATABASE
              value: test
          ports:
            - containerPort: 3306
              name: mariadb-port
          volumeMounts:
            - name: mariadb-volume
              mountPath: /var/lib/mysql
      volumes:
        - name: mariadb-volume
          persistentVolumeClaim:
            claimName: mariadb-pvc
As you can see, I'm using a secret to configure the environment. The yml for the secret looks like this:
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
data:
  rootpassword: dGVzdHJvb3RwYXNzCg==
  username: dGVzdHVzZXIK
  password: dGVzdHBhc3MK
After applying this configuration everything seems fine, except that I cannot connect to the DB with the user and its password. Not from localhost, and also not from remote:
# mysql -u testuser -ptestpass
ERROR 1045 (28000): Access denied for user 'testuser'@'localhost' (using password: YES)
I can only connect using root and its password (same connection string). When I take a look at my users in mariadb they look like this:
+-----------+-------------+-------------------------------------------+
| Host | User | Password |
+-----------+-------------+-------------------------------------------+
| localhost | mariadb.sys | |
| localhost | root | *293286706D5322A73D8D9B087BE8D14C950AB0FA |
| % | root | *293286706D5322A73D8D9B087BE8D14C950AB0FA |
| % | testuser | *B07683D91842E0B3FEE182C5182AB7E4F8B3972D |
+-----------+-------------+-------------------------------------------+
If I change my Secret to use stringData instead of data, with non-encoded strings, everything works as expected:
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
stringData:
  rootpassword: testrootpass
  username: testuser
  password: testpass
I use the following commands (on Mac OS) to generate the base64 encoded strings:
echo testuser | base64
echo testpass | base64
echo testrootpass | base64
What am I doing wrong here? I would like to use the base64-encoded strings instead of the normal strings.
You created all your values with:
$ echo "value" | base64
Instead, you should use:
$ echo -n "value" | base64
Following official man page of echo:
Description
Echo the STRING(s) to standard output.
-n = do not output the trailing newline
TL;DR: You need to edit your Secret.yaml definition with new values:
$ echo -n "testuser" | base64
$ echo -n "testpass" | base64
$ echo -n "testrootpass" | base64
Following above explanation, your Secret.yaml should look like:
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
data:
  rootpassword: dGVzdHJvb3RwYXNz
  username: dGVzdHVzZXI=
  password: dGVzdHBhc3M=
After that you should be able to connect to your mariadb like below:
$ mysql -u testuser -ptestpass
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 10.5.5-MariaDB-1:10.5.5+maria~focal mariadb.org binary distribution
<---->
MariaDB [(none)]>
$ mysql -u root -ptestrootpass
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.5.5-MariaDB-1:10.5.5+maria~focal mariadb.org binary distribution
<---->
MariaDB [(none)]>
Additional resources:
Stackoverflow.com: How to get into postgres in kubernetes with local dev minikube