Access MySQL Single-Instance Kubernetes Deployment - mysql

I was following the Run a Single-Instance Stateful Application tutorial of Kubernetes (I changed the MySQL Docker image's tag to 8), and the server seems to be running correctly.
But when I try to connect to the server as the tutorial suggests:
kubectl run -it --rm --image=mysql:8 --restart=Never mysql-client -- mysql -h mysql -ppassword
I get the following error:
ERROR 1045 (28000): Access denied for user 'root'@'10.1.0.99' (using password: YES)
pod "mysql-client" deleted
I already looked at those questions:
Can't access mysql root or user after kubernetes deployment
Access MySQL Kubernetes Deployment in MySQL Workbench
But changing the mountPath or port didn't work.

By default, the root account can only be connected to from inside the container. Here's an updated version of the example that allows you to connect remotely:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:8.0.26
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_ROOT_HOST
          value: "%"
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        emptyDir: {}
        # Following the original example, comment the emptyDir and uncomment the following if you have a StorageClass installed.
        # persistentVolumeClaim:
        #   claimName: mysql-pv-claim
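As the comment in the manifest says, use a Secret in real usage. A minimal sketch of that variant (the Secret name mysql-root-password and the key password are my own choices here, not from the tutorial):
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-password
type: Opaque
stringData:
  password: password
Then, in the Deployment, reference the Secret instead of the literal value:
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password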
No change to the client command except for the image tag:
kubectl run -it --rm --image=mysql:8.0.26 --restart=Never mysql-client -- mysql -h mysql -ppassword
Test with show databases;:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

Related

kubernetes mysql statefulset not taking new password, though password changes in env

I have created a StatefulSet of MySQL using the YAML below, with this command:
kubectl apply -f mysql-statefulset.yaml
Yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka"
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
After that, 3 pods were created, and for each of them a PVC and PV. I successfully entered one of the pods using:
kubectl exec -it mysql-sts-0 sh
and then login in mysql using:
mysql -u root -p
after which the prompt:
Enter password:
appeared. I entered the password:
okaoka
and could successfully log in. After that I exited the pod.
Then I deleted the StatefulSet (as expected, the PVCs and PVs were still there even after the deletion of the StatefulSet). After that I applied a new YAML, slightly changing the previous one: I changed the password in the YAML, giving the new password:
okaoka1234
and the rest of the YAML was the same as before. The YAML is given below; after applying this YAML (only the password changed) with:
kubectl apply -f mysql-statefulset.yaml
it successfully created the StatefulSet and 3 new pods (which bound to the previous PVCs and PVs, as expected).
Changed Yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka1234" # here is the change
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
Now the problem is that when I again entered a pod using:
kubectl exec -it mysql-sts-0 sh
then used:
mysql -u root -p
the prompt:
Enter password:
came again, and this time when I gave my new password:
okaoka1234
it gave access denied.
When I printed the env (inside the pod) using:
printenv
then I could see that:
MYSQL_ROOT_PASSWORD=okaoka1234
which means the environment variable changed and took the new password, but I could not log in with the new password.
The interesting thing is that I could log in with my previous password okaoka. I don't know why it takes the previous password in this scenario and not the new one, which is even in the env (inside the pod). Can anybody explain the logic behind this?
Most probably the image that you are using in your StatefulSet uses the environment variable to initialize the password only when it creates the structure on the persisted storage (on its PVC) for the first time.
Given the fact that the PVC and PV are the same as in the previous installation, that step is skipped and the database password is not updated, since the database structure is already found in the existing PVC.
After all, the root user is just a user of the database, and its password is stored in the database. Unless the image applies some particular functionality at startup in its entrypoint, it makes sense to me that the password remains the same.
What image are you using? The Docker Hub mysql image or a custom one?
Update
Given the fact that you are using the mysql image on Docker Hub, let me quote a piece of the entrypoint (https://github.com/docker-library/mysql/blob/master/template/docker-entrypoint.sh):
# there's no database, so it needs to be initialized
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
    docker_verify_minimum_env

    # check dir permissions to reduce likelihood of half-initialized database
    ls /docker-entrypoint-initdb.d/ > /dev/null

    docker_init_database_dir "$@"

    mysql_note "Starting temporary server"
    docker_temp_server_start "$@"
    mysql_note "Temporary server started."

    docker_setup_db
    docker_process_init_files /docker-entrypoint-initdb.d/*

    mysql_expire_root_user

    mysql_note "Stopping temporary server"
    docker_temp_server_stop
    mysql_note "Temporary server stopped"

    echo
    mysql_note "MySQL init process done. Ready for start up."
    echo
fi
When the container starts, it makes some checks, and if no database is found (the database is expected to be on the path where the persisted PVC is mounted), a series of operations are performed: creating it, creating default users and so on.
Only in this case is the root user created with the password specified in the environment (inside the function docker_setup_db).
Should a database already be available in the persisted path, which is your case since you let it mount the previous PVC, there is no initialization of the database; it already exists.
Everything in Kubernetes is working as expected; this is just the behaviour of the database and of the mysql image. The environment variable is used only for initialization, from what I can see in the entrypoint.
It is left to the root user to manually change the password, if desired, by using a mysql client.
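For example, something like this should work (a sketch, not from the original answer; it assumes the Docker Hub image's defaults, where both 'root'@'localhost' and 'root'@'%' were created during the first initialization with the old password):
kubectl exec -it mysql-sts-0 -- mysql -uroot -pokaoka \
  -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'okaoka1234'; ALTER USER 'root'@'%' IDENTIFIED BY 'okaoka1234';"
Repeat for each replica, since every pod has its own volume. Alternatively, deleting the PVCs (named <claim-template>-<statefulset>-<ordinal>, e.g. db-volume-mysql-sts-0) before re-applying the YAML wipes the data and lets the entrypoint re-run the initialization with the new password.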

Deploying bitnami/mysql helm chart with an existing Persistence Volume Claim

I'm trying to deploy bitnami/mysql chart inside my minikube.
I'm using Kubernetes v1.19, Minikube v1.17.1 and Helm 3
I've created a PVC and PV as follow:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      id: mysql-pv
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv
  labels:
    type: local
    id: mysql-pv
spec:
  storageClassName: standard
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/mysql
I've created the directory /var/lib/mysql by doing sudo mkdir -p /var/lib/mysql
And this is how I create my PVC and PV:
kubectl apply -f mysql-pv-dev.yaml
kubectl apply -f mysql-pvc-dev.yaml
Which seems to work:
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pvc   Bound    mysql-pv   8Gi        RWO            standard       59s
I am deploying my mysql with:
helm upgrade --install dev-mysql -f mysql-dev.yaml bitnami/mysql
Custom value file - mysql-dev.yaml:
auth:
database: dev_db
username: dev_user
password: passworddev
rootPassword: rootpass
image:
debug: true
primary:
persistence:
existingClaim: mysql-pvc
extraVolumeMounts: |
- name: init
mountPath: /docker-entrypoint-initdb.d
extraVolumes: |
- name: init
hostPath:
path: /home/dev/init_db_scripts/
type: Directory
volumePermissions:
enabled: true
The deployment works:
NAME          READY   STATUS    RESTARTS   AGE
dev-mysql-0   0/1     Running   0          8s
The problem is that the pod never gets ready because:
Warning  Unhealthy  0s (x2 over 10s)  kubelet  Readiness probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Access denied for user 'root'@'localhost' (using password: YES)'
mysqld is running inside the pod, but for some reason the root password isn't properly set, because when I exec into the pod and try to connect to mysql I get:
$ kubectl exec -ti dev-mysql-0 bash
I have no name!@dev-mysql-0:/$ mysql -u root -prootpass
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
I have no name!@dev-mysql-0:/$
Instead it's using the default values, so if I try:
mysql -u root -p
without a password, it works fine.
Thanks
A Bitnami engineer here.
I was able to reproduce the issue, I'm going to create an internal task to resolve this. We will update this thread when we have more information.
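(Not an official fix, just a guess worth checking, based on the initialization behaviour discussed in the previous answer: if the hostPath backing mysql-pv already contains a MySQL data directory from an earlier run, the entrypoint skips credential setup and the old credentials win. With minikube, the hostPath lives inside the minikube VM, so you can inspect it with:)
minikube ssh -- ls -la /var/lib/mysql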

How to connect mysql kubernetes container internally with nodejs k8s container?

I have created a MySQL k8s container and a Node.js k8s container under the same namespace. I'm not able to connect to the MySQL DB (Sequelize).
I have tried to connect using http://mysql.e-commerce.svc.cluster.local:3306, but I got a "SequelizeHostNotFoundError" error.
Here are my service and deployment YAML files:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: e-commerce
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306
  selector:
    app: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  namespace: e-commerce
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql-container
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Connecting via the ClusterIP worked for me, or better, go with the hostname of the local cluster service, e.g. db-mysql.default.svc.cluster.local. This way, if your cluster restarts and your IP changes, you've got it covered.
You are trying to access the database with the http protocol; leave the scheme out or change it to mysql://ip:3306. Some clients won't accept a DNS name for databases, so you can check the ClusterIP of the service and try that IP too.
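For Sequelize specifically, that means passing host, port and dialect instead of an http:// URL. A minimal sketch (the database name mydb and the credentials are placeholders, not from the question):
// assumes the Service "mysql" in the "e-commerce" namespace from the question
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize('mydb', 'root', 'password', {
  host: 'mysql.e-commerce.svc.cluster.local', // no http:// scheme
  port: 3306,
  dialect: 'mysql',
});

// quick connectivity check
sequelize.authenticate()
  .then(() => console.log('Connected!'))
  .catch((err) => console.error('Unable to connect:', err));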
As mentioned by community member FL3SH, you can change your spec.type to ClusterIP.
You can reproduce this task using the stable Helm chart wordpress/mysql.
For newly created pods:
mysql-mariadb-0
mysql-wordpress
and services:
mysql-mariadb
mysql-wordpress
After successful deployment you can verify that your service is working from the mysql-wordpress pod by running:
kubectl exec -it mysql-wordpress-7cb4958654-tqxm6 -- /bin/bash
In addition, you can install extra tools like nslookup and telnet:
apt-get update && apt-get install dnsutils telnet
You can test the services and connectivity with the DB by running e.g. these commands:
nslookup mysql-mariadb
telnet mysql-mariadb 3306
mysql -uroot -hmysql-mariadb -p<your_db_password>
example output:
nslookup mysql-mariadb
Server:    10.125.0.10
Address:   10.125.0.10#53

Non-authoritative answer:
Name:    mysql-mariadb.default.svc.cluster.local
Address: 10.125.0.76
mysql -u root -hmysql-mariadb -p<your_db_password>
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2068
Server version: 10.1.40-MariaDB Source distribution
You should be able to connect using the service name or the IP address.
Inside this Helm chart you can also find a template for a StatefulSet in order to create the mysql pods.
Update
From a second pod, e.g. ubuntu, run this example - Node.js MySQL: install Node.js and create the connection to the database in demo_db_connection.js.
example:
var mysql = require('mysql');

var con = mysql.createConnection({
  host: "mysql-mariadb",
  user: "root",
  password: "yourpassword"
});

con.connect(function(err) {
  if (err) throw err;
  console.log("Connected!");
});
run it:
root@ubuntu:~/test# node demo_db_connection.js
Connected!
Try with:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: e-commerce
spec:
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
with the same connection string.
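To check connectivity against that headless service, a throwaway client pod in the same namespace (the same pattern as in the first question above; the password is whatever you set in MYSQL_ROOT_PASSWORD) should be able to reach it by name:
kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client \
  --namespace=e-commerce -- mysql -h mysql -ppassword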

Add another user to MySQL in Kubernetes

Here is my MySQL deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: abc-def-my-mysql
  namespace: abc-sk-test
  labels:
    project: abc
    ca: my
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: abc-def-my-mysql
        project: abc
        ca: my
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        args: ["--default-authentication-plugin=mysql_native_password", "--ignore-db-dir=lost+found"]
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: MYSQL_DATABASE
          value: "my_abc"
        - name: MYSQL_USER
          value: "test_user"
        - name: MYSQL_PASSWORD
          value: "12345"
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: abc-def-my-mysql-storage
      volumes:
      - name: abc-def-my-mysql-storage
        persistentVolumeClaim:
          claimName: abc-def-my-mysql-pvc
I would like to add another user to MySQL so real users can connect to it, instead of using "test_user". How can I add another user? Is it like adding any other environment variable to the above config?
Mount a "create user script" into the container's /docker-entrypoint-initdb.d directory. It will be executed once, at first pod start.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql
    .....
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "root"
    .....
    volumeMounts:
    - name: mysql-initdb
      mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: mysql-initdb
    configMap:
      name: initdb
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb
data:
  initdb.sql: |-
    CREATE USER 'first_user'@'%' IDENTIFIED BY '111' ;
    CREATE USER 'second_user'@'%' IDENTIFIED BY '222' ;
Test:
kubectl exec -it <PODNAME> -- mysql -uroot -p -e 'SELECT user, host FROM mysql.user;'
+-------------+------+
| user        | host |
+-------------+------+
| first_user  | %    |
| second_user | %    |
| root        | %    |
+-------------+------+
See Initializing a fresh instance in the MySQL Docker Hub image docs:
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
Depending on the user life-cycle, you can create the user either at container startup, through MySQL's Docker startup script mounted at /docker-entrypoint-initdb.d, or at the CLI after the server has started.
With the container startup script, you may have to take care of multiple containers running the same script, e.g. use CREATE USER IF NOT EXISTS.
The CLI option may be more suitable if the DB server is long-lived and you will get requests for more user creations even after server creation.
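A sketch of the CLI option (real_user and its password are placeholders; the root password root and the database my_abc come from the manifest in the question):
kubectl -n abc-sk-test exec -it <PODNAME> -- mysql -uroot -proot \
  -e "CREATE USER 'real_user'@'%' IDENTIFIED BY 'changeme'; GRANT ALL PRIVILEGES ON my_abc.* TO 'real_user'@'%';"
Note that CREATE USER IF NOT EXISTS, handy for the idempotency concern above, requires MySQL 5.7+; the question's image is mysql:5.6.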

Can't connect to port-forwarded Pod running MySQL

I have the following deployment which puts up a MySQL instance:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password
The password is just root:
kind: Secret
apiVersion: v1
metadata:
  name: mysql-root-password
type: Opaque
data:
  password: cm9vdA==
The problem is that when I try to connect to the instance after port-forwarding the MySQL port, following the instructions from here, I get an error:
$ kubectl port-forward mysql-824284009-rpbpk 3306
Forwarding from 127.0.0.1:3306 -> 3306
Forwarding from [::1]:3306 -> 3306
# from another terminal
$ mysql -u root -p
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Connecting to the server from the pod itself works:
$ kubectl exec -it mysql-824284009-rpbpk -- /bin/bash
root@mysql-824284009-rpbpk:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
...
mysql>
I have basically the same setup as here, except I'm running the cluster in minikube instead of GCP.
My local MySQL is not running, so I assume there is no chance of a clash.
The port forwarding is likely working, but you need to tell the mysql client to connect using host/port, not the unix socket (the default):
mysql --host=localhost --protocol tcp --port=3306 -u root -p
If you don't, mysql by default uses the local unix socket to connect to the server: /var/run/mysqld/mysqld.sock. It even tells you so ;)
Update: as Gabriel checked, adding --protocol tcp finally made it work, so I am adding it to my answer.
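Why --protocol tcp matters (my note, not from the original answer): the mysql client treats the hostname localhost specially on Unix and silently falls back to the socket file; --protocol tcp overrides that. Using the loopback IP avoids the special case entirely:
mysql -h 127.0.0.1 -P 3306 -u root -p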