I deployed a StatefulSet on Kubernetes (K3S actually). I used the following manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: dev
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  serviceName: database-svc
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mysql-db
        image: mysql:8.0.30-debian
        ports:
        - name: tcp
          protocol: TCP
          containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: ROOT_PASSWORD
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
The other referenced resources (Service and Secret) exist and the pod is created correctly. But when I run kubectl exec -n dev -it mysql-0 -- mysql -u root -p and enter the correct password, I get the error:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
command terminated with exit code 1
If I sh into the container and echo the MYSQL_ROOT_PASSWORD variable, I see exactly the password I entered without success.
What am I missing? How should I connect to the pod from the command line?
I was following the Run a Single-Instance Stateful Application tutorial of Kubernetes (I changed the MySQL Docker image's tag to 8), and the server seems to be running correctly.
But when I try to connect to the server as the tutorial suggests:
kubectl run -it --rm --image=mysql:8 --restart=Never mysql-client -- mysql -h mysql -ppassword
I get the following error:
ERROR 1045 (28000): Access denied for user 'root'@'10.1.0.99' (using password: YES)
pod "mysql-client" deleted
I already looked at those questions:
Can't access mysql root or user after kubernetes deployment
Access MySQL Kubernetes Deployment in MySQL Workbench
But changing the mountPath or port didn't work.
By default, the root account can only be connected to from inside the container. Here's an updated version of the example that allows you to connect from a remote host:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:8.0.26
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_ROOT_HOST
          value: "%"
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        emptyDir: {}
        # Following the original example, comment the emptyDir and uncomment the following if you have a StorageClass installed.
        # persistentVolumeClaim:
        #   claimName: mysql-pv-claim
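If you do go the PVC route mentioned in that comment, a minimal sketch of the claim it refers to could look like this (the name mysql-pv-claim comes from the original tutorial; the 1Gi size is just an example, the tutorial uses its own PV/PVC pair):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi  # example size; adjust to your needs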
No change to the client command except for the image tag:
kubectl run -it --rm --image=mysql:8.0.26 --restart=Never mysql-client -- mysql -h mysql -ppassword
Test with show databases;:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)
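As the comment in the manifest notes, in real usage the password should come from a Secret rather than a literal value. A minimal sketch of that, assuming a Secret named mysql-secret with a key ROOT_PASSWORD (mirroring the first manifest in this thread):
kubectl create secret generic mysql-secret --from-literal=ROOT_PASSWORD=password
and in the container spec, replace the literal value with a reference to it:
env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-secret   # hypothetical Secret name
      key: ROOT_PASSWORD
- name: MYSQL_ROOT_HOST
  value: "%"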
I have created a MySQL StatefulSet using the YAML below with this command:
kubectl apply -f mysql-statefulset.yaml
YAML:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka"
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
After that, 3 pods were created, along with a PVC and PV for each of them. I successfully entered one of the pods using:
kubectl exec -it mysql-sts-0 sh
and then logged in to MySQL using:
mysql -u root -p
After giving this command, the
Enter password:
prompt came up, I entered the password
okaoka
and could successfully log in. After that I exited the pod.
Then I deleted the StatefulSet (as expected, the PVCs and PVs were still there even after the deletion of the StatefulSet). After that I applied a new YAML, slightly changing the previous one: I changed the password in the YAML to a new password,
okaoka1234
and the rest of the YAML stayed the same as before. The YAML is given below. Now, after applying this YAML (only the password changed) with:
kubectl apply -f mysql-statefulset.yaml
it successfully created the StatefulSet and 3 new pods (which bound to the previous PVCs and PVs, as expected).
Changed YAML:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka1234" # here is the change
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
Now the problem is, when I again entered a pod using:
kubectl exec -it mysql-sts-0 sh
and then used:
mysql -u root -p
the
Enter password:
prompt came up again, and this time when I gave my new password,
okaoka1234
it gave access denied.
When I printed the env (inside the pod) using:
printenv
then I could see that:
MYSQL_ROOT_PASSWORD=okaoka1234
which means the environment variable did change and picked up the new password, but I could not log in with the new password.
The interesting thing is that I could log in with my previous password okaoka. I don't know why it accepts the previous password in this scenario and not the new one, which is even what the env (inside the pod) shows. Can anybody explain the logic behind this?
Most probably, the image you are using in your StatefulSet uses the environment variable to initialize the password only when it creates the database structure on the persisted storage (its PVC) for the first time.
Since the PVCs and PVs are the same as in the previous installation, that step is skipped and the database password is not updated, because the database structure is already present in the existing PVC.
After all, the root user is just a database user, and its password is stored in the database itself. Unless the image applies some particular logic at startup in its entrypoint, it makes sense to me that the password remains the same.
What image are you using? The docker hub mysql image or a custom one?
Update
Given that you are using the mysql image from Docker Hub, let me quote a piece of the entrypoint (https://github.com/docker-library/mysql/blob/master/template/docker-entrypoint.sh):
# there's no database, so it needs to be initialized
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
    docker_verify_minimum_env
    # check dir permissions to reduce likelihood of half-initialized database
    ls /docker-entrypoint-initdb.d/ > /dev/null
    docker_init_database_dir "$@"
    mysql_note "Starting temporary server"
    docker_temp_server_start "$@"
    mysql_note "Temporary server started."
    docker_setup_db
    docker_process_init_files /docker-entrypoint-initdb.d/*
    mysql_expire_root_user
    mysql_note "Stopping temporary server"
    docker_temp_server_stop
    mysql_note "Temporary server stopped"
    echo
    mysql_note "MySQL init process done. Ready for start up."
    echo
fi
When the container starts, it performs some checks, and if no database is found (the database is expected to be on the path where the persisted PVC is mounted), a series of operations is performed: creating the database, creating the default users, and so on.
Only in this case is the root user created with the password specified in the environment (inside the function docker_setup_db).
Should a database already be available in the persisted path, which is your case since you let it mount the previous PVC, there is no initialization of the database; it already exists.
Everything in Kubernetes is working as expected; this is just the behaviour of the database and of the mysql image. The environment variable is used only for initialization, from what I can see in the entrypoint.
It is left to the root user to manually change the password, if desired, by using a mysql client.
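For example, a minimal sketch using the pod and passwords from the question (assuming the root accounts the official image creates, 'root'@'localhost' and, by default, 'root'@'%'): exec into the pod, authenticate with the old password okaoka, and then set the new one:
kubectl exec -it mysql-sts-0 -- mysql -u root -p
# enter the OLD password (okaoka) at the prompt, then:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'okaoka1234';
mysql> ALTER USER 'root'@'%' IDENTIFIED BY 'okaoka1234';  -- skip if this account does not exist on your server
Note that with replicas: 3 each pod has its own PVC and therefore its own database, so this would have to be repeated in each pod.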
Hi, I just started using Kubernetes. I have deployed my Flask application on Kubernetes with minikube, and I am running the MySQL server on my local machine. When I try to access the DB I get this error:
InternalError: (pymysql.err.InternalError) (1130, u"Host '157.37.85.26' is not allowed to connect to this MySQL server")
(Background on this error at: http://sqlalche.me/e/2j85)
The IP is dynamic here; every time I try to access the DB, a different IP is used to connect.
Here is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment
spec:
  selector:
    matchLabels:
      app: flask-crud-app
  replicas: 3
  template:
    metadata:
      labels:
        app: flask-crud-app
    spec:
      containers:
      - name: flask-crud-app
        image: flask-crud-app:latest
        ports:
        - containerPort: 80
And service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  selector:
    app: flask-crud-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
  type: NodePort
It's because your current configuration doesn't allow requests coming from that IP address. Say you're connecting as the root user; then a workaround (not recommended) is to give the root user the privilege to connect from that IP.
Connect to your MySQL server and run:
$ mysql -u root -p
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'my_ip' IDENTIFIED BY 'root_password' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
Recommendation: Set up a new user with limited privileges and allow requests from the given IP for that user.
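For example, a minimal sketch of that (the user, database, and password names are placeholders; also note that on MySQL 8.0 the GRANT ... IDENTIFIED BY form shown above no longer exists, so the user has to be created first):
mysql> CREATE USER 'flask_user'@'%' IDENTIFIED BY 'a_strong_password';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON flask_db.* TO 'flask_user'@'%';
mysql> FLUSH PRIVILEGES;
Using '%' as the host avoids the problem of the dynamic client IP mentioned in the question; if you can, restrict it to a narrower host pattern instead.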
I have installed MySQL in minikube
exposed port 3306 using port forwarding [I want to access MySQL using Workbench]
and I am getting an error when trying to connect to MySQL [Access denied for user 'root'@'127.0.0.1' (using password: YES)]
My YAML file for MySQL:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:latest
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: adminadmin
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        hostPath:
          path: /mnt/data
My port forwarding command is
kubectl port-forward <<PODNAME>> 3306:3306 --address 0.0.0.0
When trying to access it from my local MySQL Workbench, I am getting the following error:
Access denied for user 'root'@'127.0.0.1' (using password: YES)
The expectation is to connect to MySQL from the local Workbench.