MySQL server deployed in Kubernetes is very slow to respond - mysql

I have deployed MySQL in Kubernetes. The pods are up and running, but when I try to create a database, create a table, or insert data, all of these operations seem very slow. Here are the YAML files I used for the deployment. Can you look at them and suggest what could be the reason?
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:8.0.20
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: rbd-default
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql
I tried creating a database after exec'ing into the pod; the operation took 40 seconds to complete. When I connected from Visual Studio and performed the same operation, it took more than 4 minutes. I think 40 seconds is already too long. However, fetching data from Visual Studio took only 300 ms.
I connected from Visual Studio using the node IP and NodePort.

Thank you all for taking the time to answer. I think I solved the issue: it was the storage class I used that was causing it. Once I updated it to rbd-fast, responses got much faster.
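For reference, the only change was the storageClassName in the volumeClaimTemplates; rbd-fast here is the faster RBD-backed class available in my cluster, so adjust the name to whatever classes your cluster provides:
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: rbd-fast   # was rbd-default; use a class backed by fast storage
        resources:
          requests:
            storage: 10Gi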

This sounds like a DNS lookup delay/timeout.
If so, you need to set skip_name_resolve in your config file, in the [mysqld] section:
https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_skip_name_resolve
Note that you will not be able to use 'user'@'host' (with a hostname) for access permissions if you do this; you will have to use 'user'@'ip_address' in your access permissions instead. You can still use the % wildcard, e.g. 'user'@'10.0.%' is valid. 'user'@'localhost' will also continue to work, as that actually means the local socket.
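For the StatefulSet above, one way to try this (a sketch, assuming the stock mysql image, whose entrypoint passes container args straight through to mysqld) is to add the flag as a container argument:
      containers:
        - image: mysql:8.0.20
          name: mysql
          args:
            - "--skip-name-resolve"   # same effect as skip_name_resolve under [mysqld]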

Related

Exposing pod to outside world with MySQL database in Azure Kubernetes Service

Hi, I've deployed a single MySQL DB instance in Azure Kubernetes Service via the YAML file below. I can get into the container via the CLI when I'm inside my cluster. I would like to connect to the DB instance from an external client such as MySQL Workbench or Sqlelectron, outside the cluster. As far as I've found out, this should be possible by exposing the DB instance with the right Service configuration.
My deployment of the single-instance MySQL DB is:
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-testing-service
  namespace: testing
spec:
  type: ClusterIP
  ports:
    - port: 3306
      #targetPort: 3306
  selector:
    app: mysql-db-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-db-testing
  namespace: testing
spec:
  selector:
    matchLabels:
      app: mysql-db-testing
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-db-testing
    spec:
      containers:
        - name: mysql-db-container-testing
          image: mysql:8.0.31
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: test12345
          ports:
            - containerPort: 3306
              name: mysql-port
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: mysql-persistent-storage
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: azure-managed-disk-pvc-mysql-testing
      nodeSelector:
        env: preprod
As I've mentioned, I can get to the container via the CLI. (Console output screenshots for the running DB pod and for the service omitted.)
Is there something missing in my deployment YAML file, or are some fields missing? How can I expose the DB to the outside world? I would be grateful for any help.
You are using a ClusterIP service (type: ClusterIP in your Service spec). A Kubernetes ClusterIP service is not meant to let you access a pod from outside the cluster; it just provides a stable, unchanging IP for other in-cluster services to reach your pod.
You should use a LoadBalancer instead.
See https://stackoverflow.com/a/48281728/8398523 for the differences.
You have used type: ClusterIP, so it won't expose MySQL outside the cluster; your microservices running inside the cluster will be able to access it, but you cannot use it externally.
To expose the service you generally have to use type: LoadBalancer. It will expose your MySQL service directly to the internet, and from your local Workbench you can connect to the DB running on K8s.
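A minimal sketch of such a Service, reusing the selector from your Deployment (the Service name here is just an example, and your cluster must be able to provision an external load balancer, which AKS can):
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-testing-external
  namespace: testing
spec:
  type: LoadBalancer   # gets an external IP from the cloud provider
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql-db-testing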
If you really don't want to expose the MySQL service directly to the internet, you can deploy Adminer instead.
Traffic will then flow like:
internet > Adminer > internal communication > MySQL service > MySQL pod
Here is a YAML file to deploy it and get the UI directly in the browser; it will ask for the MySQL DB username, password and host (mysql-db-testing-service.testing.svc.cluster.local) to connect.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
  labels:
    app: adminer
spec:
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
        - name: adminer
          image: adminer:4.6.3
          ports:
            - containerPort: 8080
          env:
            - name: ADMINER_DESIGN
              value: "pappu687"
---
apiVersion: v1
kind: Service
metadata:
  name: adminer-svc
spec:
  type: LoadBalancer   # use ClusterIP instead if you only need access from inside the cluster
  selector:
    app: adminer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
For local access, port-forward the Service (or use type: LoadBalancer):
kubectl port-forward svc/adminer-svc 8080:8080
Then open localhost:8080 in your browser.

Problem connecting to Apache after scaling up my stateful database on Kubernetes

Hi!
I have deployed Keyrock, Apache and MySQL to Kubernetes.
After I used the HPA and my stateful database scaled up, I can't log in to my simple site.
This is my MySQL YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7.21
          imagePullPolicy: Always
          resources:
            requests:
              memory: 50Mi #50
              cpu: 50m
            limits:
              memory: 500Mi #220?
              cpu: 400m #65
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
              subPath: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret # MARK P
                  key: password
            - name: MYSQL_ROOT_HOST
              valueFrom:
                secretKeyRef:
                  name: mysql-secret # MARK P
                  key: host
  volumeClaimTemplates:
    - metadata:
        name: mysql-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard #?manual
        resources:
          requests:
            storage: 5Gi
And its headless service:
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql #x-app #
spec:
  ports:
    - name: mysql
      port: 3306
  clusterIP: None
  selector:
    app: mysql #x-app
Can anyone help me?
I'm using GKE. Keyrock and Apache are Deployments and MySQL is a StatefulSet.
Thank you!
You can't just scale up a standalone database. Using an HPA works for stateless applications, but not for stateful applications like a database.
Increasing the replicas of your StatefulSet will just create another pod with a new MySQL instance. This new replica isn't aware of the data in your old replica; you now have two completely different databases. That's why you can't log in after scaling up: when your request gets routed to the new replica, that instance doesn't have the user info you created in the old replica.
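You can see this concretely with the StatefulSet above: each replica gets its own PersistentVolumeClaim, named <claim-template>-<statefulset-name>-<ordinal> (a sketch; output trimmed):
kubectl scale statefulset mysql --replicas=2
kubectl get pvc
# mysql-storage-mysql-0   Bound   ...   <- the volume holding your existing data
# mysql-storage-mysql-1   Bound   ...   <- a brand-new, empty volume for the new replica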
In this case, you should deploy your database in clustered mode. Then you can take advantage of horizontal scaling.
I recommend using a database operator like mysql/mysql-operator, presslabs/mysql-operator, or KubeDB to manage your database in Kubernetes. Of these operators, KubeDB has an autoscaling feature; I am not sure whether the others provide it.
Disclosure: I am one of the developers of the KubeDB operator.
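For illustration, with mysql/mysql-operator a clustered deployment is declared roughly like this (based on the operator's quick-start pattern; treat it as a sketch and check the API version and fields against the operator release you install):
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  secretName: mypwds        # Secret holding the root credentials
  tlsUseSelfSigned: true
  instances: 3              # MySQL server pods forming the InnoDB Cluster
  router:
    instances: 1            # MySQL Router pods fronting the cluster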

How to overcome the read-only filesystem error for a secret that is mounted in the mysql image

I'm working on setting up a MySQL instance in a K8s cluster with TLS support for client connections.
For that, I have set up cert-manager to issue a self-signed cert. I can see ca.crt, tls.key and tls.crt created in the Secret within my mysql namespace. I followed this article: https://www.jetstack.io/blog/securing-mysql-with-cert-manager/
Now, to use this cert, my plan is to place the certs in the /var/lib/mysql directory and update the MySQL config file using a ConfigMap. Here is how the mysql.yaml pod spec looks:
kind: Service
metadata:
  name: mysql
  namespace: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  mysql.cnf: |-
    [mysqld]
    ssl-ca=/var/lib/mysql/ca.crt
    ssl-cert=/var/lib/mysql/tls.crt
    ssl-key=/var/lib/mysql/tls.key
    require_secure_transport=ON
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  # securityContext:
  #   runAsUser: 0
  containers:
    - image: mysql:5.7
      name: mysql
      resources: {}
      env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
      ports:
        - containerPort: 3306
          name: mysql
      volumeMounts:
        - name: mysql-cert-secret
          #mountPath: /app/ca.crt
          mountPath: /var/lib/mysql/ca.crt
          subPath: ca.crt
        - name: mysql-cert-secret
          mountPath: /var/lib/mysql/tls.crt
          #mountPath: /app/tls.crt
          subPath: tls.crt
        - name: mysql-cert-secret
          mountPath: /var/lib/mysql/tls.key
          #mountPath: /app/tls.key
          subPath: tls.key
        - name: config-map-mysqlconf
          mountPath: /etc/mysql/mysql.conf
  volumes:
    - name: mysql-cert-secret
      secret:
        secretName: mysql-server-tls
    - name: config-map-mysqlconf
      configMap:
        name: mysql-config
If I update the mount path to, say, /app/ca.crt, then mounting works and I can see the certs when I open a shell in the container. But for the /var/lib/mysql/* paths I get the following error (error screenshot omitted).
I tried using the securityContext, but it didn't help, since the directory is accessible by both the root and mysql users. Any help would be greatly appreciated; if there is a better way to get this done, I'm happy to try that as well.
This is all done locally using a KinD cluster.
Thank you
MySQL stores its DB files in /var/lib/mysql by default, and the entrypoint will almost certainly attempt to set the ownership of that directory to the mysql user. Perhaps here.
Any attempt to modify a Secret volume results in an error rather than a successful change, since Secrets are read-only projections into the Pod's filesystem. I think that's the reason the article you followed never suggests using the /var/lib/mysql directory.
If you still want to attempt this, you could try changing the default DB storage location to something other than /var/lib/mysql in /etc/my.cnf, or the defaultMode of that volumeMount. But I'm not sure it will work, or that there won't be other issues.
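A simpler sketch that avoids the read-only problem entirely is to mount the Secret outside the data directory and point the ssl-* options there (the /etc/mysql/certs path is just an example; adjust the ConfigMap and Pod from the question accordingly):
# ConfigMap data: reference certs outside /var/lib/mysql
  mysql.cnf: |-
    [mysqld]
    ssl-ca=/etc/mysql/certs/ca.crt
    ssl-cert=/etc/mysql/certs/tls.crt
    ssl-key=/etc/mysql/certs/tls.key
    require_secure_transport=ON
# Pod spec excerpt: mount the whole Secret as a directory instead of per-file subPaths
      volumeMounts:
        - name: mysql-cert-secret
          mountPath: /etc/mysql/certs
          readOnly: true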

How to create a MySQL service using a local persistent volume to store the data on a local Windows machine

I want the MySQL pod not to lose all its data when I restart the computer.
I should be able to store the data on my machine, so that when I reboot my computer and the MySQL pod starts again, the databases are still there.
Here are my YAMLs:
storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  storageClassName: local-storage
  local:
    path: "C:\\mysql-volume" # 2 backslashes as escape characters, right?
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
  #hostPath:
  #  path: /mysql-volume
  #  type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: local
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - protocol: TCP
      port: 3306
      nodePort: 30001
  selector:
    app: mysql
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql-custom-img-here
          imagePullPolicy: IfNotPresent
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: mysql-root-password
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: mysql-user
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: mysql-password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
After trying that, the first error I got was:
MountVolume.NewMounter initialization failed for volume "mysql-pv-volume" : path "C:\\mysql-volume" does not exist
Since I'm using Windows, I guess that's the correct path, right? I'm using two backslashes as an escape character. Maybe the problem is in the path, but I'm not sure. If it is, how can I specify a local path on my Windows machine?
Then I changed the local path to /opt and the following error appeared:
initialize specified but the data directory has files in it.
log:
2020-09-24 12:53:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.31-1debian10 started.
2020-09-24 12:53:00+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-09-24 12:53:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.31-1debian10 started.
2020-09-24 12:53:00+00:00 [Note] [Entrypoint]: Initializing database files
2020-09-24T12:53:00.271130Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-09-24T12:53:00.271954Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2020-09-24T12:53:00.271981Z 0 [ERROR] Aborting
But if I change mountPath: /var/lib/mysql to, for example, mountPath: /var/lib/mysql-test,
it works, but not as expected (the data is not kept after rebooting the computer).
Even after removing the PV, PVC and MySQL deployment/service, that same error keeps appearing.
I even removed the volumes using the docker command and changed my custom MySQL image to plain 'mysql:5.7' just in case, but the same "initialize specified but the data directory has files in it" error appears.
How does that happen, even when I remove the pod? mountPath is the container path, so the data should disappear.
And how can I specify my local path in the PersistentVolume?
Thanks for your time!
Edit: I forgot to mention that I already saw this: How to create a mysql kubernetes service with a locally mounted data volume?
I searched a lot, but no luck.
I finally solved the problem.
The "initialize specified but the data directory has files in it" problem was answered by @Jakub.
As for "MountVolume.NewMounter initialization failed for volume "mysql-pv-volume" : path "C:\\mysql-volume" does not exist"... I can't believe the time I spent on this silly problem...
The correct path is path: /c/mysql-volume. After that, everything worked as expected!
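For reference, the relevant part of the PersistentVolume then looks like this (a sketch; the /c/... form is how the Windows C: drive shows up on the docker-desktop node):
  local:
    path: /c/mysql-volume   # C:\mysql-volume as seen from the docker-desktop node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop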
I am posting this as a community wiki answer for better visibility.
If you have a problem with "initialize specified but the data directory has files in it", there is a GitHub issue which will help you.
TL;DR
Use --ignore-db-dir=lost+found in your container, or
Use an older version of MySQL, for example mysql:5.6.
There are answers on GitHub provided by @alexpls and @aklinkert.
I had this issue with Kubernetes and MySQL 5.7.15 as well. Adding the suggestion from @yosifki to my container's definition got things working.
Here's an extract of my working YAML definition:
  name: mysql-master
  image: mysql:5.7
  args:
    - "--ignore-db-dir=lost+found"
The exact same configuration works with MySQL 5.6 as well.

MySQL connection fails between 2 pods in Kubernetes

I am a newbie to Kubernetes and am trying to create 2 pods: a front-end application and a back-end MySQL. First I made a YAML file which contains both the application and the MySQL server, like below:
apiVersion: v1
kind: Pod
metadata:
  name: blog-system
spec:
  containers:
    - name: blog-app
      image: blog-app:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 8080
      args: ["-t", "-i"]
      link: blog-mysql
    - name: blog-mysql
      image: mysql:latest
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: test
      ports:
        - containerPort: 3306
The MySQL JDBC URL of the front-end application is jdbc:mysql://localhost:3306/test. Pod creation is successful, and the application and MySQL connect without errors. This time I separated the application pod and the MySQL pod into 2 YAML files.
== pod-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog-app
spec:
  selector:
    app: blog-mysql
  containers:
    - name: blog-app
      image: app:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 8080
      args: ["-t", "-i"]
      link: blog-mysql
== pod-db.yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog-mysql
  labels:
    app: blog-mysql
spec:
  containers:
    - name: blog-mysql
      image: mysql:latest
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: test
      ports:
        - containerPort: 3306
But the front-end application cannot connect to the MySQL pod; it throws connection exceptions. I am afraid the MySQL JDBC URL or the YAML has incorrect values. I would appreciate any advice.
In the working case, since the same pod has two containers, they are able to talk over localhost; but in the second case, since you have two pods, you cannot use localhost anymore. You would need to use the pod IP of the MySQL pod in the front-end application, but the problem with using a pod IP is that it may change. It is better to expose the MySQL pod as a Service and use the Service name instead of the IP in the front-end application. Check this guide.
For this you need to write a Service to expose the DB pod.
There are 4 types of Services:
ClusterIP
NodePort
LoadBalancer
ExternalName
Since you only need access inside the cluster, use ClusterIP.
For reference, use the following YAML file:
kind: Service
apiVersion: v1
metadata:
  name: mysql-svc
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: blog-mysql
Now you will be able to access this pod using mysql-svc:3306.
Reference this in the blog-app YAML with:
env:
  - name: MYSQL_URL
    value: mysql-svc
  - name: MYSQL_PORT
    value: "3306"   # env values must be strings
For more info, see: https://kubernetes.io/docs/concepts/services-networking/service/
A Service gets a DNS name of the form
service_name.namespace.svc.cluster.local
Assuming you expose the MySQL pod through a Service named blog-mysql in the default namespace, your JDBC connection string will be
jdbc:mysql://blog-mysql.default.svc.cluster.local:3306/test
Refer: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods
As Arghya Sadhu and Sachin Arote suggested, you can always create a Service and a Deployment. A Deployment helps in cases where you have more than one replica of a pod, and the Service takes care of load balancing.
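A minimal sketch combining both, reusing names from the question (the password handling is simplified and should really come from a Secret):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-mysql
  template:
    metadata:
      labels:
        app: blog-mysql
    spec:
      containers:
        - name: blog-mysql
          image: mysql:latest
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password        # use a Secret in real usage
            - name: MYSQL_DATABASE
              value: test
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: blog-mysql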