How do I access MySQL as a Service on Kubernetes?

I have the following deployment...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              subPath: "mysql"
              name: mysql-data
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: ROOT_PASSWORD
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-data-disk
This works great; I can access the DB like this...
kubectl exec -it mysql-deployment-<POD-ID> -- /bin/bash
Then I run...
mysql -u root -h localhost -p
And I can log into it. However, when I try to access it as a service by using the following yaml...
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
I can see it by running kubectl describe service mysql-service, which gives...
Name: mysql-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mysql-service","namespace":"default"},"spec":{"ports":[{"port":33...
Selector: app=mysql
Type: ClusterIP
IP: 10.101.1.232
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 172.17.0.4:3306
Session Affinity: None
Events: <none>
and I get the IP by running kubectl cluster-info:
#kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
but when I try to connect using Oracle SQL Developer, it says it cannot connect.
How do I connect to MySQL running on K8s?

A Service of type ClusterIP is not accessible from outside the Pod network.
If a LoadBalancer is not an option, you have to use either a Service of type NodePort or kubectl port-forward.
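For a quick test from your workstation, a port-forward sketch using the mysql-service defined above (assuming the mysql client is installed locally):
kubectl port-forward service/mysql-service 3306:3306
# in a second terminal, connect through the tunnel:
mysql -u root -h 127.0.0.1 -P 3306 -p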

You need your mysql Service to be of type NodePort instead of ClusterIP to access it from outside Kubernetes.
Use the node port in your client config.
Example Service:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      nodePort: 30036
      targetPort: 3306
So then you can use port 30036 in your client.
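For example, with the mysql CLI (a sketch; on a single-node minikube the node address is the same 192.168.99.100 shown by kubectl cluster-info above, also obtainable via minikube ip):
mysql -u root -h 192.168.99.100 -P 30036 -p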

Related

How to connect my webservice with mysql on k8s?

I am trying to deploy a webservice that has to connect to MySQL. Everything works when I run them with docker-compose, but on Kubernetes (Minikube) I get this error: dial tcp: lookup mysql on 10.96.0.10:53: no such host. Any idea what I may be missing? Here are my manifest files:
The webservice has to be accessible over the internet and listens on port 8080. Service:
apiVersion: v1
kind: Service
metadata:
  name: webservice-svc
spec:
  selector:
    app: blur
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: NodePort
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blur-service
spec:
  selector:
    matchLabels:
      app: blur
  template:
    metadata:
      labels:
        app: blur
    spec:
      containers:
        - name: blur-service
          image: marjugoncalves/blur-service
          ports:
            - containerPort: 8080
The manifest files for MySQL:
Service:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: blur
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: blur
  template:
    metadata:
      labels:
        app: blur
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          ports:
            - containerPort: 3306
I found out what my mistake was: in my app, the string used to connect to the database uses "mysql" as the hostname, so the Service has to be named like that too. My Service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: mysql # here was the mistake
spec:
  selector:
    app: blur
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
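To verify the fix from inside the cluster, a quick DNS check with a throwaway pod (a sketch; busybox:1.28 is a tag commonly used for this because its nslookup output is reliable):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mysql
If the Service name matches the hostname in the connection string, this resolves mysql to its ClusterIP.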

Mysql access denied when it runs in Kubernetes pod

I'm setting up a K8s cluster, and I started with the database. I used Kustomize for that purpose. I use the Kubernetes that comes with Docker Desktop for Windows 10.
When I run kubectl apply -k ./, the mysql pods are running.
Then I use kubectl exec -it mysql -- bash to get inside the container.
Once in there, I try to connect to MySQL with mysql -u root -p, and all I get is
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
It doesn't matter whether I use the secretGenerator in kustomization.yaml or put the root password directly in the deployment definition; I can't log in to MySQL.
I'm using the mysql image from Docker Hub, so nothing fancy.
I also tested running the container directly with Docker, e.g.
docker run -d --env MYSQL_ROOT_PASSWORD=dummy --name mysql-test -p 3306:3306 mysql:5.6
With the container set up like this, I can log in to the MySQL database without a problem.
I don't understand why the same image behaves differently when run in Kubernetes rather than in Docker.
Maybe you have some ideas?
My yaml files look like this:
storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
mysql-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/c/kubernetes/mysql-storage/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      protocol: TCP
      name: mysql-backend
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - secretRef:
          #       name: mysql-credentials
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dummy
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
kustomization.yaml
resources:
  - storage.yaml
  - mysql-persistent-volume.yaml
  - mysql-deployment.yaml
generatorOptions:
  disableNameSuffixHash: true
secretGenerator:
  - name: mysql-credentials
    literals:
      - MYSQL_ROOT_PASSWORD=dummy
      # - MYSQL_ALLOW_EMPTY_PASSWORD=yes
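One way to narrow this down (a sketch; <mysql-pod> is a placeholder for the pod name from kubectl get pods): the official mysql image only applies MYSQL_ROOT_PASSWORD when it initializes an empty data directory, so a persistent volume left over from an earlier run keeps the old credentials.
# check what password the container actually received
kubectl exec -it <mysql-pod> -- printenv MYSQL_ROOT_PASSWORD
# a pre-populated data directory means the env var was ignored at startup
kubectl exec -it <mysql-pod> -- ls /var/lib/mysql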

Can't connect to mysql in kubernetes

I have deployed a MySQL database in Kubernetes and exposed it via a service. When my application tries to connect to that database, the connection keeps being refused. I get the same when I try to access it locally. The Kubernetes node runs in minikube.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql_db
          imagePullPolicy: Never
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
And here's my yaml for persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Users/Work/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After this, running minikube service list gives:
default | mysql-service | http://192.168.99.101:31613
However, I cannot access the database from either my application or my local machine.
What am I missing, or did I misconfigure something?
EDIT:
I do not define any env variables here, since the image run by Docker already contains a running MySQL DB, and some scripts are run within the Docker image too.
MySQL has probably not started; confirm it by checking the logs: kubectl get pods | grep mysql; kubectl logs -f $POD_ID. Remember that the mysql image needs MYSQL_ROOT_PASSWORD set for it to start (MYSQL_DATABASE is optional); if you don't want to set a root password, set MYSQL_ALLOW_EMPTY_PASSWORD instead. Here is an example of a mysql YAML.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql_db
          imagePullPolicy: Never
          env:
            - name: MYSQL_DATABASE
              value: main_db
            - name: MYSQL_ROOT_PASSWORD
              value: s4cur4p4ss
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
OK, I figured it out. After looking through the logs I noticed the error Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied).
I had to add this to my Docker image when building it:
RUN usermod -u 1000 mysql
After rebuilding the image everything started working. Thank you guys.
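For context, a minimal Dockerfile sketch of where that line goes (the base image is an assumption here, since the question builds its own mysql_db image):
FROM mysql:5.6
# align the mysql user's UID with the owner of the host-mounted data directory
# so mysqld can write to /var/lib/mysql
RUN usermod -u 1000 mysql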
I thought I was connecting to my DB server correctly, but I was wrong. My DB deployment was online (tested with kubectl exec -it xxxx -- bash and then mysql -u root --password=$MYSQL_ROOT_PASSWORD), but that wasn't the problem.
I made the simple mistake of mixing up my service and deployment labels. My DB service used a different label than what my Joomla configMap had specified as the MySQL host.
To summarize, the DB service yaml was
metadata:
  labels:
    app: fnjoomlaopencart-db-service
and the Joomla configMap yaml needed
data:
  # point to the DB service
  MYSQL_HOST: fnjoomlaopencart-db-service

how to check if kubernetes services like mysql, rabbitmq are reachable by the deployed app

I am new to Kubernetes. I am trying to deploy a microservices-based Spring Boot web application in Kubernetes, which I have set up on OpenStack. All Kubernetes services are running fine.
I followed https://github.com/fabric8io/gitcontroller/tree/master/vendor/k8s.io/kubernetes/examples/javaweb-tomcat-sidecar
to deploy a sample springboot application at
https://www.mkyong.com/spring-boot/spring-boot-hello-world-example-jsp/
and I could see the web app at localhost:8080 and <'node-ip'>:8080.
But the application I am trying to deploy needs MySQL and RabbitMQ, so I created MySQL and RabbitMQ services using this YAML file (javaweb.yaml):
javaweb.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  # Expose the management HTTP port on each node
  name: rabbitmq-management
  labels:
    app: rabbitmq
spec:
  type: NodePort # Or LoadBalancer in production w/ proper security
  ports:
    - port: 15672
      name: http
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  # The required headless service for StatefulSets
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  ports:
    - port: 5672
      name: amqp
    - port: 4369
      name: epmd
    - port: 25672
      name: rabbitmq-dist
  clusterIP: None
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Pod
metadata:
  name: javaweb
spec:
  containers:
    - image: chakravarthych/sample:v1
      name: war
      volumeMounts:
        - mountPath: /app
          name: app-volume
    - image: mysql:5.7
      name: mysql
      ports:
        - protocol: TCP
          containerPort: 3306
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: root123
      command: ["sh", "-c", "service mysql start; tail -f /dev/null"]
    - image: rabbitmq:3.7-management
      name: rabbitmq
      ports:
        - name: http
          protocol: TCP
          containerPort: 15672
        - name: amqp
          protocol: TCP
          containerPort: 5672
      command: ["sh", "-c", "service rabbitmq-server start; tail -f /dev/null"]
    - image: tomcat:8.5.33
      name: tomcat
      volumeMounts:
        - mountPath: /usr/local/tomcat/webapps
          name: app-volume
      ports:
        - containerPort: 8080
          hostPort: 8080
      command: ["sh", "-c", "/usr/local/tomcat/bin/startup.sh; tail -f /dev/null"]
  volumes:
    - name: app-volume
      emptyDir: {}
When I try to access my application at localhost:8080 or <'node-ip'>:8080 I see a blank page.
The command kubectl get all -o wide gave me the output below:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/javaweb 4/4 Running 0 1h 192.168.9.123 kube-node1 <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d <none>
service/mysql NodePort 10.101.253.11 <none> 3306:30527/TCP 1h app=mysql
service/rabbitmq ClusterIP None <none> 5672/TCP,4369/TCP,25672/TCP 1h app=rabbitmq
service/rabbitmq-management NodePort 10.108.7.162 <none> 15672:30525/TCP 1h app=rabbitmq
which shows that MySQL and rabbitmq are running.
My question is how to check if my application has access to MySQL and rabbitmq services running in Kubernetes.
Note:
I could access rabbitmq at 192.168.9.123:15672 only.
I could also log in to MySQL from inside the Docker container.
Did you try kubectl logs on the Spring Boot pod? It may give some indication of what is going wrong.
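Beyond the logs, a direct reachability check from inside the cluster (a sketch; nicolaka/netshoot is an assumed general-purpose network debugging image, and the service names come from javaweb.yaml above):
kubectl run -it --rm net-test --image=nicolaka/netshoot --restart=Never -- sh
# then, inside the throwaway pod:
nslookup mysql
nc -zv mysql 3306
nc -zv rabbitmq 5672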

Kubernetes -- unable to connect to mysql from spring application

I have a Kubernetes cluster. I have started MySQL from kubectl. I have an image of a Spring Boot application. I am confused about the JDBC URL to be used in application.yml. I have tried multiple IP addresses obtained by describing pods, services, etc. It errors out with "communications link failure".
Below is my mysql-deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  # type: NodePort
  ports:
    - port: 3306
      # targetPort: 3306
      # nodePort: 31000
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: cGFzc3dvcmQ= # password
  MYSQL_DATABASE: dGVzdA== # test
  MYSQL_USER: dGVzdHVzZXI= # testuser
  MYSQL_PASSWORD: dGVzdDEyMw== # test123
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_PASSWORD
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_DATABASE
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_USER
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_PASSWORD
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Your K8s Service should expose port and targetPort 3306, and in your JDBC URL you should use the name of that Service:
jdbc:mysql://mysql/database
If your MySQL is a backend service only for apps running in K8s, you don't need a nodePort in the Service manifest.
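On the Spring Boot side, a minimal application.yml sketch using that Service name and the values from the Secret above (the base64 literals decode to test, testuser, and test123):
spring:
  datasource:
    url: jdbc:mysql://mysql:3306/test
    username: testuser
    password: test123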
If you get SQLException: Connection refused or Connection timed out, or the MySQL-specific CommunicationsException: Communications link failure, then it means that the DB isn't reachable at all.
This can have one or more of the following causes:
IP address or hostname in JDBC URL is wrong.
Hostname in JDBC URL is not recognized by local DNS server.
Port number is missing or wrong in JDBC URL.
DB server is down.
DB server doesn't accept TCP/IP connections.
DB server has run out of connections.
Something in between Java and DB is blocking connections, e.g. a firewall or proxy. 
I suggest these steps to better understand the problem:
1. Connect to the MySQL pod and verify the content of the /etc/mysql/my.cnf file
2. Connect to MySQL from inside the pod to verify it works (a concrete check is sketched below)
3. Remove clusterIP: None from the Service manifest
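For step 2, a concrete version of that check (a sketch; <mysql-pod> is a placeholder, and the root password decodes from the Secret above):
kubectl exec -it <mysql-pod> -- mysql -u root -ppassword -e 'SELECT 1'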
Get the IP address of the node where the MySQL pod is running:
kubectl get nodes -o wide
If the MySQL Service is exposed as type NodePort, get the assigned nodePort:
kubectl get svc
In your application.properties, set the JDBC URL to use the node's IP address and the nodePort assigned to that Service.
Below is my output of kubectl get svc:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
mysql ClusterIP None <none> 3306/TCP 2h
registry NodePort 10.110.33.13 <none> 8761:31881/TCP 7d