I deployed a MySQL service in Kubernetes and created a database (db1) and a custom user ('foo'@'localhost' identified by 'bar') with all privileges. I can verify it.
But somehow my Flask application cannot connect to it. So I turned on debug mode and got:
sqlalchemy.exc.OperationalError
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 99] Address not available)")
(Background on this error at: http://sqlalche.me/e/13/e3q8)
This never happens on Linux machines. This is my DATABASE_URL: mysql+pymysql://foo:bar@localhost:3306/db1
Is there anything I am doing wrong? I am new to Kubernetes and don't even know how to go about debugging this. Please let me know if I am not providing enough information.
Thank you.
[EDITED]
mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123"
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-volumeclaim
mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
  - port: 3306
  selector:
    app: mysql
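Side note on the connection string above: inside the cluster the database is reached via the Service name rather than localhost, so with these manifests the URL would presumably look like the line below (assuming the Flask pod runs in the same namespace and the user is allowed to connect from remote hosts, i.e. 'foo'@'%' rather than 'foo'@'localhost'):
DATABASE_URL=mysql+pymysql://foo:bar@mysql:3306/db1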
Related
Hi, I've deployed a single MySQL DB instance in Azure Kubernetes Service via the YAML file below. I can get into the container via the CLI when I'm inside my cluster. I would like to connect to the DB instance via an external client like MySQL Workbench or Sqlelectron or others, outside the cluster. As I found out, it's possible by correctly exposing the DB instance through the Service configuration.
My deployment of the single-instance MySQL DB is:
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-testing-service
  namespace: testing
spec:
  type: ClusterIP
  ports:
  - port: 3306
    #targetPort: 3306
  selector:
    app: mysql-db-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-db-testing
  namespace: testing
spec:
  selector:
    matchLabels:
      app: mysql-db-testing
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-db-testing
    spec:
      containers:
      - name: mysql-db-container-testing
        image: mysql:8.0.31
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: test12345
        ports:
        - containerPort: 3306
          name: mysql-port
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: mysql-persistent-storage
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: azure-managed-disk-pvc-mysql-testing
      nodeSelector:
        env: preprod
As I've mentioned, I can get to the container via the CLI; the console output for the working DB pod and for the service is omitted here.
Is there something missing in my deployment YAML file, or are some fields missing? How can I expose the DB to the outside world? I would be grateful for help.
You are using a ClusterIP service (line 7). The Kubernetes ClusterIP service is not meant to let you access a pod from outside the cluster; it just provides a stable IP for other internal services to reach your pod.
You should use a LoadBalancer instead.
Cf. https://stackoverflow.com/a/48281728/8398523 for the differences.
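For example, a minimal sketch of the question's Service switched to type LoadBalancer (name, namespace and selector taken from the manifests above; on AKS this provisions a public IP that you can then use in Workbench on port 3306):
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-testing-service
  namespace: testing
spec:
  type: LoadBalancer
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-db-testing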
You have used type: ClusterIP, so it won't expose MySQL outside the cluster; your microservices running in the cluster will be able to access it, but you cannot use it externally.
To expose the service we generally have to use type: LoadBalancer. It will expose your MySQL service directly to the internet, and from your local Workbench you can connect to the DB running on K8s.
If you really don't want to expose the MySQL service directly to the internet, you can deploy adminer.
So traffic will flow like:
internet > adminer > internal communication > MySQL service > MySQL POD
Here is a YAML file to deploy it and get the UI directly in the browser; it will ask for the MySQL DB username, password and host (mysql-db-testing-service.testing.svc.cluster.local) to connect:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
  labels:
    app: adminer
spec:
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
      - name: adminer
        image: adminer:4.6.3
        ports:
        - containerPort: 8080
        env:
        - name: ADMINER_DESIGN
          value: "pappu687"
---
apiVersion: v1
kind: Service
metadata:
  name: adminer-svc
spec:
  type: ClusterIP # internal to the cluster; use LoadBalancer to expose it to the internet
  selector:
    app: adminer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Port-forward for local access, or use service type: LoadBalancer.
kubectl port-forward svc/adminer-svc 8080:8080
Open localhost:8080 in the browser.
I am a newbie with Kubernetes and am trying to create 2 pods: a front-end application and a back-end MySQL. First I made a YAML file which contains both the application and the MySQL server, like below:
apiVersion: v1
kind: Pod
metadata:
  name: blog-system
spec:
  containers:
  - name: blog-app
    image: blog-app:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
    args: ["-t", "-i"]
    link: blog-mysql
  - name: blog-mysql
    image: mysql:latest
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: password
    - name: MYSQL_PASSWORD
      value: password
    - name: MYSQL_DATABASE
      value: test
    ports:
    - containerPort: 3306
The MySQL JDBC URL of the front-end application is jdbc:mysql://localhost:3306/test. Pod generation is successful, and the application and MySQL are connected without errors. This time I separated the application pod and the MySQL pod into 2 YAML files.
== pod-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog-app
spec:
  selector:
    app: blog-mysql
  containers:
  - name: blog-app
    image: app:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
    args: ["-t", "-i"]
    link: blog-mysql
== pod-db.yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog-mysql
  labels:
    app: blog-mysql
spec:
  containers:
  - name: blog-mysql
    image: mysql:latest
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: password
    - name: MYSQL_PASSWORD
      value: password
    - name: MYSQL_DATABASE
      value: test
    ports:
    - containerPort: 3306
But the front-end application cannot connect to the MySQL pod; it throws connection exceptions. I am afraid the MySQL JDBC URL has some incorrect values or the YAML has inappropriate values. I would appreciate any advice.
In the working case, since the same pod has two containers, they are able to talk using localhost. In the second case, since you have two pods, you cannot use localhost anymore. You would then need to use the pod IP of the MySQL pod in the frontend application, but the problem with using the pod IP is that it may change. Better is to expose the MySQL pod as a service and use the service name instead of the IP in the frontend application. Check this guide.
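One quick way to do that imperatively (a sketch; the service name blog-mysql-svc is a name assumed here for illustration) is to let kubectl generate the Service from the pod's labels:
kubectl expose pod blog-mysql --name=blog-mysql-svc --port=3306 --target-port=3306
# the frontend JDBC URL would then become jdbc:mysql://blog-mysql-svc:3306/test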
For this you need to write a Service exposing the DB pod.
There are 4 types of Services:
ClusterIP
NodePort
LoadBalancer
ExternalName
Since you only need it inside the cluster, use ClusterIP.
For reference, use the following YAML file.
kind: Service
apiVersion: v1
metadata:
  name: mysql-svc
spec:
  type: ClusterIP
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: blog-mysql
Now you will be able to access this pod using mysql-svc:3306.
Refer to this in the blog-app YAML with:
env:
- name: MYSQL_URL
  value: mysql-svc
- name: MYSQL_PORT
  value: "3306"
For more info see https://kubernetes.io/docs/concepts/services-networking/service/
Pods created will have DNS configured in the following manner:
pod_name.namespace.svc.cluster.local
In your case assuming these pods are in default namespace your jdbc connection string will be
jdbc:mysql://blog-mysql.default.svc.cluster.local:3306/test
Refer: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods
As Arghya Sadhu and Sachin Arote suggested, you can always create a Service and a Deployment. A Service and Deployment help in cases where you have more than one replica of a pod, and the Service takes care of the load balancing.
I have deployed a MySQL database in Kubernetes and exposed it via a service. When my application tries to connect to that database, the connection keeps being refused. I also get the same when I try to access it locally. The Kubernetes node is run in minikube.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql_db
        imagePullPolicy: Never
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: "/var/lib/mysql"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
And here's my yaml for persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Users/Work/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After this, running minikube service list gives:
default | mysql-service | http://192.168.99.101:31613
However, I cannot access the database from either my application or my local machine.
What am I missing or did I misconfigure something?
EDIT:
I do not define any env variables here since the image already runs a MySQL DB, and some scripts are run within the Docker image too.
MySQL must not have started; confirm it by checking the logs: kubectl get pods | grep mysql; kubectl logs -f $POD_ID. Remember you have to specify the environment variables MYSQL_DATABASE and MYSQL_ROOT_PASSWORD for MySQL to start. If you don't want to set a password for root, specify the respective variable (e.g. MYSQL_ALLOW_EMPTY_PASSWORD) instead. Here I am giving you an example of a MySQL YAML.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql_db
        imagePullPolicy: Never
        env:
        - name: MYSQL_DATABASE
          value: main_db
        - name: MYSQL_ROOT_PASSWORD
          value: s4cur4p4ss
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: "/var/lib/mysql"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Ok, I figured it out. After looking through the logs I noticed the error Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied).
I had to add this to my docker image when building:
RUN usermod -u 1000 mysql
After rebuilding the image everything started working. Thank you guys.
I thought I was connecting to my DB server correctly, but I was wrong. My DB deployment was online (tested with kubectl exec -it xxxx -- bash and then mysql -u root --password=$MYSQL_ROOT_PASSWORD) but that wasn't the problem.
I made the simple mistake of getting my service and deployment labels confused. My DB service used a different label than what my Joomla ConfigMap had specified as the MySQL host.
To summarize, the DB service YAML was
metadata:
  labels:
    app: fnjoomlaopencart-db-service
and the Joomla ConfigMap YAML needed
data:
  # point to the DB service
  MYSQL_HOST: fnjoomlaopencart-db-service
I have the following deployment...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          subPath: "mysql"
          name: mysql-data
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: ROOT_PASSWORD
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-data-disk
This works great; I can access the DB like this:
kubectl exec -it mysql-deployment-<POD-ID> -- /bin/bash
Then I run...
mysql -u root -h localhost -p
And I can log into it. However, when I try to access it as a service by using the following YAML...
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
I can see it: by running kubectl describe service mysql-service I get
Name: mysql-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mysql-service","namespace":"default"},"spec":{"ports":[{"port":33...
Selector: app=mysql
Type: ClusterIP
IP: 10.101.1.232
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 172.17.0.4:3306
Session Affinity: None
Events: <none>
and I get the IP by running kubectl cluster-info:
#kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
But when I try to connect using Oracle SQL Developer (connection settings screenshot omitted), it says it cannot connect.
How do I connect to the MySQL running on K8s?
A Service of type ClusterIP will not be accessible outside of the pod network.
If you don't have the LoadBalancer option, then you have to use either a Service of type NodePort or kubectl port-forward.
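For the port-forward route, a minimal example using the mysql-service from the question (this assumes kubectl is pointed at your minikube cluster; the forward lasts only while the command runs):
kubectl port-forward svc/mysql-service 3306:3306
# then point your client at 127.0.0.1:3306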
You need your MySQL service to be of type NodePort instead of ClusterIP to access it outside Kubernetes.
Use the node port in your client config.
Example Service:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    nodePort: 30036
    targetPort: 3306
So then you can use the port: 30036 in your client.
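To find the address to pair with that port, one way (a sketch; on minikube the same value is also printed by minikube ip) is:
kubectl get nodes -o wide
# connect the client to <node-ip>:30036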
I have a Kubernetes cluster. I have started MySQL from kubectl. I have an image of a Spring Boot application. I am confused about the JDBC URL to be used in application.yml. I have tried multiple IP addresses obtained by describing pods, services etc. It keeps erroring out with "Communications link failure".
Below is my mysql-deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  #type: NodePort
  ports:
  - port: 3306
    #targetPort: 3306
    #nodePort: 31000
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: cGFzc3dvcmQ= #password
  MYSQL_DATABASE: dGVzdA== #test
  MYSQL_USER: dGVzdHVzZXI= #testuser
  MYSQL_PASSWORD: dGVzdDEyMw== #test123
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_DATABASE
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_USER
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Your K8S service should expose port and targetPort 3306, and in your JDBC URL use the name of that service:
jdbc:mysql://mysql/database
If your MySQL is a backend service only for apps running in K8S, you don't need nodePort in the service manifest.
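For example, a sketch of the application.yml datasource section under these assumptions: the Service is named mysql as in the question, and the username, password and database are the decoded values from the question's Secret:
spring:
  datasource:
    # "mysql" resolves via cluster DNS to the MySQL service
    url: jdbc:mysql://mysql:3306/test
    username: testuser
    password: test123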
If you get a SQLException: Connection refused or Connection timed out, or a MySQL-specific CommunicationsException: Communications link failure, then it means that the DB isn't reachable at all.
This can have one or more of the following causes (a quick in-cluster check is sketched after the list):
IP address or hostname in JDBC URL is wrong.
Hostname in JDBC URL is not recognized by local DNS server.
Port number is missing or wrong in JDBC URL.
DB server is down.
DB server doesn't accept TCP/IP connections.
DB server has run out of connections.
Something in between Java and DB is blocking connections, e.g. a firewall or proxy.
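To rule out the hostname/DNS causes from inside the cluster, one quick check (a sketch; the test pod name dns-test is arbitrary, and busybox:1.28 is just a small image whose nslookup works reliably) is:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mysql
# the service name should resolve to one or more cluster IPs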
I suggest these steps to better understand the problem:
Connect to the MySQL pod and verify the content of the /etc/mysql/my.cnf file
Connect to MySQL from inside the pod to verify it works (see the example commands after this list)
Remove clusterIP: None from the Service manifest
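For the second step, a sketch of the commands (the pod name placeholder must be filled in from the first command's output; the credentials are the decoded values from the question's Secret):
kubectl get pods -l app=mysql
kubectl exec -it <mysql-pod-name> -- mysql -u testuser -p test
# enter test123 at the password prompt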
Get the IP address of the node where the MySQL pod is running by running:
kubectl get nodes -o wide
If the MySQL service is exposed as type NodePort, get the assigned nodePort:
kubectl get svc
In your application.properties, edit the JDBC URL to use the node's IP address and the nodePort assigned to that service.
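Under that approach the JDBC URL follows this pattern (placeholders only, since the node IP and assigned nodePort depend on your cluster):
jdbc:mysql://<node-ip>:<node-port>/test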
Below is my output of kubectl get svc:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
mysql ClusterIP None <none> 3306/TCP 2h
registry NodePort 10.110.33.13 <none> 8761:31881/TCP 7d