Kubernetes PHP container can't seem to connect to MySQL service

Strange one: I have a PHP container with Symfony, and a MySQL pod behind a service called mysql-service. I pass the MySQL connection details as env variables on the PHP deployment.
Weirdly, Symfony can't connect to the MySQL pod using the service name. When I describe the PHP pod it says:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
I can see it's a name-resolution failure:
getaddrinfo failed: Temporary failure in name resolution
If I change the deployment config so the DB_HOST env variable points at another MySQL server on the local network, it connects just fine. I also know the MySQL user and pass are correct, as the MySQL deployment and the PHP deployment both use the same mysql-secret; I have also logged into the shell of the MySQL pod and connected to the database with the same user and pass, no problem.
It looks like it's something about the mysql service itself.
PHP deployment (some of it removed as it's not relevant to my problem):
containers:
- name: php
lifecycle:
postStart:
exec:
command: ["/bin/bash", "-c", "cd /usr/share/nginx/html && php bin/console cache:clear"]
env:
- name: APP_ENV
value: "prod"
- name: DB_NAME
valueFrom:
secretKeyRef:
name: mysql-secret
key: database
- name: DB_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: username
- name: DB_PASS
valueFrom:
secretKeyRef:
name: mysql-secret
key: password
- name: DB_HOST
value: "mysql-service"
- name: DB_PORT
value: "3306"
imagePullPolicy: Always
image: php-image-here
ports:
- containerPort: 9000
The MySQL deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-deployment
labels:
deploy: mysql
spec:
replicas: 1
selector:
matchLabels:
deploy: mysql
template:
metadata:
labels:
deploy: mysql
spec:
containers:
- name: mysql
imagePullPolicy: Always
image: mysql-image-here
ports:
- containerPort: 3306
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-volume
env:
- name: MYSQL_RANDOM_ROOT_PASSWORD
value: "yes"
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: password
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql-secret
key: database
volumes:
- name: mysql-volume
hostPath:
path: /data/mysql
type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
name: mysql-service
spec:
selector:
deploy: mysql
ports:
- protocol: TCP
port: 3306
targetPort: 3306
The service and the mysql pod are up and running:
NAME READY STATUS RESTARTS AGE
pod/backend-deployment-7d585fd8fd-9z5dp 1/2 CrashLoopBackOff 5 (2m54s ago) 7m9s
pod/mysql-deployment-7cb7999d98-blpzr 1/1 Running 0 50m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend-service NodePort 10.97.250.92 <none> 80:30002/TCP 38m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d18h
service/mysql-service ClusterIP 10.111.226.120 <none> 3306/TCP 50m
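For a name-resolution failure like this, a quick sanity check is to confirm the Service actually has endpoints and that cluster DNS can resolve the name at all. A minimal sketch (the busybox image and the default namespace are assumptions):

kubectl get endpoints mysql-service
# empty ENDPOINTS would mean the Service selector matches no pods

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup mysql-service.default.svc.cluster.local
# tests resolution from a fresh pod, independent of the crashing php container

kubectl get pods -n kube-system -l k8s-app=kube-dns
# getaddrinfo failures across pods often point at the cluster DNS itself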

Related

How to make a MySQL pod save data in a Persistent Volume

I started using Kubernetes to understand concepts like pods, objects and so on. I started to learn about Persistent Volumes and Persistent Volume Claims. From my understanding, if I save data from the mysql pod to a persistent volume, the data is kept no matter if I delete the mysql pod; the data stays on the volume. But I don't think it works in my case...
I have a Spring Boot pod where I save data in the mysql pod. Data is saved and I can retrieve it, but when I restart my pods, or delete or replace them, that saved data is lost, so I think I messed something up. Can you give me a hint, please? Thanks...
Below are my Kubernetes files:
Mysql pod:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels: # must match Service and Deployment labels
app: mysql
spec:
containers:
- image: mysql:5.7
args:
- "--ignore-db-dir=lost+found"
name: mysql #name of the db
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret #name of the secret obj
key: password #which value from inside the secret to take
- name: MYSQL_ROOT_USER
valueFrom:
secretKeyRef:
name: db-secret
key: username
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: db-config
key: name
ports:
- containerPort: 3306
name: mysql
volumeMounts: #mount volume obtained from PVC
- name: mysql-persistent-storage
mountPath: /var/lib/mysql #mounting in the container will be here
volumes:
- name: mysql-persistent-storage #obtaining volume from PVC
persistentVolumeClaim:
claimName: mysql-pv-claim # can use the same claim in different pods
---
apiVersion: v1
kind: Service
metadata:
name: mysql #DNS name
labels:
app: mysql
spec:
ports:
- port: 3306
targetPort: 3306
selector: #mysql pod should contain same label
app: mysql
clusterIP: None # we use DNS
Persistent Volume and Persistent Volume Claim files:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim #name of our pvc
labels:
app: mysql
spec:
volumeName: host-pv #claim that volume created with this name
accessModes:
- ReadWriteOnce
storageClassName: standard
resources:
requests:
storage: 1Gi
---
apiVersion: v1 # version of our PV
kind: PersistentVolume # kind of object we're going to create
metadata:
name: host-pv # name of our PV
spec: #spec of our PV
capacity: #size
storage: 4Gi
volumeMode: Filesystem # storage type: Filesystem or Block
storageClassName: standard
accessModes:
- ReadWriteOnce # multiple pods can use this PV, but only from a single node
# - ReadOnlyMany # read-only from multiple nodes
# - ReadWriteMany # writable from multiple nodes; not supported by the hostPath type
hostPath: #which type of pv
path: "/mnt/data"
type: DirectoryOrCreate
persistentVolumeReclaimPolicy: Retain
My Spring Boot K8s files:
apiVersion: v1
kind: Service
metadata:
name: book-service
spec:
selector:
app: book-example
ports:
- protocol: 'TCP'
port: 8080
targetPort: 8080
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: book-deployment
spec:
replicas: 1
selector:
matchLabels:
app: book-example
template:
metadata:
labels:
app: book-example
spec:
containers:
- name: book-container
image: cinevacineva/kubernetes_book_pv:latest
imagePullPolicy: Always
# ports:
# - containerPort: 8080
env:
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: db-config
key: host
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: db-config
key: name
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-user
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-user
key: password
# & minikube -p minikube docker-env | Invoke-Expression links docker images we create with minikube, so we no longer need to push them
...if i save data from mysql pod to a persistent volume, the data is saved no matter if i delete the mysql pod, the data is saved on the volume, but i don't think it works in my case...
Your previous data will not be available when the pod switches nodes. To use hostPath you don't really need a PVC/PV. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
...
spec:
...
template:
...
spec:
...
nodeSelector: # <-- make sure your pod runs on the same node
<node label>: <value unique to the mysql node>
volumes: # <-- mount the data path on the node, no pvc/pv required.
- name: mysql-persistent-storage
hostPath:
path: /mnt/data
type: DirectoryOrCreate
containers:
- name: mysql
...
volumeMounts: # <-- let mysql write to it
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
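To fill in the nodeSelector placeholder above, you can reuse a label the node already has or add a dedicated one; the mysql-node=true label here is made up for illustration:

kubectl get nodes --show-labels
# pick the node that owns /mnt/data, then pin a label to it
kubectl label nodes <node-name> mysql-node=true

and in the Deployment the placeholder becomes nodeSelector: {mysql-node: "true"}.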

MySQL Kubernetes

I have the following MySQL configuration for Kubernetes, but I cannot connect to the database with my local mysql client. I am doing a port-forward:
kubectl port-forward svc/mysql 3307
and then try to connect with command:
mysql -h 127.0.0.1 -P 3307 -uroot -p
with the password: pass
This password is defined in the secret file for the root user.
The error is: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Do you have any idea what could be wrong?
mysql-deployment:
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
ports:
- port: 3307
targetPort: 3306
selector:
app: mysql
tier: database
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: mysql
tier: database
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
selector:
matchLabels:
app: mysql
tier: database
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
tier: database
spec:
containers:
- image: mysql:5.7 # image from docker-hub
args:
- "--ignore-db-dir=lost+found"
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: db-root-credentials
key: password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: db-conf
key: name
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath:
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
mysqldb-root-credentials:
apiVersion: v1
kind: Secret
metadata:
name: db-root-credentials
data:
password: cGFzcwo=
mysqldb-credentials:
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
data:
username: c2ViYQo=
password: c2ViYQo=
I've reproduced your issue and solved it by changing the way the secrets are created. I used the kubectl CLI to create the secrets:
kubectl create secret generic db-credentials --from-literal=password=xyz --from-literal=username=xyz
kubectl create secret generic mysql-pass --from-literal=password=pass
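A likely reason the original Secret manifests failed: the base64 values in the question (cGFzcwo=, c2ViYQo=) decode with a trailing newline, which happens when the values are generated with echo without -n, so MySQL was initialised with "pass\n" as the password. kubectl create secret stores the literal value with no newline. A quick way to see the difference:

echo 'pass' | base64      # cGFzcwo=  -> decodes to "pass\n"
echo -n 'pass' | base64   # cGFzcw==  -> decodes to "pass"

# inspect a live secret; xxd makes a stray 0a byte visible
kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d | xxd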
Then deployed PVC, Deployment and Service:
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql
tier: database
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: mysql
tier: database
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
selector:
matchLabels:
app: mysql
tier: database
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
tier: database
spec:
containers:
- image: mysql:5.7 # image from docker-hub
args:
- "--ignore-db-dir=lost+found"
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
Exec into the pod:
kubectl exec -it mysql-78d9b7b765-2ms5n -- mysql -h 127.0.0.1 -P 3306 -uroot -p
Once I enter the root password everything works fine:
Welcome to the MySQL monitor. Commands end with ; or \g.
[...]
mysql>
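Note that this recreated Service exposes port 3306 rather than 3307, so the local test from the question becomes (the local port 3307 is arbitrary):

kubectl port-forward svc/mysql 3307:3306
mysql -h 127.0.0.1 -P 3307 -uroot -p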

WordPress + MySQL deployed in Kubernetes - MySQL Connection Error

A Kubernetes scenario with Wordpress + Mysql in a local environment.
The WordPress pod is unable to connect to the MySQL database, with the following error in the WordPress pod logs:
MySQL Connection Error: (1045) Access denied for user 'root'@'10.44.0.5' (using password: YES)
Warning: mysqli::mysqli(): (HY000/1045): Access denied for user 'root'@'10.44.0.5' (using password: YES) in - on line 22
Instructions taken from kubernetes.io at link. The only change I made was creating a Secret resource to store the password, referenced from the MySQL and WordPress containers.
apiVersion: v1
kind: Secret
metadata:
name: mysql-pass
namespace: default
data:
password: cGFzc3dvcmQxMjMK # base64 of password123
type: Opaque
Both pods are in the default namespace on node1, which is a worker node:
NAME READY STATUS RESTARTS AGE IP NODE
wordpress-554dfbbc47-hnr4n 0/1 Error 1 66s 10.44.0.5 node1
wordpress-mysql-5477cbdfbf-29w2r 1/1 Running 0 74s 10.44.0.4 node1
I've no MySQL skills, but if I get a bash shell in the MySQL container and execute:
# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Here the Service output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
wordpress LoadBalancer 10.107.114.255 192.168.1.83 80:32336/TCP
wordpress-mysql ClusterIP None <none> 3306/TCP
Some env variables from the MySQL pod:
....
HOSTNAME=wordpress-mysql-5477cbdfbf-29w2r
MYSQL_MAJOR=5.6
MYSQL_ROOT_PASSWORD=password123
MYSQL_VERSION=5.6.50-1debian9
....
The PersistentVolumes are working fine.
I'm quite stuck on how to go ahead with troubleshooting. Help would be appreciated.
After testing different images for MySQL and WordPress, and reading the useful documentation for the mysql and wordpress images on hub.docker.com, I got the web application stack working.
The configuration:
MySQL:
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.7
imagePullPolicy: IfNotPresent
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: root-pass
key: password
- name: MYSQL_DATABASE
value: mysql
- name: MYSQL_USER
value: mysql
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
nodeSelector:
storage: local
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
Wordpress:
apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: LoadBalancer
externalIPs:
- 192.168.1.83
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress
name: wordpress
imagePullPolicy: IfNotPresent
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
- name: WORDPRESS_DB_USER
value: mysql
- name: WORDPRESS_DB_NAME
value: mysql
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
nodeSelector:
storage: local
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim
Output of the PersistentVolumeClaims:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
mysql-pv-claim Bound persistent-volume-mysql 4Gi RWO local-storage
wp-pv-claim Bound persistent-volume-wordpress 2Gi RWO local-storage
Secrets:
apiVersion: v1
kind: Secret
metadata:
name: root-pass
namespace: default
data:
password: cGFzc3dvcmQ=
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
name: mysql-pass
namespace: default
data:
password: cGFzc3dvcmQ=
type: Opaque
Notes for my example configuration:
on node1, created the directories /mysql/data & /wordpress/data (mount points for the mysql and wordpress containers).
image used for mysql -> mysql:5.7
image used for wordpress -> wordpress
added environment variables according to the documentation of mysql and wordpress.
Did you apply your secret? Is the secret available in the cluster?
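Worth checking along those lines: the original value cGFzc3dvcmQxMjMK decodes to password123 plus a trailing newline (the final K encodes a 0x0a byte), while the working secrets above use cGFzc3dvcmQ= with no newline, which may well be what caused the 1045 errors. To inspect what a secret really contains:

kubectl get secret mysql-pass -o jsonpath='{.data.password}' | base64 -d | xxd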

Can't run MySQL StatefulSet in Kubernetes

I've just started learning Kubernetes and was playing around on the Katacoda platform. I created a StatefulSet for MySQL. It is just a test, so I didn't declare any PVC or mount any volumes. Its declaration and the service's declaration in YAML:
---
apiVersion: v1
kind: Service
metadata:
name: mysql-headless
labels:
run: mysql-sts-demo
spec:
ports:
- port: 3306
name: db
selector:
run: mysql-sts-demo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql-sts-demo
spec:
serviceName: "mysql-headless"
replicas: 1
selector:
matchLabels:
run: mysql-sts-demo
template:
metadata:
labels:
run: mysql-sts-demo
spec:
containers:
- name: mysql
image: mysql:5.7.8
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secrets
key: ROOT_PASSWORD
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql-secrets
key: DBNAME
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: USER
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secrets
key: PASSWORD
It creates those resources successfully, but when I type kubectl get statefulsets, my StatefulSet is always displayed as not ready. What might the issue be? By the way, I need it for use with a Spring PetClinic app which I declared and launched previously as a Deployment.
Can you paste the logs for the StatefulSet, or the output of kubectl get events and kubectl describe <your stateful-set name>?
Coming to secrets: can you check whether the secrets you reference in your StatefulSet definition actually exist, using kubectl get secrets?
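Concretely, something like the following (the pod name follows the StatefulSet convention <name>-0; also note the manifest reads MYSQL_USER from mysql-secret but everything else from mysql-secrets, and a missing secret leaves the pod stuck in CreateContainerConfigError):

kubectl describe statefulset mysql-sts-demo
kubectl get events --sort-by=.lastTimestamp
kubectl logs mysql-sts-demo-0
kubectl get secrets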

How to make a connection with Google Cloud SQL using Google Container Engine?

I am using Node.js, deployed with Kubernetes on Google Container Engine, but I can't make a connection to MySQL.
This is my Node.js connection:
var mysql = require('mysql'); // "mysql is not defined" usually means this require is missing

var pool = mysql.createPool({
  connectionLimit: 100,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  multipleStatements: true,
  // connect through the Cloud SQL proxy's unix socket
  socketPath: '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME
});
I was using this on Google App Engine and it worked. Now I need to move to GKE, and it throws an error saying mysql is not defined.
This is my app-frontend.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-frontend
labels:
app: app
spec:
replicas: 1
template:
metadata:
labels:
app: app
tier: frontend
spec:
containers:
- name: app
image: gcr.io/app-12345/app:1.0
env :
- name : DB_HOST
value : 127.0.0.1:3306
- name : DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name : DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
ports:
- name: http-server
containerPort: 8080
imagePullPolicy: Always
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
imagePullPolicy: Always
command:
- /cloud_sql_proxy
- --dir=/cloudsql
- --instances=mulung=tcp:3306
- --credential_file=/secrets/cloudsql/credentials.json
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
# [END proxy_container]
ports:
- name: portdb
containerPort: 3306
# [START volumes]
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
What should I do to fix it? Thank you, guys.
Can you check the command part?
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=CLOUD_SQL_INSTANCE_NAME",
"-credential_file=/secrets/cloudsql/credentials.json"]
Here is the working deployment.yaml from one of my backend applications:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: <appname>
spec:
replicas: 2
template:
metadata:
labels:
app: <appname>
spec:
containers:
- image: gcr.io/<some_name>/cloudsql-docker/gce-proxy:1.05
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=CLOUD_SQL_INSTANCE_NAME",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
- name: <appname>
image: IMAGE_NAME
ports:
- containerPort: 8888
readinessProbe:
httpGet:
path: /<appname>/health
port: 8888
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 30
successThreshold: 1
failureThreshold: 5
env:
- name: PROJECT_NAME
value: <some_name>
- name: PROJECT_ZONE
value: <some_name>
- name: INSTANCE_NAME
value: <some_name>
- name: INSTANCE_PORT
value: <some_name>
- name: CONTEXT_PATH
value: <appname>
volumeMounts:
- name: application-config
mountPath: /opt/config-mount
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
- name: application-config
secret:
secretName: <appname>-ENV_NAME-config