Wordpress+mysql helm deployment - mysql

Learning Kubernetes, Docker and Helm.
I am diving into a DevOps program and have been asked to deploy WordPress + MySQL with Helm at my internship company. I tried to set it up, but WordPress is unable to connect to the database, and I think the database is not able to write to the mounted NFS volume path. I really need help and an explanation if possible; I tried everything I found but it did not work.
The error I get from WordPress in the web browser is "Error connecting to databases".
Here are my project manifest files and pod status.
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
      name: wordpress-mysql
      protocol: TCP
  selector:
    app: wordpress
    tier: mysql
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
      name: wordpress
      nodePort: 32000
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: bitnami/mysql
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-pvc
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-pvc
          persistentVolumeClaim:
            claimName: mysql-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:5.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_NAME
              value: wordpressdb
            - name: WORDPRESS_DB_PASSWORD
              value: test123
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-pvc
              mountPath: "/var/www/html"
      volumes:
        - name: wordpress-pvc
          persistentVolumeClaim:
            claimName: wordpress-pvc
Pods status
NAME READY STATUS RESTARTS AGE
pod/wordpress-fcf86fbd9-q9csh 1/1 Running 0 34h
pod/wordpress-mysql-6dfb484d54-wrlnm 1/1 Running 0 34h
NAME                      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes        ClusterIP   10.96.0.1     <none>        443/TCP        9d
service/wordpress         NodePort    10.98.97.25   <none>        80:32000/TCP   34h
service/wordpress-mysql   ClusterIP   10.97.50.98   <none>        3306/TCP       34h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/wordpress 1/1 1 1 34h
deployment.apps/wordpress-mysql 1/1 1 1 34h
NAME DESIRED CURRENT READY AGE
replicaset.apps/wordpress-fcf86fbd9 1 1 1 34h
replicaset.apps/wordpress-mysql-6dfb484d54 1 1 1 34h
Thanks in advance for your help. I really need it.

Check your DB password and the WordPress connection password: the two values you are passing are different, and ideally they should be the same. MYSQL_ROOT_PASSWORD is root, but WORDPRESS_DB_PASSWORD is test123; update the WordPress password so it can connect to MySQL.
Also, for WORDPRESS_DB_NAME, have you actually created this database in MySQL, or are you just passing the name? The failure could be WordPress looking for a database that was never created. Try following the official tutorial and check your setup against it: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/ -- it is mostly the same.
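If it helps, here is a minimal sketch of the two env blocks with the credentials kept consistent and the database created up front; the password value is only a placeholder:
# MySQL container
env:
  - name: MYSQL_ROOT_PASSWORD
    value: changeme          # same value WordPress uses below
  - name: MYSQL_DATABASE
    value: wordpressdb       # created automatically on first start
# WordPress container
env:
  - name: WORDPRESS_DB_HOST
    value: wordpress-mysql
  - name: WORDPRESS_DB_NAME
    value: wordpressdb
  - name: WORDPRESS_DB_USER
    value: root
  - name: WORDPRESS_DB_PASSWORD
    value: changeme          # must match MYSQL_ROOT_PASSWORD above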


MySql databases deleted on new deployment in kubernetes [closed]

I have created a MySQL container in a Kubernetes Pod using a YAML deployment file.
I am able to execute MySQL queries in that container and have created a few databases and tables with data in them.
But when I make some changes to other project code not related to the Kubernetes files and deploy it, all my previous databases are deleted from the MySQL container when the pod is recreated. I have also given a mount path to mount the PVC into my container.
So my question is: how do I store the databases permanently, so that on recreation of the pod they are not deleted and the newly created pod can still access them?
It's always best practice to run a single container inside a single Pod.
You have to use a PVC and PV to store the data, so that the data persists even if the Pod restarts or you update the Pod definition with a YAML change.
For example, here is one for a MySQL database:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Ref : https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
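The mysql-pv-claim referenced above has to exist as well; a minimal PVC sketch (the storage size is just a placeholder, and your cluster's default StorageClass is assumed):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi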
Is the PVC mounted on the MySQL data directory, i.e. /var/lib/mysql? If you are using a different data directory, use the appropriate mount path for the PVC where MySQL stores its data. You can find a sample spec for a MySQL deployment
here
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          livenessProbe:
            tcpSocket:
              port: 3306
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim

WordPress and MySQL pods roll out correctly but WordPress is not showing up in my browser

I'm a beginner with K8s, so please bear with me.
I've rolled out WordPress with MySQL using Kubernetes. The rollout has completed and is running on my machine using minikube.
However, WordPress is not showing up in my browser.
These are my pods
mysql291020-68d989895b-vxbwg 1/1 Running 0 18h
wp291020-7dccd94bd5-dfqqn 1/1 Running 0 19h
These are my services
mysql291020-68d989895b-vxbwg 1/1 Running 0 18h
wp291020-7dccd94bd5-dfqqn 1/1 Running 0 19h
After some thought, I suspect it may be related to how I set up my service for WordPress (see code below).
apiVersion: v1
kind: Service
metadata:
  name: wp291020
  labels:
    app: wp291020
spec:
  ports:
    - port: 80
  selector:
    app: wp291020
    tier: frontend
  type: LoadBalancer
Not sure if it is the right place to look. I'm adding below my deployment for WordPress, and also the service and deployment for MySQL, in case they are needed.
Deployment for WordPress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp291020
spec:
  selector:
    matchLabels:
      app: wp291020
  replicas: 1
  template:
    metadata:
      labels:
        app: wp291020
    spec:
      containers:
        - name: wp-deployment
          image: andykwo/test123:wp_291020
          ports:
            - containerPort: 80
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          emptyDir: {}
Service for MySQL:
apiVersion: v1
kind: Service
metadata:
  name: mysql291020
  labels:
    app: mysql291020
spec:
  ports:
    - port: 3306
  selector:
    app: mysql291020
    tier: mysql
  clusterIP: None
Deployment for MySQL:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql291020
spec:
  selector:
    matchLabels:
      app: mysql291020
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql291020
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: wordpress
            - name: MYSQL_PASSWORD
              value: my_wordpress_db_password
            - name: MYSQL_ROOT_PASSWORD
              value: my_wordpress_db_password
            - name: MYSQL_USER
              value: wordpress
          name: db
          image: andykwo/test123:wp_291020
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: db-data
              mountPath: /var/lib/mysql
      volumes:
        - name: db-data
          emptyDir: {}
Just to mention that the Docker containers also work correctly when run as plain containers, and there I do have access to WordPress through my browser.
I can provide my docker-compose YAML if asked.
Thank you.
PS: I'm adding my docker-compose file, in case:
version: '3.3'
services:
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_files:/var/www/html
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: my_wordpress_db_password
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: my_db_root_password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: my_wordpress_db_password
volumes:
  wordpress_files:
  db_data:
To access the frontend from your web browser, you can port-forward the service to your local machine.
kubectl port-forward svc/wp291020 80:80
If you are using a cloud system with an external IP, you would need to verify that IP address is getting attached to your service, as shown here
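A quick way to check is to look at the Service itself; the EXTERNAL-IP column stays <pending>/<none> until a load balancer IP is attached (on minikube a LoadBalancer only gets one if you run minikube tunnel):
kubectl get svc wp291020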
If you're using Minikube you can easily expose your frontend Deployment via a Service of type NodePort or LoadBalancer. As Minikube is unable to create a real load balancer, it uses a NodePort under the hood anyway.
So if you exposed your wp291020 Deployment via the wp291020 Service, you can get its URL by typing:
minikube service wp291020 --url
and it will show you something like in the example below:
http://172.17.0.15:31637
I'm also wondering if you have any good reason for clusterIP: None in your Service definition. The mysql291020 Deployment is exposed within your Kubernetes cluster to your frontend Pods via a Service whose type is ClusterIP (if you don't specify the type explicitly, ClusterIP is created by default), so it should have its own cluster IP to be reachable by the frontend Pods. I think you can simply get rid of the clusterIP: None line in your mysql291020 Service definition. What you have in your example is called a headless service with a selector, but I guess there is no real need for it in your case.
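A sketch of the mysql291020 Service without the headless setting, keeping everything else as in the question (note that the selector only matches Pods that actually carry these labels, so the tier: mysql line may need to be added to the Pod template or dropped here):
apiVersion: v1
kind: Service
metadata:
  name: mysql291020
  labels:
    app: mysql291020
spec:
  ports:
    - port: 3306
  selector:
    app: mysql291020
    tier: mysql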

Can't connect to mysql in kubernetes

I have deployed a MySQL database in Kubernetes and exposed it via a service. When my application tries to connect to that database, the connection keeps being refused. I get the same when I try to access it locally. The Kubernetes node runs in minikube.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql_db
          imagePullPolicy: Never
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
And here's my yaml for persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Users/Work/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After this I get this by running minikube service list:
default | mysql-service | http://192.168.99.101:31613
However, I cannot access the database from my application or from my local machine.
What am I missing, or did I misconfigure something?
EDIT:
I do not define any envs here since the image already contains a running MySQL DB, and some scripts are run within the Docker image too.
MySQL must not have started; confirm it by checking the logs: kubectl get pods | grep mysql; kubectl logs -f $POD_ID. Remember you have to specify the environment variables MYSQL_DATABASE and MYSQL_ROOT_PASSWORD for MySQL to start. If you don't want to set a password for root, specify the corresponding variable instead (e.g. MYSQL_ALLOW_EMPTY_PASSWORD). Here is an example of a MySQL YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql_db
          imagePullPolicy: Never
          env:
            - name: MYSQL_DATABASE
              value: main_db
            - name: MYSQL_ROOT_PASSWORD
              value: s4cur4p4ss
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Ok, I figured it out. After looking through the logs I noticed the error Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied).
I had to add this to my Docker image when building it:
RUN usermod -u 1000 mysql
After rebuilding the image, everything started working. Thank you guys.
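For context, a minimal sketch of such a Dockerfile; the base image and the init-script line are assumptions, since the question's image (mysql_db) is custom built:
# hypothetical Dockerfile
FROM mysql:5.7
# align the mysql user's UID with the UID that owns the mounted host path,
# so the server can write to /var/lib/mysql
RUN usermod -u 1000 mysql
# any custom init scripts would be copied in as before, e.g.:
# COPY init.sql /docker-entrypoint-initdb.d/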
I thought I was connecting to my DB server correctly, but I was wrong. My DB deployment was online (tested with kubectl exec -it xxxx -- bash and then mysql -u root --password=$MYSQL_ROOT_PASSWORD), but that wasn't the problem.
I made the simple mistake of getting my service and deployment labels confused. My DB service used a different label than what my Joomla ConfigMap had specified as the MySQL host.
To summarize, the DB service yaml was
metadata:
  labels:
    app: fnjoomlaopencart-db-service
and the Joomla configMap yaml needed
data:
  # point to the DB service
  MYSQL_HOST: fnjoomlaopencart-db-service
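In other words, the value used as the MySQL host is resolved through cluster DNS against the Service's metadata.name, so the matching Service would look roughly like this (the selector and port here are assumptions, copy them from your actual DB deployment):
apiVersion: v1
kind: Service
metadata:
  name: fnjoomlaopencart-db-service   # must equal the MYSQL_HOST value above
spec:
  selector:
    app: fnjoomlaopencart-db          # assumed Pod label of the DB deployment
  ports:
    - port: 3306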

How do I access MySQL as a Service on Kubernetes?

I have the following deployment...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              subPath: "mysql"
              name: mysql-data
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: ROOT_PASSWORD
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-data-disk
This works great; I can access the db like this...
kubectl exec -it mysql-deployment-<POD-ID> -- /bin/bash
Then I run...
mysql -u root -h localhost -p
And I can log into it. However, when I try to access it as a service by using the following yaml...
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
I can see it by running kubectl describe service mysql-service, which gives...
Name: mysql-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mysql-service","namespace":"default"},"spec":{"ports":[{"port":33...
Selector: app=mysql
Type: ClusterIP
IP: 10.101.1.232
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 172.17.0.4:3306
Session Affinity: None
Events: <none>
and I get the ip by running kubectl cluster-info
#kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
but when I try to connect using Oracle SQL Developer like this...
It says it cannot connect.
How do I connect to the MySQL running on K8s?
Service type ClusterIP will not be accessible outside of the Pod network.
If you don't have the LoadBalancer option, then you have to use either a Service of type NodePort or kubectl port-forward.
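For a quick local test without changing the Service type, something like this should work (the local port is arbitrary, any free port does):
kubectl port-forward svc/mysql-service 3306:3306
# then point SQL Developer or the mysql client at 127.0.0.1:3306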
You need your mysql service to be of type NodePort instead of ClusterIP to access it from outside Kubernetes.
Use the node port in your client config.
Example Service:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      nodePort: 30036
      targetPort: 3306
So then you can use the port: 30036 in your client.
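For instance, from outside the cluster you would connect with the node IP plus that node port; using the master IP shown in the question's kubectl cluster-info output (on a single-node minikube the master IP is also the node IP, and minikube ip prints it):
mysql -h 192.168.99.100 -P 30036 -u root -p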

Restoring wordpress and mysql data to kubernetes volume

I am currently running MySQL, WordPress and my custom Node.js + Express application in Kubernetes pods in the same cluster. Everything is working quite well, but my problem is that all the data gets reset if I have to rerun the deployments, services and persistent volumes.
I have configured WordPress quite extensively and would like to save all the data and restore it again after redeploying everything. How is this possible, or am I thinking about it the wrong way? I am using the mysql:5.6 and wordpress:4.8-apache images.
I also want to transfer my configuration to my other team members so they don't have to configure WordPress again.
This is my mysql-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: hidden
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
This is the wordpress-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              value: hidden
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
How is this possible to do or am I thinking something wrong?
It might be better to move your configuration mindset from working directly on running container instances to configuring container images/manifests. There are several approaches; here are just some pointers:
Create your own Dockerfile based on the images you referenced and bundle the configuration files inside them. This is a viable approach if the configuration is more or less static and can be handled with env vars or infrequent image builds, but it requires a Docker registry to work with k8s. In this approach you would add all changed files to the Docker build context and then COPY them to the appropriate places, as sketched just below.
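A sketch of that first approach, assuming the official image layout where /var/www/html is seeded from /usr/src/wordpress on startup (the copied paths are placeholders for whatever you actually changed):
# hypothetical Dockerfile
FROM wordpress:4.8-apache
COPY wp-content/themes/my-theme/ /usr/src/wordpress/wp-content/themes/my-theme/
COPY wp-content/plugins/my-plugin/ /usr/src/wordpress/wp-content/plugins/my-plugin/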
Create ConfigMaps and mount them onto the container filesystem as config files where a change is required. This way you can still use the base images you reference directly, and changes are limited to Kubernetes manifests instead of rebuilding Docker images. The approach here would be to identify all changed files in the container, create Kubernetes ConfigMaps out of them, and finally mount them appropriately. I don't know exactly which things you are changing, but here is an example of how you can place an nginx config in a ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cm-nginx-example
data:
  nginx.conf: |
    server {
      listen 80;
      ...
      # actual config here
      ...
    }
and then mount it in the container in the appropriate place like so:
...
containers:
  - name: nginx-example
    image: nginx
    ports:
      - containerPort: 80
    volumeMounts:
      - mountPath: /etc/nginx/conf.d
        name: nginx-conf
volumes:
  - name: nginx-conf
    configMap:
      name: cm-nginx-example
      items:
        - key: nginx.conf
          path: nginx.conf
...
Mount persistent volumes (or subPaths) at the places where you need configs and keep the configuration on persistent volumes.
Personally, I'd probably opt for ConfigMaps, since you can easily share and edit those alongside the k8s deployments, and the configuration details are not lost as some mystical "extensive work" but can be reviewed, tweaked and stored in a version control system for tracking...