OpenShift aPaaS v3 failed Liveness probes vs failed Readiness probes

What happens when a pod in a pool fails its liveness probe, and what happens when it fails its readiness probe?

There are a few more differences between liveness and readiness probes, but one of the main differences is this: a failed readiness probe removes the pod from the service endpoint pool but DOES NOT RESTART it, whereas a failed liveness probe removes the pod from the pool and RESTARTS it.
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-vs-readiness
  name: liveness-vs-readiness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; touch /tmp/liveness; sleep 999999
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/liveness
      initialDelaySeconds: 5
      periodSeconds: 5
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Let's create this pod and see it in action: oc create -f liveness-vs-readiness.yaml
Here is the pod status output while we perform actions inside the pod. The number in front of each line corresponds to the action performed inside the pod:
oc get pods -w
NAME READY STATUS RESTARTS AGE
[1] liveness-vs-readiness-exec 1/1 Running 0 44s
[2] liveness-vs-readiness-exec 0/1 Running 0 1m
[3] liveness-vs-readiness-exec 1/1 Running 0 2m
[4] liveness-vs-readiness-exec 0/1 Running 1 3m
liveness-vs-readiness-exec 1/1 Running 1 3m
Actions inside the container:
[root@default ~]# oc rsh liveness-vs-readiness-exec
# [1] we rsh to the pod and do nothing. Pod is healthy and live
# [2] we remove health probe file and see that pod goes to notReady state
# rm /tmp/healthy
#
# [3] we create health file. Pod goes into ready state without restart
# touch /tmp/healthy
#
# [4] we remove liveness file. Pod goes into notready state and is restarted just after that
# rm /tmp/liveness
# command terminated with exit code 137
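The same distinction applies to HTTP-based probes. A minimal sketch, assuming a web application that serves /healthz and /ready on port 8080 (the paths, port and thresholds below are placeholders, not part of the example above):
    livenessProbe:
      httpGet:
        path: /healthz          # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3       # after 3 consecutive failures the container is restarted
    readinessProbe:
      httpGet:
        path: /ready            # hypothetical readiness endpoint
        port: 8080
      periodSeconds: 5
      failureThreshold: 3       # after 3 consecutive failures the pod is removed from the service endpoints
failureThreshold defaults to 3, so it only needs to be set explicitly if you want a different tolerance.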

Related

mariadb crashes inside kubernetes pod with hostpath volume

I'm trying to move a number of docker containers on a linux server to a test kubernetes-based deployment running on a different linux machine, where I've installed kubernetes as a k3s instance inside a vagrant virtual machine.
One of these containers is a mariadb instance, with a bind volume mapped to a host directory.
This is the relevant portion of the docker-compose I'm using:
academy-db:
  image: 'docker.io/bitnami/mariadb:10.3-debian-10'
  container_name: academy-db
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
    - MARIADB_USER=bn_moodle
    - MARIADB_DATABASE=bitnami_moodle
  volumes:
    - type: bind
      source: ./volumes/moodle/mariadb
      target: /bitnami/mariadb
  ports:
    - '3306:3306'
Note that this works correctly (the container is used by another application container, which connects to it and reads data from the db without problems).
I then tried to convert this to a kubernetes configuration, copying the volume folder to the destination machine and using the following kubernetes .yaml deployment files.
This includes a deployment .yaml, a persistent volume claim and a persistent volume, as well as a NodePort service to make the container accessible. For the data volume, I'm using a simple hostPath volume pointing to the contents copied from the docker-compose's bind mounts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: academy-db
spec:
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      containers:
      - env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        - name: MARIADB_DATABASE
          value: bitnami_moodle
        - name: MARIADB_USER
          value: bn_moodle
        image: docker.io/bitnami/mariadb:10.3-debian-10
        name: academy-db
        ports:
        - containerPort: 3306
        resources: {}
        volumeMounts:
        - mountPath: /bitnami/mariadb
          name: academy-db-claim
      restartPolicy: Always
      volumes:
      - name: academy-db-claim
        persistentVolumeClaim:
          claimName: academy-db-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: academy-db-pv
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "<...full path to deployment folder on the server...>/volumes/moodle/mariadb"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: academy-db-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeName: academy-db-pv
---
apiVersion: v1
kind: Service
metadata:
  name: academy-db-service
spec:
  type: NodePort
  ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
  selector:
    name: academy-db
After applying these manifests, everything seems to work fine, in the sense that with kubectl get ... the pod and the volumes seem to be running correctly:
kubectl get pods
NAME READY STATUS RESTARTS AGE
academy-db-5547cdbc5-65k79 1/1 Running 9 15d
.
.
.
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
academy-db-pv 1Gi RWO Retain Bound default/academy-db-claim 15d
.
.
.
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
academy-db-claim Bound academy-db-pv 1Gi RWO 15d
.
.
.
This is the pod's log:
kubectl logs pod/academy-db-5547cdbc5-65k79
mariadb 10:32:05.66
mariadb 10:32:05.66 Welcome to the Bitnami mariadb container
mariadb 10:32:05.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb 10:32:05.66 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb 10:32:05.66
mariadb 10:32:05.67 INFO ==> ** Starting MariaDB setup **
mariadb 10:32:05.68 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 10:32:05.68 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb 10:32:05.69 INFO ==> Initializing mariadb database
mariadb 10:32:05.69 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO ==> Using persisted data
mariadb 10:32:05.71 INFO ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO ==> Starting mariadb in background
and the describe pod command:
Name: academy-db-5547cdbc5-65k79
Namespace: default
Priority: 0
Node: zdmp-kube/192.168.33.99
Start Time: Tue, 22 Dec 2020 13:33:43 +0000
Labels: name=academy-db
pod-template-hash=5547cdbc5
Annotations: <none>
Status: Running
IP: 10.42.0.237
IPs:
IP: 10.42.0.237
Controlled By: ReplicaSet/academy-db-5547cdbc5
Containers:
academy-db:
Container ID: containerd://68af105f15a1f503bbae8a83f1b0a38546a84d5e3188029f539b9c50257d2f9a
Image: docker.io/bitnami/mariadb:10.3-debian-10
Image ID: docker.io/bitnami/mariadb@sha256:1d8ca1757baf64758e7f13becc947b9479494128969af5c0abb0ef544bc08815
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 07 Jan 2021 10:32:05 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 07 Jan 2021 10:22:03 +0000
Finished: Thu, 07 Jan 2021 10:32:05 +0000
Ready: True
Restart Count: 9
Environment:
ALLOW_EMPTY_PASSWORD: yes
MARIADB_DATABASE: bitnami_moodle
MARIADB_USER: bn_moodle
MARIADB_PASSWORD: bitnami
Mounts:
/bitnami/mariadb from academy-db-claim (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-x28jh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
academy-db-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: academy-db-claim
ReadOnly: false
default-token-x28jh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-x28jh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 15d (x8 over 15d) kubelet Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
Normal Created 15d (x8 over 15d) kubelet Created container academy-db
Normal Started 15d (x8 over 15d) kubelet Started container academy-db
Normal SandboxChanged 18m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 8m14s (x2 over 18m) kubelet Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
Normal Created 8m14s (x2 over 18m) kubelet Created container academy-db
Normal Started 8m14s (x2 over 18m) kubelet Started container academy-db
Later, though, I noticed that the client application has problems connecting. After some investigation I've concluded that, though the pod is running, the mariadb process running inside it may have crashed just after startup. If I enter the container with kubectl exec and try to run, for instance, the mysql client, I get:
kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- /bin/bash
I have no name!@academy-db-5547cdbc5-65k79:/$ mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)
Any idea of what could cause the problem, or how I can investigate the issue further? (Note: I'm not an expert in Kubernetes; I only started learning it recently.)
Edit: following @Novo's comment, I tried deleting the volume folder and letting mariadb recreate the database from scratch.
Now my pod doesn't even start, ending up in CrashLoopBackOff!
By comparing the pod logs I noticed that the previous mariadb log contained this message:
...
mariadb 10:32:05.69 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO ==> Using persisted data
mariadb 10:32:05.71 INFO ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO ==> Starting mariadb in background
Now replaced with
...
mariadb 14:15:57.32 INFO ==> Updating 'my.cnf' with custom configuration
mariadb 14:15:57.32 INFO ==> Setting user option
mariadb 14:15:57.35 INFO ==> Installing database
Could it be that the issue is related to some access-rights problem on the volume folders in the host vagrant machine?
By default, hostPath directories are created with permission 755, owned by the user and group of the kubelet. To use the directory, you can try adding the following to your deployment:
spec:
  securityContext:
    fsGroup: <gid>
Where gid is the group used by the process in your container.
Also, you could fix the issue on the host itself by changing the permissions of the folder you want to mount into the container:
chown -R <uid>:<gid> /path/to/volume
where uid and gid are the userId and groupId from your app.
chmod -R 777 /path/to/volume
This should solve your issue.
But overall, a Deployment is not what you want to create in this case, because Deployments are meant for stateless workloads. For stateful apps, Kubernetes has StatefulSets. Use those together with a volumeClaimTemplate plus spec.securityContext.fsGroup, and k3s will create the persistent volume and the persistent volume claim for you, using its default storage class, which is local storage (on your node).
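A minimal sketch of that approach, reusing the names from above (the fsGroup value and storage size are assumptions; check the uid/gid your mariadb image actually runs as):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: academy-db
spec:
  serviceName: academy-db
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      securityContext:
        fsGroup: 1001            # assumed gid of the mariadb process in the bitnami image; verify for your image
      containers:
      - name: academy-db
        image: docker.io/bitnami/mariadb:10.3-debian-10
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /bitnami/mariadb
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi           # provisioned by k3s' default (local) storage class
With a volumeClaimTemplate there is no separate PV/PVC manifest to maintain: the claim is created per replica and bound by the default storage class on the node.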

how to securely expose the API address for ipfs cluster services?

I implemented the following from the docs and it all works, but the API is listening on 0.0.0.0, which is a security hole that allows people from outside the network to connect and add files. I want to create a private network and secure it by allowing API access only from localhost or from a known server. But when I do that, I find the peers themselves no longer connect. Is there a solution for this?
version: '3.4'
# This is an example docker-compose file to quickly test an IPFS Cluster
# with multiple peers on a contained environment.
# It runs 3 cluster peers (cluster0, cluster1...) attached to go-ipfs daemons
# (ipfs0, ipfs1...) using the CRDT consensus component. Cluster peers
# autodiscover themselves using mDNS on the docker internal network.
#
# To interact with the cluster use "ipfs-cluster-ctl" (the cluster0 API port is
# exposed to the localhost). You can also "docker exec -ti cluster0 sh" and run
# it from the container. "ipfs-cluster-ctl peers ls" should show all 3 peers a few
# seconds after start.
#
# For persistence, a "compose" folder is created and used to store configurations
# and states. This can be used to edit configurations in subsequent runs. It looks
# as follows:
#
# compose/
# |-- cluster0
# |-- cluster1
# |-- ...
# |-- ipfs0
# |-- ipfs1
# |-- ...
#
# During the first start, default configurations are created for all peers.
services:
  ##################################################################################
  ## Cluster PEER 0 ################################################################
  ##################################################################################
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs:release
    # ports:
    #   - "4001:4001" # ipfs swarm - expose if needed/wanted
    #   - "5001:5001" # ipfs api - expose if needed/wanted
    #   - "8080:8080" # ipfs gateway - expose if needed/wanted
    volumes:
      - ./compose/ipfs0:/data/ipfs
  cluster0:
    container_name: cluster0
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs0
    environment:
      CLUSTER_PEERNAME: cluster0
      CLUSTER_SECRET: ${CLUSTER_SECRET} # From shell variable if set
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs0/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*' # Trust all peers in Cluster
      CLUSTER_RESTAPI_HTTPLISTENMULTIADDRESS: /ip4/0.0.0.0/tcp/9094 # Expose API
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    ports:
      # Open API port (allows ipfs-cluster-ctl usage on host)
      - "127.0.0.1:9094:9094"
      # The cluster swarm port would need to be exposed if this container
      # was to connect to cluster peers on other hosts.
      # But this is just a testing cluster.
      # - "9096:9096" # Cluster IPFS Proxy endpoint
    volumes:
      - ./compose/cluster0:/data/ipfs-cluster
  ##################################################################################
  ## Cluster PEER 1 ################################################################
  ##################################################################################
  # See Cluster PEER 0 for comments (all removed here and below)
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs:release
    volumes:
      - ./compose/ipfs1:/data/ipfs
  cluster1:
    container_name: cluster1
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs1
    environment:
      CLUSTER_PEERNAME: cluster1
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs1/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*'
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    volumes:
      - ./compose/cluster1:/data/ipfs-cluster
  ##################################################################################
  ## Cluster PEER 2 ################################################################
  ##################################################################################
  # See Cluster PEER 0 for comments (all removed here and below)
  ipfs2:
    container_name: ipfs2
    image: ipfs/go-ipfs:release
    volumes:
      - ./compose/ipfs2:/data/ipfs
  cluster2:
    container_name: cluster2
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs2
    environment:
      CLUSTER_PEERNAME: cluster2
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs2/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*'
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    volumes:
      - ./compose/cluster2:/data/ipfs-cluster
# For adding more peers, copy PEER 1 and rename things to ipfs2, cluster2.
# Keep bootstrapping to cluster0.
First you need to create the private network in IPFS; this allows your ipfs nodes to connect only to ipfs nodes that have the same swarm key.
In your ipfs0 and ipfs1 services, you need to add two new environment variables and a new volume:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs:release
    # ports:
    #   - "4001:4001" # ipfs swarm - expose if needed/wanted
    #   - "5001:5001" # ipfs api - expose if needed/wanted
    #   - "8080:8080" # ipfs gateway - expose if needed/wanted
    environment:
      - LIBP2P_FORCE_PNET=1
      - IPFS_SWARM_KEY_FILE=/data/ipfs/swarm.key
    volumes:
      - ./compose/ipfs0:/data/ipfs
      - ./swarm.key:/data/ipfs/swarm.key
To generate the swarm.key, check this link. The swarm.key must be in your ipfs root path (by default ~/.ipfs; in the container the ipfs path is /data/ipfs). The swarm.key must be the same for all ipfs nodes.
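For reference, the generated swarm.key is normally a small three-line text file of this form (the last line is 64 hexadecimal characters; the value shown is a placeholder, generate your own from a secure random source):
/key/swarm/psk/1.0.0/
/base16/
<64 random hexadecimal characters>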
For IPFS Cluster you already have it right; with this command you can generate your cluster secret:
export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
I recommend adding files using the ipfs-cluster REST API. Check this link to configure ipfs-cluster and make file uploads more secure (using the API secret key), or you can simply publish the API port on localhost only:
    ports:
      - "127.0.0.1:9094:9094" # Only open the port 9094 in localhost

Kubernetes: Error when creating a StatefulSet with a MySQL container

Good morning,
I'm very new to Docker and Kubernetes, and I don't really know where to start looking for help. I created a database container with Docker, and I want to manage and scale it with Kubernetes. I started by installing minikube on my machine, and tried to create first a Deployment and then a StatefulSet for a database container. But I have a problem with the StatefulSet when creating a Pod with a database (mariadb or mysql): when I use a Deployment the Pods load and work fine, but the same Pods do not work in a StatefulSet, returning errors asking for the MYSQL constants. This is the Deployment, which I create with the command kubectl create -f deployment.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mydb-deployment
spec:
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
      - name: mydb
        image: ignasiet/aravomysql
        ports:
        - containerPort: 3306
And when listing the deployments: kubectl get Deployments:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mydb-deployment 1 1 1 1 2m
And the pods: kubectl get pods:
NAME READY STATUS RESTARTS AGE
mydb-deployment-59c867c49d-4rslh 1/1 Running 0 50s
But since I want to create a persistent database, I try to create a statefulSet object with the same container, and a persistent volume.
Thus, when creating the following StatefulSet with kubectl create -f statefulset.yaml:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
      - name: aravo-database
        image: ignasiet/aravomysql
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: volume-mydb
          mountPath: /var/lib/mysql
      volumes:
      - name: volume-mydb
        persistentVolumeClaim:
          claimName: config-mydb
With the service kubectl create -f service-db.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  type: ClusterIP
  ports:
  - port: 3306
  selector:
    name: mydb-pod
And the permission file kubectl create -f permissions.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-mydb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
The pods do not work. They give an error:
NAME READY STATUS RESTARTS AGE
statefulset-mydb-0 0/1 CrashLoopBackOff 1 37s
And when analyzing the logs kubectl logs statefulset-mydb-0:
`error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD`
How is it possible that it asks for these variables when the container already has an initialization script and works perfectly? And why does it ask only when launching as a StatefulSet, and not when launching the Deployment?
Thanks in advance.
I pulled your image ignasiet/aravomysql to try to figure out what went wrong. As it turns out, your image already has an initialized MySQL data directory at /var/lib/mysql:
$ docker run -it --rm --entrypoint=sh ignasiet/aravomysql:latest
# ls -al /var/lib/mysql
total 110616
drwxr-xr-x 1 mysql mysql 240 Nov 7 13:19 .
drwxr-xr-x 1 root root 52 Oct 29 18:19 ..
-rw-rw---- 1 root root 16384 Oct 29 18:18 aria_log.00000001
-rw-rw---- 1 root root 52 Oct 29 18:18 aria_log_control
-rw-rw---- 1 root root 1014 Oct 29 18:18 ib_buffer_pool
-rw-rw---- 1 root root 50331648 Oct 29 18:18 ib_logfile0
-rw-rw---- 1 root root 50331648 Oct 29 18:18 ib_logfile1
-rw-rw---- 1 root root 12582912 Oct 29 18:18 ibdata1
-rw-rw---- 1 root root 0 Oct 29 18:18 multi-master.info
drwx------ 1 root root 2696 Nov 7 13:19 mysql
drwx------ 1 root root 12 Nov 7 13:19 performance_schema
drwx------ 1 root root 48 Nov 7 13:19 yypy
However, when mounting a PersistentVolume or just a simple Docker volume to /var/lib/mysql, it's initially empty and therefore the script thinks your database is uninitialized. You can reproduce this issue with:
$ docker run -it --rm --mount type=tmpfs,destination=/var/lib/mysql ignasiet/aravomysql:latest
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
If you have a bunch of scripts you need to run to initialize the database, you have two options:
Create a Dockerfile based on the mysql Dockerfile, and add shell scripts or SQL scripts to /docker-entrypoint-initdb.d. More details available here under "Initializing a fresh instance".
Use the initContainers property in the PodTemplateSpec, something like:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
      - name: aravo-database
        image: ignasiet/aravomysql
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: volume-mydb
          mountPath: /var/lib/mysql
      initContainers:
      - name: aravo-database-init
        command:
        - /script/to/initialize/database
        image: ignasiet/aravomysql
        volumeMounts:
        - name: volume-mydb
          mountPath: /var/lib/mysql
      volumes:
      - name: volume-mydb
        persistentVolumeClaim:
          claimName: config-mydb
The issue you are facing is not specific to StatefulSets. It is caused by the persistent volume. If you used a StatefulSet without the persistent volume, you would not face this problem; if you used a Deployment with a persistent volume, you would face the same issue.
Why? Ok, let me explain.
Setting one of the environment variables MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD or MYSQL_RANDOM_ROOT_PASSWORD is mandatory for creating a new database. Read the Environment Variables part here.
But if you initialize the database from a script, you are not required to provide them. Look at this line of docker-entrypoint.sh here. It checks whether there is already a database in the /var/lib/mysql directory. If there is none, it will try to create one, and if you don't provide any of the specified environment variables it gives the error you are getting. But if it finds an existing database there, it will not try to create one and you will not see the error.
Now, the question is: you already initialized the database, so why is it still complaining about the environment variables?
Here the persistent volume comes into play. As you have mounted the persistent volume at the /var/lib/mysql directory, that directory now points to your persistent volume, which is currently empty. So when your container runs the docker-entrypoint.sh script, it does not find any database in /var/lib/mysql, because the path now points to the empty persistent volume instead of the original /var/lib/mysql directory of your docker image, which contained the initialized database. So it tries to create a new database and complains because you haven't provided the MYSQL_ROOT_PASSWORD environment variable.
When you don't use a persistent volume, /var/lib/mysql points to the original directory, which contains the initialized database, so you don't see the error.
Then how can you initialize the mysql database properly?
In order to initialize MySQL from a script, you just need to put the script into /docker-entrypoint-initdb.d. Use a vanilla mysql image, put your initialization script into a volume, then mount the volume at the /docker-entrypoint-initdb.d directory. MySQL will be initialized.
Check this answer for details on how to initialize from script: https://stackoverflow.com/a/45682775/7695859
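As a rough sketch of that last suggestion, using a ConfigMap as the volume that carries the init script (the ConfigMap name, image tag, database name, script contents and password below are placeholders; the rest mirrors the StatefulSet from the question):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mydb-init
data:
  init.sql: |
    -- hypothetical schema/seed script; replace with your own statements
    CREATE DATABASE IF NOT EXISTS mydb;
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
      - name: mydb
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD   # still required the first time the empty volume is initialized
          value: change-me
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: volume-mydb
          mountPath: /var/lib/mysql
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: init-scripts
        configMap:
          name: mydb-init
      - name: volume-mydb
        persistentVolumeClaim:
          claimName: config-mydb
On the first start the entrypoint initializes /var/lib/mysql on the persistent volume and runs every script found in /docker-entrypoint-initdb.d; on later restarts the existing data is reused and the scripts are skipped.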

MatchNodeSelector error while deploying pods in OpenShift

I created the OpenShift cluster with 1 master and 2 nodes. I'm able to deploy the hawkular, cassandra and heapster pods for monitoring, and I'm able to set up the OpenShift web console.
However, when I try to deploy a pod manually I get a MatchNodeSelector error.
Inputs:
The hello.yaml file for deploying the pod with the command oc create -f hello.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod3
spec:
  containers:
  - name: hello
    image: hello
    imagePullPolicy: IfNotPresent
Expected output:
The pods should be in running state and should reflect the performance on the web console.
Actual output:
The pod status after running oc create -f hello.yaml
[root@master docker]# oc get pods -n demo
NAME READY STATUS RESTARTS AGE
pod3 0/1 Pending 0 44m
A more detailed log of the pod:
[root@master docker]# oc describe pods pod3 -n demo
Name: pod3
Namespace: demo
Node: <none>
Labels: <none>
Annotations: openshift.io/scc=anyuid
Status: Pending
IP:
Containers:
hello:
Image: hello
Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-87b8b (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-87b8b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-87b8b
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 1m (x141 over 41m) default-scheduler 0/2 nodes are available: 2 MatchNodeSelector.
The status would suggest that none of the nodes are matching the Node-Selector:
node-role.kubernetes.io/compute=true
Please review the labels on your nodes (oc get nodes).
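If the nodes are in fact meant to run regular workloads, a likely fix is to give them the label the scheduler is looking for. As a sketch (the node name is a placeholder for one of your own nodes; the label key is taken from the pod's Node-Selectors above):
oc get nodes --show-labels
oc label node <node-name> node-role.kubernetes.io/compute=true
Once a node carries the expected label, the pending pod should be scheduled on the next scheduling pass.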

Mysql Communications link failure in kubernetes sample

Step 1: finish installing etcd and kubernetes with YUM on CentOS 7 and shut down the firewall.
Step 2: modify the related configuration item in /etc/sysconfig/docker:
OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'
Step 3: modify the related configuration item in /etc/kubernetes/apiserver:
remove ServiceAccount from the KUBE_ADMISSION_CONTROL configuration item
Step 4: start all the related services of etcd and kubernetes.
Step 5: start the ReplicationController for the mysql db:
kubectl create -f mysql-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: hub.c.163.com/library/mysql
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
Step 6: start the related mysql db service:
kubectl create -f mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
Step 7: start the ReplicationController for myweb:
kubectl create -f myweb-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 3
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: docker.io/kubeguide/tomcat-app:v1
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST
          value: "mysql"
        - name: MYSQL_SERVICE_PORT
          value: "3306"
Step 8: start the related tomcat service:
kubectl create -f myweb-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: myweb
When I visit the app from the browser via the NodePort (30001), I get the following exception:
Error:com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure The last packet sent successfully to the
server was 0 milliseconds ago. The driver has not received any packets
from the server.
kubectl get ep
NAME ENDPOINTS AGE
kubernetes 192.168.57.129:6443 1d
mysql 172.17.0.2:3306 1d
myweb 172.17.0.3:8080,172.17.0.4:8080,172.17.0.5:8080 1d
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
mysql 10.254.0.5 <none> 3306/TCP 1d
myweb 10.254.220.2 <nodes> 8080:30001/TCP 1d
From inside any tomcat container I can see the mysql env variables, and the related mysql connection code in the JSP is as below:
Class.forName("com.mysql.jdbc.Driver");
String ip=System.getenv("MYSQL_SERVICE_HOST");
String port=System.getenv("MYSQL_SERVICE_PORT");
ip=(ip==null)?"localhost":ip;
port=(port==null)?"3306":port;
System.out.println("Connecting to database...");
conn = java.sql.DriverManager.getConnection("jdbc:mysql://"+ip+":"+port+"?useUnicode=true&characterEncoding=UTF-8", "root","123456");
[root@promote ~]# docker exec -it 1470cfaa1b1c /bin/bash
root@myweb-xswfb:/usr/local/tomcat# env |grep MYSQL_SERVICE
MYSQL_SERVICE_PORT=3306
MYSQL_SERVICE_HOST=mysql
root@myweb-xswfb:/usr/local/tomcat# ping mysql
ping: unknown host
Can someone tell me why I cannot ping the mysql db hostname from inside the tomcat container, or how to investigate the problem further?
I know the reason: it's a DNS problem. The web server cannot resolve the IP address of the mysql server, so it fails. A temporary solution is to point the web server at the mysql db server's IP directly. Hope this helps. Thank you.
Try to use a Headless Service (http://kubernetes.io/v1.0/docs/user-guide/services.html#headless-services) by setting clusterIP: None in your mysql Service.
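A sketch of that change applied to the mysql Service from the question:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None   # headless: DNS resolves the service name to the pod IP(s) directly instead of a virtual IP
  ports:
  - port: 3306
  selector:
    app: mysql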
UPDATE
I have tried your yaml file.
Pods are running:
➜ kb get po
NAME READY STATUS RESTARTS AGE
mysql-ndtxn 1/1 Running 0 7m
myweb-j8xgh 1/1 Running 0 8m
myweb-qc7ws 1/1 Running 0 8m
myweb-zhzll 1/1 Running 0 8m
Services are:
kb get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h
mysql ClusterIP 10.102.178.190 <none> 3306/TCP 20m
myweb NodePort 10.98.74.113 <none> 8080:30001/TCP 19m
Endpoints are:
kb get ep
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 1h
mysql 172.17.0.7:3306 20m
myweb 172.17.0.2:8080,172.17.0.4:8080,172.17.0.6:8080 19m
I exec bash on a tomcat pod and I can ping my service (it is resolved):
kb exec -ti myweb-zhzll -- bash
root@myweb-zhzll:/usr/local/tomcat# ping mysql
PING mysql.default.svc.cluster.local (10.102.178.190): 56 data bytes
^C--- mysql.default.svc.cluster.local ping statistics ---
I can ping the endpoint:
ping 172.17.0.7
PING 172.17.0.7 (172.17.0.7): 56 data bytes
64 bytes from 172.17.0.7: icmp_seq=0 ttl=64 time=0.181 ms
64 bytes from 172.17.0.7: icmp_seq=1 ttl=64 time=0.105 ms
64 bytes from 172.17.0.7: icmp_seq=2 ttl=64 time=0.119 ms
^C--- 172.17.0.7 ping statistics ---
Connecting to
http://192.168.99.100:30001/
I can see the tomcat page:
UPDATE 2
Here is my screenshot... I see data in your database with no error.
I suggest checking your db configuration.
As a beginner, I did the same exercise as you and ran into the same problems.
This is my solution; maybe you can give it a try:
Delete these configurations in myweb-rc.yaml, because they override the system default values:
env:
- name: MYSQL_SERVICE_HOST
  value: "mysql"
- name: MYSQL_SERVICE_PORT
  value: "3306"
Change the mysql image tag in mysql-rc.yaml; use a lower mysql version:
image: hub.c.163.com/library/mysql:5.5
Create the services first, then create the pods, following this sequence:
kubectl create -f myweb-svc.yaml
kubectl create -f mysql-svc.yaml
kubectl create -f mysql-rc.yaml
kubectl create -f myweb-rc.yaml
You can refer to this doc: Discovering services.
Good luck!