I created an OpenShift cluster with 1 master and 2 nodes. I'm able to deploy the hawkular, cassandra and heapster pods for monitoring, and I'm able to set up the OpenShift web console.
However, when I try to deploy a pod manually I get a MatchNodeSelector error.
Inputs:
The hello.yaml file used to deploy the pod with the command oc create -f hello.yaml:
apiVersion: v1
kind: Pod
metadata:
name: pod3
spec:
containers:
- name: hello
image: hello
imagePullPolicy: IfNotPresent
Expected output:
The pod should be in the Running state and its performance should show up on the web console.
Actual output:
The pod status after running oc create -f hello.yaml:
[root@master docker]# oc get pods -n demo
NAME READY STATUS RESTARTS AGE
pod3 0/1 Pending 0 44m
A more detailed description of the pod:
[root@master docker]# oc describe pods pod3 -n demo
Name: pod3
Namespace: demo
Node: <none>
Labels: <none>
Annotations: openshift.io/scc=anyuid
Status: Pending
IP:
Containers:
hello:
Image: hello
Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-87b8b (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-87b8b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-87b8b
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 1m (x141 over 41m) default-scheduler 0/2 nodes are available: 2 MatchNodeSelector.
The status suggests that none of the nodes match the node selector:
node-role.kubernetes.io/compute=true
Please review the labels on your nodes (oc get nodes); the sketch below shows how to add the missing label.
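If the label is simply missing from your worker nodes, you can inspect and add it yourself. A minimal sketch (the node name is a placeholder; adjust to your cluster):
oc get nodes --show-labels
oc label node <node-name> node-role.kubernetes.io/compute=true
Once a node carries the label that the default node selector expects, the pending pod should get scheduled.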
Related
I have deployed a MySQL database (StatefulSet) on a zonal Kubernetes cluster (GKE) in Google Cloud Platform.
The zonal cluster consists of 3 instances of type e2-medium.
The MySQL container cannot start due to the following error.
kubectl logs mysql-statefulset-0
2022-02-07 05:55:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.35-1debian10 started.
find: '/var/lib/mysql/': Input/output error
Last seen events.
4m57s Warning Ext4Error gke-cluster-default-pool-rnfh kernel-monitor, gke-cluster-default-pool-rnfh EXT4-fs error (device sdb): __ext4_find_entry:1532: inode #2: comm mysqld: reading directory lblock 0 40d 8062 gke-cluster-default-pool-rnfh
3m22s Warning BackOff pod/mysql-statefulset-0 spec.containers{mysql} kubelet, gke-cluster-default-pool-rnfh Back-off restarting failed container
Nodes.
kubectl get node -owide
gke-cluster-default-pool-ayqo Ready <none> 54d v1.21.5-gke.1302 So.Me.I.P So.Me.I.P Container-Optimized OS from Google 5.4.144+ containerd://1.4.8
gke-cluster-default-pool-rnfh Ready <none> 54d v1.21.5-gke.1302 So.Me.I.P So.Me.I.P Container-Optimized OS from Google 5.4.144+ containerd://1.4.8
gke-cluster-default-pool-sc3p Ready <none> 54d v1.21.5-gke.1302 So.Me.I.P So.Me.I.P Container-Optimized OS from Google 5.4.144+ containerd://1.4.8
I also noticed that the rnfh node is out of memory.
kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-cluster-default-pool-ayqo 117m 12% 992Mi 35%
gke-cluster-default-pool-rnfh 180m 19% 2953Mi 104%
gke-cluster-default-pool-sc3p 179m 19% 1488Mi 52%
MySQL manifest:
# HEADLESS SERVICE
apiVersion: v1
kind: Service
metadata:
name: mysql-headless-service
labels:
kind: mysql-headless-service
spec:
clusterIP: None
selector:
tier: mysql-db
ports:
- name: 'mysql-http'
protocol: 'TCP'
port: 3306
---
# STATEFUL SET
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql-statefulset
spec:
selector:
matchLabels:
tier: mysql-db
serviceName: mysql-statefulset
replicas: 1
template:
metadata:
labels:
tier: mysql-db
spec:
terminationGracePeriodSeconds: 10
containers:
- name: my-mysql
image: my-mysql:latest
imagePullPolicy: Always
args:
- "--ignore-db-dir=lost+found"
ports:
- name: 'http'
protocol: 'TCP'
containerPort: 3306
volumeMounts:
- name: mysql-pvc
mountPath: /var/lib/mysql
env:
- name: MYSQL_ROOT_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: mysql-root-username
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: mysql-root-password
- name: MYSQL_USER
valueFrom:
configMapKeyRef:
name: mysql-config
key: mysql-username
- name: MYSQL_PASSWORD
valueFrom:
configMapKeyRef:
name: mysql-config
key: mysql-password
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: mysql-config
key: mysql-database
volumeClaimTemplates:
- metadata:
name: mysql-pvc
spec:
storageClassName: 'mysql-fast'
resources:
requests:
storage: 120Gi
accessModes:
- ReadWriteOnce
- ReadOnlyMany
MySQL storage class manifest:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mysql-fast
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
Why is Kubernetes trying to schedule the pod on the out-of-memory node?
UPDATES
I've added requests and limits to the MySQL manifest to improve the QoS class. Now the QoS class is Guaranteed.
Unfortunately, Kubernetes is still trying to schedule onto the out-of-memory rnfh node.
kubectl describe po mysql-statefulset-0 | grep node -i
Node: gke-cluster-default-pool-rnfh/So.Me.I.P
kubectl describe po mysql-statefulset-0 | grep qos -i
QoS Class: Guaranteed
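For reference, a pod gets the Guaranteed QoS class only when every container's resource requests equal its limits. A minimal sketch of the kind of block added to the my-mysql container (the values here are illustrative, not the actual numbers used):
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 500m
            memory: 1Gi
Note that the scheduler places pods based on these requests versus each node's allocatable resources, not on the live usage shown by kubectl top, which is why a node reporting 104% memory use can still be picked.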
I ran a few more tests but I couldn't replicate this.
To answer this one correctly, we would need many more logs; I'm not sure if you still have them. If I had to guess the root cause of this issue, I would say it was connected with the PersistentVolume.
In the GitHub issue Volume was remounted as read only after error #752 I found behavior very similar to the OP's.
You created a special StorageClass for your MySQL and set reclaimPolicy: Retain, so the PV was not removed. When the StatefulSet pod (with the same suffix -0) was recreated (restarted due to a connectivity error, some issue in the DB, hard to say), it tried to re-claim this volume. The user in the mentioned GitHub issue had a very similar situation: they also got the inode #262147: comm mysqld: reading directory lblock error, and further down there was also the entry [ +0.003695] EXT4-fs (sda): Remounting filesystem read-only. Maybe the permissions changed when it was re-mounted?
Another thing is that your volumeClaimTemplates contained
accessModes:
- ReadWriteOnce
- ReadOnlyMany
So one PersistentVolume could be used either as ReadWriteOnce by a single node or as ReadOnlyMany by many nodes. There is a possibility that the pod was recreated on a different node with the read-only access mode.
[ +35.912075] EXT4-fs warning (device sda): htree_dirblock_to_tree:977: inode #2: lblock 0: comm mysqld: error -5 reading directory block
[ +6.294232] EXT4-fs error (device sda): ext4_find_entry:1436: inode #262147: comm mysqld: reading directory lblock ...
[ +0.005226] EXT4-fs error (device sda): ext4_find_entry:1436: inode #2: comm mysqld: reading directory lblock 0
[ +1.666039] EXT4-fs error (device sda): ext4_journal_check_start:61: Detected aborted journal
[ +0.003695] EXT4-fs (sda): Remounting filesystem read-only
This would fit the OP's comment:
Two days ago, for reasons unknown to me, Kubernetes restarted the container and kept trying to run it on the rnfh machine. The container was probably evicted from another node.
Another possibility is that the node or cluster was updated (depending on whether the auto-update option was turned on), which could force a restart of the pod.
The '/var/lib/mysql/': Input/output error issue might point to database corruption, as mentioned here.
In the end, the issue was resolved by cordoning the affected node. Additional information about the difference between cordon and drain can be found here.
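A minimal sketch of that workaround, using the node name from the outputs above:
kubectl cordon gke-cluster-default-pool-rnfh
kubectl delete pod mysql-statefulset-0
Cordoning marks the node SchedulingDisabled; existing pods keep running, but when the StatefulSet recreates mysql-statefulset-0 the scheduler has to place it on one of the other nodes.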
As an addition, to assign pods to a specific node or to nodes with a given label, you can use affinity, as in the sketch below.
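For example, a hedged sketch of a nodeAffinity block that could be added under the StatefulSet's pod spec to keep the pod off a particular node (the hostname value is taken from this cluster; adapt as needed):
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: NotIn
                values:
                - gke-cluster-default-pool-rnfh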
I'm trying to move a number of Docker containers on a Linux server to a test Kubernetes-based deployment running on a different Linux machine, where I've installed Kubernetes as a k3s instance inside a Vagrant virtual machine.
One of these containers is a mariadb container instance, with a bind volume mapped
This is the relevant portion of the docker-compose I'm using:
academy-db:
image: 'docker.io/bitnami/mariadb:10.3-debian-10'
container_name: academy-db
environment:
- ALLOW_EMPTY_PASSWORD=yes
- MARIADB_USER=bn_moodle
- MARIADB_DATABASE=bitnami_moodle
volumes:
- type: bind
source: ./volumes/moodle/mariadb
target: /bitnami/mariadb
ports:
- '3306:3306'
Note that this works correctly (the container is used by another application container, which connects to it and reads data from the DB without problems).
I then tried to convert this to a Kubernetes configuration, copying the volume folder to the destination machine and using the following Kubernetes .yaml deployment files.
This includes a deployment .yaml, a persistent volume claim and a persistent volume, as well as a NodePort service to make the container accessible. For the data volume, I'm using a simple hostPath volume pointing to the contents copied from the docker-compose's bind mounts.
apiVersion: apps/v1
kind: Deployment
metadata:
name: academy-db
spec:
replicas: 1
selector:
matchLabels:
name: academy-db
strategy:
type: Recreate
template:
metadata:
labels:
name: academy-db
spec:
containers:
- env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
- name: MARIADB_DATABASE
value: bitnami_moodle
- name: MARIADB_USER
value: bn_moodle
image: docker.io/bitnami/mariadb:10.3-debian-10
name: academy-db
ports:
- containerPort: 3306
resources: {}
volumeMounts:
- mountPath: /bitnami/mariadb
name: academy-db-claim
restartPolicy: Always
volumes:
- name: academy-db-claim
persistentVolumeClaim:
claimName: academy-db-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: academy-db-pv
labels:
type: local
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "<...full path to deployment folder on the server...>/volumes/moodle/mariadb"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: academy-db-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ""
volumeName: academy-db-pv
---
apiVersion: v1
kind: Service
metadata:
name: academy-db-service
spec:
type: NodePort
ports:
- name: "3306"
port: 3306
targetPort: 3306
selector:
name: academy-db
After applying the deployment, everything seems to work fine, in the sense that with kubectl get ... the pod and the volumes appear to be running correctly:
kubectl get pods
NAME READY STATUS RESTARTS AGE
academy-db-5547cdbc5-65k79 1/1 Running 9 15d
.
.
.
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
academy-db-pv 1Gi RWO Retain Bound default/academy-db-claim 15d
.
.
.
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
academy-db-claim Bound academy-db-pv 1Gi RWO 15d
.
.
.
This is the pod's log:
kubectl logs pod/academy-db-5547cdbc5-65k79
mariadb 10:32:05.66
mariadb 10:32:05.66 Welcome to the Bitnami mariadb container
mariadb 10:32:05.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb 10:32:05.66 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb 10:32:05.66
mariadb 10:32:05.67 INFO ==> ** Starting MariaDB setup **
mariadb 10:32:05.68 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 10:32:05.68 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb 10:32:05.69 INFO ==> Initializing mariadb database
mariadb 10:32:05.69 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO ==> Using persisted data
mariadb 10:32:05.71 INFO ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO ==> Starting mariadb in background
and the describe pod command:
Name: academy-db-5547cdbc5-65k79
Namespace: default
Priority: 0
Node: zdmp-kube/192.168.33.99
Start Time: Tue, 22 Dec 2020 13:33:43 +0000
Labels: name=academy-db
pod-template-hash=5547cdbc5
Annotations: <none>
Status: Running
IP: 10.42.0.237
IPs:
IP: 10.42.0.237
Controlled By: ReplicaSet/academy-db-5547cdbc5
Containers:
academy-db:
Container ID: containerd://68af105f15a1f503bbae8a83f1b0a38546a84d5e3188029f539b9c50257d2f9a
Image: docker.io/bitnami/mariadb:10.3-debian-10
Image ID: docker.io/bitnami/mariadb@sha256:1d8ca1757baf64758e7f13becc947b9479494128969af5c0abb0ef544bc08815
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 07 Jan 2021 10:32:05 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 07 Jan 2021 10:22:03 +0000
Finished: Thu, 07 Jan 2021 10:32:05 +0000
Ready: True
Restart Count: 9
Environment:
ALLOW_EMPTY_PASSWORD: yes
MARIADB_DATABASE: bitnami_moodle
MARIADB_USER: bn_moodle
MARIADB_PASSWORD: bitnami
Mounts:
/bitnami/mariadb from academy-db-claim (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-x28jh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
academy-db-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: academy-db-claim
ReadOnly: false
default-token-x28jh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-x28jh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 15d (x8 over 15d) kubelet Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
Normal Created 15d (x8 over 15d) kubelet Created container academy-db
Normal Started 15d (x8 over 15d) kubelet Started container academy-db
Normal SandboxChanged 18m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 8m14s (x2 over 18m) kubelet Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
Normal Created 8m14s (x2 over 18m) kubelet Created container academy-db
Normal Started 8m14s (x2 over 18m) kubelet Started container academy-db
Later, though, I noticed that the client application has problems connecting. After some investigation I concluded that although the pod is running, the mariadb process running inside it may have crashed just after startup. If I enter the container with kubectl exec and try to run, for instance, the mysql client, I get:
kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- /bin/bash
I have no name!@academy-db-5547cdbc5-65k79:/$ mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)
Any idea what could cause the problem, or how I can investigate the issue further? (Note: I'm not an expert in Kubernetes; I only started learning it recently.)
Edit: Following @Novo's comment, I tried to delete the volume folder and let mariadb recreate its data from scratch.
Now my pod doesn't even start, ending up in CrashLoopBackOff!
By comparing the pod logs I noticed that the previous mariadb log contained the message:
...
mariadb 10:32:05.69 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO ==> Using persisted data
mariadb 10:32:05.71 INFO ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO ==> Starting mariadb in background
Now replaced with
...
mariadb 14:15:57.32 INFO ==> Updating 'my.cnf' with custom configuration
mariadb 14:15:57.32 INFO ==> Setting user option
mariadb 14:15:57.35 INFO ==> Installing database
Could it be that the issue is related to some access-rights problem on the volume folders in the host Vagrant machine?
By default, hostPath directories are created with permission 755, owned by the user and group of the kubelet. To use the directory, you can try adding the following to your deployment:
spec:
securityContext:
fsGroup: <gid>
Where gid is the group used by the process in your container.
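If you're not sure which IDs the process uses, you can check from inside the running container rather than guessing (the pod name below is the one from this deployment; Bitnami images commonly run as a non-root user such as 1001, but verify it):
kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- id
Use the uid/gid reported there for the fsGroup setting, or for the chown below.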
Also, you could fix the issue on the host itself by changing the permissions of the folder you want to mount into the container:
chown -R <uid>:<gid> /path/to/volume
where uid and gid are the userId and groupId of your app. Or, less restrictively:
chmod -R 777 /path/to/volume
This should solve your issue.
But overall, a Deployment is not what you want to create in this case, because Deployments should not have state. For stateful apps, there are StatefulSets in Kubernetes. Use those together with a volumeClaimTemplate plus spec.securityContext.fsGroup, and k3s will create the persistent volume and the persistent volume claim for you, using its default storage class, which is local storage (on your node). See the sketch below.
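A hedged sketch of that approach for the database above; the names, the storage size and the fsGroup value are assumptions to adapt, not something tested against this cluster:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: academy-db
spec:
  serviceName: academy-db-service
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      securityContext:
        fsGroup: 1001          # assumed gid of the mariadb process; verify with `id` inside the container
      containers:
      - name: academy-db
        image: docker.io/bitnami/mariadb:10.3-debian-10
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        - name: MARIADB_DATABASE
          value: bitnami_moodle
        - name: MARIADB_USER
          value: bn_moodle
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: academy-db-data
          mountPath: /bitnami/mariadb
  volumeClaimTemplates:
  - metadata:
      name: academy-db-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
      # storageClassName is omitted so k3s falls back to its default (local-path) storage class
With this, the existing hostPath PV/PVC pair is no longer needed; k3s provisions the volume itself.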
Attempting to recreate all my assets on a fresh OpenShift (except for my PVCs), I deleted everything ($ oc delete all --all; oc delete configmap --all; oc delete secret -l namespace=visor). I did this so I could be certain my 'oc process -f template' did a complete job.
This deleted the glusterfs-dynamic Services that I didn't realize were required to mount PVCs (persistent volume claims).
Solution 1: Recreate the Services
I recreated the services so that they look just like similar glusterfs-dynamic services, but that's not enough; even if the IP address matches, the PVCs are still not mountable ('endpoints "glusterfs-dynamic-xxx" not found').
Solution 2: Copy data from old PVC to new PVC
To copy, I need to be able to access the PVC from a pod--I can't mount the PVC...
My attempt at recreating the service:
- apiVersion: v1
kind: Service
metadata:
labels:
gluster.kubernetes.io/provisioned-for-pvc: prom-a-pvc
namespace: visor
name: glusterfs-dynamic-615a9bfa-57d9-11e9-b511-001a4a195f6a
namespace: visor
spec:
ports:
- port: 1
protocol: TCP
targetPort: 1
sessionAffinity: None
type: ClusterIP
I want to be able to mount my PVCs.
But instead I get this error:
MountVolume.NewMounter initialization failed for volume "pvc-89647bcb-6df4-11e9-bd79-001a4a195f6a" : endpoints "glusterfs-dynamic-89647bcb-6df4-11e9-bd79-001a4a195f6a" not found
Figured it out. Woot. There was also an "endpoints" asset that got deleted that I had to recreate:
- apiVersion: v1
kind: Service
metadata:
labels:
gluster.kubernetes.io/provisioned-for-pvc: prom-a-pvc
namespace: visor
name: glusterfs-dynamic-615a9bfa-57d9-11e9-b511-001a4a195f6a
namespace: visor
spec:
ports:
- port: 1
protocol: TCP
targetPort: 1
sessionAffinity: None
type: ClusterIP
- apiVersion: v1
kind: Endpoints
metadata:
labels:
gluster.kubernetes.io/provisioned-for-pvc: prom-a-pvc
name: glusterfs-dynamic-615a9bfa-57d9-11e9-b511-001a4a195f6a
namespace: visor
subsets:
- addresses:
- ip: 10.25.6.231
- ip: 10.25.6.232
- ip: 10.27.6.241
- ip: 10.27.6.242
- ip: 10.5.6.221
- ip: 10.5.6.222
ports:
- port: 1
protocol: TCP
I used these commands to see what I needed to include in my yaml:
$ oc get endpoints
$ oc edit endpoints glusterfs-dynamic-820de1e7-6df6-11e9-bd79-001a4a195f6a
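As a usage note, the generated Service and Endpoints assets can be exported before any bulk deletion so they can simply be reapplied later; a sketch using the label they carry above (the file name is arbitrary):
$ oc get svc,endpoints -l gluster.kubernetes.io/provisioned-for-pvc -o yaml > glusterfs-dynamic-backup.yaml
$ oc apply -f glusterfs-dynamic-backup.yaml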
Links:
I'm not alone: https://github.com/heketi/heketi/issues/757
Where I learned the endpoints asset: https://lists.openshift.redhat.com/openshift-archives/users/2019-April/msg00005.html
Step 1: finish installing etcd and Kubernetes with YUM on CentOS 7 and shut down the firewall.
Step 2: modify the related configuration item in /etc/sysconfig/docker:
OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'
Step 3: modify the related configuration item in /etc/kubernetes/apiserver: remove ServiceAccount from the KUBE_ADMISSION_CONTROL configuration item.
Step 4: start all the related etcd and Kubernetes services.
Step 5: start the ReplicationController for the MySQL DB:
kubectl create -f mysql-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: mysql
spec:
replicas: 1
selector:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: hub.c.163.com/library/mysql
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
Step 6: start the related MySQL DB service:
kubectl create -f mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
Step 7: start the ReplicationController for myweb:
kubectl create -f myweb-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: myweb
spec:
replicas: 3
selector:
app: myweb
template:
metadata:
labels:
app: myweb
spec:
containers:
- name: myweb
image: docker.io/kubeguide/tomcat-app:v1
ports:
- containerPort: 8080
env:
- name: MYSQL_SERVICE_HOST
value: "mysql"
- name: MYSQL_SERVICE_PORT
value: "3306"
Step 8: start the related tomcat service:
kubectl create -f myweb-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: myweb
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30001
selector:
app: myweb
When I visit it from a browser via the NodePort (30001), I get the following exception:
Error:com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure The last packet sent successfully to the
server was 0 milliseconds ago. The driver has not received any packets
from the server.
kubectl get ep
NAME ENDPOINTS AGE
kubernetes 192.168.57.129:6443 1d
mysql 172.17.0.2:3306 1d
myweb 172.17.0.3:8080,172.17.0.4:8080,172.17.0.5:8080 1d
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
mysql 10.254.0.5 <none> 3306/TCP 1d
myweb 10.254.220.2 <nodes> 8080:30001/TCP 1d
From inside any tomcat container I can see the MySQL env vars, and the related MySQL connection code in the JSP is as below:
Class.forName("com.mysql.jdbc.Driver");
String ip=System.getenv("MYSQL_SERVICE_HOST");
String port=System.getenv("MYSQL_SERVICE_PORT");
ip=(ip==null)?"localhost":ip;
port=(port==null)?"3306":port;
System.out.println("Connecting to database...");
conn = java.sql.DriverManager.getConnection("jdbc:mysql://"+ip+":"+port+"?useUnicode=true&characterEncoding=UTF-8", "root","123456");
[root@promote ~]# docker exec -it 1470cfaa1b1c /bin/bash
root@myweb-xswfb:/usr/local/tomcat# env |grep MYSQL_SERVICE
MYSQL_SERVICE_PORT=3306
MYSQL_SERVICE_HOST=mysql
root@myweb-xswfb:/usr/local/tomcat# ping mysql
ping: unknown host
Can someone tell me why I could not ping the mysql DB hostname from inside the tomcat server, or how to investigate the problem further?
I found the reason: it's a DNS problem. The web server cannot resolve the IP address of the mysql server, so it fails. A temporary workaround is to point the web server directly at the mysql DB server's IP. Hope this helps. Thank you.
Try to use a headless service (http://kubernetes.io/v1.0/docs/user-guide/services.html#headless-services) by setting, in your mysql Service:
clusterIP: None
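A sketch of what that would look like for the mysql Service from Step 6 (only the clusterIP line is new):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  ports:
  - port: 3306
  selector:
    app: mysql
With a headless service, the DNS name mysql resolves directly to the pod IP(s) instead of a cluster IP.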
UPDATE
I have tried your yaml file.
Pods are running:
➜ kb get po
NAME READY STATUS RESTARTS AGE
mysql-ndtxn 1/1 Running 0 7m
myweb-j8xgh 1/1 Running 0 8m
myweb-qc7ws 1/1 Running 0 8m
myweb-zhzll 1/1 Running 0 8m
Services are:
kb get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h
mysql ClusterIP 10.102.178.190 <none> 3306/TCP 20m
myweb NodePort 10.98.74.113 <none> 8080:30001/TCP 19m
Endpoints are:
kb get ep
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 1h
mysql 172.17.0.7:3306 20m
myweb 172.17.0.2:8080,172.17.0.4:8080,172.17.0.6:8080 19m
I exec bash in a tomcat pod and I can ping my service (it resolves):
kb exec -ti myweb-zhzll -- bash
root@myweb-zhzll:/usr/local/tomcat# ping mysql
PING mysql.default.svc.cluster.local (10.102.178.190): 56 data bytes
^C--- mysql.default.svc.cluster.local ping statistics ---
I can ping the endpoint:
ping 172.17.0.7
PING 172.17.0.7 (172.17.0.7): 56 data bytes
64 bytes from 172.17.0.7: icmp_seq=0 ttl=64 time=0.181 ms
64 bytes from 172.17.0.7: icmp_seq=1 ttl=64 time=0.105 ms
64 bytes from 172.17.0.7: icmp_seq=2 ttl=64 time=0.119 ms
^C--- 172.17.0.7 ping statistics ---
Connecting to
http://192.168.99.100:30001/
I can see the tomcat page:
UPDATE 2
Here is my screenshot... I see data in your database with no errors.
I suggest checking your DB configuration.
As a beginner, I did the same exercise as you and ran into the same problems.
This is my solution; maybe you can give it a try:
Delete these settings in myweb-rc.yaml, because they override the values Kubernetes injects by default:
env:
- name: MYSQL_SERVICE_HOST
value: "mysql"
- name: MYSQL_SERVICE_PORT
value: "3306"
Change the mysql image tag in mysql-rc.yaml to use a lower MySQL version:
image: hub.c.163.com/library/mysql:5.5
Create the services first, then create the pods, following this sequence:
kubectl create -f myweb-svc.yaml
kubectl create -f mysql-svc.yaml
kubectl create -f mysql-rc.yaml
kubectl create -f myweb-rc.yaml
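If it helps, after recreating things in that order you can verify from inside a myweb pod that Kubernetes' automatically injected service variables are now in effect (the pod name is a placeholder; the expected IP comes from kubectl get svc):
kubectl exec <myweb-pod-name> -- env | grep MYSQL_SERVICE
# expected something like:
# MYSQL_SERVICE_HOST=10.254.0.5
# MYSQL_SERVICE_PORT=3306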
You can refer to this doc: Discovering services.
Good luck!
I want my deploy configuration to use an image that was the output of a build configuration.
I am currently using something like this:
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp
name: myapp
spec:
replicas: 1
selector:
app: myapp
deploymentconfig: myapp
strategy:
resources: {}
template:
metadata:
annotations:
openshift.io/container.myapp.image.entrypoint: '["python3"]'
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp
deploymentconfig: myapp
spec:
containers:
- name: myapp
image: 123.123.123.123/myproject/myapp-staging:latest
resources: {}
command:
- scripts/start_server.sh
ports:
- containerPort: 8000
test: false
triggers: []
status: {}
I had to hard-code the integrated docker registry's IP address; otherwise Kubernetes/OpenShift is not able to find the image to pull down. I would like to not hard-code the integrated docker registry's IP address, and instead use something like this:
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp
name: myapp
spec:
replicas: 1
selector:
app: myapp
deploymentconfig: myapp
strategy:
resources: {}
template:
metadata:
annotations:
openshift.io/container.myapp.image.entrypoint: '["python3"]'
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp
deploymentconfig: myapp
spec:
containers:
- name: myapp
from:
kind: "ImageStreamTag"
name: "myapp-staging:latest"
resources: {}
command:
- scripts/start_server.sh
ports:
- containerPort: 8000
test: false
triggers: []
status: {}
But this causes Kubernetes/OpenShift to complain with:
The DeploymentConfig "myapp" is invalid.
spec.template.spec.containers[0].image: required value
How can I specify the output of a build configuration as the image to use in a deploy configuration?
Thank you for your time!
Also, oddly enough, if I link the deploy configuration to the build configuration with a trigger, Kubernetes/OpenShift knows to look in the integrated Docker registry for the image:
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp-staging
name: myapp-staging
spec:
replicas: 1
selector:
app: myapp-staging
deploymentconfig: myapp-staging
strategy:
resources: {}
template:
metadata:
annotations:
openshift.io/container.myapp.image.entrypoint: '["python3"]'
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp-staging
deploymentconfig: myapp-staging
spec:
containers:
- name: myapp-staging
image: myapp-staging:latest
resources: {}
command:
- scripts/start_server.sh
ports:
- containerPort: 8000
test: false
triggers:
- type: "ImageChange"
imageChangeParams:
automatic: true
containerNames:
- myapp-staging
from:
kind: ImageStreamTag
name: myapp-staging:latest
status: {}
But I don't want the automated triggering...
Update 1 (11/21/2016):
Configuring the trigger but leaving it disabled (and hence manually triggering the deploy) still left the deployment unable to find the image:
$ oc describe pod myapp-1-oodr5
Name: myapp-1-oodr5
Namespace: myproject
Security Policy: restricted
Node: node.url/123.123.123.123
Start Time: Mon, 21 Nov 2016 09:20:26 -1000
Labels: app=myapp
deployment=myapp-1
deploymentconfig=myapp
Status: Pending
IP: 123.123.123.123
Controllers: ReplicationController/myapp-1
Containers:
myapp:
Container ID:
Image: myapp-staging:latest
Image ID:
Port: 8000/TCP
Command:
scripts/start_server.sh
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-goe98 (ro)
Environment Variables:
ALLOWED_HOSTS: myapp-myproject.url
Conditions:
Type Status
Ready False
Volumes:
default-token-goe98:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-goe98
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
42s 42s 1 {scheduler } Scheduled Successfully assigned myapp-1-oodr5 to node.url
40s 40s 1 {kubelet node.url} implicitly required container POD Pulled Container image "openshift3/ose-pod:v3.1.1.7" already present on machine
40s 40s 1 {kubelet node.url} implicitly required container POD Created Created with docker id d3318e880e4a
40s 40s 1 {kubelet node.url} implicitly required container POD Started Started with docker id d3318e880e4a
40s 24s 2 {kubelet node.url} spec.containers{myapp} Pulling pulling image "myapp-staging:latest"
38s 23s 2 {kubelet node.url} spec.containers{myapp} Failed Failed to pull image "myapp-staging:latest": Error: image library/myapp-staging:latest not found
35s 15s 2 {kubelet node.url} spec.containers{myapp} Back-off Back-off pulling image "myapp-staging:latest"
Update 2 (08/23/2017):
In case this helps others, here's a summary of the solution.
triggers:
- type: "ImageChange"
imageChangeParams:
automatic: true # this is required to link the build and deployment
containerNames:
- myapp-staging
from:
kind: ImageStreamTag
name: myapp-staging:latest
With the trigger and automatic set to true, the deployment should use the build's image in the internal registry.
The other comments about making the build not trigger a deploy relate to a separate requirement: wanting to manually deploy images from the internal registry. Here's more information about that portion:
The build needs to trigger the deployment at least once before automatic is set to false. So for a while, I was:
setting automatic to true
initiating a build and deploy
after the deployment finished, manually changing automatic to false
manually triggering a deployment later (though I did not verify whether this deployed the older, out-of-date image or not); see the command sketch below
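That manual trigger can be done from the CLI; a sketch, assuming a reasonably recent OpenShift 3.x client (older clients such as 3.1 use oc deploy <dc-name> --latest instead):
oc rollout latest dc/myapp-staging
oc rollout status dc/myapp-staging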
I was initially trying to use this manual deployment as a way for a non-developer to go into the web console and make deployments. But this requirement has since been removed, so having the build trigger a deployment each time works just fine for us now. Builds can build different branches and tag the images differently, and deployments can then just use the appropriately tagged images.
Hope that helps!
Are you constructing the resource definitions by hand?
It would be easier to use oc new-build and then oc new-app if you really need to set this up as two steps for some reason. If you just want to set up the build and deployment in one go, just use oc new-app.
For example, to setup build and deployment in one go use:
oc new-app --name myapp <repository-url>
To do it in two steps use:
oc new-build --name myapp <repository-url>
oc new-app myapp
If you would still rather use hand-created resources, at least use the single-step variant with the --dry-run -o yaml options to see what it would create for the image stream, plus the build and deployment configuration. That way you can learn how to do it. The bit you are currently missing is an image stream.
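For completeness, a minimal sketch of that missing piece, assuming the build pushes to an image stream named myapp-staging in the current project (this mirrors what oc new-build would generate):
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      app: myapp
    name: myapp-staging
With the image stream present and the ImageChange trigger shown in the earlier update, OpenShift resolves image: myapp-staging:latest to the integrated registry for you.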
BTW, it looks a bit suspicious that you have the entry point set to python3; that is highly unusual. What are you trying to do? Right now it looks like you may be trying to do something in a way that doesn't fit how OpenShift works. OpenShift is mainly about long-running processes, not about doing a single docker run. You can do the latter, but not in the way you are currently doing it.