How to set up DNS for Ingress with MetalLB (not working)? - kubernetes-ingress

Previously, I tried to make Ingress work using a NodeIP:
How to make My First ingress work on baremetal NodeIP?
That did not work either; perhaps the problem is the same as now, namely that I did not configure it correctly.
I gave up on that option and tried MetalLB + Ingress instead.
What I did in both cases:
I set up DNS via /etc/hosts only on my work machine.
10.0.57.28 cluster.local test.local ingress.example.com dashboard.cluster.local test.cluster.local test.com
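As a quick alternative to editing /etc/hosts, curl can pin a single name to that address for one request with --resolve (using the IP and one of the host names from the line above):
curl --resolve cluster.local:80:10.0.57.28 http://cluster.local/hello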
Install MetalLB with Helm:
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -f values.yaml
values.yaml:
configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - 10.0.57.28-10.0.57.29
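Note that configInline only applies to older MetalLB chart versions; since MetalLB 0.13 the address pool is declared with CRDs instead. A roughly equivalent sketch (resource names are illustrative; the namespace should be whichever one MetalLB was installed into):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: default   # MetalLB was installed into the default namespace here
spec:
  addresses:
  - 10.0.57.28-10.0.57.29
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: default
spec:
  ipAddressPools:
  - default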
Install the Ingress controller using Helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default ingress-nginx-controller LoadBalancer 10.233.3.75 10.0.57.28 80:30963/TCP,443:32376/TCP 19s
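Since MetalLB has assigned the EXTERNAL-IP, the controller can already be probed directly against that address with an explicit Host header (a quick sanity check, not from the original post; with no matching Ingress defined yet it should answer with the controller's default 404):
curl -H "Host: test.com" http://10.0.57.28/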
Make Ingress (first attempt, host test.com):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
spec:
  rules:
  - host: "test.com"
    http:
      paths:
      - backend:
          service:
            name: hello-service
            port:
              number: 80
        path: /
        pathType: Prefix
Make Ingress (second attempt, host cluster.local):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
spec:
  rules:
  - host: cluster.local
    http:
      paths:
      - backend:
          service:
            name: hello-service
            port:
              number: 80
        path: "/hello"
        pathType: Prefix
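Before testing with curl, it can also help to confirm that the controller has admitted the Ingress and lists the expected host (commands assume the names and namespace above):
kubectl get ingress -n dev
kubectl describe ingress ingress-hello -n dev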
curl -D- http://cluster.local/hello
HTTP/1.1 404 Not Found
Date: Sat, 11 Sep 2021 17:26:27 GMT
Content-Type: text/html
Content-Length: 146
Connection: keep-alive
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default ingress-nginx-controller LoadBalancer 10.233.3.75 10.0.57.28 80:30963/TCP,443:32376/TCP 25m
default ingress-nginx-controller-admission ClusterIP 10.233.13.161 <none> 443/TCP 25m
default ireg ClusterIP 10.233.34.105 <none> 8080/TCP 8d
default kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 10d
dev hello-node-service NodePort 10.233.3.50 <none> 80:31263/TCP 19h
dev hello-service ClusterIP 10.233.45.159 <none> 80/TCP 2d6h
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 10d
kube-system metrics-server ClusterIP 10.233.27.232 <none> 443/TCP 34h
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.233.29.129 <none> 8000/TCP 10d
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.233.36.25 <none> 443/TCP 10d
Check "hello" pod
service_hello_Node.yml
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node-service
  namespace: dev
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
curl -I 10.0.57.35:31263
HTTP/1.1 200 OK
Date: Sat, 11 Sep 2021 17:28:46 GMT
Content-Length: 66
Content-Type: text/plain; charset=utf-8
Please help me understand why the Ingress does not work. Do I need to configure DNS in a particular way?
The Service and the Ingress are in the same namespace; the Ingress controller is in a different one.
I looked at the Ingress controller logs and there is nothing there - is that normal?
kubectl describe pod ingress-nginx-controller-fd7bb8d66-mvc9d
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned default/ingress-nginx-controller-fd7bb8d66-mvc9d to kuber-node-01
Normal Pulled 19m kubelet Container image "k8s.gcr.io/ingress-nginx/controller:v1.0.0@sha256:0851b34f69f69352bf168e6ccf30e1e20714a264ab1ecd1933e4d8c0fc3215c6" already present on machine
Normal Created 19m kubelet Created container controller
Normal Started 19m kubelet Started container controller
Normal RELOAD 19m nginx-ingress-controller NGINX reload triggered due to a change in configuration
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default ingress-nginx-controller-fd7bb8d66-mvc9d 1/1 Running 0 22m
default ireg-685d4b86fb-rwjpj 1/1 Running 1 27h
default metallb-controller-748756655f-ss6w7 1/1 Running 0 93m
default metallb-speaker-2tf86 1/1 Running 0 93m
default metallb-speaker-6xht6 1/1 Running 0 93m
default metallb-speaker-9wjrm 1/1 Running 0 93m
default metallb-speaker-b28fv 1/1 Running 0 93m
default metallb-speaker-jdv4z 1/1 Running 0 93m
default metallb-speaker-svwjz 1/1 Running 0 93m
default metallb-speaker-xd22w 1/1 Running 0 93m
dev hello-app-78f957775f-7d7bw 1/1 Running 1 27h
dev hello-app-78f957775f-hj9gb 1/1 Running 1 9h
dev hello-app-78f957775f-wr7b2 1/1 Running 1 9h
kube-system calico-kube-controllers-5b4d7b4594-5qfjc 1/1 Running 1 27h
kube-system calico-node-7mcqc 1/1 Running 1 10d
kube-system calico-node-9trpd 1/1 Running 1 10d
kube-system calico-node-fl55n 1/1 Running 1 10d
kube-system calico-node-g9zxw 1/1 Running 1 10d
kube-system calico-node-j8fqp 1/1 Running 0 10d
kube-system calico-node-jhz72 1/1 Running 0 10d
kube-system calico-node-rrcm4 1/1 Running 0 10d
kube-system coredns-8474476ff8-552fq 1/1 Running 0 27h
kube-system coredns-8474476ff8-h45sp 1/1 Running 0 27h
kube-system dns-autoscaler-7df78bfcfb-xzkg9 1/1 Running 0 27h
kube-system kube-apiserver-kuber-master1 1/1 Running 0 10d
kube-system kube-apiserver-kuber-master2 1/1 Running 0 34h
kube-system kube-apiserver-kuber-master3 1/1 Running 0 34h
kube-system kube-controller-manager-kuber-master1 1/1 Running 0 10d
kube-system kube-controller-manager-kuber-master2 1/1 Running 1 10d
kube-system kube-controller-manager-kuber-master3 1/1 Running 1 10d
kube-system kube-proxy-52566 1/1 Running 1 27h
kube-system kube-proxy-6bwrt 1/1 Running 0 27h
kube-system kube-proxy-fxkv6 1/1 Running 1 27h
kube-system kube-proxy-kmjnf 1/1 Running 1 27h
kube-system kube-proxy-pnbss 1/1 Running 0 27h
kube-system kube-proxy-tf9ck 1/1 Running 1 27h
kube-system kube-proxy-tt4gv 1/1 Running 0 27h
kube-system kube-scheduler-kuber-master1 1/1 Running 0 10d
kube-system kube-scheduler-kuber-master2 1/1 Running 0 10d
kube-system kube-scheduler-kuber-master3 1/1 Running 1 10d
kube-system metrics-server-ddf5ffb86-27q7x 2/2 Running 0 27h
kube-system nginx-proxy-kuber-ingress-01 1/1 Running 1 10d
kube-system nginx-proxy-kuber-node-01 1/1 Running 1 10d
kube-system nginx-proxy-kuber-node-02 1/1 Running 1 10d
kube-system nginx-proxy-kuber-node-03 1/1 Running 1 10d
kube-system nodelocaldns-2clp8 1/1 Running 0 10d
kube-system nodelocaldns-b4552 1/1 Running 1 10d
kube-system nodelocaldns-hkffk 1/1 Running 1 10d
kube-system nodelocaldns-jflnt 1/1 Running 0 10d
kube-system nodelocaldns-k7cn7 1/1 Running 1 10d
kube-system nodelocaldns-ksd4t 1/1 Running 1 10d
kube-system nodelocaldns-xm544 1/1 Running 0 10d
kubernetes-dashboard dashboard-metrics-scraper-856586f554-thz5d 1/1 Running 1 27h
kubernetes-dashboard kubernetes-dashboard-67484c44f6-mgqgr 1/1 Running 0 9h

The solution was to add the following annotations to the Ingress.
After that, the Ingress controller starts to see the DNS names.
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: "false"
  nginx.ingress.kubernetes.io/use-regex: "true"
  nginx.ingress.kubernetes.io/rewrite-target: /$1
Also, for convenience, I changed path: / to a regular expression:
- path: /v1(/|$)(.*)
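For reference, assembling those pieces onto the Ingress from the question gives roughly the following manifest (a sketch built from the snippets above; pathType ImplementationSpecific follows the ingress-nginx rewrite-target example for regex paths, and $1 refers to the first capture group of the path regex):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: cluster.local
    http:
      paths:
      - path: /v1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-service
            port:
              number: 80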

Related

Why is my Ingress IP the same as the Minikube IP? I am not able to access the Minikube IP in my browser

I have created a Service of type LoadBalancer and tried accessing it using minikube tunnel. That works.
When I create an Ingress for the service, I get the same IP as the Minikube IP and not the tunnel IP.
My Ingress controller is of type NodePort:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18h
default springboot NodePort 10.103.228.107 <none> 8090:32389/TCP 16h
ingress-nginx ingress-nginx-controller NodePort 10.98.92.81 <none> 80:31106/TCP,443:32307/TCP 17h
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.99.224.119 <none> 443/TCP 17h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 18h
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.100.23.18 <none> 8000/TCP 16h
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.98.172.252 <none> 80/TCP 16h
I tunnel this using:
minikube service ingress-nginx-controller -n ingress-nginx --url
* Starting tunnel for service ingress-nginx-controller.
|---------------|--------------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|------------------------|
| ingress-nginx | ingress-nginx-controller | | http://127.0.0.1:58628 |
| | | | http://127.0.0.1:58629 |
|---------------|--------------------------|-------------|------------------------|
http://127.0.0.1:58628
http://127.0.0.1:58629
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
I get the URL as http://127.0.0.1:58628.
I now apply the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingresstest
spec:
  rules:
  - host: "ravi.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: springboot
            port:
              number: 8090
But the Ingress address is exposed as:
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingresstest <none> ravi.com 192.168.49.2 80 64m
I need the tunnel URL in the Ingress.
Unfortunately, you cannot have the tunnel URL in your Ingress. The Ingress is working as expected.
You can enable the Minikube ingress addon with the command: minikube addons enable ingress. After enabling the ingress addon it is specifically stated that: After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1". This tunnel creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP. You can find more info here.
So you can install the ingress addon, but unfortunately it won't work the way you want it to.
You should also know that Minikube is mainly used for testing and learning purposes, so some of its features might not be ideal.
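In practice that means keeping minikube tunnel running and sending the Ingress host name to 127.0.0.1, roughly like this (a sketch based on the quoted behaviour, using the host from the Ingress above):
minikube tunnel
curl -H "Host: ravi.com" http://127.0.0.1/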

OpenShift aPaaS v3 failed Liveness probes vs failed Readiness Probes

What will happen if, for a pool of pods, the liveness probes fail, versus when the readiness probes fail?
There are a few more differences between liveness and readiness probes, but one of the main ones is that a failed readiness probe removes the pod from the pool but DOES NOT RESTART it, whereas a failed liveness probe removes the pod from the pool and RESTARTS it.
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-vs-readiness
  name: liveness-vs-readiness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; touch /tmp/liveness; sleep 999999
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/liveness
      initialDelaySeconds: 5
      periodSeconds: 5
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Let's create this pod and show this in action: oc create -f liveness-vs-readiness.yaml
Output of the pod status while we perform actions inside the pod. The number in front of the name corresponds to the action done inside the pod:
oc get pods -w
NAME READY STATUS RESTARTS AGE
[1] liveness-vs-readiness-exec 1/1 Running 0 44s
[2] liveness-vs-readiness-exec 0/1 Running 0 1m
[3] liveness-vs-readiness-exec 1/1 Running 0 2m
[4] liveness-vs-readiness-exec 0/1 Running 1 3m
liveness-vs-readiness-exec 1/1 Running 1 3m
Actions inside the container:
[root@default ~]# oc rsh liveness-vs-readiness-exec
# [1] we rsh to the pod and do nothing. Pod is healthy and live
# [2] we remove health probe file and see that pod goes to notReady state
# rm /tmp/healthy
#
# [3] we create health file. Pod goes into ready state without restart
# touch /tmp/healthy
#
# [4] we remove liveness file. Pod goes into notready state and is restarted just after that
# rm /tmp/liveness
# command terminated with exit code 137

Kubernetes MySQL pod stuck with CrashLoopBackOff

I'm trying to follow this guide to set up a MySQL instance to connect to. The Kubernetes cluster is run on Minikube.
From the guide, I have this to set up my persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
I ran kubectl describe pods/mysql-c85f7f79c-kqjgp and got this:
Start Time: Wed, 29 Jan 2020 09:09:18 -0800
Labels: app=mysql
pod-template-hash=c85f7f79c
Annotations: <none>
Status: Running
IP: 172.17.0.13
IPs:
IP: 172.17.0.13
Controlled By: ReplicaSet/mysql-c85f7f79c
Containers:
mysql:
Container ID: docker://f583dad6d2d689365171a72a423699860854e7e065090bc7488ade2c293087d3
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:9527bae58991a173ad7d41c8309887a69cb8bd178234386acb28b51169d0b30e
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 29 Jan 2020 19:40:21 -0800
Finished: Wed, 29 Jan 2020 19:40:22 -0800
Ready: False
Restart Count: 7
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qchv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-5qchv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5qchv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/mysql-c85f7f79c-kqjgp to minikube
Normal Pulled 10h (x5 over 10h) kubelet, minikube Container image "mysql:5.6" already present on machine
Normal Created 10h (x5 over 10h) kubelet, minikube Created container mysql
Normal Started 10h (x5 over 10h) kubelet, minikube Started container mysql
Warning BackOff 2m15s (x50 over 10h) kubelet, minikube Back-off restarting failed container
When I get the logs via kubectl logs pods/mysql-c85f7f79c-kqjgp, I only see this:
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-01-30 03:50:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.47-1debian9 started.
Is there a better way to debug? Why are the logs empty?
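When a container is crash-looping, the logs of the current instance are often empty because it has only just restarted; the previous, terminated instance usually holds the real error. A general debugging step (not from the answers below) is:
kubectl logs pods/mysql-c85f7f79c-kqjgp --previous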
I faced the same issue and solved it by increasing the mysql container's memory from 128Mi to 512Mi. The following configuration works for me:
containers:
- name: cant-drupal-mysql
  image: mysql:5.7
  resources:
    limits:
      memory: "512Mi"
      cpu: "1500m"
Hmm really odd, I changed my mysql-deployment.yml to use MySQL 5.7 and it seems to have worked...
- image: mysql:5.7
Gonna take this as the solution until further notice/commentary.
Currently 5.8 works
image: mysql:5.8
Update the image above in your Deployment file.

MatchNodeSelector error while deploying pods in OpenShift

I created the OpenShift cluster with 1 master and 2 nodes. I'm able to deploy the hawkular, cassandra and heapster pods for monitoring, and I'm able to set up the OpenShift web console.
However, when I try to deploy a pod manually, I get a MatchNodeSelector error.
Inputs:
The hello.yaml file for deploying the pod with the command oc create -f hello.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod3
spec:
  containers:
  - name: hello
    image: hello
    imagePullPolicy: IfNotPresent
Expected output:
The pods should be in running state and should reflect the performance on the web console.
Actual output:
The pod status after running oc create -f hello.yaml
[root@master docker]# oc get pods -n demo
NAME READY STATUS RESTARTS AGE
pod3 0/1 Pending 0 44m
More detailed log of the pod
[root@master docker]# oc describe pods pod3 -n demo
Name: pod3
Namespace: demo
Node: <none>
Labels: <none>
Annotations: openshift.io/scc=anyuid
Status: Pending
IP:
Containers:
hello:
Image: hello
Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-87b8b (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-87b8b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-87b8b
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 1m (x141 over 41m) default-scheduler 0/2 nodes are available: 2 MatchNodeSelector.
The status would suggest that none of the nodes are matching the Node-Selector:
node-role.kubernetes.io/compute=true
Please review the labels on your nodes (oc get nodes).
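If a worker node is supposed to carry that role, the label can be added by hand (a sketch; the node name is a placeholder):
oc get nodes --show-labels
oc label node <node-name> node-role.kubernetes.io/compute=true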

Mysql Communications link failure in kubernetes sample

Step 1: Finish installing etcd and Kubernetes with YUM on CentOS 7 and shut down the firewall.
Step 2: Modify the related configuration item in /etc/sysconfig/docker:
OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'
Step 3: Modify the related configuration item in /etc/kubernetes/apiserver: remove ServiceAccount from the KUBE_ADMISSION_CONTROL configuration item.
Step 4: Start all the related etcd and Kubernetes services.
Step 5: Start the ReplicationController for the MySQL DB:
kubectl create -f mysql-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: hub.c.163.com/library/mysql
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
Step 6: Start the related MySQL DB Service:
kubectl create -f mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
Step 7: Start the ReplicationController for myweb:
kubectl create -f myweb-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 3
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: docker.io/kubeguide/tomcat-app:v1
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST
          value: "mysql"
        - name: MYSQL_SERVICE_PORT
          value: "3306"
Step 8: Start the related Tomcat Service:
kubectl create -f myweb-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: myweb
When I visit from a browser via the NodePort (30001), I get the following exception:
Error:com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure The last packet sent successfully to the
server was 0 milliseconds ago. The driver has not received any packets
from the server.
kubectl get ep
NAME ENDPOINTS AGE
kubernetes 192.168.57.129:6443 1d
mysql 172.17.0.2:3306 1d
myweb 172.17.0.3:8080,172.17.0.4:8080,172.17.0.5:8080 1d
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
mysql 10.254.0.5 <none> 3306/TCP 1d
myweb 10.254.220.2 <nodes> 8080:30001/TCP 1d
From inside any Tomcat container I can see the MySQL env variables; the related MySQL connection code in the JSP is shown below:
Class.forName("com.mysql.jdbc.Driver");
String ip=System.getenv("MYSQL_SERVICE_HOST");
String port=System.getenv("MYSQL_SERVICE_PORT");
ip=(ip==null)?"localhost":ip;
port=(port==null)?"3306":port;
System.out.println("Connecting to database...");
conn = java.sql.DriverManager.getConnection("jdbc:mysql://"+ip+":"+port+"?useUnicode=true&characterEncoding=UTF-8", "root","123456");
[root@promote ~]# docker exec -it 1470cfaa1b1c /bin/bash
root@myweb-xswfb:/usr/local/tomcat# env |grep MYSQL_SERVICE
MYSQL_SERVICE_PORT=3306
MYSQL_SERVICE_HOST=mysql
root@myweb-xswfb:/usr/local/tomcat# ping mysql
ping: unknown host
Can someone tell me why I cannot ping the mysql DB host name from inside the Tomcat container, or how to investigate the problem further?
I found the reason: it is a DNS problem. The web server cannot resolve the IP address of the mysql server, so it fails. A temporary workaround is to point the web server at the mysql DB server's IP directly. Hope this helps. Thank you.
Try to use a headless Service (http://kubernetes.io/v1.0/docs/user-guide/services.html#headless-services) by setting the following in your mysql Service:
clusterIP: None
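A headless variant of the mysql Service from Step 6 would then look roughly like this (only clusterIP: None is added to the manifest shown earlier):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  ports:
  - port: 3306
  selector:
    app: mysql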
UPDATE
I have tried your yaml file.
Pods are running:
➜ kb get po
NAME READY STATUS RESTARTS AGE
mysql-ndtxn 1/1 Running 0 7m
myweb-j8xgh 1/1 Running 0 8m
myweb-qc7ws 1/1 Running 0 8m
myweb-zhzll 1/1 Running 0 8m
Services are:
kb get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h
mysql ClusterIP 10.102.178.190 <none> 3306/TCP 20m
myweb NodePort 10.98.74.113 <none> 8080:30001/TCP 19m
Endpoints are:
kb get ep
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 1h
mysql 172.17.0.7:3306 20m
myweb 172.17.0.2:8080,172.17.0.4:8080,172.17.0.6:8080 19m
I exec bash on a tomcat pod and I can ping my service (it is resolved):
kb exec -ti myweb-zhzll -- bash
root#myweb-zhzll:/usr/local/tomcat# ping mysql
PING mysql.default.svc.cluster.local (10.102.178.190): 56 data bytes
^C--- mysql.default.svc.cluster.local ping statistics ---
I can ping the endpoint:
ping 172.17.0.7
PING 172.17.0.7 (172.17.0.7): 56 data bytes
64 bytes from 172.17.0.7: icmp_seq=0 ttl=64 time=0.181 ms
64 bytes from 172.17.0.7: icmp_seq=1 ttl=64 time=0.105 ms
64 bytes from 172.17.0.7: icmp_seq=2 ttl=64 time=0.119 ms
^C--- 172.17.0.7 ping statistics ---
Connecting to
http://192.168.99.100:30001/
I can see the tomcat page:
UPDATE 2
Here is my screenshot... I see data in your database with no error.
I suggest checking your DB configuration.
As a beginner, I did the same work as you and ran into the same problems.
This is my solution; maybe you can give it a try:
Delete these configurations in myweb-rc.yaml, because they override the system default values:
env:
- name: MYSQL_SERVICE_HOST
  value: "mysql"
- name: MYSQL_SERVICE_PORT
  value: "3306"
Change the mysql image tag in mysql-rc.yaml; use a lower-version mysql:
image: hub.c.163.com/library/mysql:5.5
Create the Services first, then create the pods; refer to the following sequence:
kubectl create -f myweb-svc.yaml
kubectl create -f mysql-svc.yaml
kubectl create -f mysql-rc.yaml
kubectl create -f myweb-rc.yaml
You can refer to this doc: Discovering services.
Good luck!