GCE LB not picking up readinessProbe URL

I updated the readiness probe in my deployment file and also specified the port in the Ingress, Service, and Deployment.
But my GCE Load Balancer health check is not picking up the new path (even after I delete and recreate the Ingress).
Am I missing any configuration?
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: custom-app-managed-cert
spec:
  domains:
    - uat.xyz.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: app-gce-uat-ip
    networking.gke.io/managed-certificates: custom-app-uat-managed-cert
spec:
  rules:
    - host: "uat.xyz.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: custom-app
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: custom-app
spec:
  type: LoadBalancer
  selector:
    app: custom-app
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-app-gke
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-app
  template:
    metadata:
      labels:
        app: custom-app
    spec:
      serviceAccountName: app-svc
      containers:
        - name: custom-app-app
          image: xxx
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /health
              port: 80
          env:
            - name: PORT
              value: "80"
---
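A note not in the original post: on GKE, the Ingress controller generally infers health-check settings from the readiness probe only when the backend service is first created, so later probe changes may not propagate. The health-check path can instead be pinned explicitly with a BackendConfig; a hedged sketch (the resource name custom-app-backendconfig is illustrative):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: custom-app-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /health   # path the GCE health check should probe
    port: 80
```

and reference it from the Service via an annotation:

```yaml
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "custom-app-backendconfig"}'
```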

Related

K3S: Can't access my web service. Is there any problem in my YAML files?

I deployed my web service on K3s and use DuckDNS for HTTPS. I can reach my domain over HTTPS, but I can't reach a specific URL of my web service.
Here are my YAML files:
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auto-trade-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auto-trade-api
  template:
    metadata:
      labels:
        app: auto-trade-api
    spec:
      containers:
        - name: auto-trade-api
          image: image
          args: ['yarn', 'start']
          resources:
            requests:
              cpu: '200m'
              memory: '200Mi'
            limits:
              cpu: '200m'
              memory: '200Mi'
          envFrom:
            - secretRef:
                name: auto-trade-api
          ports:
            - containerPort: 3001
      restartPolicy: Always
      imagePullSecrets:
        - name: regcred
Service
apiVersion: v1
kind: Service
metadata:
  name: auto-trade-api
spec:
  type: "NodePort"
  selector:
    app: auto-trade-api
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auto-trade-api
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: "letsencrypt"
spec:
  tls:
    - hosts:
        - mydomain
      secretName: auto-trade-api-tls
  rules:
    - host: mydomain
      http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: auto-trade-api
                port:
                  number: 3001
I tried to access https://mydomain/users, a route I created, but it only displays "404 page not found".
I think I connected each component correctly.
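A hedged observation, not from the original thread: with pathType: Prefix the path is matched as literal /-separated elements, so path: /* will not match /users; a plain Prefix of / is the usual form:

```yaml
paths:
  - path: /
    pathType: Prefix
    backend:
      service:
        name: auto-trade-api
        port:
          number: 3001
```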

AWS EKS service ingress and ALB --no ADDRESS

I seem to be having an issue with the way the ports are set up in this manifest, which is for a simple Go app. The app is configured to listen on port 3000.
The container runs fine on my local machine (localhost:3000), but I get no ADDRESS when I look at the Ingress (k get ingress ...).
The aws-load-balancer-controller logs this error when I try to run the image on EKS:
controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"fiber-demo","namespace":"demo","error":"ingress: demo/fiber-demo: unable to find port 3000 on service demo/fiber-demo"
This is my k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fiber-demo
  template:
    metadata:
      labels:
        app: fiber-demo
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - arm64
      containers:
        - name: fiber-demo
          image: 240195868935.dkr.ecr.us-east-2.amazonaws.com/fiber-demo:0.0.2
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fiber-demo
  namespace: demo
  labels:
    app: fiber-demo
spec:
  selector:
    app: fiber-demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fiber-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: fiber-demo
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: fiber-demo
              servicePort: 3000
Am I simply not able to specify a targetPort other than port 80 in the Service?
Am I simply not able to specify a targetPort other than port 80 in the Service?
backend.servicePort refers to the port exposed by the Service, not by the container:
...
backend:
  serviceName: fiber-demo # <-- the Ingress targets this Service, not the container
  servicePort: 80 # <-- the port the Service exposes, not the port the container exposes
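The extensions/v1beta1 Ingress API was removed in Kubernetes 1.22, so on newer clusters the same fix looks like this under networking.k8s.io/v1 (a sketch, keeping the names from the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fiber-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fiber-demo
                port:
                  number: 80   # the Service port, not the containerPort
```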

Traefik V2: Why requests to subfolders are not routed properly?

I am trying to migrate from Traefik v1 to v2 without using IngressRoute or Middleware.
Requests to /backend/something should be routed to the root of my backend service as /something.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/redirect-regex: /backend$
    traefik.ingress.kubernetes.io/redirect-replacement: /backend/
    traefik.ingress.kubernetes.io/request-modifier: "ReplacePathRegex: ^/backend/(.*) /$1"
spec:
  rules:
    - host: demo.myapp.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              serviceName: frontend-app
              servicePort: 80
          - path: /backend # requests to /backend/something should end up in /something
            pathType: ImplementationSpecific
            backend:
              service:
                name: backend-api
                port: http
How can I strip that prefix?
At the moment requests to /backend/something end up at /backend/something/ but they should arrive as /something.
Thank you in advance.
PS: is there a "tool" to monitor or test such requests?
This is a demo deployment I work with:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    app: nginx
  type: ClusterIP
  sessionAffinity: None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
  namespace: default
  labels:
    app: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: rancher/hello-world:latest
          resources: {}
          ports:
            - containerPort: 80
              name: http
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
spec:
  selector:
    app: hello-world
  type: ClusterIP
  sessionAffinity: None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
    # usage: <namespace>-<middlewareName>@kubernetescrd (the @kubernetescrd suffix is mandatory)
    traefik.ingress.kubernetes.io/router.middlewares: production-middleware-frontend@kubernetescrd,production-middleware-backend@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
    # cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  rules:
    - host: yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 5000
          - path: /backend
            pathType: ImplementationSpecific
            backend:
              service:
                name: backend
                port:
                  number: 3000
  tls:
    - hosts:
        - "yourdomain.com"
      secretName: yourdomain-com-production-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: middleware-backend
  namespace: production
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  stripPrefix:
    prefixes:
      - /backend
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: middleware-frontend
  namespace: production
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  stripPrefix:
    prefixes:
      - /
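On the PS about testing such rewrites: one low-tech option is to sanity-check the rewrite pattern locally before deploying. A minimal sketch in Python, approximating Traefik's ReplacePathRegex semantics (Traefik uses Go's regexp with $1-style references; Python's re uses \1, but simple patterns like this behave the same in both):

```python
import re

def replace_path_regex(path: str, pattern: str, replacement: str) -> str:
    """Approximate Traefik's ReplacePathRegex middleware for local testing."""
    return re.sub(pattern, replacement, path)

# The rule from the question: /backend/something should reach the backend as /something.
print(replace_path_regex("/backend/something", r"^/backend/(.*)", r"/\1"))  # /something
# Paths outside the prefix are left untouched.
print(replace_path_regex("/other", r"^/backend/(.*)", r"/\1"))  # /other
```

For live traffic, `curl -v` against the Ingress host plus the Traefik access logs and dashboard show which router matched and what path was forwarded.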

How can I deploy multiple WordPress instances in Kubernetes?

I am trying to deploy WordPress/MySQL in Kubernetes.
I want MySQL and WordPress to use different volumes: NFS for WordPress and hostPath for MySQL.
But WordPress and MySQL are not connecting, and I don't know why. I'd appreciate your help.
Here's my code:
Mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      ports:
        - containerPort: 3306
          protocol: TCP
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: qwer1234
      volumeMounts:
        - name: mysql-volume
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-volume
      persistentVolumeClaim:
        claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  labels:
    app: mysql
spec:
  type: ClusterIP
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: mysql-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /vol/mysql
wordpress.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql:3306
            - name: WORDPRESS_DB_PASSWORD
              value: P#ssw0rd
          volumeMounts:
            - mountPath: /nfs-volume/html
              name: wordpress-pv
          ports:
            - protocol: TCP
              containerPort: 80
      volumes:
        - name: wordpress-pv
          persistentVolumeClaim:
            claimName: wordpress-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: wordpress-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.201.11
    path: /nfs-volume
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-svc
  labels:
    app: wordpress
spec:
  type: LoadBalancer
  selector:
    app: wordpress
  ports:
    - protocol: TCP
      port: 80
You have appended the port number at the end of the environment variable; please try it without the port:
env:
  - name: WORDPRESS_DB_HOST
    value: mysql:3306
Instead, use the name of your MySQL Service (mysql-svc in your manifests):
env:
  - name: WORDPRESS_DB_HOST
    value: mysql-svc
You can check the example at: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
If you read the documentation of the Docker image, they also provide the host name without the port as the value.
Also, in the WordPress environment you have to pass the MySQL password, which you are passing wrong:
- name: WORDPRESS_DB_PASSWORD
  value: P#ssw0rd
instead it should be:
- name: WORDPRESS_DB_PASSWORD
  value: qwer1234
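One way to verify from inside the cluster that the MySQL Service is reachable at all (a sketch; it assumes the mysql-svc Service name and the qwer1234 root password from the manifests in the question):

```shell
# Start a throwaway MySQL client Pod, resolve the Service name, and try a login.
kubectl run mysql-client --rm -it --image=mysql:5.7 --restart=Never -- \
  mysql -h mysql-svc -uroot -pqwer1234 -e "SELECT 1;"
```

If the login succeeds here but WordPress still cannot connect, the problem is in the WordPress environment variables rather than in the Service.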

Reference nginx-ingress host name to internal service

After creating all required resources and configuring the nginx-ingress controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-api
  strategy: {}
  template:
    metadata:
      labels:
        app: user-api
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
        - name: user-api
          image: doumeyi/user-api-amd64:1.0
          ports:
            - name: user-api
              containerPort: 3000
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: user-api
spec:
  selector:
    app: user-api
  ports:
    - name: user-api
      port: 3000
      targetPort: 3000
  type: LoadBalancer
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /user-api
            backend:
              serviceName: user-api
              servicePort: 3000
I can see example.com serve the 404 not found page, but example.com/user-api does not show anything from my user-api service.
It seems the nginx-ingress cannot resolve the host name to the internal service; how should I fix it?
If NGINX cannot find a route to your Pod, it should not respond with 404; it would return a 502 (Bad Gateway), AFAIK. I assume the 404 comes from your application.
NGINX-Ingress changed its rewrite behavior in version 0.22, as mentioned here.
The Ingress resource should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /user-api(/|$)(.*)
            backend:
              serviceName: user-api
              servicePort: 3000