I have MicroK8s running on my Raspberry Pi and I'm hoping to use a Traefik IngressRoute to expose the Kubernetes API on my subdomain.
Below is my IngressRoute:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kube-api
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`kubernetes.mydomain.com`)
      kind: Rule
      services:
        - kind: Service
          name: kubernetes
          port: 16443 # have also tried 443
  tls:
    secretName: kubernetes.mydomain.com
This setup works fine for my other services and their IngressRoutes, but not for the API.
For the Kubernetes API, I can see that my certificate was generated successfully, but the page just displays 'Internal Server Error'.
Please let me know what additional information I can provide and I will gladly do so!
This issue was because Traefik was trying to connect to kube-apiserver over HTTPS.
I had to use a ServersTransport to allow insecure communication between Traefik and kube-apiserver. This is not a security concern, because the client's connection to Traefik still verifies TLS.
The way to do this can be found at the very bottom of this page:
https://doc.traefik.io/traefik/v2.4/routing/providers/kubernetes-crd/#kind-serverstransport
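For reference, a minimal sketch of what that could look like (the transport name skip-verify is my own; adjust the port and namespace to your setup):
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: skip-verify
spec:
  # Only the hop between Traefik and kube-apiserver skips certificate verification;
  # the client-facing TLS on the IngressRoute is unchanged.
  insecureSkipVerify: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kube-api
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`kubernetes.mydomain.com`)
      kind: Rule
      services:
        - kind: Service
          name: kubernetes
          port: 16443
          scheme: https                  # talk HTTPS to kube-apiserver
          serversTransport: skip-verify  # may need the '<namespace>-skip-verify@kubernetescrd' form depending on your setup
  tls:
    secretName: kubernetes.mydomain.com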
My hosting provider is DigitalOcean. The main page (e.g. /) requires the user to be authenticated; if the user is not authenticated, they are redirected to the identity server. Once the user enters credentials, a POST request is sent to the application as the last step of the OAuth flow.
The application receives this request and handles it correctly, which was verified from the application's logs. It then redirects to the main page https://ui.example.com (302 status code + some cookies).
However, the user sees a 502 error issued by the gateway.
The Ingress configuration is very simple (and it works for GET requests):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rie-ui-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - ui.example.com
    secretName: rie-ui-prd-tls
  rules:
  - host: ui.example.com
    http:
      paths:
      - backend:
          serviceName: rie-ui-svc
          servicePort: 9000
I'm wondering what could be wrong with this configuration?
UPDATE 1: The following log message was found in the log stream of the Ingress controller:
2019/11/20 05:33:09 [error] 1465#1465: *813467 upstream sent too big header while reading response header from upstream, client: 10.131.18.136, server: ui.example.com, request: "POST /signin-oidc HTTP/2.0", upstream: "http://10.244.1.228:9000/signin-oidc", host: "ui.example.com"
I have the impression that this is something to fix in the NGINX Ingress controller settings?
ANSWER:
After studying the documentation, the following changes were made to the Ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rie-ui-ingress
  annotations:
    ...
    nginx.ingress.kubernetes.io/proxy-buffer-size: "64k"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
These settings increase the proxy buffer sizes. More details can be found in the NGINX Ingress controller annotations documentation.
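If you prefer to raise the limits cluster-wide instead of per Ingress, the same values can, as far as I know, also go into the controller's ConfigMap; the ConfigMap name, namespace, and key names below are assumptions from a standard install, so verify them against your deployment and the ingress-nginx ConfigMap documentation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed; use whatever ConfigMap your controller was deployed with
  namespace: ingress-nginx    # assumed namespace
data:
  # Key names should be checked against the ingress-nginx ConfigMap docs.
  proxy-buffer-size: "64k"
  proxy-buffers-number: "8"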
I have a K8s cluster running with a few services in it. Because of K8s DNS, services within the cluster can talk to each other via HTTP requests using their name as the URL (e.g. http://foo-bar-svc). This is great because I don't need to use an IP address, which I'm assuming would change every time a pod gets redeployed.
Now I want a Cloud Function to be able to post a request to one of these services.
I've followed this guide and successfully created a VPC Connector.
From my Cloud Function, I can make an HTTP request to a service in my K8s cluster, but only if I use an explicit IP address.
How can I instead use one of the URLs that the K8s DNS can resolve?
The best way to expose a Kubernetes service to incoming requests by hostname is an Ingress.
You can define an Ingress resource linked to your service, for example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080
In this example we define the host foo.bar.com, and depending on whether the path is /foo or /bar, traffic is routed to the corresponding backend service. Of course, you can replace the paths with a catch-all prefix such as "/*" to route everything to one specific service, as sketched below.
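As a sketch for the asker's setup, routing everything on one host to the single service mentioned in the question (the hostname and service port are assumptions):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: foo-bar-ingress
spec:
  rules:
  - host: foo-bar.internal.example.com   # assumed hostname the Cloud Function would call
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-bar-svc       # service name taken from the question
          servicePort: 80                # assumed service port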
Please refer to the documentation: https://kubernetes.io/docs/concepts/services-networking/ingress/
But with this configuration you need a load balancer in front and a DNS entry aliased to it:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress?hl=en
And to be more resilient, you can add an ingress controller (nginx, traefik, ...): https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
So, the schema will be:
DNS server <-> client resolves DNS -> LB -> Ingress Controller -> Service -> Pod -> container.
I hope it helps.
I am trying to set up an nginx ingress controller in my GKE cluster and I'd like to use a static global IP address, but I am struggling to figure out how.
After a lot of research, most guides/stackoverflow/blogs just say "use the kubernetes.io/ingress.global-static-ip-name annotation on your ingress resource"; however, that does not do anything.
Below is an example of my Ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"
    nginx.org/websocket-services: "ws-svc"
    kubernetes.io/ingress.global-static-ip-name: my-global-gce-ip
spec:
  tls:
  - secretName: my-secret
    hosts:
    - mysite.com
  rules:
  - host: mysite.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
The service always gets an ephemeral IP address, which is thrown away whenever I recreate the controller.
I suspect the issue here is that the annotation only works for the GCE type of Ingress, not nginx (even though this is stated nowhere).
Next I attempted setting the IP manually in my ingress resource, as shown in this guide, yet when I look at the service created, the external IP address just shows as pending, which some GitHub issues suggest is due to the fact that it is a global and not a regional IP.
With all this in mind, is there any way to have a static global IP on a GKE cluster using an nginx ingress controller?
You have to set the static IP as loadBalancerIP in the nginx ingress controller's Service, not in the Ingress resource (as you did). As per the documentation, loadBalancerIP is the IP address to assign to the load balancer (if supported).
https://github.com/helm/charts/tree/master/stable/nginx-ingress
spec:
  ...
  externalTrafficPolicy: Cluster
  loadBalancerIP: [your static IP]
  sessionAffinity: None
  type: LoadBalancer
And make sure your IP is regional, not global. Only GCP load balancers (the GCP built-in ingress controller) support global IPs.
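As a sketch, assuming the stable/nginx-ingress chart linked above and that it exposes the field under controller.service (check the chart's values file), the Helm values would look roughly like this:
# values.yaml (key path is an assumption)
controller:
  service:
    type: LoadBalancer
    externalTrafficPolicy: Cluster
    # Must be a *regional* static IP reserved in the same region as the cluster.
    loadBalancerIP: 203.0.113.10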
I am trying to deploy an application via GKE. So far I have created two Services and two Deployments for the app, one for the front end and one for the back end.
I created an Ingress resource using the "gce" controller and mapped the services as shown:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    part: ingress
  name: my-irool-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: my-ip
spec:
  backend:
    serviceName: client-svc
    servicePort: 3000
  rules:
  - http:
      paths:
      - path: /back
        backend:
          serviceName: back-svc
          servicePort: 9000
  - http:
      paths:
      - path: /back/*
        backend:
          serviceName: back-svc
          servicePort: 9000
It worked almost fine (not all the routes were mapped correctly, but it worked). I then modified the code (only the application code), rebuilt the images, and recreated the services, but the ingress seemed unhappy with the modifications I had made, and
all my services became unhealthy.
This is the front service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
    part: front
  name: client
  namespace: default
spec:
  type: NodePort
  ports:
  - nodePort: 32585
    port: 3000
    protocol: TCP
  selector:
    app: app
    part: front
When I run a describe, I get nothing besides the fact that my services are unhealthy.
And at the moment of creation I keep getting:
Warning GCE 6m loadbalancer-controller
googleapi: Error 409: The resource
'[project/idproject]/global/healthChecks/k8s-be-32585--17c7......01'
already exists, alreadyExists
My questions are:
What is wrong with the configuration shown above? Should I map all the services to port 80 (the default ingress port) so it would work?
What are readinessProbe and livenessProbe? Should I add them, or is mapping one of the services to the default backend enough?
For your first question, deleting and re-creating the ingress may resolve the issue. For the second question, you can review the full steps of configuring Liveness and Readiness probes here. Furthermore, as defined here (as an example for a pod):
livenessProbe: Indicates whether the Container is running. If the
liveness probe fails, the kubelet kills the Container, and the
Container is subjected to its restart policy. If a Container does not
provide a liveness probe, the default state is Success.
And readinessProbe: Indicates whether the Container is ready to
service requests. If the readiness probe fails, the endpoints
controller removes the Pod’s IP address from the endpoints of all
Services that match the Pod. The default state of readiness before the
initial delay is Failure. If a Container does not provide a readiness
probe, the default state is Success.
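As an illustration only (the container name, image, and /healthz path are assumptions; point the probes at whatever endpoint your app answers with a 200 when healthy), the front-end pod spec could carry something like:
spec:
  containers:
  - name: front
    image: gcr.io/my-project/front:latest   # assumed image
    ports:
    - containerPort: 3000
    readinessProbe:
      httpGet:
        path: /healthz       # assumed health endpoint
        port: 3000
      initialDelaySeconds: 10
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /healthz
        port: 3000
      initialDelaySeconds: 15
      periodSeconds: 10
Note that the GCE ingress controller can derive its load balancer health check from the readinessProbe path, so that path needs to return 200, otherwise the backends will stay unhealthy.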
I'm running on the Google Container Engine platform and have an ingress for which I would like a default backend service for almost all of my domains (there are quite a few), but another, specific service for one domain. Going by my understanding of the ingress user guide (scan for "Default Backends:" in there), the config below should work correctly.
However, it never creates the second backend. Running kubectl describe ingress on the created ingress, and looking at the LB in the Google console, only the first "default" backend service is listed. Changing the default one into a rule fixes the problem, but means I have to explicitly list all of the domains I want to support.
So, I'm assuming I have a bug in the config below. If so, what is it?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: boringsites
spec:
  backend:
    serviceName: boringsites
    servicePort: 80
  tls:
  - secretName: boringsites-tls
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - backend:
          serviceName: other-svc
          servicePort: 80
I just created https://gist.github.com/bprashanth/9f4533b19fd864b723ba0720a3648fa3#file-default-basic-yaml-L94 on Kubernetes 1.3 and it works as expected. Perhaps you can debug backwards? Where are you running kube and what version are you using? There is a known and fixed race in 1.2 that you might be running into, especially if you updated the ingress. Also note that you need Services of type=NodePort, or the ingress controller on GCE will ignore the service you plugged into the resource.
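For that last point, a sketch of the Service referenced by the rule, with the selector and target port assumed:
apiVersion: v1
kind: Service
metadata:
  name: other-svc
spec:
  type: NodePort          # required, or the GCE ingress controller ignores the service
  ports:
  - port: 80
    targetPort: 8080      # assumed container port
    protocol: TCP
  selector:
    app: other-app        # assumed pod labels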