I am replacing the nginx ingress with the Gloo ingress controller in my Kubernetes cluster and want to set a timeout for the response. There is an annotation for this in nginx:
nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
Is there anything similar in the gloo-ingress-controller, or do I have to use a VirtualService for this?
The only annotation that you are supposed to use with Gloo is kubernetes.io/ingress.class: gloo, which is the standard way to mark an Ingress object as handled by a specific Ingress controller. This requirement goes away if you configure Gloo to be the default Ingress controller for your cluster. Also, according to the documentation:
If you need more advanced routing capabilities, we encourage you to
use Gloo VirtualServices by installing as glooctl install gateway.
Gloo Gateway uses Kubernetes Custom Resources instead of Ingress
Objects as the only way to configure Ingress’ beyond their basic
routing spec is to use lots of vendor-specific Kubernetes Annotations
to your Kubernetes manifests.
So you are supposed to use a VirtualService to achieve your goal. You can see an example below:
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: 'default'
  namespace: 'gloo-system'
spec:
  virtualHost:
    domains:
      - '*'
    routes:
      - matchers:
          - prefix: '/petstore'
        routeAction:
          single:
            upstream:
              name: 'default-petstore-8080'
              namespace: 'gloo-system'
        options:
          timeout: '20s'
          retries:
            retryOn: 'connect-failure'
            numRetries: 3
            perTryTimeout: '5s'
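If it helps, a minimal way to apply and verify the VirtualService (assuming the manifest is saved as virtualservice.yaml and glooctl is installed, both assumptions on my part) could be:
# Apply the VirtualService and check that Gloo has accepted it
kubectl apply -f virtualservice.yaml
glooctl get virtualservice default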
I hope this helps.
Related
I want my ingress nginx to return 200 on every request.
If I had access to the nginx configuration, I would have done something like this:
location = /health {
return 200;
}
But I'm not sure how to do it in the ingress configuration YAML.
Consider that the Kubernetes Ingress object, when using the Nginx controller, is mostly meant to do routing rather than serve requests by itself. What actually serves them is the backend deployed in the cluster, and that part is what returns the status codes, not the ingress.
The controller has a feature somewhat similar to what you want, but for errors only. It only makes the ingress add some headers so that a backend can interpret them and return a non-standard response code.
It might be possible to make them respond 200 if you modify this backend. However, I find it less disruptive and more straightforward to just "catch all" incoming requests in the ingress and route them to a custom Nginx backend that always responds 200 (you already have the Nginx configuration for that; a sketch of such a backend follows after the Ingress example):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
  labels:
    app: all-good
  name: happy-ingress
spec:
  rules:
  - host: "*"
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: ok-status-test
          servicePort: 80
With this approach, you can even add non-200 backends to it and match them using regex, so the ingress stays fully reusable.
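For completeness, here is a rough sketch of what the ok-status-test backend could look like, a minimal nginx Deployment plus Service driven by a ConfigMap (the ConfigMap name and image tag are my assumptions):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ok-status-conf
data:
  default.conf: |
    server {
      listen 80;
      # Respond 200 to everything, including /health
      location / {
        return 200;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ok-status-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ok-status-test
  template:
    metadata:
      labels:
        app: ok-status-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: conf
        configMap:
          name: ok-status-conf
---
apiVersion: v1
kind: Service
metadata:
  name: ok-status-test
spec:
  selector:
    app: ok-status-test
  ports:
  - port: 80
    targetPort: 80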
I'm trying to set up a RESTful API application with Kubernetes. I have a barebones setup with a cluster, a static IP address, the app deployed with an exposed service of type NodePort, and an ingress configured with a managed certificate for SSL. I need to enable CORS, and I am not yet using nginx. Is it possible, or do I need to install nginx instead of the default gce class?
Here is my ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "artsdasta-static-ip"
    networking.gke.io/managed-certificates: artsdata-certificate
    ingress.kubernetes.io/enable-cors: "true"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80
To check, I am using curl as follows:
curl -H "Access-Control-Request-Method: GET" -H "Origin: http://localhost" --head http://db.artsdata.ca
I am expecting the response to include Access-Control-Allow-*
Currently, the CORS mechanism is not supported by the GCP L7 load balancer; therefore the ingress-gce ingress controller does not contain an appropriate annotation to accomplish this functionality (see the related Stack thread here).
If you consider replacing the native GCP Ingress class with the Nginx Ingress Controller in order to enable cross-origin requests, then you would have to include at least two annotations in the original Ingress resource definition:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
I've found a great guideline among the GCP community tutorials that explains the Nginx Ingress Controller installation procedure on GKE.
There are also other L7 proxy frameworks on the market that can handle CORS requests, such as Traefik, Skipper, etc.
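For illustration, a sketch of the original Ingress with those two annotations applied might look like this (assuming the Nginx Ingress Controller is already installed; the GCE-specific static-IP and managed-certificate annotations would no longer apply there, and the cors-allow-origin line is optional):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    # Optionally restrict allowed origins instead of the default "*"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80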
I am trying to set up an nginx ingress controller in my GKE cluster, and I'd like to use a static global IP address, but I am struggling to figure out how.
After a lot of research, most guides/stackoverflow/blogs just say "use the kubernetes.io/ingress.global-static-ip-name annotation on your ingress resource"; however, that does not do anything.
Below is an example of my Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"
    nginx.org/websocket-services: "ws-svc"
    kubernetes.io/ingress.global-static-ip-name: my-global-gce-ip
spec:
  tls:
  - secretName: my-secret
    hosts:
    - mysite.com
  rules:
  - host: mysite.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
The service always gets an ephemeral IP address, which is thrown away whenever I recreate the controller.
I suspect the issue here is that the annotation only works for the GCE type of Ingress, not nginx (even though this is stated nowhere).
Next I attempted setting the IP manually in my ingress resource, as shown in this guide, yet when I look at the created service, the external IP address just shows as pending, which some GitHub issues seem to suggest is because it is a global and not a regional IP.
With all this in mind, is there any way to have a static global IP on a GKE cluster using an nginx ingress controller?
You have to set the static IP as loadBalancerIP in the nginx ingress controller's Service, not in the Ingress resource (as you did). As per the documentation, loadBalancerIP is the IP address to assign to the load balancer (if supported).
https://github.com/helm/charts/tree/master/stable/nginx-ingress
spec:
  ...
  externalTrafficPolicy: Cluster
  loadBalancerIP: [your static IP]
  sessionAffinity: None
  type: LoadBalancer
And make sure your IP is regional and not global. Only GCP load balancers (the GCP built-in ingress controller) support global IPs.
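As a rough sketch (the address name, region, and Helm 2 release name below are assumptions), reserving a regional address and passing it to the chart linked above could look like this:
# Reserve a regional (not global) static IP in the cluster's region
gcloud compute addresses create my-nginx-ip --region us-central1
# Install the nginx ingress controller with that IP on its LoadBalancer Service
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.service.loadBalancerIP=<reserved IP address>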
I am trying to deploy an application via GKE. So far I have created two services and two deployments, one for the front and one for the back of the app.
I created an ingress resource using the "gce" controller and mapped the services as shown:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    part: ingress
  name: my-irool-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: my-ip
spec:
  backend:
    serviceName: client-svc
    servicePort: 3000
  rules:
  - http:
      paths:
      - path: /back
        backend:
          serviceName: back-svc
          servicePort: 9000
  - http:
      paths:
      - path: /back/*
        backend:
          serviceName: back-svc
          servicePort: 9000
It worked almost fine (not all the routes were mapped correctly, but it worked). I made modifications to the code (only the code of the application), rebuilt the images, and recreated the services, but the ingress seemed angry with the modifications I had added, and all my services ended up in the unhealthy state.
This is the front service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
    part: front
  name: client
  namespace: default
spec:
  type: NodePort
  ports:
  - nodePort: 32585
    port: 3000
    protocol: TCP
  selector:
    app: app
    part: front
When I do a describe, I get nothing besides the fact that my services are unhealthy.
And at the moment of creation I keep getting:
Warning GCE 6m loadbalancer-controller
googleapi: Error 409: The resource
'[project/idproject]/global/healthChecks/k8s-be-32585--17c7......01'
already exists, alreadyExists
My question is:
What is wrong with the code shown above? Should I map all the services to port 80 (the default ingress port) so it could work?
What are the readinessProbe and livenessProbe? Should I add them, or should mapping the services to the default backend be enough?
For your first question, deleting and re-creating the ingress may resolve the issue. For the second question, you can review the full steps of configuring Liveness and Readiness probes here. Furthermore, as defined here (as an example for a pod):
livenessProbe: Indicates whether the Container is running. If the
liveness probe fails, the kubelet kills the Container, and the
Container is subjected to its restart policy. If a Container does not
provide a liveness probe, the default state is Success.
And readinessProbe: Indicates whether the Container is ready to
service requests. If the readiness probe fails, the endpoints
controller removes the Pod’s IP address from the endpoints of all
Services that match the Pod. The default state of readiness before the
initial delay is Failure. If a Container does not provide a readiness
probe, the default state is Success.
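To make that concrete, here is a rough sketch of how both probes could be added to the front container; the probe path, port, and timing values are assumptions, not something taken from your manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
      part: front
  template:
    metadata:
      labels:
        app: app
        part: front
    spec:
      containers:
      - name: front
        image: <your front image>
        ports:
        - containerPort: 3000
        # Restart the container if it stops answering on /
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 20
        # Mark the pod ready only once / answers; the GCE ingress typically
        # derives its health check from the readiness probe
        readinessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 10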
I'm running on the Google Container Engine platform and have an ingress for which I would like a default backend service for almost all of my domains (there are quite a few), but another, specific service for one domain on it. Going by my understanding of the ingress user guide (scan for "Default Backends:" in there), the config below should work correctly.
However, it never creates the second backend. Running kubectl describe ingress on the created ingress, and looking at the LB in the Google console, only the first "default" backend service is listed. Changing the default backend into a rule fixes the problem, but it means I have to explicitly list all of the domains I want to support.
So, I'm assuming I have a bug in the config below. If so, what is it?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: boringsites
spec:
  backend:
    serviceName: boringsites
    servicePort: 80
  tls:
  - secretName: boringsites-tls
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - backend:
          serviceName: other-svc
          servicePort: 80
I just created https://gist.github.com/bprashanth/9f4533b19fd864b723ba0720a3648fa3#file-default-basic-yaml-L94 on Kubernetes 1.3 and it works as expected. Perhaps you can debug backwards? Where are you running kube, and what version are you using? There is a known and fixed race in 1.2 that you might be running into, especially if you updated the ingress. Also note that you need Services of type=NodePort, or the ingress controller on GCE will ignore the Service you plugged into the resource.
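For reference, a minimal NodePort Service for the other-svc backend might look roughly like this (the selector label is my assumption, not something from your manifests):
apiVersion: v1
kind: Service
metadata:
  name: other-svc
spec:
  type: NodePort
  selector:
    app: other-svc   # assumed pod label
  ports:
  - port: 80
    targetPort: 80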