How to enable CORS with ingress without using nginx? - kubernetes-ingress

I'm trying to set up a RESTful API application with Kubernetes. I have a barebones setup: a cluster, a static IP address, the app deployed with an exposed Service of type NodePort, and an Ingress configured with a managed certificate for SSL. I need to enable CORS and I am not yet using NGINX. Is it possible, or do I need to install NGINX instead of the default gce class?
Here is my ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "artsdasta-static-ip"
    networking.gke.io/managed-certificates: artsdata-certificate
    ingress.kubernetes.io/enable-cors: "true"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80
To check, I am using curl as follows:
curl -H "Access-Control-Request-Method: GET" -H "Origin: http://localhost" --head http://db.artsdata.ca
I am expecting the response to include the Access-Control-Allow-* headers.

Currently the CORS mechanism is not supported by the GCP L7 load balancer, so the ingress-gce controller does not provide an annotation to accomplish this functionality; see the related Stack thread.
If you consider replacing the native GCP Ingress class with the NGINX Ingress Controller in order to enable cross-origin requests, then you have to include at least two annotations in the original Ingress resource definition:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
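For illustration, a minimal sketch of the Ingress from the question adapted to the NGINX class; the cors-allow-* annotations are optional refinements and the origin value is just an example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    # Optional: restrict allowed origins instead of the default "*"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, OPTIONS"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80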
I've found a good guideline in the GCP community tutorials that explains the NGINX Ingress Controller implementation procedure on GKE.
There are also other L7 proxy frameworks on the market that can handle CORS requests, such as Traefik and Skipper.

Related

Ingress with and without host

It is really getting hard to understand and debug the rules for ingress. Can anyone share a good reference?
The question is: how does the ingress work without specifying a host?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: http
Upon assigning a host (e.g. host: aws-dns-name.org) it doesn't work.
Upon changing the path to path: /v1/ it also doesn't work :(.
How can I debug/check whether the mapping is done correctly?
Additionally, when should I use extensions/v1beta1 versus networking.k8s.io/v1beta1?
There is pretty good documentation available here for getting started. It may not cover all aspects, but it does answer your questions. An ingress controller is basically a reverse proxy and follows similar ideas.
The snippet you have shared is called a single-backend or single-service ingress. The / path would be the default. Since it's the only entry, every request on the exposed port will be served by the tied service.
The host entry (host: aws-dns-name.org) should work as long as your DNS resolves aws-dns-name.org to the IP of a node in the cluster or the LB fronting the cluster. Ping that DNS entry and see if it resolves to the target IP correctly. Try curl -H 'Host: aws-dns-name.org' IP_Address to verify whether the ingress responds correctly. NGINX uses the Host header to decide which backend service to use. If you send traffic to the IP with a different Host entry, it will not connect to the right service and will serve the default backend.
If you are doing path-based routing, which can be combined with host-based routing, NGINX will route to the correct backend service based on the intercepted path. However, just like any other reverse proxy, it will send the request to the specified path (http://service:80/v1/). Your application may not be listening on the /v1/ path, so you will end up with a 404. Use the rewrite-target annotation to let NGINX know that you are serving at /, as in the sketch below.
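A minimal sketch of such a rewrite, assuming a recent ingress-nginx release (0.22+), where rewrite-target uses capture groups:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Everything captured after /v1/ is re-sent to the service at /
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /v1/(.*)
        backend:
          serviceName: my-app
          servicePort: http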
API resource versions do change in K8s and can be hard to keep up with. The correct apiVersion now is networking.k8s.io/v1beta1 (networking.k8s.io/v1 starting with 1.19). The old version still works, but it will eventually stop working; I have seen cluster upgrades break applications because somebody forgot to update the API version.
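For reference, the same single-backend rule in the networking.k8s.io/v1 schema looks roughly like this; note that the backend moved under service and pathType became mandatory:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              name: http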

How to set timeout for gloo ingress controller

I am replacing the NGINX ingress with the Gloo ingress controller in a Kubernetes cluster and want to set a timeout for the response. There is an annotation for this in NGINX:
nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
Is there anything similar to this in gloo-ingress-controller, or do I have to use a VirtualService for this?
The only annotation that you are supposed to use with Gloo is kubernetes.io/ingress.class: gloo, which is the standard way to mark an Ingress object as handled by a specific ingress controller. This requirement will go away if you make Gloo the default ingress controller for your cluster. Also, according to the documentation:
If you need more advanced routing capabilities, we encourage you to use Gloo VirtualServices by installing with glooctl install gateway. Gloo Gateway uses Kubernetes Custom Resources instead of Ingress objects, as the only way to configure Ingresses beyond their basic routing spec is to use lots of vendor-specific Kubernetes annotations in your Kubernetes manifests.
So you are supposed to use a VirtualService in order to achieve your goal. You can see the example below:
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: 'default'
  namespace: 'gloo-system'
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - prefix: '/petstore'
      routeAction:
        single:
          upstream:
            name: 'default-petstore-8080'
            namespace: 'gloo-system'
      options:
        timeout: '20s'
        retries:
          retryOn: 'connect-failure'
          numRetries: 3
          perTryTimeout: '5s'
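To apply and inspect the resource (assuming it is saved as virtualservice.yaml):
kubectl apply -f virtualservice.yaml
kubectl get virtualservice default -n gloo-system -o yaml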
I hope this helps.

Access a K8s service via DNS name from Cloud Function

I have a K8s cluster running with a few services in it. Because of K8s DNS, services within the cluster can talk to each other over HTTP using their name as the URL (e.g. http://foo-bar-svc). This is great because I don't need to use an IP address, which I'm assuming would change every time a pod gets redeployed.
Now I want a Cloud Function to be able to post a request to one of these services.
I've followed this guide and successfully created a VPC Connector.
From my Cloud Function, I can make an HTTP request to a service in my K8s cluster, but only if I use an explicit IP address.
How can I instead use one of the URLs that the K8s DNS can resolve?
The best way to expose a K8s service to incoming requests by hostname is an Ingress.
You can define an Ingress resource linked to your service, for example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080
In this example we define a host, foo.bar.com, and depending on the path (/foo or /bar) we route to the service behind it, as the sketch below shows. Of course you can replace this with the prefix /* to route everything to one specific service path.
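Once foo.bar.com resolves to the load balancer fronting the cluster, the fanout above behaves like this (a sketch assuming both services exist):
curl http://foo.bar.com/foo   # routed to service1 on port 4200
curl http://foo.bar.com/bar   # routed to service2 on port 8080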
Please refer the documentation: https://kubernetes.io/docs/concepts/services-networking/ingress/
But with this configuration you need a load balancer in front and a DNS entry aliased to it:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress?hl=en
And to be more resilient you can add an ingress controller (nginx, traefik, ...): https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
So, the schema will be:
DNS server <-> Client resolv DNS -> LB -> Ingress Controller -> Service -> Pod -> container.
I hope it helps.

Static global IP on GKE using Nginx Ingress?

I am trying to set up an NGINX ingress controller in my GKE cluster and I'd like to use a static global IP address, but I am struggling to figure out how.
After a lot of research, most guides/Stack Overflow answers/blogs just say "use the kubernetes.io/ingress.global-static-ip-name annotation on your ingress resource"; however, that does not do anything.
Below is an example of my Ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"
    nginx.org/websocket-services: "ws-svc"
    kubernetes.io/ingress.global-static-ip-name: my-global-gce-ip
spec:
  tls:
  - secretName: my-secret
    hosts:
    - mysite.com
  rules:
  - host: mysite.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
The service always gets an ephemeral IP address, which is thrown away whenever I recreate the controller.
I suspect the issue at hand here is that the annotation only works for the GCE type of Ingress, not NGINX (even though this is stated nowhere).
Next I attempted setting the IP manually in my ingress resource, as shown in this guide, yet when I look at the service created, the external IP address just shows as pending, which some GitHub issues seem to suggest is because it is a global and not a regional IP.
With all this in mind, is there any way to have a static global IP on a GKE cluster using an NGINX ingress controller?
You have to set the static IP as loadBalancerIP on the nginx ingress controller's Service, not in the Ingress resource (as you did). As per the documentation, loadBalancerIP is the IP address to assign to the load balancer (if supported).
https://github.com/helm/charts/tree/master/stable/nginx-ingress
spec:
  ...
  externalTrafficPolicy: Cluster
  loadBalancerIP: [your static IP]
  sessionAffinity: None
  type: LoadBalancer
And make sure your IP is regional and not global. Only GCP load balancers (the GCP built-in ingress controller) support global IPs.
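A sketch of the two steps, assuming the stable/nginx-ingress Helm chart linked above; the region and address name here are hypothetical:
# Reserve a regional (not global) static IP in the cluster's region
gcloud compute addresses create nginx-ingress-ip --region us-central1
gcloud compute addresses describe nginx-ingress-ip --region us-central1 --format='value(address)'
# Install the controller with that IP assigned to its LoadBalancer Service
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.service.loadBalancerIP=<RESERVED_IP>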

ingress with both rules and default backend in Google Container Engine

I'm running on the Google Container Engine platform and have an ingress that I would like to have a default backend service for almost all of my domains (there are quite a few), but with another, specific service for one domain. Going by my understanding of the ingress user guide (scan for "Default Backends:" in there), the config below should work correctly.
However, it doesn't ever create the second backend. Running kubectl describe ingress on the created ingress and looking at the LB in the Google console, only the first "default" backend service is listed. Changing the default one into a rule fixes the problem but means I have to explicitly list all of the domains I want to support.
So, I'm assuming I have a bug in the config below. If so, what is it?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: boringsites
spec:
  backend:
    serviceName: boringsites
    servicePort: 80
  tls:
  - secretName: boringsites-tls
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - backend:
          serviceName: other-svc
          servicePort: 80
I just created https://gist.github.com/bprashanth/9f4533b19fd864b723ba0720a3648fa3#file-default-basic-yaml-L94 on Kubernetes 1.3 and it works as expected. Perhaps you can debug backwards? Where are you running kube and what version are you using? There is a known and fixed race in 1.2 that you might be running into, especially if you updated the ingress. Also note that you need Services of type=NodePort, or the ingress controller on GCE will ignore the service you plugged into the resource.
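For reference, a minimal sketch of what the GCE controller expects for each backend Service; the selector and targetPort here are hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: other-svc
spec:
  type: NodePort        # required by the GCE ingress controller
  selector:
    app: other-app      # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080    # hypothetical container port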