We have a host of microservices, all served through a single api-gateway service in Kubernetes, with an ingress forwarding to it that looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: beta-https
  namespace: beta
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - beta.xyz.com
    secretName: beta-secret
  rules:
  - host: beta.xyz.com
    http:
      paths:
      - path: /api/(.*)
        backend:
          serviceName: api-svc
          servicePort: 8443
Now we have a new requirement: a subset of the APIs, under /api/secure, must be IP restricted. Any ideas on how to achieve this?
I am assuming I can use nginx.ingress.kubernetes.io/whitelist-source-range in a new config to forward traffic to /api/secure, but how do I ensure the config above does not serve /api/secure?
For someone looking to do something similar: I was able to get this working by using nginx.ingress.kubernetes.io/server-snippet to add a snippet that blocks the traffic:
location ~ "^/api/secure/(.*)" {
deny all;
return 403;
}
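For completeness, the snippet sits under metadata.annotations of the broad /api/(.*) ingress as a block scalar; a minimal sketch, reusing the names from the question:
metadata:
  annotations:
    # blocks /api/secure/* on this ingress; the restricted traffic is
    # handled by a separate ingress with whitelist-source-range
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~ "^/api/secure/(.*)" {
        deny all;
        return 403;
      }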
From what I see, you are trying to create two separate paths, where /api/secure will be accessible only to specific IP addresses.
I have replicated your problem, run some tests, and found a solution.
When you create two ingress objects like yours that differ only in the path field, e.g.
one with path: /api/(.*) and the other with path: /api/secure,
nginx generates the following configuration (output shortened):
server {
  server_name beta.xyz.com;
  listen 80;
  listen 443 ssl http2;
  ...
  location ~* "^/api/secure" {
    ...
  }
  location ~* "^/api/(.*)" {
    ...
  }
}
and the nginx documentation says:
To find location matching a given request, nginx first checks locations defined using the prefix strings (prefix locations). Among them, the location with the longest matching prefix is selected and remembered. Then regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match, and the corresponding configuration is used.
meaning: for regex locations, the first match in the file wins, and the ingress controller writes the more specific /api/secure location before the broader /api/(.*) one.
Based on this, creating two separate ingresses, just as you mentioned,
should solve your problem, because /api/secure will always match before /api/(.*).
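To make that concrete, here is a sketch of what the second, IP-restricted ingress could look like, reusing the names from your question (the CIDR 10.0.0.0/24 is just a placeholder for your allowed range, and the rewrite keeps the upstream path /secure/... consistent with what the broad ingress would have produced):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: beta-https-secure
  namespace: beta
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /secure/$1
    # only these source ranges may reach /api/secure; others get 403
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24"
spec:
  tls:
  - hosts:
    - beta.xyz.com
    secretName: beta-secret
  rules:
  - host: beta.xyz.com
    http:
      paths:
      - path: /api/secure/(.*)
        backend:
          serviceName: api-svc
          servicePort: 8443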
Let me know if that helped.
In my case I have a multi-subdomain host platform, and I am trying configs such as:
nginx.ingress.kubernetes.io/server-snippet: |
  location ~* "^/api/architect/(.*)" {
    allow all;
  }
  location ~* "^/(.*)" {
    deny all;
    allow 149.74.110.92;
    allow 85.138.230.206;
  }
but it doesn't seem to be working so far. I want to block the main URL and only allow access to a certain endpoint...
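If it helps: nginx evaluates allow/deny directives in order and stops at the first match, so with deny all listed first, the allow lines below it never take effect. A reordered sketch of the same snippet (same placeholder IPs):
nginx.ingress.kubernetes.io/server-snippet: |
  location ~* "^/api/architect/(.*)" {
    allow all;
  }
  location ~* "^/(.*)" {
    # allow rules must come before deny all; nginx uses the first match
    allow 149.74.110.92;
    allow 85.138.230.206;
    deny all;
  }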
It is really getting hard to understand and debug ingress rules. Can anyone share a good reference?
The question is: how does the ingress work without specifying a host?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: http
Upon assigning a host (e.g. host: aws-dns-name.org) it doesn't work.
Upon changing the path to path: /v1/ it also doesn't work :(
How can I debug/check whether the mapping is done correctly?
Additionally, when should I use extensions/v1beta1 versus networking.k8s.io/v1beta1?
There is pretty good documentation available here for getting started. It may not cover all aspects, but it does answer your questions. An ingress controller is basically a reverse proxy and follows similar ideas.
The snippet you shared is called a single-backend (or single-service) ingress. The / path is the default; since it's the only entry, every request on the exposed port will be served by the tied service.
Host entry: host: aws-dns-name.org should work as long as your DNS resolves aws-dns-name.org to the IP of a node in the cluster or to the LB fronting the cluster. Ping that DNS entry to check that it resolves to the target IP, and try curl -H 'Host: aws-dns-name.org' IP_Address to verify the ingress responds correctly. NGINX uses the Host header to decide which backend service to use; if you send traffic to the IP with a different Host entry, it will not connect to the right service and will serve the default backend.
If you are doing path-based routing, which can be combined with host-based routing, NGINX routes to the correct backend service based on the intercepted path. However, just like any other reverse proxy, it sends the request to the specified path (http://service:80/v1/). Your application may not be listening on the /v1/ path, so you end up with a 404. Use the rewrite-target annotation to let NGINX know that you are serving at /.
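As a sketch (path and service names assumed from your question), the capture-group style from the ingress-nginx docs would look like:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      # /v1/foo is forwarded upstream as /foo
      - path: /v1(/|$)(.*)
        backend:
          serviceName: my-app
          servicePort: http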
API resource versions do change in K8s and can be hard to keep up with. The correct apiVersion now is networking.k8s.io/v1beta1 (networking.k8s.io/v1 starting with 1.19); the old version still works but will eventually stop working. I have seen cluster upgrades break applications because somebody forgot to update the API version.
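For reference, a sketch of the same single-backend ingress in the networking.k8s.io/v1 shape (note that pathType is now required and the backend moved under service):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              name: http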
We are using AKS (Azure Kubernetes Service) for managed Kubernetes clusters, and for the most part we are happy with the benefits the platform brings, but we face some issues as well.
On AKS, if you host a service of type LoadBalancer, it automatically creates a new dynamic IP address (an Azure resource) and assigns it to the service. This is not optimal if you want to whitelist IPs, so we switched to the Nginx ingress controller (no particular reason for choosing Nginx). We have a lot of apps - APIs, SPAs - one ingress controller for the whole cluster, and a separate cluster per environment (QA/Staging/Prod etc.). So we need to manage routing somehow, and the ingress path parameter felt like the way to go. Example:
http://region.azurecloud.com/students/
http://region.azurecloud.com/courses/
where students and courses are the ingress paths; you can then append /api/student, for example, to reach a particular API. The result would be http://region.azurecloud.com/students/api/student/1, which is not perfect but does the job for now.
This is what the ingress looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: students-api-ingress
  namespace: university
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: region.azurecloud.com
    http:
      paths:
      - path: /students(/|$)(.*)
        backend:
          serviceName: students-api-service
          servicePort: 8001
This, however, does not work very well with SPA applications such as React, Vue or Angular; we face the same problem regardless of the technology. They are hosted behind Nginx in Docker, so this is what the Dockerfile looks like:
# build environment
FROM node:12.2.0-alpine as build
WORKDIR /app
COPY package*.json /app/
RUN npm install --silent
COPY . /app
RUN npm run build
# production environment
FROM nginx:1.16.0-alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And here is the nginx.conf file:
server {
  listen 80;
  location / {
    root /usr/share/nginx/html;
    try_files $uri $uri/ /index.html =404;
    index index.html index.htm;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
      root /usr/share/nginx/html;
    }
    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' "$http_origin";
      add_header 'Access-Control-Allow-Methods' 'GET, POST, DELETE, PUT, PATCH, OPTIONS';
      add_header 'Access-Control-Allow-Credentials' 'true';
      add_header 'Vary' 'Origin';
    }
    add_header 'Access-Control-Allow-Origin' "$http_origin" always;
    add_header 'Access-Control-Allow-Credentials' 'true' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, PATCH, DELETE, OPTIONS' always;
    add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
  }
  include /etc/nginx/extra-conf.d/*.conf;
}
The problem comes when assets such as .js files or images are requested by the application. It builds the URL as ingress.host/asset.name, e.g. http://region.azurecloud.com/2342424ewfwer.js, instead of including the ingress path as well, which would look like http://region.azurecloud.com/spa/2342424ewfwer.js,
and the result is a 404 Not Found error for all assets.
The applications work properly if the ingress path is just set to / without any rewrite annotations, but this is a problem because you cannot have multiple applications on the bare ingress host. One solution is a separate ingress controller for each SPA application, but this brings us back to the initial issue with load balancers: a separate load balancer and IP address for each SPA app, which is what we want to avoid here.
I guess I am not the only person hosting SPA applications behind the nginx ingress controller on Kubernetes, but all similar topics I managed to find ended pretty much nowhere, with no clear solution or with suggestions that did not work for us. I wonder where the problem comes from - the nginx web server or the ingress controller - and whether ingress controllers are generally the way to go for managing application routing on Kubernetes. I would appreciate any help or advice on this.
Thank you,
R
The way I usually deal with this for SPAs is to have different hostnames for each SPA. For example, in a non-production cluster having two SPAs named student-portal and teacher-portal, I would create DNS records for student-portal.mydomain.com, teacher-portal.mydomain.com pointing to the public IP of the cluster load balancer.
Include the domain name in the rules of the ingress resource.
I find this is the most efficient way and avoids needing to deal with each SPA framework individually.
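A sketch of what one of those per-host rules could look like (names such as student-portal-service are placeholders):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: student-portal-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: student-portal.mydomain.com
    http:
      paths:
      # no rewrite needed: the SPA is served from /, so root-relative
      # asset URLs like /2342424ewfwer.js resolve correctly
      - path: /
        backend:
          serviceName: student-portal-service
          servicePort: 80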
I have a K8s cluster running with a few services in it. Because of K8s DNS, services within the cluster can talk to each other via HTTP requests using their name as the URL (e.g. http://foo-bar-svc). This is great because I don't need to use an IP address, which I'm assuming would change every time a pod gets redeployed.
Now I want a Cloud Function to be able to post a request to one of these services.
I've followed this guide and successfully created a VPC Connector.
From my Cloud Function, I can make an HTTP request to a service in my K8s cluster, but only if I use an explicit IP address.
How can I instead use one of the URLs that the K8s DNS can resolve?
The best way to expose a K8s service to incoming host-based requests is an Ingress.
You can define an Ingress resource linked to your service, for example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080
In this example we define a host, foo.bar.com, and depending on the path (/foo or /bar) we route to the service behind it. Of course, you can replace these with the prefix /* to route everything to one specific service path.
Please refer to the documentation: https://kubernetes.io/docs/concepts/services-networking/ingress/
With this configuration, though, you need a load balancer in front and a DNS alias pointing to it:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress?hl=en
And to be more resilient, you can add an ingress controller (nginx, traefik, ...): https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
So the flow will be:
Client resolves DNS -> LB -> Ingress controller -> Service -> Pod -> Container.
I hope it helps.
I want my ingress nginx to return 200 on every request.
If I had access to the nginx configuration, I would have done something like this:
location = /health {
  return 200;
}
But I'm not sure how to do it in ingress configuration YAML
Consider that the Kubernetes ingress object, when using the Nginx controller, is mostly meant to do routing rather than serve requests by itself. What actually serves them is the backend deployed in the cluster, and that part is what returns the status codes, not the ingress.
The controller has a feature somewhat similar to what you want, but for errors only: it makes the ingress add some headers so that a backend can interpret them and return some non-standard code response.
It might be possible to make them respond 200 if you modify this backend. However, I find it less disruptive and more straightforward to just catch all incoming requests in the ingress and route them to a custom Nginx backend that always responds 200 (you already have the Nginx configuration for that):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
  labels:
    app: all-good
  name: happy-ingress
spec:
  rules:
  - host: "*"
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: ok-status-test
          servicePort: 80
With this approach, you can even add non-200 backends and match them using regex, so the ingress can be fully reusable.
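ok-status-test is an assumed service name here; its pod could run nginx with a config as simple as:
server {
  listen 80;
  # answer every request with 200, regardless of path or method
  location / {
    return 200 'OK';
  }
}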
I'm running on the Google Container Engine platform and have an ingress for which I would like a default backend service for almost all of my domains (there are quite a few), but another, specific service for one domain. Going by my understanding of the ingress user guide (search for "Default Backends:" there), the config below should work correctly.
However, it never creates the second backend. Running kubectl describe ingress on the resulting ingress, and looking at the LB in the Google console, only the first "default" backend service is listed. Turning the default into a rule fixes the problem, but means I have to explicitly list all of the domains I want to support.
So, I'm assuming I have a bug in the config below. If so, what is it?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: boringsites
spec:
  backend:
    serviceName: boringsites
    servicePort: 80
  tls:
  - secretName: boringsites-tls
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - backend:
          serviceName: other-svc
          servicePort: 80
I just created https://gist.github.com/bprashanth/9f4533b19fd864b723ba0720a3648fa3#file-default-basic-yaml-L94 on Kubernetes 1.3 and it works as expected. Perhaps you can debug backwards? Where are you running kube, and what version are you using? There is a known and fixed race in 1.2 that you might be running into, especially if you updated the ingress. Also note that you need services of type=NodePort, or the ingress controller on GCE will ignore the service you plugged into the resource.
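To illustrate that last point, a sketch of the NodePort service shape the GCE controller expects (the selector and targetPort are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: other-svc
spec:
  # the GCE ingress controller ignores services that are not NodePort
  type: NodePort
  selector:
    app: other-app
  ports:
  - port: 80
    targetPort: 8080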