I have a microservice architecture running on a bare-metal Kubernetes cluster. We have two main services: one needs to be exposed publicly, while the other should only be available internally. I'm using ingress-nginx to expose the internal service, but now I have to expose the other service as well, so I thought of deploying another ingress controller for it.
When I try to deploy the second ingress controller in a different namespace, I get an error like:
Error: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot list resource "endpoints" in API group "" at the cluster scope
After that, my first ingress also stops working properly.
The ingress controller deployment YAML I'm using is:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/baremetal/deploy.yaml
The second ingress controller YAML I'm using in the other namespace is: https://github.com/wali97/second-ingress-controller.yaml/blob/main/ingress.yaml
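For context, my rough understanding is that the second controller needs its own IngressClass, controller class, and election ID so the two controllers don't clash; something like the sketch below (the class names and flags here are my assumption, not taken from the linked manifest):
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal              # assumed name for the second controller's class
spec:
  controller: k8s.io/ingress-nginx-internal
---
# Relevant container args on the second controller's Deployment (sketch only):
#   - /nginx-ingress-controller
#   - --ingress-class=nginx-internal
#   - --controller-class=k8s.io/ingress-nginx-internal
#   - --election-id=ingress-nginx-internal-leader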
Related
I am getting the error "ingress Failed build model due to couldn't auto-discover subnets: unable to discover at least one subnet" while deploying an ingress in EKS.
Steps already taken:
1. The cluster name is correct in the deployment file.
2. Below are the annotations I am using in the Ingress resource file:
annotations:
  alb.ingress.kubernetes.io/scheme: internal
  alb.ingress.kubernetes.io/target-type: ip
  kubernetes.io/ingress.class: alb
  kubernetes.io/role/internal-elb: 1
  alb.ingress.kubernetes.io/subnets: subnet-xxxx, subnet-yyy, subnet-zzz
  kubernetes.io/cluster/<ClusterName>: owned   # ---> (I am using the correct cluster name)
Key point:
I am using private subnets in EKS; the subnets were created separately with the proper tags.
2. Below are the annotations I am using in the Ingress resource file:
...
kubernetes.io/role/internal-elb: 1
...
kubernetes.io/cluster/<ClusterName>: owned ---> (I am using the correct cluster name)
The above are tags, not annotations. Try tagging the 3 subnets from your question in the AWS console with kubernetes.io/role/internal-elb: 1 and kubernetes.io/cluster/<ClusterName>: owned, so that the load balancer controller can discover them.
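For illustration only, these are the tag key/value pairs each private subnet should end up with (shown as plain key: value pairs, not a Kubernetes manifest; replace <ClusterName> with your actual cluster name):
# AWS tags on each private subnet (set via the AWS console or CLI)
kubernetes.io/role/internal-elb: "1"
kubernetes.io/cluster/<ClusterName>: owned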
My structure
Kubernetes cluster on GKE
Ingress controller deployed using Helm
An application which returns a list of IP ranges (note: it gets updated periodically)
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
A secured application which is not working
What am I trying to do?
Have my clients' IPs available from my API endpoint, which is done:
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
Deploy my example app with an ingress so it can pull from https://allowed.domain.com and allow people to access the app.
What did I try that didn't work?
Deploying the application with the include feature of nginx:
nginx.ingress.kubernetes.io/configuration-snippet: |
  include /tmp/allowed-ips.conf;
  deny all;
Yes, it works, but the problem is that when /tmp/allowed-ips.conf gets updated, the ingress config doesn't.
I tried to use an if condition to pull the IPs from the endpoint and deny access if the user is not in the list:
nginx.ingress.kubernetes.io/configuration-snippet: |
  set $deny_access off;
  if ($remote_addr !~ (https://2ce8-73-56-131-204.ngrok.io)) {
    set $deny_access on;
  }
I am also using the nginx.ingress.kubernetes.io/whitelist-source-range annotation, but that is not what I am looking for.
None of the options are working for me.
From the official docs of the ingress-nginx controller:
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app)
After the nginx Ingress resource is initially created, the ingress controller assembles the nginx.conf file and uses it for routing traffic. The NGINX web server does not auto-reload its configuration when nginx.conf or other config files are changed.
So, you can work around this problem in several ways:
update the k8s Ingress resource with the new IP addresses and then apply the changes to the Kubernetes cluster (kubectl apply / kubectl patch / something else) - for your options 2 and 3.
run nginx -s reload inside an ingress Pod to reload the nginx configuration - for your option 1 with the included allow-list file:
$ kubectl exec ingress-nginx-controller-xxx-xxx -n ingress-nginx -- nginx -s reload
try to write a Lua script (there are good examples for Nginx+Lua+Redis here and here). You should have a good understanding of nginx and Lua to estimate whether it is worth trying.
Sharing what I implemented at my workplace. We had a managed monitoring tool called Site24x7. The tool pings our server from their VMs with dynamic IPs, and we had to automate whitelisting those IPs on GKE.
nginx.ingress.kubernetes.io/configuration-snippet allows you to set arbitrary Nginx configurations.
Set up a K8s CronJob resource in the specific namespace.
The CronJob runs a shell script, which:
fetches the list of IPs to be allowed (curl, getent, etc.)
generates a set of NGINX configurations (= the value for nginx.ingress.kubernetes.io/configuration-snippet)
runs a kubectl command which overwrites the annotation of the target ingresses.
Example shell/bash script:
#!/bin/bash
site24x7_ip_lookup_url="site24x7.enduserexp.com"
site247_ips=$(getent ahosts $site24x7_ip_lookup_url | awk '{print "allow "$1";"}' | sort -u)
ip_whitelist=$(cat <<-EOT
# ---------- Default whitelist (Static IPs) ----------
# Office
allow vv.xx.yyy.zzz;
# VPN
allow aa.bbb.ccc.ddd;
# ---------- Custom whitelist (Dynamic IPs) ----------
$site247_ips # Here!
deny all;
EOT
)
for target_ingress in $TARGET_INGRESS_NAMES; do
  kubectl -n $NAMESPACE annotate ingress/$target_ingress \
    --overwrite \
    nginx.ingress.kubernetes.io/satisfy="any" \
    nginx.ingress.kubernetes.io/configuration-snippet="$ip_whitelist" \
    description="*** $(date '+%Y/%m/%d %H:%M:%S') NGINX annotation 'configuration-snippet' updated by cronjob $CRONJOB_NAME ***"
done
The shell/bash script can be stored as a ConfigMap and mounted into the CronJob resource.
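A rough sketch of that wiring is below; the names, schedule, image, and service account are assumptions for illustration (the service account still needs RBAC permission to annotate Ingresses), not the exact manifests we used:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: whitelist-sync                       # assumed name
  namespace: my-namespace                    # assumed namespace
spec:
  schedule: "*/30 * * * *"                   # assumed interval
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: whitelist-sync    # needs RBAC to annotate Ingresses
          restartPolicy: OnFailure
          containers:
          - name: sync
            image: bitnami/kubectl:latest       # any image providing bash + kubectl
            command: ["/bin/bash", "/scripts/update-whitelist.sh"]
            env:
            - name: NAMESPACE
              value: my-namespace
            - name: TARGET_INGRESS_NAMES
              value: "ingress-a ingress-b"      # assumed ingress names
            - name: CRONJOB_NAME
              value: whitelist-sync
            volumeMounts:
            - name: script
              mountPath: /scripts
          volumes:
          - name: script
            configMap:
              name: whitelist-sync-script       # ConfigMap holding the script above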
It is really getting hard to understand and debug the rules for ingress. Can anyone share a good reference?
The question is: how does the ingress work without specifying the host?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: http
Upon assigning a host (e.g. host: aws-dns-name.org) it doesn't work.
Upon changing the path to path: /v1/ it also doesn't work :(.
How can I debug/check whether the mapping is done correctly?
Additionally, when should I use extensions/v1beta1 vs. networking.k8s.io/v1beta1?
There is pretty good documentation available here for getting started. It may not cover all aspects, but it does answer your questions. An ingress controller is basically a reverse proxy and follows similar ideas.
The snippet you have shared is called a single-backend or single-service ingress. The / path would be the default. Since it's the only entry, every request on the exposed port will be served by the tied service.
The host entry, host: aws-dns-name.org, should work as long as your DNS resolves aws-dns-name.org to the IP of a node in the cluster or the LB fronting the cluster. Ping that DNS entry and see if it resolves to the target IP correctly. Try curl -H 'Host: aws-dns-name.org' IP_Address to verify that the ingress responds correctly. NGINX uses the Host header to decide which backend service to use. If you send traffic to the IP with a different Host entry, it will not connect to the right service and will serve the default backend.
If you are doing path-based routing, which can be combined with host-based routing, NGINX will route to the correct backend service based on the intercepted path. However, just like any other reverse proxy, it will send the request to the specified path (http://service:80/v1/). Your application may not be listening on the /v1/ path, so you will end up with a 404. Use the rewrite-target annotation to let NGINX know that you serve at /, for example:
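A minimal sketch in the same extensions/v1beta1 style as your snippet (note that on ingress-nginx 0.22 and later, rewrite-target needs a regex capture group):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # rewrites /v1/whatever to /whatever before it reaches the service
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /v1(/|$)(.*)
        backend:
          serviceName: my-app
          servicePort: http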
API resource versions do change in K8s and can be hard to keep up with. The correct apiVersion is now networking.k8s.io/v1beta1 (networking.k8s.io/v1 starting with 1.19); the old version still works but will eventually stop working. I have seen cluster upgrades break applications because somebody forgot to update the API version.
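For comparison, the same single-service ingress written against networking.k8s.io/v1 would look roughly like this; note the pathType field and the nested service name/port structure:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              name: http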
I have a service running in OpenShift on 2 pods, let's call them P1 and P2.
The service does two things:
An API
We listen to Kafka messages from a topic and then process them.
Is there a way I can restrict all calls made to the API to only P1, and all Kafka processing to only P2?
My suggestion may not fit your requirements, but if each pod is running in a separate project, then it would be possible as follows.
First, configure the pods' source IPs statically using an egress IP at the project level; refer to Enabling Static IPs for External Project Traffic for more details.
$ oc patch netnamespace p1_project -p '{"egressIPs": ["1.1.1.1"]}'
$ oc patch netnamespace p2_project -p '{"egressIPs": ["2.2.2.2"]}'
After that, you can allow each pod IP based on a whitelist; refer to Route-specific IP Whitelists for more details.
kind: Route
metadata:
  name: R1
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 1.1.1.1
---
kind: Route
metadata:
  name: R2
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 2.2.2.2
I hope it helps you.
I'm trying to set up a RESTful API application with Kubernetes. I have a barebones setup with a cluster, a static IP address, an app deployed with an exposed service of type NodePort, and an ingress configured with a managed certificate for SSL. I need to enable CORS, and I am not yet using nginx. Is it possible, or do I need to install nginx instead of the default gce class?
Here is my ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "artsdasta-static-ip"
    networking.gke.io/managed-certificates: artsdata-certificate
    ingress.kubernetes.io/enable-cors: "true"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80
To check it, I am using curl as follows:
curl -H "Access-Control-Request-Method: GET" -H "Origin: http://localhost" --head http://db.artsdata.ca
I am expecting the response to include Access-Control-Allow-* headers.
Currently the CORS mechanism is not supported by the GCP L7 load balancer, so the ingress-gce ingress controller does not provide an appropriate annotation to accomplish this functionality; see the related Stack thread here.
If you consider replacing the native GCP ingress class with the NGINX Ingress Controller in order to enable cross-origin requests, then you might have to include at least two annotations in the original Ingress resource definition:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
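For illustration, the Ingress from the question might then look roughly like this under the NGINX class; the cors-allow-* annotations are optional extras I am assuming you may want, and the GCE-specific static-IP and managed-certificate annotations would no longer apply:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    # optional, assumed values:
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, OPTIONS"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: artsdata-kg
          servicePort: 80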
I've found a great guideline in the GCP community tutorials that explains the NGINX Ingress Controller implementation procedure on GKE.
There are also other L7 proxy frameworks on the market that can handle CORS requests, such as Traefik, Skipper, etc.