nginx ingress rewrite target in reverse - kubernetes-ingress

I'm running zoneminder as a pod that responds to the path "/zm." Right now, if I hit the main URL, I get a default index page, and I have to manually add the /zm path to my URL.
I would basically like to add "/zm" to the path for the backend service, so that zoneminder.example.com goes to "zoneminder.svc/zm".
Long story short, I want to do the nginx ingress rewrite-target, but in reverse.
I played with the annotation "nginx.ingress.kubernetes.io/rewrite-target", but nothing I tried worked.
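For what it's worth, one commonly suggested way to do this with ingress-nginx 0.22+ is to capture the request path with a regex and prepend /zm in rewrite-target. A minimal sketch, assuming the backend Service is named zoneminder and listens on port 80 (both of those are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zoneminder
  annotations:
    # rewrites "/" to "/zm/" and "/foo" to "/zm/foo" before proxying to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /zm/$1
spec:
  ingressClassName: nginx
  rules:
  - host: zoneminder.example.com
    http:
      paths:
      - path: /(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: zoneminder
            port:
              number: 80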

Related

Exception with proxy or filter in k8s

I have a domain-based microservices architecture, but I want to handle exceptions in a more generic way... I want a generic wrapper-style service that I can use to send meaningful messages to upstream systems... I have heard of proxies and filters, but can somebody guide me on how to implement this, or suggest another way? The reason for implementing it separately is that I don't want to modify each endpoint call in code.
You should look into the NGINX Ingress controller and use custom-http-errors.
Enables which HTTP codes should be passed for processing with the error_page directive.
Setting at least one code also enables proxy_intercept_errors, which is required to process error_page.
Example usage: custom-http-errors: 404,415
This would work by creating a ConfigMap for the ingress controller:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration-ext
data:
  custom-http-errors: "502,503,504"
  proxy-next-upstream-tries: "2"
  server-tokens: "false"
Also have a look at this blog post.
Another way would be to add annotations to your Ingress; they will catch the errors you want and redirect them to a different service, in this case nginx-errors-svc:
nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc
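For context, these annotations live under metadata.annotations of the Ingress itself. A minimal sketch, where the Ingress name my-app, the host app.example.com, and the backend port 80 are placeholders, and nginx-errors-svc is whatever error-handling Service you deploy:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
    nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80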
If you have issues with that, try using server-snippet, which adds custom configuration to the server configuration block:
nginx.ingress.kubernetes.io/server-snippet: |
  location @custom_503 {
    return 404;
  }
  error_page 503 @custom_503;
You should consider reading Custom Error Handling with the Kubernetes Nginx Ingress Controller and Custom Error Page for Nginx Ingress Controller.

Nginx ingress controller rate limiting not working

annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/limit-connection: "1"
nginx.ingress.kubernetes.io/limit-rpm: "20"
and the container image version I am using:
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
I am trying to send 200 requests in a ten-minute range (around 20 requests per minute from a single IP address), and after that it should refuse the requests.
Which nginx ingress version are you using? Please use quay.io/aledbf/nginx-ingress-controller:0.415 and then check. Also, please look at this link - https://github.com/kubernetes/ingress-nginx/issues/1839
Try changing limit-connection: to limit-connections:
For more info check this.
If that doesn't help, please post your commands or describe how you are testing your connection limits.
I changed it to limit-connections. I am setting the annotations in the ingress YAML file and applying it, and I can see the following in the nginx conf:
worker_rlimit_nofile 15360;
limit_req_status 503;
limit_conn_status 503;
# Ratelimit test_nginx
# Ratelimit test_nginx
map $whitelist_xxxxxxxxxxxx $limit_xxxxxxxxxx {
limit_req_zone $limit_xxxxxxxx zone=test_nginx_rpm:5m rate=20r/m;
limit_req zone=test_nginx_rpm burst=100 nodelay;
limit_req zone=test_nginx_rpm burst=100 nodelay;
limit_req zone=test_nginx_rpm burst=100 nodelay;
When I keep these annotations:
nginx.ingress.kubernetes.io/limit-connections: "1"
nginx.ingress.kubernetes.io/limit-rpm: "20"
I can see the burst and the other settings above in the nginx conf file. Can you please tell me whether these make any difference?
There are two things that could be making you experience rate-limits higher than configured: burst and nginx replicas.
Burst
As you have already noted in https://stackoverflow.com/a/54426317/3477266, nginx-ingress adds a burst configuration to the final config it creates for the rate-limiting.
The burst value is always 5x your rate-limit value (it doesn't matter whether it's a limit-rpm or a limit-rps setting).
That's why you got a burst=100 from a limit-rpm=20.
You can read about the effect this burst has on Nginx behavior here: https://www.nginx.com/blog/rate-limiting-nginx/#bursts
But basically, because of the burst, it's possible that Nginx will not return 429 for all the requests you would expect.
The total number of requests allowed through in a given period will be total = rate_limit * period + burst.
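To make that concrete with the numbers from the question (a rough estimate, assuming a single replica): with limit-rpm=20, a ten-minute test and the generated burst=100, Nginx may admit up to 20 * 10 + 100 = 300 requests before it starts rejecting, which is why sending only 200 requests in ten minutes never hits the limit.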
Nginx replicas
Usually nginx-ingress is deployed with a Horizontal Pod Autoscaler enabled, to scale based on demand, or it's explicitly configured to run with more than 1 replica.
In any case, if you have more than 1 replica of Nginx running, each one will handle rate-limiting individually.
This basically means that your rate-limit configuration will be multiplied by the number of replicas, and you could end up with rate-limits a lot higher than you expected.
There is a way to use a memcached instance to make them share the rate-limiting count, as described in: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#global-rate-limiting
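As a rough sketch of what that looks like on newer ingress-nginx releases (double-check the exact keys against the linked docs; the memcached hostname below is just an illustrative placeholder):
# in the controller ConfigMap: point all replicas at a shared memcached instance
global-rate-limit-memcached-host: memcached.default.svc.cluster.local
global-rate-limit-memcached-port: "11211"
# on the Ingress: 20 requests per minute, counted across all replicas together
nginx.ingress.kubernetes.io/global-rate-limit: "20"
nginx.ingress.kubernetes.io/global-rate-limit-window: "1m"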

Keycloak redirect_fragment conflicts with react-router hashRouter

I'm running into an issue with the redirection that happens after a user of my app authenticates with Keycloak.
My app uses react-router hashRouter. When the initial redirect happens, I get a redirect_fragment that looks something like this:
http://localhost:3000/lol.html?redirect_fragment=%2F&redirect_fragment=%2Fstate%3D1c5900ee-954f-4532-b01c-dcf5d88f07a2%26code%3DKZNXVqQCcIXTCFu2ZIkx4quXa6zJb59zGKpNIhZwfNo.d2786d1e-67cd-437f-a873-bad49126bad4&redirect_fragment=%2Fstate%3D51a9cb44-b80a-4c14-8f3d-f04dfdb84377%26code%3Dp5cKQ7xVCR_n1s4ucXZTSE3O1T5lwNri_PBKD07Mt1Y.63364a83-f04f-4e64-a33e-faf00f6cd4ff&redirect_fragment=%2Fstate%3D05155315-ab60-4990-8d4e-444c7cce9748%26code%3DBxxpf_uMB28rKAQ6MXFTTrL9RE4rC3UtwCMXLu_K1Zo.4ce56da0-8e52-47e3-a0f2-4f982599bb98#/state=f3e362e4-c030-40ac-80df-9f9882296977&code=8HHTgd3KdlfwcupXR_5nDV0CqZNPV1xdCu3udc6l5xM.97b3ea71-366a-4038-a7ce-30ac2f416807
The URL keeps growing from there. I've read a few posts already indicating that redirection from Keycloak might have a problem with client-side routing via location.hash... Any thoughts would be appreciated!
I think I figured it out!
The redirection loop seems to stop if I use the 'noslash' hashRouter instead of the default, which contains a slash.
My URLs look like this: localhost:3000/lol.html#client/side/route
instead of this: localhost:3000/lol.html#/client/side/route
The redirection now seems to terminate appropriately after one redirect, but now I'm running into a different problem where the hash portion of my route is not being honored by react-router...
EDIT: I figured the second issue out
react-router creates a wrapper around window.location that it uses to tell which client side "page" it is currently on. I found that this wrapper was out of sync with window.location.
Check out this console output; it was taken immediately after the redirection resolved (and the page was blank):
history pathname is /state=aon03i-238hnsln-soih930-8hsdlkh9-982hnkui-89hkgyq-8ihbei78-893hiugsu
history hash is (empty)
window.location pathname is /lol.html
window.location hash is #users/1
The state=blah-blah-blah in history.pathname is part of the redirect URL that Keycloak sends back after auth. You'll notice that window.location is updated to the correct path/hash, but that history seems to be one URL behind. Maybe Keycloak directly modifies window.location to perform this redirection?
I tried using a history.push(window.location.hash) to push the hash fragment and update react router, but got the error "this entry already exists on the stack". Since it clearly is not on the top of the location stack, this led me to believe that react-router compares window.location with its internal location to figure out where it ultimately is. So how did I get around this?
I used history.replace() instead, which just replaces the entry at the top of the stack with a new value instead of pushing a new entry onto the stack. This also makes sense: we don't want users who navigate "back" in their browsers to return to that /state=blah-blah-blah URL, and replace eliminates this entry from the history stack.
One final piece: react-router's history.location, like window.location, has both pathname and hash components. HashRouter uses the history.location.pathname component to keep track of the client-side route after the hash in the browser. The equivalent of this in window.location is stored in window.location.hash, so that is the value we pass to history.replace(), instead of window.location.pathname. This confused me for a bit, but it makes sense when you think about it.
react-router's history also keeps track of its current route with a prepended / instead of a prepended #, since it's just treating it like any normal URL. So before calling history.replace(), I needed to take my window.location.hash, replace the leading # with a /, and then pass that value to history.replace():
const slashPath = window.location.hash.replace('#', '/');
history.replace(slashPath);
Whew!

OpenShift egress router not working

I configured an egress router as described here:
https://docs.openshift.com/container-platform/3.3/admin_guide/managing_pods.html#admin-guide-controlling-egress-traffic
But it does not work.
In my understanding, the options resolve like this:
name: EGRESS_SOURCE <-- The network where the nodes live (in my case, the VM the containers are running on).
value: 192.168.12.99
name: EGRESS_GATEWAY <-- The gateway through which the destination IP address is routable.
value: 192.168.12.1
name: EGRESS_DESTINATION <-- The destination IP of the application I want to reach. In my case it's a MongoDB living in a classical VM.
value: 203.0.113.25
Am I right, or am I missing something?
How would I be able to reach the target?
Do I need to address the source IP to access the MongoDB, or do I simply address the IP of my MongoDB and the traffic gets NAT'd over my egress router (this is how I understood the traffic flow, by the way)?
How can I troubleshoot this kind of problem?
Best Regards,
Marcus
OK, it works now. I created a service and addressed the IP of this service to reach my destination.
The alternative way is to address the IP of the egress container itself.
So, from inside a container, to reach your original destination don't use the original IP; use the egress pod IP instead, or preferably the IP of the created service.
!! Attention: the destination IP must be outside of the host/node IP range, otherwise it will not work. It seems that if you use a destination IP from your host/node range, the standard gateway will get the request and, I think, discard it. !!
And I would suggest using the egress router image from Red Hat instead of the Origin one; the Red Hat image is the one referenced in the official documentation:
image: registry.access.redhat.com/openshift3/ose-egress-router
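For reference, here is a minimal sketch of the two pieces described above, based on the linked documentation and the values from the question; the names egress-1 and egress-1-svc and the MongoDB port 27017 are assumptions, so adjust them to your setup:
apiVersion: v1
kind: Pod
metadata:
  name: egress-1
  labels:
    name: egress-1
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  containers:
  - name: egress-router
    image: registry.access.redhat.com/openshift3/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE
      value: 192.168.12.99
    - name: EGRESS_GATEWAY
      value: 192.168.12.1
    - name: EGRESS_DESTINATION
      value: 203.0.113.25
---
apiVersion: v1
kind: Service
metadata:
  name: egress-1-svc
spec:
  type: ClusterIP
  ports:
  - name: mongodb
    port: 27017
  selector:
    name: egress-1
Application pods then connect to egress-1-svc:27017 instead of the MongoDB IP, and the egress router forwards the traffic to EGRESS_DESTINATION.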

Can a URL have multiple parts of subdomain to it?

I have a domain name abc.mydomain.com
This is an https URL (http redirects to the https version).
However, I now need to be able to handle www.abc.mydomain.com and redirect it to abc.mydomain.com.
How can I do this? Is it a webserver-level redirect, or something to be done at DNS resolution?
I know my URL already has "abc" as its subdomain and I don't need a "www"; however, we noticed that "www.news.google.com" resolves to "news.google.com", hence wondering if I can achieve that too.
Thank you!
In short, yes.
DNS works on a hierarchy: the DNS server for .com delegates down to the nameserver for your domain, which can delegate further or simply answer the requests itself. Getting DNS to answer for the extra label needs to be your first step.
If you use BIND-style zone files, you can add a record like this (where 123.45.67.89 is your webserver's IP address) so that www.abc.mydomain.com resolves; note that the wildcard needs to sit under abc, because a wildcard at the zone apex will not cover names below the existing abc record:
*.abc IN A 123.45.67.89
Then you also need your webserver to map that hostname to the right virtual host and issue the redirect as desired.
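For example, if the webserver happens to be nginx, the redirect itself can be a small dedicated server block; this is just a sketch, and the certificate paths are placeholders (the certificate must also cover www.abc.mydomain.com):
server {
    listen 443 ssl;
    server_name www.abc.mydomain.com;
    # placeholder certificate paths
    ssl_certificate     /etc/ssl/certs/mydomain.crt;
    ssl_certificate_key /etc/ssl/private/mydomain.key;
    # send everything to the canonical host, preserving path and query string
    return 301 https://abc.mydomain.com$request_uri;
}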