Exception handling with a proxy or filter in Kubernetes

I have a domain-based microservices architecture and want to handle exceptions in a more generic way: a wrapper-style service I can use to send meaningful messages to upstream systems. I have heard of proxies and filters, but can somebody guide me on how to implement this, or suggest another approach? The reason for implementing it separately is that I don't want to modify every endpoint call in code.

You should look into the NGINX Ingress controller and its custom-http-errors option:
Enables which HTTP codes should be passed for processing with the error_page directive.
Setting at least one code also enables proxy_intercept_errors, which is required to process error_page.
Example usage: custom-http-errors: 404,415
This works by adding the option to the ConfigMap the ingress controller reads (the one referenced by its --configmap flag), for example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration-ext
data:
  custom-http-errors: "502,503,504"
  proxy-next-upstream-tries: "2"
  server-tokens: "false"
Also have a look at this blog post.
Another way would be to add annotations to your Ingress; they will catch the errors you want and redirect them to a different service, in this case nginx-errors-svc:

nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc
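For context, a minimal Ingress sketch using those annotations might look like this (the host, backend service name, and port are placeholders, and networking.k8s.io/v1 is assumed; older clusters use extensions/v1beta1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
    nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80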
If you have issues using that, try the server-snippet annotation, which adds custom configuration to the server configuration block:

nginx.ingress.kubernetes.io/server-snippet: |
  location @custom_503 {
    return 404;
  }
  error_page 503 @custom_503;
You should consider reading Custom Error Handling with the Kubernetes Nginx Ingress Controller and Custom Error Page for Nginx Ingress Controller.

Related

Custom JSON file created by Kubernetes-Helm gets cut off when curling/browsing to it

I am new to the whole Kubernetes-Helm thing, so please bear with me; I'll try to make my question as clear as possible.
So I have this ConfigMap.yaml file that does this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: envread-settings
  namespace: {{ .Values.environment.namespace }}
data:
  appsettings.environment.json: |-
    {
      "featureBranch": {{ .Values.component.vars.featureId | quote }},
      "BFFServiceUrl": {{ .Values.environment.BFFServiceUrl | quote }}
    }
---
Where the Values are:
.Values.component.vars.featureId = 123
.Values.environment.BFFServiceUrl = api.dev.integrations/bff-service
This creates an appsettings.environment.json file in a volume path I specified. I need to dynamically create this json file because I need to insert the above variables in there (can't use environment variables sadly for my app).
When I ssh into the container and open the file in vim, everything looks fine, i.e.:
{
  "featureBranch": "123",
  "BFFServiceUrl": "api.dev.integration/bff-service"
}
But when I curl this file I get:
{
"featureBranch": "123",
and the same can be said when I browse directly to this file (I am running an Angular SPA app using ASP.NET Core 3.1).
Is there something horribly wrong I am doing in the yaml file?
Edit
The curl command that I am running is:
curl https://api.integrations.portal/assets/appsettings.json.
There is a NGINX Ingress running in between the request and response.
I had a similar problem once. In my case, curl returned error code 18. You can check this for yourself by running your curl command and then echo $?. As mentioned, I had error code 18, which means:
CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that doesn't match the previously given size.
Here you will find a link to the descriptions of all the errors that curl may return, in case you get a different one.
This seems to be a server-side issue. You might try to work around it by forcing an HTTP 1.0 connection (to avoid the chunked transfer encoding that might cause this problem) with the --http1.0 option.
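For example (using the URL from the question; the second command prints curl's exit status):

curl --http1.0 https://api.integrations.portal/assets/appsettings.json
echo $?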
Additionally, if you have a reverse proxy or load balancer using Nginx and /var (or whichever partition Nginx logs to) is full, Nginx's response might be cut off.
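To rule that out, check the free space on the partition Nginx writes its logs to, e.g.:

df -h /var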
You can also read this question.

Red Hat Service Mesh: custom header transformed into x-b3-traceId is lost

I am trying to integrate a legacy system with a microservice hosted on the Red Hat OpenShift platform. The service is a Java app behind an ingress gateway.
The legacy app passes a unique operation identifier as a custom header, uniqueId. The microservice leverages OpenShift Service Mesh support for Jaeger, so I can pass tracing headers such as x-b3-traceid and see the request trace in the Jaeger UI. Unfortunately, the legacy app cannot be modified and won't send Jaeger headers, but uniqueId conforms to Jaeger's rules and seems OK to use for tracing.
I am trying to transform uniqueId into x-b3-traceid in an EnvoyFilter. The problem is that I can copy it to any other header, but cannot modify the x-b3-* headers. Istio keeps generating a new set of x-b3-* headers no matter what I do in the filter. See the filter code below.
I tried different filter positions (on the ingress gateway, on the pod sidecar, before envoy.router, etc.). Nothing seems to work. Can anyone recommend how to pass a custom header as the trace ID for the service mesh's Jaeger? I could create a custom proxy service replacing one header with another, but that looks redundant. Is it possible to achieve this with the service mesh only?
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: call-id-filter
  namespace: mynamespace
spec:
  filters:
  - filterConfig:
      inlineCode: |
        function envoy_on_request(request_handle)
          local headers = request_handle:headers()
          local uniqueId = headers:get("uniqueId")
          if (uniqueId ~= nil) then
            request_handle:headers():add("x-b3-traceid", uniqueId) -- istio overwrites these values
            request_handle:headers():add("x-b3-spanid", "myspan")
            request_handle:headers():add("x-b3-sampled", "1")
            request_handle:headers():add("my-custom-unique-id", uniqueId) -- works fine
            request_handle:logCritical("envoy filter setting x-b3-traceid with " .. uniqueId)
          end
        end
    filterName: envoy.lua
    filterType: HTTP
    insertPosition:
      index: FIRST
    listenerMatch:
      listenerType: GATEWAY
      portNumber: 9011
  workloadLabels:
    app: istio-ingressgateway

Nginx ingress controller rate limiting not working

annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/limit-connection: "1"
  nginx.ingress.kubernetes.io/limit-rpm: "20"
and the container image version I am using:
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
I am trying to send 200 requests within a ten-minute range (about 20 requests per minute from a single IP address); after that it should refuse the requests.
Which nginx-ingress version are you using? Please try quay.io/aledbf/nginx-ingress-controller:0.415 and check again. Also, please look at this issue: https://github.com/kubernetes/ingress-nginx/issues/1839
Try changing limit-connection: to limit-connections:
For more info check this.
If that doesn't help, please post your commands or describe how you are testing your connection limits.
I changed it to limit-connections. I set the annotations in the Ingress YAML file, applied it, and I can see the following in the nginx conf:
worker_rlimit_nofile 15360;
limit_req_status 503;
limit_conn_status 503;

# Ratelimit test_nginx
map $whitelist_xxxxxxxxxxxx $limit_xxxxxxxxxx {

limit_req_zone $limit_xxxxxxxx zone=test_nginx_rpm:5m rate=20r/m;

limit_req zone=test_nginx_rpm burst=100 nodelay;
limit_req zone=test_nginx_rpm burst=100 nodelay;
limit_req zone=test_nginx_rpm burst=100 nodelay;
When I kept these annotations:

nginx.ingress.kubernetes.io/limit-connections: "1"
nginx.ingress.kubernetes.io/limit-rpm: "20"

I can see the burst and the other settings above in the nginx conf file. Can you please tell me whether these make any difference?
There are two things that could be making you experience rate-limits higher than configured: burst and nginx replicas.
Burst
As you have already noted in https://stackoverflow.com/a/54426317/3477266, nginx-ingress adds a burst configuration to the final config it creates for the rate-limiting.
The burst value is always 5x your rate-limit value (it doesn't matter if it's a limit-rpm or limit-rps setting.)
That's why you got a burst=100 from a limit-rpm=20.
You can read here the effect this burst have in Nginx behavior: https://www.nginx.com/blog/rate-limiting-nginx/#bursts
But basically, because of the burst, it is possible that Nginx will not return 429 for all the requests you would expect it to reject.
The total number of requests routed in a given period will be total = rate_limit * period + burst
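For example, with limit-rpm: "20" (and therefore burst=100), up to 20 * 10 + 100 = 300 requests can be accepted over a ten-minute window, which is why the 200 requests in ten minutes from the question were never refused.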
Nginx replicas
Usually nginx-ingress is deployed with a Horizontal Pod Autoscaler enabled to scale based on demand, or it is explicitly configured to run with more than one replica.
In any case, if you have more than 1 replica of Nginx running, each one will handle rate-limiting individually.
This basically means that your rate-limit configuration will be multiplied by the number of replicas, and you could end up with rate-limits a lot higher than you expected.
There is a way to use a memcached instance to make them share the rate-limiting count, as described in: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#global-rate-limiting
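As a rough sketch (assuming an ingress-nginx version that supports global rate limiting, and a memcached instance reachable at memcached.default.svc, which is a placeholder here), the shared limit is set with Ingress annotations:

nginx.ingress.kubernetes.io/global-rate-limit: "20"
nginx.ingress.kubernetes.io/global-rate-limit-window: "1m"

plus the memcached host in the controller ConfigMap:

global-rate-limit-memcached-host: memcached.default.svc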

Custom service/route creation using feathersjs

I have been reading the documentation for the last 2 days. I'm new to feathersjs.
First issue: links related to feathersjs are not accessible, such as this one.
Giving the following error:
This page isn’t working
legacy.docs.feathersjs.com redirected you too many times.
Hence I'm unable to trace back to similar (or any) previously asked threads.
Second issue: it's a great framework to start with for real-time applications. But not all real-time applications require only DB access; some might need access to something like Amazon S3, Microsoft Azure, etc. Mine is one of those cases, and my problem is more about setting up routes.
I have executed the following commands:
feathers generate app
feathers generate service (service name: upload, REST, DB: Mongoose)
feathers generate authentication (username and password)
I have the setup ready, but how do I add another custom service?
The granularity of the service looks like this (the use case is upload only):
The conventional way of doing it >> router.post('/upload', (req, res, next) => {});
Assume I'm sending a file as form data, plus some extra param like { storage: "s3" } in the request.
Postman --> POST (only) to /upload ---> process the request (isStorageExistsInRequest?) --> then perform the actual upload to the specific storage named in the request, and log the details in the local DB as well --> send the response (success or failure)
In another thread on Stack Overflow, you answered with this:
app.use('/Category/ExclusiveContents/:categoryId', {
  create(data, params) {
    // do complex stuff here
    params.categoryId // the id of the category
    data // -> additional data from the POST request
  }
});
The solution could also be viewed this way: since feathersjs supports a microservice approach, it would be great to have sub-routes like:
/upload_s3 -- uploads to s3
/upload_azure -- uploads to azure and so on.
/upload -- the main route exposed to users. The user sends a request, the request is processed, and the respective sub-route is called (authentication and authorization to be included as well).
How to solve these types of problems using existing setup of feathersjs?
1) This is a deployment issue that Netlify is looking into. The current documentation is not on the legacy domain though; what you are looking for can be found at docs.feathersjs.com/api/databases/querying.html.
2) A custom service can be added by running feathers generate service and choosing the custom service option. The functionality can then be implemented in src/services/<service-name>/<service-name>.class.js according to the service interface. For file uploads, an example on how to customize the parameters for feathers-blob (which is used in the file uploading guide) can be found in this issue.
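As a rough sketch of such a custom service (UploadService and the storage field are made-up names for illustration, and the actual S3/Azure upload calls are stubbed out), the class in src/services/upload/upload.class.js could dispatch on the requested backend:

// src/services/upload/upload.class.js
class UploadService {
  constructor(options = {}) {
    this.options = options;
  }

  // A POST to the service path is routed to create(data, params)
  async create(data, params) {
    switch (data.storage) {
      case 's3':
        // perform the S3 upload here (stub), then log the details to the local DB
        return { status: 'success', storage: 's3' };
      case 'azure':
        // perform the Azure upload here (stub), then log the details to the local DB
        return { status: 'success', storage: 'azure' };
      default:
        throw new Error(`Unknown storage backend: ${data.storage}`);
    }
  }
}

module.exports = UploadService;

It is then registered like any other service, e.g. app.use('/upload', new UploadService());, and authentication hooks can be attached to it in the usual way.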

How to fix cross-site origin policy for server and web-site

I'm using Dropwizard, which I'm hosting, along with a website, on the google cloud (GCE). This means that there are 2 locations currently active:
Some.IP.Address - UI
Some.IP.Address:8080 - Dropwizard server
When the UI tries to call anything from my dropwizard server, I get cross-site origin errors, which is understandable. However, this is posing a problem for me. How do I fix this? It would be great if I could somehow spoof the addresses so that I don't have to fully qualify the resource in the UI.
What I'm looking to do is this:
$.get('/provider/upload/display_information')
Or, if I have to fully qualify
$.get('http://Some.IP.Address:8080/provider/upload/display_information')
I tried setting Origin Filters in Dropwizard per this google groups thread (https://groups.google.com/forum/#!topic/dropwizard-user/ybDOTOxjlLI), but it doesn't seem to work.
In the index.html that is served by the server at http://Some.IP.Address, you might have a jQuery script that looks as follows:
$.get('http://Some.IP.Address:8080/provider/upload/display_information', data, callback);
Of course your browser will not allow access to http://Some.IP.Address:8080 due to the Same-Origin Policy (SOP): the protocol (http, https), the host, and the port all have to be the same.
To achieve Cross-Origin Resource Sharing (CORS) on Dropwizard, you have to add a CrossOriginFilter to the servlet environment. This filter will add some Access-Control-Headers to every response the server is sending. In the run method of your Dropwizard application write:
import java.util.EnumSet;

import javax.servlet.DispatcherType;
import javax.servlet.FilterRegistration;

import org.eclipse.jetty.servlets.CrossOriginFilter;

public class SomeApplication extends Application<SomeConfiguration> {
    @Override
    public void run(SomeConfiguration config, Environment environment) throws Exception {
        // Register the CrossOriginFilter for all request paths
        FilterRegistration.Dynamic filter = environment.servlets().addFilter("CORS", CrossOriginFilter.class);
        filter.addMappingForUrlPatterns(EnumSet.allOf(DispatcherType.class), true, "/*");
        filter.setInitParameter("allowedOrigins", "http://Some.IP.Address"); // allowed origins comma separated
        filter.setInitParameter("allowedHeaders", "Content-Type,Authorization,X-Requested-With,Content-Length,Accept,Origin");
        filter.setInitParameter("allowedMethods", "GET,PUT,POST,DELETE,OPTIONS");
        filter.setInitParameter("preflightMaxAge", "5184000"); // 2 months
        filter.setInitParameter("allowCredentials", "true");
        // ...
    }
    // ...
}
This solution works for Dropwizard 0.7.0 and can be found on https://groups.google.com/d/msg/dropwizard-user/xl5dc_i8V24/gbspHyl4y5QJ.
This filter will add some Access-Control-Headers to every response. Have a look on http://www.eclipse.org/jetty/documentation/current/cross-origin-filter.html for a detailed description of the initialisation parameters of the CrossOriginFilter.
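To check that the filter is working, you can, for example, send a request with an Origin header and look for the Access-Control-Allow-Origin header in the response (the URL is the one from the question):

curl -v -H "Origin: http://Some.IP.Address" http://Some.IP.Address:8080/provider/upload/display_information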