Red Hat Service Mesh: custom header transformed into x-b3-traceId is lost - openshift

I am trying to integrate a legacy system with a microservice hosted on the Red Hat OpenShift platform. The service is a Java app behind an ingress gateway.
The legacy app passes a unique operation identifier as a custom header, uniqueId. The microservice leverages OpenShift Service Mesh support for Jaeger, so I can pass tracing headers such as x-b3-traceid and see the request trace in the Jaeger UI. Unfortunately, the legacy app cannot be modified and won't send Jaeger headers, but uniqueId conforms to the Jaeger/B3 format and seems fine to use for tracing.
I am trying to transform uniqueId into x-b3-traceid in an EnvoyFilter. The problem is that I can copy it to any other header, but I cannot modify the x-b3-* headers: Istio keeps generating a new set of x-b3-* headers no matter what I do in the filter. See the filter code below.
I have tried different filter positions (on the ingress gateway, on the pod sidecar, before envoy.router, etc.) and nothing seems to work. Can anyone recommend how I can pass a custom header as the trace ID for the service mesh's Jaeger? I could create a custom proxy service that rewrites one header into the other, but that looks redundant. Is it possible to achieve this with the service mesh alone?
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: call-id-filter
  namespace: mynamespace
spec:
  filters:
  - filterConfig:
      inlineCode: |
        function envoy_on_request(request_handle)
          headers = request_handle:headers()
          uniqueId = headers:get("uniqueId")
          if (uniqueId ~= nil) then
            request_handle:headers():add("x-b3-traceid", uniqueId) -- istio overwrites these values
            request_handle:headers():add("x-b3-spanid", "myspan")
            request_handle:headers():add("x-b3-sampled", "1")
            request_handle:headers():add("my-custom-unique-id", uniqueId) -- works fine
            request_handle:logCritical("envoy filter setting x-b3-traceid with "..uniqueId)
          end
        end
    filterName: envoy.lua
    filterType: HTTP
    insertPosition:
      index: FIRST
    listenerMatch:
      listenerType: GATEWAY
      portNumber: 9011
  workloadLabels:
    app: istio-ingressgateway

Related

Can I split a single scrape target (a load balancer) into multiple targets, based on a label value?

I have a Spring Boot application running on AWS Elastic Beanstalk. There are multiple application instances running. The number of running applications might dynamically increase and decrease from 2 to 10 instances.
I have set up Prometheus, but scraping the metrics has a critical limitation: it is only able to scrape the Elastic Beanstalk load balancer. This means that every scrape will return a different instance (round robin), so the metrics fluctuate wildly.
# prometheus.yml
scrape_configs:
  - job_name: "my-backend"
    metrics_path: "/metrics/prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: [ "dev.my-app.website.com" ] # this is a load balancer
        labels:
          application: "my-backend"
          env: "dev"
(I am pursuing a correct setup, where Prometheus can scrape the instances directly, but because of business limitations this is not possible - so I would like a workaround.)
As a workaround I have added a random UUID label to each application instance using RandomValuePropertySource:
# application.yml
management:
  endpoints:
    enabled-by-default: false
    web:
      exposure:
        include: "*"
  endpoint:
    prometheus.enabled: true
  metrics:
    tags:
      instance_id: "${random.uuid}" # used to differentiate instances behind load-balancer
This means that the metrics can be uniquely identified, so on one refresh I might get
process_uptime_seconds{instance_id="6fb3de0f-7fef-4de2-aca9-46bc80a6ed27",} 81344.727
While on the next I could get
process_uptime_seconds{instance_id="2ef5faad-6e9e-4fc0-b766-c24103e111a9",} 81231.112
Generally this is fine and helps for most metrics, but it is clear that Prometheus gets confused and doesn't store the two results separately. This is a particular problem for counters, as they are supposed to only increase, but because each scrape can hit a different instance, the reported value can jump up or down between scrapes. Graphs end up jagged and disconnected.
I've tried relabelling the instance label (I think that's how Prometheus decides how to store the data separately?), but this doesn't seem to have any effect
# prometheus.yml
scrape_configs:
  - job_name: "my-backend"
    metrics_path: "/metrics/prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: [ "dev.my-app.website.com" ] # this is a load balancer
        labels:
          application: "my-backend"
          env: "dev"
    metric_relabel_configs:
      - target_label: instance
        source_labels: [__address__, instance_id]
        separator: ";"
        action: replace
To reiterate: I know this is not ideal, and the correct solution is to connect directly - that is in motion and will happen eventually. For now, I'm just trying to improve my workaround so I can get something working sooner.
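One variant I am considering is to drop __address__ (which, as far as I can tell, may not be available any more at the metric-relabelling stage) and copy instance_id straight into instance. Untested, so treat it as a sketch:

    # same job as above, only the relabelling part differs (untested sketch)
    metric_relabel_configs:
      - source_labels: [instance_id]  # the per-instance UUID exposed by the app
        target_label: instance        # the label Prometheus uses to keep series apart
        action: replace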

Exception with proxy or filter in k8s

I have a domain-based microservices architecture, but I want to handle exceptions in a more generic way. I want a generic wrapper kind of service so that I can send meaningful messages to upstream systems. I have heard of proxies and filters, but can somebody guide me on how to implement this, or suggest another way? The reason for implementing it separately is that I don't want to modify every endpoint call in code.
You should look into the NGINX Ingress Controller and use custom-http-errors.
Enables which HTTP codes should be passed for processing with the error_page directive
Setting at least one code also enables proxy_intercept_errors which are required to process error_page.
Example usage: custom-http-errors: 404,415
This works by creating a ConfigMap for the ingress controller:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration-ext
data:
  custom-http-errors: 502,503,504
  proxy-next-upstream-tries: "2"
  server-tokens: "false"
Also have a look at this blog post.
Another way would be to add annotations to your Ingress; they will catch the errors you want and redirect them to a different service, in this case nginx-errors-svc (an illustrative Ingress is sketched below):
nginx.ingress.kubernetes.io/custom-http-errors: 404,503
nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc
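On a full Ingress manifest the annotations would sit roughly like this; the Ingress name, host and service names are placeholders, not taken from the answer above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # placeholder
  annotations:
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
    nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc
spec:
  rules:
    - host: app.example.com     # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc   # placeholder
                port:
                  number: 80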
If you have issues with that, try using a server-snippet annotation, which adds custom configuration to the server configuration block:
nginx.ingress.kubernetes.io/server-snippet: |
  location @custom_503 {
    return 404;
  }
  error_page 503 @custom_503;
You should consider reading Custom Error Handling with the Kubernetes Nginx Ingress Controller and Custom Error Page for Nginx Ingress Controller.

Server fails to launch in Google App Engine; OK in Localhost

I have a Flex app written in Go and React that is deployed to Google App Engine. I would like it to interact with a MySQL database (2nd generation) on Google Cloud over a Unix socket. I believe the issue lies with the Go server not launching or not responding to requests (see below for justification). The app is located at https://haveibeenexploited.appspot.com/
The project is simple. I have two routes in my Server:
server.go
package main

import (
    "net/http"

    "searchcontract"
)

func main() {
    http.Handle("/", http.FileServer(http.Dir("./app/build")))
    http.HandleFunc("/search", searchcontract.SearchContract)
    http.ListenAndServe(":8080", nil)
}
The second route ("/search") is activated when a user hits the search button. Ideal behavior should return a row specifying the exploits available for the given "contract address" which React writes out to the screen.
searchcontract/searchcontract.go
package searchcontract

import (
    "encoding/json"
    "net/http"
)

// SearchContract is a handler that queries the DB for compromised contracts.
func SearchContract(w http.ResponseWriter, r *http.Request) {
    var contractName contractID // used for parsing in contractName
    queryResult := getRow(&contractName.Name)
    w.WriteHeader(200)
    json.NewEncoder(w).Encode(queryResult)
}

// getRow queries the DB for a contract whose ID value is name.
func getRow(contractName *string) *ContractVulnerabilityInfo {
    var storage ContractVulnerabilityInfo // stores the row to encode
    // Login to database
    ...
    scanErr := db.QueryRow("SELECT * FROM contracts WHERE ContractAddress=?;", &contractName).Scan(&storage.ContractAddress, &storage.IntegerOverflow, &storage.IntegerUnderflow, &storage.DOS, &storage.ExceptionState, &storage.ExternalCall, &storage.ExternalCallFixed, &storage.MultipleCalls, &storage.DelegateCall, &storage.PredictableEnv, &storage.TxOrigin, &storage.EtherWithdrawal, &storage.StateChange, &storage.UnprotectedSelfdestruct, &storage.UncheckedCall)
    ...
    return &storage
}
My app.yaml file should allow me to deploy this flex app and does:
runtime: go1.12
env: flex

handlers:
- url: /.*
  script: _server # my server.go file handles all endpoints

automatic_scaling:
  max_num_instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

env_variables:
  # user:password@unix(/cloudsql/INSTANCE_CONNECTION_NAME)/dbname
  MYSQL_CONNECTION: root:root@unix(/cloudsql/haveibeenexploited:us-west1:hibe)/mythril

# https://cloud.google.com/sql/docs/mysql/connect-app-engine
beta_settings:
  cloud_sql_instances: haveibeenexploited:us-west1:hibe
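The elided "Login to database" step is just standard database/sql setup along the lines of the sketch below (sketch only - the go-sql-driver/mysql import and the variable names are illustrative and may differ from my actual code):

// sketch: opening the Cloud SQL connection from the MYSQL_CONNECTION env var,
// assuming the github.com/go-sql-driver/mysql driver
package searchcontract

import (
    "database/sql"
    "os"

    _ "github.com/go-sql-driver/mysql"
)

var db *sql.DB

func init() {
    var err error
    // sql.Open only validates the DSN; the first query opens the real connection
    db, err = sql.Open("mysql", os.Getenv("MYSQL_CONNECTION"))
    if err != nil {
        panic(err)
    }
}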
I am able to query the database successfully on localhost (screenshot: localhost correctly shows the address).
However, whenever I deploy to App Engine and query something that should be in the database, it does not show up in the remote app (screenshot: App Engine does not show the address in the database). Furthermore, I get a status code of '0' returned, which indicates to me that the server function isn't even being called ('200' is what I would expect on success, or some other error code otherwise).
Summary
I can't wrap my head around this bug. What works locally should work remotely. Also, I can't debug this app, probably because Stackdriver does not support Flex apps and the dev server Google Cloud provides does not support Go apps.
I believe the primary issue is with Go not speaking to the React element correctly or the routing not being taken care of appropriately.
1) The problem does not lie with the MySQL connection/database access
- I changed my route to be only one page, turned off React, and included a hardcoded query (screenshots: the result on localhost, the result on App Engine).
2) There is an issue in either a) my routing or b) the interaction between React and Go.
3) Go seems to start correctly... at least when React is not started.
Any help is appreciated.
EDIT: I believe that the Go app is indeed still running, but the search function is failing for whatever reason. The reason I believe this is that when I add another route for haveibeenexploited.com/hello, it works.

Custom service/route creation using feathersjs

I have been reading the documentation for the last 2 days. I'm new to feathersjs.
First issue: links related to feathersjs are not accessible, such as this one.
They give the following error:
This page isn’t working
legacy.docs.feathersjs.com redirected you too many times.
Hence I'm unable to trace back to similar (or any) previously asked threads.
Second issue: it's a great framework for building real-time applications, but not all real-time applications need only DB access; they might also require access to something like Amazon S3, Microsoft Azure, etc. That is the case for me, and it's more of a problem with setting up routes.
I have executed the following commands:
feathers generate app
feathers generate service (service name: upload, REST, DB: Mongoose)
feathers generate authentication (username and password)
The setup is ready, but how do I add another custom service?
The granularity of the service looks like this (use case for upload only):
The conventional way of doing it: router.post('/upload', (req, res, next) => {});
Assume I'm sending a file as form data, plus some extra param like { storage: "s3" } in the request.
Postman --> POST (only) to /upload --> process the request (does the storage param exist?) --> perform the actual upload to the specific storage named in the request, and log the details in the local db as well --> send the response (success or failure)
There is another thread on Stack Overflow where you answered with this:
app.use('/Category/ExclusiveContents/:categoryId', {
  create(data, params) {
    // do complex stuff here
    params.categoryId // the id of the category
    data // -> additional data from the POST request
  }
});
The solution could also be viewed this way: since feathersjs supports a microservice approach, it would be great to have sub-routes like:
/upload_s3 -- uploads to S3
/upload_azure -- uploads to Azure, and so on
/upload -- the main route exposed to users. The user sends a request, the request is processed, and the respective sub-route is called (authentication and authorization to be included as well).
How can I solve these types of problems using the existing feathersjs setup?
1) This is a deployment issue that Netlify is looking into. The current documentation is not on the legacy domain though; what you are looking for can be found at docs.feathersjs.com/api/databases/querying.html.
2) A custom service can be added by running feathers generate service and choosing the custom service option. The functionality can then be implemented in src/services/<service-name>/<service-name>.class.js according to the service interface. For file uploads, an example on how to customize the parameters for feathers-blob (which is used in the file uploading guide) can be found in this issue.
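As a rough illustration (not the generator's output - the storage handling and names here are placeholders), such a class could take a shape like this:

// src/services/upload/upload.class.js -- illustrative sketch only
class UploadService {
  constructor (options = {}) {
    this.options = options;
  }

  async create (data, params) {
    // route the upload based on the extra param sent by the client
    if (data.storage === 's3') {
      // ... upload to S3 here, then log the details to the local db
    } else if (data.storage === 'azure') {
      // ... upload to Azure here
    }
    return { storage: data.storage, status: 'ok' };
  }
}

module.exports = UploadService;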

How to set proxy server for Json Web Keys

I'm trying to build a JWKS object for Google's JSON web keys to verify the signature of a JWT token received from Google. Inside our corporate environment we need to set a proxy server to reach external endpoints. The code below runs fine outside the corporate environment.
HttpsJwks https_jwks = new HttpsJwks(GOOGLE_SIGN_KEYS);
List<JsonWebKey> jwks_list = https_jwks.getJsonWebKeys();
Library: jose4j 0.4.1
Thanks in advance.
HttpsJwks uses the SimpleGet interface to make the HTTP call. By default it's an instance of Get, which uses Java's HttpsURLConnection. So I think using the standard HTTPS proxy properties should work - see https://docs.oracle.com/javase/8/docs/technotes/guides/net/proxies.html for more about https.proxyHost and https.proxyPort.
If you need to do something more exotic for whatever reason, you can set your own implementation/instance of SimpleGet on the HttpsJwks instance too.
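For example, something along these lines should route the call through the corporate proxy (host and port are placeholders):

// Sketch: point the JVM (and therefore the default SimpleGet implementation,
// which uses HttpsURLConnection) at the corporate proxy. Host/port are placeholders.
System.setProperty("https.proxyHost", "proxy.example.corp");
System.setProperty("https.proxyPort", "8080");

HttpsJwks https_jwks = new HttpsJwks(GOOGLE_SIGN_KEYS);
List<JsonWebKey> jwks_list = https_jwks.getJsonWebKeys();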