I am using Prometheus on the OpenShift platform. Authentication is handled by OpenShift for Prometheus and all its subdomains except the /metrics endpoint.
That endpoint bypasses all authentication and exposes the Prometheus Go client metrics in plain text.
Is it possible to somehow force OpenShift authentication on the Prometheus /metrics endpoint, or to disable that endpoint entirely, since I don't really need the Go client metrics?
I know that node_exporter has flags to control certain collectors, but I couldn't find an equivalent for the Prometheus client itself.
I'm not sure about OpenShift auth, but you can add basic auth to the /metrics endpoint. Alternatively, Prometheus also supports TLS.
Add the following to the relevant job under 'scrape_configs' in your Prometheus config file (prometheus.yml by default):
basic_auth:
  username: "admin"
  password: "password"
More info can be found in the official Prometheus documentation.
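For context, here is a minimal sketch of where that block sits in a full prometheus.yml; the job name and target are assumptions:

scrape_configs:
  - job_name: "prometheus"
    # Credentials Prometheus presents when scraping this job's targets.
    basic_auth:
      username: "admin"
      password: "password"
    static_configs:
      - targets: ["localhost:9090"]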
I need to access a Postgres database from my Java code, which resides in an OpenShift cluster. I need a way to do so without manually initiating port forwarding through the oc port-forward command.
I have tried using the OpenShift Java client class OpenShiftConnectionFactory to get the connection by passing the server URL and the username/password I use to log in to the console, but it didn't help.
(This is mostly just a more detailed version of Will Gordon's comment, so credit to him.)
It sounds like you are trying to expose a service (specifically Postgres) outside of your cluster. This is very common.
However, the best method to do so depends a bit on your physical infrastructure, because we are by definition trying to integrate with your networking. Look at the docs for Getting Traffic into your Cluster. Routes are probably not what you want, because Postgres is a TCP protocol. But one of the other options in that chapter (Load Balancer, External IP, or NodePort) is probably your best bet, depending on your networking infrastructure and needs; see the NodePort sketch below.
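For illustration, a minimal NodePort sketch; every name, label, and port here is an assumption to adapt to your deployment:

apiVersion: v1
kind: Service
metadata:
  name: postgres-external        # hypothetical name
spec:
  type: NodePort
  selector:
    app: postgres                # must match your Postgres pod labels
  ports:
    - port: 5432                 # service port inside the cluster
      targetPort: 5432           # container port
      nodePort: 30432            # opened on every node (30000-32767 range)

Your Java code can then use a plain JDBC URL against any node's IP on port 30432, with no oc port-forward involved.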
We have an OpenShift cluster (v3.11) with Prometheus collecting metrics as part of the platform. We need long-term storage of these metrics, and our hope is to use our InfluxDB time-series DB to store them.
The Telegraf agent (the T in the TICK Stack) has an input plugin for Prometheus and an output plugin for InfluxDB, so this would seem like a natural solution.
What I'm struggling with is how the Telegraf agent is set up to scrape the metrics within OpenShift; I think the config and docs relate to Prometheus outside of OpenShift, and I can't see any references to how to set this up with OpenShift.
Does a Telegraf agent need to reside on OpenShift itself, or can it be set up to collect remotely via a published route?
If anyone has any experience setting this up or can provide some pointers I'd be grateful.
Looks like the easiest way to get metrics from the OpenShift Prometheus into Telegraf is to use the default service that ships with OpenShift. The URL to scrape from is: https://prometheus-k8s-openshift-monitoring.apps.<your domain>/federate?match[]=<your conditions>
As Prometheus sits behind the OpenShift authentication proxy, the only challenge is authentication. You should add a new user to the prometheus-k8s-htpasswd secret and use their credentials for scraping.
To do this, run htpasswd -nbs <login> <password> and append the output to the end of the prometheus-k8s-htpasswd secret.
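For example (login and password here are placeholders, and note that secret values in OpenShift are base64-encoded, so decode, append, and re-encode when editing):

# Generate an htpasswd entry (SHA) for the new scrape user:
htpasswd -nbs telegraf secret
# Append the resulting line to the htpasswd data held in the secret, e.g.:
oc -n openshift-monitoring edit secret prometheus-k8s-htpasswd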
The other way is to disable authentication for the /federate endpoint. To do this, edit the command of the prometheus-proxy container inside the prometheus StatefulSet and add the -skip-auth-regex=^/federate option.
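Once the credentials exist, a minimal telegraf.conf sketch along these lines should work; the URL, match condition, credentials, and InfluxDB details are all assumptions:

[[inputs.prometheus]]
  # Scrape the federate endpoint exposed through the OpenShift route.
  urls = ["https://prometheus-k8s-openshift-monitoring.apps.example.com/federate?match[]={job=\"kubelet\"}"]
  username = "telegraf"
  password = "secret"
  insecure_skip_verify = true   # only if the router certificate is untrusted

[[outputs.influxdb]]
  urls = ["http://influxdb.example.com:8086"]
  database = "openshift_metrics"

Since the route is published, the agent can collect remotely; it does not have to reside on OpenShift itself.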
I have an ELB balancing TCP traffic to my Node.js processes. When ELB balances TCP connections it does not send the X-Forwarded-Proto header like it does with HTTP connections. But I still need to know whether the connection is using SSL/TLS, so I can respond with a redirect from my Node process if it is not secure.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks
You can configure the proxy protocol for your ELB to get connection-related information. In the case of HTTP, the ELB adds headers carrying client information; in the case of TCP, however, AWS ELB simply passes the traffic through without any modification, which causes the back-end server to lose client connection information, as is happening in your case.
To enable the proxy protocol for your ELB you will have to do it via the API; there is currently no way to do it via the UI.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The above doc is a step-by-step guide on how to do this; I don't want to paste the same here, as that information might change over time.
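In short, the steps boil down to something like this with the AWS CLI; the load balancer name and instance port are placeholders:

# Create a ProxyProtocol policy on the load balancer:
aws elb create-load-balancer-policy \
    --load-balancer-name my-elb \
    --policy-name EnableProxyProtocol \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
# Attach it to the back-end instance port your Node processes listen on:
aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name my-elb \
    --instance-port 3000 \
    --policy-names EnableProxyProtocol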
EDIT:
As it turns out, Amazon implements only Version 1 of the proxy protocol, which does not convey SSL information. It does, however, give the port the client connected to, so a convention can be adopted along the lines of "if the request came in on port 443, it was SSL". I don't like it, as it is indirect, requires hardcoding, and needs coordination between devops and developers... but it seems to be the only way for now. Let's hope AWS ELB soon starts supporting Version 2 of the proxy protocol, which does carry SSL info.
I have a scalable application on Elastic Beanstalk running on Tomcat. I read that in front of Tomcat there is an Apache server acting as a reverse proxy. I guess I have to install the client certificate on Apache and configure it to accept only requests that present this certificate, but I have no idea how to do that.
Can you help me?
After much research I found a solution. Given how difficult it was to discover, I want to share my experience.
My platform on Elastic Beanstalk is Tomcat 8 with a load balancer.
To use a client certificate (at the time of writing) you have to terminate HTTPS on the instance:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance.html
then
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-tomcat.html
I used this configuration to use both client and server certificates (it doesn't seem to work with only a client certificate):
SSLEngine on
# Server certificate, private key, and CA chain:
SSLCertificateFile "/etc/pki/tls/certs/server.crt"
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
SSLCertificateChainFile "/etc/pki/tls/certs/GandiStandardSSLCA2.pem"
# Restrict ciphers and protocols to reasonably modern choices:
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
# Require a client certificate validated against the CA file below:
SSLVerifyClient require
SSLVerifyDepth 1
SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
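On Elastic Beanstalk this Apache config is typically delivered through an .ebextensions file, roughly as follows; the file name and the ProxyPass port are assumptions based on the linked guides:

files:
  /etc/httpd/conf.d/ssl.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      Listen 443
      <VirtualHost *:443>
        # ... the SSL directives shown above ...
        ProxyPass / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/
      </VirtualHost>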
And one last thing: API Gateway doesn't work with a self-signed certificate (thanks to Client certificates with AWS API Gateway), so you have to buy one from a CA.
SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
This is where you should point the API Gateway-provided client-side certificate.
You might have to configure the ELB's listener for vanilla TCP on the same port instead of HTTPS, basically TCP pass-through at your ELB. Your instance then needs to handle the SSL itself in order to authorize requests that present a valid client certificate.
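For example, creating a classic ELB with a plain TCP listener; the name, ports, and subnet are placeholders:

# TCP pass-through on 443; the instance terminates TLS itself:
aws elb create-load-balancer \
    --load-balancer-name my-elb \
    --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443" \
    --subnets subnet-0123456789abcdef0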
I am using the Apache HttpClient 4.2.3 library for accessing resources via HTTP/HTTPS. Requests are sent through a SOCKS proxy which requires basic authentication.
I looked at the API docs and found the class ProxyAuthenticationStrategy, which looks like it serves the purpose.
But I am not able to figure out how to use it. Specifically, I am not able to find how to provide proxy credentials to ProxyAuthenticationStrategy.
I looked at the documentation and searched the net, but could not find appropriate help on this topic.
Can someone please guide me on how to configure basic authentication for a SOCKS proxy?
Note: I am able to communicate successfully through the SOCKS proxy without authentication using the Apache HttpClient 4.2.3 library.
Thanks,
Sachin
SOCKS is a TCP/IP-level proxy protocol. It has nothing to do with HTTP and is out of scope as far as HttpClient is concerned. HttpClient can be configured to connect all network sockets it creates via a SOCKS proxy, but it will make no attempt to provide any user credentials to the SOCKS proxy.
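That said, credentials can be supplied below HttpClient, at the JVM socket layer. A minimal sketch, assuming a hypothetical proxy at proxy.example.com:1080; the JDK reads the java.net.socks.* properties and consults the default java.net.Authenticator when the SOCKS server requests username/password authentication:

import java.net.Authenticator;
import java.net.PasswordAuthentication;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;

public class SocksAuthExample {
    public static void main(String[] args) throws Exception {
        // Route all JDK-created sockets (including HttpClient's) through SOCKS.
        System.setProperty("socksProxyHost", "proxy.example.com");
        System.setProperty("socksProxyPort", "1080");
        // Credentials for the SOCKS5 username/password handshake:
        System.setProperty("java.net.socks.username", "user");
        System.setProperty("java.net.socks.password", "secret");
        Authenticator.setDefault(new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("user", "secret".toCharArray());
            }
        });

        DefaultHttpClient client = new DefaultHttpClient(); // HttpClient 4.2.x API
        HttpResponse response = client.execute(new HttpGet("http://example.com/"));
        System.out.println(response.getStatusLine());
    }
}

Note this authenticates at the java.net level, not through HttpClient's ProxyAuthenticationStrategy, which applies to HTTP proxies only.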