OpenShift Origin V3: edge, passthrough and re-encrypt termination

Can someone please explain the OpenShift route termination types below and when to use each?
passthrough
edge
re-encrypt

Routes can be either secured or unsecured. Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. Unsecured routes are the simplest to configure, because they require no key or certificates, but secured routes encrypt traffic to and from the pods.
A secured route specifies the TLS termination of the route. The available types of termination are listed below:
Edge Termination
With edge termination, TLS termination occurs at the router, before the traffic gets routed to the pods. TLS certificates are served by the router, so they must be configured into the route; otherwise the router's default certificate is used for TLS termination. Because TLS is terminated at the router, connections from the router to the endpoints over the internal network are not encrypted.
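For example, an edge route can be created with a certificate that the router will serve to clients; a minimal sketch, where the service name, hostname and file names are only placeholders:
oc create route edge my-edge-route --service=frontend --hostname=www.example.com \
    --cert=tls.crt --key=tls.key --ca-cert=ca.crt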
Pass-through Termination
With pass-through termination, encrypted traffic is sent straight to the destination pod without the router providing TLS termination. No key or certificate is required. The destination pod is responsible for serving certificates for the traffic at the endpoint.
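A passthrough route, by contrast, needs no certificate material on the route itself; a sketch assuming a service named frontend that already terminates TLS in the pod:
oc create route passthrough my-passthrough-route --service=frontend --hostname=www.example.com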
Re-encryption Termination
Re-encryption is a variation on edge termination, where the router terminates TLS with a certificate, then re-encrypts its connection to the endpoint, which might have a different certificate. Therefore the full path of the connection is encrypted, even over the internal network.
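A re-encrypt route combines both: the router presents one certificate to clients and validates the pod's own certificate against a destination CA. Again, all names and files below are illustrative:
oc create route reencrypt my-reencrypt-route --service=frontend --hostname=www.example.com \
    --cert=tls.crt --key=tls.key --ca-cert=ca.crt --dest-ca-cert=dest-ca.crt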
For further details, see the OpenShift routes documentation: https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html

Related

Why does the crossorigin attribute matter for preconnect links?

I understand preconnect tells a browser to perform the DNS lookup, TCP connection, and TLS handshake (for HTTPS) with a given host. All of those steps are done prior to sending any HTTP packets, although the HTTP version might be negotiated during the TLS handshake (ALPN).
I believe that the crossorigin attribute affects the following:
No crossorigin attribute: the Origin header is not sent, so the server never sends the Access-Control-Allow-Origin header that enables CORS.
anonymous mode: the Origin header is sent and CORS can be enabled, but cookies and authentication are not sent with the request.
use-credentials mode: the Origin header is sent together with cookies and the Authorization header, which may enable CORS.
Origin, cookies, and authentication are sent in the HTTP request, after DNS+TCP+TLS has already been established. In that case, why would the crossorigin attribute matter during preconnect?
TLS client certificates.
The browser can authenticate itself to the server not just in HTTP headers when sending a request (at the application layer), but while still establishing a TLS session (at the transport-ish layer), using a client certificate. This requires the browser to know whether such authentication should be performed or not.
This is specified in the HTML Standard and in the Fetch Standard. As of this writing (October 2022), preconnecting is specified as directing the browser to obtain a connection, which in turn is defined in terms of creating a connection, which in step 2 chooses whether to use a client certificate based, ultimately, on the CORS credential policy setting specified in the crossorigin= attribute. TLS client certificates are currently the only authentication mechanism that the CORS credential policy influences at the connection stage.
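A practical consequence is that the crossorigin mode on the preconnect hint should match the credentials mode of the later fetch, otherwise the browser may open a separate connection instead of reusing the preconnected one. A small illustration (the hosts are only examples):
<!-- connection for anonymous CORS fetches, e.g. web fonts -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<!-- connection for no-CORS fetches, e.g. a plain <img> or <script> -->
<link rel="preconnect" href="https://cdn.example.com">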

How to send a request with SNI in k8s ingress or OpenShift route

On the OpenShift platform, I created a route for an HTTPS service as follows. The route is of the HTTPS passthrough type, and the hostname is "www.https.com".
oc get route
NAME        HOST/PORT        PATH   SERVICES      PORT   TERMINATION   WILDCARD
abc-route   www.https.com            abc-service   8888   passthrough   None
I have a few questions about the above. In the document https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html, it mentions that the route supports HTTPS with SNI and TLS with SNI:
(1) Is the hostname "www.https.com" an SNI?
(2) I am wondering how the client side sends a request with SNI, in the two scenarios mentioned above: HTTPS with SNI and TLS with SNI.
Thanks.
From RFC 3546 and RFC 6066:
3.1. Server Name Indication
[TLS] does not provide a mechanism for a client to tell a server
the name of the server it is contacting. It may be desirable for
clients to provide this information to facilitate secure
connections to servers that host multiple 'virtual' servers at a
single underlying network address.
In order to provide the server name, clients MAY include an
extension of type "server_name" in the (extended) client hello.
where the client hello message is part of the TLS handshake.
The 'client hello' message: The client initiates the handshake by sending a "hello" message to the server. The message will include which TLS version the client supports, the cipher suites supported, and a string of random bytes known as the "client random."
Is the hostname "www.https.com" an SNI?
Any DNS name can be a valid SNI. From the RFC:
Currently the only server names supported are DNS hostnames, however
this does not imply any dependency of TLS on DNS, and other name
types may be added in the future (by an RFC that Updates this
document). TLS MAY treat provided server names as opaque data and
pass the names and types to the application
I am wondering how the client side sends a request with SNI, in the two scenarios mentioned above: HTTPS with SNI and TLS with SNI.
From the RFC:
In order to provide the server name, clients MAY include an
extension of type "server_name" in the (extended) client hello.
The "extension_data" field of this extension SHALL contain
"ServerNameList" where:
<<redacted for readability>>
HTTPS with SNI and TLS with SNI differ in that HTTPS is L7 and TLS is L4 of the OSI model.
This means that SNI can be used for domain-based routing not only for HTTP traffic but also for raw TLS traffic.
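In practice the client derives the SNI value from the host name it is asked to connect to, so to reach the route above you either resolve www.https.com to the router's address or set the name explicitly. A couple of illustrative commands (the router IP 203.0.113.10 is a placeholder):
openssl s_client -connect 203.0.113.10:443 -servername www.https.com
curl --resolve www.https.com:443:203.0.113.10 https://www.https.com/
The first shows the raw-TLS case (SNI only); the second shows the HTTPS case, where the same SNI is sent during the handshake and the Host header follows inside the encrypted HTTP request.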

How to make ELB pass protocol to node.js process (Elastic Beanstalk)

I have ELB balancing TCP traffic to my Node.js processes. When ELB is balancing TCP connections it does not send the X-Forwarded-Proto header like it does with http connections. But I still need to know if the connection is using SSL/TLS so I can respond with a redirect from my Node process if it is not a secure connection.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks
You can configure Proxy Protocol for your ELB to get connection-related information. In the case of HTTP, the ELB adds headers with client information; in the case of TCP, however, the ELB simply passes the traffic through without modification, which causes the back-end server to lose client connection information, as is happening in your case.
To enable Proxy Protocol for your ELB, you will have to do it via the API; there is currently no way to do it via the UI.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The above doc is a step-by-step guide on how to do this; I don't want to paste the same here, as that information might change over time.
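Roughly, the steps in that guide come down to creating a ProxyProtocol policy and attaching it to the backend instance port (the load balancer name and port below are placeholders):
aws elb create-load-balancer-policy --load-balancer-name my-elb \
    --policy-name EnableProxyProtocol --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-elb \
    --instance-port 3000 --policy-names EnableProxyProtocol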
EDIT:
As it turns out, Amazon implements only Version 1 of the proxy protocol, which does not reveal SSL information. It does, however, give the port number the client requested, so you can adopt a convention such as: if the request came in on port 443, it was SSL. I don't like it, as it is indirect and requires hardcoding and coordination between devops and developers... but it seems to be the only way for now. Let's hope AWS ELB soon starts supporting Version 2 of the proxy protocol, which does carry SSL info.

How to install the API Gateway client certificate into Elastic Beanstalk

I have a scalable application on Elastic Beanstalk running on Tomcat. I read that in front of Tomcat there is an Apache server acting as a reverse proxy. I guess I have to install the client certificate on Apache and configure it to accept only requests that present this certificate, but I have no idea how to do that.
Can you help me?
After much research I found a solution. Given how difficult it was to discover, I want to share my experience with you.
My platform on elastic beanstalk is Tomcat 8 with load balancer.
To use the client certificate (at the time I am writing), you have to terminate HTTPS on the instance:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance.html
then
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-tomcat.html
I used this configuration to use both client and server certificates (it seems it doesn't work with only a client certificate):
# Terminate TLS on the instance with the server certificate
SSLEngine on
SSLCertificateFile "/etc/pki/tls/certs/server.crt"
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
SSLCertificateChainFile "/etc/pki/tls/certs/GandiStandardSSLCA2.pem"
# Restrict ciphers and protocols
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
# Require a client certificate and trust only the API Gateway one
SSLVerifyClient require
SSLVerifyDepth 1
SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
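Following the single-instance guides linked above, this kind of snippet is usually delivered through an .ebextensions config file that writes the Apache SSL virtual host onto the instance. A rough sketch, where the file name, certificate paths and the Tomcat backend port are assumptions:
files:
  "/etc/httpd/conf.d/ssl.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # mod_ssl must be installed on the instance (see the linked guide)
      Listen 443
      <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile "/etc/pki/tls/certs/server.crt"
        SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
        SSLVerifyClient require
        SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
        # Hand the decrypted requests to Tomcat on its default port
        ProxyPass / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/
      </VirtualHost>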
And one last thing: API Gateway doesn't work with a self-signed certificate (thanks to the answer on "Client certificates with AWS API Gateway"), so you have to buy one from a CA.
SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
This is where you should point the API Gateway provided client side certificate.
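The PEM content for that file comes from API Gateway itself. For example, it can be generated with the CLI (the description is arbitrary); the pemEncodedCertificate field in the response is what you save as client.crt, and the certificate then has to be attached to the API's stage so that API Gateway actually presents it:
aws apigateway generate-client-certificate --description "elastic-beanstalk-backend"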
You might have to configure the ELB's listener for plain TCP on the same port instead of HTTPS. Basically, with TCP pass-through at your ELB, your instance handles the SSL itself and can then authorize the requests that present a valid client certificate.
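If you go that way, a rough sketch of switching the listener to TCP pass-through with the classic ELB CLI (load balancer name and ports are placeholders):
aws elb delete-load-balancer-listeners --load-balancer-name my-elb --load-balancer-ports 443
aws elb create-load-balancer-listeners --load-balancer-name my-elb \
    --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443"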

GCE Network Load Balancing

I have two websites functioning under Google Compute Engine VM instances. Both sites accept requests and communicate only via HTTPS and not on HTTP.
How can I properly set a Network Load Balancer forwarding rule under GCE for HTTPS? I have my forwarding rule set on both port 80/443 (HTTP/HTTPS) but my health check always shows unhealthy. It seems like it can't handle HTTPS forwarding.
The way I have my site serving only HTTPS is by loading mod_headers in Apache with Strict-Transport-Security enabled. I then have a rewrite rule from HTTP to HTTPS for all requests.
As stated here,
There are two types of health checks available:
HTTP health checks, which are required for HTTP and network load balancing.
HTTPS health checks, which are required when setting up backend services to use HTTPS.
Therefore, a network load balancer uses an HTTP health check, and your HTTP-to-HTTPS redirect means that check never receives a 200, so the instances are marked unhealthy. You'll need to set up a site, at least for the health check path, that allows plain HTTP and returns an HTTP response with code 200.
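One way to do that without dropping the redirect entirely is to exempt just the health check path from the HTTP-to-HTTPS rewrite. A sketch assuming the health check is configured to request /healthz over plain HTTP:
gcloud compute http-health-checks create basic-check --port 80 --request-path /healthz
and in the Apache configuration:
RewriteEngine On
# Let the load balancer's plain-HTTP health check through without redirecting
RewriteCond %{REQUEST_URI} !^/healthz$
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
The /healthz path must actually exist and return 200 over plain HTTP.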