I understand that preconnect tells the browser to perform the DNS lookup, TCP connection, and TLS handshake (for HTTPS) with a given host. All of those steps happen before any HTTP packets are sent, although the HTTP version may be negotiated during the TLS handshake (ALPN).
I believe the crossorigin attribute affects the following:
No crossorigin attribute: the Origin header is not sent, because of which the server never sends the Access-Control-Allow-Origin header that can enable CORS.
anonymous mode: the Origin header is sent and CORS can be enabled, but cookies and authentication are not sent with the request.
use-credentials mode: the Origin header is sent together with cookies and the Authorization header, which may enable CORS.
Origin, cookies, and authentication are sent in the HTTP request, after DNS+TCP+TLS has already been established. In that case, why would the crossorigin attribute matter for preconnect?
TLS client certificates.
The browser can authenticate itself to the server not just in HTTP headers when sending a request (at the application layer), but while still establishing a TLS session (at the transport-ish layer), using a client certificate. This requires the browser to know whether such authentication should be performed or not.
This is specified in the HTML Standard and in the Fetch Standard. As of this writing (October 2022), preconnecting is specified as directing the browser to obtain a connection, which in turn is defined in terms of creating a connection, which in step 2 chooses whether to use a certificate based, ultimately, on the CORS credential policy setting specified in the crossorigin= attribute. TLS certificates are currently the only authentication mechanism that CORS credential policy influences at the connection stage.
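To make the connection-stage nature of this concrete, here is a minimal Python sketch (the host name and the client.pem / client.key paths are placeholders, not anything from the question): whether a client certificate can be presented is a property of the TLS context, decided before the handshake, long before any HTTP request with Origin or Cookie headers is written.

import socket
import ssl

# Decided *before* the TLS handshake: the context either has a client
# certificate loaded or it does not. This mirrors the choice the browser
# must make when preconnecting with or without credentials.
ctx = ssl.create_default_context()
ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")  # placeholder files

with socket.create_connection(("example.com", 443)) as raw:
    # The client certificate is offered during this handshake if the server
    # asks for one; no HTTP (Origin, Cookie, Authorization) has been sent yet.
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())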
On the OpenShift platform, I created a route for an HTTPS service as follows. The route is the HTTPS passthrough type, and the hostname is "www.https.com".
oc get route
NAME        HOST/PORT        PATH   SERVICES      PORT   TERMINATION   WILDCARD
abc-route   www.https.com           abc-service   8888   passthrough   None
I have a few questions about the above. In the document https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html, it mentions that the route supports HTTPS with SNI and TLS with SNI:
(1) Is the hostname "www.https.com" an SNI?
(2) How does the client side send a request with SNI, in the two scenarios mentioned above: HTTPS with SNI and TLS with SNI?
Thanks.
From RFC 3546 and RFC 6066:
3.1. Server Name Indication
[TLS] does not provide a mechanism for a client to tell a server
the name of the server it is contacting. It may be desirable for
clients to provide this information to facilitate secure
connections to servers that host multiple 'virtual' servers at a
single underlying network address.
In order to provide the server name, clients MAY include an
extension of type "server_name" in the (extended) client hello.
Where the client hello message is part of the TLS handshake.
The 'client hello' message: The client initiates the handshake by sending a "hello" message to the server. The message will include which TLS version the client supports, the cipher suites supported, and a string of random bytes known as the "client random."
Is hostname "www.https.com" a SNI?
Any DNS name can be a valid SNI. From the RFC:
Currently the only server names supported are DNS hostnames, however
this does not imply any dependency of TLS on DNS, and other name
types may be added in the future (by an RFC that Updates this
document). TLS MAY treat provided server names as opaque data and
pass the names and types to the application
How does the client side send a request with SNI, in the two scenarios mentioned above: HTTPS with SNI and TLS with SNI?
From RFC:
In order to provide the server name, clients MAY include an
extension of type "server_name" in the (extended) client hello.
The "extension_data" field of this extension SHALL contain
"ServerNameList" where:
<<redacted for readability>>
HTTPS with SNI and TLS with SNI differ in that HTTPS is L7 of the OSI model, while TLS sits lower in the stack, just above L4.
This means that SNI can be used for domain-based routing not only of HTTP traffic but also of raw TLS traffic.
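To illustrate the "how does the client send SNI" part, here is a minimal Python sketch (the IP address stands in for the router's address and is made up; a browser does the equivalent automatically, deriving the SNI value from the URL's host name):

import socket
import ssl

ctx = ssl.create_default_context()

# SNI travels in the TLS ClientHello itself, independent of HTTP.
# The server_hostname argument below is what ends up in the server_name extension.
with socket.create_connection(("203.0.113.10", 443)) as raw:  # router/VIP address (placeholder)
    with ctx.wrap_socket(raw, server_hostname="www.https.com") as tls:
        # A passthrough router picks the backend purely from that SNI value;
        # for HTTPS the Host header is sent later, inside the encrypted channel.
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.https.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(1024))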
This question has puzzled me while looking into a mutual SSL failure between my client app and an external server.
When my app tries to connect to the external server's REST API - let's call it https://www.server.com/api/resolve - I expect a "Certificate Request" handshake element to be sent with their Server Hello. As far as I can tell from a tcpdump of all traffic between me and the server, it is not sent. Only a "Server Hello, Certificate, Certificate Status, Server Key Exchange, Server Hello Done" is sent:
tcpdump of TLSv1.2 handshake: https://i.stack.imgur.com/50Ous.png
However when I try to access that same API URL in Chrome, the browser displays a box asking me to select my client certificate for mutual authentication. When I capture a dump of that handshake up to the point where the browser prompts me for a certificate, I still see no "Certificate Request" sent by the Server:
Tcpdump of browser navigation to API: https://i.stack.imgur.com/hvOEx.png
After selecting a certificate in Chrome, I'm directed to the site; however, I see no client "Certificate" sent in my TLSv1.2 capture either.
My question is: how can Chrome know a client cert was requested by the server if that request is not sent in the TLS handshake?
Alternatively, is it possible Wireshark is lying to me? When I test against, for example, https://client.badssl.com/, which requests mutual SSL, I see the Certificate Request right after the Server Key Exchange, exactly as I should. I noticed that the TLSv1.2 RFC (https://www.rfc-editor.org/rfc/rfc5246) notes:
"In particular, the certificate and certificate request
handshake messages can be large enough to require fragmentation."
But this should be irrelevant to how Wireshark is displaying the TLS info.
There are several Encrypted Handshake Message records in the packet capture after the application data. This very likely means that the server does not request a client certificate by default, but only requests a certificate for specific URLs.
In this case, a TLS handshake is first done without a CertificateRequest. Once the handshake is finished, the client sends the HTTP request over the encrypted connection, which is the Application Data in the packet capture. The server determines that the requested URL needs a client certificate and initiates a renegotiation, i.e. another TLS handshake, but this time with a CertificateRequest. Since the connection is already encrypted, this renegotiation is only visible as Encrypted Handshake Message records, and the details cannot be seen without decrypting the traffic.
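If the client app happens to be written in Python, a sketch of the fix is simply to configure the client certificate up front so the TLS library can answer the renegotiation when it comes (the file paths are placeholders; the URL is the one from the question):

import requests

# The cert/key must be configured before the connection is made: when the
# server renegotiates and sends its CertificateRequest for this URL, the
# TLS library answers with these credentials automatically.
resp = requests.get(
    "https://www.server.com/api/resolve",
    cert=("client.crt", "client.key"),  # placeholder paths
)
print(resp.status_code)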
Is it possible to configure the proxy on a secured route so that, on a redirect, the Location header field in the response is rewritten to HTTPS?
I get Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://complan-complan.a3c1.starter-us-west-1.openshiftapps.com/planner
when I log in to the application. But even without logging in, the request to the above URL is redirected to HTTP and then again to HTTPS.
Thanks!
When the exposed route in OpenShift is set to TLS edge termination, the built-in HAProxy terminates the HTTPS connection and creates a new HTTP connection to your application.
To convey the original client IP/protocol/port, the proxy inserts the HTTP headers X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port.
For redirection to work correctly, you have to tell your framework/server to use those fields. In your case with WildFly you can follow these instructions.
There are samples for other frameworks/servers in the OpenShift FAQ:
https://developers.openshift.com/faq/troubleshooting.html#_how_do_i_redirect_traffic_to_https
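The question is about WildFly, but purely as an illustration of "use those fields", here is a minimal Python/Flask sketch that trusts the router's X-Forwarded-* headers via Werkzeug's ProxyFix middleware (the /planner routes are only stand-ins for the app in the question):

from flask import Flask, redirect, url_for
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust one hop of X-Forwarded-For / -Proto / -Port set by the router, so
# url_for(..., _external=True) and redirects keep the original https:// scheme.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_port=1)

@app.route("/planner/")
def planner():
    return "ok"

@app.route("/planner")
def planner_redirect():
    # With ProxyFix applied, this Location header points back to HTTPS
    # instead of the plain-HTTP connection the proxy used internally.
    return redirect(url_for("planner", _external=True))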
I have two websites functioning under Google Compute Engine VM instances. Both sites accept requests and communicate only via HTTPS and not on HTTP.
How can I properly set a Network Load Balancer forwarding rule under GCE for HTTPS? I have my forwarding rule set on both port 80/443 (HTTP/HTTPS) but my health check always shows unhealthy. It seems like it can't handle HTTPS forwarding.
The way I have my sites doing only HTTPS is by loading the headers module (mod_headers) in Apache and enabling Strict-Transport-Security. I then have a rewrite rule from HTTP to HTTPS for all requests.
As stated here,
There are two types of health checks available:
HTTP health checks, which are required for HTTP and network load balancing.
HTTPS health checks, which are required when setting up backend services to use HTTPS.
Therefore, a network load balancer uses an HTTP health check and can't handle HTTPS forwarding. You'll need to set up a website, at least for the health check, that allows HTTP and returns an HTTP response with code 200.
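One way to satisfy that, sketched in Python (the /health path and port are illustrative; with the asker's Apache setup the equivalent would be exempting the health-check path from the HTTP-to-HTTPS rewrite): a tiny plain-HTTP endpoint that answers the probe with a 200 while everything else keeps redirecting.

from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer the load balancer's plain-HTTP probe with a 200
        # instead of redirecting it to HTTPS like normal traffic.
        if self.path == "/health":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"OK\n")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), HealthCheckHandler).serve_forever()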
I'm running a WebSocket server and asking myself whether it is planned for client authentication to be done during the handshake in the future... draft xxxx maybe :)
Do you have any information? I have heard that with draft-07 a session ID can be sent to the server, so maybe that can help to authenticate the client...
What I'm doing at the moment is waiting a maximum of 10 seconds until the client sends me a message with a login header, username, and password. But I think this is not "THE" solution. How are you guys out there doing it?
The WebSocket protocol permits standard HTTP authentication headers to be exchanged during the handshake. If you have a WebSocket server that plugs into an existing web server as a module, then existing authentication in the web server should already work. Otherwise, if you have a standalone WebSocket server, you may need to add the authentication support yourself.
Update
As @Jon points out, unlike normal HTTP/XHR requests, the browser API does not allow you to set arbitrary "X-*" headers for WebSocket connections. The only header value that you can set is the protocol. This is unfortunate. One common solution is to use a ticket-based system that relies on the existing HTTP mechanisms for authorization/authentication; the ticket is then passed along with the WebSocket connection and validated that way: https://devcenter.heroku.com/articles/websocket-security
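A minimal sketch of such a ticket scheme in Python (the secret, ticket format, and 60-second lifetime are all made up for illustration): the ticket is minted by an already-authenticated HTTPS endpoint and then checked when the WebSocket connection arrives, e.g. taken from a query-string parameter.

import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # placeholder; never exposed to the client

def issue_ticket(user_id):
    # Handed out over a normal, already-authenticated HTTPS request.
    expires = str(int(time.time()) + 60)
    payload = f"{user_id}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_ticket(ticket):
    # Called when the WebSocket connection arrives, e.g. with the ticket
    # taken from the URL query string or the subprotocol value.
    try:
        user_id, expires, sig = ticket.rsplit(":", 2)
    except ValueError:
        return None
    payload = f"{user_id}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected) and time.time() < int(expires):
        return user_id
    return None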