Can't access WildFly over HTTPS with browsers but can with OpenSSL client - google-chrome

I've deployed Keycloak on WildFly 10 via Docker. SSL support was enabled via the CLI. The final standalone.xml has:
<security-realm name="UndertowRealm">
    <server-identities>
        <ssl>
            <keystore path="keycloak.jks" relative-to="jboss.server.config.dir" keystore-password="changeit"
                      alias="mydomain" key-password="changeit"/>
        </ssl>
    </server-identities>
</security-realm>
Undertow subsystem:
<https-listener name="default-https" security-realm="UndertowRealm" socket-binding="https"/>
The key was generated and placed in $JBOSS_HOME/standalone/configuration:
keytool -genkey -noprompt -alias mydomain \
    -dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" \
    -keystore keycloak.jks -storepass changeit -keypass changeit
Port 8443 is exposed via Docker.
Accessing https://mydomain:8443/ in Chrome results in ERR_CONNECTION_CLOSED. Firefox returns "Secure Connection Failed, the connection was interrupted..."
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
Input:
GET / HTTP/1.1
Host: https://mydomain:8443
This returns the Keycloak welcome page.
So clearly WildFly is working, but the browsers are blocking me for some reason. What could that reason be? I was under the impression that I should be able to add an exception for a self-signed certificate in either browser. Maybe the generated key is too short, or maybe I am hitting some other security constraint imposed by Firefox/Chrome?

Using these parameters in keytool solved the problem: -keyalg RSA -keysize 2048
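For completeness, this is what the command from the question looks like with those parameters added (same alias, paths, and passwords as above; note that without -keyalg, older keytool releases default to a 1024-bit DSA key, which browsers reject):
# Regenerate the keystore with an explicit RSA 2048-bit key
keytool -genkey -noprompt -alias mydomain -keyalg RSA -keysize 2048 \
    -dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" \
    -keystore keycloak.jks -storepass changeit -keypass changeit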

... -dname "CN=mydomain
The certificate is probably malformed. Browsers and other user agents, like cURL and OpenSSL, use different policies to validate an end-entity certificate. A browser will reject a certificate if the hostname appears only in the Common Name (CN), while other user agents will accept it.
The short answer to this problem is: place DNS names in the Subject Alternative Name (SAN), not the Common Name (CN).
You may still encounter other problems, but getting the names right will help immensely with browsers.
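As a sketch of what that looks like with the keytool command from the question (the -ext flag is available in keytool from Java 7 onwards; alias and passwords as above):
# Self-signed certificate with the hostname in the SAN rather than only the CN;
# browsers match the requested hostname against the SAN entries.
keytool -genkey -noprompt -alias mydomain -keyalg RSA -keysize 2048 \
    -dname "CN=mydomain" -ext "SAN=DNS:mydomain" \
    -keystore keycloak.jks -storepass changeit -keypass changeit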
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
OpenSSL prior to 1.1.0 did not perform hostname validation; those versions will accept any name.
cURL or Wget would be better tools to test with in this case.
For reading on the verification you should perform when using OpenSSL, see:
SSL/TLS Client
For reading on the rules for hostnames and where they should appear in a X509 certificate, see:
How do you sign Certificate Signing Request with your Certification Authority?
How to create a self-signed certificate with openssl?

Related

What are the new requirements for certificates in Chrome?

Chrome now throws NET::ERR_CERT_INVALID for some certificates that are supported by other browsers.
The only clue I can find is in this list of questions about the new Chrome Root Store, which is also blocking enterprise CA installations.
https://chromium.googlesource.com/chromium/src/+/main/net/data/ssl/chrome_root_store/faq.md
In particular,
The Chrome Certificate Verifier will apply standard processing to include checking:
the certificate's key usage and extended key usage are consistent with TLS use-cases.
the certificate validity period is not in the past or future.
key sizes and algorithms are of known and acceptable quality.
whether mismatched or unknown signature algorithms are included.
that the certificate does not chain to or through a blocked CA.
conformance with RFC 5280.
I verified my certificates work as expected in Edge.
Further, I verified the certificate is version "3", has a 2048-bit key, and has the extended key usage for server authentication.
I still don't understand which "standard" this certificate is expected to conform to when the browser only says "invalid". Is there a simple template or policy I can use?
Chrome now rejects TLS certificates containing a field known as pathLenConstraint, sometimes displayed as Path Length Constraint.
I was using certificates issued by Microsoft Active Directory Certificate Services. The Basic Constraints extension was enabled, and in this configuration AD CS incorrectly injects Path Length Constraint=0 into end-entity, non-CA certificates.
The solution is to issue certificates without Basic Constraints. Chrome is equally happy with Basic Constraints on or off, so long as the path length field is not present.
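A quick way to check whether a certificate carries the offending field, assuming OpenSSL is available and your server certificate is in server.crt; output like "CA:FALSE, pathlen:0" confirms the AD CS misconfiguration:
# Dump the Basic Constraints extension; a "pathlen" on a non-CA cert is the culprit
openssl x509 -in server.crt -noout -text | grep -A1 "Basic Constraints"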
One of the better resources for troubleshooting was this Certificate Linter:
https://crt.sh/lintcert
It found several errors in the server certificate, including the path length set to zero.
I also found a thread discussing a variety of Certificate Authorities that would issue certificates the same way, so it is a fairly common issue.
https://github.com/pyca/cryptography/issues/3856
Another good resource was the smallstep open source project, which I installed as an alternative CA. After generating a generic certificate, the invalid cert error went away, and I realized there was something going on between the Microsoft and Google programs.
The best favour you can do yourself is to run Chrome with debug logging to find the exact cause of the issue:
chrome --enable-logging --v=1
This, I believe, will print:
ERROR: Target certificate looks like a CA but does not set all CA properties
Meanwhile it seems they have reverted this verification, which, if I'm not mistaken, will be released as Chrome 111 at the beginning of March.
See: https://chromium-review.googlesource.com/c/chromium/src/+/4119124
Following @Robert's answer, I used https://crt.sh/lintcert to fix all the issues I had after my self-signed certificate suddenly stopped working with NET::ERR_CERT_INVALID.
Here's How I did it:
# https://www.openssl.org/docs/manmaster/man5/x509v3_config.html
cat > "$_X509V3_CONFIG_PATH" << EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=critical,CA:true
keyUsage=critical,digitalSignature,nonRepudiation,cRLSign,keyCertSign
subjectAltName=@alt_names
issuerAltName=issuer:copy
subjectKeyIdentifier=hash
[alt_names]
DNS.1=somesubdomain.mydomain.com.test
EOF
# The -extfile flag consumes the extensions file written above
openssl x509 -req \
-days "$_ROOTCA_CERT_EXPIRE_DAYS" \
-in "$_ROOTCA_PEM_PATH" \
-signkey "$_ROOTCA_KEY_PATH" \
-extfile "$_X509V3_CONFIG_PATH" \
-out "$_DOMAIN_CRT_PATH"
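To confirm the extensions were actually applied, the resulting certificate can be inspected like this (the -ext option requires OpenSSL 1.1.1+):
# Print only the SAN and Basic Constraints extensions of the generated cert
openssl x509 -in "$_DOMAIN_CRT_PATH" -noout -ext subjectAltName,basicConstraints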
Following the above, these are my remaining lint findings; even though there is a single ERROR, my Chrome browser trusts the root CA and the self-signed certificate:
cablint WARNING CA certificates should not include subject alternative names
cablint INFO CA certificate identified
x509lint ERROR AKID without a key identifier
x509lint INFO Checking as root CA certificate
For those of you who wish to generate a self-signed certificate for local development with HTTPS, the following gist does the trick: https://gist.github.com/unfor19/37d6240c35945b5523c77b8aa3f6eca0
Usage:
curl -L --output generate_self_signed_ca_certificate.sh https://gist.githubusercontent.com/unfor19/37d6240c35945b5523c77b8aa3f6eca0/raw/07aaa1035469f1e705fd74d4cf7f45062a23c523/generate_self_signed_ca_certificate.sh && \
chmod +x generate_self_signed_ca_certificate.sh
./generate_self_signed_ca_certificate.sh somesubdomain.mydomain.com
# Will automatically create a self-signed certificate for `somesubdomain.mydomain.com.test`

How to get InfluxDB to accept a self-signed certificate?

I've been trying to get InfluxDB to accept a self-signed certificate, but so far, no luck. I've been following the instructions from here:
https://docs.influxdata.com/influxdb/v2.3/security/enable-tls/#configure-influxdb-to-use-tls
I created the cert and key with this command:
openssl req -x509 -nodes -newkey rsa:2048 -keyout influxdb-selfsigned.key -out influxdb-selfsigned.crt -days 9999 -config "C:\OpenSSL\openssl.cnf"
The config.yml file is as follows:
http-bind-address: ":8087"
tls-cert: influxdb-selfsigned.crt
tls-key: influxdb-selfsigned.key
Note: I changed the bind port to 8087 to confirm it was reading the configuration.
When I start influx from the command line, there are no error messages. Initially there were some TLS handshake errors, but those disappeared, I think, when I added the crt and key to the configuration.
However, when I access the URL https://localhost:8087, chrome shows a "not secure" message and I have to click through warnings to get to the site.
To try to get Chrome to trust the certificate, I followed the instructions from this site:
https://www.pico.net/kb/how-do-you-get-chrome-to-accept-a-self-signed-certificate
I exported the cert, then re-imported it as trusted.
However, I still get the "not secure" message in Chrome.
Also, the InfluxDB console shows this message:
info http: TLS handshake error from [::1]:63065: remote error: tls: unknown certificate {"log_id": "0cKnmWB0000", "service": "http"}
Any ideas how to get the cert working?
Currently it seems there is no easy way or workaround for this in the open source version. The community has been asking for this feature, but there has been no progress yet. See more details here.
However, in the Enterprise version, you can tell the server you are using self-signed certificates by setting this option in the influxdb-meta.conf file:
# If using a self-signed certificate:
https-insecure-tls = true
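Separately, the "tls: unknown certificate" log line means Chrome itself rejected the certificate; per the first answer above, Chrome requires the hostname in the SAN, so it may be worth regenerating the certificate with one. A sketch adapting the command from the question (the -addext flag needs OpenSSL 1.1.1+):
# Same self-signed cert as in the question, but with a SAN covering localhost;
# Chrome ignores the CN and only matches against SAN entries.
openssl req -x509 -nodes -newkey rsa:2048 -days 9999 \
    -keyout influxdb-selfsigned.key -out influxdb-selfsigned.crt \
    -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"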

Client certificate for postman and Chrome browser

I need Postman to connect to my server, which requires a client certificate. After some research, it seems that Postman does not handle certificates itself but relies on Chrome's certificates instead. My next step was to try to install the certificates in Chrome; my cert structure is like this:
Self-signed root CA => intermediate CA 1 => intermediate CA 2 => client cert
I have a file certchain.pem that contains the client cert followed by the intermediate CA 2 cert and then the intermediate CA 1 cert; I also have a client.key file. I tried to install the chain in Chrome, but it seems that Chrome requires PKCS#12, so I split certchain.pem into client.crt and middle.pem, then converted everything to PKCS#12 with:
openssl pkcs12 -in client.crt -inkey client.key -certfile middle.pem -export -out client.p12
I could install client.p12 into Chrome, but it doesn't seem to retain the intermediate certs; when I choose View, it only shows info about the client cert.
I've tested that client.p12 works by installing it into Firefox, where I can see info about the intermediate certs. I've also tested that my certs work by doing a curl with them. Any other ideas?
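For anyone debugging the same thing, a quick way to confirm the intermediates really ended up inside the bundle (assuming OpenSSL is available; it prints a subject/issuer line for each cert):
# Print every certificate in the PKCS#12 bundle without exporting the private key
openssl pkcs12 -in client.p12 -info -nokeys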

Using Openssl S_server to test chrome HTTPS

I wrote an HTTPS server with OpenSSL. When Chrome connects to the server, I get ERR_CONNECTION_REFUSED, but Firefox works fine.
I followed this guide: http://blog.jorisvisscher.com/2015/07/22/create-a-simple-https-server-with-openssl-s_server/
openssl s_server -key key.pem -cert cert.pem -accept 44330 -www
The result is the same.
How can I solve this? Thanks for reading!
Chrome probably refused the connection because it was insecure (here, Firefox Developer Edition also refused it). By default, openssl s_server uses weak DH parameters and outdated protocols (like SSLv3); you should add extra options to secure your server.
First, generate stronger DH params:
openssl dhparam -out dhparam.pem 2048
Use at least 2048 bits; the bigger, the better (I usually use 4096). Then run your server with this command instead:
openssl s_server -key key.pem -cert cert.pem -accept 44330 \
-no_ssl3 -dhparam dhparam.pem -www
Be aware that SSLv2 is also on its way to being deprecated (PCI compliance will fail for SSLv2 by the middle of this year), and there are also several ciphers that are insecure.
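If you want to rule those out as well, s_server accepts an OpenSSL cipher string; a sketch building on the command above:
# Same server, additionally restricted to stronger cipher suites
openssl s_server -key key.pem -cert cert.pem -accept 44330 \
    -no_ssl3 -dhparam dhparam.pem -cipher 'HIGH:!aNULL:!MD5' -www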
If you'd like a really strong dhparam, consider installing a service for generating more entropy, like haveged (before generating dhparams):
apt-get install haveged

CakePHP 3 - Enable SSL on development server [duplicate]

OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an https page I am writing through the built-in webserver (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3, tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?
See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL encryption; it's for plain HTTP requests. The openssl extension and its function support are unrelated; the built-in server does not accept requests or send responses over the stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
It's just for toying anyway.
It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can visit https://localhost:443 to visit your webserver: screenshot
There should be a cert error and you'll have to click through a browser warning but that gets you to the point where you can hit your localhost with HTTPS requests, for development.
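To sanity-check the tunnel from the command line (the -k flag makes curl skip verification of the bogus default cert):
# Request the PHP server through the stunnel TLS front-end
curl -k https://localhost:443/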
I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx with Laravel and also the SSL settings in your operating system at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult because you have to fight line-ending (CR/LF) differences when dealing with SSL certs. Sometimes you can go through the steps correctly but still get ruined by cert validation issues. I find the trick is to make the certs on Ubuntu or Mac and email them to yourself, or use the Linux subsystem.
In my case, I kept running into an issue where I declared HTTPS somewhere, but php artisan serve only works over HTTP.
I just triggered this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out I was using Axios to make a POST request to an https:// URL. Changing it to POST to http:// fixed it.
My recommendation to anyone would be to take a look at where and how HTTP/HTTPS is being used.
The textbook definition is probably something like: php artisan serve only works over HTTP, so a request that arrives wrapped in an SSL layer is rejected as unsupported.
Use Ngrok
Expose your server's port like so:
ngrok http <server port>
Browse with the ngrok's secure public address (the one with https).
Note: though it works like a charm, it seems like overkill since it requires an internet connection; I would appreciate better recommendations.