What are the new requirements for certificates in Chrome? - google-chrome

Chrome now throws NET::ERR_CERT_INVALID for some certificates that are supported by other browsers.
The only clue I can find is in this list of questions about the new Chrome Root Store that is also blocking enterprise CA installations.
https://chromium.googlesource.com/chromium/src/+/main/net/data/ssl/chrome_root_store/faq.md
In particular,
The Chrome Certificate Verifier will apply standard processing to include checking:
the certificate's key usage and extended key usage are consistent with TLS use-cases.
the certificate validity period is not in the past or future.
key sizes and algorithms are of known and acceptable quality.
whether mismatched or unknown signature algorithms are included.
that the certificate does not chain to or through a blocked CA.
conformance with RFC 5280.
I verified my certificates work as expected in Edge.
Further, I verified the certificate is version "3", has a 2048-bit key, and has the extended key usage for server authentication.
I still don't understand which "standard" this certificate is expected to conform to when the browser only says "invalid". Is there a simple template or policy I can use?
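For reference, most of the properties in that list can be inspected locally with OpenSSL; a minimal sketch, assuming the certificate is saved as server.crt in PEM format:
openssl x509 -in server.crt -noout -text
This dumps the validity dates, key size, signature algorithm, key usage, extended key usage, and Basic Constraints in one place.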

Chrome now rejects TLS certificates that contain a field known as pathLenConstraint (sometimes displayed as Path Length Constraint) in the Basic Constraints extension.
I was using certificates issued by Microsoft Active Directory Certificate Services. The Basic Constraints extension was enabled, and in that configuration AD CS incorrectly injects Path Length Constraint=0 into end-entity (non-CA) certificates.
The solution is to issue certificates without Basic Constraints. Chrome is equally happy with Basic Constraints on or off, so long as the path length field is not present.
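You can check whether a certificate carries this field with OpenSSL (a quick check, assuming OpenSSL 1.1.1 or newer for the -ext option):
openssl x509 -in server.crt -noout -ext basicConstraints
An end-entity certificate should show either no Basic Constraints at all, or CA:FALSE with no path length.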
One of the better resources for troubleshooting was this Certificate Linter:
https://crt.sh/lintcert
It found several errors in the server certificate, including the path length set to zero.
I also found a thread discussing a variety of Certificate Authorities that would issue certificates the same way, so it is a fairly common issue.
https://github.com/pyca/cryptography/issues/3856
Another good resource was the smallstep open-source project, which I installed as an alternative CA. After generating a generic certificate with it, the invalid-cert error went away, and I realized something was going on between the Microsoft and Google products.
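If you want to reproduce that comparison, here is a rough sketch with smallstep's step CLI (flag names are from memory, so double-check them against your version):
step certificate create "Test Root CA" root.crt root.key --profile root-ca
step certificate create server.example.com server.crt server.key --profile leaf --san server.example.com --ca root.crt --ca-key root.key
The resulting leaf certificate should carry Basic Constraints CA:FALSE with no pathLenConstraint, which matches what Chrome expects from a server certificate.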

The best favour you can do yourself is to run Chrome with debug logging to find the exact cause of the issue:
chrome --enable-logging --v=1
This, I believe, will print:
ERROR: Target certificate looks like a CA but does not set all CA properties
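On desktop platforms the output usually ends up in chrome_debug.log inside the Chrome user data directory; you can also stream it to the terminal and filter it, for example (the binary name varies by platform):
google-chrome --enable-logging=stderr --v=1 2>&1 | grep -i cert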
Meanwhile, it seems they have reverted this verification, which, if I'm not mistaken, will be released as Chrome 111 at the beginning of March.
See: https://chromium-review.googlesource.com/c/chromium/src/+/4119124

Following @Robert's answer, I used https://crt.sh/lintcert to fix all the issues I had, so my self-signed certificate keeps working; it had suddenly stopped working with NET::ERR_CERT_INVALID.
Here's how I did it:
# https://www.openssl.org/docs/manmaster/man5/x509v3_config.html
cat > "$_X509V3_CONFIG_PATH" << EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=critical,CA:true
keyUsage=critical,digitalSignature,nonRepudiation,cRLSign,keyCertSign
subjectAltName=@alt_names
issuerAltName=issuer:copy
subjectKeyIdentifier=hash
[alt_names]
DNS.1=somesubdomain.mydomain.com.test
EOF
openssl x509 -req \
-days "$_ROOTCA_CERT_EXPIRE_DAYS" \
-in "$_ROOTCA_PEM_PATH" \
-signkey "$_ROOTCA_KEY_PATH" \
-extfile "$_X509V3_CONFIG_PATH" \ # <--- Consuming the extensions file
-out "$_DOMAIN_CRT_PATH"
Following the above, these are my remaining lint findings; even though there is a single ERROR, my Chrome browser trusts the root CA and the self-signed certificate:
cablint WARNING CA certificates should not include subject alternative names
cablint INFO CA certificate identified
x509lint ERROR AKID without a key identifier
x509lint INFO Checking as root CA certificate
For those of you who wish to generate a self-signed certificate for local development with HTTPS, the following gist does the trick: https://gist.github.com/unfor19/37d6240c35945b5523c77b8aa3f6eca0
Usage:
curl -L --output generate_self_signed_ca_certificate.sh https://gist.githubusercontent.com/unfor19/37d6240c35945b5523c77b8aa3f6eca0/raw/07aaa1035469f1e705fd74d4cf7f45062a23c523/generate_self_signed_ca_certificate.sh && \
chmod +x generate_self_signed_ca_certificate.sh
./generate_self_signed_ca_certificate.sh somesubdomain.mydomain.com
# Will automatically create a self-signed certificate for `somesubdomain.mydomain.com.test`
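For the browser to actually trust the result, the generated root CA still has to be imported somewhere Chrome looks. On Debian/Ubuntu-style systems that is roughly (the rootCA.crt file name is an assumption; use whatever file the script produced):
sudo cp rootCA.crt /usr/local/share/ca-certificates/rootCA.crt
sudo update-ca-certificates
Chrome may additionally need the CA imported under chrome://settings/certificates (or the OS keychain on macOS/Windows), since it does not read the system bundle on every platform.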

Related

How to get InfluxDB to accept a self-signed certificate?

I've been trying to get InfluxDB to accept a self-signed certificate, but so far, no luck. I've been following the instructions from here:
https://docs.influxdata.com/influxdb/v2.3/security/enable-tls/#configure-influxdb-to-use-tls
I created the cert and key with this command:
openssl req -x509 -nodes -newkey rsa:2048 -keyout influxdb-selfsigned.key -out influxdb-selfsigned.crt -days 9999 -config "C:\OpenSSL\openssl.cnf"
The config.yml file is as follows:
http-bind-address: ":8087"
tls-cert: influxdb-selfsigned.crt
tls-key: influxdb-selfsigned.key
Note, I made the bind port 8087 to ensure it was reading the configuration.
When I start influx from the command line, there are no error messages. Initially there were some TLS handshake errors, but those disappeared, I think when I added the crt and key to the configuration.
However, when I access the URL https://localhost:8087, Chrome shows a "not secure" message and I have to click through warnings to get to the site.
To try to get Chrome to trust the certificate, I followed the instructions from this site:
https://www.pico.net/kb/how-do-you-get-chrome-to-accept-a-self-signed-certificate
I exported the cert, then re-imported it as trusted.
However, I still get the "not secure" message in Chrome.
Also, the InfluxDB console shows this message:
info http: TLS handshake error from [::1]:63065: remote error: tls: unknown certificate {"log_id": "0cKnmWB0000", "service": "http"}
Any ideas how to get the cert working?
Currently it seems there is no easy way or workaround in the open source version. The community has been asking for this feature, but there has been no progress yet. See more details here.
However, in Enterprise version, you could configure the server to know you are using self-signed certificates by setting this configuration in influxdb-meta.conf file:
# If using a self-signed certificate:
https-insecure-tls = true
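Note also that, for the browser side of the problem, Chrome requires the hostname to appear in a Subject Alternative Name, and the openssl command in the question does not add one. A possible regeneration sketch (assuming OpenSSL 1.1.1 or newer for -addext):
openssl req -x509 -nodes -newkey rsa:2048 -keyout influxdb-selfsigned.key -out influxdb-selfsigned.crt -days 9999 -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost,IP:127.0.0.1" -config "C:\OpenSSL\openssl.cnf"
The regenerated certificate then still needs to be imported as trusted, as described in the question.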

How to debug a self-signed certificate?

I have created a self-signed certificate and imported the CA cert into Trusted Root Certification Authorities, but Chrome still gives me ERR_CERT_COMMON_NAME_INVALID. I followed this guide: https://gist.github.com/jchandra74/36d5f8d0e11960dd8f80260801109ab0. When opening the domain in Chrome, the PEM-encoded chain shows the server certificate and the certificate I supplied. I set both commonName and DNS.1 under alt_names to my.site.com and started chrome --host-rules="MAP my.site.com 127.0.0.1". How could I debug this? How can I check whether Chrome sees the CA I imported, and whether it tries to use it with the cert Apache supplies?
If I bypass the warning, under security in Developer Tools I see "Certificate - valid and trusted. The connection to this site is using a valid, trusted server certificate issued by unknown name." but "Certificate - missing This site is missing a valid, trusted certificate (net::ERR_CERT_COMMON_NAME_INVALID)."
What I would like to see is something like "In field X of the certificate, expected Y, got Z".
Once chrome --host-rules="MAP my.site.com 127.0.0.1" is supplied, Chrome does not look for a certificate for my.site.com; instead it wants one for 127.0.0.1.
Make sure you have IP.1 = 127.0.0.1 in your alt_names section.
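For example, an alt_names section covering both the DNS name and the loopback address would look like this (the section name must match whatever subjectAltName = @alt_names references in your config):
[alt_names]
DNS.1 = my.site.com
IP.1 = 127.0.0.1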

Getting net::ERR_CERT_COMMON_NAME_INVALID

I'm getting this error in Chrome (v59.0.3071.109); I have tried a couple of answers without any luck.
This is what shows in the security tab:
The certificate for this site does not contain a Subject Alternative Name extension containing a domain name or IP address
There are issues with the site's certificate chain (net::ERR_CERT_COMMON_NAME_INVALID).
I followed this tutorial to create the certificate with these values:
CN = localhost
OU = ort
O = ort
L = montevideo
S = MVD
C = UY
And this is my host https://localhost:8181/Gateway-war/
So far I have tried:
Enabling this flag chrome://flags/#allow-insecure-localhost
Adding --ignore-certificate-errors to the Chrome shortcut; it shows a message saying this flag isn't allowed because it affects security and stability
Using this workaround: reg add HKLM\Software\Policies\Google\Chrome /v EnableCommonNameFallbackForLocalAnchors /t REG_DWORD /d 1
In all the cases I restarted Chrome before trying it out.
Maybe my CN should be something more than localhost?
Any ideas are welcome
When your certificate is configured correctly, you don't need any of those workarounds. All you have to do is add the SubjectAltName extension to your certificate to make the browser happy.
I assume you are using a self-signed certificate. If so, it needs to carry a 'SubjectAltName' entry for your hostname. You could use keystore-explorer (an open-source GUI for keytool) to generate a certificate with that extension set.
If it is CA-signed, you need to make sure you send these extension attributes in your CSR.
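As a command-line alternative to the GUI, keytool can also set the SAN at generation time; a sketch using the subject values from the question (the keystore name, alias, passwords and validity are placeholders):
keytool -genkeypair -alias localhost -keyalg RSA -keysize 2048 -validity 365 -keystore keystore.jks -storepass changeit -dname "CN=localhost, OU=ort, O=ort, L=montevideo, S=MVD, C=UY" -ext SAN=dns:localhost,ip:127.0.0.1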
You need to create a certificate with a "Subject Alternative Name". On Windows you can use PowerShell. The certificate will be stored in the Windows certificate store; you can access the certificates via certlm.msc and export them to a file from there (certmgr.msc covers the current-user store). An example of a certificate with a "Subject Alternative Name", using the TextExtension parameter of New-SelfSignedCertificate, is below.
New-SelfSignedCertificate -CertStoreLocation cert:\LocalMachine\My -NotAfter (Get-Date).AddYears(10) -FriendlyName "My Network Name" -KeyExportPolicy Exportable -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -TextExtension @("2.5.29.17={text}DNS=*.example.com&IPAddress=192.168.1.1")

How to create re-encrypting route for hawkular-metrics in OpenShift Enterprise 3.2

As per the documentation to enable cluster metrics, I should create a re-encrypting route, per the statement below:
$ oc create route reencrypt hawkular-metrics-reencrypt \
--hostname hawkular-metrics.example.com \
--key /path/to/key \
--cert /path/to/cert \
--ca-cert /path/to/ca.crt \
--service hawkular-metrics \
--dest-ca-cert /path/to/internal-ca.crt
What exactly should I use for these keys and certificates?
Do these already exist somewhere, or do I need to create them?
Openshift Metrics developer here.
Sorry if the docs were not clear enough.
The route is used to expose Hawkular Metrics, particularly to the browser running the OpenShift console.
If you don't specify any certificates, the system will use a self-signed certificate instead. The browser will complain that this self-signed certificate is not trusted, but you can usually just click through and accept it anyway. If you are ok with this, then you don't need to do any extra steps.
If you want the browser to trust this connection by default, then you will need to provide your own certificates signed by a trusted certificate authority, exactly as you would for a normal site served over HTTPS.
From the following command:
$ oc create route reencrypt hawkular-metrics-reencrypt \
--hostname hawkular-metrics.example.com \
--key /path/to/key \
--cert /path/to/cert \
--ca-cert /path/to/ca.crt \
--service hawkular-metrics \
--dest-ca-cert /path/to/internal-ca.crt
'cert' corresponds to your certificate signed by the certificate authority
'key' corresponds to the key for your certificate
'ca-cert' corresponds to the certificate authority's certificate
'dest-ca-cert' corresponds to the certificate authority which signed the self-signed certificate generated by the metrics deployer
The docs https://docs.openshift.com/enterprise/3.2/install_config/cluster_metrics.html#metrics-reencrypting-route should explain how to get the dest-ca-cert from the system
First of all, and as far as I know, using a re-encrypting route is optional. The documentation mentions deploying without importing any certificate:
oc secrets new metrics-deployer nothing=/dev/null
You should be able to start with that and get Hawkular working (for instance, you'll be able to curl with the '-k' option). But a re-encrypting route is sometimes necessary, since some clients refuse to communicate with untrusted certificates.
This page explains which certificates are needed: https://docs.openshift.com/enterprise/3.1/install_config/cluster_metrics.html#metrics-reencrypting-route
Note that you can also configure it from the web console if you find that more convenient: from https://(your_openshift_host)/console/project/openshift-infra/browse/routes, you can create a new route and upload the certificate files from that page. Under "TLS termination" select "Re-Encrypt", then provide the 4 certificate files.
If you don't know how to generate self-signed certificates you can follow steps described here: https://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certificate-authority/ . You'll end up with a rootCA.pem file (use it for "CA Certificate"), a device.key file (or name it hawkular.key, and upload it as private key) and a device.crt file (you can name it hawkular.pem, it's the PEM format certificate). When asked for the Common Name, make sure to enter the hostname for your hawkular server, like "hawkular-metrics.example.com"
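For reference, the steps from that link boil down to roughly the following (a condensed sketch using the file names mentioned above):
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem
openssl genrsa -out hawkular.key 2048
# answer the Common Name prompt with your Hawkular hostname, e.g. hawkular-metrics.example.com
openssl req -new -key hawkular.key -out hawkular.csr
openssl x509 -req -in hawkular.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out hawkular.pem -days 500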
The final one to provide is the current self-signed certificate used by Hawkular, under the so-called "Destination CA Certificate". The OpenShift documentation explains how to get it: run
base64 -d <<< \
`oc get -o yaml secrets hawkular-metrics-certificate \
| grep -i hawkular-metrics-ca.certificate | awk '{print $2}'`
and, if you're using the web console, save it to a file then upload it under Destination CA Certificate.
Now you should be done with re-encrypting.

Can't access WildFly over HTTPS with browsers but can with OpenSSL client

I've deployed Keycloak on WildFly 10 via Docker. SSL support was enabled via cli. Final standalone.xml has:
<security-realm name="UndertowRealm">
<server-identities>
<ssl>
<keystore path="keycloak.jks" relative-to="jboss.server.config.dir" keystore-password="changeit"
alias="mydomain" key-password="changeit"/>
</ssl>
</server-identities>
</security-realm>
Undertow subsystem:
<https-listener name="default-https" security-realm="UndertowRealm"
socket-binding="https"/>
Key was generated and placed in $JBOSS_HOME/standalone/configuration
keytool -genkey -noprompt -alias mydomain \
-dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" \
-keystore keycloak.jks -storepass changeit -keypass changeit
Port 8443 is exposed via Docker.
Accessing https://mydomain:8443/ in chrome results in ERR_CONNECTION_CLOSED. Firefox returns "Secure Connection Failed, the connection was interrupted..."
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
Input:
GET / HTTP/1.1
Host: https://mydomain:8443
This returns the Keycloak welcome page.
So clearly WildFly is working, but I am being blocked by the browsers for some reason. What could that reason be? I was under the impression that I should be able to add an exception for a self-signed certificate in either browser. Maybe the generated key length is too short, or maybe I am hitting some other security constraint imposed by Firefox/Chrome?
Using these parameters in keytool solved the problem: -keyalg RSA -keysize 2048
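For reference, the keytool command from the question with those parameters added would look roughly like this (same placeholder values as above):
keytool -genkey -noprompt -alias mydomain \
-dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" \
-keyalg RSA -keysize 2048 \
-keystore keycloak.jks -storepass changeit -keypass changeit
Without -keyalg RSA, keytool historically defaults to a DSA key, which modern browsers will refuse to negotiate; that would explain the ERR_CONNECTION_CLOSED.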
... -dname "CN=mydomain
The certificate is probably malformed. Browsers and other user agents, like cURL and OpenSSL, use different policies to validate an end-entity certificate. A browser will reject a certificate whose hostname appears only in the Common Name (CN), while other user agents will accept it.
The short answer to this problem: place DNS names in the Subject Alternative Name (SAN), not the Common Name (CN).
You may still encounter other problems, but getting the names right will help immensely with browsers.
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
OpenSSL prior to 1.1.0 did not perform hostname validation. Prior versions will accept any name.
cURL or Wget would be a better tool to test with in this case.
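For example, a quick check with cURL that does verify names (assuming the self-signed certificate has been exported to mydomain.crt):
curl -v --cacert mydomain.crt https://mydomain:8443/
If the names are wrong, curl reports the hostname mismatch explicitly instead of accepting it.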
For reading on the verification you should perform when using OpenSSL, see:
SSL/TLS Client
For reading on the rules for hostnames and where they should appear in a X509 certificate, see:
How do you sign Certificate Signing Request with your Certification Authority?
How to create a self-signed certificate with openssl?