Can someone tell me how to make a server choose an ECDH_* cipher over ECDHE_*?

I am using RSA for the certificate signature and the SSL_CTX_set_tmp_ecdh_callback() API to set the ECDH parameters for key exchange. The server always ends up choosing a TLS_ECDHE_RSA_* cipher suite. If I make the client send only TLS_ECDH_* cipher suites in the ClientHello, the server breaks the connection stating "no shared cipher".
Can someone tell me how to make a server choose an ECDH_* cipher over ECDHE_*?
How does the server decide whether to choose an ECDH_* or an ECDHE_* cipher?

Now that this has been moved to where it's on-topic and clarified enough:
Ephemeral ECDH suites: TLS suites that use ephemeral ECDH key exchange (ECDHE-*) use at least nominally ephemeral ECDH keys, which OpenSSL calls 'temporary'. OpenSSL through 1.0.2 has three ways of setting these keys (each available at both the SSL_CTX and per-SSL level):
SSL_CTX_set_tmp_ecdh or SSL_set_tmp_ecdh set (only) the 'curve' to be used. To be exact this is an EC_GROUP, formally a 'parameter set', consisting of an actual curve defined by a curve equation on an underlying field, plus a specified base point which generates a subgroup on the curve of sufficiently high order and low cofactor; most of the time we ignore this detail and just call it a 'curve'. OpenSSL then generates a random key on that curve during each handshake.
SSL_CTX_set_tmp_ecdh_callback or SSL_set_tmp_ecdh_callback sets a function that is called during each handshake and can either set a specific key, or set a curve and OpenSSL generates a random key on that curve.
SSL_CTX_set_ecdh_auto or SSL_set_ecdh_auto, new in 1.0.2, causes OpenSSL to choose a curve during each handshake based on the client hello and generate a random key on that curve.
Note that each ciphersuite using ECDHE also defines the type of key (with matching certificate chain) the server must use to authenticate: ECDHE-RSA must use an RSA key & cert while ECDHE-ECDSA must use an ECDSA key & cert (or, to be precise, an EC key and an ECDSA cert, since the same EC key can be used for ECDSA, ECDH, ECIES, and more, but usually shouldn't be). The OpenSSL library can be configured with multiple key & cert pairs, one of each type; the command-line s_server can do two static pairs using -cert -key -dcert -dkey plus one for SNI with -cert2 -key2, but other programs may or may not.
However, in 1.1.0 these functions are removed and it appears OpenSSL always does what was formerly ecdh_auto.
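To make the ephemeral case concrete, here is a minimal C sketch for 1.0.x, assuming an already-created SSL_CTX named ctx; the curve choice and the function name are just illustrative:

#include <openssl/ssl.h>
#include <openssl/ec.h>
#include <openssl/obj_mac.h>

/* Pin the ECDHE curve to P-256; OpenSSL then generates a fresh
   ephemeral key on this curve during each handshake. */
static int use_p256_for_ecdhe(SSL_CTX *ctx)
{
    EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
    if (ecdh == NULL)
        return 0;
    SSL_CTX_set_tmp_ecdh(ctx, ecdh);   /* the context keeps its own copy */
    EC_KEY_free(ecdh);
    return 1;
}

On 1.0.2 you can instead call SSL_CTX_set_ecdh_auto(ctx, 1) and let OpenSSL pick a curve from the client hello; as noted above, in 1.1.0 that behaviour is simply the default.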
Static ECDH suites: TLS suites that use static (aka fixed) ECDH key exchange (ECDH-*) use a static ECDH key and do not use an ephemeral or temporary ECDH key. Since they do not use a temporary key, the functions for setting a temporary curve or key are irrelevant and have no useful effect. Instead the static ECDH key must be in the server's configured key and certificate pair, and the certificate must allow ECDH, i.e. it must not have a keyUsage that excludes keyAgreement. In addition, in TLS 1.0 and 1.1 the configured certificate must be signed by a CA using a signature algorithm matching the ciphersuite: ECDH-ECDSA ciphersuites must use an ECDH cert signed by an ECDSA CA, and ECDH-RSA ciphersuites must use an ECDH cert signed by an RSA CA; see RFC 4492 section 5.3. For TLS 1.2, RFC 5246 section 7.4.2 (and A.7 for ECC) relaxes this requirement and allows the CA cert to use any algorithm permitted by the client's signature_algorithms extension. However, on checking I found OpenSSL doesn't implement this relaxation, so part of my earlier comment is wrong: even for 1.2 it requires the CA signature algorithm to match the ciphersuite.
In addition, for all protocol versions the key and (EE) cert must use a curve supported by the client in the supported-curves extension, and the cert must express that key in 'named' form (using an OID to identify the curve rather than explicit parameters) and in a point format supported by the client in the point-formats extension. With an OpenSSL client this is never an issue because it supports all named curves and point formats, and in practice certificates don't use explicit curve parameters.
Thus to get static ECDH with OpenSSL:
configure the server with an EC key (SSL_[CTX_]use_PrivateKey*) and matching certificate (SSL_[CTX_]use_certificate[_chain]*) that allows keyAgreement and is signed by a CA using RSA or ECDSA -- and like all PK-based ciphersuites also configure any chain certs needed by the client(s) to validate the cert
configure both ends to allow (which is true by default), and at least one end to require, or the end whose preference controls to prefer, ciphersuite(s) ECDH-xyz where xyz (RSA or ECDSA) matches the CA signature on the server cert
ignore ecdh_tmp and ecdh_auto entirely
... except in 1.1.0, which on checking I found no longer implements any static-ECDH or static-DH ciphersuites -- even though the static-DH suites are still in the manpage for ciphers. This is not in the CHANGES file that I can find, and I haven't had time to go through the code yet.
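For 1.0.2 and earlier, a rough sketch of that server-side setup might look like this (file names are placeholders, and the certificate is assumed to meet the conditions above, i.e. an EC key, keyAgreement allowed, and an RSA CA signature for the ECDH-RSA suites):

#include <openssl/ssl.h>

static int configure_static_ecdh(SSL_CTX *ctx)
{
    /* The certificate carries the static ECDH key; no tmp_ecdh or
       ecdh_auto call is needed for ECDH-* (non-ephemeral) suites. */
    if (SSL_CTX_use_certificate_chain_file(ctx, "server-ecdh.crt") != 1)
        return 0;
    if (SSL_CTX_use_PrivateKey_file(ctx, "server-ecdh.key", SSL_FILETYPE_PEM) != 1)
        return 0;
    if (SSL_CTX_check_private_key(ctx) != 1)
        return 0;
    /* Offer only static ECDH-RSA suites so the server cannot fall back
       to ECDHE-*; use ECDH-ECDSA-* names for an ECDSA-signed cert. */
    return SSL_CTX_set_cipher_list(ctx,
        "ECDH-RSA-AES128-GCM-SHA256:ECDH-RSA-AES256-SHA") == 1;
}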

The commit that removed the static-ECDH and static-DH ciphersuites in 1.1.0 appears to be:
https://github.com/openssl/openssl/commit/ce0c1f2bb2fd296f10a2847844205df0ed95fb8e

Related

PSD2 QSealC signed message

I have looked everywhere for an example of a QSealC signed message, and I could not find any info.
I need to verify the signature of a QsealC-signed payload AND I need to sign the responding payload, but I understand that the payload is all in JSON.
Are there examples out there of QSealC signed payloads?
thanks
You will do both the signing and validation as detailed by IETF's draft-cavage-http-signatures, where you should pay special attention to section 4.1.1 for constructing and section 2.5 for verifying the Signature header.
This draft is referenced by both Berlin Group's XS2A NextGenPSD2 Framework Implementation Guidelines and Stet (France). However, note that it's normal that each unique implementation imposes additional requirements on the HTTP Message Signing standard, e.g. by requiring specific headers to be signed or using a specially formatted keyId. I am not sure whether other standardizations such as Open Banking (UK) reference it.
Take note that you do not need actual QsealC PSD2 certificates to begin your implementation work on either the signing or the validation process, as you can create your own self-issued certificates, e.g. using OpenSSL, by adding the OIDs found in the ASN.1 profile described in ETSI TS 119 495.
However, I strongly recommend you find a QTSP in your region and order certificates both for development and testing, and for use in production when the time comes.
I won't go into details on the actual process of creating the signature itself, as it's very well detailed in draft-cavage-http-signatures, but consider the following example:
You're requesting GET https://api.bank.eu/v1/accounts, and after processing your outgoing request you end up with the following signing string:
date: Sun, 12 May 2019 17:03:04 GMT
x-request-id: 69df69c1-76d0-4590-8f28-50449a21d0d8
psu-id: 289da2e6-5a01-430d-8075-8f7af71f6d2b
tpp-redirect-uri: https://httpbin.org/get
The resulting Signature could look something like this:
keyId=\"SN=D9EA5432EA92D254,CA=CN=Buypass Class 3 CA 3,O=Buypass AS-983163327,C=NO\",
algorithm=\"rsa-sha256\",
headers=\"date x-request-id psu-id tpp-redirect-uri\",
signature=\"base64(rsa-sha256(signing_string))\"
The above signature (with some line breaks added for readability) adheres to the Berlin Group requirements as detailed in Section 12.2 of their implementation guidelines (as of v1.3), which in short are:
the keyId must be formatted as SN={serial},CA={issuer}, but note that it seems to be up to the ASPSP to decide how the serial and issuer are formatted. However, most are likely to require the serial in uppercase hexadecimal representation and the issuer formatted in conformance with RFC 2253 or RFC 4514.
The algorithm used must be either rsa-sha256 or rsa-sha512
The following headers must be part of the signing string if present in the request; date, digest, x-request-id, psu-id, psu-corporate-id, tpp-redirect-uri
The signature must be base-64 encoded
As developers have only just begun to adopt this way of signing messages, you'll likely have to implement this yourself - but it's not too difficult if you carefully read the above-mentioned draft.
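If you do implement it yourself, the cryptographic step is just an RSA-SHA256 signature over the signing string, base64-encoded. A rough sketch using OpenSSL's EVP API and the example signing string from above (the key file name is a placeholder):

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/pem.h>

int main(void)
{
    /* The signing string exactly as built from the request headers,
       lines joined with '\n' and no trailing newline. */
    const char *signing_string =
        "date: Sun, 12 May 2019 17:03:04 GMT\n"
        "x-request-id: 69df69c1-76d0-4590-8f28-50449a21d0d8\n"
        "psu-id: 289da2e6-5a01-430d-8075-8f7af71f6d2b\n"
        "tpp-redirect-uri: https://httpbin.org/get";

    FILE *fp = fopen("tpp-sealing-key.pem", "r");   /* placeholder name */
    if (fp == NULL)
        return 1;
    EVP_PKEY *key = PEM_read_PrivateKey(fp, NULL, NULL, NULL);
    fclose(fp);
    if (key == NULL)
        return 1;

    EVP_MD_CTX *md = EVP_MD_CTX_create();
    size_t siglen = 0;
    unsigned char sig[1024], b64[2048];
    if (EVP_DigestSignInit(md, NULL, EVP_sha256(), NULL, key) != 1 ||
        EVP_DigestSignUpdate(md, signing_string, strlen(signing_string)) != 1 ||
        EVP_DigestSignFinal(md, NULL, &siglen) != 1 ||
        siglen > sizeof(sig) ||
        EVP_DigestSignFinal(md, sig, &siglen) != 1)
        return 1;

    EVP_EncodeBlock(b64, sig, (int)siglen);   /* base64, no line breaks */
    printf("signature=\"%s\"\n", (char *)b64);

    EVP_MD_CTX_destroy(md);
    EVP_PKEY_free(key);
    return 0;
}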
However, vendors have begun supporting the scheme, e.g. Apache CXF currently supports both signing and validation from v3.3.0 in their cxf-rt-rs-security-http-signature module as mentioned in the Apache CXF 3.3 Migration Guide. Surely, others will follow.
Validating the actual signature is easy enough, but validating the actual QsealC PSD2 certificate is a bit more cumbersome: you'll likely have to integrate with the EU's List of Trusted Lists to retrieve the root and intermediate certificates used to issue these certificates, and form a chain of trust together with e.g. Java's cacerts and the Microsoft Trusted Root Certificate Program. I personally have had good experiences using difi-certvalidator (Java) for the actual validation process, as it proved very easy to extend to our needs, but there are surely many other good tools and libraries out there.
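For the signature check itself, the verification step can be sketched like this, assuming you have already rebuilt the signing string from the received headers and base64-decoded the Signature value (the function name is mine, not part of any standard API):

#include <string.h>
#include <openssl/evp.h>
#include <openssl/x509.h>

/* Returns 1 if sig (siglen bytes, already base64-decoded) is a valid
   rsa-sha256 signature over signing_string made with the key in the
   sender's QsealC certificate, 0 otherwise. */
int verify_http_signature(X509 *qsealc_cert, const char *signing_string,
                          const unsigned char *sig, size_t siglen)
{
    EVP_PKEY *pub = X509_get_pubkey(qsealc_cert);
    if (pub == NULL)
        return 0;
    EVP_MD_CTX *md = EVP_MD_CTX_create();
    int ok = EVP_DigestVerifyInit(md, NULL, EVP_sha256(), NULL, pub) == 1 &&
             EVP_DigestVerifyUpdate(md, signing_string,
                                    strlen(signing_string)) == 1 &&
             /* cast only to satisfy older (non-const) OpenSSL prototypes */
             EVP_DigestVerifyFinal(md, (unsigned char *)sig, siglen) == 1;
    EVP_MD_CTX_destroy(md);
    EVP_PKEY_free(pub);
    return ok;
}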
You'll also need to pay special attention to the certificate's organizationIdentifier (OID 2.5.4.97) and qcStatements (OID 1.3.6.1.5.5.7.1.3). You should check the certificate's organizationIdentifier against the Preta directory, as there might be instances where a TPP's authorization has been revoked by its NCA, but a CRL revocation hasn't yet been published by its QTSP.
DISCLAIMER: When it comes to the whole QsealC PSD2 certificate signing and validation process, I find both Berlin Group and EBA to be very diffuse, leaving several aspects open to interpretation.
This is a rough explanation, but hopefully it gives you enough to get started.

CAS X.509 auth with attributes from database

I want to configure Apereo CAS 6.0.x to perform X.509 authentication and then retrieve principal attributes from a database table.
Rudimentary X.509 authentication is working with these lines in application.properties (and appropriate reverse proxy setup):
cas.authn.x509.extractCert=true
cas.authn.x509.sslHeaderName=SSL_CLIENT_CERT
cas.authn.x509.principalDescriptor=SUBJECT_DN
The default "Log In Successful" page shows that it knows how to get my certificate's subject DN.
But I can't figure out how to tell CAS to then use that subject DN value to query my database for additional attributes.
This page explicitly mentions my need (though with LDAP instead of JDBC), but does not say specifically how to achieve it:
In many cases it is necessary to perform authentication by one means and resolve principals by another. The PrincipalResolver component provides this functionality. A common use case for this mix-and-match strategy arises with X.509 authentication. It is common to store certificates in an LDAP directory and query the directory to resolve the principal ID and attributes from directory attributes. The X509CertificateAuthenticationHandler may be combined with an LDAP-based principal resolver to accommodate this case.
What properties need to be set so that the X509 authentication handler resolves the principal against the database?
The missing ingredient was this line in application.properties:
cas.authn.x509.principalType=SUBJECT_DN
Without it, CAS does not attempt to query any attributeRepository settings that you may have.

NET::ERR_CERT_REVOKED in Chrome, when the certificate is not actually revoked

I am looking for some help, trying to satisfy my curiosity by figuring out why Chrome 46.0.2490.80 won't let me access https://www.evernote.com, while Firefox works fine. Chrome was working fine too, until 2 days ago, but now it throws the NET::ERR_CERT_REVOKED error.
So I got curious - is the certificate actually revoked? Well let's check...
I opened the certificate dialog and exported the certificate (evernote.pem) and its issuer chain (evernote-chain.pem):
Grab the OCSP Responder URI from the certificate:
$ openssl x509 -noout -ocsp_uri -in evernote.pem
http://ss.symcd.com
Let's check the certificate status now:
$ openssl ocsp -no_nonce -issuer evernote-chain.pem -CAfile evernote-chain.pem -cert evernote.pem -url http://ss.symcd.com
Response verify OK
evernote.pem: good
This Update: Dec 16 09:14:05 2015 GMT
Next Update: Dec 23 09:14:05 2015 GMT
So, the certificate is not revoked, which is why Firefox works correctly. So what's going on with Chrome? Why does it think that this certificate is revoked?
I did notice another detail, which may or may not be important - I don't really understand it. The chain of certificates in Chrome is different from the chain obtained from Firefox, or from openssl. Chrome is seeing the following chain:
|- Class 3 Public Primary Certification Authority (Builtin Object Token, self-signed)
|---- VeriSign Class 3 Public Primary Certification Authority - G5 (35:97:31:87:F3:87:3A:07:32:7E:CE:58:0C:9B:7E:DA)
|------- Symantec Class 3 Secure Server CA - G4
|---------- www.evernote.com
Firefox and openssl see this instead:
|- VeriSign Class 3 Public Primary Certification Authority - G5 (18:DA:D1:9E:26:7D:E8:BB:4A:21:58:CD:CC:6B:3B:4A, self-signed)
|---- Symantec Class 3 Secure Server CA - G4
|------- www.evernote.com
I am not sure how to interpret this. It seems VeriSign Class 3 Public Primary CA is almost the same in Chrome, except instead of being self signed, it is replaced with something that looks exactly like it, but signed by this other "Builtin Object Token" in Chrome... What does this mean? Could it have anything to do with the problem I am experiencing?
UPDATE:
The first part of the question has been answered below. The reason the site stopped working recently is that Google decided to distrust the “Class 3 Public Primary CA” root certificate, as explained here:
https://googleonlinesecurity.blogspot.ca/2015/12/proactive-measures-in-digital.html
and here: https://code.google.com/p/chromium/issues/detail?id=570892
Since the decision was reversed for now, the problem can be fixed by getting the latest CRLSet in chrome://components/
However, the second part of the question remains:
Where is Chrome getting its certificate chain from?
Firefox, openssl and https://www.digicert.com/help/, all show this same chain:
VeriSign Class 3 Public Primary Certification Authority - G5
18:DA:D1:9E:26:7D:E8:BB:4A:21:58:CD:CC:6B:3B:4A
Symantec Class 3 Secure Server CA - G4
51:3F:B9:74:38:70:B7:34:40:41:8D:30:93:06:99:FF
www.evernote.com
18:A9:E9:D2:F7:F4:D9:A1:40:23:36:D0:F0:6F:DC:91
Yet Chrome is using:
Class 3 Public Primary Certification Authority
70:BA:E4:1D:10:D9:29:34:B6:38:CA:7B:03:CC:BA:BF
- This is the no longer trusted Root CA
VeriSign Class 3 Public Primary Certification Authority - G5
35:97:31:87:F3:87:3A:07:32:7E:CE:58:0C:9B:7E:DA <- WTF?!
Symantec Class 3 Secure Server CA - G4
51:3F:B9:74:38:70:B7:34:40:41:8D:30:93:06:99:FF
www.evernote.com
18:A9:E9:D2:F7:F4:D9:A1:40:23:36:D0:F0:6F:DC:91
The only explanation I can think of is that the "evil" version of the "VeriSign Class 3 Public Primary Certification Authority - G5" certificate is already in my certificate store somewhere. It has exactly the same CN and Authority Key Identifier as the "good" version, but it references a CA that is no longer trusted by Chrome. I have verified this assumption by direct comparison of the certificate in question between Chrome and Firefox. They are identical and must have been generated using the same private key (or they wouldn't both correctly verify the signature on the Symantec cert), but one is self-signed (the good one) and the other is not (the evil one).
So where is this certificate store/cache, which Chrome is using? Is it internal, or would it be system wide on Ubuntu? I am pretty sure that if I was to find and clear this cache, www.evernote.com would send me a full certificate chain during the next TLS handshake, and everything would right itself (this seems to support my theory: https://security.stackexchange.com/questions/37409/certificate-chain-checking).
But how do I blow away all of my cached certificates in Chrome?
Not sure what it all means, but the answer is there:
https://code.google.com/p/chromium/issues/detail?id=570892
and
https://googleonlinesecurity.blogspot.ca/2015/12/proactive-measures-in-digital.html
Google revoked a Symantec certificate from Google products, but they have suspended the revocation following the type of issues you're describing (which I also experienced). Quoting the Chromium ticket:
First, the good news is the change has been temporarily reverted, and you should find access restored. You can force an update by going to chrome://components and under CRLSet, clicking "Update". You should have version 2698 or later to fix this issue.
To answer the second part of the question, i.e. why Chrome finds a different trust path than the others.
The server is actually sending the following certificates to the client:
www.evernote.com
18:A9:E9:D2:F7:F4:D9:A1:40:23:36:D0:F0:6F:DC:91
Symantec Class 3 Secure Server CA - G4
51:3F:B9:74:38:70:B7:34:40:41:8D:30:93:06:99:FF
VeriSign Class 3 Public Primary Certification Authority - G5
25:0c:e8:e0:30:61:2e:9f:2b:89:f7:05:4d:7c:f8:fd
Note that the serial number on the last certificate is different from both serial numbers you have in your text, which means that there are several certificates with the same subject and public key out there, all of which can be used to build the trust chain.
Depending on the certificates you have installed in your system and on the SSL stack used for validation, different trust paths are possible. For instance, with OpenSSL 1.0.1 on Ubuntu 14.04 it will also use the longer trust path you've seen with Chrome, i.e. the one which ends with the locally installed CA "Class 3 Public Primary Certification Authority". With OpenSSL 1.0.2 the handling of multiple trust paths was changed, and it now prefers the shorter path, which actually ignores the "VeriSign Class 3 Public Primary Certification Authority - G5" sent by the server and instead ends the trust chain at the locally installed version of a similar certificate (same public key and subject, different serial number).
The version of Chrome you've used had the specific Symantec certificate removed, so one of the possible trust paths now contains a revoked certificate, which means that path can no longer be trusted. The bug is probably that Chrome then uses this result as the final result instead of looking for a different trust path which would still be valid.
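If you want to see exactly which certificates the server sends (as opposed to the path each client builds from them), a small OpenSSL client sketch like the following prints each one's subject and serial (the host name is hard-coded for this example, and no certificate verification is performed):

#include <stdio.h>
#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <openssl/x509.h>

int main(void)
{
    SSL_library_init();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    BIO *conn = BIO_new_ssl_connect(ctx);
    SSL *ssl = NULL;
    BIO_get_ssl(conn, &ssl);
    SSL_set_tlsext_host_name(ssl, "www.evernote.com");   /* SNI */
    BIO_set_conn_hostname(conn, "www.evernote.com:443");

    if (BIO_do_connect(conn) <= 0 || BIO_do_handshake(conn) <= 0)
        return 1;

    /* The certificates exactly as received from the server, in order */
    STACK_OF(X509) *chain = SSL_get_peer_cert_chain(ssl);
    BIO *out = BIO_new_fp(stdout, BIO_NOCLOSE);
    for (int i = 0; i < sk_X509_num(chain); i++) {
        X509 *cert = sk_X509_value(chain, i);
        char subject[256];
        X509_NAME_oneline(X509_get_subject_name(cert), subject, sizeof(subject));
        BIO_printf(out, "#%d %s\n    serial=", i, subject);
        i2a_ASN1_INTEGER(out, X509_get_serialNumber(cert));
        BIO_printf(out, "\n");
    }

    BIO_free(out);
    BIO_free_all(conn);
    SSL_CTX_free(ctx);
    return 0;
}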
From what I understand, Chrome uses certificates/cert store of the local machine, but Firefox uses its own in-built cert store. This would explain the discrepancy between the browsers.

Mac OS X - Accept self-signed multi domain SSL Certificate

I've got a self-signed certificate for multiple domains (let's say my.foo.bar.com & yours.foo.bar.com) that I've imported using Keychain Access but Chrome will still not accept it, prompting me for verification at the beginning of each browsing session per domain.
The certificate was generated using the X.509v3 subject alternative name extension to validate multiple domains. If I navigate to the site before I import the certificate, I get a different warning message than after importing. Attached below is an image of the two errors (the top being the error before importing).
Is there any way to accept a self-signed multi-domain certificate? I only get warnings in Chrome, btw. FF and Safari work great (except those browsers suck ;) )
UPDATE: I tried generating the cert both with the openssl cli and the xca GUI
The problem is that you're trying to use too broad a wildcard (* or *.com).
The specifications (RFC 6125 and RFC 2818 Section 3.1) talk about "left-most" labels, which implies there should be more than one label:
1. The client SHOULD NOT attempt to match a presented identifier in
which the wildcard character comprises a label other than the
left-most label (e.g., do not match bar.*.example.net).
2. If the wildcard character is the only character of the left-most
label in the presented identifier, the client SHOULD NOT compare
against anything but the left-most label of the reference
identifier (e.g., *.example.com would match foo.example.com but
not bar.foo.example.com or example.com).
I'm not sure whether there's a specification to say how many minimum labels there should be, but the Chromium code indicates that there must be at least 2 dots:
We required at least 3 components (i.e. 2 dots) as a basic protection
against too-broad wild-carding.
This is indeed to prevent overly broad cases like *.com. This may seem inconvenient, but CAs make mistakes once in a while, and having a measure that prevents a potential rogue cert issued for *.com from working isn't necessarily a bad thing.
If I remember correctly, some implementations go further than this and also keep a list of second-level domains that would be too broad (e.g. .co.uk).
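As an illustration of that breadth rule (this is not Chromium's actual code, just the idea), the check boils down to counting labels in the wildcard name:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* A wildcard pattern needs at least 3 labels (2 dots) to be accepted:
   "*.foo.example.com" is fine, "*.com" and "*" are rejected. */
static bool wildcard_too_broad(const char *pattern)
{
    if (strcmp(pattern, "*") == 0)
        return true;
    if (strncmp(pattern, "*.", 2) != 0)
        return false;                 /* not a leading wildcard label */
    int dots = 0;
    for (const char *p = pattern; *p != '\0'; p++)
        if (*p == '.')
            dots++;
    return dots < 2;                  /* "*.com" has only one dot */
}

int main(void)
{
    const char *tests[] = { "*", "*.com", "*.bar.com", "*.foo.bar.com" };
    for (int i = 0; i < 4; i++)
        printf("%-15s %s\n", tests[i],
               wildcard_too_broad(tests[i]) ? "too broad" : "ok");
    return 0;
}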
Regarding your second example: "CN:bar.com, SANs: DNS:my.foo.bar.com, DNS:yours.foo.bar.com". This certificate should be valid for my.foo.bar.com and yours.foo.bar.com but not bar.com. The CN is only a fallback solution when no SANs are present. If there are any SANs, the CN should be ignored (although some implementations will be more tolerant).
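If you want to confirm how a given certificate's names are matched (SAN first, CN only as a fallback when no SANs are present), OpenSSL 1.0.2+ can do the check for you; a small sketch, with the file and host names as placeholders:

#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>
#include <openssl/x509v3.h>

int main(void)
{
    FILE *fp = fopen("cert.pem", "r");          /* placeholder file name */
    if (fp == NULL)
        return 1;
    X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
    fclose(fp);
    if (cert == NULL)
        return 1;

    const char *names[] = { "my.foo.bar.com", "yours.foo.bar.com", "bar.com" };
    for (int i = 0; i < 3; i++) {
        /* X509_check_host returns 1 on match, 0 on mismatch, <0 on error */
        int r = X509_check_host(cert, names[i], 0, 0, NULL);
        printf("%-20s %s\n", names[i], r == 1 ? "matches" : "does not match");
    }
    X509_free(cert);
    return 0;
}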

Certificate with SubjectAlternativeName (SAN) gives ERR_CONNECTION_RESET in Google Chrome

There is a non-public-facing application. I am trying to make sure that there are no HTTPS-related warnings/errors.
The error I receive after deploying the certificate with a SAN field (signed by a trusted CA) is that the web application won't load at all: Google Chrome (latest), and only Chrome, throws ERR_CONNECTION_RESET. This is the only error I get; Firefox and Safari work fine.
In contrast, a self-signed certificate with a SAN field gives no SAN-related HTTPS warning; I'll leave aside the other error I get simply because the certificate is self-signed.
One anomaly I see is that the root certificate (one level up; the chain is two levels deep) in the certificate chain of the trusted CA-signed certificate does not have any SAN field in it.
I have not tried fixing that part, i.e. attaching a SAN field to the root certificate (I'm unsure whether that can be fixed at all, so I won't say much about it). I want to discuss it here before taking further action on my side.
The application (for which the certificate with SAN is needed) is non-public, i.e. the DNS entry in the SAN field contains a domain address which isn't publicly resolvable (this shouldn't be an issue as long as my browser can resolve it?).
Any kind of insight is greatly helpful. Please suggest if clarification is needed.
P.S.
Some updates:
I wanted to test whether an openssl-generated self-signed certificate with X.509 extensions (SAN, Key Usage) would give the same error. Interestingly, it did not give that Chrome error. It works fine as it should (apart from the self-signed error, which isn't a concern atm).
Upon opening the certificate file in Windows and checking the extensions, Key Usage for the self-signed certificate is Key Encipherment, Data Encipherment (30) and is marked non-critical; for the CA-signed one (Microsoft Active Directory Certificate Services) it is Digital Signature, Key Encipherment (a0) and is marked critical.
To rule out whether the issue could be the Key Usage extension being marked critical, I generated a self-signed certificate (again using openssl) with that X.509 extension marked as critical. Interestingly, it worked fine as well; in this case too, the only error I got was that the certificate is self-signed.
When a requested signed certificate is returned by the CA, a bunch of other non-critical X.509 extensions are added to the certificate. I do not see that as a problem at this point; however, a critical Key Usage extension along with those non-critical extensions might conceivably open up room for failure.
For the TLS handshake, there is no reply from the server-side at all.
Is it fair to think that this could be an issue with the encoding used in the CA-signed certificate?
This post is now followed up at Serverfault
The CA-signed certificate was supposed to be in PEM/Base64 format, but it was actually in DER format. Converting it fixed the problem.
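The quickest fix is a one-line conversion such as openssl x509 -inform DER -in cert.der -out cert.pem; a programmatic equivalent, if you need one, might look like this (file names are placeholders):

#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

int main(void)
{
    FILE *in = fopen("cert.der", "rb");
    if (in == NULL)
        return 1;
    X509 *cert = d2i_X509_fp(in, NULL);   /* parse the DER encoding */
    fclose(in);
    if (cert == NULL)
        return 1;

    FILE *out = fopen("cert.pem", "w");
    if (out == NULL)
        return 1;
    PEM_write_X509(out, cert);            /* re-emit as PEM/Base64 */
    fclose(out);
    X509_free(cert);
    return 0;
}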