I've got a self-signed certificate for multiple domains (let's say my.foo.bar.com & yours.foo.bar.com) that I've imported using Keychain Access but Chrome will still not accept it, prompting me for verification at the beginning of each browsing session per domain.
The certificate was generated using the X.509v3 subject alternative name extension to validate multiple domains. If I navigate to the site before importing the certificate, I get a different warning message than after importing it. Attached below is an image of the two errors (the top being the error before importing).
Is there any way to accept a self-signed multi-domain certificate? I only get warnings in Chrome, btw. FF and Safari work great (except those browsers suck ;) )
UPDATE: I tried generating the cert both with the openssl cli and the xca GUI
The problem is that you're trying to use too broad a wildcard (* or *.com).
The specifications (RFC 6125 and RFC 2818 Section 3.1) talk about "left-most" labels, which implies there should be more than one label:
1. The client SHOULD NOT attempt to match a presented identifier in
which the wildcard character comprises a label other than the
left-most label (e.g., do not match bar.*.example.net).
2. If the wildcard character is the only character of the left-most
label in the presented identifier, the client SHOULD NOT compare
against anything but the left-most label of the reference
identifier (e.g., *.example.com would match foo.example.com but
not bar.foo.example.com or example.com).
I'm not sure whether any specification says how many labels there must be at minimum, but the Chromium code indicates that there must be at least 2 dots:
We required at least 3 components (i.e. 2 dots) as a basic protection
against too-broad wild-carding.
This is indeed to prevent overly broad cases like *.com. It may seem inconvenient, but CAs make mistakes once in a while, and having a measure to keep a potential rogue cert issued for *.com from working isn't necessarily a bad thing.
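To make the rules above concrete, here is a minimal sketch of the matching logic in Python. It is an illustration of the RFC 6125-style rules and Chromium's "at least 3 components" check as described above, not Chromium's actual code:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Illustrative sketch: honour '*' only as the entire left-most label,
    and reject patterns with fewer than two dots (e.g. '*' or '*.com'),
    mirroring the 'at least 3 components' protection discussed above."""
    if pattern.count(".") < 2:
        # Too broad a wildcard: '*' or '*.com'
        return False
    first, _, rest = pattern.partition(".")
    if first != "*":
        # No wildcard: require an exact (case-insensitive) match
        return pattern.lower() == hostname.lower()
    # '*' replaces exactly one label: *.example.com matches foo.example.com,
    # but not bar.foo.example.com or example.com
    host_first, _, host_rest = hostname.partition(".")
    return host_first != "" and host_rest.lower() == rest.lower()
```

This reproduces the behaviour quoted from the RFCs: `*.example.com` matches `foo.example.com` but neither `bar.foo.example.com` nor `example.com`, and `*.com` is rejected outright.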
If I remember correctly, some implementations go further and keep a list of second-level domains (e.g. .co.uk) that would also be too broad.
Regarding your second example: "CN:bar.com, SANs: DNS:my.foo.bar.com, DNS:yours.foo.bar.com". This certificate should be valid for my.foo.bar.com and yours.foo.bar.com but not bar.com. The CN is only a fallback solution when no SANs are present. If there are any SANs, the CN should be ignored (although some implementations will be more tolerant).
Related
Say I have this header set on mywebsite.com:
Content-Security-Policy: script-src 'self' https://*.example.com
I know it will allow https://foo.example.com and https://bar.example.com, but will it allow https://example.com alone?
Looking at the spec....
Hosts such as example.com (which matches any resource on the host, regardless of scheme) or *.example.com (which matches any resource on the host or any of its subdomains (and any of its subdomains' subdomains, and so on))
...it seems as if it should allow plain https://example.com. However, I've found several different sites (site 1, site 2, site 3, site 4) that all say https://example.com isn't included. Which is it?
*.example.com for a CSP header doesn’t also match example.com, per the current CSP spec.
That text cited from the (old) CSP spec is wrong (now fixed). The other sources cited are right.
But that https://www.w3.org/TR/CSP/#source-expression section cited, which defines what a CSP source expression is, isn’t actually stating the relevant normative requirements.
Instead the section of the CSP spec that does actually normatively define the relevant requirements is the Does url match expression in origin with redirect count algorithm, in a substep at https://www.w3.org/TR/CSP/#ref-for-grammardef-host-part-2 which reads:
If the first character of expression’s host-part is an U+002A ASTERISK character (*):
Let remaining be the result of removing the leading "*" from expression.
If remaining (including the leading U+002E FULL STOP character (.)) is not an ASCII case-insensitive match for the rightmost characters of url’s host, then return "Does Not Match".
The "including the leading U+002E FULL STOP character (.)" part of the requirement indicates that the remaining part left after the asterisk is removed includes the leading full stop, so the rightmost characters of the url's host must also start with a dot in order to match.
In other words, if you start with *.example.com and walk through that part of the algorithm, you start by removing the * to get .example.com as the remaining part, and then you match the rightmost characters of url's host against that, including the leading full stop.
So https://foo.example.com matches, because the rightmost characters of its host part match .example.com, but https://example.com doesn’t match, because the rightmost characters of its host part don’t match .example.com (because it lacks the included full stop).
2017-10-13 update
A while back I reported the problem with the CSP spec and it’s now been fixed.
The relevant part of the CSP spec now reads:
Hosts such as example.com (which matches any resource on the host, regardless of scheme) or *.example.com (which matches any resource on the host’s subdomains (and any of its subdomains' subdomains, and so on))
Notice that the part which had read “matches any resource on the host or any of its subdomains” now just reads “matches any resource on the host’s subdomains”.
According to Mozilla's docs you should include 'self' as well as *.example.com in the CSP header if you want to include the base domain.
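For example, assuming the page itself is served from https://example.com, a header covering both the base domain and all of its subdomains could look like:

```
Content-Security-Policy: script-src 'self' https://*.example.com
```

If the page is served from a different origin, list https://example.com explicitly instead of relying on 'self'.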
I have looked everywhere for an example of a QSealC signed message, and I could not find any info.
I need to verify the signature of a QSealC-signed payload, and I need to sign the response payload. I understand that the payload is all JSON.
Are there examples out there of QSealC signed payloads?
thanks
You will do both the signing and validation as detailed by IETF's draft-cavage-http-signatures, where you should pay special attention to section 4.1.1 for constructing and section 2.5 for verifying the Signature header.
This draft is referenced by both Berlin Group's XS2A NextGenPSD2 Framework Implementation Guidelines and Stet (France). However, note that it's normal that each unique implementation imposes additional requirements on the HTTP Message Signing standard, e.g. by requiring specific headers to be signed or using a specially formatted keyId. I am not sure whether other standardizations such as Open Banking (UK) reference it.
Take note that you do not need actual QSealC PSD2 certificates to begin your implementation work on either the signing or the validation process, as you can create your own self-issued certificates, e.g. using OpenSSL, by adding the OIDs found in the ASN.1 profile described in ETSI TS 119 495.
However, I strongly recommend you find a QTSP in your region and order certificates both for development and testing, and for use in production when the time comes.
I won't go into details on the actual process of creating the signature itself, as it's very well detailed in draft-cavage-http-signatures, but consider the following example:
You're requesting GET https://api.bank.eu/v1/accounts, and after processing your outgoing request you end up with the following signing string:
date: Sun, 12 May 2019 17:03:04 GMT
x-request-id: 69df69c1-76d0-4590-8f28-50449a21d0d8
psu-id: 289da2e6-5a01-430d-8075-8f7af71f6d2b
tpp-redirect-uri: https://httpbin.org/get
The resulting Signature header could look something like this:
keyId="SN=D9EA5432EA92D254,CA=CN=Buypass Class 3 CA 3,O=Buypass AS-983163327,C=NO",
algorithm="rsa-sha256",
headers="date x-request-id psu-id tpp-redirect-uri",
signature="base64(rsa-sha256(signing_string))"
The above signature adheres to the Berlin Group requirements detailed in Section 12.2 of their implementation guidelines (as of v1.3), except for some line breaks added for readability. In short:
the keyId must be formatted as SN={serial},CA={issuer}, but note that it seems to be up to the ASPSP to decide how the serial and issuer are formatted. However, most are likely to require the serial in uppercase hexadecimal representation and the issuer formatted in conformance with RFC 2253 or RFC 4514.
The algorithm used must be either rsa-sha256 or rsa-sha512
The following headers must be part of the signing string if present in the request; date, digest, x-request-id, psu-id, psu-corporate-id, tpp-redirect-uri
The signature must be base-64 encoded
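As a rough illustration of the mechanics (not a complete implementation: the actual signature must be an RSA-SHA256 signature over the signing string, made with the private key belonging to your QSealC certificate), building the signing string and assembling the Signature header could be sketched like this; the function names are mine, for illustration only:

```python
import base64

def build_signing_string(headers, order):
    """Concatenate 'name: value' lines with lowercased header names,
    newline-separated, as described in draft-cavage-http-signatures."""
    return "\n".join(f"{name.lower()}: {headers[name]}" for name in order)

def build_signature_header(key_id, raw_signature, order):
    """Assemble the Signature header. raw_signature must be the
    RSA-SHA256 signature bytes over the signing string, produced with
    the QSealC private key (a placeholder here)."""
    encoded = base64.b64encode(raw_signature).decode("ascii")
    return (f'keyId="{key_id}",algorithm="rsa-sha256",'
            f'headers="{" ".join(n.lower() for n in order)}",'
            f'signature="{encoded}"')
```

Note how the `headers` parameter of the Signature header preserves the exact order used when building the signing string; the verifier reconstructs the same string from the received request before checking the signature.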
As developers have only just begun to adopt this way of signing messages, you'll likely have to implement this yourself - but it's not too difficult if you carefully read the above-mentioned draft.
However, vendors have begun supporting the scheme, e.g. Apache CXF currently supports both signing and validation from v3.3.0 in their cxf-rt-rs-security-http-signature module as mentioned in the Apache CXF 3.3 Migration Guide. Surely, others will follow.
Validating the actual signature is easy enough, but validating the actual QSealC PSD2 certificate is more cumbersome: you'll likely have to integrate with the EU's List of Trusted Lists to retrieve the root and intermediate certificates used to issue these certificates, and form a chain of trust together with e.g. Java's cacerts or the Microsoft Trusted Root Certificate Program. I personally have good experiences with difi-certvalidator (Java) for the actual validation process, as it proved very easy to extend to our needs, but there are surely many other good tools and libraries out there.
You'll also need to pay special attention to the certificate's organizationIdentifier (OID: 2.5.4.97) and qcStatements (OID: 1.3.6.1.5.5.7.1.3). You should check the certificate's organizationIdentifier against the Preta directory, as there might be instances where a TPP's authorization has been revoked by its NCA but a CRL revocation hasn't yet been published by its QTSP.
DISCLAIMER: When it comes to the whole QSealC PSD2 certificate signing and validation process, I find both the Berlin Group and the EBA to be very diffuse, leaving several aspects open to interpretation.
This is a rough explanation, but hopefully it gives you enough to get started.
I'm using a wildcard certificate for two separate domains abc.com and xyz.com
It's a self-signed cert and I'm getting NET::ERR_CERT_AUTHORITY_INVALID.
Will ignoring the warnings thrown by one domain (abc.com) cause Chrome to ignore the warnings thrown in the other domain (xyz.com) by the same certificate?
The warning is only ignored for the domain you've explicitly added the exception for. Otherwise nasty attacks would be possible, like crafting a self-signed certificate with both "does-not-matter-if-secure.example.com" and "www.paypal.com" as subject alternative names, and then using the fact that the user adds an exception for the first, unimportant site to mount a man-in-the-middle attack against www.paypal.com.
What is the "official" url I should use if I want to indicate just a resource that fails as soon as possible?
I don't want to use www.example.com since it's an actual site that accepts and responds to requests, and I don't want something that takes forever and fails with a timeout (as typing a random private IP address can).
I thought about writing an invalid address or just some random text but I figured it wouldn't look as nice and clear as "www.example.com" is.
If you want an invalid IP, try using 0.0.0.0.
The first octet of an IP address cannot be 0, so 0.0.0.0 through 0.255.255.255 will be invalid.
For more info, see this question: what is a good invalid IP address to use for unit tests?
https://www.rfc-editor.org/rfc/rfc5735:
192.0.2.0/24 - This block is assigned as "TEST-NET-1" for use in documentation and example code. It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. As described in [RFC5737], addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry. See [RFC1166].
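With Python's standard library you can check whether an address falls inside the RFC 5737 documentation blocks (TEST-NET-1 quoted above, plus its companions TEST-NET-2 and TEST-NET-3); a small sketch:

```python
import ipaddress

# RFC 5737 documentation blocks; these never appear on the public Internet
TEST_NETS = [ipaddress.ip_network(n) for n in
             ("192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24")]

def is_documentation_ip(addr):
    """Return True if addr lies in one of the RFC 5737 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in TEST_NETS)
```

Any address in those ranges is safe to use in examples and tests without risk of reaching a real host.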
Use .invalid, as per RFC 6761:
The domain "invalid." and any names falling within ".invalid." are special [...] Users MAY assume that queries for "invalid" names will always return NXDOMAIN responses.
So a request for https://foo.invalid/bar will always fail, assuming well-behaved DNS.
Related question: What is a guaranteed-unresolvable (but valid) URL?
If it's in a browser, then about: is fairly useless. But it would be better if your service returned the correct HTTP status code, e.g. 200 = good, 404 = not found, etc.
http://en.wikipedia.org/wiki/List_of_HTTP_status_codes
There is a non-public-facing application. I am trying to make sure that there are no HTTPS-related warnings/errors.
The error I receive after installing the certificate with a SAN field (signed by a trusted CA) is that the web application won't load at all: Google Chrome (latest) throws ERR_CONNECTION_RESET, and that is the only error I get. Chrome is the only browser affected; Firefox/Safari work fine.
In contrast, a self-signed certificate with a SAN field gives no broken-HTTPS (SAN-related) warning; I'll leave aside the other error I get because that certificate is self-signed.
An anomaly I see is that the root certificate (one level up; there are 2 levels of depth in total) in the chain of the trusted CA-signed certificate does not have a SAN field.
I have not tried attaching a SAN field to the root certificate (unsure if that can be done at all), and I wanted to raise it here before taking further action.
The application (for which the certificate with the SAN is needed) is non-public, i.e. the DNS entry in the SAN field contains a domain that isn't publicly resolvable (which shouldn't be an issue as long as my browser can resolve it?).
Any kind of insight is greatly helpful. Please suggest if clarification is needed.
P.S.
Some updates:
I wanted to test whether an openssl-generated self-signed certificate with X.509 extensions (SAN, Key Usage) would give the same error. Interestingly, it did not: it works fine (apart from the self-signed error, which isn't a concern atm).
Upon opening the certificate file in Windows and checking the extensions: Key Usage for the self-signed certificate is Key Encipherment, Data Encipherment (30) and is marked non-critical, whereas for the CA-signed one (Microsoft Active Directory Certificate Services) it is Digital Signature, Key Encipherment (a0) and is marked critical.
To rule out Key Usage being marked critical as the cause, I generated a self-signed certificate (again using openssl) with that X.509 extension marked critical. Interestingly, it worked fine as well; the only error I got was for the certificate being self-signed.
When the requested signed certificate is returned by the CA, a bunch of other non-critical X.509 extensions have been added to it. I do not see that as a problem at this point, though a critical Key Usage extension combined with those non-critical extensions might conceivably open up a failure mode.
For the TLS handshake, there is no reply from the server-side at all.
Is it fair to think this could be an issue with the encoding of the CA-signed certificate?
This post is now followed up at Serverfault
The CA-signed certificate was supposed to be in PEM/Base64 format, but it was actually in DER format. Converting it fixed the problem.
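A quick way to spot this kind of mismatch is to check for the PEM armor at the start of the file; a minimal sketch (the path argument is whatever file your server is configured to load):

```python
def looks_like_pem(path):
    """PEM certificates are Base64 text wrapped in '-----BEGIN ...-----'
    markers; DER is raw binary, typically starting with the ASN.1
    SEQUENCE tag byte 0x30."""
    with open(path, "rb") as f:
        head = f.read(64)
    return b"-----BEGIN" in head
```

If this returns False for a file your server expects in PEM format, the usual conversion is `openssl x509 -inform der -in cert.der -out cert.pem`.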