Are CAs allowed to modify CSRs before signing?

Can anyone please tell me if Certificate Authorities (CAs) are allowed to make modifications to the Certificate Signing Request (CSR) before actually signing the certificate with their own private key?
Specifically, I'd like to know if it's valid for the CA to insert additional fields (such as EKUs) into the cert before adding their signature.

Yes
The Certificate Authority is responsible for enforcing the organisation's PKI security policy via its policy files and templates. This may include EKU (extended key usage) attributes.
In reality you are requesting a certificate of a certain type from the CA on behalf of your subject. It is up to the CA to enforce the type of certificates (and the associated uses) that it will issue.
The CA is not actually modifying the request so much as issuing a cert of a permitted type.
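For illustration, using openssl as a toy CA: the extensions that end up in the issued certificate come from a file the CA controls, not from the CSR (all file names here are made up):
# ext.cnf -- written by the CA, not the requester
extendedKeyUsage = serverAuth
openssl x509 -req -in request.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -extfile ext.cnf -out issued.crt
Whatever the requester put in request.csr, the issued certificate carries the EKU that ext.cnf specifies.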

I can't speak about CAs in general, but I once ran a Windows Server 2003 network with its own CA, and it's definitely possible to make certreq (through the -attrib option) add additional fields to the CSR before it gets to the CA. Thus, it looks to me like it's possible for the CA itself to do much the same thing.
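For example (the template name here is illustrative):
certreq -submit -attrib "CertificateTemplate:WebServer" request.req
The attribute is attached at submission time, and the CA's template then dictates the EKUs and other extensions of the issued certificate.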
Your mileage may vary.

Related

FIDO2 key without user presence check

Is it possible to have a FIDO2 USB key which I can use as a second factor without requiring me to perform the user-presence check? All the keys I've checked so far (YubiKey, Solo Keys, etc.) require me to tap them.
The intention is to use such a key in order to verify that the authentication process was really initiated from my computer and nothing more. That means I do not care if my computer gets cracked and some bad guy then performs an authentication via my computer. However, the key would at least prohibit others from authenticating as me from other devices. Having a "tap-less" FIDO2 key would be really convenient (for example, I would like to use it for my SSH keys, but tapping the FIDO key every time I log in is cumbersome).
All FIDO2 devices have a silent authenticator mode (no UV and no UP). This is done by setting specific flags in the request to the authenticator (UV=0 and UP=0); you also need to check whether GetInfo reports UV and/or UP as available.
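Roughly, the relevant part of a CTAP2 authenticatorGetAssertion request is its options map (a sketch; the rpId is illustrative):
{ "rpId": "example.com", "options": { "up": false, "uv": false } }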
However, browsers don't expose this option right now (as of November 2020), because there are security and privacy implications. There is some discussion about how this could be properly implemented, so in the future websites might be able to use it.
The ssh-keygen command from OpenSSH (since 8.2p1) has the -O no-touch-required option, which generates a key that does not require tapping. Note that the SSH server has to be set up to allow this, e.g. by adding the no-touch-required option to the respective authorized_keys entry.
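A sketch of both ends (the key type, paths and truncated key blob are examples):
ssh-keygen -t ed25519-sk -O no-touch-required -f ~/.ssh/id_ed25519_sk
# server side, in ~/.ssh/authorized_keys:
no-touch-required sk-ssh-ed25519@openssh.com AAAA... user@example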
This is against the FIDO standard: user presence or user verification is a mandatory feature of a certified CTAP product.
You could use an open-source key and modify it so that it responds to the user-presence check automatically.

Use a wildcard SSL cert to sign other certs

Is it possible to use a wildcard SSL certificate to sign other certificates?
i.e. I have bought a root-signed wildcard certificate for *.example.com
I want to allow a third party to provide a service for me, on thirdparty.example.com.
Is it possible for me to create a certificate for thirdparty.example.com and sign it using my *.example.com cert? Or do I have to buy a separate cert for the third party.
If this is not possible, is it possible to buy a domain-signing cert? Just to be clear, I only want to sign certs for names under example.com, not a root-level (.com) signing cert.
No, you can't. (Producing such a signature is technically possible, but the resulting certificate won't validate.) A certificate that is allowed to sign other certificates must have two extensions with the following values:
The Basic Constraints extension must be set to CA:TRUE and be marked as critical.
The Key Usage extension must have the keyCertSign and cRLSign bits enabled.
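You can check what your wildcard certificate actually carries with openssl (the file name is an example); an ordinary end-entity certificate will show CA:FALSE and no keyCertSign bit:
openssl x509 -in wildcard.pem -text -noout | grep -A1 -E 'Basic Constraints|Key Usage'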
is it possible to buy a domain-signing-cert?
Yes, it is possible, but it would be very expensive for you (unless you plan to issue a large number of certificates). You will have to pay a substantial price for this service, buy the required hardware (an HSM is mandatory), write documentation (a CPS at a minimum) and pass external audits to verify that you comply with the provider's CPS (certification practice statement). Some time ago I wrote an article about root certificate signing: Certification Authority Root Signing.
HTH

Using an SSL certificate [closed]

So, I'm brand new to creating an HTTPS-compatible site.
I'm currently working with a client for whom I developed a custom Facebook tab; the files are currently hosted on my server, which I have not purchased a security certificate for.
My client has a security certificate for one of their websites, which I do not have access to. My client sent me a text file with a combination of letters and numbers, and I have absolutely no idea what I'm supposed to do with it.
Anybody have any clue how I'm supposed to use it?
In short, you (probably) can't use it for that. But we need to check to be sure.
Background
As you know, SSL is used to secure the communication between two systems, one a server and the other a client (well, for the purposes of this communication link anyway). For the code that sits on the initiating, client end of the communication channel to know that it's talking to the right server, it needs not just secure communication but also the identity of the server. (Without that, DNS spoofing or any number of IP-level tricks would be utterly massive problems.) This is where certificates come in.
Servers have a cryptographic identity (a public/private key pair) that they use as part of the bootstrapping of the SSL connection, which proves who they are. The public part is told to anyone who asks, and the server proves that it has the private part through the fact that it can do the key-pair-based cryptography (basically, that's mathematical magic, a.k.a. number theory). Then, all the client has to do to know whether to trust the connection is to work out whether it trusts the identity stamped into the public key. This can either be by having been previously told directly "trust this certificate" or by the fact that it was digitally signed by someone it trusts (which is how the Certificate Authority system works).
A certificate is basically the public key of a key-pair, at least one digital signature, plus additional information. Examples of the additional information that could be there are the name of the host for which this is a certificate, the period of time for which the certificate is valid, who the administrative contact is, or where to go to find out whether the certificate has been withdrawn early. There are many other options.
What to do with a bare certificate?
With a bare certificate (in PEM format, as you say) all you can do is add it to your collection of trusted certificates or look at the information encoded within the certificate. So we'll start by looking at the information. For that, we use the openssl program (which has a horrible command line interface):
openssl x509 -in thecert.pem -text -noout
That will splurge a whole bunch of information out. The most important part is the “Subject” field; what or who is this certificate talking about? Since this is about HTTPS (which imposes a few extra constraints of its own) we should check whether that contains a hostname of some kind, and what host it is talking about.
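If you only want those specific fields, openssl can print them directly:
openssl x509 -in thecert.pem -noout -subject -dates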
Now you have the information to be able to figure out what's going on.
If the whole certificate matches up (especially the digital signature) with what you've already got deployed on your own HTTPS-enabled server, then your customer has just sent you back something you already have. Ho hum.
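A quick way to check is to compare fingerprints (the second file name is an example):
openssl x509 -in thecert.pem -noout -fingerprint -sha256
openssl x509 -in deployed-cert.pem -noout -fingerprint -sha256
If the two digests match, it is the same certificate.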
If the hostname is for a machine that you control and your customer doesn't (e.g., your development server) then your customer has just tried to get a certificate on your behalf. That's a bit of a no-no, but I advise taking it well — especially if you've not yet set up HTTPS. For the purposes of testing, you can get your own single-host certificate (that signs a public key where you've generated the private key yourself) for next to nothing. It's also a reasonable expense to bill your customer.
If the hostname is for the machine where the customer has told you they want to deploy your code in production, then they've just given you something you don't really need. I suppose it might be relevant for client code that wants to connect to the deployment server, but that's not as useful as all that; certificates expire, stuff moves round, and all sorts of things happen in production that can mean that it is useful to issue a new server certificate. Having to push updates to all the deployed clients just because someone accidentally deleted the server certificate without keeping a backup (a more common thing than you might wish) would Truly Suck. Thus, the deployment host certificate is not something you should need.
If it's none of these, and it's a long-lived certificate (check the Validity field from the information you printed out before), then it might actually be the certificate of a back-end service that you're supposed to talk to. Or the certificate of a private CA that signs all the certificates of the back-end services that you talk to. (Are you doing this? I don't know, and I don't know your app, but it's quite possible.) In this case you would add the certificate to the list of trusted certificates in your code (the exact way depends on how your code handles SSL), and this is the only use I can think of for a certificate at the stage you're at.
Trouble is, I don't think (on the basis of what you write) that it's all that likely. Talk to your customer; security is something where you want to get it right, and use and trust of certificates is key to that.
If it's truly none of the above, talk to the customer and say you're a bit confused. I know I am in this case!

Demystifying Web Authentication

I'm currently researching user authentication protocols for a website I'm developing. I would like to create an authentication cookie so users can stay logged in between pages.
Here is my first bash:
cookie = user_id|expiry_date|HMAC(user_id|expiry_date, k)
Where k is HMAC(user_id|expiry_date, sk) and sk is a 256-bit key known only to the server. The HMAC is HMAC-SHA256. Note that '|' is a separator, not just concatenation.
This looks like this in PHP:
// derive a per-user, per-expiry key from the server-wide secret
$key = hash_hmac('sha256', $user_id . '|' . $expiry_time, SECRET_KEY);
// MAC the payload with the derived key
$digest = hash_hmac('sha256', $user_id . '|' . $expiry_time, $key);
// cookie = payload plus MAC
$cookie = $user_id . '|' . $expiry_time . '|' . $digest;
I can see that it's vulnerable to replay attacks, as stated in A Secure Cookie Protocol, but it should be resistant to volume attacks and cryptographic splicing.
THE QUESTION: Am I on the right lines here, or is there a massive vulnerability that I've missed? Is there a way to defend against Replay Attacks that works with dynamically assigned IP addresses and doesn't use sessions?
NOTES
The most recent material I have read:
Dos and Don'ts of Client Authentication on the Web (Fu et al.): https://pdos.csail.mit.edu/papers/webauth:sec10.pdf
A Secure Cookie Protocol (Liu et al.), which expands on the previous method: http://www.cse.msu.edu/~alexliu/publications/Cookie/cookie.pdf
Hardened Stateless Session Cookies, which also expands on the previous method: http://www.lightbluetouchpaper.org/2008/05/16/hardened-stateless-session-cookies/
As the subject is extremely complicated, I'm only looking for answers from security experts with real-world experience in creating and breaking authentication schemes.
This is fine in general; I've done something similar in multiple apps. It is no more susceptible to replay attacks than session IDs already were. You can protect the tokens from leakage, and hence replay, by using SSL, same as you would for session IDs.
Minor suggestions:
Put a field in your user data that gets updated on password change (maybe a password-generation counter, or even just the random salt), and include that field in the token and the signed part. Then when users change their password they also invalidate any stolen tokens. Without this you are limited in how long you can reasonably allow a token to live before expiry.
Put a scheme identifier in the token and signed-part, so that (a) you can have different types of token for different purposes (eg one for auth and one for XSRF protection), and (b) you can update the mechanism with a new version without having to invalidate all the old tokens.
Ensure user_id is never re-used, to prevent a token being used to gain access to a different resource with the same ID.
Pipe-delimiting assumes | can never appear in any of the field values. This probably works for the numeric values you are (presumably) dealing with, but you might at some point need a more involved format, eg URL-encoded name/value pairs.
The double-HMAC doesn't seem to really get you much. Both brute force and cryptanalysis against HMAC-SHA256 are already implausibly hard by current understanding.
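To make those suggestions concrete, here is a minimal PHP sketch of issuing and verifying such a token (the v1 scheme label, the field layout and SECRET_KEY are illustrative; hash_equals needs PHP 5.6+):
// token layout: scheme|user_id|pwd_counter|expiry|HMAC(scheme|user_id|pwd_counter|expiry)
function issue_token($user_id, $pwd_counter, $ttl) {
    $payload = 'v1|' . $user_id . '|' . $pwd_counter . '|' . (time() + $ttl);
    return $payload . '|' . hash_hmac('sha256', $payload, SECRET_KEY);
}
function verify_token($cookie, $current_pwd_counter) {
    $parts = explode('|', $cookie);
    if (count($parts) !== 5 || $parts[0] !== 'v1') return false;      // unknown scheme version
    list($v, $user_id, $counter, $expiry, $mac) = $parts;
    $payload  = $v . '|' . $user_id . '|' . $counter . '|' . $expiry;
    $expected = hash_hmac('sha256', $payload, SECRET_KEY);
    if (!hash_equals($expected, $mac)) return false;                  // constant-time compare
    if (time() > (int)$expiry) return false;                          // token expired
    if ((int)$counter !== $current_pwd_counter) return false;         // password changed, token revoked
    return $user_id;                                                  // caller must compare with ===
}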
Unless your transactions/second will tax your hardware, I would only pass a hash in the cookie (i.e. leave out the user_id and expiry_date -- no sense giving the bad people any more information than you absolutely have to).
You could make some assumptions about what the next dynamic IP address should be, given the previous dynamic IP address (I don't have the details handy, alas). Hashing only the unchanging part of the dynamic IP address would help in verifying the user even when their IP address changes. This may or may not work, given the varieties of IP address allocation schemes.
You could get information about the system and hash that also -- in Linux, you could uname -a (but there are similar capabilities available for other OSes). Enough system information, and you might be able to skip using the (partial) IP address entirely. This technique will require some experimentation. Using only normally-browser-supplied system information would make it easier.
You need to think about how long your cookies should remain fresh. If you can live with people having to authenticate once daily, that would be easier on your system authentication coding than allowing people to authenticate only once a month (and so on).
I would consider this protocol very weak!
Your session cookie is not drawn from a high-entropy random source.
The server must perform cryptographic operations on every request to verify a user.
The security of ANY user relies solely on the security of the server key sk.
The server key sk is the most vulnerable part here.
If anyone can guess it or steal it, they can log in as any user they choose.
So if sk is generated for each session and user, then why the HMAC?
I assume you will use TLS anyway; if not, consider your protocol broken because of replay attacks and eavesdropping in general!
If sk is generated for each user, but not for each session, it is similar to a 256-bit password.
If sk is identical for all users, someone just has to crack those 256 bits and they can log in as any user they want; they only have to guess the user ID and the expiration date.
Have a look at digest authentication.
It's a per-request authentication scheme, specified in RFC 2617.
It is secure against replay attacks, using nonces sent on each request.
It protects against eavesdropping by using hashing.
It is integrated into HTTP.
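For illustration, a challenge/response pair looks roughly like this (all values are made up, not computed):
WWW-Authenticate: Digest realm="example.com", qop="auth", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c0", opaque="5ccc069c403ebaf9f0171e9517f40e41"
Authorization: Digest username="alice", realm="example.com", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c0", uri="/protected", qop=auth, nc=00000001, cnonce="0a4f113b", response="6629fae49393a05397450978507c4ef1", opaque="5ccc069c403ebaf9f0171e9517f40e41"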

Is a software token a valid second factor in multi-factor security?

We are changing our remote log-in security process at my workplace, and we are concerned that the new system does not use multi-factor authentication as the old one did. (We had been using RSA key-fobs, but they are being replaced due to cost.) The new system is an anti-phishing image system which has been misunderstood to be a two-factor authentication system. We are now exploring ways to continue providing multi-factor security without issuing hardware devices to the users.
Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? Would this be considered "something the user has", or would it simply be another form of "something the user knows"?
Edit: phreakre makes a good point about cookies. For the sake of this question, assume that cookies have been ruled out as they are not secure enough.
I would say "no". I don't think you can really get the "something you have" part of multi-factor authentication without issuing something the end user can carry with them. If you "have" something, it implies it can be lost - not many users lose their entire desktop machines. The security of "something you have", after all, comes from the following:
you would notice when you don't have it - a clear indication security has been compromised
only one person can have it, so if you have it, someone else doesn't
Software tokens do not offer the same guarantees, and I would not in good conscience class it as something the user "has".
While I am not sure it is a "valid" second factor, many websites have been using this type of process for a while: cookies. Hardly secure, but it is the type of item you are describing.
As for "something the user has" vs "something the user knows": if it is something resident on the user's PC [like a background app providing information when asked but not requiring the user to do anything], I would file it under "things the user has". If they are typing a password into some field and then typing another password to unlock the information you are storing on their PC, then it is "something the user knows".
With regards to commercial solutions already in existence: we use a product for Windows called BigFix. While it is primarily a remote configuration and compliance product, we have a module for it that works as part of our multi-factor system for remote/VPN situations.
A software token is a second factor, but it probably isn't as good a choice as an RSA fob. If the user's computer is compromised, the attacker could silently copy the software token without leaving any trace that it's been stolen (unlike an RSA fob, where they'd have to take the fob itself, so the user has a chance to notice it's missing).
I agree with #freespace that the image is not part of the multi-factor authentication for the user. As you state, the image is part of the anti-phishing scheme. I think the image is actually a weak authentication of the system to the user: it assures the user that the website is genuine and not a fake phishing site.
Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system?
For a software-based token system, you may want to investigate the Kerberos protocol: http://en.wikipedia.org/wiki/Kerberos_(protocol). I am not sure whether this would count as multi-factor authentication, though.
What you're describing is something the computer has, not the user.
So you can supposedly (depending on the implementation) be assured that it is the computer, but you get no assurance regarding the user...
Now, since we're talking about remote login, perhaps the situation is personal laptops? In which case, the laptop is the something you have, and of course the password to it as something you know... Then all that remains is secure implementation, and that can work fine.
Security is always about trade-offs. Hardware tokens may be harder to steal, but they offer no protection against network-based MITM attacks. If this is a web-based solution (I assume it is, since you're using one of the image-based systems), you should consider something that offers mutual HTTPS authentication. Then you get protection from the numerous DNS attacks and wi-fi based attacks.
You can find out more here:
http://www.wikidsystems.com/learn-more/technology/mutual_authentication
and
http://en.wikipedia.org/wiki/Mutual_authentication
and here is a tutorial on setting up mutual authentication to prevent phishing:
http://www.howtoforge.net/prevent_phishing_with_mutual_authentication
The image-based system is pitched as mutual authentication, which I guess it is, but since it's not based on cryptographic principles, it's pretty weak. What's to stop a MITM from presenting the image too? And IMO it's less than user-friendly, too.