FIDO2 key without user presence check - fido-u2f

Is it possible to have a FIDO2 USB key which I can use as a second factor without requiring me to perform the user presence check? All the keys I've checked so far (YubiKey, Solo Keys, etc.) require me to tap them.
The intention is to use such a key in order to verify that the authentication process was really initiated from my computer and nothing more. That means, I do not care if my computer gets cracked and some bad guy then performs an authentication via my computer. However, the key would at least prevent others from authenticating as me from other devices. Having a "tap-less" FIDO2 key would be really convenient (for example, I would like to use it for my SSH keys, but tapping the FIDO key every time I log in is cumbersome).

All FIDO2 devices support a silent authenticator mode (no UV and no UP). This is done by setting the corresponding options in the request to the authenticator (UV=0 and UP=0); you also need to check whether the authenticator's GetInfo response reports UV and/or UP as available.
However, browsers don't expose this option right now (as of November 2020), because there are security and privacy implications. There is some discussion about how this could be implemented properly, so in the future websites might be able to use it.

The ssh-keygen command from OpenSSH (since 8.2p1) has the -O no-touch-required option, which produces a key that does not require tapping. Note that the SSH server has to be set up to allow this, e.g. by adding the no-touch-required option to the respective authorized_keys entry.
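For example, generating such a key and authorizing it could look roughly like this (a sketch; the file name, key comment, and the elided public key are placeholders):
# On the client: create a FIDO-backed key that does not require a touch
ssh-keygen -t ed25519-sk -O no-touch-required -f ~/.ssh/id_ed25519_sk_notouch
# In the server's ~/.ssh/authorized_keys: explicitly allow this key without the touch check
no-touch-required sk-ssh-ed25519@openssh.com AAAA... user@host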

This is against the FIDO standard; user presence or user verification is a mandatory feature of a certified CTAP product.
You could use an open-source key and modify it so that it responds to the user-presence check automatically.

HTML- Required "bug" [duplicate]

Using a simple tool like Firebug, anyone can change JavaScript parameters on the client side. If someone takes the time to study your application for a while, they can learn how to change JS parameters and use that to attack your site.
For example, an ordinary user could delete entities which they can see but are not allowed to change. I know a good developer must check everything on the server side, but this means more overhead: you must first check against data from the DB in order to validate the request. This takes a lot of time; every action must be validated, and that can only be done by fetching the needed data from the DB.
What would you do to minimize hacking in that case?
A simpler way to validate would be to add another parameter to every JavaScript function; this parameter would be a signature computed over the other parameters and a secret key.
How good does the solution above sound to you?
Our team uses teamworkpm.net to organize our work. I just discovered that I can edit someone else's tasks by changing a JavaScript function (which initially edits my own tasks).
On every call to the server, before performing the action, the server side needs to check whether this user is allowed to perform it.
It is necessary to build a server-side permissions mechanism to prevent unwanted actions; you may want to define permissions for groups of users rather than at the individual-user level, which makes it easier.
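To make that concrete, a server-side ownership check for a delete action might look roughly like this (a sketch with made-up table and column names, assuming $pdo is an existing PDO connection; adapt it to your own schema and framework):
session_start();

$taskId = (int) ($_POST['task_id'] ?? 0);
$userId = $_SESSION['user_id'] ?? null;

if ($userId === null) {
    http_response_code(401);
    exit('Not logged in');
}

// Look up the task's owner on the server before acting; never trust the client.
$stmt = $pdo->prepare('SELECT owner_id FROM tasks WHERE id = ?');
$stmt->execute([$taskId]);
$ownerId = $stmt->fetchColumn();

if ($ownerId === false || (int) $ownerId !== (int) $userId) {
    http_response_code(403);
    exit('You do not have permission to delete this task');
}

$pdo->prepare('DELETE FROM tasks WHERE id = ?')->execute([$taskId]);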
Anything on the client side could be spoofed. If you use some type of secret key + parameter signature, your signature algorithm must be sufficiently random/secure that it cannot be reverse engineered.
The overhead created by adding client-side complexity is better spent crafting proper server-side validation.
What would you do to minimize hacking in that case?
You can't get around using validation methods on the server side.
A more simple way to validate is to add another parameter for every javascript function, this parameter must be a signature between previous parameters and a secret key.
How good does the solution above sound to you?
And how do you use the secret key without the client seeing it? As you yourself mentioned, the user can easily manipulate your JavaScript, and they can also read everything in it, including the secret key!
You can't hide anything in JavaScript; the only thing you can do is obscure things and hope nobody tries to find out what you're trying to hide.
This is why you must validate everything on the server. You can never guarantee that the user won't mess about with things on the client.
Everything, even your JavaScript source code, is visible to the client and can be changed by them; there's no way around this.
There's really no way to do this completely client-side. If the person has a valid auth cookie, they can craft any sort of request they want regardless of the code on the page and send it to your server. You can do things with other, encrypted cookies that must be sent back with the request and must also match the inputs on the page, but you still need to check this server-side. Server-side security is essential in protecting your application from unauthorized access, and you must ensure, server-side, that every action being performed is one that the user is authorized to perform.
You certainly cannot hide anything client side, so there is little point in trying to do so.
If what you are saying is that you are sending something like a user ID and you want to ensure that the returned value has not been illicitly changed, then the simplest way of doing so is probably to generate and send a UUID alongside it, and check on return that the value of the UUID matches the one stored on the server for that user ID before doing any further processing. The space of UUIDs is so large that you can discount any false hits ever occurring.
As to actual server-side processing vulnerabilities: you should simply always build in your security/permissions as close to the database as you can, and definitely not in the client. There's nothing different in the scenario you outline from any normal client-server design.
Peter from Teamworkpm.net here - I'm one of the main developers and was concerned to come across this report about a security problem. I checked into this and I am happy to confirm that it is not possible to delete a task that you shouldn't have access to.
You get a message saying "You do not have permission to delete this task".
I think it is just the confusion between being a Project Administrator and being an overall Administrator that is the problem here: you may not be a member of a project, but as an overall administrator you still have permission to delete any task within your Teamwork site. This is by design.
We take security very seriously and it's all implemented server side because, as Jens F says, we can't rely on client-side security.
If you do come across any issues in TeamworkPM that you would like to discuss, we'd encourage any of you to just hit the feedback link and you'll typically get an answer within a few hours.

Embedding cryptographic keys into software

The proposal to include DRM in HTML5 has been hitting the news lately. It's only predictable that the key storage mechanism will eventually be cracked, as it was for DVD playback software. This is also known as the trusted client problem.
My question is simple: is there a way to encrypt data such that only a specific piece of executable code is able to decrypt it?
Normally, a private (asymmetric) key is included in the software code and used to decrypt the symmetric key (distributed with the content) that the content was encrypted with. This makes it trivial to extract said private key from the software and bypass the scheme.
I was wondering if it was possible to have decryption depend on the integrity of the software itself.
I can't see any obvious solution with existing cryptographic primitives. The most obvious one would be to take a hash of some internal program state at runtime and pass it through a key derivation function, but this would still fail against memory inspection.
Is this possible at all? If it's not, is there a mathematical proof?
I'm not looking for definitive answers here, just pointers to existing work.
DRM is basically impossible without some kind of trusted device or service. It may be possible with quantum physics, but that is mostly because anything seems a possibility when you just point to quantum physics :)
Many motherboards already have a TPM module installed on them. If the market allows for that kind of device, then secure DRM may become a reality. Even then, TPM modules have already been broken, and such a device used for DRM is hardware that sits in the hands of the attacker.
I don't believe you will find a mathematical proof, as you say.
There is a fairly well understood approach, which is commonly called 'Whitebox cryptography'.
The usual key question with white box cryptography is the difference between it and obfuscation. There is a good discussion around this here: https://crypto.stackexchange.com/a/392

Using an SSL certificate [closed]

So, I'm brand new to creating an HTTPS-compatible site.
I'm currently working with a client for whom I developed a custom Facebook tab; the files are currently hosted on my server, which I have not purchased a security certificate for.
My client has a security certificate for one of their websites, which I do not have access to. My client sent me a text file with a combination of letters and numbers, and I have absolutely no idea what I'm supposed to do with it.
Anybody have any clue how I'm supposed to use it?
In short, you (probably) can't use it for that. But we need to check to be sure.
Background
As you know, SSL is used to secure the communication between two systems, one a server and the other a client (well, for the purposes of this communication link anyway). For the code that sits on the initiating, client end of the communication channel to know that it's talking to the right server, it needs not just secure communication but also the identity of the server. (Without that, DNS spoofing or any number of IP-level tricks would be utterly massive problems.) This is where certificates come in.
Servers have a cryptographic identity (a public/private key-pair) that they use as part of the bootstrapping of the SSL connection to prove who they are. The public part of that is told to anyone who asks, and the server proves that it has the private part through the fact that it can do the key-pair based cryptography (basically, that's mathematical magic, a.k.a. number theory). Then, all the client has to do to know whether to trust the connection is to work out whether it trusts the identity stamped into the public key. This can either be by having been previously told directly “trust this certificate” or by the fact that it was digitally signed by someone it trusts (which is how the Certificate Authority system works).
A certificate is basically the public key of a key-pair, at least one digital signature, plus additional information. Examples of the additional information that could be there are the name of the host for which this is a certificate, the period of time for which the certificate is valid, who the administrative contact is, or where to go to find out whether the certificate has been withdrawn early. There are many other options.
What to do with a bare certificate?
With a bare certificate (in PEM format, as you say) all you can do is add it to your collection of trusted certificates or look at the information encoded within the certificate. So we'll start by looking at the information. For that, we use the openssl program (which has a horrible command line interface):
openssl x509 -in thecert.pem -text -noout
That will splurge a whole bunch of information out. The most important part is the “Subject” field; what or who is this certificate talking about? Since this is about HTTPS (which imposes a few extra constraints of its own) we should check whether that contains a hostname of some kind, and what host it is talking about.
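If you just want the fields discussed here, you can ask openssl for them directly, for example:
openssl x509 -in thecert.pem -noout -subject -issuer -dates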
Now you have the information to be able to figure out what's going on.
If the whole certificate matches up (especially the digital signature) with what you've already got deployed on your own HTTPS-enabled server, then your customer has just sent you back something you already have. Ho hum.
If the hostname is for a machine that you control and your customer doesn't (e.g., your development server) then your customer has just tried to get a certificate on your behalf. That's a bit of a no-no, but I advise taking it well — especially if you've not yet set up HTTPS. For the purposes of testing, you can get your own single-host certificate (that signs a public key where you've generated the private key yourself) for next to nothing. It's also a reasonable expense to bill your customer.
If the hostname is for the machine where the customer has told you they want to deploy your code in production, then they've just given you something you don't really need. I suppose it might be relevant for client code that wants to connect to the deployment server, but that's not as useful as all that; certificates expire, stuff moves round, and all sorts of things happen in production that can mean that it is useful to issue a new server certificate. Having to push updates to all the deployed clients just because someone accidentally deleted the server certificate without keeping a backup (a more common thing than you might wish) would Truly Suck. Thus, the deployment host certificate is not something you should need.
If it's none of these, and it's a long-lived certificate (check the Validity field in the information you printed out before), then it might actually be the certificate of a back-end service that you're supposed to talk to. Or the certificate of a private CA that signs all the certificates of the back-end services that you talk to. (Are you doing this? I don't know, and I don't know your app, but it's quite possible.) In this case you would add the certificate to the list of trusted certificates in your code (the exact way depends on how your code handles SSL), and this is the only use I can think of for a certificate at the stage you're at.
Trouble is, I don't think (on the basis of what you write) that it's all that likely. Talk to your customer; security is something where you want to get it right, and use and trust of certificates is key to that.
If it's truly none of the above, talk to the customer and say you're a bit confused. I know I am in this case!

Demystifying Web Authentication

I'm currently researching user authentication protocols for a website I'm developing. I would like to create an authentication cookie so users can stay logged in between pages.
Here is my first bash:
cookie = user_id|expiry_date|HMAC(user_id|expiry_date, k)
Where k is HMAC(user_id|expiry_date, sk) and sk is a 256 bit key only known to the server. HMAC is a SHA-256 hash. Note that '|' is a separator, not just concatenation.
This looks like this in PHP:
$key = hash_hmac('sha256', $user_id . '|' . $expiry_time, SECRET_KEY);
$digest = hash_hmac('sha256', $user_id . '|' . $expiry_time, $key);
$cookie = $user_id . '|' . $expiry_time . '|' . $digest;
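For completeness, the corresponding check when the cookie comes back would look something like this (a sketch; hash_equals gives a constant-time comparison):
$parts = explode('|', $cookie);
if (count($parts) !== 3) {
    $valid = false;
} else {
    list($user_id, $expiry_time, $digest) = $parts;
    $key      = hash_hmac('sha256', $user_id . '|' . $expiry_time, SECRET_KEY);
    $expected = hash_hmac('sha256', $user_id . '|' . $expiry_time, $key);
    // Recompute the digest, compare in constant time, and check expiry.
    $valid = hash_equals($expected, $digest) && time() < (int) $expiry_time;
}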
I can see that it's vulnerable to Replay Attacks as stated in A Secure Cookie Protocol, but should be resistant to Volume Attacks, and Cryptographic Splicing.
THE QUESTION: Am I on the right lines here, or is there a massive vulnerability that I've missed? Is there a way to defend against Replay Attacks that works with dynamically assigned IP addresses and doesn't use sessions?
NOTES
The most recent material I have read:
Dos and Don'ts of Client Authentication on the Web
aka Fu et al.
(https://pdos.csail.mit.edu/papers/webauth:sec10.pdf)
A Secure Cookie Protocol
aka Liu et al.
(http://www.cse.msu.edu/~alexliu/publications/Cookie/cookie.pdf)
which expands on the previous method
Hardened Stateless Session Cookies
(http://www.lightbluetouchpaper.org/2008/05/16/hardened-stateless-session-cookies/)
which also expands on the previous method.
As the subject is extremely complicated, I'm only looking for answers from security experts with real-world experience in creating and breaking authentication schemes.
This is fine in general, I've done something similar in multiple apps. It is no more susceptible to replay attacks than session IDs already were. You can protect the tokens from leakage for replay by using SSL, same as you would for session IDs.
Minor suggestions:
Put a field in your user data that gets updated on password change (maybe a password-generation counter, or even just the random salt), and include that field in the token and signed part. Then when the user changes their password they also invalidate any other stolen tokens. Without this you are limited in how long you can reasonably allow a token to live before expiry (see the sketch after these suggestions).
Put a scheme identifier in the token and signed-part, so that (a) you can have different types of token for different purposes (eg one for auth and one for XSRF protection), and (b) you can update the mechanism with a new version without having to invalidate all the old tokens.
Ensure user_id is never re-used, to prevent a token being used to gain access to a different resource with the same ID.
Pipe-delimiting assumes | can never appear in any of the field values. This probably works for the numeric values you are (presumably) dealing with, but you might at some point need a more involved format, eg URL-encoded name/value pairs.
The double-HMAC doesn't seem to really get you much. Both brute force and cryptanalysis against HMAC-SHA256 are already implausibly hard by current understanding.
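As an illustration of the first two suggestions (and using a single HMAC, per the last point), the signed part could carry a scheme/version tag and a per-user counter that changes on password reset; the field names here are just examples:
// $pwd_generation is a per-user counter bumped whenever the password changes.
$payload = 'auth-v1' . '|' . $user_id . '|' . $pwd_generation . '|' . $expiry_time;
$digest  = hash_hmac('sha256', $payload, SECRET_KEY);
$cookie  = $payload . '|' . $digest;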
Unless your transactions/second will tax your hardware, I would only pass a hash in the cookie (i.e. leave out the user_id and expiry_date -- no sense giving the bad people any more information than you absolutely have to).
You could make some assumptions about what the next dynamic IP address should be, given the previous dynamic IP address (I don't have the details handy, alas). Hashing only the unchanging part of the dynamic IP address would help in verifying the user even when their IP address changes. This may or may not work, given the varieties of IP address allocation schemes.
You could get information about the system and hash that also -- in Linux, you could uname -a (but there are similar capabilities available for other OSes). Enough system information, and you might be able to skip using the (partial) IP address entirely. This technique will require some experimentation. Using only normally-browser-supplied system information would make it easier.
You need to think about how long your cookies should remain fresh. If you can live with people having to authenticate once daily, that would be easier on your system authentication coding than allowing people to authenticate only once a month (and so on).
I would consider this protocol very weak!
Your session cookie is not a high-entropy random value.
The server must redo the HMAC computation on every request to verify a user.
The security of ANY user relies solely on the secrecy of the server key sk.
The server key sk is the most vulnerable part here.
If anyone can guess it or steal it, they can log in as a specific user.
So if sk is generated for each session and user, then why the HMAC?
I assume you will use TLS anyway; if not, consider your protocol broken because of replay attacks and eavesdropping in general!
If sk is generated for each user, but not for each session, it is similar to a 256-bit password.
If sk is identical for all users, someone just has to break those 256 bits and they can log in as any user they want. They only have to guess the ID and the expiration date.
Have a look at digest authentication.
It's a per-request authentication scheme, specified in RFC 2617.
It is secured against replay attacks by nonces sent with each request.
It is secured against eavesdropping (on the password) by hashing.
It is integrated in HTTP.
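For reference, the response value a client sends with RFC 2617 digest authentication (with qop=auth) is derived roughly like this (a sketch; all the variable values are made-up examples):
// Example values only; in practice realm, nonce, etc. come from the server's challenge.
$username = 'alice'; $realm = 'example'; $password = 's3cret';
$method   = 'GET';   $uri   = '/protected/index.html';
$nonce    = 'server-nonce'; $cnonce = 'client-nonce'; $nc = '00000001';

// HA1 covers the credentials, HA2 covers the request; MD5 as specified by RFC 2617.
$ha1 = md5($username . ':' . $realm . ':' . $password);
$ha2 = md5($method . ':' . $uri);

// The nonces make every response different, which is what defeats simple replay.
$response = md5($ha1 . ':' . $nonce . ':' . $nc . ':' . $cnonce . ':auth:' . $ha2);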

Is a software token a valid second factor in multi-factor security?

We are changing our remote log-in security process at my workplace, and we are concerned that the new system does not use multi-factor authentication as the old one did. (We had been using RSA key-fobs, but they are being replaced due to cost.) The new system is an anti-phishing image system which has been misunderstood to be a two-factor authentication system. We are now exploring ways to continue providing multi-factor security without issuing hardware devices to the users.
Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? Would this be considered "something the user has", or would it simply be another form of "something the user knows"?
Edit: phreakre makes a good point about cookies. For the sake of this question, assume that cookies have been ruled out as they are not secure enough.
I would say "no". I don't think you can really get the "something you have" part of multi-factor authentication without issuing something the end user can carry with them. If you "have" something, it implies it can be lost - not many users lose their entire desktop machines. The security of "something you have", after all, comes from the following:
you would notice when you don't have it - a clear indication security has been compromised
only 1 person can have it. So if you do, someone else doesn't
Software tokens do not offer the same guarantees, and I would not in good conscience class it as something the user "has".
While I am not sure it is a "valid" second factor, many websites have been using this type of process for a while: cookies. Hardly secure, but it is the type of item you are describing.
As regards "something the user has" vs. "something the user knows": if it is something resident on the user's PC [like a background app providing information when asked but not requiring the user to do anything], I would file it under "things the user has". If they are typing a password into some field and then typing another password to unlock the information you are storing on their PC, then it is "something the user knows".
With regards to commercial solutions out there already in existence: We use a product for windows called BigFix. While it is primarily a remote configuration and compliance product, we have a module for it that works as part of our multi-factor system for remote/VPN situations.
A software token is a second factor, but it probably isn't as good a choice as an RSA fob. If the user's computer is compromised, the attacker could silently copy the software token without leaving any trace that it's been stolen (unlike an RSA fob, where they'd have to take the fob itself, so the user has a chance to notice it's missing).
I agree with #freespace that the image is not part of the multi-factor authentication for the user. As you state, the image is part of the anti-phishing scheme. I think the image is actually a weak authentication of the system to the user: it gives the user some assurance that the website is valid and not a fake phishing site.
Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system?
For a software-based token system, you may want to investigate the Kerberos protocol: http://en.wikipedia.org/wiki/Kerberos_(protocol). I am not sure whether this would count as multi-factor authentication, though.
What you're describing is something the computer has, not the user.
So you can supposedly (depending on implementation) be assured that it is the computer, but no assurance regarding the user...
Now, since we're talking about remote login, perhaps the situation is personal laptops? In which case, the laptop is the something you have, and of course the password to it as something you know... Then all that remains is secure implementation, and that can work fine.
Security is always about trade-offs. Hardware tokens may be harder to steal, but they offer no protection against network-based MITM attacks. If this is a web-based solution (I assume it is, since you're using one of the image-based systems), you should consider something that offers mutual HTTPS authentication. Then you get protection from the numerous DNS attacks and wi-fi based attacks.
You can find out more here:
http://www.wikidsystems.com/learn-more/technology/mutual_authentication
and
http://en.wikipedia.org/wiki/Mutual_authentication
and here is a tutorial on setting up mutual authentication to prevent phishing:
http://www.howtoforge.net/prevent_phishing_with_mutual_authentication.
The image-based system is pitched as mutual authentication, which I guess it is, but since it's not based on cryptographic principles, it's pretty weak. What's to stop a MITM from presenting the image too? It's less than user-friendly too, IMO.