I'm currently researching user authentication protocols for a website I'm developing. I would like to create an authentication cookie so users can stay logged in between pages.
Here is my first attempt:
cookie = user_id|expiry_date|HMAC(user_id|expiry_date, k)
Where k is HMAC(user_id|expiry_date, sk) and sk is a 256-bit key known only to the server. HMAC here is HMAC-SHA-256. Note that '|' is a separator, not just concatenation.
This looks like this in PHP:
$key = hash_hmac('sha256', $user_id . '|' . $expiry_time, SECRET_KEY);
$digest = hash_hmac('sha256', $user_id . '|' . $expiry_time, $key);
$cookie = $user_id . '|' . $expiry_time . '|' . $digest;
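And the matching verification step would look something like this (a sketch; hash_equals needs PHP 5.6+, otherwise substitute your own constant-time comparison):
$parts = explode('|', $cookie);
if (count($parts) === 3) {
    list($user_id, $expiry_time, $digest) = $parts;
    $key      = hash_hmac('sha256', $user_id . '|' . $expiry_time, SECRET_KEY);
    $expected = hash_hmac('sha256', $user_id . '|' . $expiry_time, $key);
    // Constant-time comparison, and reject anything past its expiry.
    $valid = hash_equals($expected, $digest) && time() < (int) $expiry_time;
}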
I can see that it's vulnerable to replay attacks, as stated in A Secure Cookie Protocol, but it should be resistant to volume attacks and cryptographic splicing.
THE QUESTION: Am I on the right lines here, or is there a massive vulnerability that I've missed? Is there a way to defend against Replay Attacks that works with dynamically assigned IP addresses and doesn't use sessions?
NOTES
The most recent material I have read:
Dos and Don'ts of Client Authentication on the Web (Fu et al.): https://pdos.csail.mit.edu/papers/webauth:sec10.pdf
A Secure Cookie Protocol (Liu et al.), which expands on the previous method: http://www.cse.msu.edu/~alexliu/publications/Cookie/cookie.pdf
Hardened Stateless Session Cookies, which also expands on the previous method: http://www.lightbluetouchpaper.org/2008/05/16/hardened-stateless-session-cookies/
As the subject is extremely complicated, I am only looking for answers from security experts with real-world experience in creating and breaking authentication schemes.
This is fine in general; I've done something similar in multiple apps. It is no more susceptible to replay attacks than session IDs already were. You can protect the tokens from leakage (and hence replay) by using SSL, same as you would for session IDs.
Minor suggestions:
Put a field in your user data that gets updated on password change (maybe a password-generation counter, or even just the random salt), and include that field in the token and the signed part. Then when users change their passwords they also invalidate any stolen tokens. Without this you are limited in how long you can reasonably allow a token to live before expiry. (A sketch combining this with the other suggestions follows the list.)
Put a scheme identifier in the token and signed-part, so that (a) you can have different types of token for different purposes (eg one for auth and one for XSRF protection), and (b) you can update the mechanism with a new version without having to invalidate all the old tokens.
Ensure user_id is never re-used, to prevent a token being used to gain access to a different resource with the same ID.
Pipe-delimiting assumes | can never appear in any of the field values. This probably works for the numeric values you are (presumably) dealing with, but you might at some point need a more involved format, eg URL-encoded name/value pairs (also shown in the sketch below).
The double-HMAC doesn't seem to really get you much. Both brute force and cryptanalysis against HMAC-SHA256 are already implausibly hard by current understanding.
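A sketch of how the first, second and fourth suggestions might combine, still using the question's double-HMAC construction ($pw_generation is a hypothetical changes-on-password-reset column, 'v1' a scheme identifier):
// Sketch only: URL-encoded fields instead of pipes, plus scheme version and password generation.
$payload = http_build_query(array(
    'v'   => 'v1',             // scheme identifier, so the format can evolve later
    'uid' => $user_id,
    'exp' => $expiry_time,
    'pwg' => $pw_generation,   // hypothetical: bumped whenever the user changes their password
));
$key    = hash_hmac('sha256', $payload, SECRET_KEY);
$digest = hash_hmac('sha256', $payload, $key);
$cookie = $payload . '&sig=' . $digest;   // parse_str() recovers the fields on verification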
Unless your transactions/second will tax your hardware, I would only pass a hash in the cookie (i.e. leave out the user_id and expiry_date -- no sense giving the bad people any more information than you absolutely have to).
You could make some assumptions about what the next dynamic IP address should be, given the previous dynamic IP address (I don't have the details handy, alas). Hashing only the unchanging part of the dynamic IP address would help in verifying the user even when their IP address changes. This may or may not work, given the varieties of IP address allocation schemes.
You could get information about the system and hash that also -- in Linux, you could uname -a (but there are similar capabilities available for other OSes). Enough system information, and you might be able to skip using the (partial) IP address entirely. This technique will require some experimentation. Using only normally-browser-supplied system information would make it easier.
You need to think about how long your cookies should remain fresh. If you can live with people having to authenticate once daily, that would be easier on your system authentication coding than allowing people to authenticate only once a month (and so on).
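For what it's worth, the freshness policy is just the expiry you sign plus the cookie lifetime you set; a once-a-day policy in PHP might look like this (a sketch; the Secure and HttpOnly flags are my addition, since the token is a bearer credential):
$expiry_time = time() + 86400;   // force re-authentication after one day
setcookie('auth', $cookie, $expiry_time, '/', '', true, true);   // path, domain, secure, httponly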
I would consider this protocol very weak!
Your session cookie is not drawn from a high-entropy random source.
The server must recompute the HMAC on every request to verify a user.
The security of ANY user relies solely on the secrecy of the server key sk.
The server key sk is the most vulnerable part here.
If anyone can guess it or steal it, they can log in as any user they choose.
So if sk is generated for each session and user, then why the HMAC?
I think you will use TLS anyway; if not, consider your protocol broken because of replay attacks and eavesdropping in general!
If sk is generated for each user, but not for each session, it is similar to a 256-bit password.
If sk is identical for all users, someone just has to crack one 256-bit key and they can log in as any user they want. They only have to guess the id and the expiration date.
Have a look at digest authentication.
It's a per-request authentication scheme, specified by RFC 2617 (the digest computation is sketched below).
It protects against replay attacks by using nonces, sent on each request.
It protects the password from eavesdropping by sending only hashes.
It is integrated in HTTP.
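For reference, the response digest defined by RFC 2617 (with qop="auth") boils down to this; the variable names are only illustrative:
$ha1      = md5($username . ':' . $realm . ':' . $password);
$ha2      = md5($method . ':' . $digest_uri);   // e.g. 'GET' and the request URI
$response = md5($ha1 . ':' . $nonce . ':' . $nc . ':' . $cnonce . ':auth:' . $ha2);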
We use JIRA Cloud for our ticketing system, which does not support email aliases. We now have two domains in our system, with the second domain added as an alias in G Suite (same usernames across both). Management decided to use this new domain, domain2, as the primary FROM address for all users, which has caused issues in several places, such as in JIRA, since we cannot change the main domain in G Suite or in JIRA, and emails can come from either domain1 or domain2.
So I'd like to set up a procmail (or equivalent) filter that checks the helpdesk# email account via POP3, and for emails sent from domain1, adds "inc" at the end so it matches domain2 in the email headers and the email FROM field, and then sends that message to a second email address that JIRA listens to. It would need to appear as coming FROM user#domain1 as well, not the actual account sending it (which I know requires additional work on the G Suite end to allow).
Since JIRA doesn't allow any of this email processing internally, this would allow JIRA to work properly without add-ons that may not do what we need them to, and can get expensive since they're charged monthly, per user.
So I'm trying to see whether procmail is even the easiest (or best) tool to set up for this (considering it's not maintained anymore), and which combination of agents would be simplest. There are so many options, and I'm not sure which to choose or quite how to do it.
Once I know which direction to go, I should be able to figure out how to make it work. Just not sure where to begin here, which agents to use, how best to approach this.
Thank you!
Your question is really not about programming; maybe try https://serverfault.com/ or https://unix.stackexchange.com/ for the infrastructure parts. I'll focus on answering the question in the title, though the details on that are also rather muddy.
:0fH
* domain1
| sed 's/domain1/domain2/g'
I'm guessing from your description that domain1 is actually a substring of domain2. If that's the case, the regexes need to be sharpened a bit (or you'll end up replacing domain1inc with domain1incinc, etc). As a quick first approximation, domain1($|[^i]) will match domain1 when it is followed by nothing, or by a character which isn't i. When substituting, you will want to keep that character, which is usually done in sed by remembering it and substituting it with itself (a sed version is sketched after the Perl recipe below). Or you can switch to Perl, which supports a much richer regex dialect.
:0fH
* domain1($|[^i])
| perl -pe 's/domain1(?!inc)/domain2/g'
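If you'd rather stay with sed, remembering the trailing character with a backreference and handling end-of-line separately keeps it portable (a sketch):
:0fH
* domain1($|[^i])
| sed -e 's/domain1\([^i]\)/domain2\1/g' -e 's/domain1$/domain2/'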
Though of course, perhaps your real use case looks more like s/domain1.com/domain2.com/g, in which case the additional context of the .com suffix is quite sufficient to avoid substituting strings which should remain unchanged, and you can safely stay with the simpler, and thus faster and probably more secure, sed.
Again, how exactly to run Procmail on your incoming email in the first place is a separate topic which isn't really programming-related. If you have Postfix and Procmail on the mail server, simply creating a .procmailrc in the helpdesk account's home directory should suffice.
I am trying to figure out how I will manage sessions using JSON Web Tokens in a microservice architecture.
Looking at the design in this article what I currently have in mind is that the client will send a request that first goes through a firewall. This request will contain an opaque/reference token which the firewall sends to an authorization server. The authorization server responds with a value token containing all the session information for the user. The firewall then passes the request along with the value token to the API, and the value token will then get propagated to all the different microservices required to fulfill the request.
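For concreteness, I imagine each downstream service validating the value token along these lines (a sketch in PHP only because no stack is specified; it assumes an HS256-signed JWT and a shared secret, and a real implementation would also check the header's alg field):
// Return the session claims from a value token, or null if the token is invalid.
function b64url_decode($s) {
    return base64_decode(strtr($s, '-_', '+/'));
}
function claims_from_value_token($jwt, $secret) {
    $parts = explode('.', $jwt);
    if (count($parts) !== 3) {
        return null;
    }
    list($header, $payload, $signature) = $parts;
    $expected = hash_hmac('sha256', $header . '.' . $payload, $secret, true);
    if (!hash_equals($expected, b64url_decode($signature))) {
        return null;                      // signature does not verify
    }
    $claims = json_decode(b64url_decode($payload), true);
    if (isset($claims['exp']) && time() >= $claims['exp']) {
        return null;                      // token has expired
    }
    return $claims;                       // the session information carried in the token
}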
I have 2 questions:
How should updates to the session information in the value token be handled? To elaborate, when the session info in a token gets updated, it needs to be updated in the authorization server. Should each service that changes the token talk to the authorization server?
Should all the microservices use this single token to store their session info? Or would it be better for each service to have a personalized token? If it's the latter, please explain how to adjust the design.
A very(!) significant "fly in the ointment" of this kind of design ... which requires careful advance thought on your part ... is: “precisely what is meant by ‘session’ information.” In this architecture, “everyone is racing with everyone else.” If the session information is updated, you do not and basically cannot(!) know which of the agents knows about that change and which does not. To further complicate things, new requests are arriving asynchronously and will overlap other requests in unpredictable ways.
Therefore, the Authorization Server must be exactly that ... and, no more. It validates (authenticates ...) the opaque token, and supplies a trustworthy description of what the request is authorized to do. But, the information that it harbors basically cannot change. And specifically, it cannot hold “session state” data in the web server sense of that term.
Each microservice provider must maintain its own "tote board" (my term: its own particular subset of what in a web server would be "the session pool"), and it is desirable, but not always feasible, that its board be independent of the others. Almost certainly, it must use a central database (with transactions) to coordinate with other service providers similarly situated. And still, if the truth is that the content of any of these "totes" is causally related to any other, you now have an out-of-sync issue between them.
Although microservice architecture has a certain academic appeal, IMHO designs must be carefully studied to be certain that they are, in fact, compatible with this approach.
I have a detailed search form on the start page, where the user has many search options available.
What would be the best practice for keeping the search parameters for the user's session?
What are the pros and cons of putting them in:
URL
Session
Cookie
What should be used as best practice?
I'm going to plump for Cookie, on the basis that URL persistence will make all your URLs ugly and poor for link sharing; not only that, but some devices might balk at very long URLs (you say there are a lot of options). Session persistence requires cookies anyway (or query-string persistence to maintain the state, which brings back the link-sharing and ugly-URL problems).
With a cookie you can store a lot of data (well, within reason) and it doesn't affect your URLs.
However - if search parameter persistence is crucial to your application, then you should have a fallback that detects whether cookies are available, and resorts to URL persistence if not.
Best practice really depends on the scenario (including business case, programming language, etc.). However, here are some high level pros/cons.
URL Pros: easy to read/write
URL Cons: user can easily manipulate them causing unintended results, nasty URLs
Session pros: should be pretty easy to read/write programmatically (depending on the language), don't have to worry about parameters in a URL
Session cons: takes up more server memory (may be negligible depending on the data)
Cookie pros: doesn't take up server memory
Cookie cons: must be written to and read from the client, user could delete cookies at any time (mid-session), cookies are shared within the browser (one cookie for any number of sessions)
I'd say a session is the best option. If you have several pages, you most likely will need to keep some global state -- the alternative being the user resubmitting all the previous data when he moves to the next page.
That said, you cannot just use a session that relies on a cookie to store the session identifier, at least not without some extra data that is in fact passed around between the several pages as a hidden field or a URL parameter.
The problem is that with just a cookie you won't have separate web conversations; you have one global cookie that's shared between all the tabs/windows in the browser. If the user opens a new tab and starts a new search, the stored search state will be replaced and the search in the other tab will be lost.
So either you:
Pass the session id in the URL instead of using a cookie (beware of session fixation, though).
Include an extra GET parameter or hidden field that identifies the conversation (sketched below).
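A sketch of the second option in PHP (the cid parameter name is arbitrary; random_bytes needs PHP 7+):
session_start();
// Each tab/window carries its own conversation id, so searches don't clobber each other.
$cid = isset($_GET['cid']) ? $_GET['cid'] : bin2hex(random_bytes(8));
if (!empty($_POST)) {
    $_SESSION['search'][$cid] = $_POST;   // save this conversation's search parameters
}
$params = isset($_SESSION['search'][$cid]) ? $_SESSION['search'][$cid] : array();
// ...and every link or hidden form field on the results pages must carry cid=$cid.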
(Note: these two questions are similar, but more specific to ASP.Net)
Consider a typical web app with a rich client (it's Flex in my case), where you have a form, an underlying client logic that maps the form's input to a data model, some way of remoting these objects to a server logic, which usually puts it in a database.
Where should I, generally speaking, put the validation logic, i.e. ensuring correct format of email addresses, numbers, etc.?
1. As early as possible. Rich client frameworks like Flex provide built-in validator logic that lets you validate right upon form submission, even before it reaches your data model. This is nice and responsive, but if you develop something extensible and you want the validation to protect from programming mistakes of later contributors, this doesn't catch them.
2. At the data model on the client side. Since this is the 'official' representation of your data and you have data types and getters / setters already there, this validation captures user errors and programming errors from people extending your system.
3. Upon receiving the data on the server. This adds protection from broken or malicious clients that may join the system later. Also, in a multi-client scenario, this gives you one authoritative source of validation.
4. Just before you store the data in the backend. This includes protection from all mistakes made anywhere in the chain (except the storing logic itself), but may require bubbling the error all the way back up.
I'm sort of leaning towards using both 2 and 4, as I'm building an application that has various points of potential extension by third parties. Using 2 in addition to 4 might seem superfluous, but I think it makes the client app behave more user friendly because it doesn't require a roundtrip to the server to see if the data is OK. What's your approach?
Without getting too specific, I think there should be validation for the following reasons:
Let the user know that the input is incorrect in some way.
Protect the system from attacks.
Letting the user know that some data is incorrect early would be friendly -- for example, an e-mail entry field may have a red background until the @ sign and a domain name are entered. Only when the e-mail address follows the format in RFC 5321/5322 should the field turn green, perhaps with a nice little check mark to let the user know that the address looks good.
Also, letting the user know that the information provided is probably incorrect in some way would be helpful as well. For example, ask the user whether or not he or she really means to have the same recipient twice for the same e-mail message.
Then, next should be checks on the server side -- never assume that the data coming through is well-formed. Perform checks to be sure that the data is sound, and beware of any attacks.
Assuming that the client will thwart SQL injection, and blindly accepting data from connections to the server, can be a serious vulnerability. As mentioned, a malicious client whose sole purpose is to attack the system could easily compromise it if the server were too trusting.
And finally, perform whatever checks to see if the data is correct, and the logic can deal with the data correctly. If there are any problems, notify the user of any problems.
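A minimal sketch of those server-side checks (PHP and PDO assumed purely for illustration; the field and table names are made up):
$email = isset($_POST['email']) ? trim($_POST['email']) : '';
// Reject anything that is not a syntactically valid address, regardless of what the client did.
if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    http_response_code(422);
    exit('Invalid e-mail address');
}
// Parameterised query, so the database never interprets user input as SQL.
$stmt = $pdo->prepare('INSERT INTO recipients (email) VALUES (?)');
$stmt->execute(array($email));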
I guess that being friendly and defensive is what it comes down to, from my perspective.
There's only one rule: always use at least some kind of server-side validation (number 3/4 in your list).
Client validation (Number 2/1) makes the user experience snappier and reduces load (because you don't post to the server stuff that doesn't pass client validation).
An important thing to point out is that if you go with client validation only, you're at great risk (just imagine if your client validation relies on JavaScript and users disable JavaScript in their browser).
There should definitely be validation on the server end. I am thinking that the validation should be done as early as possible on the server end, so there's less chance of malicious (or incorrect) data entering the system.
Input validation on the client end is helpful, since it makes the interface snappier, but there's no guarantee that data coming in to the server has been through the client-side validation, so there MUST be validation on the server end.
Because of security and convenience: server side, and as early as possible.
But what is also important is to have some global model/business-logic validation, so that when you have, for example, multiple forms with common data (such as the name of a product), the validation rule remains consistent unless the requirements say otherwise.
We are changing our remote log-in security process at my workplace, and we are concerned that the new system does not use multi-factor authentication as the old one did. (We had been using RSA key-fobs, but they are being replaced due to cost.) The new system is an anti-phishing image system which has been misunderstood to be a two-factor authentication system. We are now exploring ways to continue providing multi-factor security without issuing hardware devices to the users.
Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? Would this be considered "something the user has", or would it simply be another form of "something the user knows"?
Edit: phreakre makes a good point about cookies. For the sake of this question, assume that cookies have been ruled out as they are not secure enough.
I would say "no". I don't think you can really get the "something you have" part of multi-factor authentication without issuing something the end user can carry with them. If you "have" something, it implies it can be lost - not many users lose their entire desktop machines. The security of "something you have", after all, comes from the following:
you would notice when you don't have it - a clear indication security has been compromised
only 1 person can have it. So if you do, someone else doesn't
Software tokens do not offer the same guarantees, and I would not in good conscience class it as something the user "has".
While I am not sure it is a "valid" second factor, many websites have been using this type of process for a while: cookies. Hardly secure, but it is the type of item you are describing.
As regards "something the user has" vs "something the user knows": if it is something resident on the user's PC [like a background app providing information when asked but not requiring the user to do anything], I would file it under "things the user has". If they are typing a password into some field and then typing another password to unlock the information you are storing on their PC, then it is "something the user knows".
With regard to commercial solutions already in existence: we use a product for Windows called BigFix. While it is primarily a remote configuration and compliance product, we have a module for it that works as part of our multi-factor system for remote/VPN situations.
A software token is a second factor, but it probably isn't as good a choice as an RSA fob. If the user's computer is compromised, the attacker could silently copy the software token without leaving any trace that it's been stolen (unlike an RSA fob, where they'd have to take the fob itself, so the user has a chance to notice it's missing).
I agree with @freespace that the image is not part of the multi-factor authentication for the user. As you state, the image is part of the anti-phishing scheme. I think that the image is actually a weak authentication of the system to the user: it provides assurance to the user that the website is valid and not a fake phishing site.
Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system?
The software-based token system sounds like you may want to investigate the Kerberos protocol, http://en.wikipedia.org/wiki/Kerberos_(protocol). I am not sure whether this would count as multi-factor authentication, though.
What you're describing is something the computer has, not the user.
So you can supposedly (depending on implementation) be assured that it is the computer, but you get no assurance regarding the user...
Now, since we're talking about remote login, perhaps the situation is personal laptops? In which case, the laptop is the something you have, and of course the password to it is the something you know... Then all that remains is secure implementation, and that can work fine.
Security is always about trade-offs. Hardware tokens may be harder to steal, but they offer no protection against network-based MITM attacks. If this is a web-based solution (I assume it is, since you're using one of the image-based systems), you should consider something that offer mutual https authentication. Then you get protection from the numerous DNS attacks and wi-fi based attacks.
You can find out more here:
http://www.wikidsystems.com/learn-more/technology/mutual_authentication
and
http://en.wikipedia.org/wiki/Mutual_authentication
and here is a tutorial on setting up mutual authentication to prevent phishing:
http://www.howtoforge.net/prevent_phishing_with_mutual_authentication.
The image-based system is pitched as mutual authentication, which I guess it is, but since it's not based on cryptographic principles, it's pretty weak. What's to stop a MITM from presenting the image too? It's less than user-friendly too, IMO.