Implementing NTLM silent login with Java - ntlmv2

Hoping someone can remedy my naivety when it comes to calling a simple URL to an application (which returns XML) using NTLMv2.
I have read pretty much every question and page there is but I am left with one overriding curiosity. I am using the HTTPClient at present (although this can be changed) along with the latest JDK (at the time of writing).
Here is an example page which appears to call the JCIFS library:
http://hc.apache.org/httpcomponents-client-ga/ntlm.html
All looks good, albeit confusing, but it highlights the question that many of the examples I have seen raise: the issue of supplying NTCredentials.
To me the whole point of NTLM is that I do not have to supply credentials. The target application is set up to use NTLM, so surely the credentials of the currently logged-in user should be used? Why should I be supplying any credentials myself?
Apologies if I am missing something obvious here. I just need the most basic form of NTLM SSO possible using Java. I don't care what version of what, I am able to use the latest of anything.
Holding out hope! Thanks for reading.

Unfortunately, there's no way to do single sign-on in a pure Java environment.
NTLM isn't a solution to single sign-on directly. NTLM is a challenge/response authentication mechanism and it requires the NTLM hash of the user's password. Windows machines are able to provide single sign-on using NTLM because the NTLM hash is persisted. They are then able to compute the response to a challenge based on the persisted hash.
Without access to that hash (and, to my knowledge, you can't simply request it), you need to compute it yourself. And that requires having the user's password.
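To make that concrete, this is what supplying the credentials explicitly looks like with Apache HttpClient 4.x. A minimal sketch; the hostname, port, and credentials are placeholders:

import org.apache.http.auth.AuthScope;
import org.apache.http.auth.NTCredentials;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NtlmClientSketch {
    public static void main(String[] args) throws Exception {
        // NTLM needs the password (or its hash), so it has to be supplied here;
        // pure Java cannot pull it from the logged-in Windows session.
        BasicCredentialsProvider credsProvider = new BasicCredentialsProvider();
        credsProvider.setCredentials(
                new AuthScope("intranet.example.com", 80),
                new NTCredentials("user", "password", "WORKSTATION", "DOMAIN"));
        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultCredentialsProvider(credsProvider)
                .build()) {
            client.execute(new HttpGet("http://intranet.example.com/app/data.xml"));
        }
    }
}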
Similarly, you can do single sign-on with a Kerberos ticket using SPNEGO authentication (if the remote system is set up to support it, of course) but Java unfortunately reimplemented Kerberos instead of using the system Kerberos libraries. So even if you were already logged in to the domain, you'd need to go get another Kerberos ticket for Java. And that means typing your password in again.
The only realistic way to avoid typing in a password to authenticate is to call the native methods. On Windows, this is SSPI, which will provide you the ability to respond to an NTLM or SPNEGO challenge. On non-Windows platforms, this is handled by the very similar GSSAPI and provides the ability to respond to SPNEGO (Kerberos).
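For the Kerberos/SPNEGO route, the JGSS calls themselves are simple; the catch, as described above, is that the JVM first needs its own Kerberos ticket (e.g. from a JAAS Krb5LoginModule login). A sketch, with a placeholder service name:

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class SpnegoTokenSketch {
    public static void main(String[] args) throws Exception {
        GSSManager manager = GSSManager.getInstance();
        Oid spnego = new Oid("1.3.6.1.5.5.2"); // SPNEGO mechanism OID
        GSSName service = manager.createName(
                "HTTP@intranet.example.com", GSSName.NT_HOSTBASED_SERVICE);
        GSSContext context = manager.createContext(
                service, spnego, null, GSSContext.DEFAULT_LIFETIME);
        // The resulting token is what goes into "Authorization: Negotiate <base64>".
        byte[] token = context.initSecContext(new byte[0], 0, 0);
        context.dispose();
    }
}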

Related

Chrome and Safari not honoring HPKP

I added an HPKP header to my site, but it is not honored by Chrome or Safari. I tested it manually by setting a proxy and by going to chrome://net-internals/#hsts and looking for my domain, which was not found. The HPKP header seems correct, and I also tested it using an HPKP toolset, so I know it is valid.
I am thinking I might be doing something weird with my flow. I have a web app, which is served over myapp.example.com. On login, the app redirects the user to authserver.example.com/begin to initiate the OpenID Connect Authorization Code flow. The HPKP header is returned only from authserver.example.com/begin, and I think this might be the issue. I have includeSubDomains in the HPKP header, so I think that part is not the issue.
This is the HPKP header (line breaks added for readability):
public-key-pins:max-age=864000;includeSubDomains; \
pin-sha256="bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="; \
pin-sha256="cJjqBxF88mhfexjIArmQxvZFqWQa45p40n05C6X/rNI="; \
report-uri="https://reporturl.example"
Thanks!
I added HPKP header to my site, but it is not honored by Chrome or Safari... I tested it manually by setting a proxy...
RFC 7469, Public Key Pinning Extension for HTTP, kind of sneaks that past you. The IETF published it with overrides, so an attacker can break a known good pinset. It's mentioned once in the standard by the name "override", but the details are not provided. The IETF also failed to publish a discussion in a security considerations section.
More to the point, the proxy you set engaged the override. It does not matter if it's the wrong proxy, a proxy certificate installed by a mobile device OEM, or a proxy controlled by an attacker who tricked a user into installing it. The web security model and the standard allow it. They embrace interception and consider it a valid use case.
Something else they did was make the reporting of the broken pinset a MUST NOT or SHOULD NOT. It means the user agent is complicit in the cover-up, too. That's not discussed in a security considerations section, either. They really don't want folks to know their supposedly secure connection is being intercepted.
Your best bet to avoid it is to move outside the web security model. Don't use browser-based apps when security is a concern. Use a hybrid app and perform the pinning yourself. Your hybrid app can host a WebView control or view but still get access to the channel to verify parameters. Also see OWASP's Certificate and Public Key Pinning.
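As a sketch of what "perform the pinning yourself" can look like in a Java hybrid app, here is OkHttp's CertificatePinner, reusing one of the pins from the question; treat the wiring as illustrative, not as the one true approach:

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

public class PinnedClientSketch {
    public static void main(String[] args) {
        // Pin the base64 SHA-256 of the server's SubjectPublicKeyInfo.
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("authserver.example.com",
                        "sha256/bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE=")
                .build();
        OkHttpClient client = new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
        // Connections through 'client' now fail unless the server's key matches
        // the pin, regardless of which CAs the platform happens to trust.
    }
}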
Also see Comments on draft-ietf-websec-key-pinning on the IETF mailing list. One of the suggestions in the comments was to change the title to "Public Key Pinning Extension for HTTP with Overrides" to highlight the feature. Not surprisingly, that's not something they want. They are trying to do it surreptitiously, without user knowledge.
Here's the relevant text from RFC 7469:
2.7. Interactions with Preloaded Pin Lists
UAs MAY choose to implement additional sources of pinning
information, such as through built-in lists of pinning information.
Such UAs should allow users to override such additional sources,
including disabling them from consideration.
The effective policy for a Known Pinned Host that has both built-in
Pins and Pins from previously observed PKP header response fields is
implementation-defined.
Locally installed CAs (like those used by the proxies you say are running) override any HPKP checks.
This is necessary so as not to completely break the internet, given the prevalence of them: anti-virus software and proxies used in large corporations basically MITM HTTPS traffic through a locally issued certificate, as otherwise they could not read the traffic.
Some argue that locally installing a CA requires access to your machine, and at that point it's game over anyway, but to me this still massively reduces the protection of HPKP and that, coupled with the high risks of using HPKP, means I am really not a fan of it.

How to keep backend session information in Polymer SPA

I'd like to log in to a RESTful back-end server written in Laravel 5, with the single-page front-end application leveraging Polymer's custom elements.
In this system, the persistence (CRUD) layer lives on the server, so authentication should be done on the server in response to the client's API request. When a request is valid, the server returns a User object in JSON format, including the user's role for access control in the client.
Here, my question is: how can I keep the session even when a user refreshes the front-end page? Thanks.
This is an issue beyond Polymer, or even just single page apps. The question is how you keep session information in a browser. With SPAs it is a bit easier, since you can keep authentication tokens in memory, but traditional Web apps have had this issue since the beginning.
You have two things you need to do:
Tokens: You need a user token that indicates that this user is authenticated. You want it to be something that cannot be guessed, or else someone can spoof it. So the token had better not be "jimsmith" but something more reliable. You have two choices. Either you have a randomly generated token which the server stores, so that when it is presented on future requests the server can validate it; this is how most session managers in app servers work, like Node.js sessions or Jetty sessions. Or you do something cryptographic, so that the server only needs to validate the token mathematically rather than look it up in a store (see the sketch just after this list). I did that for Node in http://github.com/deitch/cansecurity but there are various options for it.
Storage: You need some way to store the tokens client-side that does not depend on JS memory, since you expect to reload the page.
There are several ways to do client-side storage. The most common by far is cookies. Since the browser stores them without your trying too hard, and presents them whenever you access the domain the cookie is registered for, they are pretty easy to use. Many client-side and server-side auth libraries are built around them.
An alternative is HTML5 local storage. Depending on your target browsers and support, you can consider using it.
There are also ways you can play with URL parameters, but then you run the risk of losing the token when someone switches pages. It can work, but I tend to avoid it.
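Here is the token sketch promised above: a minimal HMAC-signed token that the server can validate mathematically, with no token store. The format and names are illustrative only:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TokenSketch {
    // Illustrative only: in real code the key comes from configuration, not source.
    private static final byte[] KEY = "server-secret-key".getBytes(StandardCharsets.UTF_8);

    private static String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }

    public static String issue(String userId, long expiresAtMillis) throws Exception {
        String payload = userId + ":" + expiresAtMillis;
        return payload + ":" + sign(payload);
    }

    public static boolean validate(String token) throws Exception {
        int i = token.lastIndexOf(':');
        if (i <= 0) return false;
        String payload = token.substring(0, i);
        // Real code should also compare in constant time and check the expiry.
        return token.equals(payload + ":" + sign(payload));
    }
}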
I have not seen any components that handle cookies directly, but it shouldn't be too hard to build one.
Here is the gist for the cookie management code I used for a recent app. Feel free to wrap it to build a Web component for cookie management... as long as you share alike!
https://gist.github.com/deitch/dea1a3a752d54dc0d00a
UPDATE:
component.kitchen has a storage component here http://component.kitchen/components/TylerGarlick/core-resource-storage
The simplest way, if you use PHP, is to keep the user in a PHP session (like a normal non-SPA application).
PHP will store the user info on the server and automatically generate a cookie that the browser will send with every request. With a single server and no load balancing, the session data is local and very fast.
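For comparison, the servlet-container equivalent of that pattern is a one-liner: the container stores the user server-side and manages the JSESSIONID cookie for you. A sketch, with hypothetical names:

import javax.servlet.http.HttpServletRequest;

public class LoginHelper {
    // Call after the credentials have been checked; the container does the rest.
    public static void onLoginSuccess(HttpServletRequest request, String username) {
        request.getSession(true).setAttribute("user", username);
    }
}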

XMPP Google-like solution for sync server notifications

I'm looking for an easy way to implement an XMPP server that speaks the following protocol:
https://developers.google.com/cloud-print/docs/rawxmpp
The only difference is that I must use X-GOOGLE-TOKEN authentication mechanism: https://stackoverflow.com/a/6211324/227244
The procedure is simple: I get the token from the data sent by a client, request user data based on this token, and set the JID accordingly, appending some random characters to the resulting JID.
After that, other clients with possibly different tokens but the same user account connect to the XMPP resource, and the broadcast of push notifications is enabled for subscribed clients.
How much of the server code can be borrowed from currently available implementations? I'd like to avoid writing all of the server code myself, though the logic is pretty simple. I know there are the ejabberd and Prosody XMPP servers, which implement lots of XEPs. Which one is easier to add the custom handling mechanism to? Can you suggest other stable alternatives for the core XMPP server?
The way Google has designed X-OAUTH2 is dead simple and straightforward to implement. In fact, there is no difference between how the PLAIN and X-OAUTH2 mechanisms work. You can simply take a standard PLAIN implementation and make it work for the Google X-OAUTH2 authentication mechanism with no extra effort.
I am the author of the Jaxl PHP library and I recently announced support for X-OAUTH2 inside the library. Here you can see the exact lines of code I had to write to support this. The only relevant piece is:
switch ($mechanism) {
    case 'PLAIN':
    case 'X-OAUTH2':
        $stanza->t(base64_encode("\x00".$user."\x00".$pass));
        break;
}
For the X-OAUTH2 implementation, $pass is nothing but your OAuth token. In short, the password field from the PLAIN auth mechanism becomes the OAuth token for X-OAUTH2. Everything else remains the same.
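The same point in Java, for the curious: PLAIN and X-OAUTH2 share the initial-response layout of NUL + authcid + NUL + secret, base64-encoded, with the OAuth token standing in for the password. The JID and token below are placeholders:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SaslInitialResponseSketch {
    public static void main(String[] args) {
        String jid = "user@gmail.com";          // placeholder account
        String oauthToken = "token-goes-here";  // placeholder OAuth token
        // NUL + authcid + NUL + token, then base64 -- identical to PLAIN.
        byte[] raw = ("\0" + jid + "\0" + oauthToken).getBytes(StandardCharsets.UTF_8);
        System.out.println(Base64.getEncoder().encodeToString(raw));
    }
}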

HTTP DELETE request with extra authentication

I was searching for a solution to the following problem, so far without success: I'm planning a RESTful web service where certain actions (e.g. DELETE) should require special authentication.
The idea is that users have a normal username/password login (session-based or Basic Auth, doesn't really matter here) with which they can access the service. Some actions require additional authentication in the form of a PIN code or maybe even a one-time password. Including the extra piece of authentication in the login process is not possible (and would miss the point of the whole exercise).
I thought about special headers (something like X-OTP-Authentication), but that would make it impossible to access the service via a standard HTML page (there is no way to include a custom header in a link).
Another option was HTTP query parameters, but that seems to be discouraged, especially for DELETE.
Any ideas how to tackle this problem?
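For what it's worth, the custom-header option is only out of reach for plain HTML links; any scripted or programmatic client can set it. A sketch with java.net.http (Java 11+), where the header name, PIN, and URL are all hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OtpDeleteSketch {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/items/42"))
                .header("X-OTP-Authentication", "123456") // hypothetical header/PIN
                .DELETE()
                .build();
        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println(response.statusCode());
    }
}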
From REST Web Service Security with jQuery Front-End
If you haven't already, I'd recommend some reading on OAuth 1.0 and 2.0. They are both used by some of the bigger APIs, such as Facebook, Netflix, and Twitter. 2.0 is still in draft, but that hasn't stopped anyone from implementing it, as it is simpler for a client to use. It sounds like you want something more complicated and more secure, so you might want to focus on 1.0.
I always found Netflix's Authentication Overview to be a good explanation for clients.

How to implement a single sign-on authentication server?

I want to implement a discrete remote authentication server that handles login for many sites. Somewhat similar to OpenID.
Basically, I have site-1 and site-2 and they're both reliant on the same user database, which is on a separate auth-site. So, auth-site handles user authentication for them, and during this process, makes information on the authenticating user available to the requesting system.
Each site can be on a completely separate domain name, on completely separate machines.
This is all via HTTP(S), there can be no direct database access.
There's one last quirk: once a user has logged in to site-1, when accessing any other site reliant on auth-site, that site must treat the user as already authenticated.
This whole business must be entirely fuss-free to the end-user. It should work like a simple everyday login form.
As a concrete example, say we're talking about stackoverflow.com and serverfault.com, and they both authenticate via authentic-overflow-server-stack.com. Again, once logged in to either site, I can go to the other and do my business without logging in again.
What I'd like to know is the general interaction mechanism between the sites behind this scenario.
In my particular setup, I'm using Rails, but I'm not looking for code[1], just general best practice and guidance, so feel free to answer in pseudo-code or any generally readable language. OTOH, bear in mind that I'll have decent MVC, REST, and meta-programming in my toolkit.
[1]: unless you happen to know an existing tiny neat free MIT/BSD-licensed app/plugin/generator that handles this.
It sounds like (especially with the emphasis on fuss-free) you want something like what the Wikimedia Foundation is doing. Basically, you log on to en.wikipedia.org, then that server communicates with other servers (e.g. en.wikinews.org) and gets authentication tokens. Finally, those tokens are embedded into images, e.g. http://en.wikinews.org/wiki/Special:AutoLogin?token=xxxxxxxxxxxxxxx , and when your browser visits that URL (as an img src) it gets an authentication cookie for Wikinews. Of course, the source code is available for your review at http://www.mediawiki.org/wiki/Extension:CentralAuth .
OpenID is also a good choice, but it does require that the user "consciously" visit two domains. An example of one entity with two domains doing this is Canonical. E.g., if you go to https://help.ubuntu.com/community/UserPreferences they will redirect you to Launchpad (https://login.launchpad.net/+openid) for authentication.
Note that Wikipedia is doing this over HTTP, but you can do it all over HTTPS to ensure the img-src tokens aren't intercepted.
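A rough sketch of the receiving end of that img-src auto-login as a Java servlet, with the token check stubbed out and all names hypothetical:

import java.io.IOException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AutoLoginServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String token = req.getParameter("token");
        String user = validateWithAuthServer(token);
        if (user != null) {
            Cookie session = new Cookie("session", newSessionIdFor(user));
            session.setHttpOnly(true);
            session.setSecure(true);
            resp.addCookie(session);
        }
        // The <img> tag only needs some response; the content is irrelevant.
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }

    // Stub: in practice a back-channel call to the auth server, or an HMAC check.
    private String validateWithAuthServer(String token) {
        return null; // placeholder
    }

    // Stub: mint an opaque session id bound to the user.
    private String newSessionIdFor(String user) {
        return java.util.UUID.randomUUID().toString();
    }
}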
Looks like CAS is good enough for me, and it has Ruby implementations, along with ones in dozens of other lesser languages, e.g. one that rhymes with femoral bone rage.
http://code.google.com/p/rubycas-server/
http://code.google.com/p/rubycas-client/
It sounds like you want to actually use the OpenID protocol itself. There's no reason you can't restrict the authentication provider to only your own server, and add some shortcuts to make the authentication process transparent. Also, the OpenID protocol supports what you describe: logging in to one service implies logging in to all services.