How do people handle authentication for RESTful APIs (technology agnostic) - JSON

I'm looking at building some mobile applications. These apps will 'talk' to my server via JSON over REST (e.g. PUT, POST, etc.).
If I want to make sure a client phone app is trying to do something that requires some 'permission', how do people handle this?
For example:
Our website sells things -> TVs, cars, dresses, etc. The API will
allow people to browse the shop and purchase items. To buy, you need
to be 'logged in'. I need to make sure that the person who is using
their mobile phone really is who they claim to be.
How can this be done?
I've had a look at how Twitter does it with OAuth, and it looks like they send a number of values in a request header. If so (and I rather like this approach), is it possible to use a third party as the site that stores the username/password (e.g. Twitter or Facebook as the OAuth provider), and all I do is retrieve the custom header data, make sure it exists in my DB, and otherwise send the user off to authenticate with their OAuth provider?
Or is there another way?
PS. I really don't like the idea of having an API key - I feel it can be handed to another person far too easily (which is a risk we can't take).

Our website sells things -> TVs, cars, dresses, etc. The API will
allow people to browse the shop and purchase items. To buy, you need
to be 'logged in'. I need to make sure that the person who is using
their mobile phone really is who they claim to be.
If this really is a requirement then you need to store user identities in your system. The most popular form of identity tracking is via username and password.
I've had a look at how Twitter does it with OAuth, and it looks like
they send a number of values in a request header. If so (and I rather
like this approach), is it possible to use a third party as the site
that stores the username/password (e.g. Twitter or Facebook as the
OAuth provider), and all I do is retrieve the custom header data, make
sure it exists in my DB, and otherwise send the user off to
authenticate with their OAuth provider?
You are confusing two different technologies here, OpenID and OAuth (don't feel bad; many people get tripped up on this). OpenID allows you to defer identity tracking and authentication to a provider and then accept those identities in your application, as the acceptor or relying party. OAuth, on the other hand, allows an application (the consumer) to access user data that belongs to another application or system without compromising that other application's core security. You would stand up OAuth if you wanted third-party developers to access your API on behalf of your users (which is not something you have stated you want to do).
For your stated requirements you can definitely take a look at integrating OpenID into your application. There are many libraries available for this, but since you asked for an agnostic answer I will not list any of them.
Or is there another way?
Of course. You can store user IDs in your system and use Basic or Digest authentication to secure your API. Basic authentication requires only one (easily computed) additional header on your requests:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
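For reference, the value after "Basic" is just the Base64 encoding of "username:password" (the header above encodes the classic "Aladdin:open sesame" pair from the HTTP spec). A minimal Python sketch of building the header; the URL is a placeholder:

import base64
import urllib.request

def basic_auth_header(username, password):
    # The header value is "Basic " + base64("username:password").
    credentials = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

# The spec's example pair produces exactly the header value shown above.
assert basic_auth_header("Aladdin", "open sesame") == "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="

# Attach it to a request (over SSL/TLS, as noted below).
req = urllib.request.Request("https://api.example.com/orders")
req.add_header("Authorization", basic_auth_header("Aladdin", "open sesame"))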
If you use either Basic or Digest authentication, make sure that your API endpoints are protected with SSL, as otherwise user credentials can easily be sniffed over the air. You could also forgo user identification and instead effectively authenticate the user at checkout via credit card information, but that's a judgement call.

Since RESTful services use HTTP calls, you can rely on HTTP Basic Authentication for security purposes. It's simple, direct and already supported by the protocol, and if you want additional security in transport you can use SSL. Well-established products like IBM WebSphere Process Server use this approach.
The other way is to build your own security framework according to your application's needs. For example, if you want your service to be consumed only by certain devices, you may need to send an encoded token as a header over the wire to verify that the request comes from an authorized source. Amazon has an interesting way of doing this; you can check it here.

Related

Which hook to limit the number of messages a user can send per day?

We want to use ejabberd in the context of a web application that has fairly unique business rules. We'd therefore need every chat message (not protocol message, but a message a user sends to another user) to go through our web application first, and then have the web application deliver the message to ejabberd on behalf of the user (if our business rules allow the message to be sent).
The web application is also the one providing the contact lists (called rosters in ejabberd, if I understand correctly). We need to be, and remain, the single source of truth to ease maintenance.
To us, ejabberd's added value would be delivering chat messages to clients in near real time and enabling cool things such as presence indicators. Web clients will maintain a direct connection to ejabberd through WebSocket, but this connection will have to be read-only as far as chat messages are concerned, and read-write as far as presence messages are concerned.
The situation is similar for audio and video calls. While this time the call per se will be managed directly by ejabberd, to take advantage of the built-in STUN, TURN, etc., and will not need to go through our web app, we have custom business logic to manage who is able to call whom, when, how often, etc. (in other words, we have custom business logic to authorize the call or not, and we'd like to keep all the business logic centralized in the web app).
My question is: which hooks would we need to look into to achieve what we are after? I spent an hour or so in the documentation, but I couldn't find what I am after, so hopefully someone can provide pointers. In an ideal world, we'd like to expose API endpoints from our web app that ejabberd hooks can hit. So the first question is: which relevant hooks does ejabberd offer, and where are they documented?
Any help would be greatly appreciated, thank you!
When a client sends a packet to ejabberd, it triggers the user_send_packet hook, providing the packet and the state of the client's session process. Several modules use that hook, for example mod_service_log.
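If you route the decision through the web application, one option (purely illustrative, not an ejabberd API) is for a small custom module attached to user_send_packet to call an HTTP endpoint on your web app and drop the stanza when the app says no. A hypothetical sketch of the web-app side in Python/Flask; the route, field names and rule are assumptions for the example:

from flask import Flask, jsonify, request

app = Flask(__name__)

def is_allowed(sender, recipient):
    # Stub: replace with the real business rules (quotas, block lists,
    # roster membership from the web app's own database, etc.).
    return sender is not None and recipient is not None

@app.route("/chat/authorize", methods=["POST"])
def authorize_message():
    # The ejabberd module would POST the sender, recipient and body here
    # before letting the message through; field names are illustrative.
    payload = request.get_json(force=True)
    allowed = is_allowed(payload.get("from"), payload.get("to"))
    return jsonify({"allow": allowed})

if __name__ == "__main__":
    app.run(port=8080)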

How to keep backend session information in Polymer SPA

I'd like to log in to a RESTful back-end server written in Laravel 5, with the single-page front-end application leveraging Polymer's custom elements.
In this system, the persistence (CRUD) layer lives on the server, so authentication should be done on the server in response to the client's API request. When a request is valid, the server returns a User object in JSON format, including the user's role for access control on the client.
My question is: how can I keep the session, even when the user refreshes the front-end page? Thanks.
This is an issue beyond Polymer, or even single-page apps in general. The question is how you keep session information in a browser. With SPAs it is a bit easier, since you can keep authentication tokens in memory, but traditional web apps have had this issue since the beginning.
You have two things you need to do:
Tokens: You need a user token that indicates that this user is authenticated. You want it to be something that cannot be guessed, or else someone can spoof it. So the token had better not be "jimsmith" but something more reliable. You have two choices. Either you use a randomly generated token which the server stores, so that when it is presented on future requests the server can validate it; this is how most session managers work in app servers like Node.js sessions or Jetty sessions. Or you do something cryptographic, so that the server only needs to validate the token mathematically rather than look it up in a store. I did that for Node in http://github.com/deitch/cansecurity, but there are various options for it.
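To make the first option concrete (a random, server-stored token), here is a minimal Python sketch; the in-memory dict stands in for whatever store (Redis, a database, your app server's session manager) you actually use:

import secrets
import time

SESSIONS = {}                 # token -> session data; use Redis/DB in practice
SESSION_TTL_SECONDS = 30 * 60

def create_session(user_id):
    # 32 random bytes, URL-safe: effectively unguessable.
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user_id": user_id,
                       "expires": time.time() + SESSION_TTL_SECONDS}
    return token

def validate_session(token):
    # Returns the user id for a live token, or None.
    session = SESSIONS.get(token)
    if session is None or session["expires"] < time.time():
        SESSIONS.pop(token, None)
        return None
    return session["user_id"]

# Hand the token to the client (cookie or local storage, see Storage below),
# then look it up on every request.
token = create_session("jimsmith")
assert validate_session(token) == "jimsmith"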
Storage: You need some way to store the tokens client-side that does not depend on JS memory, since you expect the page to be reloaded.
There are several ways to do client-side storage. The most common by far is cookies. Since the browser stores them without your trying too hard, and presents them whenever you access the domain the cookie is registered for, they are pretty easy to use. Many client-side and server-side auth libraries are built around them.
An alternative is html5 local storage. Depending on your target browsers and support, you can consider using it.
There are also ways you can play with URL parameters, but then you run the risk of losing them when someone switches pages. It can work, but I tend to avoid it.
I have not seen any components that handle cookies directly, but it shouldn't be too hard to build one.
Here is the gist for the cookie-management code I use in a recent app. Feel free to wrap it into a Web component for cookie management... as long as you share alike!
https://gist.github.com/deitch/dea1a3a752d54dc0d00a
UPDATE:
component.kitchen has a storage component here http://component.kitchen/components/TylerGarlick/core-resource-storage
The simplest way, if you use PHP, is to keep the user in a PHP session (like a normal non-SPA application).
PHP will store the user info on the server and automatically generate a cookie that the browser will send with every request. With a single server and no load balancing, the session data is local and very fast.

rest api for 3rd party customers (AAA)

I am currently working on a REST/JSON API that has to provide some services through remote websites. I do not know the end customers of these websites, and they would/should not have an account on the API server. The only accounts on the API server would be the accounts identifying the websites.
Since this is all RESTful, and therefore all communication happens between the end user's browser (through JavaScript/JSON) and my REST API service, how can I make sure that the system won't be abused by third parties interested in increasing the middleman's bill (where the middleman is the owner of the website reselling my services)? What authentication methods would you recommend that would work and would prevent users from simply taking the JS code from the website and calling it 1,000,000 times just to bankrupt the website owner?
I was thinking of using the HTTP_REFERER and translating that to an IP address (to find out which server is hosting the code, and authenticate based on that IP), but I presume the HTTP_REFERER can easily be spoofed. I'm not looking for my customers' end customers to register on the API server; that would defeat the purpose of this API.
Some ideas please?
Thanks,
Dan
This might not be an option for you, but what I've done before in this case is to put a proxy on top of the REST calls. The website calls its own internal service, and that service then calls your REST API. The advantage is that, like you said, no one can hit your REST calls directly or try to spoof calls.
Failing that, you could implement an authentication scheme like HMAC (http://en.wikipedia.org/wiki/Hash-based_message_authentication_code). I've seen a lot of APIs use this.
Using HMAC-SHA1 for API authentication - how to store the client password securely?
Here is what Java code might look like to authenticate: http://support.ooyala.com/developers/documentation/api/signature_java.html
Either way, I think you'll have to do some work server-side. Otherwise people might be able to reverse-engineer the API if everything is purely client-side.
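To sketch the HMAC idea (the general shape only, not Ooyala's or Amazon's exact scheme): the website's own server holds a per-customer secret, signs each request's method, path, timestamp and body, and your API recomputes the signature and compares it. A minimal Python version with made-up field names:

import hashlib
import hmac
import time

def sign_request(secret, method, path, body, timestamp):
    # Canonical string: anything both sides can rebuild byte-for-byte.
    message = "\n".join([method, path, timestamp, body]).encode("utf-8")
    return hmac.new(secret.encode("utf-8"), message, hashlib.sha256).hexdigest()

def verify_request(secret, method, path, body, timestamp, signature, max_skew=300):
    # Reject stale timestamps to limit replay, then compare in constant time.
    if abs(time.time() - int(timestamp)) > max_skew:
        return False
    expected = sign_request(secret, method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)

# Client side (the middleman's server, not browser JS, so the secret stays private):
ts = str(int(time.time()))
sig = sign_request("per-customer-secret", "POST", "/v1/orders", '{"item": 42}', ts)

# Server side:
assert verify_request("per-customer-secret", "POST", "/v1/orders", '{"item": 42}', ts, sig)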

How to implement a single sign-on authentication server?

I want to implement a discrete remote authentication server that handles login for many sites, somewhat similar to OpenID.
Basically, I have site-1 and site-2, and they both rely on the same user database, which lives on a separate auth-site. So auth-site handles user authentication for them and, during this process, makes information about the authenticating user available to the requesting system.
Each site can be on a completely separate domain name, on completely separate machines.
This is all via HTTP(S), there can be no direct database access.
There's one last quirk: once a user has logged in to site-1, when accessing any other site reliant on auth-site, that site must treat the user as already authenticated.
This whole business must be entirely fuss-free to the end-user. It should work like a simple everyday login form.
As a concrete example, say we're talking about stackoverflow.com and serverfault.com, and they both authenticate via authentic-overflow-server-stack.com. Again, once logged in to either site, I can go to the other and do my business without logging in again.
What I'd like to know are the general interaction mechanism between the sites behind this scenario.
In my particular setup, I'm using Rails, but I'm not looking for code[1], just general best practice and guidance, so feel free to answer in pseudo-code or any generally readable language. OTOH, bear in mind that I'll have decent MVC, REST, and meta-programming in my toolkit.
[1]: unless you happen to know an existing tiny neat free MIT/BSD-licensed app/plugin/generator that handles this.
It sounds like (especially with the emphasis on fuss-free) you want something like what the Wikimedia Foundation is doing. Basically, you log on to en.wikipedia.org, then that server communicates with other servers (e.g. en.wikinews.org) and gets authentication tokens. Those tokens are embedded into images, e.g. http://en.wikinews.org/wiki/Special:AutoLogin?token=xxxxxxxxxxxxxxx, and when your browser visits that URL (as an img src) it gets an authentication cookie for Wikinews. Of course, the source code is available for your review at http://www.mediawiki.org/wiki/Extension:CentralAuth .
OpenID is also a good choice, but it does require that the user "consciously" visit two domains. An example of one entity with two domains doing this is Canonical. E.g., if you go to https://help.ubuntu.com/community/UserPreferences they will redirect you to Launchpad (https://login.launchpad.net/+openid) for authentication.
Note that Wikipedia is doing this over HTTP, but you can do it all over HTTPS to ensure the img src tokens aren't intercepted.
Looks like CAS is good enough for me, and it has Ruby implementations, along with ones for dozens of other lesser languages, e.g. one that rhymes with femoral bone rage.
http://code.google.com/p/rubycas-server/
http://code.google.com/p/rubycas-client/
It sounds like you actually want to use the OpenID protocol itself. There's no reason you can't restrict the authentication provider to your own server only, and add some shortcuts that make the authentication process transparent. The OpenID protocol also supports what you describe: logging in to one service implies being logged in to all of them.

Multi-site login ala Google

Not sure if the title is quite right for the question but I can't think of any other way to put it..
Suppose you wanted to create multiple different web apps, but you wanted a user who is logged into one app to be able to go straight to another of your apps without logging in again (assuming they have permission to view the other app as well). If I'm not mistaken, if you're logged into Gmail you can go straight to iGoogle, Google Reader, etc. without logging in again (if you set it up right).
How would you approach this? What would you use? Assume the apps already exist and you don't want to change the initial login page for the users.
What you're looking for is called Single Sign On. If you follow the link you'll find several implementations.
OpenID, as others have mentioned, is not such a scheme, as it requires a separate login for each site. OpenID is merely a shared authentication system.
You would issue a cookie against foo.com, which would then be visible on app1.foo.com and app2.foo.com.
Each application can then use that cookie to access a centralised authentication system.
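As a rough sketch of the cookie mechanics (the names and domain are placeholders): the central login host sets the cookie with Domain=.foo.com so that app1.foo.com and app2.foo.com receive it, and each app then checks the ticket against the central auth system. In Python:

from http import cookies

def build_sso_cookie(ticket):
    # Set by the central login host; Domain=.foo.com makes the browser send
    # the cookie to app1.foo.com, app2.foo.com, and so on.
    cookie = cookies.SimpleCookie()
    cookie["sso_ticket"] = ticket
    cookie["sso_ticket"]["domain"] = ".foo.com"
    cookie["sso_ticket"]["path"] = "/"
    cookie["sso_ticket"]["secure"] = True
    cookie["sso_ticket"]["httponly"] = True
    return cookie.output()

print(build_sso_cookie("opaque-or-signed-ticket"))
# Prints something like:
# Set-Cookie: sso_ticket=opaque-or-signed-ticket; Domain=.foo.com; HttpOnly; Path=/; Secure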
Try CAS; it should provide the features you are looking for.
What you want is a single sign-on (SSO).
There are two approaches to solving this problem:
Roll your own implementation. In its most trivial form, the first site sets a cookie that holds a ticket for the logged-on user, and the second site verifies that ticket and accepts the logged-on user (see the sketch after this list). There are quite a lot of potential pitfalls here:
you have to protect yourself against information disclosure - make sure that the ticket does not contain the actual user credentials
you have to protect yourself against spoofing - a man in the middle stealing a valid ticket and impersonating one of your users
and others
Adopt a third-party SSO mechanism. Google, Microsoft, Facebook and other big companies allow integration with their identity providers, so that your users can log on through their websites while they handle verification, ticket issuing and so on. There's also OpenID, an open protocol you can use to enable SSO on your site through virtually any identity provider that supports it. The potential drawback is that somebody else controls access to your users' identities and can limit the features you can offer and the data you can mine for your users.
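To illustrate the roll-your-own approach and the pitfalls listed above, here is a minimal sketch of a ticket that carries only a user id and an expiry (never the credentials) and is signed with a secret shared by the cooperating sites; the format and names are invented for the example:

import hashlib
import hmac
import time

SHARED_SECRET = b"secret-known-only-to-site-1-and-site-2"  # placeholder

def issue_ticket(user_id, ttl_seconds=600):
    # No credentials in the ticket: only an identifier and an expiry.
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{user_id}|{expires}"
    signature = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_ticket(ticket):
    # Returns the user id if the ticket is authentic and unexpired, else None.
    try:
        user_id, expires, signature = ticket.split("|")
    except ValueError:
        return None
    payload = f"{user_id}|{expires}"
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None   # tampered or forged ticket
    if int(expires) < time.time():
        return None   # expired; narrows the window for a stolen ticket
    return user_id

ticket = issue_ticket("alice")           # set by site-1 in a cookie
assert verify_ticket(ticket) == "alice"  # checked by site-2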
As mentioned, you can use something like OpenID or similar to make the process simple. Otherwise, if you roll your own, you could use a cookie to store the login; then basically ALL applications must have an entry point that mimics the base URL.
Google, for example, uses mail.google.com as a pipeline into Gmail, which allows it to read a cookie stored on the google.com domain.